Data Engineer
Talenza
- Sydney, NSW
- $160,000-170,000 per year
- Permanent
- Full-time
- Apply your expertise in data warehousing and information management to help build our future-ready Data Platform.
- Design and implement data models to support business requirements.
- Leverage your Linux/Unix skills and prior experience with AWS to build robust and scalable data solutions.
- Develop and optimize complex SQL queries and data manipulation processes.
- Use Big Data querying tools such as Hive, Spark, and Presto to extract insights from large datasets.
- Implement API-based integrations and orchestrate data pipelines using tools like Apache Oozie, Airflow, or Argo Workflows.
- Apply your knowledge of technical solutions, design patterns, and code to develop medium-to-complex applications deployed in clustered computing environments.
- Proficiency in programming languages such as Scala, Java, or Python, particularly with Big Data technologies like Spark.
- Experience with Amazon Redshift.
- Experience working with Docker and Kubernetes, and familiarity with containerized environments.
- Exposure to security concepts and best practices related to data engineering.
- Knowledge of machine learning concepts and tools such as Spark ML or R.
- Experience with major Hadoop distributions such as Cloudera, MapR, or Hortonworks HDP.
- Familiarity with build tools like Maven, Gradle, and Ant.