Description:
Job Title: Junior Data Engineer
Location: Australia
Job Type: Full-time
We are looking for a motivated and detail-oriented Junior Data Engineer to join our dynamic team in Australia. As a Junior Data Engineer, you will work closely with senior data engineers, data scientists, and analysts to support the development, management, and optimization of data pipelines and systems. You will help build the infrastructure that enables data collection, storage, and analysis at scale, ensuring our data is clean, accessible, and ready for analysis.
Key Responsibilities:
- Assist in designing, developing, and maintaining data pipelines and ETL (Extract, Transform, Load) processes to collect, process, and store large datasets.
- Collaborate with data scientists and analysts to ensure data is easily accessible and aligned with business needs.
- Work with cloud-based data storage solutions (e.g., AWS, Azure, Google Cloud) to manage data infrastructure.
- Ensure data quality by developing scripts and tools for data validation, cleaning, and transformation.
- Perform data extraction, aggregation, and transformation to support business intelligence and analytical efforts.
- Work on optimizing existing data systems for performance, scalability, and reliability.
- Document data workflows, architectures, and processes for transparency and knowledge sharing.
- Assist in troubleshooting data pipeline issues and performance bottlenecks.
- Stay up to date with emerging technologies and best practices in data engineering.
Qualifications:
- Education: Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or a related field (or equivalent experience).
- Experience: 0-2 years of experience in data engineering, software development, or a related role (internships, personal projects, or academic experience are welcome).
- Familiarity with programming languages such as Python, Java, or Scala.
- Experience with SQL for data querying, data manipulation, and database management.
- Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
- Basic understanding of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Snowflake).
- Familiarity with ETL tools and frameworks (e.g., Apache Airflow, Talend, Apache NiFi).
- Strong analytical and problem-solving skills with attention to detail.
- Ability to collaborate effectively with cross-functional teams in an agile environment.
- Good communication skills, with the ability to explain technical concepts to non-technical stakeholders.
Preferred Skills:
- Experience with distributed computing and big data tools (e.g., Apache Hadoop, Apache Spark).
- Knowledge of data visualization tools like Tableau or Power BI.
- Familiarity with version control tools like Git.
- Experience working with APIs for data integration.
- Knowledge of containerization tools like Docker or Kubernetes.
- Familiarity with machine learning concepts or tools is a plus.
What We Offer:
- Competitive salary and benefits package.
- Opportunities for growth and development in a fast-evolving field.
- Access to cutting-edge technologies and tools in data engineering.
- A collaborative and supportive work culture that encourages innovation and learning.
- Flexible working hours and remote work options.
- Exposure to a variety of data projects across different industries.
Posted: 5 Jan 2026, via linkedin.com