Design, develop, and maintain robust data pipelines and ETL processes to ingest, transform, and load data from diverse sources into our data warehouse.

Data Quality and Governance:
Implement and monitor data quality checks, ensuring accuracy, consistency, and reliability of data.

Optimization:
Optimize data processing workflows for performance, scalability, and cost-efficiency.

Collaboration:
Work closely with data scientists, analysts, and other engineering teams to understand data requirements and deliver solutions that meet their needs.

Innovation:
Stay current with emerging technologies and industry trends in data engineering, and evaluate their potential application to our environment.

Mentorship:
Provide technical guidance and mentorship to junior engineers, promoting best practices in software development and data engineering.

Documentation:
Maintain comprehensive documentation for data pipelines, systems architecture, and processes.

Qualifications:

Education:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Experience:
Minimum of 5 years of experience in software development, with at least 2 years focused on data engineering.

Technical Skills:
- Proficiency in programming languages such as Python, Java, or Scala.
- Knowledge of data modeling and schema design.
- Strong system design skills for data-intensive applications.
- Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL).
- Experience with at least one cloud platform (e.g., AWS, Azure, Google Cloud) and its data services.
Analytical Skills:
Strong problem-solving skills with a keen eye for detail and a passion for data.

Communication:
Excellent written and verbal communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.

Team Player:
Ability to work effectively in a collaborative team environment, as well as independently.

Preferred Qualifications:
- Experience with big data technologies (e.g., Hadoop, Spark, Kafka).
- Experience with AWS and its data services (e.g., S3, Athena, AWS Glue).
- Familiarity with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Knowledge of containerization and orchestration tools (e.g., Docker, ECS, Kubernetes).
- Knowledge of data orchestration tools (e.g., Prefect, Apache Airflow).
- Familiarity with CI/CD pipelines and DevOps practices.
- Familiarity with infrastructure-as-code tools (e.g., Terraform, AWS CDK).
- Experience with machine learning pipelines and MLOps.

Employee Benefits
At Intelmatix, our benefits package is designed to meet the diverse needs of our employees, reflecting our dedication to their well-being and professional growth. Depending on your office location and specific needs, our benefits may include:
- Comprehensive Medical Insurance for you and your dependents.
- In-Office Snacks Pantry.
- Relocation Support.
- Children's School Allowance.
- Role-Related Training Support.
- Wellness Programs.
- Salary Advance for Housing Costs.
- Travel Tickets.
- Pension Contributions.

We are committed to continuously enhancing our benefits package to adapt to the unique needs and circumstances of our valued team members, ensuring a supportive and enriching environment for everyone at Intelmatix.