Develop and Optimise Pipelines:
Design, build, and maintain reliable, scalable ETL pipelines using Databricks, Python, and SQL, striving for exceptional quality and accuracy in data processes.

Leverage Cloud Technology:
Utilise the Azure stack (Data Factory, Synapse Analytics, Data Lake, Blob Storage) to create robust, scalable cloud-based data solutions.

Integrate Diverse Data Sources:
Work with structured and unstructured data from APIs, cloud services, and internal databases to provide a unified view of critical datasets.

Support Data-Driven Decision Making:
Collaborate with data analysts, scientists, and stakeholders to deliver timely, clean, and actionable data that drives business outcomes.

Ensure Data Quality and Security:
Implement rigorous practices for data governance, quality assurance, and compliance with relevant regulations.

Innovate and Improve:
Continuously identify opportunities to enhance data processes, reduce latency, and improve system performance.

Monitor and Troubleshoot:
Maintain system reliability by proactively monitoring pipelines, diagnosing issues, and implementing solutions to prevent recurrence.

Skills
Programming and Data Engineering Proficiency:
Expertise in Python, SQL, and modern data stacks, including data pipelines, distributed systems, and cloud-based storage solutions.

Data Quality and Observability:
Strong focus on data quality, with experience in testing practices (unit and integration testing) and monitoring tools.

Cloud Infrastructure and CI/CD:
Advanced knowledge of cloud platforms such as Azure (Data Factory, Data Lake, Synapse) and CI/CD pipelines.

Independence and Problem Solving:
Capable of independently completing complex tasks and owning projects end-to-end.

Collaboration and Communication:
Strong communicator, adept at explaining technical concepts to non-technical stakeholders.

Adaptability and Innovation:
Skilled in balancing multiple tasks and driving innovation by contributing ideas for features and improvements.

Continuous Learning:
Committed to staying informed about emerging technologies and industry best practices.

Qualifications
Education:
Bachelor’s degree in Computer Science, Software Engineering, Mathematics, or a related field, or equivalent professional experience.

Professional Experience:
Professional experience in Data Engineering teams, with a demonstrated ability to work independently on complex projects.

Unfortunately, we will not be able to sponsor candidates for this position.

Our Core Values at Expana
At Expana, we take great pride in our core values, which guide our actions and shape our culture:

Brilliance:
We strive for excellence in everything we do.

Connected:
We believe in the power of collaboration and communication.

Make a Difference:
We are committed to making a positive impact in our industry and community.

Other values, such as trust, transparency, creativity, and ownership, define our brand and company culture.

In return for your hard work and dedication, we offer fantastic rewards. Please note that we operate with an agile working model, allowing for remote work and occasional travel to our offices. We use E-Verify with third-party suppliers in our hiring practices to ensure compliance with employment laws.