Senior Data Engineer

TEKsystems


City: Auckland
Contract type: Full time
Role: Senior Data Engineer

Location: Auckland CBD

Role Type: Permanent

Salary: Competitive

Flexibility: 2 Days WFH

Reporting to: Data and Analytics Manager

About The Role

Our client is looking for a Senior Data Engineer to join their team in Auckland. The role focuses on designing, implementing, maintaining, and supporting data platforms and products, including Databricks and Azure Data Lake Storage. You will be part of a high-performing Data and Analytics Centre of Excellence (CoE) team.

Key Responsibilities

  • Lead the development of self-service data and information products to ensure excellent customer experience.
  • Promote the use of data transformation tools and techniques to cleanse, integrate, and transform raw data into usable formats.
  • Drive continuous improvement in data quality, consistency, accessibility, and security.
  • Oversee the design and development of sustainable and scalable data solutions.
  • Provide thought leadership in solution design and product development.
  • Mentor and guide data engineers and analysts, fostering a culture of continuous learning.
  • Conduct code reviews to ensure adherence to best practices and coding standards.
  • Contribute to data governance practices and participate in strategic planning to align data engineering initiatives with business goals.
  • Manage and troubleshoot data-related issues, providing timely resolutions.
  • Advocate for understanding business problems and requirements to drive effective data solutions.
  • Document work outcomes and support the business by utilising the data catalogue for key documentation.

Experience Required

  • Extensive experience with Databricks for big data processing and analytics.
  • Proven expertise in designing and implementing Kafka-based data ingestion pipelines.
  • Experience with Azure Data Lake Storage for scalable data storage solutions.
  • Advanced skills in Python and SQL for data manipulation, querying, and analysis.
  • Experience with PySpark for distributed data processing.
  • Expertise in designing and implementing data models that represent business entities and relationships.
  • Ability to optimize data pipelines and workflows for performance and scalability.
  • Experience with ETL (Extract, Transform, Load) processes.
  • Experience ensuring compliance with data governance and regulatory requirements.
  • Ability to implement and manage data quality frameworks to ensure data integrity and reliability.
  • Experience in mentoring and guiding data engineers and analysts.
  • Strong problem-solving skills to manage and troubleshoot data-related issues.
  • Excellent communication skills to collaborate with cross-functional teams and understand business requirements.

If you have any questions, or would like to apply directly, please email [email protected]