Data Engineer – Python, SQL, dbt, AWS, ETL, Data Pipelines, Data Warehousing, DWH, S3, Glue, Data Lake, Automation, London
A Data Engineer is sought by a leading workplace pensions provider to join their expanding data engineering team in their London City office. The team have recently built out a next-generation data lake hosted in AWS, which makes use of Dremio, a Data Mesh architecture and a wide range of AWS tools (S3, Glue, Step Functions, etc.).
With the infrastructure of this new Data Lake in place, the focus is now on enhancing the data stored within it (via monitoring and cleaning), as well as designing, building and implementing robust ETL pipelines that integrate effectively with the wider business. You will additionally be involved in project-based work, deploying in pods to deliver new data products as the business requests them, which provides a steady pipeline of work and regular engagement with stakeholders.
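By way of illustration only, a minimal Python ETL step of the kind described might read raw data from S3, clean it and write curated Parquet back to the lake. This is a sketch under assumed names, not the provider's actual codebase: the bucket names, object key and cleaning rules below are all hypothetical, and writing Parquet via pandas assumes pyarrow is installed.

    import io

    import boto3
    import pandas as pd

    # Hypothetical bucket names, for illustration only.
    RAW_BUCKET = "pensions-data-lake-raw"
    CURATED_BUCKET = "pensions-data-lake-curated"

    def run_etl(key: str) -> None:
        """Read a raw CSV from S3, clean it, and write curated Parquet back."""
        s3 = boto3.client("s3")

        # Extract: pull the raw object into a DataFrame.
        obj = s3.get_object(Bucket=RAW_BUCKET, Key=key)
        df = pd.read_csv(io.BytesIO(obj["Body"].read()))

        # Transform: illustrative cleaning -- dedupe and normalise column names.
        df = df.drop_duplicates()
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

        # Load: write Parquet for downstream consumers (e.g. Dremio over S3).
        buf = io.BytesIO()
        df.to_parquet(buf, index=False)  # requires pyarrow or fastparquet
        s3.put_object(
            Bucket=CURATED_BUCKET,
            Key=key.replace(".csv", ".parquet"),
            Body=buf.getvalue(),
        )

    if __name__ == "__main__":
        run_etl("members/2024-01.csv")  # hypothetical object key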
If you demonstrate the following skillset, please do apply!
• 2–3 years' experience in Python programming, particularly in data analysis, automation and building data pipelines (along with an understanding of data warehousing concepts)
• Strong SQL skills, especially in relation to ETL processes and data management (querying, cleaning, storing)
• Experience working with and deploying to the AWS cloud, having worked with any/all of S3, Glue, Lambda, Step Functions, etc. (a minimal illustrative sketch follows this list)
• Experience using Power BI for data visualization
• Clear, confident communication skills, with previous experience working directly with business users
• Any knowledge of or familiarity with Dremio, Data Mesh architecture or big data tooling (Hadoop/Spark) is highly beneficial
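As referenced in the AWS bullet above, here is a minimal sketch of how those services commonly fit together: a Lambda handler that starts a Step Functions ETL run whenever a new object lands in S3. The state machine ARN, environment variable and event wiring are assumptions for illustration, not the provider's actual setup.

    import json
    import os

    import boto3

    # Hypothetical ARN -- in practice this would come from deployment config.
    STATE_MACHINE_ARN = os.environ.get(
        "STATE_MACHINE_ARN",
        "arn:aws:states:eu-west-2:123456789012:stateMachine:etl-pipeline",
    )

    sfn = boto3.client("stepfunctions")

    def handler(event, context):
        """Lambda entry point: start one ETL run per newly landed S3 object."""
        for record in event.get("Records", []):
            payload = {
                "bucket": record["s3"]["bucket"]["name"],
                "key": record["s3"]["object"]["key"],
            }
            sfn.start_execution(
                stateMachineArn=STATE_MACHINE_ARN,
                input=json.dumps(payload),
            )
        return {"statusCode": 200}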