ABOUT THE COMPANY: This growing consultancy has a passion for helping organisations build data practices using cloud-native and modern open-source tools. They make data simple, accessible and valuable through their modern and unique approach. This is a values-driven organisation that works with an enviable client list.
ABOUT THE ROLE: As a Senior Data Engineer specialising in Snowflake and dbt, you will be responsible for designing, building, and maintaining scalable, high-performance data pipelines and data models for this company's clients. You will work closely with our data analysts, data scientists, and project managers to deliver reliable, efficient, and clean data environments using best practices in modern data engineering.
The role offers the opportunity to work with a diverse range of clients, using the latest technologies and techniques in the data space.
WHAT'S IN IT FOR YOU?
Solve real-world business problems
Collaborative and supportive work environment
Be at the forefront of data science innovation in a dynamic startup environment
Work on a variety of challenging and impactful projects across different industries
Opportunity to learn and grow your skillset with the latest technologies
Continuous learning and development opportunities
Fruit, snacks, monthly lunches and staff activities
Be part of a company shaping the future of data-driven solutions
ABOUT YOU: We are seeking someone with excellent communication skills and the ability to work with cross-functional teams including engineers, analysts and business leaders. In addition you will need:
8+ years of IT industry experience working on data engineering practices
5+ years of experience in data engineering, ETL processes, and building data pipelines
Strong knowledge of data modelling, database design, and data warehousing
Hands-on experience with Azure Data Services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Blob Storage)
Hands-on experience in working with Snowflake for data warehousing, data sharing, and cloud-based data architecture
Experience with AWS Data Services (e.g., Redshift, S3, Glue) would be a plus
Proficiency in building ETL/ELT pipelines using tools like AWS Glue, Azure Data Factory, and dbt for data transformation and integration
Experience with CI/CD pipelines for automating data workflows
Strong experience in automating data workflows and orchestrating data pipelines in cloud environments
Experience implementing data governance strategies and security protocols in cloud environments, including encryption, access control, and auditing in Snowflake, AWS, and Azure
Experience with real-time data streaming and processing using Azure Stream Analytics or Snowflake's Snowpipe for automated data loading
Proficiency in SQL, Python, Scala, or other programming languages for data engineering
WHAT'S NEXT? Don't miss out on this awesome opportunity. Apply today, or for more information contact Lisa Cooley on 021 029 81422 or lcooley@brightspark.io