We're looking for a dynamic Data Engineer to take the lead in designing and implementing advanced data architectures that power our analytics and decision-making. Join our collaborative team to shape the future of data infrastructure, tackle exciting challenges, and make a real impact with your expertise.
What You'll Do:
- Lead the development and implementation of data solutions, including data warehousing, ETL pipelines, and data lakes;
- Design and build data pipelines and infrastructure that enable efficient processing, storage, and analysis of large volumes of data;
- Collaborate with the data team and other business stakeholders to understand data requirements and design solutions that meet their needs;
- Collaborate closely with System Engineers, DBAs, and DevOps teams to align data infrastructure with overall company architecture;
- Work with software engineers to integrate data pipelines and infrastructure into larger software systems;
- Monitor, maintain, and troubleshoot existing data pipelines and solutions to ensure continuous, reliable data flow and system performance;
- Optimize data processing and storage systems for performance and scalability;
- Implement and maintain data security and privacy measures to protect sensitive data;
- Develop and maintain documentation for data pipelines, infrastructure, and processes;
- Stay up to date with emerging trends and technologies in data engineering and recommend new tools and techniques to improve our data solutions.
Who You Are:
- At least 3 years of experience in data engineering or a related field;
- Expertise in SQL, Spark/Python or other data processing technologies;
- Experience designing and building data pipelines and infrastructure using cloud platforms such as AWS, GCP or Azure;
- Strong understanding of data warehousing, ETL, and data lake architecture and design principles;
- Experience with data streaming technologies such as Apache Kafka, Apache Flink, or similar tools for real-time data processing;
- Experience with relational database technologies such as MySQL, PostgreSQL, or Oracle;
- Experience with data modeling and schema design.
Tech stack:
- Google Cloud;
- BigQuery;
- Composer / Airflow;
- Dataproc / Spark;
- Dataflow / Beam;
- Pub/Sub;
- Cloud Functions;
- Debezium;
- Languages: Python, some Scala and Java, SQL;
- Infrastructure tools like Terraform, k8s, Grafana.
Encouragement to Apply:
We understand that confidence gaps and imposter syndrome can deter amazing candidates from applying. Please apply anyway; we'd love to hear from you.
Yggdrasil
Yggdrasil is a provider of superior online gaming solutions for iGaming operators.