Senior Data Engineer

Remote
Contract
Senior
🇺🇦 Ukraine
🇲🇩 Moldova
🇵🇱 Poland
🇷🇴 Romania
🇸🇰 Slovakia
Data Engineer
Data Science & Analytics

Kanda Software is a dynamic, US-based company known for delivering innovative software solutions and technology services. We are seeking a highly skilled Senior Data Engineer to join our team and contribute to our exciting projects.

Responsibilities:

  • Lead the design, development, and optimization of complex data systems, including data flows, data lakes, data warehouses, and ETL pipelines, across various cloud platforms.
  • Architect and manage Elasticsearch clusters, focusing on performance optimization, security, and scalability.
  • Create and optimize indices for performance and storage efficiency, leveraging index templates and mappings.
  • Build and optimize Elasticsearch queries using the Query DSL, including complex aggregations and full-text search.
  • Implement data ingestion pipelines to load data from various sources, including relational databases (RDBMS), into Elasticsearch.
  • Oversee performance monitoring, cluster management, security protocols, backup and restore procedures, and version management of Elasticsearch clusters.
  • Develop and maintain data pipelines using industry-standard tools such as Apache Spark, Apache Hive, Apache Airflow, and Stitch.
  • Implement data pipeline observability strategies to ensure reliable data flow and system transparency.
  • Apply modern software development practices, including Agile methodologies, Test-Driven Development (TDD), and Continuous Integration/Continuous Deployment (CI/CD), to enhance data engineering workflows.

Requirements:

  • A minimum of 5 years of experience as a technical lead, building and extending complex data systems.
  • Advanced proficiency in Python, with at least 3 years of experience.
  • Strong SQL writing and tuning skills.
  • Expert-level knowledge of Elasticsearch, including configuration, administration, and optimization techniques.
  • Demonstrated experience with building and maintaining data pipelines using tools such as Spark, Hive, Airflow, and Stitch.
  • Proven experience with data pipeline observability strategies and tools.
  • Strong understanding of software development practices, including Agile, TDD, and CI/CD.

Nice to have:

  • Experience building or maintaining streaming platforms using Apache Kafka.
  • Proficiency in PySpark for distributed data processing.
  • Skills in implementing and managing containerization and orchestration solutions such as Docker and Kubernetes, including observability of containerized applications.
  • Experience with Big Data as a Service (BDaaS) tools, such as AWS EMR or Azure HDInsight.
  • Experience in building and extending Machine Learning and Data Science platforms.
  • Familiarity with integrating data flows with third-party Business Intelligence (BI) platforms like Domo, Tableau, Sisense, etc.

What we offer:

  • Competitive salary and benefits package
  • Flexible remote work arrangements
  • Opportunities for professional development and growth
  • Periodic salary reviews
  • Paid attendance at industry events

 

Globaldev Group

Globaldev Group is a team of professionals specializing in building engineering teams for technology businesses across Western Europe, Israel, and the USA.


