The Data Engineer will be responsible for designing, building, and optimizing scalable data pipelines and cloud-based infrastructure under the guidance of the Lead Data Platform Engineer. The role involves working with Databricks, SAP Data Sphere, and AWS to enable seamless data ingestion, transformation, and integration across cloud environments, supporting enterprise-wide analytics and a scalable, efficient, and secure data architecture. The Data Engineer will collaborate with cross-functional teams to support analytics, reporting, and data-driven decision-making while following performance, security, and data governance best practices.
Key Responsibilities:
Data Pipeline Development & Optimization:
- Design, develop, and maintain ETL/ELT pipelines for batch and streaming data processing.
- Implement data transformations, cleansing, and enrichment using Databricks (Spark, PySpark, SQL, Delta Lake, MLflow) and SAP Data Sphere (Data Builder, Business Builder).
- Automate pipeline deployment and orchestration.
- Ensure data quality, validation, and consistency by implementing robust monitoring frameworks.
Cloud Data Platform Implementation & Maintenance:
- Develop and maintain data lakehouse solutions on AWS.
- Optimize Databricks workflows, job clusters, and cost-efficiency strategies.
- Implement data governance, lineage tracking, and access controls using Databricks Unity Catalog.
SAP Data Sphere & Data Integration:
- Build real-time and batch data integrations between SAP Data Sphere and cloud-based platforms.
- Develop logical and physical data models within SAP Data Sphere, ensuring scalability and efficiency.
- Enable cross-system data harmonization and replication between SAP and non-SAP environments.
Performance Monitoring & Troubleshooting:
- Monitor data pipeline performance, identify bottlenecks, and optimize query execution.
- Implement logging, alerting, and monitoring.
Collaboration & Continuous Learning:
- Work with the Lead Data Platform Engineer to drive continuous improvements in scalability, observability, and security.
- Work closely with Architects, Data Analysts, and BI teams to support analytical solutions.
- Follow best practices in DevOps, CI/CD, and infrastructure-as-code (Terraform).
- Actively learn and apply the latest cloud, data engineering, and SAP Data Sphere advancements.
Key Requirements:
- 3+ years of experience in data engineering, cloud platforms, and distributed systems.
- Proficiency in SQL, Python, and Spark.
- Experience with Databricks (Delta Lake, Spark, MLflow) and AWS data services.
- Experience with SAP Data Sphere, SAP data modeling, and integration frameworks (OData, API management) is a plus.
- Familiarity with data pipeline orchestration tools.
- Experience with DevOps & CI/CD pipelines (Terraform, GitHub Actions, Jenkins).
- Strong problem-solving skills and a passion for scalable and efficient data processing.
We offer:
- A dynamic team working within a zero-bullshit culture;
- Working in a comfortable office at UNIT.City (Kyiv). The office is safe as it has a bomb shelter;
- Reimbursement for external training for professional development;
- Ajax's security system kit to use;
- Official employment with Diia City;
- Medical Insurance;
- Flexible work schedule.
The Data Engineer plays a vital role in building and maintaining scalable, efficient, and secure data pipelines, ensuring seamless SAP and cloud data integration. This role directly supports the Lead Data Platform Engineer in driving enterprise-wide analytics, AI/ML innovation, and data-driven decision-making.
What We Do
The largest manufacturer of security systems in Europe. Designed and developed in Ukraine.