Senior Data Engineer, Integrity Data Platform

Petaling Jaya, Petaling, Selangor
3-5 Years Experience
Mobile • Software
The Role
As a Senior Data Engineer for the Integrity Data Platform at Grab, you will work on building end-to-end data pipelines, ingesting data from streaming and batch systems, collaborating with data scientists, and contributing to fraud detection and prevention solutions. You will need strong technical skills in Java, Scala, Golang, Rust, Hadoop, Spark, Flink, and SQL, along with excellent communication and teamwork abilities.
Summary Generated by Built In

Company Description

Life at Grab

At Grab, every Grabber is guided by The Grab Way, which spells out our mission, how we believe we can achieve it, and our operating principles - the 4Hs: Heart, Hunger, Honour and Humility. These principles guide and help us make decisions as we work to create economic empowerment for the people of Southeast Asia.

Job Description

Get to know our Team:

The Trust team is the custodian of integrity at Grab. We build cutting-edge solutions designed to provide a robust fraud detection service over petabyte-scale datasets. Our platform leverages the latest advancements in machine learning and artificial intelligence to help businesses minimise the risk of fraud and maintain a secure environment. We're committed to building a diverse team of passionate and talented professionals who are dedicated to shaping the future of fraud detection technology, and to preventing risks such as account takeover, chargebacks, and fake orders with fully automated solutions.

 

Get to know the Role:

Data Engineers at Grab get to work on one of the largest and fastest-growing datasets of any company in Southeast Asia. We operate in a challenging, fast-paced, and ever-changing environment that will push you to grow and learn.

As a Senior Data Engineer for the Integrity Data Platform, you’ll be at the forefront of our day-to-day protection systems, finding ways to derive useful signals from the petabytes of raw data in both offline batch and online streaming systems. This is an opportunity to explore one of the richest datasets in Southeast Asia and derive signals that can drive measurable impact.

The day-to-day activities:

  • Envision and build end-to-end data pipelines that generate invaluable signals used in both real-time ML models and rules

  • Work on large-scale big data systems, leveraging data processing frameworks like Spark and Flink to continuously enhance platform security

  • Ingest data from both streaming systems like Kafka and batch systems like Hadoop and build highly performant ETL jobs

  • Design and implement scripts, ETL jobs, data models, etc.

  • Collaborate closely with data scientists, analysts, and machine learning engineers to create innovative solutions for fraud detection and prevention

  • Coordinate with various stakeholders to understand the end-to-end business requirements

  • Participate in technical and product discussions, code reviews, and on-call support activities

Qualifications

The Must-Haves

  • Bachelor's degree in Analytics, Data Science, Mathematics, Computer Science, Information Systems, Computer Engineering, or a related technical field

  • At least 4 years of experience building Big Data applications

  • Ability to work in a fast-paced agile development environment

  • Experience with Big Data frameworks such as Hadoop, Spark, Flink, etc.

  • Strong knowledge of and fluency in SQL, preferably with an MPP OLAP database

  • Knowledge of statically typed programming languages such as Java, Scala, Golang, or Rust

  • Ability to drive initiatives and work independently, while being a team player who can liaise with various stakeholders across the organization

  • Excellent written and verbal communication skills in English, and a strong willingness to communicate and coordinate with others from different cultural and language backgrounds

Good to have:

  • Experience with stream-processing technologies such as Flink, Spark Streaming, and Kafka

  • Experience handling large datasets (multiple PBs) and working with both structured and unstructured data

  • Knowledge of cloud platforms such as AWS, Azure, or Google Cloud Platform

  • Familiarity with tools in the Hadoop ecosystem, especially Presto and Spark

  • Deep understanding of databases and engineering best practices, including error handling and logging, system monitoring, building human-fault-tolerant pipelines, scaling up, continuous integration, database administration, data cleaning, and ensuring deterministic pipelines

  • Experience working in modern cloud native environments like Kubernetes is also a plus

Additional Information

Our Commitment

We recognize that with these individual attributes come different workplace challenges, and we will work with Grabbers to address them in our journey towards creating inclusion at Grab for all Grabbers.

Top Skills

Go
Java
Rust
Scala

