Software Engineer - LLM Training

Posted 12 Days Ago
9 Locations
Hybrid
Mid level
Machine Learning • Software
The Role
Design and implement distributed training systems for large-scale AI models, optimizing performance across many GPUs and ensuring usability and flexibility on the CentML platform.

About Us

We believe AI will fundamentally transform how people live and work. CentML's mission is to massively reduce the cost of developing and deploying ML models so we can enable anyone to harness the power of AI and everyone to benefit from its potential.


Our founding team is made up of experts in AI, compilers, and ML hardware and has led efforts at companies like Amazon, Google, Microsoft Research, Nvidia, Intel, Qualcomm, and IBM. Our co-founder and CEO, Gennady Pekhimenko, is a world-renowned expert in ML systems who holds multiple academic and industry research awards from Google, Amazon, Facebook, and VMware.


About the Position

We are seeking highly skilled and motivated software engineers to join our team and empower AI practitioners to develop AI models on the CentML Platform productively and affordably. If you have launched multi-node distributed training jobs before and experienced firsthand how painful and cumbersome it is to get them working, let alone performing well, and you want to be part of the team building solutions to this challenge so that other AI practitioners never have to feel that same pain, please come and join us!


What you’ll do

  • Design and implement highly efficient distributed training systems for large-scale deep learning models.
  • Optimize parallelism strategies to improve performance and scalability across hundreds or thousands of GPUs.
  • Develop low-level systems components and algorithms to maximize throughput and minimize memory and compute bottlenecks.
  • Productionize the training systems on the CentML Platform.
  • Collaborate with researchers and engineers to productionize cutting-edge model architectures and training techniques.
  • Contribute to the design of APIs, abstractions, and UX that make it easier to scale models while maintaining usability and flexibility.
  • Profile, debug, and tune performance at the system, model, and hardware levels.
  • Participate in design discussions, code reviews, and technical planning to ensure the product aligns with business goals.
  • Stay up to date with the latest advancements in large-scale model training and help translate research into practical, robust systems.

What you’ll need to be successful

  • Bachelor’s, Master’s, or PhD in Computer Science/Engineering, Software Engineering, a related field, or equivalent work experience.
  • 3+ years of experience in software development, preferably with Python and C++.
  • Deep understanding of machine learning pipelines and workflows, distributed systems, parallel computing, and high-performance computing principles.
  • Hands-on experience with large-scale training of deep learning models using frameworks such as PyTorch, Megatron Core, and DeepSpeed.
  • Experience optimizing compute, memory, and communication performance in large model training workflows.
  • Familiarity with GPU programming, CUDA, NCCL, and performance profiling tools.
  • Solid grasp of deep learning fundamentals, especially as they relate to transformer-based architectures and training dynamics.
  • Experience working with cloud platforms (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes).
  • Ability to work closely with both research and engineering teams, translating evolving needs into robust infrastructure.
  • Excellent problem-solving skills, with the ability to debug complex systems.
  • A passion for building high-impact tools that push the boundaries of what’s possible with large-scale AI.

Bonus points if you have

  • Experience building tools or platforms for ML model training or fine-tuning.
  • Experience building backends (e.g., using FastAPI) and frontends (e.g., using React).
  • Experience building and optimizing LLM inference engines (e.g., vLLM, SGLang).
  • Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code.
  • Familiarity with MLOps concepts, including model versioning and serving.

Benefits & Perks

  • An open and inclusive work environment
  • Employee stock options
  • Best-in-class medical and dental benefits
  • Parental leave top-up
  • Professional development budget
  • Flexible vacation time to promote a healthy work-life blend


We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability, or any other protected ground of discrimination under applicable human rights legislation.


CentML strives to respect the dignity and independence of people with disabilities and is committed to giving them the same opportunity to succeed as all other employees.


Inclusiveness is core to our culture at CentML, and we strive to ensure you get the most from your interview experience. CentML makes reasonable accommodations for applicants with disabilities. If a reasonable accommodation is needed to participate in the job application or interview process, please reach out to the Talent team.

Top Skills

AWS
Azure
C++
CUDA
DeepSpeed
Docker
GCP
Kubernetes
Megatron Core
NCCL
Python
PyTorch

The Company
HQ: Toronto, Ontario
50 Employees
On-site Workplace
Year Founded: 2022

What We Do

We pioneer novel technology to enhance computing efficiency, making AI accessible for innovation and beneficial to the global community.

We believe honesty builds integrity, honing craftsmanship delivers excellence, and collaboration fosters community.

Why Work With Us

Our journey began in the esteemed Efficient Computing Systems lab at the University of Toronto, under the leadership of our CEO, Gennady Pekhimenko. Today, the EcoSystems lab stands proudly as one of the world’s foremost authorities in Machine Learning Systems.
