The PyTorch Team @ NVIDIA is hiring passionate parallel programmers. Join us to design and build the tools used by millions of AI practitioners deploying AI applications that scale to thousands of GPUs. Our team is responsible for continually delivering a best-in-class PyTorch experience on NVIDIA hardware. Join our team and collaborate with multi-disciplinary engineering teams within NVIDIA and internationally in the PyTorch open source community to deliver the best of NVIDIA software to our customers.
In this position you will learn innovative techniques from NVIDIA's domain experts for efficiently programming the world's most sophisticated computer systems. You will build these techniques into NVIDIA/Fuser (commonly known as "nvFuser"), applying our groundbreaking Parallel Programming Theory so that these optimizations can be applied broadly, automatically, and safely to algorithms written in NumPy and PyTorch. Beyond building nvFuser, you will influence and improve the entire software stack, from users to the CUDA compiler to the Lightning-Thunder graph compiler, and help shape the future design of NVIDIA's hardware platform. Join our ambitious and diverse team striving to lead in AI programming.
What you will be doing:
- Crafting a code generation system to accelerate portions of a graph collected from a machine learning framework.
- Partnering with NVIDIA’s hardware and software teams to improve GPU performance in PyTorch.
- Designing, building, and supporting production AI solutions used by enterprise customers and partners.
- Optimizing the performance of influential, modern deep learning models coming out of academic and industry research for NVIDIA GPUs and systems.
- Collaborating with internal applied researchers to improve their AI tools.
- Advising the design of new hardware generations.
What we need to see:
- MS or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field (or equivalent experience).
- Parallel programming experience, including writing optimized kernels in the NVIDIA CUDA programming language or similar parallel languages.
- 4+ years of experience with C++ programming.
- Demonstrated experience developing large software projects.
- Excellent verbal and written communication skills.
Ways to stand out from the crowd:
- Proven technical foundation in CPU and GPU architectures, numeric libraries, and modular software design.
- A background in deep learning compilers or compiler infrastructure.
- Expertise with optimized distributed parallelism techniques; experience parallelizing Large Language Models is a bonus!
- Knowledge of heuristic generation that employs cost models, machine learning, or auto-tuning.
- Contributions to PyTorch, NumPy, JAX, TensorFlow, OpenAI-Triton, Lightning Thunder, TVM, Halide, or similar systems.
The base salary range is 180,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
What We Do
NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”