Python Software Engineer, GPU-Accelerated LLM Data Applications

Posted 2 Days Ago
Santa Clara, CA
Senior level
Artificial Intelligence • Hardware • Robotics • Software • Metaverse
The Role
NVIDIA is seeking a Python Software Engineer to GPU-accelerate data engineering for Large Language Model (LLM) tools and libraries. Responsibilities include developing efficient systems for data processing on GPU-accelerated environments, optimizing libraries for LLM training, and collaborating with ML researchers on full-stack data preparation pipelines.
Summary Generated by Built In

NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and outstanding people! Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work.

Come join the team and see how you can make a lasting impact on the world! NVIDIA is seeking a Python Software Engineer to further our efforts to GPU-accelerate data engineering for Large Language Model (LLM) tools and libraries. This role is pivotal in accelerating pre-processing pipelines for high-quality multi-modal dataset curation. The day-to-day focus is on developing efficient, scalable systems for de-duplicating, filtering, and classifying training corpora for foundation model LLMs, as well as ingesting and prepping datasets for use in Retrieval Augmented Generation (RAG) pipelines. Fundamental to these efforts is iterative testing and improvement of system cost, speed, and accuracy through micro-optimization, prompt engineering, fine-tuning, and applying new research. The ideal candidate is happiest releasing early and often! They seek out user feedback and listen for the intent behind feature requests. They are comfortable objectively evaluating the latest AI models and frameworks with an eye on acceleration potential. Would you like to run your training and test experiments on our supercomputers on thousands of GPUs? Come work with us!

What you'll be doing:

  • Develop and optimize Python-based data processing frameworks, ensuring efficient handling of large datasets on GPU-accelerated environments, vital for LLM training.

  • Contribute to the design and implementation of RAPIDS and other GPU-accelerated libraries, focusing on seamless integration and performance enhancement in the context of LLM training data preparation and RAG pipelines.

  • Lead development and iterative optimization of components for RAG pipelines, ensuring they demonstrate GPU acceleration and use the best-performing models for improved TCO.

  • Collaborate with teams of LLM & ML researchers in the development of full-stack, GPU-accelerated data preparation pipelines for multimodal models.

  • Implement benchmarking, profiling, and optimization of innovative algorithms in Python on various system architectures, specifically targeting LLM applications.

  • Work closely with diverse teams to understand requirements, build and evaluate POCs, and develop roadmaps for production-level tools and library features within the growing LLM ecosystem.
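To give a flavor of the curation work described above, here is a minimal sketch of a de-duplication and quality-filtering step over a hypothetical toy corpus. It uses pandas; with RAPIDS installed, cuDF exposes a largely pandas-compatible API, so the same logic can run GPU-accelerated with a near drop-in swap of the import. The column names and thresholds are illustrative, not part of any NVIDIA library.

```python
import pandas as pd  # with RAPIDS, `import cudf` offers a near drop-in GPU path

# Hypothetical toy corpus; real pipelines stream billions of documents.
docs = pd.DataFrame({
    "text": [
        "NVIDIA GPUs accelerate deep learning.",
        "nvidia gpus accelerate deep learning.",            # near-duplicate (case only)
        "Short.",                                           # too short to keep
        "RAG pipelines retrieve context before generation.",
    ]
})

# 1. Exact de-duplication on a normalized key.
docs["key"] = docs["text"].str.lower().str.strip()
docs = docs.drop_duplicates(subset="key")

# 2. Quality filtering: drop documents under a minimum length.
docs = docs[docs["text"].str.len() >= 20]

curated = docs["text"].tolist()
```

Production curation adds fuzzy de-duplication (e.g. MinHash) and model-based quality classifiers on top of exact steps like these; the DataFrame shape of the problem is what makes the GPU-accelerated PyData stack a natural fit.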

What we need to see:

  • Advanced degree in Computer Science, Computer Engineering, or a related field (or equivalent experience).

  • 5+ years of Python library development experience, including CI systems (GitHub Actions), integration testing, benchmarking, & profiling

  • Proficiency with LLMs and RAG pipelines: prompt engineering, LangChain, llama-index

  • Deep understanding of the PyData & ML/DL ecosystems, including RAPIDS, Pandas, NumPy, scikit-learn, XGBoost, Numba, PyTorch

  • Familiarity with distributed programming frameworks like Dask, Apache Spark, or Ray

  • Visible contributions to open-source projects on GitHub
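The distributed frameworks named above (Dask, Spark, Ray) all generalize the same map-over-partitions pattern: shard a corpus, process each shard independently, and combine the results. The stdlib-only sketch below illustrates that pattern on one machine; the function names and the length threshold are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def clean_partition(partition):
    # Per-partition work: normalize and drop short documents,
    # mirroring what one Dask/Spark/Ray task would do on a shard.
    return [d.strip() for d in partition if len(d.strip()) >= 20]

def partitioned_clean(docs, n_partitions=4):
    # Split the corpus into shards, process shards in parallel,
    # then concatenate the results in order.
    size = max(1, -(-len(docs) // n_partitions))  # ceiling division
    shards = [docs[i:i + size] for i in range(0, len(docs), size)]
    with ThreadPoolExecutor() as pool:
        cleaned = pool.map(clean_partition, shards)
    return [doc for shard in cleaned for doc in shard]
```

Frameworks like Dask extend this idea across a cluster while keeping a familiar Pandas/NumPy-style API over the partitions.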

Ways to stand out from the crowd:

  • Active engagement (published papers, conference talks, blogs) in the data science community

  • Experience with production-level data pipelines, especially SQL-based

  • Experience with software packaging technologies: pip, conda, Docker images

  • Familiarity with Docker-Compose, Kubernetes, and Cloud deployment frameworks

  • Knowledge of parallel programming approaches, especially in CUDA C++

With a competitive salary package and benefits, NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. Are you a creative and autonomous Python Software Engineer developing GPU-accelerated LLM data applications, who loves challenges? Do you have a genuine passion for advancing the state of AI and machine learning across a variety of industries? If so, we want to hear from you. Come join us in these exciting times and make a sizable difference in the exploding world of Deep Learning!

The base salary range is 148,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Top Skills

Python
The Company
HQ: Santa Clara, CA
21,960 Employees
On-site Workplace
Year Founded: 1993

What We Do

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
