Python Software Engineering Intern, Accelerated LLM Data Applications - Fall 2025

Santa Clara, CA
18 USD - 71 USD (hourly)
Internship
Artificial Intelligence • Computer Vision • Hardware • Robotics • Metaverse
The Role
The Python Software Engineering Intern will develop and optimize data processing frameworks for large datasets in GPU-accelerated environments, focusing on LLM training and RAG pipelines, and will collaborate with other teams on production-level tools.

Today, NVIDIA is tapping into the unlimited potential of AI to define the next era of computing, an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, encouraging environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world!

NVIDIA is seeking a Python Software Engineering Intern to further our efforts to GPU-accelerate data engineering for Large Language Model (LLM) tools and libraries. This role is pivotal in accelerating pre-processing pipelines for high-quality multi-modal dataset curation. The day-to-day focus is on developing efficient, scalable systems for de-duplicating, filtering, and classifying training corpora for foundation LLMs, as well as ingesting and preparing datasets for use in Retrieval Augmented Generation (RAG) pipelines. Fundamental to these efforts are iterative testing and improvement of system cost, speed, and accuracy through micro-optimization, prompt engineering, fine-tuning, and applying new research. The ideal candidate is happiest releasing early and often, actively seeks user feedback, and listens for the intent behind feature requests. You are comfortable objectively evaluating the latest AI models and frameworks with an eye toward acceleration potential. Would you like to run your training and test experiments on our supercomputers across thousands of GPUs? Come work with us!
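
To give a concrete flavor of that pre-processing work, here is a minimal, hypothetical sketch of exact de-duplication and length filtering of a text corpus with RAPIDS cuDF. The file paths, column names, and threshold are illustrative assumptions, not part of this posting, and real curation pipelines (fuzzy or semantic de-duplication, classifier-based filtering) are considerably more involved.

    import cudf  # RAPIDS GPU DataFrame library

    # Hypothetical input: one Parquet shard of raw training documents with a "text" column.
    docs = cudf.read_parquet("raw_corpus_shard.parquet")

    # Exact de-duplication: hash each document on the GPU and keep one row per hash.
    docs["doc_hash"] = docs["text"].hash_values()
    docs = docs.drop_duplicates(subset="doc_hash")

    # Simple quality filter: drop very short documents (the 200-character cutoff is arbitrary).
    docs = docs[docs["text"].str.len() > 200]

    # Write the curated shard back out for downstream LLM training or RAG ingestion.
    docs.to_parquet("curated_corpus_shard.parquet")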

What you'll be doing:

  • Develop and optimize Python-based data processing frameworks, ensuring efficient handling of large datasets in GPU-accelerated environments, which is vital for LLM training.

  • Contribute to the design and implementation of RAPIDS and other GPU-accelerated libraries, focusing on seamless integration and performance enhancement in the context of LLM training data preparation and RAG pipelines.

  • Lead development and iterative optimization of components for RAG pipelines, ensuring they leverage GPU acceleration and the best-performing models for improved total cost of ownership (TCO).

  • Collaborate with teams of LLM & ML researchers on the development of full-stack, GPU-accelerated data preparation pipelines for multimodal models.

  • Implement benchmarking, profiling, and optimization of innovative algorithms in Python across various system architectures, specifically targeting LLM applications (a minimal benchmarking sketch follows this list).

  • Work closely with diverse teams to understand requirements, build & evaluate POCs, and develop roadmaps for production-level tools and library features within the growing LLM ecosystem.
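
As a hedged illustration of the benchmarking work above, the sketch below times the same group-by aggregation in pandas and in cuDF. The synthetic dataset, column names, and sizes are invented for the example, and real profiling would add warm-up runs and tools such as Nsight Systems or py-spy.

    import time
    import numpy as np
    import pandas as pd
    import cudf  # RAPIDS GPU DataFrame library

    # Hypothetical synthetic workload: 10 million rows with a low-cardinality key.
    n = 10_000_000
    pdf = pd.DataFrame({
        "key": np.random.randint(0, 1_000, n),
        "value": np.random.rand(n),
    })
    gdf = cudf.from_pandas(pdf)  # copy the same data to the GPU

    def bench(label, fn):
        # Crude wall-clock timing; adequate only for a first comparison.
        start = time.perf_counter()
        fn()
        print(f"{label}: {time.perf_counter() - start:.3f} s")

    bench("pandas groupby", lambda: pdf.groupby("key")["value"].mean())
    bench("cuDF groupby", lambda: gdf.groupby("key")["value"].mean())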

What we need to see:

  • Pursuing an MS or PhD in Computer Science, Computer Engineering, or a related field.

  • Python library development experience, including CI systems (GitHub Actions), integration testing, benchmarking, & profiling

  • Familiarity with LLMs and RAG pipelines: prompt engineering, LangChain, llama-index

  • Understanding of the PyData & ML/DL ecosystems, including RAPIDS, pandas, NumPy, scikit-learn, XGBoost, Numba, and PyTorch

  • Familiarity with distributed programming frameworks like Dask, Apache Spark, or Ray (see the Dask sketch after this list)

  • Visible contributions to open-source projects on GitHub
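
For the distributed-frameworks item above, a minimal Dask sketch of scaling the same kind of filtering across many files might look like this. The glob pattern, column name, and threshold are assumptions for illustration, and with dask_cudf essentially the same code can be dispatched to GPU workers.

    import dask.dataframe as dd

    # Hypothetical input: a directory of Parquet shards of raw documents with a "text" column.
    ddf = dd.read_parquet("raw_corpus/*.parquet")

    # Lazily filter out short documents across all partitions, then drop exact duplicates.
    ddf = ddf[ddf["text"].str.len() > 200]
    ddf = ddf.drop_duplicates(subset="text")

    # Writing the output triggers the distributed computation.
    ddf.to_parquet("curated_corpus/")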

Ways to stand out from the crowd:

  • Active engagement (published papers, conference talks, blogs) in the data science community

  • Experience with production-level data pipelines, especially SQL-based

  • Experience with software packaging technologies: pip, conda, Docker images

  • Familiarity with Docker Compose, Kubernetes, and cloud deployment frameworks

  • Knowledge of parallel programming approaches, especially in CUDA C++

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!

The hourly rate for our interns is 18 USD - 71 USD. Our internship hourly rates are standard pay determined by position, location, year in school, degree, and experience.

You will also be eligible for Intern benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Top Skills

Spark
Dask
Docker
Kubernetes
Numba
NumPy
pandas
Python
PyTorch
RAPIDS
Ray
scikit-learn
XGBoost

The Company
HQ: Santa Clara, CA
21,960 Employees
On-site Workplace
Year Founded: 1993

What We Do

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”
