Machine Learning Intern

Santa Clara, CA
Internship
Artificial Intelligence • Machine Learning • Software
The Role
The intern will develop a KV-cache solution for LLM inference on d-Matrix hardware, focusing on memory optimization and efficiency using PyTorch.

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration.

We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Location:

Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.

The role: Machine Learning Intern

What you will do:

We are seeking a motivated and innovative Machine Learning Intern to join our team. The intern will work on developing a dynamic Key-Value (KV) cache solution for Large Language Model (LLM) inference, aimed at improving memory utilization and execution efficiency on d-Matrix hardware. This project will involve modeling at the PyTorch graph level to enable efficient, torch-native support for the KV-cache, addressing limitations in current solutions.

• Research and analyze existing KV-cache implementations used in LLM inference, particularly those utilizing lists of past key-value PyTorch tensors.

• Investigate “Paged Attention” mechanisms that leverage dedicated CUDA data structures to optimize memory for variable sequence lengths.

• Design and implement a torch-native dynamic KV-cache model that integrates seamlessly with PyTorch.

• Model KV-cache behavior within the PyTorch compute graph to improve compatibility with torch.compile and facilitate export of the compute graph.

• Conduct experiments to evaluate memory utilization and inference efficiency on d-Matrix hardware.
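The "list of past key-values" pattern mentioned above can be sketched in plain PyTorch. This is an illustrative toy (the class and method names are assumptions, not a d-Matrix or Hugging Face API), showing why concatenation-based dynamic caches are memory-inefficient:

```python
# Hedged sketch of a dynamic, concatenation-based KV-cache.
# Names and shapes are illustrative assumptions only.
import torch

class DynamicKVCache:
    """Per-layer KV cache that grows along the sequence dimension."""

    def __init__(self, num_layers: int):
        # One (key, value) pair per transformer layer; starts empty.
        self.keys = [None] * num_layers
        self.values = [None] * num_layers

    def update(self, layer: int, k: torch.Tensor, v: torch.Tensor):
        # k, v: [batch, heads, new_tokens, head_dim]
        if self.keys[layer] is None:
            self.keys[layer], self.values[layer] = k, v
        else:
            # Dynamic growth: torch.cat reallocates and copies the cache
            # every decode step, one of the inefficiencies a dedicated
            # KV-cache design aims to avoid.
            self.keys[layer] = torch.cat([self.keys[layer], k], dim=2)
            self.values[layer] = torch.cat([self.values[layer], v], dim=2)
        return self.keys[layer], self.values[layer]

cache = DynamicKVCache(num_layers=1)
for step in range(3):  # simulate decoding one token at a time
    k = torch.randn(1, 4, 1, 8)  # [batch, heads, 1 new token, head_dim]
    v = torch.randn(1, 4, 1, 8)
    keys, values = cache.update(0, k, v)

print(keys.shape)  # torch.Size([1, 4, 3, 8])
```

Because every cache tensor changes shape each step, this pattern also complicates graph capture, which motivates the torch.compile-oriented objectives below.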

Key Objectives:

• Develop efficient KV-cache support on d-Matrix hardware.

• Create a torch-level modeling framework for a dynamic KV-cache.

• Ensure the KV-cache model is compatible with torch.compile and other PyTorch features for optimized graph export.
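One common direction for the torch.compile objective (a hedged sketch, not the project's actual design) is a statically pre-allocated buffer updated in place, so the underlying cache tensors keep a fixed shape across decode steps:

```python
# Hedged sketch of a statically pre-allocated KV-cache.
# Sizes and names are illustrative assumptions, not d-Matrix specifics.
import torch

class StaticKVCache:
    """Fixed-size KV buffer for one layer, updated in place."""

    def __init__(self, max_len: int, batch: int, heads: int, head_dim: int):
        shape = (batch, heads, max_len, head_dim)
        self.k = torch.zeros(shape)
        self.v = torch.zeros(shape)
        self.pos = 0  # number of tokens written so far

    def update(self, k_new: torch.Tensor, v_new: torch.Tensor):
        n = k_new.shape[2]
        # In-place slice assignment: the underlying buffers never resize,
        # avoiding per-step reallocations and keeping storage static for
        # graph capture.
        self.k[:, :, self.pos:self.pos + n] = k_new
        self.v[:, :, self.pos:self.pos + n] = v_new
        self.pos += n
        return self.k[:, :, :self.pos], self.v[:, :, :self.pos]

cache = StaticKVCache(max_len=16, batch=1, heads=4, head_dim=8)
for _ in range(3):  # decode three tokens, one at a time
    k, v = cache.update(torch.randn(1, 4, 1, 8), torch.randn(1, 4, 1, 8))

print(k.shape)       # torch.Size([1, 4, 3, 8]) — a view into the buffer
print(cache.k.shape)  # torch.Size([1, 4, 16, 8]) — never changes
```

The trade-off is a fixed maximum sequence length and wasted memory for short sequences, which is exactly the gap that paged or dynamic torch-native designs try to close.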

What you will bring:

• Currently pursuing a degree in Computer Science, Electrical Engineering, Machine Learning, or a related field.

• Familiarity with PyTorch and deep learning concepts, particularly regarding model optimization and memory management.

• Understanding of CUDA programming and hardware-accelerated computation (hands-on CUDA experience is a plus).

• Strong programming skills in Python, with experience in PyTorch.

• Analytical mindset with the ability to approach problems creatively.

Preferred Qualifications:

• Experience with deep learning model inference optimization.

• Knowledge of data structures used in machine learning for memory and compute efficiency.

• Experience with hardware-specific optimization, especially on custom accelerators such as d-Matrix hardware, is an advantage.

This role is ideal for a self-motivated intern interested in applying advanced memory management techniques in the context of large-scale machine learning inference. If you’re passionate about optimizing machine learning models and are excited to explore cutting-edge solutions in model inference, we encourage you to apply.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.

Top Skills

CUDA
Python
PyTorch

The Company
HQ: Santa Clara, CA
102 Employees
On-site Workplace

What We Do

d-Matrix is building a new way of doing datacenter AI inferencing using in-memory computing (IMC) techniques with chiplet level scale-out interconnects. Founded in 2019, d-Matrix has attacked the physics of memory-compute integration using innovative circuit techniques, ML tools, software and algorithms; solving the memory-compute integration problem, which is the final frontier in AI compute efficiency.
