Principal Product Manager, ML

Sunnyvale, CA

Cerebras has developed a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.

We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.

The Role

In this role, you will be responsible for productizing the most critical ML use cases for our company.  

You will work closely with Product leadership and our ML Research and Applied ML teams to identify the most promising areas within the industry and research community for us to go after, balancing business value for our customers and ML thought leadership for Cerebras.  

You will translate abstract neural network requirements into concrete deliverables for the Engineering team and work with cross-functional partners to establish roadmaps, process, success criteria, and feedback loops to continuously improve our products.  

This role combines the highly technical with the highly strategic. Successful candidates will have a deep understanding of machine learning and deep learning concepts, familiarity with common modern models (particularly in the LLM space), and the ability to understand the mathematical foundations behind them. Ideal candidates can go beyond model understanding to see the connections and commonalities across different types of neural networks in different application domains. They will also be close followers of recent developments in deep learning and have a point of view on which types of models may be widely used within the next one, three, and five years.

At Cerebras, we're proud to be among the few companies globally capable of training massive LLMs with over 100 billion parameters. We're active contributors to the open-source community, with millions of downloads of our models on Hugging Face. Our customers include national labs, global corporations across multiple industries, and top-tier healthcare systems. This month, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. We are already booking hundreds of millions of dollars in revenue each year, with a strong growth trajectory.  

As the Cerebras ML PM, you will be in the pilot’s seat, driving the transformational role of AI in multiple different industries and getting to work with some of the largest and most interesting datasets in the world alongside a world-class ML research and engineering team.  

Responsibilities

  • Gain a deep understanding of deep learning use cases across different industries and organization types, integrating market analysis, research, and user research studies.   
  • Develop, define, maintain, and own the product roadmap for the neural network architectures and machine learning methods supported by the Cerebras platform. 
  • Work directly with current and prospective end users to define and deeply understand market requirements for AI models used in industry and research. 
  • Work directly with engineering to define software requirements, priorities, and staging for ML network support, specifying associated features, and bringing features from concept to launch. 
  • Define success metrics and testing criteria for successful enablement of these applications, including taking neural network architectures from their research paper descriptions, breaking them down into their detailed components, and articulating the accuracy and performance expectations of the ML community. 
  • Partner with Marketing, Product Marketing, and Sales by supporting feature documentation and defining ML user needs, the competitive landscape, our product value proposition, and user stories. 
  • Work across Product, Engineering, and business leadership to help define our product go-to-market approach, maximizing value to users and expanding our user community over time. 
  • Communicate roadmaps, priorities, experiments, and decisions clearly across a wide spectrum of audiences, from internal customers to executives. 

Requirements

  • Bachelor’s or Master’s degree in computer science, electrical engineering, physics, mathematics, a related scientific/engineering discipline, or equivalent practical experience
  • 3–10+ years of product management experience, working directly with engineering teams, end users (enterprise data scientists/ML researchers), and senior product/business leaders 
  • Strong fundamentals in machine learning/deep learning concepts, modern models, and the mathematical foundations behind them; understanding of how to apply deep learning models to relevant real-world applications and use cases  
  • Experience working with a data science/ML stack, including TensorFlow and PyTorch  
  • An entrepreneurial sense of ownership of overall team and product success, and the ability to make things happen around you. A bias towards getting things done, owning the solution, and driving problems to resolution
  • Outstanding presentation skills with a strong command of verbal and written communication 

Preferred

  • Experience developing machine learning applications or building tools for machine learning application developers  
  • Prior research publications in the machine learning/deep learning fields demonstrating deep understanding of the space 

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU
  2. Publish and open source their cutting-edge AI research
  3. Work on one of the fastest AI supercomputers in the world
  4. Enjoy job stability with startup vitality
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs

Read our blog: Five Reasons to Join Cerebras in 2024.

Apply today and become part of the forefront of groundbreaking advancements in AI.

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.

The Company
HQ: Sunnyvale, CA
402 Employees
On-site Workplace
Year Founded: 2016

What We Do

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, functional business experts and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art.

The CS-2 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE-2). The WSE-2 is the largest chip ever built. It contains 2.6 trillion transistors and covers more than 46,225 square millimeters of silicon. The largest graphics processor on the market has 54 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly producing answers in less time. As a result, neural networks that in the past took months to train, can now train in minutes on the Cerebras CS-2 powered by the WSE-2.

Join us: https://cerebras.net/careers/
