Cerebras has developed a radically new chip and system to dramatically accelerate deep learning applications. Our system runs training and inference workloads orders of magnitude faster than contemporary machines, fundamentally changing the way ML researchers work and pursue AI innovation.
We are innovating at every level of the stack – from chip, to microcode, to power delivery and cooling, to new algorithms and network architectures at the cutting edge of ML research. Our fully integrated system delivers unprecedented performance because it is built from the ground up for deep learning workloads.
Responsibilities:
- Build performance models to understand and project the performance of state-of-the-art and customer models
- Optimize our kernel microcode and compiler algorithms to improve ML model utilization on the Cerebras WSE
- Debug and understand runtime performance on the system and cluster
- Design performance features for upcoming ML architectures to enable the highest-performance execution for both training and inference
- Develop tools and infrastructure to help visualize performance data collected from the Wafer Scale Engine and our compute cluster
Requirements:
- Experience managing and technically leading small teams (3-5 people)
- Experience growing teams and organizations is preferred
- Strong communication and presentation skills
- Master's degree in Electrical Engineering or Computer Science
- Strong background in computer architecture
- Strong analytical and problem solving mindset
- 3+ years of experience in a relevant domain (Computer Architecture, Network Performance, CPU/GPU Performance, Kernel Optimization, HPC)
- Experience working on CPU/GPU simulators
- Exposure to performance profiling and debugging on any system pipeline
- Comfort with C++ and Python
- Exposure to and basic understanding of machine learning is desired
- Any pre-silicon performance validation exposure is a plus but not necessary
Why Join Cerebras
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU
- Publish and open source their cutting-edge AI research
- Work on one of the fastest AI supercomputers in the world
- Enjoy job stability with startup vitality
- Enjoy a simple, non-corporate work culture that respects individual beliefs
Read our blog: Five Reasons to Join Cerebras in 2024.
Apply today and join the forefront of groundbreaking advancements in AI.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
What We Do
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, functional business experts and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art.
The CS-2 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE-2). The WSE-2 is the largest chip ever built. It contains 2.6 trillion transistors and covers 46,225 square millimeters of silicon. The largest graphics processor on the market has 54 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that in the past took months to train can now train in minutes on the Cerebras CS-2 powered by the WSE-2.
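To put the figures above in perspective, a quick back-of-the-envelope calculation (using only the numbers quoted in this section) shows roughly how much larger the WSE-2 is than the largest GPU:

```python
# Illustrative arithmetic only, using the figures quoted above.
wse2_transistors = 2.6e12   # 2.6 trillion transistors
wse2_area_mm2 = 46_225      # die area in square millimeters

gpu_transistors = 54e9      # 54 billion transistors (largest GPU on the market)
gpu_area_mm2 = 815

print(f"Transistor ratio: ~{wse2_transistors / gpu_transistors:.0f}x")
print(f"Die-area ratio:   ~{wse2_area_mm2 / gpu_area_mm2:.0f}x")
```

That works out to roughly a 48x advantage in transistor count and a 57x advantage in silicon area.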
Join us: https://cerebras.net/careers/