At Groq, we believe in an AI economy powered by human agency. We envision a world where AI is accessible to all, a world that demands processing power that is better, faster, and more affordable than is available today. AI applications are currently constrained by the limitations of the Graphics Processing Unit (GPU), a technology originally developed for the gaming market and soon to become the weakest link in the AI economy.
Enter Groq's LPU™ AI Inference Technology. Specifically engineered for the demands of large language models (LLMs), the Language Processing Unit outpaces the GPU in speed, power, efficiency, and cost-effectiveness. The quickest way to understand the opportunity is to watch the following talk – groq.link/scspdemo.
Why join Groq? AI will change humanity forever, and we believe the preservation of human agency and self-determination is only possible if AI is made affordable and universally accessible. Groq's LPUs will power AI from an early stage, and you will get to leave your fingerprint on civilization.
Software Engineer (all levels) - Inference System
Missions and Mandates:
- Build and operate real-time, distributed compute frameworks and runtimes to deliver planet-scale low-latency inference for LLMs and advanced AI workloads, optimized for heterogeneous hardware and dynamic global workloads.
- Develop deterministic, low-overhead hardware abstractions for thousands of synchronously coordinated GroqChips across a software-defined interconnection network, prioritizing fault tolerance, real-time diagnostics, ultra-low-latency execution and mission-critical reliability.
- Foster multidisciplinary collaboration with cloud, compiler, infra and hardware teams to align engineering efforts and drive unified progress toward shared objectives.
- Slash operational load and improve SLOs, make tokens go burrrrrrrrrrr, position Groq for world domination. 🚀
Apply If:
- You have a history of continuously shipping high-impact, production-ready code at speed while maintaining collaboration in cross-functional teams.
- You possess deep expertise in computer architecture, operating systems, algorithms, data structures, hardware software co-design, and parallel/distributed computing.
- You've mastered system-level programming (C++, Rust, or similar) with a focus on low-level optimizations and hardware-aware design.
- You're strong at profiling and optimizing systems for latency, throughput, and efficiency, with a zero-tolerance approach to wasted cycles or resources.
- You have a relentless commitment to automated testing and CI/CD pipelines, and believe that "untested code is broken code."
- You're deeply curious about system internals—from kernel-level interactions to hardware dependencies—paired with the ability to debug across abstraction layers.
- You make pragmatic technical debt decisions, balancing short-term velocity with long-term system health.
- You practice strong version control and modular design practices when working in large-scale codebases.
- Nice to have: Experience operating large-scale distributed systems for high-traffic internet services.
- Nice to have: Experience deploying and optimizing machine learning (ML) or high-performance computing (HPC) workloads in production systems.
- Nice to have: Hands-on optimization of performance-critical applications using GPUs, FPGAs, or ASICs (e.g., memory management, kernel optimization).
- Nice to have: Familiarity with ML frameworks (e.g., PyTorch) and compiler tooling (e.g., MLIR) for AI/ML workflow integration.
The Ideal Candidate:
- Initiates (without derailing): Proactively spots opportunities to solve problems or improve processes—but knows when to align with team priorities first.
- Builds stuff that actually ships: Believes "code in prod" > "perfect slides." Prioritizes delivering real value over polishing ideas that never leave the whiteboard.
- Is a craftsmanship junkie: Always asks, “How can we make this better?” and isn’t afraid to geek out over details.
- Plays to win (together): Thinks winning = everyone wins. Aligns goals with teammates and customers like a pro, because no one scores a touchdown alone.
Logistical Requirements:
- Authorized to work in Canada or the United States
- Available to work American Eastern or Pacific hours
If this sounds like you, we’d love to hear from you!
Location: Groq is a geo-agnostic company, meaning you work where you are. Exceptional candidates thrive in asynchronous, remote collaboration. Some roles may require being located near our primary sites, as indicated in the job description.
At Groq: Our goal is to hire and promote an exceptional workforce as diverse as the global populations we serve. Groq is an equal opportunity employer committed to diversity, inclusion, and belonging in all aspects of our organization. We value and celebrate diversity in thought, beliefs, talent, expression, and backgrounds. We know that our individual differences make us better.
Groq is an Equal Opportunity Employer that is committed to inclusion and diversity. Qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, disability or protected veteran status. We also take affirmative action to offer employment opportunities to minorities, women, individuals with disabilities, and protected veterans.
Groq is committed to working with qualified individuals with physical or mental disabilities. Applicants who would like to contact us regarding the accessibility of our website or who need special assistance or a reasonable accommodation for any part of the application or hiring process may contact us at: [email protected]. This contact information is for accommodation requests only. Evaluation of requests for reasonable accommodations will be determined on a case-by-case basis.
What We Do
Groq is an AI solutions company delivering ultra-low latency AI inference with the world's first Language Processing Unit™. With turnkey generalized software and a deterministic Tensor Streaming architecture, Groq offers a synchronous ecosystem built for ultra-fast inference at scale. Groq solutions maximize human capital and innovative technology performance, having been proven to reduce developer complexity and accelerate time-to-production and ROI. Designed, engineered, and manufactured completely in North America, Groq offers domestically-based and scalable supply that is available now, capable of delivering 390 racks in 6-12 months and ramped lead times of 6-12 weeks. Learn more at groq.com.