Machine Learning Research Engineer

Posted 23 Days Ago
Cupertino, CA
Entry level
Artificial Intelligence • Hardware • Software
The Role
The Machine Learning Research Engineer at Etched will conduct novel research on the Sohu chip, translate core transformer operations into high-performance instruction sequences, co-design emerging model architectures for efficiency on Sohu, and contribute to the Sohu software stack in Python and Rust. Candidates should have an ML research background, an interest in hardware co-design, and experience with Python, PyTorch, and/or JAX.

About Etched

Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents. Etched Labs is the organization within Etched whose mission is to democratize generative AI, pushing the boundaries of what will be possible in a post-Sohu world. 

Key responsibilities

  • Propose and conduct novel research to achieve results on Sohu that are unviable on GPUs
  • Translate core mathematical operations from the most popular Transformer-based models into maximally performant instruction sequences for Sohu
  • Develop deep architectural knowledge that informs best-in-the-world software performance on Sohu hardware, collaborating with hardware architects and designers
  • Co-design and finetune emerging model architectures for highest efficiency on Sohu
  • Guide and contribute to the Sohu software stack, performance characterization tools, and runtime abstractions by implementing frontier models using Python and Rust.
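The "core mathematical operations" referenced above center on scaled dot-product attention, the workload a transformer-specific ASIC hard-codes. A toy pure-Python sketch of that operation (illustrative only; a real Sohu instruction sequence would look nothing like this):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention for tiny row-major matrices:
    softmax(Q @ K^T / sqrt(d)) @ V, computed one query row at a time."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Weighted average of the value rows.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# A query aligned with the first key attends almost entirely to V[0]:
print(attention([[10.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]]))
```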

Representative projects

  • Propose and implement a novel test-time compute algorithm that leverages Sohu’s unique capabilities to unlock a product that could never be built on a typical GPU
  • Implement diffusion models on Sohu to achieve GPU-impossible latencies that allow for real-time image generation
  • Tune model instructions and scheduling algorithms to optimize for utilization, latency, throughput, or a mix of these metrics
  • Implement model-specific inference-time acceleration techniques such as speculative decoding, tree search, KV cache sharing, and priority scheduling by interacting with the rest of the inference serving stack
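One of the techniques named above, greedy speculative decoding, can be sketched in a few lines: a cheap draft model proposes a block of tokens, and the expensive target model verifies them in one pass, keeping the longest agreeing prefix. The draft/target rules below are made-up toys, not Etched's implementation:

```python
def draft_next(ctx):
    # Toy "cheap draft model": next token is last token + 1.
    return ctx[-1] + 1

def target_next(ctx):
    # Toy "expensive target model": same rule, except it emits 0 after a 4,
    # so draft and target occasionally disagree.
    return 0 if ctx[-1] == 4 else ctx[-1] + 1

def speculative_decode(ctx, k=4, steps=3):
    ctx = list(ctx)
    for _ in range(steps):
        # 1) Draft proposes k tokens autoregressively.
        proposed, tmp = [], list(ctx)
        for _ in range(k):
            t = draft_next(tmp)
            proposed.append(t)
            tmp.append(t)
        # 2) Target verifies: keep the longest matching prefix, and at the
        #    first disagreement substitute the target's own token and stop.
        accepted, tmp = [], list(ctx)
        for t in proposed:
            expected = target_next(tmp)
            if t == expected:
                accepted.append(t)
                tmp.append(t)
            else:
                accepted.append(expected)
                tmp.append(expected)
                break
        ctx.extend(accepted)
    return ctx

print(speculative_decode([1]))  # → [1, 2, 3, 4, 0, 1, 2, 3, 4, 0]
```

The output is identical to decoding greedily with the target model alone, which is the correctness guarantee of speculative decoding; the win is that agreeing tokens are verified in one batched target pass instead of one pass each.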

You may be a good fit if you have

  • An ML research background with an interest in HW co-design
  • Experience with Python, PyTorch, and/or JAX
  • Familiarity with transformer model architectures and/or inference serving stacks (vLLM, SGLang, etc.) and/or experience working in distributed inference/training environments
  • Experience working cross-functionally in diverse software and hardware organizations

Strong candidates may also have

  • ML Systems Research and HW Co-design backgrounds
  • Published inference-time compute research and/or efficient ML research
  • Experience with Rust
  • Familiarity with GPU kernels, the CUDA compilation stack and related tools, or other hardware accelerators

Benefits

  • Full medical, dental, and vision packages, with 100% of premium covered
  • Housing subsidy of $2,000/month for those living within walking distance of the office
  • Daily lunch and dinner in our office
  • Relocation support for those moving to Cupertino

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in Cupertino, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.


The Company
HQ: Cupertino, CA
53 Employees
On-site Workplace
Year Founded: 2022

What We Do

By burning the transformer architecture into our chips, we’re creating the world’s most powerful servers for transformer inference.
