Machine Learning Engineer

Posted 4 Days Ago
San Francisco, CA
Junior
Consumer Web • Digital Media • Enterprise Web • Marketing Tech • News + Entertainment • Software • Generative AI
A generative media company building the AI-native creation platform around the world's first omnimodal foundation model.
The Role
The Machine Learning Engineer will design and implement high-performance computing solutions for training and deploying ML models. Responsibilities include managing cloud infrastructures, optimizing performance for large datasets, overseeing distributed training, and collaborating with teams on computational needs.
Summary Generated by Built In

Hedra is a pioneering generative media company backed by top investors at Index, A16Z, and Abstract Ventures. We're building Hedra Studio, a multimodal creation platform capable of control, emotion, and creative intelligence.

At the core of Hedra Studio is our Character-3 foundation model, the first omnimodal model in production. Character-3 jointly reasons across image, text, and audio for more intelligent video generation — it’s the next evolution of AI-driven content creation.

Note: At Hedra, we’re a team of hard-working, passionate individuals seeking to fundamentally change content and build a generational company together. You should have start-up experience and be a self-starter who is driven to build impactful products that change the status quo. You must be willing to work in person in either NYC or SF.

Overview:

We are looking for an ML Engineer with expertise in high-performance computing systems to manage and optimize our computational infrastructure for training and deploying our machine learning models. The ideal candidate will have experience with cloud computing platforms and tools for managing ML workloads at scale, supporting our 3DVAE and video diffusion models.

Responsibilities:

  • Design and implement scalable computing solutions for training and deploying ML models, ensuring infrastructure can handle large video datasets.

  • Manage and optimize the performance of our computing clusters or cloud instances on platforms such as AWS or Google Cloud to support distributed training.

  • Ensure that our infrastructure can handle the resource-intensive tasks associated with training large generative models.

  • Monitor system performance and implement improvements to maximize efficiency, using tools like Kubeflow for orchestration.

  • Collaborate with the team to understand their computational needs and provide appropriate solutions, facilitating seamless model deployment.
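To make the distributed-training responsibility above concrete: frameworks typically shard a dataset across workers so each GPU sees a disjoint slice per epoch. The sketch below is purely illustrative (not Hedra's actual stack) and mimics the rank-based partitioning used by samplers such as PyTorch's DistributedSampler, in plain Python:

```python
def shard_indices(num_samples: int, rank: int, world_size: int) -> list[int]:
    """Assign every `world_size`-th sample to a given rank, so each
    worker trains on a disjoint slice of the dataset."""
    if not 0 <= rank < world_size:
        raise ValueError("rank must be in [0, world_size)")
    return list(range(rank, num_samples, world_size))


# Example: 10 video clips split across 4 workers.
shards = [shard_indices(10, r, 4) for r in range(4)]

# Every sample is seen exactly once across all ranks.
assert sorted(i for s in shards for i in s) == list(range(10))
```

In a real training job the `rank` and `world_size` values would come from the launcher's environment (e.g. `RANK` and `WORLD_SIZE` under `torchrun`); here they are hard-coded for illustration.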

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field, with a focus on system administration.

  • Experience with cloud computing platforms such as Amazon Web Services, Google Cloud, or Microsoft Azure, essential for managing large-scale ML workloads.

  • Knowledge of containerization tools like Docker and orchestration tools like Kubeflow, crucial for deploying models at scale.

  • Understanding of distributed training techniques and how to scale models across multiple GPUs or machines, aligning with video generation needs.

  • Proficiency in scripting languages like Python or Bash for automation tasks, facilitating infrastructure management.

  • Strong problem-solving and communication skills, given the need to collaborate with diverse teams.
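As an example of the kind of Python automation the qualifications above describe (a hypothetical sketch, not part of this posting): cluster health checks often parse `nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv,noheader,nounits` output to spot hot or idle GPUs. In practice the CSV text would come from invoking `nvidia-smi` on each node; here it is a hard-coded sample:

```python
def parse_gpu_stats(csv_text: str) -> list[dict]:
    """Parse nvidia-smi CSV query output (index, utilization %, memory MiB)
    into one dict per GPU."""
    stats = []
    for line in csv_text.strip().splitlines():
        idx, util, mem = (field.strip() for field in line.split(","))
        stats.append({"index": int(idx), "util_pct": int(util), "mem_mib": int(mem)})
    return stats


def busiest_gpu(csv_text: str) -> int:
    """Return the index of the most utilized GPU in the report."""
    return max(parse_gpu_stats(csv_text), key=lambda g: g["util_pct"])["index"]


# Sample report from a hypothetical 2-GPU node.
sample = "0, 87, 40532\n1, 12, 1024"
assert busiest_gpu(sample) == 0
```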

This role is vital for ensuring the computational backbone supports the company’s ML efforts, focusing on deployment and scalability.

Benefits:

  • Competitive compensation and equity

  • 401k (no match)

  • Healthcare (Silver PPO Medical, Vision, Dental)

  • Lunch and snacks at the office

We encourage you to apply even if you don't fully meet all the listed requirements; we value potential and diverse perspectives, and your unique skills could be a great asset to our team.

Top Skills

AWS
Bash
Docker
GCP
Kubeflow
Azure
Python

The Company
HQ: San Francisco, CA
14 Employees
On-site Workplace
Year Founded: 2023

What We Do

Hedra is an AI-native platform for multimodal creation. The platform is built around its own cutting-edge proprietary video model, Character-3, the first omnimodal model in production. Alongside Character-3, the platform brings other leading foundation models into one ecosystem spanning generative images, video, and audio. Prosumer and enterprise users leverage Hedra to generate content ranging from viral social media posts to branded marketing content.

Why Work With Us

We're an early-stage team that moves very fast and is building at the leading edge of AI/Media. Every employee takes on a lot of ownership and has an opportunity to learn and grow rapidly.

Hedra Offices

Hedra's main office is in San Francisco and secondary hub is in New York.


