ML Research Engineer — LLM Safety

Remote • San Francisco, CA
Mid level
Artificial Intelligence • Software
The Role
The ML Research Engineer will focus on LLM safety, generating synthetic data, training and benchmarking models, and delivering scalable production code. They will lead research initiatives, co-author academic papers, and contribute to the development of safe and responsible LLMs for business applications.

At Dynamo AI, we believe that LLMs must be developed with safety, privacy, and real-world responsibility in mind. Our ML team comes from a culture of academic research driven to democratize AI advancements responsibly. By operating at the intersection of ML research and industry applications, our team empowers Fortune 500 companies to adopt frontier research for their next generation of LLM products. Join us if you:

• Wish to work on the premier platform for private and personalized LLMs. We provide the fastest end-to-end solution to deploy research in the real world with our fast-paced team of ML Ph.D.s and builders, free of Big Tech / academic bureaucracy and constraints.

• Are excited at the idea of democratizing state-of-the-art research on safe and responsible AI.

• Are motivated to work at a 2023 CB Insights Top 100 AI Startup and see your impact on end customers in weeks, not years.

• Care about building a platform to empower fair, unbiased, and responsible development of LLMs and don’t accept the status quo of sacrificing user privacy for the sake of ML advancement.

Responsibilities

  • Own an LLM vertical with a focus on a specific safety domain, technique, or use case (from either a defense or a red-team attack perspective).
  • Generate high quality synthetic data, train LLMs, and conduct rigorous benchmarking.
  • Deliver robust, scalable, and reproducible production code.
  • Push the envelope by developing novel techniques and research that deliver the world’s most harmless and helpful models. Your research will directly empower our customers to more feasibly deploy safe and responsible LLMs.
  • Co-author papers, patents, and presentations with our research team by integrating other members’ work with your vertical.

Qualifications

  • Deep domain knowledge in LLM safety techniques.
  • Extensive experience designing, training, and deploying multiple types of LLM architectures in real-world settings. Comfort leading end-to-end projects.
  • Adaptability and flexibility. In both the academic and startup world, a new finding in the community may necessitate an abrupt shift in focus. You must be able to learn, implement, and extend state-of-the-art research.
  • Preferred: past research or projects in either attacking or defending LLMs.

Dynamo AI is committed to maintaining compliance with all applicable local and state laws regarding job listings and salary transparency. This includes adhering to specific regulations that mandate the disclosure of salary ranges in job postings or upon request during the hiring process. We strive to ensure our practices promote fairness, equity, and transparency for all candidates.


Salary for this position may vary based on several factors, including the candidate's experience, expertise, and the geographic location of the role. Compensation is determined to ensure competitiveness and equity, reflecting the cost of living in different regions and the specific skills and qualifications of the candidate.

Top Skills

LLM
The Company
San Francisco, CA
58 Employees
On-site Workplace
Year Founded: 2021

What We Do

Dynamo AI is pioneering the first end-to-end secure and compliant generative AI infrastructure that runs in any on-premise or cloud environment.

With a holistic approach to GenAI compliance, we help accelerate enterprise adoption to deploy secure, reliable, and compliant AI applications at scale.

Our platform includes three products:
- DynamoEval evaluates GenAI models for security, hallucination, privacy, and compliance risks.
- DynamoEnhance remediates identified risks, ensuring more reliable operations.
- DynamoGuard offers real-time guardrailing, customizable in natural language and with minimal latency.

Our client base and partnerships include Fortune 1000 companies across industries, underscoring our proven success in securing GenAI in highly regulated environments.
