The Opportunity
AI is rapidly transforming the world. Whether it’s developing the next generation of human-level intelligence, enhancing voice assistants, or enabling researchers to analyze genetic markers at scale, AI is increasingly integrated into various aspects of our daily lives.
Arize is the leading AI observability and evaluation platform, helping AI teams discover issues, diagnose problems, and improve the results of their AI applications. We are here to build world-class software that makes AI applications work better.
The Team
Our Backend Engineering team builds all of the highly scalable distributed services that power Arize's ML observability platform. While Go is our primary language for these distributed systems, the team also maintains services and tools written in Python, Java, and TypeScript. The expectations and scope for every individual on this team are high; whether it's finding the most efficient way to compute model evaluation metrics across billions of data points, designing the next generation of our OLAP database architecture, or researching and implementing the latest dimensionality reduction techniques, you will never lack a technical challenge.
You will be a part of the core team that drives product innovation at Arize. You will be challenged to understand how some of the most impactful engineering teams are developing AI and LLM-powered applications, and to build the right tools that enable them to do their best work. Our product solutions range from clean APIs that instrument applications with minimal effort, to interactive playgrounds for prompt engineering and agent development, to real-time evaluation infrastructure that scales to millions of annotations per second.
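To give a flavor of the instrumentation side of this work, here is a minimal, self-contained sketch in Go, written for this posting rather than taken from Arize's actual SDK; the Span, Collector, and Instrument names are hypothetical. It shows the basic idea of wrapping an LLM call so that latency, token counts, and errors are captured for later evaluation.

```go
// Hypothetical sketch (not Arize's SDK): wrap an LLM call so every
// invocation is recorded for downstream observability and evaluation.
package main

import (
	"fmt"
	"time"
)

// Span is a hypothetical record of one instrumented LLM call.
type Span struct {
	Name       string
	Prompt     string
	Completion string
	Tokens     int
	Latency    time.Duration
	Err        error
}

// Collector is a stand-in for a real exporter (OTLP, Kafka, etc.).
type Collector struct{ spans []Span }

func (c *Collector) Record(s Span) { c.spans = append(c.spans, s) }

// Instrument wraps an LLM call so each invocation emits a Span.
func Instrument(c *Collector, name string, call func(prompt string) (string, int, error)) func(string) (string, error) {
	return func(prompt string) (string, error) {
		start := time.Now()
		completion, tokens, err := call(prompt)
		c.Record(Span{
			Name:       name,
			Prompt:     prompt,
			Completion: completion,
			Tokens:     tokens,
			Latency:    time.Since(start),
			Err:        err,
		})
		return completion, err
	}
}

func main() {
	collector := &Collector{}

	// Fake LLM call standing in for a real provider client.
	fakeLLM := func(prompt string) (string, int, error) {
		return "stubbed completion for: " + prompt, 42, nil
	}

	chat := Instrument(collector, "chat-completion", fakeLLM)
	if _, err := chat("Summarize today's model evaluation results."); err != nil {
		fmt.Println("call failed:", err)
	}

	for _, s := range collector.spans {
		fmt.Printf("%s: %d tokens in %s\n", s.Name, s.Tokens, s.Latency)
	}
}
```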
What You’ll Do
- Write maintainable, scalable, and performant backend code primarily in Go, Java, and Python, with opportunities to work in TypeScript.
- Build high-volume and highly available analytics systems.
- Design and build APIs specific to our customers’ Machine Learning and LLM workflows.
- Prototype, optimize, and maintain scalable backend services that power the Arize core platform.
- Extend, and contribute back to, open source OLAP databases and distributed message queue frameworks.
- Develop and integrate collection tools for robust monitoring of ML and LLM pipelines.
- Research and implement cutting-edge visualization & dimensionality reduction algorithms in a distributed environment.
- Collaborate with our product and design teams, and directly with customers' engineering teams, to enhance and expand our product offerings.
- Contribute to building our own in-house AI agents.
What We’re Looking For
- 5+ years of experience working with high-performance backend systems.
- Strong experience writing Go, Python, TypeScript/Node, Java, or similar server-side programming languages.
- Enthusiasm and interest in the AI and LLM ecosystem, with a desire to learn and stay updated on emerging technologies.
- Previous work building and operating highly complex SaaS platforms/systems.
- Experience working with public clouds and container orchestration (AWS, GCP, Azure, Kubernetes, etc.).
Bonus Points, But Not Required
- Experience with distributed stream processing (Kafka, Gazette, or similar).
- Experience with OLAP systems.
- Familiarity with system observability tooling like Prometheus.
- Working knowledge of Machine Learning and/or Data Science.
- First-hand experience working with large language models (LLMs) or developing AI products.
The estimated annual salary for this role is between $125,000 and $225,000, plus a competitive equity package. Actual compensation is determined based on a variety of job-related factors that may include transferable work experience, skill sets, and qualifications. Total compensation also includes a comprehensive benefits package: medical, dental, and vision coverage, a 401(k) plan, unlimited paid time off, a generous parental leave plan, and additional support for mental health and wellness.
While we are a remote-first company, we have opened offices in New York City and the San Francisco Bay Area for those who wish to work in person. All other employees receive a monthly work-from-home stipend that can be used for co-working spaces.
More About Arize
Arize’s mission is to make the world’s AI work and work for the people. Our founders came together through a common frustration: investments in AI are growing rapidly across businesses and organizations of all types, yet it is incredibly difficult to understand why a machine learning model behaves the way it does after it is deployed into the real world.
Learn more about Arize in an interview with our founders: https://www.forbes.com/sites/frederickdaso/2020/09/01/arize-ai-helps-us-understand-how-ai-works/#322488d7753c
Diversity & Inclusion @ Arize
Our mission is to make AI work and make AI work for the people. We hope to make an industry-wide impact on bias, and that is a big motivator for the people who work here. We actively encourage everyone to contribute to an inclusive culture, and we:
- Regularly host conversations with industry experts, researchers, and ethicists across the ecosystem to advance the use of responsible AI
- Hold culturally conscious events, such as LGBTQ trivia during Pride Month
- Support an active Lady Arizers subgroup