About the Company
Companies want to train their own large models on their own data. The current industry standard is to train on a random sample of your data, which is inefficient at best and actively harmful to model quality at worst. There is compelling research showing that smarter data selection can train better models faster—we know because we did much of this research. Given the high costs of training, this presents a huge market opportunity. We founded DatologyAI to translate this research into tools that enable enterprise customers to identify the right data on which to train, resulting in better models for cheaper. Our team has pioneered deep learning data research, built startups, and created tools for enterprise ML. For more details, check out our recent blog posts sharing our high-level results for text models and image-text models.
We've raised over $57M in funding from top investors like Radical Ventures, Amplify Partners, Felicis, Microsoft, Amazon, and notable angels like Jeff Dean, Geoff Hinton, Yann LeCun and Elad Gil. We're rapidly scaling our team and computing resources to revolutionize data curation across modalities.
This role is based in Redwood City, CA. We are in office 4 days a week.
About the Role
We are looking for our first seasoned full-stack engineers who love building new products in an iterative, fast-moving environment. In this role, you will build software from the ground up to solve critical bottlenecks for DatologyAI customers and for our internal teams. As one of our early senior hires, you will partner closely with our founders on the direction of our product and drive business-critical technical decisions.
You will contribute to developing the core product that customers use for curating their datasets and the visualizations around it, as well as the internal tooling that our team uses daily to develop the core product. You will have a broad impact on the technology, product, and our company's culture.
What You'll Work On
- Owning the full product development lifecycle for customer-facing data curation products as well as new internal infrastructure and product experiences.
- Talking to customers and internal stakeholders to understand their problems and designing solutions to address them.
- Collaborating with a cross-functional team of engineers, researchers, designers, and others to bring new features and research capabilities to our customers.
- Ensuring our products and systems are reliable, secure, and worthy of our customers' trust.
About You
- 6+ years of engineering experience.
- Meaningful experience leading and building production backend and/or full-stack systems that deliver on major product initiatives.
- Proficiency in Python, JavaScript/TypeScript, React, and other web technologies.
- Care deeply about quality, functionality, and the humans we're communicating with, sweating the details down to the last page request.
- Maintain a high quality bar for design, correctness, and testing.
- Prior experience in ML/AI (preferred but not required).
- A humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.
- Own problems end-to-end and are willing to pick up whatever knowledge you're missing to get the job done.
Compensation
At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The salary for this position ranges from $180,000 to $250,000.
The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.
Benefits
We offer a comprehensive benefits package to support our employees' well-being and professional growth:
- 100% covered health benefits (medical, vision, and dental).
- 401(k) plan with a generous 4% company match.
- Unlimited paid time off (PTO).
- Annual $2,000 wellness stipend.
- Annual $1,000 learning and development stipend.
- Daily lunches and snacks provided in our office.
- Relocation assistance for employees moving to the Bay Area.
What We Do
DatologyAI builds tools to automatically select the best data on which to train deep learning models. Our tools leverage cutting-edge research—much of which we perform ourselves—to identify redundant, noisy, or otherwise harmful data points. The algorithms that power our tools are modality-agnostic—they’re not limited to text or images—and don’t require labels, making them ideal for realizing the next generation of large deep learning models. Our products allow customers in nearly any vertical to train better models for cheaper.