Tecton helps companies unlock the full potential of their data for AI applications. The platform streamlines the complex process of preparing and delivering data to models. With Tecton, AI teams accelerate the development of smarter, more impactful AI applications.
Tecton is funded by Sequoia Capital, Andreessen Horowitz, and Kleiner Perkins, along with strategic investments from Snowflake and Databricks. We have a fast-growing team that’s distributed around the world, with offices in San Francisco and New York City. Our team has years of experience building and operating business-critical machine learning systems at leading tech companies like Uber, Google, Meta, Airbnb, Lyft, and Twitter.
Tecton’s Realtime Compute team builds streaming infrastructure that provides sub-second data freshness for AI applications in production. In addition to streaming, we offer a production-ready Python runtime that securely runs user code in real time at scale. This runtime can handle tasks like generating embeddings or calling third-party APIs for information retrieval.
This position is open to candidates based near our hubs in San Francisco, New York City, and Seattle.
Responsibilities
- Develop advanced streaming capabilities in Rift, such as joins, stateful operations, and native connectors to streaming data sources
- Build an integrated observability solution that provides an exceptional operational experience with logs, metrics, and traces
- Scale our ingestion platform to handle millions of requests per second with low latency and high availability
- Reduce the cold start times of our sandboxed Python execution environment for extremely fast autoscaling
- Launch our infrastructure across multiple cloud platforms, ensuring compliance with security protocols and data residency requirements
- Assess and prioritize tasks, demonstrating a keen awareness of performance-critical areas
Qualifications
- 7+ years of experience in programming, debugging, and performance tuning distributed and/or highly concurrent software systems.
- Degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience, with strong proficiency in building high throughput infrastructure.
- Experience with streaming infrastructure such as Flink, Spark, Pulsar, or Heron.
- Experience with Python runtimes, dependency resolution, and container sandboxing.
- Experience with at least one of AWS or GCP.
- Experience with low-latency online storage such as DynamoDB, Redis, or Bigtable.
Tecton values diversity and is an equal opportunity employer committed to creating an inclusive environment for all employees and applicants without regard to race, color, religion, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or other applicable legally protected characteristics. If you would like to request accommodations at any point from application through interview, please contact us at [email protected].
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
What We Do
Founded by the team that created the Uber Michelangelo platform, Tecton provides an enterprise-ready feature store to make world-class machine learning accessible to every company.
Machine learning creates new opportunities to generate more value than ever before from data. Companies can now build ML-driven applications to automate decisions at machine speed, deliver magical customer experiences, and reinvent business processes.
But ML models will only ever be as good as the data that is fed to them. Today, it’s incredibly hard to build and manage ML data. Most companies don’t have access to the advanced ML data infrastructure that is used by the internet giants. So ML teams spend the majority of their time building custom features and bespoke data pipelines, and most models never make it to production.
We believe that companies need a new kind of data platform built for the unique requirements of ML. Our goal is to enable ML teams to build great features, serve them to production quickly and reliably, and do it at scale. By getting the data layer for ML right, companies can get better models to production faster to drive real business outcomes.