About Onehouse
Onehouse is a mission-driven company dedicated to freeing data from data platform lock-in. We deliver the industry's most interoperable data lakehouse through a cloud-native managed service built on Apache Hudi. Onehouse enables organizations to ingest data at scale with minute-level freshness, centrally store it, and make it available to any downstream query engine and use case (from traditional analytics to real-time AI/ML).
We are a team of self-driven, inspired, and seasoned builders who have created large-scale data systems and globally distributed platforms that sit at the heart of some of the largest enterprises, including Uber, Snowflake, AWS, LinkedIn, Confluent, and many more. Riding off a fresh $35M Series B backed by Craft, Greylock, and Addition Ventures, we're now at $68M in total funding and looking for rising talent to grow with us and become future leaders of the team. Come help us build the world's best fully managed and self-optimizing data lake platform!
The Community You Will Join
When you join Onehouse, you're joining a team of passionate professionals tackling the deeply technical challenges of building a two-sided engineering product. Our engineering team serves as the bridge between the worlds of open source and enterprise: contributing directly to and growing Apache Hudi (already used at scale by global enterprises like Uber, Amazon, and ByteDance) and concurrently defining a new industry category - the transactional data lake.
A Typical Day:
- Be the thought leader for all things data engineering within the company - schemas, frameworks, and data models.
- Implement new sources and connectors to seamlessly ingest data streams.
- Build scalable job management on Kubernetes to ingest, store, manage, and optimize petabytes of data on cloud storage.
- Optimize Spark applications to run flexibly in batch or streaming modes based on user needs, balancing latency against throughput.
- Tune clusters for resource efficiency and reliability, keeping costs low while still meeting SLAs.
What You Bring to the Table:
- 3+ years of experience building and operating data pipelines with Apache Spark.
- 2+ years of experience with workflow orchestration tools such as Apache Airflow or Dagster.
- Proficient in Java, as well as build and packaging tools such as Maven and Gradle.
- Adept at writing efficient SQL queries and troubleshooting query plans.
- Experience managing large-scale data on cloud storage.
- Great problem-solving skills and an eye for detail; able to debug failed jobs and queries in minutes.
- Operational excellence in monitoring, deploying, and testing job workflows.
- Open-minded, collaborative, self-starter, fast-mover.
- Nice to haves (but not required):
- Hands-on experience with Kubernetes (k8s) and its related toolchain in cloud environments.
- Experience operating and optimizing terabyte-scale data pipelines.
- Deep understanding of Spark, Flink, Presto, Hive, and Parquet internals.
- Hands-on experience with open source projects such as Hadoop, Hive, Delta Lake, Hudi, NiFi, Drill, Pulsar, Druid, Pinot, etc.
- Operational experience with stream processing pipelines using Apache Flink or Kafka Streams.
How We'll Take Care of You:
- Equity Compensation; our success is your success with eligible participation in our company equity plan
- Health & Well-being; we'll invest in your physical and mental well-being by reimbursing up to 20,000 INR of your monthly insurance premium
- Financial Future; we'll invest in your financial well-being by making this role eligible for the provident fund, to which Onehouse will contribute up to 1,800 INR/month
- Location; we are a remote-friendly company (internationally distributed across N. America + India), though some roles will be subject to in-person requirements in alignment with the needs of the business
- Generous Time Off; unlimited PTO (mandatory 1 week/year minimum), uncapped sick days, and 17 paid company holidays
- Food & Meal Allowance; weekly lunch stipend, in-office snacks/drinks
- Equipment; we'll provide you with the equipment you need to be successful and a one-time $500 (USD) stipend for your initial office/desk setup
- Child Bonding!; 26 weeks off for birthing parents and 12 weeks for surrogate and adoptive parents - fully paid so you can focus your energy on your newest addition
One Team
Optimize for the company, your team, and yourself - in that order. We may fight long and hard in the trenches, but we take care of our co-workers with empathy. We give more than we take to build the one house that everyone dreams of being part of.
Tough & Persevering
We are building our company in a very large, fast-growing, but highly competitive space. Life will get tough sometimes. We take hardships in stride, stay positive, focus all our energy on the path forward, and develop a champion's mindset to overcome the odds. Always day one!
Keep Making It Better Always
Rome was not built in a day; if we can get 1% better each day for one year, we'll end up thirty-seven times better. This means being organized, communicating promptly, taking even small tasks seriously, tracking all small ideas, and paying it forward.
Think Big, Act Fast
We have tremendous scope for innovation, but we will still be judged by impact over time. Big, bold ideas still need to be strategized against priorities, broken down, set in rapid motion, measured, refined, and repeated. Great execution is what separates promising companies from proven unicorns.
Be Customer Obsessed
Everyone has the responsibility to drive towards the best experience for the customer, be it an OSS user or a paid customer. If something is broken, own it, say something, do something; never ignore it. Be the change that you want to see in the company.
What We Do
Onehouse delivers a universal data lakehouse through a cloud-native managed lakehouse service built on Apache Hudi, which was created by the founding team while they were at Uber. Onehouse makes it possible to blend the ease of use of a warehouse with the scale of a data lake, by offering a seamless experience for engineers to get their data lakes up and running. Onehouse offers the widest interoperability for your data in the market across table formats, multiple compute engines and multiple cloud providers.
We have a stellar team of inspired, seasoned professionals, including data, distributed systems, and platform engineers from Uber, LinkedIn, Confluent, and Amazon. Our product team has helped build enterprise data products, including Azure Databricks, at major enterprises.