We are a proud work-from-office company. If you're ready to work on-site in a dynamic, global company, we’d love to hear from you.
Position Summary
Do you have a passion for building data architectures that enable smooth and seamless product experiences? Are you an all-around data enthusiast with a knack for ETL? We're hiring Data Engineers to help build and optimize the foundational architecture of our product's data.
We’ve built a strong data engineering team to date, but have a lot of work ahead of us, including:
- Migrating from relational databases to a streaming and big data architecture, including a complete overhaul of our data feeds
- Defining streaming event data feeds required for real-time analytics and reporting
- Leveling up our platform, including enhancing our automation, test coverage, observability, alerting, and performance
As a Senior Data Engineer, you will work with the development team to construct a data streaming platform and data warehouse that serves as the data foundations for our product.
Help us scale our business to meet the needs of our growing customer base and develop new products on our platform. You'll be a critical part of our growing company, working on a cross-functional team to implement best practices in technology, architecture, and process. You'll have the chance to work in an open and collaborative environment, receive hands-on mentorship, and have ample opportunities to grow and accelerate your career!
Responsibilities
- Build our next generation data warehouse
- Build our event stream platform
- Translate user requirements for reporting and analysis into actionable deliverables
- Enhance the automation, operation, and expansion of our real-time and batch data environments
- Manage numerous projects in an ever-changing work environment
- Extract, transform, and load complex data into the data warehouse using cutting-edge technologies
- Build processes that ensure top-notch security, performance, reliability, and accuracy
- Provide mentorship and collaborate with fellow team members
Requirements
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Operations Research, or related field required
- 5+ years of experience building data pipelines
- 5+ years of experience building data frameworks for unit testing, data lineage tracking, and automation
- Fluency in Scala is required
- Working knowledge of Apache Spark
- Familiarity with streaming technologies (e.g., Kafka, Kinesis, Flink)
Nice to Have
- Experience with Machine Learning
- Familiarity with Looker
- Knowledge of additional server-side programming languages (e.g., Golang, C#, Ruby)
What We Do
PrismHR creates exceptional software and services, empowering human resource outsourcing service providers such as Professional Employer Organizations (PEOs) and Administrative Service Organizations (ASOs) to deliver world-class payroll, benefits, and HR to small and medium-sized businesses. PrismHR software is used by more than 88,000 organizations and 2.2 million worksite employees, processing more than $57 billion in payroll each year. Visit our website to learn more about how PrismHR can help your business be more profitable and productive. http://www.prismhr.com