Job Description:
Position Description:
Builds Extract, Transform and Load (ETL) workflows and data-driven solutions using Python. Evaluates new technologies and market trends through research and Proof of Concept (POC) work. Improves and innovates on current data protection and security technology offerings. Owns and continuously optimizes tools, processes, and capabilities to support operational activities.
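As a purely illustrative example of the kind of Python ETL workflow described above, the sketch below extracts rows from a CSV file, applies a simple transformation, and loads the result to a new file; the file names, columns, and filtering logic are hypothetical and are not drawn from this posting.

```python
import csv

def extract(path):
    # Read raw rows from a CSV source (hypothetical input file).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Keep only active records and normalize the amount field.
    return [
        {"id": r["id"], "amount": round(float(r["amount"]), 2)}
        for r in rows
        if r.get("status") == "active"
    ]

def load(rows, out_path):
    # Write the transformed rows to a CSV sink (hypothetical output file).
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "amount"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    load(transform(extract("input.csv")), "output.csv")
```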
Primary Responsibilities:
- Delivers impactful, scalable, flexible, and efficient data solutions that conform to architectural designs and Fidelity's technology strategy.
- Builds and owns a portfolio of policies, procedures, and best practices to provide operational and engineering discipline, and to evolve data protection and security technologies.
- Employs design patterns and generalizes code to address common use cases.
- Authors high-quality and reusable code to contribute to broader repositories.
- Analyzes information to determine, recommend, and plan computer software specifications on major projects.
- Proposes modifications and improvements based on user needs.
- Develops software system tests and validation procedures, programs, and documentation.
Education and Experience:
Bachelor’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Mathematics, Physics, or a closely related field and three (3) years of experience as a Senior Systems Engineer (or closely related occupation) developing applications using Apache Spark, Hadoop, or Snowflake within a distributed cloud data warehouse or data lake environment.
Or, alternatively, Master’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Mathematics, Physics, or a closely related field and one (1) year of experience as a Senior Systems Engineer (or closely related occupation) developing applications using Apache Spark, Hadoop, or Snowflake within a distributed cloud data warehouse or data lake environment.
Skills and Knowledge:
Candidate must also possess:
- Demonstrated Expertise (“DE”) developing, designing, and implementing comprehensive data processing solutions using PySpark, Scala, or Apache Spark; executing large-scale transformations and analytics within a cloud environment (Amazon Web Services (AWS)) using AWS S3, EMR, Glue, AWS Lambda, or AWS Step Functions; and conducting performance tuning and optimization of Spark jobs and ETL processes using Spark UI or the Spark DAG to ensure optimal efficiency and throughput for seamless data processing (an illustrative sketch follows this list).
- DE designing and implementing comprehensive solutions; developing web applications (full stack), establishing automation frameworks, and executing proof-of-concept initiatives using OOP (Java or Python) or functional programming (Clojure or Scala) to explore and integrate cutting-edge cloud technologies; and developing RESTful APIs using React JS, OOP (Java or Python), functional programming (Clojure or Scala), Docker, Kubernetes, or serverless technologies to stay at the forefront of advancements in cloud computing.
- DE developing Continuous Integration/Continuous Delivery (CI/CD) pipelines and Infrastructure as Code (IaC) using Terraform or AWS CloudFormation templates; configuring and integrating DevOps tools for build and release management; and automating self-service jobs, deploying applications, and orchestrating disaster recovery using Jenkins, Python, shell scripts, or Source Code Management (SCM) tools (Bitbucket or GitHub) in a hybrid on-premises and cloud environment (Amazon Web Services (AWS) or Google Cloud Platform (GCP)) (a deployment sketch also follows this list).
- DE installing, configuring, and managing data security solutions using Guardium; navigating and integrating between relational (Oracle, MySQL, or PostgreSQL), NoSQL (MongoDB, Cassandra, or graph databases), and cloud databases (Aurora, DynamoDB, or ElastiCache) and object/block storage technologies; and executing data modeling principles and delving into database internals to ensure expert design and optimization across a broad spectrum of platforms using Apache NiFi or the AWS Glue Data Catalog.
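As a hedged illustration of the first bullet above (large-scale Spark transformations on AWS), the sketch below aggregates placeholder transaction data between hypothetical S3 locations; the bucket names, column names, and aggregation are assumptions chosen for illustration, not requirements of the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical S3 locations; real buckets and prefixes would differ.
SOURCE = "s3://example-bucket/raw/transactions/"
TARGET = "s3://example-bucket/curated/daily_totals/"

spark = SparkSession.builder.appName("daily-totals-example").getOrCreate()

# Extract: read raw Parquet data from the source prefix.
raw = spark.read.parquet(SOURCE)

# Transform: total transaction amounts per account per day.
daily = (
    raw.withColumn("txn_date", F.to_date("txn_timestamp"))
       .groupBy("account_id", "txn_date")
       .agg(F.sum("amount").alias("total_amount"))
)

# Load: write results back to S3, partitioned by date for efficient reads.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(TARGET)

spark.stop()
```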
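Similarly, as a minimal sketch of the CI/CD and IaC bullet, the following Python script deploys a hypothetical AWS CloudFormation template with boto3; the stack name, template path, and IAM capability are placeholders, not details from this posting.

```python
import boto3

# Hypothetical stack name and template path; real values would come from the
# team's IaC repository and pipeline configuration.
STACK_NAME = "example-data-pipeline"
TEMPLATE_PATH = "template.yaml"

def deploy_stack():
    # Create a CloudFormation stack from a local template, as a CI/CD job might.
    cfn = boto3.client("cloudformation")
    with open(TEMPLATE_PATH) as f:
        template_body = f.read()

    cfn.create_stack(
        StackName=STACK_NAME,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    # Block until creation finishes so the pipeline can fail fast on errors.
    cfn.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)
    print(f"Stack {STACK_NAME} created.")

if __name__ == "__main__":
    deploy_stack()
```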
#PE1M2
Certifications:
Category: Information Technology
Fidelity’s hybrid working model blends the best of both onsite and offsite work experiences. Working onsite is important for our business strategy and our culture. We also value the benefits that working offsite offers associates. Most hybrid roles require associates to work onsite every other week (all business days, M-F) in a Fidelity office.
What We Do
At Fidelity, our goal is to make financial expertise broadly accessible and effective in helping people live the lives they want. We do this by focusing on a diverse set of customers: from 23 million people investing their life savings, to 20,000 businesses managing their employee benefits, to 10,000 advisors needing innovative technology to invest their clients’ money. We offer investment management, retirement planning, portfolio guidance, brokerage, and many other financial products.
Privately held for nearly 70 years, we’ve always believed that by providing investors with access to information and expertise, we can help them achieve better results. That’s been our approach: innovative yet personal, compassionate yet responsible, grounded by a tireless work ethic. It is the heart of the Fidelity way.