Robinhood

Staff Software Engineer, Data Lake Ingestion

Robinhood, Menlo Park, California, United States, 94029


About the team + role

Our team's mission at Robinhood is to empower informed decision-making, foster innovation, and drive organizational excellence through a reliable, timely, efficient, and privacy-aware Data Lake and Ingestion Infrastructure.

As a Staff Software Engineer, you will lead the development of data ingestion pipelines that process petabytes of data and billions of events daily. This role is highly cross-functional, requiring you to collaborate closely with Data Science, Data Engineering, and Product teams to understand customer requirements, and with Data Platform and Storage teams to develop integrated solutions. We extensively utilize open-source frameworks as the foundation for our platforms.

It is preferred that this role be located in one of the office locations listed on this job description, which aligns with our in-office working environment. This position is only eligible for remote work in limited US geographies where we do not have physical office locations. Please connect with your recruiter for more information regarding our in-office philosophy and expectations.

What you’ll do

Partner with teams across Robinhood to influence and shape the vision, strategy, and adoption of current and future technologies.

Design, build, and maintain efficient and reliable batch and streaming data pipelines that drive key data insights across the Robinhood family of products.

Lead initiatives to improve data quality, efficiency, and privacy at scale.

Forge trusting cross-functional partnerships with data producers and consumers across Robinhood to ensure our solutions meet their needs.

Establish best practices and standards for data operations and lifecycle management.

Mentor engineers at Robinhood, both formally and informally.

What you bring

6+ years of experience operating as a staff-level engineer, with a proven track record of planning and leading large projects focused on data infrastructure.

Proficiency in a comprehensive range of data engineering disciplines, including data and stream processing technologies (e.g., Spark, Flink, Kafka, Hudi), data serialization formats (e.g., Avro, Protobuf), workflow orchestration tools (e.g., Airflow), and data stores (e.g., Postgres, ClickHouse, Redis).

Strong coding skills in Python, Java, Go, or similar languages.

Experience with at least one major cloud suite of offerings (AWS, GCP, Azure).

Proven experience contributing to open-source technologies such as Spark, Hudi, Flink, or Kafka.

What we offer

Market-competitive and pay equity-focused compensation structure

100% paid health insurance for employees with 90% coverage for dependents

Annual lifestyle wallet for personal wellness, learning and development, and more!

Lifetime maximum benefit for family forming and fertility benefits

Dedicated mental health support for employees and eligible dependents

Generous time away including company holidays, paid time off, sick time, parental leave, and more!

Lively office environment with catered meals, fully stocked kitchens, and geo-specific commuter benefits

We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT. As part of the evaluation process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on September 19, 2024. Please see the independent bias audit report covering our use of Covey here.
