Jobot

Lead Data Engineer

Jobot, San Jose, CA, United States


100% REMOTE Lead Data Product Engineer / Senior Data Engineer Needed for Growing Subsidiary of a Large Public Company!

This Jobot Job is hosted by: Reed Kellick

Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $185,000 - $235,000 per year

A bit about us:

We are a growing subsidiary of a large public company that is hiring a Lead Data Product Engineer / Senior Data Engineer!

Why join us?

As a Lead Data Product Engineer / Senior Data Engineer with our company, you will enjoy:

  • A competitive base salary between $185k and $235k, depending on experience!
  • Generous stock grant!
  • Bonus of 15-20%, depending on experience!
  • Work from home / work remote 100%!
  • 401k with a dollar-for-dollar match, up to 6% of eligible earnings (base and bonus), plus an additional company contribution!
  • Comprehensive medical, dental, vision and life insurance!
  • 17 paid holidays per year, including 3 floating holidays!
  • Annual Paid Time Off (PTO), with separate sick days!
  • 12 weeks paid Parental Leave!
  • Caregiver Leave!
  • Adoption and Surrogacy Assistance Plan!
  • Flexible workplace accommodation!
  • Fun team and company events at sports games, concerts, and more!
  • Tuition reimbursement!
  • Ability to attend conferences!
  • A MacBook Pro and accompanying hardware to do great work!
  • A modern productivity toolset to get work done: Slack, Miro, Loom, Lucid, Google Docs, Atlassian and more!
  • Generous company discounts!
  • Eligible for donation matching to over 1.5 million nonprofit organizations!
Job Details

For the Lead Data Product Engineer / Senior Data Engineer role on our team, we are looking for:

  • Completed BS, MS, or PhD in Computer Science, Mathematics, Statistics, Engineering, Operations Research, or other quantitative field
  • 7+ years of experience as a Data Engineer writing code to extract, process, and store data within different types of data stores (Snowflake, Postgres, DynamoDB, Kafka, graph databases)
  • 2+ years of experience leading a cross-functional team to deliver data projects, including 1+ year of experience as a reporting manager of a small team
  • Strong programming skills in Python and understanding of core computer science principles
  • Experience with building batch and streaming pipelines using complex SQL, PySpark, Pandas, and similar frameworks
  • Knowledge of relational and dimensional data modeling concepts to build data products
  • Experience with data quality checks, data monitoring solutions
  • Experience with orchestrating complex workflows and data pipelines using Airflow or similar tools
  • Experience with Git, CI/CD pipelines, Docker, Kubernetes
  • Experience with architecting solutions on AWS or equivalent public cloud platforms
  • Experience with developing data APIs, microservices, and event-driven systems to integrate data products

Preferred:

  • Experience in assessing and implementing new data tools to enhance the data engineering stack
  • Knowledge of data mesh concepts
  • Knowledge in domains such as recommender systems, fraud detection, personalization, and marketing science
  • Knowledge of vector databases, knowledge graphs, and other approaches for organizing & storing information
  • Familiarity with Snowflake, RDS, DynamoDB, Kafka, Fivetran, dbt, Airflow, Docker, Kubernetes, EMR, SageMaker, Datadog, PagerDuty, and data cataloging, data observability, and data governance tools

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.