Pendulum
Lead Data Engineer, Data and Systems
Pendulum, San Francisco, California, United States, 94199
About Pendulum

Pendulum is leading a revolution that is occurring around the world to improve physical and mental health by first understanding, then restoring and enhancing, the human microbiome.

Studies have shown that our microbiome (the bacterial communities in and on our bodies) is linked to everything from metabolism and diabetes to longevity, weight loss, healthy immune systems, cancer prevention, feelings of well-being, inflammatory bowel disease, and even healthy skin. We have just scratched the surface of understanding the impact our microbiome has on our lives. During early life we develop a diverse and balanced microbiome that plays a critical role in shaping our long-term health. Over our lives, a combination of diet, lifestyle, antibiotics, and aging can decrease the effectiveness of our microbiome.

Pendulum recognized the enormous impact it could have on people's lives if it could address these imbalances in the microbiome. To accomplish this, Pendulum created proprietary probiotic pipelines and a unique discovery platform to identify key, novel bacterial strains and the prebiotics that feed them. The company has also built the world's first manufacturing technology to produce bacteria at scale in an anaerobic (oxygen-free) environment.

The medical probiotics Pendulum has formulated have transformed the consumer probiotics market into a new category of therapeutic offerings that deliver the power and efficacy of a pharmaceutical with the safety and accessibility of a natural probiotic. Thanks to Pendulum's explosive revenue and customer growth over the last two years, the company earned a spot on Forbes Magazine's exclusive "The Next Billion Dollar Startups" list.

If you're interested in improving the lives of people globally and you love working in a cross-functional, collaborative, inspiring environment, please continue reading.

Position Summary:

We are seeking a Lead Data Engineer to join our team and drive the development and optimization of our data infrastructure. This role is crucial for building and maintaining robust data pipelines, managing our data warehouse, and ensuring the reliability and scalability of our data systems. The ideal candidate will have deep expertise in data engineering, data technologies, and cloud platforms, with a strong understanding of how to align data engineering practices with business and AI/ML needs.

What You'll Do:
- Design, build, and maintain efficient and scalable ETL pipelines using tools like dbt and Fivetran and orchestration frameworks such as Airflow.
- Develop and implement robust schema designs and data models that support efficient querying and data integration across the organization.
- Manage and optimize our data warehouse and lakehouse environments on platforms like Snowflake, ensuring data is accessible, reliable, and performant.
- Implement data validation, cleansing, and anomaly detection processes to ensure the integrity and quality of our data.
- Collaborate with Data Science and Analytics teams to support the deployment of ML models in production, including training, inference, and evaluation processes.
- Implement monitoring and observability solutions to maintain the reliability and performance of data pipelines and models in production.
- Leverage Docker, Kubernetes, and workflow management systems to ensure scalable and automated data processing workflows.
- Ensure compliance with data governance standards and regulations, including GDPR and CCPA, through proper data lineage tracking, metadata management, and secure data handling practices.

Knowledge Requirements:
- MSc/PhD in Computer Science or a related field.
- 5+ years of experience in data engineering, with a strong focus on building and managing data pipelines, data warehouses, and big data platforms.
- Expertise in Python and SQL, plus experience with data technologies (Kafka, Spark, Hive/Iceberg, Postgres, Redis) and cloud infrastructure (GCP, AWS), is required.
- Proficiency in orchestration and workflow management tools like Airflow; experience with container orchestration such as Kubernetes is a plus.
- Strong understanding of data quality management practices, data lineage, metadata management, and compliance with regulations like GDPR and CCPA.
- Proven ability to work closely with Data Science, Analytics, and other cross-functional teams to deliver data solutions aligned with business needs.
- Ability to lead data projects independently, make strategic infrastructure decisions, and stay current with the latest data engineering technologies and practices.
- Ability to work in a fast-paced, dynamic environment where adaptability is imperative.

Salary & Benefits
- $170,000 - $225,000
- Medical, Dental, and Vision
- Commuter Benefits
- Life & STD Insurance
- Company match on 401(k)
- Flexible Time Off (FTO)
- Equity