Senior Data Engineer
People Data Labs, San Francisco, California, United States, 94199
People Data Labs is hiring a Remote Senior Data Engineer.
About Us

At People Data Labs, we’re committed to democratizing access to high-quality B2B data and leading the emerging DaaS economy. We empower developers, engineers, and data scientists to create innovative, compliant data products at scale with our clean, easy-to-use datasets of resume, company, location, and education data, consumed through our suite of APIs. PDL is an innovative, fast-growing, global team backed by world-class investors, including Craft Ventures, Flex Capital, and Founders Fund. We scour the world for people hungry to improve, curious about how things work, and willing to challenge the status quo to build something new and better.
Roles & Responsibilities

- Build infrastructure for ingesting, transforming, and loading an exponentially increasing volume of data from a variety of sources using Spark, SQL, AWS, and Databricks (see the sketch after this list).
- Build an organic entity resolution framework capable of correctly merging hundreds of billions of individual entities into a number of clean, consumable datasets.
- Develop CI/CD pipelines and anomaly detection systems capable of continuously improving the quality of the data we push into production.
- Devise solutions to largely undefined data engineering and data science problems.
- Work with stakeholders in Engineering and Product to assist with data-related technical issues and support their infrastructure needs.
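As a rough illustration of the ingest-transform-load pattern described in the first responsibility, here is a minimal PySpark sketch. The bucket paths, column names, and partitioning scheme are hypothetical placeholders, not details of PDL's actual pipeline:

```python
# Minimal sketch: read raw JSON from object storage, apply a light
# transformation, and append the result to a partitioned Delta table.
# All paths and columns below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

raw = (
    spark.read.json("s3://example-bucket/raw/people/")      # hypothetical source
    .withColumn("email", F.lower(F.trim(F.col("email"))))   # simple cleaning step
    .withColumn("ingest_date", F.current_date())            # partition column
)

(
    raw.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("s3://example-bucket/bronze/people/")              # hypothetical target
)
```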
Technical Requirements

- 5–7+ years of industry experience, with clear examples of strategic technical problem solving and implementation.
- Strong software development fundamentals.
- Experience with Python and Apache Spark (Java-, Scala-, and/or Python-based).
- Experience with SQL.
- Experience building scalable data processing systems (e.g., cleaning, transformation) from the ground up.
- Experience with developer-oriented data pipeline and workflow orchestration tools (e.g., Airflow (preferred), dbt, Dagster, or similar).
- Knowledge of modern data design and storage patterns (e.g., incremental updating, partitioning and segmentation, rebuilds and backfills); a sketch of an incremental update follows this list.
- Experience working in Databricks (including Delta Live Tables, data lakehouse patterns, etc.).
- Experience with cloud computing services (AWS (preferred), GCP, Azure, or similar).
- Experience with data warehousing (e.g., Databricks, Snowflake, Redshift, BigQuery, or similar).
- Understanding of modern data storage formats and tools (e.g., Parquet, ORC, Avro, Delta Lake).
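To make the "incremental updating" pattern concrete, here is a hedged sketch of an upsert into a Delta table using the delta-spark MERGE API. The table paths and the person_id join key are made-up examples, not requirements from the posting:

```python
# Illustrative incremental update (upsert) into a Delta table via MERGE.
# Paths, the join key, and the table layout are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("incremental-merge-example").getOrCreate()

updates = spark.read.format("delta").load("s3://example-bucket/staging/person_updates/")
target = DeltaTable.forPath(spark, "s3://example-bucket/silver/persons/")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.person_id = u.person_id")  # hypothetical key
    .whenMatchedUpdateAll()       # refresh rows that already exist
    .whenNotMatchedInsertAll()    # insert genuinely new rows
    .execute()
)
```

Rebuilds and backfills typically reuse the same merge (or a partition overwrite) replayed over historical source data.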
Professional Requirements

- Must thrive in a fast-paced environment and be able to work independently.
- Can work effectively remotely: proactive about managing blockers, proactive about reaching out and asking questions, and engaged in team activities.
- Strong written communication skills on Slack/chat and in documents.
- Experienced in writing data design docs (pipeline design, dataflow, schema design).
- Able to scope and break down projects, and to communicate progress and blockers effectively with your manager, team, and stakeholders.
Nice To Haves

- Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering.
- Experience working with entity data (entity resolution / record linkage); a toy record-linkage sketch follows this list.
- Experience working with data acquisition / data integration.
- Expertise with Python and the Python data stack (e.g., NumPy, pandas).
- Experience with streaming platforms (e.g., Kafka).
- Experience evaluating data quality and maintaining consistently high data standards across new feature releases (e.g., consistency, accuracy, validity, completeness).
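For readers unfamiliar with record linkage, here is a toy, self-contained sketch of the blocking-then-matching idea behind entity resolution. It is purely illustrative (real pipelines at this scale run distributed, e.g., in Spark), and every field, key, and threshold is a made-up example:

```python
# Toy record linkage: group candidates into blocks with a cheap key, then
# fuzzily compare pairs within each block. Everything here is hypothetical.
from collections import defaultdict
from difflib import SequenceMatcher

records = [
    {"id": 1, "name": "Jane A. Doe", "company": "Acme Corp"},
    {"id": 2, "name": "Jane Doe", "company": "ACME Corporation"},
    {"id": 3, "name": "John Smith", "company": "Globex"},
]

def blocking_key(rec):
    # Cheap grouping key (first letter of the last name token) so we only
    # compare candidate pairs within a block, not all pairs.
    return rec["name"].split()[-1][0].lower()

def similar(a, b, threshold=0.8):
    # Fuzzy string comparison; real systems use richer, multi-field scoring.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

blocks = defaultdict(list)
for rec in records:
    blocks[blocking_key(rec)].append(rec)

matches = []
for block in blocks.values():
    for i in range(len(block)):
        for j in range(i + 1, len(block)):
            a, b = block[i], block[j]
            if similar(a["name"], b["name"]):
                matches.append((a["id"], b["id"]))

print(matches)  # [(1, 2)] -- the two "Jane Doe" records are linked
```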
Our Benefits

- Stock
- Unlimited paid time off
- Health, fitness, and office stipends
- The permanent ability to work wherever and however you want
Salary and Compensation

No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, Python, Education, Cloud, Senior, and Engineer roles: $65,000 to $105,000/year.
Location

San Francisco, California, United States