Dice
Sr. Data Engineer Data Activation & Sharing
Dice, Seattle, Washington, us, 98127
Dice is the leading career destination for tech experts at every stage of their careers. Our client, INSPYR Solutions, is seeking the following. Apply via Dice today!

Title: Sr. Data Engineer Data Activation & Sharing
Location: Seattle, WA (Hybrid, 2-3 days a week)
Duration: 12+ month contract
Compensation: $87.00-$95.40/hr
Work Requirements: Holders or Authorized to Work in the U.S.

We are looking for a Sr. Data Engineer to join the Data Activation team who can operate across the enterprise to ensure on-time, high-quality delivery of products and features that directly drive our business. The Data Activation team, within the Product & Data Engineering business unit, is responsible for sharing data with internal customers and external third-party partners in a secure and reliable fashion, ensuring that chain of custody of our sensitive data is maintained at every step of the process.

Responsibilities:
Interfacing with key stakeholders to understand the business

Requirements:
Designing and developing necessary data models, ETLs, reports, etc., as per requirements
Working with big data technologies such as Spark and cloud database technologies like Snowflake
Writing test scripts and automating where possible (using tools such as, but not limited to, PySpark, Bash scripting, and Python)

Basic Qualification:
Participating in code reviews
Ensuring code is checked in per company guidelines
Participating in weekly scrum meetings and daily stand-up meetings
Contributing to ensure correct status in burn-down charts
Ensuring QA sign-off is obtained and fixing any bugs as discovered
Supporting the product sign-off process and fixing any issues/bugs discovered
Providing post-go-live support for addressing any P1 issues

Preferred Qualifications:
5+ years of data engineering experience developing large data pipelines
Strong SQL skills and the ability to create queries to extract data and build performant datasets
Hands-on experience with distributed systems such as Spark (via Databricks, using Scala or Python) to query and process data
Experience with at least one major MPP (e.g., Elastic MapReduce) or cloud database technology (Snowflake, Redshift, BigQuery); Snowflake experience is strongly preferred
Experience with AWS cloud technologies (S3 at a minimum)
Solid experience with data integration toolsets (e.g., Airflow) and writing

Required Education:
BS in a STEM field + 5 years of experience