CyberTec

Sr. Data Engineer

CyberTec, Seattle, Washington, US 98127


FTE - Project location: Seattle, WA or California

Onsite 2 days a week

Note: The client strictly requires a Senior Data Engineer profile who can work independently across the full Hadoop technology stack.

Job Description

The Ads & Data Platforms team, a segment of our client, is looking for a Lead Data Engineer. Data is essential to all of our decision-making needs, whether it's related to product design, measuring advertising effectiveness, helping users discover new content, or building new businesses in emerging markets. This data is deeply valuable and gives us insight into how we can continue improving our service for our users, advertisers, and content partners. Our Content Engineering team is seeking a hardworking Data Engineer with a strong technical background and a passion for diving deep into Big Data to develop state-of-the-art data solutions.

Responsibilities

Contribute to the design and growth of our Data Products and Data Warehouses around Content Performance and Content Engagement data.
Design and develop scalable data warehousing solutions, building ETL pipelines in Big Data environments (cloud, on-prem, hybrid). Our tech stack includes AWS, Databricks, Snowflake, Spark, and Airflow.
Help architect data solutions/frameworks and define data models for the underlying data warehouse and data marts.
Collaborate with Data Product Managers, Data Architects, and Data Engineers to design, implement, and deliver successful data solutions.
Maintain detailed documentation of your work and changes to support data quality and data governance.
Ensure high operational efficiency and quality of your solutions to meet SLAs and support our commitment to our customers (Data Science and Data Analytics teams).
Be an active participant in and advocate of agile/scrum practices to ensure the health and process improvement of your team.

Basic Qualifications

7+ years of data engineering experience developing large data pipelines
Strong SQL skills and the ability to create queries to extract data and build performant datasets
Hands-on experience with distributed systems such as Spark and Hadoop (HDFS, Hive, Presto, PySpark) to query and process data at large scale
Experience with at least one major MPP or cloud database technology (Snowflake, Redshift, BigQuery)

Preferred Qualifications

Experience with cloud technologies like AWS (S3, EMR, EC2)
Solid experience with data integration toolsets (e.g., Airflow) and with writing and maintaining data pipelines
Familiarity with data modeling techniques and data warehousing standard methodologies and practices
Good scripting skills, including Bash and Python
Familiarity with Scrum and Agile methodologies
A problem solver with strong attention to detail and excellent analytical and communication skills

Required Education

Bachelor's degree in Computer Science, Information Systems, or a related field

Preferred Education

Master's degree in Computer Science, Information Systems, or a related field