
Senior Data Engineer (P3152)

84.51°, Chicago, IL


84.51° Overview:

84.51° is a retail data science, insights and media company. We help The Kroger Co., consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.

Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Kroger Plus loyalty card program to fuel a more customer-centric journey using 84.51° Insights, 84.51° Loyalty Marketing and our retail media advertising solution, Kroger Precision Marketing.

Join us at 84.51°!

As a Senior Data Engineer, you will have the opportunity to design and build data pipelines for both internal and external clients. You will have a deep understanding of data engineering approaches along with hands-on experience building highly scalable solutions. We are a team of innovators, continuously exploring new technologies to ensure 84.51° remains at the forefront of data development. In this position, you will be utilizing Python, PySpark, SQL, Databricks, Azure Cloud Services, and Snowflake.

As part of the Data Asset team, you will work in a space that uses a variety of internal and external data to facilitate reporting, analysis, and warehousing.

Responsibilities

Take ownership of stories and drive them to completion through all phases of the 84.51° SDLC. This includes external and internal data pipelines as well as process improvement activities such as:
  • Design and develop Python & SQL based data pipeline solutions
  • Perform unit and integration testing
  • Create quality checks for ingested and post processed data
  • Ensure alerting and monitoring of automated pipeline solutions
  • Provide mentoring to junior team members
  • Participate in retrospective reviews
  • Participate in the estimation process for new work and releases
  • Maintain and enhance existing applications
  • Bring new perspectives to problems
  • Be driven to improve yourself and the way things are done

Requirements
  • Understanding of Agile principles (Scrum)
  • 5+ years of proven professional Python and SQL development experience
  • Proficient with distributed data processing (PySpark, Snowpark)
  • Proficient with automated testing (e.g., pytest)
  • Proficient with GitHub
  • Experience using Python frameworks
  • Experience with Cloud Technologies & Services (Azure preferred, GCP, AWS)
  • Experience with performance tuning of enterprise applications
  • Experience with debugging data pipelines
  • Understanding of CI/CD
  • Understanding of object-oriented principles
  • Understanding of SOLID principles

Preferred Skills
  • Python & SQL Development
  • Distributed Data Processing (PySpark, Snowpark)
  • CI/CD practices
  • Automated data pipeline orchestration
  • Data observability: logging, monitoring, and alerting (Datadog)
  • Azure Cloud infrastructure development
  • Snowflake
  • Databricks
  • Data quality checks
  • Cloud Technologies


IMPORTANT INFO

This is a Hybrid position. Candidates must be able to come into the office on Monday, Tuesday, and Wednesday of each week. We have locations in Cincinnati, OH, Chicago, IL, Deerfield, IL, New York, NY, and Portland, OR. There are no remote options for this position.

We are NOT working with Staffing Firms, Consulting Companies, or any other 3rd parties on this position.

Full-time only. No contracts. No C2C.

#LI-DOLF