LeafLink

Principal Data Engineer, Platform

LeafLink, New York, NY 10261, US


The Role

LeafLink is seeking a Principal Data Engineer to join our New York-based team. As a remote or onsite member of the data engineering and analytics team, you will have a direct impact on how LeafLink harnesses its first-party data from various sources to generate business value. In this role you will integrate third-party data sets with our proprietary data to produce valuable insights into business and customer needs.

Who You Are

You are deeply passionate about organizing and managing data. You understand the value that powerful reporting and analytics can drive for the business. You take a structured, detail-oriented approach to solving problems, drawing on a diverse and resourceful technical toolkit. You collaborate well cross-functionally, and communicating regular updates and leading projects comes naturally to you.

What You’ll Be Doing

Create and maintain optimal data pipeline architecture

Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Python, and AWS cloud technologies

Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics

Assist in building a high-performing data platform that will power various reporting and analytics applications at LeafLink

Design, develop, and test data models in our data warehouse that enable data and analytics processes

Troubleshoot, diagnose, and address data quality issues quickly and effectively

Manage the codebase in a Git-based repository structure and release properly tested code

Maintain documentation on product capabilities, architecture, and infrastructure supporting the Data Environment

What You’ll Bring to the Team

Minimum of 3 years' experience on a data or engineering team in a professional working environment

Advanced SQL knowledge and query-authoring experience with relational databases, along with working familiarity with a variety of database systems

Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement

Expertise writing Python processing jobs to ingest a variety of structured and unstructured data from sources and formats such as REST APIs, flat files, and logs

You should also have experience using the following software/tools:

Experience with object-oriented and functional scripting in Python, including data processing libraries such as requests, pandas, and sqlalchemy

Experience with relational SQL and NoSQL databases, as well as cloud-based OLAP data warehouses such as Redshift or Snowflake

Experience with data pipeline and workflow management tools such as Airflow

Experience with a cloud-based data stack; AWS cloud services experience is a plus

Hands-on experience with technologies such as Dynamo, Terraform, Kubernetes, Fivetran, and dbt is a strong plus

Comfortable working in a fast-paced growth business with many collaborators and quickly evolving business needs

LeafLink Perks & Benefits

Flexible PTO - you’re going to be working hard so enjoy time off with no cap!

A robust stock option plan to give our employees a direct stake in LeafLink’s success

5 Days of Volunteer Time Off (VTO) - giving back is important to us and we want our employees to prioritize cultivating a better community

Competitive compensation and 401k match

Comprehensive health coverage (medical, dental, vision)

Commuter Benefits through our Flexible Spending Account
