Circle K
Lead Data Engineer
Circle K, Charlotte, North Carolina, United States, 28245
Job Description:
As the Technical Lead Data Engineer, your primary responsibility will be to lead the design, development, and implementation of data solutions that empower our organization to derive actionable insights from complex datasets. You will guide a team of data engineers, foster collaboration with cross-functional teams, and drive initiatives to strengthen our data infrastructure, CI/CD pipelines, and analytics capabilities.
Responsibilities:
Apply advanced knowledge of data engineering principles, methodologies, and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.
Gather and process raw, structured, semi-structured, and unstructured data using batch and real-time data processing frameworks.
Implement and optimize data solutions in enterprise data warehouses and big data repositories, focusing primarily on migration to the cloud.
Deliver new and enhanced capabilities to Enterprise Data Platform partners to meet the needs of product, engineering, and business teams.
Build enterprise systems using Databricks, Snowflake, and cloud platforms such as Azure, AWS, and GCP.
Leverage strong Python, Spark, and SQL programming skills to construct robust pipelines for efficient data processing and analysis.
Implement CI/CD pipelines for automating build, test, and deployment processes to accelerate the delivery of data solutions.
Implement data modeling techniques to design and optimize data schemas, ensuring data integrity and performance.
Drive continuous improvement initiatives to enhance performance, reliability, and scalability of our data infrastructure.
Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions.
Implement best practices for data governance, security, and compliance to ensure the integrity and confidentiality of our data assets.
Coach and mentor junior data engineers, providing guidance, support, and technical expertise.
Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Proven experience (8+ years) in a data engineering role, with expertise in designing and building data pipelines, ETL processes, and data warehouses.
Strong proficiency in SQL and Python, with hands-on Spark programming experience.
Strong experience with cloud platforms such as AWS, Azure, or GCP is a must.
Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, and distributed computing frameworks.
Knowledge of data lake and data warehouse solutions such as Databricks, Snowflake, Amazon Redshift, and Google BigQuery, as well as orchestration tools such as Azure Data Factory and Airflow.
Experience in implementing CI/CD pipelines for automating build, test, and deployment processes.
Solid understanding of data modeling concepts, data warehousing architectures, and data management best practices.
Excellent communication and leadership skills, with the ability to effectively collaborate with cross-functional teams and drive consensus on technical decisions.
Relevant certifications (e.g., Azure, Databricks, Snowflake) would be a plus.