Insight Global

Lead Azure Data Engineer

Insight Global, Charlotte, North Carolina, United States, 28245


POSITION:

Lead Azure Data Engineer

LOCATION:

Charlotte, NC 28217 (Hybrid, 3 days per week)

PAY RANGE:

$80.00 - $87.00 per hour (W2)

DURATION:

6-Month Contract-to-Hire (100% career opportunity)

MUST-HAVES:

- 10+ years of experience in a data engineering role, with expertise in Azure and in designing and building data pipelines, ETL processes, and data warehouses.
- 2+ years of experience as a Lead Engineer, developing and mentoring other engineers.
- 5+ years of experience with the Azure cloud platform.
- Extensive experience with Azure Databricks and Azure Data Factory for Data Lake and Data Warehouse solutions.
- Experience with Azure Synapse Analytics, Databricks Unity Catalog, and Azure Logic Apps.
- Experience implementing CI/CD pipelines to automate build, test, and deployment processes.
- Advanced proficiency in SQL, including window functions such as rank, average, and standard deviation (see the brief sketch at the end of this posting).
- Proficiency in Python, Spark, and PySpark.
- Hands-on experience with big data technologies such as Hadoop, Spark, and Kafka.

PLUSSES:

- Experience in the retail industry/sector.
- Relevant certifications (e.g., Azure, Databricks, Snowflake).
- Experience with Snowflake and/or Microsoft Fabric.

SUMMARY:

A top Insight Global client in Charlotte, NC is looking for a Lead Azure Data Engineer to join their team. As the Technical Lead Data Engineer, your primary responsibility will be to lead the design, development, and implementation of data solutions in Azure that enable the organization to derive actionable insights from complex datasets. You will guide a team of data engineers, collaborate with cross-functional teams, and drive initiatives to strengthen the data infrastructure, CI/CD pipelines, and analytics capabilities.

RESPONSIBILITIES:

- Build Azure enterprise systems using Databricks, Unity Catalog, Logic Apps, Synapse, and Snowflake.
- Leverage strong SQL, Python, Spark, and PySpark programming skills to construct robust pipelines for efficient data processing and analysis.
- Apply advanced knowledge of data engineering principles, methodologies, and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.
- Implement CI/CD pipelines for automating build, test, and deployment processes to accelerate the delivery of data solutions.
- Gather and process raw, structured, semi-structured, and unstructured data using batch and real-time data processing frameworks.
- Implement and optimize data solutions in enterprise data warehouses and big data repositories, focusing primarily on migration to the cloud.
- Deliver new and enhanced capabilities to Enterprise Data Platform partners to meet the needs of product, engineering, and business teams.
- Apply data modeling techniques to design and optimize data schemas, ensuring data integrity and performance.
- Drive continuous improvement initiatives to enhance the performance, reliability, and scalability of the data infrastructure.
- Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions.
- Implement best practices for data governance, security, and compliance to ensure the integrity and confidentiality of data assets.

COMPENSATION:

$80.00 - $87.00/hr. Exact compensation may vary based on several factors, including skills, experience, and education.

Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
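
For illustration only (not part of the client's job description): a minimal PySpark sketch of the kind of window-function work named in the must-haves, computing a per-group rank, average, and standard deviation. The data, table, and column names are hypothetical.

```python
# Hypothetical sketch: SQL-style window functions (rank, avg, stddev) in PySpark.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window-function-sketch").getOrCreate()

# Hypothetical sales data: (region, store, sales)
df = spark.createDataFrame(
    [("East", "S1", 100.0), ("East", "S2", 250.0),
     ("West", "S3", 175.0), ("West", "S4", 140.0)],
    ["region", "store", "sales"],
)

order_w = Window.partitionBy("region").orderBy(F.desc("sales"))  # ordered window for rank
group_w = Window.partitionBy("region")                           # unordered window for aggregates

result = (
    df.withColumn("sales_rank", F.rank().over(order_w))
      .withColumn("avg_sales", F.avg("sales").over(group_w))
      .withColumn("stddev_sales", F.stddev("sales").over(group_w))
)
result.show()
```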