Willis Towers Watson (WTW) - Insurance Services
Data Engineer
Willis Towers Watson (WTW) - Insurance Services, Chicago, Illinois, United States, 60290
Job Summary:
As a Senior Data Engineer, you will play a pivotal role in designing, building, and optimizing complex data pipelines and ETL processes, leveraging your expertise in Azure Synapse and PySpark to build advanced data analytics systems and data outputs for downstream consumption. You will be responsible for architecting scalable, efficient data models that support business processes for reporting and data integrations, and you will apply cutting-edge Generative AI (GenAI) technologies to drive innovative data extraction and analysis solutions for downstream consumers. The ideal candidate brings over 10 years of data engineering experience, with a strong background in ETL, data modeling, Azure cloud solutions, and the integration of AI into data workflows.
The Role:
Key Responsibilities:
ETL Development & Maintenance: Lead the design, development, and optimization of complex ETL processes and pipelines that enable reliable data ingestion, transformation, and loading across a variety of sources.
Azure Synapse Analytics: Architect and develop scalable data solutions utilizing Azure Synapse with a deep focus on performance optimization. Create optimized PySpark notebooks for advanced data transformations and analytical queries.
Data Modeling: Design and maintain logical and physical data models, ensuring data structures align with business needs, scalability, and optimization for data warehousing and analytics.
GenAI Integration for Data: Lead the application of Generative AI technologies for data analysis and data extraction, including leveraging GenAI for predictive analytics, automated data transformation, and natural language query processing.
Data Pipeline Automation: Develop, implement, and manage automated data pipelines for continuous integration and deployment of data solutions. Incorporate best practices for monitoring and error-handling in production environments.
Collaboration: Work closely with other Data Engineers, analysts, and business stakeholders to understand their data requirements and provide innovative, scalable data solutions.
Performance Tuning & Optimization: Continuously monitor, evaluate, and optimize data pipelines and queries to enhance the performance of data systems, minimize latency, and ensure real-time data availability.
Cloud Engineering: Drive cloud-native engineering best practices on the Azure platform including security, scalability, high availability, disaster recovery, and cost-efficiency in data storage and processing.
Documentation & Best Practices: Create and maintain clear, concise documentation for data pipelines, models, and processes. Promote best practices for data governance, quality, and security.
The Requirements:
Required Skills & Qualifications:
10+ years of experience in Data Engineering and ETL processes with proven expertise in building and optimizing data pipelines.
5+ years of experience in Azure Synapse Analytics, including hands-on work with PySpark notebooks.
5+ years of experience in Data Modeling, with strong expertise in relational and dimensional modeling.
3+ years of experience in integrating and leveraging Generative AI technologies for data-related functions.
Expertise in Azure Cloud Ecosystem, including Azure Data Lake, Azure Data Factory, and Synapse Studio.
Strong proficiency in PySpark and distributed computing frameworks.
Solid understanding of Data Governance principles, including data security, data quality, and master data management (MDM).
Strong SQL skills and proficiency in other languages like Python, Scala, or R.
Experience with DevOps practices such as CI/CD pipelines for data workflows.
Familiarity with AI/ML workflows and the intersection of AI with data engineering processes.
Preferred Qualifications:
Azure Certifications: Microsoft Certified: Azure Data Engineer Associate or Azure Solutions Architect Expert.
Experience with real-time data streaming solutions such as Azure Event Hubs.
Hands-on experience with BI Tools such as Power BI or similar.
Knowledge of NoSQL databases such as Cosmos DB or MongoDB.
Experience with containerization (Docker, Kubernetes) for data engineering workloads.
Experience in the Property and Casualty Insurance industry.
Soft Skills:
Problem-Solving Orientation: Ability to think critically and solve complex problems.
Strong Communication Skills: Capable of communicating effectively with both technical and non-technical stakeholders.
Leadership & Mentorship: Experience in mentoring junior engineers.
Adaptability & Continuous Learning: Openness to continuously learning new tools and technologies.
Location:
Near major WTW offices in the United States
Compensation and Benefits:
Base salary range and benefits information are included in accordance with the requirements of various state/local pay transparency legislation. Please note that salaries may vary for individuals in the same role based on several factors, including but not limited to location of the role, individual competencies, education/professional certifications, qualifications/experience, performance in the role, and potential for revenue generation.
Compensation:
The base salary range offered for this role is $110,000 - $118,000 USD per year. This role is also eligible for an annual short-term incentive bonus.
Company Benefits:
WTW provides a competitive benefits package which includes the following (eligibility requirements apply):
Health and Welfare Benefits:
Medical, Dental, Vision, Health Savings Account, Commuter Account, Flexible Spending Accounts, Group Accident, Life Insurance, Wellbeing Program.
Leave Benefits:
Paid Holidays, Annual Paid Time Off, Short-Term Disability, Long-Term Disability, Other Leaves.
Retirement Benefits:
Contributory Pension Plan and Savings Plan (401(k)).
At WTW, we trust you to know your work and the people, tools and environment you need to be successful. The majority of our colleagues work in a 'hybrid' style, with a mix of remote, in-person and in-office interactions.
EOE, including disability/vets