Agility Partners LLC
Azure Data Engineer | New Albany, OH | 1 Opening
Agility Partners LLC, New Albany, Ohio, United States, 43054
Agility Partners is looking for a Senior Data Engineer to build data pipelines, model and prepare data, perform complex data analysis to answer business questions, build and automate the data pipeline and quality framework to enable and promote self-service data pipelines, and assist in operationalizing AI/ML engineering solutions. This role is also expected to lead and guide other team members and to evangelize design patterns and coding standards.
This role plays an active part in our client's Data Modernization project to migrate from on-prem platforms such as IBM Netezza to the cloud.
In this role, you will:
- Team up with the engineering teams and enterprise architecture (EA) to define standards, design patterns, accelerators, development practices, DevOps, and CI/CD automation.
- Create and maintain the data ingestion, quality testing, and audit framework.
- Conduct complex data analysis to answer queries from business users or technology team partners, whether raised directly by analysts or stemming from reporting tools such as Power BI, Tableau, or OBIEE.
- Build and automate data ingestion, transformation, and aggregation pipelines using Azure Data Factory, Databricks/Spark, Snowflake, and Kafka, as well as enterprise scheduler tools such as CA Workload Automation or Control-M.
- Set up and evangelize the metadata-driven approach to data pipelines to promote self-service.
- Continuously improve data quality, audit monitoring, and alerting.
- Evaluate process automation options and collaborate with engineering and architecture to review proposed designs.
- Demonstrate mastery of build and release engineering principles and methodologies, including source control, branch management, build and smoke testing, and archiving and retention practices.
- Adhere to, enhance, and document design principles and best practices by collaborating with Solution and Enterprise Architects.
- Participate in and support the Data Academy and Data Literacy program to train business users and technology teams on data.
- Respond to SLA-driven production data quality or pipeline issues.
- Work in a fast-paced Agile/Scrum environment.
- Identify and assist with the implementation of DevOps practices in support of fully automated deployments.
- Document data flow diagrams, data models, technical data mappings, and production support information for data pipelines.
- Follow industry-standard data security practices and evangelize them across the team.
Benefits and Perks
- Fully remote!
- Join a collaborative team making a positive impact on the financial stability and overall health of an organization.
- Culture-focused company with a customer-first approach and a growth mindset.
- Great resume builder – gain experience with a Fortune 15 company.
- Medical, dental, and vision plans (PPO and HSA options available); individual and family coverage offerings.
- Long- and short-term disability coverage.
- 401(k).
The Ideal Candidate
- 5+ years of experience in an enterprise data management or data engineering role.
- 3+ years of hands-on experience building metadata-driven data pipelines using Azure Data Factory and Databricks/Spark for a cloud data lake.
- 5+ years of hands-on experience using one or more of the following for data analysis and wrangling: Databricks, Python/PySpark, Jupyter notebooks.
- Expert-level SQL knowledge on databases such as, but not limited to, Snowflake, Netezza, Oracle, SQL Server, MySQL, and Teradata.
- 3+ years of hands-on experience with one or more big data technologies such as Cloudera Hadoop, Pivotal, Vertica, or MapR is a plus.
- Experience working in a multi-developer environment and hands-on experience using either Azure DevOps or GitLab.
- Experience with SLA-driven production data pipeline or quality support preferred.
- Experience with, or a strong understanding of, traditional enterprise ETL platforms such as IBM DataStage, Informatica, Pentaho, Ab Initio, etc.
- Functional knowledge of some of the following technologies: Terraform, Azure CLI, PowerShell, containerization (Kubernetes, Docker).
- Functional knowledge of one or more reporting tools such as Power BI, Tableau, or OBIEE.
- Team player with excellent communication skills, able to communicate with customers directly and explain the status of deliverables in scrum calls.
- Ability to implement Agile methodologies and work in an Agile DevOps environment.