CapB InfoteK
AZURE DATA ENGINEER
CapB InfoteK, Des Moines, Iowa, United States, 50319
CapB is a global leader in IT Solutions and Managed Services. Our R&D is focused on providing cutting-edge products and solutions across Digital Transformation, from Cloud, AI/ML, IoT, and Blockchain to MDM/PIM, Supply Chain, ERP, CRM, HRMS, and Integration solutions. For our growing needs we need consultants who can work with us on a salaried or contract basis. We provide industry-standard benefits and an environment for learning and growth.
For one of our ongoing projects, we are looking for an
AZURE DATA ENGINEER.
The position is based out of Des Moines, IA. Local candidates are preferred, but the role can be performed remotely for the time being this year.
Responsibilities:
• Create functional design specifications, Azure reference architectures, and assist with other project deliverables as needed.
• Design and develop Platform as a Service (PaaS) solutions using various Azure services
• Create a data factory; orchestrate data processing activities in a data-driven workflow; monitor and manage the data factory; and move, transform, and analyze data
• Design complex enterprise data solutions that utilize Azure Data Factory; create migration plans to move legacy SSIS packages into Azure Data Factory
• Build conceptual and logical data models
• Design and implement big data real-time and batch processing solutions
• Design, build, and scale data pipelines across a variety of source systems and streams (internal, third-party, as well as cloud-based), distributed / elastic environments, and downstream applications and/or self-service solutions.
• Develop and document mechanisms for deployment, monitoring and maintenance
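One recurring pattern behind the ADF work described above is the high-watermark incremental load: each run picks up only rows modified since the last run and advances the stored watermark. A minimal sketch in plain Python (illustrative only; the names here are not a real ADF API):

```python
from datetime import datetime, timezone

def incremental_load(source_rows, watermark):
    """Return rows modified after `watermark`, plus the advanced watermark.

    Mirrors the lookup-then-copy pattern an ADF incremental pipeline uses;
    `source_rows` stands in for a source table query (hypothetical data).
    """
    new_rows = [r for r in source_rows if r["modified"] > watermark]
    # If nothing changed, the watermark stays where it was.
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified": datetime(2021, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "modified": datetime(2021, 2, 1, tzinfo=timezone.utc)},
]
loaded, wm = incremental_load(rows, datetime(2021, 1, 15, tzinfo=timezone.utc))
```

Only the row modified after the stored watermark is loaded, and the watermark advances to that row's timestamp, so the next run starts where this one left off.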
Skills and Experience:
• Bachelor's degree or higher in Computer Science/Engineering, Information Technology, or Information Systems
• 3+ years of experience with the Microsoft cloud data platform: Azure Data Factory, Azure Databricks, Python, Scala, Spark SQL, and Azure SQL Data Warehouse
• 3+ years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational database, NoSQL, and data lake solutions
• Expertise with SQL, database design/structures, ETL/ELT design patterns, and data mart structures (star and snowflake schemas, etc.)
• Functional knowledge of programming, scripting, and data science languages such as JavaScript, PowerShell, Python, Bash, SQL, .NET, Java, PHP, Ruby, Perl, C++, and R
• Creation of descriptive, predictive, and prescriptive analytics solutions using Azure Stream Analytics, Azure Analysis Services, Data Lake Analytics, HDInsight, HDP, Spark, Databricks, MapReduce, Pig, Hive, Tez, SSAS, Watson Analytics, and SPSS
• Experience in Azure Data Factory (ADF) creating multiple pipelines and activities for full and incremental data loads into Azure Data Lake Store and Azure SQL Data Warehouse
• Experience with Azure Data Lake Storage, including working with Parquet files and partitions
• Experience managing Microsoft Azure environments with VMs, VNets, subnets, NSGs, resource groups, etc.
• Experience creating and configuring Azure resources and RBAC
• Experience with Git/Azure DevOps
• Azure certification is desirable
• Must be able to communicate clearly and work well on a team
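To make the data mart bullet concrete, here is a tiny star schema (one fact table joined to dimension tables on surrogate keys), sketched with Python's built-in sqlite3. All table and column names are illustrative, not taken from any specific project:

```python
import sqlite3

# In-memory database holding a minimal star schema:
# fact_sales in the center, dim_date and dim_product around it.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount REAL
);
""")
con.execute("INSERT INTO dim_date VALUES (20210101, '2021-01-01')")
con.execute("INSERT INTO dim_product VALUES (1, 'widget')")
con.execute("INSERT INTO fact_sales VALUES (20210101, 1, 9.99)")

# Typical star-schema query: aggregate the fact, slice by a dimension.
total = con.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name
""").fetchone()
```

The same shape scales up to the Azure SQL Data Warehouse / snowflake-schema work listed above; the snowflake variant simply normalizes the dimension tables further.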