Saxon Global
Azure Developer
Saxon Global, Jersey City, New Jersey, United States, 07390
Job Description:
They are looking for the same profile as below, but with heavy Informatica MDM experience.
RESPONSIBILITIES
- Build large-scale batch and real-time data pipelines with data processing frameworks on the Azure cloud platform.
- Design and implement highly performant data ingestion pipelines from multiple sources using Azure Databricks (a minimal ingestion sketch follows this list).
- Build data pipelines using Azure Data Factory and Databricks; direct hands-on experience is expected.
- Develop scalable, reusable frameworks for ingesting datasets.
- Lead the design of ETL, data integration, and data migration.
- Partner with architects, engineers, information analysts, and business and technology stakeholders to develop and deploy enterprise-grade platforms that enable data-driven solutions.
- Integrate the end-to-end data pipeline, taking data from source systems to target data repositories while maintaining data quality and consistency at all times.
- Work with event-based / streaming technologies to ingest and process data.
- Work with other members of the project team to support delivery of additional project components (API interfaces, search).
- Evaluate the performance and applicability of multiple tools against customer requirements.
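As a rough illustration of the ingestion work described above, the sketch below reads raw CSV files from an ADLS Gen2 landing path and appends them to a Delta table. It is a minimal PySpark sketch assuming a Databricks runtime with Delta Lake available; the storage account, container names, and paths are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal PySpark ingestion sketch for a Databricks notebook or job.
# Assumptions: Databricks runtime with Delta Lake; the ADLS Gen2 storage
# account and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-ingest-sketch").getOrCreate()

# Hypothetical landing (raw) and curated (Delta) locations.
SOURCE_PATH = "abfss://landing@examplestorage.dfs.core.windows.net/sales/raw/"
TARGET_PATH = "abfss://curated@examplestorage.dfs.core.windows.net/sales/delta/"

# Read raw CSVs; schema inference is acceptable for a sketch, though a
# production pipeline would pin an explicit schema for consistency.
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(SOURCE_PATH)
)

# Add lineage/audit columns so quality issues can be traced back to the
# originating file and load time, then drop exact duplicate rows.
curated_df = (
    raw_df
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_source_file", F.input_file_name())
    .dropDuplicates()
)

# Append into a Delta table partitioned by load date; Delta's transaction
# log keeps concurrent writes consistent.
(
    curated_df
    .withColumn("_load_date", F.to_date("_ingested_at"))
    .write
    .format("delta")
    .mode("append")
    .partitionBy("_load_date")
    .save(TARGET_PATH)
)
```

In practice a job like this would typically be triggered by an Azure Data Factory pipeline, with the paths passed in as parameters rather than hardcoded.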
REQUIREMENTS
- Experience with ADLS, Azure Databricks, Azure SQL DB, and Data Warehouse.
- Strong working experience implementing Azure cloud components using Azure Data Factory, Azure Data Analytics, Azure Data Lake, Azure Data Catalog, Logic Apps, and Function Apps.
- Knowledge of Azure Storage services (ADLS, Storage Accounts).
- Expertise in designing and deploying data applications on Azure cloud solutions.
- Hands-on experience in performance tuning and optimizing code running in a Databricks environment (see the tuning sketch after this list).
- Good understanding of SQL, T-SQL, and/or PL/SQL.
- Experience working on Agile projects, with knowledge of Jira.
- Experience with data ingestion projects in an Azure environment is a plus.
- Demonstrated analytical and problem-solving skills, particularly as applied to a big data environment.
- Experience with Python scripting; Spark SQL and PySpark are a plus.
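For the Spark SQL and Databricks tuning items above, the fragment below shows two routine optimizations: reducing shuffle-partition count before a wide aggregation, and registering a DataFrame as a temporary view so it can be queried with Spark SQL. The table path and column names ("region", "amount") are hypothetical; this is a sketch of the technique, not a claim about any specific workload.

```python
# Sketch of routine Spark SQL usage and tuning in a Databricks environment.
# The Delta path and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Lower the shuffle partition count for a modest dataset; the default of
# 200 often creates many tiny tasks on small-to-medium inputs.
spark.conf.set("spark.sql.shuffle.partitions", "64")

sales = spark.read.format("delta").load(
    "abfss://curated@examplestorage.dfs.core.windows.net/sales/delta/"
)

# Prune columns and filter early so less data crosses the shuffle.
slim = sales.select("region", "amount").where("amount IS NOT NULL")

# Expose the DataFrame to Spark SQL and run a wide aggregation.
slim.createOrReplaceTempView("sales_curated")
totals = spark.sql("""
    SELECT region, SUM(amount) AS total_amount
    FROM sales_curated
    GROUP BY region
""")

# explain() prints the physical plan, the usual starting point when
# diagnosing and tuning a slow query.
totals.explain()
```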