Datavail
Analytics Azure Architect
Datavail, Boulder, Colorado, United States, 80301
Job Description
Data Architect: Data Warehouse / Lakehouse (Databricks or Microsoft Fabric), Data Lake, ADF/Synapse data pipelines (ETL/ELT), strong SQL and PySpark knowledge, CI/CD experience.
Bachelor's degree in Computer Science, Information Technology, Engineering, Business, or a related field AND 4+ years of experience in cloud/infrastructure technologies, IT consulting/support, systems administration, network operations, software development/support, technology solutions, practice development, architecture, and/or consulting; or equivalent experience.
12+ years of experience implementing large data and analytics platforms
6+ years of experience architecting and building data platforms on Azure
4+ years of experience architecting and building data platforms on Databricks from scratch
Experience using Azure Synapse as a source or target for data pipelines in at least one project
Hands-on experience with Microsoft Fabric in at least one project
Proficient in the core Databricks engineering services: Unity Catalog, Spark jobs, Delta Live Tables (DLT), DLT Meta, Databricks Workflows, and Auto Loader
Hands-on with Databricks SQL and PySpark
Understanding of data modelling and data mappings; able to evaluate data model changes and propose best practices and guidelines
Creating technical designs based on an end-to-end data architecture and frameworks for metadata-driven data ingestion, transformation, logging, and monitoring
Deploying data pipelines/applications to higher environments using CI/CD pipelines
Experience writing technical user stories
Experience with Agile methodology and Scrum ceremonies