Virtual
Principal Data Architect
Virtual, Parsippany, New Jersey, US 07054
Direct Hire Opportunity
Hybrid (Two days per week) in Parsippany, New Jersey
Job Description:
You will play a critical role in designing, developing, and maintaining the infrastructure and systems necessary for collecting, storing, processing, and analyzing large volumes of data.
Responsibilities:
• Expertise in designing, building, and maintaining scalable data pipelines and ETL (Extract, Transform, Load) processes.
• Experience with data modeling and database design principles.
• Knowledge of data warehousing concepts and tools.
• Understanding of data governance, data quality, and data integration techniques.
• Proficiency in Microsoft Azure cloud services, including Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake, etc.
• Proficiency in data visualization tools such as Power BI and Sisense.
• Experience with managing and optimizing cloud-based data storage and processing solutions.
• Knowledge of Azure DevOps for CI/CD (Continuous Integration/Continuous Deployment).
• Understanding of cloud security and compliance principles.
• Familiarity with data integration tools (e.g., Apache Kafka, Apache NiFi).
• Understanding of machine learning concepts and experience with ML platforms like Azure Machine Learning or TensorFlow.
Requirements:
• A bachelor's or master's degree in computer science, data science, or a related field.
• Strong programming skills, particularly in languages like Python, Java, or Scala.
• Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
• Familiarity with distributed computing frameworks, such as Apache Hadoop and Apache Spark.
• Experience with version control systems like Git.
Apply now!