The Custom Group of Companies
Senior Data Engineer
The Custom Group of Companies, New York, New York, US, 10261
Senior Data Engineer
New York, NY - Hybrid position; onsite 2 days a week (local to the area)
Rate: $100-$110 per hour
1+ year contract
Must be a U.S. citizen (USC) to obtain an L2 security clearance
No Corp-to-Corp or third-party agencies
We are looking for a Senior Data Engineer to join our team of professionals. The selected individual should be well versed in Databricks and AWS.
In this position you will:
- Work on a team of 15 and bring expert-level skills in coding and testing pipelines in Databricks on AWS
- Other solid skills: Python, SQL, Spark, AWS, Trino
Position Description
- Design, develop, monitor, test, and maintain data pipelines in Databricks on an AWS ecosystem with Delta Lake, Python, SQL, and Starburst as the technology stack (a brief illustrative sketch follows this list).
- Collaborate with cross-functional teams to understand data needs and translate them into effective data pipeline solutions.
- Establish data quality checks and ensure data integrity and accuracy throughout the data lifecycle.
- Automate testing of the data pipelines and configure it as part of CI/CD.
- Optimize data processing and query performance for large-scale datasets within AWS and Databricks environments.
- Document data engineering processes, architecture, and configurations.
- Troubleshoot and debug data-related issues on the AWS Databricks platform.
- Integrate Databricks with other AWS products such as SNS, SQS, and MSK.
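For illustration only, here is a minimal sketch of the kind of Delta Lake pipeline step and data quality check described above. It assumes a Databricks runtime; the S3 path, table name, column names, and function name are hypothetical placeholders, not part of this posting.

# Hypothetical sketch of a Databricks pipeline step with a data quality gate.
from pyspark.sql import SparkSession, functions as F

# On Databricks this returns the existing session; locally it creates one.
spark = SparkSession.builder.getOrCreate()

RAW_PATH = "s3://example-bucket/raw/orders/"   # hypothetical S3 landing zone
TARGET_TABLE = "analytics.orders_silver"       # hypothetical Delta table

def run_orders_pipeline():
    # Ingest raw JSON files landed in S3.
    raw = spark.read.json(RAW_PATH)

    # Light transformation: type the timestamp and drop incomplete rows.
    cleaned = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .dropna(subset=["order_id", "order_ts"])
    )

    # Data quality check: fail fast if duplicate order IDs slip through.
    dupes = (cleaned.groupBy("order_id").count()
                    .filter(F.col("count") > 1).count())
    if dupes > 0:
        raise ValueError(f"Data quality check failed: {dupes} duplicated order_id values")

    # Write to a Delta table; Delta Lake provides ACID transactions and time travel.
    (cleaned.write.format("delta")
            .mode("overwrite")
            .option("overwriteSchema", "true")
            .saveAsTable(TARGET_TABLE))

run_orders_pipeline()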
Qualifications
- Minimum of 5 years of experience in data engineering roles, with a focus on AWS and expert-level skills in coding and testing Databricks in an AWS environment.
- Experience with Python, PySpark, SQL, and Spark.
- GitLab with CI/CD (a sample pipeline test is sketched after this list).
- AWS services such as S3, RDS, Lambda, SQS, SNS, and MSK are required.
- Databricks certification is highly desired.
- Strong SQL skills to perform data analysis and understand source data.
- Experience with data pipeline orchestration tools.
- Communication, collaboration, experimentation, inquisitiveness, and attention to detail.
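For illustration only, here is a minimal sketch of the kind of automated pipeline test that could run in a GitLab CI/CD job. The clean_orders transform, its column names, and the test data are hypothetical placeholders, not part of this posting.

# Hypothetical pytest sketch for an automated data pipeline test.
import pytest
from pyspark.sql import SparkSession, functions as F

def clean_orders(df):
    # Example transform under test: type the timestamp and drop incomplete rows.
    return (df.withColumn("order_ts", F.to_timestamp("order_ts"))
              .dropna(subset=["order_id", "order_ts"]))

@pytest.fixture(scope="session")
def spark():
    # Local Spark session so the test needs no Databricks or AWS access.
    return (SparkSession.builder.master("local[1]")
            .appName("pipeline-tests").getOrCreate())

def test_clean_orders_drops_incomplete_rows(spark):
    data = [
        ("A-1", "2024-01-01 10:00:00"),
        (None,  "2024-01-01 11:00:00"),  # missing order_id should be dropped
        ("A-2", None),                   # missing timestamp should be dropped
    ]
    df = spark.createDataFrame(data, ["order_id", "order_ts"])

    result = clean_orders(df)

    assert result.count() == 1
    assert result.first()["order_id"] == "A-1"

Using a local SparkSession keeps the test independent of Databricks and AWS credentials, so it can run in a standard GitLab runner as part of the CI/CD pipeline.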