Strategic Staffing Solutions
Data Engineer - Python, PySpark, AWS
Strategic Staffing Solutions, Richmond, Virginia, United States, 23214
W2 ONLY- NO C2C
Location:
Richmond, VA
Setting:
May work remote or on site
Contract:
4+ months
Beware of scams. S3 never asks for money during its onboarding process.
Position Overview:
As a Data Engineer, you will be responsible for designing, implementing, and maintaining scalable data pipelines and infrastructure. You will work closely with our data scientists and analysts to understand data requirements and ensure data quality and reliability. The ideal candidate will have strong technical skills in Python, PySpark, and AWS, along with experience in data modeling, ETL processes, and cloud-based data solutions.
Key Responsibilities:
- Design and develop robust data pipelines using Python and PySpark to process large volumes of data from multiple sources.
- Implement efficient ETL processes to transform and cleanse data before loading into our data warehouse.
- Collaborate with data scientists and analysts to understand data requirements and ensure data quality and consistency.
- Optimize data infrastructure and pipeline performance for scalability and reliability.
- Monitor and troubleshoot data pipelines and infrastructure to ensure uptime and performance.
- Implement best practices for data security, privacy, and compliance.
- Stay current with advancements in data engineering technologies and best practices.
Tech stack:
- Python
- PySpark
- AWS
Responsibilities:
- Development and maintenance of data pipelines using Python, PySpark, and various AWS services.
- End-to-end testing, deployment, product support, and defect fixes.
Job ID:
JOB-236362
Publish Date:
20 Aug 2024