Randstad
senior data engineer
Randstad, Beaverton, Oregon, US, 97078
senior data engineer.
beaverton, oregon (remote)
posted december 20, 2024
job details
summary
$60 - $66 per hour
contract
no requirements
category computer and mathematical occupations
reference 1075444
job details
job summary:
About Us:
We are the Consumer Data Engineering (CoDE) team, and we are seeking an experienced Senior Data Engineer to join us. As a Senior Data Engineer, you will play a critical role in designing, building, and maintaining our big data infrastructure, ensuring the scalability, reliability, and performance of our data systems.
Job Summary:
We are looking for a highly skilled Senior Data Engineer with a strong background in big data engineering, cloud computing, and software development. The ideal candidate will have a proven track record of designing and implementing scalable data solutions using AWS, Spark, and Python. The candidate should have hands-on experience with Databricks, optimizing Spark applications, and building ETL pipelines. Experience with CI/CD, unit testing, and big data problem-solving is a plus.
Key Responsibilities:
Design, build, and maintain large-scale data pipelines using AWS EMR, Spark, and Python
Develop and optimize Spark applications and ETL pipelines for performance and scalability
Collaborate with product managers and analysts to design and implement data models and data warehousing solutions
Work with cross-functional teams to integrate data systems with other applications and services
Ensure data quality, integrity, and security across all data systems
Develop and maintain unit test cases for data pipelines and applications
Implement CI/CD pipelines for automated testing and deployment
Collaborate with the DevOps team to ensure seamless deployment of data applications
Stay up to date with industry trends and emerging technologies in big data and cloud computing
Requirements:
At least 5 years of experience in data engineering, big data, or a related field
Proficiency in Spark, including Spark Core, Spark SQL, and Spark Streaming
Experience with AWS EMR, including cluster management and job optimization
Strong skills in Python, including data structures, algorithms, and software design patterns
Hands-on experience with Databricks, including Databricks Lakehouse (advantageous)
Experience with optimizing Spark applications and ETL pipelines for performance and scalability
Good understanding of data modeling, data warehousing, and data governance
Experience with CI/CD tools such as Jenkins, GitLab, or CircleCI (advantageous)
Strong understanding of software development principles, including unit testing and test-driven development
Ability to design and implement scalable data solutions that meet business requirements
Strong problem-solving skills, with the ability to debug complex data issues
Excellent communication and collaboration skills, with the ability to work with cross-functional teams
Nice to Have:
Experience with Databricks Lakehouse
Knowledge of data engineering best practices and design patterns
Experience with agile development methodologies, such as Scrum or Kanban
location: Beaverton, Oregon
job type: Contract
salary: $60 - $66 per hour
work hours: 8am to 4pm
education: No Degree Required
responsibilities:
Design, build, and maintain large-scale data pipelines using AWS EMR, Spark, and Python
Develop and optimize Spark applications and ETL pipelines for performance and scalability
Collaborate with product managers and analysts to design and implement data models and data warehousing solutions
Work with cross-functional teams to integrate data systems with other applications and services
Ensure data quality, integrity, and security across all data systems
Develop and maintain unit test cases for data pipelines and applications
Implement CI/CD pipelines for automated testing and deployment
Collaborate with the DevOps team to ensure seamless deployment of data applications
Stay up to date with industry trends and emerging technologies in big data and cloud computing
qualifications:
Experience level: Experienced
Minimum 5 years of experience
Education: No Degree Required
skills:
Data Warehouse

Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.

At Randstad Digital, we welcome people of all abilities and want to ensure that our hiring and interview process meets the needs of all applicants. If you require a reasonable accommodation to make your application or interview experience a great one, please contact HRsupport@randstadusa.com.

Pay offered to a successful candidate will be based on several factors, including the candidate's education, work experience, work location, specific job duties, certifications, etc. In addition, Randstad Digital offers a comprehensive benefits package, including health, an incentive and recognition program, and 401K contribution (all benefits are based on eligibility).

This posting is open for thirty (30) days.

It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.