Rackspace
Senior Big Data Hadoop ML Engineer (GCP)
Rackspace, San Antonio, Texas, United States, 78208
About the Role:
We are seeking a highly skilled and experienced Senior Big Data Engineer to join our dynamic team. The ideal candidate will have a strong background in developing batch processing systems, with extensive experience in the Apache Hadoop ecosystem (MapReduce, Oozie, Hive, Pig, HBase, Storm). This role involves working in Java and building Machine Learning pipelines for data collection or batch inference. This is a remote position, requiring excellent communication skills and the ability to solve complex problems independently and creatively.

Work Location:
US-Remote
What you will be doing:
- Develop scalable and robust code for large-scale batch processing systems using Hadoop, Oozie, Pig, Hive, MapReduce, Spark (Java), Python, and HBase
- Develop, manage, and maintain batch pipelines supporting Machine Learning workloads
- Leverage GCP for scalable big data processing and storage solutions
- Implement automation/DevOps best practices for CI/CD, IaC, etc.

Requirements:
- Proficiency in the Hadoop ecosystem, including MapReduce, Oozie, Hive, Pig, HBase, and Storm
- Strong programming skills in Java, Python, and Spark
- Knowledge of public cloud services, particularly GCP
- Experience applying infrastructure and DevOps principles in daily work, using tools for continuous integration and continuous deployment (CI/CD) and Infrastructure as Code (IaC), such as Terraform, to automate and improve development and release processes
- Ability to tackle complex challenges and devise effective solutions, using critical thinking to approach problems from various angles and propose innovative solutions
- Proven ability to work effectively in a remote setting, with strong written and verbal communication skills; collaborate with team members and stakeholders to ensure a clear understanding of technical requirements and project goals
- Proven experience engineering batch processing systems at scale
- Hands-on experience with public cloud platforms, particularly GCP; additional experience with other cloud technologies is advantageous

Must Have:
- Experience with batch pipelines supporting Machine Learning workloads
- Strong experience in a programming language such as Java
- Strong experience in the Apache Hadoop ecosystem
- 10+ years of experience in customer-facing software/technology or consulting
- 5+ years of experience with “on-premises to cloud” migrations or IT transformations
- Technical degree: Computer Science, Software Engineering, or a related field

Good to Have:
- Familiarity with Terraform
- Familiarity with Python
- 5+ years of experience building and operating solutions on GCP