Ciber
Data Engineer General
Ciber, Dearborn, Michigan, United States, 48120
HTC Global Services wants you. Come build new things with us and advance your career. At HTC Global you'll collaborate with experts. You'll join successful teams contributing to our clients' success. You'll work side by side with our clients and have long-term opportunities to advance your career with the latest emerging technologies.
At HTC Global Services our consultants have access to a comprehensive benefits package. Benefits can include Paid Time Off, Paid Holidays, 401(k) matching, Life and Accidental Death Insurance, Short- and Long-Term Disability Insurance, and a variety of other perks.
Job Description:
We are seeking an experienced and highly skilled Senior DevOps Engineer to join the GDIA department. The ideal candidate will have a strong background in data warehousing and significant experience working with Google Cloud Platform (GCP). As a Senior DevOps Engineer, you will play a critical role in designing, implementing, and maintaining the infrastructure and tools that enable our data engineering and analytics teams to operate efficiently and effectively.
Skills Required:
Infrastructure as Code:
Design, build, and maintain scalable and reliable infrastructure on GCP using Infrastructure as Code (IaC) tools such as Terraform and Deployment Manager.
Automate the provisioning and management of cloud resources to ensure consistency and repeatability.
Continuous Integration and Continuous Deployment (CI/CD):
Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build to facilitate seamless code integration and deployment.
Ensure automated testing and monitoring are integrated into the CI/CD process to maintain high-quality code and rapid delivery cycles.
Data Pipeline Management:
Collaborate with data engineers to design and optimize data pipelines on GCP using tools such as Apache Airflow, Cloud Composer, and Cloud Dataflow.
Implement monitoring and alerting solutions to detect and resolve issues in data pipelines promptly.
Cloud Platform Expertise:
Utilize GCP services such as Cloud Storage, Cloud Run, and Cloud Functions to build scalable and cost-effective solutions.
Implement best practices for cloud security, cost management, and resource optimization.
Collaboration and Communication:
Work closely with data engineers, data scientists, and other stakeholders to understand their requirements and provide the necessary infrastructure and tooling support.
Foster a culture of collaboration and continuous improvement within the team.
Monitoring and Incident Management:
Implement robust monitoring, logging, and alerting solutions using tools like Cloud Monitoring (formerly Stackdriver), Prometheus, and Grafana.
Manage and respond to incidents, ensuring minimal downtime and quick resolution of issues.
Documentation and Training:
Create and maintain comprehensive documentation for infrastructure, CI/CD pipelines, and operational procedures.
Provide training and support to team members on DevOps best practices and GCP services.
Skills Preferred:
Technical Skills:
Proficiency in Infrastructure as Code (IaC) tools such as Terraform, Deployment Manager, or CloudFormation.
Strong knowledge of CI/CD tools and practices, including Jenkins, GitLab CI, and Cloud Build.
Experience with data pipeline tools and frameworks such as Apache Airflow, Cloud Composer, and Cloud Dataflow.
Familiarity with GCP services, including Cloud Storage, Cloud Run, Cloud Functions, and BigQuery.
Proficiency in scripting languages such as Python, Bash, or PowerShell.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and collaboration abilities.
Ability to work independently and as part of a team in a fast-paced, dynamic environment.
Experience Required:
Minimum of 5 years of experience in DevOps or infrastructure engineering, with a strong focus on data warehousing.
At least 2 years of hands-on experience working with Google Cloud Platform (GCP).
Education Required:
Bachelor's degree in Computer Science, Information Technology, or a related field is required.
Education Preferred:
A Master's degree in a relevant field is preferred.
Our success as a company is built on practicing inclusion and embracing diversity. HTC Global Services is committed to providing a work environment free from discrimination and harassment, where all employees are treated with respect and dignity. Together we work to create and maintain an environment where everyone feels valued, included, and respected. At HTC Global Services, our differences are embraced and celebrated. HTC is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills, and experiences within our workforce. HTC is proud to be recognized as a National Minority Supplier.