MedeAnalytics, Inc.
Sr. Cloud DevOps Engineer
MedeAnalytics, Inc., Nashville, Tennessee, United States
About the job:
Location Requirement:
The candidate hired for this position must be based within a commutable distance from Nashville, TN, or Richardson, TX. This role may require periodic in-office attendance, and applicants not located within proximity to these areas may not be considered.
MedeAnalytics is seeking a highly motivated Senior Cloud DevOps Engineer with a passion for AI, data science, and cloud automation to join our Cloud Engineering team. This lead role will drive automation initiatives aligned with our R&D strategy, support cloud migrations, and manage the cloud infrastructure in a SaaS environment. You will collaborate with product development to design and maintain scalable, reliable, and secure solutions, ensuring best practices in DevOps and cloud computing. If you thrive in a fast-paced, innovative environment and are committed to improving healthcare outcomes, we encourage you to apply.
Essential Duties and Responsibilities:
Infrastructure Automation:
Design, implement, and maintain automated infrastructure provisioning and management using tools like Terraform and AWS CloudFormation.
Collaborate with development teams to automate deployment and testing processes, including those for AI and data science models.
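For flavor, here is a minimal sketch of the kind of provisioning step this role automates. It uses boto3 (the AWS SDK for Python) purely as an illustration; in practice the same resources would typically be declared in Terraform or CloudFormation, and the bucket name and tags below are hypothetical placeholders.

```python
# Illustrative provisioning sketch with boto3 (assumption: us-east-1, hypothetical names).
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Hypothetical bucket name; real bucket names must be globally unique.
bucket_name = "example-ml-artifacts-bucket"

# In us-east-1, CreateBucket needs no LocationConstraint.
s3.create_bucket(Bucket=bucket_name)

# Tag the bucket so cost reports can attribute it to AI/data science workloads.
s3.put_bucket_tagging(
    Bucket=bucket_name,
    Tagging={"TagSet": [
        {"Key": "team", "Value": "cloud-engineering"},
        {"Key": "workload", "Value": "data-science"},
    ]},
)
```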
Containerization and Orchestration:
Manage and optimize Kubernetes clusters on AWS.
Develop and maintain Helm charts for packaging and deploying applications, including AI and data science models.
Implement containerization strategies using Docker or other relevant technologies.
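Helm charts themselves are written in YAML, but the day-to-day cluster operations described above can also be scripted. The sketch below uses the official Kubernetes Python client against a cluster reachable via the local kubeconfig (for example, an EKS cluster); the "ml" namespace and "model-serving" deployment name are hypothetical.

```python
# Sketch: inspect and scale a Deployment with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()      # uses the local kubeconfig, e.g. for an EKS cluster
apps = client.AppsV1Api()

# List deployments in a namespace to confirm what is running and healthy.
for dep in apps.list_namespaced_deployment(namespace="ml").items:
    print(dep.metadata.name, dep.status.ready_replicas)

# Scale a model-serving deployment, e.g. ahead of a batch-scoring window.
apps.patch_namespaced_deployment_scale(
    name="model-serving",
    namespace="ml",
    body={"spec": {"replicas": 3}},
)
```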
CI/CD Pipelines:
Build and maintain robust CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, Atlantis, or CircleCI, tailored for AI and data science workflows.
Integrate automated testing frameworks for both application code and AI models.
Implement code quality, security checks, and model validation within the pipelines.
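Pipeline definitions live in each CI tool's own format (Jenkinsfile, .gitlab-ci.yml, and so on), but the model-validation step mentioned above is often just a small script the pipeline runs as a gate. The following is a hypothetical sketch; the threshold, metric name, and metrics-file location are assumptions, not specifics of this role.

```python
# Hypothetical model-validation gate a CI/CD job could run after training.
import json
import sys

MIN_AUC = 0.80   # assumed threshold: fail the build if the candidate model falls below it

def main() -> int:
    # reports/metrics.json is assumed to be produced by an earlier training stage.
    with open("reports/metrics.json") as f:
        metrics = json.load(f)

    auc = metrics.get("auc", 0.0)
    print(f"candidate model AUC = {auc:.3f} (minimum {MIN_AUC})")

    # A non-zero exit code makes the CI job (Jenkins, GitLab CI, CircleCI, ...) fail,
    # blocking promotion of an under-performing model.
    return 0 if auc >= MIN_AUC else 1

if __name__ == "__main__":
    sys.exit(main())
```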
Cloud Infrastructure Management:
Manage and optimize AWS cloud resources, including EC2 instances, S3 buckets, VPCs, and other services, with a focus on supporting AI and data science workloads.
Implement best practices for cloud security, cost optimization, and performance tuning.
Monitor and troubleshoot cloud infrastructure issues, particularly related to AI and data science applications.
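As one small example of the housekeeping this involves, a cost or right-sizing review often starts from a simple resource inventory. The sketch below uses boto3; the "workload" tag key is a hypothetical convention for attributing spend to teams.

```python
# Sketch: inventory running EC2 instances to support cost and right-sizing reviews.
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            # "workload" is a hypothetical tag used to attribute spend to teams.
            print(inst["InstanceId"], inst["InstanceType"], tags.get("workload", "untagged"))
```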
Monitoring and Alerting:
Implement comprehensive monitoring solutions (e.g., Prometheus, Grafana, CloudWatch) to track system performance, AI model health, and data quality.
Configure alerts and notifications to ensure timely response to critical issues, including model drift or performance degradation.
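Model-health signals like drift are usually exposed as custom metrics that Prometheus scrapes and Grafana or Alertmanager alerts on. Below is a minimal sketch using the prometheus_client library; the metric name and the randomly generated drift score are placeholders for illustration only.

```python
# Sketch: expose a custom model-health metric for Prometheus to scrape.
import random
import time

from prometheus_client import Gauge, start_http_server

drift_score = Gauge(
    "model_feature_drift_score",
    "Drift score for the serving model's input features (hypothetical metric)",
)

if __name__ == "__main__":
    start_http_server(8000)   # metrics become available at :8000/metrics
    while True:
        # In a real service this would be computed from recent inference traffic;
        # a random value stands in here purely for illustration.
        drift_score.set(random.random())
        time.sleep(60)
```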
AI and Data Science MLOps:
Collaborate with data scientists to develop and deploy AI models into production.
Implement MLOps practices to manage the entire lifecycle of AI models, including versioning, experimentation, and reproducibility.
Use tools like Kubeflow, MLflow, or Airflow to automate ML workflows.
Ensure data privacy and security compliance within AI and data science pipelines.
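Of the tools listed above, MLflow is a common choice for the versioning and reproducibility piece. The sketch below logs a run with the MLflow Python API; the tracking URI, experiment name, parameters, and metric values are all hypothetical.

```python
# Sketch: track an experiment run with MLflow so models stay versioned and reproducible.
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")   # hypothetical tracking server
mlflow.set_experiment("example-risk-model")              # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("auc", 0.83)
    # Artifacts (model files, plots) can be logged as well, e.g.:
    # mlflow.log_artifact("reports/roc_curve.png")
```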
Collaboration and Problem-Solving:
Work closely with development, data science, and AI teams to understand their requirements and provide technical guidance.
Collaborate with other DevOps team members to share knowledge and best practices, particularly related to AI and data science.
Identify and resolve complex technical challenges, including those specific to AI and data science applications.
Key Required Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field.
3+ years of experience as a DevOps Engineer or a similar role, with a focus on AI and data science.
Certification in AWS (Amazon Web Services) is required, demonstrating a strong understanding of cloud architecture, services, and best practices.
Kubernetes certification (CKA or CKAD) is required, showcasing expertise in container orchestration, deployment, and management at scale.
Strong proficiency in AWS cloud services and tools.
Experience with Terraform and AWS CloudFormation for infrastructure automation.
In-depth knowledge of Kubernetes and containerization technologies (Docker).
Experience with Helm charts and CI/CD pipelines, tailored for AI and data science workflows.
Understanding of scripting languages (e.g., Bash, Python).
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration abilities.
Preferred Qualifications:
Certification in AWS (e.g., AWS Certified DevOps Engineer)
Experience with serverless computing (e.g., AWS Lambda, EKS)
Knowledge of security best practices and compliance frameworks
Experience with microservices architecture
Familiarity with data engineering concepts and tools
Experience with Jenkins, ArgoCD and Atlantis for GitOps-based deployments
Understanding of healthcare data and regulatory compliance (e.g., HIPAA)
Experience with AI and data science frameworks (e.g., TensorFlow, PyTorch).
Knowledge of MLOps principles and tools.