Global Atlantic Financial Group
AVP, DevOps Engineer
Global Atlantic Financial Group, Boston, Massachusetts, US, 02298
Summary:
The Assistant Vice President of DevOps – Data will be responsible for integrating project functions and resources across the product life cycle, from planning, building, and testing through deployment and support. The Assistant Vice President will design and implement efficient procedures and pipelines for software development and infrastructure deployment, and will manage and deploy key data systems and services. The role will also expand and optimize the data and data pipeline architecture, as well as optimize data flow and collection for cross-functional teams. The Assistant Vice President will work with cloud engineers, data engineers, system administrators, data administrators, and architects to find opportunities to leverage DevOps technologies to process large volumes of data. The role will work closely with business stakeholders, data architects, and IT teams to support data strategy initiatives and will ensure that optimal data system performance and architecture remain consistent throughout the strategy. The Assistant Vice President will also work with DevOps engineers to build CI/CD pipelines for Infrastructure as Code and automated deployment. This role requires a highly motivated individual with strong leadership, technical, and data capabilities; excellent communication and collaboration skills; and the ability to develop solutions for and troubleshoot a diverse range of problems.
Responsibilities
Build continuous integration and continuous deployment (CI/CD) pipelines for data infrastructure using Terraform and GitLab CI/CD
Build containerized application platforms using Docker and Kubernetes
Select and deploy GitLab CI/CD tools and pipelines
Implement development, testing, and automation tools, and data infrastructure
Deploy updates and fixes; install and maintain software, services, and applications by identifying system requirements
Maintain environments by identifying system requirements, installing upgrades, monitoring system performance, and building automated processes
Define and set development, test, release, update, and support processes for DevOps operations
Review, verify, and validate the software code developed in the project
Identify and deploy cybersecurity measures by continuously performing vulnerability assessments and risk management
Plan the team structure, activities, and involvement in project management activities
Lead a team of DevOps engineers, both onsite and offshore
Set goals and conduct performance reviews for direct reports
Understand the business objectives of the company and create cloud-based solutions to facilitate those objectives
Build pipelines to migrate systems from on-premises to the AWS cloud
Maintain system performance through system monitoring, analysis, and performance tuning
Upgrade systems and services by developing, testing, evaluating, and installing enhancements and new software
Communicate with the technology team and stakeholders and build applications to meet project needs
Troubleshooting, incident management, and root cause analysis
QUALIFICATIONS
Bachelor’s degree in computer science or engineering
Minimum of 12 years of experience in DevOps engineering
Minimum of 5 years of experience with Terraform and GitLab CI/CD
Minimum of 5 years of AWS Cloud experience, including EC2, ECS, VPC, EMR, Lambda, Glue, Batch, RDS, and Redshift
Experience with CloudFormation and Infrastructure as Code tooling
Experience with JFrog Artifactory for hosting and managing binaries and artifacts
Experience with programming languages such as Python and JavaScript
Experience building container platforms using Docker
Experience building and maintaining container platforms using Kubernetes
Experience with Agile software development using JIRA
Experience with multiple OS platforms, with a strong emphasis on Linux and Windows systems
Experience with OS-level scripting environments such as ksh, Bash, and PowerShell
Experience with version management tools and CI/CD pipelines
In-depth knowledge of the TCP/IP protocol suite, security architecture, and securing and hardening operating systems, networks, databases, and applications
Experience supporting and optimizing data pipelines and data sets
Knowledge of the Incident Response lifecycle
Strong written and verbal communication skills
This position is not eligible for visa sponsorship, now or in the future.