Saxon Global
Platform Engineer
Saxon Global, Carlisle, Pennsylvania, United States, 17013
Responsibilities
- Develop general infrastructure technology in a public/private cloud
- Design, configure, optimize, manage, monitor, document, and support platform services and components, as well as supporting enterprise data ingestion
- Assist in the delivery of technical projects
- Participate in design sessions and code reviews to elevate the quality of engineering across the organization
- Spearhead new feature use (innovate within existing tooling)
- Spearhead new software acquisition and use (innovate with new tooling)
- Leverage automation to remove redundant, error-prone tasks and improve the quality of solutions
- Provide advanced system administration, operational support, and problem resolution for a large, complex cloud computing environment, and develop scripts to automate the deployment of resource stacks and associated configurations
- Extend standard system management processes into the cloud, including change, incident, and problem management
- Develop and maintain a library of deployable, tested, and documented automation design scripts, processes, and procedures
- Enable DevOps development activities and complex development tasks involving a wide variety of tools and container management systems
- Coordinate and bring together application experts and other infrastructure teams to find optimal solutions to issues related to capacity, security, and performance
- Implement and maintain CI/CD solutions and create code deployment models to support self-service automation
Qualifications
- Proven track record with at least 4 years of experience in DevOps data platform development
- Proficiency in infrastructure-as-code concepts and tools (e.g., Terraform, Ansible) for automating resource provisioning and configuration
- Hands-on experience with CI/CD pipeline tools (e.g., Jenkins, CircleCI) and version control systems (e.g., GitHub)
- Strong understanding of DevOps concepts (Azure DevOps framework and tools preferred)
- Strong working knowledge of networking concepts (DNS, DHCP, firewalls, subnetting, etc.); Azure preferred
- Solid scripting skills in languages such as Python, Bash, or similar
- Solid understanding of monitoring/observability concepts and tooling
- Extensive experience with and strong understanding of cloud and infrastructure components
- Strong problem-solving and analytical skills, with the ability to troubleshoot complex DevOps platform issues and provide effective solutions
- Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders
- 4+ years of professional infrastructure and/or software development experience
- 3+ years of experience with AWS, GCP, Azure, or another cloud service (Azure preferred)
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Bonus:
- Experience with Databricks, more specifically with Unity Catalog implementations for enterprise customers
- Experience with Kafka and/or MongoDB
Required Skills: Unity Catalog implementation; CI/CD (Jenkins); infrastructure as code (Terraform); Azure; remote
Background Check: Yes
Drug Screen: Yes
Notes:
Selling points for candidate: remote
Project Verification Info: "The information provided below is for Apex Systems AV use only and is not to be distributed publicly, or to any third party. Any distribution of the below information will result in corrective action from Apex Systems Vendor Management. MSA: Blanket Approval Received. Client Letter: Will Provide." Finishing implementation project.
Candidate must be your W2 Employee: No
Exclusive to Apex: No
Face to face interview required: No
Candidate must be local: No
Candidate must be authorized to work without sponsorship: No
Interview times set: No
Type of project: 0012660 | TAPFIN @ Retail Business Services, LLC
Master Job Title:
Branch Code: