Saxon Global
Platform Engineer
Saxon Global, Carlisle, Pennsylvania, United States, 17013
Responsibilities
- Develop general infrastructure technology in a public/private cloud
- Design, configure, optimize, manage, monitor, document, and support platform services and components, and support enterprise data ingestion
- Assist in the delivery of technical projects
- Participate in design sessions and code reviews to elevate the quality of engineering across the organization
- Spearhead new feature use (innovate within existing tooling)
- Spearhead new software acquisition and use (innovate with new tooling)
- Leverage automation to remove redundant, error-prone tasks and improve the quality of solutions
- Provide advanced system administration, operational support, and problem resolution for a large, complex cloud computing environment, and develop scripts to automate the deployment of resource stacks and associated configurations (see the brief sketch after this list)
- Extend standard system management processes into the cloud, including change, incident, and problem management
- Develop and maintain a library of deployable, tested, and documented automation scripts, processes, and procedures
- Enable DevOps development activities and complex development tasks involving a wide variety of tools and container management systems
- Coordinate with application experts and other infrastructure teams to find optimal solutions to issues related to capacity, security, and performance
- Implement and maintain CI/CD solutions and create code deployment models to support self-service automation
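To give a concrete sense of the deployment-automation work described above, here is a minimal sketch (not part of the original posting) of a Python wrapper around the Terraform CLI that provisions a resource stack non-interactively. The stack directory and plan-file name are hypothetical placeholders, and the sketch assumes the Terraform CLI is installed and the directory already contains .tf configuration.

```python
"""Minimal sketch: automate deployment of a resource stack with Terraform.

Assumes the Terraform CLI is installed and that STACK_DIR (a hypothetical
placeholder path) contains the stack's .tf configuration files.
"""
import subprocess
from pathlib import Path

STACK_DIR = Path("./resource-stack")  # hypothetical path to the stack's Terraform config
PLAN_FILE = "stack.tfplan"            # hypothetical saved-plan file name


def run_terraform(*args: str) -> None:
    """Run a Terraform subcommand in the stack directory, failing fast on errors."""
    subprocess.run(["terraform", *args], cwd=STACK_DIR, check=True)


def deploy_stack() -> None:
    # Initialize providers/modules, write a saved plan, then apply exactly that plan.
    run_terraform("init", "-input=false")
    run_terraform("plan", "-input=false", f"-out={PLAN_FILE}")
    run_terraform("apply", "-input=false", PLAN_FILE)


if __name__ == "__main__":
    deploy_stack()
```

In practice, a script like this would typically run as a stage in a Jenkins or Azure DevOps pipeline so that stack deployments become a repeatable, self-service step rather than a manual task.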
Qualifications
- Proven track record with at least 4 years of experience in DevOps data platform development
- Proficiency in infrastructure-as-code concepts and tools (e.g., Terraform, Ansible) for automating resource provisioning and configuration
- Hands-on experience with CI/CD pipeline tools (e.g., Jenkins, CircleCI) and version control systems (e.g., GitHub)
- Strong understanding of DevOps concepts (Azure DevOps framework and tools preferred)
- Strong working knowledge of networking concepts (DNS, DHCP, firewalls, subnetting, etc.); Azure preferred
- Solid scripting skills in languages such as Python, Bash, or similar
- Solid understanding of monitoring/observability concepts and tooling
- Extensive experience with and strong understanding of cloud and infrastructure components
- Strong problem-solving and analytical skills, with the ability to troubleshoot complex DevOps platform issues and provide effective solutions
- Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders
- 4+ years of professional infrastructure and/or software development experience
- 3+ years of experience with AWS, GCP, Azure, or another cloud service (Azure preferred)
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field
Bonus:
- Experience with Databricks, specifically Unity Catalog implementations for enterprise customers
- Experience with Kafka and/or MongoDB
Required Skills: Unity Catalog implementation; CI/CD (Jenkins); infrastructure as code (Terraform); Azure
Remote: Yes
Background Check: Yes
Drug Screen: Yes