Pennant Solutions Group
Remote DevOps Engineer (Kafka/Kong)
Pennant Solutions Group, Portland, Oregon, United States, 97204
Senior Kafka Administrator/Kong Administrator with DevOps Experience
Remote - Contract to Hire

Our client is seeking a skilled and experienced Senior Kafka/Kong Administrator to join their IT operations team. As a Kafka/Kong Administrator, you will be responsible for the deployment, configuration, maintenance, and monitoring of Kafka clusters and Kong gateways, ensuring the high availability, reliability, and optimal performance of real-time platforms. Your role will be crucial in supporting the data engineering and application teams to deliver seamless data integration and processing.

Responsibilities: Kafka Administration
Cluster Deployment and Configuration:
- Deploy and configure Kafka clusters following best practices for scalability, security, and performance.
- Collaborate with cross-functional teams to gather requirements and design Kafka infrastructure to meet data-streaming needs.
- Manage topics, partitions, replication factors, and broker configurations to ensure efficient data distribution and fault tolerance.
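As a hedged sketch of the broker-side defaults this duty typically involves (all values here are illustrative placeholders, not recommendations for any specific workload):

```properties
# server.properties (fragment) -- illustrative values only
broker.id=1
# Default partition count for auto-created topics
num.partitions=6
# Each partition is copied to three brokers for fault tolerance
default.replication.factor=3
# Writes with acks=all must reach two in-sync replicas
min.insync.replicas=2
# Prefer temporary unavailability over potential data loss
unclean.leader.election.enable=false
# Retain log segments for seven days
log.retention.hours=168
```

Per-topic overrides (partition counts, retention, and so on) are usually set at topic-creation time rather than in the broker config.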
Monitoring and Performance Optimization:
- Implement monitoring and alerting solutions to proactively identify and address performance bottlenecks, resource constraints, and anomalies.
- Conduct regular performance testing, load testing, and capacity planning to ensure clusters can handle anticipated workloads.
High Availability and Disaster Recovery:
- Design and implement high-availability strategies, including failover mechanisms and data replication across multiple data centers or cloud regions.
- Develop and maintain disaster recovery plans and procedures to minimize data loss and downtime in the event of failures.
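One common way to implement cross-region replication is Kafka's bundled MirrorMaker 2; a minimal configuration sketch, with cluster names and broker addresses as placeholders, might look like:

```properties
# connect-mirror-maker.properties (fragment) -- names and addresses are placeholders
clusters = primary, dr
primary.bootstrap.servers = kafka-primary-1:9092,kafka-primary-2:9092
dr.bootstrap.servers = kafka-dr-1:9092,kafka-dr-2:9092
# Mirror all topics from the primary region to the DR region
primary->dr.enabled = true
primary->dr.topics = .*
# Replication factor for mirrored topics on the target cluster
replication.factor = 3
```

A file like this is passed to the stock connect-mirror-maker.sh launcher; note that failing consumers over to the DR cluster additionally requires offset translation, which MirrorMaker 2 supports via its checkpoint connector.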
Security and Compliance:
- Implement and manage security measures such as encryption, authentication, and authorization to ensure data privacy and compliance with industry standards.
- Stay updated on security vulnerabilities and patches, applying necessary updates to maintain a secure Kafka environment.
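As a sketch of the encryption and authentication side of this duty (keystore paths, passwords, and the choice of SASL mechanism are placeholder assumptions):

```properties
# server.properties (fragment) -- paths, passwords, and mechanism are placeholders
# Accept only TLS-encrypted, SASL-authenticated connections
listeners = SASL_SSL://0.0.0.0:9093
security.inter.broker.protocol = SASL_SSL
sasl.enabled.mechanisms = SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol = SCRAM-SHA-512
ssl.keystore.location = /etc/kafka/ssl/broker.keystore.jks
ssl.keystore.password = changeit
ssl.truststore.location = /etc/kafka/ssl/broker.truststore.jks
ssl.truststore.password = changeit
# Deny access unless an ACL explicitly grants it
# (the authorizer class name differs between ZooKeeper and KRaft deployments)
authorizer.class.name = kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found = false
```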
Troubleshooting and Issue Resolution:
- Diagnose and resolve Kafka-related issues, including performance degradation, data loss, and connectivity problems.
- Collaborate with development and data engineering teams to troubleshoot consumer/producer application integration with Kafka.
Documentation and Knowledge Sharing:
- Maintain thorough documentation of Kafka configurations, deployment processes, and troubleshooting procedures.
- Provide training and knowledge-sharing sessions to junior team members and other stakeholders.
Collaboration and Communication:
- Collaborate with cross-functional teams, including data engineers, developers, and system administrators, to ensure smooth integration of Kafka into the data ecosystem.
- Communicate effectively with stakeholders to provide updates on Kafka performance, maintenance, and improvements.
Responsibilities: Kong Administration
- Hands-on experience installing and configuring Kong API Gateway, as well as integrating it with existing API frameworks.
- Describe and configure data-plane security using RBAC, Roles, Workspaces, and Teams.
- Configure upstreams and load balancing.
- Create Services, Routes, and Consumers.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- 5+ years of hands-on experience administering Apache Kafka clusters in production environments.
- 5+ years of proficiency in Kafka/Kong architecture, installation, configuration, and tuning.
- Strong understanding of Kafka internals, including topics, partitions, replication, and consumer/producer APIs.
- 5+ years of experience with Kafka ecosystem tools such as Kafka Connect, Kafka Streams, and Schema Registry.
- Knowledge of Linux systems and shell scripting.
- Strong DevOps background.
- 5+ years of familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization (e.g., Docker, Kubernetes).
- Excellent problem-solving skills and the ability to troubleshoot complex issues efficiently.
- Strong communication skills and the ability to work collaboratively in a team environment.