Senior AI Infrastructure Engineer
WEX, San Francisco, CA, United States
About the Team/Role
WEX is an innovative global commerce platform and payments technology company working to lead the way in a rapidly changing environment by simplifying the business of doing business for customers, freeing them to spend more time, with less worry, on the things they love and care about. We are building a consistent, world-class user experience across our products and services and leveraging customer-focused innovation across all our strategic initiatives, including big data, AI, and risk. Our AI Infrastructure team is pivotal in enabling these advancements.
We are looking for a highly motivated, high-potential Senior Engineer to join our AI Infrastructure team, where you will make significant contributions to our cloud-based AI solutions and grow your career.
This is an exciting time to be on the AI Infrastructure team at WEX. Our team builds and maintains the robust, scalable, and secure cloud infrastructure that powers our AI and machine learning initiatives. We work with cutting-edge technologies such as AWS, Azure, Docker, and Kubernetes to create a dynamic environment that supports the development and deployment of AI models at scale.
You will work on challenging problems with significant business-impact potential and plenty of room to grow, supported by a strong team of highly talented engineers and leaders who will guide and coach you.
If you aspire to be a strong engineer who solves tough problems, delivers big impact, and grows fast, this is a great opportunity for you!
How you'll make an impact
Collaborate with data scientists, ML engineers, and stakeholders to understand the requirements and challenges of AI/ML workloads.
Design, implement, and maintain scalable, secure cloud infrastructure on AWS and Azure to support AI/ML workloads, using infrastructure-as-code (IaC) tools such as Terraform.
Manage containerization (Docker) and orchestration (Kubernetes) for efficient deployment and scaling of AI/ML applications.
Develop and optimize CI/CD pipelines for automating the build, test, and deployment of AI/ML models and infrastructure.
Implement robust monitoring and alerting systems to ensure the health, performance, and reliability of production AI infrastructure.
Proactively analyze system performance data to identify bottlenecks, optimize resource utilization, and improve overall efficiency.
Stay current with emerging cloud technologies, tools, and best practices in the AI/ML infrastructure space.
Mentor and guide junior team members, fostering a culture of continuous learning and knowledge sharing.
Contribute to the team's technical roadmap and strategic initiatives.
Troubleshoot complex technical issues and provide timely solutions.
Participate in on-call rotation to ensure 24/7 availability and support of critical AI infrastructure.
Advocate for your positions while fully supporting team decisions.
Experience you'll bring
Bachelor's degree in Computer Science, Software Engineering, or a related field, or equivalent demonstrable understanding, experience, and capability.
A Master's or PhD degree in Computer Science (or related field) is a plus.
4+ years of experience in software engineering or cloud infrastructure, with a strong focus on AI/ML infrastructure.
Demonstrable, advanced programming skills in a strongly typed, general-purpose language such as Java, Python, C/C++, or Go.
Strong understanding of cloud platforms (AWS and Azure), including services relevant to AI/ML (e.g., EC2, S3, EKS, Azure ML, AKS).
Hands-on experience with containerization (Docker) and container orchestration (Kubernetes) in production environments.
Extensive experience building and managing CI/CD pipelines for infrastructure and ML model deployment, using tools such as Jenkins or GitLab CI/CD.
Strong understanding of networking concepts (VPC, subnets, routing, firewalls) and experience configuring network infrastructure in the cloud.
Experience with infrastructure monitoring and alerting tools (e.g., Prometheus, Grafana, CloudWatch, Azure Monitor).
Strong scripting skills (Python, Bash) for automation and configuration management.
Excellent problem-solving skills, with the ability to analyze complex systems and identify performance bottlenecks.
Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Preferred Qualifications
Experience with MLOps tools and practices.
Familiarity with infrastructure as code (IaC) tools like Terraform or CloudFormation.
Contributions to open-source projects related to AI/ML infrastructure.
Experience with big data technologies (e.g., Hadoop, Spark).