CV Library
Senior ML Infrastructure Engineer
San Francisco, California, United States, 94199
Kuzco is seeking a Senior ML Infrastructure Engineer to join our team. This role involves developing large-scale, fault-tolerant systems that handle millions of large-model inference requests per day. If you are passionate about building next-generation ML systems that operate at scale, we want to hear from you.
About Kuzco
We are building a distributed LLM inference network that combines idle GPU capacity from around the world into a single cohesive plane of compute that can be used for running large language models like Llama and Mistral. At any given moment, we have over 5,000 GPUs and hundreds of terabytes of VRAM connected to the network.
We are a small, well-funded team of staff-level engineers who work in-person in downtown San Francisco on difficult, high-impact engineering problems. Everyone on the team has been writing code for over 10 years, and has founded and run their own software companies. We are high-agency, adaptable, and collaborative. We value creativity alongside technical prowess and humility. We work hard, and deeply enjoy the work that we do; we are almost always online at least six days per week.
About the Role
You will be responsible for designing and implementing the core systems that power our globally distributed LLM inference network. You'll work on problems at the intersection of distributed systems, machine learning, and resource optimization.
Key Responsibilities
Design and implement scalable distributed systems for our inference network
Develop models for efficient resource allocation across a network of heterogeneous hardware with a rapidly changing topology
Optimize network latency, throughput, and availability
Build robust logging and metrics systems to monitor network health and performance
Conduct architecture and system design reviews to ensure best practices are followed
Collaborate with founders, engineers, and other stakeholders to improve our infrastructure and product offerings
What We're Looking For
Very strong problem-solving skills and ability to work in a startup environment
5+ years of experience building high-performance systems
Strong programming skills in TypeScript, Python, and one of Go, Rust, or C++
Solid understanding of distributed systems concepts
Knowledge of orchestrators and schedulers like Kubernetes and Nomad
Experience using AI tooling in your development workflow (ChatGPT, Claude, Cursor, etc.)
Experience with LLM inference engines like vLLM or TensorRT-LLM is a plus
Experience with GPU programming and optimization (CUDA experience is a plus)
Compensation
We offer competitive compensation, equity in a high-growth startup, and comprehensive benefits. The base salary range for this role is $180,000 - $250,000, plus equity and benefits, depending on experience.
Equal Opportunity
Kuzco is an equal opportunity employer. We welcome applicants from all backgrounds and do not discriminate on the basis of any legally protected characteristic, including genetics and veteran status.
If you're excited about building the future of developer-first AI infrastructure, we'd love to hear from you. Please send your resume, LinkedIn, and GitHub to sam@kuzco.xyz.