Liquid AI
Member of Technical Staff - AI Inference Engineer
Liquid AI, San Francisco, California, United States, 94199
As we prepare to deploy our models across various device types, including GPUs, CPUs, and NPUs, we're seeking an expert who can optimize inference stacks tailored to each platform. We're looking for someone who can take our models, dive deep into the task, and return with a highly optimized inference stack, leveraging existing frameworks such as ggml, vLLM, and DeepSpeed to deliver exceptional throughput and low latency.
The ideal candidate is a highly skilled engineer with extensive experience in CUDA, C++, and Triton, as well as a deep understanding of GPU, CPU, and NPU architectures. They should be self-motivated, capable of working independently, and driven by a passion for optimizing performance across diverse hardware platforms. Proficiency in building and enhancing inference stacks using frameworks such as ggml, vLLM, and DeepSpeed is essential. Additionally, experience with mobile development and expertise in cache-aware algorithms will be highly valued.
Responsibilities
- Collaborate with ML Teams: Requires proficiency in Python and PyTorch to effectively interface with machine learning staff at a technical level.
- Hardware Awareness: Must understand modern hardware architecture, including cache hierarchies and memory access patterns, and their impact on performance.
- Proficient in Coding: Expertise in Python, PyTorch, and at least one of CUDA, Triton, or C++ is essential for this role.
- Optimization of Low-Level Primitives: Responsible for optimizing core primitives to ensure efficient model execution.
- Self-Guided and Ownership: Ability to independently take a PyTorch model and inference requirements (e.g., maximize GPU throughput or minimize CPU latency) and deliver a fully optimized stack with minimal guidance.
- Research-Driven: Should stay up to date with advancements in ML inference, such as new quantization techniques or speculative decoding, while maintaining focus on delivering practical solutions.