Beacon Talent
Founding Machine Learning Engineer
Beacon Talent, San Francisco, California, United States, 94199
Founding/Staff Machine Learning Engineer – Generative AI
Our client is a venture-backed YC startup revolutionizing the supply chain sector with cutting-edge generative AI technologies. As a rapidly growing organization, they are applying advancements in machine learning to address complex industry challenges, unlocking unprecedented efficiencies and insights for their clients.
Role Overview
We are seeking a Founding/Staff Machine Learning Engineer with deep expertise in generative AI to lead critical technical efforts in developing and deploying state-of-the-art solutions. This role is for an experienced engineer who thrives on building robust, scalable systems and is passionate about advancing the frontiers of AI. This position requires hands-on experience in training large language models (LLMs), working with embedding models and vector databases, and developing AI-powered chatbot solutions.
Key Responsibilities
Train and fine-tune LLM foundation models (e.g., GPT, Claude, PaLM 2, LLaMA) using cutting-edge techniques and frameworks, ideally on AWS SageMaker (a fine-tuning sketch follows this list).
Design, implement, and optimize embedding models for a variety of applications.
Build and deploy AI-powered chatbots using frameworks like LangChain or LangGraph.
Integrate and manage vector databases (e.g., MongoDB Atlas Vector Search, Milvus, Weaviate, Pinecone) to support efficient similarity search and retrieval (see the retrieval-augmented chatbot sketch after this list).
Collaborate closely with cross-functional teams to align AI-driven solutions with business objectives in the supply chain domain.
Write clean, maintainable, and scalable code in Python; TypeScript experience is a strong plus.
Drive the end-to-end lifecycle of machine learning models, from research and experimentation to production deployment and monitoring.
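For illustration only, here is a minimal sketch of how a fine-tuning job of the kind described above might be launched with the SageMaker Python SDK's Hugging Face estimator. The entry script, source directory, instance type, framework versions, base model, S3 paths, and hyperparameters are placeholder assumptions, not part of the role description.

```python
# Hypothetical sketch: launching an LLM fine-tuning job on AWS SageMaker.
# Script name, instance type, framework versions, base model, S3 locations,
# and hyperparameters below are illustrative assumptions only.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

estimator = HuggingFace(
    entry_point="train.py",            # assumed fine-tuning script (e.g., LoRA on an open LLM)
    source_dir="scripts",              # assumed local directory containing train.py
    instance_type="ml.g5.2xlarge",     # assumed single-GPU training instance
    instance_count=1,
    role=role,
    transformers_version="4.36",       # assumed framework versions; confirm the
    pytorch_version="2.1",             # supported combinations in the SageMaker docs
    py_version="py310",
    hyperparameters={
        "model_name_or_path": "meta-llama/Llama-2-7b-hf",  # assumed base model
        "epochs": 3,
        "per_device_train_batch_size": 4,
        "learning_rate": 2e-5,
    },
)

# Assumed S3 locations for the prepared training/validation datasets.
estimator.fit({
    "train": "s3://example-bucket/datasets/train/",
    "validation": "s3://example-bucket/datasets/validation/",
})
```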
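Similarly, a framework-agnostic sketch of the embedding, vector-retrieval, and chatbot pattern the responsibilities describe, using the OpenAI Python SDK with an in-memory cosine-similarity search standing in for a managed vector database (Pinecone, Milvus, Weaviate, or Atlas Vector Search). The model names and document snippets are illustrative assumptions.

```python
# Hypothetical sketch of retrieval-augmented chat: embed documents, retrieve the
# closest ones to a question, and pass them to an LLM as context. The in-memory
# list stands in for a managed vector database; model names are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = [
    "Carrier A averages 3.2 days transit on the Oakland-Dallas lane.",
    "Safety stock for SKU-1042 is set to two weeks of average demand.",
    "Port congestion alerts are refreshed every six hours.",
]  # illustrative supply-chain snippets

def embed(texts: list[str]) -> np.ndarray:
    """Return L2-normalized embeddings so a dot product equals cosine similarity."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vectors = np.array([item.embedding for item in response.data])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

doc_vectors = embed(documents)  # in production these would live in a vector database

def answer(question: str, top_k: int = 2) -> str:
    query_vector = embed([question])[0]
    scores = doc_vectors @ query_vector            # cosine similarity per document
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",                       # assumed chat model
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How long does Oakland to Dallas take?"))
```

In a production system the retrieval step would be a query against one of the vector databases named above rather than an in-memory dot product, and the chat layer would typically be orchestrated with LangChain or LangGraph.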
Qualifications
Experience: 5+ years of hands-on experience as a Machine Learning Engineer (not a Data Scientist), with a focus on developing and deploying production-ready solutions.
Foundation Models: Proven experience training and fine-tuning LLMs (GPT, Claude, Gemini/PaLM 2, LLaMA, etc.).
Embedding Models: Strong expertise in designing and implementing embedding-based solutions.
Vector Databases: Practical knowledge of vector databases (MongoDB Atlas Vector Search, Milvus, Weaviate, Pinecone, etc.).
Chatbots: Hands-on experience building AI-powered chatbots, ideally using LangChain or LangGraph.
Technical Skills: Advanced proficiency in Python; experience with TypeScript is a plus but not required.
Cloud Platforms: Familiarity with AWS, particularly SageMaker, for training and deploying models.
Team Collaboration: Excellent communication and collaboration skills to work in a fast-paced, multidisciplinary environment.
Why Join
Be a foundational team member in a high-impact, venture-backed startup.
Solve meaningful problems with cutting-edge generative AI technologies.
Work in a dynamic, collaborative environment in the heart of Silicon Valley.
Enjoy competitive compensation, benefits, and equity opportunities.