
Senior Machine Learning Engineer - GenAI

Tbwa Chiat/Day Inc, San Mateo, California, United States, 94409


San Mateo, CA / Remote

Why PlayStation?
PlayStation isn't just the Best Place to Play; it's also the Best Place to Work. Today, we're recognized as a global leader in entertainment, producing the PlayStation family of products and services, including PlayStation 5, PlayStation 4, PlayStation VR, PlayStation Plus, acclaimed PlayStation software titles from PlayStation Studios, and more. PlayStation also strives to create an inclusive environment that empowers employees and embraces diversity. We welcome and encourage everyone who has a passion and curiosity for innovation, technology, and play to explore our open positions and join our growing global team. The PlayStation brand falls under Sony Interactive Entertainment, a wholly-owned subsidiary of Sony Corporation.

The Role: We are seeking a highly skilled and experienced Senior Machine Learning Engineer with a strong focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and fine-tuning, along with a solid general software development background. This role is pivotal in supporting the development and enhancement of our AI-powered solutions, particularly in cloud environments and with technologies such as Kubernetes, AWS SageMaker, and LangChain/LlamaIndex. Our ideal candidate will have a proven track record of building and leveraging Gen-AI solutions to solve complex enterprise use cases, while demonstrating both expertise and a deep passion for innovation in the field.

What you will do:

RAG and Agent Optimization: Design, develop, and optimize standard RAG pipelines (via Corrective-RAG or GraphRAG) and agentic solutions (e.g., LangGraph) to solve enterprise use cases and enhance our existing products with Gen-AI.
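For context, a minimal, framework-agnostic sketch of the retrieve-then-generate loop at the core of a standard RAG pipeline is shown below; the `embed`, `VectorStore`, and `generate` names are illustrative placeholders rather than the API of LangChain, LlamaIndex, or any other specific framework, and corrective or graph-based variants layer additional steps on top of this skeleton.

```python
from dataclasses import dataclass

# Hypothetical building blocks: in practice these come from an embedding
# model, a vector database, and an LLM client.
@dataclass
class Document:
    text: str
    score: float = 0.0

def embed(text: str) -> list[float]:
    """Placeholder: return an embedding vector for `text`."""
    raise NotImplementedError

class VectorStore:
    def search(self, query_vector: list[float], top_k: int = 4) -> list[Document]:
        """Placeholder: return the top_k most similar documents."""
        raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder: call an LLM and return its completion."""
    raise NotImplementedError

def answer_with_rag(question: str, store: VectorStore, top_k: int = 4) -> str:
    # 1. Retrieve: embed the question and fetch the most relevant passages.
    passages = store.search(embed(question), top_k=top_k)
    context = "\n\n".join(doc.text for doc in passages)
    # 2. Augment: ground the prompt in the retrieved context.
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    # 3. Generate: let the LLM produce the grounded answer.
    return generate(prompt)
```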

Model Deployment and Inference: Leverage ML-specific Kubernetes workloads (e.g., Kubeflow) or cloud-managed solutions (e.g., AWS SageMaker) to manage LLM deployment and serving workloads. Strive to continually optimize the LLM infrastructure and inference engine to keep improving inference quality and speed.
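As a rough sketch of the SageMaker side of this work (Kubeflow or KServe on Kubernetes would be the cluster-native analogue), the snippet below deploys an open-weight LLM behind a real-time endpoint using the SageMaker Python SDK's Hugging Face integration; the model ID, container version, instance type, and environment settings are illustrative assumptions, not a prescribed configuration.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

# Pull a Hugging Face TGI (text-generation-inference) serving container;
# the version string is an example and should match what your region offers.
image_uri = get_huggingface_llm_image_uri("huggingface", version="1.4.2")

model = HuggingFaceModel(
    image_uri=image_uri,
    role=role,
    env={
        "HF_MODEL_ID": "mistralai/Mistral-7B-Instruct-v0.2",  # example model id
        "SM_NUM_GPUS": "1",
        "MAX_INPUT_LENGTH": "4096",
        "MAX_TOTAL_TOKENS": "8192",
    },
)

# Deploy a real-time inference endpoint on a GPU instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    container_startup_health_check_timeout=600,
)

response = predictor.predict({
    "inputs": "Summarize the PlayStation Plus tiers in one sentence.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.2},
})
print(response)
```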

Model Fine-Tuning: Lead data preparation and fine-tuning of LLMs via one or more of the following techniques: Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Direct Preference Optimization (DPO).
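To give a flavor of the SFT stage (DPO follows the same pattern with preference pairs and TRL's DPOTrainer), here is a rough supervised fine-tuning sketch built on the open-source TRL library; the dataset, model name, and hyperparameters are placeholder assumptions, and TRL's argument names shift a little between releases, so treat it as a template rather than a recipe.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset and model choices: any chat-formatted dataset and
# open-weight causal LM would slot in here.
dataset = load_dataset("trl-lib/Capybara", split="train")

# SFTConfig inherits standard TrainingArguments fields; values are examples.
config = SFTConfig(
    output_dir="sft-out",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # small example model for the sketch
    args=config,
    train_dataset=dataset,
)
trainer.train()
```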

Prompt Engineering: Develop and fine-tune prompts to optimize the performance and accuracy of LLM integrations in various applications.
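As a small illustration of what systematic prompt work looks like, the sketch below assembles a few-shot, chat-style prompt for a hypothetical ticket-classification task; the system prompt, labels, and examples are invented purely for illustration.

```python
# Hypothetical few-shot prompt template for a support-ticket classifier;
# the task, labels, and examples are invented for illustration.
SYSTEM_PROMPT = (
    "You are a support assistant. Classify each ticket as one of: "
    "BILLING, ACCOUNT, GAMEPLAY, OTHER. Respond with the label only."
)

FEW_SHOT_EXAMPLES = [
    ("I was charged twice for my subscription.", "BILLING"),
    ("I can't log in after changing my password.", "ACCOUNT"),
]

def build_prompt(ticket: str) -> list[dict]:
    """Assemble a chat-style message list for an LLM API call."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": ticket})
    return messages

print(build_prompt("My controller isn't pairing with the console."))
```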

Monitoring and Logging: Set up robust monitoring and logging for LLM inference and training infrastructure to ensure performance and reliability.
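A minimal sketch of this kind of instrumentation, using the prometheus_client library around an inference call, is shown below; the metric names and the `call_llm` placeholder are assumptions for illustration, and a logging or tracing stack would sit alongside it.

```python
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metrics for an LLM serving path; names are placeholders,
# not an existing dashboard's schema.
INFERENCE_LATENCY = Histogram(
    "llm_inference_latency_seconds", "End-to-end LLM inference latency"
)
TOKENS_GENERATED = Counter(
    "llm_tokens_generated_total", "Total tokens generated by the endpoint"
)
INFERENCE_ERRORS = Counter(
    "llm_inference_errors_total", "Failed LLM inference requests"
)

def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call."""
    raise NotImplementedError

def generate_with_metrics(prompt: str) -> str:
    start = time.perf_counter()
    try:
        completion = call_llm(prompt)
    except Exception:
        INFERENCE_ERRORS.inc()
        raise
    INFERENCE_LATENCY.observe(time.perf_counter() - start)
    TOKENS_GENERATED.inc(len(completion.split()))  # crude token proxy
    return completion

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for Prometheus to scrape
```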

LLM and RAG Evaluation: Design and wire up appropriate evaluation frameworks to compare and evaluate different LLMs, embedding models, and RAG configurations.
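Libraries such as RAGAS or DeepEval supply richer metrics (faithfulness, answer relevancy, and so on); as a simple illustration of the wiring involved, here is a small framework-agnostic harness that compares RAG configurations on retrieval hit rate, with the `RagConfig` callable and the labeled set being invented placeholders.

```python
from typing import Callable

# Each configuration maps a question to the ids of the documents it retrieves.
RagConfig = Callable[[str], list[str]]

def hit_rate(config: RagConfig, labeled_set: list[tuple[str, set[str]]]) -> float:
    """Fraction of questions for which at least one relevant doc is retrieved."""
    hits = 0
    for question, relevant_ids in labeled_set:
        retrieved = set(config(question))
        if retrieved & relevant_ids:
            hits += 1
    return hits / len(labeled_set)

def compare(configs: dict[str, RagConfig],
            labeled_set: list[tuple[str, set[str]]]) -> None:
    """Print a side-by-side comparison of candidate RAG configurations."""
    for name, config in configs.items():
        print(f"{name}: hit rate = {hit_rate(config, labeled_set):.2%}")
```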

Security: Implement best practices for security in LLM and embedding model deployments and infrastructure management.

Productize and Iterate: Work closely with Developer-Experience Engineers to test, iterate, and productize new LLM capabilities to power different products and capabilities (e.g., chat, search, recommendation, agentic workflows, and so on).

Research and Innovation: Stay ahead of the latest advancements in the LLM and RAG space and continually explore opportunities to enhance our products and services.

What you will bring:
- Substantial experience in designing and integrating RESTful APIs with various front-end and back-end systems, ensuring seamless communication between components and services.
- Proficiency in using version control systems, implementing comprehensive testing strategies, debugging complex issues, and conducting effective code reviews to ensure quality and maintainability.
- Proficiency in ML frameworks and libraries commonly used in NLP and Generative AI, such as PyTorch, Transformers, and Sentence Transformers.
- Deep conceptual and working knowledge of at least one RAG framework (e.g., LangChain, LlamaIndex, Haystack).
- Substantial experience leveraging advanced RAG concepts such as Re-Ranking, RAPTOR, Corrective-RAG, and GraphRAG to productionize a standard RAG implementation.
- Experience designing and wiring up agentic workflows (via LangGraph or a similar framework) to intelligently handle user queries or general tasks end-to-end.
- Expertise in prompt engineering and optimizing prompts for LLMs.
- Excellent problem-solving skills and attention to detail.
- Exceptional communication and teamwork skills.

Good-to-have:
- Proficiency with operating in cloud and container-orchestration platforms (AWS and Kubernetes).
- Experience working with automated evaluation frameworks (e.g., DeepEval, RAGAS) to assess LLMs and advanced RAG techniques.
- Expertise in aligning models with fine-tuning techniques such as RLHF and DPO.
- Experience productionizing LLM-based solutions and monitoring KPIs to assess performance and quality.

This is a flexible role that can be remote, with varying pay ranges based on geographic location. For example, if you are based out of Seattle, the estimated base pay range for this role is listed below.

$187,700 - $281,500 USD

Equal Opportunity Statement: Sony is an Equal Opportunity Employer. All persons will receive consideration for employment without regard to gender (including gender identity, gender expression and gender reassignment), race (including colour, nationality, ethnic or national origin), religion or belief, marital or civil partnership status, disability, age, sexual orientation, pregnancy, maternity or parental status, trade union membership or membership in any other legally protected category. We strive to create an inclusive environment, empower employees and embrace diversity. We encourage everyone to respond.

PlayStation is a Fair Chance employer, and qualified applicants with arrest and conviction records will be considered for employment.
