Sony PlayStation

Senior Machine Learning Engineer - GenAI

Sony PlayStation, San Mateo, California, United States, 94409


Why PlayStation?

PlayStation isn't just the Best Place to Play; it's also the Best Place to Work. Today, we're recognized as a global leader in entertainment, producing the PlayStation family of products and services including PlayStation 5, PlayStation 4, PlayStation VR, PlayStation Plus, acclaimed PlayStation software titles from PlayStation Studios, and more. PlayStation also strives to create an inclusive environment that empowers employees and embraces diversity. We welcome and encourage everyone who has a passion and curiosity for innovation, technology, and play to explore our open positions and join our growing global team. The PlayStation brand falls under Sony Interactive Entertainment, a wholly owned subsidiary of Sony Corporation.

San Mateo, CA / Remote

The Role:

We are seeking a highly skilled and experienced Senior Machine Learning Engineer with a strong focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and fine-tuning, along with a solid general software development background. This role is pivotal in supporting the development and enhancement of our AI-powered solutions, particularly in cloud environments and with technologies such as Kubernetes, AWS SageMaker, and LangChain/LlamaIndex. Our ideal candidate will have a proven track record of building and leveraging GenAI solutions to solve complex enterprise use cases, while demonstrating both expertise and a deep passion for innovation in the field.

What you will do:

- RAG and Agent Optimization: Design, develop, and optimize standard RAG solutions (e.g., Corrective-RAG or GraphRAG) and agentic solutions (e.g., LangGraph) to solve enterprise use cases and enhance our existing products with GenAI.
- Model Deployment and Inference: Leverage ML-specific Kubernetes workloads (e.g., Kubeflow) or cloud-managed solutions (e.g., AWS SageMaker) to manage LLM deployment and serving workloads. Continually optimize the LLM infrastructure and inference engine to keep improving inference quality and speed.
- Model Fine-Tuning: Lead data preparation and fine-tuning of LLMs via one or more of the following techniques: Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Direct Preference Optimization (DPO).
- Prompt Engineering: Develop and fine-tune prompts to optimize the performance and accuracy of LLM integrations in various applications.
- Monitoring and Logging: Set up robust monitoring and logging for LLM inference and training infrastructure to ensure performance and reliability.
- LLM and RAG Evaluation: Design and wire up appropriate evaluation frameworks to compare and evaluate different LLMs, embedding models, and RAG configurations.
- Security: Implement best practices for security in LLM and embedding model deployments and infrastructure management.
- Productize and Iterate: Work closely with Developer Experience engineers to test, iterate on, and productize new LLM capabilities that power different products and features (e.g., chat, search, recommendations, and agentic workflows).
- Research and Innovation: Stay ahead of the latest advancements in the LLM and RAG space and continually explore opportunities to enhance our products and services.

What you will bring:

- Expert programming skills in Python, plus good working knowledge of at least one additional language, preferably Go, Rust, or JavaScript. Candidates should be able to write clean, efficient, and well-documented code.
- Substantial experience in designing and integrating RESTful APIs with various front-end and back-end systems, ensuring seamless communication between components and services.
- Proficiency in using version control systems, implementing comprehensive testing strategies, debugging complex issues, and conducting effective code reviews to ensure quality and maintainability.
- Proficiency in ML frameworks and libraries commonly used in NLP and generative AI, such as PyTorch, Transformers, and Sentence Transformers.
- Deep conceptual and working knowledge of at least one RAG framework (e.g., LangChain, LlamaIndex, Haystack).
- Substantial experience leveraging advanced RAG concepts such as re-ranking, RAPTOR, Corrective-RAG, and GraphRAG to productionize a standard RAG implementation.
- Experience designing and wiring up agentic workflows (via LangGraph or a similar framework) to intelligently handle user queries or general tasks end to end.
- Expertise in prompt engineering and optimizing prompts for LLMs.
- Excellent problem-solving skills and attention to detail.
- Exceptional communication and teamwork skills.

Good to have:

- Proficiency operating in cloud and container-orchestration platforms (AWS and Kubernetes).
- Experience working with automated evaluation frameworks (e.g., DeepEval, RAGAS) to assess LLMs and advanced RAG techniques.
- Expertise in aligning models with techniques such as RLHF and DPO.
- Experience productionizing LLM-based solutions and monitoring KPIs to assess performance and quality.
