Acceler8 Talent

Founding Applied Machine Learning Researcher

Acceler8 Talent, Palo Alto, California, United States, 94306


About the Company:

We are a well-funded Stanford spinout, based in Palo Alto, on a mission to redefine efficiency and affordability in hardware engineering. We are already partnered with some of the world's largest semiconductor companies and are rapidly expanding our customer base. As a company, we are dedicated to revolutionizing the hardware engineering landscape with our foundation model, designed to dramatically enhance the hardware design process. Our vision is to evolve our model to autonomously create cutting-edge hardware designs, pushing the boundaries of efficiency and affordability in hardware engineering.


About the Role as a Founding ML Research Engineer:

As a Founding ML Research Engineer, you will be at the forefront of our AI initiatives. Your responsibilities will include:

- Developing and evaluating extensive systems comprising interconnected Large Language Models (LLMs), Meta-Learning Models (MMMs), and Reinforcement Learning (RL) models.
- Structuring data appropriately for training purposes.
- Utilizing both open-source and closed-source models to build advanced AI architectures.
- Designing models with a deep understanding of constraints and limitations.
- Implementing innovative search and retrieval algorithms tailored to our domain.
- Conducting thorough evaluations of AI models to ensure optimal performance.
- Addressing complex engineering challenges such as latency and costly model inference.


What We Can Offer You:

Joining our team as a Founding ML Research Engineer offers numerous benefits, including:

- The opportunity to work on groundbreaking AI projects that shape the future.
- A collaborative and supportive work environment that fosters innovation.
- A competitive compensation and benefits package.
- Professional development opportunities to expand your skills and knowledge in AI.

Keywords: Machine Learning, Artificial Intelligence, Deep Learning, LLMs, MMMs, Reinforcement Learning, RL, Large Language Models, Meta-Learning Models, AI Architectures, HW-SW Codesign, Quantization, Pruning, Sparsity, Resource Constrained Models