Amazon


Amazon, Cupertino, California, United States, 95014


Senior SoC Functional Modeling Engineer, Annapurna Labs, Machine Learning Accelerators

Custom SoCs (systems-on-chip) are the brains behind AWS's machine learning servers. Our team builds C++ functional models of these accelerator SoCs for use by internal partner teams. We're looking for a Senior SoC Modeling Engineer to join the team and deliver new functional models, infrastructure, and tooling for our customers.

As part of the ML accelerator modeling team, you will:

- Develop and own SoC functional models end-to-end, including model architecture, integration with other model or infrastructure components, testing, and debug.
- Work closely with architecture, RTL design, design verification, emulation, and software teams.
- Innovate on the tooling you provide to customers, making it easier for them to use our SoC models.
- Drive model and modeling infrastructure performance improvements to help our models scale.
- Develop software that can be maintained, improved upon, documented, tested, and reused.

Annapurna Labs, our organization within AWS, designs and deploys some of the largest custom silicon in the world, with many subsystems that must all be modeled and tested to a high standard. Our SoC model is a critical piece of software used both in our SoC development process and by our partner software teams. You'll collaborate with many internal customers who depend on your models to be effective themselves, and you'll work closely with these teams to push the boundaries of how we use modeling to build successful products.

You will thrive in this role if you:

- Are an expert in functional modeling for SoCs, ASICs, TPUs, GPUs, or CPUs.
- Are comfortable modeling in C++ and familiar with Python.
- Enjoy learning new technologies, building software at scale, moving fast, and working closely with colleagues as part of a small team within a large organization.
- Want to jump into an ML-aligned role, or get deeper into the details of ML at the hardware/system level.

Although we are building machine learning chips, no machine learning background is needed for this role. The role spans modeling of both the ML and management regions of our chips, and you'll dip your toes into both. You'll be able to ramp up on ML as part of this role, and any ML knowledge that's required can be learned on the job.

This role can be based in either Cupertino, CA or Austin, TX. The team is split between the two sites, with a slight preference for Cupertino due to colocation with more of our customer teams.

We're changing an industry. We're searching for individuals who are ready for this challenge and who want to reach beyond what is possible today. Come join us and build the future of machine learning!

BASIC QUALIFICATIONS

- 5+ years of non-internship professional experience writing functional or performance models.
- Experience programming with C++.
- Familiarity with SoC, CPU, GPU, and/or ASIC architecture and micro-architecture.

PREFERRED QUALIFICATIONS

- 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing.
- Experience developing and calibrating performance models for custom silicon chips.
- Experience writing benchmarks and analyzing performance.
- Experience with PyTest and GoogleTest.
- Familiarity with modern C++ (11, 14, etc.).
- Experience with multi-threaded programming, vector extensions, HPC, and QEMU.
- Experience with machine learning accelerator hardware and/or software.

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
