Ampere Computing

Sr. Principal Architect: AI Accelerator

Ampere Computing, Portland, Oregon, United States, 97204


Description

Invent the future with us. Recognized on Fast Company's 2023 100 Best Workplaces for Innovators list, Ampere is a semiconductor design company for a new era, leading the future of computing with an innovative approach to CPU design focused on high-performance, energy-efficient, sustainable cloud computing.

By providing a new level of predictable performance, efficiency, and sustainability, Ampere is working with leading cloud suppliers and a growing partner ecosystem to deliver cloud instances, servers, and embedded/edge products that can handle the compute demands of today and tomorrow. Join us at Ampere and work alongside a passionate and growing team - we'd love to have you apply!

About the role:

We are looking for an experienced HW (hardware) DSA (domain-specific accelerator) architect for our future AI accelerator product roadmap. This is an exciting opportunity to be at the forefront of new architecture developments in this strategically critical project. In this role you will leverage your experience in mapping SW (software) algorithms to domain-specific accelerator/offload HW, and work with the Ampere AI software team to define and develop the architecture for the AI product. This will include analyzing workload decomposition, compute kernels, memory usage, and data movement, and helping to guide key decisions including task scheduling, right-sizing compute HW, cache hierarchy, interconnect, virtual memory, IO virtualization, and DMA design. You will provide technical leadership to the rest of the HW architect and design team, and feedback to the SW team, ensuring that the overall solution is optimized for the target workloads.

What you'll achieve:

Joining the Aurora team means you will contribute to the creation of a groundbreaking AI compute product that supports sustainability and performance, designed to operate efficiently across a variety of data center infrastructures.

You will be a critical technical leader on our new AI product roadmap, leveraging your experience and expertise to shape the architecture, evaluate trade-offs and make critical decisions.

Work closely with the SW team and other HW architects and designers to define how target workloads will be mapped to domain-specific HW functions, including the host-device interface, data movement, and vector and matrix execution pipelines.

Help define a flexible HW architecture that meets the performance targets across different form-factors, configurations, and TDPs.

Work with the driver team to help define the host/device interface, including command structures and DMA engines.

Provide guidance and leadership to other architects on the product, to ensure the overall product works efficiently end-to-end for the target workloads.

Help with performance modeling and analysis of the HW for targeted workloads.

Create and maintain clear, detailed and comprehensive product architecture specifications.

About you:

BS degree in Electrical Engineering, Computer Engineering, or Computer Science and 12 years of experience; or MS degree and 8 years of experience; or PhD degree and 5 years of experience.

High-level knowledge and understanding of AI algorithms, compute, and memory requirements.

Experience architecting and/or designing high-performance domain-specific accelerators, especially for AI or GPU applications.

Knowledge and understanding of cache-coherency protocols, cache architecture and/or design, virtual memory architecture, the AMBA suite of interface protocols, and the PCIe and CXL protocols.

Experience implementing the Armv9 ISA and/or Nvidia or AMD GPU ISAs, especially vector and matrix instructions and pipeline-based micro-architectures (preferred).

Experience with PCIe host/device interface models, IO virtualization and DMA engines (preferred).

Experience with performance modeling, e.g. in Python or C++ (preferred).

Must be self-driven, curious, organized and comfortable with ambiguity.

Ability to learn and adapt, which is one of Ampere's foundational principles.

Great communication skills: comfortable reaching out, asking questions, and brainstorming with peers - working at Ampere is very much a team sport!

What we'll offer:

At Ampere we believe in taking care of our employees and providing a competitive total rewards package that includes base pay, bonus, equity, and comprehensive benefits. The full base pay range for this role is between $163,100 and $271,900, except in the San Francisco Bay Area where the range is between $171,800 and $286,300. We offer an annual bonus program tied to internal company goals and annual meritocratic equity awards that enable our employees to participate in the success of the company.

Our benefits include health, wellness, and financial programs that support employees through every stage of life, with full benefits eligibility at 20 hours per week. Benefits highlights include:

Premium medical, dental, and vision insurance, as well as income protection and a 401(k) retirement plan, so that you can feel secure in your health and financial future.

Unlimited Flextime and 10+ paid holidays so that you can embrace a healthy work-life balance.

A variety of healthy snacks, energizing espresso, and refreshing drinks to keep you fueled and focused throughout the day.

And there is much more than compensation and benefits. At Ampere, we foster an inclusive culture that empowers our employees to do more and grow more. We are passionate about inventing industry leading cloud-native designs that contribute to a more sustainable future. We are excited to share more about our career opportunities with you through the interview process.

#LI-CB1 #LI-Hybrid

Ampere is an inclusive and equal opportunity employer and welcomes applicants from all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, religion, age, veteran and/or military status, sex, sexual orientation, gender, gender identity, gender expression, physical or mental disability, or any other basis protected by federal, state or local law.
