d-Matrix

AI Hardware Architect

d-Matrix, Seattle, Washington, US, 98127


d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement, and we have achieved this with a first-of-its-kind DIMC engine. Having secured over $154M in funding, including $110M in our Series B round, d-Matrix is poised to scale generative inference acceleration for Large Language Models with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models.

The company has 100+ employees across Silicon Valley, Sydney, and Bengaluru. Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS, and Wave Computing. Our past successes include building chips for all the global cloud hyperscalers (Amazon, Facebook, Google, Microsoft, Alibaba, Tencent) along with enterprise and mobile operators such as China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon, and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.

Location: Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.

The role: AI Hardware Architect

d-Matrix is seeking outstanding computer architects to help accelerate AI application performance at the intersection of hardware and software, with a particular focus on emerging hardware technologies (such as DIMC, D2D, and PIM) and emerging workloads (such as generative inference). Our acceleration philosophy cuts across the whole system, ranging from efficient tensor cores, storage, and data movement to co-design of dataflow and collective communication techniques.

What you will do:

As a member of the architecture team, you will contribute to features that power the next generation of inference accelerators in datacenters.

This role requires keeping up with the latest research in ML architecture and algorithms, and collaborating with partner teams including hardware design and compilers.

Your day-to-day work will include (1) analyzing the properties of emerging machine learning algorithms and identifying their functional and performance implications, (2) proposing new features to enable or accelerate these algorithms, and (3) studying the benefits of proposed features with performance models (analytical and cycle-level).
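To give a flavor of the last point, the sketch below is a minimal, illustrative first-order analytical model of the kind an architect might use to judge whether a generative-inference decode step is compute- or memory-bound. It is not a d-Matrix tool; every function name, parameter, and hardware number in it is a hypothetical assumption.

```python
# Illustrative only: a toy roofline-style analytical model for one transformer
# decoder layer during a single decode step of generative inference.
# All names and hardware numbers are hypothetical assumptions.

def decoder_layer_estimate(d_model: int, seq_len: int, batch: int,
                           peak_tflops: float, mem_bw_gbs: float) -> dict:
    """Estimate FLOPs, bytes moved, and a latency lower bound for one layer."""
    # Weights: 4 attention projections (q, k, v, out) + 2 MLP matrices (4x expansion).
    weight_params = 4 * d_model * d_model + 2 * d_model * 4 * d_model
    # Decode step: each new token multiplies against all weights once per batch item.
    flops = 2 * weight_params * batch
    # Traffic is dominated by streaming fp16 weights (2 bytes/param)
    # plus reading the KV cache for attention over seq_len previous tokens.
    kv_cache_bytes = 2 * 2 * seq_len * d_model * batch  # K and V, fp16
    bytes_moved = 2 * weight_params + kv_cache_bytes

    compute_time = flops / (peak_tflops * 1e12)
    memory_time = bytes_moved / (mem_bw_gbs * 1e9)
    return {
        "flops": flops,
        "bytes": bytes_moved,
        "bound": "memory" if memory_time > compute_time else "compute",
        "latency_s": max(compute_time, memory_time),
    }

# Example: a 4096-wide layer, 2K context, batch 8, on a hypothetical accelerator.
print(decoder_layer_estimate(d_model=4096, seq_len=2048, batch=8,
                             peak_tflops=200.0, mem_bw_gbs=1000.0))
```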

What you will bring:

Minimum:

MS or MSEE with 3+ years of applicable experience, or PhD with 0-1 years of applicable experience.

Solid grasp, through academic or industry experience, of multiple relevant areas: computer architecture, hardware-software co-design, and performance modeling.

Programming fluency in C/C++ or Python.

Experience developing architecture simulators for performance analysis, or extending existing ones such as cycle-level simulators (gem5, GPGPU-Sim, etc.) or analytical models (Timeloop, MAESTRO, etc.).

A research background with a publication record in top-tier architecture or machine learning venues (such as ISCA, MICRO, ASPLOS, HPCA, DAC, and MLSys) is a huge plus.

Self-motivated team player with a strong sense of collaboration and initiative.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We're committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication, and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.
