d-Matrix

Machine Learning Research Engineer

d-Matrix, Santa Clara, California, US, 95053


d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall to minimize data movement. We've achieved this with a first-of-its-kind DIMC engine. Having secured over $154M, including $110M in our Series B offering, d-Matrix is poised to scale generative inference acceleration for Large Language Models with our chiplet and in-memory compute approach. We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. The company has 100+ employees across Silicon Valley, Sydney and Bengaluru.

Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the cloud hyperscalers globally - Amazon, Facebook, Google, Microsoft, Alibaba and Tencent - along with enterprise and mobile operators like China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon and AT&T. We are recognized leaders in the mixed-signal and DSP connectivity space, now applying our skills to next-generation AI.

Location:

Santa Clara, Hybrid or Remote

The role:

Machine Learning Research Engineer

The Machine Learning Team is responsible for the R&D of core algorithm-hardware co-design capabilities in d-Matrix's end-to-end solution. You will be joining a team of exceptional people enthusiastic about researching and developing state-of-the-art efficient deep learning techniques tailored for d-Matrix's AI compute engine. You will also have the opportunity to collaborate with top academic labs and to help customers optimize and deploy workloads for real-world AI applications on our systems.

What you will do:

Engage and collaborate with Product Managers to define R&D goals.
Engage and collaborate with the internal SW team to meet stack development milestones.
Maintain customer-facing SW tools for customer workload deployment.
Port customer workloads, optimize them for deployment, generate reference implementations and evaluate performance.
Report and present progress in a timely and effective manner.

What you will bring:

Minimum:

Master's degree in Computer Science, Electrical and Computer Engineering, or a related scientific discipline with 1+ years of industry experience.
High proficiency with major deep learning frameworks: PyTorch, TensorFlow.
High proficiency in algorithm analysis, data structures and Python programming.
Deep, broad and current knowledge of machine learning and modern deep learning applications.
Hands-on experience with CNN, RNN and Transformer neural network architectures in production.
Knowledge of and experience with efficient deep learning (quantization, sparsity, distillation) preferred.
Passion for AI and the ability to thrive in a fast-paced and dynamic startup culture.

Preferred:

Experience with AI HW product development and management.
Experience with specialized HW accelerator systems for deep neural networks.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We're committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.