At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration.
We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.
Location:
Hybrid, working onsite at our Santa Clara, CA headquarters 3-5 days per week.
What You Will Do:
In this role, you will join the team that productizes the software stack for our AI compute engine. As part of the Software team, you will be responsible for developing, enhancing, and maintaining the development and testing infrastructure for next-generation AI hardware, and for building and scaling software deliverables within a tight development window. You will leverage the d-Matrix ISA and the dataflow architecture paradigm to build optimized implementations of state-of-the-art large language models that achieve benchmark performance metrics. You will develop Machine Learning op kernels for the graph-based d-Matrix compiler stack, working with a team of compiler engineers, hardware architecture experts, and Machine Learning model researchers. You will also have the opportunity to contribute to research on novel techniques for the Machine Learning software stack, models, and architecture.
What You Will Bring:
- MS or PhD preferred in Computer Science, Electrical Engineering, Math, Physics, or a related field, with 10-12+ years of industry experience.
- Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals.
- Experience mapping NLP models (Transformers, State-Space Models, etc.) to accelerators, with awareness of the trade-offs across memory, bandwidth, and compute.
- Proficiency in Python/C/C++ development in a Linux environment using standard development tools.
- Experience with deep learning frameworks (such as PyTorch or TensorFlow).
- Self-motivated team player with a strong sense of ownership and leadership.
Desired:
- Research background with a publication record in top-tier ML or computer architecture conferences.
- Prior startup, small-team, or incubation experience.
- Experience breaking down Machine Learning models and an understanding of what makes each model unique.
- Experience implementing and optimizing ML workloads and low-level software algorithms for specialized hardware such as FPGAs, DSPs, and DL accelerators.
- Understanding of the nuances of training and deploying distributed ML models, and familiarity with techniques such as quantization and sparsity.
- Experience implementing SIMD algorithms on vector processors.
- Willingness to stay up to date with the latest trends and research in the ML community and to understand how those trends affect d-Matrix's requirements and approach.
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.
What We Do
d-Matrix is building a new way of doing datacenter AI inferencing using in-memory computing (IMC) techniques with chiplet-level scale-out interconnects. Founded in 2019, d-Matrix has attacked the physics of memory-compute integration using innovative circuit techniques, ML tools, software, and algorithms, solving a problem that is the final frontier in AI compute efficiency.