At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration.
We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.
Location:
Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.
What You Will Do:
• Design and verify FPGA-based solutions for d-Matrix AI inference accelerator management.
• Define FPGA microarchitecture specifications and collaborate with stakeholders to ensure alignment with project requirements.
• Develop a resilient dual-boot architecture for multi-core, multi-chiplet booting.
• Design and implement hardware and software modules for platform power management, health monitoring, and telemetry data acquisition.
• Interface with the host server BMC through an SMBus mailbox using management protocol overlays such as MCTP, PLDM, and SPDM.
• Integrate RISC-V CPU cores and related firmware into FPGA designs.
• Develop an eFuse controller within the FPGA.
• Design and integrate a secure boot solution within the FPGA, adhering to NIST standards, to enable secure booting of d-Matrix accelerator chiplets.
• Collaborate with cross-functional teams to ensure seamless hardware-software integration and support inference accelerator hardware bring-up and troubleshooting.
• Author Python scripts for hardware testing and automation.
What You Will Bring:
• Bachelor's degree in Electrical Engineering, Computer Engineering, or a related field (Master's degree preferred), with a minimum of 5 years of experience in FPGA design and verification
• Expertise in hardware design using Hardware Description Languages (HDLs) such as Verilog or VHDL
• Familiarity with RISC-V architecture and embedded systems development
• Understanding of hardware-software integration concepts
• Experience with scripting languages like Python for test automation
• Strong analytical and problem-solving skills
• Excellent communication, collaboration, and teamwork abilities
• Ability to thrive in dynamic environments where innovative problem-solving is key
• Experience with industry-standard management protocols (MCTP, PLDM, SPDM)
• Experience with platform BMC (Baseboard Management Controller)
• Knowledge of power management techniques (PMBus)
• Knowledge of hardware security and secure boot concepts
• Experience with cloud server architectures and concepts
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.
What We Do
d-Matrix is building a new way of doing datacenter AI inferencing using in-memory computing (IMC) techniques with chiplet-level scale-out interconnects. Founded in 2019, d-Matrix has attacked the physics of memory-compute integration using innovative circuit techniques, ML tools, software, and algorithms, solving the memory-compute integration problem, which is the final frontier in AI compute efficiency.