
d-Matrix Tech Stack

AI inference accelerator hardware and compiler stack for datacenters

Semiconductor Manufacturing · Santa Clara, California · 51–200 employees · Founded 2019 · Privately Held

d-Matrix designs custom silicon and system software for AI inference workloads at datacenter scale, using a deep hardware-software stack spanning SystemVerilog, RISC-V, CUDA, PyTorch, and MLIR-based compiler tooling. The hiring profile is almost entirely engineering, with significant staff and principal seniority, concentrated on chip design (SoC verification, tape-out) and compiler infrastructure, a pattern typical of a semiconductor startup executing a multi-year hardware development cycle. Active pain points around inference latency, memory efficiency, and secure datacenter integration reveal the core technical challenges blocking their path to production.

Tech Stack · 86 technologies

Core Stack: PyTorch, C++, Python, Go, MATLAB, Linux, UVM, SystemVerilog, C/C++, SystemC, vLLM, Triton, CUDA, RISC-V, PCIe 5.0, Bash, Git, Raspberry Pi, Zephyr, Ubuntu, TCP/IP, GPIO, UART, PowerShell, GPU, TPU, Cadence, Verilog, VHDL, Perl (+56 more)

What d-Matrix Is Building

Challenges

  • Maximizing throughput
  • Scaling software infrastructure
  • Predictable project delivery
  • Reducing inference latency
  • Tight development window
  • Complex software/hardware integration
  • Optimizing inference performance
  • Enhancing memory utilization and execution efficiency
  • Integrating secure computing into AI accelerators
  • Meeting datacenter security requirements

Active Projects

  • Productize AI compute engine software stack
  • Build compiler infrastructure
  • SoC verification environments and tape-out efforts
  • Hardware diagnostic tools for novel hardware stack
  • AI compute platform focusing on in-memory compute for AI inference in datacenters
  • Runtime firmware for multiprocessor SoC
  • Scalable MLIR compiler for cloud inference
  • Productizing the SW stack for the d-Matrix AI compute engine
  • Software kernels for next-generation AI hardware
  • Chip development cycle program management

Hiring Activity

Steady · 60 roles · 15 in the last 30 days

Department

  • Engineering: 51
  • HR: 1
  • Manufacturing: 1
  • Research: 1
  • Security: 1

Seniority

  • Staff: 17
  • Senior: 14
  • Principal: 12
  • Intern: 8
  • Director: 3
  • Manager: 1

About d-Matrix

d-Matrix builds Corsair, a specialized compute platform for running large language model inference in datacenters at higher throughput and lower power than general-purpose accelerators. The company operates as a full-stack semiconductor firm: designing custom silicon (verified in UVM/SystemVerilog), developing runtime firmware and kernel software for their multiprocessor systems-on-chip, and building compiler infrastructure (MLIR-based) to optimize inference workloads. Founded in 2019 and based in Santa Clara, the company employs 51–200 people and is actively hiring across engineering roles in the United States, India, Canada, and Australia. Their roadmap includes productizing their software stack, scaling compiler tooling, and integrating security features to meet datacenter deployment requirements.

Headquarters: Santa Clara, California
Company Size: 51–200 employees
Founded: 2019
Hiring Markets: United States, India, Canada, Australia

Frequently Asked Questions

What is d-Matrix's tech stack?

d-Matrix uses SystemVerilog and UVM for chip design and verification, RISC-V and PCIe 5.0 for hardware architecture, C/C++ and Python for software, PyTorch and vLLM for AI frameworks, and MLIR-based compilers for inference optimization, with Cadence tools for EDA.

What is d-Matrix working on?

d-Matrix is focused on productizing their AI inference compute engine, building compiler infrastructure (MLIR-based), executing SoC design and tape-out, developing runtime firmware for multiprocessor systems, and creating hardware diagnostic tools for their novel accelerator stack.

Similar Companies in Semiconductor Manufacturing

Other companies in the same industry, closest in size