MatX designs custom silicon optimized for large language model inference and training. The stack is deep systems work: Rust, C++, CUDA, SystemVerilog, and ASIC tools (Synopsys, Allegro), with active firmware and kernel optimization projects. The hiring mix (15 engineers, 2 researchers, a balanced mid/senior split) and the pain-point focus on silicon methodology and cost efficiency suggest a team scaling from architecture through physical design and tapeout while grappling with the financial and operational overhead of semiconductor manufacturing.
MatX builds specialized hardware accelerators designed to run large language models more efficiently than general-purpose GPUs. The company is based in Mountain View and operates as an 11–50-person engineering-driven organization. Active development spans silicon architecture, micro-architecture design, firmware bring-up, and kernel optimization for transformer-based models. The team is hiring across engineering roles (15 open positions) with a focus on mid- and senior-level IC design expertise, alongside research and finance staff to support the manufacturing and fundraising cycles inherent to fabless semiconductor startups.
SystemVerilog for RTL; C/C++ and Rust for firmware and software optimization; assembly for kernel tuning; and TCL for ASIC toolflow automation. Synopsys DFT Compiler and Allegro are in the design flow.
Silicon architecture design, micro-architecture sign-off, kernel optimization for ML models like transformers, firmware bring-up for PHY, physical design flow automation, and LLM training/optimization for custom hardware.
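MatX's actual kernels are proprietary, so as a generic illustration of what "kernel optimization for ML models like transformers" typically involves, the sketch below contrasts naive attention, which materializes the full score matrix, with a tiled variant using an online softmax (the FlashAttention-style technique) that processes keys and values in blocks. This is a NumPy sketch under assumed shapes, not MatX code; all names are illustrative.

```python
import numpy as np

def naive_attention(q, k, v):
    # Reference implementation: builds the full (n x n) score matrix,
    # which dominates memory traffic for long sequences.
    s = (q @ k.T) / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v

def tiled_attention(q, k, v, block=4):
    # Streams keys/values in blocks, maintaining a running row max and
    # softmax denominator (online softmax), so the (n x n) score matrix
    # is never stored. This is the memory-locality trick a hardware
    # kernel would exploit; on an accelerator the blocks map to on-chip SRAM.
    n, d = q.shape
    out = np.zeros_like(q, dtype=np.float64)
    m = np.full(n, -np.inf)   # running row max
    l = np.zeros(n)           # running softmax denominator
    scale = 1.0 / np.sqrt(d)
    for start in range(0, k.shape[0], block):
        kb = k[start:start + block]
        vb = v[start:start + block]
        s = (q @ kb.T) * scale                 # (n, block) partial scores
        m_new = np.maximum(m, s.max(axis=-1))
        p = np.exp(s - m_new[:, None])
        corr = np.exp(m - m_new)               # rescale previous partials
        l = l * corr + p.sum(axis=-1)
        out = out * corr[:, None] + p @ vb
        m = m_new
    return out / l[:, None]
```

Both functions return the same result; the tiled form trades a little extra arithmetic (the rescaling step) for drastically lower memory traffic, which is the usual objective of this class of kernel work.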
Other companies in the same industry, closest in size