AI processor IP for on-device inference without code partitioning
Quadric designs a general-purpose neural processing unit (the Chimera GPNPU) that runs AI inference and C++ code on the same core, eliminating the developer friction of splitting applications across separate processors. The tool stack (LLVM, PyTorch, ONNX Runtime, llama.cpp, CUDA) points to a compiler-first architecture, and active projects on LLVM enhancement, quantization algorithms, and kernel optimization confirm heavy investment in toolchain maturity. Safety certification and licensee acquisition appear as concurrent blockers, suggesting the early-stage commercialization challenges typical of semiconductor IP vendors scaling beyond design partners.
Notable leadership hires: Chief Financial Officer
Quadric licenses a processor architecture designed to reduce friction in AI inference deployment. The Chimera GPNPU scales from 1 to 864 TOPS and combines scalar, vector, and matrix execution under a single instruction set, letting developers write one unified C++ application instead of partitioning code across multiple processors. The company targets semiconductor, edge-device, and automotive customers who need on-device AI with minimal software overhead. Current hiring spans engineering-heavy roles with a marked concentration of senior positions, alongside active work on kernel library development, compiler optimization, and test infrastructure; all of this is typical of an architecture vendor moving from R&D toward volume licensing and production support.
Quadric's Chimera GPNPU is a general-purpose neural processing unit that executes AI inference and C++ code without requiring developers to partition applications across separate processors. It scales from 1 to 864 TOPS and supports scalar, vector, and matrix operations.
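The practical meaning of "no code partitioning" is that the scalar control logic and the matrix math, which on a conventional SoC would be split between a host CPU or DSP and a dedicated NPU with tensors marshalled across the boundary, can live in one ordinary C++ translation unit. A minimal sketch of that programming model, written in plain standard C++ (the function names here are illustrative only and are not part of any Quadric SDK):

```cpp
#include <array>
#include <cstddef>

// Matrix stage: a tiny dense layer (matrix-vector product), the kind of
// workload a dedicated NPU would normally own.
template <std::size_t Out, std::size_t In>
std::array<float, Out> matvec(const std::array<std::array<float, In>, Out>& w,
                              const std::array<float, In>& x) {
    std::array<float, Out> y{};
    for (std::size_t i = 0; i < Out; ++i)
        for (std::size_t j = 0; j < In; ++j)
            y[i] += w[i][j] * x[j];
    return y;
}

// Scalar stage: post-processing control logic that would normally be
// partitioned onto a host CPU or DSP.
template <std::size_t N>
std::size_t argmax_above(const std::array<float, N>& y, float threshold) {
    std::size_t best = N;  // N signals "no class above threshold"
    float best_val = threshold;
    for (std::size_t i = 0; i < N; ++i)
        if (y[i] > best_val) { best_val = y[i]; best = i; }
    return best;
}

// In a unified-ISA model both stages compile into the same program, so
// there is no inference boundary to marshal data across.
std::size_t classify(const std::array<std::array<float, 3>, 2>& w,
                     const std::array<float, 3>& x) {
    return argmax_above(matvec(w, x), 0.5f);
}
```

The sketch only illustrates the single-program structure; on the actual hardware the matrix stage would map onto the GPNPU's matrix execution resources rather than a scalar loop.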
Core tools include LLVM, GCC, PyTorch, TensorFlow, ONNX Runtime, llama.cpp, CUDA, and UVM. The emphasis on LLVM and compiler infrastructure reflects Quadric's focus on toolchain accessibility and kernel optimization.
Other companies in the same industry, closest in size