Custom silicon and systems for low-power AI inference at hyperscale
Tensordyne designs integrated hardware and software for GenAI inference, built on a proprietary logarithmic math layer that replaces multiplication with addition, reducing power consumption at the algorithmic root. The tech stack (Cadence, SystemVerilog, UVM, ASIC, PCIe, SerDes) reflects a full-stack silicon company, while the hiring mix skews heavily toward senior and principal engineers (7 senior, 2 principal, 2 staff across 13 roles), signaling deep technical execution and active scaling of next-generation product lines. Active projects span chip design optimization, deployment infrastructure, and hyperscaler go-to-market, indicating simultaneous maturation of both the silicon and the sales motion.
Tensordyne builds inference acceleration systems for hyperscalers and neo-cloud data centers, targeting the power and cost constraints of running large multimodal models at scale. The company's core innovation is a logarithmic compute architecture embedded in custom silicon, interconnect, and system software; rather than optimizing traditional matrix multiplication, the approach reformulates AI math to use addition-based primitives. The product is positioned to reduce rack footprint, power draw, and operational cost per inference token. With 51–200 employees distributed across Sunnyvale and Munich, the company is actively hiring engineers across chip design, DevOps, and infrastructure while building both hardware delivery capability and enterprise sales coverage.
Logarithmic compute architecture that replaces multiplication with addition in AI inference, reducing power consumption. Implemented in custom silicon, hardware, and system software for multimodal GenAI workloads.
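The core idea maps onto the classic logarithmic number system (LNS): values are stored as logarithms, so a multiply becomes an add of exponents. A minimal sketch of that principle, assuming a simple base-2 encoding for positive values (Tensordyne's actual number format, sign handling, and addition scheme are proprietary and not described in the source):

```python
import math

def to_log(x):
    """Encode a positive real as its base-2 logarithm (sign/zero handling omitted)."""
    return math.log2(x)

def from_log(l):
    """Decode a base-2 log-domain value back to the linear domain."""
    return 2.0 ** l

def lns_multiply(la, lb):
    # In the log domain, multiplication is just addition of exponents:
    # log2(a * b) = log2(a) + log2(b)
    return la + lb

def lns_dot(xs, ws):
    """Dot product where every multiply is a log-domain add.

    Accumulation is done in the linear domain here for clarity; a real
    LNS accelerator would approximate log-domain addition in hardware.
    """
    acc = 0.0
    for x, w in zip(xs, ws):
        acc += from_log(lns_multiply(to_log(x), to_log(w)))
    return acc

# 1.0*0.5 + 2.0*0.25 + 4.0*2.0 = 0.5 + 0.5 + 8.0 = 9.0
print(lns_dot([1.0, 2.0, 4.0], [0.5, 0.25, 2.0]))  # → 9.0
```

The win comes from the multiplier array: an adder is far smaller and lower-power than a multiplier of the same width, which is the lever the paragraph above describes.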
Cadence, Linux, Kubernetes, C++, Python, Terraform, GCP, Rust, SystemVerilog, UVM, ASIC design, RISC-V, and PCIe/SerDes for hardware integration.