Analog computing chips for AI inference at ultra-low power
TetraMem designs analog compute-in-memory processors aimed at AI workloads, with a hardware-heavy tech stack spanning Synopsys, Cadence, SystemVerilog, and LLVM. The company is scaling engineering talent (44 active roles, mostly senior hires) while tackling hard problems: deploying neural networks on analog fabric, building internal IP for in-memory compute, and moving from R&D into production validation. Its adoption of Django points to early-stage software tooling for internal infrastructure.
Notable leadership hires: Head of Software
TetraMem develops analog and mixed-signal processors optimized for AI inference, emphasizing power efficiency over traditional digital approaches. Founded in 2018 and headquartered in San Jose, the company employs 51–200 people, nearly all in engineering roles. Teams are actively designing chip architectures (compute-in-memory, SoC verification), building compiler toolchains to map neural networks onto their hardware, and running post-silicon validation. Current hiring spans the United States and Singapore.
The EDA stack includes Synopsys, Cadence (including Virtuoso), and Mentor Graphics. The company lists EDA cost reduction and license optimization as active challenges, indicating heavy reliance on these platforms.
Engineering focus spans analog compute-in-memory chip design, neural network model deployment on in-memory fabric, compiler toolchains for the hardware, silicon validation, and edge AI model development.
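For readers unfamiliar with the core idea, the sketch below illustrates (in very idealized form) what analog compute-in-memory accelerates: a matrix-vector multiply performed in a single analog step by encoding weights as conductances and inputs as voltages, with output currents summing along each column. All function names, level counts, and noise figures here are illustrative assumptions for exposition, not details of TetraMem's actual design.

```python
import numpy as np

def quantize_to_levels(w, levels=16):
    """Map weights onto a finite set of conductance levels, as a
    multi-level analog memory cell would (assumed level count)."""
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    return np.round((w - w_min) / step) * step + w_min

def crossbar_mvm(weights, x, noise_std=0.01, rng=None):
    """Idealized analog crossbar matrix-vector multiply:
    quantized weights (conductances) times inputs (voltages),
    summed per column (Kirchhoff's current law), plus read noise."""
    rng = rng or np.random.default_rng(0)
    g = quantize_to_levels(weights)
    currents = g @ x  # the whole MVM happens "in memory", in one step
    return currents + rng.normal(0.0, noise_std, currents.shape)

# Example: one neural-network layer's weights applied to an input vector.
W = np.random.default_rng(1).normal(size=(4, 8))
x = np.ones(8)
print(crossbar_mvm(W, x))
```

The power argument is that the multiply-accumulate happens in the physics of the array rather than in digital logic; the compiler-toolchain work mentioned above is what maps trained network weights onto these quantized, noisy analog cells.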
Other companies in the same industry, closest in size