AI inference platform for edge, data center, and cloud deployments
Blaize builds a programmable AI inference platform spanning edge, data center, and cloud, a broad architectural scope reflected in its deep ML stack (PyTorch, TensorFlow, ONNX) and low-level hardware tooling (SystemVerilog, hardware emulation). Active work on DNN performance libraries, AI accelerator IP emulation, and robot control systems signals a move beyond pure software toward optimized silicon and real-world robotics integration. Hiring is decelerating but skews senior (6 of 11 open roles), indicating a shift from headcount scaling to execution.
Blaize is a public AI platform company based in El Dorado Hills, California, with 201–500 employees. They deliver inference software and hardware IP targeting government, enterprise, and service provider segments. Their platform spans three deployment models: edge (where inference runs on distributed devices), data center (inference-centric workloads), and cloud AI services. Current project work includes solution validation for customer PoCs, adoption expansion in China, robot control and navigation systems, and verification infrastructure for custom accelerator IP. The company operates internationally, with active hiring in the United States, United Kingdom, India, and China.
Blaize uses PyTorch, TensorFlow, Keras, and ONNX as ML frameworks; Python and C/C++ for core software; SystemVerilog for hardware design; and ROS 2 with Gazebo and CoppeliaSim for robotics simulation and validation.
Blaize recruits in the United States, United Kingdom, India, and China. Active hiring includes senior engineers, directors, managers, and interns across multiple countries.