AI accelerator silicon and software stack for datacenter inference and training
Graphcore designs custom silicon and software for AI workloads, operating as a vertically integrated semiconductor house under SoftBank ownership. Its tech stack (PyTorch, JAX, and TensorFlow on the framework side; PCIe Gen 6, 800G Ethernet, and high-performance communication libraries such as NCCL, MPI, and libfabric on the systems side) reflects a company building end-to-end compute from chip through datacenter orchestration. Active hiring in validation and firmware engineering, combined with a heavy project focus on silicon bringup and post-silicon characterization, signals that the company is scaling manufacturing readiness while continuing to optimize kernel efficiency on its own hardware.
Notable leadership hires: AI SoC Validation Lead, Hardware Validation Lead
Graphcore develops AI accelerator processors and the complete software stack required to deploy them at scale. Founded in 2016 and headquartered in Bristol, United Kingdom, the company is backed by SoftBank Group. It operates across five countries (United Kingdom, Poland, India, Taiwan, United States), with engineering-heavy headcount concentrated in silicon validation, kernel optimization, and firmware development. The product line spans silicon design through hyperscale server platform management and reference applications, positioning the company to serve datacenter operators running large-scale AI inference and training workloads.
Graphcore uses PyTorch, JAX, TensorFlow, Kubernetes, Ray, and custom hardware interfaces (PCIe Gen 6, 800G Ethernet). They're adopting distributed storage (Ceph, Lustre, GPFS, WEKA) and OpenBMC for system management.
The primary focus is silicon validation and bringup: post-silicon characterization, automated test frameworks, hardware defect detection, and yield optimization. The secondary track is kernel efficiency tuning and Zephyr-based firmware for hyperscale server management.
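To make the post-silicon characterization work concrete, here is a minimal sketch of the kind of loop an automated validation framework runs: sweep a device across voltage/frequency corners, record pass/fail per corner (a "shmoo"), and summarize yield. Everything here (the `DeviceUnderTest` class, its pass/fail model, and the corner values) is a hypothetical illustration, not a Graphcore API.

```python
# Hypothetical post-silicon characterization sweep: exercise a device across
# voltage/frequency corners and record pass/fail for each one. All names and
# thresholds are illustrative stand-ins, not real Graphcore interfaces.
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class Corner:
    voltage_mv: int
    freq_mhz: int


class DeviceUnderTest:
    """Stand-in for a real silicon interface (e.g. reached over PCIe/JTAG)."""

    def run_pattern(self, corner: Corner) -> bool:
        # Toy pass/fail model: the part fails when voltage margin is too low
        # for the target frequency. A real framework would run test patterns
        # on hardware and read back results.
        margin = corner.voltage_mv - corner.freq_mhz // 2
        return margin >= 200


def characterize(dut, voltages_mv, freqs_mhz):
    """Sweep all corners and return the shmoo as {corner: passed}."""
    return {
        Corner(v, f): dut.run_pattern(Corner(v, f))
        for v, f in product(voltages_mv, freqs_mhz)
    }


if __name__ == "__main__":
    results = characterize(DeviceUnderTest(), [700, 800, 900], [1000, 1400, 1800])
    passed = sum(results.values())
    print(f"{passed}/{len(results)} corners pass ({100 * passed / len(results):.0f}%)")
```

In practice frameworks like this also bin failures by signature (to separate systematic design marginality from random manufacturing defects), which is what feeds the defect-detection and yield-optimization work mentioned above.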
Other companies in the same industry, closest in size