AI inference accelerator chips and systems for data-center deployment
Fractile designs custom silicon and systems software for AI inference workloads, with a stack spanning hardware design (SystemVerilog, Cadence Innovus, Synopsys ICC2), ML frameworks (PyTorch, vLLM, SGLang), and system-level tooling (Kubernetes, Linux kernel drivers in Rust). The hiring mix is dominated by engineering (33 of 35 roles), and active projects span pre-silicon modeling through production deployment, indicating a company moving from design validation toward live data-center inference integrations.
Fractile, founded in 2022 and based in London, builds custom AI accelerator hardware and software stacks targeting inference at scale. The company addresses a specific gap in the AI compute market: while existing hardware excels at training large models, inference, the dominant operational workload, remains inefficient and costly. Fractile's approach rearchitects the memory-compute boundary through custom silicon, paired with a runtime stack and ML framework integrations. Active projects span pre-silicon validation, Linux kernel drivers, and inference server deployments. The organization is primarily engineering-focused, with hiring concentrated in the UK and Taiwan.
Custom AI accelerator chips and systems software optimized for inference workloads at scale. The company designs silicon (SystemVerilog, Cadence/Synopsys CAD tools) and integrates it with ML runtimes (vLLM, SGLang) and Linux kernel drivers written in Rust.
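As a minimal sketch of what an integration at the ML-runtime layer can look like, the Python snippet below uses vLLM's standard offline generation API; the model name, prompt, and sampling settings are illustrative placeholders, not Fractile specifics.

    from vllm import LLM, SamplingParams

    # Placeholder checkpoint; any HuggingFace-compatible model id works here.
    llm = LLM(model="facebook/opt-125m")

    # Illustrative sampling defaults, not tuned values.
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    prompts = ["Why does inference dominate data-center AI cost?"]

    # generate() batches the prompts and returns one RequestOutput per prompt.
    for out in llm.generate(prompts, params):
        print(out.prompt, "->", out.outputs[0].text)

A hardware vendor typically plugs in beneath this API, so the same user-facing code runs while a custom backend handles scheduling and execution.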
Hardware design: SystemVerilog, Cadence Innovus, Synopsys ICC2, Calibre. ML/software: PyTorch, vLLM, SGLang, MLIR. Systems: Kubernetes, Docker, Linux kernel, Rust. Build: Bazel, GitHub Actions. Simulation: Verilator, Cocotb, QEMU.
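To make the simulation entries concrete, here is a minimal cocotb testbench sketch of the kind used for pre-silicon validation; the DUT and its port names (clk, a, b, sum) are hypothetical, not taken from Fractile's RTL.

    import cocotb
    from cocotb.clock import Clock
    from cocotb.triggers import RisingEdge

    @cocotb.test()
    async def adder_smoke_test(dut):
        """Drive a hypothetical adder DUT and check one result."""
        # Start a 10 ns clock on the assumed clk port.
        cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())

        dut.a.value = 3
        dut.b.value = 4
        # Wait two edges so a registered output has time to settle.
        await RisingEdge(dut.clk)
        await RisingEdge(dut.clk)

        assert dut.sum.value == 7, f"expected 7, got {dut.sum.value}"

A test like this runs unchanged under Verilator or a commercial simulator via cocotb's standard Makefile flow, which is what pairing Verilator with Cocotb in the stack suggests.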
Other companies in the same industry, closest in size