Photonic AI chips delivering exascale inference at GPU power budgets
Neurophos builds optical processors for AI inference, packing the compute of a large server rack into a single GPU-sized package. The company is hiring 27 engineers, mostly at lead and principal levels, across a tight stack of silicon photonics, ASIC design, and compiler work (PyTorch, JAX, Triton). Current projects center on GEMM optimization, 4-bit quantization, and porting large language models to optical inference, suggesting a near-term push toward production LLM acceleration.
Notable leadership hires: Director of Marketing, Functional Modeling Lead, Tech Lead, Optics Director
Neurophos designs the OPU, an optical processor intended to replace conventional silicon for AI workloads. The chip trades electronic transistors for photonic ones, routing compute through light rather than electrons, which dramatically reduces power consumption and physical footprint: millions of weights fit in a postage-stamp-sized area where conventional silicon would need a full square meter. Founded in 2020 and based in Austin, the company is staffed by a small but specialized engineering organization focused on the hardware, compiler, and modeling layers required to bring photonic AI to market.
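The core operation such an optical processor accelerates is the matrix-vector multiply at the heart of GEMM: weights are encoded as light transmittance and the product is computed as the light propagates. A minimal numerical sketch of that idea is below; the function name and the differential (positive/negative rail) encoding for signed weights are illustrative assumptions, not a description of the Neurophos design.

```python
import numpy as np

rng = np.random.default_rng(0)

def optical_mvm(W, x):
    """Toy model of an analog optical matrix-vector multiply.

    Light intensity and transmittance are non-negative, so signed
    weights are split into two non-negative transmittance arrays
    (a common differential-encoding trick in analog compute); a
    detector pair then takes the difference of the two rails.
    This is an illustrative sketch, not a real device model.
    """
    W_pos = np.clip(W, 0, None)   # transmittance on the positive rail
    W_neg = np.clip(-W, 0, None)  # transmittance on the negative rail
    return W_pos @ x - W_neg @ x

# Input encoded as non-negative light intensities.
W = rng.uniform(-1, 1, size=(4, 8))
x = rng.uniform(0, 1, size=8)
assert np.allclose(optical_mvm(W, x), W @ x)
```

In the idealized (noise-free) model the differential readout recovers the exact signed product, which is why it reduces to an ordinary `W @ x`.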
Neurophos uses Cadence Virtuoso and SystemVerilog for chip design; PyTorch, JAX, and Triton for ML frameworks; Jenkins and GitLab for CI/CD; and Xcelium for simulation. The stack reflects deep specialization in photonic ASIC design and AI compiler development.
Current projects include optical GEMM engine modeling, 4-bit quantization strategies, custom processor core design, LLM compiler development, and porting large language models to optical inference engines. The company is also building trace-driven simulation and performance-modeling infrastructure.
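To make the 4-bit quantization work concrete: the standard starting point is symmetric per-tensor quantization, mapping float weights to integers in [-8, 7] with a single scale factor. The sketch below shows that baseline; the function names are illustrative, and the source does not say which quantization scheme Neurophos actually uses.

```python
import numpy as np

def quantize_4bit(W):
    """Symmetric per-tensor 4-bit quantization: map float weights
    to integers in [-8, 7] using a single scale factor."""
    scale = np.abs(W).max() / 7.0
    q = np.clip(np.round(W / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the 4-bit codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16)).astype(np.float32)
q, s = quantize_4bit(W)
W_hat = dequantize(q, s)
# With rounding to the nearest level, per-weight error is at most scale/2.
assert np.max(np.abs(W - W_hat)) <= s / 2 + 1e-6
```

Production schemes typically refine this with per-channel or per-group scales to cut the error further, which is the kind of trade-off a "4-bit quantization strategies" project would explore.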