Lossless, congestion-free networking for AI and HPC at scale
Cornelis Networks builds high-performance fabric architectures designed to eliminate congestion and packet loss in AI training and HPC workloads. The tech stack, heavy on Verilog, SystemVerilog, Ethernet protocol design, RDMA, and GPU-direct memory access, combined with active verification and silicon-validation projects, signals a company still in a hardware bring-up phase. A senior-heavy engineering organization and pain points around first-pass silicon success and verification closure indicate they are scaling infrastructure teams to ship their next-generation platform.
Notable leadership hires: Chief Marketing Officer
Cornelis Networks designs and manufactures high-performance networking fabrics for AI and HPC clusters. The company serves data-center operators and hyperscale compute providers building training and inference infrastructure. Built on the Omni-Path architecture, their solutions aim to reduce network latency and congestion in GPU-dense environments. The company is headquartered in Wayne, PA, operates with 51–200 employees, and maintains engineering and manufacturing presence across the United States, Costa Rica, Belgium, Saudi Arabia, and Japan. Active hiring across engineering, sales, and marketing roles reflects expansion tied to product launches and customer acquisition.
The stack emphasizes hardware design (Verilog and SystemVerilog, with Synopsys VCS and Questa for simulation) and networking protocols (Ethernet, RDMA, InfiniBand, Omni-Path). They integrate GPU acceleration APIs (GPUDirect RDMA, CUDA, TensorRT-LLM) and observability tools (OpenTelemetry, VTune, Nsight Systems).
Other companies in the same industry, closest in size