Silicon and software IP for wireless connectivity and edge AI inference
Ceva licenses DSP cores and edge AI IP to semiconductor companies building connected smart devices. The stack (Verilog, SystemVerilog, LLVM, MLIR, PyTorch, Synopsys, Cadence) reflects a hardware-software co-design focus, and adoption of Cursor and GitHub Copilot suggests the organization is accelerating developer velocity on compiler and tooling work. Pain points cluster around inference optimization, memory and power constraints, and time-to-market pressure, all consistent with the edge-inference mission.
Notable leadership hires include a Business Operations Director.
Ceva designs and licenses wireless communications and machine-learning IP for edge devices. The customer base spans consumer IoT, mobile, automotive, and industrial applications; over 18 billion Ceva-powered chips ship annually across smartphones, drones, base stations, and embedded systems. The company operates design centers in Israel, Ireland, France, and the UK, with sales offices across Europe, Asia, and the US. Active project portfolio covers Wi-Fi transceivers, cellular modems, radar/lidar baseband processing, and graph compiler software for neural-processor acceleration. The engineering-dominant hiring profile and focus on compiler optimization and transceiver IC development align with a hardware-accelerated AI product roadmap.
Ceva's chip-design flow uses Verilog and SystemVerilog for ASIC and FPGA development, supported by Synopsys and Cadence EDA tools; compiler infrastructure builds on LLVM, MLIR, XLA, and IREE. PyTorch and GPU/DSP simulation support inference-optimization work.
Key initiatives include AI graph compiler software for NPUs, next-generation Wi-Fi and connectivity IP, wireless transceiver IC development, cellular modems, and radar/lidar baseband processing. Tooling and driver development for macOS is also underway.