Custom silicon and compiler stack for AI and HPC workloads
NextSilicon designs application-specific processors and the software stack to run them. The technology footprint (MLIR, LLVM, PyTorch/TensorFlow compilers, CUDA/ROCm, and EDA tools from Synopsys and Cadence) shows a company building from silicon up through AI inference and HPC optimization. The hiring profile skews heavily toward senior and lead engineers (27 of 30 engineering roles), signaling deep technical work on hardware-software integration for custom silicon rather than scaling of operations.
Founded in 2017 and based in Giv'atayim, Israel, NextSilicon develops custom processors and the compiler infrastructure to accelerate AI and high-performance computing workloads. The company operates across the full stack: ASIC design (timing closure, backend optimization), runtime compilation (an MLIR-based AI compiler), and system software for its proprietary coprocessors. Active projects span next-generation runtime compilers, PyTorch/TensorFlow backends, and automated optimization of HPC applications. The engineering-heavy organization and the focus on cross-stack performance bottlenecks reflect the technical complexity of adapting software to custom hardware acceleration.
Tech stack: Python, C/C++, MLIR, LLVM, PyTorch, TensorFlow, CUDA, ROCm, Synopsys, and Cadence. Currently adopting MLIR for compiler infrastructure.
Focus areas: next-generation runtime compilers, MLIR-based AI compiler stacks, custom coprocessor architectures, PyTorch/TensorFlow backends, and automated optimization for HPC and AI workloads on proprietary hardware.
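NextSilicon's backend internals are not public, but PyTorch's own torch.compile API gives a sense of the integration point such a backend would plug into: a vendor supplies a callable that receives the captured FX graph and returns a compiled function. The sketch below is a minimal, hypothetical illustration of that public hook, not NextSilicon code; custom_backend is an assumed placeholder name, and a real backend would lower the graph to its own IR (e.g. MLIR) and target the accelerator instead of falling back to eager execution.

import torch

def custom_backend(gm: torch.fx.GraphModule, example_inputs):
    # Hypothetical vendor hook: torch.compile hands over the captured
    # FX graph. A real backend would compile it for custom silicon;
    # here we just inspect it and fall back to eager execution.
    print(gm.graph)
    return gm.forward

model = torch.nn.Linear(4, 4)
compiled = torch.compile(model, backend=custom_backend)
compiled(torch.randn(2, 4))  # first call triggers graph capture and the backend

This callable-backend signature is the documented extension point in PyTorch 2.x; it is how third-party compiler stacks typically attach to PyTorch without forking the framework.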