EnCharge AI Tech Stack

In-memory AI accelerator hardware with custom compiler stack

Embedded Software Products · Santa Clara, California · 11–50 employees · Founded 2022 · Privately Held

EnCharge AI designs custom silicon for AI inference, built around an in-memory computing architecture paired with a next-generation compiler stack (Torch-MLIR, XLA, LLVM). The hiring profile is heavily weighted toward principal and senior hardware engineers (13 principal roles), reflecting the maturity and depth required for post-silicon validation, timing closure on advanced nodes, and chiplet-based system integration. Active adoption of Llama, alongside projects spanning SoC floorplanning, custom NPU kernel libraries, and AI serving infrastructure, suggests a shift from pure accelerator design toward end-to-end inference deployment.
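
The compiler stack named here (Torch-MLIR, XLA, LLVM) implies a flow that imports framework graphs into MLIR before lowering them toward the accelerator. As a rough illustration only, assuming the torch_mlir.fx export path (the exact Torch-MLIR entry point varies by release, and nothing below reflects EnCharge's actual pipeline), a small PyTorch module can be imported into the Torch dialect like this:

    # Illustrative sketch: importing a PyTorch module into MLIR via Torch-MLIR.
    # Assumes a recent torch-mlir build exposing torch_mlir.fx.export_and_import;
    # older releases use a different entry point. Not EnCharge's actual flow.
    import torch
    from torch_mlir import fx

    class TinyMLP(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = torch.nn.Linear(128, 256)
            self.fc2 = torch.nn.Linear(256, 10)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    example = torch.randn(1, 128)

    # Export the model and import it as an MLIR module in the Torch dialect;
    # downstream passes would lower it further (e.g. to linalg, then to a
    # vendor backend).
    mlir_module = fx.export_and_import(TinyMLP().eval(), example)
    print(mlir_module)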

Tech Stack (35 technologies)

Core Stack: C++, Python, GitHub Actions, GitLab CI/CD, Jenkins, Linux, PCIe, RISC-V, SystemVerilog, UVM, MPI, pthreads, Torch-MLIR, XLA, LLVM, GPU, MLIR, Cadence, Tcl, LPDDR4, LPDDR5, Synopsys TestMAX, bash, Red Hat, Ubuntu, NFS, SMB, Innovus, IC Compiler II (+4 more)
Adopting: Llama

What EnCharge AI Is Building

Challenges

  • Balancing aggressive performance targets
  • Eliminating systemic bottlenecks
  • Graph compilation
  • Performance parity across models
  • Meeting stringent timing targets
  • Ensuring manufacturable chips
  • Balancing performance and power
  • Minimizing area overhead
  • Timing closure on advanced process nodes
  • Performance optimization of simulation

Active Projects

  • High-speed SoC implementation
  • Chip-level floorplanning for complex SoCs
  • Serving infrastructure for video generation models
  • Chiplet-based AI inference platform
  • Next-generation AI compiler and software stack
  • Kernel library integration for custom NPU ops (see the sketch after this list)
  • Post-silicon validation and bring-up
  • Scalable multi-core simulation model
  • Timing closure strategy for SoC and IP designs
  • Simulation infrastructure optimization
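
Kernel library integration for custom NPU ops typically means exposing hand-written accelerator kernels to the framework as custom operators. A minimal sketch, assuming PyTorch's torch.library custom-op API (PyTorch 2.4+); the npu_demo::scaled_add name and the CPU fallback below are hypothetical stand-ins, not EnCharge's kernel library:

    # Minimal sketch of registering a custom op so a compiler stack can later
    # map it onto an NPU kernel. Uses the torch.library custom-op API
    # (PyTorch 2.4+); the "npu_demo::scaled_add" name is hypothetical.
    import torch

    @torch.library.custom_op("npu_demo::scaled_add", mutates_args=())
    def scaled_add(x: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
        # Reference (CPU) implementation; a real deployment would dispatch
        # to a vendor kernel on the accelerator backend instead.
        return x + alpha * y

    @scaled_add.register_fake
    def _(x, y, alpha):
        # Shape/dtype inference so the op can be traced and compiled.
        return torch.empty_like(x)

    a = torch.randn(4)
    b = torch.randn(4)
    print(torch.ops.npu_demo.scaled_add(a, b, 0.5))

Registering a reference implementation plus a fake (meta) function like this is what lets a graph compiler trace, shape-infer, and later re-target the op to a real NPU kernel.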

Hiring Activity

Accelerating · 40 roles · 25 in the last 30 days

Department

Engineering 35 · Ops 1

Seniority

Principal 13 · Senior 10 · Staff 8 · Director 3 · Lead 1 · Mid 1

Notable leadership hires: Physical Design Lead

About EnCharge AI

EnCharge AI builds in-memory computing hardware and software for AI workloads, founded in 2022 by semiconductor and AI systems veterans. The company operates as a hardware-software hybrid, developing custom silicon optimized for inference in power- and space-constrained environments, paired with a proprietary compiler and runtime stack. Projects span chip design (SoC implementation, floorplanning, post-silicon validation), compiler infrastructure (graph compilation, kernel library integration), and inference serving systems. The organization is structured around engineering depth, with hiring concentrated in hardware design, physical design, and silicon validation across US and India offices.
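
The inference-serving work mentioned above usually reduces to batching incoming requests in front of an expensive model call. A deliberately generic sketch using only the Python standard library; generate() is a placeholder for a batched model invocation, and none of this reflects EnCharge's actual serving stack:

    # Generic request-batching skeleton of the kind an inference-serving layer
    # is built around: requests queue up, a worker drains them in small batches,
    # and each caller gets its own result back. generate() stands in for an
    # expensive model call (e.g. a video generation step on the accelerator).
    import queue
    import threading

    def generate(prompts):
        # Placeholder for a batched model call.
        return [f"frames for: {p}" for p in prompts]

    requests = queue.Queue()

    def worker(max_batch=4):
        while True:
            batch = [requests.get()]                 # block for the first request
            while len(batch) < max_batch:
                try:
                    batch.append(requests.get_nowait())
                except queue.Empty:
                    break
            prompts, events, results = zip(*batch)
            for slot, out in zip(results, generate(list(prompts))):
                slot.append(out)
            for ev in events:
                ev.set()                             # wake the waiting caller

    threading.Thread(target=worker, daemon=True).start()

    def submit(prompt):
        done, result = threading.Event(), []
        requests.put((prompt, done, result))
        done.wait()
        return result[0]

    print(submit("a drone shot over Santa Clara"))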

Headquarters: Santa Clara, California
Company Size: 11–50 employees
Founded: 2022
Hiring Markets: United States, India

Frequently Asked Questions

What is EnCharge AI's tech stack?

Hardware design: PCIe, RISC-V, SystemVerilog, UVM, Cadence, Synopsys TestMAX, Innovus, IC Compiler II. Software/compiler: Torch-MLIR, XLA, LLVM, MLIR, C++, Python. Infrastructure: Linux (Red Hat, Ubuntu), GitHub Actions, GitLab CI/CD, Jenkins.

What is EnCharge AI working on?

High-speed SoC implementation, chiplet-based AI inference platforms, next-generation AI compiler stack, post-silicon validation, and serving infrastructure for video generation models. Current challenges center on timing closure, performance-power tradeoffs, and graph compilation optimization.
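
Graph compilation, one of the challenges listed here, is the point where a framework-captured graph is handed off to a vendor compiler. A hedged sketch, assuming the standard torch.compile custom-backend hook; the backend below only prints the captured FX graph and falls back to eager execution, and it is not EnCharge's compiler:

    # Sketch of where a vendor graph compiler plugs into PyTorch's graph
    # compilation flow. The backend below only inspects the captured FX graph
    # and runs it eagerly; a real backend would lower it (e.g. via Torch-MLIR)
    # to accelerator code. Illustrative only.
    import torch

    def inspecting_backend(gm: torch.fx.GraphModule, example_inputs):
        print(gm.graph)            # show the captured FX graph
        return gm.forward          # fall back to eager execution

    @torch.compile(backend=inspecting_backend)
    def fused_block(x, w):
        return torch.relu(x @ w) * 2.0

    out = fused_block(torch.randn(8, 16), torch.randn(16, 16))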
