
Liquid AI Tech Stack

Efficient AI model optimization and inference across hardware scales

Information Services · Cambridge, Massachusetts · 51–200 employees · Founded 2023 · Privately Held

Liquid AI is a GPU-focused AI infrastructure company built on CUDA, PyTorch, and low-level kernel optimization (custom CUDA kernels, TensorRT, vLLM). The tech stack and project list point to a shift away from generic frameworks toward hand-tuned performance: the company is developing custom inference kernels for edge hardware, optimizing RL training runs, and integrating novel optimization techniques, all responses to generic frameworks falling short at production scale. The engineering-heavy org (14 senior engineers, 6 hires in the last month) and active Japanese language model work suggest both rapid infrastructure development and vertical market expansion.

Tech Stack · 34 technologies

Core Stack: PyTorch, Python, Hugging Face, React, Next.js, TypeScript, Tailwind CSS, Supabase, TensorFlow, FastAPI, C++, CUDA, Nsight Systems, Nsight Compute, cuDNN, cuBLAS, C/C++, DeepSpeed, Megatron-LM, vLLM, NCCL, shadcn/ui, Neon, Apollo GPU, JAX, SGLang, llama.cpp, TensorRT, Google Workspace, +1 more
Adopting: Swift, Kotlin

What Liquid AI Is Building

Challenges

  • Scaling rapidly
  • Generic frameworks insufficient for performance
  • Eliminating manual invoicing workflows
  • Dataset scarcity
  • Adapting foundation models to new languages
  • Catching and fixing edge cases where LLMs may fail
  • Ensuring datasets meet enterprise-grade quality
  • Improving training throughput
  • Scaling data pipelines
  • Eliminating data loading bottlenecks
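One of the listed challenges, eliminating data loading bottlenecks, is commonly addressed by overlapping data preparation with compute via a background prefetcher. A minimal, generic sketch of the idea, not Liquid AI's actual pipeline (the `prefetch` function, queue depth, and toy dataset are illustrative assumptions):

```python
import queue
import threading

def prefetch(iterable, depth=4):
    """Wrap an iterable so items are produced by a background thread.

    While the consumer processes item N, the producer is already
    loading items N+1 .. N+depth, hiding I/O latency behind compute.
    """
    q = queue.Queue(maxsize=depth)
    _SENTINEL = object()

    def producer():
        for item in iterable:
            q.put(item)          # blocks when the buffer is full
        q.put(_SENTINEL)         # signal end of stream

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is _SENTINEL:
            break
        yield item

# Toy "dataset": a range standing in for real, slow I/O per sample.
batches = list(prefetch(range(10), depth=2))
```

Real training pipelines layer the same pattern with multiple worker processes and pinned-memory transfers, but the core design choice is identical: a bounded buffer decouples producer and consumer speeds.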

Active Projects

  • RL training run optimization
  • Pretraining data pipeline
  • Synthetic data generation pipeline
  • Optimization technique integration
  • Custom CUDA kernel development for novel model architectures
  • Inference kernel implementation for edge hardware
  • Web crawler for dataset acquisition
  • Japanese dataset curation and augmentation
  • Japanese LLM fine-tuning for enterprise use cases
  • Evaluation framework implementation for Japanese datasets
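The evaluation-framework project above typically reduces to scoring model outputs against references per example and aggregating the results. A minimal sketch of such a harness using a simple exact-match metric (the normalization rules and toy data are illustrative assumptions, not Liquid AI's internal framework; production evaluation of Japanese text would also need full-width/half-width handling and tokenization):

```python
def normalize(text):
    """Illustrative normalization: strip whitespace and lowercase.

    Deliberately minimal; real frameworks apply language-specific
    normalization before comparison.
    """
    return text.strip().lower()

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return 1.0 if normalize(prediction) == normalize(reference) else 0.0

def evaluate(predictions, references):
    """Score each (prediction, reference) pair and report the mean."""
    scores = [exact_match(p, r) for p, r in zip(predictions, references)]
    return {"exact_match": sum(scores) / len(scores), "n": len(scores)}

# Toy data standing in for a curated evaluation set.
report = evaluate(["Tokyo ", "42"], ["tokyo", "41"])
```

Swapping `exact_match` for F1, BLEU, or an LLM-judge score leaves the harness structure unchanged, which is why the per-example metric is kept as a separate function.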

Hiring Activity

Steady · 20 roles · 6 in the last 30 days

Department

Engineering: 14
Data: 2
Design: 1
Executive: 1
Finance: 1
Research: 1
Sales: 1

Seniority

Senior: 14
Mid: 5
Manager: 1
Staff: 1

About Liquid AI

Liquid AI, founded in 2023 and headquartered in Cambridge, Massachusetts, builds efficient AI systems optimized for inference and training across different hardware targets. The company operates across model optimization (RL training, pretraining pipelines, synthetic data generation), low-level systems work (CUDA kernel development, inference kernels for edge devices), and targeted language-model customization (Japanese LLM fine-tuning for enterprise customers). The team is scaling model evaluation frameworks, data pipelines, and Japanese dataset curation in parallel, signaling both technical depth in GPU-level optimization and early motion toward non-English language markets.

Headquarters: Cambridge, Massachusetts
Company Size: 51–200 employees
Founded: 2023
Hiring Markets: United States, Japan

Frequently Asked Questions

What is Liquid AI's tech stack?

CUDA, PyTorch, cuDNN, TensorRT, vLLM, DeepSpeed, Megatron-LM, JAX, and custom C/C++ kernel development for GPU optimization. Frontend: React, Next.js, TypeScript. Data: Supabase, Neon.

What is Liquid AI working on?

Custom CUDA kernels for novel model architectures, inference kernel optimization for edge hardware, Japanese LLM fine-tuning for enterprise, RL training optimization, pretraining data pipelines, synthetic data generation, and evaluation frameworks for language-specific datasets.
