Liquid AI is a GPU-focused AI infrastructure company built on CUDA, PyTorch, and low-level kernel optimization (custom CUDA kernels, TensorRT, vLLM). The tech stack and project list point to a shift away from generic frameworks toward hand-tuned performance: the team is developing custom inference kernels for edge hardware, optimizing RL training runs, and integrating novel optimization techniques, all responses to generic frameworks falling short at production scale. The engineering-heavy org (14 senior engineers, 6 hires in the last month) and active Japanese language model work suggest rapid infrastructure development alongside expansion into vertical markets.
Liquid AI, founded in 2023 and headquartered in Cambridge, Massachusetts, builds efficient AI systems optimized for inference and training across different hardware targets. The company operates across model optimization (RL training, pretraining pipelines, synthetic data generation), low-level systems work (CUDA kernel development, inference kernels for edge devices), and targeted language-model customization (Japanese LLM fine-tuning for enterprise customers). The team is scaling model evaluation frameworks, data pipelines, and Japanese dataset curation in parallel, signaling both deep GPU-level optimization expertise and an early push into non-English language markets.
ML/systems: CUDA, PyTorch, cuDNN, TensorRT, vLLM, DeepSpeed, Megatron-LM, JAX, and custom C/C++ kernel development for GPU optimization. Frontend: React, Next.js, TypeScript. Data: Supabase, Neon.
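As a rough illustration of how the serving side of this stack fits together, here is a minimal offline-inference sketch using vLLM's public Python API. The model name and prompt are placeholders for illustration, not Liquid AI's models.

```python
# Minimal vLLM offline-inference sketch (placeholder model and prompt).
from vllm import LLM, SamplingParams

prompts = ["Explain paged attention in one sentence."]
sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

# LLM() loads the weights and builds vLLM's paged-attention engine.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

# generate() batches the prompts and returns one RequestOutput per prompt.
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```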
Custom CUDA kernels for novel model architectures, inference kernel optimization for edge hardware, Japanese LLM fine-tuning for enterprise, RL training optimization, pretraining data pipelines, synthetic data generation, and evaluation frameworks for language-specific datasets.
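To make the custom-kernel work concrete: a common PyTorch pattern for prototyping a hand-written CUDA kernel is JIT compilation via torch.utils.cpp_extension.load_inline. The sketch below is illustrative only (a trivial elementwise square, not Liquid AI's code) and assumes nvcc plus a CUDA-capable GPU are available.

```python
# Illustrative sketch: JIT-compile a trivial CUDA kernel and call it
# from Python. Requires nvcc and a CUDA GPU; not production code.
import torch
from torch.utils.cpp_extension import load_inline

cuda_src = r"""
__global__ void square_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

torch::Tensor square(torch::Tensor input) {
    TORCH_CHECK(input.is_cuda() && input.is_contiguous());
    auto out = torch::empty_like(input);
    int n = input.numel();
    int threads = 256;
    square_kernel<<<(n + threads - 1) / threads, threads>>>(
        input.data_ptr<float>(), out.data_ptr<float>(), n);
    return out;
}
"""

ext = load_inline(
    name="square_ext",
    cpp_sources="torch::Tensor square(torch::Tensor input);",
    cuda_sources=cuda_src,
    functions=["square"],  # generates the Python binding
)

x = torch.randn(1024, device="cuda")
assert torch.allclose(ext.square(x), x * x)
```

In production, kernels like this are typically compiled ahead of time and fused with neighboring ops, but load_inline is the fastest way to iterate on a kernel against real tensors.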