AI research and implementation platform for enterprise LLM and agent deployment
H Company operates a machine learning infrastructure and research organization built on PyTorch, JAX, and TensorFlow, with a deep optimization stack (CUDA, Triton, vLLM, TensorRT-LLM). The technical foundation, spanning GPU kernels, inference pipelines, and training infrastructure, points to a company shipping inference and training systems to enterprises rather than just consulting. Hiring skews toward research and engineering (14 of 22 open roles) at an aggressive pace, signaling scaled delivery of proprietary models and agent platforms.
Notable leadership hires: Sales Director
H Company develops AI research and deployment infrastructure for enterprise customers. The product suite spans training infrastructure for custom models and agents, scalable inference pipelines, and an agent platform with APIs and runtimes. The tech stack shows deep systems work: GPU kernel development, observability and monitoring pipelines, and performance optimization across the inference chain. The company is based in Paris and actively recruiting across France, the UK, and the US.
H Company uses PyTorch, JAX, and TensorFlow for training; Triton, vLLM, TensorRT-LLM, and SGLang for inference optimization; and CUDA, NCCL, and custom GPU kernels for hardware acceleration. FastAPI and REST serve the API layer.
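To make the inference-optimization side of this stack concrete: engines like vLLM and SGLang are built around continuous batching, where new requests join the running batch as soon as finished sequences free a slot rather than waiting for a full batch to drain. The following is a toy pure-Python sketch of that scheduling idea only; all names are illustrative, and none of this is H Company's code.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Request:
    prompt: str
    max_tokens: int
    generated: list = field(default_factory=list)

class ContinuousBatcher:
    """Toy continuous-batching loop: waiting requests are admitted into
    any free batch slot on every decode step, and finished sequences
    are evicted immediately (the core idea behind vLLM-style schedulers,
    simplified to omit KV-cache and memory accounting)."""

    def __init__(self, max_batch_size: int):
        self.max_batch_size = max_batch_size
        self.waiting = deque()
        self.running = []

    def submit(self, req: Request) -> None:
        self.waiting.append(req)

    def step(self) -> list:
        # Admit waiting requests into free batch slots.
        while self.waiting and len(self.running) < self.max_batch_size:
            self.running.append(self.waiting.popleft())
        # One decode step: append a dummy token per running sequence.
        for req in self.running:
            req.generated.append("<tok>")
        # Evict sequences that hit their token budget.
        done = [r for r in self.running if len(r.generated) >= r.max_tokens]
        self.running = [r for r in self.running if r not in done]
        return done

batcher = ContinuousBatcher(max_batch_size=2)
for prompt, budget in [("a", 1), ("b", 3), ("c", 2)]:
    batcher.submit(Request(prompt=prompt, max_tokens=budget))

finished = []
while batcher.waiting or batcher.running:
    finished.extend(batcher.step())
print([r.prompt for r in finished])  # → ['a', 'b', 'c']
```

Note how "c" starts decoding on the second step, as soon as "a" finishes, even though "b" is still mid-generation; a static batcher would have made it wait for the whole first batch.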
H Company is building training infrastructure for models and agents, scalable inference pipelines, agent platform APIs and runtimes, and GPU kernel optimizations. Recent focus areas include observability suites, instruction-following research, and enterprise platform integration.
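An agent platform runtime of the kind described above typically wraps a model behind a tool-use loop: the model either calls a registered tool or emits a final answer. The sketch below shows that generic control flow under assumed conventions (the "CALL tool: arg" / "FINAL:" protocol, the `run_agent` and `fake_llm` names are all hypothetical, not H Company's API).

```python
def run_agent(llm, tools, task, max_steps=5):
    """Generic plan-act-observe loop. On each step the model sees the
    transcript so far and either invokes a tool ("CALL name: arg"),
    whose observation is appended, or finishes ("FINAL: answer")."""
    transcript = task
    for _ in range(max_steps):
        reply = llm(transcript)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("CALL "):
            name, _, arg = reply[len("CALL "):].partition(": ")
            tool = tools.get(name, lambda a: f"unknown tool: {name}")
            transcript += f"\n{reply}\n-> {tool(arg)}"
        else:
            transcript += f"\n{reply}"
    return "no answer within step budget"

# Scripted stand-in for a model, just to exercise the control flow.
def fake_llm(transcript):
    if "-> 4" in transcript:
        return "FINAL: 4"
    return "CALL calc: 2 + 2"

result = run_agent(fake_llm, {"calc": lambda expr: str(eval(expr))}, "What is 2 + 2?")
print(result)  # prints "4"
```

The runtime's job in a real platform is the bookkeeping around this loop (tool registries, transcript management, step budgets); the model supplies the decisions.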
Other companies in the same industry, closest in size