GPU infrastructure and HPC clusters for AI workloads at scale
Sesterce operates high-performance GPU clusters ranging from 100 to 15,000 GPUs, built on Kubernetes and Docker across AWS, GCP, and Azure. The tech stack points to a production-grade infrastructure play: container orchestration, multi-cloud networking (InfiniBand, Ethernet, Cisco/Arista/Juniper), and ML frameworks (TensorFlow, PyTorch, scikit-learn). The company is actively designing new datacenters and optimizing energy efficiency; its pain points (global HPC expansion, architecture scaling, datacenter acquisition costs) suggest it is building toward European capacity while managing unit economics.
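The unit-economics angle can be made concrete with a back-of-the-envelope energy calculation. The sketch below is purely illustrative: the per-GPU power draw, PUE, and electricity price are assumptions, not Sesterce figures.

```python
# Hypothetical illustration of datacenter unit economics: facility energy
# cost of one GPU-hour. All numeric inputs are assumed, not company data.

def energy_cost_per_gpu_hour(gpu_power_kw: float, pue: float,
                             price_per_kwh: float) -> float:
    """Return the facility energy cost (currency/hour) of running one GPU.

    gpu_power_kw  -- average board power draw of one GPU, in kW (assumed)
    pue           -- Power Usage Effectiveness: total facility power
                     divided by IT power; lower is more efficient
    price_per_kwh -- electricity price per kWh (assumed)
    """
    # One GPU-hour of IT load, scaled up by PUE to cover cooling,
    # power conversion, and other facility overhead.
    return gpu_power_kw * pue * price_per_kwh

# Example: a 0.7 kW GPU in a facility with PUE 1.3 at 0.10 per kWh.
cost = energy_cost_per_gpu_hour(gpu_power_kw=0.7, pue=1.3, price_per_kwh=0.10)
```

Improving PUE from 1.3 toward 1.1 cuts this cost linearly, which is why energy optimization shows up as a first-class project alongside datacenter design.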
Sesterce provides managed GPU infrastructure and HPC clusters for AI builders, with deployments scaling from 100 to 15,000 GPUs. The company operates datacenters primarily in France and is pursuing expansion across Europe. Services include flexible cloud and cluster offerings, with active projects spanning datacenter design, deployment, energy optimization, and commercialization. The organization is engineering-led (seven senior and mid-level engineers) with ops, finance, and sales functions, a profile typical of an infrastructure-as-a-service startup managing complex hardware and multi-tenant workloads.
Tech stack: Kubernetes, Docker, AWS/GCP/Azure, Python, Jenkins, GitLab CI/CD, Terraform, Ansible, TensorFlow, PyTorch, InfiniBand, Ethernet, Cisco/Arista/Juniper networking.
Current focus: HPC cluster optimization, datacenter expansion and acquisition, energy efficiency, deployment automation, and commercialization of GPU infrastructure services across Europe.
Other companies in the same industry, closest in size