GPU infrastructure platform for large-scale AI compute deployment
Fluidstack operates a GPU compute infrastructure business serving AI labs and enterprises, with a tech stack anchored in infrastructure-as-code (Terraform, Ansible, Puppet, Chef) and observability (Prometheus, Grafana, Elasticsearch). The hiring mix reflects the company's capital-intensive, operations-driven model: engineering (103 roles) is paired with heavy operations (45), logistics (14), and construction (5) hiring, alongside new leadership roles in supply chain and tax, a pattern consistent with rapid data center buildout. Active projects span GPU cluster deployment, data center control systems, and WAN automation, while pain points cluster around scaling GPU deployments to the gigawatt level and maintaining supply chain continuity.
Notable leadership hires: Capacity Lead, Tax Director, Payroll Director, Logistics Director, Supply Chain Director
Fluidstack provides GPU compute infrastructure for AI training and inference workloads, partnering with AI research labs, governments, and enterprises. The company operates a capital-intensive, geographically distributed model: it is building new data centers, deploying GPU clusters, and automating the operational systems (power, cooling, networking) required to run large-scale AI workloads. The platform includes digital twin tooling for deployment planning and contingency routing for cluster redundancy. Fluidstack is headquartered in New York and is actively hiring across the United States and United Kingdom.
The stack is anchored in infrastructure-as-code (Terraform, Ansible, Puppet, Chef) and observability (Prometheus, Grafana, Elasticsearch), alongside Linux, Python, Go, Bash, and networking tooling (Ciena, NetBox). The company is adopting vLLM and TensorRT-LLM to optimize LLM inference.
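The observability stack above centers on Prometheus, which scrapes metrics from exporters in a plain-text exposition format. As a minimal, stdlib-only sketch of what a GPU-fleet exporter might emit (the metric name, labels, and values here are illustrative assumptions, not taken from the source):

```python
# Illustrative sketch only: metric names and utilization values are assumptions.
def render_metrics(gpu_utilization: dict[int, float]) -> str:
    """Render per-GPU utilization in the Prometheus text exposition format."""
    lines = [
        "# HELP gpu_utilization_ratio Per-GPU utilization (0-1)",
        "# TYPE gpu_utilization_ratio gauge",
    ]
    for gpu_id, util in sorted(gpu_utilization.items()):
        # One sample line per GPU, keyed by a "gpu" label.
        lines.append(f'gpu_utilization_ratio{{gpu="{gpu_id}"}} {util}')
    return "\n".join(lines) + "\n"

print(render_metrics({0: 0.92, 1: 0.87}))
```

In practice a deployment like this would expose such output over HTTP (typically via an exporter library) for Prometheus to scrape and Grafana to chart.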
New York, NY. The company is hiring in the United States and United Kingdom.
Other companies in the same industry, closest in size