AI workload orchestration platform for hybrid infrastructure
Open Innovation AI builds a Kubernetes-native orchestration platform, the Open Innovation Cluster Manager (OICM), for managing AI compute across GPU-diverse, multi-cloud environments. The tech stack is infrastructure-heavy (Kubernetes, Terraform, ArgoCD, Prometheus, Grafana) and paired with ML frameworks (PyTorch, TensorFlow, vLLM), which points to a company solving the operational complexity of federated AI deployments rather than doing model training itself. Hiring is engineering-dominant (6 of 11 roles) and senior-skewed (8 senior+ hires), suggesting a build-out toward customer-scale reliability and infrastructure depth, not early-stage experimentation.
Open Innovation AI develops a platform for orchestrating AI workloads across heterogeneous GPU clusters and cloud providers. The product, Open Innovation Cluster Manager, abstracts hardware diversity (NVIDIA, AMD, Intel accelerators) and cloud topology (AWS EKS, Azure, GCP, OpenShift), allowing enterprises to manage AI jobs without lock-in to a single infrastructure vendor or accelerator family. The company targets mid-to-large organizations seeking to reduce operational overhead and accelerate deployment cycles for AI applications. Founded in 2022 and based in London, the 51–200-person team operates across UK and UAE hiring markets, with active expansion into sales and customer success roles alongside core infrastructure engineering.
The Open Innovation Cluster Manager (OICM) orchestrates AI workloads across diverse GPU hardware and cloud infrastructure (AWS, Azure, GCP, OpenShift, Kubernetes), reducing operational costs and deployment complexity for enterprise AI applications.
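OICM's own APIs aren't documented here, so as a minimal sketch of the raw Kubernetes mechanics such a platform abstracts, the Python snippet below submits a batch job pinned to one accelerator family using a vendor extended resource name and a node selector. The `accelerator` label key, container image, and resource quantities are illustrative assumptions, not OICM's interface.

```python
# Illustrative sketch only: the plain-Kubernetes plumbing an orchestrator
# like OICM hides. Label keys, image, and resource names are assumptions.
from kubernetes import client, config

def submit_gpu_job(accelerator_label: str, gpu_resource: str, gpu_count: int = 1):
    """Submit a batch job pinned to one accelerator family.

    gpu_resource is the vendor's extended resource name, e.g.
    "nvidia.com/gpu" or "amd.com/gpu" (device plugins assumed installed).
    """
    config.load_kube_config()  # or load_incluster_config() inside a pod

    container = client.V1Container(
        name="trainer",
        image="example.com/train:latest",  # hypothetical image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={gpu_resource: str(gpu_count)},
        ),
    )
    pod_spec = client.V1PodSpec(
        restart_policy="Never",
        # Pin to nodes labeled with the target accelerator family; the
        # "accelerator" label key is an assumed cluster convention.
        node_selector={"accelerator": accelerator_label},
        containers=[container],
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(generate_name="train-"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=2,
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

# e.g. submit_gpu_job("nvidia-a100", "nvidia.com/gpu")
#      submit_gpu_job("amd-mi300", "amd.com/gpu")
```

In stock Kubernetes, each vendor's device plugin exposes its own resource name (nvidia.com/gpu, amd.com/gpu, and so on), which is exactly the per-vendor divergence a cluster manager like OICM is positioned to paper over.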
Infrastructure: Kubernetes, Terraform, Helm, ArgoCD, AWS EKS, OpenShift. Observability: Prometheus, Grafana, Loki. Languages: Python, Go, C++, Java. ML: PyTorch, TensorFlow, vLLM, llama.cpp. CI/CD and containerization: GitLab CI/CD, Docker.
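On the serving side, vLLM in the stack suggests high-throughput LLM inference. As a minimal, self-contained illustration of vLLM's public offline-inference API (the model id and prompt are placeholders, not details from this listing):

```python
# Minimal offline-inference sketch with vLLM; model id and prompt are
# placeholders, not details from Open Innovation AI's stack.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model for illustration
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Summarize Kubernetes in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```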