Full-stack AI infrastructure platform for enterprise GPU and Kubernetes operations
Mirantis provides AI infrastructure automation across Kubernetes, OpenStack, and GPU orchestration for large enterprises. The tech stack reveals a testing- and validation-heavy operation: Playwright, Cypress, Selenium, TestRail, and Xray dominate the tooling, paired with ML-ops primitives (Kubeflow, KServe, vLLM, KubeVirt). Active projects center on AI workload deployments and multi-cloud testing at scale, while pain points cluster around GPU automation complexity and infrastructure-operations friction. Hiring is engineering-led (62 roles, mostly senior-level) and decelerating, suggesting mature product focus over rapid expansion.
Mirantis builds infrastructure automation software for AI workloads, containerized applications, and hybrid cloud environments. The platform spans orchestration (Kubernetes, OpenStack), GPU resource management, and CI/CD automation, targeting enterprises managing multi-cloud and on-premises deployments. Founded in 1999 and headquartered in Campbell, California, the company operates across 12 countries with 501–1,000 employees. Core competencies include Kubernetes, Docker, virtualization, and open-source infrastructure projects.
Mirantis's tooling spans infrastructure and CI/CD (Kubernetes, Docker, OpenStack, Linux, Harbor, ArgoCD, Grafana, GitHub Actions), test automation (Playwright, Cypress, Selenium, TestRail), AI and ML operations (Kubeflow, KServe, vLLM, KubeVirt), and development in Python and Go.
Yes. Engineering represents 62 of 98 active roles, predominantly senior-level positions. Hiring is active across Czechia, Bulgaria, the US, Poland, Ukraine, Spain, the Netherlands, Latvia, India, the UK, Finland, and Belgium.
Current projects include scalable test automation, Kubernetes cluster management, customer infrastructure deployments, AI workload proofs of concept, CI/CD pipeline optimization, and Mirantis OpenStack for Kubernetes (MOSK) product development.