ClearML operates an infrastructure layer for ML teams managing GPUs and training pipelines at scale. The stack (Python, Kubernetes, Slurm, CUDA, MongoDB) points to a systems-level platform built for distributed compute. Hiring is heavily weighted toward engineering and senior/lead roles across Europe and North America, alongside accelerating marketing activity; the stated pain points (scaling solutions architecture, complex enterprise pre-sales, secure deployments) suggest a shift from self-serve open-source adoption toward a high-touch enterprise motion.
ClearML is an infrastructure platform that helps organizations optimize GPU utilization, manage AI/ML workflows, and deploy generative AI models. The product serves data science and IT teams across Fortune 500 companies, public sector agencies, startups, and academia. ClearML operates a dual model: a free open-source tier and hosted/self-hosted deployments. The company is an NVIDIA partner and reports over 2,100 customers and 300,000 users globally. Engineering is anchored in Python and containerization (Kubernetes, Docker); the platform integrates with major cloud providers (AWS, GCP, Azure) and supports orchestration frameworks like Slurm and MPI for distributed training.
Python, Kubernetes, MongoDB, Elasticsearch, Docker, CUDA, Slurm, AWS, GCP, and Azure. The stack reflects a systems platform optimized for GPU-accelerated workloads and distributed ML infrastructure.
Yes. Engineering roles dominate the hiring mix (10 of 18 active roles), weighted toward senior and lead levels. Recent postings span Poland, South Africa, the Netherlands, France, Germany, Italy, the US, and Portugal.
Over 2,100 customers with 300,000 users globally, including Fortune 500 companies, public sector agencies, startups, and academic institutions. ClearML partners with NVIDIA.