
Nebius Tech Stack

Full-stack AI cloud infrastructure for model training and deployment

Technology, Information and Internet · Amsterdam · 501–1,000 employees · Public Company

Nebius operates a GPU-backed AI cloud platform targeting ML practitioners at startups, enterprises, and research institutions. The tech stack—PyTorch, Ray, Slurm, MLflow, NVIDIA H200 GPUs, and Kubernetes—is purpose-built for distributed training and inference workloads. With 96 engineering and 30 operations roles open across 17 countries, and projects spanning hardware validation, data center buildout, and regional go-to-market expansion, Nebius is scaling its infrastructure-as-a-service offering for generative AI.

Tech Stack (158 technologies)

Core Stack: Azure AD, Python, HubSpot, Linux, MLflow, PyTorch, Kubernetes, Go, Azure Functions, Terraform, C#, Microsoft Sentinel, Microsoft Defender XDR, Microsoft Purview, Kusto Query Language, PowerShell, Microsoft Graph API, Bash, Azure Logic Apps, QEMU, KVM, Ray, Slurm, NVIDIA, SQL, H200 GPU, API Management, Azure Service Bus, Event Grid, REST API, plus 122 more

Adopting: NVIDIA

What Nebius Is Building

Challenges

  • Reducing infrastructure costs
  • Scaling inference platform usage across regions
  • Improving SLA performance
  • Building an L3 support line from scratch
  • Fleet stability
  • Reducing resolution times
  • Hardware firmware issues
  • Large-scale AI deployments
  • Optimizing GPU performance for ML training
  • Building large in-house AI/ML teams

Active Projects

  • US market pipeline development
  • Hardware validation rollout
  • ATS-to-HRIS workflow ownership
  • Design partner initiatives and joint campaigns
  • Hardware reliability strategy
  • Post-hire quality survey design
  • Low-precision training & inference
  • AI data center development
  • Implementing asset management processes
  • Regional go-to-market strategy

Hiring Activity

Accelerating: 220 roles · 140 opened in the last 30 days

Department

  • Engineering: 96
  • Ops: 30
  • Sales: 21
  • Support: 14
  • HR: 13
  • Marketing: 13
  • Security: 8
  • Data: 6

Seniority

  • Senior: 122
  • Mid: 46
  • Manager: 30
  • Lead: 9
  • Junior: 5
  • Director: 3
  • C-Level: 1

Notable leadership hires: Warehouse Operations Lead, Regional Sales Director, Director GTM M&A


About Nebius

Nebius provides cloud infrastructure optimized for AI model training and inference. The platform is built on NVIDIA GPUs (H200), orchestrated through Kubernetes and Slurm, and supports PyTorch and Ray for distributed workloads. Customers include startups, enterprises, and scientific institutions building and deploying generative AI applications. The company operates from Amsterdam with a distributed engineering and operations footprint across Europe, North America, the Middle East, and Asia-Pacific. Current focus areas include hardware reliability, regional scaling, inference platform expansion, and cost optimization.

Headquarters: Amsterdam
Company Size: 501–1,000 employees
Hiring Markets: Czechia, Netherlands, United States, United Kingdom, Finland, Denmark, Israel, Bulgaria

Frequently Asked Questions

What GPUs does Nebius use for AI training?

Nebius deploys NVIDIA H200 GPUs as core infrastructure, orchestrated with Kubernetes and Slurm for distributed training jobs.
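The Slurm-plus-Kubernetes pattern described above is typically wired together through Slurm's environment variables. As a minimal, illustrative sketch (not Nebius's actual configuration), a training script launched via `srun` might derive its `torch.distributed` parameters like this:

```python
import os

def slurm_dist_params(default_port: int = 29500) -> dict:
    """Map standard Slurm environment variables to the values
    torch.distributed.init_process_group() expects.

    SLURM_PROCID  -> global rank of this task
    SLURM_NTASKS  -> world size (total tasks across all nodes)
    SLURM_LOCALID -> local rank (which GPU on this node to bind)
    """
    rank = int(os.environ.get("SLURM_PROCID", "0"))
    world_size = int(os.environ.get("SLURM_NTASKS", "1"))
    local_rank = int(os.environ.get("SLURM_LOCALID", "0"))
    # The rendezvous address is usually the first node in the job
    # allocation; fall back to localhost for single-node runs.
    master_addr = os.environ.get("MASTER_ADDR", "127.0.0.1")
    return {
        "backend": "nccl",  # the standard backend for NVIDIA GPUs
        "init_method": f"tcp://{master_addr}:{default_port}",
        "rank": rank,
        "world_size": world_size,
        "local_rank": local_rank,
    }
```

In a real job, `srun` would launch one task per GPU, and each task would pass these values to `torch.distributed.init_process_group` (and bind its device via the local rank) before training starts.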

Does Nebius use Kubernetes?

Yes. Kubernetes is a primary orchestration layer, paired with Slurm for workload scheduling and Ray for distributed ML job coordination.
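On the Kubernetes side, GPU workloads on clusters running NVIDIA's device plugin are scheduled by requesting the `nvidia.com/gpu` extended resource. A minimal pod-spec sketch (the pod name and image are hypothetical, not taken from Nebius):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-worker        # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: example.registry/pytorch-trainer:latest  # illustrative image
      command: ["python", "train.py"]
      resources:
        limits:
          nvidia.com/gpu: 8    # request a full 8-GPU node
```

The scheduler places the pod only on nodes advertising enough free GPUs, and the device plugin exposes the allocated devices inside the container.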
