European sustainable cloud infrastructure with AI inference capabilities
Yorizon Cloud operates a nascent infrastructure-as-a-service platform built on NVIDIA GPUs, vLLM, and OpenStack, with active development on LangGraph and CrewAI for agentic AI workloads. Founded in 2024, the company is hiring across engineering and technical support while simultaneously building order-to-cash and billing infrastructure, a pattern typical of early-stage platform companies moving from product development to operational maturity. The shape of the tech stack (GPU inference plus AI orchestration plus a classical hypervisor stack) and the project focus on partner activation and channel scaling suggest a go-to-market model centered on reseller networks rather than direct enterprise sales.
Notable leadership hires: Technical Support Lead, AI Operating Model Lead
Yorizon Cloud builds a sustainable, Europe-based cloud infrastructure platform offering IaaS and PaaS services. The company operates a network of data centers integrated with GPU hardware, hypervisors, and inference-optimized software (NVIDIA Triton, TensorRT, llama.cpp) to support both traditional workloads and emerging AI inference use cases. The platform is positioned as a regulated, security- and compliance-focused alternative to the hyperscalers, with a stated focus on carbon efficiency. The go-to-market strategy runs through a partner ecosystem; active projects include partner program development, channel-sales scaling, and partner activation. The organization is currently building core business infrastructure (billing systems, reporting, analytics), indicating a transition from product development to operational scaling.
Core stack: NVIDIA GPUs, vLLM, OpenStack, Kubernetes, Terraform, CUDA, TensorRT, Triton Inference Server. Currently adopting LangGraph, CrewAI, AutoGen for agentic AI workloads. Operating system layer includes Linux, Unix, Windows.
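Two of the inference components named above, vLLM and Triton, can expose OpenAI-compatible HTTP endpoints. As a rough sketch of what a tenant-facing call against such a platform might look like (the endpoint URL and model name below are hypothetical placeholders, not documented Yorizon services):

```python
import json
from urllib import request

# Hypothetical endpoint of an OpenAI-compatible vLLM/Triton server;
# the hostname and model identifier are assumptions for illustration only.
ENDPOINT = "https://inference.example.invalid/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello")
print(req.get_method())  # POST
```

Because the request body follows the OpenAI chat-completion schema, a workload written against one serving engine could in principle be pointed at the other by changing only the endpoint URL.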
Berlin, Germany. The company also hires in Austria and Peru. Founded in 2024 as an independent entity.