Gcore operates distributed edge infrastructure and cloud services for media delivery and is now pivoting toward AI inference workloads. The tech stack (Kubernetes, TensorFlow, PyTorch, vLLM, and OpenStack), combined with active projects in AI inference infrastructure and Kubernetes-native AI systems, signals a strategic shift from pure CDN/hosting toward inference-as-a-service. Senior-heavy hiring (43 of 78 open roles), concentrated in engineering, reflects the complexity of building multi-region AI compute rather than incremental scaling of the existing business.
Gcore provides edge computing and cloud infrastructure designed for media companies and entertainment businesses, with a core business in content delivery, hosting, and storage. Founded in 2014, the company operates Points of Presence (PoPs) across Europe, the Middle East, and Asia. The product suite addresses CDN performance, security, and media platform services. In recent years the company has expanded into AI inference and training infrastructure, positioning itself to serve customers running large language models and other compute-intensive workloads at the edge.
Gcore runs Kubernetes and OpenStack for orchestration, with Docker for containerization; Python, Go, and FastAPI for backend services; TensorFlow, PyTorch, and vLLM for AI workloads; and Prometheus/Grafana for observability. Infrastructure spans Intel and NVIDIA hardware on Equinix and partner networks.
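To make the stack concrete, here is a minimal sketch of how these pieces typically compose: a FastAPI service wrapping a vLLM engine behind a single generation endpoint. The model name, request schema, and endpoint path are illustrative assumptions, not Gcore's actual API.

```python
# Minimal sketch: FastAPI service wrapping vLLM for text generation.
# Model name and request schema are illustrative assumptions,
# not Gcore's production API.
from fastapi import FastAPI
from pydantic import BaseModel
from vllm import LLM, SamplingParams

app = FastAPI()
# Loads model weights once at startup; assumes a GPU is available.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # hypothetical model choice

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128

@app.post("/generate")
def generate(req: GenerateRequest):
    params = SamplingParams(max_tokens=req.max_tokens)
    # vLLM batches prompts internally; a single-prompt call
    # returns one RequestOutput.
    outputs = llm.generate([req.prompt], params)
    return {"completion": outputs[0].outputs[0].text}
```

In a production deployment, vLLM's async engine and a dedicated serving layer would replace this single-process setup; the sketch only shows how the backend and inference layers of the stack fit together.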
Active projects center on AI infrastructure: scalable AI inference and model training, Kubernetes-native AI platforms, monitoring and observability for inference, and real-time visibility APIs. Secondary focus includes network automation, latency reduction, and PoP capacity expansion.
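On the monitoring side, a hedged sketch of what inference observability might look like with the Prometheus client library: per-model latency histograms and request counters exposed for scraping. Metric names, labels, and the port are assumptions for illustration, not Gcore's instrumentation.

```python
# Sketch: exposing inference latency and throughput metrics to Prometheus.
# Metric names, labels, and the port are illustrative assumptions.
import time
from prometheus_client import Counter, Histogram, start_http_server

INFERENCE_LATENCY = Histogram(
    "inference_latency_seconds",
    "Wall-clock time per inference request",
    ["model"],
)
INFERENCE_REQUESTS = Counter(
    "inference_requests_total",
    "Total inference requests served",
    ["model", "status"],
)

def run_inference(model_name: str, prompt: str) -> str:
    # Placeholder for the actual model call (e.g., a vLLM generate).
    with INFERENCE_LATENCY.labels(model=model_name).time():
        result = f"echo: {prompt}"  # stand-in for real generation
    INFERENCE_REQUESTS.labels(model=model_name, status="ok").inc()
    return result

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at /metrics for Prometheus to scrape
    while True:
        run_inference("demo-model", "hello")
        time.sleep(1)
```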