AI research lab focused on intelligent computing and materials science
Zhejiang Lab is a state-backed research institution founded in 2017 that applies AI, particularly foundation models and multimodal inference, to materials science and R&D workflows. The tech stack (PyTorch, Megatron-LM, DeepSpeed, Kubernetes, Hugging Face) reflects a research-heavy operation oriented toward large-scale model training and deployment. Pain points center on compute optimization, intelligent scheduling, and adapting general-purpose models to domain-specific scientific data, which maps directly to its active work on materials-focused inference platforms and elastic infrastructure.
Zhejiang Lab operates as a nonprofit research institute in Hangzhou, China, with 1,001–5,000 employees. Its mission centers on intelligent computing as a strategic technology platform supporting China's innovation-driven development goals. The organization runs three core operational areas: applied AI research (foundation models and multimodal reasoning), platform engineering (scheduling, distributed storage, inference frameworks), and materials science integration. Current hiring is concentrated in engineering and research roles; the pace is accelerating and the workforce skews senior, indicating expansion of in-house model development and infrastructure capacity.
PyTorch, Megatron-LM, DeepSpeed, Hugging Face, GPT, Claude, and Kubernetes. The stack reflects a focus on large-scale model training, inference optimization, and distributed deployment.
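In a stack like this, the training side is typically wired together through a DeepSpeed JSON config passed to the launcher alongside a PyTorch/Megatron-LM training script. The fragment below is purely illustrative of that pattern; all values are assumptions, not the lab's actual settings (JSON does not allow comments, so the hedging lives here):

```json
{
  "train_batch_size": 512,
  "gradient_accumulation_steps": 4,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  }
}
```

ZeRO stage 2 with optimizer-state CPU offload is a common memory/throughput trade-off for large-model training on limited GPU memory, which is consistent with the compute-optimization pain points noted above.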
Materials science AI integration (multimodal inference models, lightweight model deployment for materials R&D), foundation model research and deployment, and platform infrastructure (scheduling, distributed storage, elastic computing layers for high-performance access).
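The scheduling and elastic-computing work mentioned above reduces, at its core, to placing resource-hungry jobs onto a shared GPU pool. A minimal sketch of that placement problem, in plain Python: greedy, priority-ordered bin packing of jobs onto nodes. Every name and number here is hypothetical and far simpler than a production scheduler; it is not Zhejiang Lab's actual system.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                         # lower value = scheduled first
    name: str = field(compare=False)
    gpus_needed: int = field(compare=False)

def schedule(jobs, nodes):
    """Greedily place jobs on the first node with enough free GPUs.

    `nodes` maps node name -> free GPU count (mutated in place).
    Returns a job-name -> node-name placement dict; jobs that do not
    fit anywhere are simply left unscheduled.
    """
    placements = {}
    heap = list(jobs)
    heapq.heapify(heap)                   # pop jobs in priority order
    while heap:
        job = heapq.heappop(heap)
        for node, free in nodes.items():
            if free >= job.gpus_needed:
                nodes[node] = free - job.gpus_needed
                placements[job.name] = node
                break
    return placements

nodes = {"node-a": 8, "node-b": 4}
jobs = [Job(2, "finetune", 4), Job(1, "pretrain", 8), Job(3, "eval", 2)]
print(schedule(jobs, nodes))
# → {'pretrain': 'node-a', 'finetune': 'node-b'}  ("eval" does not fit)
```

Real elastic layers add preemption, gang scheduling, and dynamic resizing of running jobs, but the core decision (match a prioritized queue against fragmented free capacity) is the one sketched here.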