GPU platform for AI, graphics, and edge computing
Moore Threads designs GPUs for AI inference, graphics, and edge deployment, with a stack centered on CUDA, PyTorch, and TensorFlow. The project portfolio reveals an infrastructure-first company: GPU drivers, AI inference operators, cloud desktop virtualization, and an emerging educational push via open-source AI tutorials. Hiring velocity is accelerating (11 roles in 30 days), weighted heavily toward engineering internships and early-career roles—typical for a 2020 semiconductor startup scaling software foundations and developer ecosystems in parallel.
Moore Threads, founded in 2020 and based in Beijing, designs GPUs targeting AI, graphics, and metaverse workloads. The company operates across cloud, edge, and mobile deployment surfaces, with particular focus on AI inference optimization and autonomous driving. Development spans low-level GPU drivers, cloud virtualization, and the MUSA software stack, positioning the company as both a hardware vendor and a platform builder. Active hiring in China and the United States reflects geographic expansion of engineering and product capacity.
Moore Threads' software stack targets the major AI frameworks: PyTorch, TensorFlow, PaddlePaddle, MXNet, and Caffe. It also includes CUDA, OpenCL, and Vulkan for compute and graphics APIs, plus MLIR and XLA for compiler-level optimization.
Other companies in the same industry, closest in size