ComfyUI operates a modular, open-source platform for orchestrating AI inference pipelines, built on PyTorch and deployed via Kubernetes across GCP, AWS, and Azure. The engineering-heavy org (10+ engineers, mostly at senior or lead level) is actively investing in inference optimization and GPU scheduling while managing explosive community adoption; simultaneous hiring across design, product, and marketing signals a shift from pure infrastructure toward productizing the platform and monetizing community contributions.
Notable leadership hires: Creative Director
ComfyUI is a node-graph-based interface for building and executing AI inference workflows, primarily for image generation. The company was founded in 2024 and operates from San Francisco with 11–50 employees. The technical foundation spans PyTorch backends, Kubernetes orchestration, and multi-cloud deployment (GCP, AWS, Azure). Core work includes inference-engine optimization, GPU scheduling for heterogeneous workloads, and a design system to support new product launches. A significant part of the roadmap focuses on securely vetting and deploying custom nodes contributed by an active open-source community.
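To illustrate the node-graph execution model described above, here is a minimal sketch of a DAG-based workflow runner in Python. This is not ComfyUI's actual API; the node names, wiring format, and `run_workflow` helper are all hypothetical, shown only to convey how nodes execute in dependency order.

```python
# Illustrative sketch of a node-graph workflow executor (hypothetical API,
# not ComfyUI's real implementation).
from graphlib import TopologicalSorter

def run_workflow(nodes, edges, inputs):
    """Execute a DAG of nodes in dependency order.

    nodes:  {name: callable(**kwargs) -> value}
    edges:  {name: {param: upstream_name}}  wiring node inputs to outputs
    inputs: {name: {param: value}}          literal inputs
    """
    # Build dependency sets: each node depends on its upstream nodes.
    deps = {name: set(wiring.values()) for name, wiring in edges.items()}
    for name in nodes:
        deps.setdefault(name, set())
    results = {}
    # static_order() yields nodes so that dependencies come first.
    for name in TopologicalSorter(deps).static_order():
        kwargs = dict(inputs.get(name, {}))
        for param, upstream in edges.get(name, {}).items():
            kwargs[param] = results[upstream]  # feed upstream output in
        results[name] = nodes[name](**kwargs)
    return results

# Hypothetical three-node pipeline: load -> upscale -> save
nodes = {
    "load": lambda path: f"image:{path}",
    "upscale": lambda img: img + ":2x",
    "save": lambda img: f"saved({img})",
}
edges = {"upscale": {"img": "load"}, "save": {"img": "upscale"}}
inputs = {"load": {"path": "cat.png"}}
out = run_workflow(nodes, edges, inputs)
```

The same topological-ordering idea underlies any node-graph UI: the graph is serialized as nodes plus wiring, and the backend resolves execution order before dispatching work.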
PyTorch, Python, Go, Kubernetes, PostgreSQL, React, Vue, TypeScript, Terraform, GCP, AWS, Azure, GitHub, and infrastructure tools like Prometheus, Datadog, and Loki.
Core inference-engine optimization, GPU scheduling for heterogeneous workloads, secure deployment of community-contributed custom nodes, frontend architecture for new products, and a design system to support scaling.
Other companies in the same industry, closest in size