Luma builds multimodal foundation models (video, 3D, generative media) with a product-first deployment strategy: Dream Machine ships generative video to creators, while parallel work on agent stacks and AI-accelerated production workflows suggests a shift from single-modality generation toward agentic systems that operate across domains. The hiring mix is heavily research-forward (17 dedicated research roles against 42 engineering), yet sales and product teams are growing, indicating a transition from research output to commercialization, with noted friction around moving research into production systems and scaling inference.
Notable leadership hires: Delivery Lead, Partner Marketing Lead
Luma AI develops multimodal foundation models and generative media products targeting creators and production teams. Dream Machine, the flagship product, enables fast visual generation from text and images. The tech stack spans PyTorch, JAX, and TensorFlow for model development; Docker, Kubernetes, and CUDA/NCCL for distributed training and inference; and Next.js/React for product surfaces. Active projects include scaling multimodal training, agent stack development, and production workflow automation. The company is expanding hiring across Peru, United States, United Kingdom, Saudi Arabia, Germany, and Kyrgyzstan, with most roles concentrated in senior and mid-level engineering and research.
Core ML: PyTorch, JAX, TensorFlow, CUDA, NCCL. Infrastructure: Docker, Kubernetes, Linux. Web: Next.js, React, JavaScript, TypeScript. Testing: Pytest, Playwright, Cypress, Selenium. Adopting: RAG, GitOps, ComfyUI.
Other companies in the same industry, closest in size