
Magic Tech Stack

Frontier AI model development with GPU-scale infrastructure

Software Development · San Francisco · 2–10 employees · Founded 2022 · Privately Held

Magic is building frontier-scale code models on massive GPU clusters, operating the full stack from kernel optimization to trillion-parameter training. The tech stack (CUDA, XLA, Triton, Ray, Kubernetes across GCP/AWS/Azure/OCI) reflects deep infrastructure work, while the project list (sandboxed execution, high-density compute, internet-scale data pipelines, inference optimization) points to a company building toward inference at scale, not just training. Engineering-dominated hiring and heavy investment in storage, caching, and fault tolerance suggest the team is solving hard systems problems upstream of the model itself.

Tech Stack (28 technologies)

Core Stack: AWS, Terraform, Pulumi, CloudFormation, C++, Go, Rust, GitHub, Kubernetes, CUDA, XLA, GPU, TPU, GCP, Azure, OCI, AWS CDK, C, NVMe, NFS, cgroups, SSD, Ray, Ruff, LinkedIn, NCCL, Triton, Mojo

What Magic Is Building

Challenges

  • Maintaining diverse datasets at scale
  • Navigating unknown product development
  • Optimizing inference latency
  • AI training and inference process protection
  • Emerging threat detection
  • Red-teaming vulnerability reduction
  • Highly available training
  • Generating synthetic datasets reliably
  • Fault detection and recovery
  • Data infrastructure ergonomics

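Two of the challenges above, highly available training and fault detection and recovery, usually come down to periodic checkpointing with automatic resume. A minimal sketch of that pattern, with all names and the serialization scheme purely illustrative (this is not Magic's code):

```python
import os
import pickle
import tempfile

# Illustrative checkpoint/resume loop for long-running training.
# Function names, state layout, and intervals are assumptions.

def save_checkpoint(path, step, state):
    # Write atomically: dump to a temp file, then rename over the target,
    # so a crash mid-write never leaves a corrupt checkpoint behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if not os.path.exists(path):
        return 0, {"loss_sum": 0.0}
    with open(path, "rb") as f:
        ckpt = pickle.load(f)
    return ckpt["step"], ckpt["state"]

def train(path, total_steps, ckpt_every=10, fail_at=None):
    step, state = load_checkpoint(path)
    while step < total_steps:
        if fail_at is not None and step == fail_at:
            raise RuntimeError("simulated node failure")
        state["loss_sum"] += 1.0 / (step + 1)  # stand-in for a real update
        step += 1
        if step % ckpt_every == 0:
            save_checkpoint(path, step, state)
    return step, state
```

After a simulated failure at step 25, rerunning `train` picks up from the step-20 checkpoint rather than step 0; real training stacks apply the same idea to sharded optimizer state across thousands of GPUs.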
Active Projects

  • Sandboxed execution environments
  • Self-service compute platform
  • High-performance storage and caching systems to support long-context inference and training
  • High-density compute provisioning
  • Evaluate porting compute kernels to alternative hardware
  • Massive-scale GPU supercomputing infrastructure
  • Build out internet-scale data pipelines and crawlers
  • Train trillion-parameter models on large GPU clusters
  • Optimize inference throughput for novel model architectures
  • Post-training dataset acquisition
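The storage-and-caching project above hints at the kind of cache layer long-context inference needs: key/value blocks from earlier tokens are kept hot and evicted under memory pressure. A toy LRU block cache sketches the idea; the block keys, capacity units, and eviction policy are assumptions, not Magic's design:

```python
from collections import OrderedDict

# Toy LRU cache for attention key/value blocks in long-context inference.
# Purely illustrative: real systems tier these blocks across HBM, NVMe,
# and network storage rather than a single in-memory dict.

class KVBlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self._blocks = OrderedDict()  # (seq_id, block_idx) -> kv payload

    def get(self, seq_id, block_idx):
        key = (seq_id, block_idx)
        if key not in self._blocks:
            return None  # cache miss: caller must recompute or fetch
        self._blocks.move_to_end(key)  # mark as most recently used
        return self._blocks[key]

    def put(self, seq_id, block_idx, kv):
        key = (seq_id, block_idx)
        self._blocks[key] = kv
        self._blocks.move_to_end(key)
        while len(self._blocks) > self.capacity:
            self._blocks.popitem(last=False)  # evict least recently used
```

Block-level granularity matters because a million-token context will not fit in one device's memory; only the blocks a request is actively attending over need to be resident.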

Hiring Activity

Decelerating · 25 roles · 5 in 30d

Department

Engineering: 15
Data: 4
Research: 2
HR: 1
Product: 1
Security: 1

Seniority

Senior: 17
Mid: 6
Lead: 1

About Magic

Magic develops frontier-scale code models designed as AI coworkers rather than code assistants. Founded in 2022 and based in San Francisco, the company operates as a small, engineering-focused team (2–10 employees, 15 of 24 hires in engineering) building infrastructure for training and deploying very large models. The technical footprint spans model training on GPU supercomputer clusters, inference optimization for novel architectures, and supporting infrastructure: sandboxed execution environments, high-performance storage systems for long-context inference, and internet-scale data acquisition pipelines. The team is also investing in safety and robustness (red-teaming, fault detection, synthetic dataset generation).

Headquarters: San Francisco
Company Size: 2–10 employees
Founded: 2022
Hiring Markets: United States

Frequently Asked Questions

What tech stack does Magic use?

Magic uses CUDA, XLA, Triton for GPU compute; Ray for distributed training; Kubernetes for orchestration; Terraform/Pulumi for infrastructure-as-code; GCP, AWS, Azure, and OCI for cloud resources; and Rust, C++, Go for systems work.

What is Magic working on?

Magic is focused on frontier AI infrastructure: training trillion-parameter models on GPU clusters, optimizing inference throughput, building sandboxed execution and high-performance storage systems, acquiring internet-scale training data, and addressing fault tolerance and security in large-scale training pipelines.
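"Sandboxed execution" in the answer above typically means running untrusted, model-generated code under hard limits. A minimal stdlib sketch uses a separate interpreter process with a wall-clock timeout and captured output; this is a deliberate simplification (production sandboxes layer cgroups, namespaces, and syscall filtering on top), not Magic's implementation:

```python
import subprocess
import sys

# Minimal sandboxed-execution sketch: run code in a child interpreter
# with a hard wall-clock timeout and captured stdout/stderr.
# Illustrative only; real sandboxes add cgroups, namespaces, and seccomp.

def run_sandboxed(code, timeout_s=2.0):
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: isolated mode
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return {
            "ok": proc.returncode == 0,
            "stdout": proc.stdout,
            "stderr": proc.stderr,
        }
    except subprocess.TimeoutExpired:
        return {"ok": False, "stdout": "", "stderr": "timed out"}
```

The timeout is the key safety property here: model-generated code that loops forever is killed rather than stalling the training or evaluation pipeline that invoked it.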
