AI detection platform with peer-reviewed accuracy for written content
Pangram Labs builds an AI detection model claiming 99.98% accuracy, backed by peer-reviewed research. The tech stack reveals a heavy ML-infrastructure focus: PyTorch, CUDA, and DeepSpeed for model training, Ray and vLLM for inference, paired with Apache Spark, Beam, and Airflow for data pipelines. Active projects span synthetic dataset creation, LLM inference scaling, and a synthetic text generation pipeline, indicating the company is building both the detection capability and the training data that sustains it. Hiring is accelerating across engineering and research roles, suggesting they are scaling inference capacity and model development in parallel.
Pangram Labs operates a machine learning platform for detecting AI-generated writing, targeting educators, publishers, and content platforms. The company is based in Brooklyn and currently operates with 2–10 employees. Their immediate technical focus centers on three areas: improving the core detection model, scaling inference pipelines for user-facing throughput, and expanding synthetic training datasets. They are also building internal tooling and experimenting with growth loops. Documented pain points include operational scaling, inference pipeline bottlenecks, and user acquisition: typical constraints for an early-stage ML platform balancing model quality with product adoption.
Pangram claims 99.98% accuracy. Their approach is grounded in peer-reviewed research and focuses on helping users understand the origin of written content rather than simple binary classification.
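To illustrate what "understanding the origin of written content rather than simple binary classification" could mean in practice, a detector might expose a calibrated probability and map it to a graded verdict instead of a hard AI/human label. The sketch below is purely illustrative; the function name, thresholds, and labels are invented, not Pangram's actual interface.

```python
# Hypothetical sketch: mapping a calibrated P(AI-generated) score to a
# graded verdict rather than a binary AI/human label. All thresholds
# and label strings are invented for illustration.

def classify_origin(ai_probability: float) -> str:
    """Return a graded verdict for a calibrated AI-generation probability."""
    if not 0.0 <= ai_probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if ai_probability >= 0.98:
        return "likely AI-generated"
    if ai_probability >= 0.60:
        return "possibly AI-generated"
    if ai_probability >= 0.40:
        return "uncertain"
    return "likely human-written"
```

A graded output like this lets downstream users (e.g. an educator reviewing an essay) weigh the evidence rather than act on a single yes/no flag.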
The stack includes PyTorch, CUDA, and DeepSpeed for model training; Ray, vLLM, and Apache Airflow for inference and data orchestration; and AWS/GCP for cloud compute. This setup supports both model development and production inference scaling.
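A recurring theme in a stack like this is throughput-oriented batching: grouping incoming documents so each model forward pass scores many texts at once. The sketch below is a generic, framework-free illustration of that pattern (not Pangram's code; the `model` callable and batch size are stand-ins).

```python
# Generic sketch of batched inference for throughput. The `model`
# argument stands in for a real scorer (e.g. a vLLM- or PyTorch-backed
# service); here it is any callable that scores a batch of strings.
from typing import Callable, Iterable, Iterator, List

def batched(items: Iterable[str], batch_size: int) -> Iterator[List[str]]:
    """Group documents into fixed-size batches, yielding a final partial batch."""
    batch: List[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def run_inference(
    docs: Iterable[str],
    model: Callable[[List[str]], List[float]],
    batch_size: int = 32,
) -> List[float]:
    """Score documents batch-by-batch and return scores in input order."""
    scores: List[float] = []
    for batch in batched(docs, batch_size):
        scores.extend(model(batch))
    return scores
```

In production this batching would typically live inside the serving layer (vLLM performs continuous batching internally), but the shape of the pipeline is the same: accumulate, score, emit in order.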