Data reasoning infrastructure for structured AI decision-making
Native is a startup founded in 2025 that builds foundation models for tabular data and structured reasoning. The core of the stack (PyTorch, Ray, JAX, Python, Rust, Java) signals a research-forward org focused on training and inference optimization. Active projects span self-supervised learning objectives for tabular data, grounding LLM agents in structured representations, and simulating long-horizon agent behavior. The hiring mix (7 of 8 recent roles at senior level, split between engineering and research) and the repeated emphasis on GPU cluster optimization suggest a team racing to solve scaling bottlenecks rather than filling out a mature product org.
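The brief names "self-supervised learning objectives for tabular data" without elaborating. A common pattern in that area is masked-feature reconstruction: hide a random subset of cells in a table and train a model to predict them from the remaining cells. The sketch below is purely illustrative and is not Native's method; the function name and the column-mean "model" standing in for a learned reconstructor are assumptions for demonstration.

```python
import numpy as np

def masked_feature_loss(X, mask_frac=0.15, seed=0):
    """Toy masked-feature objective for a numeric table X (rows x cols):
    randomly hide a fraction of cells, "reconstruct" them with column
    means (a stand-in for a trained model), and score mean squared
    error on the hidden cells only."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < mask_frac   # boolean mask of hidden cells
    if not mask.any():
        return 0.0                           # nothing was masked
    col_means = X.mean(axis=0)               # trivial per-column predictor
    recon = np.where(mask, col_means, X)     # fill hidden cells with predictions
    return float(((recon - X)[mask] ** 2).mean())
```

In a real training loop the column-mean predictor would be replaced by an encoder over the unmasked cells, and the loss would be backpropagated through it; the masking-then-scoring structure is the part this sketch illustrates.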
Native builds reasoning infrastructure designed to improve decision accuracy in data-heavy environments. The company targets use cases where traditional ML and ungrounded LLMs fall short: scenarios requiring structured reasoning over complex tabular data. The engineering effort spans foundation model scaling, API design for internal and customer systems, new encoder architectures for heterogeneous data, and training and evaluation pipelines. The product roadmap covers both research-to-production translation (converting theoretical advances into backend services) and customer-facing interfaces, indicating an early-stage shift from research lab toward deployed systems.
Primary stack: PyTorch, Ray, JAX, Python, Rust, Java, Go, SQL, React, TypeScript. The heavy ML framework concentration reflects a compute-intensive foundation model training and inference business.
San Francisco, California. The company also has hiring activity in the United Kingdom.