Tecton operates a feature store: infrastructure that transforms raw data into ML-ready features and serves them for real-time predictions. The tech stack (Python, Java, Kotlin, Go, Spark, Ray, DuckDB, Kafka, Flink, Kubernetes, gRPC) reflects a systems-heavy, polyglot engineering org built to sustain sub-millisecond latency at millions of requests per second. Active projects center on query execution (DAG-based multi-stage queries, optimization), serving platform scaling, and distributed compute. These pain points map directly to their core technical challenge: maintaining tight SLAs on availability and latency across a several-million-line monorepo.
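As a rough illustration of that pattern (a sketch only, not Tecton's SDK; every function and class name below is hypothetical), a feature store pairs an offline transformation that materializes per-entity feature values with an online store that answers low-latency point lookups at prediction time:

```python
# Minimal sketch of the feature-store pattern: an offline transformation
# materializes per-entity feature values, and an online store serves them
# with a low-latency key-value lookup at prediction time.
# All names here are hypothetical; this is not Tecton's SDK.
from collections import defaultdict
from typing import Dict, List


def transaction_count_7d(raw_events: List[dict]) -> Dict[str, float]:
    """Offline transformation: aggregate raw transaction events into a
    per-user feature value (count of transactions in the last 7 days)."""
    counts: Dict[str, float] = defaultdict(float)
    for event in raw_events:
        if event["age_days"] <= 7:
            counts[event["user_id"]] += 1
    return dict(counts)


class OnlineStore:
    """Online store: holds the latest materialized feature values and
    answers per-entity point lookups at serving time."""

    def __init__(self) -> None:
        self._table: Dict[str, Dict[str, float]] = {}

    def write(self, feature_name: str, values: Dict[str, float]) -> None:
        self._table[feature_name] = values

    def get(self, feature_name: str, entity_key: str) -> float:
        return self._table.get(feature_name, {}).get(entity_key, 0.0)


if __name__ == "__main__":
    events = [
        {"user_id": "u1", "age_days": 2},
        {"user_id": "u1", "age_days": 6},
        {"user_id": "u2", "age_days": 30},  # outside the 7-day window
    ]
    store = OnlineStore()
    store.write("transaction_count_7d", transaction_count_7d(events))
    # A fraud or credit model would read this value at prediction time.
    print(store.get("transaction_count_7d", "u1"))  # 2.0
```

In a real deployment the in-process dictionary would be replaced by a low-latency database; the stack listed below includes DynamoDB and Redis, which are commonly used in that role.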
Tecton is a feature store platform founded in 2019 by the creators of Uber's Michelangelo ML platform. Its customers rely on it for fraud detection, credit decisions, and personalization, workloads where features must be computed and served in real time. The company operates with a senior-heavy engineering org (13 engineers, mostly senior/staff level) and minimal recent hiring velocity, suggesting a stable, focused product roadmap rather than rapid scaling. They serve mid-market to enterprise ML teams running production inference workloads that demand high availability and sub-millisecond response times.
Python, Java, Kotlin, Go, Apache Spark, Ray, Arrow, DuckDB, Kafka, Flink, Kubernetes, gRPC, PostgreSQL, Snowflake, BigQuery, Redshift, DynamoDB, Redis, and cloud infrastructure across AWS, GCP, and Azure.
Query execution engines with DAG support, query optimization, scaling serving to millions of requests per second, distributed compute, observability, and improved developer workflows for feature development.
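For the query-execution piece, a minimal sketch of DAG-based multi-stage execution might look like the following; it illustrates the general technique (stages declare upstream dependencies and run in topological order, each consuming its inputs' outputs), not Tecton's actual engine, and every name in it is hypothetical:

```python
# Hypothetical sketch of DAG-based multi-stage query execution: each stage
# names its upstream dependencies, and the executor runs stages in
# topological order, feeding each stage the outputs of its inputs.
# This illustrates the general technique, not Tecton's query engine.
from graphlib import TopologicalSorter
from typing import Callable, Dict, List


class Stage:
    def __init__(self, name: str, inputs: List[str], run: Callable[..., list]):
        self.name = name
        self.inputs = inputs
        self.run = run


def execute(stages: Dict[str, Stage]) -> Dict[str, list]:
    """Run stages in dependency order and return every stage's output."""
    order = TopologicalSorter({s.name: set(s.inputs) for s in stages.values()})
    results: Dict[str, list] = {}
    for name in order.static_order():
        stage = stages[name]
        results[name] = stage.run(*(results[dep] for dep in stage.inputs))
    return results


if __name__ == "__main__":
    plan = {
        "scan": Stage("scan", [], lambda: [{"user": "u1", "amount": 40.0},
                                           {"user": "u1", "amount": 60.0}]),
        "aggregate": Stage(
            "aggregate", ["scan"],
            lambda rows: [{"user": "u1",
                           "total": sum(r["amount"] for r in rows)}]),
        "serve": Stage("serve", ["aggregate"], lambda rows: rows),
    }
    print(execute(plan)["serve"])  # [{'user': 'u1', 'total': 100.0}]
```

In a production engine the stages would be relational operators emitted by an optimizer rather than Python callables, and execution would be distributed across the compute layers listed above (e.g., Spark or Ray) rather than run in a single process.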
Other companies in the same industry, closest in size