Hydrolix operates a streaming data lake optimized for high-volume log ingestion and historical analysis at petabyte scale. The stack reveals a polyglot data platform: ClickHouse and Druid as analytics engines, Kubernetes for orchestration, and multi-cloud deployment across AWS, Azure, and GCP. Active projects center on petabyte-scale streaming architecture and Kubernetes reliability, while pain points cluster around data cost optimization and platform scaling, suggesting that the core product value lies in making massive log retention economically viable.
Hydrolix builds a streaming data lake for organizations ingesting and analyzing extreme-scale log data. The product combines warehouse-grade query performance with data lake flexibility, handling streaming ingest, transformation, and compression natively. Founded in 2018 and headquartered in Portland, Oregon, the company employs 201–500 people. The engineering-heavy hiring mix and active projects around integration deployment, observability pipelines, and CI/CD optimization point to a maturing platform focused on operational reliability and customer deployment success.
Hydrolix uses ClickHouse and Druid as core analytics engines, Kubernetes for orchestration, PostgreSQL and MongoDB for data storage, and deploys across AWS, Azure, and GCP. Observability tools include Prometheus, Grafana, and Kibana.
Active projects include petabyte-scale streaming analytics architecture, Kubernetes cluster reliability, CI/CD optimization, observability pipeline development, and Salesforce workflow automation for customer onboarding and retention.