Corti builds healthcare-specific AI models (PyTorch, TensorFlow) deployed on multi-cloud infrastructure (Azure, AWS, GCP), with a heavy focus on model serving (NVIDIA Triton, vLLM, FastAPI) and observability (Grafana, Loki, Tempo). Hiring is engineering-dominant (16 of 25 roles) and skews senior, and active projects span speech recognition, ML feature development, and Kubernetes multi-tenancy; together these signal a company scaling inference workloads while wrestling with clinical-grade reliability under high concurrency.
Corti develops enterprise-grade AI models for healthcare delivery, distributed as managed API services. The product targets clinical workflows where decision support and automation reduce clinician cognitive load. The company operates across the United States, Denmark, and the United Kingdom. Core technical challenges center on scaling inference infrastructure for high concurrency, maintaining audit readiness in regulated environments, and reducing friction in customer API integration. Active work on speech recognition and text generation pipelines suggests expansion into multimodal clinical data.
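The high-concurrency inference challenge noted above is commonly addressed with request micro-batching, which amortizes per-call model overhead across concurrent callers. A minimal stdlib sketch of the pattern, not Corti's implementation; the `fake_model` function and all parameter values are illustrative stand-ins.

```python
import asyncio


class MicroBatcher:
    """Collects concurrent requests and runs the model once per batch.

    Batching amortizes per-call overhead, the usual lever for raising
    inference throughput under high concurrency.
    """

    def __init__(self, model_fn, max_batch=8, max_wait_s=0.005):
        self.model_fn = model_fn        # takes a list of inputs, returns a list of outputs
        self.max_batch = max_batch      # illustrative cap, not a tuned value
        self.max_wait_s = max_wait_s    # brief wait lets late arrivals join a batch
        self._queue = asyncio.Queue()
        self._worker = None

    async def infer(self, item):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        await self._queue.put((item, fut))
        # Lazily start (or restart) the drain worker.
        if self._worker is None or self._worker.done():
            self._worker = asyncio.ensure_future(self._drain())
        return await fut

    async def _drain(self):
        while not self._queue.empty():
            batch = []
            while len(batch) < self.max_batch and not self._queue.empty():
                batch.append(self._queue.get_nowait())
            # Yield briefly so requests arriving "now" can join this batch.
            await asyncio.sleep(self.max_wait_s)
            while len(batch) < self.max_batch and not self._queue.empty():
                batch.append(self._queue.get_nowait())
            results = self.model_fn([item for item, _ in batch])
            for (_, fut), res in zip(batch, results):
                fut.set_result(res)


async def main():
    calls = []

    def fake_model(batch):  # stand-in for a real model forward pass
        calls.append(len(batch))
        return [x * 2 for x in batch]

    batcher = MicroBatcher(fake_model)
    outputs = await asyncio.gather(*(batcher.infer(i) for i in range(10)))
    return outputs, calls


outputs, calls = asyncio.run(main())
```

Ten concurrent requests collapse into a small number of model calls (here at most 8 per batch), while each caller still receives its own result in order.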
The stack centers on Python, PyTorch, TensorFlow, and Go, orchestrated on Kubernetes, with model serving via NVIDIA Triton, vLLM, and FastAPI, streaming via Apache Kafka, and observability via Grafana, Loki, and Tempo. Hosting spans Azure, AWS, and GCP; MLflow and Kubeflow manage the model lifecycle.
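The Grafana-based observability stack above typically charts latency percentiles from cumulative histogram buckets in the Prometheus style. A stdlib-only sketch of that data structure, assuming illustrative bucket boundaries; none of the names or values here come from Corti's setup.

```python
from bisect import bisect_left


class LatencyHistogram:
    """Prometheus-style cumulative histogram for request latencies.

    Dashboards derive p95/p99 estimates from buckets like these; the
    boundaries below (in seconds) are illustrative placeholders.
    """

    def __init__(self, buckets=(0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0)):
        self.buckets = list(buckets)
        self.counts = [0] * (len(self.buckets) + 1)  # last slot is the +Inf bucket
        self.total = 0.0
        self.n = 0

    def observe(self, seconds):
        # bisect_left gives the first bucket whose upper bound is >= the value,
        # matching Prometheus "le" (less-than-or-equal) semantics.
        self.counts[bisect_left(self.buckets, seconds)] += 1
        self.total += seconds
        self.n += 1

    def cumulative(self):
        """(upper_bound, cumulative_count) pairs, as a /metrics endpoint exposes them."""
        out, running = [], 0
        for bound, count in zip(self.buckets + [float("inf")], self.counts):
            running += count
            out.append((bound, running))
        return out


h = LatencyHistogram()
for latency in (0.004, 0.02, 0.03, 0.08, 0.3):
    h.observe(latency)
```

Each observation lands in exactly one raw bucket, but the exported series is cumulative, which is what makes server-side percentile estimation cheap.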
Speech recognition and text generation backends, multi-tenant Kubernetes infrastructure, new ML-based product features, API platform integration, and technical onboarding automation to reduce customer implementation friction.
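A common shape for the multi-tenant Kubernetes work named above is namespace-per-tenant isolation with resource quotas and default-deny network policy. The fragment below is a sketch of that general pattern only; the tenant name, quota values, and GPU resource key are hypothetical placeholders, not Corti's configuration.

```yaml
# Namespace-per-tenant isolation (all names and values are placeholders).
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    tenant: tenant-a
---
# Cap each tenant's aggregate resource consumption.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    requests.nvidia.com/gpu: "1"
---
# Default-deny ingress so tenant workloads cannot reach each other.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Quotas bound noisy-neighbor effects on shared inference hardware, while the empty `podSelector` applies the deny-ingress policy to every pod in the tenant's namespace.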
Comparable companies: other firms in the same industry, closest in size.