Omnilex is an AI workspace purpose-built for legal teams, combining retrieval-augmented generation, LLM-powered search, and multi-model inference (Anthropic's Claude and OpenAI's GPT models, including ChatGPT) to accelerate legal research. The tech stack reveals a forward-looking architecture: pgvector, OpenSearch, and Azure AI Search for semantic retrieval, a Node.js/Next.js frontend, and active work on LLM agent runtimes in production. Current pain points (operational reliability of LLM workloads, messy legal-content ingestion across jurisdictions, and search relevance tuning) map directly to the project roadmap (retrieval ranking, enrichment pipelines, search infrastructure), suggesting a team focused on hardening AI reliability rather than chasing feature breadth.
Omnilex builds an AI workspace for law firms and in-house legal teams, replacing manual legal research with AI-assisted workflows. The platform ingests legal documents across multiple jurisdictions, applies semantic search and entity extraction, and surfaces insights through LLM-powered chat and retrieval. Based in Zurich with 11-50 employees, Omnilex is actively hiring across Switzerland, Germany, and Austria, with hiring velocity that has accelerated over the past month. The team is weighted heavily toward engineering (12 engineering, 3 data, 3 marketing), reflecting a technical product still in active optimization rather than a sales-driven phase.
The stack spans React, Next.js, TypeScript, PostgreSQL, Azure, and Node.js, with a vector-search layer (pgvector, OpenSearch, Elasticsearch, Azure AI Search). LLM integrations cover Anthropic's Claude and OpenAI's GPT models, including ChatGPT. Frontend testing runs on Jest, Playwright, and Storybook.
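To make the vector-search layer concrete, here is a minimal sketch of the kind of pgvector similarity query a Postgres-backed semantic-retrieval stack like this might issue. The table name `legal_documents`, its columns, and the helper functions are illustrative assumptions, not Omnilex's actual schema or code.

```typescript
// Hypothetical pgvector query construction; table/column names are assumptions.
type Embedding = number[];

// pgvector accepts vector literals in the form '[0.1,0.2,...]'.
function toVectorLiteral(embedding: Embedding): string {
  return `[${embedding.join(",")}]`;
}

// Build a parameterized top-k query. The `<=>` operator is pgvector's
// cosine-distance operator, so `1 - (a <=> b)` is cosine similarity.
// The returned SQL can be executed with any Postgres client (e.g. `pg`),
// passing the vector literal as parameter $1.
function buildSemanticSearchQuery(k: number): string {
  return (
    `SELECT id, title, 1 - (embedding <=> $1::vector) AS similarity ` +
    `FROM legal_documents ` +
    `ORDER BY embedding <=> $1::vector LIMIT ${k}`
  );
}

const sql = buildSemanticSearchQuery(5);
const param = toVectorLiteral([0.12, -0.03, 0.88]);
```

In practice a stack mixing pgvector with OpenSearch/Elasticsearch and Azure AI Search would typically fan a query out to each backend and merge results, which is where the retrieval-ranking work described below comes in.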
Active work covers retrieval ranking and search-infrastructure tuning, LLM agent runtimes in production, enrichment pipelines (tagging, classification, embeddings, entity extraction), UX improvements, and a product launch roadmap. The core focus is operational reliability of LLM workloads and multi-jurisdiction legal-content handling.
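The retrieval-ranking step above can be sketched as a simple re-ranker: score candidate chunks by cosine similarity to the query embedding, optionally boosting the user's jurisdiction, since multi-jurisdiction relevance is a stated pain point. All names, the boost heuristic, and the data shapes are illustrative assumptions, not Omnilex's implementation.

```typescript
// Hypothetical re-ranking sketch; interfaces and boost logic are assumptions.
interface Chunk {
  id: string;
  jurisdiction: string;
  embedding: number[];
}

// Standard cosine similarity: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top-k chunks by similarity, with a small additive boost
// for chunks matching the user's preferred jurisdiction.
function rankChunks(
  query: number[],
  chunks: Chunk[],
  k: number,
  preferredJurisdiction?: string,
  boost = 0.05,
): Chunk[] {
  return chunks
    .map((chunk) => ({
      chunk,
      score:
        cosineSimilarity(query, chunk.embedding) +
        (chunk.jurisdiction === preferredJurisdiction ? boost : 0),
    }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((scored) => scored.chunk);
}

const chunks: Chunk[] = [
  { id: "a", jurisdiction: "CH", embedding: [1, 0] },
  { id: "b", jurisdiction: "DE", embedding: [0, 1] },
  { id: "c", jurisdiction: "CH", embedding: [0.7, 0.7] },
];
const top = rankChunks([1, 0], chunks, 2);
```

A production system would more likely use a learned re-ranker or a search engine's native scoring, but this illustrates the shape of the problem the roadmap item targets.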