Verbex builds voice-first AI agents for customer engagement, sales, and support using a custom LLM stack (Llama, Hugging Face Transformers, LangChain) paired with proprietary speech technology. The engineering-focused org is actively integrating advanced speech foundation models (SpeechGPT, Moshi, Llama-Omni) and scaling inference with TensorRT and vLLM, signaling rapid iteration on model quality and latency. Active projects span TTS/STT development, Kubernetes-based microservice deployment, and telephony integration, suggesting a maturing platform transitioning from research to production at scale.
Verbex develops conversational AI agents that interact via voice across phone, web, mobile, and IoT channels. The platform targets enterprise use cases—SDR automation, support agents, appointment booking, and conversational payments—across finance, healthcare, and other regulated verticals. Built on a proprietary LLM foundation and speech synthesis/recognition stack, agents handle high throughput (2M+ calls/hour) with low latency and operate across multiple jurisdictions (Japan, Bangladesh). Founded in 2017 and based in Singapore, the company holds 24 patents and operates a compact, engineering-led team.
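The 2M+ calls/hour figure implies substantial concurrency. A back-of-envelope sketch makes the scale concrete; the average call duration used below is an illustrative assumption, not a number from the source:

```python
# Capacity math for the stated 2M+ calls/hour throughput figure.
# AVG_CALL_SECONDS is an assumed value for illustration only.

CALLS_PER_HOUR = 2_000_000    # stated platform throughput
AVG_CALL_SECONDS = 180        # assumed 3-minute average call

calls_per_second = CALLS_PER_HOUR / 3600
# By Little's law, concurrent calls ~= arrival rate * average duration.
concurrent_calls = calls_per_second * AVG_CALL_SECONDS

print(f"{calls_per_second:.0f} calls/s, ~{concurrent_calls:,.0f} concurrent")
# -> 556 calls/s, ~100,000 concurrent
```

Under that assumption, the platform would need to sustain on the order of a hundred thousand simultaneous voice sessions, which is consistent with the emphasis on optimized inference (TensorRT, vLLM) and horizontally scaled Kubernetes deployment.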
Llama models with Hugging Face Transformers and LangChain for the LLM layer; ONNX, TensorRT, and vLLM for optimized inference; Docker and Kubernetes for deployment; PostgreSQL and MongoDB for persistence; Kafka and RabbitMQ for messaging.
Core work includes TTS/STT and LLM development, integration of speech foundation models, Kubernetes microservice infrastructure, CI/CD pipeline automation, and telephony system integration testing.