Generative AI solutions and data talent staffing for enterprise clients
Zorba AI operates at the intersection of two distinct businesses: building generative AI systems (copilots, LLM workflows, RAG pipelines) and staffing data/AI roles for enterprises. The tech stack reveals an Azure-first Microsoft foundation (Power BI, Power Platform, Azure OpenAI) layered with multi-cloud LLM inference (AWS Bedrock, Claude, Gemini, vLLM, Triton), indicating client demand for model portability and cost optimization. Pain points cluster heavily around query performance and pipeline reliability, suggesting internal scaling challenges that likely shape product roadmap priorities.
Zorba AI is an 11–50-person consulting firm based in Mumbai, India, founded in 2016. The company provides two main services: custom generative AI solutions (including AI copilots, NLP assistants, prompt engineering, and LLM-powered data workflows) and strategic staffing partnerships for data science, machine learning, cloud engineering, and MLOps roles. Hiring velocity is accelerating: 155 roles posted in the last 30 days, predominantly senior-level data and engineering positions, indicating aggressive capacity scaling in both delivery and talent brokerage. Projects span Python-based data orchestration, scalable ETL/ELT pipelines on GCP, LLM search and summarization systems, and Gemini enterprise model integration.
Primary: Power BI, Tableau, SQL Server, Python on Azure (Functions, Logic Apps, OpenAI). Multi-cloud LLM inference: AWS Bedrock, Claude, Gemini, vLLM, Triton. Adopting containerization (Docker, Kubernetes) and CI/CD (GitLab CI/CD, Jenkins, ArgoCD) to scale deployment.
LLM search/summarization pipelines, LLM+OCR deployment systems, Gemini enterprise integration, Python data orchestration, scalable ETL/ELT on GCP, and data product development using R and Quarto.