Avigna.AI builds generative AI capabilities into enterprise products using a Microsoft-heavy stack (Azure OpenAI, Azure AI Search, LangChain, Semantic Kernel) paired with vector databases (Pinecone, Weaviate, FAISS). The project list, spanning RAG pipelines, LLMOps CI/CD, and AI integration initiatives, reveals a systems-focused practice built around embedding AI into existing software rather than training standalone models. Hiring is concentrated in engineering (6 roles) with a senior-level emphasis, suggesting the company is deepening implementation capacity rather than broadly growing headcount.
Avigna.AI helps mid-market and enterprise companies integrate generative AI into existing products. The company operates from Pune and works primarily with clients in India. Their approach centers on RAG (retrieval-augmented generation) pipelines and LLMOps infrastructure to inject AI capabilities into legacy and modern applications without full rewrites. The tech stack leans heavily on Microsoft Azure services and open-source AI frameworks, indicating a consultative systems-integration model rather than SaaS product licensing.
Primary: .NET Core, C#, SQL Server, Azure (App Service, Functions, Storage, OpenAI, AI Search (formerly Cognitive Search)), Angular, React. AI/ML: Python, FastAPI, LangChain, Semantic Kernel, AutoGen. Vector DBs: Pinecone, Weaviate, FAISS. Infrastructure: Docker, Kubernetes.
Generative AI applications, RAG pipelines, LLMOps CI/CD pipelines, and AI integration initiatives within existing enterprise products. Focus is on embedding AI into legacy and modern software systems.
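To make the RAG-pipeline pattern concrete, the sketch below shows the core loop such projects implement: index documents, retrieve the most relevant ones for a query, and assemble an augmented prompt for an LLM. This is a schematic illustration, not Avigna.AI's implementation; a real pipeline would use an embedding model (e.g. Azure OpenAI) and a vector database (Pinecone, Weaviate, or FAISS), whereas here retrieval is naive bag-of-words cosine similarity so the example stays self-contained, and the sample documents are invented.

```python
# Schematic RAG pipeline: index -> retrieve -> augment prompt.
# Toy similarity stands in for real embeddings + a vector DB.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector (stand-in for a model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the retrieved context and question into an LLM prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are archived nightly to Azure Blob Storage.",
    "The HR portal runs on .NET Core with SQL Server.",
    "Refunds over $500 require manager approval.",
]
print(build_prompt("How are invoices archived?", docs))
```

The value of this shape for legacy integration is that the existing system only contributes documents and consumes prompts; no rewrite of the host application is required, which matches the "inject AI without full rewrites" positioning described above.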
Other companies in the same industry, closest in size