RAG and LLM infrastructure for enterprise knowledge systems
Zenking builds AI infrastructure centered on retrieval-augmented generation (RAG) and large language models (LLMs). Its tech stack (Java, Python, MySQL, Docker, Kubernetes, LangChain, LlamaIndex, GraphRAG) reflects a foundation designed for production LLM pipelines. Active projects span RAG optimization, multi-knowledge-base routing, prompt-LLM middleware, and AI scenario implementation, signaling heavy investment in knowledge retrieval as a core differentiator.
Zenking is a Beijing-based software company building infrastructure for enterprise AI and knowledge management. The platform centers on RAG (retrieval-augmented generation) systems, multi-knowledge-base routing, and LLM middleware services. Its projects focus on knowledge content construction, cross-platform product planning for research and learning, and operationalizing AI transformation across business scenarios. The hiring mix, predominantly senior engineering and product roles in China, reflects a small, focused team.
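The RAG systems described above generally follow a common pattern: embed the knowledge base, retrieve the documents nearest to a query, and prepend them to the LLM prompt as context. Zenking's actual pipeline is not public, so the following is only a minimal, library-free sketch of that pattern (all function names and the toy bag-of-words "embedding" are hypothetical; production systems use trained embedding models and a vector store):

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; real pipelines use a trained embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Assemble the retrieved context and the question into one LLM prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Kubernetes schedules containers across a cluster.",
    "MySQL is a relational database.",
]
print(build_prompt("How are containers scheduled?", docs))
```

In a stack like the one listed (LangChain, LlamaIndex), the `embed`/`retrieve` steps would typically be replaced by a vector-store retriever, but the retrieve-then-prompt shape stays the same.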
The tech stack includes Java, Python, MySQL, Docker, Kubernetes, LangChain, LlamaIndex, GraphRAG, Selenium, and JMeter. Design and prototyping tools include Figma, Axure RP, and Sketch; development environments include JetBrains IDEs, Eclipse, and VS Code.
Yes, engineering roles are actively hiring (4 open), concentrated in China. Current openings skew senior-level: 6 of the 7 total roles are senior.
Current project areas: RAG pipeline optimization, multi-knowledge-base routing, LLM middleware services, knowledge content construction for business use cases, and cross-platform product development for research and learning platforms.
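Among those areas, multi-knowledge-base routing means deciding which knowledge base should answer a given query before retrieval runs. Zenking's implementation is not public; a minimal hypothetical sketch (all names and the word-overlap scoring are assumptions; real routers often use an embedding model or an LLM classifier) might look like:

```python
def route(query, kbs):
    """Pick the knowledge base whose description best matches the query.

    kbs: {name: {"description": str, "docs": [str]}} (hypothetical schema).
    Scoring here is simple word overlap; production routers typically use
    embeddings or an LLM-based classifier instead.
    """
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    return max(kbs, key=lambda name: overlap(query, kbs[name]["description"]))

kbs = {
    "engineering": {"description": "deployment docker kubernetes infrastructure",
                    "docs": ["Deploy with kubectl apply."]},
    "research": {"description": "papers learning experiments benchmarks",
                 "docs": ["Benchmarks live in the shared drive."]},
}
print(route("how do we deploy with docker", kbs))  # → engineering
```

Once a knowledge base is selected, a standard RAG retrieval step runs against that base's documents only, which keeps unrelated content out of the LLM context.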