AI research lab building reasoning models and developer tools
SkyLabs AI is a research team founded in 2024, focused on reasoning-capable AI models, with infrastructure built around high-throughput LLM inference (NVIDIA Triton, vLLM) and containerized execution sandboxes. The stack (a Python/TypeScript application layer, C++ reasoning kernels, Kubernetes orchestration, and Kafka/RabbitMQ event streams) reflects a systems-heavy approach to AI engineering. Adoption of LangGraph signals a move into agentic workflows. Active hiring skews senior (5 of 11 roles) and engineering-led (7 engineering, 1 research, 1 product), suggesting a focus on building inference infrastructure and reasoning-agent capabilities rather than sales or customer support.
SkyLabs AI is a research organization based in Coral Gables, Florida, working on AI models trained for complex reasoning tasks. The company is building an integrated developer experience around reasoning models, spanning an IDE plugin for AI-native development, high-throughput LLM inference infrastructure, and agentic systems for code comprehension and retrieval-augmented generation over codebases. Core infrastructure includes containerized execution environments, telemetry pipelines that track agent behavior, and cloud deployment across GCP and AWS. The team is hiring primarily in Pakistan and the United States, with a current headcount under 50.
Frontend: React, TypeScript, Vue. Backend/inference: C++, gRPC, NVIDIA Triton, vLLM. Orchestration: Kubernetes, Docker, Terraform. Messaging: Kafka, RabbitMQ. Cloud: GCP, AWS. Observability: Prometheus, Grafana, Jaeger. Tooling: CLion, Visual Studio Code, LLVM, Bazel.
Core projects include an AI-native IDE plugin, LLM inference infrastructure, containerized execution sandboxes, agentic systems for C++ code comprehension, and RAG pipelines for code knowledge retrieval. Infrastructure includes telemetry systems to track agent trajectories.