AI safety research nonprofit building non-agentic systems and oversight tools
LawZero is a Montréal-based nonprofit conducting AI safety research under the direction of a leading AI researcher. The tech stack, spanning PyTorch, TensorFlow, and JAX alongside distributed training infrastructure (DeepSpeed, Ray, SLURM) and large-scale inference optimization (vLLM, Hugging Face Accelerate), reflects heavy computational work on model training and inference efficiency. The research-dominant hiring mix (11 of 16 active roles), paired with projects spanning experimentation pipelines, probabilistic graphical models, and amortized inference methods, signals a team built for rigorous, reproducible AI safety experiments rather than product delivery.
LawZero is a nonprofit organization focused on AI safety research and technical solutions for safe-by-design AI systems. Based in Montréal, the organization conducts work on non-agentic AI systems intended to accelerate scientific discovery, provide oversight for agentic systems, and advance understanding of AI risks. Research priorities include security analysis, large-scale experimentation infrastructure, inference optimization, and evaluation frameworks. The team operates across research, engineering, product, and operations, with hiring currently concentrated in Canada.
PyTorch, TensorFlow, JAX, AWS, GCP, Azure, Docker, Kubernetes, Python, DeepSpeed, Hugging Face Accelerate, vLLM, Ray, SLURM, NVIDIA Nsight, and gRPC. Heavy emphasis on distributed training and large-scale inference tools.
AI security and safety agendas, large-scale experimentation pipelines, amortized inference methods, parameter learning for probabilistic graphical models, evaluation frameworks, and a scientist-AI program focused on using AI to accelerate scientific work.
Other companies in the same industry, closest in size