Distributed AI training workforce for model safety and evaluation
SME Careers operates a global network of annotators and evaluators who label safety data, run red-team tests, and audit model behavior for AI labs. The stack spans R, Python, SQL, and mobile frameworks, a signal that the workforce tooling must handle both statistical analysis and consumer-grade interfaces. The organization is scaling rapidly: 342 roles posted in the past 30 days and 710 open in total, with data roles dominating the headcount. Its pain points cluster tightly around model bias, safety, and accuracy, placing the core offering at the intersection of AI safety infrastructure and distributed QA.
SME Careers connects remote workers with AI teams for structured training work: annotation, evaluation, fact-checking, and content review. The work is fully distributed and paid weekly. The company hires across 25+ countries, with the largest concentration in data-related roles (365 open positions), followed by engineering (164) and security (97). Active projects center on safety-critical tasks: red-teaming, adversarial testing, policy localization, and quality assurance for model outputs. The talent mix skews mid-to-senior, signaling that the company recruits for quality control and expertise, not volume alone.
SME Careers hires across 25+ countries, including Indonesia, Egypt, India, France, Bangladesh, Japan, Brazil, China, Germany, South Korea, Portugal, Malaysia, Vietnam, Morocco, and the Philippines, enabling a globally distributed workforce.
Current projects focus on safety-critical AI work: safety data labeling, red-teaming and adversarial testing, policy localization, AI response evaluation, model behavior audits, and quality checks on safety data.
The stack includes R, Python, and SQL, with pandas, NumPy, and scikit-learn for analysis; React, Angular, and Vue for web interfaces; and Swift (iOS) and Kotlin (Android) for mobile. SuperAnnotate is the core labeling platform.
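As one illustration of how the pandas/scikit-learn side of this stack supports the quality checks on safety data mentioned above, the sketch below computes inter-annotator agreement (Cohen's kappa) on double-labeled items. This is a minimal, hypothetical example: the column names, the sample data, and the 0.6 review threshold are assumptions for illustration, not details of SME Careers' actual pipeline.

```python
# Minimal sketch of a safety-label QA check: inter-annotator agreement.
# All names and thresholds here are illustrative assumptions.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical double-labeled safety data: two annotators per item.
labels = pd.DataFrame({
    "item_id":     [1, 2, 3, 4, 5, 6],
    "annotator_a": ["safe", "unsafe", "safe", "unsafe", "safe", "safe"],
    "annotator_b": ["safe", "unsafe", "safe", "safe",   "safe", "safe"],
})

# Cohen's kappa corrects raw percent agreement for chance agreement.
kappa = cohen_kappa_score(labels["annotator_a"], labels["annotator_b"])
print(f"kappa = {kappa:.2f}")

# An assumed QA rule: flag the batch for human review below a threshold.
needs_review = kappa < 0.6
```

On this sample the annotators agree on 5 of 6 items, but kappa lands near 0.57 because chance agreement is high when most items are "safe", which is exactly why agreement metrics beat raw accuracy for skewed safety-label distributions.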
Other companies in the same industry, closest in size