AI-powered 3D scanning platform converting physical objects into digital twins
ALLSIDES transforms physical products into photogrammetry-grade 3D digital twins using computer vision and neural rendering. The tech stack reveals production ML infrastructure: PyTorch, MLflow, Weights & Biases, Kubeflow, and Airflow handle model training and experimentation, while differentiable and diffusion-based rendering pipelines are core to asset generation. Pain points cluster around scaling: quality control at production volume, shipping neural 3D systems, managing multi-terabyte training datasets, and reproducibility across distributed workloads. These are typical constraints for companies operationalizing research-grade computer vision at e-commerce scale.
ALLSIDES builds automated 3D scanning systems that let brands and e-commerce platforms generate digital twins of physical products at scale. The company serves apparel, footwear, and outdoor-gear brands that need mass production of 3D assets for virtual try-on, product photography, and immersive commerce applications. Operations span 3D data capture, neural rendering, and post-production optimization. The engineering-heavy organization (10 of 11 roles) is actively hiring mid- and senior-level talent, concentrated in Italy, with a focus on rendering pipelines, neural inverse rendering, and quality standardization as production priorities.
Core ML stack: PyTorch, MLflow, Weights & Biases, Kubeflow, and Airflow for training pipelines and experimentation. Graphics: Blender, Substance Painter. Infrastructure: Docker, Linux, NFS, Delta Lake. Backend: Python, C++, FastAPI, Spring Boot, Node.js. Frontend: React, TypeScript, Vue, Angular.
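The experiment-tracking pattern that tools like MLflow and Weights & Biases provide can be sketched in plain Python. This is an illustrative stand-in, not ALLSIDES code: each training run logs its hyperparameters and metrics so the best configuration can be recovered reproducibly. The `RunTracker` class and the toy `val_loss` formula are hypothetical.

```python
class RunTracker:
    """Minimal stand-in for MLflow/W&B-style experiment tracking:
    every run records its hyperparameters and resulting metrics."""

    def __init__(self):
        self.runs = []

    def log_run(self, params, metrics):
        # In MLflow this would be mlflow.log_params / mlflow.log_metrics
        # inside an active run; here we just keep records in memory.
        self.runs.append({"params": params, "metrics": metrics})

    def best(self, metric, minimize=True):
        # Select the run with the best value for the given metric.
        key = lambda r: r["metrics"][metric]
        return min(self.runs, key=key) if minimize else max(self.runs, key=key)


tracker = RunTracker()
for lr in (1e-2, 1e-3, 1e-4):
    # A real pipeline would launch a PyTorch training job here;
    # this toy formula just makes lr=1e-3 the best configuration.
    val_loss = abs(lr - 1e-3) * 100 + 0.05
    tracker.log_run({"lr": lr}, {"val_loss": val_loss})

best = tracker.best("val_loss")
print(best["params"])  # the lr that minimized validation loss
```

The same query-the-best-run step is what makes large sweeps reproducible: any result can be traced back to the exact parameters that produced it.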
Technical focus areas: neural rendering pipelines (BRDF estimation, differentiable and diffusion-based rendering), neural inverse rendering for geometry and material estimation, 3D asset quality standards, post-production workflow optimization, and generative reconstruction approaches for digital twin generation.
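The core idea behind inverse rendering (recovering material parameters from observed images through a differentiable forward model) can be shown with a deliberately tiny example. This is a hypothetical sketch, not the company's pipeline: a Lambertian shading model renders intensity as albedo times max(0, n·l), and gradient descent on the squared error recovers the albedo from observations. Real systems estimate full BRDFs and geometry with PyTorch autograd instead of a hand-written gradient.

```python
# Toy inverse rendering: recover a scalar Lambertian albedo by
# gradient descent through a differentiable shading model.

# Surface normals (unit vectors) and a fixed light direction.
normals = [(0.0, 0.0, 1.0), (0.0, 0.6, 0.8), (0.6, 0.0, 0.8), (0.0, 1.0, 0.0)]
light = (0.0, 0.0, 1.0)

def shading(n, l):
    # Lambertian term: clamped cosine between normal and light.
    return max(0.0, sum(a * b for a, b in zip(n, l)))

def render(albedo):
    # Forward model: per-pixel intensity = albedo * shading.
    return [albedo * shading(n, light) for n in normals]

# Synthetic observations from a ground-truth albedo of 0.7.
true_albedo = 0.7
observed = render(true_albedo)

# Gradient descent on the sum of squared residuals.
albedo, lr = 0.2, 0.1
for _ in range(200):
    preds = render(albedo)
    # d/d_albedo of (albedo*s - obs)^2 is 2*(albedo*s - obs)*s.
    grad = sum(2.0 * (p - o) * shading(n, light)
               for p, o, n in zip(preds, observed, normals))
    albedo -= lr * grad

print(albedo)  # converges to ~0.7
```

The pixel with shading zero contributes no gradient, which mirrors a real limitation: unlit regions carry no information about reflectance.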