Data engineered for excellence
Leverage our state-of-the-art AI training platform and global AI Community to build high-quality training data at scale.
Advanced AI training platform
Our sophisticated platform is built to support multi-sensor data and offers cutting-edge automation powered by machine learning (ML) to improve efficiency and reduce costs.
Industry-leading quality control
Quality control is integrated into every stage of the data annotation process via platform-enabled quality checks, annotator training, robust analytics and best practices followed by our dedicated QA teams.
Global AI Community
Take advantage of our diverse selection of skilled data annotation experts to reduce model bias and improve labeling accuracy for complex machine learning and computer vision models.
Privacy & security prioritization
We enforce the highest security standards at all data touchpoints and ensure regulatory compliance with GDPR, CSL, Amended APPI and CCPA. Our computer vision capabilities are SOC 2 compliant and TISAX certified.
From advanced driver assistance systems to Level 5 (fully autonomous) mobility perception models, we deliver fully managed data annotation support for diverse and complex use cases.
Object detection and localization
For multi-object detection and localization models, our platform offers annotation support for standard classes such as pedestrians, cyclists, cars and traffic signs. We also support additional classes and attribute customization for complex mobility applications.
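A 2D bounding-box annotation with a class label and custom attributes, as described above, can be sketched as a simple record. The field names here are illustrative, not the platform's actual export schema:

```python
from dataclasses import dataclass, field

@dataclass
class BoxAnnotation:
    # Hypothetical record layout; not a real export format.
    label: str       # e.g. "pedestrian", "cyclist", "car", "traffic_sign"
    x: float         # top-left corner of the box, in pixels
    y: float
    width: float
    height: float
    attributes: dict = field(default_factory=dict)  # custom attributes per object

ann = BoxAnnotation("pedestrian", 412.0, 230.5, 48.0, 120.0, {"occluded": False})
```

Attribute customization then amounts to adding project-specific keys (occlusion, truncation, signal state, and so on) to the `attributes` dictionary.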
Object detection and tracking
Autonomous vehicle manufacturers collect data that spans several hours and different conditions to train advanced mapping and perception systems. Our platform's automated object interpolation and tracking capabilities significantly simplify video annotations.
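Object interpolation of the kind mentioned above is commonly implemented by annotating only keyframes and filling the frames in between automatically. A minimal sketch, assuming simple linear interpolation of `(x, y, w, h)` boxes:

```python
def interpolate_box(box_a, box_b, t):
    """Linearly interpolate each coordinate of two (x, y, w, h) keyframe boxes.
    t is the normalized position between the keyframes (0.0 -> box_a, 1.0 -> box_b)."""
    return tuple(a + (b - a) * t for a, b in zip(box_a, box_b))

# Boxes annotated by hand at frames 0 and 10; frame 5 is filled in automatically.
key0 = (100.0, 50.0, 40.0, 80.0)
key10 = (140.0, 60.0, 40.0, 80.0)
frame5 = interpolate_box(key0, key10, 5 / 10)  # halfway between the keyframes
```

Production systems typically refine such interpolated boxes with ML-based tracking, but the keyframe idea is the same: annotators touch a fraction of the frames in an hours-long sequence.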
Panoptic segmentation
Panoptic segmentation provides granular, pixel-level information for advanced ML algorithms by combining instance and non-instance (semantic) annotations within a single frame. Further, a unique instance ID is automatically assigned to each segment belonging to an object of interest.
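The automatic instance-ID assignment can be illustrated with a small sketch. Following the common COCO-panoptic-style encoding (an assumption, not the platform's internal scheme), countable "thing" classes get a fresh instance index per object, while "stuff" regions such as road or sky share instance index 0:

```python
# Hypothetical class IDs for illustration only.
THING_CLASSES = {"car": 1, "pedestrian": 2}    # countable objects of interest
STUFF_CLASSES = {"road": 100, "sky": 101}      # amorphous, non-instance regions

def assign_panoptic_ids(segments):
    """Given the class name of each segment in a frame, return a panoptic ID
    per segment, encoded as semantic_id * 1000 + instance_index."""
    counters = {}
    ids = []
    for cls in segments:
        if cls in THING_CLASSES:
            counters[cls] = counters.get(cls, 0) + 1     # next unique instance
            ids.append(THING_CLASSES[cls] * 1000 + counters[cls])
        else:
            ids.append(STUFF_CLASSES[cls] * 1000)        # stuff: no instance index
    return ids

frame_ids = assign_panoptic_ids(["car", "car", "road", "pedestrian"])
```

Note how the two cars receive distinct IDs while the road segment does not, which is exactly the instance/non-instance combination panoptic segmentation provides.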
3D point cloud segmentation
Leverage accurate point-wise segmentation of 3D point clouds to build high-performing self-driving models. Our platform is compatible with a wide range of lidars, including solid-state and flash lidars that produce dense point clouds with richer data.
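Point-wise segmentation means each 3D point carries its own class label, so individual objects can be extracted directly from the cloud. A minimal sketch using plain Python tuples in place of a real lidar format:

```python
def points_with_label(points, labels, target):
    """Point-wise segmentation pairs every 3D point with a class label;
    select all points labeled `target`, e.g. to isolate one object class."""
    return [p for p, lab in zip(points, labels) if lab == target]

# Toy cloud of (x, y, z) points with per-point labels (illustrative only).
cloud = [(0.1, 0.2, 1.0), (5.0, 0.0, 2.0), (0.2, 0.1, 1.1)]
labels = ["car", "road", "car"]
car_points = points_with_label(cloud, labels, "car")
```

Real point clouds hold millions of points per frame, which is why dense, accurate per-point labels matter for the resulting models.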
Sensor fusion via 2D-3D linking
Detect and track objects of interest across 2D images and 3D point clouds captured using the complex sensor setups found in autonomous vehicles. Our platform's auto-linking feature accurately links objects between 2D and 3D scenes across multiple frames.
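One common building block of 2D-3D linking is projecting a point from the lidar/camera 3D frame onto the image plane, so a 3D cuboid can be matched against a 2D box. A minimal sketch with a pinhole camera model and hypothetical intrinsics:

```python
def project_to_image(point_3d, fx, fy, cx, cy):
    """Project a 3D point in the camera frame onto the image plane using a
    simple pinhole model with intrinsics (fx, fy, cx, cy). Linking a 3D cuboid
    to a 2D box can start by projecting the cuboid's center like this."""
    X, Y, Z = point_3d
    if Z <= 0:
        return None  # point is behind the camera, not visible in the image
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Hypothetical intrinsics for a 1280x720 camera; real rigs also need the
# extrinsic lidar-to-camera transform, omitted here for brevity.
uv = project_to_image((2.0, 0.5, 10.0), fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```

Auto-linking then reduces to associating each projected 3D object with the nearest consistent 2D detection, frame after frame.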
Object movement tracking
Accurately determine object shapes and track object movements via a sequence of pixel-accurate points using a landmark or keypoint annotation tool. These datasets are especially useful for training advanced driver assistance and driver monitoring systems.
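Given per-frame landmark sets, object movement can be derived directly from the annotations. A minimal sketch, assuming (x, y) keypoints and measuring motion as the shift of the landmark centroid between two frames:

```python
def centroid(keypoints):
    """Mean position of a set of (x, y) landmark points outlining an object."""
    xs, ys = zip(*keypoints)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def displacement(frame_a, frame_b):
    """Object movement between two frames, as the centroid shift (dx, dy)."""
    (xa, ya), (xb, yb) = centroid(frame_a), centroid(frame_b)
    return (xb - xa, yb - ya)

# Toy landmarks for the same object in two consecutive frames.
move = displacement([(0, 0), (2, 0)], [(3, 1), (5, 1)])
```

Per-landmark trajectories (rather than the centroid) give finer-grained shape and pose changes; the centroid is simply the smallest useful summary.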
Irregular shape detection
Our platform’s auto-pixel detection for popular classes in autonomous driving datasets (e.g. pedestrians, car shapes, road or lane markings, signboards) simplifies the annotation of irregular shapes and coarse objects in images and videos via polygons.
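A polygon annotation is just an ordered list of vertices, from which geometric properties such as the enclosed area follow directly. A small sketch using the standard shoelace formula:

```python
def polygon_area(vertices):
    """Area of a simple (non-self-intersecting) polygon annotation,
    given as an ordered list of (x, y) vertices, via the shoelace formula."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

area = polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)])  # 4x3 rectangle
```

Such per-polygon metrics are handy for quality checks, e.g. flagging implausibly tiny or oversized object outlines before they reach a training set.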
Lane detection
Autonomous vehicles are trained across numerous scenarios to accurately detect drivable areas by identifying lane lines in the daytime, in cities, on highways and in high-traffic areas. Our platform provides a polyline tool to support lane detection in 2D and 3D scenes.
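A lane annotation from such a polyline tool is an ordered sequence of vertices tracing the lane line. A minimal sketch of working with one, here computing the total annotated length:

```python
import math

def polyline_length(points):
    """Total length of a lane polyline, given as ordered (x, y) vertices,
    summing the straight-line distance of each consecutive segment."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Toy lane polyline made of two 3-4-5 segments (coordinates are illustrative).
lane = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
length = polyline_length(lane)
```

The same vertex representation extends to 3D lane lines by adding a z coordinate, which `math.dist` handles unchanged.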
Fuel your data pipeline with a continuous supply of high-quality training data
TELUS International is defining new benchmarks in data annotation to support autonomous vehicles with sophisticated ML-assisted tools, a scalable expert workforce, robust analytics and enterprise-grade platform services. With TISAX certification and state-of-the-art deep learning models that auto-anonymize images, we enforce the highest security and privacy standards at every data touchpoint.
Build high-quality image datasets using data annotation tools that support 2D/3D bounding boxes, polygons, polylines, landmarks and segmentation.
Objects of interest in sensor-collected data can be annotated using our ML-powered tools, regardless of sequence length.
Access our best-in-class visualization and annotation infrastructure for labeling multi-sensor fusion datasets for autonomous driving.