A guide to building training data for computer vision models

Posted March 4, 2022
[Image: Pedestrians on a staircase in a public space, each annotated with a bounding box; yellow boxes are labeled "Male" and blue boxes are labeled "Female."]

Artificial intelligence (AI) has influenced the product roadmaps of most of today’s enterprises. It’s become increasingly common to see prominent AI-based applications implemented to automate business processes. One of the most exciting developments in the field of AI is computer vision.

Computer vision is being explored and applied across industries, from traditional financial services to cutting-edge technologies like autonomous vehicles. Some other popular use cases for computer vision include drones, mapping and satellites, robotics, medicine and agriculture.

So what goes into creating computer vision technology? Here are the major steps:

  1. Data collection
  2. Data labeling
  3. Graphics Processing Unit (GPU) acquisition
  4. Algorithm selection, training, testing and teaching
  5. Repeat and refine the process

Each of these steps involves its own set of operational challenges, but this article will focus on the collection and labeling of training data.

Data collection

When starting to collect data, there are many free and paid standard datasets to choose from.

For example, here are some of the top open labeled dataset repositories available:

  1. ImageNet
  2. Google’s Open Images
  3. KITTI
  4. The University of Edinburgh School of Informatics’ CVonline: Image Databases
  5. Yet Another Computer Vision Index To Datasets (YACVID)
  6. CV datasets on GitHub
  7. ComputerVisionOnline.com
  8. Cityscapes Dataset
  9. MNIST handwritten datasets

These datasets serve as a good starting point for anyone looking to get started with machine learning (ML), and they are useful for building simple models for side projects. But for production use cases, it is usually best to collect proprietary training data that closely resembles the data the final model will encounter.
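To illustrate how quickly an open dataset can get a project off the ground, here is a minimal sketch that downloads the MNIST handwritten digit dataset and iterates over a batch. It assumes PyTorch and torchvision are installed and is only one of many ways to load data from the repositories listed above.

```python
# Minimal sketch: loading an open dataset (MNIST) as a starting point.
# Assumes the `torch` and `torchvision` packages are installed.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download the MNIST dataset to a local folder and convert images to tensors.
train_set = datasets.MNIST(
    root="./data",
    train=True,
    download=True,
    transform=transforms.ToTensor(),
)

# Wrap the dataset in a DataLoader to iterate over mini-batches.
loader = DataLoader(train_set, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)   # torch.Size([32, 1, 28, 28])
print(labels[:10])    # first ten digit labels in the batch
```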

For more complex projects, it is beneficial to work with a data outsourcing partner. Outsourcing data annotation allows companies to incorporate the best practices outsourcing partners have learned from annotating thousands of images, across a variety of scenarios and use cases.

From determining crowd capacity and creating workflows, to handling task design and instructions, to qualifying and managing annotators, an end-to-end data outsourcing partner enables companies to collect and annotate data at a speed that is difficult to match in-house.

Data labeling

Once data has been collected, it must be labeled. There are primarily two things to be concerned about here:

  1. How to label the data (Internal vs. external tools)
  2. Who labels the data (Internal resources vs. outsourced annotators)

How to label the data: Choosing the right data annotation tool

Many data annotation tools are available online, but selecting the right one for your needs can be challenging. Here are a few factors to consider when selecting an annotation tool:

  1. Tool setup time and effort
  2. Labeling accuracy
  3. Labeling speed

If open tools don’t meet your specific needs, you may need to consider customizing or building one from scratch. This is understandably very costly and possibly unnecessary. An alternative is to work with an outsourcing partner and leverage their technology and expertise.
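Whatever tool you choose, its output is typically a structured annotation file that downstream training code consumes. The sketch below parses a COCO-style bounding box export, one common convention; the file name and categories are hypothetical, and your tool's export format may differ.

```python
# Sketch: parsing a COCO-style annotation export (a common bounding box format).
# The file name and label categories below are hypothetical examples.
import json

with open("annotations.json") as f:
    coco = json.load(f)

# Map category IDs to human-readable label names.
categories = {c["id"]: c["name"] for c in coco["categories"]}

# Each annotation links an image to a labeled bounding box: [x, y, width, height].
for ann in coco["annotations"][:5]:
    label = categories[ann["category_id"]]
    x, y, w, h = ann["bbox"]
    print(f"image {ann['image_id']}: {label} at ({x}, {y}), size {w}x{h}")
```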

Who labels the data: Selecting annotators

If you have the data, but don’t have the tools or workforce to annotate the data internally, you can offload all of your annotation tasks by partnering with a data annotation company. These companies can provide the raw data itself, a platform for labeling the data and a trained workforce to label the data for you.

Companies like TELUS International already have platforms built to collect and annotate data, as well as a large, trained workforce that can annotate hundreds of thousands of data points at scale. The main advantage of partnering with a data annotation company is that you don’t have to deal with building a data annotation infrastructure from scratch. All you have to do is build specific guidelines and QA protocols for the company to follow.


Best practices for data annotation

It’s imperative that companies measure the quality of their data annotations. This is a twofold process: first, measure the annotations against a set of ideal (gold standard) annotations to determine their accuracy; second, measure the consistency of annotations to ensure that the assembled team of annotators labels data in the same way.
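As a simple illustration of both measurements, the sketch below scores each annotator against a gold standard and then computes pairwise agreement between annotators. It assumes single-label image annotations; the image IDs, labels and annotator names are illustrative and would normally be loaded from your tool's exports.

```python
# Sketch: measuring annotation accuracy and inter-annotator consistency.
# The image IDs, labels and annotator names below are illustrative examples.
from itertools import combinations

# Gold standard labels for a small set of images.
gold = {"img1": "cat", "img2": "dog", "img3": "cat", "img4": "bird"}

# Each annotator's labels for the same images (multipass annotation).
annotators = {
    "annotator_a": {"img1": "cat", "img2": "dog", "img3": "dog", "img4": "bird"},
    "annotator_b": {"img1": "cat", "img2": "dog", "img3": "cat", "img4": "bird"},
}

def accuracy(labels: dict, reference: dict) -> float:
    """Fraction of labels that match the reference (gold standard) labels."""
    matches = sum(labels[k] == reference[k] for k in reference)
    return matches / len(reference)

# 1) Accuracy: each annotator against the gold standard.
for name, labels in annotators.items():
    print(f"{name} accuracy vs gold: {accuracy(labels, gold):.2f}")

# 2) Consistency: pairwise agreement between annotators.
for (a, la), (b, lb) in combinations(annotators.items(), 2):
    agree = sum(la[k] == lb[k] for k in gold) / len(gold)
    print(f"{a} vs {b} agreement: {agree:.2f}")
```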

Other best practices for labeling worth noting are:

  1. Creating a gold standard
  2. Using a small set of labels
  3. Performing ongoing statistical analysis
  4. Asking multiple annotators to label the same data point (multipass)
  5. Reviewing each annotator
  6. Hiring a diverse team
  7. Iterating continuously

Evaluating quality of training datasets

Three important parameters that indicate the quality of training data are:

  1. Data diversity: Diverse training datasets minimize biases in model predictions and outcomes. For instance, if a model is trained to predict cats, using images of only domestic cats will limit the model’s prediction capabilities. To get better outcomes it is advisable to include a wide variety of cat images, including different attributes like sitting cats, running cats, standing cats, sleeping cats, etc.
  2. Data adequacy and imbalance: Use datasets that are large enough to train your models, and check the distribution of classes and other variable factors that might affect the model’s outcomes to ensure the dataset isn’t imbalanced (a simple check is sketched after this list).
  3. Data reliability: Reliability refers to the degree to which you can trust your data. You can measure reliability by determining the following factors:
    • Frequency of human errors: If the dataset is labeled by humans, there are bound to be some errors. How frequent are those errors, and how can you correct them?
    • Noisy data features: Some amount of noise is okay. But data that has too many noisy features may affect the outcome of your models.
    • Duplicate data: The same data records may be duplicated, for example because of a server error, a storage crash or a cyberattack. Evaluate how these events may impact your data and have contingency plans in place.
    • Label accuracy: Incorrect labels and attributes account for large gaps in model performance. It is important to maintain high precision and recall rates for labeled data.
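As a concrete version of the imbalance and duplicate checks described above, the following sketch counts images per class and flags byte-identical files via hashing. The directory layout (one folder per label) is a hypothetical assumption; adapt the label-extraction logic to however your dataset stores labels.

```python
# Sketch: two quick dataset-quality checks, class balance and duplicate files.
# The directory path and per-label folder layout are hypothetical examples.
import hashlib
from collections import Counter
from pathlib import Path

data_dir = Path("./dataset")  # assumed layout: dataset/<label>/<image files>

# 1) Class balance: count images per label folder and print the distribution.
class_counts = Counter(p.parent.name for p in data_dir.glob("*/*") if p.is_file())
total = sum(class_counts.values())
for label, count in class_counts.most_common():
    print(f"{label}: {count} images ({count / total:.1%})")

# 2) Duplicates: hash file contents and report any repeated hashes.
seen = {}
for path in data_dir.glob("*/*"):
    if not path.is_file():
        continue
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in seen:
        print(f"duplicate: {path} matches {seen[digest]}")
    else:
        seen[digest] = path
```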

The TELUS International approach

TELUS International offers secure and robust data creation and data annotation services to train ML models and support various AI applications. We rely on an analytics-based approach to identify and minimize error rates in training data, running the data through multiple expert human-in-the-loop review passes until it reaches the required accuracy, ensuring our clients have access to the highest quality data. Additionally, our world-class UX designers are constantly working to improve the annotators’ experience and productivity to make the overall process more efficient.

Reach out to learn how we can help with your next computer vision project.

