
The importance of controlling for bias in AI

Posted July 14, 2020

Good artificial intelligence, at its core, depends on good humans.

AI has the potential to greatly improve customer experiences, especially through chatbots and other technologies that use natural language processing (NLP). Unfortunately, it also has the potential to bake in underlying human biases. When AI programs are designed by humans and are based on historical data, they risk perpetuating inequalities. But, by recognizing the ways that artificial intelligence models can be led astray, programmers can develop and implement strategies for controlling machine bias.

Every brand looking to adopt AI solutions should be thinking about detecting and rooting out conscious and unconscious biases. One way of doing this is to ensure you begin with a culturally diverse team with employees coming from different backgrounds, life experiences and perspectives. By tapping into the strengths of this diversity, your brand may be better positioned to think and behave differently and transfer that knowledge and experience into the AI solutions you design and deliver.

This is why, when it comes to CX, artificial intelligence isn’t a destination unto itself; rather, agents and AI need to work together to bring out the best in one another. A customer service representative could use an AI application to help triage a customer’s needs or identify an issue sooner, while the AI can learn from the human interactions to get smarter over time.

But for AI to provide genuine value to customers, employees and companies, it has to have mechanisms in place to guard against bias.

Why bias can easily seep into AI-enabled tools

The term “artificial intelligence” can evoke a belief that its decisions are objective and free of bias. In reality, the opposite is true: the intelligence that drives AI originates from human data and is programmed by humans, people with flaws, blind spots and unconscious biases that can now impact a customer’s experience.

A major issue with relying on historical data to teach AI how to make decisions is that what a machine learns may not be representative of how you’d want it to behave in the future. In recent years, some companies have come under fire for using algorithms that reinforce gender discrimination in hiring. Consider this: AI pulls from historical data, and if your company is hiring to correct a gender imbalance, a poorly tuned model trained on that history may end up reinforcing the very bias you’re trying to escape, as the sketch below illustrates.
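As a minimal sketch of the mechanism, assuming entirely synthetic data and a toy screening model (nothing here reflects any real hiring system), consider what happens when the historical decisions the model learns from penalized one group:

```python
# Synthetic illustration only: a screening model trained on biased
# historical hiring decisions learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: years of experience and a group label (0 or 1).
experience = rng.normal(5, 2, n)
group = rng.integers(0, 2, n)

# Historical "hired" labels: equally experienced candidates from group 1
# were hired less often -- the past bias baked into the data.
hired = (experience + rng.normal(0, 1, n) - 1.5 * group) > 4.5

model = LogisticRegression().fit(np.column_stack([experience, group]), hired)

# The trained model now scores identical candidates differently by group.
for g in (0, 1):
    p = model.predict_proba([[5.0, g]])[0, 1]
    print(f"group {g}, 5 yrs experience: P(recommend hire) = {p:.2f}")
```

Simply deleting the group column rarely fixes this, because other features can act as proxies for it; the bias has to be measured and corrected deliberately.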

Wouldn’t a company in this situation notice that its AI is biased? The answer is: it depends on who on the team is making and deploying these tools.

As Nahia Orduña, senior manager in Analytics and Digital Integration at Vodafone, wrote last year for the World Economic Forum, “Non-homogeneous teams are more capable than homogenous teams of recognizing their biases and solving issues when interpreting data, testing solutions or making decisions.” Once again, this underscores the importance of hiring diverse, multicultural teams.

The pitfalls of not recognizing bias

In the recently published academic study Racial Disparities in Automated Speech Recognition, researchers examining some of the most well-known automated speech recognition systems found a prominent racial divide: the systems made roughly twice as many errors when transcribing Black speakers as when transcribing white speakers. “If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” said Kristian Lum, lead statistician at the non-profit Human Rights Data Analysis Group, speaking to The Guardian in 2017.
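Disparities like these are typically quantified as word error rate (WER) per speaker group. Here is a rough sketch of that kind of audit using the open-source jiwer library; the transcripts, hypotheses and group labels below are invented placeholders, not data from the study:

```python
# Sketch of an ASR fairness audit: compare word error rate (WER) across
# speaker groups. Requires `pip install jiwer`; all samples are invented.
from collections import defaultdict
import jiwer

# (speaker group, reference transcript, ASR hypothesis)
samples = [
    ("group_a", "set an alarm for seven", "set an alarm for seven"),
    ("group_a", "call my mother back", "call my mother back"),
    ("group_b", "set an alarm for seven", "set a alarm for heaven"),
    ("group_b", "call my mother back", "call my other bag"),
]

refs, hyps = defaultdict(list), defaultdict(list)
for grp, ref, hyp in samples:
    refs[grp].append(ref)
    hyps[grp].append(hyp)

# A persistent WER gap between groups is the divide the study documented.
for grp in refs:
    print(f"{grp}: WER = {jiwer.wer(refs[grp], hyps[grp]):.2f}")
```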

When it comes to customer service and the increasing use of AI-enabled chatbots, failing to actively look for and recognize bias means that customers could be treated unequally. By not questioning the data behind a data-driven AI decision, companies risk undermining their own values around treating customers fairly. Companies that have taken the time to question their data have increasingly looked to crowdsourced data as a potential remedy that introduces greater diversity.
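Questioning the data can start with something very simple: checking who is actually represented in it. A minimal sketch with pandas, where the column names and values are hypothetical:

```python
# First sanity check: who is represented in the chatbot's training data?
# Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "dialect_region": ["us_south", "us_west", "us_west", "us_west", "uk"],
    "issue_resolved": [0, 1, 1, 1, 1],
})

# Share of training examples per group, then outcome rate per group;
# heavy skew in either is an early warning sign of unequal treatment.
print(df["dialect_region"].value_counts(normalize=True))
print(df.groupby("dialect_region")["issue_resolved"].mean())
```

Let’s take a closer look at ways to counteract bias.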

How to control for bias

The recent attention to our own biases, both conscious and unconscious, in society and in AI, coupled with the significant real-world consequences of flawed models, has led to an outpouring of research on how to mitigate AI bias. Thankfully, this research also includes insights relevant to the customer experience.

One approach that has been gaining traction is human-in-the-loop (HITL), which takes a semi-supervised approach to machine learning as opposed to full automation. “What if, instead of thinking of automation as the removal of human involvement from a task, we imagined it as the selective inclusion of human participation?” asked Stanford professor Ge Wang on the university’s Human-Centered Artificial Intelligence website. “The result would be a process that harnesses the efficiency of intelligent automation while remaining amenable to human feedback, all while retaining a greater sense of meaning.”

Human-in-the-loop is a form of “active learning” that allows an AI model to be continuously improved by human experts who tackle edge cases and feed their judgments back into the loop.
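In practice, the loop often amounts to a confidence-based routing rule. Below is a minimal, self-contained sketch: the 0.9 threshold is an arbitrary assumption and the human reviewer is simulated by a stub function, not a real annotation queue.

```python
# Minimal human-in-the-loop sketch: the model auto-handles confident
# predictions and escalates uncertain ones to a human, whose labels are
# fed back into training. The 0.9 threshold is an arbitrary assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X[:100], y[:100])

def ask_human(i: int) -> int:
    """Stand-in for a real review queue; here the 'expert' is simulated."""
    return int(y[100 + i])

X_new = X[100:]
proba = model.predict_proba(X_new)
uncertain = proba.max(axis=1) < 0.9  # low confidence -> human review

human_y = np.array([ask_human(i) for i in np.flatnonzero(uncertain)])

# Feedback loop: retrain with the expert-labeled edge cases included.
model.fit(np.vstack([X[:100], X_new[uncertain]]),
          np.concatenate([y[:100], human_y]))
print(f"{uncertain.sum()} of {len(X_new)} cases escalated to a human")
```

In a real deployment, ask_human would enqueue the case for an annotator, and retraining would typically happen in batches rather than on every escalation.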

Understanding where AI bias exists also means being able to understand how AI models make their decisions. Recent years have seen a push toward explainable AI (XAI), encouraging developers to make their models more transparent: in other words, to answer how and why a model reaches its determinations. On top of that, more tools are coming online to help developers check for bias in their models, such as the Local Interpretable Model-agnostic Explanations (LIME) toolkit and Aequitas.
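For a sense of what this looks like, here is a hedged sketch of using the open-source LIME library to inspect a single decision; the toy model and feature names are invented for illustration, not a real CX system:

```python
# Sketch: use the LIME toolkit to see which features drove one decision.
# Requires `pip install lime scikit-learn`; model and features are toys.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["tenure", "ticket_count", "region_code"],  # hypothetical
    class_names=["deny", "approve"],
)

# Which features pushed this single prediction toward "approve"?
# If a proxy for a protected attribute dominates, that is a red flag.
exp = explainer.explain_instance(X_train[0], model.predict_proba,
                                 num_features=3)
print(exp.as_list())
```

Aequitas approaches the same goal from the group level, auditing metrics such as false positive rate disparities across demographic groups rather than explaining individual predictions.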

These increases in transparency around AI will afford CX teams the ability to pinpoint problems and constantly improve their models.

Of course, the ability to truly control and mitigate machine bias in AI often relies on developers asking the right questions and understanding the potential impact and unintended consequences of AI deployment. This requires a greater level of diversity within the teams developing an AI-enabled tool, both in terms of cultural backgrounds and skill sets. As AI plays an increasingly large role in customer service, good CX will depend largely on having the most comprehensive, thoughtful AI solutions available.


Check out our solutions

Test and improve your machine learning models via our global AI Community of 1 million+ annotators and linguists.
