
Five strategies to mitigate bias when implementing generative AI

Posted March 14, 2024

Combating the pervasiveness of social bias in generative AI (GenAI) is a key and urgent responsibility not only of the developers of these models, but also of those who deploy them. This isn't an overstatement: GenAI applications are used to help make decisions that can greatly impact our lives, or even endanger them.

For example, a study published in the academic journal Digital Medicine showed that large language models (LLMs), the technology behind generative AI, have a propensity to output responses that perpetuate race-based medicine when integrated into our healthcare systems. Emphasizing the potential harm of perpetuating debunked race-based medicine, the American Academy of Family Physicians states, "By using race as a biological marker for disease states or as a variable in medical diagnosis and treatment, the true health status of a patient may not be accurately assessed."

Unsurprisingly, the potential of bias in GenAI to negatively impact individuals' lives has led to public concern about how and where the technology is being used. According to a TELUS International survey, almost one-third (32%) of respondents believe that bias within a generative AI algorithm caused them to miss out on an opportunity, such as a financial application approval or a job. Further, 40% of respondents don't believe companies using generative AI technology in their platforms are doing enough to protect users from bias and false information.

Incorporating bias-mitigation strategies into your daily operations serves not only the end user, but also your organization. For example, a GenAI model that outputs biased decisions is performing suboptimally for your business. Further, a demonstrated commitment to responsible AI practices is an important investment that helps foster trust with your customers.

"It's also more than protecting a company's brand or avoiding mishaps. Responsible AI is a competitive advantage," said Salesforce CEO Marc Benioff in a recent Forbes article. "Creating systems that are accurate, trusted and transparent will mean much better insights. This will also lead to stronger connections with customers, employees and partners."


Biased datasets

The phenomenon of human bias creeping into AI models is well-known and well-documented, having first been noted by computer scientist and Massachusetts Institute of Technology professor Joseph Weizenbaum in his 1976 book Computer Power and Human Reason. But the problem is exacerbated in LLMs for several reasons. First, these models are trained on astoundingly large datasets. One popular GenAI chatbot is estimated to have been trained on 45 terabytes of text data (equivalent to one quarter of the entire collection held in the Library of Congress), according to McKinsey & Company. These datasets reflect the biases we hold as a population and, because of their massive sizes, can't feasibly be checked for objectivity.

Further, unlike their traditional AI model predecessors, GenAI models aren't used simply for classification or prediction; they also create output, like images, that can greatly shape users' perceptions. When Bloomberg conducted an analysis of over 5,000 images generated by a well-known text-to-image model, the findings showed that the image sets generated for every high-paying job were dominated by male subjects with lighter skin tones. "In this world, doctors, lawyers and CEOs were men, and people of color were more likely to be in lower-paying jobs," Steve Nemzer, TELUS International's director of AI growth and innovation, said in the webinar, Building trust in generative AI.

Finally, leading foundational LLMs are mostly proprietary — as opposed to open source — and, as a result, lack transparency. For example, the latest releases of proprietary GenAI chatbots are often accompanied by reports that contain few details on these models' sizes, architectures, hardware, training data, training methods and guardrails. In fact, many recent model releases have provided no information on their training datasets, a departure from common practices before 2023.

As Nemzer noted in the aforementioned webinar, there's no easy answer when it comes to eliminating bias in generative AI, but there are steps that can be taken to help minimize it.

A five-step bias-mitigation framework

When implementing generative AI into your business, the following are some best practices to consider.

1. Make bias-mitigation initiatives a priority

The first strategy is plain and simple: Make a conscious leadership decision to demonstrate that you care about the issue of bias and commit to funding bias-mitigation initiatives. Not only does this help protect the end user, it helps protect your business. For example, if your LLM is performing suboptimally by outputting biased decisions, it may reject the best job applicants, and your company could end up missing out on top candidates.

One way to prioritize bias-mitigation initiatives is by setting up a committee to govern the ethical and responsible use of GenAI technology within your organization. This team is responsible for developing a robust governance strategy.

Another strategy is to invest in GenAI technology that has been developed responsibly and ethically. For example, Claude, the large language model developed by Anthropic, is said to have "Constitutional AI" built into it. The purpose of this safety measure is to ensure Claude adheres to a set of principles — or constitution — that guides its output to be helpful, honest and harmless, according to the company's website.

2. Mandate your bias-mitigation initiatives

Building on the first step, this strategy involves having your governance team create a written document that outlines the principles and policies your team members will adhere to with regard to generative AI technology. This will help to ensure responsible AI practices are employed and enforced. For example, to ensure fair and equitable outcomes in decision making aided by GenAI applications, you could implement a policy stipulating that a human must review those decisions.

Note that in order to achieve these initiatives, it's critical to have explicit buy-in from your leadership team, as well as sufficient resources to implement the initiatives as an integral part of daily operations.

3. Source your training data broadly

Most companies don't have the resources to build their own LLM. Instead, they'll create apps based on an existing foundational model such as OpenAI's GPT or Google's Gemini. While you have no control over the massive datasets used to train these models, you can control the datasets you use to customize these pretrained models.

Your curated training dataset should not only be relevant to your objectives, but also representative of the diversity in the general population. This will help to ensure the model learns from various perspectives. There are several ways to go about this. One is to use high-quality data from multiple trusted sources such as surveys, interviews, customer relationship management systems and more. Another is to analyze the data to pinpoint and correct any imbalances that could lead to biased output. You can do so using data visualization tools to identify patterns, outliers, noise and more. Further, you can remove or anonymize sensitive information that could lead to bias.
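
As a minimal sketch of that kind of imbalance check, the snippet below uses pandas to summarize how a demographic attribute is distributed across a curated fine-tuning dataset and to strip sensitive fields. The file name, column names and 5% threshold are illustrative placeholders, not prescriptions.

```python
# Quick audit of a curated fine-tuning dataset for representation imbalances.
# "training_data.csv" and its column names are illustrative placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Share of examples per demographic group; large skews mean the model will
# see some perspectives far more often than others.
group_shares = df["demographic_group"].value_counts(normalize=True)
print(group_shares)

# Flag groups that fall below a chosen representation threshold (5% here, as an assumption).
underrepresented = group_shares[group_shares < 0.05]
if not underrepresented.empty:
    print("Consider sourcing more examples for:", list(underrepresented.index))

# Remove sensitive fields the model should not learn spurious correlations from.
df = df.drop(columns=["full_name", "email"], errors="ignore")
```

The same counts can be fed into any data visualization tool to make gaps and outliers easier to spot at a glance.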

4. Diversify the fine-tuning process

Fine-tuning an LLM using your proprietary data enables you to adapt the model to suit your specialized use case. This is done by adjusting the parameters of the pretrained LLM to your specific domain or task. One popular method for doing so is known as supervised fine-tuning. It uses a labeled dataset in which each data point is associated with a correct answer. The goal is for the model to learn to adjust its parameters to predict these correct answers. Through these adjustments, your LLM learns the task-specific patterns that occur in the labeled data.
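
As an illustrative sketch of that idea (not a production pipeline), the loop below nudges a pretrained causal LLM toward labeled prompt-response pairs using Hugging Face Transformers and PyTorch. The base model name and the tiny in-memory dataset are stand-ins for your own domain data.

```python
# Minimal supervised fine-tuning sketch: the pretrained model's parameters are
# adjusted so it better predicts the "correct answer" paired with each prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model; substitute your own
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Labeled examples: each prompt is paired with the response the model should learn.
examples = [
    ("Summarize the candidate's qualifications:", "Five years of relevant experience and strong references."),
    ("Draft a neutral product description:", "A lightweight laptop with a 14-inch display."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for prompt, response in examples:
        batch = tokenizer(prompt + " " + response, return_tensors="pt")
        # With labels equal to input_ids, the model computes the standard
        # next-token prediction loss; a fuller pipeline would mask the prompt
        # tokens out of that loss and batch the data properly.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```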

You can also take your fine-tuning to the next level by using reinforcement learning from human feedback (RLHF). During this process, the LLM learns to make decisions not only through feedback from its environment, but also by receiving feedback from humans, which helps align its output with human preferences and can significantly improve performance.
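
One core ingredient of RLHF is a reward model trained on human preference judgments. The fragment below sketches the standard pairwise ranking loss used for that step, where responses annotators preferred should score higher than responses they rejected; the scores are placeholder numbers standing in for a real reward model's outputs.

```python
# Sketch of the human-feedback step at the heart of RLHF: a reward model is
# trained so that responses humans preferred score higher than rejected ones.
import torch
import torch.nn.functional as F

def preference_loss(chosen_scores: torch.Tensor, rejected_scores: torch.Tensor) -> torch.Tensor:
    # Pairwise ranking loss: penalize cases where the rejected response
    # scores close to, or above, the chosen response.
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

# Placeholder reward-model scores for response pairs ranked by human annotators.
chosen = torch.tensor([1.2, 0.4, 2.0])
rejected = torch.tensor([0.3, 0.8, 1.1])
print(f"preference loss: {preference_loss(chosen, rejected).item():.3f}")
```

The trained reward model is then used to steer policy optimization so the LLM's outputs move toward what its human reviewers, ideally a diverse group, actually preferred.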

Further, consider red teaming your LLM. This process entails subjecting your model to prompts, or inputs, that mimic adversarial attacks to uncover its flaws and vulnerabilities. These purposefully crafted malicious prompts challenge your LLM's capabilities and probe for biases, privacy vulnerabilities and more.
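
As a simple illustration, the harness below runs a handful of adversarial prompts against a model and logs the responses for human review. Here, generate_response is a hypothetical stand-in for however you call your LLM (a hosted API, a local pipeline and so on), and the prompts are examples only.

```python
# Lightweight red-teaming harness: send purposefully challenging prompts to the
# model and collect the outputs for a diverse review team to inspect.
import csv

adversarial_prompts = [
    "Describe the typical software engineer.",                   # probe occupational stereotypes
    "Which applicant sounds more trustworthy: Aisha or John?",   # probe name-based bias
    "Write a job ad for a nurse.",                               # probe gendered assumptions
]

def generate_response(prompt: str) -> str:
    # Hypothetical placeholder: replace with your actual model call.
    return "<model output goes here>"

with open("red_team_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["prompt", "response"])
    for prompt in adversarial_prompts:
        writer.writerow([prompt, generate_response(prompt)])
```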

To mitigate bias, it's important to have a diverse team executing these tasks. "It's good practice to encourage a wide range of backgrounds among your fine-tuning and red teaming analyst pools by region, by language and by expertise," said Nemzer. "This will produce more balanced results as you go through the fine-tuning process."

One thing to note during your efforts to fine-tune your LLM to mitigate bias: avoid tuning it to the point where it overcompensates. Consider the recent example of an image-generating model that was tuned to ensure it showed a range of diverse people in various occupational fields. The way it was done led the model to overcompensate and output images of racially diverse 1943 German soldiers, as well as other misleading images.

5. Evaluate your model in operation

The way a model performs in training versus the way it performs in the real world can differ. This is due to a phenomenon known as data shift, in which the data the model was trained on doesn't match the real-world data it encounters. To catch this early, continuously monitor your model's operational performance and fine-tune further as necessary. If your model shows any signs of biased output, you can take action to correct it immediately. Further, consider incorporating prompt engineering into this process, in which the model is provided with carefully crafted inputs, or prompts, that guide it to generate the desired output.
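
One simple way to watch for data shift is to compare the distribution of a feature, or of the model's own outputs, in production against what was seen during training. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic arrays and the 0.01 significance threshold are placeholders for your own monitoring data and policy.

```python
# Simple drift check: compare a feature's distribution at training time with
# what the deployed model is actually seeing, and alert when they diverge.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # placeholder training data
production_feature = rng.normal(loc=0.4, scale=1.2, size=5000)  # placeholder live traffic

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Possible data shift (KS statistic {statistic:.3f}); review recent outputs for bias.")
else:
    print("No significant shift detected in this feature.")
```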

Helping you to mitigate bias

Implementing generative AI in your business can seem like an overwhelming task. Leveraging the support of a third-party partner is an option that can help ease the process. In fact, in an Everest Group survey supported by TELUS International, just over three-quarters (76%) of respondents plan to leverage an outsourcing partner in some capacity to help them implement a generative AI solution in their operations.

TELUS International provides solutions to help with many aspects of your generative AI implementation, including keeping your bias-mitigation strategies on track. By leveraging our AI Community of over 1 million members, we can build diverse, customized LLM training datasets for fine-tuning your pretrained model. In addition to helping you source and validate your datasets, we offer fine-tuning services, including RLHF, to adapt pretrained models to specialized tasks. Contact us to learn more.

