
The future of AI regulation in 2023

Posted January 13, 2022 - Updated December 7, 2022

People are paying closer attention to artificial intelligence (AI) than ever before.

According to Edelman's special report on the tech sector, part of its annual Trust Barometer, four in 10 respondents believe AI needs more regulation, and the majority of respondents were not convinced of AI's positive impact.

Some say this chasm between trust and technology has formed for good reason: For most of AI's existence, there hasn't been much regulation around it, and the rules have often seemed loose and opaque relative to just how world-changing the technology has been.

Governments and organizations around the globe are now responding by developing AI regulations that establish guidelines within their jurisdictions, most importantly around inclusivity and fairness, data privacy and transparency.

Arguably, though, a patchwork of different laws isn't enough, particularly when we consider the international scope of tech and AI in business. As Paul Hecht, senior product marketing manager of AI Data Solutions at TELUS International notes, businesses have a significant responsibility in answering for the impact of AI, especially when it comes to the existence and perception of fairness. "I think there is some justifiable apprehension," says Hecht.

"At the extreme end of the spectrum, there's the 'is AI going to take over the world' viewpoint, which is inaccurate — but, there needs to be checkpoints," Hecht continues. "Because AI and robots can only perform as well as the data that's feeding the machine learning models, universal guidelines will lead to increased public confidence that the uses of AI are for the greater good."

AI legislation across the globe

Historically, the law has been no match for the speed of AI creation and adoption. Industry has traditionally led the way: tech start-ups introduce new concepts or paradigms, and the government follows.

While digital privacy laws have been around for a while, the regulation of artificial intelligence is much newer. By and large, the typical approach to date has been one of self-regulation; essentially, companies and other organizations have been free to do what they please regarding AI, so long as it doesn't contravene existing criminal or civil laws.

But that paradigm has started to shift and the regulation of artificial intelligence is now ramping up worldwide. Below are some of the new policies that have recently come into effect or are on the horizon for 2023.

AI regulation in Europe

The AI Act is a proposed European law that is expected to be adopted in 2023. The main objectives of the law are to ensure that AI systems within the European Union (EU) enhance governance and enforcement of existing law, to facilitate a single market for trustworthy AI and to prevent market fragmentation. It assigns AI applications to one of three risk categories, each with its own set of prescribed actions:

  • Unacceptable risk
  • High risk
  • Not explicitly banned or listed as high-risk

The EU AI Act is expected to become the new gold standard for regulating AI, adding a further layer of regulation alongside the GDPR.

AI regulation in China

In March 2022, the Cyberspace Administration of China adopted the Internet Information Service Algorithmic Recommendation Management Provisions. These regulations apply to any entity that uses recommender systems or similar content-decision algorithms in apps and websites, including virtual reality, text generation, text-to-speech and deepfakes.

AI regulation in Brazil

In September 2021, the Brazilian government approved the Marco Legal da Inteligência Artificial, Bill no. 21/2020, with the intent to regulate the use of AI within the country. While the bill can still be modified in the Senate, it contains a number of controversial provisions, mainly around the lack of protections for specific societal groups and their representatives.

AI regulation in the United Kingdom

In July 2022, the U.K. government published its AI Action Plan, summarizing its intent to introduce a "pro-innovation approach to regulating AI." Unlike the EU's AI Act, which gives responsibility for AI governance to a central regulatory body, the U.K. government's proposals will enable regulators to take a case-by-case approach to the use of AI in a range of settings. The goal is to ensure that the U.K.'s AI regulations can keep pace with change and avoid serving as an obstacle to innovation.

AI regulation in Canada

On June 16, 2022, the Canadian government tabled Bill C-27, The Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). This is the country’s first attempt at regulating AI systems outside of privacy laws and would result in criminal and/or financial repercussions for businesses found to engage in unlawful or fraudulent behavior related to AI.

AI regulation in the United States

A number of regulations regarding automated decision making that involve the use of data, machines and algorithms have recently been put in place across the U.S. The Illinois Artificial Intelligence Video Interview Act, for example, requires employers to obtain consent when they use AI to vet video job interviews.

The year 2023 will see new regulations come into effect in New York City that prohibit employers from using automated decision-making tools to screen candidates for employment decisions, unless the tool has undergone a bias audit in the previous year. Colorado, Connecticut and Virginia will implement similar laws in the year ahead, all aimed at giving consumers the right to opt out of automated decisions.
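What a bias audit actually examines varies by jurisdiction and tool, but one common measure is comparing selection rates across demographic groups. Below is a minimal, hypothetical sketch of that kind of check; the group names, outcomes and 0.8 threshold are illustrative assumptions, not the methodology prescribed by any of these laws.

    # Illustrative sketch of one check a bias audit might run: comparing
    # selection rates across demographic groups (impact ratio). The groups,
    # counts and 0.8 threshold are hypothetical examples only.
    from collections import Counter

    # (group, was_selected) pairs from a hypothetical screening tool's output
    outcomes = [
        ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", False), ("group_b", False), ("group_b", True),
    ]

    totals = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, ok in outcomes if ok)

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / best
        flag = "review" if ratio < 0.8 else "ok"  # 4/5ths rule of thumb
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")

A real audit would of course use the tool's actual historical outcomes and an independent auditor, but the underlying comparison is this simple in spirit.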

Reducing bias in AI requires intention

A major reason why governments are seeking to regulate AI is its enormous potential for bias. As we know, AI models are only as diverse as the data used to train them, and that data mostly starts with humans.

If AI's human counterparts are too homogenous as a group, organizations run a real and potentially devastating risk of introducing and reinforcing racial, cultural, socioeconomic and gender biases into AI models and algorithms. Imagine an alternative lending start-up whose algorithm routinely denies mortgage loans to people of color, or a healthcare company whose symptom checker doesn't account for diseases that disproportionately affect women.

Hecht says sourcing data and AI training staff – such as data annotators and data collectors – from as far afield as possible is critical to rooting out bias. "For example, TELUS International works with our partners from a solutions standpoint, to ensure that we collect data and source our AI Community members from a diverse footprint," says Hecht. It isn't always easy to find people who, for instance, speak a rare dialect from a remote corner of the world — but it is worth it.

"All the training data that we deliver has human quality standard checkpoints built in," he notes. Consistency is achieved by continuously examining the data and running training programs to ensure as many gaps as possible are filled.

This is a movement far beyond tokenism. Intentional inclusion is essential to reducing bias in AI and ensuring that algorithms do not perpetuate the same kinds of discrimination that have marked our world to date.

Businesses are active partners in AI rule-making

As we've seen with past tech-facing regulations, governments don't always get it 100% right the first time — which means three things. One, like tech itself, AI legislation will have to be iterative to remain relevant and enforceable. Two, governments will need to partner with corporations and researchers — and, ideally, each other — to understand the true scope of regulatory requirements. And three, companies must identify the AI processes they use and conduct a thorough risk assessment.

Adapting a checklist like the European Commission's Assessment List for Trustworthy AI is one way corporations can ensure their bases are covered. Once the risks are understood, they can take reasonable steps to mitigate them. Furthermore, organizations should operate with explainability at the forefront of their decision-making. Explaining an adverse decision enables an effective appeal if a system's prediction doesn't make sense, or acceptance of the outcome if it does.
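How an explanation is produced depends on the model, but for a simple linear scoring model it can be as direct as listing each feature's contribution to the decision. The toy sketch below uses hypothetical feature names, weights and applicant values purely to illustrate the idea; it is not a prescribed method.

    # Toy explanation of an adverse decision from a linear scoring model.
    # Feature names, weights and the applicant's values are hypothetical.
    weights = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
    applicant = {"income": 0.2, "debt_ratio": 0.9, "years_employed": 0.1}
    bias = -0.1

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    decision = "approved" if score > 0 else "denied"

    print(f"score={score:.2f} -> {decision}")
    # List the factors that pushed the score down first, so an applicant
    # can see what drove an adverse outcome and contest it if it is wrong.
    for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: contribution {value:+.2f}")

Surfacing the most negative contributions first is what makes an appeal practical: the applicant sees which inputs drove the denial and can challenge them if they are inaccurate.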

To create a successful ecosystem, companies have a duty to be forward-thinking and proactive when it comes to protecting individuals' privacy. For instance, TELUS International's Ground Truth Studio, a proprietary AI training data platform, uses an image anonymizer that blurs out license plates and faces in photos. In order to deliver better AI outcomes, we believe in following certain principles. Take a look at our guidelines for responsible AI.
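Ground Truth Studio's anonymizer is proprietary, but the general technique (detect a sensitive region, then blur it) can be sketched with open-source tools. The example below uses OpenCV's bundled face detector on placeholder file paths; it illustrates the approach, not the platform's implementation.

    # Minimal face-anonymization sketch with OpenCV: detect faces, blur them.
    # File paths are placeholders; a production anonymizer (like the one
    # described above) would also handle license plates and harder cases.
    import cv2

    image = cv2.imread("street_photo.jpg")  # placeholder input path
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        region = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 30)

    cv2.imwrite("street_photo_anonymized.jpg", image)  # placeholder output path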

Investing in reducing bias is also a critical area for corporate social responsibility. Right now, organizations have the opportunity to pioneer new ways of fostering inclusivity, fairness and truth in AI and throughout our technology systems. These societal values are possible to achieve through artificial intelligence; it just takes intention, creativity and, more than anything, the human element.

Restoring a sense of balance to technology may be just what we need to restore citizens' trust in it, too.

