
The shift to responsible AI and best practices for implementation

Posted July 26, 2022

Although artificial intelligence (AI) may still feel shapeless and intangible to many, AI is already deeply impacting how we live, work and communicate today.

From our voice-enabled virtual assistants, to our algorithmically curated social media feeds, to our car’s AI-enabled driver assistance features, AI is all around us now — and it will continue to grow in the years to come. “There’s no one who uses any type of technology in the world that’s untouched by AI right now,” says Paul Hecht, senior product marketing manager of AI Data Solutions at TELUS International.

The rapidly advancing technology is also revolutionizing business functions by solving complex problems via automation in industries like manufacturing, retail, automotive, healthcare, transportation, financial services and more. According to Precedence Research, the AI market is projected to reach approximately $1.5 trillion by 2030, expanding at a compound annual growth rate (CAGR) of 38%.

Given its near-ubiquity and immense influence in shaping our experiences, it is becoming increasingly important that AI is developed ethically and responsibly. That’s why the success of AI demands a shift toward responsible AI.

What is responsible AI and why does it matter?

The purpose of AI should always be to help humans, not harm them. However, there have been notable instances of AI baking in bias, both from a lack of diversity in the data used and from the blind spots of its developers. In the previous decade, when AI was still nascent, a social media bot meant to learn from casual conversations with humans went from enhancing conversational understanding to spewing racist, sexist and politically biased comments in a matter of hours, illustrating how easily AI can reinforce human biases and prejudices.

Since AI-powered systems evolve continuously with data and use, their resulting misbehavior is often harder to detect and correct over time. Responsible AI, says Hecht, is “really about how organizations design and build models responsibly to remove bias and properly represent the ever-changing user base that their products will touch.”

Avoiding and removing bias is a dominant guidepost in the development of responsible AI, but it is not the only one. Responsible AI should also:

  • Be transparent and explainable
  • Be human-centered
  • Benefit society
  • Create better opportunities where technology and people can co-exist
  • Enforce the highest standards of privacy
  • Proactively comply with data governance standards such as the EU’s GDPR

In addition, Hecht emphasizes that a critical component of responsible AI is being accountable for how you manage the individuals influencing — and being influenced by — artificial intelligence. A lack of accountability in these areas can lead to the spread of bias.

For instance, voice-enabled virtual assistants are predominantly trained to interpret continuous, uninterrupted speech, which alienates users with speech impairments or disorders. Incorporating considerations such as diverse speech patterns and more flexible listening capabilities can improve accessibility for segments that would otherwise be excluded.

Consider a self-driving car that’s unable to recognize people of color due to racially underrepresented data, or an applicant tracking system reinforcing gender bias while making critical hiring decisions. Such scenarios demonstrate that while AI technology may be inherently neutral, AI — built by and interacting with humans — is injected with underlying beliefs, stereotypes and values.

Therefore, developers must apply responsible AI thinking from the very outset of such projects. Building teams that are diverse in gender, race, ethnicity, class and other dimensions can significantly reduce unintended bias in AI.

Best practices for operationalizing responsible AI

Today, many large-scale AI innovators and organizations have defined responsible AI frameworks that aim to safeguard against the misuse of AI. But as Reid Blackman, an AI ethicist and author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI, wrote in the Harvard Business Review, “The difficulty comes in operationalizing those principles.” One of the tips he gives for implementing responsible AI is to formally and informally incentivize employees to play a role in identifying ethical risks.

Hecht echoes this strategy for making responsible AI a company-wide effort. “One of the ways organizations can take ownership of their role in the development of responsible AI and removal of bias is for it to be made a priority at every level of the organization,” he says.

Having the responsibility stem directly from leadership and spread throughout the organization is essential for empowering employees to uphold AI ethics. It enables companies to build ethical values and align the organization on a shared commitment. Hecht continues: “This can’t be something that sits on a sticky note in a lab somewhere. It has to be living through the DNA of an organization.”

Cross-functional collaboration is an effective way to address blind spots in AI systems that often go unrecognized until an unanticipated risk or behavior occurs. Designing and implementing strategies with multi-functional viewpoints can help reduce or eliminate potential pitfalls.

In addition, addressing ethical concerns must be prioritized throughout a product’s lifecycle, not just at the end. The data used to train AI must be kept free of bias at every stage of the development process. Data processing activities such as collection, annotation, relevance assessment and validation require meticulous attention to detail, and an awareness of data diversity, volume and representation is needed to deliver responsible AI outcomes in the long term.
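As a simple illustration of what that awareness might look like in practice, the Python sketch below (with hypothetical column names and assumed target shares, not a TELUS International tool) compares how often each demographic group appears in a training dataset against its estimated share of the intended user base, and flags groups that are underrepresented.

```python
import pandas as pd

# Hypothetical annotation metadata: each row is one labeled training sample,
# tagged with the demographic group of the person who produced it.
samples = pd.DataFrame({
    "speaker_group": ["A"] * 7 + ["B"] * 2 + ["C"] * 1,
})

# Assumed share of the target user base that each group represents.
target_share = {"A": 0.50, "B": 0.30, "C": 0.20}

# Proportion of the training data contributed by each group.
observed_share = samples["speaker_group"].value_counts(normalize=True)

# Flag any group whose share of the data falls well below its share of users.
TOLERANCE = 0.05
for group, expected in target_share.items():
    observed = observed_share.get(group, 0.0)
    if observed < expected - TOLERANCE:
        print(f"Group {group} is underrepresented: "
              f"{observed:.0%} of training data vs. {expected:.0%} of target users")
```

Because AI systems evolve continuously with data and use, a check like this is most useful when it runs repeatedly as new data is collected and annotated, rather than once at the end of development.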

A central part of accomplishing this is the incorporation of responsible AI principles, which are primarily founded on the concepts of fairness; reliability and safety; privacy and security; inclusiveness; and transparency and accountability. Illustrating the adoption of responsible AI at TELUS International, Hecht says, “We work closely with our clients to offer diverse workforce solutions to mitigate AI bias, use built-in tools to anonymize personally identifiable information (PII) from data to comply with data governance regulations and implement the highest security standards to ensure data security.”
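To make the anonymization step more concrete, here is a minimal Python sketch, assuming the goal is to mask obvious identifiers such as email addresses and phone numbers in free-text data before annotation. It is illustrative only, not the built-in tooling Hecht describes; production anonymization covers many more identifier types and typically combines pattern matching with trained entity-recognition models.

```python
import re

# Simple patterns for two common PII types: email addresses and phone numbers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(anonymize("Contact jane.doe@example.com or call +1 (555) 010-9999."))
# -> Contact [EMAIL] or call [PHONE].
```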

Technology is constantly changing, and so are the related issues. An organization moving toward responsible AI must recognize that ethics in AI are rapidly evolving, and that the approach requires a mindset shift and flexibility rather than a one-off seminar.

Building the future of responsible AI

As we continue to uncover the myriad applications of AI and their enormous potential to transform our collective realities, the challenges of implementing ethical diligence and governance of AI become more pronounced.

To a large extent, responsible AI is focused on making sure that AI reaches its full potential to improve our lives and benefit society. Apart from principles and regulations, a fundamental component in building responsible AI is diverse and high-quality data. Ensuring that the data used to train AI systems and models is free from bias and accurately represents all the target users of the technology can substantially improve their performance.

Selecting an AI data partner focused on mitigating bias is an important first step in the ongoing journey to achieving responsible AI. Together, we can work to ensure AI systems are secure, reliable and accountable. Reach out to our team of experts to learn more.

