CX Network, a division of IQPC
Artificial intelligence (AI) is here to stay, and it’s transforming the way CX practitioners operate. But as well as grappling with how to get the best out of the technology, companies must ensure it is used responsibly and fairly. This guide explains how.
The need for ethical AI
In CX Network’s latest Global State of CX survey, 67 percent of respondents said they either agree or strongly agree that customers are concerned about ethical AI use and the future development of AI for customer experience. Elsewhere, 38 percent said that awareness of how AI uses their data is among the top three customer concerns at present.
These figures show that to maintain trust with their customer base, brands need to demonstrate they are safeguarding people’s data and be transparent about their use of AI.
Such is the power of AI that governments around the world have introduced guidelines for how government, tech companies and individuals can work together to put safeguards in place, such as the Blueprint for an AI Bill of Rights in the US and the European Union’s Artificial Intelligence Act.
Businesses are taking note – in February 2024 leading corporations including Lenovo Group, Mastercard, Microsoft, Salesforce and Telefonica announced they were signing up to UNESCO’s Recommendation on the Ethics of Artificial Intelligence, pledging to build more ethical AI. The framework states that due diligence must be carried out to meet safety standards and identify the adverse effects of AI, and that timely measures must be taken to prevent, mitigate or remedy them, in line with domestic legislation.
In short, brands need to ensure they are using AI responsibly to comply with regulations, mitigate risks, build trust and, as we will see below, enhance customer satisfaction.
The ethical aspects of AI in customer experience
Whether you are using generative AI to create marketing materials or deploying AI-powered chatbots in the contact center, these are the considerations organizations must take into account when using AI in CX:
Data privacy
According to our research, 54 percent of CX practitioners strongly agree that data privacy and security is becoming a key issue for customers. This comes as no surprise given the vast amounts of data collected by websites, social media platforms and mobile devices.
AI systems are trained on data and information scraped from the internet, but customer data should only be collected for legitimate purposes, and the customer must remain in control of it.
For example, videocalling platform Zoom does not use its customers’ data to train its AI models or those of any third parties. Its AI capabilities are disabled by default unless the user chooses to share their data.
“Be transparent about what you are collecting and why. Implement clear and accessible privacy policies.” Annette Franz, CX Journey Inc.
First-party data – gathered through interactions with your brand – is key here, because customers must give permission for it to be shared and used. They should be kept fully informed of how their data will be used at every step of the journey, and have the opportunity to give informed consent before any data is collected or processed.
“Make sure that your data collection processes comply with data protection regulations and respect user privacy,” says Franz. “Be transparent about what you are collecting and why. Implement clear and accessible privacy policies, provide options for users to manage their data preferences and communicate transparently about how you intend to use their data.”
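To make this consent-first approach concrete, here is a minimal sketch of what consent-gated processing could look like. The ConsentRecord class, the purpose names and the can_process check are illustrative assumptions for this article, not any vendor’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative purposes a customer can opt in to; real taxonomies vary by business.
ALLOWED_PURPOSES = {"personalization", "analytics", "ai_training"}

@dataclass
class ConsentRecord:
    customer_id: str
    granted_purposes: set = field(default_factory=set)
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def grant(self, purpose: str) -> None:
        """Record an explicit opt-in from the customer."""
        if purpose in ALLOWED_PURPOSES:
            self.granted_purposes.add(purpose)
            self.updated_at = datetime.now(timezone.utc)

    def revoke(self, purpose: str) -> None:
        """Let the customer change their mind at any time."""
        self.granted_purposes.discard(purpose)
        self.updated_at = datetime.now(timezone.utc)

def can_process(record: ConsentRecord, purpose: str) -> bool:
    """Processing is allowed only for purposes the customer has explicitly granted."""
    return purpose in record.granted_purposes

# Example: a customer consents to personalization but not to AI training.
consent = ConsentRecord(customer_id="cust-123")
consent.grant("personalization")
print(can_process(consent, "personalization"))  # True
print(can_process(consent, "ai_training"))      # False
```

The point of the sketch is simply that the default is “no”: nothing is processed for a purpose the customer has not actively granted, which mirrors Franz’s advice on clear preferences and transparent use.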
Transparency
As well as being fully transparent on data use, companies must also be transparent with customers about when they are interacting with an AI-driven system. This includes clearly stating what data is being collected, how it's being used, and how decisions are being made.
Jaakko Lempinen, CX Network advisory board member and head of customer experience at broadcaster YLE Finland, explains: “Companies should be open about how AI technologies are employed in their operations, specifically in how data is collected, analyzed, and utilized to improve customer experience. This involves clearly communicating the purposes of data collection and the benefits to the customer, ensuring there's an understanding of the value exchange.”
According to Zendesk’s 2024 CX Trends report, 75 percent of organizations believe that a lack of transparency in their use of AI could lead to increased customer churn. Disclosing the decision-making process, for example why a chatbot has made certain product recommendations, helps to build trust.
Human supervision
Human supervision of AI systems is critical to make sure they are being used responsibly and ethically. Humans should have the ability to intervene in cases where the AI makes decisions that could harm customers or violate ethical principles.
EJ Cay, VP for Genesys UK and Ireland, states that this is important for AI-powered virtual assistants. “This ensures businesses always approach AI from a customer-first perspective as virtual assistants are trained on specific scenarios relevant to the business, supporting transparency and consistency of outcomes,” she says.
As well as supporting ethical AI this ensures better customer service. “This human intervention is essential when using AI for customer service, as anyone who has been frustrated by a chatbot will know,” notes Jason Giles, VP of product design for UserTesting. “Having a human there to guide the interaction or take over when the limits of AI are reached, especially in complex situations, helps the business stay efficient while keeping customers happy.”
“You want AI to supercharge the existing customer service process, rather than supersede it.” Jason Giles, UserTesting
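As an illustration of the handoff Giles describes, the sketch below shows one simple way a chatbot could decide when to pass a conversation to a human agent. The thresholds, the BotTurn fields and the should_escalate rule are assumptions made for this example, not any vendor’s production logic.

```python
from dataclasses import dataclass

# Illustrative thresholds; in practice these would be tuned on real conversation data.
CONFIDENCE_THRESHOLD = 0.75
MAX_FAILED_TURNS = 2

@dataclass
class BotTurn:
    reply: str
    confidence: float   # the model's own estimate that it understood the request
    is_sensitive: bool  # e.g. complaints, cancellations, vulnerable customers

def should_escalate(turn: BotTurn, failed_turns: int) -> bool:
    """Hand the conversation to a human agent when the AI is out of its depth."""
    return (
        turn.confidence < CONFIDENCE_THRESHOLD
        or turn.is_sensitive
        or failed_turns >= MAX_FAILED_TURNS
    )

# Example: a low-confidence answer on a sensitive topic triggers a handoff.
turn = BotTurn(reply="I think you may want our returns page?", confidence=0.4, is_sensitive=True)
print(should_escalate(turn, failed_turns=1))  # True -> route to a human agent
```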
Additionally, a growing number of experts are highlighting the importance of using AI to help humans, and not the other way round. Shannon Vallor from the University of Edinburgh in Scotland says: “Human-centered technology is about aligning the entire technology ecosystem with the health and well-being of the human person. The contrast is with technology that’s designed to replace humans, compete with humans, or devalue humans as opposed to technology that’s designed to support, empower, enrich, and strengthen humans.”
She points to generative AI as an example of technology created by organizations simply wanting to see how powerful they can make a system, rather than to meet a human need. “What we get is something that we then have to cope with as opposed to something designed by us, for us, and to benefit us. It’s not the technology we needed,” she explains.
Ensuring AI is human-centered and regularly monitored helps to address potential bias, as we will see below.
Bias mitigation
There have been numerous reports of bias in AI algorithms, including mortgage algorithms charging Black and Latino borrowers higher interest rates and the female voices of virtual assistants reinforcing gender stereotypes (many of these examples are collected in this PwC study).
Allowing bias to flourish is ethically wrong and can exclude and discriminate against certain groups of customers; addressing it ensures the fair treatment of all customers. To do this, the data scientists and business leaders responsible for developing AI must identify potential biases, regularly audit algorithms and take steps to mitigate unfair outcomes.
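As one concrete way to “regularly audit algorithms”, the sketch below compares approval rates across customer groups and reports each group’s ratio to a reference group, in the spirit of the widely cited four-fifths rule. The sample data and group names are made up for illustration; a real audit would use the organization’s own decision logs and a fuller fairness toolkit.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, e.g. from an AI-assisted workflow."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(rates, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Example with made-up audit data: a ratio well below 0.8 flags a disparity to investigate.
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 55 + [("group_b", False)] * 45
rates = approval_rates(sample)
print(disparate_impact(rates, reference_group="group_a"))  # {'group_a': 1.0, 'group_b': 0.6875}
```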
David Armano, executive vice president of AI analytics strategy at Ringer Sciences, believes that corporate culture also has a role to play. “Most industry experts assumed that bias in AI was one of the most significant ethical challenges, meaning the humans who build the AI allow their personal biases to seep into it, unknowingly or knowingly,” he says.
“But I believe it is culture — corporate culture also extends into the jobs debate around AI as companies view the opportunities to streamline and enhance productivity through AI automation. A healthy, ethical corporate culture is likely to spawn ethical AI considerations, and a less healthy or questionable culture may have the opposite effect.”
Regulations and compliance
AI privacy breaches are on the rise and can lead to damaging lawsuits. Some of the most high-profile incidents have involved employees entering sensitive company information into ChatGPT. Because many AI tools save input data for training purposes, this must be avoided at all costs. The usual basics of cyber hygiene, such as creating strong passwords and closing the application after use, also apply to AI tools.
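One practical safeguard is to strip obvious identifiers from a prompt before it is ever sent to an external AI tool. The sketch below is a deliberately minimal example using simple regular expressions; the patterns are illustrative assumptions, and a production system would rely on a vetted PII detection or data loss prevention library instead.

```python
import re

# Illustrative patterns only; real redaction needs far more robust detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\s-]?){9,14}\d\b"),
}

def redact(text: str) -> str:
    """Mask obvious identifiers before a prompt leaves the company network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
# Summarize the complaint from [EMAIL], card [CARD].
```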
Check you are complying with local data protection laws such as GDPR in Europe and the CCPA in California.
Case studies from leading companies
Totaljobs’ ethical generative AI companion
Recruitment platform Totaljobs is in the early stages of building an AI-powered virtual assistant called AI Job Search Companion to help users through the process of job searching online. Given the challenging nature of looking for work, the tool was devised with ethics in mind.
Head of product Somnath Biswas explains that Totaljobs has an in-house team exclusively dedicated to ethical AI. “We have made sure we are GDPR and EU AI Act compliant, but equally things like bias, toxicity, making sure that from an accessibility perspective, the conversation is equally helpful for people who are differently abled… these are the core aspects that we have accounted for,” he says.
“Compared to other conversational journeys out there, we have been more diligent. If you don’t get that right in the initial stages of the product, and try to bring ethics in afterwards, it isn’t very effective. Right from the very start, we had a defined frequency and we benchmark naturalness and empathy, but we also check for bias, toxicity, hallucination, and all these other aspects.”
This also extends to teaching the model how to respond to different prompts. “Within the prompts, you are also providing the guardrails so the machine knows which ones to respond to, which ones not to respond to, which ones to sidestep,” Biswas adds.
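The sketch below illustrates the kind of prompt-level guardrail Biswas describes: a standing system message that tells the model what to answer, what to decline and what to sidestep. The policy wording, the out-of-scope terms and the message format are illustrative assumptions, not Totaljobs’ actual prompts.

```python
# Illustrative guardrail prompt for a job-search assistant; not a real product's prompt.
SYSTEM_PROMPT = """You are a job-search assistant.
- Answer only questions about job searching, CVs and applications.
- If asked for medical, legal or financial advice, say you cannot help and suggest a professional.
- Never ask for or repeat sensitive personal data such as health or financial details.
- If a request is abusive or off-topic, politely decline and steer back to job searching."""

# Crude example terms that suggest a request is outside the assistant's remit.
OUT_OF_SCOPE = ("diagnose", "lawsuit", "invest", "password")

def build_messages(user_input: str) -> list[dict]:
    """Attach the guardrail prompt to every turn and flag obviously out-of-scope requests early."""
    if any(term in user_input.lower() for term in OUT_OF_SCOPE):
        user_input += "\n[note to model: this request may be out of scope - decline politely]"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

# Example: an in-scope request passes straight through with the guardrails attached.
print(build_messages("Can you help me rewrite my CV for a retail role?"))
```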
Zoom’s focus on data privacy
Communications platform Zoom has prioritized data privacy by disabling by default the sharing of customer information to train its AI models, which power its AI Companion virtual assistant. All data from calls, such as audio, video, chat and screen-sharing content, therefore remains private unless the user specifies otherwise. Even when AI features are turned on, meeting hosts have further controls to switch individual capabilities on or off.
“Our approach prioritizes user control,” says Ben Neo, CX sales leader for the EMEA region. “It’s up to administrators and account owners to enable these features, providing granular control to meeting hosts. We also maintain transparency with participants through in-product notices about the AI tools in use.”
Zoom’s AI incorporates its own large language model (LLM) alongside Meta’s Llama 2 and models from OpenAI and Anthropic. “This approach offers high-quality results and adapts to incorporate ongoing AI innovations,” Neo says. “Users benefit from enhanced quality and performance without the complexity of selecting a specific model. Our focus is on delivering a seamless, user-centric experience while upholding the highest standards of AI ethics and responsibility.”
Read about seven more companies using AI to improve their customer engagement strategies.
Tips for navigating ethics in AI
Use this checklist to assess how your organization is currently using AI and what steps you need to take to ensure you're adhering to ethical guidelines and regulations:
1. Establish internal policies
Establish clear lines of accountability for AI systems and ensure that there are mechanisms in place to address any harmful consequences of AI-driven decisions. This may involve creating oversight committees or appointing individuals to be responsible for monitoring AI ethics.
2. Consider appointing a head of AI or a chief artificial intelligence officer
Debasmita Das, who leads the AI team at Mastercard, says: “A Head of AI is being appointed by organizations to strategically align AI initiatives with their business goals, keeping in mind the rapid advancement and market adoption of technology.
“This individual would be essential in creating moral standards, guaranteeing ethical AI development and application, and reducing the associated risks," Das adds. Find out more about the role of a chief artificial intelligence officer and whether you need one here.
3. Get regular feedback
Gather regular feedback from customers and stakeholders and keep updating processes and policies as needed, establishing a culture of continuous improvement.
“Organizations should actively involve customers in the development of AI solutions through feedback loops, allowing them to express their concerns and preferences,” Lempinen says. “Engaging in dialogue about AI and its role in CX helps demystify technology for customers and builds a foundation of trust.”
4. Upskill staff
Provide regular training for staff members so they have up-to-date knowledge of the latest regulations, data protection laws, and advances in AI.
5. Publish your AI policy
As Finland’s national broadcaster, YLE wrote and published its ethical guidelines and standards for AI use in 2023, which state that humans are always responsible for the use of AI. Lempinen advises that all organizations make their ethical guidelines and standards for AI use available to the public.