The impact of the European Union’s AI Act on customer experience

We explore how the new EU-wide legislation on AI will affect CX

Leila Hawkins
08/06/2024

The European Union’s Artificial Intelligence Act (EU AI Act) officially came into effect on August 1, 2024. It is the first comprehensive set of AI regulations to be introduced anywhere in the world, requiring companies that develop and deploy AI to comply with rules on data governance, transparency and human oversight, among others. But what impact will this have on CX?

The Act has a risk-based structure, assigning AI systems to one of four categories: unacceptable risk, high risk, limited risk, and minimal or no risk. First, let’s take a look at what each of these means (an illustrative sketch follows the list below).


  • Unacceptable risk is defined as AI that poses a clear threat to the livelihoods and rights of people, such as biometric categorization and social scoring. All such systems are banned in the EU under the new Act.
  • High risk refers to AI systems used in critical infrastructure or safety components that could put citizens at risk, for example driverless cars or AI robots used in surgical operations. The use of AI in essential services such as credit scoring, recruitment processes, law enforcement, court rulings, education and migration also falls into this category. In these cases, the AI tools will need to be authorized by a judicial or independent body before they can go to market.
  • The limited risk category refers to the risk involved if the use of an AI system is not disclosed. For example, the Act specifies that companies using chatbots must make clear that users are interacting with AI, so they can make an informed decision on whether to continue. This also applies to AI-generated content including images, particularly when it is published with the purpose of informing the public and could therefore influence someone’s decision-making.
  • Minimal or no risk includes the use of AI in video games and spam filters. According to the Act’s policy document, the vast majority of AI systems used in the EU fall into this category. These can be used freely without the need to gain authorization or disclose the use of AI, but companies may wish to adopt voluntary codes of conduct to build trust with their customers.
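To make the four tiers concrete, here is a minimal TypeScript sketch of the taxonomy. The tier names and example mappings simply restate the list above; the `requiresPreMarketAuthorization` helper is hypothetical and purely illustrative, not legal advice.

```typescript
// Illustrative model of the EU AI Act's four risk tiers (not legal advice).
enum RiskTier {
  Unacceptable = "unacceptable", // banned outright in the EU
  High = "high",                 // needs authorization before going to market
  Limited = "limited",           // triggers disclosure/transparency duties
  Minimal = "minimal",           // free to use; voluntary codes encouraged
}

// Example use cases, echoing the descriptions in the list above.
const exampleUseCases: Record<string, RiskTier> = {
  "social scoring": RiskTier.Unacceptable,
  "biometric categorization": RiskTier.Unacceptable,
  "credit scoring": RiskTier.High,
  "recruitment screening": RiskTier.High,
  "customer-service chatbot": RiskTier.Limited,
  "AI-generated marketing imagery": RiskTier.Limited,
  "spam filter": RiskTier.Minimal,
  "video-game AI": RiskTier.Minimal,
};

// A deployer might gate a rollout decision on the tier:
function requiresPreMarketAuthorization(tier: RiskTier): boolean {
  return tier === RiskTier.High;
}
```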

Who is responsible for compliance?

While the legislation has been designed to protect the EU and its citizens, it will have a major impact on tech companies globally, particularly in the US, where the majority of AI systems are developed. Meta and Apple have already delayed launches of new products in the EU, citing the “unpredictable nature of the European regulatory environment.”

The Act is mostly designed to legislate for AI that falls into the high-risk category and the tech companies providing the software rather than those deploying it. “It's very important to keep in mind that the obligations of the AI Act apply [mostly] to the AI providers,” Thomas Regnier, spokesperson for the European Commission, explained to The Drum. “For marketing companies and all the other citizens, we want them to be able to use all the potential benefits of AI, but in the end the obligation to comply with the legislation is not really for the ones using these AI systems – it’s for the ones placing them on the market.”

However, that doesn’t mean businesses deploying the software should ignore the legislation. In fact, according to law firm Lexr, deployers must ensure AI systems are used in accordance with the Act’s standards on transparency for users, and provide clear information on the AI system’s capabilities and decision-making logic.

Additionally, Article 50 of the Act states that deployers using generative AI systems to create content must disclose that the text has been artificially generated or manipulated, unless the content “has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.”
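In practice, the Article 50 exemption can be thought of as a simple gate applied to each piece of content. The following TypeScript sketch is hypothetical: the field names (`aiGenerated`, `humanReviewed`, `editorialOwner`) are illustrative stand-ins for whatever metadata a content management system actually records.

```typescript
// Hypothetical content record; field names are illustrative, not from the Act.
interface ContentItem {
  body: string;
  aiGenerated: boolean;    // was the text produced or manipulated by AI?
  humanReviewed: boolean;  // has it passed human review or editorial control?
  editorialOwner?: string; // person or entity holding editorial responsibility
}

// Disclosure is required when content is AI-generated and the human-review
// exemption described in Article 50 does not apply.
function needsAiDisclosure(item: ContentItem): boolean {
  const exempt = item.humanReviewed && item.editorialOwner !== undefined;
  return item.aiGenerated && !exempt;
}

// Example: an AI-drafted FAQ answer published without editorial sign-off
// would need an "artificially generated" label.
const draft: ContentItem = {
  body: "Our returns policy lasts 30 days...",
  aiGenerated: true,
  humanReviewed: false,
};
console.log(needsAiDisclosure(draft)); // true
```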

How will the EU’s AI Act impact CX?

Perhaps the most significant impact will be on companies using AI-powered chatbots. Under the Act, they must clearly disclose to users that they are interacting with a machine, and request permission before the conversation takes place so the user can make an informed decision on whether to continue.
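A disclosure-first handshake along these lines is one way to meet that requirement. In this minimal sketch, `askUser` is an assumed stand-in for whatever prompt mechanism a chat widget provides; nothing here comes from a specific library.

```typescript
type ConsentResult = "accepted" | "declined";

// Disclose the AI, ask for consent, and only then start the bot conversation.
// `askUser` is a hypothetical prompt function supplied by the chat widget.
async function startChatSession(
  askUser: (message: string) => Promise<ConsentResult>
): Promise<boolean> {
  const answer = await askUser(
    "You are chatting with an AI assistant, not a human. " +
      "Would you like to continue?"
  );
  if (answer === "declined") {
    // Route the customer to a human agent instead of proceeding with the bot.
    return false;
  }
  return true; // informed consent given; the AI conversation may begin
}
```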

As mentioned above, organizations do not need to disclose that content has been generated by AI as long as it has undergone a process of human review or editorial supervision; therefore, training and processes for human oversight will need to be implemented.

While there is no explicit mention of companies deploying AI for other processes such as customer journey mapping and data collection, the Act does not supersede data protection regulations such as GDPR; organizations must therefore still ensure the AI tools they use do not breach those laws.

What must companies do to be compliant?

The EU is giving tech companies up to six months to comply with the new rules or face substantial fines, ranging from US$8.1 million or one percent of global annual turnover to $38 million or seven percent of turnover, whichever is higher.

Any organization adopting AI should implement a robust governance strategy aligned with business objectives, one that accounts for data security, upskilling and training, and continuous monitoring and transparency, and that includes regular audits of AI systems.

Ensuring staff are fully up to speed with AI systems is particularly important, as Article 4 of the Act states: “providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in.”

The Act also includes a provision for the AI Office — established as the EU’s center of AI expertise and governance — and EU Member States to facilitate drawing up voluntary codes of conduct regarding ethical guidelines, AI literacy, inclusive and diverse design, and assessing and preventing the negative impact of AI systems on vulnerable groups of people.

The EU AI Act marks a pivotal step in regulating AI technologies, setting a global precedent for how AI can be developed and deployed responsibly. Passing similar legislation in the US has been complicated by the country’s various jurisdictions and the influence the tech industry has over regulatory discussions. It remains to be seen what knock-on effect the EU’s regulations will have on the US tech sector, and globally.
