Implementing artificial intelligence (AI) is an undertaking that often gives companies pause, as it raises questions around hurdles such as data governance, proving ROI and building the right internal culture. Ahead of CX Network's Artificial intelligence in customer experience: 2019 Report, we sat down with Oracle's Jon Stanesby to discover how practitioners can supercharge customer experience (CX) with artificial intelligence.
CX Network: Just under 40 per cent of the research participants admit they do not have a strong data-oriented culture in their business but have plans to improve it. What are the consequences for these brands and their AI projects over the next 12 months if they fail to improve this situation?
Jon Stanesby: It is true that if you consider data to be the fuel source of any machine learning project, then CX practitioners with a bigger and better source of that fuel will go further and faster.
As with the many CX automation and CX intelligence projects of the last decades, the quality, quantity, unification and currency of data are all important factors. However, CX practitioners are probably more prepared for machine learning than they think. Thanks to the high maturity of automation and insight-related projects backed by leadership support over many years, the same data those projects required will stand you in good stead for your first few machine learning-based projects.
Yes, it is definitely important to convert your data culture into an AI data culture and have that reach far and wide in the organisation; however, it shouldn't be a blocker to taking your first steps.
CX Network: Participants voted churn prevention, self-service and the ability to understand customers better as the areas of most value for AI projects. Why do you think these areas have emerged as the most important priorities?
Jon Stanesby: These areas have something important in common: they are very complex problems which, if solved, would have huge benefits. Whilst you could say that of any problem where machine learning (ML) might help, there is something deeper to these areas which I believe puts them front and centre in people's minds: customer retention is hard and not a solved problem. Not by a long way. To retain customers and service them individually, you need to understand them. These three areas are actually related to one common goal.
With the real-time nature of chat-based interactions, there is no hiding and no time to do the work of understanding the customer unless you have prepared that understanding in advance. One reason machine learning is often better at tasks like deep pattern discovery in vast data sets is that humans are comparatively very slow at rendering, interpreting and acting on data. You just need to look at the number of graphs an analyst generates to produce three short bullet points of insight in a CX strategy document.
Churn prevention is probably one of the most complex areas, even for a machine, because customers don't tend to tell you why they stopped engaging with or buying from you. Sometimes they don't tell you at all, so you need to infer it from their lack of behaviour; and if they do tell you in a survey, were they being honest? Learning of any kind, machine or not, requires both feedback and relevant observations. Try learning to drive a car with no windows!
Machine learning can help to find those patterns of behaviour, no matter how small they are, give very early warning signs and recommend preventative action that could be taken. Attach that capability and customer knowledge to an AI chatbot or AI-assisted service agent and you have a very powerful capability focused on keeping your customers happy and well serviced, at speed and at scale.
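The early-warning idea described above can be sketched in miniature. The following is a hypothetical illustration, not Oracle's implementation: a tiny logistic regression, trained by plain gradient descent on invented behavioural signals (days since last order, support tickets raised), that turns those signals into a churn-risk score.

```python
# Hypothetical sketch: learning an early-warning churn score from two
# behavioural signals. Pure-Python logistic regression on toy data;
# the feature names and numbers are invented for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=2000):
    """Fit weights for churn_prob = sigmoid(w . x + b) by stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y_true in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y_true  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy data: [days_since_last_order / 30, support_tickets / 5]
X = [[0.10, 0.0], [0.20, 0.2], [0.90, 0.8], [1.00, 0.6], [0.15, 0.1], [0.80, 1.0]]
y = [0, 0, 1, 1, 0, 1]  # 1 = customer eventually churned

w, b = train(X, y)
risk = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.95, 0.7])) + b)
print(f"churn risk for a quiet, high-ticket customer: {risk:.2f}")
```

In practice such a score would feed the chatbot or service agent mentioned above, flagging at-risk customers before the conversation starts.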
CX Network: How important are data governance procedures to the success of AI projects?
Jon Stanesby: AI initiatives do not differ from any other data-driven project; they are simply more advanced. The same rigour applies regarding security, accessibility and quality. Especially quality, as the old adage of "garbage in, garbage out" is so very true of machine learning in any capacity. Be extra mindful of data permissions, as we are firmly in an era where a person's data, and how it is used, should be entirely transparent and controlled by that person. Ensure any data policies you have in place cover usage for machine learning. If you are concerned about combining all your usage under a single permission (such as terms of use, or a marketing opt-in), separate these permissions into controllable flags so that an individual can, for example, opt in to your marketing emails but independently opt out of having their data used for machine learning.
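The "controllable flags" idea can be made concrete with a small data-model sketch. The field names below are assumptions for illustration, not any particular vendor's schema: each data use gets its own consent flag, so machine learning usage can be toggled independently of marketing.

```python
# Hypothetical sketch of per-use consent flags instead of one blanket
# permission. Field names are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ConsentFlags:
    marketing_email: bool = False
    analytics: bool = False
    machine_learning: bool = False  # independently controllable opt-in

@dataclass
class Customer:
    customer_id: str
    consent: ConsentFlags = field(default_factory=ConsentFlags)

def ml_eligible(customers):
    """Only customers who explicitly opted in may feed model training."""
    return [c for c in customers if c.consent.machine_learning]

alice = Customer("alice", ConsentFlags(marketing_email=True, machine_learning=False))
bob = Customer("bob", ConsentFlags(marketing_email=False, machine_learning=True))
print([c.customer_id for c in ml_eligible([alice, bob])])
```

With flags separated like this, Alice can stay on the mailing list while her data never reaches a training pipeline.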
Take care also with data removal processes and anonymising data, since you want to maximise data usage for AI use cases while remaining compliant. Having clear statements for controls and procedures will also help to build a healthy data culture in your company, and transparency will help to build trust and credibility in your projects.
CX Network: Any advice to those starting their AI journey?
Jon Stanesby: Spend time upfront defining and refining the machine learning problems you want to solve. This will involve engagement with a data scientist, who will try to translate your requirements into extremely complex maths. If the objective or goal is poorly defined upfront, you may find that a lot of time is wasted in the early stages of a project. Begin with prototyping: essentially working manually to solve the problem with the data available. You may find that you don't have the correct data, or that you don't have enough, or that the solution using ML isn't actually worth the investment. ML is not going to be perfect immediately. But you don't want to discover too late that you've been heading in the wrong direction.
A data scientist colleague of mine once responded to me, as I was struggling for several minutes to articulate the exact problem we needed to solve: “If you cannot explain it in English, then I cannot solve it with Mathematics.”
CX Network: According to our research base, the top two challenges for AI implementation are linking initiatives to ROI and building the required internal culture. What advice would you give CX practitioners to address these points?
Jon Stanesby: On linking initiatives to ROI: unfortunately, innovation of any kind cannot be expected to generate returns right away, so if you are embarking on something brand new, ensure you have buy-in from leadership. This includes the flexibility to fail a little along the way. However, more established uses of machine learning, for example in mature areas or with anything off-the-shelf that you purchase, should absolutely give returns. Measurement will depend on the case, but it's worth thinking about ways to measure the effort-saving nature of most ML technologies. Most will not only improve your revenue or retention, but also do so with reduced effort. So the "R" should be a combination of Revenue and Reduced effort.
Some small measurable uplift should be enough for you to get the green light to proceed, scale and expand your use of ML to other use cases, and this is key to building a good foundation for adoption and the internal culture. If you start slow and small with a well-planned pilot project that uses scientific testing methodologies, then that's fairly straightforward.
I have seen some instances where like-for-like comparison isn't possible. For example, it's not as simple as testing an ML way of doing something vs. the non-ML way, because it may be something you were never even doing before! But having any close proxy for comparison, even a before-vs-after, would be of substantial benefit. Any data scientist worth their salt will be very focused on the uplift and outcomes of their ML algorithms, mostly because that feedback is often crucial to helping the machine learning actually learn. So include them in the ROI discussions upfront.
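The before-vs-after proxy described above amounts to a simple calculation: compare a KPI's average in the period before the ML feature launched with the period after. The function below is a minimal sketch; the retention numbers are invented for illustration.

```python
# Hypothetical before-vs-after uplift proxy for when a like-for-like
# A/B test isn't possible. KPI values below are invented.

def relative_uplift(before, after):
    """Relative change in a KPI's mean between the two periods."""
    baseline = sum(before) / len(before)
    observed = sum(after) / len(after)
    return (observed - baseline) / baseline

# Weekly retention rate, four weeks before and after an ML feature launch
weekly_retention_before = [0.80, 0.79, 0.81, 0.80]
weekly_retention_after = [0.84, 0.85, 0.83, 0.86]

uplift = relative_uplift(weekly_retention_before, weekly_retention_after)
print(f"retention uplift: {uplift:+.1%}")
```

A before-vs-after comparison like this is a weaker signal than a controlled test, since seasonality or other changes can confound it, which is exactly why any measurable proxy should be agreed with the data scientists upfront.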
Regarding the required internal culture: building an AI culture, right now at least, will seem straightforward enough. Everyone should be going crazy for AI and wanting to use as much of it as possible, everywhere it is possible. But that's only as good as the trust that first needs to be built to support these initiatives; otherwise you may not get the support and buy-in needed to scale. The biggest "fears" of using AI are the fear of being replaced and the fear of the unknown. Currently, there is little evidence to support the fear of replacement; there is no machine learning algorithm yet capable of doing a person's job entirely, and there won't be for some time. Task-based (or narrow) AI can perform and assist with some tasks that will make life easier, with the focus being on allowing people to concentrate on higher-level work.
The fear of the unknown, however, is present in nearly everyone I have encountered who is involved with an AI project. The art of AI explainability and auditing is critical to consider from the outset. As long as a machine learning decision can be explained, audited, corrected and controlled, then these sorts of fears should reduce. Couple that with sharing solid results once you're up and running, and you will slowly build trust. For example, very few people are ready to get into a fully autonomous vehicle for a one-hour drive, yet features like adaptive cruise control and automated parking are becoming commonplace. Gradually introducing intelligent features that work perfectly every time is the route to fully autonomous adoption.