In the comedy show Little Britain there is a sketch involving a customer service representative sitting behind a computer. Whenever a customer makes a perfectly reasonable request, she taps away at a keyboard and says: ‘computer says no’.
The sketch resonated with viewers because it illustrated what can happen if you do not engage and listen to a customer’s request, and hide behind the cues you take from a machine.
In a recent project undertaken with The Foundation, for example, we learned of a call from a distraught customer who couldn’t make her next mortgage payment. ‘Our system will only let me help you when you have already missed three payments, so call us back then’, came the reply from the call centre agent. It might as well have been ‘Computer Says No’.
As a result of these all-too-common experiences, we’ve been doing further work on how we can use behavioural science and the latest advances in generative AI to help create more empathetic conversations between human beings, supported by machines. And we’ve turned the focus of this Behavioural AI programme on call centres.
We started by developing a framework for measuring empathy in conversations between human beings. This was undertaken by CogCo behavioural scientists, who identified measurable behavioural constructs for empathy. They then turned these into a scoring mechanism, using insights from decision analysis, and validated the outputs through a series of tests with real world data.
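To give a flavour of what turning behavioural constructs into a scoring mechanism can look like, here is a minimal sketch. The constructs, marker phrases and weights below are invented for illustration only; they are not CogCo’s actual framework, which is far richer than simple phrase matching.

```python
# Illustrative sketch: combine simple, measurable behavioural markers of
# empathy into a single 0-100 score. All constructs and weights here are
# hypothetical examples, not the real scoring mechanism.

CONSTRUCTS = {
    # construct name: (marker phrases, weight)
    "acknowledgement": (["i understand", "i see why", "that sounds"], 0.4),
    "personalisation": (["your situation", "in your case"], 0.3),
    "next_steps": (["what we can do", "let me check"], 0.3),
}

def empathy_score(agent_turns):
    """Score a list of agent utterances on a 0-100 scale.

    Each construct contributes its weight (scaled to 100) if any of its
    marker phrases appears anywhere in the agent's side of the call.
    """
    text = " ".join(agent_turns).lower()
    score = 0.0
    for phrases, weight in CONSTRUCTS.values():
        if any(phrase in text for phrase in phrases):
            score += weight * 100
    return round(score)
```

In a real system each construct would be detected by a model rather than a phrase list, and the weights would be calibrated and validated against labelled data, as described above.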
Using a foundation AI model, we then built a set of tools for recording, transcribing, cleaning and labelling calls between customers and call centre operators. This then enabled our data scientists to analyse how well different calls scored on our empathy scale. This can be done at a previously unimaginable scale: taking the calls from thousands of call operators working over a period of months or even years.
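The shape of such a pipeline can be sketched as follows. This is a stub for illustration, assuming a simple per-turn transcript structure; the real tooling uses a speech-to-text model for the transcription step, which is omitted here, and the `Turn` structure and marker list are assumptions.

```python
# Sketch of the clean-and-label stages of a call-processing pipeline.
# The Turn structure and marker phrases are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Turn:
    speaker: str                 # "agent" or "caller"
    text: str
    labels: list = field(default_factory=list)

def clean(turns):
    """Drop empty turns and normalise whitespace in the transcript."""
    return [Turn(t.speaker, " ".join(t.text.split()), t.labels)
            for t in turns if t.text.strip()]

def label(turns, markers):
    """Tag each agent turn with any empathy markers it contains."""
    for t in turns:
        if t.speaker == "agent":
            t.labels = [m for m in markers if m in t.text.lower()]
    return turns
```

Once calls are in this cleaned, labelled form, scoring and aggregation across thousands of operators becomes a straightforward batch job.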
The results from our early work are already striking, showing the patterns in calls which are more (or less) empathetic. In the calls below, for example, the high-empathy call (scoring 90 on our Empathy Scale) is characterised by a balanced conversation between caller and agent. This can be contrasted with the low-empathy call (scoring 25 on our Empathy Scale), in which one side (the caller) has a complex complaint that is not being fully addressed by the agent.
It is one thing to show what constitutes an empathetic call (or not). But it’s another to be able to generate constructive feedback that helps people to improve over time. And we’re now able to do this too at scale. In the example below, our Behavioural AI tools generate direct feedback for the operators involved in specific high- and low-empathy calls.
These are just a couple of examples of the kind of analysis we can now perform using these techniques. Others include summarising the call intent and categorising it automatically; analysing the call dynamic (e.g. how long the agent and caller spoke for; how many turns were taken); and analysing the relationship between elements of a call and specific outcomes (such as a Net Promoter Score).
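The call-dynamic measures mentioned above, talk share and turn counts, can be computed directly from a diarised transcript. A minimal sketch, assuming each turn is recorded as a (speaker, seconds spoken) pair:

```python
# Sketch of call-dynamic metrics from a diarised transcript.
# Input format (speaker, seconds_spoken) is an assumption for illustration.
from collections import Counter

def call_dynamics(turns):
    """Return (turns per speaker, share of total talk time per speaker)."""
    turn_counts = Counter(speaker for speaker, _ in turns)
    talk_time = Counter()
    for speaker, seconds in turns:
        talk_time[speaker] += seconds
    total = sum(talk_time.values()) or 1  # avoid division by zero
    talk_share = {s: t / total for s, t in talk_time.items()}
    return turn_counts, talk_share
```

A balanced call would show talk shares near 50/50 with frequent turn-taking; the one-sided, low-empathy pattern described above would show one speaker dominating both measures.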
Our plan now is to embed these insights into human-to-human training programmes and AI-generated feedback tools. So that, in the future, ‘human, informed by computer, says yes’.
If you would like to arrange a meeting to run through some of our findings in more detail, and talk about how you can apply this in your work, feel free to get in touch: email@example.com.