The terms ‘caring’ and ‘AI’ are generally not used in the same sentence. We’ve all seen or heard the examples: chatbots turning racist within hours of being deployed, fake videos, fake images, even fake stories. As such, AI is more often perceived as something dangerous than as something that can actually add value to your business. As VP of Decisioning & Analytics at Pega, Walker acknowledges this, and to a certain extent agrees with the general impression people have of AI. He even warned the audience during his keynote at PegaWorld that “we’re only at the beginning of the era of fake videos, images and stories.”

This doesn’t mean, however, that it’s impossible to do good with AI. On the contrary, within the right framework, AI can help companies massively when it comes to customer engagement, for example. At the end of the day, “AI is like dynamite,” according to Walker. “You can use it for good and for bad.”

Morality

One crucial aspect of AI that has been missing until now is morality. AI needs it, according to Walker, in order to be used for good. He immediately acknowledges, though, that things like ethics, empathy and morality are human concepts, and simply aren’t present in algorithms. These, then, have to be ‘folded’ around the algorithms, so that the algorithms operate inside a framework determined by humans. That way, you can ensure they behave according to that framework.

Transparency

Even if algorithms operate inside such frameworks, AI can still be very opaque in how it works. In business environments, opacity is often unacceptable. A good example is when organizations have to demonstrate that certain processes comply with regulations such as the GDPR. Besides compliance, as an organization you probably want to know how specific algorithms work anyway, as they can and will access data, some of it perhaps even confidential.

The approach advocated by Walker and Pega is to make AI as transparent as possible. This helps combat the trust issues people have with AI, and makes it clear what the algorithms do. However, transparency isn’t always possible, according to Walker, even though you should strive for it as much as you can. The reason is that, in general, opaque AI is more powerful than transparent AI.

In order to give companies control over when to use opaque AI and when not to, Pega introduced the T-Switch two years ago. The ‘T’ stands for ‘Transparency’. Pega’s Customer Decision Hub advises you on when you can responsibly deploy AI algorithms in your organization, without breaking compliance or running other unnecessarily large risks. If it flags a potentially risky use of AI, you can disable it.
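To make the idea concrete, you can picture this kind of switch as a per-purpose transparency threshold: every model gets a transparency rating, every business function sets the minimum rating it can accept, and anything below that is flagged or blocked. The sketch below illustrates that reading with entirely hypothetical names and ratings; it is not Pega’s actual API.

```python
# Minimal sketch of a transparency threshold check, loosely inspired by the
# T-Switch idea described above. All names and ratings are hypothetical,
# not Pega's actual implementation.

# Each model gets a transparency rating: 1 = opaque, 5 = fully explainable.
MODEL_TRANSPARENCY = {
    "linear_scorecard": 5,
    "decision_tree": 4,
    "gradient_boosting": 2,
    "deep_neural_net": 1,
}

# Each business function sets the minimum transparency it can accept,
# for instance because of regulations such as the GDPR.
MIN_TRANSPARENCY = {
    "credit_risk": 4,       # must be explainable to the regulator
    "marketing_offers": 2,  # more room for opaque models
}

def allowed_models(function: str) -> list[str]:
    """Return the models that may be deployed for a given business function."""
    threshold = MIN_TRANSPARENCY[function]
    return [name for name, rating in MODEL_TRANSPARENCY.items() if rating >= threshold]

if __name__ == "__main__":
    print(allowed_models("credit_risk"))       # ['linear_scorecard', 'decision_tree']
    print(allowed_models("marketing_offers"))  # everything except the deep neural net
```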

Empathy 

Besides transparency, there’s one more component Pega considers crucial when it comes to AI, and that’s empathy. Empathy is linked to morality, and it isn’t something that’s innate to AI. A framework needs to be established within which the AI is allowed to operate. You could say that ethics define the boundaries of that framework, and empathy defines what the next-best-action is for a specific customer within those boundaries.

Later this year, Pega will introduce the Customer Empathy Advisor. Its most striking feature is an empathy slide switch you can incorporate into your Customer Decision Hub. The Decision Hub already uses AI to analyze your customers’ data, but until now it hasn’t given you the opportunity to indicate the level of empathy you want to include in the next-best-action. The setting ranges from cold, with very little empathy, to hot, where you put yourself more in your customers’ shoes.
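One way to picture such a slider is as a weight that blends what an action is worth to the company with what it is worth to the customer. The following sketch uses that assumption, with made-up numbers and names rather than Pega’s actual scoring logic.

```python
# Illustrative sketch of an empathy slider influencing a next-best-action
# decision. The scoring formula and all names are assumptions for the sake
# of the example, not Pega's actual implementation.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    company_value: float   # short-term value to the business
    customer_value: float  # estimated value or benefit to the customer

def next_best_action(actions: list[Action], empathy: float) -> Action:
    """Pick the best action for an empathy setting between 0 (cold) and 1 (hot)."""
    def score(a: Action) -> float:
        # A cold setting weighs only company value; a hot setting shifts the
        # weight towards the customer's interest.
        return (1 - empathy) * a.company_value + empathy * a.customer_value
    return max(actions, key=score)

actions = [
    Action("sell_high_mortgage", company_value=10.0, customer_value=-5.0),
    Action("sell_modest_mortgage", company_value=6.0, customer_value=4.0),
    Action("do_nothing", company_value=0.0, customer_value=0.0),
]

print(next_best_action(actions, empathy=0.1).name)  # cold: sell_high_mortgage
print(next_best_action(actions, empathy=0.7).name)  # hot: sell_modest_mortgage
```

With a cold setting the high mortgage wins on short-term value; with a hotter setting the more modest offer comes out on top, which mirrors the mortgage example below.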

Not a charity, so ROE

Companies, in general, aren’t charities. So even though it would be most empathetic for a bank to give someone a mortgage and not require them to pay it back, that obviously isn’t a viable approach. In an example like this, empathy doesn’t mean you should ‘give away’ funds to people. It does, however, mean you can take a person’s situation into account more when you sell them a mortgage.

For example, if you know that selling someone a rather high mortgage may result in that person losing their home, a cold setting of the Empathy Advisor would let you sell it to them. Set the slide switch to a somewhat hotter setting, however, and it would advise against it. Similarly, if the AI in your Customer Decision Hub tells you that a customer lives near a wildfire, perhaps it isn’t the best time to bother that person with questions about an outstanding payment.

As a company, you have the power to slide the switch to your preferred setting. Being more empathetic doesn’t necessarily mean your company will fare worse, however. On the contrary, customers may well be more loyal in the long run if you treat them with more empathy. Return on Empathy (ROE) is therefore also part of the Empathy Advisor: it predicts the long-term impact of your empathy settings.
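To see how such a prediction could work, the toy calculation below trades a bit of immediate revenue against better retention over a five-year horizon. The retention model and all figures are invented purely to illustrate the idea behind ROE, not to describe Pega’s actual calculation.

```python
# Toy illustration of a "Return on Empathy" style calculation: trading some
# immediate revenue for better long-term retention. All figures and the
# retention model are made up for illustration only.

def projected_value(empathy: float, horizon_years: int = 5) -> float:
    """Estimate total value over a horizon for an empathy setting in [0, 1]."""
    immediate_revenue = 100 * (1 - 0.3 * empathy)  # hotter settings sell a bit less now...
    retention_rate = 0.6 + 0.3 * empathy           # ...but keep more customers each year
    total = 0.0
    retained = 1.0
    for _ in range(horizon_years):
        total += retained * immediate_revenue
        retained *= retention_rate
    return total

for setting in (0.0, 0.5, 1.0):
    print(f"empathy={setting:.1f} -> projected 5-year value {projected_value(setting):.0f}")
```

In this contrived example the hotter settings earn less per sale but come out ahead over five years, which is exactly the kind of trade-off a Return on Empathy metric is meant to surface.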

Man and machine, with heart

The end result of adding empathy to AI is a combination of human and machine decision-making. According to Walker, this balance of brain (AI) and muscle (products/tools), with a heart in between in the form of the Empathy Advisor, makes it possible to operationalize empathy at scale.