
To manage AI risks, we need to ask the right questions


Holly Whitehead

Research and Development Manager, International Compliance Association

Artificial intelligence (AI): two words that conjure images of robots and science fiction movies. While we are not yet in the realm of self-aware robots, the reality is that AI is no longer the stuff of Hollywood or science fiction books.


Autonomous vehicles (self-driving cars) are being tested for deployment in the not-too-distant future, and sectors from healthcare to banking are already using AI to assist in high-risk, high-consequence human decisions.

Numerous banks, such as HSBC, Bank of America, and NatWest, are implementing AI to fight fraud, save time and money, and automate back-office functions. But what are the risks of using AI for such important decisions?

Can we trust the machines?

All of these developments in the AI field, especially in high-risk decision environments such as financial crime, pose new and substantial challenges. So the first thing we need to do is ask the fundamental questions: how did the system arrive at a particular decision? And who is accountable if it goes wrong?

It all comes down to an issue that is at the very core of financial services – trust. How do you trust the machine? If AI is capable of evolving, which it should be through machine learning, then how do we know that it is functioning correctly? How is it checked? And finally, how would we prove this to a regulator?

AI source data must be high-quality

An AI system is only as good as the data that is fed into it. Let us look at an example from the healthcare sector.

Imagine a system trained to learn which patients with pneumonia had a higher mortality rate. The system does this successfully; however, it inadvertently classifies patients with asthma as lower-risk. This is because, in normal practice, patients with pneumonia and a history of asthma go straight to intensive care, and therefore receive treatment that reduces their risk of dying. The AI system took this to mean that asthma plus pneumonia equals a lower risk of death.[1]

Since so much of the source data for AI is ‘imperfect’ in this way, we should not expect perfect answers all the time. Recognising this is the first step in managing the risk. We can, however, work to control the input, and by doing so we can, to some extent, control the output.
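To make this concrete, here is a minimal sketch in Python of how the pneumonia example above could play out. The data is entirely synthetic, and the variable names and probabilities are assumptions for illustration only:

```python
# Minimal sketch: a model trained on historical records in which asthma
# patients were rushed to intensive care, so their *recorded* mortality is
# lower. The data reflects the treatment, not the underlying risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.integers(0, 2, n)  # 1 = pneumonia patient with a history of asthma

# Assumed mortality rates: lower for asthma patients *because* they were
# treated more aggressively in the historical data.
mortality_prob = np.where(asthma == 1, 0.05, 0.15)
died = rng.random(n) < mortality_prob

model = LogisticRegression().fit(asthma.reshape(-1, 1), died)
print(model.coef_)  # negative: the model 'learns' that asthma is protective
```

Nothing in the training pipeline is broken; the dangerous conclusion comes entirely from what the data silently encodes.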

Machine learning can complicate quality assurance

Traditional systems apply fixed rules, so their outcomes are exact and predictable. With AI, it is difficult to predict the outcome, and we are usually unable to see how the system reached the result it did.

AI has machine-learning capabilities, which means different outcomes can be reached when the same function operates on different occasions. This generates uncertainty: how did the AI generate that result?

One solution for this is to establish a margin of error within which an outcome will be considered correct.
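As a simple illustration, a margin-of-error check might compare each output against a reference value agreed during validation. The sketch below is hypothetical: the reference score and tolerance are assumptions, not values from any real system:

```python
# Minimal sketch of a margin-of-error check: an output counts as 'correct'
# if it falls within an agreed tolerance of a validated reference value.
REFERENCE_SCORE = 0.72  # expected score for this test case (assumed)
TOLERANCE = 0.05        # margin of error agreed with validators (assumed)

def within_margin(score: float,
                  reference: float = REFERENCE_SCORE,
                  tolerance: float = TOLERANCE) -> bool:
    """Return True if the model's output counts as correct."""
    return abs(score - reference) <= tolerance

assert within_margin(0.74)        # inside the margin: acceptable
assert not within_margin(0.80)    # outside the margin: flag for review
```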

Another action involves asking: ‘did the system do what it was expected to?’ An AI system must be able to explain why it came to a particular decision if it is to be trusted in high-risk decision making. This explanation can also provide a clear audit trail if anything goes wrong.

This means creating AI systems that can explain how a decision was reached to analysts, auditors, and, importantly, the regulator.
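One way to support this, sketched below in Python, is to store each decision together with its strongest drivers as a structured audit record. All field names and values here are hypothetical:

```python
# Minimal sketch of an audit-trail record: the decision plus the features
# that contributed most to it, serialised for analysts and auditors.
import json
from datetime import datetime, timezone

def log_decision(transaction_id: str, decision: str,
                 feature_contributions: dict[str, float]) -> str:
    """Build a JSON audit record with the decision's top three drivers."""
    top_drivers = sorted(feature_contributions.items(),
                         key=lambda kv: abs(kv[1]), reverse=True)[:3]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "transaction_id": transaction_id,
        "decision": decision,
        "top_drivers": top_drivers,
    }
    return json.dumps(record)

print(log_decision("txn-001", "flagged",
                   {"amount_vs_history": 2.4, "new_payee": 1.1, "hour_of_day": -0.3}))
```

Records like this give analysts, auditors, and the regulator something concrete to examine when a decision is challenged.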

The human element prevails

A view is now emerging that full AI autonomy is not the best step forward. The human touch needs to be integrated: AI should be supporting and collaborating with, rather than replacing, human beings.

Humans are essential in facilitating the explanation of an AI system’s decision-making process. These decisions and explanations can then be assessed by people to pinpoint errors and feed the findings back into the AI system.

Fundamentally, considerable progress is being made with AI, and the issues raised here have a real chance of being resolved. If they are, then the vast capabilities and possibilities of AI, especially in financial services, can be realised.


[1] Bianca Nogrady, ‘The real risks of artificial intelligence’, BBC, 19 November 2016: http://www.bbc.com/future/story/20161110-the-real-risks-of-artificial-intelligence (accessed May 2019).
