Organisations across the globe make billions of decisions every day, striving to execute them faster and more accurately to improve customer service and reduce costs. Most decisions are, of course, only as good as the facts upon which they are based. Shrewd organisations have realised that the more data that is assessed, the better the decision. Herein lies the challenge: the more data there is, the longer it takes someone to make the decision.
Imagine if artificial intelligence (AI) could automate the heavy lifting on the bulk of these decisions, even seemingly complex ones, and get them 100% right. Furthermore, decisions could be made in under a second, with relentless consistency, 24/7, transparently and objectively. Could this transform your business's operations and decision making? And is it possible in reality? The answer is yes, absolutely, and this case study looks at just such an example.
The case study
The year is 2017, and UK train operating companies are subject to a regime of paying back customer claims for trains delayed by more than 30 minutes. The volume of claims is already very high (tens of thousands per month), and to ensure the private operators serve customers well, the scheme is to be extended to delays in excess of 15 minutes, leading to a five-fold increase in the volume of claims. Unfortunately, there is no way of truly knowing which person was on which train, and this has allowed more enterprising individuals to fabricate some of their travel patterns, to the point where they cover the cost of their season tickets. Some have even worked out how to turn this into a profitable illicit sideline. Left unchecked, repaying fraudulent claims would lead to an explosion in the practice. Each claim must therefore be checked to ensure there is no fraud.
We took the challenge on, and in a six-week exercise were able to configure the system to run one historic month of claims for a given route in minutes. The AI engine looked for fraud and placed claims into ten "risk of fraud" bands (1 being the lowest risk and 10 the highest). A threshold could then be set to determine which bands should be "clear to repay" and which should be "hold for investigation".
Of the 5,688 identities (based on email addresses) that made claims, 4,630 (81%) fell into the lowest four risk bands and could be marked as "clear to repay" (Figure 1 below). Upon manual inspection to validate the results, these four bands were found to contain no fraud at all. Clearly, it is very important that all legitimate claims are paid quickly and efficiently; this system is about minimising the effort needed to identify fraudulent claims while ensuring a deterrent effect is in place.
On closer inspection of the risk bands and the frauds within them, one strategy could be to mark bands 1–6 as "clear to pay". Under this strategy, 7 fraudulent claim identities would slip through the net and get paid. However, like most fraudsters, those who get away with fraud tend to repeat it. Once these 7 repeated, the severity of their activity would increase, they would move into the higher risk bands, and the system would catch their future claims. Given the relatively low fiscal value of the frauds, this may be an acceptable loss in return for needing to review considerably fewer claims manually. By adopting this strategy, 93% of all claims could be marked as "clear to pay" entirely automatically, and the hit rate in the bands manually inspected for fraud would be around 70%, i.e. the majority would indeed be fraud.
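The trade-off behind this threshold choice can be sketched in a few lines of code. This is an illustrative sketch only: the function and the band-level counts in the usage example are assumptions for demonstration, not the study's actual per-band figures.

```python
# Sketch of the "clear to repay" threshold strategy over risk bands.
# Band counts and fraud counts per band are illustrative assumptions.

def evaluate_threshold(band_counts, band_frauds, clear_up_to):
    """Evaluate a policy that auto-clears bands 1..clear_up_to.

    Returns (auto_clear_rate, frauds_missed, review_hit_rate):
    - auto_clear_rate: fraction of claims paid with no manual review
    - frauds_missed: fraudulent identities that slip through and get paid
    - review_hit_rate: fraction of manually reviewed claims that are fraud
    """
    total = sum(band_counts.values())
    cleared = sum(n for band, n in band_counts.items() if band <= clear_up_to)
    missed = sum(f for band, f in band_frauds.items() if band <= clear_up_to)
    reviewed = total - cleared
    frauds_reviewed = sum(f for band, f in band_frauds.items() if band > clear_up_to)
    hit_rate = frauds_reviewed / reviewed if reviewed else 0.0
    return cleared / total, missed, hit_rate

# Hypothetical counts: 100 claims across 4 bands, 6 known frauds.
counts = {1: 50, 2: 30, 3: 10, 4: 10}
frauds = {3: 1, 4: 5}
rate, missed, hit = evaluate_threshold(counts, frauds, clear_up_to=2)
# Clearing bands 1-2 auto-pays 80% of claims, misses no frauds, and
# 30% of the manually reviewed claims are fraud.
```

Raising `clear_up_to` pays more claims automatically but lets some low-severity frauds through; lowering it catches more fraud at the cost of more manual review, which is exactly the bands 1–6 decision described above.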
Figure 1 – validated results from the delay repay AI system
Further investigation also revealed that a significant number of the historic, manually reviewed claims that were paid were in fact fraud. This is no surprise, as the AI system can process far more data, far more extensively, than a human checking for fraud. Not only can the system significantly reduce false positive rates, it was also capable of capturing more fraud than human inspection. Finally, the system pulled together all the relevant facts regarding each case and presented them to the investigators. This reduced the time per investigation, further increasing the efficiency of the investigation team. An example screenshot of the system is provided in Figure 2.
Figure 2 – supporting faster decision making and investigations
How to get AI delivering compelling results
One hears a lot about "machine learning" or "deep learning" as a panacea: the AI simply processes huge volumes of data and "learns" how to detect fraud. Unfortunately, for most fraud, and indeed many decisions, this simply is not possible without generating huge volumes of false positives and leaving investigators with no real insight into why something was flagged as fraud. Fundamentally, there are too many ways to commit fraud and not enough cases of known fraud outcomes for such a machine to learn from.
So, what is the right approach? First, like humans, the AI needs to be taught the basics, given a starting point, and given the complete picture. To use the Donald Rumsfeld quote, "what you know you know" are those frauds you have caught: expert investigators learn from these. They can then look for similar patterns to identify "what you know you don't know": frauds with a very similar modus operandi that you have not yet caught. Fundamentally, this is an expert-system approach, but with a twist: this time you have first networked the data together, linking shared identities, addresses, bank accounts, email addresses, web channel data, time-correlated trips, relationships, transactions, providers, geographies, and so on. When fraudsters attempt to industrialise their activity to make more money, they re-use parts of identities or leave other traces indicating that the frauds are related. They also repeat behaviour, leaving evidence of homogeneous behaviour across a network, indicative of a "controlling criminal mind". By replicating your best investigators' approach across networked data, the AI will not only alert far more accurately, but will also provide the investigation team with a picture and explanation of the fraud.
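The networking step described above can be illustrated with a minimal sketch: group claims into networks whenever they share an identifier such as an email, address or bank account. The field names and the union-find grouping are assumptions for illustration; a production system would link many more entity types.

```python
# Minimal sketch of networking claims via shared identifiers.
# Field names ("email", "address", "bank_account") are illustrative.

from collections import defaultdict

def link_claims(claims):
    """Group claim ids into networks that share any identifier value."""
    parent = {}

    def find(x):
        # Union-find with path halving to locate a claim's network root.
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Index claims by each (field, value) identifier they carry.
    by_identifier = defaultdict(list)
    for claim in claims:
        for field in ("email", "address", "bank_account"):
            value = claim.get(field)
            if value:
                by_identifier[(field, value)].append(claim["id"])

    # Any two claims sharing an identifier belong to the same network.
    for ids in by_identifier.values():
        for other in ids[1:]:
            union(ids[0], other)

    networks = defaultdict(set)
    for claim in claims:
        networks[find(claim["id"])].add(claim["id"])
    return list(networks.values())

claims = [
    {"id": "A", "email": "x@example.com"},
    {"id": "B", "email": "x@example.com", "bank_account": "111"},
    {"id": "C", "bank_account": "111"},
    {"id": "D", "email": "z@example.com"},
]
# A and B share an email, B and C share a bank account, so A, B and C
# form one network; D stands alone.
```

Two claims that never share a field directly (A and C here) still end up in the same network through a chain of shared identifiers, which is how re-used identity fragments expose industrialised fraud.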
But what about the "what you don't know you don't know": frauds that may be happening, of which you have never seen an example, that are new, and whose scale you do not even know? You use the same starting point, networks within the data, and then apply a range of techniques to detect abnormal behaviour: peer group analysis, clustering and outlier analysis, applied not just to trips but to all the entities or objects such as people, addresses, mobile phones and email addresses. Then take this a step further and apply it at the network level. Overlaying this complex set of techniques makes it almost impossible for a fraudster to second-guess what the system is doing; they will no longer be able to operate below the radar.
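One of the simplest forms of the outlier analysis mentioned above is a z-score test against the peer group: an identity whose claim count sits far above the population mean is flagged for review. This is a deliberately stripped-down sketch; the feature (claims per identity) and the threshold are assumptions, and a real system would run many such tests across entities and whole networks.

```python
# Illustrative peer-group outlier test on claim frequency per identity.
# The z-score threshold of 3.0 is an assumption, not the system's setting.

from statistics import mean, stdev

def flag_outliers(claims_per_identity, z_threshold=3.0):
    """Return identities whose claim count is extreme versus their peers."""
    counts = list(claims_per_identity.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # everyone behaves identically; nothing stands out
    return [ident for ident, count in claims_per_identity.items()
            if (count - mu) / sigma > z_threshold]

# 20 commuters making 2 claims each, and one identity making 50:
data = {f"user{i}": 2 for i in range(20)}
data["suspect"] = 50
# flag_outliers(data) flags only "suspect".
```

The same test can be lifted to the network level by scoring aggregate features of each linked group (total claims, shared identifiers, time-correlated trips), which is what makes the behaviour hard for a fraudster to anticipate.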
For more on this topic refer to the article ‘Why AI? Why now?’