Organisations across the globe make billions of decisions every day, striving to execute them faster and more accurately to improve customer service and reduce costs. Most decisions are, of course, only as good as the facts upon which they are based. Shrewd organisations have realised that the more data they assess, the better the decision. Herein, however, lies the challenge: the more data, the longer it takes for someone to make the decision.

Imagine if artificial intelligence (AI) could automate the heavy lifting on the bulk of these decisions, even seemingly complex ones, and get them 100% right. Imagine, furthermore, if those decisions could be made sub-second, with relentless consistency, 24/7, transparently and objectively. Could this transform your business’s operations and decision making? Yes. But is it possible in reality? Absolutely, and this case study looks at just such an example.

A Case for AI Use

The year is 2017, and the UK train operating companies have been commanded to pay back customer claims for trains delayed by more than 30 minutes. The volume of claims is already very high (tens of thousands per month), but to ensure the private operators are serving customers well, the scheme is now being extended to delays in excess of 15 minutes. This is leading to a five-fold increase in the volume of claims. Unfortunately, there is no way of truly knowing which person was on what train, and this has allowed more enterprising individuals to fabricate some of their travel patterns to the point where they manage to cover the cost of their season tickets. Some have even worked out how to turn this into a profitable illicit sideline. Left unchecked, repaying fraudulent claims would lead to an explosion in the practice. Each claim must therefore be checked to ensure there is no fraud.

We took the challenge on and, in a six-week exercise, configured the system to run an analysis on one month of historic claims for a given route in minutes. The AI engine looked for fraud and placed claims into 10 “risk of fraud” bands (1 being the lowest risk and 10 the highest). It was then possible to set a threshold determining which bands should be “clear to repay” and which should be “hold for investigation”.

How AI Solved the Problem

Of the 5,688 identities (based on email addresses) that made claims, 4,630 (81%) were in the lowest 4 risk bands and could be marked as “clear to repay” (Figure 1 below). Upon manual inspection to validate, these 4 bands were found to contain no frauds at all. Of course, it is very important to ensure that all legitimate claims are paid quickly and efficiently, and this system can do just that.

On closer inspection of the risk bands and the frauds within those bands, one strategy could be to mark bands 1–6 as “clear to pay.” If this strategy were adopted, then fraudulent claims in band 7 would slip through the net and get paid. However, most fraudsters, if they get away with fraud once, tend to repeat it. In the case of these band-7 frauds, once repeated, their severity would increase, moving them to the higher risk bands where the system would catch them on future claims. Given the relatively low fiscal value of the frauds, this may be an acceptable loss in return for needing to manually review considerably fewer claims. By adopting this strategy, 93% of all claims could be marked as “clear to pay” entirely automatically by the system, and the hit rate on the manually inspected bands would be around 70%, i.e. the majority would indeed be fraud.
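The trade-off behind that threshold choice can be sketched numerically. The band counts below are hypothetical, for illustration only, and not the figures from the actual deployment; the function simply shows how the “clear to pay” share and the investigation hit rate move as the threshold band changes:

```python
# Sketch: evaluating a "clear to pay" threshold over risk bands 1-10.
# All counts passed in are hypothetical, illustrative data.
def threshold_stats(claims_per_band, frauds_per_band, clear_up_to):
    """Return (% of claims auto-cleared, fraud hit rate in reviewed bands).

    claims_per_band: {band: number of claims in that band}
    frauds_per_band: {band: number of confirmed frauds in that band}
    clear_up_to:     highest band marked "clear to pay"
    """
    total = sum(claims_per_band.values())
    cleared = sum(n for band, n in claims_per_band.items() if band <= clear_up_to)
    reviewed = total - cleared
    frauds_reviewed = sum(n for band, n in frauds_per_band.items() if band > clear_up_to)
    pct_cleared = 100.0 * cleared / total
    hit_rate = 100.0 * frauds_reviewed / reviewed if reviewed else 0.0
    return pct_cleared, hit_rate
```

Raising `clear_up_to` clears more claims automatically but lets the frauds in the newly cleared bands through; the two returned numbers make that trade-off explicit for each candidate threshold.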

Further investigation also revealed that a significant number of the historic manually reviewed claims that were paid were in fact fraud. This is not a surprise, as the AI system can process far more data, far more extensively, than a human checking for fraud. Therefore, not only can the system significantly reduce the false positive rate, but in this case it also captured more fraud than human inspection. Finally, the system pulled together all the relevant facts regarding each case and presented these to the investigators. This reduced the time needed per investigation, further increasing the efficiency of the investigation team. An example screenshot of the system is provided in Figure 2.

How to Get AI to Deliver Compelling Results

One hears a lot about “machine learning” or “deep learning,” a supposed panacea in which the AI simply processes huge volumes of data and “learns” how to detect fraud. Unfortunately, in the case of most fraud, and indeed many decisions, this simply is not possible without generating huge volumes of false positives and giving investigators no real insight into why something was flagged as fraud. Fundamentally, there are too many ways to commit fraud and not enough cases of known fraud outcomes for such a machine to learn from.

So, what is the right approach? First, like humans, the AI needs to be taught the basics as a starting point and needs to have the complete picture. To use the Donald Rumsfeld quote, the “what you know you know” are those frauds you caught – expert investigators learn from the fraud cases they have seen. They can then look for similar patterns to identify the “what you know you don’t know” – frauds of a very similar modus operandi that you have not yet caught. Fundamentally, this is an expert system approach, but with a twist. You have first networked the data together: shared identities, addresses, bank accounts, email addresses, web channel data, time-correlated trips, relationships, transactions, providers, geographies, etc. When fraudsters attempt to industrialise their activity to make more money, they re-use parts of identities or leave other traces that indicate the frauds are related. You can now use the networked data to spot this.
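The networking step described above can be sketched as grouping claims that share an identifier into connected networks, for example with a union-find structure. The field names (`email`, `bank_account`, `address`) and data shape here are illustrative assumptions, not the actual product schema:

```python
# Sketch: group claims into networks when they share any identifier.
# Field names ("email", "bank_account", "address") are illustrative assumptions.
def link_claims(claims):
    """claims: list of dicts, each with an 'id' plus optional identifier fields.
    Returns a mapping from claim id to a network (group) label."""
    parent = {}

    def find(x):  # union-find with path compression
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    seen = {}  # (field, value) -> first claim id that used this identifier
    for c in claims:
        for field in ("email", "bank_account", "address"):
            value = c.get(field)
            if value:
                key = (field, value)
                if key in seen:
                    union(c["id"], seen[key])  # shared identifier: same network
                else:
                    seen[key] = c["id"]
    return {c["id"]: find(c["id"]) for c in claims}
```

Claims that never share any identifier end up in networks of one; clusters of claims chained together by re-used emails, bank accounts or addresses surface as larger networks worth a closer look.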

They also repeat behaviour, leaving evidence of homogeneous patterns across a network, indicative of a “controlling criminal mind.” By replicating your best investigators’ approach across networked data, the AI will not only alert far more accurately, but will also provide the investigation team with a picture and explanation of the fraud.

But what about the “what you don’t know you don’t know” – those frauds that may be happening but that you have never seen examples of, and whose scale you cannot gauge because they are new? You use the same starting point, networks within the data, and then apply a range of techniques to detect abnormal behaviour: peer group analysis, clustering and outlier analysis, not just on trips but also on all the entities or objects such as people, addresses, mobile phones and email addresses. Then take this a step further and apply it at a network level.
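The simplest of those techniques, peer group outlier analysis, can be sketched as a z-score test on an entity’s claim count against its peers. The threshold and data shape are illustrative assumptions; a production system would apply this to many attributes and at the network level, as described above:

```python
# Sketch: flag entities whose claim count is an outlier versus their peer group.
# The z-score threshold is an illustrative assumption, not a tuned value.
from statistics import mean, stdev

def peer_group_outliers(counts, z_threshold=3.0):
    """counts: mapping entity -> claim count within one peer group.
    Returns entities more than z_threshold standard deviations above the mean."""
    values = list(counts.values())
    if len(values) < 2:
        return []  # no peer group to compare against
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # everyone behaves identically; no outliers
    return [e for e, v in counts.items() if (v - mu) / sigma > z_threshold]
```

The same pattern generalises: replace the claim count with any attribute (trips per week, refunds per bank account, claims per address) and run it per peer group rather than over the whole population.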

Overlaying this complex set of techniques makes it almost impossible for a fraudster to guess what the system is doing, and they will no longer be able to operate below the radar.
