Blog article

Workforce behaviour modelling: get proactive about insider fraud

Unsecured systems and processes can be remarkably easy to exploit. 

Just imagine a banking call centre, where employees process payments on behalf of customers every day. Before the world changed in early 2020, call centre supervisors could control and monitor employee access to customers’ information and money. When that work went remote, supervisors lost some of those controls, and unscrupulous employees exploited that gap to move customer funds into fraudulent accounts. 

Or imagine a bank that flags big, five-figure-plus customer deposits. This is common practice because a bank’s sales team may want to contact these customers to offer additional services and financial products. But an IT contractor can also get access to that list of high-value customers — along with their account details and contact information — then sell that list to the highest bidder. 

Instances of insider fraud are on the rise, along with the costs and complexity of those attacks. The PwC Global Economic Crime and Fraud Survey 2020 found that 37 percent of fraud that affects businesses is committed by internal perpetrators. PwC expects that number to rise as subsequent surveys more fully reflect recent changes to work arrangements.  

So what does 'insider fraud' look like, why do organisations struggle to catch it, and what can they do to monitor and prevent this kind of fraud? 

Let’s get specific about what ‘insider fraud’ means.

Insider fraud describes a broad set of actions perpetrated by a broad group of people: 

  • The person committing that fraud could be an employee, a former employee, a contractor or even a business associate.
  • The fraud itself can be malicious, or it can stem from the carelessness or negligence of the person involved. 

In each case, the person committing the fraud has a good understanding of the organisation’s processes, controls, security practices, data, and computer systems. This allows that person to steal or compromise confidential and commercially valuable information. It also allows that person to sabotage the company’s computer systems. 

This is what makes insider fraud so complex. Each of those potential actors has their own levels of access, each of the systems they use has its own vulnerabilities, and each person has their own motives for committing fraud.  

Why are organisations struggling to catch this kind of fraud? 

There are different ways to stop people from doing something you don’t want them to do:  

  • You can explain to them why they shouldn’t do that thing. 
  • You can restrict their access. 
  • You can monitor their behaviour for indications that they are about to do the thing you don’t want them to do. 

Parents of toddlers understand this. When a 2-year-old learns to walk, the family’s entire home becomes one big threat landscape.  

You cannot reason with a 2-year-old, so threat management means restricting access and/or monitoring behaviours. The parents quickly learn to listen for things like the unauthorized opening of the silverware drawer in the kitchen. Eventually, however, many parents decide to install a baby gate in the kitchen’s doorway because monitoring the child’s behaviour is exhausting. 

Most insider fraud prevention tools and processes follow the baby gate method of prevention. In banking, organisations tend to have a variety of siloed systems that restrict access to specific users. Sometimes, this gets unwieldy: The call centre uses one system, headquarters another, the various banking branches yet another. Baby gates everywhere. 

Here’s where the complexity of insider fraud itself piles onto the problem. A bank can have hundreds or thousands of employees and partners, each with their own means and motives for committing fraud. Very quickly, those banks discover that preventing insider fraud is too complex for simple solutions.  

That means banks must deploy the other methods of fraud prevention: communication and monitoring. For monitoring to work, though, banks have to be able to see and track what everyone in the organization is doing. Manually monitoring a workforce would be beyond exhausting, in the same way it would be for the toddler’s parents to always be listening for the silverware drawer.  

With machine learning, however, this kind of monitoring is not only possible but incredibly effective at identifying fraud in complex working environments. 

What can banks do to monitor and prevent insider fraud? 

Banks need to have certain key controls in their fraud risk management frameworks. Those controls include: 

  • Onboarding controls. All employees, contractors and partners who access any banking systems must get vetted. 
  • Access controls. Each person’s system access and capabilities should be limited to what’s necessary for their roles. 
  • Continuous risk assessment. This helps the organisation understand the likelihood and impact of fraud. 
  • Education and awareness. Employees and other users need to understand the risks and consequences of fraud. 
  • Intelligence sharing. This lets disparate teams work together to get a shared understanding of the threat landscape. 

These are the access-restriction and communication methods of prevention. A monitoring tool powered by machine learning can then backstop all of these controls by building individual behaviour profiles of all users. Those profiles let a model learn what normal, characteristic employee behaviour looks like, then compare anomalous or suspicious activity against that benchmark. 
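To make the idea concrete, here is a minimal sketch in Python of scoring one day's activity against a user's own historical baseline. The counts, the `anomaly_score` helper, and the three-standard-deviation threshold are all invented for illustration; a production system would learn far richer profiles from real access logs.

```python
from statistics import mean, stdev

# Hypothetical baseline: customer records accessed per day by one user
# over a ten-day training window. In practice a profile like this would
# be learned per user from system access logs.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

def anomaly_score(observed, history):
    """Z-score of today's count against the user's own historical baseline."""
    mu, sigma = mean(history), stdev(history)
    return (observed - mu) / sigma

# A day with 40 record accesses stands far outside this user's norm.
score = anomaly_score(40, baseline)
flagged = score > 3  # rule of thumb: flag anything beyond three standard deviations
```

Because the benchmark is each user's own history rather than a single global rule, the same count can be perfectly normal for one role and highly suspicious for another.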

Let’s use a call centre again as an example. In most call centres, agents don’t control which calls they are connected to, so an agent’s log of inbound phone numbers is effectively random; behavioural data can confirm this. It would therefore be unusual for the same number to reach the same agent multiple times in a day.  

Now, imagine that same call centre agent accessing one customer’s account five or six times in a single day. A machine learning based monitoring solution could flag that anomalous activity, and the bank’s investigators could then follow up to determine whether it was in fact legitimate. 
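A toy version of that flag can be sketched in a few lines. The log entries, the account IDs, and the repeat threshold below are all hypothetical; in practice the threshold would come from the agent's statistical baseline rather than a hard-coded constant.

```python
from collections import Counter

# Hypothetical one-day access log for a single call-centre agent:
# each entry is the customer account opened during a call.
accesses = ["ACC-104", "ACC-221", "ACC-387", "ACC-104", "ACC-104",
            "ACC-552", "ACC-104", "ACC-104", "ACC-104", "ACC-630"]

MAX_REPEATS = 3  # assumed policy: random inbound routing rarely hits one account >3 times

def repeated_account_flags(log, threshold=MAX_REPEATS):
    """Return accounts an agent touched more often than random routing predicts."""
    return {acct: n for acct, n in Counter(log).items() if n > threshold}

flags = repeated_account_flags(accesses)  # {"ACC-104": 6}
```

The flag is a prompt for investigators, not a verdict: repeated access might turn out to be a legitimate multi-call customer issue.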

It’s akin to ears pricking up when the silverware drawer flies open. Very few banks have the resources to monitor such a situation manually, since that would mean someone tracking call logs in real time, but machine learning techniques can. 

This is what enables organisations to be proactive in preventing internal fraud. By monitoring things like anomalous employee behaviour, banks fortify their existing tools for catching insider fraud. As the scope and scale of insider fraud grows, banks that embrace these proactive approaches to fraud prevention will be better protected against insider threats. 



Alex Robinson


Fraud Analytics



This post is from a series of posts in the group:

Financial Risk Management
