
Learning to trust in a scam pandemic

We've witnessed a sharp uptick in fraud cases and scams since the onset of COVID-19, but that's not to say technology within the banking industry hasn't improved. Now it's getting tougher to crack someone’s bank account open via a brute-force attack – as it should be. According to a recent Javelin identity fraud report, fraud losses amounted to a whopping $56bn in 2020, impacting 49 million people. So where are we going wrong?

The fight against scams

Of that $56bn, some $44bn originated from scams. We may have thought of fraudsters as cyber-wizards before, but now they are upskilling as expert manipulators. Scams really took off with the onset of the pandemic, as fraudsters began mirroring and exploiting consumer insecurities. Some things never change, and our psychology is one of them – it is what makes us vulnerable.

Hard-wired to trust, we are particularly susceptible to skilful and confident impersonators. We all too often fall prey to con artists who bet on us making quick, irrational decisions off the back of strong human emotion. Too many headlines tell stories of those ‘compelled’ to transfer money to a criminal masquerading as a representative of their bank, workplace or even the country’s tax authorities. It’s time to bring trust back to where it’s lacking: a digital economy that, left unchecked, inadvertently opens doors to malicious actors.

With many sharing economy platforms, such as Airbnb and Uber, opting to verify and re-verify users, we’ve made many aspects of our lives more scam-proof. Now, it’s time to tackle the broader issue at hand. Take a social media handle, a dating app profile, or a text from an impersonator asking to transfer money to them. Are they a real person or a bot? Are they who they say they are? Can they be trusted?

Eye-spy a scammer

Sometimes it’s hard to spot a scam, even for experienced professionals. We’ve heard many a story of a “Bogus Boss” scam, in which an impersonator calls or emails an overseas office and successfully demands an urgent transfer – and that’s after years of countless cyber, data security and anti-money laundering training. But what chance does someone as young as 11 stand – the age at which young people start being targeted by social media requests and chatbot scams? While it feels like a storyline straight out of Black Mirror, it is frightening how successful teen financial scams are.

Tactics vary by age group, becoming more ‘analogue’ with age – phone calls for retirees, social media traps for Gen Z. Major banks are now proactively warning customers that legitimate employees would never ask them to transfer funds to a different account. Once a victim has willingly authorised a payment to a cybercriminal, the options for unwinding the social engineering and returning the money rapidly diminish. Less than half of all bank transfer victims get their money back.

Even when it comes to big traditional banking operations, such as consumer lending, we still see charlatans ‘go big or go home’. Fraud on big-ticket items – such as taking out a car loan or a mortgage under someone else’s name – jumped by up to 15% in 2020 compared to the year before. More can be done to stem these losses – relying on our ability to look someone in the eye and tell whether their application is legitimate isn’t a viable solution for the global digital economy.

Solving the problem

For years now, we’ve stressed the importance of balancing experience with secure fraud detection to both protect and empower. So it comes as no surprise that public confidence in static passwords is diminishing – only 45% felt their passwords were secure in 2020. 

Interestingly though, most scams in 2020 were triggered by our behaviour, not our ability to keep log-in details safe. This includes peer-to-peer transfers, robocalls, emails, and social media solicitation. This is where modern technology fails when human emotion takes over – our instincts don’t protect us from self-sabotaging our finances when we are scared, concerned, want to get a better deal, or want to help a fellow human being. 

Getting the technology right, then, must be the constant focus of banks and financial services firms. It starts with future-proof digital identification and robust digital identity re-verification processes – paying particular attention to flagging recently lost or stolen documents, as well as potentially forged IDs.
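As a rough illustration of the re-verification step described above, the sketch below checks a document against a registry of reported lost or stolen numbers. Everything here is hypothetical: the `IdDocument` type, the in-memory `LOST_OR_STOLEN` set, and the `reverify` function are stand-ins; a production system would query an external registry service and add forgery and liveness checks.

```python
# Minimal sketch of a document re-verification pass, assuming a
# hypothetical lost/stolen registry. Illustrative only.
from dataclasses import dataclass


@dataclass
class IdDocument:
    number: str
    expired: bool


# Hypothetical in-memory stand-in for a lost/stolen document registry.
LOST_OR_STOLEN = {"X123456", "Y987654"}


def reverify(doc: IdDocument) -> tuple:
    """Return (ok, reason) for a single re-verification pass."""
    if doc.expired:
        return False, "document expired"
    if doc.number in LOST_OR_STOLEN:
        return False, "document reported lost or stolen"
    return True, "ok"


# A document reported lost should be flagged even if otherwise valid.
print(reverify(IdDocument("X123456", expired=False)))
```

The key design point is that re-verification is recurring, not one-off: the same check re-runs whenever the registry updates, so a document that passed at onboarding can later be flagged.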

Our uniqueness plays a role

We’re also seeing that consumer trust in other types of security, such as biometrics, is on the rise. This suggests that after a slow burn in the last few years, fingerprints, voice-based verification, facial and retinal scanning are now acceptable – even desirable – to consumers. They provide effective authentication while removing the need to remember complex password sequences (which we either reuse across the board or forget), adding only seconds to existing onboarding processes.

It’s this layered approach that will ensure fraud detection and prevention improve at the same rate as banks adopt new technologies. The more layers of authentication we have, the stronger our defences – face and voice together, for example, are many times stronger than face or voice alone. Layering will be the key to cracking down on fraud for good.
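The arithmetic behind layering can be sketched in a few lines. Assuming (and this is a simplification) that each factor’s false-accept rate is independent, an impostor must pass every layer, so the combined rate is the product of the individual rates. The figures below are illustrative assumptions, not vendor numbers.

```python
# Hedged sketch: combining independent authentication layers (AND rule).
# Illustrative false-accept rates (FAR); real figures vary by system.

def combined_far(fars):
    """FAR when an impostor must defeat every layer in the list."""
    result = 1.0
    for far in fars:
        result *= far
    return result


face_far = 0.01   # assumed: 1 in 100 impostors passes face matching
voice_far = 0.01  # assumed: 1 in 100 impostors passes voice matching

# Requiring both: 0.01 * 0.01 = 0.0001, i.e. 1 in 10,000.
print(combined_far([face_far, voice_far]))
```

In practice the layers are rarely fully independent, so real-world gains are smaller than this multiplication suggests – but the direction holds: each added factor cuts the impostor’s odds substantially.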

Trust is the priority

Humans should never be considered the 'weakest link'. Technology is here to return trust as the vital ingredient of financial transactions and spread it far and wide in the digital economy. And it all starts with our ability to correctly identify and verify who exactly our counterpart is.



René Hendrikse

Vice President & Managing Director, EMEA & LATAM







This post is from a series of posts in the group:

Digital Identity Management

Discuss upcoming trends in digital proofing, authentication, fraud and digital identity management.

