The challenge for biometrics today is that nearly all modalities need additional security layers. Fraudsters have found ways to bypass on-boarding and authentication procedures, so to make biometrics more secure, liveness detection or anti-spoofing solutions are required to back up the process. Of course, adding security layers can mean more friction.
Most of today’s methods involve the user taking a selfie to prove who they are, but now we also need to prove it is a real, live person. Deepfake images, morphing and AI-generated faces can fool most systems; even simple photos and masks go undetected if there is no security layer, and a replay attack using a video can bypass most of these processes too. So what’s the answer?
Liveness detection has become important. However, there are lots of liveness detection solutions on the market today, with a variety of functions that involve the user performing a task, and there is no common standard among them. This could mean moving your head from left to right, blinking, smiling, touching your nose or ears, following dots around the screen, having lights flashed at your face, or even moving the device around. You might as well stand on one leg at the same time! Nearly all of these tasks feel unnatural to the user, especially to a new consumer who has no idea what this is or why they are performing these tasks. No wonder customers are confused!
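The challenge-response pattern behind these active checks can be sketched as a simple flow. This is a hypothetical illustration only: the function names and challenge list are invented, and a real system would feed `detected_actions` from a face-tracking model rather than a hard-coded list.

```python
import random

# Hypothetical active liveness flow: the server issues random challenges
# and checks the user's detected actions against each one, in order.
CHALLENGES = ["turn_head_left", "turn_head_right", "blink", "smile"]

def issue_challenges(n=3, rng=random):
    """Pick n random tasks for the user to perform, one after another."""
    return [rng.choice(CHALLENGES) for _ in range(n)]

def verify_session(challenges, detected_actions):
    """Pass only if every requested action was observed, in order.
    In practice detected_actions would come from a face-tracking model."""
    if len(detected_actions) != len(challenges):
        return False
    return all(c == a for c, a in zip(challenges, detected_actions))

session = ["blink", "smile", "turn_head_left"]
# A cooperative live user performs each requested task and passes...
print(verify_session(session, ["blink", "smile", "turn_head_left"]))  # True
# ...while a replayed video of one fixed action sequence fails.
print(verify_session(session, ["blink", "blink", "blink"]))           # False
```

Randomising the challenge order is what is supposed to defeat a pre-recorded replay, which is exactly the property the article argues attackers can now erode with AI-driven video synthesis.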
This can slow down the enrolment process, and new research shows that 40% of consumers already abandon the retail bank on-boarding process when applying for a new product or service. The same report found that more than one in three abandonments were due to the length of time taken, and just over a third were because too much personal information was required. The average abandonment rate has risen from 26% to 45% in the last year.
So while active liveness detection methods might guarantee the user appears to be real in some way, what is the point if they create a poor user experience and can be so easily manipulated? Many active methods are completely unnatural, yet some vendors claim their solutions are frictionless, which is difficult to see when users are frustrated by how long-winded the whole on-boarding process is. Take into account that cyber-criminals and hackers have also found ways to replicate some of these tasks through video replay or AI modelling attacks, and any kind of active liveness detection that can be spoofed becomes useless.
Today we are seeing a new, AI-based form of liveness detection known as the passive model. Why passive? Because it requires the user to perform none of these extra motion tasks. Taking a normal selfie means exactly that: it simply looks as though the user is being asked to take a picture of themselves. So where is the liveness detection? Just because you are not moving, or having different colours flashed onto your face, does not mean there is no liveness functionality at work; there is. There is a misconception in liveness techniques that the user has to do something! The human eye can spot the difference between a real person in front of them and a photo, and so can machines, even more so as machine learning and AI improve all the time. As you may know, in some on-boarding scenarios the check is simply proving that you are not a bot!
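To make the idea concrete, here is a toy, hypothetical sketch of the kind of single-frame analysis a passive check might perform. Real passive liveness relies on trained deep models examining texture, depth and reflectance cues; this stand-in scores just one crude cue, local contrast, since flat low-variance regions can be a weak hint of a printed photo or screen replay. All names and the threshold below are invented for illustration.

```python
# Illustrative only: a real passive system fuses many learned signals.
def local_contrast_score(frame):
    """frame: 2D list of 0-255 grayscale values (hypothetical input).
    Returns the mean absolute difference between neighbouring pixels."""
    total, count = 0, 0
    for y in range(len(frame) - 1):
        for x in range(len(frame[0]) - 1):
            # Compare each pixel with its right and lower neighbours.
            total += abs(frame[y][x] - frame[y][x + 1])
            total += abs(frame[y][x] - frame[y + 1][x])
            count += 2
    return total / count if count else 0.0

def looks_live(frame, threshold=10.0):
    """One cue among many; the threshold here is arbitrary."""
    return local_contrast_score(frame) >= threshold

flat = [[128] * 8 for _ in range(8)]  # uniform patch, photo-like
textured = [[(x * 37 + y * 53) % 256 for x in range(8)] for y in range(8)]
print(looks_live(flat))      # False: zero local contrast
print(looks_live(textured))  # True: natural-looking variation
```

The point of the sketch is only that the analysis runs entirely on the captured frame, in the background: the user takes an ordinary selfie and never sees the check happen.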
What is more, with the passive model no hacker would know a liveness check is in progress, and so would not know how to defeat it; using a photo, video replay or mask would not matter. I could explain more about how passive liveness works, but that would be giving the game away, and no leading technology vendor would want to do that. The answer has more to do with anti-spoofing technology than with actual liveness detection. This is the key difference: detecting something that is moving is easy, really just a gimmick; detecting photos and masks is much harder.
However, whether active or passive is better really comes down to your appetite for risk. There are many forms of potential attack, from simple presentation and impersonation attacks to injection attacks that bypass the camera altogether, which many companies using remote digital on-boarding methods have to deal with. If criminals know how to defeat your automated systems, you are open to potential fraud, and we have already seen many examples of that in the press.
So whether it is logging in, on-boarding or payment authentication, the passive approach to liveness detection does not disrupt the user experience, speeds up the customer journey and uses anti-spoofing technology to detect potential attacks. Biometric spoofing has become a serious issue, whether from face artefacts or synthetic voice replays; passive modalities that work in the background can protect biometric systems and help defeat fraud.