A UK scale-up unveiled an industry first for identity verification this week: users are asked to turn their heads.
Onfido, a spinout from the University of Oxford, launched the software amid rising identity theft. Mounting economic pressure, growing digitization, and pandemic-fueled upheaval recently led politicians to warn that a “fraud epidemic” is spreading across Onfido’s homeland, the UK.
Similar developments have been observed around the world. In the US, for example, approximately 49 million consumers were victims of identity fraud in 2020, costing them a total of about $56 billion.
These trends have triggered a boom in the identity verification market. Increasingly sophisticated fraudsters are also forcing providers to develop more sophisticated detection methods.
Onfido gave TNW an exclusive demo of its new entry into the field: a head-turn capture experience called Motion.
The adoption of biometric onboarding has been held back by two major issues. “Active” detection methods, which ask users to perform a series of gestures in front of a camera, are notorious for their high abandonment rates.
“Passive” approaches remove this friction because they do not require specific user actions, but this often creates uncertainty about the process. A little friction can reassure customers, but too much puts them off.
Motion tries to solve both problems. Giulia Di Nola, an Onfido product manager, told TNW that the company tested more than 50 prototypes before deciding that head-turn capture offers the best balance.
“We’ve been experimenting with device gestures, different pattern movements, end-user feedback and working with our research team,” she said. “This was the sweet spot that we felt was easy to use, safe enough and gives us all the signals we needed.”
Onfido says the system’s false rejection and acceptance rates are below 0.1%. The authentication speed, meanwhile, is 10 seconds or less for 95% of users. That’s fast for onboarding clients, but fairly slow for frequent use – which may explain why Onfido isn’t using the service for regular authentication yet.
In our demo, the process felt fast and seamless. After sharing a photo ID, users are asked to provide their facial biometrics via a smartphone. They are first instructed to place their face within the frame and then turn their head slightly to the right and left – the order does not matter.
As they move, the system provides feedback to ensure correct alignment. Moments later, the app delivers its decision: clear.
Under the hood
As the user turns their head, AI compares the face on the camera to the one on the ID.
The video is sequenced into multiple frames, which are then separated into several sub-components. Then a suite of deep learning networks analyzes both the individual parts and the video as a whole.
The networks detect patterns in the image. In facial recognition, these patterns range from the shape of a nose to the colors of the eyes. In the case of anti-spoofing, the patterns can be reflections from a recorded video, edges on a digital device or the sharp edges of a mask.
Each network builds a representation of the input image. All information is then merged into one score.
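The fusion step described above can be sketched in plain Python. This is an illustrative toy, not Onfido's actual implementation: the network names, scores, and weights below are all invented, and each "network" is stubbed as a fixed spoof score rather than a real model.

```python
def fuse_scores(scores, weights):
    """Merge per-network spoof scores (0 = real, 1 = spoof) into one score
    via a weighted average - a common, simple fusion scheme."""
    total = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total

# Hypothetical outputs from three specialist networks: one looking at
# single frames, one at texture cues, one at the video as a whole.
scores = {"frame_model": 0.08, "texture_model": 0.12, "video_model": 0.05}
weights = {"frame_model": 1.0, "texture_model": 1.0, "video_model": 2.0}

score = fuse_scores(scores, weights)
decision = "spoof" if score > 0.5 else "real"
print(round(score, 3), decision)  # a single merged score drives the verdict
```

The single threshold at the end mirrors what customers reportedly see: one real-or-spoof outcome rather than the individual network signals.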
“That’s what our customers see: whether or not we think the person is real or a spoof,” said Romain Sabathe, Onfido’s applied science lead for machine learning.
Onfido’s trust in Motion stems in part from an unusual business unit: a team dedicated to creating fraud.
In a location that resembles a photo studio, the team tested various masks, lights, resolutions, videos, manipulated images, refresh rates, and angles. In total, they created over 1,000,000 different examples of fraud, which were used to train the algorithm.
Each case was tested on the system. If it passed the checks, the Motion team investigated further with similar types of fraud, such as different versions of a mask. This generated a feedback loop of finding problems, solving them and improving the mechanism.
Motion also had to work for a diverse range of users. Despite stereotypes about typical victims, fraud affects most populations fairly evenly. To ensure the system works for everyone, Onfido has deployed varied training datasets and extensive testing. The company says this has reduced algorithmic bias and false rejections across all geographic regions.
Sabathe demonstrated how Motion works when a fraudster uses a mask.
When the system captures a face, it extracts information from the image. The findings are then displayed as coordinates on a 3D map.
The graph consists of colored clusters, which correspond to characteristics of both real users and types of fraud. When Sabathe puts on the mask, the system plots the image in the fraud cluster. Once he takes it off, the point moves into the cluster of real users.
“We can begin to understand how the network interprets the different spoof types and the different real users it sees based on that representation,” he said.
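The cluster view described above can be sketched as nearest-centroid classification: each capture becomes a point in embedding space, and the closest cluster center decides whether it lands among real users or a known spoof type. The 3D coordinates, centroid positions, and labels below are made up for demonstration; Onfido's actual embedding space and cluster logic are not public.

```python
import math

# Hypothetical cluster centroids in a 3D embedding space.
CENTROIDS = {
    "real": (0.9, 0.1, 0.2),
    "mask": (0.1, 0.8, 0.7),
    "screen_replay": (0.2, 0.2, 0.9),
}

def nearest_cluster(point):
    """Return the label of the centroid closest to a 3D embedding."""
    return min(CENTROIDS, key=lambda label: math.dist(point, CENTROIDS[label]))

print(nearest_cluster((0.85, 0.15, 0.25)))  # a genuine capture plots near "real"
print(nearest_cluster((0.15, 0.75, 0.65)))  # a masked capture plots near "mask"
```

This mirrors the demo: putting on the mask shifts the embedded point toward the fraud cluster, and taking it off shifts it back.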
Onfido’s head-turn technique resembles one unveiled last month by Metaphysic.ai, a startup behind the viral Tom Cruise deepfakes. The company’s researchers found that a sideways glance could expose deepfakes in video calls.
Di Nola notes that such synthetic media attacks remain rare for now.
“It’s certainly not the most common type of attack we see in production,” she said. “But it’s an area that we’re aware of and that we’re investing in.”
In the field of identity fraud, both attacks and defenses will continue to evolve rapidly.