The gig economy is exploding, especially in the wake of the global pandemic. According to Bureau of Labor Statistics projections, gig workers were expected to make up 43% of the workforce by 2020, and 40% of millennials identify as participants in the gig economy.
The gig economy means transitory jobs: rideshare drivers, work-from-home graphic designers, temporary customer service agents. While most are part-time roles, they are quite stable while they last. In times of economic crisis, some gig jobs disappear altogether, while others – such as delivery services in the current crisis – see a huge surge in demand.
The surge in the freelance economy has left many organizations that rely on these workers struggling with a new phenomenon: freelancers sharing their digital accounts with someone else. Why is this happening now more than ever? In the current economic climate, people are anxious to earn more income. A freelancer's digital identity is suddenly quite valuable: it has been verified and vetted, and is authorized to perform whatever service the gig economy company offers. But since no one can work 24/7, sharing that account with family, friends or other interested parties to keep generating income in a downturn is appealing – and a very real phenomenon happening today.
This makes a lot of sense from an individual perspective. These are difficult times, and the extra income shouldn't be anybody's business. From the larger perspective of society, however, this creates a major trust and safety issue. Think about your favorite ridesharing app: You order a ride, step into the car, and find a completely different driver behind the wheel.
Or think about a call center service that operates on behalf of Fortune 500 companies. Lockdowns mean that most customer service agents are now working from home. Who can say whether an agent who punched in 12 hours a day really worked two shifts, or shared her account with a friend – a friend who has not been properly trained, has not signed an NDA, and is not authorized to access your private data?
Or consider a top-ranked web developer who provides online services as a freelancer, earning top dollar for projects. To generate additional income, he "rents" his identity to unvetted freelancers so they can enjoy access to top-paying jobs, and he collects a commission in exchange. You're paying premium dollars for work that, in fact, isn't done by the top-ranked freelancer at all.
Trust is a key component in work-from-home environments, and when identity controls are broken, you can trust no one.
When digital accounts are misused and shared, there are far-reaching implications: lack of accountability, lack of attribution, reputational damage when foul play is discovered, and, quite often, trust and safety concerns.
Devices Can’t Be Trusted
The verdict on passwords as a way to authenticate a digital identity was reached ages ago: absolutely untrustworthy. That is why, over the last two decades, online and mobile applications settled on a different way to handle digital identities: the Trusted Device. The premise of “something you have” became synonymous with digital identity. If you come from a device that has been seen on your account before, it must be you. Furthermore, once you verify a new device via a one-time passcode, it becomes a trusted device – a token of your identity. As long as you log in from your “trusted” device, it's got to be you.
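The trusted-device flow described above can be sketched as a minimal example. Everything here is illustrative – the fingerprint scheme, the in-memory stores, and the function names (`device_fingerprint`, `issue_otp`, `verify_otp`) are assumptions, not any vendor's real API:

```python
import hashlib
import hmac
import secrets

DEVICE_STORE = {}   # user_id -> set of trusted device fingerprints
PENDING_OTP = {}    # (user_id, fingerprint) -> outstanding one-time passcode

def device_fingerprint(user_agent: str, device_id: str) -> str:
    """Derive a stable fingerprint for a device (highly simplified)."""
    return hashlib.sha256(f"{user_agent}|{device_id}".encode()).hexdigest()

def is_trusted(user_id: str, fp: str) -> bool:
    return fp in DEVICE_STORE.get(user_id, set())

def issue_otp(user_id: str, fp: str) -> str:
    """Unrecognized device: generate a passcode to deliver out of band."""
    otp = f"{secrets.randbelow(10**6):06d}"
    PENDING_OTP[(user_id, fp)] = otp
    return otp

def verify_otp(user_id: str, fp: str, submitted: str) -> bool:
    """A correct passcode promotes the device to 'trusted'."""
    expected = PENDING_OTP.pop((user_id, fp), None)
    if expected is not None and hmac.compare_digest(expected, submitted):
        DEVICE_STORE.setdefault(user_id, set()).add(fp)
        return True
    return False

# A new device is untrusted until the passcode round-trip succeeds.
fp = device_fingerprint("Mozilla/5.0", "device-123")
assert not is_trusted("alice", fp)
otp = issue_otp("alice", fp)
assert verify_otp("alice", fp, otp)
assert is_trusted("alice", fp)
```

The weakness the article describes is visible even in this sketch: the check proves possession of a device (and one passcode delivery), not the identity of whoever is holding it.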
The reality is that this is no longer true. Quite far from it, actually. Cybercriminals have found so many ways to bypass device checks that it is scary to even list them all here. And that's only half of the problem: people now use multiple devices, with the average household owning 11 connected devices. The fact that you come from a new device does not mean it's not you.
So the traditional ‘what you know’ and ‘what you have’ factors are not reliable ways for gig economy companies to control identity. What about biometrics?
Selfies and Fingerprints to the Rescue?
Selfies and fingerprints are becoming mainstream authenticators. But when you think about it, a device recognizing your fingerprint or face is proof that the device knows you, not proof that you are who you claim to be. Unless the face image or fingerprint is matched against a central database or a separate document that can be independently validated, all it really means is that the device recognizes the person who set it up.
Moreover, fingerprint and face biometrics are even less effective against the gig economy identity-split problem because you can easily add more fingerprints to your iPhone or Android device – after all, they were originally designed as a convenience feature. So if you want your iPhone to be unlocked by your spouse and kids, knock yourself out. It's wide open. The same goes for face recognition – you can add a second face that your device will recognize as legitimate, which is especially useful nowadays when people walk around in face masks. So no, if someone is willingly sharing their account with a friend or family member to boost their freelancer profits, device-based biometrics are the least of their problems.
What if the biometric analysis is done behind the scenes, though?
Several types of passive biometrics have been developed and refined over the last few years. In a call center environment, it is now possible to continuously record the agent's voice and match it against their historic profile. If an anomaly is found, it can be investigated.
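A highly simplified sketch of that voice-matching step, assuming a speaker-verification model has already turned recordings into fixed-length embedding vectors (the vectors and the 0.80 similarity threshold below are made-up illustrations, not real model output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_anomalous(profile, live, threshold=0.80):
    """Flag the session if the live voice drifts too far from the profile."""
    return cosine(profile, live) < threshold

historic_profile = [0.9, 0.1, 0.4]     # averaged embeddings from past shifts
same_speaker     = [0.85, 0.15, 0.38]  # live sample, similar voice
other_speaker    = [0.1, 0.9, 0.2]     # live sample, very different voice

assert not is_anomalous(historic_profile, same_speaker)
assert is_anomalous(historic_profile, other_speaker)
```

A production system would work on real acoustic embeddings and tuned thresholds, but the shape of the check – continuous comparison against a per-agent profile, with anomalies escalated for review – is the same.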
In web and mobile applications, behavioral biometrics is the new queen. Behavioral biometrics silently monitors the user's interaction – mouse motions, typing patterns, cognitive choices and navigational preferences. Behavioral biometrics is not designed to replace a password; in fact, it's actually interesting to see how one types their password. But it does provide continuous monitoring and can point to anomalies in user behavior. It's also less intrusive than matching fingerprints and faces: those can actually trace a person and – if stored in a central repository – can be compromised and traded. Behavioral biometrics, by contrast, is more statistical in nature, used to verify that the behavior in the account matches past behavior and that no foul play is spotted; it was never designed to trace a specific individual and can't be used as an identifier.
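To make the idea concrete, here is a toy sketch of one such signal – keystroke timing. A per-user profile (mean and standard deviation of inter-key intervals) is built from past sessions, and a live session whose rhythm falls far outside that range is flagged. The interval values and the 3-sigma rule are assumptions for illustration; real behavioral biometrics engines combine many signals and far richer statistics:

```python
import statistics

def build_profile(sessions):
    """sessions: lists of inter-key intervals (ms) from the genuine user."""
    intervals = [t for session in sessions for t in session]
    return statistics.mean(intervals), statistics.stdev(intervals)

def flag_session(profile, session, sigmas=3.0):
    """Flag a live session whose mean interval deviates beyond N sigmas."""
    mean, stdev = profile
    live_mean = statistics.mean(session)
    return abs(live_mean - mean) > sigmas * stdev

# Historic typing rhythm of the account holder.
past_sessions = [[110, 120, 105, 115], [108, 118, 112, 125]]
profile = build_profile(past_sessions)

assert not flag_session(profile, [112, 119, 109])  # consistent rhythm
assert flag_session(profile, [260, 240, 255])      # very different rhythm
```

Note what the flag means: not "this is person X," only "this is statistically unlike the past behavior on this account" – which is exactly the account-sharing signal described next.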
Account sharing is one of the things behavioral biometrics can highlight. Banks have already found it is the only way to really detect, say, users of corporate online banking services who share their credentials with coworkers – which, from a bank's perspective, is a serious breach of trust, as actions can no longer be attributed to a single, personally accountable identity.
Back to the gig economy. So what's wrong with someone making a few extra bucks by sharing their work-from-home account with friends and family? For the worker, it is seemingly harmless and not done with bad intentions. But for the organizations that operate gig services, it opens the door to incredible risk. When a digital identity is split, they lose all means of control. There is no way to really know who is providing the service – whether they have been vetted, or perhaps were disqualified for some reason and came back under the identity of someone willing to share an account with them.
Identity is extremely important, and splitting it through account sharing creates significant risk. It’s time for gig economy businesses to re-think digital identity and models for building trust and safety.