Insurance unicorn Lemonade has scrambled to reassure customers after a tweet thread boasting about its AI prowess led to worries that it was treating claimants differently based on physical characteristics.
In a now-deleted series of tweets, Lemonade claimed that its use of bots and machine learning means that it collects 1,000 times more data than rivals.
Offering up an example of its supposed superiority, the company tweeted that "when a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can't, since they don't use a digital claims process."
The phrase "non-verbal cues" caused a storm, raising the spectre that AI may be used to discriminate against users based on things like race, gender or disability.
In a blog, Lemonade says the "poorly worded tweet" caused confusion that "led to a spread of falsehoods and incorrect assumptions".
The company can "unequivocally confirm that our users aren’t treated differently based on their appearance, behavior, or any personal/physical characteristic".
Claim videos are used, says Lemonade, to make it easier for customers to describe what happened, and because evidence shows that people are less prone to lying when they can see themselves speaking, as in a mirror or a selfie camera.
Says the blog:
"Let’s be clear:
1. AI that uses harmful concepts like phrenology and physiognomy has never, and will never, be used at Lemonade.
2. We have never, and will never, let AI auto-reject claims."