
Armed With Advanced AI, Brand Impersonators Are Casting A Much Wider Net For Victims

Digital impersonation scams are emerging as the “new normal” in the cybercrime industry, with recent research highlighting the lucrative nature of such attacks, a rise in the use of AI to enable them, and the inability of brands to protect their customers. 

The rapid growth of “brandjacking attacks” was the main focus of a June report from Cisco Talos, which describes how attackers pose as recognized brands to gain their victims’ confidence, reaching out via email or social media. According to the report, attackers are using sophisticated measures that go beyond spoofed emails, leveraging official logos and titles to bypass existing security systems.

An earlier report by Check Point Research named Microsoft and Google as the two most impersonated brands, followed by LinkedIn and Apple, but it’s not only technology companies that attackers love to impersonate. Another company often targeted is DHL, while Wells Fargo and Airbnb also made the top ten list.

While the exact methods differ from scam to scam, brand impersonation attacks all follow a similar modus operandi: a message that appears to come from an official representative of the company invites users to click a link, which takes them to a fake website designed to steal the victim’s login credentials.

Brandjacking: A Growing Threat

Brandjacking isn’t just a threat to top-tier companies, though, as many scammers are now targeting mid-sized brands to cast a wider net in their search for victims.

In its 2024 State of Digital Impersonation Fraud survey, the digital trust technology firm Memcyco highlights the increased prevalence of brand impersonation attacks against all types of companies with a digital presence, including lesser-known ones. Companies without adequate protection against website impersonation make especially attractive targets, because they often remain unaware of such scams for weeks, and sometimes even months.

One of the major findings of Memcyco’s survey is that a majority of companies only learn about brand impersonation attacks from their own customers, usually when those customers complain on online forums and social media, causing significant negative publicity. It found that 66% of brands essentially rely on their customers as a source of threat intelligence on impersonation attacks, mainly because they’re unable to detect them before being “brand shamed” by victims.

Another notable issue raised in Memcyco’s report is the responsibility of companies to reimburse customers who fall victim to fraud through brand impersonation attacks. Notably, although 48% of companies are aware that upcoming regulations will most likely force them to reimburse customers in such scenarios, the report found that a whopping 81% of companies currently do not reimburse customers for losses stemming from fraud.

Brand Impersonation is Evolving with AI

Recent research also highlights the rapid evolution of brand impersonation attacks. In January, Visa revealed that attackers made off with more than £239 million in so-called “authorized push payment fraud”, or APP fraud, which involves tricking victims into sending funds directly to a fraudster posing as a genuine payee.

The cybercriminals' increased sophistication is being aided by the widespread availability of advanced AI technologies. In May, Signicat said in a report that over a third of reported fraud attempts now use some form of AI, highlighting the rise of “deepfakes” that can be used to create fake personas that fool identity verification tools, as well as AI voice cloning to impersonate human callers. Around a third of such attacks are believed to be successful, Signicat said. 

In April, BioCatch published its first-ever study on AI fraud, which quizzed around 600 fraud management, anti-money laundering and compliance officials. It found that almost 70% of those respondents believe cybercriminals are better at using AI technologies to enable fraud than their companies are at using AI to prevent such scams. 

How Can Consumers Stay Safe?

Experts say that consumers should always be cautious when dealing with unsolicited communications from companies. To stay safe, consumers should verify the identity of any company that messages them, and should avoid clicking on any links or attachments embedded in such messages. In addition, users can enable two-factor authentication on their accounts, as this makes it much harder for attackers to steal all of the credentials needed to access them.

As for businesses, any company with a digital presence and an online customer base should seriously consider taking proactive measures against brand impersonation attacks, which are only growing in volume and sophistication. And if the UK’s mandatory reimbursement requirement for APP fraud is any indication of where regulation is headed, companies would do well to safeguard their customers from the start, or risk paying for it financially in the long run.



Sheza Gary
Project Strategist, Self Employed, New York

This post is from a series of posts in the group:

Artificial Intelligence
