Part Three: AI Security Can Make or Break a Financial Institution

“In order to fully realize the potential of AI, we have to mitigate its risks,” the White House Office of Science & Technology Policy recently tweeted. “That’s why we’re excited about @NIST’s release of the AI Risk Management Framework…”

NIST, the U.S. Department of Commerce’s National Institute of Standards and Technology, released its AI Risk Management Framework on January 26th to help innovators manage the many risks of artificial intelligence, a technology trained on data about things like human behavior.

In the context of retail banking customer service, that behavioral data must be combined with a user’s account details and personally identifiable information (PII) for AI to create the personalized interactions that elevate a financial institution’s customer experience (CX). The collection and use of these data inputs must therefore be safeguarded to the greatest degree.

Security is the last of the four pillars explored in this series, which together support the transformed and evolving customer experience that bank and credit union leaders should expect from their investments in AI:

  • AI technology.
  • Access to quality data (see Part I of this series).
  • Customer experience solutions that enable responsiveness, natural interaction and context retention (see Part II of this series).
  • Security for enrollment, authentication and fraud detection.

Four in five senior banking executives agree that unlocking value from artificial intelligence will distinguish outperformers from underperformers. However, in the latest Economist Intelligence Unit survey, bankers identified privacy and security concerns as the most prominent barrier to adopting and incorporating AI technologies in their organizations.

Older, rules-based chatbots require less in the way of privacy protections than AI-backed bots, because they’re limited to answering basic, hours-and-location-type questions. Next-generation bots, such as Bank of America’s Erica (which has helped 32 million users), blow self-service wide open. These intelligent virtual assistants (IVAs) can give financial advice and complete commands that require authentication, such as scheduling payments, making transfers, compiling reports and much more. The best part? They get smarter over time through an always-on learning loop that amasses data during every interaction. More accessible, quality data means better-performing AI, but the involvement of all this sensitive information calls for strict security measures.
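To make the authentication requirement concrete, here is a minimal Python sketch of how an IVA’s command routing might gate sensitive actions behind identity verification. The intent names, Session object and handler logic are hypothetical illustrations, not Bank of America’s or any vendor’s actual API.

```python
# Hypothetical sketch: gate sensitive IVA intents behind authentication.
# Intent names, Session fields and responses are invented for illustration.

from dataclasses import dataclass

# Intents that read account data or move money must never run unauthenticated.
SENSITIVE_INTENTS = {"schedule_payment", "make_transfer", "compile_report"}

@dataclass
class Session:
    user_id: str
    authenticated: bool  # set True only after step-up auth (e.g., OTP or biometrics)

def handle_intent(session: Session, intent: str, slots: dict) -> str:
    """Route a recognized intent, enforcing authentication before any sensitive action."""
    if intent in SENSITIVE_INTENTS and not session.authenticated:
        return "Please verify your identity before I can help with that."
    if intent == "branch_hours":  # basic, hours-and-location-type question
        return "Our branches are open 9am to 5pm, Monday through Friday."
    if intent == "make_transfer":
        return f"Transferring ${slots['amount']} to {slots['payee']}."
    return "Sorry, I didn't catch that. Could you rephrase?"

# An unauthenticated session is refused the sensitive action.
print(handle_intent(Session("u123", False), "make_transfer",
                    {"amount": 50, "payee": "savings"}))
```

The design point is that the check lives in the routing layer itself, so no sensitive handler is reachable by an unverified session, regardless of how the request is phrased.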

While it may seem that chatbots are vulnerable to surreptitious attacks, they can actually provide stronger security than human agents handling service requests. During these AI-driven self-service interactions, data moves between the user, the bot and the backend systems with no human touchpoints, which reduces the likelihood of process breakdowns and of sensitive information being compromised by accident or, unfortunately, on purpose. Additionally, smartly designed IVAs do a better job than humans at thwarting attacks by fraudsters impersonating accountholders: they can be trained to detect suspicious activity by recognizing patterns and anomalies across interactions. AI automation of fraud controls increases the overall security of consumer banking.
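As a toy illustration of that pattern-and-anomaly recognition, the sketch below flags transactions that deviate sharply from an accountholder’s spending history. Production fraud models use far richer features and trained classifiers; the z-score threshold and sample amounts here are invented for illustration.

```python
# Toy anomaly detector: flag transactions far outside an account's
# historical spending pattern. Threshold and data are illustrative only.

from statistics import mean, stdev

def flag_anomalies(history: list[float], new_amounts: list[float],
                   z_threshold: float = 3.0) -> list[bool]:
    """Return True for each new amount more than z_threshold
    standard deviations from the account's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [abs(amount - mu) / sigma > z_threshold for amount in new_amounts]

# Typical spending of $20-$60, then a sudden $5,000 transfer attempt.
history = [25.0, 40.0, 32.5, 55.0, 21.0, 48.0, 30.0]
print(flag_anomalies(history, [45.0, 5000.0]))  # [False, True]
```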

As powerful as artificial intelligence can be as a competitive advantage in banking, a lack of strong security measures is a nonstarter. Without them, bank and credit union leaders jeopardize their customers’ and members’ money, privacy and loyalty, as well as their financial institution’s assets, resources and reputation. To stay competitive, FIs need to give customers and members peace of mind that their data and money are fully protected.

It’s important to understand that NIST’s Framework is a voluntary guidance document for organizations designing, developing, deploying or using AI systems. Not all AI systems are built responsibly. When choosing AI-fueled CX solutions, make certain that the technology’s security standards are up to par for use in the financial services industry.

If done wrong, AI technology can put a financial institution’s future at risk. If done right, AI-powered CX solutions—which improve customer satisfaction and loyalty—can solidify a financial institution’s role as a future market leader.

