Large language models could change how banks interact with customers and with their own knowledge bases, and how they protect themselves and their customers from fraud and other financial crimes. But few banks have released products that actually deploy the nascent technology.
That has left smaller banks that are in the learning and experimentation stages to take cues from technology leaders on where large language models — the kind of technology that powers OpenAI's ChatGPT — will become most useful in banking.
Large language models are one example of generative AI, a type of artificial intelligence that can produce text, images, video or other content mimicking the material on which it was trained. According to Michael Haney, head of product strategy at Galileo Financial Technologies, ChatGPT put this technology on many banks' radars very suddenly.
"There are very few banks who've put this into the production environment," Haney said of large language models. "Most banks may have not even been aware of generative AI until ChatGPT made headlines."
Two examples of banks using large language models in an experimental capacity or otherwise keeping their use strictly internal include Goldman Sachs using generative AI to
Additionally, JPMorgan Chase
As banks grow more interested in adopting AI for various use cases, they need to be careful about their strategy for doing so, according to Jen Fuller, U.S. financial services lead at PA Consulting.
"One of the big risks about AI for organizations at the moment is it turning into a Frankenstein's monster of pet projects," Fuller said. "Everybody's doing their own little thing with AI, but to really get the organizational value at a strategic level, you need to build a framework where AI is part and parcel of the way that your organization does business."
One way banks are making AI part and parcel of their business is by organizing their knowledge bases: training language models on internal documentation so that employees can ask the model questions that could otherwise be answered only by searching that documentation themselves.
Organize institutional knowledge
SouthState Bank's director of capital markets
Similarly, in March, OpenAI and Morgan Stanley
Internal uses of large language models to organize institutional knowledge have the advantage of filtering model output through bank employees rather than delivering it directly to the customer. One of the well-known problems with large language models is that they can hallucinate, stating as fact something that sounds plausible but is actually false.
This is one of the main motivations for Sydney-based bank
Bloomberg has also taken a stab at organizing financial knowledge, by
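In practice, the internal-knowledge pattern described above is often built around retrieval: relevant passages are pulled from internal documentation and handed to the model as context, so answers stay grounded in what the bank has actually written down. A minimal sketch in Python follows; the document store, scoring method and prompt format are illustrative assumptions, not any bank's actual system.

```python
# Minimal sketch of retrieval over internal documentation.
# The documents, scoring, and prompt template are illustrative only.

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

# Hypothetical internal knowledge base (stand-in for real bank documentation).
DOCS = {
    "wire-policy": "Wire transfers over $10,000 require dual approval by a branch manager.",
    "overdraft": "Overdraft fees are waived for accounts enrolled in the protection program.",
    "holds": "Mobile check deposits may be held for up to two business days.",
}

def retrieve(question, docs, k=1):
    """Rank documents by how many words they share with the question."""
    q = set(tokenize(question))
    scored = sorted(docs.items(),
                    key=lambda kv: len(q & set(tokenize(kv[1]))),
                    reverse=True)
    return scored[:k]

def build_prompt(question, docs):
    """Assemble the context-plus-question prompt a language model would receive."""
    hits = retrieve(question, docs)
    context = "\n".join(text for _, text in hits)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long can a mobile check deposit be held?", DOCS))
```

Because the model answers from retrieved text rather than from memory alone, an employee reviewing the output can trace a claim back to the underlying document, which is part of why banks favor keeping these tools internal.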
Provide customer service
Few banks have deployed chatbots that they publicly claim are powered by large language models, but companies such as Kasisto (serving banks) and Monarch (serving consumers) promise chatbots powered by large language models.
As for chatbots overall, some of the leading customer service chatbots include Capital One's
"I haven't seen anyone market their chatbot as a large language model," though banks will often market them as AI- or machine learning-powered, said Doug Wilbert, managing director in the risk and compliance division at Protiviti.
Rather than working like a language model, some chatbots work more like interactive voice response. Also known as IVR, this technology enables the automated interactions customers have when they call a company's support line. Rather than telling the caller to select from a menu of options by pressing a number during the call, IVR enables the caller to give short descriptions of what they need and redirects their call accordingly.
As banks started to release chatbots, some viewed them as replacements for IVR, according to Galileo's Haney. Rather than run the user input through a large language model to sift through the nuances of what the customer said, these chatbot systems tend to look out for keywords, which can lead to shortcomings.
"The problem is you can't anticipate every random question that the customer is going to have," Haney said of these IVR replacements.
For example, such systems struggle to interpret longer user inputs that provide context for their inquiry ("I deposited my paycheck before going shopping, but my card declined. Why did that happen?"). These systems can also struggle with inquiries that include multiple requests in one ("I want to see my checking balance and put half of it into savings").
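The keyword-matching behavior Haney describes, and the shortcomings above, can be illustrated with a toy router. The intents and keyword lists below are invented for illustration and are not drawn from any vendor's product.

```python
# Toy keyword-based intent router in the style of IVR replacements.
# Intents and keywords are invented for illustration.

INTENTS = {
    "check_balance": ["balance"],
    "card_declined": ["declined", "decline"],
    "transfer": ["transfer", "move"],
}

def route(utterance):
    """Return every intent whose keywords appear in the utterance."""
    text = utterance.lower()
    return [intent for intent, keys in INTENTS.items()
            if any(k in text for k in keys)]

# A direct request matches cleanly:
print(route("What is my balance?"))  # ['check_balance']

# A contextual query matches on one keyword but the surrounding
# context (the paycheck deposit) is ignored entirely:
print(route("I deposited my paycheck before shopping, but my card declined. Why?"))

# A compound request: "put half of it into savings" contains no known
# keyword, so the second half of the request is silently dropped:
print(route("I want to see my checking balance and put half of it into savings"))
# ['check_balance']
```

A large language model, by contrast, can parse both halves of the compound request and resolve references like "half of it," which is the gap these vendors are pitching.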
These are the exact kinds of shortcomings the Consumer Financial Protection Bureau