From dictation to data leaks: FINRA, SEC scrutinize AI risks


Artificial intelligence tools such as large language models (LLMs), along with AI-driven marketing to prospective clients, are becoming profoundly beneficial to financial advisors. But they also pose new risks that regulators such as FINRA and the U.S. Securities and Exchange Commission are weighing heavily.

In recent weeks, authorities have flagged emerging risks such as "AI washing," in which a firm overstates its use of AI; hallucinations, which can occur when using models like ChatGPT; and ethical concerns about using data to personalize marketing to individuals.

"We have to understand not only how these models are working, but also what are the opportunities and what are the risks within each of the models that are out there, especially in generative AI," said Brad Ahrens, senior vice president of advanced analytics at FINRA, who was speaking at the self-regulator's advertising regulation conference on Sept. 27. 

Preventing AI heartbreak via regulation

A week before the FINRA conference, U.S. Securities and Exchange Commission Chair Gary Gensler gave a somewhat humorous warning about AI, comparing it to the movie "Her," in which an AI assistant, Samantha, forms a romantic bond with her user, who becomes overreliant on the technology.

"Regulators, market participants, I believe, need to think about what it means to have the dependencies of potentially 8,316 brokenhearted financial institutions on an AI model or data aggregator," Gensler said during his Sept. 19 video "office hours." "Let's do our best to keep that heartbreak out of our capital markets." 

However, problems with AI tools don't always take the most obvious forms, such as the SEC's charges against two advisory firms in March for misleading the public by overstating their use of AI, the widely known hallucinations of large language models or so-called AI "deepfakes."

Officials are also weighing how firms use AI dictation services in meetings and whether a firm's staff are fully aware that this data can leak into the public training of models such as OpenAI's ChatGPT or Anthropic's Claude.

"If you're using openai.com or anthropic.com, you should be concerned if your employees are using that, because there is a possibility that you could leak data back into the models," Ahrens said. "It does happen."

Another area where Ahrens said FINRA is seeing more AI use cases is firms using chatbot-style AI to answer or summarize questions, such as how to deal with disputes when there are multiple tenants in an investment property.

"A lot of firms are using an LLM there, and they can just have the firm enter a prompt or question that says, 'What do I do when tenants are in a dispute?' And it will take them right to the content in the manual using something called retrieval augmented generation," he said. 

FINRA did not necessarily take a stance on that type of AI use, but officials emphasized the importance of having a human monitor and compliance procedures governing the back end, or final output.

"How are you going to ensure that this gen AI is functioning as it's supposed to? In other words, how are you going to supervise its use," said Philip Shaikun, vice president and associate general counsel in FINRA's Office of General Counsel. "You're going to want to be sure that you've got certain types of procedures, a human in the loop, spot checking." 

That also goes for tech vendors, which both FINRA and the SEC have emphasized firms should monitor closely. Amy Sochard, vice president of FINRA's Advertising Regulation Department, said firms need to check back in with existing vendors that might have updated their technology to include AI since their contracts were first signed.

For example, "if you haven't refreshed the vendor contract because it was a two-year contract and you're finding out, oh, they're now using some kind of generative AI," she said. "You need to know about it." 

AI could open door to exploitative ads

FINRA updated its Rule 2210 guidelines in May to include the use of chatbots and AI in communications with investors and the public. While Ahrens said they "aren't seeing a lot of customer-facing use cases" because of the heightened risks, they are seeing more "hyper-personalization of ads." 

That trend raises challenges around using AI and machine learning tools to track clients and their behaviors through their so-called "digital footprint" for marketing purposes.

"It's opening up the door to exploitative advertising tactics where advertisers may know more about a digital user than the person knows what they're giving up. And that's where we want to start talking about ethics," said Rachael Chudoba, a senior strategist of planning and research at McCann Worldgroup, a global advertising network. 

Chudoba, who was speaking at the FINRA conference, said firms need to ensure their teams are not only educated on the ethical concerns around AI but also understand how those concerns apply to the various tools they use, and how those tools work.

"Things like, is your bias and ethics training up to date to include generative AI situations? Do your teams feel comfortable accessing and learning about legal and policy guidelines when it comes to AI," she said. "And are you educating your teams on the systems that they're being asked to use so that they feel that the tools are explainable and transparent to them?"

FINRA dives into AI for sentiment analysis

FINRA itself is also experimenting with AI tools and behavioral data analytics. For example, it is using AI to gauge the sentiment of public comment letters submitted during a proposed rulemaking, rather than having an individual read thousands of letters.

"It's not just a simple sentiment that you're used to — like happy, sad or myth — it's more figuring out: who wrote it, where is it coming from? Then, we cluster those together. And then we actually go deeper into the sentiment," Ahrens said. "We have a slew of use cases that are already in process." 
