FINRA, SEC send warning on deepfakes amid their own AI plans


Regulatory officials from FINRA and the U.S. Securities and Exchange Commission said May 14 that they are looking at how to apply AI tools within their agencies, but they also cautioned against the technology's risks, particularly deepfakes targeting firms that use voice-verification software.

Speaking at FINRA's annual conference in Washington, D.C., regulators largely welcomed the use of AI within safe limits.

"We really want to encourage that innovation. And, for the FINRA team, we have to be looking at how we can help support that as best as we can," FINRA CEO Robert Cook said at the conference. "There are clear risks that we need to manage. But there are also clear opportunities for the industry, for investors, for the markets." 

READ MORE: In AI we trust? The peril and potential of embracing AI in wealth

FINRA launched its Crypto Hub in 2022 as one of several initiatives examining evolving technology and regulation, including monitoring AI applications in the securities and broker-dealer spaces.

Eric Noll, chairman of FINRA's board, said the regulator is beginning to examine AI and machine learning tools much as it did cryptocurrencies.

"We're beginning that same process that we did in crypto, which is understanding the [AI] space . . . to really bring it in-house and understand how those things are going to impact the operations of a regulator and the operations of broker dealers," he said. "And then, how can we be ready for them as they become part and parcel of our industry going forward" because "we have no idea what's coming next. There's going to be change."  

SEC officials, who often work with FINRA in examining how to regulate emerging areas like crypto and AI, said they were also looking at ways to apply AI. 

In particular, the SEC has been exploring how large language models (LLMs), similar to ChatGPT, could streamline regulatory work, such as assisting technical staff with coding and "cleaning up" the agency's unstructured data, said Marco Enriquez, principal data scientist in the SEC's Division of Economic and Risk Analysis.

"We have tons of unstructured data that we have to deal with. And quite frankly, it's challenging for staff at our agency, given the limited numbers," he said. "So, to the extent that the LLMs could help at least tidy up some of that data, remove some of the noise, that actually goes a long way." 

While regulators are exploring ways to apply AI, they cautioned that the technology is not yet being used to make investment recommendations to retail customers. They also raised major concerns about deepfakes, in which AI mimics a person's voice and image to commit fraud.

The risk is especially acute for financial companies that rely on voice-recognition software to authenticate customers.

READ MORE: Banks hear the eerie echoes of AI-generated voices

"Deepfakes are a serious concern," said Scott Gilbert, FINRA's vice president in the department of member supervision and risk monitoring. "With voice authentication as a mechanism for identifying customers, ensuring integrity of transactions is going to be very challenged, if not impossible, in the near future."
