Advisors are using AI but skipping compliance guardrails


Financial advisors are increasingly adopting AI tools, including automated notetakers, meeting schedulers and portfolio analysis software, but one critical step is largely missing: a governance framework for experimenting with AI responsibly.

Among more than 200 financial services compliance leaders recently surveyed, only 12% of those using AI said they have adopted an AI risk management framework, and just 18% said they have a formal testing program for AI tools, according to the 2024 AI Benchmarking Survey conducted by ACA Aponix, a division of ACA Group, and the National Society of Compliance Professionals (NSCP).

The survey, released Oct. 29, found an even more alarming result when it came to tech vendor oversight: 92% of respondents said they have yet to adopt policies and procedures to govern AI use by third parties or service providers.

"Our survey shows that while many firms recognize the potential of AI, they lack the frameworks to manage it responsibly," Lisa Crossley, executive director of the NSCP, said in the release. "This gap not only exposes firms to regulatory scrutiny, but also underscores the importance of building robust AI governance protocols as usage continues to grow."

READ MORE: How to get new tech past the compliance check 

The results were not surprising to industry leaders, who said there is a disconnect between compliance officers and the advisors on the ground who might be experimenting with popular, publicly available large language models such as ChatGPT.

"What you're doing is you're downloading it and playing with it, but you're not really sure how it fits in the practice. And you don't have a plan around it," said John O'Connell, founder and CEO of industry tech consultant, The Oasis Group. "And because of that, you're actually in a more dangerous place." 

Fifty-two percent of respondents who already use AI said they use public generative AI tools, like ChatGPT, while 50% said they use private or enterprise generative AI. Using public models like ChatGPT is especially concerning in wealth management, where most professionals handle clients' personally identifiable information (PII).

"It is not advisable to permit ChatGPT searches on sensitive client PII. Unless the AI tool is utilized within a closed network, the information is likely not secure," said Suzanne Kellogg, compliance officer at Bogart Wealth. "When AI is used to generate a narrative or drive a function, there is a risk that the output could be wrong and needs to be reviewed for accuracy. Compliance needs to be involved when it comes to AI."

Compliance concerns increase as more advisors experiment with AI

Though not shocking, the lack of a governance framework is increasingly concerning as more financial professionals are experimenting with AI tools. 

Most recent studies have shown an AI adoption rate of between 40% and 60%. A recent advisor outlook survey by Schwab Advisor Services found that 62% of more than 1,000 independent investment advisors said they plan to use AI tools to automate routine tasks, compared with 39% who said they'd use AI to enhance risk management and compliance efforts.

READ MORE: AI tools more popular behind the scenes, Schwab survey shows

More recently, a survey of 270 wealth management professionals conducted by Financial Planning found that 34% said they were either doing small-scale implementation or taking an incremental approach to AI over the next 12 to 18 months. And 45% said they were still learning and collecting information on AI.

READ MORE: AI for wealth client growth? Slowly but surely

"The bigger issue remains that if, as an industry, we still don't know how to use it, it seems natural to assume we aren't going to have a great feel for how to regulate its use," said Scott Lamont, managing director at industry consulting firm F2 Strategy. "This feeling of confusion or not knowing applies to the regulators and the users alike."

Most firms do have compliance programs that AI could fall under, such as cybersecurity policies or vendor oversight procedures that largely address data protection and management.

"Beyond those, I'm not surprised that we don't have more detailed governance programs at this point," Lamont said. 

How to build guardrails around AI experimentation

There are many different AI tools advisors can use, but ultimately, everything comes back to the data that users feed the AI.

In building out an AI governance framework, firms should "start by answering these three fundamental questions: How is AI accessing my data? Is it learning on my data? Where is my data stored and for how long?" said Arnold Hsu, founder and CEO of GReminders, an end-to-end meeting management software provider for financial advisors. "Firms should also take the time to understand the touchpoints of AI across their organization."

Ken Lotocki, chief product officer at financial planning platform Conquest Planning, agreed, adding that there needs to be a human on the back end acting as a data gatekeeper and editor, especially before using AI as a consumer-facing tool.

"Adopt the philosophy where human intervention is necessary today [and] should remain in the AI future," he said. "AI should be used to suggest, but a human can make the final decision. AI should be used to summarize and generate an analysis, but a human should edit. [And] AI should be used to identify patterns, but a human should decide how that information can be utilized."

Cybersecurity and privacy around AI tools were the top concern for 45% of respondents in the ACA/NSCP survey. A close No. 2 was uncertainty around regulations or regulatory examinations, at 42%, followed by a lack of talent with AI expertise (28%) and a lack of tools that meet compliance programs' needs (20%).

While there are many compliance concerns with new AI tools, O'Connell cautioned firms against simply banning AI use in the workplace. 

"Compliance is very fast to say 'no' to certain things when the reality of it is, it's going to happen. So it's better for compliance to get ahead of it and come up with some real guidelines around it," he said. "Have an established, formal testing program that's driven by standards that will then drive the adoption of AI in your firm. If you don't have that formal testing program with some standards around what you're going to use it with, you're just going to have people playing with it off-book."
