When Brian McLaughlin first tested artificial intelligence as a predictive tool for advisor clients at Orion, it backfired, recommending that his firm find another advisor.
"The hallucinations that we saw when we were developing it were at a level we're completely uncomfortable with," McLaughlin, the president of Orion Advisor Technology, said. "That's hallucination fear, right? And we saw it in real life."
Still, that hasn't stopped Omaha, Nebraska-based Orion — which provides a full-service digital platform for advisors — from deploying other useful AI technologies, such as an investor behavioral tracking tool called PulseCheck. But it has taken years to teach the AI models what type of data to input to get the correct output.
"It's all about the models, how to train what data you're putting in, what to ask of it — all these are factors," McLaughlin said. "And the only way we can learn is by testing and playing with the technology."
For those who have been experimenting with AI-backed tools in recent years, the potential is worth the effort, said Sal Cucchiara, chief information officer and head of wealth management technology at Morgan Stanley.
"Technology always has hype, but this one — this one feels a little different. This one will eventually meet the hype," Cucchiara said. "We are seriously looking at this to determine how it will transform how we code, how we build applications. And I think it's really important to look at the patterns of how AI can help us."
But not everyone wants to be the first to test AI. It takes time, can cost millions of dollars to develop and requires an endless amount of data input without the promise of a perfect output.
For advisors, "there's some that they don't know if it's accurate, and then there's some that just don't trust it," said Christopher Marsico, chief financial officer and a partner at Rossby Financial, an open-architecture RIA platform based in Melbourne, Florida. "It's the accuracy of the data that I think is going to be the holdup for a lot of advisors."
Playing with data in AI, without breaking integrity (or the bank)
One of the biggest fears about using AI centers on data integrity and accuracy. At its core, AI is a learning machine: it only understands, and can only produce outputs, based on the data humans feed it.
"There's still so much unknown: how this data is being used, where it's coming from, how it's being protected, that sort of thing," said Josh Schwaber, head of customer experience at Kwanti, a digital portfolio analytics provider based in San Francisco. "I don't think we're there yet where people can really scale their practice with AI on the financial aspects."
And the data required to do that can easily run to hundreds of thousands of documents and datasets.
For a global analytics provider like Morningstar, which has a machine learning tool that reads and rates more than 400,000 of the roughly 1 million investments tracked globally, the key was adding AI language tools to help explain or visualize the basis for financial decisions.
"Nobody used it really in the first early days. What we found was, there's this trust factor, this explainability needed. So we started wrapping around it more explainable pieces," said Lee Davidson, chief data and analytics officer at Morningstar. "Not only do we need to have some good, accurate insights," but "there is a trade-off between how accurate something is and how transparent the process was to get to that answer."
Training AI to give accurate, useful responses can be a grueling, time-consuming process.
But Cucchiara said AI actually sped up a process Morgan Stanley was already doing with machine learning: teaching models to respond accurately to 4,000 common advisor questions. Training the older machine learning models initially took nearly three years, but applying technology from OpenAI, the firm behind ChatGPT, made the process dramatically faster.
"We just ingested all of our content into using OpenAI to allow anyone to ask an open-ended question and get a really, really high-quality answer," Cucchiara said. "So what took us three years to curate 4,000 questions, took us six to nine months leveraging OpenAI to get better sets of questions, high-quality answers and really make it more open-ended so that it could be more useful."
The result was Morgan Stanley's new AI-powered assistant that launched in September 2023 for financial advisors, giving users access to more than 100,000 documents.
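The pattern Cucchiara describes, ingesting internal content so a model can answer open-ended questions against it, is commonly built as retrieval plus generation. Below is a minimal sketch of that approach, assuming the OpenAI Python SDK; the documents, prompts and model choices are illustrative assumptions, not Morgan Stanley's implementation.

```python
# A minimal retrieval-plus-generation sketch: embed internal documents
# once, then answer questions using only the closest matches as context.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; documents, prompts and models are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Fee schedule for managed accounts ...",
    "Procedure for opening a trust account ...",
    "Guidance on required minimum distributions ...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(docs)  # ingest once, reuse for every question

def answer(question, k=2):
    q_vec = embed([question])[0]
    # Cosine similarity ranks which documents are most relevant.
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(docs[i] for i in np.argsort(sims)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only these documents:\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is the fee schedule for managed accounts?"))
```

The design point worth noting is that the model answers from retrieved firm documents rather than from its general training data, which is what keeps the answers grounded in curated content.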
As for smaller advisory firms, some are using OpenAI as an entry point into the technology, but on a limited basis. Advisors often said it works well only when it's contained and applied to specific workflow areas that need greater efficiency.
For example, advisors most often use large language models to help draft summaries or emails to clients, or to transcribe meetings faster. When Financial Planning surveyed advisors recently, half said they use AI "for general office productivity."
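A minimal sketch of that summarization workflow, again assuming the OpenAI Python SDK, with hypothetical notes and prompt rather than any specific vendor's product:

```python
# A short sketch of the most common advisor use case: turning raw meeting
# notes into a client-ready summary email. Assumes the OpenAI Python SDK;
# the prompt, notes and model are hypothetical, not any vendor's product.
from openai import OpenAI

client = OpenAI()

notes = """Met with client 4/12. Discussed rebalancing the 60/40 portfolio,
raising 529 contributions, and reviewing term life coverage before June."""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": ("Draft a brief, professional follow-up email that "
                     "summarizes these meeting notes as bullet points.")},
        {"role": "user", "content": notes},
    ],
)
print(resp.choices[0].message.content)
```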
AI trust barriers with compliance and regulation
A critical unknown for advisors when it comes to adopting AI solutions is the compliance and regulation around these rapidly developing models that are built on data — arguably the most sensitive and valuable information an advisor holds.
"Security is the primary concern we have, and that's where we put a lot of our focus. We personally don't do anything with AI at this point" because of that, Schwaber said. "It's something that we're definitely looking into and exploring. But we want to make sure that anyone using our tool feels fully comfortable with the data security of anything they're putting into the system."
Schwaber is not alone.
Nearly half (49%) of advisors surveyed by Financial Planning said they feared that AI, in its current iteration, could create new ethical concerns and biases. And 24% said their firm has banned AI or restricted its use to specific employee functions or roles.
Part of the issue is that many popular AI large language models, like ChatGPT, run through OpenAI. While companies can create more secure clouds for the machine learning technologies employees use within the firm, OpenAI is trickier because firms are just beginning to create more private domains for it.
In the case of Morgan Stanley's advisor assistant, Cucchiara said the firm essentially has a private service with OpenAI under which the data is not stored outside the firm.
"They don't save it, they don't store it, they don't learn from it," he said about the service agreement with OpenAI. "That's how a large financial services firm that's highly regulated would need to operate. And that's how we want to operate when we're trying to protect ourselves."
U.S. Securities and Exchange Commission Chair Gary Gensler raised regulatory concerns about AI because current guidance on areas like risk management and conflicts of interest is not built for rapidly developing technology.
"While current model risk management guidance — generally written prior to this new wave of data analytics — will need to be updated, it won't be sufficient," Gensler said in remarks at the Yale Law School on Feb. 13. "The challenges to financial stability that AI may pose in the future will require new thinking on systemwide or macro-prudential policy interventions."
The SEC proposed a rule in 2023 outlining potential conflicts of interest in broker-dealers' and investment advisors' use of predictive analytics tools. FINRA has also been monitoring AI, publishing its own reports on developments and, for example, steering member firms toward the SEC's 2023 cybersecurity rules.
Advisors said AI needs a clear regulatory framework for how the models can be used, especially when making financial recommendations. At its core, though, firms need to know exactly what they're using AI for, rather than jumping in because it's trending.
In March, the SEC cited two advisory firms on allegations that they misled investors by suggesting they had advanced AI capabilities.
"A lot of people are bumbling in the dark. … They don't have really clear requirements, they don't know where the finish line is," said John O'Connell, CEO of the Oasis Group, a technology consultant for advisors. "If you don't know where the finish line is, don't start the race."
Nobody is ready (yet) to trust AI to handle their financial lives
While most advisors are comfortable using AI in a limited capacity, such as summarizing lengthy documents, few feel comfortable using it to predict financial outcomes.
When Financial Planning asked wealth managers which areas of their personal lives they would trust AI to be mostly responsible for, only 25% said making financial recommendations, while 50% trusted AI to predict their car or house maintenance needs.
Even when an AI believer like McLaughlin was asked whether he's used AI to make a financial decision in his personal life, he simply responded: "Nope."
"I don't think AI is quite there yet to use it directly. I would play with it, but I don't use it personally to make decisions," McLaughlin said. "But also, my current personal financial situation is more complex."
Advisors also pointed out that this wariness is healthy, because it's the reason AI will not fully take an advisor's job. Clients still look for the specialized, empathetic approach that, at least for the foreseeable future, only a human advisor can provide.
"At the end of the day, and I can speak for myself, I'm 45, so I'm kind of in the middle of the people that won't touch AI and those that are really embracing it," said John Mackowiak, chief revenue officer at Advyzon, a cloud-based portfolio management platform based in Chicago. "As long as you get the answer quickly, and do have the ability to connect with a human as needed, I think that's the important thing to balance."