When it comes to building AI tools for advisors, teaching the technology what to say is just as important as telling it what not to say. In the advisory world, that often means making sure the AI does not give out financial predictions when a user asks a question.
This is an ongoing process for global firms and AI developers like Morningstar, which deployed the AI-driven chatbot Mo to field investment research questions. Much of the work lies in scoping the tool to a defined job, said Lee Davidson, chief analytics officer at Morningstar.
"For my data analysts who are ingesting data and want to look at a 300-page prospectus document from Luxembourg. … You want it to be able to read that and then provide you assistance," he said. "One of the ways I think about it is it's keeping things on track: What are the guardrails in place that you need to have to keep it focused on the task at hand? And the task at hand is providing an assistant."
Using AI to act as an advisor assistant is becoming a popular approach, with major firms moving in that direction, including Morgan Stanley, whose new AI @ Morgan Stanley Assistant lets advisors quickly query the firm's library of research and internal documents.
This is largely meant to streamline advisors' workflows. But part of this softer use of AI, as an assistant rather than a source of financial predictions, is meant to keep applications contained in the heavily regulated wealth management space, where agencies are closely watching how the technology is used, especially in ways that affect investor decisions or stock performance.
AI gray areas and triggering a human handoff
In March 2024, the U.S. Securities and Exchange Commission brought its first "AI washing" cases, charging two investment advisors with making false and misleading statements about how they used AI.
If a user asks an AI chatbot about a particular stock and the AI says, "'That investment is projected to continue to grow at this rate,' now I've just given you a forward projection. If that is public, then it definitely violates the marketing rule," said John O'Connell, founder and CEO of The Oasis Group, a Monroe Township, New Jersey-based technology consultant. "If it's one-on-one, it's a gray area right now. There's no enforcement on that."
A safer approach is to build in triggers so the AI knows when to hand the user off to a human advisor instead of answering a question that would amount to a financial prediction, he said.
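A minimal sketch of what such a trigger could look like appears below. The patterns and function names are illustrative assumptions, not any firm's actual implementation; a production system would likely use a trained classifier rather than keyword rules.

```python
import re

# Illustrative phrases suggesting the user wants a forward-looking projection.
PREDICTION_PATTERNS = [
    r"will (it|this|the stock) (go up|keep growing|rise|fall|drop)",
    r"(projected|forecast|expected) (return|growth|price)",
    r"should i (buy|sell|hold)",
    r"price target",
]

def answer_with_llm(message: str) -> str:
    return "[assistant answer]"  # stub for the normal chatbot path

def needs_human_handoff(message: str) -> bool:
    """True if the question looks like a request for a financial prediction."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in PREDICTION_PATTERNS)

def respond(message: str) -> str:
    if needs_human_handoff(message):
        # The trigger: defer to a person instead of emitting a forward projection.
        return ("I can't project future performance. "
                "Let me connect you with a human advisor for that.")
    return answer_with_llm(message)

print(respond("Should I buy more of this fund?"))  # routes to a human
```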
Davidson also noted that Morningstar built in prompts and warnings that appear if the AI senses the user is entering personal information, for example.
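That kind of input check might look something like the following sketch. The patterns and names are hypothetical, not Morningstar's implementation, and real systems typically pair regexes like these with dedicated PII-detection models.

```python
import re

# Illustrative patterns for common types of personal information.
PII_PATTERNS = {
    "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account number": re.compile(r"\b\d{10,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def pii_warning(user_input: str) -> str | None:
    """Return a warning to show the user if the input looks like it contains PII."""
    found = [label for label, pattern in PII_PATTERNS.items()
             if pattern.search(user_input)]
    if not found:
        return None  # nothing detected; pass the message through
    return (f"Warning: your message appears to include a {', '.join(found)}. "
            "Please remove personal information before continuing.")

print(pii_warning("My SSN is 123-45-6789"))  # -> warning about a Social Security number
```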
Larger firms are also starting to train new AI chatbots on so-called closed sets of data rather than on public data or through third-party models like OpenAI's. Such is the case with Bank of America's virtual assistant Erica, which surpassed 2 billion client interactions in April.
"Really large financial services firms are going to train their AI on closed sets of data so they can control the responses that AI comes up with. You're going to see that lot," O'Connell said.
Going broad with AI advice
Another tactic for training AI on what not to say is to build it to provide more general summaries rather than specific, reactive responses to, say, a drop in a particular stock.
Savvy Wealth, which built an AI-backed advisor platform, has an Advisor Dashboard that automates client communications, portfolio management and onboarding processes for advisors. The firm keeps its language models out of market analysis altogether, reserving them for summarizing results that other tools produce.
"Use other tooling to get the technical analysis and pipeline it into the large language model to do the summarization," he said. "And then just have a really quick check to make sure that it's good from the financial advisor's perspective. That's how we think about it."