How tech leaders are removing bias from AI

Helios Quantitative Research CEO Chris Shuba, QuantStreet Capital Chief Investment Officer Harry Mamaysky and StockSnips CEO Ravi Koka speaking on a panel during the ADVISE AI conference on Oct. 10.
Rachel Witkowski

One of the biggest concerns financial advisors have about using AI tools, such as the large language model-powered chatbot ChatGPT, is distrust rooted in the well-documented bias and hallucinations in their output.

"Trust is a huge deal," said Brooke Juniper, CEO of TIFIN's AI investment platform Sage. "We are trying to ensure that our products can build trust with advisors."

Juniper was speaking at Financial Planning's first AI-focused industry conference, ADVISE AI, held Oct. 9-10. Top tech leaders, including those from Alai Studios and StockSnips, fielded many questions about trust and bias during a conference largely devoted to showing how AI tools can be applied to investing, internal workflows and client engagement.


"It's a good question and a tough question. There is no such thing as no bias," said Ravi Koka, founder and CEO of StockSnips, an AI-powered investment strategies platform that combines trading sentiments. "You can take any data in the world and you're going to see some bias. The question is, can you minimize it?"

Koka said StockSnips trained its AI on data drawn from about 50 million articles curated from 25 different sources, and the firm then applies a "360-degree view" to help minimize bias.

"All you can do is conquer it by getting a 360-degree review, and then you have to be very careful about the sources you choose," Koka said. 

Setting parameters on the data used to train the AI was paramount for the top tech developers at the conference.

To prevent AI models from drifting or hallucinating, developers like Alai Studios and Sage train their AI tools on controlled data in private environments, rather than on the open web that feeds public models like OpenAI's ChatGPT.

"It's so, so important having the right amount of appropriately structured, clean training data. You cannot underestimate . . . the importance of having a clear idea of what you want the outcome to be," Juniper said. 

Training the AI on contained, clean data also helps advisors produce more customized output for each client, whether that means rebalancing an investment portfolio, managing taxes or planning an estate.
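Rebalancing is the most mechanical of those tasks, so it makes a concrete example. A minimal, hypothetical sketch: given a client's current holdings and target weights, compute the trades that restore the target allocation. The personalization the panelists describe comes from feeding in each client's own holdings and targets.

```python
# Toy rebalancing: dollar trades that move current holdings to target weights.
def rebalance(holdings: dict, targets: dict) -> dict:
    total = sum(holdings.values())
    return {asset: targets[asset] * total - holdings.get(asset, 0.0)
            for asset in targets}

holdings = {"stocks": 70_000.0, "bonds": 30_000.0}
targets = {"stocks": 0.60, "bonds": 0.40}  # this client's 60/40 mandate

for asset, trade in rebalance(holdings, targets).items():
    action = "buy" if trade > 0 else "sell"
    print(f"{asset}: {action} ${abs(trade):,.0f}")
```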


"Every client's portfolio is different. Every financial situation is different. So you can give AI the data and train it and show how to evaluate" a client's portfolio, Juniper said. "Ultimately, our goal is to power advisor practices and give advisors more personalized outputs that they can take to clients so that they can scale and deliver more and better advice."

Alai Studios, for example, recently partnered with Shaping Wealth to launch Lydia, an empathetic AI assistant for the wealth management industry built on a large language model and behavioral science. Lydia will soon hit the broader market, but in building it out, Alai CEO Andrew Smith Lewis said the team spent a lot of time making sure Lydia's responses did not "drift" as the user asked more questions, a common issue with public models like ChatGPT.

Preventing drift is also crucial for advisors because the tool needs to correctly recall details of, say, the last meeting between the advisor and a client.

"You can then tell Lydia whether [the meeting] went well or didn't and correct from there. So Lydia will remember those things," Lewis said. "There's a lot of really interesting opportunities to deepen the relationship between the AI and the advisor. And ultimately, the advisor and the client." 


Chris Shuba, founder and CEO of Helios Quantitative Research, said there is inherent bias in simply selecting the type of data used to teach AI models, so it becomes all the more important to be clear about where bias can arise.

"I tend to think about the conversation of bias as both good and bad. You want to innately remove as much bad bias, [ensure] data cleanliness, things like that, as you can," he said. "And then just be communicative of purpose, with that purpose from good to bad. Know it, understand it, embrace it. But nobody's expecting perfection out of AI."
