Algorithms have long been the driving force that social media companies, retailers and others use to determine what we want and to serve us more of it. AI has the potential to take those insights to new heights, including in the wealth management industry. Client data will inform advisors and solution providers in numerous areas, including retirement goals, investing preferences, risk tolerance and much more.
In this session, we'll discuss how AI can help predict what clients want or need in their portfolios and in their financial plans, giving advisors who use it an edge in providing customized service.
Transcription:
Vijay Raghavan (00:10):
All right. Hey everyone. Thanks all for joining us. I'm really excited for our panel, Give Them What They Want, on unlocking client insights with AI. I'm Vijay Raghavan, and I'm going to be the moderator for this session. I'm the analyst at Forrester Research who covers the wealth management sector. So why don't we first take a few minutes and meet our panelists. Take maybe three minutes or so and tell us who you are, about your firm, and how it relates to advice and AI.
Carrie Nelson (00:41):
Wonderful. Thank you, Vijay. Can you hear me? Yep. I don't even know if we need mics in here. You could probably still hear me. I'm Carrie Nelson, Founder and CEO of Atlas Point. Atlas Point is a behavioral science firm focused on solving for organic growth, which is a big, hairy challenge. There are two sides to our solution. One side is a product called Wealth Wiser. Wealth Wiser is an interactive journey for prospects, and we can make that very unique at the enterprise level. And then it can feed leads to the advisors. We're gathering behavioral data, so any of our matching is done based on behavioral data and compatibility with the advisor, along with quantitative information. And then the other side is a solution for financial professionals, and it's very intuitive and supportive. There are a couple of pieces to it. One is about nurturing clients. So once you accelerate those prospects and leads that are coming in, how do you nurture that relationship ongoing?
(01:50):
And we're feeding insights to the advisor in an ongoing way so that you can really personalize those interactions. And then on the other side, we're really learning about the advisor as a business owner. So learning, do you want to grow, run, or transition your business? What are your preferences in terms of how you like to learn and receive information for your team? And all of that is teed up behind the scenes. We're using AI to serve specific insights to the advisor, but also to the prospects and clients. So it's a comprehensive system, and we'll pick a few components of that to talk about today because we only have 45 minutes and I need to share with my friends here. But that's Atlas Point and who I am. And if you don't mind, because I think this will help Brooke and Andrew too, I'd love to know who's in this room so we know who we're talking to up here. How many of you are financial professionals or advisors? Okay, I'm going to start. Alright, that's cool. Versus home office or headquarters folks? I know the Raymond James guys. Alright, so we know our audience, guys. Very good, thank you.
Brooke Juniper (03:07):
Thanks, Carrie. Hi everyone, I'm Brooke Juniper. I'm the CEO of TIFIN Sage. I have a background in investments and portfolio construction. Before I joined TIFIN earlier this year, I was with BlackRock for 17 years. I helped found their consulting business and also ran their Advisor Center, their digital set of capabilities for independent advisors. Our product, Sage, many of you may have seen on the stage today. It's a natively generative AI app that helps advisors ideate more quickly around portfolios and deliver personalized advice to clients. It's not designed to disintermediate the advisor at all. It's not managing the money. Sage is designed to be a copilot, ever present next to the advisor, helping them make more data-driven decisions at scale across their book of business. What's really powerful about Sage, I think, is that it combines both quantitative and qualitative information. So it's not really like ChatGPT for your portfolios, because it has access to a lot of data and tooling underneath it.
(04:22):
But then the beauty of generative AI is that you can train a generative AI process on how to solve a problem, which is really appropriate in wealth because all of the challenges that we face in delivering advice to clients have evergreen characteristics to them, fees and taxes and risk and returns and how things trade off in the portfolio. But every client's portfolio is different. Every financial situation is different. So you can give the AI the data and train it how to evaluate it and make those trade-offs and help the advisor make a decision. Ultimately, our goal is to power advisor practices and give advisors more personalized outputs that they can take to clients so that they can scale and deliver more and better advice to more clients.
Vijay Raghavan (05:14):
Great. Thanks Brooke. Andrew,
Andrew Smith Lewis (05:16):
I loved your demo today, by the way. Thank you. There were great demos today. The Sage demo was very cool. I liked that a lot, sort of the next generation of where AI can take us. I'm Andrew Smith Lewis, Founder and CEO of a group called Alai Studios. We're a creative AI studio focused on the concept of amplifying human brilliance. This comes out of my lifelong passion around human performance. So I don't come from the tribe of wealth management. I spent a couple of years running innovation for an alts platform, but most of my time has been spent in research in cognitive science, neuroscience and AI. So I was doing AI before it was hip and cool and before you could actually tell anybody you did AI, because they'd giggle at you if you said you did AI in the nineties. But now it's the revenge-of-the-nerds moment where people like us are just sort of like, wow, we can actually do something cool.
(06:08):
But my passion has been around understanding human learning and human memory, and utilizing that to help amplify people's performance. I worked a lot with the US military on non-lethality training for the Army and the Air Force, worked in finance, worked in pharmaceuticals, worked in many fields where we wanted to bring out the best in people, because if we could help people accelerate their learning and performance, they could get a better output on the other side of it. Now you introduce generative AI, and you stack generative AI with an understanding of data science and cognitive psychology, and you can start to do magical things. So our firm builds platforms and invests in projects across education, obviously wealth tech, insurtech and sports. In this market, we're stepping in with our first product, which is called Lydia. And Lydia is a very powerful AI in the field of wealth management.
(07:09):
It's geared at behavioral finance. Our partners are a firm called Shaping Wealth. They have expertise in behavioral finance, and their mantra is, sort of, that personal finance is more personal than it is finance. And it's only by understanding the psychology of the individual and their complex relationship with money and the people around them that you can actually help them. So the idea behind Lydia is not to be an investment partner for a financial advisor, but rather a guide to the guides, to help financial advisors navigate the complex relationships they have with their clients. I saw something that said 74% of Americans who have a financial advisor admit that they're not truthful with that financial advisor. That's kind of scary. So how do you help financial advisors bridge the gap and better connect and relate and build trust with their clients? We're trying to introduce a little AI to do that.
Vijay Raghavan (08:04):
Great, thanks Andrew. Okay, so speaking of financial advisors, Carrie, you test drove your own Atlas Point capabilities with your financial advisor. That was a pretty good story. Would you share it with the group?
Carrie Nelson (08:15):
Sure. And it actually piggybacks nicely on what Andrew was just saying. So early on, right when we first started, which is coming up on almost six years ago, we were in the behavioral assessment space, one of the first movers in that space. I'd say now we're way more comprehensive and more in sales enablement at this point. But in the behavioral assessment space, we had worked with Washington University, partnered with them, did a ton of science and research, and had been testing, learning and developing this solution for a couple of years. And I had the prototype; it's called Financial Virtues. It was finally down to just a five-minute survey. It's still our longest survey. We have 14 different types at this point. Some are just 30 seconds; the five-minute survey is the most comprehensive. And I thought, you know what?
(09:14):
I'm going to try this at home with my husband. I'm going to try it with our financial advisor. Because for the 15 years that we had been working with our financial advisor, I had told my husband, if anything ever happens to you, I'm changing advisors. Like the 80% of women who go through a divorce or the loss of a spouse and often change advisors, tending to look to their oldest child to say, what should I do, I would have fit in that bucket. And my husband was surprised. He's a CFO and he's like, no, no, our advisor, I can use his name, his name is Chad, Chad's great. My husband likes to send him a bulleted list before we meet each quarter. And we would get to the quarterly meeting and the advisor would work my husband's bulleted list.
(10:07):
And it was a great meeting. Now, what's interesting, and I didn't say this in the tee-up, is that I've spent most of my career in wealth management. The first half of my career was with Ernst and Young, focused on the lending side of the business, and then I went to Experian for six years, so data and analytics, the lending side of the business, predictive modeling, all of that. Then I moved back to EY, where I met Vijay and led wealth and asset management as a partner, before joining a client, Edward Jones, as a partner to lead firm planning and firm strategy. So I've spent most of my career in wealth management, and here I am saying my advisor is not getting my agenda on the table, and I understand this industry. So we took the survey, and what we found is that the advisor and I were polar opposites.
(11:03):
The way we think and feel about our money was completely opposite. The way we think about risk was very different and we weren't connecting. And so to his credit, he took that result and looked at the action items of what to say, what to do, what to bring to a meeting for a client like me. He made those subtle adjustments and within two meetings he had completely changed our relationship and I would not change advisors at this point. In fact, what he learned is that my husband is a distributor and I'm more into holistic planning and I wanted to consolidate assets. So I became a gift to Chad. So this is how some of these tools can work, and that's a very personal example, but we have a lot of those examples engaging women in the conversation as well as the next generation.
Vijay Raghavan (11:57):
Great, thanks Carrie. So you would say these financial virtues are the output of the survey?
Carrie Nelson (12:03):
Yeah, Financial Virtues is one of the components, and it's all based in how people think and feel about money. But part of what we show the advisor is, here's how to communicate with this particular client, and this is just based on this one survey. Here's where the client will go under stress. So if there's a market downturn, you can filter your business and say, who should I call first? It highlights your top three behavioral blind spots and what to do about them, giving the advisor very specific questions to ask or what to bring to help get a client unstuck and moving forward, making a better financial choice. It even gets into detail like what gifts this client would most value. And you can get all that just off of one of the surveys. And we have, like I said, other engagement tools as well. Some require no time at all from your client, and that way you can filter your business ongoing and serve up the right content, because we ingest content and tag it appropriately based on bias personas so that the right person is getting the right piece of content at the right time.
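To make the "right content, right person" workflow Carrie describes a bit more concrete, here is a minimal Python sketch of persona-tagged content matching. The persona labels, field names and functions are illustrative assumptions only, not Atlas Point's actual taxonomy or implementation.

```python
# Toy illustration: content is tagged by behavioral persona, and a client's
# survey-derived persona selects the matching material. Labels are hypothetical.
from dataclasses import dataclass


@dataclass
class ContentItem:
    title: str
    personas: set[str]  # behavioral personas this piece is tagged for


def recommend(content: list[ContentItem], client_persona: str, limit: int = 3):
    """Return content whose persona tags match the client's persona."""
    matches = [c for c in content if client_persona in c.personas]
    return matches[:limit]


library = [
    ContentItem("Staying invested through volatility", {"loss-averse", "security-seeker"}),
    ContentItem("Consolidating held-away assets", {"holistic-planner"}),
    ContentItem("Gifting strategies for the next generation", {"legacy-builder", "holistic-planner"}),
]

print(recommend(library, client_persona="holistic-planner"))
```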
Vijay Raghavan (13:18):
Okay, great. Makes sense. Okay, so going to Sage. So we all saw the demo, it was excellent. And so a lot of use cases were presented during the demo, but can you tell us, Brooke, a bit more about explainable AI, trusting the output? How do you think about monitoring and mitigating bias when it comes to the output that the advisor is seeing when they ask the questions?
Brooke Juniper (13:44):
Yeah, absolutely. Trust is a huge deal, and there's a huge body of research around bias and explainability in AI. We are trying to ensure that our product can build trust with advisors. So for any qualitative information that we serve up, for example, the source document is also surfaced. I don't know whether anyone uses Perplexity as a search engine. I think we've got a biased room here in terms of people who are early adopters of these innovative things, but if you haven't tried it, it's very cool. It gives you those source references, and we wanted to do something similar with Sage. So with the knowledge base that underlies Sage, if you are receiving something that's come from one of those source documents, whether it's a summary or a translation of that to your client's portfolio position, you can open the document if you want to read more, or if you're beginning to use it and you want to verify, like, hey, that doesn't sound right.
(14:52):
Let me check. I think the other thing that's really important around bias in models: we've seen a lot of large language models that have been trained on the entire content of the internet and the Library of Congress, and some of that information is potentially questionable. So it's really important with any generative AI that you're thinking about using to understand a little bit under the hood how it works: which parts are generative AI, which parts are tools closer to the more traditional tech you might use, where the AI might just be the mechanism for translating your question and calling those various services to answer it. And what is the underlying knowledge base? So for Sage, when we implement, we have a completely private knowledge base for the enterprises that we implement for. So there are going to be no articles from The Onion in the research base being surfaced to users. So I think it's really important to understand the data, understand the model, understand what's generative AI and what's supervised AI or algorithms and machine learning, and really make sure that you know how the product works. And I think that makes it a lot easier to evaluate and understand. And then you can see, okay, there will be areas where there could be bias, or there won't be bias at all because it's a calculation that's being delivered.
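For readers who want to see the source-referencing idea concretely, here is a minimal Python sketch of retrieval with citations over a private knowledge base. All names (Chunk, answer_with_sources, the llm callable) are hypothetical illustrations of the pattern Brooke describes, not Sage's actual API or implementation.

```python
# Minimal sketch: answer only from retrieved passages and return the source
# documents alongside the answer so the user can verify the claim.
from dataclasses import dataclass


@dataclass
class Chunk:
    doc_id: str   # identifier of the source document
    title: str    # human-readable title shown to the advisor
    text: str     # passage retrieved from the private knowledge base


def answer_with_sources(question: str, retrieved: list[Chunk], llm) -> dict:
    """Compose an answer strictly from the retrieved passages and attach sources."""
    context = "\n\n".join(f"[{i + 1}] {c.text}" for i, c in enumerate(retrieved))
    prompt = (
        "Answer the question using only the numbered passages below. "
        "Cite passage numbers in brackets.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    answer = llm(prompt)  # any chat-completion callable
    return {
        "answer": answer,
        "sources": [{"doc_id": c.doc_id, "title": c.title} for c in retrieved],
    }
```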
Vijay Raghavan (16:27):
Makes sense. And so for the advisor, they can always just double-click and look at any footnote for any of the outputs. Everything's footnoted. What was that search engine called again?
Brooke Juniper (16:37):
Perplexity.
Vijay Raghavan (16:38):
Okay, I'll check that.
Brooke Juniper (16:38):
So Perplexity has a similar approach where they surface the documents.
Vijay Raghavan (16:43):
Okay, great. Thanks.
Brooke Juniper (16:44):
We kind of stole their UI.
Vijay Raghavan (16:47):
Okay, Andrew, so you mentioned Lydia. During our prep calls you talked about how this AI assistant has a memory capability, how it can sort of remember the personality of the advisors it's interacting with. I thought that was really interesting. Do you want to tell us a bit more about that?
Andrew Smith Lewis (17:05):
Sure. We actually do use Onion AI and it's a very fine AI. So how many people have used Perplexity? Does anybody? It's an awesome system, right? It's very interesting, the whole idea of combining web search with sources. It's a powerful concept moving forward. We do the same thing; we actually use their APIs under the hood if a query pierces through our content layer. Memory, though, is complex. Human memory is complex, and AI memory is complex, because trying to get an AI to decide what to hang onto is a very complex topic. We have three different types of memory within Lydia. One is within-conversation memory. Part of the problem with these models is they drift. So if you're in a conversation with, pick your favorite, Claude or ChatGPT, what happens is it begins to forget the context of the conversation.
(17:56):
So it starts off strong, but over time it'll drift away and it won't recall things. And then it'll start to do everybody's favorite thing, which is hallucinate information. So if you start it off by giving it some numbers, it can react to those numbers: my client has this much in AUM. I'm not recommending you do this with ChatGPT, but if you did, a couple of turns into the conversation it forgets. So one of the things that we work on a lot is within-conversation memory, so that Lydia doesn't lose the bubble of information while she's going. The other part of what we do is longer-term memory, so that Lydia will remember the advisor and interactions with the client. So you come back and Lydia could be like, hey Brooke, how did that advice go for that client that we talked about yesterday?
(18:44):
And you can then tell Lydia it went well or it didn't, and correct from there. So Lydia will remember those things. And lastly, we have what we call the hive mind internally, which we've got to rename because it's a bad name, but Lydia actually orchestrates many different agents underneath. And so with the hive memory, the different agents that we're using can share one distinct memory for one agent, excuse me, one advisor. So if an advisor is touching multiple different agents and models, we'll roll it all up. So there are three different types of memory, and this is not a case of, we've figured it out. With all of these things it's really a work in progress. So this is very much a journey, not a complete destination.
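As a rough illustration of the three memory layers Andrew describes (in-conversation, long-term per advisor, and a shared "hive" memory pooled across agents), here is a toy Python sketch. The class and method names are assumptions for illustration, not Lydia's actual design.

```python
# Toy memory store with three layers, roughly mirroring the description above.
from collections import defaultdict


class MemoryStore:
    def __init__(self):
        self.conversation = []              # turns in the current session
        self.long_term = defaultdict(list)  # advisor_id -> remembered facts
        self.hive = defaultdict(list)       # advisor_id -> facts shared across agents

    def remember_turn(self, role: str, text: str):
        """Keep the running transcript so context isn't lost mid-conversation."""
        self.conversation.append({"role": role, "text": text})

    def remember_fact(self, advisor_id: str, fact: str, agent: str):
        """Persist a durable fact (e.g. 'anniversary dinner on Friday') and
        publish it to the shared pool so other agents can use it too."""
        self.long_term[advisor_id].append(fact)
        self.hive[advisor_id].append({"agent": agent, "fact": fact})

    def build_context(self, advisor_id: str, max_turns: int = 20) -> str:
        """Assemble what gets prepended to the next prompt."""
        facts = "\n".join(self.long_term[advisor_id][-10:])
        turns = "\n".join(f"{t['role']}: {t['text']}" for t in self.conversation[-max_turns:])
        return f"Known about this advisor:\n{facts}\n\nRecent conversation:\n{turns}"
```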
Vijay Raghavan (19:27):
Okay, makes sense. And so there are proactive prompts that say, hey Brooke, how did that meeting go, or how did this panel go? So it's proactively prompting you when you log in and you can respond?
Andrew Smith Lewis (19:40):
Yes, it can remember things like that. I think I was telling you, when we first built this many months ago, I was working with it and one day it asked me, hey, how did the anniversary dinner with your wife go? And I got these chills. We built it, so we knew what it was supposed to do, but it's just kind of wild when one of these things remembers something about you that you hadn't been thinking about. And I think if you expand that out, there are a lot of really interesting opportunities to deepen the relationship between the AI and the advisor, and ultimately the advisor and the client.
Vijay Raghavan (20:13):
Okay, great. Okay, so Carrie, at this point let's talk about client engagement. Beyond the onboarding process, you have some new partnerships you want to share with us?
Carrie Nelson (20:26):
Oh yeah, we'll take that in a different direction. Yes, I do. We just signed a contract with Equifax, so we'll be bringing more data directly to the independent space, to IBDs and RIAs, which I think is going to be very compelling. So that will feed into many of our models, but it'll also allow us to bring direct to the advisor some really cool quantitative data that the independent space otherwise hasn't had access to. So we're excited about that.
Vijay Raghavan (21:03):
So how would that work? Is it more of a distribution channel, with that proprietary data getting to advisors?
Carrie Nelson (21:08):
Yeah, so I've been working very closely with Equifax on harnessing the power of their data and some of their models to bring it direct to the independent space. I'm not going to say too much more about that, other than that there's more coming.
Vijay Raghavan (21:25):
Okay, great.
Carrie Nelson (21:27):
Yeah, this is just being announced.
(21:32):
I do want to go back to what Andrew was saying and what we're talking about, and see if we can connect the dots on these three very different organizations that we have up here. I think that a lot of the solutions and the demos that we're seeing are working to solve for an efficiency challenge. And so if we back up to the macro level and look at what's happening across the industry, we have a shrinking number of experienced advisors and a growing number of underserved clients, and how do we solve for that? I think we're all trying to take a swing at it, solving for it in different ways, and a number of these AI solutions attack the administrative tasks and try to create more efficiency so that the advisor can serve more clients. I think what we're doing with Atlas Point can be very complementary to many of those solutions, because what we're focused on is how do we create efficiency in the 60% of the time that the advisor is spending in front of their client or communicating with their client in some way. How do we make that time more efficient so that you can go deeper into your business with a real personalized approach? And so if we break it down that way, I think that would be an easy way to say how Sage and Atlas Point would be complementary in some regard, but also tackling a similar problem: we've got a fundamental problem but also a tremendous opportunity for all of us in this room.
Vijay Raghavan (23:12):
Yeah, so you mentioned underserved clients. Is that just using some of these capabilities to reach the mass affluent?
Carrie Nelson (23:21):
Yeah, that's right. So there's opportunity, like with our Wealth Wiser piece, to really have a way to engage the underserved client, and then we can better qualify and say, is this a really well-qualified lead for the advisor? But there are other ways for the advisor too; I guess you're getting at the client engagement piece of the Atlas Point platform. Beyond surveys, we have loads of content that advisors have found great success with, including seminar packs. I mean, as simple as a seminar pack, if the more seasoned advisor is bringing in the next-generation advisor, they can use some of these solutions to really position the newer advisors coming in to help qualify themselves. And this is important because as you're transitioning your business to the next-generation advisor, you want them to understand your clients as more than just the numbers, but also, who is this person?
(24:22):
We hear a lot from the more seasoned advisor that the newer advisor just doesn't do it the way I do. They don't have the same level of EQ that I have. They don't read people the way that I do. And so this is a way for the team to really come together, for everyone to come together with a common language and position that young advisor. They're going to do things differently, they are going to read people differently, but let's create some awareness around it and a common language, and make sure that the more seasoned advisor knows their client is being well taken care of, because they understand who this person is, how they like to receive information, how to communicate with that person. So that can really help. And so client engagement, it depends. We find out from the advisor, do you want to grow, run, or transition your business? That's a fundamental question. And the reason we're asking it is so we can serve up the right experience for you with the right tools and all of that.
Vijay Raghavan (25:25):
Yeah, makes sense.
Carrie Nelson (25:26):
Using AI in a lot of different ways, but it's all in support of that advisor going deeper into their business.
Brooke Juniper (25:35):
I just want to build on Carrie's point. Morgan Stanley and Oliver Wyman did some research last year about the mass affluent and the lower end of high net worth, so individuals in the US who have $500,000 to $5 million in investable assets. I don't remember the exact numbers, but something like 85% of that cohort did not have a professional financial advisor. So it's very underserved, and it's the largest pot of revenue available in the market. So I think it's a place where there's real opportunity if advisors can give the quality of advice that is necessary in a personal financial relationship of this nature at a cost that is reasonable. And this is where technology can really help provide great advice in a more cost-effective way and democratize access. That's where Sage is operating: how do we combine some of that intelligence of advice in a way that can help the advisor come to the decisions and trade-offs that they need to make more quickly for these clients, to make advice affordable in that segment of the market?
Carrie Nelson (26:54):
That's great. And I've also seen that 88% would prefer to work with an advisor if they could afford it, right? People want that trusted, steady human hand. So there's something to that: how do we go deeper, create efficiency and make it affordable?
Vijay Raghavan (27:15):
Yeah. So we have all these advisors in the room, right? And the topic of AI, our favorite topic, can obviously be intimidating. So what's your approach to getting advisors more comfortable with this technology and how it can work as a companion, like Lydia or Sage or any of the capabilities you have? Andrew?
Andrew Smith Lewis (27:36):
I think that it's about how you position the AI. I sort of look at the world very simplistically; it's kind of bifurcated: there's augmentation and there's automation. We all need a certain amount of automation in our lives. But I don't think it leads to the most positive outcome if we solely focus on the automation. First of all, it's hard to do. The promise of these systems magically making your CRM make sense and delivering you the right data at the right time, that's holy grail stuff, and it's very, very difficult to get right. And if you talk to people who are trying to implement these systems at scale, it's super tricky. I also think that the floor is rising, so fast forward a couple of years, everybody's going to have this. It's just going to be standard to have this.
(28:22):
You'll have no competitive advantage having these systems, and it's not like you're going to be free; what are you going to do with all that time? It's going to get sucked up into other things. So I think that you need to start using AI to actually improve the human when they're not with the machine. So how do you use AI to take a younger advisor and accelerate their path towards being a more effective and sticky operator for your clients? That, I think, is the key, and that's where we need to put more energy. The stuff that Carrie's talking about is really taking the science of understanding the psychology of people and their relationships with money, linking it with AI, scaling personalization and making the advisors better, stronger, faster, as opposed to, well, we're going to save you some time now. I think that advantage is going to quickly evaporate.
Vijay Raghavan (29:16):
Okay, great. Thanks Andrew. So why don't we take some questions from the audience. Anyone have any questions for our panelists? Alright, I see one; we'll bring the mic over to you.
Audience Member 1 (29:34):
Yeah, love the comments. And Andrew, your last point about massive workflow integrations, it's not going to be easy. The three of you are doing it in your areas. How are you handling the context? Is it brute force, a lot of business logic, a lot of prompts giving the context to these models, or are you doing something more advanced with reinforcement learning techniques? What's happening beneath the hood?
Andrew Smith Lewis (30:02):
Is that for me or for anyone or
Audience Member 1 (30:04):
Any of you?
Andrew Smith Lewis (30:05):
What's going on under the hood, Brooke?
Brooke Juniper (30:06):
What's going on under the hood? So at Sage we have a multiple-orchestrator model. Not to get into the weeds, but we have multiple agents and orchestrators that pause and understand the question, and then use AI to direct that question through a number of different services underneath. Some of them are LLMs themselves, so the ones that answer questions about research documents are an LLM. The tools that answer questions about portfolio context will hit an API to a standard type of tool that you would perhaps be using today. So I think this AI-plus-API is essential under the hood, and then the business logic and the financial logic need to be built in there as well. And this is where we're seeing this trend towards verticalization and depth, where we're working on some of this together, and it kind of links with the Microsoft talk this morning as well. Microsoft is creating generalized technologies, and as Charles said, you don't need to worry if you're a tech entrepreneur here; Microsoft wants you building on their stack. They're not coming to try and build financial services apps, because they know that the industry has the domain expertise. So it's really important how you build that domain expertise into your product.
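As a simplified sketch of the orchestrator pattern Brooke outlines (classify the question, then route it either to an LLM over a research knowledge base or to a deterministic portfolio API), here is a short Python example. The routing labels and tool callables are hypothetical, not Sage's real services.

```python
# Simplified "AI plus API" router: the LLM only decides which service handles
# the question; deterministic tools do the calculations.
def classify_intent(question: str, llm) -> str:
    """Use the LLM solely to pick the service that should handle the question."""
    prompt = (
        "Label the question as 'research' (needs documents) or 'portfolio' "
        f"(needs account analytics). Question: {question}\nLabel:"
    )
    return llm(prompt).strip().lower()


def answer_question(question: str, llm, research_tool, portfolio_api) -> str:
    intent = classify_intent(question, llm)
    if intent == "portfolio":
        # Deterministic calculation: no generation involved, so no model bias here.
        return portfolio_api(question)
    # Otherwise retrieve from the private research base and let the LLM summarize.
    return research_tool(question)
```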
Andrew Smith Lewis (31:42):
Yeah, we have a similar approach, probably a little less complex because we're not dealing with the type of financial data and research that you're dealing with. We've built an intelligence layer on top of the foundational models, and we're model agnostic; we have clients that prefer to use Sonnet 3.5 versus GPT-4o or whatever they want, so we build that way. In the intelligence layer there's a RAG stack, retrieval-augmented generation, which we've all heard about, the semantic search capability. And then we build an intelligence stack on top of that which has things like long-term memory, short-term memory and semantic routers. I think there are companies that are trying to stuff a lot of functionality and dependency around just the underlying LLM, which is dangerous because you can't control it, and these models do drift. Over time I've seen solutions that worked that no longer work because OpenAI changes the model that you're hitting and it's just not answering things the right way. So I think it's important, when you look at systems, to understand how much depth they have on top of just a wrapper over an LLM.
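A minimal sketch of the model-agnostic layer Andrew alludes to might look like the following: the application talks to one interface, and concrete adapters pin explicit, versioned models (for example Claude 3.5 Sonnet or GPT-4o) so a silent provider-side change is less likely to alter behavior. This assumes the current Anthropic and OpenAI Python SDKs; the adapter classes themselves are illustrative, not Alai's actual architecture.

```python
# Model-agnostic adapters: the app depends only on ChatModel, and each adapter
# pins an explicit model version to reduce surprise from provider-side drift.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class AnthropicAdapter(ChatModel):
    def __init__(self, client, model: str = "claude-3-5-sonnet-20240620"):
        self.client, self.model = client, model  # pin an explicit model version

    def complete(self, prompt: str) -> str:
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text


class OpenAIAdapter(ChatModel):
    def __init__(self, client, model: str = "gpt-4o-2024-08-06"):
        self.client, self.model = client, model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
```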
Brooke Juniper (32:52):
I would just add as well, you reminded me: your training data is so important. You cannot overstate the importance of having the right amount of appropriately structured, clean training data. With LLMs, it's so important to have a really clear idea of what you want the outcome to be, so that you can fine-tune the model to deliver exactly what you want.
Carrie Nelson (33:21):
I totally agree with that, and I'm not going to pile on too much. I think my CTO would do a much better job of answering this question than I would, but we've spent a tremendous amount of time, first from a content management standpoint, to make sure we're structured in a way to be able to absorb a lot of the content we're using. And honestly, I feel like we have stretched what counts as AI a bit in some of these conversations. In many cases we're using machine learning, so we're stretching some of the definitions.
Brooke Juniper (34:01):
Machine learning is AI; they're siblings.
Andrew Smith Lewis (34:03):
It's siblings.
Carrie Nelson (34:05):
They're siblings. So I want to be clear on that. And then as for how we're training the AI, I'm going to default to my team on that. I've got a tremendous team that I trust a great deal, from data scientists to an experienced CTO.
Audience Member 1 (34:21):
Great. I would love to know how painful the process is, but we can discuss that at the cocktail hour.
Vijay Raghavan (34:27):
Great. Great question. Any more questions? We'll get right over to you.
Audience Member 2 (34:42):
Thank you. I just have two questions. One is about the adoption of your technologies, and Andrew, you spoke a little bit about that, so you probably all have data, but maybe you have anecdotes. If you have a seven-person, seven-advisor team, hypothetically speaking, and you want your team to use it, are you finding that your clients have better adoption if the advisors are handling the software, or if the support staff is handling the software and then sort of feeding it to the advisors? And the second question, Carrie, is, if I beg you, will you have your software integrated with Advisor Junction? Oh, I'm sorry, that one is for you, Brooke. I'm a demo person and I saw the Sage demo earlier.
Andrew Smith Lewis (35:28):
I'm getting nervous. I'm happy to take your first question. I think it's critical, at the end of the day: if the advisors aren't using the solution, if it's too complex or too much of a burden, if there's too much friction for them to use and adopt it, I think you're hosed, right? So if you have to rely on a support team, especially with a small group, to get folks over the line, then from the beginning it seems to me like there's too much friction in the system for it to really succeed.
Brooke Juniper (35:59):
Yeah, I think for Sage we want to address multiple use cases. So there's a meeting prep use case, and that might be the advisor doing the meeting prep; in some practices it might be a support person or an analyst doing it. So we think less about who specifically is doing it and more about what the job to be done is, and make sure that we are answering that. In terms of integrations, we do integrations. We're working with two large platforms at the moment with their investment strategy groups, so helping their internal teams, but we would love to talk about platforms that you're interested in and where we could help. Data comes into Sage as well. So for RIAs who use Schwab, Fidelity, Pershing, Orion, and talk to us about others, we can integrate your data directly into Sage.
Carrie Nelson (37:01):
And we meet the advisor and the advisor's team where they are. We recognize that each advisor likes to learn differently, and we have different learning options. Some prefer an online training; others will never touch that. Others prefer peer or group calls, and we make those available also for the CAs. So it depends on your business whether it's going to be the advisor versus the CA, but we make that available. And then we also have different versions of personalized coaching, whether it's straight through technology or you actually want a human as part of that as well. So we really work to support the different learning styles and the different places that the advisor may be on their journey, to support that adoption.
Andrew Smith Lewis (37:50):
Carrie, you're still using humans?
Carrie Nelson (37:52):
We are, yes.
Vijay Raghavan (37:55):
Very good. Alright, I think we have time for one more question before we get some final thoughts from our panelists.
Audience Member 3 (38:02):
I guess I'll just yell. How soon before open-source LLMs catch up to what you guys are doing? Things like Google's NotebookLM, where you can upload your documents and do it yourself, or Amazon's Q, where you can do it yourself. How soon before those become so good that we don't really need these specialized, individual, standalone applications?
Carrie Nelson (38:22):
I can see that. In fact, we're trying this right now with one of our white-label offerings that we have with an enterprise. We've added to that a kind of unique feature where they can customize. Now, not everybody's going to want that piece, for compliance reasons, but we would just embed that. So we're open to it; we're going to embed it, then ingest that material and bring it back to the behavioral piece.
Audience Member 3 (38:53):
So, more complementary than competitive.
Carrie Nelson (38:55):
Totally.
Brooke Juniper (38:56):
I think all of these financial-specific apps will continue to build upon the capabilities that are there. And where you can see things advancing week over week is the format of the app and the outputs: you can get video, you can make audio clips, the ingestion rates are much faster. So I think the open-source LLMs and the tech platforms are going to get faster and cheaper, with more capability and more form factors, and the domain-specific apps will build on top of that. As I said previously, I don't see the big techs trying to get into wealth tech or healthcare or any regulated industry at that level. They want to partner, and I think that raises all boats.
Andrew Smith Lewis (39:50):
I would agree, but I would add that I think that's the most important question for companies like ours. Are you going to get gobbled up, and is your whole company just a feature on the roadmap of OpenAI? I filed patents for memory way before OpenAI launched their memory feature, and I'm sitting there with my fingers crossed that this one comes through so I can go to them and say, hey, I need you to license this wonderful thing. It's a real issue. I think that you can do a lot with enterprise ChatGPT, but it's a generalized platform, and trying to do something very bespoke and specific to creating a companion for an advisor that understands behavioral finance and all of the nuances there is very complex. I think you can get maybe 60, 70% of the way there. That'll be good enough for some people. But for the folks who really want to go the extra mile and are looking at this not as just a transaction to spit out some information, like ChatGPT loves to do, but to really enhance the capabilities of their team, you're going to want a bespoke solution.
Audience Member 3 (40:53):
So these DIY tools could become sort of an on-ramp to your product.
Andrew Smith Lewis (40:57):
Yeah,
Audience Member 3 (40:58):
Try it, and then: hey, I like it, but I want more.
Andrew Smith Lewis (40:59):
Yeah, this is a great idea. Wow, is there something out there that really does this?
Vijay Raghavan (41:06):
Great. All right, so just a few minutes left. Let's close with final thoughts. So to all of you, what do you think is the one thing needed? We're all optimists about AI and the future of wealth management. What is the one thing, Carrie, that you think we need to drive wider adoption of this cool technology, along with trust, in wealth management?
Carrie Nelson (41:27):
The one thing, because we covered a lot of things up here. So I think if we look at where we've been, very product focused, then more solutions and goals focused, where is the industry headed? I believe it's more toward behavioral focused. That's the value-add piece. I also think organic growth is going to become a very hot topic. I mean, just look at how much PE money is being put into the IBDs and RIAs. At some point, organic growth will be a cool topic, and I think we're going to be well positioned when that happens. Excellent. So if I were to pick one thing, let's get back to the basics and think about accelerating close rates, consolidating assets and bringing in the next generation, and using some of these tools to help us do it efficiently. Organic growth is key.
Vijay Raghavan (42:35):
Okay, thanks, Carrie. What about you, Brooke?
Brooke Juniper (42:38):
I'm thinking about broader adoption of AI, and what I've seen work at the big firms and small firms we've worked with is that it really comes from the top. You've got to want to do this, and I think I'm preaching to the choir here; you're all at this conference because you're interested. I would say make the leap. Think about AI as an additional safety mechanism. It might not be right all the time; as humans, we're not right all the time. But the human plus the power of a machine, with the human in the loop, can be tremendously more powerful, and that's better than either of those systems on their own. And I think the firms who've taken that view from the board, from the top level down, like Morgan Stanley, have really accelerated, because it's an imperative for them to integrate AI into their way of doing business. And they think about, how can we not? Why shouldn't we?
Vijay Raghavan (43:32):
Okay, great. Thanks Brooke. Andrew, bring us home.
Andrew Smith Lewis (43:35):
Yeah, super smart points. I was talking at lunch to Charles, Chuck Morris, who did the panel this morning. It was really interesting speaking with him. And we were talking about the fact that if AI advancement stopped, if you couldn't train models anymore and we just kind of hit the limit of what these things can do, we would never run out of applications for the current state of AI. We have barely scratched the surface. And so we always look for more technology, and that's not it. I think, really, and Brooke was touching upon this, it's about an internal shift. We need to shift our mindset when we think about AI. AI is not Google, it's not the internet as we're used to it, it's not transactional. You shouldn't be asking what questions all the time; you have to be thinking how and why. And I think the thing that's going to drive adoption is for forward-thinking folks like you in your industry to step forward, start playing with tools, experimenting even with publicly available tools. Perplexity is a great example. Start to explore and get excited about the possibilities. Start asking how and why questions, and just shift the way you look at this super tool, because it is the super tool.
Vijay Raghavan (44:46):
Excellent. Well, first of all, thank you so much, Carrie, Brooke and Andrew, for such a great conversation. Great panel. Thank you, everyone, for joining. If you have more questions, you can find the panelists at the networking break or at the cocktails later tonight. So thank you all for coming to our panel. Thanks. Thank you.
Give Them What They Want: Unlocking Client Insights with AI
November 12, 2024 3:06 PM
45:14