This keynote session brings together leaders and decision-makers from some of the largest and most innovative firms dedicated to exploring AI's potential and impact. In this forward-looking discussion, you'll gain insights into:
- AI in development: Discover the exciting projects and cutting-edge tools these leading firms are developing for future implementation.
- Immediate AI applications: Explore the AI-driven solutions and advancements that are already available and making waves in the industry today.
- Industry transformation: Understand how AI is expected to reshape businesses and the wealth management landscape in the near future.
Our expert panel will share their visions, challenges, and success stories, providing a comprehensive view of how AI is set to revolutionize wealth management.
Transcription:
Chana Schoenberger (00:10):
How about now? All right. So, briefly, to repeat the question: are the robots going to take our jobs? We're taking a quick poll. Robots taking our jobs. Yes. Okay, so we have one yes, one maybe, and two absolutely nots. All right, so why don't you start and tell us why you think yes?
Nick Reed (00:31):
I think my mic's on, so that's good. I guess I was a kind of half hand raised, so in my mind there is a bunch of work and a bunch of activity that is an obvious candidate to be completely automated and completely removed from the workforce. And I probably err on the side of that being more rather than less. What I don't know, and the reason my answer was maybe, is what that gets replaced by. So I imagine that there'll be a role for humans to play, a significant role; it'll just be different than it is today, because we won't need to care about gathering knowledge and processing information and potentially even building inference or making connections. But we need to work out what the next thing is. And that's a pretty open question for most people.
Chana Schoenberger (01:18):
Okay. And you were a maybe.
Deep Srivastav (01:20):
Yeah, to start with the same language, let's be super clear: AI will take away a lot of tasks, and it will actually do those tasks better than we do. Whether it'll take away jobs or not depends on how you're defining your jobs and how much of your job is filled with these tasks. If that's all you're doing, then those jobs will get taken away. But if you're very quick and flexible in replacing and augmenting, then they won't. But tasks, yes, it'll absolutely take them away.
Chana Schoenberger (01:48):
Okay, so that's a good thing. And you guys both said no, why not?
Jeremi Karnell (01:52):
Well, go ahead.
Kristie Edling-Day (01:53):
Thanks. Yeah, I would say I'm a categorical no, and the reason for that is the famous quote: the past is prologue. If you look back at the history of the wealth management business and of our financial advisors, you all are a very resilient bunch. Whether it was the portfolio accounting software that then became total portfolio management solutions, then the dawn of the online brokerage, and then robo-advisors. I was actually a software developer at the time the robos came out; I worked at Vanguard, and of course they took a big bet at the time on whether the robos were actually going to replace the financial advisor. Every time people have predicted the end of the financial advisor, they've been wrong. I actually totally agree with what Deep said, though. It's an adaptation, right? I'm sure there were individual advisors over the course of history who said, nope, I'm not going to change, I'm just going to do this. Those that don't evolve, those are the ones facing a threat. But for the profession, the demand for financial advisors is greater than ever and only growing.
Jeremi Karnell (03:10):
So I agree that there are probably going to be a lot of different workflows that will get automated, but our superpower is human connectivity and empathy. All right? And I feel this way not only about the financial services industry; that's the case with doctors as well. I think that customers are going to expect their advisors, their doctors, et cetera, to have in the palm of their hand access to some of the most accurate information they possibly can, to help direct them and help them make some of the most important decisions they have to make in their lives. And I don't think they're going to relegate that to a computer.
Kristie Edling-Day (03:50):
Can I riff on that for a second? Yeah, please. So one of the notes that I took as I was prepping for this was the example of radiology. Radiology is one of the medical professions that is best positioned right now to sort of be supplanted by AI. So great, right? Radiology: back office, more predictive, so AI is really good at it. AI says, hey, you have cancer. Do you want to hear that from an AI, or do you want to hear it from a doctor who has, with their own experience, fact-checked what the AI saw, and who has the empathy to know how difficult the news would be to deliver?
Chana Schoenberger (04:26):
That's right. Also, AI detection of cancer in radiology has actually not worked that well. Eventually it will, but right now it kind of doesn't.
Jeremi Karnell (04:37):
Yeah, there's been a lot of conversation about AI as copilot. So what is the role of the advisor over the next three, five years? I would say: to not be a shitty pilot. Get your pilot's license; being AI literate is probably your number one, two, and three objective. You don't have to be a data scientist. And it's probably a misnomer to say copilot; it's copilots, plural. Understand the differences between the different LLMs and the different solutions at hand. You're going to have top-of-the-funnel marketing large language models that do a very different thing than the very sophisticated decision intelligence, next-best-action advice that you're going to get out of your trading platforms. So understand the difference and be able to work fluently there. And I think I saw yesterday or the day before that Financial Planning had reported that either this year or next there's going to be an inflection point at which AI becomes the number one source where individuals get their financial advice, surpassing friends and family, which has been number one consistently.
Chana Schoenberger (05:46):
Which is great, because friends and family don't actually know anything.
Jeremi Karnell (05:48):
No, exactly. And so AI, as far as mindshare is concerned, is going to really start taking over your clients' worldview. And so in order for you to be able to actually engage and have some level of trust from your clients, again, just be literate.
Deep Srivastav (06:08):
I think, Jeremi, you touched on a very interesting point, and I just want to make sure everybody gets it: it's not one AI. It's interesting: when we talk about humans, we say this person is intelligent in one thing and that person is intelligent in something else. But when we talk about AI, it feels like this God-like entity which is everywhere and will do everything. That's not the way it'll be. There'll be my AI, there'll be your AI, there'll be my firm's AI interacting with your firm's AI. And that is where the interesting stuff is going to happen.
Nick Reed (06:36):
Even in the more cynical view, and I'll put myself in that bucket, outsize gains have been made because there's information asymmetry, that's where people make money. And so if what AI does is reduce the likelihood of information asymmetry, then it actually raises the value of relationships because they're the only sources of information asymmetry. And so actually my view would be that the role of the financial advisor and the role of the advisory part of that and the ability for those people to be able to access information that isn't mass produced, that isn't completely available and ubiquitously available to everyone, actually goes up in value because you've leveled the playing field for everything else.
Chana Schoenberger (07:17):
So it's not the end of financial advisors, it's the end of hedge funds. Okay. So tell me about, let's just go around and say what is the most interesting way your firm is using AI right now?
(07:32):
Deep, why don't you start?
Deep Srivastav (07:33):
So one big thing that we have done is, one, we have shifted away from creating these use cases which look very nice and shiny but don't deliver much value. The big shift is we are trying to develop end-to-end applications, and we are developing them in four areas. Investment management: all research and investment decisions, and how do you stay on top of those. Operations: a lot of our backend trading, and how does that get better automated through AI. Distribution: how do our sales and marketing and our products interact with financial advisors and their products, and how does that come together. And then finally, wealth management: how do we provide asset allocation advice. So all four areas, developing end-to-end applications, bringing in a bunch of different AI to make this work through.
Nick Reed (08:19):
Very cool. At Moody's, we're mostly focused on access, I guess, as much as anything else. You know, Moody's has a relatively sophisticated product that's mostly used by sophisticated capital markets and investors. And the thing that we've loved about AI is that it kind of democratizes the ability to access content. So most of the things that we are working on, the products that we are building, are about broadening access so that an audience can now interact with the value of what we have using natural language. We have a product called Research Assistant, which used to be just the purview of highly specialized asset managers, and now it's the kind of tool that would be really valuable to a wealth manager, for example.
Jeremi Karnell (09:06):
I think the best way to describe what Envestnet is doing, at least what my team is doing on the data solutions side, is that it's our journey from business intelligence to decision intelligence. And for that to matter, it's probably really important for me to put Envestnet in scope as far as scale and size are concerned. We've got about half of all financial advisors in the United States on our platform in some way, shape, or form: either on our enterprise trading platform, UNP, MoneyGuide Pro for financial planning, Tamarac for the RIAs, and then Yodlee for our consumer-permissioned open banking efforts. And we've got $6 trillion of assets under management. So you can imagine the digital exhaust that comes off of that size and scale. And we have traditionally, since a little over a decade ago, been really good at business intelligence.
(09:58):
By business intelligence, I'm talking about descriptive and diagnostic analytics: what happened and why. And we've got just great BI. We've got it around insurance and annuity data, around financial planning data, around fees, flows, and performance valuations of firms. That's all set, and in many ways that's table stakes. But in the context of machine learning and AI, there's this emergence, not necessarily a migration, of decision intelligence next to business intelligence: business intelligence being sort of rearview-looking, decision intelligence being predictive and prescriptive. So what's going to happen, and what should you do about it? That next-best-action sort of approach is where we're focused. McKinsey actually made decision intelligence a thing back in 2019, 2020, calling it one of the top seven technology trends that are going to happen. And so we leaned into that about two years ago.
(10:59):
So we made hundreds of models, all mapped to very specific categories. In the wealth category, you've got brokerage-to-managed next-best-action opportunities, APM single-stock concentration, FSP tax overlay opportunities. This year alone we introduced a machine learning model that is 75% accurate in predicting held-away assets. We've got another machine learning model that's 95% accurate in identifying money in motion. And, I'm going to get to my point here, I'm sorry, I know this is probably taking a little longer than necessary: one of the things we found out is that we're doing 20 million insights a day. Those who leverage those insights grow 45% faster than those who don't. There are only 2,000 advisors right now currently accessing those insights, yet those 2,000 advisors at the end of Q1 generated $14 billion of new net flows into Envestnet through that tooling alone.
(12:01):
And we're about to integrate that seamlessly within the trading platform, which is 120,000 advisors. And one of the things that we found out in going down this path of decision intelligence is this concept of decision fatigue, which is: great, you're generating hundreds and thousands of next-best-action insights for me, but I don't know which one I should start with. And so at the beginning of this year we stood up a knowledge graph for both the advisor as well as the client. To give you an idea of the scope of that: if I were to extract a single advisor out of that knowledge graph, it'd have 3 million rows of data. And those two knowledge graphs are currently training a machine learning model to be able to predict which of those hundreds or thousands of next best actions you should take. So that's one component.
(12:53):
The second component is using generative AI. We've been able to put our core infrastructure in place with Snowflake and set up a retrieval-augmented-generation-assisted LLM that we're training off of our insights data and those knowledge graphs that I talked about, so that advisors can interact with that data in a very different way, but also get the advantages that an LLM brings: client prep, data enrichment, which of these insights map to the FINRA regulations that are important for me as a fiduciary to my client. And then finally, on the augmented BI side, using natural language queries to generate APIs on the fly and create grids, graphs, and dashboards that have never existed in our libraries before. So basically democratize access to this data so that the clients, you folks, can say, I want to see it in this way, and do that on the fly. So those are the three things we're doing.
Deep Srivastav (13:56):
By the way, there's a quiz at the end of this session about what we are doing.
Kristie Edling-Day (14:01):
So I won't talk about advisor insights; maybe I'll pivot my answer. LPL is doing something similar with advisor insights, but one of the things that I get most jazzed about is when there's an opportunity for a win-win. We heard this morning about advisor efficiency, and Deep, you actually mentioned that a lot of the near-term opportunities are going to be task related. And so what we like to do is say, hey, with limited capital, what are ways where both LPL and our advisor clients can benefit? One of our favorite use cases right now is around compliance. In times past, prior to implementing AI, our advisor base, for whom we do the compliance activities, if they wanted to send out any kind of advertising to their clients, of course that needed human review. You have to make sure that it's compliant with regulation.
(15:02):
Well, that was done by a human at LPL. So you can imagine, first of all, the expense and the cost of that for LPL. If you're an advisor, that experience isn't so great, because you have to sit there: you were in the flow, you had an idea, you were going to do it, and then you go into a wait state and you get distracted. We all know how that feels when you're trying to get something done. With AI implemented into that compliance process, right now as we sit, 60% of the submissions from our advisors are straight-through. That makes for a much better experience for them, and it reduces cost for us. And we are implementing, kind of to the point that Jeremi mentioned, machine learning so that the model starts to get smarter. So over time there are fewer false positives; only the things that truly need to be flagged get flagged, which again makes it less expensive for us to do our jobs, and it makes it a much better experience for our advisors. And so there are several, and I won't talk about all of them, but we do try to think about the win-win.
Chana Schoenberger (16:04):
Awesome. I would love to hear more, because that was actually my next question, about compliance and AI. How are you using AI to deal with compliance? How are you getting around the concerns and the naysayers? We just heard about LPL's use case for that. Do you guys have other compliance-related things going on?
Jeremi Karnell (16:22):
I just had probably the best legal and compliance meeting ever around the gen AI approach with insights and,
Chana Schoenberger (16:31):
You had a good legal meeting.
Jeremi Karnell (16:32):
It was great. And the reason is that retrieval-augmented generation allows our data to stay within the Snowflake governance framework, so it doesn't go anywhere. So it's like, okay, our data doesn't go anywhere. And in the augmented BI case, it's the same thing: we're just sending OpenAI the natural language query, but our data doesn't go anywhere. It brings back the API and generates the result right there. And then, and this is probably one of the biggest things, like I said, we're generating 20 million insights a day based off of a hundred different models. Those models have all been individually reviewed and approved by legal and compliance over the last couple of years. So in phase one of rolling this out, we're not asking AI to do the calcs for us. We're asking AI to reassemble those 20 million insights that have already been calculated and already been approved by legal and compliance, and to deliver them in a different payload, in a different way. Maybe at some point in the future we'll be asking AI to actually do the calculations, but we've taken a step over the last couple of years to get past that hurdle with compliance, just because they've already been there with us.
Nick Reed (17:50):
It's funny. RAG as a technique is a hundred percent a bit of a savior for this kind of legal and compliance question. Again, Moody's is a regulated entity; we get sent lots of non-public information all the time.
Kristie Edling-Day (18:02):
Does everybody in the audience know what RAG is? Would you like a quick explanation?
Chana Schoenberger (18:07):
Yes, go ahead. Go ahead.
Nick Reed (18:08):
Is it the best-named acronym in the world? No, not at all. Retrieval-augmented generation.
Jeremi Karnell (18:13):
So here's what's interesting: OpenAI, Meta, they've consumed the corpus of human knowledge. They haven't consumed our data, they haven't consumed LPL's data, they haven't consumed our financial lives, they haven't consumed our health lives. That's all private data, and it's incumbent upon us to be able to take LLMs and train them safely and securely. So retrieval-augmented generation is the ability to take an LLM, and Snowflake has like six or seven that are actually native on the platform, Meta's Llama is one of them, and get the benefit of what that LLM has consumed from a public data point of view, but ground it in your data: your data being the single source of truth for the financial information that your audience is asking about, and that's safe and secure. That LLM that you're training, that you're fine-tuning, is yours and yours alone. It's not going to some open source, it's not being leveraged in any other way. And so, like you said, it is a savior, especially at the enterprise level, for anyone with private data to be able to actually start using LLMs in a safe way.
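[Editor's note: for readers unfamiliar with the pattern the panel keeps returning to, here is a minimal, illustrative sketch of a retrieval-augmented generation loop. The document store, the word-count scoring, and the sample client data below are toy placeholders, not any panelist's actual stack; a production system would use vector embeddings and a real LLM call.]

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Retrieve the most relevant private documents, then build a prompt that
# instructs the model to answer ONLY from that retrieved context.

from collections import Counter
import math

# Toy private "knowledge base" -- illustrative placeholder data.
KNOWLEDGE_BASE = [
    "Client A holds a concentrated single-stock position in ACME Corp.",
    "Client B's financial plan targets retirement at age 62.",
    "Firm policy: advisory fees are billed quarterly in advance.",
]

def score(query: str, doc: str) -> float:
    """Cosine similarity over word counts -- a stand-in for embeddings."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt; the instruction is the guardrail."""
    context = "\n".join(retrieve(query))
    return (
        "Answer ONLY from the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What is the retirement target for Client B?"))
```

The private data never leaves the retrieval step; only the assembled prompt is sent to the model, which is the governance property Jeremi and Nick both describe.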
Kristie Edling-Day (19:30):
In short, it's hallucination management. We've all heard that. At LPL we say that our AI is an eager-to-please Labrador: if it can do it, it's going to try to do it. And then you have the hallucination problem. Whereas if you are implementing a RAG, that's sort of the vernacular for it, over it, it's how you make sure that the answer that comes back from the AI is based on your data, and not something that it came up with in an attempt to please you with an answer.
Nick Reed (20:02):
We have a zero-tolerance policy, and we literally have a license on the basis that we drive markets because we produce ratings. So if you're asking an assisted copilot or an engine what the rating of something is, it will hallucinate. You can literally go to ChatGPT and ask it to rate anything in the world, and it will tell you a rating, and it will make it up. So part of the benefit of RAG engines is that you can also provide instructions about what not to do. For example, we would say: don't answer unless the answer is contained in this database. So it's not just that it slightly reduces hallucinations; it gives you the ability to turn the temperature down to zero, basically. Our preferred model is: don't answer, and then, when absolutely certain, answer. That's a much better way of being able to respond than these much more generic ChatGPT-style responses.
Chana Schoenberger (20:58):
And that works.
Nick Reed (20:59):
That works. It also protects that data and we don't want that data to be out in the public. We don't want it to be made available to train models and we have a fiduciary duty to keep it safe and secure.
Deep Srivastav (21:11):
Yeah, I mean, it's not as simple as just saying don't answer; then it can also quickly stop answering a lot of other things that you're trying to extract. So a lot of that backend work is building in that capability: when can you allow it to really think through, reason, pull the right information, and when do you want to tamp it down? But I think the key part where we started this was: yes, it does affect legal and compliance, in a very positive way. When I talked about our four use cases, which did not cover legal and compliance, and we started strategically going after them, guess which team came back and asked why they were not prioritized? It was actually the legal and compliance team, because they absolutely believe that we should be putting this in.
(21:50):
So we are now starting another track to get them into the mix, and sometimes that passion shows up. Now, the other side of this is that when you are providing advice, it's very important to also have human oversight. So when we started, for example, providing portfolio advice to enable financial advisors, or in our 401(k) plans and so on, we set up a cross-functional oversight team. You have the engineers literally talking with the investment managers and with legal and compliance about how these algorithms are finally coming together. These algorithms can be very powerful, there's a lot that can happen, but you cannot expect one team to be able to figure it all out. So that cross-functionality, and how you are really doing it, is what allows you to enable the power. But it does require humans to come back and put that structure around it.
Kristie Edling-Day (22:37):
Chana, I'll play ball, because I loved the way you phrased the question, which was "deal with" compliance. So for any of you who maybe are struggling a little bit with your compliance organizations, I have a couple of tips. I'm actually happy to say we have a great partnership with our legal and compliance team, but I think it's because of some of the actions that we've taken. The first is top-down focus: the CEO and the board have really made it clear that we are prioritizing AI as a firm. When everybody comes to the table and hears that level of executive leadership saying this is important, it's a step toward them saying, hey, we need to get on board too. Now, that's just a start. Once we gave that mandate, the second thing was bringing everybody along on the education journey, being really transparent about the challenges and the risks.
(23:39):
And inviting our compliance organization in, because unlike sometimes, where it feels like the compliance department is sort of the department of no, in this case we need them to have our backs. So involving them in the discussion and in the education, and being transparent about, hey, here are the things that we're worried about, has gone a long way. The next tip that I would share with you is a question to ask as we run into that sort of no mindset: what would need to be true for you to say yes? It starts to take the fear out of it and asks, what are the guardrails that need to be in place in order to get to yes? And the last, I would say, is framing for the risk and compliance teams the risk of inaction. It's really easy for them to look at the risk of action, because, especially in a new space like this, there is risk of action. But as fast as this is moving, there's risk of inaction too. Being the voice of the client to push back, and framing it that way, has been very helpful.
Chana Schoenberger (24:44):
Can you give an example of a guardrail that allowed them to say yes when they would've otherwise said no?
Kristie Edling-Day (24:52):
Yeah. So one of the things that we're working through with the compliance and legal team right now is meeting transcription and recording, which I'm sure some of you are familiar with and working through as well. The fear from the legal and compliance team is: hey, some of these meeting transcription tools provide a full transcription, and once it's transcribed, it is discoverable, and we don't want that, both because of the risk to our advisors and the risk to the firm. In times past, and in compliance organizations past, it could have been very easy to say, hey, the answer to that is no. With technology, you can start to have a conversation: hey, what specifically is the concern that this triggers? Oh, it's specifically the stored transcription. Well, hey, guess what, technology is such now that transcriptions can be ephemeral; they don't even have to be stored. So if the transcription is never stored, does it trigger the concern? You start to be able to have creative conversations by really working with them and trying to understand their perspective. They're trying to protect the firm. What is the specific concern, and what would need to be true for you to say yes? Often that can be solved with technology.
Nick Reed (26:09):
Just to add to your list of tips: one of the things that we found really powerful was to attempt to leverage what we already had. There's, I guess, an initial reaction that says, well, this is brand new and I don't really understand it, so therefore I have to create an entirely new way of governing this thing, rather than thinking about it the other way around: well, we already have an employee policy, we already have a communications policy, we already have a whole bunch of processes, so what are the adjustments that we need to make? Because even though there are parts of it that we don't understand, why are we pretending that we can suddenly govern something that is actually a little more fluid and flexible anyway? Let's work on the things that we have a level of control over.
Kristie Edling-Day (26:51):
I love that. When we first got started, it was super tempting to think, hey, we need to create this whole separate set of processes and oversight and governance. And we were really fortunate to have somebody in the middle of that early stage saying, wait a second, why would we reinvent a wheel when we've got a wheel that works? Let's just modify the wheel a bit.
Nick Reed (27:08):
And there were modifications; it's not that we didn't make any. Again, transcription is a great example. We had a pretty significant conversation about the implications of recording every meeting, to the point where we said, well, let's not record every meeting, because there are consequences of doing so.
Deep Srivastav (27:24):
I guess I would agree; we leveraged our existing structures as well, and we should, that's where it was. But the one change was the modification that came in, partly because of what you mentioned, Kristie: we started realizing that the risk of inaction is very high, and the speed with which these things are moving is very high. So while you're using the same structures, you cannot use the same speed. And that requires retraining people and asking, how can we do this much faster? Because you've got to deal with LLMs, privacy, and security at the same time. These things are super powerful, so when stuff happens, productivity jumps 2x, 3x; you start seeing those kinds of things happen very fast. So we are using the same structures, but I just want to add that you may have to think about having the right people, the right product managers, people who can think about it in a holistic way, if you really want to go down that path fast.
Chana Schoenberger (28:15):
Makes a ton of sense. Okay, so what do advisors need to be thinking about five or 10 years into the future to do their jobs? Just assuming that AI will be a large part of their day.
(28:27):
Start this one.
Deep Srivastav (28:32):
So first off, nobody knows, of course. Let's be super clear on that. Who knows what we'll be doing five to ten years from now? I don't know what my job is going to be two years from now with everything that's happening. So there's a lot of change that will happen. Kristie already mentioned, and I think we're a little bit on the other side of it, that the financial industry is very resilient. So we'll definitely be doing a lot. Will you be coexisting with AI and interacting with it at a very rapid pace, many times a day? Absolutely, yes. And will it be changing the value proposition, taking you significantly deeper into your clients' lives and helping them manage their goals at a much more granular level? I think absolutely, yes. I'll give you one example, because we worked on the wealth space again, where we provide that portfolio advice.
(29:21):
Many of you might be familiar with Monte Carlo simulations, I'm guessing. Quick show of hands; I guess practically everybody uses them in their financial advice. If you really wanted to map out your clients' entire lives, their journeys and the ups and downs and the possibilities in the markets and what can happen, you would have to run those Monte Carlo simulations for almost a year. Whereas things like dynamic programming and reinforcement learning can provide that level of insight in a matter of a few seconds. So that's the quantum of difference, first of all, that we are talking about. Now, if that quantum of difference comes in, how it changes the way you are interacting with your clients, how granular you're getting into their lives, and how holistic you're getting in terms of taxes and annuities and every element of that advice, it would be very different. So that, I think, is what I can tell you for the five to ten years: the nature of the work should change.
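[Editor's note: for readers who haven't seen one, here is a toy version of the Monte Carlo technique Deep references: simulate many random market paths and count how often the plan survives. Every number below, starting balance, withdrawal, return, and volatility assumptions, is illustrative only, not financial advice or any firm's actual model.]

```python
# Toy Monte Carlo simulation of a retirement portfolio.
# All parameter values are illustrative placeholders.

import random

def simulate_success_rate(
    start_balance: float = 1_000_000,
    annual_withdrawal: float = 40_000,
    years: int = 30,
    mean_return: float = 0.06,
    volatility: float = 0.12,
    n_paths: int = 10_000,
    seed: int = 42,
) -> float:
    """Fraction of random market paths in which the portfolio outlives the plan."""
    rng = random.Random(seed)  # fixed seed so results are reproducible
    successes = 0
    for _ in range(n_paths):
        balance = start_balance
        for _ in range(years):
            # Withdraw, then apply a random annual return drawn from a normal.
            balance = (balance - annual_withdrawal) * (1 + rng.gauss(mean_return, volatility))
            if balance <= 0:
                break
        if balance > 0:
            successes += 1
    return successes / n_paths

print(f"Success rate: {simulate_success_rate():.1%}")
```

Each additional planning dimension (taxes, annuities, dynamic spending) multiplies the number of paths needed, which is the scaling problem Deep says dynamic programming and reinforcement learning sidestep.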
Nick Reed (30:13):
I'm going to give you a weirdly specific and technical answer, which is signal identification and pattern recognition. That's what people are going to be doing; that's what financial advisors are going to be doing. Because I'd work on the assumption that all of the mechanical stuff will automatically be automated. That'll all be run by agents, including all of the first-phase pattern recognition and signal identification, because AI is really good at that too. But if you work on the assumption that all of that becomes unbelievably accessible and democratized, then it's going to regress to the mean, which means no one generates value out of any of that activity. So value is going to be generated by financial advisors who are really focused on patterns that haven't existed yet, signals that haven't been used before, the ability to build inference and make connections between things that haven't happened. And the best way to do that is to be really entrenched in what's happening in the industries you specialize in, the companies you're going to provide advice around, and your customers. Mostly that's going to be driven by relationships and human interactions, but everything else will just be automated.
Jeremi Karnell (31:27):
Yeah, no, I think playing off of that, Nick, already you're serving as life coaches, but I think you're going to start seeing tool sets. Let's put it this way, today is the worst day this product will ever be, and it's going to be that way from every day moving forward. It's just going to get better so much faster, and it's going to give you access to so many different things to be able to add value to your customers that I think you're going to start needing to have a product mindset as well. So yes, keep that life coach concept in place, and I think this is going to actually be a natural transition for even the younger generation as they come in, those that are tech savvy that has grown up with this from day one, that have through elementary and junior high and high school, leveraged these tools to help build different innovations and different collaborations they were having with their friends and other students. And so I think that that's just going to carry through to the profession as well.
Kristie Edling-Day (32:34):
So I'll take, I guess today is my day to take lessons from history. I'll give you guys something to think about, and then I'll make a practical application of it. So back in the early 20th century, in the 1900s in the US, a very large majority of households were involved with agriculture. And yet the United States, as they saw the impacts of manufacturing and the industrial revolution and how much technology could actually influence the productivity of farming, made kind of a revolutionary decision. They said, hey, the future of agriculture and food production is going to be augmented by manufacturing and by technology. And so they introduced legislation that made it so that children had to stay in school until eighth grade, because they wanted to make sure that they prioritized the education that was going to be required for the future.
(33:42):
Now, in the immediate term, that meant that there was one less worker on the farm or in the field in the moment. So there was a trade-off. And yet if you look at the pace of development of the United States, it gave us an edge. So I would say it's actually pretty similar. I completely agree, we don't exactly know what the future is going to be, but we can pretty well predict, hey, there's going to be a lot less of the rote manual stuff that's taking a bunch of our time. There are things that can have attributes attached to them that can be predicted, and we'll probably have tools to predict them. And so it's like, okay, I'm a financial advisor, I'm going to have a lot more time on my hands. Where do I want to spike? Is it that I want to be a greater life coach and build my relationship skills, or is it that I want to get really deep in pattern recognition and signal detection and anomaly detection? You get to choose. But I would say, knowing that the future is coming, prioritize some time to get smart and start experimenting to figure out what that spike is going to be, so that you evolve as technology evolves.
(34:50):
Yeah.
Chana Schoenberger (34:50):
Nope, that makes perfect sense. Okay, so let's talk about how AI changes the talent profile of the people who get recruited into wealth management. So right now, the people who are coming into wealth management are of different kinds. You have the people person, the one who really is great at relationships. You have the person who's really good at math and is more of a technical manager. You have a bunch of these folks, and a lot of those things are going to be done by AI, let's assume. So is it going to be harder to recruit the best people into financial advisory, and are advisors going to become basically product managers?
Deep Srivastav (35:30):
I love the product management part. I do think the product manager part is something, if you're not familiar with it, if you haven't explored that role enough in your firms and business practices, you should really be thinking about. Product management is a place which really integrates what you're doing from a technology perspective with what you're really trying to assemble from a client proposition perspective. And across the board, we have seen in our roles that those have become some of the most prominent roles, because they're not either/or. They bring the subject matter expertise and that value proposition completely in lockstep with whatever AI capabilities are being offered at different points in time. So in my mind, when we are looking for people, you don't need, for example, coders as much, because a lot of coding can get done automatically. But you do need people who can think mathematically, who can really understand the business proposition, who can really understand the implications for the clients, and who can help design what products and capabilities we need to have, what we need to bring from the market, how we build and buy. I think those are the roles. So out of all the things, if you have to take one role away, I would say think about product management.
Nick Reed (36:39):
I think the thing that probably keeps us awake the most at night is any of the roles that have an apprenticeship model. And so if wealth management in general has an element of apprenticeship, then that's going to be hard. The analogy that we use internally, a bit to your point, is to say, well, if all of this is just about automation, then we've seen this playbook a thousand times over. So take the auto manufacturing industry. Robots got created and a whole bunch of jobs just disappeared overnight, but all of the people that were really good on the factory floor became the people that helped shape and define what the robots were doing. And the people that were even better at that became the designers of the robots. So there was a group of people that won in the automation of auto manufacturing, but there was a whole bunch of people that lost.
(37:31):
And then of course generationally, there's no ability to backfill any of those people because the reason that they were good is because they'd been through some apprenticeship, they'd got their hands dirty at some point in time and they'd had an ability to be able to grow and develop their skills. And so just in general, we just don't know what to do with those people. So we have ratings analysts, they're really good because they start small and they grow their skills base and we just don't know what the future of that role looks like. If you don't get to practice, if you don't get your hands dirty, if you don't do some of the things that we know are going to get automated away.
Chana Schoenberger (38:04):
But are you now still hiring junior ratings analysts?
Nick Reed (38:08):
We are, but they're not undertaking the same kinds of tasks that would fulfill their ability to become more senior. Again, the apprenticeship model is already starting to get chipped away. They're much more focused on product management, much more focused on leveraging technology, and less on the more practical activity that they would build on over time to eventually become more senior.
Jeremi Karnell (38:38):
I agree with looking back to maybe understand where we're going, but I think one of the nuances with this that may be different from past transitions, from the agricultural revolution to the industrial revolution, is that during that time there was a massive asymmetry between those with capital and those who did not have it. Those with capital could build the factories, and those who did not would work in those factories. I think what we're going to see with AI, especially around digital products, is that the price to be able to think about something, build something quickly, get it to market and monetize it is going to basically go almost to zero. People are going to start building with no limitations around context windows, no limitations around tokens, all of it. It's going to get cheap and it's going to get easy and it's going to be fast. And this idea, my twin brother, actually, I'm going to credit him for making this observation, and he does have a breakout session tomorrow. So when you see that, it's not me.
Kristie Edling-Day (39:45):
It's an identical twin.
Jeremi Karnell (39:47):
It's an identical twin, yes. But he made this observation that for so long we've always thought of software as intellectual property that drives the value of a firm. That's going away, especially with how easy it is to build. Maybe where the new value for enterprises is going to be is how you train and fine-tune that very specific large language model that is yours, that is unique to you. And so I think it's about finding those advisors, recruiting those individuals who want to be in the wealth space, who have that type of data mindset, who understand: I can be creative, I can think about new ways of driving value, and maybe train this AI to be that much more unique or that much different to give us a competitive advantage over someone else who's not thinking about it in this way. So regardless of whether they have a high school degree or a college degree, I don't think that matters anymore either. If they have competency and literacy around the concepts of artificial intelligence and machine learning and how to best use them, I think that becomes very important.
Kristie Edling-Day (40:59):
I think as we were preparing for the panel, Chana, you said you were hoping for some debate.
(41:06):
I'm going to take the contrarian point of view on this one. So I don't think it's going to be harder to recruit new advisors. I do think the talent profile is going to shift, and I'll give you one example. LPL just recently had its largest conference of the year. It's called Focus, for any of you who've not heard of it. One of my favorite things about that is we meet with so many advisors, we get tons of feedback, tons of perspective. One of the advisors that we talked to, because AI was a hot topic there, was really, really excited about the meeting note-taking tools that are out there, and some of them are here, I noticed. He was actually really optimistic about what that was going to do for talent attraction in the typical apprentice career path, because he said, hey, the way we used to train new advisors, they would come in, they would sit there in the meeting and take notes.
(42:05):
I had to pay them. It was a smaller salary, of course, but I had to pay them to do that. That was expensive for me. And it was a really passive, learn-exactly-how-I-do-my-job sort of way for them to learn. He was like, with the AI note-taking tool, I don't have to pay for that advisor to sit there and take notes for me and learn passively. I can give them smaller clients, starter clients. I can teach them how to start to hunt and farm on their own. And I think in the future, I dunno if any of you have seen, there are YouTube videos out there now, I think they were put out by Khan Academy, using GPT-4o, and the coaching that was coming out of this amazing LLM, and of course it was conversational. Imagine if we ultimately have a true advisor coach that is actually listening in on the conversations of these new advisors and providing coaching and training and feedback over time. I think that the talent profile shifts. I actually think that you can recruit a greater diversity of backgrounds, liberal arts backgrounds, people who are great with people, and teach them the fundamentals of being a financial advisor, but take advantage of the things that you can't teach, which is the EQ and the empathy in the relationship. So
Chana Schoenberger (43:27):
You could probably find a way to use AI to find those people, right? There's got to be an application for that.
Kristie Edling-Day (43:35):
That's all fraught with the bias and stuff we're still figuring out. But yes,
Nick Reed (43:39):
There's that.
Chana Schoenberger (43:40):
And now we're back to the compliance question, right? Okay. So what are the risks and responsible use of AI for financial advisors? So if you're a frontline advisor, what should you be doing at this point? You guys all represent big firms, the folks out there in the trenches.
Deep Srivastav (44:01):
So I was talking to one of the financial advisors at lunch today, and I won't take their name so that I don't misrepresent anyone here, but we were talking about this. He said, you're a financial advisor, you're talking to a client, and you say, how's the family? Right off the bat, the client would know that this person really doesn't know me. Family is so generic; you completely don't remember what had happened in that one short meeting that you had. Do you really remember the client and all the nuances, the name of the cat, and what that person's priorities were? If you are at a really scaled-up firm, it'll be really tough to stay on top of all of that. But if you've got AI, which is helping you, assisting, curating, getting it right, then you would be able to know the client very deeply and have a real conversation.
(44:51):
So, a very good example, a very small example. But the fact is that AI can enable this with current technologies, not to what Jeremi was saying about what the future technologies are going to be, but with current technologies, if you start bringing them together, those discussions will go significantly better. And then if the client calls back, or some other client calls back and says, hey, what did you do to my portfolio? Are you able to really give the right answer of what you had done, how things went, where they stand today, and what the implications for the future are? There is a lot more that can be packed into every single conversation, and those things can have implications for how long your relationships last. Those can be done with your technologies as they stand today. But for that, you may have to reconfigure your teams a bit. You may have to bring in some of that talent. You have to think about those product managers and how you're configuring the AI so that it is talking your language and genuinely helping you out. But you don't need to be a big firm. Back to Jeremi's point, the costs are going down, everything is going down, the efficiencies will keep going up. You should be able to have way deeper conversations, at the very least.
Nick Reed (45:57):
I think our approach to responsible use is pretty similar to what Kristie mentioned, mostly based in education more than anything else. It's incumbent on our organization to make sure that all of our people really deeply understand, more than any other topic, how these things work as best we can, what they're really good at, what they're not really good at, and to use that base of education to allow people to act responsibly, because there are lots of unknowns. So rather than trying to get people to predict or understand exactly how an LLM is going to produce an answer, let's focus a little more on the way they were created, what they were created for, and how they work at a more technical level, so that we can be informed about how to responsibly use them. And so all of our responsible use policies are grounded in education on this topic, and then connected to our existing employee policies and existing employee propositions.
Jeremi Karnell (47:00):
So, everything that they said. But then on top of that, I just think, on the most basic level, because these tools are so accessible and so easy, just be conscious of the data that you share with them. I mean, we've seen just with texting alone, and the need to preserve data for books and records and things of that nature, just how sensitive regulators can be. And the idea that you could be uploading client-level data, financial-level data into a GPT because it made your job easier and quicker, but may have trained a model on private financial data, is a huge risk. And again, it goes back to education. It goes back to literacy around AI and understanding how these tools should be used and when. So obviously, treat data very carefully around these tools.
Kristie Edling-Day (48:02):
Some of the worst air disasters in history have come from over-reliance on the copilot in the plane. And so if there's one thing for advisors to keep in mind, it's: fly the plane. Fly the plane. You're the pilot, even in cases where the AI is providing an answer. The autopilot in some of the aviation disasters that I mentioned had erroneous telemetry; one of the sensors went bad. There was Air France 447 some years ago. The pilots were so focused on trying to figure out what was going on that they never stopped to think, hey, I should actually put my hands on the controls and fly the plane. They assumed that the autopilot had it, when it was actually the autopilot, in that analogy, that had the issue. So never check your brain at the door. Say, hey, I'm in charge, I'm accountable. Because to your point on putting information into the GPT because it makes my job easier: they hallucinate too. We had the conversation about RAG, and the regulators will come, especially when it relates to investment advice, and say, how did you get to this? Can you demonstrate to me that you went through the due diligence? And if your answer is, well, the AI told me to, then go back to hands on the controls. You, as the advisor, fly the plane.
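The due-diligence point, being able to show a regulator how an AI-assisted answer was reached and which human signed off, could be supported by something as simple as an append-only audit log. This is a hypothetical sketch, not any firm's actual compliance tooling; the field names and helper are assumptions for illustration.

```python
import hashlib
import json
import time

def log_ai_recommendation(log, question, ai_answer, sources, reviewer):
    """Append an auditable record of an AI-assisted recommendation,
    including the sources the answer was grounded in and the human
    reviewer who signed off (hypothetical schema)."""
    entry = {
        "timestamp": time.time(),
        "question": question,
        "ai_answer": ai_answer,
        "sources": sources,          # e.g. document IDs retrieved by a RAG step
        "human_reviewer": reviewer,  # the accountable advisor: hands on the controls
    }
    # A digest of the record makes after-the-fact tampering detectable.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry
```

The design choice is simply that the human reviewer is a required field: no AI output enters the record without an accountable person attached.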
Chana Schoenberger (49:33):
And then you run into your compliance problem again, right? Because it's the same thing as all the WhatsApp fines. Clients want to communicate on WhatsApp, advisors are perfectly happy to do that, it's just not compliant. We all use it in our personal lives; you're not allowed to talk to your clients that way, the regulators say.
Nick Reed (49:51):
Sometimes that goes back to the basic guardrails that you put in place, which give you the ability to act with confidence, the ability to act responsibly. Sometimes those guardrails are really technical, like you said: making sure the data isn't leaking, making sure that you have onboarded models into your own environment. So part of the way that we think about it is this idea that good brakes let you go fast. Seat belts let you go fast.
Chana Schoenberger (50:22):
Helmets.
Nick Reed (50:23):
And so turning that conversation around slightly to say, if you can make sure that you're identifying what the seat belts and brakes are, that will give you an ability to now be able to leverage the tools and technologies at a much greater rate, push even further than you might normally have. You've got some guardrails.
Deep Srivastav (50:40):
So that goes almost full circle to where we started. Will AI take our jobs away? No, because AI never gets fired. It'll always be humans who get fired. So you'll always need humans to do the jobs.
Chana Schoenberger (50:52):
Great. At this point, we're going to take some questions from the audience. We have a mic that will go around, so raise your hand and one of our folks will bring it to you. Yes,
Audience Member 1 (51:15):
Thanks. So it's occurred to me for some time, as you've talked about how the tasks that advisors do will change and the skills that they'll need to bring to the role will change, and with the comment that you've just made about these guardrails, for lack of a better word, the seatbelts, whatever, it strikes me that there's a shift in responsibility that puts a lot more burden on the firm and, by extension, the service providers to the firm. All of you make platforms, as service providers to advisory firms in some respect. How do you think about that? Because it used to be that we hired smart, certified people and we trusted their judgment and spot-checked them occasionally. We broadly gave 'em the tools, trained them, made sure they were certified, and set them off to do their thing. But now that more of the process is digitized, they rely on the technology more. So in the process of scaling, we're concentrating responsibility: more of the burden goes to the platform, the platform being the firm and the tech that's on it. How does that hypothesis land with you, and how do you think about those responsibilities?
Deep Srivastav (52:50):
I think that's a great question. I'm literally doing my PhD on this, by the way, just so you know, so we can have a separate conversation on that. But the key part is that yes, there will be massive interconnections as we do this, because of technology. You see that, for example, in the supply chain world: a shock in one part of the world has implications across the whole value chain, and all of a sudden you're short on one thing, or something of that nature, which happened because of that interconnectedness. In a digital world, there will be much deeper interconnections, and those deeper interconnections will lead to a lot of these amplified problems, because a problem on one side of the value chain could have a huge implication somewhere downstream. So back to your point, how does that change the nature of the responsibility?
(53:33):
I think that's where the shift in the roles is going to happen. The shift is not going to be so much about what am I doing as an advisor, what am I doing as an investment manager. My job goes down on the day-to-day tasks that I had been doing, but my job significantly goes up on how much I am able to see of this end-to-end view. And you would have specific roles which really have a deeper end-to-end view of what's going on and what is being codified. And you would need oversight teams on your side interacting with oversight teams on our side, talking about some of these details. So whenever we do, for example, a digital advisory kind of connection with a client, it's a very long, drawn-out process. It takes months to figure out every implication of that. But once you do it, of course, the system starts to come together and interactions become better. Still, it's a massive question of how you make that shift. So it'll not go away, is my short answer.
Kristie Edling-Day (54:28):
Yeah. I mean, I'll say we take it very seriously. And so with that in mind, please forgive the larger firms if we seem slow to act or to roll out certain things that you all see out there that you would be really excited about, because we're wading through a lot of that. The onus is on us. We want to protect ourselves, we want to protect you. And so with that in mind, there are things that at least I'll speak from my own perspective and the team can speak for theirs, but we will be slow at LPL to provide tools that will provide automated financial investment advice or that provide access to it for the reason that it's like, Hey, that creates a lot of burden on us and risk for the advisor for taking it before we're really sure. And so a lot of the solutions that we're focused on right now are driving your efficiency and your productivity and things that are going to keep everybody out of trouble. Especially because the regulatory landscape is so fluid right now.
Jeremi Karnell (55:33):
And we've seen this again; past is prologue. We saw that with the internet, we saw that with social media. We've seen this in the past, where huge companies, large enterprises, were slow to adopt tools or resourcing or approaches and policies, things that would help enable you to do your jobs more easily. And a lot of people would just assemble what those solutions would look like on their own, in the absence of anything they could depend on. And so it did put a lot of onus on the individual, and it put a lot of onus on the firm to make sure that they were doing things within the regulatory environment that they lived in. But all of that changed over time, right? As enterprises caught up and provided these solutions, I think that ultimately will take the onus, not necessarily 100% off, but I think how things look and feel today are going to be vastly different even in the next year.
Nick Reed (56:40):
Part of the reason why we're advocating for using what you've already got, or adapting what you've already got, is that the leakage into the system itself, and the requirement for the system, the company, whatever it is, to respond, is already happening. I'll give you an anecdote. We stood up a Gen AI approval and governance council, so that all the things we were building needed to be signed off before we could give them to our employees. Completely absent was the fact that every single piece of SaaS software that we had had already implemented some level of Gen AI solution that was being used by our employees. And we had no oversight over it, and no ability to control whether it got turned on or turned off, because it was part of a SaaS solution that we'd already subscribed to. And so what we needed to do was lean into the fact that we already had responsible use policies, we already had employee programs in place, and we needed to make some adjustments and adaptations. But this idea that you'll be able to control it, or segment those things, is probably not true. And so the requirement for the organization to play more of a role in the tools and applications that are made available to advisors, whatever they might be, is already happening.
Chana Schoenberger (57:58):
Great. Well, we are out of time. I want to thank my panelists for joining me. This was super interesting.
AI in Action: Current Tools and Future Trends in Wealth Management
November 12, 2024 3:16 PM
58:12