AI-Powered Efficiency: Streamlining Operations for Bottom-Line Impact

In today's competitive landscape, businesses are under pressure to optimize operations and increase efficiency. Leveraging AI presents a powerful solution to this challenge, enabling firms to automate and streamline a multitude of operational tasks. From account applications to client onboarding, risk analysis to compliance checks and beyond, AI can transform mid- and back-office operations.

Join us during this session as we explore the practical applications of AI in operational activities. We will delve into real-world solutions that firms can implement now to drive tangible improvements in their bottom line. Discover how AI can revolutionize your workflows, freeing up valuable resources and enabling your team to focus on high-value tasks. Don't miss this opportunity to learn how AI-powered efficiency can reshape your business.

Transcription:

Craig Iskowitz (00:10):

All right. All right. Welcome to the first panel of the afternoon. Thanks for being here. My name is Craig Iskowitz. I'm the CEO of Ezra Group. We're a technology consulting firm, and this is our panel on AI and automation: AI-Powered Efficiency, Streamlining Operations for Bottom-Line Impact. I hope you've all gotten as much out of this conference as I have. I've found it fascinating to have so many experts on AI all in one place; focusing on one topic has really delivered a lot of great value, and I hope you've enjoyed it. We're going to do a lot more here with this panel, but just a couple of comments to set the stage for what we're going to be talking about. We've all seen AI permeating every aspect of our lives. It's everywhere, it's in every application, it's all over. Even the Nobel Prize in physics was just awarded to the godfather of AI.

(00:59):

So it's being shown that AI is affecting science, technology and all of our everyday lives. I jotted down a couple of notes from previous panels, from yesterday and today, that I thought were interesting and relate a bit to what we'll be talking about. One of the takeaways from other panels was that data centralization is crucial for effective AI implementation, especially for complex systems that touch a lot of other applications, like new account opening. We're going to talk a bit about that today. AI is being used to enhance client engagement and advisor efficiency. I'm sure everyone here has seen the advisor tech map that Michael Kitces and I work on. It's available on kitces.com. That map has over 550 applications on it. But just two years ago, in the client meeting category, there were only four applications, and now there are 14. So these applications are coming very fast, and entrepreneurs and startups are seeing these different areas as places they can add value for advisors and for us in general.

(02:00):

And the third point I found was that while AI adoption is growing, there's an emphasis on using it as a tool to augment human advisors rather than replace them. There was a recent Harvard Business Review article about that: AI should augment humans, not replace them. Did you hear Michael Kitces this morning? He recommends that vendors who are selling AI tools don't use the word "automate," because advisors don't want to automate themselves out of a job. They want to expedite. So use "expedite": make it more valuable for us and reduce the amount of time we're spending on the same tasks, but keep the humans involved in the process. So our panel is designed to provide real-world expertise and real-world examples about using AI in wealth management. I'm just going to quickly run through the panel; you can look them up on Google if you want more information. We have Amy Young from Microsoft, Era Jain from Zeplyn, Nick Graham from Cambridge Investment Research and Andree Mohr from Integrated Partners. So thanks everyone for being here. Lemme just kick things off. We're going to start right to left. Andree, you first. I was talking about new account opening earlier, and that's a process that you have enhanced with AI at your firm. Can you talk a little bit about how that works and what tasks you have streamlined?

Andree Mohr (03:17):

Yeah, so when we started thinking about AI, you touched on two things that were important to us as a firm: getting our data together, and then what are the tasks our team does every single day that we could do better by leveraging technology. So before we even dove into AI, we started by taking a step back and asked, what are the problems our team members are facing, that they're frequently getting upset about, that are causing frustrations in their day? And how could we leverage technology to address those in a way that felt safe for us as we started to get into AI? For us, when we took a peek at that, it really was streamlining operations. When we look at workflows and the tasks that you and your team are doing every single day, how could we leverage technology to create a better experience for your team and for your end client?

(04:13):

And so the easiest thing for us when we thought about that was, how do we not automate, I'm not going to use that word anymore, but how do we make the new account opening process more efficient? Because it was crazy to me and our team that we were still having to enter the same data into multiple systems because we are multi-custodial. So by partnering with Invent, we were able to create a tool where we can streamline the new account opening process, but also create the workflows. So we have the data to streamline that process and allow our team to set different markers along the way to personalize that experience. What Craig's talking about is what AI is going to help us do: create better experiences for our clients. And we thought, if we can leverage AI so our team has more time to make that personalized call, hey, your TOA came in, let us walk you through our client portal and help you get what you need and link in different things, they have the time to do that now, and they are creating a better experience for the end client. So we started with what's important to us and what we can automate. New account opening was great.

Craig Iskowitz (05:33):

Lemme ask you a quick follow up question.

Andree Mohr (05:34):

Yeah,

Craig Iskowitz (05:35):

So we saw Oleg from Invent on an earlier panel. What was it like working with a vendor like that, and how long did it take to implement?

Andree Mohr (05:42):

Yeah, I mean, it's been a great process for us. Like I mentioned, we are multi-custodial. We work with high-net-worth clients, so we have insurance and products all over the place. And one of our biggest problems with streamlining operations was that we couldn't easily see a full picture of our clients. By working with Oleg and his team, he was able to create a customized version of Invent to power our business (I'm not a tech person, so I can't explain it as well as he would). The power of having data to better understand our clients, data to better understand our business and where we are spending our time, has been amazing for us. So here's what I'll say: it was definitely work to put it all together. It took us the better part of five or six months, but we have a lot of accounts and things across different platforms. For somebody who is not a CTO, it was great to partner with people who could take the tech side, translate it into business sense for us, and help us build something that was personal to us and allowed us to do it the way that we want to do it.

Craig Iskowitz (06:54):

Thanks. My next question is for Era. Your product, Zeplyn, improves the client review process. Can you talk a little bit about that, and can you provide some before-and-after examples of how firms were working before and then after they implemented Zeplyn?

Era Jain (07:10):

Absolutely. So for those of you who don't know, Zeplyn streamlines the client meeting process by cutting down the time that advisors typically spend on prepping for their meetings, documenting their meetings, so note-taking, and then all the follow-up work that happens after meetings, which is updating logs in the CRM, creating tasks, following up with clients. I'll share two examples here, two very different clients that we're currently working with. So one of our partners has seasonal surge meetings. Typically they have 435 meetings in a year. They meet with their clients three times a year, and at a minimum every meeting would have four to five action items coming out of it. And because it's a very seasonal pattern, they cover a range of topics across these different meetings, all the way from tax planning, retirement, insurance, estate, identity theft, investment planning and so forth. And because they go into such a level of depth and personalization with their clients, what they were doing was having a junior advisor or a second person sit through 50% of their meetings just so they could document those meetings while the other advisor led the conversation.

(08:28):

And then after that, they were spending another hour just consolidating those notes, putting them together in a very cohesive and readable way that they could share with their clients, putting them in Redtail, creating workflows in Redtail based on the outcomes, and also making sure that the tasks were assigned to team members. With Zeplyn, they saw massive time savings. The entire process of updating notes in the CRM, creating tasks and workflows, generating client follow-ups was taken care of; it's cut down to five minutes from an hour, so that's close to eight hours on average that they were saving per week. And on top of that, they didn't need a junior advisor to sit through 50% of their meetings, so that was another four hours, so that's about 12 hours a week in time savings. It's a no-brainer to have a solution like this in meetings. But apart from time savings, another thing they observed was that the task execution rate had gone up on their clients' side, because now they were communicating these tasks very clearly right after the meeting to the clients, and there was more transparency.

(09:38):

And so that was a very big improvement that they saw. Just another example, from a slightly different firm, which is bigger. This firm has a hundred-plus advisors and is in the process of acquiring several other RIAs. So they're growing, and one of their biggest needs was to standardize the entire client review process. They use Salesforce as their CRM, and they wanted a very consistent process around case note management and follow-ups for their existing advisors as well as the advisors who are coming in. So they adopted Zeplyn. Earlier the advisors were spending, again, about an hour manually updating these case notes, but all of that was cut down to 10 minutes. The notes are more cohesive and understandable, so if the client service team, the CSAs, come in, they have a very good understanding of what the discussion was about and what new cases are to be created based on the meeting outcomes. So that was a big time saver for them and also helped them standardize the process.

Craig Iskowitz (10:44):

Thanks, Era. I want to jump over to Nick now, and we're going to stick with the client meeting category. As I mentioned, there are already 14 vendors in meetings, and we've seen AI-powered assistants being launched; Michael and I have to deal with new ones coming onto the platform every month. At Cambridge, you're working with two of these meeting applications, one called Zocks and one called Jump. Can you talk about how you've integrated them into your advisor workflow?

Nick Graham (11:09):

Absolutely. One of the things that was important to us, and it was mentioned in a couple of our other sessions here, is getting everybody comfortable with who the partners would be. We looked at a lot of them. These two stood out for us because they made our compliance team and my technical security team feel more comfortable with their approach: what was not held offshore in some recorded form, and what was still accessible as the final product that could be reviewed and archived. Both of these providers approached us with good stories that we felt we could offer to our advisors. If you know the Cambridge solution, it's one where we don't dictate the technology stack. We have more of a framework within which advisors can make personalized choices, and these two providers fit that model. The other thing that was important to us, as we represented it to them, was Technology Adoption 101: know the problem.

(12:03):

And we had actually done a lot of analysis with a variety of different types of advisors to document the story that you just shared about lost time: the inefficiencies of the actions of a meeting, the post-meeting actions and activities that go along with it. Both of these products integrate well with CRMs, and both provide output that's well suited for some of the other actions that follow. Luckily, I get to take a little bit of credit for this, and we have other associates from my team here. We had a corporate goal of saving over 40,000 hours for internal associates, for our direct business and some of our service activity, and 40,000 hours of savings for our community of advisors externally. We've already exceeded both of those goals through this specific set of partners that we've adopted. It's been quite an interesting journey to see that efficiency be realized, and also for us to continue to be a partner in it, because training on those tools, teaching people how to adopt them effectively, is now where we get to add additional value. 40,000 hours is a lot. I mean, we have just under 4,000 advisors and some thousands of firms represented in that, and it's an eclectic crowd. Cambridge has an independent market, so they're not all the same. Being able to actually track and validate those hours was important for us, to make sure we saw the greatest value for the greatest investment we made on something that would impact them.

Craig Iskowitz (13:33):

Yeah, thanks Nick. I want to apologize to Amy. You've been sitting here for 15 minutes without having a chance to speak yet. Sorry. But can you talk from a Microsoft perspective? Let's jump to Microsoft Copilot, which we've heard about from a number of other companies earlier in this conference, using Copilot for various different things. Since everyone's going to have it, it's going to be baked into every Windows machine, can you talk a little about how it's going to impact the financial services industry, and financial advisory specifically?

Amy Young (13:59):

Sure. So it's been great to hear the other perspectives for me to riff off of here, and I expect we'll continue to riff on this after you hear what I have to say. I think everybody is sold on the productivity lift from being able to automatically transcribe meetings, extract the follow-up tasks, et cetera. There's virtually no debate about that. But in the context of automation, I want to separate the baseline features of that transcription and extraction of tasks from the verticalization, the vertical specialization.

(14:58):

And this industry already struggles with a pretty onerous toggle tax, if you will. Some people call it the swivel chair effect. And with the placemat thing that you and Michael work on, we've all seen the explosion of logos on it. So, notwithstanding that I work for Microsoft, I kind of struggle with the idea that anybody would want to add another point solution. If you are already using the Microsoft desktop suite, in particular if you're using Teams for your client meetings, the fact that all of these capabilities are now built into it with our Copilot is a slam dunk in my mind. And the reason I say that is it's about the integration at the data layer, and through the UX, frankly. Similarly, if you're using Google Workspace for those types of functions, you should use their AI note-taking and extraction tools.

(16:02):

I assume they have 'em. Similarly, I think what we are starting to see is that firms that used Teams for their internal collaboration but Zoom for their client communications are starting to question the role of Zoom, because it's not integrated with email and calendars and things like that, and the files that you have. So when you start to put an AI on top of these things, it becomes more and more important for the data to be unified and structured the same way. I loved the Invent guys' story. I know a lot of advisors' eyes glazed over when they started talking about data governance, but I think that integration matters more and more. Now, I'd also say that whereas Microsoft, I think, has a really strong case in those horizontal capabilities, what we will need to do is partner with firms that bring vertical specialization.

(17:13):

So I love the part of Zocks' story yesterday where they were talking about starting to extract entities from those meeting notes. In particular, he gave the example of being able to find all the clients who have 529 plans in Illinois because there's been a change there. That kind of vertical specialization, I think, becomes extremely powerful when it's plugged into the horizontal capabilities that something like Microsoft offers. And I think that as we see the industry evolve in this area, those specialized plugins, if you will, are going to play a much bigger role.

Craig Iskowitz (18:04):

Thank you. I want to go back to Andree. I'm going to change the question a little bit. You first talked about reevaluating, or reimagining, the new account opening process, and Nick mentioned the time they'd saved. Did you quantify the time saved or headcount reallocated when you automated the new account opening process?

Andree Mohr (18:26):

Yes, we did. And I like that you went to hours; we just did it in terms of humans. It took two full-time humans out of our account opening process by leveraging technology. And what we were able to do with those people was allow them to have more impactful moments with the end clients. So they're happier, clients are happier; everyone wins. But we're not as big as Cambridge, so two people was a big deal to us.

Nick Graham (18:56):

Oh yes, a big deal.

Craig Iskowitz (18:58):

Also, in your response you mentioned that it's important to personalize the experience for clients. Are there any AI tools that you've implemented that have helped you make the experience more personalized for clients?

Andree Mohr (19:11):

Yeah, it's a great question. And I think that as we all talk about and think about AI, AI is going to be, in my personal opinion, one of the best things that has happened to our industry. It is going to allow us to be better advisors and deliver more personalized advice to our end clients. So for us, as we think about how we do that, there is the personal component that AI will never take away. So how can we make AI remind us to have some of those more personalized interactions, whether it's through the new account opening process or things like, when the kid of a client turns 14, are you reaching out to that client and saying, hey, there are some interesting opportunities for us to open some new accounts for your child to help them grow some money tax-free, because they're of working age?

(20:06):

So things like that. We've done all of these things through Invent, so that has been phenomenal. But you also talked about Jump. Jump helps us personalize the conversations that we're having in client meetings because it's so easy to go back to the notes. Everything takes time from an AI perspective, but you can say, this is how I want my notes formatted, then click and say, all right, this is the piece of the note that I care about. I can quickly go there, know what I want to relate back to, and bring that into the next meeting. So Jump has been great from that perspective, helping us have these personalized moments: hey, you mentioned in our last meeting this, that or the other thing about your personal life. And then how do we take that data, get it into the CRM, and use Jump to create the tasks that remind us: oh, you're retiring, your kid's getting married, whatever. All those little things are going to become the big things when we have time, through AI.

Amy Young (21:13):

Do you mind if I jump in?

Craig Iskowitz (21:14):

Absolutely. Go ahead.

Amy Young (21:15):

Okay. So I want to share a story. I'm an investment professional by training. I spent the first 10 years of my career as a sell-side equity analyst and trader, so I didn't have an advisor for the first, I dunno, 15 years of my professional life, until FATCA came in 2014, something like that. I'm Canadian, actually a dual US-Canadian citizen, but I've spent most of my career, my entire life, in Canada, except I was born in the States. When FATCA came up, the US government required Canadian banks to disclose any US citizens to the IRS. So I had to start filing US tax returns for the very first time, and I knew this was going to create a whole bunch of complexity. So I went looking for an advisor who said he would help me with that complexity.

Craig Iskowitz (22:15):

Can you explain what that is?

Amy Young (22:16):

What FATCA?

Craig Iskowitz (22:19):

Can you spell it?

Amy Young (22:20):

What does it stand for? It's F-A-T-C-A. And it's basically, I hope people realize this is not typical around the world: having to file taxes is typically a function of where you reside. So now that I've moved to the States, I no longer have to file taxes in Canada. But the joy that the IRS gives every American citizen is that you have to file for the rest of your life. So this FATCA law was the IRS's hammer to get Americans who were living elsewhere to file, I think was the point. It was some kind of reciprocal reporting thing.

(23:03):

Anyway, so it required me to start filing, and I knew this was going to create a lot of complexity, because for registered accounts, like the Canadian equivalent of your IRA, the tax treatment is slightly different. So anyway, I get this advisor who says he has the expertise to help me with this. Fine, we do a financial plan, no sweat. But a year later I make an outsized contribution to my Canadian RRSP, the equivalent of your IRA. Normally you can make a one-time large catch-up contribution in Canada and you don't get taxed on it; you get the full tax benefit. But that did not apply to my US taxes. So my accountant does my US tax return and tells me I owe the IRS 15 grand. And my advisor knew I was doing this. He made the contribution to my RRSP in Canada and did not flag that most basic factor in tax planning from an investment perspective. So when I look at that, that's my problem statement, right? And it's an example.

Craig Iskowitz (24:29):

How is AI going to solve this problem?

Amy Young (24:30):

Well, that's just it. In this day and age, we now have the opportunity to ingest my tax returns and create a flag in a system somewhere, so that when my advisor is making that large contribution, it flags for that advisor: you need to tell her to adjust it, or spread it out, or whatever provisions need to be made. That, in my mind, is a classic example of data-driven personalization. It's not stuff like, what's my kid's birthday? This was so much more substantive and meaningful, and technology absolutely could have averted it. In this day and age, technology can avert that problem.

Craig Iskowitz (25:24):

That's assuming, of course, that the LLM underneath was trained on Canadian tax law and US tax law and able to...

Amy Young (25:32):

So actually, I'm not saying that's an LLM use case. Extracting the data from my filing using OCR and being able to interpret that form correctly and dump it into a database, that is an LLM use case, but the rule itself is not. In addition to that,

Craig Iskowitz (25:52):

It could be. If you were to train an LLM on that tax code, it could do something like that.

Amy Young (25:56):

Yeah, maybe; I'm not that technical. But anyway, the point is we can do this now, and we couldn't do it 15 years ago. And that is the kind of thing we need to be using technology for: to enhance the quality of advice in a systematic, reliable way, much more reliable than a human.

Craig Iskowitz (26:16):

That leads me into my next question, which is for Era, about the difference between open source LLMs, like Meta's and Google's, versus closed source LLMs, like OpenAI's. As the open source LLMs gain more capabilities and catch up to the closed source ones, we're going to see a bit of a war there, and that also has an impact on task-specific AI tools, Era, like Zeplyn. So what's the benefit, and how are you staying ahead of the curve when it comes to open source versus closed source, to keep providing benefits for advisors?

Era Jain (26:51):

So far we don't work with open source LLMs, but that's on the roadmap. What we have seen, though, is that different LLMs are good at different kinds of tasks. For example, GPT-4 is pretty good with logical reasoning types of tasks, versus Claude, which does a pretty good job at summarization. We are still experimenting with Llama, so we're not sure where we're going to get with it. But what we have tried to do is, for any output that we produce in Zeplyn, break down the process of getting there into steps. Think about generating a meeting summary, right? There are multiple steps in the process. You first need to understand who the different speakers in the meeting were, so that you're able to attribute the tasks coming out of the meeting to the right person.

(27:40):

That's one task. The next task is to do classification on the type of meeting: was this a retirement planning discussion, was this an annual review, was this an intro fact-finding type of call? That determines the relevancy of the output, the summary that's coming out. And then lastly, you have the summarization task, which is actually extracting the details and organizing all of that information into the desired output. So the way we look at it, these are three different tasks and three different agents. One of them is more of a logical reasoning task, identifying speakers; the second is more around classification; the third is summarization. And different LLMs are better at each one of these. Then each will have different context feeding into it.

(28:30):

Speaker recognition requires information on who the attendees in the call are, and maybe more information on who the clients are, input from the CRM. Meeting classification requires some global context, a general understanding of wealth management topics and the types of meetings. And then finally, all of that output will feed into meeting summarization, which now understands all these little nuances about the meeting and can generate an output that's more relevant and accurate for the advisor. The reason we have built our technology this way is that there's not a single solution that can get you to that 95, 98% accuracy, right? We want advisors to spend a minimum amount of time, after they've received the meeting summary, tallying the details or making changes to the formatting or the output, because otherwise what's the point? Why are they using our solution? So we try to get to that 95, 98%, and to get there, we need to really tune each agent we're using to do its specific task with very high accuracy, and then feed in the right context at the right time. So that's how the technology works. And yeah, I don't have an answer for how well we are doing with Llama; currently that's an experiment.
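The staged, multi-agent design Era describes can be sketched as a small pipeline in which each stage is a separate callable (standing in for a prompt plus a model choice), with each stage's output becoming context for the next. This is an illustrative sketch, not Zeplyn's actual implementation; the stage names and the `LLMCall` signature are assumptions:

```python
from typing import Callable

# Each agent stands in for a (model, prompt, context) combination;
# in practice each stage might call a different LLM provider.
LLMCall = Callable[[str, str], str]  # (transcript, context) -> output

STAGE_ORDER = ["identify_speakers", "classify_meeting", "summarize"]


def process_transcript(transcript: str,
                       agents: dict[str, LLMCall]) -> dict[str, str]:
    """Run the stages in order; earlier outputs are passed as context
    to later stages, so the summarizer knows who spoke and what kind
    of meeting it was before it extracts and organizes details."""
    results: dict[str, str] = {}
    for stage in STAGE_ORDER:
        context = " | ".join(results.values())
        results[stage] = agents[stage](transcript, context)
    return results
```

With stub agents plugged in, you can see the context flowing forward: the summarization stage receives the speaker list and meeting type produced by the two earlier stages, which is exactly the dependency Era describes.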

Craig Iskowitz (29:53):

That's good. I just wanted to hear how you're handling that, because we're all seeing the changes coming so fast, and if we're going to invest in a standalone tool, we want to make sure it's going to continue to provide value. But you're saying you have a federated approach, using multiple LLMs for different tasks.

Era Jain (30:07):

Exactly. And if tomorrow the SEC comes and says you can't use OpenAI anymore, we cannot be reliant on a single LLM.
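One way to avoid reliance on any single vendor, the risk Era is pointing at, is to keep model selection behind a small registry, so a provider can be blocked per task without touching the pipeline code. A sketch under assumed names (the vendors and model identifiers are placeholders, not real products):

```python
# Hypothetical model registry: each task maps to an ordered list of
# candidate models, so any single vendor can be removed without
# changing the pipeline that consumes them.
TASK_MODELS = {
    "reasoning":     ["vendor_a/model-x", "open_source/model-y"],
    "summarization": ["vendor_b/model-z", "open_source/model-y"],
}


def pick_model(task: str, blocked_vendors: set[str]) -> str:
    """Return the first permitted model for a task, skipping any
    provider the firm is no longer allowed to use."""
    for model in TASK_MODELS[task]:
        vendor = model.split("/", 1)[0]
        if vendor not in blocked_vendors:
            return model
    raise RuntimeError(f"no permitted model for task {task!r}")
```

If a regulator or compliance team rules out one provider, `pick_model("reasoning", {"vendor_a"})` simply falls through to the open source alternative, and the rest of the pipeline never notices.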

Craig Iskowitz (30:15):

Right, and you're covered. Thank you. So over to Nick. Microsoft Copilot is another application that you have implemented at your firm. Can you talk about how it improved efficiency for your internal teams, and provide some specific examples?

Nick Graham (30:28):

Sure. There was a great presentation yesterday around where that product continues to unfold. We were an early adopter of it, and we've applied it in two different ways. One is for general operations staff: the amount of activity dealing with correspondence, running the business, doing some of our M&A work, and other activity digesting just standard office-suite-type information. Copilot has made it very effective for us to query that history, that individualized view of information. We have a strong security model inside of Cambridge, so it protects what I might be able to see versus some other associate, while letting us leverage existing repositories. All of that has had a cumulative effect back on that overall hours number we were trying to track. And we've looked at methods, or people's jobs, that have been affected by the fact that we could use a tool, find a source, or now expect a level of efficiency that could be realized that was wholly new. That's part of what we've done there.

(31:36):

The other aspect of adoption for us is development. Cambridge does a lot of integrations, as you would imagine, with a more flexible technology stack approach. So I have a large team of developers that either do integrations, partner with our custodians, or build native products for our clientele. And a lot of this always exposes us to the technology flavor of the day; name a custodian that's not playing with Snowflake, that kind of deal. As we've gotten into trying to touch those solutions and build for them, or update legacy code, that used to be a full-blown architectural approach and a dedication of resources. Now, with Copilot in the development space, we're able to do far faster code development with much more predictable quality, by using their tools to take old methods and convert them to something more current, or to use existing templates as a reference for something we'd like to have a standard against. That's helped us move and improve our overall efficiency in development. So as an overall experience, both in office operations, general individual use and team behavior, and in what we've done with development, we've received very material gains in both efficiency and cost savings, in the form of time and capacity.

Craig Iskowitz (32:56):

Did the 40,000 hours include developer time or was that only advisor time?

Nick Graham (33:00):

We didn't quantify that as much. We only held ourselves to something we could prove; that's kind of our standard, and it affects our bonuses. So we basically wanted to make sure we could point to something that changed, document something that was removed, and identify something that was specific to a role and an action. Our client experience teams, and a lot of what we've done internally, could document that; the developers, not so much. It's definitely something we've seen in terms of speed, or in the way we've had to handle new technology that arrived faster than we originally forecasted, but that's more of a feel than a quantifiable number.

Amy Young (33:39):

Looking forward, you referenced advisors being more effective by being able to go back over the history of these notes, but at a firm level, are you looking at best practices you can extract from that? Having all of these notes creates a tremendous data asset, right? You can analyze it at scale and ultimately connect it to the growth of AUM, right? Everybody has always said, man, if I could just make my bottom-quartile advisor as productive as my median or mean advisor, boy, the productivity of my business would improve dramatically. So have you been able to extract insights to connect behaviors to outcomes?

Nick Graham (34:29):

The teams that actually do that kind of consulting work inside of Cambridge would be leveraging it for that purpose, but Copilot by itself, in my personal opinion, isn't really giving you those kinds of distilled insights. You have to know what to ask and where to go look. It's a data-driven exercise to identify the patterns that have those outcomes, that have that effect. But we have the experts on staff who would be doing that. So the question is how they've started to leverage it for the goal of trying to exhibit best practices: they've got a hundred people doing it this way who seem to save this much time. That's where you would see them start to incorporate and adopt with intent. Again, I'll be the one preaching the story all the time: know the problem you're fixing. Just waving a shiny new toy around with no real purpose for it is a dangerous expense, because some of these things take time, energy, people, and some knowledge to effectively deliver and maintain. And we've tried to be very, very thoughtful about that at Cambridge, to make sure these are sustainable adoptions with good foundations, where everyone's familiar with how to use it, so that when that kind of question comes up, everyone knows what is possible or how it might be approached.

Craig Iskowitz (35:41):

Thank you, Nick. And speaking of dangerous activities, I'd like to open the floor up to questions from the audience. Anybody like to ask the esteemed panel a question that's on your mind? Now's the time. Anybody got their hand up? Everyone's in a little bit of a food coma from lunch. Yes, way in the back. I see a question. Now we have the intrepid microphone runners.

Audience Member 1 (36:07):

Oh, hi. If the LLMs have been trained on possibly copyrighted data, do you worry about that data being used to operate any part of your operations, and any sort of risks or issues coming from that?

Craig Iskowitz (36:27):

Anyone want to take that copyrighted data being used to train?

Era Jain (36:31):

So I guess I'm trying to think. By copyrighted data: yes, there was an issue with LLMs that were being used to generate images, for example; there was a lot of copyrighted data going in there. I'm just trying to think in the context of wealth management, the context of what we are doing, for example, with client meetings. One of the things we do is we are not fine-tuning any of the LLMs, so none of the meeting data that's getting captured, none of the conversations, is being used to fine-tune the model; your client PII is safe. It's just being used as context that's provided to the LLM to produce an output. So it's input going in, output coming out of the model, and that's it. Nothing is actually being used to train the AI. So that really helps de-risk it a little bit. But I don't know what you exactly meant by copyrighted data. What type of copyrighted data are you referring to? Can you give an example?

Audience Member 1 (37:38):

I think there are a whole bunch of examples where people have provided, say, books and essays and research work and whatever. And if you're typing into an LLM and, for some reason or another, that comes out, why is it there?

Amy Young (37:53):

So what I would say is, I think you're right to segment the data leakage question. Is my firm's data being used to train the model? I think that problem has been addressed. What you're talking about is: if I'm using an LLM to give me insights in my business, am I contributing to some kind of copyright infringement because I'm using a tool that was trained on third-party data without consent? Microsoft actually put in a provision for our Microsoft 365 Copilot product, about a year ago, that indemnifies our customers against that legal liability, but only where they're using our Copilot product. If they use the LLM directly to create a custom copilot, that doesn't apply. But in the context of Microsoft 365 Copilot, that is an indemnification we offer to give our customers some comfort on that point.

Nick Graham (39:02):

A thought, just to jump in. I mean, you've heard almost two days of people preaching, talking to, or referencing data governance and the architecture for LLMs. And a lot of the higher-quality solutions out there use that RAG-type approach, where you have the base language model that's able to interact with you effectively, and then you have your own ability to control the repository of source information that's unique to your firm. This is where you need to have the right people in the room and think through those challenges, because effectively the lawyer or the compliance person who comes to you is going to say, that was your responsibility. So you need to take it seriously. You need to be very thoughtful about what you actually compile and how you create that repository that's the base for your grounding. That's where grooming your data and managing the references you're going to leverage for these kinds of tools has to be a firm-managed activity. You have to invest in it; it doesn't just do this stuff for free. There's an element of training, tuning, or sculpting of that data to protect you and also give you control over it. And that's the data governance exercise.
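The RAG-type approach Nick describes can be sketched in a few lines: the firm curates its own repository of approved documents, retrieves the passages most relevant to a question, and hands only that curated context to the model. The repository contents and the naive keyword-overlap scoring below are illustrative stand-ins, not any vendor's actual retrieval method:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, repository: list[str], top_k: int = 2) -> list[str]:
    """Rank firm-managed documents by keyword overlap with the query."""
    q = tokenize(query)
    return sorted(repository, key=lambda doc: len(q & tokenize(doc)), reverse=True)[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model in firm-approved sources rather than its training data."""
    sources = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the firm-approved sources below.\n{sources}\n\nQuestion: {query}"

repository = [
    "Roth conversion policy: review eligibility with clients before year-end.",
    "RMD procedure: required minimum distributions begin at age 73.",
    "Onboarding checklist: verify identity before account funding.",
]

context = retrieve("At what age do RMDs begin?", repository, top_k=1)
prompt = build_prompt("At what age do RMDs begin?", context)
# `prompt` would now be sent to whichever LLM the firm has approved.
```

Grooming the repository (what goes in, how it is chunked and scored) is exactly the firm-managed governance activity Nick is pointing at; the model itself never changes.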

Craig Iskowitz (40:12):

Thank you. We're running out of time. I was going to do a quick lightning round, but we don't have enough time for that. So I want to throw it to Andree. I don't think you had as much talk time as everyone else had on this panel. Can you talk about maybe the negative aspect? Is there anything you've seen that AI can't do that you tried to have it do at your firm?

Andree Mohr (40:32):

I don't want to say can't do, but Nick made a good comment about taking the time to learn. Things are moving so fast with AI, there are so many new tools coming at us, and if you just jump in and start using them, that is a negative to AI, because you need to take the time to learn the tech and learn how to properly leverage it. So it can do a lot, but you have to understand the power of the technology and how you are protecting yourself. So the speed of AI, that would be a thing.

Craig Iskowitz (41:03):

So don't go too fast into AI.

Andree Mohr (41:07):

You can get yourself into trouble for sure.

Craig Iskowitz (41:08):

Would you agree, Nick?

Nick Graham (41:11):

I definitely agree. I guess I'll pull on that thread a little more and say: be intentional. This is an emerging technology. We've also heard from many of our vendors that just about every product you probably already have is now advertising some AI capability. So you'll end up with five or six versions of something that has AI in it inside your shop, and you have to have a good, knowledgeable vision for how you're going to manage what they do and how they interact, how you're going to feed that beast, and how you're going to manage the value you want to extract from it. That's a real risk. We've had that challenge quite a bit because of the exposure we have at Cambridge, trying to figure out exactly where we want certain behaviors to be the focal point of how we train our staff, and where we put the time and the money, versus the competitive, overlapping, or alternative product our vendors are positioning: use mine, not theirs. That becomes an issue of vision. What are the use cases? Know your problem for how you apply a technology.

Amy Young (42:17):

And if I could connect the dots between those two comments.

Nick Graham (42:20):

Yeah, 45 seconds.

Amy Young (42:21):

Yes, I'll take less. So Andree's comment about don't rush, and your comment about do your homework: I think the bottom line is you have to start, because the only way to learn is by doing. This isn't something where somebody can just give you the answer. We all have to develop what people have talked about as literacy and fluency around AI. The only way you do that is by rolling up your sleeves and experiencing for yourself where it works well and where it works less well. And so you have to put the guardrails around that, but it's a both/and; it's not just don't do it.

Craig Iskowitz (42:59):

And on that note, last thought,

Era Jain (43:01):

No, I was just going to say you probably don't need AI for every type of problem you're trying to solve. For example, next best action: sometimes all you need is a simple heuristic-based solution, right? You don't need AI to tell you that your client is turning 73 and you should have a discussion around RMDs. But what you might need is for AI to go back into those past conversations and see if the client mentioned something like, hey, I'm not in immediate need of living expenses, I'm doing pretty well; so maybe talk to them about a Roth IRA conversion strategy. That could be an insight AI could provide. So just try to understand what problem you're trying to solve, and it's not necessarily the case that you need an AI-based solution for it.
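Era's split can be made concrete: a deterministic rule covers the "client is turning 73, discuss RMDs" trigger with no model at all, while unstructured meeting notes are the part worth handing to an LLM. The sketch below pre-filters notes with a simple keyword check; all names and signal phrases are illustrative, not from any real product:

```python
from dataclasses import dataclass, field

RMD_AGE = 73  # age at which required minimum distributions currently begin

@dataclass
class Client:
    name: str
    age: int
    meeting_notes: list[str] = field(default_factory=list)

def heuristic_next_actions(client: Client) -> list[str]:
    """Deterministic triggers: no model needed."""
    actions = []
    if client.age >= RMD_AGE:
        actions.append("Discuss required minimum distributions")
    return actions

def notes_for_ai_review(client: Client) -> list[str]:
    """Notes worth sending to an LLM for deeper insight; a real system would
    summarize these with a model rather than stop at a keyword filter."""
    signals = ("living expenses", "roth", "conversion")
    return [note for note in client.meeting_notes
            if any(s in note.lower() for s in signals)]

client = Client("Pat", 73,
                ["Client said they are not in immediate need of living expenses."])
actions = heuristic_next_actions(client)
candidates = notes_for_ai_review(client)
```

The design point is the one Era makes: the cheap, auditable rule fires on structured data, and only the notes that pass the filter incur the cost and governance overhead of an AI call.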

Craig Iskowitz (43:44):

And on that note, thanks to the panelists for being here and sharing so much valuable information. Thank you to the audience for attending. Have a great conference.