Gaining AI Superpowers: How AI is Impacting The World and The Way We Work

Artificial intelligence is reshaping nearly everything we do. From emerging technologies such as robotics to generative AI like ChatGPT, it seems clear that AI is going to change the world. The question is, how? And what does this inevitability mean for you and your business? In this session, expert AI innovators will discuss the evolution of AI, what we can learn from other industries, and how the true potential of AI lies in its ability to empower people in the work they do.


Transcription:

Presenter (00:10):

Please welcome to your stage Editor in Chief, Brian Wallheimer.

Brian Wallheimer (00:20):

I can't wait to tell my children someone clapped for me. They won't believe it. Welcome. As they said, I'm Brian Wallheimer, Editor in Chief of Financial Planning. I want to welcome everyone here today. It's great to see such a full house, a big crowd, people excited to learn more about AI and how it's affecting our industry. This is really the first conference-level deep dive into AI and wealth management. At Financial Planning, we're working really hard to make sense of AI, how it's affecting the industry, how it's affecting advisors and their businesses. So this is a great fit for us, and we're glad you could make it out today. I want to clear up one thing. We talk about AI as being this new revolutionary technology, and I think most of us probably know that that's not exactly true. Charles is nodding.

(01:08):

He's like, yeah, over here. AI has been around since the 1950s, at least as a concept, right? But as we know, people like to joke that there's more computing power in the first iPhone than in the computers that sent the Apollo missions to the moon. The ability to compute and store data kept everything at the concept level for such a long time. But then you jump to something like 1997, when IBM was able to build a computer that beat Garry Kasparov in chess. From there, things happened very, very, very quickly. And, by the way, in 1997 I was in high school. I just like to say that. But in 1997, we were able to create tools that could make decisions at a level that beat some of the brightest minds in the world.

(02:02):

And now here we are, almost a quarter of a century later. More than a quarter of a century later, if I'm doing the math right; we journalists aren't known for that. But Moore's Law says that computing power basically doubles every two years. So when ChatGPT hit the scene and everyone thought, well, this is revolutionary, this is something I've never seen, that was two years ago. We're double that now, and we're going to keep going as the years pass. So this is technology that is not moving at the pace we have normally seen technology move. This is moving very, very quickly.
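Brian's doubling arithmetic can be sketched in a few lines. The two-year doubling period is the classic Moore's Law rule of thumb he cites; the dates and resulting factors are purely illustrative:

```python
# Moore's Law rule of thumb: computing power doubles roughly every
# two years. Illustrative arithmetic only, based on the talk.

def doubling_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years` at one doubling per `doubling_period`."""
    return 2 ** (years / doubling_period)

# ChatGPT launched about two years before this talk: one doubling since.
print(doubling_factor(2))          # 2.0

# Deep Blue (1997) to 2024 is ~27 years, so about 13.5 doublings.
print(round(doubling_factor(27)))  # roughly 11585x
```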

(02:39):

We have now a couple of inflection points in wealth management. We have to separate the wheat from the chaff. Everybody wants to create an AI tool, an AI solution, and some of them will be amazing and some of them will not. And we on the journalism side, and those of you on the business side, have to make decisions about what works for your business and how it's going to help you scale, help you attract clients, market, and all those sorts of things. And we've got to get up to speed. It's not news that there's a significant amount of money moving to younger generations in the coming years. We call it the Great Wealth Transfer. And a lot of the clients who were happy to have a sit-down meeting and get a quarterly statement in the mail or an email, they're going to be gone one day, unfortunately.

(03:25):

And the people they pass their money down to are expecting their wealth managers to live in the same type of technological world that they live in on a day-to-day basis. The key, of course, is doing it right: protecting data, staying compliant, investing in the right tools that integrate with our workflows and move the needle for wealth management. So that's why we're here today. And I first of all want to say thank you all for being here, because what you're doing is exploring, learning, and putting in the effort to get it right. We're in the early stages. We know that. A lot of people have said to us, it's kind of early for an all-AI conference, isn't it? It's not, because a year from now, how much further will everything be? So thank you all for being here and for meeting with each other, connecting with each other.

(04:13):

Please say hi to me as we go. Before I step down and have Charles come to the stage, I would be remiss if I didn't thank a slew of people here this morning. First of all, we have an advisory council of around 20 people. I'm not going to name all of them; there's a ton, but they're all throughout here, and they're some of the biggest movers and shakers in the wealth tech space. Their fingerprints, knowledge, and expertise are on pretty much every session, panel, and event you're going to attend here in the next two days. A big part of that council was Suzanne Siracuse, who many of you know; she consulted on this event, and her knowledge, connections, and time were invaluable. So thank you, Suzanne. Our conference team, from Melissa Mills, who is tireless in her event planning (you will see her; she'll be the one moving the fastest around these rooms throughout the next two days) to our marketing and sales teams, studio and creative teams, everyone involved in setup and site coordination, and anyone else I might've missed. My editorial team in particular: Rachel Witkowski, who's up in the front row here. She's our tech reporter, who has done an amazing job of covering AI in the wealth space for the last, how long have you been here, Rachel?

(05:16):

Eight months, just killing it. And Kat Auer, right next to her, our Managing Editor, who holds everything together for our team on a day-to-day basis. They've both been instrumental in planning this entire event. The team on the ground here, the folks in the back running cameras, lights, audio, everything: they're essential, and we appreciate everything that they do. And again, one last time, thank all of you. I hope you leave here inspired, making new connections and pushing boundaries. So with that, I'm going to get us started. We're going to kick off with a big-picture look at AI in the financial sector. Charles Morris spent time as a lead data scientist at Putnam Investments before he went to Microsoft, where he's now the chief data scientist for financial services. That background gives him some great insight into the challenges and opportunities for AI in wealth management. In his current role, he's involved in projects with the banking, capital markets, and insurance sectors, helping execs in those areas prioritize and develop initiatives. When it comes to a strategic AI project in the industry, Charles is likely involved in that project or he knows the people who are. He is advising people who are making some of the biggest decisions in AI in the financial sector today. So I'm excited to hear what he has to say. Join me in welcoming Charles Morris to the stage.

Charles Morris (06:43):

Alright. Hey everyone. Is this on? Okay, cool. Let me find the clicker. Okay, here we go. So as Brian said, my name's Charles Morris. I'm from Microsoft, and I'm in the fortunate position of being exclusively dedicated to financial services. So I'm not just an AI guy; I actually work across financial services, and that includes everything from banking to hedge funds to insurance. But in wealth advisory especially, I've been seeing a lot, because this is an area where we're going to see major changes. And I want to echo a lot of what Brian said. I think he nailed a lot of the key points: this is a general-purpose technology. It's not, hey, what are the 10 use cases we're going to use AI for in the next couple of years? You can use it anywhere.

(07:33):

And so, as we think about every individual, every process, and every industry being affected by this, it's not a question of if AI is going to be involved; it's about how we prioritize where we're bringing AI and how we're bringing it to our users, our customers, our teams, our advisors. For a long time we've thought of AI as sort of synonymous with automation, and I want to challenge you to go beyond that framing for this wave of AI. With this wave, Microsoft loves the term copilot. You've probably seen we've launched a million different Copilot products, but that naming is intentional. It's really intended to highlight this idea of co-intelligence: AI is not a replacement for what you're doing, it's an assistant to what you're doing.

(08:28):

It's going to help you do your job more effectively. It's going to collaborate in the flow of your work and help you do the things you need to do, because there are lots of things we do in our jobs that are cognitively demanding but not necessarily all that cognitively challenging, and that don't require judgment. AI can come in and help us with a lot of those things and save our time, attention, and energy for the things that we're actually really good at as people and that AI is not so good at. So we want to put that human being at the center. That's the most important thing about this wave of AI, and it's different from what came before, where it was sort of, I know logically there's some AI in the background piping me sales leads or new client leads.

(09:13):

And logically I understand that, but this is going to be much more tactile. You're going to feel this. It's going to feel like you are actually collaborating with an AI that is designed to make you better at your job. The scale of this should not be understated. Again, this is not dissimilar from the mobile moment, where we're going to have every app be reinvented with AI, as you're probably already starting to see. Now, some of that, as Brian said, is growing pains. This is going to happen in fits and starts. People are going to throw some really bad features out there and see if they stick. But the overall trend is that people are going to find the useful things, the things in this technology that make people's lives easier and their jobs better. And so we're going to build new apps that we can't imagine now.

(10:00):

And I liken this back to the mobile era, where when mobile first started, it was really websites on phones. A mobile app was sort of: take our website and the top five things people do, and put it on a phone. But now mobile is much more than that. We changed the way that we meet people, network, date, buy groceries, get taxis. That's all shifted, and we didn't know what it was going to look like when we started. AI is going to be like that, but on a much more compressed timescale. And if you think about it, we've only been in this wave for about two years, and there's just been incredible innovation in that time. What's really underlying this is the idea that in current apps, you have to define every possible thing that a user can possibly do.

(10:44):

So you have these happy paths where a user can do X, Y, Z. You have all these complicated nested menus where you say, okay, they need to do this here and then this here. And if users start to fall outside of what you've designed them to be able to do in an app, they're kind of stuck. With gen AI, because we have this natural language interface, as well as models that can make plans and reason and dynamically call tools, you're going to see much more dynamically user-driven interfaces over the next couple of years. I'm going to show you some of the ways that that's starting to evolve already, but again, we can't even fully imagine what the future UIs are going to be. But things like generative UIs that adapt based on what the user is doing in the moment are going to become commonplace.
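One way to picture the shift from fixed nested menus to user-driven interfaces is a tiny routing sketch. The keyword matching below stands in for a real model's planning and tool selection, and every action name here is hypothetical:

```python
# Sketch: instead of predefining every menu path, free-form requests
# are routed to actions. Real systems use an LLM's intent
# classification; simple keyword matching stands in for it here.

ACTIONS = {
    "transfer": lambda: "opening transfer workflow",
    "statement": lambda: "rendering statement view",
    "meeting": lambda: "scheduling client meeting",
}

def route(request: str) -> str:
    """Map a natural-language request to the first matching action."""
    for keyword, action in ACTIONS.items():
        if keyword in request.lower():
            return action()
    return "falling back to general chat"

print(route("Can I see my latest statement?"))   # rendering statement view
print(route("Book a meeting with my advisor"))   # scheduling client meeting
print(route("What's the weather?"))              # falling back to general chat
```

The point of the sketch is structural: the user's request, not a designer's menu tree, decides which part of the interface appears.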

(11:31):

And so if we think about it like this, there are three pieces to an AI application. There's the interface, and we're seeing the interface evolve beyond basic chatbots, where it's just a wall of text that never ends. Yeah, that's useful for some things, but now we're starting to see the introduction of things like canvases and workspaces and actually generating components of a UI. So that interface is going to be really important. We're also seeing a lot of work on how you go from a general-purpose AI application, something like Microsoft Copilot that's designed to help you go through your emails and figure out your documents, to domain-specific copilots, like an advisor copilot that helps you with client preparation or post-meeting follow-ups. That's all being worked out. Same thing with memory and context.

(12:24):

In the beginning, we were just doing a lot of stuff with just models and very basic data connections. As this matures, we're going to see an order of magnitude more value. But even with the very simple things we've done so far, there's already a ton of value. This study is actually pretty old, and some new studies have come out since then (old being like a year, but new things have come out), and the ROI on this is already very positive. In the Total Economic Impact (TEI) study, the median return was $3.50 for every dollar invested, and the top organizations were closer to $8, and that was at the very beginning. So this study was done a while ago, and we're seeing that really accelerate, and we're seeing adoption of AI accelerate as well. And I see that a lot more in the enterprise space, because in consumer it's interesting: I as a consumer don't need AI that much.
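The TEI figures translate into ROI percentages with standard arithmetic. The return-per-dollar numbers are the ones cited on stage; the ROI formula (net gain over cost) is the conventional definition, not something from the study itself:

```python
# Convert "dollars returned per dollar invested" into a conventional
# ROI percentage: ROI = (return - cost) / cost. Cited figures only.

def roi(return_per_dollar: float) -> float:
    cost = 1.0
    return (return_per_dollar - cost) / cost

print(f"{roi(3.50):.0%}")  # median organizations: 250% ROI
print(f"{roi(8.00):.0%}")  # top organizations: 700% ROI
```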

(13:21):

In my day-to-day life, I don't really care about productivity in my personal life. Some people do; I'm not that hung up on it. But in my job, I really need it. I have a task, I want to be better at my job, I want to spend less energy and less attention doing that. So I'm more motivated, and I also have more discrete problems that I can solve in my job. And so that's where we're seeing, in the enterprise, this sort of iceberg: what you see visibly in the market versus the things my customers are actually starting to do behind the scenes. Again, if you take Microsoft Copilot as an example, we're seeing incredible adoption of this technology. There's a change management component of this as well. We can't just hand people Copilot and say, figure it out. We find that's not the best way to do it.

(14:06):

There is a way of getting users to learn how to use this technology, but at this point, for me, if you took Copilot away, it would materially impede my ability to do work. I just take for granted now things like being able to ask questions about meetings that happened, getting summaries of things that happened while I was away, and rewriting and redrafting an email based on the tone and message I'm trying to convey. So that habit formation is already starting, and it's only going to continue. So I'd suggest that if you haven't started finding ways to incorporate AI into your workflows, you start, because it's going to help you as better tools arrive that are more targeted to the specific things you want to do.

(14:49):

You'll come to understand where and how this technology should and can be used, and what it's useful for and what it's not. Like I said, we started very crudely, and it didn't seem crude at the time; it seemed groundbreaking at the time. There's this hype wave in AI where the thing comes out, people's minds are blown, and then they realize all the things it's bad at and can't do, and then we get to the next wave. And so we keep cycling in this way between elation and disappointment. But if we actually step back, we see the progress being made in material ways when we start to measure the impact on what people are actually doing. That's only going to accelerate, because the models keep getting better, faster, and cheaper. Brian mentioned Moore's Law; with GPUs, it's actually 4x every two years, so it's double Moore's Law.

(15:39):

And what that means is that if we think about scaling compute for AI models, there's this cycle of training a very big, powerful model, and then, once you have that base model, distilling it down to smaller and smaller models that have the same capabilities in a much smaller footprint. As we've scaled up: GPT-4 was trained on an extremely large compute cluster, and the next version of that model is on an exponentially larger compute cluster. We haven't hit the ceiling yet on model performance. And if you look at when the original GPT-4 launched in March of 2023 and compare it to May of this year, you get a 6x speed boost, so models are six times faster, and a 12x reduction in cost for that model. Now, for those of you taking a picture of this slide: that's May. In August, we released an update to the model that's 50% cheaper for the same quality.
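A back-of-the-envelope on the figures just cited shows how the improvements compound. This assumes the August 50% cut applies on top of the May cost reduction, which is how it is presented on stage:

```python
# Cited figures relative to the original GPT-4 (March 2023):
# May of this year was 6x faster and 12x cheaper, and the August
# update halved cost again. Illustrative compounding only.

speedup_may = 6.0
cost_reduction_may = 12.0
august_cost_multiplier = 0.5   # "50% cheaper for the same quality"

total_cost_reduction = cost_reduction_may / august_cost_multiplier
print(total_cost_reduction)    # 24.0 -- roughly 24x cheaper overall
```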

(16:40):

And so this pace of getting more powerful capabilities in a cheaper, faster, more efficient footprint is accelerating rapidly. To that point, we've started to see a lot of work with small language models, where after we come up with the large language models, we can shrink them down and create models that are super efficient. These are models that can run on a consumer laptop or a phone. So what we basically have now is small language models that can fit on a phone and rival what GPT-3.5 could do a couple of years ago. Think about the level of innovation: at the time, GPT-3.5 blew everybody's minds (how do we handle this much compute?), and that can now run on a laptop, right? That's absolutely incredible innovation. And we haven't hit the limits of that yet.

(17:30):

So this combination of large language models and small language models is going to empower us to do all sorts of use cases, where the most powerful models will live in the cloud, but we'll have small language models that can do more specific tasks very cheaply, very fast, with very low latency. And so if we come back to this idea: the model itself is not a product. This is the other thing that has caused a lot of despair in the industry; you actually need more than just a model. The model is the key ingredient without which you can't do anything, but it's not enough in most cases. And you've probably seen that with some apps that are basically just a wrapper around the model API: they say very silly things, they do very stupid things. What we're seeing in the enterprise space is people really, really, really investing in this idea of memory, context, reasoning, and planning for their domains.

(18:22):

So think about financial advisory: people are starting to connect the data that financial advisors need and the workflows they need, asking how do we internally codify how we make decisions and make suggestions to our advisor base? We think about this as pulling in all these pieces, from the front end to the middle layer, with all the models working together. But we're also moving beyond this idea of a one-to-one copilot or one-to-one AI assistant. Most of the assistants you've used so far, you interact with one-on-one. What we're moving to is the idea that copilots and agents are going to be multi-party: they're going to work across teams and across organizations to help people collaborate. To make that real, we launched a thing called Team Copilot, and this is inside of Teams. It's a new sort of paradigm. This is how it shows up in Teams, but the idea is that an AI agent is going to help teams work together; think about maybe an advisor team working with a home office, things like this. That's where the next wave is headed. So I'll play this video and show you what that looks like. Just note how the AI is facilitating collaboration across people: it's not replacing the need to work with your team, it's facilitating it.

Video Presentation 1 (19:42):

Microsoft Copilot is evolving from your personal AI assistant to your team AI assistant. Introducing Team Copilot, a new, valuable team member helping everyone achieve more together. You'll be able to add Copilot as a meeting facilitator, taking notes during the discussion that everyone can edit and add to during the meeting, along with follow-up tasks and actions for everyone to see; sharing and managing the agenda with the team; and keeping the conversation on track. Integration with Teams Rooms ensures Copilot can continue taking notes even for ad hoc meetings. You'll be able to add Copilot as a collaborator in chats, keeping everyone aligned with up-to-date insights as the conversation happens, and interacting with the team, responding to questions based on messages and shared files. You'll be able to add Copilot as a project manager, creating project plans with tasks and goals and assigning them to team members. Copilot can also take on completing tasks on behalf of the group, notifying team members when input is needed and facilitating group collaboration to complete the task. With Team Copilot evolving to provide team assistance, your team will be more productive, collaborative, and creative together. Microsoft Copilot.

Charles Morris (21:02):

And so the emphasis there is, again, AI is not trying to replace what people need to do. It's trying to take off our plates the things we don't need to do, so that our collaboration can be more intentional, more thoughtful, more targeted at the things we actually want to collaborate on, the decisions that need to be made. So instead of having to rely on that one colleague with God-like powers of organization and note taking (every team has one, and we probably abuse them a little bit), we're going to have that as an AI agent. And that's going to mean that when you collaborate, you're going to have the notes and actions that you need. Instead of having all these meetings ahead of meetings ahead of meetings, we're going to be able to shorten that cycle and get people into true collaboration mode much faster.

(21:44):

And that's not a silver bullet. There will still be meetings for meetings, but hopefully better ones. And then we're moving to this concept of agents. The idea of agents is: instead of just a live back-and-forth interaction, how can we enable LLM applications to use tools? An agent is not that complicated. It sounds scary, and people overload the term, but all it is is this idea that, instead of having one model try to do everything, you're going to have different instances of the model that you give different personas, different sets of instructions, access to different data, access to different tools in your organization, so they're more specific to a certain task. That's all an agent is. It has some tools and it has a persona, and you say: here's your goal, here's what I need you to do specifically.

(22:34):

And so what you'll have is agents that do specific things and work together to help you complete a task. You may have one agent that helps you find research and another that helps you compose a message. Those will be different agents, because what we've found is that when you try to combine that all into one thing, the models get kind of confused. I don't like to compare models to people, because we're very different, but there's a useful analogy here: if I'm in writing mode and editing mode at the same time, I'm probably going to do both very poorly, and I actually want to separate those contexts out. We're just doing the same for models.
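The agent recipe described here, a persona, a set of instructions, and some tools, fits in a few lines of code. This is a minimal sketch of the concept, not any vendor's actual agent API; all names and the trivial tools are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

# Minimal sketch of an agent: persona + instructions + tools.
# A real agent lets the model plan which tool to call and when;
# here we simply invoke every tool and join the results.

@dataclass
class Agent:
    persona: str
    instructions: str
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def run(self, task: str) -> str:
        results = [tool(task) for tool in self.tools.values()]
        return f"[{self.persona}] " + " | ".join(results)

# Separate agents for separate contexts, as in the talk: one finds
# research, another drafts the message.
research = Agent("researcher", "Find relevant sources",
                 {"search": lambda q: f"3 sources on '{q}'"})
writer = Agent("writer", "Draft client outreach",
               {"draft": lambda q: f"draft email about '{q}'"})

print(research.run("EV charging near LAX"))
print(writer.run("EV charging near LAX"))
```

Keeping the researcher and the writer as separate instances mirrors the writing-mode/editing-mode point: each gets a narrow persona and only the tools relevant to its task.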

(23:12):

And so where we're going next is beyond text. We're really introducing this concept of multimodality. Text is great, it's going to be useful, it's not going away, but now models can use vision and speech natively, and this is really cool. I've been anti-speech-interface for a long time; sorry. My mindset has recently shifted on that, because now models natively understand speech, and it's much more fluid and dynamic, and you can interrupt models and things like that. It really shifts the paradigm quite a bit. So let me show you a quick concept of how you can have a conversation with a model that has reasoning abilities, can see what's on your screen as you're working, and also naturally interacts with you through voice.

Video Presentation 1 (24:02):

Hey copilot, I'm looking for a place to stay.

Video Presentation 2 (24:05):

Let's take a look. What do you think of the locked house? It's a bit pricey. You are a bit bougie, aren't you?

Video Presentation 1 (24:11):

I am not. I'm just looking for something nice, a little color on the walls.

Video Presentation 2 (24:16):

This one definitely has some color. Wow.

Video Presentation 1 (24:18):

It's giving me a headache.

Video Presentation 2 (24:21):

We don't want that. Wait, this one looks perfect. Minimal, modern, very you.

Video Presentation 1 (24:26):

You're right. I love it. We're booking it.

Charles Morris (24:32):

So that's actually an AI voice; that's not a voice actor. You can go get Copilot right now on your phone and use the voice mode. There are four voices, and they're really good, and you can interrupt them, you can go back and forth. It's not like the old voice mode, where you speak, we transcribe what you said into text, the model processes that and generates new text, and then we convert that text back to audio. That was painful; no one likes that. We've gotten to the point where now it's actually real time. And that's an incredible form factor, because all of a sudden it means things like note-taking applications with an AI agent (oh yeah, did you talk about this? did you talk about that?) become very viable in a way that they were not before.
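The contrast between the old cascaded voice pipeline and a natively multimodal model comes down to stacked latency. The stage timings below are made up purely for illustration; the structural point is that the cascade's stages add sequentially, while a native model handles audio in one pass:

```python
# Old voice pipeline: three sequential stages, each adding latency.
# Native speech: one model consumes and produces audio directly,
# which is also what makes mid-response interruption feasible.
# All timing numbers here are invented for illustration.

def cascaded_latency(stt=1.5, llm=2.0, tts=1.5) -> float:
    """Speech-to-text, then text generation, then text-to-speech."""
    return stt + llm + tts

def native_latency(audio_model=0.5) -> float:
    """A single natively multimodal pass over the audio."""
    return audio_model

print(cascaded_latency())  # 5.0 -- stages stacked end to end
print(native_latency())    # 0.5
```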

(25:15):

And so again, I have this slide up a lot, right? What I'm really trying to drive home is that models are not the product. You still need good design and good engineering, but these three components are where the innovation is going to happen: building domain-specific interfaces, context, and planning abilities. So, going beyond chatbots: if we think about coming into something like Microsoft Copilot (is this rolling? okay), initially it was just, hey, you chat with it. The endless wall of text can be useful, but if you've used it, it's kind of frustrating that you have to keep scrolling back through it. I actually want to start creating artifacts from the things coming out of my conversation. So in this case, I'm asking about EV charging near LAX, because in this example I'm an EV charging company.

(26:08):

And so Copilot will go to the web and come up with some examples of how many EV parking spaces LAX currently has. But now we're going one step further: we can take that output and, inside of Microsoft Copilot (we call it Pages), pull it into a live, shared, collaborative document. Now we can tag our teammates in, with the context the model provided and the citations from the web; we can assign them tasks and ask them to find additional information. And now we're live-collaborating on a document, and Microsoft Copilot is also a collaborator on that document.

(26:49):

And so we're going to add these links, we're going to come in, we're going to ask Copilot some more questions here, and it's going to think for a little bit. By the way, we just released what we call the wave two updates to Copilot, and it's significantly faster; response times have gone down significantly. We expect that to continue. People were kind of frustrated when a response took maybe 30 seconds, and now we're down to single-digit seconds for certain responses. And so as we start pulling this together, we're using our Copilot, and our teammates are using their Copilots, to start building a project plan.

(27:31):

And so now I can just ask Copilot to create a bullet list of requirements based on all the information we've collaborated on so far. It'll pull that out, and then I can go ahead and create an outline as a table based on this document that I did in the past, and I can have it include descriptions and links to relevant sources. Again, it will give me this table, and I can just add it to my live collaborative document. So that's just an example of how interfaces are starting to evolve, where we're starting to co-create documents with both our real-life team collaborators and an AI agent in the mix. And financial services is no exception here. You see some stuff in the news, but what I'll say is that's just the tip of the iceberg. I've never seen financial services companies adopt a technology so quickly.

(28:28):

And that doesn't mean that everything they're trying to do works. It doesn't mean that everything they will try to do will work. But what I'm saying is that if I look across all the best examples I've seen work, and I think about everybody getting to that stage across all these capabilities, the things we're going to have access to in the next couple of years are absolutely incredible. If you look at somebody like Morgan Stanley, who has been very vocal about how they're using AI in the advisory space, they've started introducing AI agents to their financial advisors and have come out and said that they've been quite successful with that. So the scale of this shouldn't be underestimated. JP Morgan sees this as adding potentially a billion to $1.5 billion of value for the bank. For the bank, not for the industry, just for themselves. And Moody's, a customer I really love to work with, has already rolled this out across all 14,000 Moody's employees, using Moody's copilot, Microsoft Copilot, and these tools to make them better in the flow of their work and prepare them for the next wave of applications.

(29:36):

And so again, some of these are actually resulting in new products. This is a project I actually worked on: Nasdaq has a product called Boardvantage, which is sort of a system of record for before and after a board meeting, managing all the documents and signatures and decisions that the board has made. What we did with Boardvantage was build in some gen AI capabilities, starting with document summarization, so that as you're going through hundreds of pages of documents as a board member, you can actually use AI to help with that process: help find what you need to find, help hone in on the areas that you as a board member specifically care about and are going to be talking about. And this has already started to accelerate how much information board members can retain, right?

(30:27):

Yes, I can mechanically sit down and read all these pages, but how well am I able to synthesize that? That's where AI is really accelerating the process. And again, those are not small-time decisions; that's board members making decisions about their companies, right? Same thing with Hargreaves Lansdown, based out of the UK: they've started enabling their financial advisors with Microsoft Copilot and Teams Copilot, and they're able to complete client documentation four times faster. They're saving two to three hours a week, and 95% of employees report positive outcomes from using this technology. Like I said, this is just the beginning. This is with the tools as they are today, or even as they were when we wrote this story together. That ROI is immense. We haven't seen something with such high ROI so quickly, and it's only going to get better. And then again, I mentioned Morgan Stanley bringing their advisor base AI assistants to help them with all of the things in their workflow that the advisor doesn't need to spend most of their time doing.

(31:38):

And so the idea is that we want this to be a people-driven business. We want this to be about your relationship with the client: how well you understand their needs, how you're planning and using your judgment as an advisor, and suggesting plans to help them come to the right decisions for them. Things like pre-meeting and post-client-meeting work, we can do much more effectively. And if we can do that more effectively, if we can do background research more efficiently and much more targeted to our specific clients, it means that our relationship with our clients is going to improve. And so we think about this as, again, not "what are the five use cases," but the entire life cycle of any given line of work: what are the things that I can start using AI to do?

(32:29):

And so for an advisory practice, that goes all the way from client acquisition. How do I find the right people that match what my personal book of business looks like and what my style looks like? How do I reach out to them in a useful way? How do I brainstorm those ideas? That's still on you, but now AI can help you do it much more efficiently and help you scale those efforts: make sure you don't miss follow-ups, help you pull together the initial research that you wanted to share with them, and anticipate their needs. And then when we come to onboarding and financial planning, there are a lot of very mechanical, painful processes, as I'm sure many of you are aware. And if we can use AI to facilitate that, we can get the time down and onboard people faster and less painfully. We don't want to start out the relationship like going to the doctor's office, where you have to fill out 37 forms a hundred times.

(33:19):

We don't want that experience. We want that to be as streamlined as possible. I want to understand your goals, I want to onboard you into the system in a compliant, secure way, but I don't want you as a client to have to experience that onboarding pain. And that's a place where it's not just going to be AI, but AI is going to facilitate a lot of solutions in this space. Again, I'm not going to go into all these pillars, but there are also things like blind spot monitoring. Blind spots both in terms of the client relationship (are you following up with the client? did you catch everything? did you follow up on everything you said you were going to?) and on the compliance side: am I saying anything that I shouldn't be saying, right? We can actually have AI as just sort of a safety system.

(34:01):

So it doesn't take the onus off the advisor to be on top of this stuff, but I think about it like blind spot monitoring on your car. If you crash into somebody, if you sideswipe somebody, that's still your fault, but having those little indicators on the side mirrors really helps a lot. It just adds a level of safety. And I think that's where we're seeing AI stack on safety as a defense-in-depth strategy. If AI can go over the things you're doing and help you make mistakes less often, that's a net good thing. But again, I think it's really important to keep in mind that this is not about the AI and it's not about the technology. It's about actually empowering the people and really designing with them first in mind. And so again, we think about the intergenerational wealth transfers that are coming up in the next couple of years, and we think about the different ways that estates are changing hands, whether that's through something like the death of a spouse or a divorce, and there's a new client with new money looking for a new advisor.
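As a rough sketch of the compliance blind-spot idea, the snippet below screens an outgoing client message for phrases an advisor shouldn't use. The flagged phrases and function name are hypothetical examples; a real compliance layer would likely use a trained classifier or an LLM rather than keyword rules, but the design principle is the same: warn the advisor, don't take the wheel.

```python
import re
from typing import List

# Hypothetical phrases a compliance team might flag in advisor messages;
# a production system would use a richer, firm-approved rule set or model.
FLAGGED_PATTERNS = [
    r"\bguarantee(d)?\s+returns?\b",
    r"\brisk[- ]free\b",
    r"\bcan't\s+lose\b",
]

def blind_spot_check(message: str) -> List[str]:
    """Return the flagged patterns found in an outgoing client message.
    Like a car's blind-spot indicator: it warns, it does not override."""
    return [p for p in FLAGGED_PATTERNS
            if re.search(p, message, re.IGNORECASE)]
```

An advisor's drafting tool could surface these matches inline before the message is sent, leaving the send decision, and the responsibility, with the advisor.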

(35:13):

We want to make sure that the advisors are empowered to be the best advisors. And this is where this space is going to become competitive from an AI perspective: the advisors who have good AI tools will just seem like they have superpowers. That's sort of where the title of this talk came from. Hey man, my advisor is always on top of it, never misses a follow-up, always has material that's really tailored to what I need, and is just very proactive. An event happens in the market and they've already reached out and said, Hey, here are the things I found. Do you want to talk about this, or are you good? Again, we're moving away from that dentist-appointment approach of every six months, get your teeth cleaned, and we're moving to more proactive, more responsive ways of engaging with clients. And so if you're trying to manage that entirely on your own, it's going to be very challenging.

(36:13):

Both the complexity of the markets and the world that people are going through is growing, but you're also probably trying to scale the number of clients you can handle and grow your book of business. And so if you're not using AI, you're putting yourself at a disadvantage, because the people who are using it are going to seem like they're much more on top of things. And again, the end client may never interact with an AI agent. They may never even know it's there. Or they might know, because you would disclose it, but they're not going to feel like it's there. It really is about empowering the financial advisor to be really, really good at their job. And so even things like coming up with suggestions: your client had this question, here are some potential options.

(37:05):

You should still make the final call on which option to suggest to them, and which comes first and second, but the amount of time you would have spent thinking about it, researching, and diving into a corpus of documents to figure that out, we're shaving that off immensely, and that's going to help advisors scale. And so again, if you get nothing else out of this: when we design these things, the good solutions are going to be the ones that put the human being at the center of the design process, not the ones trying to say, Hey, you know what? We don't need this person anymore. I think that's maybe why robo-advising was less successful: it was saying, Hey, how do we get the advisor out of this equation?

(37:52):

And for certain market segments that can be useful from a cost-benefit perspective, but for a lot of people, this is a lot of money we're talking about. Anybody's portfolio, however much money they have, is a lot of money to them. So whether you have a massive client or a less massive client, to them it's always really important. It's not something where you want to just cross your fingers and hope for the best; you want somebody you can trust, but also somebody you feel has your interests in mind. And so again, put that person at the center. As you're going through this, you're going to see good implementations and you're going to see bad implementations. But what I'll say is, when some of the implementations are bad or seem gimmicky, don't write off what this moment is, because again, it's been two years and we've already accelerated so much beyond what was ever thought possible.

(38:43):

And so we have a really exciting future ahead, and I'm really excited to be part of it and see the work that I'm doing with my customers go public and become the way the industry operates. I can't wait to see where it goes next, and I know all of you are going to play a role in that journey as well. So what I'll just say is, thank you all for taking the time to listen to this, and I challenge you all: if you're not already thinking about how AI is going to impact your business and how you're going to use it to make yourself better, to make your teams better, you should start doing that. It's going to be an avalanche. It already is. And so thank you very much. Any questions? Questions? Yeah.

Brian Wallheimer (39:27):

Anybody have any questions?

Charles Morris (39:33):

We have a mic. Okay. Yeah, one sec.

Audience Member 1 (39:40):

One of the things I run into a lot is that when I go into Copilot, it seems to be siloed, in that it doesn't look at information in my OneDrive, it doesn't look at other places as easily. Are there tricks that we can use that will essentially lower those barriers?

Charles Morris (39:56):

So part of it is the natural product development life cycle. If you've been using Copilot over the past several months or a year, you've noticed it's gotten just better in general, and particularly after the wave two stuff, which was late last month, it's substantially better on its own. I think some of this also comes down to how we think about creating and organizing our documents, which is going to change over time. Even what we think of as a document: we might move away from hundred-page reports toward sets of distinct knowledge pieces, so that instead of reading pages one through 100, we actually treat those as retrievable artifacts. In terms of Copilot, there are some things you can do to make it better, and this is not so much change management as working with Copilot: giving your documents better names, specifying which set of documents you have in mind, or creating a Teams channel or team that holds the documents for a specific workflow and grounding Copilot there. These are all things that help the model find what it needs. But partially that's just on our side, as we're going to keep making the product better.

(41:21):

One sec.

Audience Member 2 (41:24):

So, noticing the meeting recorder here, obviously: there are several companies that create meeting software for financial advisors, with prompting capabilities and output for compliance and different things. Is Microsoft going to try to put them out of business by going in that direction? What's happening there?

Charles Morris (41:46):

So Microsoft has products, but we're fundamentally a platform company. One thing that we've always done is create an ecosystem and a platform for people to build on top of the same technology we're building on. The products we offer are actually built on the same tools and services inside of Microsoft Azure that our customers have access to. And so generally, we don't really move into such domain-specific areas as a company. We want partners, we want vendors, we want our customers to go out and build those domain-specific capabilities on top of our platform. And so if you're trying to build a general productivity tool and you're competing with Microsoft Copilot on general productivity, that's probably a more challenging area. But if you actually have differentiated value in domain-specific areas, that's where we encourage you to use this as a platform. And we have a lot of customers doing exactly that, bringing their solutions to market on top of our platform, and then it aligns our commercial incentives, so we're all winning together. So very much, if you're focusing on industry-specific scenarios, you're probably fine. Again, opinions are my own kind of thing, but yes. We've got one up here.

Audience Member 3 (43:20):

Hey, one of the things that we are struggling with is what's the right platform within the Microsoft ecosystem to actually build out agents. Are you recommending Copilot Studio as your platform right now, or do you recommend M365 with plugins?

(43:35):

Because it's such a complex ecosystem, it's so many products, it's a bit of a struggle to land on one while we start on the journey.

Charles Morris (43:43):

So I'd say there are three tiers to that question. One is that we're enabling people to bring their domain-specific data and functionality into Microsoft Copilot. That's for when you want to use Microsoft Copilot but you want access to your own data sets and things like that. It's the lowest-technical-complexity option, but it's really useful: I'm using Copilot and now I can pull in data from some internal source or some trusted third-party source. Really useful. On the other far end of the spectrum is obviously Microsoft Azure, and that's a full pro-code approach: you're building a solution using cloud services. That's for those really high-scale, high-priority scenarios, particularly things that have thousands of users or products that you're bringing to market. When you need very bespoke scenarios, you build with Microsoft Azure. And then there's this thing in the middle, all these long-tail use cases where maybe you don't want to own the full lifecycle of an app that you have to manage and maintain.

(44:51):

That's a pro-code developed app, but there's a lot of value there, and that's where we would bring in what we call Copilot Studio, which is a low-code way of building agents. It's intended to be a bit more self-service: quickly get something together, quickly get value out the door. But it can do quite powerful things, so I don't want to write it off as just "if it's small, use this." It can do quite large things; we have customers using it for pretty at-scale cases. It really just comes down to how much control you need. So the low-code option is Copilot Studio, and honestly that can be a great way to prototype before you commit to building the pro-code version. But those are sort of the three: extend Microsoft Copilot, build a copilot with low-code tools in Copilot Studio, or build a full-blown application with Microsoft Azure. And I mostly focus on that last pillar of building really full-blown custom applications.

Audience Member 4 (45:52):

Alright. What plans are there, if any, for Copilot being able to handle in-person meetings? Because in this room and across wealth advisors, there are still a lot of in-person meetings that take place, either internally or with clients. We've been using Copilot for quite a while for virtual meetings and find it quite helpful, but in person, we've tried launching a meeting and putting the phone in the middle of the table, and that does not work well. So what plans are in store there?

Charles Morris (46:29):

Yeah, so there are a couple of ways to approach that. Obviously, if you're inside a Teams-enabled room, you don't necessarily need virtual attendees to just use Teams organically. But if what you're saying is, I'm at golf or at a restaurant or whatever and we're talking, that's where I would actually start leaning into the Microsoft Azure APIs. We just announced and released the Realtime API, which is that natural real-time voice model. And so that's something where you could actually think about building your own real-time speech experience for the scenarios you're looking for. It may possibly come in product; I'm not going to quote the roadmap on whether it will or won't. But if you have a specific scenario in mind, that's something we would also give you the tools to build inside of Azure. I don't know if that answered your question, but the easy answer is to try to link into a Teams room, and the more advanced answer is, if it's really important, you can actually build it using those SDKs. I've got five minutes before they come out with the hook.

Audience Member 5 (47:43):

So generative AI, just like other forms of AI, is generally right. Often, but not always, right? As you think about agents and the opportunity to automate complex workflows, how do you think about the compounding error problem? Each agent might be 99% accurate, but if it interacts with another agent that's also 99% accurate, and then another, you very quickly end up with a process that's inherently highly unreliable.

Charles Morris (48:12):

Yeah, a hundred percent. And that's honestly why I think you shouldn't necessarily start with automation as the goal. A lot of this is actually less about technology constraints and more about how you do design. Because if you know that there are certain areas where things are going to be inaccurate, the question is: okay, if they're inaccurate, what happens? And if the answer is, if they're inaccurate we have this mitigation path to fix it and resolve it, fine, great, go do that. But if not, then you want to think about changing the design entirely, so that it's not just human in the loop, not an automated process with a human rubber stamp at a few key choke points, but actually designed with the person as the active driver the entire time. And what do I mean by that? What I mean is, if I were thinking about coming up with responses to a question a client asked, one way to do that is the LLM agent generates a response and then I approve, don't approve, or edit, right?

(49:21):

Okay, sure. Maybe. Or what if it actually came back and said, here are three options for a response. Which one of these do you want me to dive deeper on? Okay, this one. Okay, now I've pulled research; here's the potential research aligned to this topic. You as a user now curate that research and say, these are the ones I want to use. Okay, now generate a first draft. You're breaking that down so that I'm the driver in every single step rather than the overseer of the process. And that difference, overseeing AI versus driving AI, is really important for a lot of these high-stakes scenarios. And that's how I think about the compounding question: if you're trying to do straight-through automation where each step is 99% or 97% accurate and there are 20 steps, by the time it gets to the end it's probably not that useful. So I like to think about active participation as a design consideration, not so much a technology consideration. Put that person in the driver's seat, like I said. One more. Okay, last question.
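The compounding-error arithmetic behind this exchange is easy to check: assuming each step fails independently, end-to-end reliability is just the per-step accuracy raised to the number of steps.

```python
def end_to_end_reliability(per_step_accuracy: float, steps: int) -> float:
    """Chance a fully automated pipeline finishes with no errors,
    assuming each step fails independently of the others."""
    return per_step_accuracy ** steps

# Twenty chained steps at 99% each succeed end to end only about 82% of the
# time, and at 97% per step that drops to roughly 54%.
print(round(end_to_end_reliability(0.99, 20), 2))  # ~0.82
print(round(end_to_end_reliability(0.97, 20), 2))  # ~0.54
```

Which is exactly why a 20-step straight-through pipeline of individually decent agents can still be unacceptable for high-stakes work without a human driving each step.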

(50:26):

Oh no.

Audience Member 6 (50:37):

Hi there. Is this on? Yeah. Can you explain the relationship between ChatGPT and Copilot? Is that the engine that's driving Copilot? And can you clarify Microsoft's relationship with OpenAI and how all that works?

Charles Morris (50:53):

Yeah, so ChatGPT and Copilot are applications, full applications that are designed to do similar but different things. Behind the scenes, those applications are backed by models, and the LLM is the core reasoning engine: the text generation and knowledge-reasoning piece. The models we primarily use are the OpenAI models. Microsoft has a relationship with OpenAI where we have exclusive rights to distribute those models, and so even when they're distributing them, they're doing it via Azure; they're just doing it inside of their Azure tenant. So we take those models and deploy them as a service inside of Microsoft Azure, and then, like I said, we build on our own platform. We build our products like GitHub Copilot and Microsoft Copilot using that Azure OpenAI Service and a bunch of other services as well.

(51:51):

And so the idea there is: OpenAI owns and builds the models, we have exclusive distribution rights to those models, and then we use them to build applications. They also use them to build applications. We're really focused on our customer base and our product base, but from a behind-the-scenes infrastructure perspective, it's the same kind of thing. So that's really the difference: the models are things like GPT-4o, and then you build an interface on top of that. OpenAI builds an interface they call ChatGPT, and we build many interfaces, one of which is Microsoft Copilot. That's how to think about that. Okay, I think I'm at time here. Almost exactly; I'm 20 seconds over.