Bad data is detrimental to business. It can waste time, effort and capital, and its flawed outputs can erode client trust. Conversely, clean and reliable data can transform AI initiatives, turning them into powerful tools for advisors and helping to future-proof your business.
Navigating the complexities of data management in AI, however, presents challenges. Advisors need strategies to ensure they are working with clean, standardized data sets and that the tools they use protect sensitive information while allowing for data sharing.
Join us in this session as industry experts discuss best practices for data governance, strategy and privacy within AI applications. Panelists will explore the advantages and disadvantages of building your own data storage solutions versus leveraging third-party services, among other points. Discover how to effectively harness clean data to maximize AI's potential in your practice while maintaining the highest standards of data security and compliance.
Transcription:
Timothy D. Welsh (00:10):
My name is Tim Welsh. I'll be your moderator, and it's my privilege to be up here with this outstanding all-star cast of data folks. The title of the session is all about good clean data, as opposed to dirty, terrible data. I'd love to have my panelists introduce themselves. So we'll start down at the end. Andrew, go ahead and introduce yourself and your role at Fidelity.
Andrew Brzezinski (00:32):
Andrew Brzezinski. So I am the Head of Data Analytics and Insights for our institutional business, which means I serve our clearing and custody function and also investment distribution. My team has responsibility over a variety of things. We take care of the data strategy; we do reporting, dashboarding, analytics and of course AI. And we serve a variety of aspects of the business, whether it's relationship management, sales, marketing, service, ops, product, you name it, we've got all that. Our efforts are predominantly internally focused, but we do have some client-facing reporting and analytics capabilities. On the AI side, we've got a team of data scientists that we point at a sharp, small, limited set of high-value, high-urgency, high-impact use cases.
Devon Drew (01:34):
Devon Drew, I am the Founder and CEO of Assetlink. Assetlink is a verticalized AI solution that unlocks the power of data for the wealth management ecosystem. Said more simply, we take the existing data sources that are germane to your industry and your business, and we make them actionable with an AI overlay that allows you to visualize all your data sources, most of the time in your CRM system. So you can search and query from the enterprise level all the way down to client relationship intelligence and next best actions, to help you grow faster than you've ever grown before, more efficiently.
Timothy D. Welsh (02:09):
Oleg.
Oleg Tishkevich (02:11):
Thank you. My name is Oleg Tishkevich. I'm the CEO of Invent. Invent is a technology platform that consists of four parts. There's the data lake, which creates the ability to bring in all the data from different sources, kind of similar to Snowflake. There's the integration layer that makes that bidirectional integration possible. There's the visualization layer, with our experience builder, that allows you to create workflows and experiences on top of the data. And there's analytics and AI, which allows any firm to leverage all of that to build solutions and then publish them on the Invent store.
Timothy D. Welsh (02:45):
Dan.
Daniel Catone (02:47):
I'm Daniel Catone. I'm the Founder of Golden State and the CEO of Arimathea. We're incubating software that allows financial advisors to lean very heavily into the digitization that's happening within financial services, utilizing AI agents to align the value orientation and the risk and reward expectations of end-user clients in an entirely digital ecosystem that doesn't require any direct human involvement. We're launching that here in Q1 of 2025.
Timothy D. Welsh (03:14):
Excellent. So we just heard from Michael Kitces this morning about the fact that advisors don't have a data problem, and I think my panelists here disagree strongly with that point of view. So maybe, Dan, we'll start with you. What do you think he was referring to, and what's the reality?
Daniel Catone (03:29):
Yeah, I mean, the reality is that data isn't a problem per se, but I feel like where we're at with artificial intelligence is like 1996, talking about the internet. And when we look at data, you're pulling from multiple sources, so you don't have standardization, systemization or interoperability between these sources, whether it be custodians, broker-dealers, trading execution. You have client experience issues, account opening issues, all kinds of data coming from different places. And so unless there's purity of that data, the output is going to be erroneous. Just to give one quick example, and I'm sure you guys can chime in on this as well: you get one bad data point on a client account, say a misstated Social Security number, and now you've got a CIP exception, the account can't get opened, the digital experience for that client is a disaster, and suddenly they don't trust AI anymore. So it's a big deal.
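Dan's example can be made concrete. A minimal sketch, assuming nothing about any vendor's stack, of the kind of up-front validation that would catch a mistyped Social Security number before it ever reaches account opening and triggers a CIP exception (the pattern and function names here are illustrative, not any real platform's API):

```python
import re

# Standard XXX-XX-XXXX shape, excluding ranges the SSA never issues:
# area numbers 000, 666 and 900-999, group 00, and serial 0000.
SSN_PATTERN = re.compile(r"^(?!000|666|9\d\d)\d{3}-(?!00)\d{2}-(?!0000)\d{4}$")

def validate_ssn(ssn: str) -> bool:
    """Return True only if the SSN is plausibly well-formed."""
    return bool(SSN_PATTERN.match(ssn.strip()))

# validate_ssn("123-45-6789")  -> True
# validate_ssn("666-45-6789")  -> False (666 area is never issued)
```

A check this cheap, run at the point of data entry, is the difference between a clean digital onboarding and the downstream exception Dan describes.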
Timothy D. Welsh (04:13):
Oleg, what do you think?
Oleg Tishkevich (04:15):
I definitely agree a hundred percent with Daniel, because we deal with data every day. We work with companies on the RIA side and on the broker-dealer side, of all different sizes, from 30 people to 11,000. So you see the data on the small side of the spectrum as well as the big side. And I can tell you that there are a lot of challenges, not just with the data being wrong, but even with delivery of the data. We're relying on various data exports or APIs, and oftentimes those things are not reliable. So in addition to missing data or something that was mistyped, there are also issues with actually delivering the data consistently between different systems. And then once you have that, actually merging that data to be consumed by an advisor or a client is yet another problem, because while they all share the same data points, they're all stored in very different data formats. And bringing this all together oftentimes requires individual manual work. I remember with financial planning software, you get new accounts coming in from the custodian feed, and somebody goes in and manually assigns them to households, because you need that planning household when you work with the clients. So those are the things that still exist no matter how big or small the firm is.
Timothy D. Welsh (05:52):
Devon.
Devon Drew (05:53):
Yeah, what I would say is we're in an intelligence revolution; I don't think anyone could dispute that. However, no one's data is really AI-ready at this point. From a competitive standpoint, in order to effectively service clients and support business development efforts, you need to have all of your tools to go to proverbial battle with. However, if you look at the numbers, 80% of firms' data is in inaccessible formats. It's called unstructured data, because the PDFs aren't talking to your Excel spreadsheets, which aren't talking to your .movs and .mp4s. But what if there was a world where all of that was interconnected, learning from each other? Then you have a muse when you go to talk to clients: from an intelligence standpoint, how to engage, and what the next best steps are to truly align yourself with your clients. From an individual standpoint, if you're an individual advisor and you don't really have many data sources, you have maybe one or two clients, you connect your email, you connect a meeting note-taker, everything's okay, maybe, in that case. But as you grow and scale, and as the client experience becomes a journey, you're going to have to make your data AI-readable, and it starts with good clean data.
Timothy D. Welsh (07:16):
Andrew, I think I called you Dan, my mistake. So Andrew, what do you think about Michael's statement?
Andrew Brzezinski (07:22):
Yeah, so Michael provided a nice framing in his answer to this question. I like this description that advisors themselves, on an individual basis, know all the information; they've got the data, it's in their heads. That makes a lot of sense to me. Devon, you described it nicely: is that information ready for AI? That's the next step that has to be taken here. Fine, you can manage the client, you can engage with them, you can have an amazing interaction and relationship, but what if you want to leverage that next amazing tool that's coming out? What if it has certain implications for interoperability of data? I think that's what was missing there. And so we have an integration challenge in this industry, a standardization challenge around the data models we need to support these future use cases, or the future enablement of technology that's going to come our way. It's a lot about trying to think ahead and say the data needs to be fluid and ready for future potential use cases.
Timothy D. Welsh (08:40):
Yeah, Danny, you got a point here.
Daniel Catone (08:41):
Yeah, it's an interesting point that you raised about the segregation of the financial advisor, the endpoint user, and then the service providers like ourselves. As long as financial advisors are manipulating data and pushing that data back over to systems, data is an issue; beyond pulling data from different institutions, it's the human involvement of pushing data into the systems where you're going to have data issues as well. Of course you have other problems where we're pulling erroneous data from the larger institutions, but I think as long as humans are involved, there's going to be error. Yeah.
Timothy D. Welsh (09:11):
Alright, so let's get into it. Obviously AI tech is moving fast. I think we all can agree on that. But from these experts up here, what is hype? What is reality? What are some of those use cases and how does data play a part? And so Devon, let's start with you. What is hype? What's reality from what you're seeing?
Devon Drew (09:27):
I think what's real is making your day more efficient, whether it's workflows, whether it's note-taking, having next best actions based off that note-taking. Then there's what I don't think we're ready for now. Obviously our industry is never going to be on the cusp of innovation; we're always going to be laggards, and you obviously have regulatory concerns there. But aggregation is something we get asked about a lot, and aggregation is very difficult. I know, Dan, in your business, working with faith-based values, one of the most difficult things in data and AI is to aggregate across large data sets to say: I'm looking for a client who is Catholic, who loves investing in ETFs, and who also likes alternatives and digital assets. Show me everybody, now. Great client, exactly. So I think what is real is the efficiency aspect, the note-taking, the copilot type of functionality. And I think what's hype currently is the broad-scale aggregation.
Timothy D. Welsh (10:40):
Oleg, what do you think?
Oleg Tishkevich (10:42):
On this one, actually, I do have to agree with Michael Kitces' earlier comment about the perception of AI, because there's definitely that mix of perceptions: somebody just absolutely embraces it, and somebody else completely walks out the door and doesn't want to deal with it. And it has to do with the question of, is it hype? Is it the real thing? As with any new technology: remember, we all talked about robo-advisors a few years back, and everybody was asking the same questions. Is this hype? Is this the next new thing? I think all these things shall pass, and there are definitely great aspects of AI that can be leveraged. But we don't talk about AI on our website; we just talk about it here. I don't want to put this out to all of our clients, for those very reasons that Michael pointed out, because I don't think the broader advisory community is ready for AI today as a whole. It's good to leverage it wisely with specific use cases, but I wouldn't just go all in. Similar to robo-advisors: we didn't all just go and make that the only thing we do, although it's a tool or capability that many firms choose to include in their client engagement model, which may make sense for what they're trying to do.
Timothy D. Welsh (12:21):
So Andrew, you've got those massive Fidelity resources, and I can only imagine what you guys are cooking up in your shop. So what is hype? What is reality?
Andrew Brzezinski (12:30):
Yeah, so let me start by speaking about my role. It's been almost seven years now that I've been asked to think about AI and how it might apply to Fidelity's institutional business and how it might create an impact for clients. And it's interesting, because paying attention to that, having to speak about it, I could feel this disconnect.
(12:58):
And so this is where it feels like there's been hype. It's pretty easy to say that, okay, AI is coming, and this was even before anybody was excited about large language models, and it's going to be transformational and hugely disruptive, and the wealth management industry and every other industry is going to see this tidal wave. But the disconnect has been that as we speak with firms and with advisors, they have a hard time latching onto what that actually means. What is this impact that's coming? And so that hype has been there, and that's probably been detrimental to some extent, right? In the sense that people are just saying, okay, this is overblown and I don't need to pay attention to it. Now think about this conference and the specificity of the things that are being demoed, or the use cases that we're discussing here.
(13:58):
There's reality; now we're starting to see it. What does it mean for us? It means meeting prep and meeting summarization, and synthesizing lots of information to try to gain insights from it. It means processing documents that take a whole heck of a lot of time for an individual to work through. It means prospect prioritization, it means forecasting household income, it means working on compliant communication and marketing content curation. Okay, great. There are some real things out there that we can work on. And I don't know how much you all are feeling it or not; you may be asking, is that all? I thought it was going to be something much bigger and grander, and the robots were coming and stuff. But this is it, to me. I'm not saying that's it forever, but this is what it is right now, and that's good. I think we have a chance here to explore, find the value in that, and then, once we clear through that, ask ourselves what the next set of use cases is.
Timothy D. Welsh (14:59):
So Dan, any thoughts?
Daniel Catone (15:01):
Yeah, I mean, I was just thinking about some of the hype that we've been fed, and maybe some of what ChatGPT has ruined about the situation in some ways. I come at this from the perspective of a financial professional play-acting as a data guy; that's my background and my history. We've all been promised the AI replacement of financial advisor work through the robo platforms, and that just frankly hasn't materialized, and I don't think it ever will, for some reality-based reasons. But you combine that particular piece of hype with instant, honest and accurate personalization, which I think goes back to the data cleanliness question. Good data in is good results out. From our perspective, that's a really big deal. I mean, we're working with end-user clients who want to integrate their value orientation, which is a qualitative question, with their risk and reward expectations, which are largely quantitative, and then you bring in behavioral finance as well. How do you digitize these systems into something that's perfect and useful? I think that's maybe a little further down the line than where we're at right now. And then of course you have the performance questions around portfolio management using artificial intelligence, which I think all of us have been exposed to and which the regulators are coming at really, really heavy. So I would say those are the three biggest hypes.
Timothy D. Welsh (16:15):
Well, being in the hype business myself, I can appreciate your answers there. So Devon, let's start with you. For firms looking to start their AI journey, let's boil it down: what sort of data transformation do they actually need to do? This was your specific question, so I'd love your point of view on it.
Devon Drew (16:32):
So I'd actually ask the audience: how many people have moved houses or apartments over the past 10 years? A lot of people. When I moved, the house was a mess. There were couches everywhere, there were boxes everywhere. I didn't even know where to start. Think about the past 10 years. You went from data being on-prem, all these companies saying, oh, we're going to build our own data infrastructure, we're going to be on-prem, get more office space and build data centers in our offices. And then Amazon knocks on your door and says, hey, now we have the cloud, come on. Then it's like, okay, there's a cloud. So now we have to take all of the data and move it to the cloud. And you're just moving it, and it's the same thing as a house.
(17:19):
So now all of your data is in the cloud, and it is everywhere. You don't know how to retrieve it; you just know it's there. So when you start talking about the AI journey, the first thing is getting your data into the cloud, and you could do it with Oleg's platform, Invent. But then once it's in the cloud, you want some feng shui. You want your couch aligned a certain way, you want your chairs placed, you want to optimize your sound system for football on Sunday. That is the next step of the AI journey. You need to have your data cleaned, you need to have it organized, you need to have your infrastructure in place, and then you need to make it machine-readable. And that comes in the form of what we call JSON, right? So make it into those formats that computers can actually understand.
(18:04):
And that starts with the data lakes and the databases, the SQLs of the world. I know everyone, from an individual advisor up to an enterprise, is struggling with that. Especially if you're an individual advisor, all of your intelligence or data is in your head or maybe in your CRM system, but start by using Invent or a database that's converting your data, so that when it comes time for a computer to actually read it, it can actually find it. And then the next step, for those who have done that, and I don't want to get too granular, is making it into a form where AI can read it, essentially a long numerical representation that AI can get and retrieve, and then you're on your way. Then when you have your copilots and your chatbots, they actually have something to retrieve. So it's not retrieving hallucinations; it's retrieving things that are rooted in truth, which is your data.
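Devon's path from free-text notes to something a machine can retrieve can be sketched very simply. This is a toy illustration, not any product's pipeline; the record fields are invented, and real systems would use vector embeddings rather than the exact topic matching shown here:

```python
import json

# A free-text advisor note, restated as a structured JSON record so a
# retrieval step can ground an AI assistant in the firm's own data
# rather than letting it hallucinate an answer.
note = "Met with the Smiths 10/14. Interested in ETFs and a 529 for their son."

record = {
    "client": "Smith household",
    "date": "2024-10-14",
    "topics": ["ETFs", "529 plan"],
    "source": "meeting_note",
}

serialized = json.dumps(record)  # machine-readable, queryable form

def retrieve(query_topic: str, records: list) -> list:
    """Toy retrieval: return records tagged with the requested topic.
    Production systems would match on embedding similarity instead."""
    return [r for r in records if query_topic in r.get("topics", [])]

hits = retrieve("ETFs", [json.loads(serialized)])
```

The point of the sketch is the ordering Devon describes: structure the data first, and only then put a copilot or chatbot in front of it, so every answer traces back to a stored record.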
Timothy D. Welsh (18:59):
So Oleg, if somebody comes to you and asks where they should start, what do you advise them?
Oleg Tishkevich (19:02):
Yeah, I very much agree. The challenge with AI really comes down to a couple of things. Obviously, moving the data and making sure that you don't have duplicates, so that you don't have those hallucinations. It's the same whether you're using AI or just a general reporting system: you don't want to go in and say, oh, this AUM number looks wrong, because you've got feeds coming in from multiple systems and they're duplicating data. There are certain types of accounts that show up in one feed that you're also getting in a direct feed, and now how do you harmonize all this stuff? Those are very, very important and very serious problems. But also, when you are leveraging AI, you need to be cognizant about what scope of data the AI is exposed to. Because if you have systems that can read all of your data across the firm, but, let's say, an individual within your firm only has access to a specific scope of data, the questions that individual may ask the AI may be beyond the scope they're entitled to see.
(20:11):
So there's, I think, another level of complexity that lies on top of the data. Even once you organize it and make it look good, and everything's in the right spot, everything's deduplicated, the socials are correct, and there's no missing data, then how do you scope it so that individuals interacting with this data through some kind of AI technology don't see things they're not supposed to be seeing? Those are other major considerations when you're building AI technology you're going to be using. Remember, we're a regulated industry; these are major issues if you didn't set it up correctly.
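Oleg's scoping concern is, in essence, entitlement filtering applied before retrieval. A minimal sketch, with hypothetical roles and fields, of restricting the corpus an AI sees to the asking user's entitlements:

```python
# Hypothetical entitlement map: which teams' data each user may see.
ENTITLEMENTS = {
    "advisor_jane": {"team_a"},               # sees only her own book
    "compliance_lead": {"team_a", "team_b"},  # firm-wide visibility
}

DOCUMENTS = [
    {"id": 1, "team": "team_a", "text": "Team A client review notes"},
    {"id": 2, "team": "team_b", "text": "Team B client review notes"},
]

def scoped_context(user: str, docs: list) -> list:
    """Return only the documents this user is entitled to. This filtered
    set, not the full corpus, is what gets handed to the AI, so an answer
    can never leak data the person asking is not permitted to view."""
    allowed = ENTITLEMENTS.get(user, set())
    return [d for d in docs if d["team"] in allowed]
```

The design choice worth noting is that the filter runs before the model ever sees the data, rather than trusting the model to withhold answers after the fact, which is the safer posture in a regulated industry.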
Timothy D. Welsh (20:57):
So Dan, you're incubating technology and software. How did you get started on this AI journey?
Daniel Catone (21:01):
And I appreciate this. Devon, when you came up with this question a little while ago, we were riffing on it, and we've done some work in advance to prepare. I gave this a lot of thought, and I wanted to take a step back from the data and the AI itself. I'd argue that you have to begin with clear business goals. What is the use case for the AI? It's not just cool to have AI; big deal, everybody can put that in their startup name and maybe they'll raise money. We have to have a clear use case. Then you set measurable goals, KPIs, that type of thing. So we begin with the end in mind, of course. But from the business perspective, before you get to data, and I think data is stage two, you have to start with some type of data audit. What data do we have access to? With that, you can then do a quality assessment: once you know what data you have access to, you understand how good that data is. That then informs the final piece, which is governance and permission: who needs to access that data, and how is it pushed around within the organization, which then of course touches on the regulatory questions and data privacy, which I hope we'll get to as well.
Timothy D. Welsh (22:09):
Andrew, how do you get started?
Andrew Brzezinski (22:10):
Yeah, I like these answers a lot. I think this step back that you just took, Dan, is along the lines of how I think about this as well. Maybe let me say this: if you're not using it yet, you should be. It's not really a question of who's interested in getting started; everybody should get started. And it goes to those kinds of statements that people throw around: those who use this technology and learn how to use it will outperform those who don't. I'm not trying to scare anybody; it's just realistic. You have to think about it this way. So given that everybody needs to jump in, the question is, okay, let me get educated about AI and what it's good at. Let me find the impactful ways that I'd like to apply it in my practice or at my firm.
(23:07):
So what problems are we trying to solve with it? And then we could get to the how, which would be, which tools do I want to bring? Do I want a vendor solution? And that will imply the data strategy that you have to put together around this. It could be as simple as saying, I just want an out of the box offering from a vendor, and they'll just give me a spec for how I can connect my data. It could be that you say, I want to start with something like ChatGPT or a Microsoft copilot or something like this. And I'm not advocating for anything. I'm just throwing out names. And the data might be all in just what you throw into the prompt and getting good at doing that, or you can actually hire an analytics team and build up a data ecosystem or there's lots of different ways of getting at this. And so you have to think about what it is that you're trying to solve and how do you really want to go after this from a solution perspective.
Timothy D. Welsh (24:06):
Got it. So Michael also teed us up very nicely when he said there's a trust gap in AI. Does it come down to the data? Devon, what do you think? I mean, the driverless car: some will get in, some won't. What do you think the implications are there?
Devon Drew (24:19):
Yeah, so the trust issue comes from what we're talking about now, good clean data. It's the input that you're putting in. So if you sign up for, as an example, and not to advocate for anything, a Microsoft Copilot, and your entire Microsoft suite is empty, your Outlook isn't connected, you have nothing in your Microsoft Dynamics, and you start asking questions, you're not really going to get your intended results. Or if you go to ChatGPT and start asking some wild aggregation questions and it's not giving you the response you were hoping for, all of a sudden you're viewing AI as a hoax, right? So yeah, there's going to be a trust issue, absolutely. But it has to have a source of truth. So then, if you have the same Copilot and it's connected to your Microsoft suite of tools, and it's all populated, and your meeting notes are in, your calendar is in there, your calendar is aligned with contacts that are in your CRM, you're able to connect your note-taker to that, and then you start asking Copilot to do some tasks for you, you're going, oh, this is pretty cool, right?
(25:30):
So yes, there's going to be a trust issue, but yes, it's all about the input and then the output would be equivalent to what you put in.
Timothy D. Welsh (25:39):
Andrew, you've been at this longer than anyone. Is there a trust gap or how do we?
Andrew Brzezinski (25:43):
Yeah, for sure. And Kitces answered that really nicely: this idea that the bar for technology is especially high, that humans can make mistakes but technology can't, and so we end up not trusting it. So that's interesting. There's also this psychological trust thing: is this going to disrupt the way I work and take my job? Trying to push it away and avoid the problem might be one of our trust issues here as an industry that we want to come to grips with. But putting those two aside, AI itself is, as we know, potentially unpredictable. It can give you different answers to the same question when you repeatedly ask it. That means you're going to have to trust it a little bit less, and try to find ways of putting controls in place or gaining that trust. In other words, there are multiple layers of trust problems that are not the data itself. And then, when it gets down to it, the data can be an issue too. It's not an answer that's AI-specific, but again, running analytics teams and having users and stakeholders consume dashboards from us, the second you stick a wrong number in there or mislabel something, your users throw their hands up and say, it's all wrong, I can't trust this, I don't want to use it. It's a very tenuous connection that you have with the consumers of your analytical capabilities, including AI, and data can be an issue.
Timothy D. Welsh (27:20):
So Dan, what do you think?
Daniel Catone (27:21):
Yeah, I mean, what is trust but an internal assuredness about some type of external reality? And when you look at financial services broadly, we've had a trust crisis within our industry for 20 or 30 years. At the end of the day, does the consumer trust large entities and large institutions to keep their best interest in mind? It's why we have constant regulatory pressure, with the DOL and all these types of things that are happening. So all we're doing is magnifying the trust issue into something immensely more powerful, with vastly more access to data and analysis. And so now we're combining two major trust issues. One, financial services: is it doing something in your best interest? No idea if that's true. And the second question is data: do we trust the data that is being stored by all these institutions about us, which is then being mined to manipulate us into a sales process?
(28:16):
So I'm on the side of the consumer on this, and I think the regulators are right. I think the trust issue here is absolutely massive. Why? Lack of transparency. We need to see exactly how the AI comes to its conclusions; that's a newer development we're seeing in some of the different AI systems that we use. We have fear of loss of human interaction. People at the end of the day want a pilot at the helm, at the controls of the aircraft; even if it's flying itself and lands itself on the runway, we want to know there's a human being. We have bias and fairness issues. We all have to grapple with the end user with $5,000: does he trust that the AI system he's interacting with actually cares at all, given he has very little money? That's a major issue. And then of course, over-reliance on prediction. Taleb wrote the book Fooled by Randomness, which is really excellent if you have a chance to read it. But the idea that we can predict the future is embedded into AI; it's what an LLM is, and it's built, I think, largely on a foundation of sand.
Timothy D. Welsh (29:16):
Oleg, love your thoughts.
Oleg Tishkevich (29:18):
That's a great thing right there.
Daniel Catone (29:21):
You tell me I'm wrong?
Oleg Tishkevich (29:22):
Oh no, you're right. To add to it, guys, we're going to kill the trust situation here today, trust us. One other issue, which is being solved now: my personal trust issue with AI is Big Brother. I come from the old country, where Big Brother is a real trust issue, and we can argue about how it is in our country. But the capability of AI, and specifically if you talk about ChatGPT, is that anything it has knowledge of is essentially the cumulative knowledge of everything that anybody has ever put into it. So be careful about what you ask or what you say or what you write or what you send, because if that's collected by some type of system, some people might have that sort of Big Brother concern; I know people I've spoken with have it as well. If it all goes to the cloud, to your point, and it's all aggregated there, how can I trust that this data is not going to be misused, or that I can't be pinned down on the types of questions I'm asking, and all that kind of stuff?
(30:39):
So that, I think, is another challenge that is actually now being solved, which I'm pretty excited about. It's the reason I got the new iPhone, not to be advertising it, but supposedly the new iPhone has ChatGPT-style AI capability on the actual device. So you can actually decide if you want your Siri to go to the internet to respond and do things, or you can just lock it to your own phone. Now, what Apple does with that data otherwise, we don't know, but at least you know that the AI piece is limited to literally your own device, and the things you're interacting with through your own chat capability are not going out to the open internet. I think people have definitely seen that as a challenge or an issue, and maybe it's also much easier on compute to do it right on the device without going to the internet. But steps are being taken that are making that trust challenge, for people like me, more manageable.
Devon Drew (31:46):
And I've got to push back on Dan a little bit here. You want controversy? Dan is my guy, love Dan, just met him 30 minutes ago. You start seeing these new models, and I can see the skepticism maybe back in 2018 with the BERT model, but let's fast forward to 2024. If Meta is going to be training models on our posts, that's kind of scary, anyway, and if LinkedIn is going to be training models on our engagements and interactions, I'd have to push back on the idea that these models are grounded in a foundation of sand, because what greater source of truth is there than people posting about whatever is important to them? So yeah, in 2018 Google comes out with BERT, okay, I understand the doubts. ChatGPT comes out in 2022, okay, I get it. But now these platforms with billions of users are using something like FAISS to cluster and index those billions of users and group them by similarity, and if these models are being trained on that, I think that's extremely powerful. Add a reasoning aspect to it as well, and I think that foundation is getting stronger by the day.
Daniel Catone (33:07):
I hope so. Do I get 30 seconds for rebuttal?
Timothy D. Welsh (33:09):
Absolutely.
Daniel Catone (33:10):
I mean, I think we largely agree on these questions. On the Big Brother question, the big brother doesn't wear jackboots and hold a rifle anymore; it's tech joggers and a hoodie. And I think we all recognize that the trust the general public has in big data has eroded. So whatever AI systems we're building, when I say a house built on sand, that's kind of what I'm talking about.
(33:32):
That's the ground we're growing out of. And on the other side, we're growing out of the ground of financial services, and both of those have huge trust issues. So until the fundamental questions of trust within financial services and within data are solved, there will always be shakiness with AI, because it's pulling from both wells. That's my main point, if that makes sense.
Timothy D. Welsh (33:54):
Yes, we'll allow that. Fantastic, thank you. The iPad has an infinite number of questions, but we'd love yours as well. So if anybody has one, please raise your hand and the mic runners will come around; feel free to challenge the panel. But in the meantime, Andrew, let's go back to some of the technical stuff. What should advisors be thinking about in terms of data governance and data privacy? I think Mike also mentioned transcriptions of meetings; there could be some really interesting, discoverable things in there. How should advisors think about this?
Andrew Brzezinski (34:22):
Yeah, maybe I can answer with an anecdote. I was at a client conference earlier this week, hosting a round table discussion. So imagine 40 representatives from firms, advisors, principals, those types of folks, and we got to talking about, what are you doing with AI? And it was a lot like the kind of discussion that's happening here: meeting summaries, meeting prep, all that kind of stuff, talking about the favorite tools they're playing with. The question I started to raise is, so what's your decision process around choosing a vendor, around vetting them, that type of thing? And legitimately, the answer probably wasn't uniform across the whole group, because this wasn't a survey, but a handful of people spoke up and there was head nodding that it's more like, we're dabbling. It's me, I'm playing with this.
(35:22):
I've got a small focus group of people experimenting with it, that type of thing. And that makes a lot of sense as I reflect on it, from the perspective of: this is emerging tech, we're just trying to figure out whether it's even worthwhile to incorporate more broadly into the business. But it provokes the question: okay, so what is your governance process for choosing a use case and then a solution approach? And I know this is a data-centric panel, so I want to make sure that I connect to that, because when you choose your use case, a good data governance process starts thinking about legal, risk, compliance, information security, privacy, all of these types of considerations and constraints within the firm that need to be vetted as we consider using data in new ways. And what is AI other than a way of unlocking new value from data?
(36:30):
And so it's really important to set up some governance; this is a useful thing. Beyond that basic dabbling you're going to be doing, you should be thinking, okay, so how do I govern this? How do I think ahead and be proactive to make sure that I'm choosing the right approaches? And if you've had data governance functions in your firms so far, you can pretty effectively roll some of these AI governance practices into them, because the same sorts of questions come up. Now, back to your point, Tim, there are other AI governance considerations that go beyond data governance, like ethics and all sorts of aspects around explainability, and it's often a similar set of people who will be interested in helping work through the answers to those kinds of questions.
Timothy D. Welsh (37:25):
Daniel, I know you've thought about this a little bit. What's your point of view?
Daniel Catone (37:29):
Yeah. I mean, whose data is it? That's a really good question to ask, I think. If you were to go to a regular person on the street who isn't involved in this all day long, like my wife, I come home, I talk about these things, she looks at me like I'm an alien sometimes, and I ask her, what do you think about your data? And she says, it's my data, it's about me, I don't want it in someone else's wallet. And I think that's a fundamental question we have to ask when we're building these systems and interacting with the general public: whose data is it? We might have legal answers; we might say, well, they signed our terms of service, it's our data now. Nobody else thinks that way. Think about how the general world thinks about something like their religious preference.
(38:08):
In the work that we do, for example, that is one of the most personal data points you can possibly have about somebody. And they don't even see it as a data point; they just see it as who they are as a person. And then we in the tech world say, yeah, give us that information so we can sell you a product. That's what I mean, not to harp on the trust issue, but from a data control standpoint: we have to remember it's not our data, per se, and we have to treat it in a fiduciary capacity, the way we treat their money.
Timothy D. Welsh (38:37):
Oleg, you're out there building systems supporting thousands of advisors every day. What goes into your thought process?
Oleg Tishkevich (38:43):
It's a very, very serious issue, and you need to take a very serious look at how you're addressing it. Forget about AI for a moment; I'm talking about just managing data. Newer, more modern data management systems, data lakes, data lakehouses, provide the capability not only to bring all this data together but also, and I'm going to give you one more jargon word today, to track data lineage. Okay? If you guys want to Google that. Think of it like FedEx, right? You give FedEx a package. When it actually gets to the recipient, is it the same thing inside the package? How do you know? Data lineage is essentially that. When you're getting data from multiple sources, you're also applying certain business rules that transform this data or merge it with different types of other data.
(39:48):
How do you know that the data that started at the source actually ended up intact at the end of that whole transformation process? That's a very big aspect I think we've not talked about: the data delivery part, and the assurance, from a data governance perspective, that the data you're receiving can be traced back to its original source and audited based on that information. With that said, historically in our industry, the only historical data we think about is really the transactional data on the investment account. The account is a great example of a ledger, but the challenge with different systems is that any kind of alteration of data, whether it's a name or a social security number, needs a ledger too. You need to know not only what the information you're looking at is today, but how it actually changed and who changed it. Because in terms of compliance and all the regulatory challenges, the regulations we see come up every year, and I think they're doing their job, the goal is to ensure you have traceability of everything that's happening with the very important personal data you're handling. And another point is being able to provide consent. How do you actually deal with consent, with who gets to see what data and what data is being shared with whom?
(41:33):
Those are the types of things we try to solve on our platform at Invent, because that's all we think about. Because when you walk into the office of any RIA today, I agree with Michael, there's no big data problem; there are huge data problems, okay? That's the reality, because the way the data is handled, the way consent is handled, the way data sharing is done, that definitely needs to be solved.
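[Editor's note: the data lineage idea described here, tracing a record through every transformation like a FedEx package and keeping a ledger of who changed what, can be sketched in a few lines of Python. This is an illustrative toy, not any panelist's actual platform; the record fields, step names, and actor labels are all hypothetical.]

```python
import hashlib
import json
from dataclasses import dataclass, field


def fingerprint(record: dict) -> str:
    """Stable hash of a record's contents, like a tamper-evident seal on a package."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


@dataclass
class TracedRecord:
    """A record plus the chain of transformations it has been through."""
    data: dict
    lineage: list = field(default_factory=list)

    def apply(self, step_name: str, transform, actor: str) -> "TracedRecord":
        """Run a business rule and append a ledger entry recording who ran it."""
        before = fingerprint(self.data)
        new_data = transform(dict(self.data))
        entry = {
            "step": step_name,
            "actor": actor,
            "hash_before": before,
            "hash_after": fingerprint(new_data),
        }
        return TracedRecord(new_data, self.lineage + [entry])


# A record arrives from a hypothetical custodian feed...
rec = TracedRecord({"account": "A-1001", "owner": "J. Smith", "balance": 500_000})

# ...and passes through a business rule that normalizes the owner's name.
rec = rec.apply("normalize_owner",
                lambda d: {**d, "owner": d["owner"].upper()},
                actor="etl_pipeline_v2")

# The lineage shows every step, who ran it, and before/after fingerprints.
for entry in rec.lineage:
    print(entry["step"], entry["actor"])
```

Each `apply` call records the actor and the before/after fingerprints, so an altered name or social security number can be traced back step by step: the ledger-of-alterations idea from the panel.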
Timothy D. Welsh (42:07):
So Devon, any thoughts there? And then I've got a follow up for Andrew.
Devon Drew (42:10):
So what we struggle with at Assetlink is that we sell to enterprises, but there's often a massive disconnect between what the enterprise wants and what the individual advisor wants. The enterprise wants the data to be digitized so that if an advisor is leaving, say you go from LPL to Merrill Lynch or whatever the case is, the intelligence around that data stays with the enterprise. The advisor wants the data and the intelligence to stay with them. So from a governance standpoint, as an enterprise you want to build a bulletproof case for all of your different data sources: what happens if it gets out? But you also want that same data to work for you, so that if there's a trend analysis on which advisor has a higher propensity to leave the firm based on all these different data sources, that's very important for the enterprise. But the end user, the advisor, wants the opposite; they want the autonomy. So I can see it from both sides. From a governance standpoint, just make sure the vendors you're working with have cybersecurity insurance and SOC compliance with ongoing auditing, so that if something goes wrong, which we know at some point something will, you and your clients are protected.
Timothy D. Welsh (43:28):
So Andrew, my question to you is, wasn't the blockchain supposed to solve this? Where are we with that? Or does that just go away after the crypto craze?
Andrew Brzezinski (43:36):
Blockchain isn't gone, but you see, what is that now, it's been about 10 years since it came around and got everybody ultra excited. So talk about another hype cycle, and then the longer play of seeing how it will translate into substantial, potentially disruptive impact on the whole back end of the financial industry. It's still there, it's still coming, but it's finding its own reality to play out.
Timothy D. Welsh (44:09):
So we just got to cut. Yeah, go ahead.
Daniel Catone (44:11):
Oleg made a really important point about the FedEx package, and one of the questions we have to think about in terms of data is: you have a FedEx package, what's in there, and who gets to know the details of what's in there? Because you're hiring somebody to transport the package from this point to that point, and the end user, from a transparency perspective, and this comes back to trust, has no idea who is getting to see the contents of that package. You've filled out a form and said, oh yeah, it's random clothing, when it's really your underwear or something. But the reality is you don't want that exposed to 58 people, and you might not want to expose it even to Merrill Lynch; you want only your financial advisor to know it as he goes to LPL. These are massive questions of data access that also correspond with data clarity and purity and all of that.
Oleg Tishkevich (44:55):
So can I suggest a little poll here? You guys have been sitting for a while, so let's see if we can do a little interaction. Right now, take your advisor hat off, put it on the table, and put your consumer hat on for a second. Alright, I'm going to ask you a question. The data you have that covers all of your financial accounts, personal information about your finances, your birthdays, your family, all that stuff: whenever you're dealing with a financial advisor, say part of an RIA or a financial institution, whatever that is, you're entrusting that person with that data. As a consumer, who do you feel that data ownership should sit with? Do you feel that you should be deciding how that data is distributed to whatever systems? If you do, please raise your hands. Can you raise your hands? So it's consumer driven, okay. Now let me ask you another question. Do you think the advisor you're dealing with should be the one deciding how that data is handled and who gets to see it?
Daniel Catone (46:12):
With consent?
Oleg Tishkevich (46:13):
With consent, right. Okay. How many people think that the institution the advisor worked at, or moved to or from, should have the power to decide where that data goes? Thumbs down, not a single person. I just wanted to get a sense from the audience, right?
Timothy D. Welsh (46:37):
So wrapping up here, just a couple minutes.
Oleg Tishkevich (46:40):
There's a question right there. Yes,
Timothy D. Welsh (46:42):
Fire away. Is anybody out there?
Audience Member Amy Young (46:56):
Okay, yeah. Tech wizard. I'm Amy Young from Microsoft. I want to double-click on this notion of how you pick your use cases and the due diligence applied to selecting a use case. And I know, ROI, and I know, solving a real business problem, but I want to double-click beneath that, and here's why. Meeting summarization is a feature that everybody knows adds value, increases efficiency, et cetera. But I had a thought yesterday when I was looking at some of the meeting summarization demos: they all had something about the customer's assets. So you do an initial discovery and the person says they've got 500 grand or whatever, and that gets memorialized in a note. And the problem I have with that is, of course, the number is wrong an hour later. So when you're thinking about use cases, and this is a data question, because over time meeting summarization needs to get smart enough to have that person's portfolio value populated by the portfolio management system, not by something that was uttered in a note 12 months ago. That's just an example of how we all need to think much more deeply about these use cases. So I'm curious what process you're all applying in your use case selection: who's involved in evaluating use cases, and do you have a process to make this more rigorous, to identify random things like that?
Daniel Catone (48:48):
If I can just say, Amy, that's an excellent question. I would not want data being fed from outside sources into a note-taking system to purify those notes into accuracy, because that's not what the note-taking system is actually recording. The note-taking system should be recording the statements of the client and the statements of the financial advisor. So I think the separation of systems into their proper silos is fundamental to this question.
Timothy D. Welsh (49:14):
Yeah, lightning round here. So 30 seconds each. Yes.
Audience Member Amy Young (49:17):
Don't get hung up on that specific example, right? It's about thinking: if I'm using that summary, what am I going to use it for, and how do we make sure it's useful for what I'm going to use it for?
Oleg Tishkevich (49:26):
So your question is about which data is actually accurate: if it's the same data from different sources, which one should I trust?
(49:33):
The technical answer to that is you need to have a master data model, and you need to have essentially a golden record. So you take all the information you get and rank it by priority: if it's coming from the custodian, that should probably be more accurate than what the advisor typed up in a note. That's part of any good data governance platform. You have to have an MDM, and you need to have essentially a golden record.
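[Editor's note: the master data model and golden record idea can be illustrated with a short Python sketch. The source names and priority ordering below are hypothetical; a real MDM platform does far more, but the core idea, per-field survivorship driven by source trust, looks like this.]

```python
# Each source reports its own view of the same client; lower number = more trusted.
SOURCE_PRIORITY = {"custodian": 0, "crm": 1, "advisor_note": 2}


def golden_record(records: list) -> dict:
    """Merge source records field by field, letting the most trusted source win."""
    merged: dict = {}
    provenance: dict = {}
    # Visit sources from most to least trusted; the first writer wins per field.
    for rec in sorted(records, key=lambda r: SOURCE_PRIORITY[r["source"]]):
        for key, value in rec.items():
            if key == "source" or value is None:
                continue
            if key not in merged:
                merged[key] = value
                provenance[key] = rec["source"]
    merged["_provenance"] = provenance
    return merged


# Three hypothetical views of the same client.
views = [
    {"source": "advisor_note", "portfolio_value": 500_000, "goal": "retire at 60"},
    {"source": "custodian", "portfolio_value": 483_250.17},
    {"source": "crm", "email": "client@example.com"},
]

gr = golden_record(views)
print(gr["portfolio_value"])  # the custodian's figure wins over the advisor's note
```

Whether the custodian or the CRM should win a given field is itself a governance decision; the point of the sketch is that the rule is explicit and the provenance of every surviving field is kept alongside the golden record.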
Timothy D. Welsh (50:02):
So Andrew and Devon, 30 seconds each. We got to go.
Devon Drew (50:04):
Yeah. So for me it's about context ingestion, right? Finding the stakeholders necessary to inject the right type of context. And then when I use my note taker, I'm doing nothing other than having it synced to my CRM system, and then I'm able to add the opportunity size and trigger workflows from there.
Andrew Brzezinski (50:24):
So I like this question a lot. Even with these meeting notes, when I'm talking to the vendors, this is pretty much my straight-up question: how much productivity gain do you expect this to create for advisors, sustainably? And that's the way it's important to think about use cases. If we could do a time study of an advisor and understand exactly how they spend all of their time, where's the biggest way we can create productivity for them sustainably, not in some way where we're kicking the can and the time gets wasted elsewhere? That's important to be able to recognize. So find those use cases, find the big ways you can create that impact, and then try to understand the end-to-end cost of delivering on that use case and supporting it, right? There's change management, there's education, there's probably user experience and product development. All of that needs to be baked in. It's not just about accomplishing something highly technical; it's more than that.
Timothy D. Welsh (51:21):
Awesome. Well, I want to thank my panelists here, outstanding conversation. Thank you all for listening, and we'll see you next time.
Good Clean Data: The Essential Element in Effectively Using AI
November 12, 2024 3:19 PM
51:36