Regulations are always evolving and often uncertain when it comes to AI. Stay ahead of the compliance curve with AI tools tailored for the wealth management industry. In this session, compliance experts will explore how AI can be utilized to navigate the complexities of FINRA and SEC regulations, as well as the new DOL retirement advice rule. Discover strategies to maintain compliance while minimizing reputational risks and enhancing operational efficiency. This session is essential for firms and financial advisors seeking to ensure their practices align with current and upcoming legal standards.
Transcription:
Marie Swift (00:11):
Okay, is everybody excited to talk about compliance this afternoon? I am. Let's go. So some of you may know me. I'm Marie Swift of Impact Communications. I'm a Marketing and PR professional. So you may be wondering why is Marie moderating a panel on AI and Compliance? Well, if you think about it, marketing, communications, client communications, they all go hand in hand, don't they? And so to me, compliance is one of my best friends when we're writing books, when we're creating social, whatever we're doing for our clients. So I'd like to start by introducing our panelists and get right to it because we only have 45 minutes today. And I know some of you have very specific questions about how to implement AI as it pertains to regulatory compliance. So we're going to start with Sid. Will you tell us who you are and what you're solving for with Surge Ventures?
Sid Yenamandra (01:06):
Absolutely. Yeah. Can you guys hear me okay? Yes. Yeah, awesome. Yeah, so I launched Surge Ventures about two years ago. We are a Silicon Valley based venture studio. We invest almost exclusively in AI-based compliance startups, what we're calling RegTech, regulatory technology. We have four portfolio companies within the fund. We solve for basically four problems: regulatory intel using AI, data management around compliance, smart workflow using AI, and then data security. So those are the four pillars, and we've got products, portfolio companies, that solve for that. Prior to Surge, I launched a company called Entreda in 2012. We built cybersecurity compliance software primarily for RIAs and BDs, which we sold to Smarsh in 2020. And I ran Smarsh's cyber unit for about two years before taking over the wealth unit for Smarsh briefly before leaving. So this space is super interesting; excited to talk about it.
Marie Swift (02:27):
Alright, Vall?
Vall Herard (02:29):
Yes. So my name is Vall Herard. I'm the CEO and founder of Saifr. Saifr was incubated at Fidelity Labs, which is the innovation and incubation arm of Fidelity Investments. At Saifr, what we do is build AI systems. For example, one of the capabilities that we've built is the ability for you to have what I call a grammar check for compliance. When you are creating marketing content, for example, imagine that as you are typing it, an AI assistant can tell you this is potentially non-compliant with FINRA Rule 2210 or SEC Rule 482, and these are the suggested disclosures that we would recommend you have. Or, if it detects something that's non-compliant in what you write, it's able to suggest language to you to make it compliant. We've also built AI capabilities to monitor electronic communications, and other capabilities to allow you to do KYC and AML types of work. Prior to Fidelity, I did a few startups. Prior to that I was in capital markets as an equity derivatives trader; I worked at UBS, BNP, and others. And before that I worked in risk management, building credit, market, and operational risk systems, and building models to try and understand the risk in these various areas.
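To make the "grammar check for compliance" idea concrete, here is a minimal sketch of the pattern: scan draft marketing copy for potentially promissory or misleading phrases and suggest hedged language or disclosures. The phrase patterns, rule mappings, and suggestions below are illustrative assumptions only, not Saifr's actual rules or models.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    phrase: str       # the flagged text
    rule: str         # rule it may implicate (illustrative mapping only)
    suggestion: str   # hedged alternative or disclosure to consider

# Toy patterns for promissory or misleading marketing language.
PATTERNS = [
    (r"\bguaranteed?\s+returns?\b", "FINRA Rule 2210 (promissory claims)",
     "Use hedged language such as 'seeks to' and add risk disclosure."),
    (r"\brisk[- ]free\b", "FINRA Rule 2210 (misleading statements)",
     "Remove; all investments involve risk."),
    (r"\bpast performance\b", "SEC Rule 482 (performance advertising)",
     "Append: 'Past performance is not indicative of future results.'"),
]

def check(text: str) -> list[Finding]:
    """Return all potentially non-compliant phrases found in the draft."""
    findings = []
    for pattern, rule, suggestion in PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append(Finding(match.group(0), rule, suggestion))
    return findings

for f in check("We offer guaranteed returns with a risk-free strategy."):
    print(f"'{f.phrase}' -> {f.rule}\n   suggestion: {f.suggestion}")
```

A production system would pair statistical models with a review workflow; the sketch only shows the flag-and-suggest interaction described above.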
Marie Swift (04:18):
Dan,
Daniel Bernstein (04:19):
I'm Daniel Bernstein, Chief Regulatory Counsel at MarketCounsel, and also a principal at the Hamburger Law Firm. So I don't have an AI product. I look at us as a bit of an intersection between the technology and the humans. The clients that we work with are typically small to midsize investment advisors, small meaning startups that are going to be state-registered advisors, up to firms north of a billion or so. Beyond that, they look to us more for a traditional law firm arrangement. But we are all for the use of technology to make compliance less of a cost center for firms. I think that's a pressure that a lot of compliance officers get: I am spending just for you and we're getting nothing out of that. If we can use other services to allow the compliance officers, the consultants, those with a lot more expertise, to just come in and be that human overseeing the aspects like Sid and Vall were mentioning. When it does kick back to them and say, here are things you may want to look at, there's still somebody that has to look at it. And I think that is the intersection between the technology and the person, and that's what we help them out with.
Marie Swift (05:34):
So the title of this session is about what's allowed, what's not, and what's next. First we're going to tackle the what's allowed and what's not. So Dan, maybe since you're in that milieu every day, you could talk about that.
Daniel Bernstein (05:48):
Yeah. So here's the great thing about the Investment Advisers Act of 1940: it's a principles-based law. You don't have a lot of principles-based laws; they're mostly rules-based. If you look at the various securities laws, between the Investment Advisers Act of 1940, the Investment Company Act of 1940, the Securities Exchange Act, and the Securities Act, the Investment Advisers Act will look sparse and kind of boring; it doesn't really tell you much. And there are advantages to that. It's not a rules-based law, which means you can kind of do anything you want because you're a fiduciary, as long as you're acting in your client's best interest, disclosing conflicts of interest, and following the few rules you have to follow. But besides that, it's really just doing what's in your client's best interest. So there's nothing that you can't do with regard to AI as an advisor. And I think that's a misconception when you sometimes see actions against advisors and people think, well, that means I can't do this.
(06:50):
And really you have to peel back that onion and look at what those layers are. Just by way of example, it's not AI-based, although I'll talk about some of that later if it comes up with enforcement actions. But if you look at it right now, the hot issue is instant messaging, and investment advisors all went, I can't use WhatsApp, because the first few advisors that got in trouble were using WhatsApp. There's no WhatsApp law; there's no rule against instant messaging right now. It's a books and records retention rule. So if you can find out how to make that system work based upon the Advisers Act, you can use anything. And it's the same with AI. There's nothing that you won't be able to use as long as you can minimize, well, eliminate, the conflicts of interest that would harm your client's best interest, and just know that you're responsible for everything that comes out of that AI if you're using it. But that's it.
Marie Swift (07:40):
So I imagine, Sid, you and Vall have some additional comments around guardrails and what's allowed and what's not. How do you solve for that?
Sid Yenamandra (07:47):
Yeah, I mean, maybe I'll take a stab at it. We see AI as an enabler, as a tool; it's not the end-all, it's not the panacea. It's a tool to save time, to help automate a lot of activity that is done with people. And in many cases, if you look at the number of regulations that exist for the SEC and FINRA that are constantly changing, and all the state jurisdictions, there are enforcement actions. I think there was a stat recently that there are about 17,000 different regulatory events that have to be tracked on an annualized basis. If you track everything, it's hard. It's hard for humans to do that. A lot of folks can stay on top of it, but very few that I know would do a great job at it. So I think AI could be a good tool to help automate the understanding of regulations and the subtleties, to summarize the changes.
(08:46):
And so one of our companies, RegVerse, does exactly that. We basically track all SEC, FINRA, and state regulations constantly. We track enforcement actions, and then we gap test RIA firms' cybersecurity policies or their entire compliance manual against those rules and try to find gaps and offer suggestions to improve their policies. And then everything runs through a human to validate, but at least you've established a transcript that is AI-powered. I mean, we were talking about this earlier: it's similar to when you go to a cardiologist and run an EKG. You run the EKG through an AI program and it'll tell you, based on data, roughly here are the issues, but consult with your doctor for the final word. And so it's that second transcript that we see value in with AI. So as long as we're not using AI to tell time or predict risk, we find that that's an area that is perfect for AI models to work on.
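The "gap test" idea can be illustrated with a toy version: check which topics from a rule-derived checklist a compliance manual never addresses. The checklist topics and keywords here are illustrative assumptions; a real system such as RegVerse tracks actual SEC, FINRA, and state rules and is far more sophisticated.

```python
# Hypothetical checklist derived from tracked rules: topic -> keywords.
REQUIRED_TOPICS = {
    "incident response": ["incident response", "breach notification"],
    "access control": ["access control", "multi-factor", "mfa"],
    "vendor due diligence": ["vendor due diligence", "third-party risk"],
}

def gap_test(manual_text: str) -> list[str]:
    """Return checklist topics the compliance manual appears to be missing."""
    text = manual_text.lower()
    return [topic for topic, keywords in REQUIRED_TOPICS.items()
            if not any(keyword in text for keyword in keywords)]

# Flags 'incident response' and 'vendor due diligence' as gaps for a human to review.
print(gap_test("Our manual covers access control and MFA requirements."))
```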
Marie Swift (09:55):
Vall?
Vall Herard (09:57):
If I can build on that a little bit, there isn't a rule against being efficient. And so at the end of the day, what AI is, it's a tool that allows you to be more efficient. I go back to my days in capital markets. We've had models to look at the risk in a portfolio, an investment portfolio, forever, and there's an entire body of knowledge that's been built around model risk management. And so, consequently, a lot of times people talk about AI as if it's this other thing. At the end of the day, it's nothing more than a model that has underlying assumptions, mathematical assumptions, and that has input and output. And to the extent that you can demonstrate that you are using it in a way that is consistent with traditional model risk management and it's making you more efficient, I think it's a net benefit to the industry.
(10:57):
So, for example, I talked about some of the capabilities that we've built. We've taken one of the tools that we have and deployed it in a situation where a financial services company needs to answer questions about income, basically via a chatbot, and the answers provided by that chatbot need to be compliant. As you might imagine, this is happening in real time: being able to generate an answer and check that you are not being misleading in the answer that you give back. It's a great efficiency tool that overall will reduce cost for the end user as well as for the firm. And so I think the adoption of AI from the perspective of being an efficiency tool is something that we're seeing with more and more clients. When we first started Saifr and were speaking with compliance officers about some of the capabilities we were building, there were a lot of skeptics.
(12:02):
But we are at a point now where, if someone is generating content, we can catch about 90 to 95% of what a compliance officer would otherwise catch. And we can do that fairly quickly. However, what we cannot do is say, okay, this meets all of the requirements for a Series 24 principal to sign off on it. To Sid's point, a human still needs to review it, validate it, and sign off on it. So if a company is using the tool, we highly, highly discourage them from using the tool in a way in which they say, oh, it's been signed off by the AI agent, so therefore I don't need to review it, so therefore I'm compliant.
Marie Swift (12:44):
I imagine there are some questions in the audience. We've been talking for 15 minutes; what's bubbling up for you? Nobody? Okay. No bubbles. Go ahead, Dan, I'm going to pop the next question to you. Talk a little bit about guardrails, some smart things that advisors can do to make sure that they're using AI in their compliance process efficiently.
Daniel Bernstein (13:09):
Yeah, so I've looked at things in two different ways: one, compliance with regard to the use of AI in your services to clients, but then also compliance with regard to the use of AI for compliance itself. And that's what my colleagues here have done. And I think there's been less SEC scrutiny, to no SEC scrutiny, in that area at this point, as far as I know. You can maybe tie it to a potential rule, which may or may not happen, about due diligence on vendors. I'd look at it more like that, and I think that might be more of a focus. But for right now, the guardrails are just knowing what Vall mentioned about the 90 to 95% of what a compliance officer would catch. My only concern is that compliance officer then relying a hundred percent on that content, and knowing how much testing to really do. Because what you don't want to have happen is that, within that 5 to 10%,
(14:10):
that's where something big occurred. The good news is most of the Advisers Act does not require a chief compliance officer to know everything that has happened, gone out, or been produced. The marketing rule is a good example of that: the marketing rule as originally proposed was going to require the chief compliance officer to pre-approve all marketing. That did not get into the final rule. That would've been really difficult, and people focused on the pre-approval part, but it's also the approval in general. There are plenty of firms where the chief compliance officer will review all marketing, but there are other firms where that doesn't happen and they just do more of a sampling. And I think the use of AI would allow us to have that as our first run-through and then do sampling. But there has to be testing, and I don't always think that testing should be random.
(15:02):
So something that often gets mentioned is, I'm going to do random testing. But the problem with random testing is it might never catch particular areas, because of that randomness. So I'm a fan of some focused testing. If it is with regard to marketing, for example, look at how the AI reviewed any hypothetical performance, any actual performance, any testimonials; look at specific areas to see if one of those areas fell into that 10%, where the AI just wasn't ready for it yet. And I think between those two things, the AI and those guardrails, you're going to be in a good place.
Marie Swift (15:41):
If you want to ask a question, call my name anytime; Marie is my name. So the next question I have is around data quality and the limitations of AI. Who wants to take that?
Vall Herard (15:54):
I can take it. Well, I mean, I think that, again, when you look at AI, because it's a model, the whole adage of garbage in, garbage out still applies. So one of the reasons, for example, why OpenAI's ChatGPT hallucinates as much as it does is that it's relying on all of the data that's on the internet. And I know that there was someone from Microsoft who spoke earlier, but if you look at, for example, the Phi-3 family of models, what Microsoft was able to demonstrate is that if you take a smaller, high-quality set of data, you can actually build a model that addresses a specific task, and it will do it much better than a large language model such as OpenAI's. So, for example, if you are writing code using an AI assistant, the Phi-3 family of models, which is much smaller than the GPT family of models, far exceeds the performance that you get using that kind of a model.
(17:01):
And so the provenance of the data is one of the questions that, if you are buying AI, you need to ask: where did that data come from? Smaller firms obviously don't have the sort of resources that a Fidelity would have. For example, when we build a model, there are actually two groups of people that we have to get in front of: one group made up of other data scientists who didn't work on the project, as well as people from compliance and people from legal who ask all of those questions. And so there's a vetting process that larger firms can afford to put in place. Smaller firms may not be able to put that in place, but there are still questions they need to ask. And then the other area that I would point to is what Rob mentioned earlier, which is the idea of testing it.
(17:59):
And I think a lot of times, even if you take an AI model that is highly efficient, and we think that within Saifr we build those kinds of models, it may not necessarily capture the risk appetite that is specific to your organization. And so, consequently, one of the exercises that we go through with every client is to calibrate the model to their very specific risk appetite. For example, although the marketing rule applies to everyone equally, certain companies may want to take a little bit more risk than others, as long as it's within the defined risk appetite of the company. You need to have a model that's able to catch that. So that's one of the guardrails. Even if you're a smaller firm and you have someone coming in and selling you AI, you need to do the due diligence of going through the process of saying, okay, at our firm, here's how we look at item X or item Y; let's run through some scenarios before we just buy and implement it.
Marie Swift (19:06):
Sid, anything?
Sid Yenamandra (19:07):
So AI is such a vast topic, and it's sort of like where cloud was 15 years ago: people were worried about moving data and trusting a third-party provider like AWS or Microsoft to manage it. I think we're in the early days of AI adoption. When you say AI, there are just so many parts of AI. Generative AI, which is the ChatGPT genre of LLMs that everybody talks about, is the one that gets the most press. But AI has been around for a while. I mean, having access to data, in this case risk and compliance data, and then using available models to analyze and spot trends within the data has been happening for years. What we see is that the wealth space has a massive data problem, which is that there isn't enough good data. If you look at CRM systems, there's a lot of junk data; even client data is not cleaned up.
(20:16):
I mean, I can tell you that we acquired a company, and we acquired their CRM, a Salesforce instance. Half the data was junk: accounts that haven't been updated in ages, client information that hasn't been updated in ages. So there's a massive data cleanup activity that has to first occur before you can actually build out a model. So I agree with the garbage in, garbage out comment, but I think we're in the early innings of trying to get the data repository, the data archive, centralized and cleaned up. We see a huge opportunity just in that, even before AI comes into the mix.
Marie Swift (20:53):
Well, I think you called it a managed risk model. Anything to add about building a managed risk model?
Vall Herard (21:01):
Yeah, I mean, I have to keep going back to this one analogy. The fact is that in the financial services industry we've been using models forever, and there are a lot of good practices that have been built over time. If you have a market risk system, for example, one of the first questions that you're going to ask is how the system handles the correlation between different assets. And then you'll have someone who will go in and really interrogate that data and try to understand what the correlation dynamics really are. What happens if I change a return assumption in one part of the model? How does that impact returns on other assets? There's a potential correlation issue that you might have, and you need to try to understand that and model it. So, to Sid's point about cleaning up the data: I remember in the early days when we were building these kinds of investment risk management systems, 90% of the time that we spent was not on model building; it was really on looking at and understanding the dynamics of the data itself. And I think that's where a lot of firms need to pay attention when they have a company coming in and saying, okay, we are going to sell you an AI system, because everyone now says that they do AI.
(22:37):
But I feel that it's worth going that extra step and really spending the time asking the company about the data and what went into the system. Because when you look behind the scenes, a lot of these models are not that complicated. Even if you look at these large language models, at the end of the day, each is a probabilistic guessing system. Now, with agentic AI, they are a little bit more sophisticated in terms of being able to go back and reduce hallucination. But at the end of the day, if you are going to be using these systems, it's really incumbent upon you to try to understand the risks and to ask a lot of questions, because the risk can be managed; once you understand it, you can manage it.
Marie Swift (23:32):
Any other concerns or ethical considerations?
Daniel Bernstein (23:35):
Yeah, so I think that was a great segue into some of the stuff that I wanted to bring up. I mentioned earlier that the SEC has not really said much, nor do I think they're as concerned, about the use of AI for your compliance. The concern, as of now, is in certain areas that affect clients, and what Vall mentioned really hits it on the head for the chief compliance officer or the chief investment officer. When you're investing in a mutual fund, or if you're using a third-party manager, the expectation is that you're doing some due diligence: what's their background, what's their performance been, and that's about it. But when it comes to AI, I think there is an expectation right now by the SEC that you are digging deeper. You are getting to know the specific risks of that particular AI that you're implementing. And I think that intimidates some firms, or they don't think they even need to do it, especially as something becomes more popular. If something becomes ubiquitous in the industry, it becomes, well, we trust it because everybody trusts it. And I do think there is an expectation, as the chief compliance officer or the chief investment officer, because the SEC has looked at AI as a potential source of systemic risk, that you're going to be expected to know the risks and conflicts and disclose them. Marie, I think you had a question.
Marie Swift (25:01):
Yes, back here.
Audience Member 1 (25:03):
I think my question actually goes off that last point. My firm has probably filled out a half dozen to a dozen due diligence questionnaires over the past year. They all have AI sections, and it's very clear compliance officers are really struggling to understand how to do that diligence. And even on our side, supplying the technology, we're struggling to even understand some of the questions accurately. So is there anything coming, or out there, that we could work together on as a sort of standard template for AI due diligence, specifically generative AI?
Sid Yenamandra (25:40):
Let me take that. Yeah, so NIST actually has a framework that was recently announced. It's a pretty comprehensive document, similar to a SOC 2 or a SIG Lite, if you're familiar with those for cybersecurity; you would essentially have a similar framework for AI. Then there's also the responsible AI framework, which is very popular. So there are more and more frameworks coming out to certify AI workloads. And there are firms that actually help you fill that out and offload that activity from you, to help you map it. If you're a developer of AI solutions, you can actually get the equivalent of a SOC 2, but for AI, like a responsible AI seal of approval. And that works to some extent. But if you look at banks, we are actually looking at a company right now that's in the model security and model compliance space for AI.
(26:51):
Banks have been doing this for years. If you're using AI to do credit risk scoring, and you probably know this from back in the day, those models have to go through bias testing. There are myriad models available to test AI, so it's AI versus AI. But I think the good news is, in the area we're talking about here, compliance, if you're doing a very specific task, like email review using AI, the risk is lower from an SEC or FINRA standpoint. In fact, we had a conversation with FINRA; their annual conference is going on right now, as you know. FINRA uses AI to catch issues on ADVs. So an RIA or the SEC can use AI to catch issues with ADVs; FINRA uses it for Form CRS or U4s, the same exact technology. I can't tell you how many RIA firms we've worked with that make mistakes filling out ADVs, very simple errors: they're not licensed in a particular state, or to sell insurance. So for those things, I think AI could actually help, because it could supplant what a human eye could catch. They're very specific tasks, and if you're a developer of technology like that and you get a responsible AI seal of approval, it just builds confidence with clients.
Vall Herard (28:21):
And with respect to the point you made about FINRA: we've had conversations with them. I don't speak on behalf of FINRA, but yes, they have built some of these tools. For example, for marketing review, they have tools that can say, hey, we think this is higher risk, let's route it to an analyst for review. So they've built those capabilities. As far as a framework is concerned, if I can go back to that very quickly, I think the EU AI Act is actually a good starting point, because in many ways it builds on a lot of the principles that we've had in Basel regulations for banks forever, which is the idea that you can put things into risk buckets: what is the use case, and is it high risk, medium risk, or low risk? Then you start building out a framework. The Act itself went into effect back in August, but it comes into full force in August 2026. And so the industry there is starting to build fairly enhanced frameworks in terms of how you look at AI on a risk basis: you look at the use case and classify it in terms of the level of risk it represents. I would look to that as a way to understand how to evaluate and look at AI systems.
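The risk-bucketing approach described here can be sketched in a few lines: classify each AI use case into a tier and attach review controls per tier. The use cases, tier assignments, and controls below are illustrative assumptions, not the EU AI Act's actual classifications.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g., credit scoring: bias testing, human sign-off
    MEDIUM = "medium"  # e.g., marketing review: sampling plus periodic audit
    LOW = "low"        # e.g., note summarization: spot checks

# Hypothetical inventory mapping use cases to tiers and required controls.
USE_CASES = {
    "credit_risk_scoring": (RiskTier.HIGH, ["bias testing", "human sign-off"]),
    "marketing_review": (RiskTier.MEDIUM, ["focused sampling", "quarterly audit"]),
    "meeting_summarization": (RiskTier.LOW, ["periodic spot checks"]),
}

def controls_for(use_case: str) -> list[str]:
    # Unknown use cases default to the highest tier until classified.
    tier, controls = USE_CASES.get(use_case, (RiskTier.HIGH, ["full manual review"]))
    return [f"[{tier.value}] {control}" for control in controls]

print(controls_for("marketing_review"))  # ['[medium] focused sampling', '[medium] quarterly audit']
```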
Daniel Bernstein (29:57):
And I would look at the SEC's proposed rule, which, will it ever happen? We don't know. But part of that proposed rule was for the advisor utilizing AI to have policies and procedures with regard to that AI, to do due diligence, and to address conflicts of interest. So if the advisor is asking for it, I would expect the advisor to get a response showing: here are the risks, here are our policies and procedures for verifying that we're using the procedures we say we're using. To my knowledge, the only enforcement actions we've had so far were really not about AI. It was marketing; it was AI washing, like greenwashing, where advisors were saying they were using all these data points and they weren't. So that's what I'd be most concerned about as an advisor: what are the policies and procedures to ensure that the things you say you're doing, you're actually doing?
Marie Swift (30:54):
I saw three hands over here. Are they still there? Go ahead. In the back.
Audience Member 2 (31:00):
Hi, this is for you, Dan, specifically. We see a lot of AI note takers that are out there. How do you think the SEC will, I mean, again, I know you don't speak on their behalf, but what do you think will happen with the books and records rule and meeting recordings or digital communication? Because it's still an open question in all of our minds, and I don't know if anyone has given us a concrete answer yet.
Daniel Bernstein (31:27):
So I like to simplify that. If you look throughout the years at some of the troubling areas that advisors have had, with regard to first using email, using the internet, using instant messaging, which is still difficult, there was this original push of, well, we shouldn't have to keep that. Well, of course you have to keep it. If it's written, then you have to keep it. So if you had a conversation, maybe it wouldn't have had to be kept, but if you decided to memorialize it, I think the Act is relatively clear. And I think the same thing will be true with the note-taking tools: if you've taken a note, and that note discussed something that would otherwise be part of the books and records, then I think you're going to have to keep it. So if the question is, are we going to have to keep those notes: if you had a conversation with a client that got reduced to notes that say, here's what I recommend, I recommend we change your asset allocation in the following ways, then yes, I think that became part of your books and records.
(32:21):
Do I think the SEC will dislike that? No, I think they'll like it just fine.
Audience Member 1 (32:26):
So what about the video recording at that point, if there is one, or the audio?
Daniel Bernstein (32:31):
Same thing. Yeah, so I think the video and audio recording would be the same thing: it depends on whether there's something discussed. If it's just happy birthday, how's everything going, then just because it is written or videotaped doesn't mean it's part of the books and records. It's about the content.
Marie Swift (32:47):
How about here in the front? Did you want to build on anything?
Audience Member 3 (32:55):
Circling back: once you've validated that the model is good and everyone's allowed to use this tool, what is your responsibility then, as far as monitoring how people are actually using it? So I've built a model; my firm is using the model to do X, Y, Z task. Are they using it correctly? What are they putting into it? What's your responsibility and risk management there?
Daniel Bernstein (33:17):
And I'll just be real quick. The firm is the advisor; the rep is never the advisor. So everything that comes out of that rep, or anyone at the firm, is the firm's responsibility. Going back to what I mentioned earlier, it doesn't mean that you're looking at everything, but there absolutely should be training and testing to make sure it's being used properly. Because in the end, the firm will be responsible.
Marie Swift (33:39):
Hands over here, so make sure I see them. Sorry, I stepped on you.
Vall Herard (33:42):
Yeah, no, that's fine. The only thing that I would add to that is: make sure that you have a regular testing program where you're regularly going back, because one of the things that happens with these models is model drift. So having a quarterly, or at least a biannual, testing process, where you take some out-of-sample data and test it to make sure that the model that was originally approved is still working, is what I would incorporate into the process as well.
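The periodic out-of-sample test described here can be sketched as a simple drift check: score the approved model on fresh held-out data and alert when accuracy falls materially below the level recorded at approval. The `model.predict` interface, the accuracy metric, and the 5% tolerance are all assumptions for illustration.

```python
def drift_check(model, holdout_inputs, holdout_labels,
                baseline_accuracy: float, tolerance: float = 0.05) -> bool:
    """Return True if the model has drifted beyond tolerance on out-of-sample data."""
    predictions = [model.predict(x) for x in holdout_inputs]  # assumed model API
    correct = sum(pred == label for pred, label in zip(predictions, holdout_labels))
    accuracy = correct / len(holdout_labels)
    drifted = accuracy < baseline_accuracy - tolerance
    if drifted:
        # Flag for recalibration/revalidation, per the quarterly or biannual cadence discussed.
        print(f"ALERT: accuracy {accuracy:.1%} vs. baseline {baseline_accuracy:.1%}")
    return drifted
```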
Marie Swift (34:11):
Yeah, we're bringing the mic to you.
Audience Member 4 (34:17):
Thank you. I liked the scenario that you were about to go down where you were talking about, sorry, it's Daniel, right? Yeah, where you were talking about the notes that came out of the meeting saying, we agreed to make this trade. What if AI got it wrong, heard it wrong, and then you have to delete or amend, and then that becomes part of the books and records? I've had counsel present it in a similar way: that you can't just delete it and change it. You have to amend it, make sure the client acknowledges it, and it goes down this lovely little compliance rabbit hole. So what's your take on it: what if AI messes it up?
Daniel Bernstein (35:05):
So I don't know if that would be AI as much as just the transcription.
(35:11):
But let me walk through the scenario a little bit more. When I transcribe my texts to my kids, they make fun of me; I don't check it and it says weird things. So if it was something that's just nonsensical, I don't think it would be a problem to correct that before you memorialize it. But if it is something like, I recommend you buy this, when really what you had said is, I recommend you do not buy this, it might be big enough that you keep that record, and then you have a record showing that what was actually discussed was the following. As long as you keep that record in a timestamped kind of way, I think you're going to be fine. As I mentioned, it's case by case. I don't know that it would require going back to the client saying, my transcription said the following, but it was really this, can you please verify that? I don't know that you'll have to go down that route; it'll depend on the case.
Marie Swift (36:06):
Was there a question in the front here? Go ahead.
Audience Member 5 (36:10):
Yeah, it's related to that, because I was just talking with one of the vendors that does pre-meeting prep and post-meeting transcription, and the AI is the tool to get the sound into words, but then it's up to you to go through it, because there are mistakes all the time. Otter AI, I tried using that for a while, and it made a ton of mistakes. So what you end up choosing to put into your CRM is what you take from that tool as its output, and then you can get rid of that output; what you put in the CRM truly memorializes it. So what the vendor was saying was that they have customers where, after a week, the data that's created from a meeting is fully deleted. It's purged completely. So it's only around for a week, and it's just there for you to grab it, make it what it truly needs to be, and get it into the CRM. So that was a big question for us. I mean, does that sound appropriate?
Daniel Bernstein (37:21):
I think it sounds appropriate until we're told it's not appropriate, and that's a lawyer answer, right? It depends. But no, Sid started mentioning this: we're in the first inning. So what you can't do is just ignore the risks. What you would have in your policy, and it could be a really easy policy, it could be bullet points, right?
Audience Member 5 (37:42):
Yeah
Daniel Bernstein (37:43):
We are going to use the following, and then from that transcription, we are going to copy it, we're going to paste it into the CRM, and we are going to make sure it memorializes what we believe it memorializes, and that's going to become the actual record. I think you'd probably be fine. And if you're not, I would most likely think it's going to be a, we don't think that really works, we think you should do the following. But if you take the effort to think about the risks, to think about how it's based on the rules, and memorialize that and do it, then I don't see a rule against that. So yeah, it's principles-based and acting in your client's best interest, and I think you can make an argument you're doing that there.
Sid Yenamandra (38:24):
Let me just add one thing to that. What we're starting to see with a lot of the firms we work with through our portfolio is this notion of an AI use policy. In compliance, you've always got to have a policy for something; for mobile devices, there are policies and procedures. So there is an AI use policy, and in that AI use policy, you'll first inventory all the use cases of AI within your firm and the vendors that deliver them. The vendor due diligence piece is an addendum to that, i.e., you've asked a set of questions to your vendors on how they're using AI in their technology. Then there's documenting how you use the technology within your firm. And like you said, it's a few bullets; it doesn't have to be anything crazy. But there are templates; we've actually got one. If you're interested, I'd be happy to send it to you or anybody. It's a template of, here are the 17 things that you should ask your vendors, and here's a sample procedure that you can keep. Because this is a gray area in the early innings, I agree with you a hundred percent: it's good to document it and just have it somewhere, in case the question comes up of how you use AI within your firm. You're like, oh, I have a policy for that. Let me,
Marie Swift (39:50):
Do you want to say how people can request that? Should they connect with you on LinkedIn, or
Sid Yenamandra (39:54):
Yes. Yeah, just reach out to me, sid@surgeventures.com.
Marie Swift (39:59):
More questions over here? Okay, back here.
Audience Member 6 (40:04):
I think I know the answer to this, but when you talk about this AI use policy and inventorying use cases and the applications that are used for those use cases: in the world of SaaS, the software capabilities of these tools evolve continually. Humans are creative. They use tools for things for which they may not have been intended,
(40:29):
and not necessarily for bad. It could be completely appropriate, well, I won't say appropriate, it's a subjective word, but it could be in the proper performance of their duties, yet you're using something not for what it was intended to be used for. So this morning, in the demos of some of these note-taking tools and note summaries, what struck me is that many of them had asset values in them. The customer has X dollars; that is incorrect an hour after that meeting ends, and it could potentially be very incorrect months later. And so that means this AI use policy that you describe strikes me as just unworkable, given the growing number of features that are inherent in these things. The AI use policy would have to say: you cannot use the dollar values in your meeting notes as input to your advice recommendations; you have to use the book of record, which has the current value. You see what I mean? How silly it gets. And of course, regulators have a distinguished track record of judging these things in the rear-view mirror, right? So I don't know, do you have any thoughts on that?
Sid Yenamandra (42:04):
Look, I think we're, again, in the early days of how these policies are going to be written. The point I was making was just that knowing within your firm what applications you use that leverage AI, and how you use them, at least documenting that at a high level, gives you some level of comfort knowing that you at least have a document. Now, you can get pretty granular; you can get deep in terms of, okay, here's how we interact with this software. I think that's up to you; that's a whole different level. But it's better to at least have a document that says, we've asked all our vendors: do you use AI? Here are the responses. Where do you use AI? How do you use AI? Give me the five features that you use AI for. Document that. And it's stated; it's not validated or verified, it's still self-attestation from the vendor. But it's no different than asking them for a SOC 2: do you protect my data? Yes. It's a self-attestation from them. So I think what you're talking about is going into the granularity of how I interact with this program, what I am doing with it, what kind of data it is. That's a different level.
Vall Herard (43:22):
And also, to that point, we are in the early days, but I think as the rise of agentic AI happens, a lot of these kinds of problems will start to go away. It's the idea that you can couple a solution together by having different agents that look at different parts of the problem. So, for example, you might have an agent that, as part of the transcription, continuously updates the asset value; it can actually be connected to the book of record and go and get the asset value so that it gets updated, right? And this is a capability that you can build right now that you couldn't build, say, a year ago, because that capability simply didn't exist within the AI world.
(44:19):
So that's one example. One of the things that we looked at was transcription. What we found was that a lot of what's out there, when it comes to financial jargon, is actually not that great. And so we ended up building another layer on top of a transcription service that actually understands financial jargon. One of the tools from a well-known company out there, if you are talking about a mutual fund, for example, would transcribe that into something entirely different. Those kinds of things, we now have capabilities to make better. And also, with agentic AI, you can actually break the problem down so that you can actually solve the scenario that you just described.
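The agent pattern described here, refreshing stale figures against the book of record, might look like the toy sketch below. `book_of_record.get_value` is a hypothetical stand-in for a real custodial or portfolio-accounting API, and the regex replacement is deliberately crude.

```python
import re
from datetime import date

def refresh_asset_values(note: str, account_id: str, book_of_record) -> str:
    """Replace transcribed dollar figures in a meeting note with live book-of-record values."""
    current = book_of_record.get_value(account_id)  # hypothetical API call
    stamped = f"${current:,.2f} (book of record, {date.today().isoformat()})"
    # Swap any stale figure like "$1,234,567.00" for the current, timestamped value.
    return re.sub(r"\$[\d,]+(?:\.\d{2})?", stamped, note)
```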
Marie Swift (45:20):
We're actually at time; we're headed into a break. So in your last 30 seconds each, would you please say where you think we're headed in the future of AI and compliance? Dan?
Daniel Bernstein (45:34):
AI will be like email, use of the cloud, use of instant messaging, use of all of these things that advisors kind of fought. Sometimes you have early adopters, sometimes you have late adopters, but everyone will be an adopter at some point soon, even if they didn't think they were doing it.
Vall Herard (45:52):
I would say that it's here and it's not going anywhere. At conferences like this, we don't have specific sessions on cloud technology; I think five years from now, having specific sessions on AI will be a thing of the past, because it will be embedded in everything that we're doing. It's a question of using it responsibly and understanding that it's a model. It has risk; it's going to get some things wrong, which means we all will still have jobs.
Sid Yenamandra (46:26):
I mean, we see AI as fitting really well in compliance, because compliance is lots of policies, lots of procedures, lots of workflow, lots of data, perfect for AI. So I think near term, you're going to see a lot of solutions, and there's going to be a lot of AI washing in compliance; we're already seeing it now. So my only point is: ask a lot of questions. Make sure that you know what you're getting, because AI solutions have a way of finding themselves in practically everything.
Marie Swift (47:04):
Yeah. Thank you all for being here. We'll stick around if you want to come ask personal questions during the break. Thanks everyone.