What We Can Learn From the Robots: AI's Role in Investing Decisions

Gathering and analyzing information is key to making sound investment decisions, and AI can do that in seconds. There are so many ways in which AI can create significant opportunities as well as save significant time. But are clients (and advisors) ready for this?

In this session, we'll look at the ways that AI can assist in analyzing data, reviewing market trends, and forecasting, as well as address the challenges that exist when outsourcing these functions.

Transcription:

Brian Wallheimer (00:10):

All right, welcome. I think we're going to get some people still trickling in as we go, so we'll take it a little slow to start, but welcome to our panel, What We Can Learn From the Robots: AI's Role in Investing Decisions. I've got some great folks up here today. I'm just going to run down the line real quick. If you wouldn't mind, guys, introduce yourself, say a little bit about who you are and what you do. We'll start here with Chris.

Chris Shuba (00:33):

Awesome. So Chris Shuba with Helios Quantitative Research. We're an in-source CIO for financial advisors, so we handle the analytics of holdings, models, and then ultimately portfolio construction.

Harry Mamaysky (00:47):

Hello, I am Harry Mamaysky, CIO at QuantStreet Capital. We use data science and machine learning tools to do asset allocation strategies for our clients. We have two types of clients: we manage SMAs for people, and we also provide our model portfolios and analytics as a subscription service, so we have some clients in that part of the business as well.

Ravi Koka (01:13):

I'm Ravi Koka. I'm the founder and CEO of StockSnips. My background is computer science and AI. I've been around AI for longer than you want to know. I was lucky that my mentor was the inventor of speech recognition, so I got introduced to AI a long time ago. What StockSnips does, and we use AI as a broad term: we use natural language processing to derive a very unique sentiment signal, which is a proxy for investor sentiment based on news stories. We read a large volume of mainstream financial media news and derive a signal from that, and then we use machine learning and deep reinforcement learning techniques for asset allocation and portfolio optimization. We launched our first AI-powered ETF on NASDAQ in April. So thanks.

(02:14):

Brian.

Brian Wallheimer (02:15):

Great. We're going to get to the big question that I talked to these guys about right before we started here, and that is: if you were here this morning and heard Michael Kitces talk, he said, if you're selling me software that says I can use this to make investing decisions and make a billion dollars, I don't believe you. So we're going to get there. I just want you to know that before everybody's hands go up. When he said that, I said, I know three guys who probably have something to say about that. So we'll get there. But I want to start off first with, when we're talking about AI in this space, in the investing decision space, what are we talking about? How are people today employing AI in investing decisions, or what is the big-picture dream of this? Anyone can jump in here.

Ravi Koka (02:58):

Well, the big picture is that, based on my research, the markets are non-stationary and non-linear. Sorry to be technical, but what it means is you're dealing with patterns that are constantly changing, just like in autonomous driving. If you're driving on a road, you can't assume the conditions on the road will be the same the next day. The objects could be different, there could be a ditch, or the road could be blocked, so you need continuous training. So that's one of the biggest learnings: the stock market is non-stationary, and therefore most models fail, because patterns change. AI, technically or hypothetically, is supposed to deal better with non-linear and non-stationary environments. So that's why I think that while there are many use cases for AI, and you can use them for operational efficiency, you've heard quite a bit of that, I really think the bigger problems, like being able to construct portfolios, optimize portfolios, risk management, fraud detection, these are some of the more advanced cases for AI.
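Ravi's non-stationarity point is often handled in practice by retraining on a rolling window rather than on the full history. A minimal sketch with synthetic data (the regime change, window size, and all numbers here are illustrative assumptions, not StockSnips' actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily signal with a regime change: the mean shifts at day 500
regime_a = rng.normal(0.05, 0.1, 500)
regime_b = rng.normal(-0.05, 0.1, 500)
returns = np.concatenate([regime_a, regime_b])

WINDOW = 60  # retrain on only the most recent 60 observations

full_history_estimate = returns.mean()       # assumes a stationary market
rolling_estimate = returns[-WINDOW:].mean()  # adapts to the current regime

print(f"full-history mean: {full_history_estimate:+.3f}")
print(f"rolling mean:      {rolling_estimate:+.3f}")
```

The full-history estimate averages over both regimes and lands near zero, while the rolling estimate tracks the current regime; the price of adapting is a noisier estimate, which is exactly the complexity-versus-adaptation tension discussed later in the panel.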

Harry Mamaysky (04:06):

My 2 cents is, we live in a world of information overload. No human being can absorb all of the information that one needs to actually make intelligent investment decisions. So AI, machine learning, statistics more broadly, are tools that allow us to synthesize large amounts of information, take our intuitions, the feeling that something should behave this way when interest rates go up and something else should behave some other way when industrial production falls, and systematize those intuitions, giving the models enough data to capture the relationship between the drivers of returns and risk and the actual outcomes. It's a tool that human beings can use to collect vast amounts of information and use it in an intelligent and logically coherent way. That's really what I think it does.

Chris Shuba (05:03):

Yeah, I think big picture is relative to your seat, right? It depends on the problem you're trying to solve. In our world, because we have so many different types of data analytics, we're looking at: how do we more efficiently select what holdings we're going to use? How do we design a model to have a behavior pattern that we're looking for? How do you combine all kinds of different models into a portfolio that optimizes the financial plan? When you think about AI big picture, the frontier for us is not about writing code that somehow predicts the future, things that suddenly can be done that never could be done before. I don't think that's the frontier at the moment. For us, what AI represents is the opportunity for consistency, and that consistency then creates things like compounding effects, or the ability to make better, statistically relevant decisions throughout time. So we look at AI from a big-picture perspective not as a silver bullet that can suddenly give us something we never had before. It just makes us much more consistent in what we were doing previously, and if we start from those types of baby steps, they become real. If you start to think about it from a theoretical perspective, that's where Kitces gets his "I'm going to make a billion dollars." It's a buildup to that point, but I think the big picture of AI is mostly limited to your seat at this stage.

Brian Wallheimer (06:20):

Sure, sure. Who's using this and who isn't using it today? I mean, when we talk about using AI for investing, I think that's where it changes. A lot of people are happy to write an email, a lot of people are happy to take meeting notes, a lot of people are happy to create action items based off of meetings, those sorts of things. Those things feel comfortable and easy and within people's grasp. When you start talking about making investing decisions, I think a lot of advisors say, wait, I'm not sure if I'm ready for that yet, because there are compliance and regulatory issues and all. So who's using this now, in what ways, and what are you hearing in terms of who's not quite ready to let AI into this space? Go ahead, Ravi.

Ravi Koka (07:08):

The earliest users of AI are some very specialized large hedge funds. The most well-known use case is Renaissance. Most of you have heard of them. I mean, they have delivered something like what, 60% annual returns over 30 years. And Jim Simons, who died recently, and some of the people who worked with him, came from Carnegie Mellon University and worked on natural language processing 35 years ago. Now obviously at that time they were using it internally, and no one knows exactly what they use, so it's kind of hard to say. It was a closed fund. They never took money from outsiders. They made a lot of money. They all became billionaires. The other ones that I know of, who use especially natural language processing: Two Sigma uses it. JP Morgan's done a lot of research, but they haven't yet made any offerings, so you don't see an AI-powered model, but they have published papers. You can go to their AI site, and they have published papers, more on the detection of risk and fraud, which is where they have applied it more widely. Those are some of the big ones that I know. I think the opportunity is, can these advanced tools and technology and research, which require a lot of investment, be made available to advisors? Because individual advisors are not going to be able to do this. So is this only reserved for the big funds and the big banks? The answer is no. That's kind of the gap that we are trying to bridge.

Harry Mamaysky (08:50):

I think there's an easy use case and a hard use case. The easy use case is this. If you have a question like, who in the biotech industry in their last four earnings calls talked about the impact of generative AI on protein discovery, for instance, and you are a human being and you want to answer that question, you're going to spend a lot of time reading conference call transcripts. So that's probably not a great use of your time. If you feed all of those things into Gemini or ChatGPT or one of these models, it'll digest the PDFs, it'll identify the part of the call where this was mentioned, and it allows you to get information that human beings would not otherwise have had access to. So that is a current easy use case of AI. That's what the stuff is really, really good at doing.

(09:38):

The harder use case is asking AI, give me a good 70/30 portfolio, because I don't think the models are yet at a point where they can answer that question without using all the tools that financial economists have developed over the last 60 years to do exactly that. The models aren't as conversant in those kinds of tools as a human being. So one day, maybe, and in certain cases they certainly add value to the investment process as well, but I think they're really good for information collection. Where you ask them to reason about markets, and I know our opinions vary on this, but that's okay, it's a tougher ask in my view.

Chris Shuba (10:27):

Yeah, the specter of AI is interesting, because most of the time when you bring it up and ask people how much they know about it, they know very little, because most people don't. And then you ask if they're afraid of it, and half the room says they are. So I think there's an understanding gap that's causing some of the hesitation in use. But I do think that, just like every other piece of technology, as it proliferates in easy areas, the low-hanging fruit, expectations come back on advisors and other people in our industry to have access and do things with AI, because it's driven by their clients. I do know some broker-dealers, whose advisors we support, that don't allow even having ChatGPT loaded on their phones or anything of that nature. So we are in that gray area of knowledge about what AI is, access to it, compliance departments. But ultimately, just like any other tool that's effective, the expectation of access from everyday people and clients, I think, is going to drive usage whether everybody wants it to or not.

(11:32):

So it's a question of how advanced we get as far as where we deploy it. We write all of our own AI in house, all of our code, and we've had a lot of success helping advisors adopt it very comfortably, because we don't build generative AI, so there's no risk of hallucinations or runaway algorithms. We limit things to known areas of machine learning, neural networks that are controlled. And I think that's step one in having comfort. But step two, it also comes down to efficacy. One of the things about quant analytics gen one is you had to predict in advance what averaging effect you wanted to occur over time and then just suffer through the goods and bads. So if any of you have used quant models in the past, sometimes they work great, sometimes they work horribly, right? And you hate that, but that was the price tag, because you're looking for an average over time. In the AI space, you can actually work backwards. You don't have to predict the algorithm you need to write anymore. You can simply take the available data and use it to write a new one instantaneously, to understand the most statistically relevant decision at that point in time. That creates more consistency throughout time, and I think it's understandable in that context to start bridging some of these fears. So I don't think that we're too far off from it being table stakes, but it's more about understanding, and that will breed comfort in my opinion.

Brian Wallheimer (12:57):

Sure, sure. What about biases? I think we talked about this when we talked a few weeks ago. A lot of people talk about the quality of the data being so important that you use to feed these models, and that's true, but is there a concern about building biases into an AI model that causes just as much damage or more than feeding it bad data?

Ravi Koka (13:28):

It's a good question and a tough question. There is no such thing as no bias. You can take any data in the world and you're going to see some bias. The question is, can you minimize it, can you make sure the input is fairly balanced? Take our example: if I were to take all of the news just from the 10-K and 10-Q filings of the companies, or the earnings transcripts, that's just one view. That's the company management view. So obviously it's going to have a management bias. So what we did was we said, look, we need a 360-degree view. So over the last eight years, we have about 50 million articles curated from 25 different sources. It's a 360-degree view, which means that even if there's bias in two or three of these, they're countered by others. That's the best you can do to prevent bias, and I think that's paid good dividends for us.
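The "360-degree view" idea can be sketched as simple score aggregation: one skewed source gets diluted by the others. The source names and scores below are entirely hypothetical, made up to illustrate the dilution effect, not StockSnips' actual sources or weights:

```python
# Hypothetical per-source sentiment scores in [-1, 1] for one ticker.
source_scores = {
    "company_filings": 0.8,   # management's own framing skews positive
    "newswire_a": 0.1,
    "newswire_b": -0.2,
    "trade_press": 0.0,
    "regional_paper": 0.1,
}

# A one-source view would just take management's framing at face value
single_source = source_scores["company_filings"]

# Averaging across independent sources dilutes any one source's slant
aggregate = sum(source_scores.values()) / len(source_scores)

print(f"filings only: {single_source:+.2f}")
print(f"aggregate:    {aggregate:+.2f}")
```

In a real pipeline each source would also be weighted by reliability and volume, but the principle is the same: no single source's bias dominates the signal.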

(14:27):

But is there a way to prove that there's no bias? That's tough. I think that's a tough one. I can say there is bias, because the people writing it are ultimately influenced by their beliefs and their opinions, and there is bias. That bias you cannot eliminate completely. All you can do is counter it by getting this 360-degree view, and you have to be very careful about the sources you choose. Also, unreliable sources, like a lot of the social media sites where the content is not verifiable, that's risky. The person posting may have a vested interest, so that could potentially introduce a different kind of bias.

Harry Mamaysky (15:14):

Bias is an interesting word because bias, for those of you who are versed in this, has a very concrete statistical meaning. Bias is the amount by which your average forecast differs from the actual outcome. So for example, if you estimate some forecasting model based on a sample where Lehman Brothers was up 12% a year, and then you're sitting in '07 making a forecast of what's going to happen to Lehman Brothers in '08, your model will have a bias, because it was trained in a certain regime of the world, and it's going to forecast Lehman should be up 12% like it's been the last 10 years, and then the thing goes bankrupt. So bias, in the context of AI models, introduces an almost insurmountable obstacle, which is the following. When you have a model, it has parameters, some models have a lot of parameters, like the little knobs you turn to calibrate your model to the data.

(16:10):

So in machine learning, the simple models may only have 20 parameters, 30 parameters, 50 parameters, something like that. You go to these large language models: the early generations had a billion, then 10 billion, then a hundred billion, and now they have over a trillion parameters. So imagine having to train a model with a trillion parameters using data from financial markets in 2020 and 2021. I mean, it's impossible. There's not enough data in the world to train a model with a trillion parameters. So this idea of overfitting to the sample that you have puts a very, very tight and really hard-to-overcome constraint on the size of models that could be useful: you can't estimate trillions of parameters using a year of data in markets. So you have to use many, many years. But then markets change and regimes shift, and so there is a tension between how complex your model can be allowed to be and its ability to evolve and change and adapt to markets. And that tension is a very tough one to triangulate in the context of AI.
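The parameters-versus-data tension Harry describes shows up even in a toy setting: fit the same 20 noisy points with a 2-parameter line and a 13-parameter polynomial. This is a generic overfitting illustration with synthetic data, not any panelist's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# 20 noisy observations of a simple linear relationship (the "market")
x_train = np.linspace(-1, 1, 20)
y_train = 0.5 * x_train + rng.normal(0, 0.1, 20)
# A larger out-of-sample set drawn from the same relationship
x_test = np.linspace(-1, 1, 200)
y_test = 0.5 * x_test + rng.normal(0, 0.1, 200)

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (in-sample, out-of-sample) MSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

simple_in, simple_out = fit_and_score(1)     # 2 parameters: simple but robust
complex_in, complex_out = fit_and_score(12)  # 13 parameters for 20 points: overfit

print(f"line:      in-sample {simple_in:.4f}, out-of-sample {simple_out:.4f}")
print(f"degree 12: in-sample {complex_in:.4f}, out-of-sample {complex_out:.4f}")
```

The flexible model always wins in-sample, because extra parameters can only reduce the training error, but it fits the noise, so its out-of-sample error is worse than its in-sample error. Scale the same effect up to a trillion parameters against a year or two of market data and you have Harry's constraint.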

Chris Shuba (17:19):

We could spend an hour on bias, but we have 27 minutes. All right, I'll hit one of them, same concepts. I mean, there's innate bias in the design: every piece of code is written by a person, and they're going to bring themselves to it. So I often talk about quantitative analytics as being the best version of people. You bring in your design, you code it up, then it runs like a top without the human emotion attached to it. And that's the most basic way of thinking about why quant processes can be valuable. Then you have data cleanliness, I think you mentioned that a little bit.

(17:58):

That's a huge forefront right now. There's a super smart guy by the name of Lee Davidson I've become buddies with over the years. He heads up the data division of Morningstar, and the stuff they're trying to do right now just to make data clean would blow your mind. So data management, we spend a ton of time on that, because biases can creep in there. But I tend to try to spin the conversation around bias away from something that is inherently bad into something that really fosters communication and purpose. Oftentimes when you're building, let's say I want an investment model for a risk-averse investor, one that is more sensitive to risk data analytics. We will get whipsawed from time to time, but net on net it's going to seek higher Sharpe ratios. That is a bias. It is also a goal, a purpose, and you're creating parameters, rules, and structures, and choosing what data you'll use in the design to serve that purpose. So I tend to think about bias as both good and bad. You want to innately remove as much bad bias, data cleanliness, things like that, as you can, and then just be communicative about the purpose. With that purpose comes good and bad. Know it, understand it, embrace it, because nobody's expecting perfection out of AI. They just, again, want more consistency, and the communication piece is what bridges the gap between bias and purpose for me.
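For reference, the Sharpe ratio Chris mentions is just mean excess return divided by its volatility, annualized. A quick sketch with made-up monthly numbers: both hypothetical models earn the same average return, but the steadier one scores higher, which is the "consistency over whipsaw" trade-off he describes:

```python
import numpy as np

# Hypothetical monthly returns for two models (illustrative numbers only)
model_a = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.02])  # steady
model_b = np.array([0.05, -0.06, 0.08, -0.04, 0.07, -0.05])  # whipsaw

def sharpe(returns, rf=0.0, periods_per_year=12):
    """Annualized Sharpe ratio; rf is the per-period risk-free rate (assumed 0 here)."""
    excess = returns - rf
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

print(f"model A Sharpe: {sharpe(model_a):.2f}")
print(f"model B Sharpe: {sharpe(model_b):.2f}")
```

Both series average the same monthly return, so the entire difference in Sharpe ratio comes from volatility, which is why a consistency-seeking model can be a deliberate, communicated bias rather than a flaw.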

Brian Wallheimer (19:33):

Sure. All right, so let's get to Michael Kitces real quick. Okay, I could pull up the exact quote that he said, because I'm writing about it right now, but the gist of what he said was: if you're trying to sell me software that you say is going to make investment decisions and make me a fortune, I don't believe you, because you would be silly to try to sell software on that when you could be making, I believe the number he used was a bajillion dollars, right? That's a lot. So Ravi, you apparently went and talked with, what's that, Harry?

Harry Mamaysky (20:08):

That's a lot of money.

Brian Wallheimer (20:10):

Seriously, it would take me like 10 years.

Harry Mamaysky (20:12):

Is that a number? It must be a big number.

Ravi Koka (20:16):

Actually, somebody came and told me. I didn't attend the full session, that last piece I actually missed. Someone came and told me this. So I caught Michael before he left and debated him, and I think I turned him around a little bit. I don't know, we'll see what he says. But here's my point. Not everybody can start a hedge fund. I'm an AI scientist. So many people said to me, okay, if you guys have some secret sauce, why are you giving it away to advisors and others? It's a go-to-market strategy. If you want to become a hedge fund, you've got to raise billions of dollars. You can't start a hedge fund with $5 million. I mean, Renaissance, by the time they started, had a lot of initial capital, that seed capital. If you have enough seed capital, yes, I would go create a hedge fund.

(21:06):

The other thing that I found very interesting, we actually licensed to a hedge fund, and then I found that all the hedge funds in the world put together were only about $20 trillion. And then I looked at the total wealth, there's like $120 trillion, and guess who manages most of it? It's managed by advisors. So I said to myself, from a business model perspective, rather than put all my eggs in one basket, in a hedge fund, and by the way, hedge fund mortality is very, very high. In the last 10 years, most hedge funds have underperformed. If you are down less than 5%, you are fired. And for hedge funds to perform consistently, you need many, many strategies. WorldQuant, which is one of the largest firms, runs hundreds of strategies. Some make money, some don't. So I think the comment, go create your own hedge fund, that's a very flippant comment.

(22:00):

That's from someone who doesn't know the investing world; it's not easy to create and run one successfully. On the other hand, if you can bring technology and solutions to a wide base, and if your returns are reasonable with a good risk management framework, what's wrong with sharing that technology with advisors? Advisors are looking for this too; it's not just the JP Morgans and the BlackRocks who should have access to this technology. So I believe we should democratize technologies like AI so that advisors can benefit from them. You obviously have to prove it. I believe in safe AI. So we run what are called clinical trials, like a drug. We ran our models three years live in the markets before we launched the ETF. So I'll stop there.

Harry Mamaysky (22:48):

I guess I didn't hear the comment, but my response to it would be, it depends what area of the market you're playing in. I mean, if the goal is to outsmart Citadel and Jane Street in high-frequency trading, I don't think you're going to do it, because that's what they're so good at doing and they've poured enormous amounts of money into it. So you have to identify, markets aren't stupid, there aren't arbitrage opportunities all over the place. Markets are quite efficient, and you have to sort of identify what it is that you're after. So to very narrowly answer that question from QuantStreet's point of view, we think there are inefficiencies in the institutional asset allocation process. I won't name who it is, but one of the major multi-trillion-dollar managers has a description on their website of how they make portfolio allocation decisions. To summarize: a PM has an idea, let's allocate more to utilities.

(23:45):

It goes to a committee. The first committee has to approve it. When that committee approves it, it goes to the next committee, higher up in the firm, and they have to approve it, and once they've approved it, the CEO or the chairman of the board has to approve it as well. So from the time that the PM at this institution decides, I'd like to increase my utility allocation from two to 4%, until the firm decides that's okay, it takes a year or two. The Norwegian Sovereign Wealth Fund, before they can allocate more to alternatives, have to have the Norwegian parliament vote on that decision, and then they allocate capital. So the point is, capital is sticky, and for good reasons. You don't want to just whip it around, because it's people's money. But that stickiness creates inefficiencies. And so if you can use your AI tools to identify the same decision-making process that's used by the PM at Vanguard... and now I've named the institution, sorry, unintentional.

Brian Wallheimer (24:43):

Can we rewind 12 seconds?

Harry Mamaysky (24:46):

I mean, it's on their website, so it can't be a big secret. But if you can identify that decision-making process, and if you can make that allocation this month while they're going to reallocate 50 billion based on that same decision-making process but 18 months from today, you're going to have an edge, because there's going to be a little bit of a drift of that asset class in the direction of the allocation that Vanguard, Fidelity, anyone, chooses to make. It just takes them 18 months, and you can do it in a month. Now, will that friction always be there? Maybe not. There'll be other frictions. So I would just totally change the question to: do markets have any frictions? I think they do. Can we use these tools to identify existing frictions in markets better than we can identify the same frictions without the tools?

(25:35):

I think the answer is yes. And then it's a question of distribution. Can you start a hedge fund front-running a 2% reallocation of Vanguard to utilities? Probably not, because that's not what hedge funds do. But could you start an intelligent asset allocation strategy that reallocates monthly and is 12 months ahead of what the big institutions do? You could, and then maybe that generates one or 2% a year of outperformance. So that would be my answer. I don't think you'd become a bajillionaire, but you can do one to 2%, and in our business that's pretty good. So I wouldn't laugh at that. That's my answer.

Chris Shuba (26:11):

Going third stinks. They take all the things to say.

Brian Wallheimer (26:14):

We can reverse it.

Chris Shuba (26:15):

It's totally fine. No, we're good.

Brian Wallheimer (26:16):

You can go first next.

Chris Shuba (26:17):

No, I don't want to. Now I'm in a groove. So was he saying that because he doesn't think you could generate those types of excess returns, or was he saying that that type of software would never see the light of day, because it would be hidden and used for your own purposes?

Brian Wallheimer (26:34):

I think it was a little bit of both. Yeah. The second one might be true, but I think part of what he was saying too was, I don't think you can do it, because if you could, you surely wouldn't be selling software, you'd be a gazillionaire. But I think he was also saying, I just don't think you're going to create an algorithm that blows everything out of the water.

Chris Shuba (26:57):

No, I mean, the trick about becoming a gazillionaire is to start with a gazillion. That's funny.

Harry Mamaysky (27:04):

Or least half a gazillion.

Brian Wallheimer (27:05):

So yeah, half a gazillion, a bajillion, a gazillion.

Chris Shuba (27:10):

So, taking a little bit different tack, since it's not the same thing being repeated: the idea of inefficiencies is huge, at least in the work that we do in the AI space. And the easiest way to think about that is just looking at asset classes. If you're able to take in a vast amount of data at any one given time, characterize it into an environment, see how many of those exact environments existed historically speaking, and then match up a statistically relevant data set that says, well, we want to overweight this and underweight that, there are inefficiencies there just by opportunity. And you can do that across the economic landscape. You can do that within market behaviors, you can do that with volatility structures, you can do that in both fixed income and the equity space.
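One common way to implement the "match today's environment to similar historical environments" idea Chris describes is nearest-neighbor matching on a feature vector. This is a generic sketch with synthetic data and invented feature names, not Helios's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical monthly "environment" features, e.g. [inflation surprise,
# rate change, volatility level], plus the equity return realized the
# FOLLOWING month. All numbers here are synthetic.
history = rng.normal(0, 1, (240, 3))  # 20 years of monthly feature vectors
next_month_return = history[:, 0] * 0.01 + rng.normal(0, 0.02, 240)

current_env = np.array([1.2, -0.3, 0.5])  # today's feature vector

# Find the k most similar historical environments by Euclidean distance
k = 10
dists = np.linalg.norm(history - current_env, axis=1)
nearest = np.argsort(dists)[:k]

# The statistically relevant sample: what happened after similar environments
expected = next_month_return[nearest].mean()
print(f"avg return after the {k} most similar environments: {expected:+.3%}")
```

A production version would standardize the features, weight neighbors by distance, and test whether the conditional average is actually distinguishable from the unconditional one, but the core overweight/underweight signal comes from exactly this kind of conditional lookup.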

(27:55):

So the great advantage that can be exploited is this idea, I'll keep coming back to it, of consistency. Consistency is a fancy way of saying compound rate. So if you can be more consistent about something, and therefore compound at a faster rate than something else, then over a long enough timeline, I think you could get to a bajillion. Would that type of thing see the light of day? I do think those concepts are already there, but it's much broader than just whether it exists. It's how it's used, how it's presented, how it's funded. I think there's a long chain of things that have to occur. But I do agree that the best version of any technology, if it happens to be discovered in a private space, will probably remain hidden. The problem is that's not normally how things get developed. They normally get talked about, battle-tested, exposed, and so forth. They don't stay hidden. So I don't know. I like that he said it. I'll read your article.

Brian Wallheimer (28:48):

I appreciate that.

Chris Shuba (28:48):

Okay.

Brian Wallheimer (28:51):

So let me ask, and I'm going to start with Harry on this one. Let's say we're talking about a company like Vanguard that has this long, long horizon, right, between coming up with an idea for reallocation and actually getting there. Are there opportunities for anybody? I mean, is this an opportunity for smaller firms, for other firms, in a space where maybe they don't have the resources? If you think about who has the money to develop this technology, your Vanguards, your large firms are going to be doing this, your JP Morgans, your Morgan Stanleys, right? Is there an opportunity for some firms to catch up or push ahead for a while, and how long does that last?

Harry Mamaysky (29:42):

Okay, so those are great questions.

Brian Wallheimer (29:46):

There were like nine in there. I'm sorry, a gazillion.

Harry Mamaysky (29:50):

So, two responses, and then I'll share the stage with my colleagues here. The first thing is, it really is an issue of trust, what economists call agency problems. When you delegate trillions of dollars under management to a given institution and they have so many people who are investing, you just have to put really, really rigorous safeguards around how the money can be invested. You don't just want to let any person start doing what they want to do. So the bigger the institution, the more institutional safeguards and compliance you need, because ultimately it's hard to trust everyone in your organization. There are too many people. So one thing is that being small gives you an edge of trust. If you have a relationship with all your clients and they can call you up and ask you a question, and they can say, why did you do that?

(30:39):

I'm okay to do this allocation, but explain to me why you did that. And you can get on the phone and explain it to them. It builds a level of trust that allows you to do allocations more quickly, because people trust you. You don't need those layers and layers of institutional compliance and control. So that's one thing. Now, the disadvantage is scale, but here, humanity has messed up along many dimensions, and one thing we did well is open-source software. I don't know if you guys are familiar, but if you want to build a neural network, you have two options. You can hire a team of 30 software engineers for 20 years and have them build up the infrastructure you need to build neural networks. Or you can go to an open-source repository called GitHub and download something called TensorFlow and Keras, which has been developed by Google over the last 20 years but made freely available.

(31:28):

And the same thing is true for portfolio analytics, for predictive regressions; everything you can dream of is available open source. Plus data: folks like Morningstar and Bloomberg and Reuters, now Refinitiv, and all the data providers out there make data available to small institutions, where you used to have to be Goldman Sachs to have access to that data. So we have access to the data, and we have access to open-source software, which allows us to massively scale based on what other people did and create really innovative analytics without devoting the 20 years of development time. And then we have trust: your clients can just call you and talk to you. So I think those three ingredients allow you to be a little bit ahead of the institutions. Not 10%, but one to 2% a year maybe, right now. At some point that will go away, everything does in financial markets, and then you have to think of something else. But where we are today, I think there's still institutional friction that can be exploited.

Ravi Koka (32:29):

The larger the institution, the more inertia there is. You've seen this in every industry, not just the financial industry. In the technology industry, with IBM, when the PC came, Steve Jobs did it and others did it, and so on. So we've seen this again and again. I think the larger firms, like you said, they have trillions of dollars; they're more focused on managing that and the risks around that, retaining their clients. And this is what happened with IBM. They were so caught up in protecting their mainframe customer base and improving the mainframes that they lost the race on the PC, the mobile phone, all these other innovations that came from small companies. So I truly believe it's the same thing with financial services: innovation is going to come from smaller firms. And then of course the big guys, once they see it's a trend and it's going to impact them in a big way, they'll either acquire or partner. In my previous firm, I challenged IBM, and they eventually became my partner because they could never build what I had built for enterprise software. I would challenge them every year, and they would say, oh no, no, we're going to do this in our labs, et cetera, et cetera. In the end, they became my partner and in fact even tried to acquire my previous company.

Chris Shuba (33:52):

Yeah, I mean, I would say that this is the latest in a long chain of things that are finally making the promise of being a small company real in the financial services industry, the way it's been true everywhere else. You normally say two things about a small business: they're nimble and they're generally focused, right? And the hard part about being in this industry is that, especially in modeling, there are so many permutations. So access to clean data, like you talked about, was one thing you previously didn't have. Access to computing power: now you can sign up for AWS. I remember back when I was at Columbia, I used to have to wait until everyone went home, daisy-chain all the computers in the skyscraper together, ask it a question, and maybe when I came back the next day I had an answer.

(34:38):

Now I can do the same thing from my laptop on the beach just by plugging into AWS. So there's a long chain of democratization that's bringing that concept of nimble and focused back to small businesses in the financial services industry. But I think the coolest thing about AI is that it has democratized the people count. Access was a problem: access to computing power, access to data. But the people one: now I can test millions of ideas at one time, as opposed to write something, test it, write something, test it. The big firms would throw bodies at that. They would throw hundreds of people at it and find the best answers more quickly than small firms. But that's now been democratized. So I think this is a chain of democratization, and we're here now. I don't think it's going to be bad for smaller companies, but it might be a bit of a brain drain.

Ravi Koka (35:35):

One other point: why are we talking so much about AI? It's all because of what happened with ChatGPT, right? AI is 50 years old. The term AI was coined by John McCarthy in the fifties. So it's been building over time: robotics, natural language processing, predictive modeling. There are so many branches of AI. The reason we are talking is because OpenAI, in a hurry, released ChatGPT, which I think was a mistake, because it was hallucinating; it wasn't fully tested. This is the problem with software. It's like Microsoft releasing Windows with bugs in it and letting the people debug it. That Silicon Valley approach, I think, is why we're talking so much about hallucinations and the safety issues around AI. There are many types of AI; you can go to Detroit and you'll see robots doing specific tasks. AI is nothing but automating certain human tasks. And the way I explain it is, if you didn't have submarines and airplanes, you wouldn't be able to explore the oceans and the skies, right? What AI does is automate cognitive tasks. So it's going to allow human beings to do cognitive tasks they otherwise wouldn't be able to do, which is what the submarine and the airplane have allowed us to do.

Brian Wallheimer (37:05):

Well, we have about eight minutes. I want to open it up to questions that anybody has. We have a microphone that can come around. Does anyone have anything for our panel? Back over in the corner here. We got a microphone coming. I probably don't need one. Don't worry about it.

Audience Member 1 (37:20):

So first of all, this is the best panel up here, you guys.

Brian Wallheimer (37:26):

We knew. We knew. That's why we waited.

Audience Member 1 (37:31):

We're recording? Oh, all right. Let me know if I'm talking too loud. Anyway, I know a lot of us in the room are very familiar with Nitrogen, the old Riskalyze. Great company. They promote themselves as not AI, but as utilizing intelligence and technology to help forecast, with a 95% probability, a specific range of return over a 180-day period of time. I'm sure a lot of us here use them. What do you think about that company, or that process? I mean, do you look at that as moving into the AI space? We don't promote it that way; we promote it the way they promote it, which is using technology and intelligence, but not artificial intelligence. So what's your take on a company like that and what they're attempting to forecast, as their version of what we're looking at today?

Chris Shuba (38:30):

I'll go first. I know those guys pretty well. We kind of grew up down the street from each other, almost in the same town east of Sacramento. Well, first and foremost, any form of quant analytics I think is great, right? Anything that creates a process for analyzing anything is better than just kind of winging it. I don't know their exact calculations, but I do know that when I look at it from the outside in, most of those types of calculations are fairly simple standard deviations, right? Going out and saying, well, here's a historical period of time, here's what the risk and return metrics look like, and then this is a probability curve after that. So I know the marketing gets interesting as you think about the word intelligence and whatnot, but those are the types of simple client conversations where you're really just trying to say, hey, are you okay with this?

(39:24):

To gauge something as an advisor, I don't think you need to be super accurate with the prediction. I just think you need to have the conversation around odds. So we're not really building or aiming anything we're doing at things that could be handled by a normal first-gen quant analysis. We would be much more interested in what the odds are of shaping that, or, if we did this by X, how much could you handle? Gauging the relationship between the client's range of emotions and how that needs to behave over time: those are the types of fields we would be interested in. But to my knowledge, that's not an AI calculation.
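As a rough illustration of the "fairly simple standard deviations" Chris describes, here is a back-of-the-envelope 95% return range over 180 days, assuming normally distributed returns and hypothetical inputs; it is a sketch of the general technique, not Nitrogen's actual methodology.

```python
import math

# Hypothetical annualized inputs; not any vendor's actual figures.
mu_annual = 0.07      # expected annual return
sigma_annual = 0.15   # annual volatility

# Scale to a 180-day horizon: mean scales with time,
# volatility with the square root of time.
t = 180 / 365
mu_h = mu_annual * t
sigma_h = sigma_annual * math.sqrt(t)

# Under a normal assumption, ~95% of outcomes fall within 1.96 std devs.
low = mu_h - 1.96 * sigma_h
high = mu_h + 1.96 * sigma_h
print(f"95% range over 180 days: {low:+.1%} to {high:+.1%}")
```

With these made-up inputs the range is wide and asymmetric around zero, which matches the audience member's observation that a broad band like "negative five to plus eight" is almost always right.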

Audience Member 1 (40:04):

Yeah, I didn't think that it was AI per se, because they give you a big range. I mean, the range is negative five to plus eight. And granted, we've been using them since the very beginning, and I wouldn't say 95% probability; I'd say they've been almost spot on a hundred percent of the time, because of the range that they're looking at. And they're a big company and everyone loves them and all that sort of good stuff. I just didn't know: would you, or anyone, feel comfortable saying, hey look, this is our first step into utilizing technology to help you shape your risk-return preferences over the next 12 months? And the regulators love it. I mean, the SEC, we've gotten our random audits, and as soon as you tell them that you're using Riskalyze, Nitrogen, they're like, oh, this is great. So like I said, I almost came here to ask this question of you guys, just to get each of your takes on it.

Harry Mamaysky (41:01):

My quick take is that the portfolio allocation process has three parts, and I think you described one of them, which is trying to figure out what the risk characteristics of each one of the asset classes look like. The second part is that you need to think about what the average return is likely to be: where in that range is the likeliest outcome, call it that. So one part is forecasting risk, the other is forecasting return. And the third part is that you have to combine it all into a portfolio allocation decision, and that has all sorts of its own subtleties wrapped around it. So I would say it's an integral part of the process, but it's one part: the risk forecasting part. You also have the return forecasting and then the portfolio construction part of the process. And again, I don't know the company; I don't know if they weigh in on those parts as well, but

Ravi Koka (41:51):

Right.

Harry Mamaysky (41:52):

So I think it's all of those things together that you need to be doing.
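Harry's three parts can be sketched in a few lines. The numbers below are invented, and step (3) uses a textbook unconstrained mean-variance rule, which is only one of many ways to turn risk and return forecasts into weights; real processes layer constraints and estimation-error handling on top.

```python
import numpy as np

# (1) Risk forecast: covariance matrix of two hypothetical asset classes.
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])

# (2) Return forecast: expected annual returns for the same two assets.
mu = np.array([0.08, 0.04])

# (3) Portfolio construction: unconstrained mean-variance weights,
#     proportional to inverse(cov) @ mu, normalized to sum to one.
raw = np.linalg.solve(cov, mu)
weights = raw / raw.sum()
print(weights)  # with these inputs: [0.6 0.4]
```

Changing either the risk forecast or the return forecast changes the weights, which is why Harry treats all three steps as one integrated process.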

Ravi Koka (41:56):

You're talking about Riskalyze, right? Which rebranded as Nitrogen, is that correct? No, risk is very important. And there are several companies. Have you heard of the Barra factors? The 40 factors, owned by MSCI; Barra was the guy who came up with them. They look at 40 different characteristics, right? Volatility, momentum, growth, all of those. Returns: one-month return, three-month return. But none of those can be right a hundred percent of the time. It's a probability thing. So if the rating is good 55, 60% of the time, then it's a decent product. That's what you should look at, and it is going to vary from time to time. Even the best of them fail. Remember the GameStop episode, with that whole Reddit thing, WallStreetBets? All risk management systems failed at that time, because people had AMC and all these meme stocks in their portfolios. Our phones actually started ringing: hey, wait a minute, can your sentiment maybe explain some aspect of risk that none of these 40 factors are explaining? So we actually ran a research project where we took those 40 factors, added nine of our sentiment factors, and did a whole bunch of risk analysis, and we did not reach a final conclusion, because even adding our factors did not explain all of the risk.

(43:29):

You can have these outlier, black swan type events that you can't predict. I think it's a good platform for risk rating, but there are others also.

Brian Wallheimer (43:41):

We've got a question up front here. Grab the mic so we can get it recorded, and then this will be the last one.

Audience Member 2 (43:52):

Thank you. Thanks for the panel, great stuff. I'm curious whether you think AI is going to shift investment management away from traditional buy-and-hold and toward much more tactical portfolios. It seems that many advisors, as well as the space, have stuck to this buy-and-hold strategy, the path of least resistance. And with AI being more predictive, perhaps it becomes a tool that advisors, as well as investment management, can gravitate toward.

Chris Shuba (44:20):

Yeah, a hundred percent agree. In fact, that's almost all the work we do: is it worth changing, when, how much, every trade in and out? So it's one thing to have a hunch; it's another thing to unwind it. Magnitude is always there too. Again, I keep saying it because I'm drinking water; it does weird things. It's all about consistency over time. The problem most advisors have found is that it's not about whether or not they're tactical. It's the consistency with which they would make tactical trades in and out, plus a magnitude that wasn't repeatable through time. And so the net result was often worse than just sitting there and letting it be. When you throw AI into the mix, now you have a programmatic way of doing all of that, one that always runs as designed. So the efficacy of transacting becomes more realistic at an advisor level. But it's not just there.

(45:20):

Models are one thing; it's also portfolios. The great frontier is not so much in having a model that will adjust. It's having teams of models that all understand exactly how they're combined in a portfolio together, specific to a client's needs and risk tolerances, so that once an advisor sets that portfolio, it actually changes its shape over time automatically, and the advisor's not sitting in front of the client saying, yeah, I really only make changes whenever I meet with you, so we'd better meet from time to time, type of deal. The process is the key to unlocking what you just said. It's not whether or not tactical has value; it's really just whether it can be done consistently.
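One hypothetical way to make tactical changes run "as designed" every time is a simple drift rule: trade back to target only when any weight moves past a threshold. The function and numbers below are purely illustrative, not any firm's actual process.

```python
def rebalance_if_drifted(current, target, threshold=0.05):
    """Return the target weights if any holding has drifted past the
    threshold; otherwise leave the portfolio alone. Illustrative only."""
    drift = max(abs(c - t) for c, t in zip(current, target))
    return list(target) if drift > threshold else list(current)

# 6-point drift exceeds the 5-point threshold, so this trades back to target.
print(rebalance_if_drifted([0.66, 0.34], [0.60, 0.40]))

# 2-point drift is within tolerance, so this holds the current weights.
print(rebalance_if_drifted([0.62, 0.38], [0.60, 0.40]))
```

The value isn't the rule itself; it's that a coded rule applies the same trigger and the same magnitude every time, which is the consistency Chris says most advisors couldn't sustain by hand.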

Brian Wallheimer (46:00):

I hate to cut this off, but we're over time. Ravi, it looks like you had something to say. Maybe you can add it real quick.

Ravi Koka (46:05):

Just this: in the last 10 years, passive took over 50%, so active strategies are under pressure. The promise of AI is to improve active strategies so that they can compete against the systematic passive strategies. So actively managed, AI data-driven strategies with an ETF wrapper: we see that as the future.

Brian Wallheimer (46:29):

Thank you so much, everyone on the panel. Thank you guys. Thank you so much. Everybody, on to the next session.