
11.01.2023

The Present Realities of AI in Investment Management

The clear value proposition of a novel technology can be elusive at the beginning; the shock of a completely new way of doing something hitting an established system creates a mix of excitement and confusion that is difficult to quantify. On one hand, there is the inevitable astonishment at the sheer magnitude of new possibilities the emerging technology represents; on the other, an inability to articulate what value those new possibilities might actually have.


The Rocky Road to “Here”

Artificial Intelligence is not an emerging technology; it has been in real-world application for almost as long as it has been a part of our collective consciousness. Early iterations sought to automate sufficiently simple decision making at an increased rate of calculation, freeing up time humans could spend on the more complex, nebulous aspects of whatever problem they were trying to solve. And this did, ultimately, work: the background processes that run in every computer system on the planet are the most basic example of this type of "intelligence". Resources are made available or throttled in response to changes in user behaviour and demand, without the user having to process any of the instructions affecting those outcomes themselves. What is new is the breathtaking complexity and speed of the decisions AI is able to make on our behalf, and our collective awareness of that power.

At Kaiju, we started using AI exclusively for non-Volatility Arbitrage trading decisions almost half a decade ago, after having been an exclusively quantitative investment manager for almost the same amount of time before that. The reason we did so was an extension of the reason we adopted exclusively quantitative processes in the first place: mathematically sound, emotionless decision making. That is not often listed among the primary strengths of AI-assisted or AI-directed investment management, but it belongs near the top of the reasons the technology has a bright future in that specific application. It doesn't sleep, get cranky, lose focus, wonder about retirement, suffer from hormone imbalances, get hungover or over-caffeinated, revenge-trade, or otherwise make any of the mistakes humans make routinely. It processes instructions and makes decisions with increasing efficiency every single time it does so, while constantly refining the criteria it uses in pursuit of better and better outcomes - all without any intervention on our part once it has been sufficiently established. But the path here wasn't always smooth.

The earliest iterations of the systems we use today, while powerful compared to humans, were quite primitive compared with the AI of five years later. They took an enormous amount of time to train - months - and lacked the elegance and sophistication we've come to admire today. They were very slow to detect and react to changes in the market, and would quite happily keep banging away, allocating capital to a broken pattern, convinced in between retraining cycles that it would "all work out in the end". In early 2021, a very early predecessor of our current ARCⓇ (AI Risk Containment) system failed to perceive (or understand) a stratified, spiral sector rotation pattern that appeared in US capital markets. Because we had not yet invented our Regime Classification or Regime Change Detection engines (which are now applied at every level, from underlying security to industry, sector, and broad market), there was no mechanism by which the system could register or weigh the anomaly effectively. Had the pattern unwound slowly, and had the system had time to learn it over several cumbersome retraining sessions, it might have adapted in time to avoid being hurt - but it did not. The pattern unwound quickly, the system failed to react at all, and five months of consistent outsized profit were lost in a single monthly options cycle. We took the system - and the strategy it ran on - offline shortly after.
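To make the idea concrete, here is a deliberately minimal sketch of the kind of check a regime-change detector performs. It is not our Regime Change Detection engine, just an illustrative, hypothetical example: a recent window of returns is compared against a longer-run baseline, and a flag is raised when the two diverge. The function name, window length, and threshold are all assumptions.

```python
import numpy as np

def regime_change_flag(returns: np.ndarray,
                       window: int = 60,
                       threshold: float = 3.0) -> bool:
    """Illustrative regime-change check: flag when the most recent window of
    returns drifts far from the longer-run baseline (a simple CUSUM-style test).
    Window and threshold are arbitrary example values."""
    if len(returns) < 2 * window:
        return False                      # not enough history to form a baseline
    baseline = returns[:-window]          # older history defines "normal"
    recent = returns[-window:]            # most recent behaviour
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    if sigma == 0:
        return False
    # cumulative sum of standardized deviations inside the recent window;
    # a large excursion suggests the series is no longer behaving like its past
    excursion = np.abs(np.cumsum((recent - mu) / sigma)).max()
    return bool(excursion > threshold * np.sqrt(window))

# The same check could be applied to a single stock's returns, a sector
# index, or a broad market index, each with its own window and threshold.
```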


The Magic of Now

Advances in Deep Learning, supercomputing, and the ability to offload massive multi-node retraining to more efficient Cloud environments have made the ARCⓇ (and all of our complementary systems) vastly more powerful than they were just a few short years ago, increasing speed and capability exponentially. Retraining cycles which used to take weeks to months now take hours to days, and cost less, increasing the number of concurrent engines we can run and leverage for evaluation. We are seeing almost exclusively on-model executions with live capital applications, and the third-party valuations of our technologies reflect that. All that said, complete reliance on AI for anything that falls outside its bailiwick (large-dataset analysis, pattern recognition and prediction) would not be advisable, for a number of reasons.

First, if we are talking about Generative AI, we still have the challenges of reasonable certainty and hallucination to overcome with respect to the accuracy of the information Generative AI collects and processes for us. Without spending time building cross-checking mechanisms, which at present still include a manual component, we cannot be sure the information is of high enough caliber (or complete enough in scope) to underpin vital investment decisions. Generative AI is powerful enough to substantially filter and cull information for us, and that information can create context around where we believe our own analysis is leading, but on its own it is simply not enough.
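To illustrate the kind of cross-checking mechanism we mean, here is a hypothetical, simplified sketch (not a description of any production system): a model-extracted data point is accepted only when enough independent sources agree on it, and everything else is routed to manual review - the human component mentioned above. The names and threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ExtractedFact:
    claim: str            # the statement the generative model produced
    sources: list[str]    # independent sources that corroborate it

def triage_fact(fact: ExtractedFact, min_sources: int = 2) -> str:
    """Accept a model-extracted fact only when enough independent sources
    agree; otherwise route it to manual review (the human-in-the-loop step)."""
    if len(set(fact.sources)) >= min_sources:
        return "accept"
    return "manual_review"

# Example: a single corroborating source is not enough to trade on.
fact = ExtractedFact("Q3 revenue grew 12%", ["10-Q filing"])
print(triage_fact(fact))   # -> manual_review
```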

Predictive AI, on the other hand, is certainly powerful enough to autonomously curate and direct investment strategies profitably - the caveat being: not all investment strategies. For Predictive AI to achieve a high enough degree of certainty to reliably generate profit, the underlying strategy must be built on 1) patterns which repeat with reasonable certainty, in 2) asset classes for which consistent, robust, plentiful data are available. If the Predictive AI in question is tasked with finding patterns in non-standard or "noisy" data (such as the sentiment patterns contained in social media posts), it will suffer from the same challenges it would face if tasked with finding and exploiting patterns in standard market data (price, time, and quantity) where the pattern occurs too infrequently to establish reasonable certainty of its integrity. Likewise, investment ideologies which are global in nature, highly complex in their interrelational dependencies, and heavily reliant on parsing context and nuance from the available data are not strategies with which Predictive AI will be particularly successful.
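A rough way to picture the "reasonable certainty" requirement is to put a confidence interval around a pattern's historical hit rate. The sketch below is a generic statistical illustration, not a description of any particular trading system: it uses a standard Wilson score interval to show that a pattern observed only a handful of times cannot support much certainty, however attractive its observed win rate.

```python
from math import sqrt

def hit_rate_confidence(successes: int, occurrences: int, z: float = 1.96):
    """Wilson score interval for a pattern's historical hit rate.
    Few occurrences -> a wide interval -> low certainty, regardless of
    how good the observed hit rate looks."""
    if occurrences == 0:
        return (0.0, 1.0)
    p = successes / occurrences
    denom = 1 + z**2 / occurrences
    centre = (p + z**2 / (2 * occurrences)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / occurrences + z**2 / (4 * occurrences**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# A pattern that "worked" 8 times out of 10 still spans roughly 49%-94%:
print(hit_rate_confidence(8, 10))       # wide interval -> not enough certainty
print(hit_rate_confidence(800, 1000))   # narrow interval -> usable
```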

Contrast those current weaknesses with what Predictive AI does extremely well: billions of simultaneous discrete examinations performed on a nanosecond timescale, in aid of pattern-driven investment management decisions. No person, group of people, or enterprise of people could come close to the sheer magnitude of mathematical processing power of a single AI decision-making system, and that will never change. But despite that power, AI isn't remotely close to being capable of true innovation - and that is where the incredible potential of a collaborative relationship with its creators is revealed.

 
A Cooperative Evolution

Instead of attempting to separate and silo what AI and humans each do best, carving out space within unique landscapes and passing work back and forth across well-defined fences, we can chart a different path: collaboration. The spark of inspiration is a uniquely human gift, and no machine can now, or will likely ever, be capable of it. But AI can work with us to refine inspiration and invention, offering layers of improvement which in turn direct us down new and exciting paths where imagination might again catch fire. This cooperative process is potentially infinitely repeatable: AI refinement of an original idea might inspire us to invent new branches in a decision tree which would not otherwise exist; the AI can then pivot in response to our behaviour, rewrite relevant portions of its own codebase to adapt, and offer refined efficiencies which in turn optimize the outputs we seek to create. This is where AI currently represents the strongest value proposition globally, and yet, amazingly, it is also the most undervalued use case.

We are used to handing tasks wholesale over to technological systems; once a new technology offers a value proposition which supplants what used to be a human process, we do not expect to be brought back into the loop. A basic example is the thermostat: we don't want to continuously fiddle with temperature; we want to set specific targets for multiple scenarios and have the heating and cooling system simply "make it so". Once email was capable of automatically fetching new mail for us, we saw no benefit in returning to a quasi-manual process. The habits we have formed over time with technologies that automate our world are the very things that prevent us from viewing AI in the different light we will need to in order to enjoy all the benefits it can offer. The reason we haven't done so yet is simple: we've never before experienced this level of collaborative possibility with technology; it has always been one-way, until now.
 

A Promising Future

Imagine, if you will, creating a movie from a collection of clips, or writing a song with specific instruments and vocal ranges in mind - but having at your fingertips a collaborative, suggestive engine made up of the world's greatest film editors (living or dead), or the world's most talented composers from the past 300 years. Imagine the suggestions they might collectively make, not to create the movie or write the song for you, but as inspirational guidance you could pick and choose from, actively rolling the combined brilliance of all that talent in with your own unique insights, to create the best material you were possibly capable of. In that scenario, AI didn't replace you or render your gifts obsolete - it enhanced both. At Kaiju, this is how we have been working with AI for years.

We start with manually managed trading strategies which have repeatedly demonstrated a capacity for substantial outperformance. Our team of AI scientists teaches those strategies to our systems, and the systems then autonomously refine that body of work, usually offering significant improvements. Sometimes this yields round after round of collaborative revision between humans and machines, each honing the outputs of the other, until collectively we've taken the strategy as far as it can go. It's a truly magical process. We, the humans, use our unique gifts of inspiration and innovation to seed the system; the machines then use their unique gifts of quantum refinement to make it better. Because we innately understand context and nuance, we are able to see cracks in the system that the machines cannot, and in turn they are able to process millions of iterations of variable outcomes with speed and accuracy far beyond our capabilities.

The end result of this process is what the goal of all responsible AI innovation and implementation should be: symbiosis.