
Ong Ai Ling is Head of AIOI (Artificial Intelligence of Investments) at Lion Global Investors. She started her investment career in London and has 18 years of experience managing Asia-Pacific and global equity investments. She debunks the myths surrounding Artificial Intelligence in Investing in our True or False series.
ChatGPT can tell me what stocks to buy. I can trust the financial advice from an AI.
Ong Ai Ling: False. You should not rely on a generic AI (Artificial Intelligence) model like ChatGPT for financial advice. AI models are like tools in a toolbox: you need to know which tool to use for which purpose, and how it needs to be customised for that purpose. ChatGPT is a very advanced Large Language Model. It was built to learn and replicate human language, specifically for communication purposes. It may appear very intelligent, but that is really just because it is very knowledgeable. It is like having a lawyer friend who is very knowledgeable about the law and a smart person. But if that lawyer has not been trained in finance, would you really want to take financial advice from them?
In fact, here at Lion Global Investors, we also have a transformer-based model, and its architecture is very similar to the transformer architecture that ChatGPT uses. But we have trained it specifically on financial data and tuned its parameters for financial purposes. This is just one of the many models in our toolkit, and we use multiple models to achieve the best financial outcomes.
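To make the idea concrete, here is a minimal, hypothetical sketch of what a transformer-style model over financial data can look like. This is not Lion Global Investors' actual model: the FinancialTransformer name, feature counts and layer sizes are all invented for illustration.

```python
# A toy sketch, assuming PyTorch: a small transformer encoder reads a
# window of daily feature vectors and a linear head produces a single
# next-period return estimate. Every number here is illustrative.
import torch
import torch.nn as nn

class FinancialTransformer(nn.Module):
    def __init__(self, n_features=8, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)   # project features into model space
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)             # next-period return estimate

    def forward(self, x):                  # x: (batch, time, n_features)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])         # read off the last time step

model = FinancialTransformer()
window = torch.randn(16, 60, 8)            # 16 samples, 60 days, 8 features
print(model(window).shape)                 # torch.Size([16, 1])
```

The architecture is the same family as ChatGPT's; what changes is the data it is trained on and what the output head is asked to predict.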
The more complex a model, the better it will be at predicting financial returns.
Ong Ai Ling: Interestingly, false. In the artificial intelligence and machine learning world, we generally gauge the complexity of a model by its computational requirements: in other words, the number of iterative calculations the machine needs to make determines its computational complexity. The market often assumes that the more complex the model, the better it must be. But very often we find that a well-tuned, properly customised, simpler model using properly cleaned inputs can be more robust and perform better than a complex black-box model.

Furthermore, the inputs themselves are an art and a science. Data cleaning and feature engineering are just some of the terms you will hear data scientists use. They refer to the choice of inputs, how you prepare them and how you normalise them. All of these decisions require some form of domain knowledge, by which we mean the experience and knowledge you pick up in the field of finance over time. You often hear the phrase "garbage in, garbage out". This is particularly true in finance, because the data we work with is inherently very noisy, subjective and changes all the time.
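As a toy illustration of the point, the sketch below pits a simple regularised linear model against a much more complex one on deliberately noisy synthetic "returns". The data and models are invented; the point is only that complexity does not buy accuracy when the signal is weak.

```python
# Toy comparison, assuming scikit-learn: weak signal buried in noise.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                        # 10 features, mostly noise
y = 0.5 * X[:, 0] + rng.normal(scale=1.0, size=500)   # weak signal, heavy noise

simple = Ridge(alpha=1.0)                             # simple, regularised
complex_model = GradientBoostingRegressor(n_estimators=500, max_depth=5)

print("simple :", cross_val_score(simple, X, y, cv=5).mean())
print("complex:", cross_val_score(complex_model, X, y, cv=5).mean())
# On data this noisy, the complex model tends to fit the noise and
# score worse out of sample than the well-tuned simple one.
```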

At Lion Global Investors, we adopt a “human-in-the-loop” approach. We believe that by constantly checking the accuracy of our inputs and the consistency of our models, we will be able to produce a better outcome.
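A hedged sketch of what such a check on inputs can look like in practice is below. The field names and thresholds are made up; the idea is simply that automated guards flag suspect data for a human to review before any model is allowed to run.

```python
# Illustrative input checks, assuming pandas. Thresholds are invented.
import pandas as pd

def validate_inputs(df: pd.DataFrame) -> list[str]:
    """Return warnings for a human reviewer; an empty list means clean."""
    warnings = []
    if df.isna().any().any():
        warnings.append("missing values present")
    # Daily moves beyond +/-50% are usually data errors, not real returns.
    if (df["daily_return"].abs() > 0.5).any():
        warnings.append("implausible daily return (>50%)")
    if not df.index.is_monotonic_increasing:
        warnings.append("timestamps out of order")
    return warnings

prices = pd.DataFrame({"daily_return": [0.01, -0.02, 0.9]})  # 0.9 is suspect
issues = validate_inputs(prices)
if issues:
    print("Hold for human review:", issues)  # a human decides; the model waits
```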
AI machines can build new AI machines themselves. They do not need humans anymore!
Ong Ai Ling: Both true and false, actually. From a technical perspective, yes, it is possible: you can have one Siri speaking to another Siri. In the same way, you can tell ChatGPT to program another algorithm to do X, Y and Z. Technically it works, but what we have found is that it tends to lead to eventual model collapse.
There was a piece of research published in 2023 called “The Curse of Recursion”. It describes an experiment using a self-feeding process, in which one AI is fed the outputs of another AI. Eventually, both AIs started producing rubbish; both models simply collapsed. This highlights the importance of having a “human-in-the-loop” approach, or what in AI terms is called “reinforcement learning from human feedback”. At Lion Global Investors, we adopt a “human-in-the-loop” approach. We believe that having human checks and balances, whereby we constantly check the accuracy of our inputs and the consistency of our models, produces a better outcome.
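The collapse is easy to reproduce in miniature. The toy simulation below, loosely in the spirit of “The Curse of Recursion” (Shumailov et al., 2023) but not the paper's actual language-model experiment, repeatedly refits a simple Gaussian “model” on its own generated samples; the tail-trimming step stands in for the way finite sampling loses rare events.

```python
# Toy model-collapse loop: each generation is trained only on the
# previous generation's output, and the spread of the data shrinks
# until the original distribution is forgotten.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, size=1000)            # real data: N(0, 1)

for generation in range(10):
    mu, sigma = data.mean(), data.std()           # "train" on current data
    data = rng.normal(mu, sigma, size=1000)       # generate synthetic data
    data = data[np.abs(data - mu) < 1.5 * sigma]  # rare tail events get lost
    print(f"gen {generation}: spread (std) = {data.std():.3f}")
# The spread decays towards zero: each generation forgets the tails
# of the one before it.
```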
Ask ChatGPT the same question 10 times and it will give you 10 different answers.
Ong Ai Ling: True. ChatGPT and other Large Language Models work on what we call a “probabilistic basis”. In other words, the model deliberately introduces an element of randomness. The outputs are not deterministic, i.e. they are not repeatable. So when you ask the same question repeatedly, it can give you a different answer each time you ask it.
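A small sketch of why this happens: language models sample the next token from a probability distribution rather than always taking the single most likely one. The tokens and scores below are made up for illustration.

```python
# Probabilistic vs deterministic token selection, with invented scores.
import numpy as np

tokens = ["buy", "sell", "hold", "wait"]
logits = np.array([2.0, 1.5, 1.2, 0.3])           # model's raw scores
probs = np.exp(logits) / np.exp(logits).sum()     # softmax -> probabilities

rng = np.random.default_rng()                     # unseeded, like a live chat
for _ in range(5):
    print(rng.choice(tokens, p=probs))            # may differ run to run

print("deterministic:", tokens[int(np.argmax(logits))])  # greedy: always "buy"
```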
However, in finance we generally prefer deterministic models: when I start with the same inputs and keep every other parameter the same, the model should give me the same output. Even when we want to introduce some randomness into a model, we do so in a very controlled fashion. We might look at sampling at controlled intervals, and even then, we have to think about how we average out the outcomes and how we interpret the probability distribution or the confidence intervals that the outcome represents.
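Here is a minimal sketch of randomness used in that controlled fashion: a seeded Monte Carlo simulation whose results are exactly repeatable and are summarised as an average with an interval. The return assumptions are invented.

```python
# Controlled randomness: a fixed seed makes the simulation deterministic.
import numpy as np

def simulate_annual_return(seed: int, n_paths: int = 10_000) -> np.ndarray:
    rng = np.random.default_rng(seed)              # fixed seed => same output
    daily = rng.normal(0.0003, 0.01, size=(n_paths, 252))
    return (1 + daily).prod(axis=1) - 1            # compound 252 trading days

returns = simulate_annual_return(seed=7)
lo, hi = np.percentile(returns, [2.5, 97.5])       # 95% interval of outcomes
print(f"mean {returns.mean():.2%}, 95% interval [{lo:.2%}, {hi:.2%}]")
# Re-running with seed=7 reproduces these numbers exactly.
```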