How will AI change finance?

April 2023. March was a huge month for developments in artificial intelligence (AI). GPT-4’s impressive performance on coding tasks and standardized exams raised many questions about what jobs would be automated. In finance at least, we believe that humans plus AI will beat AI.

The month of March 2023 may be remembered as one of the most consequential for AI development. OpenAI’s GPT-4 model proved to be a significant improvement over its previous iteration, with impressive coding skills, the ability to accurately describe and interpret images, and better than 85th-percentile performance on college admission exams and the US bar exam. An early paper studying the model described it as showing “sparks” of the holy grail of AI research: Artificial General Intelligence – the ability of a machine to learn any task a human can.

This, along with a recent flurry of other AI advances, was enough to prompt an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, with high-profile signatories including Elon Musk and Steve Wozniak.

Whether or not this letter will have any effect is as yet unknown, but it is clear that today’s generative AI models display a degree of general intelligence and are already capable of dramatically changing the way we work.

Some estimates suggest that as many as 300 million jobs could be affected, and that global GDP could be boosted by as much as 7% over a 10-year period.

So a natural question is: How will this affect finance?

AI may still have a way to go before it can provide good advice on its own. GPT-4 still ‘hallucinates’ – confidently and convincingly presenting incorrect information – although less frequently than its predecessors. Moreover, it is not well suited to discussing the future, tending to draw on what other sources have said rather than formulating new ideas.

As AI improves, its biggest hurdle may be earning trust. How can we trust an entity that is not accountable for its actions? Moreover, trusting an AI implicitly requires putting faith in the company that trained it. Regulators in particular will be paying attention to whether bias arises from the underlying data on which AI programs are trained. There is a risk that AI could make discriminatory decisions or exploit consumer biases and vulnerabilities.

For now, the role of AI will likely be to enhance human decisions. Here, significant progress has already been made. Bloomberg recently created BloombergGPT, a large language model trained on forty years of financial documents. Banks are also testing chatbots that can quickly search large repositories of research and data and provide detailed summaries of complex information. Human verification will remain imperative to make the most effective use of these tools and to guard against hallucinations and bias.

These tools will allow financial professionals to spend less time finding information and more time understanding it. They will promote original ideas over pre-packaged advice. But for now, humans plus AI will beat AI.

Tao Yu, Quantitative Analyst
