
Artificial Intelligence and its Discontents

The application of artificial intelligence to financial markets has been making headlines in recent weeks. Each day we see new articles about the countless and evolving use cases of chatbots and machine learning for trading and other varieties of capital allocation. But I’ve long thought that the only thing more interesting than the capabilities of AI is its limitations.

In a recent opinion piece, Wall Street Journal special writer Gregory Zuckerman pointed out that results generated from the application of artificial intelligence to investment strategies “haven’t been especially impressive”.

This is certainly true. However, Zuckerman went on to state that the ‘one big problem’ responsible for the lack of results is simply that “investors rely on more limited data sets than those used to develop the ChatGPT chatbot and similar language-based AI efforts.” He would seem to imply that, in time, larger data sets will enable AI to generate more reliable returns than humans.

Such confidence is likely to be misplaced for two reasons, which are demonstrated by the application of algorithms in the trading industry.

Firstly, as I wrote in the Wall Street Journal on 26 March 2020, program trading magnifies black swan disturbances. This is because one of the most important inputs to the algorithms currently used for computerized trading is volatility itself. This means that when true volatility hits the market, the computers exacerbate problems that humans are in no position to mitigate or control.
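The feedback loop described above can be sketched in a few lines of code. This is a toy illustration only, not a model of any real trading system: the volatility-targeting rule, the shock size, and the feedback coefficient are all invented for the sake of the example.

```python
# Toy sketch of a volatility feedback loop (all numbers are illustrative).
# Many algorithmic strategies scale exposure inversely to recent volatility.
# When a shock raises volatility, they all cut positions at once, and that
# forced deleveraging can itself add to volatility.

def simulate(feedback: float, steps: int = 200) -> float:
    """Return peak volatility when deleveraging adds `feedback`
    units of extra volatility per unit of exposure cut."""
    vol = 1.0       # baseline volatility
    exposure = 1.0  # aggregate algorithmic exposure (target: 1 / vol)
    peak = vol
    for t in range(steps):
        shock = 3.0 if t == 50 else 0.0        # a one-off "black swan" shock
        target = 1.0 / vol                     # volatility-targeting rule
        selling = max(0.0, exposure - target)  # forced deleveraging
        # volatility decays toward baseline, plus shock and feedback terms
        vol = 1.0 + shock + feedback * selling + 0.9 * (vol - 1.0)
        exposure = target
        peak = max(peak, vol)
    return peak

print(simulate(feedback=0.0))  # no feedback: the shock passes through
print(simulate(feedback=2.0))  # feedback: deleveraging amplifies the shock
```

With the feedback term switched on, the same external shock produces a strictly higher volatility peak, which is the sense in which the algorithms magnify, rather than absorb, the disturbance.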

Secondly, collective human insight, when aggregated across markets, tends to cancel out its own mistakes in a way that computers don’t. For any poor decision made by a given trader – whether because they lacked experience in that sector or simply forgot their coffee that morning – a better decision made by a more experienced and alert trader will cancel it out, ensuring liquidity is maintained across the market.

By contrast, and for the same reason they magnify black swans, computers provided with the same data sets will tend to adopt the same strategies as one another, whether or not those strategies are successful, serving to amplify market failures even at smaller scales.
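The contrast between the two paragraphs above is essentially a statistical one, and a brief simulation makes it concrete. All figures here are hypothetical: the point is only that independent errors average out while a shared error does not.

```python
import random

random.seed(42)

# Illustrative sketch (hypothetical numbers): each decision carries an
# error term. Independent errors largely cancel in aggregate; an error
# shared by every participant does not cancel at all.

N = 10_000

# "Humans": 10,000 independent errors, each uniform in [-1, 1]
independent = [random.uniform(-1, 1) for _ in range(N)]
human_net_error = sum(independent) / N

# "Algorithms" trained on the same data: one shared error, replicated N times
shared = random.uniform(-1, 1)
algo_net_error = sum(shared for _ in range(N)) / N

print(f"independent errors net out to ~{human_net_error:+.4f}")
print(f"correlated errors remain at    {algo_net_error:+.4f}")
```

The independent errors net out to a value close to zero, while the correlated case simply reproduces the shared error at full size: risk dispersed versus risk concentrated.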

Fundamentally, the ‘one big problem’ is not just that past data sets cannot be relied upon to predict the future behaviour of markets (as the author acknowledges) but rather that computers tend to concentrate risk whereas humans disperse it.

So long as this is true - for the foreseeable future at least - the machinations of the market will remain as mysterious to artificial intelligence as the workings of ChatGPT appear to us.



Daniel Schlaepfer
Select Vantage Inc
This post is from a series of posts in the group:

Artificial Intelligence and Financial Services
