A 2017 report by Investment Technology Group projected that electronic equity execution in the US and Europe would grow to around 57% and 55% respectively in 2019, up from 48% and 50% in 2015.
But as the market for electronic execution grows, so does the competition. The sell-side is under constant pressure to attract clients with additional services, from trading analytics tools and unbundled commissions to high-touch execution advisory.
Most important of all, however, is the ability to provide the highest-quality algorithmic execution possible, which means constantly testing and tweaking algorithms to adapt to dynamic market conditions.
But if sell-side firms are all using exactly the same sets of historical data to drive the trading decisions made by their suites of algorithms, could they be unintentionally arriving at similar trading outcomes, leading to 'herding' and ultimately increasing the potential for future flash crashes?
Flash crash flashback
Though 2010 was a long time ago, the flash crash that wiped US$1 trillion off the US market hasn’t yet been forgotten by the trading community.
According to the SEC, a large sell order in E-mini futures triggered the event, resulting in what was then the largest intraday point swing in the Dow Jones index's history. The herd was in full panic mode.
During such situations you certainly wouldn't want your participation algo slicing out orders and crossing the spread. Not only would your algos perform poorly, but selling into such panic would only further exacerbate an already bad situation.
Child orders competing with the lowest latency players for bids in fast markets where everyone is selling rarely produce favourable fills for clients.
But while the 2010 flash crash was a once-in-a-decade event that no one could have predicted, how can the sell-side best test that its algorithm models will perform under such stressful conditions without feeding into herding trends?
The role of agent-based modelling
This is where agent-based modelling (ABM) comes into play: it allows businesses to simulate the behaviour of complex systems by modelling individual heterogeneous agents and how they interact with one another.
Rather than modelling the outcome of a particular scenario directly, the outcome is the behaviour that emerges from the simulation itself.
For example, using ABM you could modify the behaviour of participant agents to simulate a fast market based on a specific news event, such as the US Federal Reserve unexpectedly changing rates.
In response, specialist or market-maker agents might quote wider spreads, retail agents might stop trading, and sell-side agents might send smaller child orders.
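Those agent responses can be sketched in miniature. The following is an illustrative toy, not a production ABM: the agent classes, the parameter values (spread in basis points, child order sizes) and the shock reactions are all hypothetical assumptions chosen to mirror the behaviours described above.

```python
# Toy agent-based sketch: heterogeneous agents each react in their own
# way to a simulated news shock. All classes and numbers are illustrative.

class MarketMaker:
    def __init__(self):
        self.spread_bps = 2       # normal quoted bid-ask spread

    def on_shock(self):
        self.spread_bps *= 5      # widen quotes in a fast market


class RetailTrader:
    def __init__(self):
        self.active = True

    def on_shock(self):
        self.active = False       # step away from the market entirely


class SellSideAlgo:
    def __init__(self):
        self.child_order_size = 1000

    def on_shock(self):
        self.child_order_size //= 4   # slice much smaller child orders


def simulate_shock(agents):
    """Apply a news shock to every agent and return their new state."""
    for agent in agents:
        agent.on_shock()
    return agents


mm, retail, algo = simulate_shock([MarketMaker(), RetailTrader(), SellSideAlgo()])
print(mm.spread_bps, retail.active, algo.child_order_size)
```

The point of the sketch is that no agent is told the market-wide outcome; each only follows its own local rule, and the aggregate picture (wider spreads, thinner participation, smaller flow) emerges from their combined reactions.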
In essence, by running large numbers of identical simulations, a quantifiable sense of likely market impact can be calculated, ultimately helping firms optimise every trade they execute while avoiding the potential for herding and feeding into market panic.
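A minimal Monte Carlo sketch of that idea follows, assuming a made-up square-root-style impact proxy and Gaussian agent noise; the order size, book depth and coefficients are hypothetical placeholders, not calibrated values.

```python
import random
import statistics

def one_run(rng, order_size=10_000, depth=500_000):
    """One simulated execution: a square-root impact proxy (in basis
    points) plus random noise standing in for idiosyncratic agent
    behaviour. All parameters here are illustrative assumptions."""
    base_impact_bps = 10 * (order_size / depth) ** 0.5
    noise = rng.gauss(0, 1)
    return base_impact_bps + noise

def estimate_impact(n_runs=10_000, seed=42):
    """Repeat the identical simulation many times and summarise the
    resulting distribution of the market-impact proxy."""
    rng = random.Random(seed)
    samples = [one_run(rng) for _ in range(n_runs)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_bps, stdev_bps = estimate_impact()
print(f"expected impact {mean_bps:.2f} bps, stdev {stdev_bps:.2f} bps")
```

In a real ABM the per-run randomness would come from the agents' interactions rather than a single noise term, but the aggregation step, many runs collapsed into a distribution of likely impact, is the same.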
Outdated back-testing models only reconstruct a past event against which you can tweak your algorithms, but that event may never happen again in exactly the way it previously unfolded.
Amongst the many benefits of agent-based modelling is the forward-looking ability to predict possible futures of markets or executions. With the flexibility of each agent, you can modify specific inputs at a granular level, observe how other inputs are influenced and assess what effect they may have on the entire environment.
It provides a new paradigm of forward-testing algorithms against any plausible scenario, giving insight into the robustness of algos under different conditions and enabling sell-side execution desks to more effectively tune their solutions.
Ultimately, electronic trading will continue to grow, and so will markets. While there may not be another flash crash quite like the one we saw back in 2010, you can bet there will be other extreme trading events for which back-testing with historical data can never truly prepare you.
So the question is simple: do you want to be prepared for the worst, or do you want to stay part of the herd?