“Latency” is an issue that we hear a lot about over in the Capital Markets and Financial Technology spaces. It looks like it may well have an impact in the payments space too.
A lot of the talk about Payments and SEPA concerns the bank-to-bank part – a relatively small volume of messages about high-value transactions. That is looking at the payments “iceberg” from above the water line, top-down. It is not looking bottom-up at the rest of the iceberg: the 90+% of messages that cause most of the work.
Suppose the share of card-based payments rises and the share of cash-based payments falls: the result is more electronic messaging. The communications cost of messaging is normally based on how big a message is and how long it occupies space in the network. If you buy a 100 Megabit connection, the cost per megabit is lower than on a 1 Megabit connection, thanks to economies of scale. Put another way: the faster the network, the less time each message occupies the network, so you are charged less for the space.
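The arithmetic behind that claim can be sketched in a few lines. The message size below is an assumption chosen purely for illustration – real payment messages and tariffs vary by scheme and provider:

```python
# Illustration of how link speed changes how long a message occupies the network.
# The 4 KB message size is an assumption, not a real scheme's message size.
MESSAGE_BITS = 4_000 * 8  # an assumed ~4 KB payment message

def transmission_time_seconds(link_mbps: float) -> float:
    """Time the message occupies the link, ignoring propagation delay."""
    return MESSAGE_BITS / (link_mbps * 1_000_000)

for mbps in (1, 100):
    t = transmission_time_seconds(mbps)
    print(f"{mbps:>3} Mbit/s link: message occupies the link for {t * 1000:.3f} ms")
```

On these assumed figures, the same message ties up a 1 Megabit link for 32 ms but a 100 Megabit link for only 0.32 ms – a hundredfold reduction in occupancy, which is the basis for charging less per megabit on the faster pipe.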
Looking at it from the major retailers’ viewpoint, a primary goal at the point of sale is to move customers faster through the checkout.
Card authorisation has to go from the checkout all the way up to the card processor and back again, so latency directly affects how fast you can move customers through the checkout. Today, card processors talk about sub-second latency – the way we talked in the Capital Markets space just a handful of years ago.
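A rough sketch of why the retailer cares: if the authorisation round trip sits on the critical path of every sale, shaving latency translates directly into lane throughput. All the figures below are assumptions for illustration, not measured checkout data:

```python
# Illustrative only: how authorisation latency feeds into checkout throughput.
# The 45-second handling time per customer is an assumption.

def customers_per_hour(handling_seconds: float, auth_latency_seconds: float) -> float:
    """Customers served per hour at one lane, assuming the authorisation
    round trip is on the critical path of every sale."""
    return 3600 / (handling_seconds + auth_latency_seconds)

BASE = 45.0  # assumed scanning/bagging/payment handling time per customer
for latency in (1.0, 0.1, 0.001):  # from roughly a second down to a millisecond
    print(f"auth latency {latency * 1000:>7.1f} ms -> "
          f"{customers_per_hour(BASE, latency):.1f} customers/hour")
```

On these assumed numbers the gain from sub-second to single-millisecond authorisation is modest per lane, but multiplied across thousands of lanes and peak-hour queues it is the kind of margin large retailers chase.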
How soon will it be before we start talking about card payment messaging latency in single milliseconds? (Or would you rather make one of those world-famous long-term predictions like Digital Equipment’s “No home will ever need a computer”!)