
Sandy-hit Wall Street faces questions over contingency plans

01 November 2012  |  5950 views  |  6 comments

As trading resumed in New York after a two-day Hurricane Sandy-inflicted pause, the effectiveness of Wall Street's back-up plans to deal with disasters was being called into question.

With business slowly getting back to normal, recriminations were soon flying over the failure to implement contingency plans, with former Securities and Exchange Commission chairman Arthur Levitt leading the way.

Levitt savaged the New York Stock Exchange, telling Bloomberg radio that "to see the exchange go down for two days without an adequate backup plan is very, very unfortunate".

As Sandy approached New York, Nyse had intended to invoke its contingency plan of moving all trading to its Arca electronic platform. However, the idea was shelved amid concerns over employee safety and the potential for malfunctions among market participants still spooked by recent technical glitches such as the Knight Capital debacle.

While most exchanges have back-up data centres located well away from their primary site, Nyse's contingency plan is the Arca system. Levitt, though, told Bloomberg: "If you're going to have a stock exchange, it should have a backup facility of some sort so that regional events don't cause its closure."

Meanwhile, Knight Capital again found itself in trouble after a back-up generator in Jersey City failed yesterday afternoon, forcing the company to tell clients to route stock orders away from it.

Much of Wall Street was operating on backup generators yesterday and - despite the Knight issue - Nyse Euronext COO Larry Leibowitz told Reuters that "as a whole, it's actually going quite well. We haven't heard of widespread problems."

According to the FT, the SEC is now planning to "raise concerns" with trading firms about their contingency plans as part of its ongoing review of the market.

Meanwhile, among the Wall Street firms worst hit by Sandy is Citi, which says its office at 111 Wall St - which houses around 1800 operations, technology and administrative employees - will be unusable for weeks.

Comments: (6)

A Finextra member | 01 November, 2012, 12:26

It's not unusual for banks to have their DR site located fairly close by, on the other side of the nearest river (East or Hudson).  Office in lower Manhattan near the Hudson, DR in Jersey City, etc.  Moronic, of course, but not uncommon.  And DR testing is frequently not a high priority, though that did improve some after 9/11.  But Sandy, far more than 9/11, should wake up some of the most complacent managers.  One US operation of a foreign bank that I know is probably mostly shut down now - its primary site is near the WTC, and its DR is just across the river.

A Finextra member | 02 November, 2012, 08:34

Putting DR centers further away and on higher ground would certainly make common sense. But such an approach would not be very compatible with high-speed gambling (aka HFT), due to longer signal transit times ...
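To put rough, purely illustrative numbers on those transit times (the fibre speed below is an approximation and the distances are hypothetical, not any firm's actual site separation):

    # Back-of-envelope sketch: extra latency added by distance between sites.
    # Assumes a signal speed in optical fibre of ~200,000 km/s (about 2/3 of c).
    FIBRE_KM_PER_SEC = 200_000

    def round_trip_ms(distance_km: float) -> float:
        """Extra delay, in milliseconds, for one round trip over the given distance."""
        return 2 * distance_km / FIBRE_KM_PER_SEC * 1000

    for km in (10, 100, 500):
        print(f"{km:>4} km between sites -> ~{round_trip_ms(km):.2f} ms extra per round trip")

Under that assumption, every 100 km of separation adds roughly 1 ms per round trip - orders of magnitude more than the microseconds HFT strategies compete over.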

Ketharaman Swaminathan - GTM360 Marketing Solutions - Pune | 02 November, 2012, 16:19

Not sure if this is an instance of regulators sitting in ivory towers and passing judgment on banks and FIs, who are the ones really bearing the brunt of the disaster. I partly blame technology vendors for conveying the impression that you can throw in an extra RAID here, install an extra blade server there, run a redundant cable between here and there, and be completely assured of business continuity. Having been through a couple of DR tests, I can say it's virtually impossible to verify that the DR site can be activated and will work fine when the catastrophe actually strikes. Besides, all this talk of technology ignores the people angle. Amidst all the travel disruption that usually accompanies natural disasters, it's not easy to get the right people - who normally work out of the primary site - to the DR site in time, especially if it's located far away from the primary site.

Having said that, banks and FIs should do more than rely on a mop and bucket as their chief DR strategy - which is what one bank allegedly did - in the event of their data centers getting flooded!

A Finextra member | 02 November, 2012, 17:12

Well - actually the organisations running such critical systems are to blame if they believe marketeers who paint low-cost, plain-vanilla PC technology (which today sits inside those complex server farms) as being as reliable as the big iron that was deployed there before. Once upon a time, fault-tolerant (i.e. failsafe) servers were typically used in such trading applications - and those were also designed to switch over easily and reliably to the remote DR system if needed. Of course, that DR switchover was also tested regularly.

And yes, those systems were also built to be run in a lights-out environment, and the (rather small) operational staff could operate them remotely. No need to rush them to the DR site just to activate it ...

Ketharaman Swaminathan - GTM360 Marketing Solutions - Pune | 02 November, 2012, 18:11

@FinextraM:

I agree with your point about falling for sales pitches for PC / Wintel systems claiming to be as failsafe as Tandem, Stratus and other big-iron that supported true redundancy.

As for remote operations, certain activities - e.g. changing switch encryption keys - require a personal visit by one of a very few highly specialized engineers and can't be done remotely for security reasons. I've also come across more than one bank where certain tasks can only be performed onsite. I don't know why such policies exist, but they do pose severe challenges to keeping the lights on in the event of a disaster.

Ron Troy - RT MDS Consulting - New York | 02 November, 2012, 18:35

I've been working in IT and related functions since 1979 (and studying it since about 1971).  I was taught some basics about DR and reliability along the way:

1. DR sites should whenever possible be on separate electric grids, not in major danger zones (flood, tornado, earthquake, etc.), have independent power, have enough capacity to carry your business for months, and have systems hot and backed up either continuously or daily from production.

2. DR scenarios should be tested to varying degrees multiple times per year.  Everything should be tested including incoming data, outgoing trade or like systems, web sites, intranet, backup, etc.

3. It's not a bad idea, where practical, to split production load (think Market Data backbones, for instance) between the prime and DR sites, as long as either one can handle the full load if need be; a minimal routing sketch along these lines follows after this list.  Licensing costs, if carefully handled, might be somewhat higher, but then you know that the DR site is fully operational at all times.  It is critical, though, to have diverse routing from vendor feed sites, something frequently overlooked.

4. A financial firm should have its own network designed to work well even if one or more production locations are lost, and this connectivity should be frequently tested.  Back around 1990, Bankers Trust hubbed its network at 130 Liberty St - across from #1 WTC.  I pointed out at a DR planning meeting that there really needed to be some way to connect if the basement (where the hub was located) got damaged or flooded (I won't repeat the crude words aimed at me for that comment).  But we all know what happened to that building on 9/11 - eleven years later.
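On point 3, a minimal sketch of the split-load idea - the host names, ports and liveness probe below are hypothetical illustrations, not any vendor's real endpoints. A feed handler prefers its usual site but falls back to the other site when its connection probe fails:

    import socket

    # Hypothetical primary and DR feed endpoints; either site is sized to carry the full load.
    SITES = [("feed-primary.example.com", 9001), ("feed-dr.example.com", 9001)]

    def is_reachable(host, port, timeout=2.0):
        """Cheap liveness probe: can we open a TCP connection to the site?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def pick_site(preferred=0):
        """Use the preferred site if it answers; otherwise fall back to the other one."""
        order = SITES[preferred:] + SITES[:preferred]
        for host, port in order:
            if is_reachable(host, port):
                return host, port
        raise RuntimeError("no feed site reachable - escalate to the DR runbook")

Running the same probe from both sites, over diverse routes to the vendor, is what turns an idle DR installation into one you know is actually operational.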

I read this morning that the New York Daily News had lost both its offices in lower Manhattan and its printing plant / DR site in Jersey City.  I think we all understand that this sort of siting is, at best, stupid (I think NYDN has now figured that out too).  So banks that have their primary site in lower Manhattan and backup at, say, MetroTech in Brooklyn, or in or near Jersey City, or some such combo, may finally be learning this lesson.

DR has long been a joke at many firms; I'd hoped that 9/11 would teach the needed lesson, but clearly it has not.  Maybe Sandy will - or not.
