Ajit Tripathi - Accenture


BCBS 239 from an implementation perspective

14 February 2014

The principles laid out in BCBS 239 set a high standard for risk data aggregation and reporting as well as a rather challenging timeline for implementation. At a minimum, BCBS 239 raises the standard for risk data quality to the level of the prevailing standard for P&L data quality.

However, the real implications run much deeper. Any firm that aims to comply with the letter and the spirit of the regulation will need to examine how it is organized and what execution capabilities it has really managed to build over the years. This blog attempts to connect the strategic objectives of BCBS 239 with ground-level operational insights from implementing data programs. For more insights, see John Barclay's blog here: http://www.finextra.com/blogs/fullblog.aspx?blogid=8846

1. Strategic Portfolio Planning: Several large banks have built themselves as an agglomeration of opportunistic revenue streams from different asset classes. As a result, there is very little integration in the front office. In several cases, each desk has its own trading, booking, pricing and reporting systems. Further, in order to rush through new product approvals, firms have often built systems that mirror and copy data from middle-office risk aggregation systems. These back office systems often receive risk through more than one data channel for the same asset class, and in some cases, even for the same book. As a result, when risk managers need to know what the risk really is, they sometimes have to go back to the front office systems and aggregate in Access or Excel.

BCBS tries to redress this situation by making board-level review of risk data aggregation a requirement for new product approval and other strategic business decisions such as mergers, spinoffs and acquisitions. That means that, in spite of the highly uncertain nature of financial markets, investment banks now need to engage in more thorough portfolio planning, or build nimbler infrastructure capabilities that can respond quickly to investment and disinvestment decisions; neither of which is particularly easy.

2. Start Integration with the Front Office: This is where OTC platforms similar to Goldman Sachs' SecDB, JP Morgan's Athena and BofA Merrill Lynch's Quartz could be massive assets to banks that intend to continue to compete in some of the high-margin investment lines which could come back to dominate the scene when the global economy recovers. While many firms struggle to build large teams of business analysts to document feeds in Microsoft Word, other firms will have a consistent set of working interfaces for risk data creation and risk data sourcing that also make it much easier to provide traceability of data from the point of creation to the point of aggregation and reporting.
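To make the traceability point concrete, here is a minimal sketch of what a consistent risk data interface might look like, with lineage recorded at every aggregation hop. All names here (RiskRecord, the system identifiers) are hypothetical illustrations, not any bank's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskRecord:
    book: str
    measure: str          # e.g. "delta", "pv"
    value: float
    source_system: str    # the point of creation, never overwritten
    lineage: tuple = ()   # systems the record has passed through since

def forward(record: RiskRecord, via: str) -> RiskRecord:
    """Pass a record through an aggregation hop, appending to its lineage."""
    return RiskRecord(record.book, record.measure, record.value,
                      record.source_system, record.lineage + (via,))

# A record created in the front office and routed towards reporting:
r = RiskRecord("RATES_DESK_1", "delta", 1.25e6, "front_office_pricer")
r = forward(r, "risk_aggregator")
r = forward(r, "regulatory_reporting")
print(r.lineage)  # ('risk_aggregator', 'regulatory_reporting')
```

With an interface of this shape enforced at every hop, the path from point of creation to point of reporting is a property of the data itself rather than something reconstructed from feed documentation after the fact.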

3. Over-invest in Control Systems: While hard historical data would be useful here, it can be argued that the industry has historically underinvested in control systems relative to revenue systems. That seemed acceptable until this underinvestment led to considerable, frequent panic during the financial crisis, as risk operations teams across the industry struggled to extract position and exposure data from front and middle office systems with the help of often equally exhausted IT staff. Initial regulation in this area was piecemeal and focused on producing a progressively sophisticated risk number.

Then SocGen and UBS lost money on positions they could not see, and it became very apparent that the completeness and quality of data in the back office was much more important than the complexity of the mathematics underlying risk calculations in the front office. If you didn't know a position existed, it didn't matter how sophisticated your risk calculations on that position were. In recent years, IT investment has shifted towards integrating what banks already have rather than building the next stochastic differential equation into the quant library. BCBS 239 will further accentuate this shift.

4. Bring Your CIO to the Board: Although there is a growing realization among banks that IT is a strategic capability and not merely an operational one, not many CIOs currently have an executive seat on the board. By raising the potential technology cost of business decisions, as well as the business impact of technology decisions, to unprecedented levels, BCBS 239 will eventually force senior management to bring CIOs onto the board.

5. Data Driven Change Teams: Like Google and Facebook, which refuse to make decisions without sourcing and processing the right data, both change and IT professionals need to invest in modern data science training so they can efficiently analyze large amounts of complex data before requirements are defined or implemented. At the very least, requirements need to be validated against current or target data before large investments are made in implementation. That means hiring and training business analysts and developers who are adept at analyzing data. It also means investing in tools and technology to introspect existing data and generate the right level of metadata - including shared taxonomies, semantic and syntactic validation rules and business rules - and adequate tools for data custodians and data stewards to implement the desired level of data governance.
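As an illustration of what "validating requirements against data" can mean in practice, the sketch below profiles a sample of trade records against a handful of syntactic and semantic rules before anything is built. The rules, field names and sample records are all hypothetical:

```python
import re

# Hypothetical validation rules, one per field:
RULES = {
    "trade_id": lambda v: isinstance(v, str) and bool(re.fullmatch(r"T\d{8}", v)),  # syntactic
    "notional": lambda v: isinstance(v, (int, float)) and v > 0,                    # semantic
    "currency": lambda v: v in {"USD", "EUR", "GBP", "JPY"},                        # taxonomy
}

def profile(records):
    """Count rule failures per field, to test assumptions before building."""
    failures = {name: 0 for name in RULES}
    for rec in records:
        for name, rule in RULES.items():
            if not rule(rec.get(name)):
                failures[name] += 1
    return failures

sample = [
    {"trade_id": "T00000001", "notional": 5e6, "currency": "USD"},
    {"trade_id": "BADID", "notional": -1, "currency": "usd"},
]
print(profile(sample))  # {'trade_id': 1, 'notional': 1, 'currency': 1}
```

A profile like this, run over real source data, is exactly the kind of evidence that should confirm or refute a written requirement before a large implementation budget is committed.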

6. Embed SMEs in IT departments: Many firms have created a somewhat strict separation between teams with functional expertise and teams with technical expertise. While the former remain primarily onshore, the latter are being offshored in a hurry. While the colocation of risk managers and technologists in the past led to a lot of reactive, tactical systems being built to support risk management on a desk-by-desk basis, the reactive offshoring of technology capability, without aligning it with the offshoring of functional expertise, is becoming a major bottleneck for decommissioning tactical systems and replacing them with systems that would provide timeliness, completeness and accuracy.

If anything, risk departments are being forced to invest risk budgets in tactical systems through onshore end-user-computing functions while increasingly off-shored IT functions wait for finalized business requirements and aspire to build that increasingly elusive target state envisioned in the rather desirable aims of BCBS 239.

Some of this separation is driven by well-intentioned regulatory oversight of investment bank IT functions, which emphasized predictability along the lines of the Carnegie Mellon Capability Maturity Model. Unfortunately, predictability in IT sits poorly with unpredictability in the business. Too much time is lost in translation between the end users, whose needs keep changing, and the implementers, who are often found burning the midnight oil to build faithfully to a spec that was valid three months ago. The way around this situation is to embed in dev teams people who are or have been end users of the data, such as risk production staff or expert change SMEs.

7. Rethink Build vs Buy: Driven by the belief that proprietary technology provides a source of competitive advantage, several banks have persistently chosen to build their own technology rather than integrate vendor platforms. This was true in an environment where higher product complexity meant higher margins, the cost of IT was small relative to gross margins and systems had to be flexible to respond to new product approvals within weeks. Some banks even built their own databases or programming languages and more often than not, their own BI tools. This is no longer true in an environment where product complexity is penalized with higher capital charges. In recent years, the challenges of implementing Basel III and sustained offshoring pressures on IT departments have led to a greater awareness that IT portfolios need to be rebalanced towards a more judicious mix of build and buy.

8. Build a Deep-Rooted CTO Organization: Historically, IT departments have operated in functional silos without a deep firm-wide CTO organization to assist the CIO. As a result, firms have multiple tools even for the same use case, often from different vendors. No wonder, then, that these tools interoperate poorly and create further data quality issues by requiring mind-numbing effort simply to translate data. That also undermines the non-functional characteristics that risk managers, and now regulators, demand: the timeliness, completeness and accuracy enshrined in BCBS 239. Often, when existing solutions fail to meet these non-functional characteristics due to process limitations and integration challenges, the obvious response is to shortcut the analysis and change the technology, rather than improve the extent of integration. New, more expensive technology is brought in reactively, sometimes even on top of the existing technology that is blamed for the apparent failure.

A cohesive, centralized review of technology decisions needs to be institutionalized by embedding in project teams technical specialists who represent the need for a consistent data architecture, which, in practice, implies a consistent technical architecture as well.

9. Process-First Thinking: Investing in technology change is easy - as easy as spending money on your next iPhone. Unfortunately, a better iPhone makes poor communication worse by making it easier to communicate poorly. Philosophically, if you think of risk data as a message between revenue and control functions, BCBS 239 is really about communicating well across the organization.

Investing in business process change is hard, as business process change affects decisions relating to the jobs, roles and careers of people within and across the organization. Unfortunately, badly defined processes, even when combined with the best technology, still generate bad data that's late and incomplete. It is true that entering data into Excel rather than a validated web form leads to more frequent errors, but if traders use two completely separate systems to book trades and hedges, which are then priced with different versions of the quant library and which then take multiple different process and data paths through risk, finance and operations working in sound-proof silos, it hardly matters how good the systems involved are in isolation.

10. Output Driven Data Architecture

This heading deserves an entirely separate, much more technical blog. However, at a high level, the first step towards implementing BCBS 239 may be to create a comprehensive inventory of existing reports that is maintained and reviewed regularly, even weekly. This analysis has to start with the outputs, tracing back to the inputs.

Data architecture efforts frequently start with analyzing the unbounded set of inputs. What do our book hierarchies look like? Which sensitivities do we get from the commodities desk? Which feeds should we keep? Which operational databases should we merge into a data warehouse? Which feeds should we use for what? What translations happen to this data on the way to the risk number we produce? Can we use Informatica to generate the metadata before we clean it?

Most banks have hundreds of unused risk reports being emailed around the organization without a clear idea of who is actually using each report and for what. These reports in turn require redundant data, functionality and even systems and people to continue generating them. Several of these systems have their own copies of master data, their own copies of slightly (sometimes inadvertently) modified risk data, their own databases, reconciliation scripts, folders, hardware and run-the-bank support that was probably requested by a trader four years ago, who may now have left the bank.
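Tracing outputs back to inputs can start with something as simple as a dependency map walked in reverse, from each report down to the feeds it ultimately relies on. The report and feed names below are purely illustrative:

```python
# Hypothetical dependency map: each report or system lists its direct inputs.
DEPENDS_ON = {
    "daily_var_report": ["risk_aggregator"],
    "desk_pnl_report":  ["risk_aggregator", "finance_gl"],
    "risk_aggregator":  ["rates_feed", "credit_feed"],
    "finance_gl":       ["rates_feed"],
}

def trace_inputs(output, depends_on=DEPENDS_ON):
    """Walk back from a report to every upstream system and feed it relies on."""
    seen, stack = set(), [output]
    while stack:
        node = stack.pop()
        for upstream in depends_on.get(node, []):
            if upstream not in seen:
                seen.add(upstream)
                stack.append(upstream)
    return seen

print(sorted(trace_inputs("desk_pnl_report")))
# ['credit_feed', 'finance_gl', 'rates_feed', 'risk_aggregator']
```

Inverting the same map answers the decommissioning question: any feed or system that no surviving report traces back to is a candidate for retirement.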

Fortunately, with BCBS 239 emphasizing the need to periodically assess the purpose and set the frequency of each report, these Augean stables will now need to be cleaned up. The regulator will additionally demand that some of these reports be turned around quickly, as a stress test of risk data aggregation capabilities, further forcing the issue.

Summary

In summary, any firm that intends to comply with the spirit of the BCBS 239 regulation, rather than invest vast amounts in independent justification of the extent of compliance, needs to relentlessly simplify its structure and processes, which in turn will lead to much greater efficiency and return on capital. BCBS 239 presents a tremendous opportunity to force the strategic change that the industry has known it needs, but hasn't had the will to implement due to the need to respond to a hail of piecemeal regulation. BCBS 239 potentially represents a comprehensive overhaul of the functional architecture of G-SIBs, which is necessary to institutionalize a culture of better risk management in the industry.

On the surface, BCBS 239 is about data and technology rather than structure and process, only to the extent that regular exercise is about looking good rather than being fit.


Comments: (1)

Dennis Slattery - EDMworks - London | 10 May, 2014, 11:37

This blog is one of the most insightful pieces of information available on BCBS 239.  Great work.
