A RISKY BUSINESS

Investment banking has always been about taking risks — whether market, credit or operational — and then monitoring, controlling and modifying those risks in order to obtain the desired risk profile with the maximum return.

Risk management occurs at various levels within a bank:

  • At the organisational level, the Board will be monitoring high-level risk and concentration measures across different asset classes and locations. This information enables them to instruct business heads to modify exposure or to place macro-level hedges and generally decide how risk capital should be deployed. This is also the level at which regulatory risk reporting and capital calculations are performed.
  • At the desk and/or business level, a senior head of trading will be looking at the breakdown of the higher-level risk measures for their individual business unit(s): monitoring the individual spot and forward sensitivities; how these sensitivities change with moves in the market; and how they combine across multiple trading positions into an aggregated risk position. These figures will then be discussed with individual traders, while ensuring that the hard and soft risk limits for the business areas are not breached.
  • Finally, at the trader/desk level the focus is on monitoring positions, risk exposures and hedging equivalent positions so that a trader can either take on, remove or hedge unwanted risk.

In order to reduce reconciliation issues (and increase confidence in the figures), risk information should be consistent across all of these levels.
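To make the roll-up concrete, here is a minimal sketch in Python (all names, numbers and structures are hypothetical) of trader-level sensitivities being aggregated to desk and organisational level from the same underlying records, which is what makes the figures reconcile by construction:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Position:
    trader: str
    desk: str
    asset_class: str
    delta: float  # spot sensitivity in base currency

positions = [
    Position("trader_a", "rates_ldn", "rates", 120_000.0),
    Position("trader_b", "rates_ldn", "rates", -45_000.0),
    Position("trader_c", "fx_tyo", "fx", 80_000.0),
]

def aggregate(positions, key):
    """Sum delta by an arbitrary grouping key (trader, desk, asset class)."""
    totals = defaultdict(float)
    for p in positions:
        totals[key(p)] += p.delta
    return dict(totals)

# The same records feed every level, so the numbers reconcile by construction.
print(aggregate(positions, lambda p: p.trader))       # trader level
print(aggregate(positions, lambda p: p.desk))         # desk/business level
print(aggregate(positions, lambda p: p.asset_class))  # organisational level
```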

To support the trading and management of financial transactions, banks typically take one of two approaches to developing trading and risk management systems. The first is to implement an all-encompassing global system that provides a single approach and storage location for all transactions across the bank, its locations and desks. The other is the more commonly seen ‘stove pipe’ or ‘silo’ approach, where each business area (and possibly each location) develops a tailored solution to address that area’s individual needs.

Both approaches have their problems: the single, centralised system is often found to be unresponsive to individual traders’ needs, requiring significant global coordination both technically and managerially. Although it provides a single system, which can be used by all levels of the organisation to monitor risk exposure, the many, often contradictory, requirements and frequent lack of a clear business sponsor often doom such approaches to failure before they even begin. The stove pipe approach provides the advantage of implementing ‘best of breed’ systems in each business area as a localised and tailored (expensive) solution that is responsive to the individual trader’s needs – funded by the local profit/trading centre. However, these systems ignore the issues that come with globally managing risk across a firm, as well as the advantages of leveraging and sharing technology and functionality already developed elsewhere in the organisation.

It is interesting to note that such a strategy taken to its illogical conclusion of ‘ultimate trader flexibility at the expense of all other controls and support issues’ resulted in the ‘spreadsheet madness’ often witnessed in many investment banks during the 1990s, where traders developed and used individually tailored spreadsheets to structure, trade and risk manage deals.

Although flexible, these unscalable stove pipe solutions result in little reuse of software or methodology elsewhere in the bank, lead to non-standard valuation and risk management approaches, and can prove impossible to control and maintain. For an individual trader, the focus on tailored monitoring of risk exposure is not seen as a problem — they should always be able to trade using the information that helps them to achieve the best risk-adjusted return.

However, at the senior trader or organisational level, this is not an option since risk exposure will simply not net and often will not aggregate if totally different risk measures are used; an outcome that is not acceptable to any regulator. For example, if a trader in London buys a derivative and then a trader in Tokyo sells the exact same derivative, each using their own market data and risk models, it is highly likely that the value and risk exposure will not net as it should (even before we consider the issue of whether there is a master netting agreement with the client). An even worse outcome from this transaction would be if the two locations were to produce significantly different prices so that it was possible for a client to make a profit from simply buying from one branch office and selling back to another — not good news for the bank.
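A toy calculation makes the netting failure visible. In the sketch below (the figures and the flat-rate discounting are purely illustrative), the same cash flow is valued by each office against its own curve, leaving a residual exposure that consistent models and market data would eliminate:

```python
# Toy illustration (hypothetical numbers): the same derivative, valued with
# each office's own curve, leaves a residual that should not exist.

def pv(notional, rate, years):
    """Present value of a single cash flow, discounted at a flat rate."""
    return notional / (1 + rate) ** years

london_long = pv(10_000_000, rate=0.050, years=5)   # London's market data
tokyo_short = -pv(10_000_000, rate=0.048, years=5)  # Tokyo's market data

print(f"Net exposure that should be zero: {london_long + tokyo_short:,.0f}")
# With a single consistent model and curve, this would net exactly to zero.
```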

The previous example highlights the need to ensure some level of consistency in pricing and risk management between each trading system as well as the need to ensure that risks taken at different trading locations or different desks do not compound exposure in an unwanted manner.

Exposure

How many banks woke up after the Enron scandal in 2001 to realise suddenly that not only did they hold a large amount of corporate bonds or credit derivatives with Enron as the underlying name, but they had also made a large number of loans, had counterparty exposure on various OTC derivatives and even held a significant amount of Enron stock, possibly traded out of different locations/offices?

This shows that an initial assessment of exposure that looks at a single asset class, system or geographical location can be misleading. Even more significantly, the regulators have increased their requirements for the completeness, consistency and sophistication of risk monitoring in order to address such situations. Achieving this goal, however, introduces a number of technical and process challenges.
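As a rough illustration of why a single-silo view misleads, the sketch below (the systems, names and amounts are invented) combines feeds from separate silo systems keyed on the underlying name; the firm-wide figure only becomes visible once the feeds are merged:

```python
# Hypothetical feeds from separate silo systems, each reporting exposure to
# the same underlying name ("ENRON") in its own asset class.
feeds = [
    {"system": "bonds_ny",   "name": "ENRON", "exposure": 40_000_000},
    {"system": "loans_hou",  "name": "ENRON", "exposure": 25_000_000},
    {"system": "otc_ldn",    "name": "ENRON", "exposure": 15_000_000},  # counterparty
    {"system": "equity_tyo", "name": "ENRON", "exposure": 5_000_000},
]

def total_name_exposure(feeds, name):
    """Firm-wide exposure to one name, visible only once silos are combined."""
    return sum(f["exposure"] for f in feeds if f["name"] == name)

print(f"Total exposure: {total_name_exposure(feeds, 'ENRON'):,}")
```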

Even where development focuses on centralising front office systems so that they can be queried directly to generate consolidated risk management reports, the transactional nature of these systems means they are optimised for inserting and updating current information rather than for performing complex analysis on snapshot data or historical analysis of positions and risk (such as VaR and stress testing). Any developer who has tried extracting significant amounts of historical information from a front office system for data mining or OLAP analysis tools will have discovered this, as the system grinds to a halt when the query executes (along with the associated screams from the trading desk). This drives the move towards replicating data into other systems or data warehouses, where it is held in a common data representation and is efficiently accessible for risk analysis away from the trading environment.
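The replication step might look something like the following sketch, using SQLite purely for illustration (the schema and table names are assumptions): a dated snapshot is copied out of the transaction-optimised front office store into a read-optimised history table, so that VaR and stress jobs never touch the trading system:

```python
import sqlite3

# The front office store is optimised for inserts/updates; nightly, a dated
# snapshot is copied into a separate history table that analysis tools query.
front_office = sqlite3.connect(":memory:")
front_office.execute("CREATE TABLE positions (book TEXT, instrument TEXT, qty REAL)")
front_office.execute("INSERT INTO positions VALUES ('rates_ldn', 'UST_5Y', 1000)")

warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE position_history (as_of TEXT, book TEXT, instrument TEXT, qty REAL)"
)

def snapshot(src, dst, as_of):
    """Copy the day's positions into the history table for analytic queries."""
    rows = src.execute("SELECT book, instrument, qty FROM positions").fetchall()
    dst.executemany(
        "INSERT INTO position_history VALUES (?, ?, ?, ?)",
        [(as_of, *row) for row in rows],
    )
    dst.commit()

snapshot(front_office, warehouse, "2003-06-30")
print(warehouse.execute("SELECT * FROM position_history").fetchall())
```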

As a result, many banks take the hybrid approach of developing silo-based trading systems that utilise, or send trade information to, a common centralised pricing and risk management engine. This approach also runs into problems when integrating the trading and risk environments. The risk engine often acts as just another data mart where information is duplicated and risk managed separately, away from the trading desk, using yet another set of (possibly inconsistent) models, market data and measures. Predictable reconciliation and consistency issues result.

Where multiple front-end systems are used to feed a data mart or risk management database, combining the information requires significant post-processing effort: cleaning, standardising and re-mapping data into a common format, as sketched below. The common experience is that most of the effort in data collection is spent extracting, cleaning and uploading information, which prevents real-time reporting.
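A minimal sketch of that re-mapping step follows (the field names and conventions are invented for illustration): each source system gets a small adapter that translates its records into one canonical shape before loading:

```python
# Each front-end system exports records in its own shape; per-source adapters
# standardise them before they are loaded into the risk database.

def from_rates_system(rec):
    return {"book": rec["BookId"], "notional": rec["Nominal"],
            "ccy": rec["Ccy"].upper()}

def from_fx_system(rec):
    # This system reports notional in thousands and lower-case currencies.
    return {"book": rec["book_code"], "notional": rec["amount_k"] * 1_000,
            "ccy": rec["currency"].upper()}

ADAPTERS = {"rates": from_rates_system, "fx": from_fx_system}

def normalise(source, records):
    """Map raw records from a named source into the canonical format."""
    return [ADAPTERS[source](r) for r in records]

print(normalise("fx", [{"book_code": "FX1", "amount_k": 500, "currency": "usd"}]))
```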

Fortunately, with the arrival of middleware and improved system integration tools, it has become easier to extract, verify and correct information prior to loading it into a data mart. However, the issue of ensuring correct and consistent data remains, especially where data is processed or modified outside of the originating front office systems. Even where the pricing and spot sensitivity calculations used by traders appear correct, the complexity and volume of risk information required for calculating more advanced risk metrics and P&L can highlight further issues that need to be addressed and corrected.
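The verify-and-correct step could be expressed as a handful of declarative checks, as in this sketch (the rules themselves are illustrative): records failing any check are routed back for correction rather than loaded inconsistently:

```python
# Records that fail any check are rejected for correction instead of being
# loaded into the data mart in an inconsistent state.

VALID_CCYS = {"USD", "EUR", "GBP", "JPY"}

CHECKS = [
    ("book present",      lambda r: bool(r.get("book"))),
    ("positive notional", lambda r: r.get("notional", 0) > 0),
    ("known currency",    lambda r: r.get("ccy") in VALID_CCYS),
]

def validate(record):
    """Return the names of failed rules; an empty list means loadable."""
    return [name for name, rule in CHECKS if not rule(record)]

good = {"book": "FX1", "notional": 500_000, "ccy": "USD"}
bad  = {"book": "",    "notional": -10,     "ccy": "XYZ"}
print(validate(good))  # []
print(validate(bad))   # ['book present', 'positive notional', 'known currency']
```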

This makes the problem of aggregating risk information into a single location even more complex than just mapping between data models. It also means that corporate-level risk management is often performed only once a day, based on market closing positions. Although this is acceptable for most regulatory risk reporting, it means that intra-day risk positions essentially go unmonitored. This can result in traders being able to take on significant intra-day risk without it being noticed by the corporate risk management group. In fact, if a trader is clever enough and passes positions around the globe into local books, there is a chance they may never be picked up in the risk reporting process. What is clearly required is an approach where intra-day risks can be monitored in real time across all asset classes (as well as ensuring that data is correctly entered). Real-time information monitoring can then be used to feed a data mart for more complex and historical risk analysis and reporting.
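A sketch of such intra-day monitoring follows (the limits, desks and trades are hypothetical): a running aggregator updates exposure as each trade arrives, so a breach surfaces immediately instead of in the next end-of-day report:

```python
from collections import defaultdict

class IntradayMonitor:
    """Updates running exposure per desk as each trade arrives and flags
    limit breaches immediately, rather than at the end-of-day run."""

    def __init__(self, limits):
        self.limits = limits                # desk -> hard exposure limit
        self.exposure = defaultdict(float)  # running intra-day exposure

    def on_trade(self, desk, delta):
        self.exposure[desk] += delta
        if abs(self.exposure[desk]) > self.limits.get(desk, float("inf")):
            print(f"ALERT: {desk} intra-day exposure "
                  f"{self.exposure[desk]:,.0f} breaches its limit")

monitor = IntradayMonitor(limits={"rates_ldn": 1_000_000})
monitor.on_trade("rates_ldn", 700_000)  # within limit, no alert
monitor.on_trade("rates_ldn", 600_000)  # cumulative 1.3m, alert fires
```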

The component approach

One way around these challenges is a component approach. This provides an environment in which organisations can quickly build tailored risk management applications. By permitting frequent, incremental delivery of new functionality using common pre-written and pre-tested components, operational risk is also significantly reduced. As a result, re-engineering a risk management environment to monitor global credit and market risk does not dramatically increase the operational risk within the organisation.

Fundamentally, risk management is about monitoring day-to-day risks, stress testing for improbable events and having a flexible environment and tools to spot new (as yet undefined or unnoticed) risks across the market, credit and operational risk environment. To achieve this within a dynamic, innovative environment such as the financial markets, banks require a flexible infrastructure whereby existing information and functionality can easily be extended to monitor new risks arising externally from changes in the market and internally from changes in the organisation’s business model. The extensible nature of risk management applications written using component technology makes this achievable.
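As an illustration of that extensibility (the interfaces here are invented, not a reference to any particular product), the sketch below registers risk measures as pluggable components: the engine knows only the component contract, so monitoring a new risk means registering one more component rather than re-engineering the engine:

```python
from typing import Callable, Dict, List

# The engine depends only on the component contract (a callable over a list
# of sensitivities), so new risk measures are registrations, not rewrites.
RiskMeasure = Callable[[List[float]], float]
REGISTRY: Dict[str, RiskMeasure] = {}

def register(name: str):
    """Decorator that plugs a new measure component into the engine."""
    def wrap(fn: RiskMeasure) -> RiskMeasure:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("net_delta")
def net_delta(deltas: List[float]) -> float:
    return sum(deltas)

@register("gross_delta")
def gross_delta(deltas: List[float]) -> float:
    return sum(abs(d) for d in deltas)

def run_engine(deltas: List[float]) -> Dict[str, float]:
    """Apply every registered measure; the engine itself never changes."""
    return {name: fn(deltas) for name, fn in REGISTRY.items()}

print(run_engine([120_000.0, -45_000.0, 80_000.0]))
```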

 

Norman Sachs
