
15 April 2014

Financial Markets Data and Analytics. Everywhere You Need Them.

Very pleased to announce that Xenomorph will be hosting an event, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square New York offices on May 9th.

This breakfast briefing includes Sang Lee of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be introducing the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. More background and updates on MarketPlace in coming weeks.

In the meantime, please take a look at the event and register if you can come along - it would be great to see you there.

31 March 2014

Innovations in Liquidity Risk Management - PRMIA

PRMIA put on an event at MSCI on Wednesday, called "Innovations in Liquidity Risk Management".

 


Melissa Sexton of Morgan Stanley introduced the agenda, saying that the evening would focus on three aspects of liquidity risk management:

  • methodology
  • industry practice
  • regulation

LiquidityMetrics by MSCI - Carlo Acerbi of MSCI then took over with his presentation on "LiquidityMetrics". Carlo said that he was pleased to be involved with MSCI (and RiskMetrics, acquired by MSCI) in that it had helped to establish and define standards for risk management that were used across the industry. He said that liquidity risk management was difficult because of:

  • Clarity of Definition - Carlo suggested that if he asked the audience to define liquidity risk he would receive 70 differing definitions. Put another way, he suggested that liquidity risk was "a strange animal with many faces".
  • Data Availability - Carlo said that there were aspects of the market that were unobservable and hence data was scarce/non-existent, and as such this was a limit on the validity of the models that could be applied to liquidity risk.

Carlo went on to clarify that liquidity risk was different depending upon the organization type/context being considered, with banks obviously focusing on funding. He said that LiquidityMetrics was focused on asset liquidity risk, and as such was more applicable to the needs of asset managers and hedge funds given recent regulation such as UCITS/AIFMD/FormPF. The methodology is aimed at bringing traditional equity market impact models out from the trading floor, into risk management and across other asset classes.

Liquidity Surfaces - LiquidityMetrics measures the expected price impact for an order of a given size, and as such has dimensions in:

  • order size
  • liquidity time horizon
  • transaction costs

The representation shown by Carlo was of a "liquidity surface" with x dimension of order size (both bid and ask around 0), y dimension of time horizon for liquidation and z (vertical) dimension of transaction cost. The surface shown had a U-shaped cross section around zero order size, at which the transaction cost was half the bid-ask spread (this link illustrates my attempt at verbal visualization). The U-shaped cross section indicates "Market Impact", its shape over time "Market Elasticity" and the limits of what is observable "Market Depth".
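
For illustration only, below is a minimal Python sketch of what evaluating such a liquidity surface might look like, assuming a toy square-root impact term that shrinks as more time is allowed for liquidation - this is my own simplification to make the three dimensions concrete, not MSCI's LiquidityMetrics methodology, and every parameter name here is hypothetical.

import numpy as np

def transaction_cost(order_size, horizon_days, half_spread=0.0005,
                     impact_coeff=0.1, adv=1_000_000, relaxation_days=5.0):
    """Toy liquidity surface: expected cost (as a fraction of price) of
    executing a signed order_size over horizon_days. At zero order size
    the cost is just half the bid-ask spread; the impact term grows with
    size and shrinks as the liquidation horizon lengthens."""
    participation = abs(order_size) / (adv * max(horizon_days, 1e-6))
    market_impact = impact_coeff * np.sqrt(participation)
    # longer horizons give the order book time to regenerate ("elasticity")
    decay = np.exp(-horizon_days / relaxation_days)
    return half_spread + market_impact * decay

# evaluate the surface on a grid of (order size, liquidation horizon)
sizes = np.linspace(-500_000, 500_000, 11)   # bid and ask around 0
horizons = [1, 2, 5, 10, 20]                 # days
surface = np.array([[transaction_cost(q, t) for q in sizes] for t in horizons])
print(surface.round(5))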

Carlo then moved to consider a portfolio of instruments, and how obligations on an investment fund (a portfolio) can be translated into the estimated transaction costs of meeting those obligations, so as to quantify the hidden costs of redemption in a fund. He mentioned that LiquidityMetrics could be used to quantify the costs of regulations such as UCITS/AIFMD/FormPF. There was some audience questioning about portfolios of foreign assets, such as holding Russian bonds (perhaps currently topical for an audience member?). Carlo said that you would use the liquidity surfaces for both the bond itself and the FX transaction (and in FX, there is much data available). He was however keen to emphasize that LiquidityMetrics was not intended to be used to predict "regime change" (i.e. it is concerned with transaction costs under normal market conditions).

Model Calibration - In terms of model calibration, Carlo said that the established equity market impact models (see this link for some background for instance) have observable market data to work with. In equity markets, traditionally there was a "lit" central trading venue (i.e. an exchange) with a star network of participants fanning out from it. In OTC markets such as bonds, there is no star network but rather many-to-many linkages established between all market participants, where each participant may have a network of connections of different size. As such there has not been enough data around to calibrate traditional market impact models for OTC markets. As a result, Carlo said that MSCI had implemented some simple models with a relatively small number of parameters.

Two characteristics of standard market impact models are:

  1. Permanent Effects - this is where the fair price is impacted by a large order and the order book is dragged along to follow this.
  2. Temporary Effects - this is where the order book is emptied but then liquidity regenerates.

Carlo said that the effects were obviously related to the behavioural aspects of market participants. He said that the bright side for bonds (and OTC markets) was that, given the trades are private, there is no public information and price movements are often constrained by theoretical pricing; therefore permanent effects could be ignored and the fair price is insensitive to trading (again under "normal" market conditions). Carlo then moved on to talk about some of the research his team was doing looking at the shape of the order book and the time needed to regenerate it. He talked of "Perfectly Elastic" markets that digest orders immediately and "Perfectly Plastic" markets that never regenerate, and how "Relaxation Time" measures in days how long the market takes to regenerate the order book.
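
Purely as a hedged illustration of the two effects described above (not Carlo's actual model), the short Python sketch below builds a price path in which a large trade causes a temporary impact that decays with a given "relaxation time", plus an optional permanent shift of the fair price; for OTC bonds under normal conditions the permanent component would simply be set to zero.

import numpy as np

def price_path(days, trade_day, temp_impact, perm_impact, relaxation_days, p0=100.0):
    """Toy decomposition of market impact into permanent and temporary parts.
    After the trade, the temporary part decays exponentially (the relaxation
    time, in days) while the permanent part shifts the fair price for good."""
    path = np.full(days, p0)
    for d in range(trade_day, days):
        elapsed = d - trade_day
        temporary = temp_impact * np.exp(-elapsed / relaxation_days)
        path[d] = p0 + perm_impact + temporary
    return path

# equity-like market: both effects present; bond-like: permanent effect set to 0
equity_like = price_path(days=30, trade_day=5, temp_impact=0.50,
                         perm_impact=0.20, relaxation_days=3.0)
bond_like = price_path(days=30, trade_day=5, temp_impact=0.50,
                       perm_impact=0.00, relaxation_days=10.0)
print(equity_like[:12].round(3))
print(bond_like[:12].round(3))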


Liquidity Observatory - Carlo described how the data was gathered from market participants on a monthly basis using a spreadsheet to categorize the bond/asset class type, again using simple parameters from active "expert" traders. Take a look at this link and sign up if this is you. (This sounded to me a lot like another "market consensus" data gathering exercise, which are proving increasingly popular - one of the first I had heard of many years back being Totem. We are not quite fully ready for "crowdsourcing" in financial markets maybe, but more people are seeing sense in sharing data.)

Panel Debate - Ron Papenek of MSCI was moderator of the panel, and asked Karen Cassidy of Morgan Stanley about her experiences in liquidity risk management.

Liquidity Risk Management at Banks - Karen started by saying that in liquidity management at Morgan Stanley they look at:

  • Funding
  • Operating Capital
  • Client Behaviour

Since 2008, Karen said that liquidity management had become a lot more rigorous and formalized, being rule based and using a categorisation of assets held from highly liquid to highly illiquid. She said that Morgan Stanley undertake stress testing by market and also by idiosyncratic risk over time frames of 1 month and 1 year. As part of this they are assessing the minimum operating liquidity needed based on working capital needs. 

Karen added that Morgan Stanley are expending a lot of effort currently on data collection and modelling given that their data is specific to a retail broker-dealer unit, unlike many other firms. They are also looking at metrics around financial advisors, and how many clients follow the financial advisor when he or she decides to switch firms.

Business or Regulation Driving Liquidity Risk Management - Ron asked Karen what the drivers of their processes at Morgan Stanley were. Karen said that in 2008 the focus was on fundability of assets, saying that the Fed was monitoring this on a daily basis. She made the side comment that this monitoring was not unusual since "Regulators live with us anyway". Karen said that it was the responsibility of firms to come up with the controls and best practice needed to manage liquidity risk, and that is what Morgan Stanley do anyway.

Karen added that in her view the industry was over-funding and funding too long in response to regulation, and that funding would be at lower but still pragmatic levels in the absence of regulatory pressure. Like many in the industry, Karen thought the regulation had swung too far in response to the 2008 crisis and would eventually swing back to more normal levels. 

Carlo added that he had written an unintentionally prescient academic paper on liquidity management in 2008 just prior to the crisis hitting, and he thought the regulators certainly arrived "after" the crisis rather than anticipating it in any way. He thought that the banks have anticipated the regulators very well, with measures such as LCR and NSFR already in place.

In contrast, Carlo said that the regulators were lost in dealing with liquidity risk management for asset managers and hedge funds, with regulation such as UCITS being very vague on this topic and regulators themselves seeking guidance from the industry. He recounted a meeting he had with BaFin in 2009 where he told them that certain of their regulations made no sense, and he said they acknowledged this and said the asset management industry needed to tell them what to implement (sounds like the German regulator is using the same card as the UK regulators in keeping regulations vague when they are uncertain, waiting for regulated firms to implement them to see what the regulation really becomes...).

What Have We Learnt Since 2008 - Karen said that back in 2008 liquidity was not managed to term, funding basis was not rigorous and relied heavily on unsecured debt. She said that since then Morgan Stanley had been actively involved in shaping the requirements of better liquidity risk management with more rigorous analysis of counterparties and funding capacity. Karen said that stronger governance was a foundation for the creation of better policy and process. She said that regulators were receptive to new ideas and had been working with them closely.

What will be the effect of CCPs on OTC markets? Carlo said that when executing a large order, you have the choice between executing 1) multiple small orders with multiple counterparties or 2) a single large block order with one counterparty. In this regard, the equity and bond markets are very different. In lit equity venues, the best approach is 1), but in the bond markets approach 2) is taken since the trade information is not transparent to the market.

Obviously equity markets have become more fragmented, and this has resulted in improved market quality since it is harder to get all market information and hence the market is less resonant to big events/orders. Carlo then asked whether the increased transparency proposed for OTC markets with CCPs etc. will improve them. His answer was that this was likely to improve the counterparty risk inherent in the market but, due to increased transparency, is likely to have a negative effect on transaction costs (I guess another example of the law of unintended consequences for the regulators).

Audience Questions - there then followed some audience questions:

LiquidityMetrics extrapolation - one audience member asked about transaction cost extrapolation in Carlo's modelling. Carlo said that MSCI do not extrapolate and the liquidity surface terminates where the market terminates its liquidity. There was some extrapolation used along the time dimension however, particularly in relation to the time-relaxation parameter.

LiquidityMetrics "Cross-Impact" - looking at applying LiquidityMetrics to a portfolio, one audience member wondering if an order for one asset distorted the liquidity surface for other potentially related assets. Carlo said this was a very interesting area with little research done so far. He said that this "cross-impact" had not been detected in equity markets but that they were looking at it in other markets such as fixed income where effective two assets might be proxies for duration related trading. Carlo put forward a simple model of where the two assets are analogous to two species of animal feeding from the same source of food.

Long and short position liquidity modelling - one audience member asked Carlo what the effects would be of being long or short, noting that in a crisis you would prefer to be short (maybe obviously?) given the sell-off by those with long positions. Carlo clarified that being "short" was not merely taking the negative number on a liquidity surface for a particular asset but rather a "short" is a borrowing position with an obligation to deliver a security at some defined point, and as such is a different asset with its own liquidity surface.

Changing markets, changing participants - the final question of the evening was from one member of the audience who asked if the general move out of fixed income trading by the banks over recent years was visible in Carlo's data. Carlo said that MSCI only have around two years of data so far and as such this was not yet visible, but his team are looking for effects like this amongst others. He added that the August 2011 weak banks/weak sovereigns episode in Europe was visible, with signals present in the data.


Good food and good (really good I thought) wine put on by MSCI at the event reception. Great view of Manhattan from the 48th floor of World Trade Centre 7 too.


 

 

 

 

 

24 March 2014

#DMSLondon - The Hobgoblin of Little Minds: Risk and Regulation as Drivers

The second panel of the day was "Regulation and Risk as Data Management Drivers" - you can find the A-Team's write up here. Some of my thoughts/notes can be found below:

  • Ian Webster of Axioma, responding to a question about whether consistency was the Holy Grail of data management, said that there isn't a consistent view possible for data used in risk and regulation - there are many regulations with many different requirements, and so unnecessary data consistency is "the hobgoblin of little minds", delaying progress towards achieving goals in data management.
  • James of Lombard Risk suggested that firms should seek competitive advantage from regulatory compliance rather than just compliance alone - seeking the carrot and not just avoiding the stick.
  • Ian said he thought too many firms dealt with regulatory compliance in a tactical manner and asked if regulation and risk were truly related? He suggested that risk levels might remain unchanged even if regulation demanded a great deal more reporting.
  • Marcelle von Wendland said she thought that regulation added cost only, and that firms must focus on risk management and margin.
  • James said that "regulatory risk" was a category of risk all in itself alongside its mainstream comtempories.
  • Ian added that risk and finance think about risk differently and this didn't help in promoting consistency of ideas in discussions about risk management.
  • James said that the legacy of systems in financial markets was a hindrance in complying with new regulation and mentioned the example of the relatively young energy industry where STP was much easier to implement.
  • Laurent of Bloomberg said that young, emerging markets like energy were greenfield and as such easier to implement systems but that they did not have any experience or culture around data governance.
  • Marcelle said that the G20 initiatives around trade reporting at least promoted some consistency and allowed issues to be identified at last.
  • Ian said in response that he was unconvinced about politically driven regulation, questioning its effectiveness and motivations.
  • Ian raised the issues of the assumptions behind VaR and said that the current stress tests were overdone.
  • Marcelle agreed that a single number for VaR or some other measure meant that other useful information has potentially been ignored/thrown away.
  • General consensus across the panel that fines were not enough and that restricting business activities might be a more effective stick for the regulators.
  • James referenced the risk data aggregation paper from the Basel Committee and suggested that data should be captured once, cleaned once and used many times.
  • Ian disagreed with James in that he thought clean once, capture once and use many times was not practically possible and this goal was one of the main causes of failure within the data management industry over the past 10 years. 
  • The panel ended with Ian saying that we should not just solve for the last crisis; the underlying causes of crises were similar and mostly around asset price bubbles, so in order to reduce risk in the system 1) let's make data more transparent and 2) do what we can to avoid bubbles with better indices and risk measures.


 

18 March 2014

#DMSLondon - Creating a Data Map of the Financial Enterprise

Rupert Brown of UBS did the keynote at this Spring's A-Team Data Management Summit (DMS). Rupert's talk was about understanding what data there is within a financial institution, where it comes from and where it goes to. Rupert started by asking the question "Where are we?", illustrating it with a map of systems and data flows for an institution - to my recollection I think he said it stretched to 7 metres in length and did not look that accessible or easy to understand. He asked what dimensions it should have as a "map" of data, wondering what dimensions are analogous to latitude, longitude, altitude and orientation? Maybe things like function, product, process, accounting or legal entity as potential candidates.


Briefly Rupert took a bit of a detour into his love of trains with a little history on the London Underground Map. He started by mentioning the role of George Dow, who illustrated maps for train routes as a single line, showing just dependency and lineage (what stations are next etc) and ignoring geography and distance. This was built upon by another gentleman, Harry Beck, who took these ideas a stage further with the early ancestors of the current Underground map, showing multiple routes and interweaving all the lines together into a map that additionally was topologically sufficient (indicating broad direction - NESW).

Continuing on with this analogy of the Underground to maps of data and data management, Rupert then mentioned Frank Pick, who created the Underground brand. Through creating such an identifiable brand, Frank effectively got people to believe in and refer to the map, and people in data governance need, and could benefit from, taking a similar approach to branding within data management. I guess it is easy to take maps we see every day for granted, and particularly some of the thought that went into them - maybe ideas that initially were not intuitive (or at least not directly representative of physical reality) but that greatly improved understanding and comprehension. Put another way, representing reality one for one does not necessarily get you to something that is easy to understand (sounds like a "model" to me).

Rupert then described some of his efforts using OpenStreetMap concepts to map data, making use of nodes, ways and areas. Apparently he had implemented this using a NoSQL database (MarkLogic) for performance reasons (it doesn't sound like a really "big data" sized problem, with several hundred apps and several thousand data transports, but nevertheless he said it was needed, maybe as a result of its graph-like nature?). He said that crowdsourcing was used to refine the data, with a wiki for annotations. He said he was interested in the bitemporality of data, i.e. how the map changes over time. He advised that every application should also be thought of as its own "databus" in addition to any de facto databuses that might be present in the architecture.
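
As a rough sketch of how the OpenStreetMap idea (nodes, ways and areas applied to systems rather than streets) could be expressed in code, the toy Python below models applications as nodes, data flows as ways between them, and business areas as groupings, with a simple upstream lineage walk. The class and field names are my own illustration, not Rupert's UBS implementation or its MarkLogic schema.

from dataclasses import dataclass, field

@dataclass
class Node:            # an application or data store
    name: str
    area: str          # grouping, e.g. "Risk", "Finance", "Front Office"

@dataclass
class Way:             # a data flow between two applications
    source: str
    target: str
    payload: str       # what data travels along this flow

@dataclass
class DataMap:
    nodes: dict = field(default_factory=dict)
    ways: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.name] = node

    def add_way(self, way):
        self.ways.append(way)

    def lineage(self, target):
        """Walk backwards through the ways to find every upstream source."""
        upstream, frontier = set(), {target}
        while frontier:
            current = frontier.pop()
            for w in self.ways:
                if w.target == current and w.source not in upstream:
                    upstream.add(w.source)
                    frontier.add(w.source)
        return upstream

# tiny example: trade data flowing from the front office into risk reporting
m = DataMap()
for n in [Node("OrderMgmt", "Front Office"), Node("TradeStore", "Operations"),
          Node("RiskEngine", "Risk")]:
    m.add_node(n)
m.add_way(Way("OrderMgmt", "TradeStore", "trades"))
m.add_way(Way("TradeStore", "RiskEngine", "positions"))
print(m.lineage("RiskEngine"))   # {'TradeStore', 'OrderMgmt'}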

In summary the talk was interesting, but it was clear from what Rupert showed that we have a long way to go in representing clearly and easily where data came from, where it goes to and how it is used. I think Rupert acknowledges this and has some academic partnerships trying to develop better ways of representing and visualizing data. Certainly data lineage and audit trail on everything is a hot topic for many of our clients currently, and something that deserves more attention. You can download Rupert's presentation here and the A-Team's take on his talk can be found here.

12 March 2014

S&P Capital IQ Risk Event #2 - Enterprise or Risk Data Strategy?

Christian Nilsson of S&P CIQ followed up Richard Burtsal's talk with a presentation on data management for risk, containing many interesting questions for those considering data for risk management needs. Christian started his talk by taking a time machine back to 2006, and asking what were the issues then in Enterprise Data Management:

  1. There is no current crisis - we have other priorities (we now know what happened there)
  2. The business case is still too fuzzy (regulation took care of this issue)
  3. Dealing with the politics of implementation (silos are still around, but cost and regulation are weakening politics as a defence?)
  4. Understanding data dependencies (understanding this throughout the value chain, but still not clear today?)
  5. The risk of doing it wrong (there are risks you will do data management wrong given all the external parties and sources involved, but what is the risk of not doing it?)

Christian then moved on to say the current regulatory focus is on clearer roadmaps for financial institutions, citing Basel II/III, Dodd-Frank/Volcker Rule in the US, challenges in valuation from IASB and IFRS, fund management challenges with UCITS, AIFMD, EMIR, MiFID and MiFIR, and Solvency II in the insurance industry. He coined the phrase that "Regulation Goes Hollywood", with multiple versions of regulation like UCITS I, II, III, IV, V, VII for example having more versions than a set of Rocky movies.

He then touched upon some of the main motivations behind the BCBS 239 document and said that regulation had three main themes at the moment:

  1. Higher Capital and Liquidity Ratios
  2. Restrictions on Trading Activities
  3. Structural Changes ("ring fence" retail, global operations move to being capitalized local subsidiaries)

Some further observations were on what will be the implications of the effective "loss" of globalization within financial markets, and also what now can be considered as risk free assets (do such things now exist?). Christian then gave some stats on risk as a driver of data and technology spend, with $20-50B expected to be spent over the next 2-3 years (seems a wide range, nothing like a consensus from analysts I guess!).

The talk then moved on to what role data and data management plays within regulatory compliance, with for example:

  • LEI - Legal Entity Identifiers play out throughout most regulation, as a means to enable automated processing and as a way to understand and aggregate exposures.
  • Dodd-Frank - Data management plays a role within OTC processing and STP in general.
  • Solvency II - This regulation for insurers places emphasis on data quality/data lineage and within capital reserve requirements.
  • Basel III - Risk aggregation and counterparty credit risk are two areas of key focus.

Christian outlined the small budget of the regulators relative to the biggest banks (a topic discussed in previous posts, how society wants stronger, more effective regulation but then isn't prepared to pay for it directly - although I would add we all pay for it indirectly but that is another story, in part illustrated in the document this post talks about).

In addition to the well-known term "regulatory arbitrage" dealing with different regulations in different jurisdictions, Christian also mentioned the increasingly used term "substituted compliance", where a global company tries to optimise which jurisdictions it and its subsidiaries comply within, with the aim of avoiding compliance in more difficult regimes through compliance within others.

I think Christian outlined the "data management dichotomy" within financial markets very well:

  1. Regulation requires data that is complete, accurate and appropriate
  2. Industry standards of data management and data are poorly regulated, and there is weak industry leadership in this area.

(not sure if it was quite at this point, but certainly some of the audience questions were about whether the data vendors themselves should be regulated which was entertaining).

He also outlined the opportunity from regulation in that it could be used as a catalyst for efficiency, STP and cost base reduction.

Obviously "Big Data" (I keep telling myself to drop the quotes, but old habits die hard) is hard to avoid, and Christian mentioned that IBM say that 90% of the world's data has been created in the last 2 years. He described the opportunities of the "3 V's" of Volume, Variety, Velocity and "Dark Data" (exploiting underused data with new technology - "Dark" and "Deep" are getting more and more use of late). No mention directly in his presentation but throughout there was the implied extension of the "3 V's" to "5 V's" with Veracity (aka quality) and Value (aka we could do this, but is it worth it?). Related to the "Value" point Christian brought out the debate about what data do you capture, analyse, store but also what do you deliberately discard which is point worth more consideration that it gets (e.g. one major data vendor I know did not store its real-time tick data and now buys its tick data history from an institution who thought it would be a good idea to store the data long before the data vendor thought of it).

I will close this post taking a couple of summary lists directly from his presentation, the first being the top areas of focus for risk managers:

  • Counterparty Risk
  • Integrating risk into the Pre-trade process
  • Risk Aggregation across the firm
  • Risk Transparency
  • Cross Asset Risk Reporting
  • Cost Management/displacement

The second list outlines the main challenges:

  • Getting complete view of risk from multiple systems
  • Lack of front to back integration of systems
  • Data Mapping
  • Data availability of history
  • Lack of Instrument coverage
  • Inability to source from single vendor
  • Growing volumes of data

Christian's presentation then put forward a lot of practical ideas about how best to meet these challenges (I particularly liked the risk data warehouse parts, but I am unsurprisingly biased). In summary, if you get the chance then see or take a read of Christian's presentation; I thought it was a very thoughtful document with some interesting ideas and advice put forward.

 

 

 

 

 

 

 

11 December 2013

Aqumin visual landscapes for TimeScape

Very pleased that our partnership with Aqumin and their AlphaVision visual landscapes has been announced this week (see press release from Monday). Further background and visuals can be found at the following link and for those of you that like instant gratification please find a sample visual below showing some analysis of the S&P500.

[Sample AlphaVision visual landscape showing analysis of the S&P 500]

06 December 2013

F# in Finance New York Style

Quick plug for the New York version of F# in Finance event taking place next Wednesday December 11th, following on from the recent event in London. Don Syme of Microsoft Research will be demonstrating access to market data using F# and TimeScape. Hope to see you there!

27 November 2013

Putting the F# in Finance with TimeScape

Quick thank you to Don Syme of Microsoft Research for including a demonstration of F# connecting to TimeScape running on the Windows Azure cloud in the F# in Finance event this week in London. F# is a functional language that is developing a large following in finance due to its applicability to mathematical problems, the ease of development with F# and its performance. You can find some testimonials on the language here.

Don has implemented a proof-of-concept F# type provider for TimeScape. If that doesn't mean much to you, then a practical example below will help, showing how the financial instrument data in TimeScape is exposed at runtime into the F# programming environment. I guess the key point is just how easy it looks to code with data, since effectively you get guided through what is (and is not!) available as you are coding (sorry if I sound impressed, I spent a reasonable amount of time writing mathematical C code using vi in the mid 90's - so any young uber-geeks reading this, please make allowances as I am getting old(er)...). Example steps are shown below:

  • Referencing the Xenomorph TimeScape type provider and creating a data context
  • Connecting to a TimeScape database
  • Looking at the categories (classes) of financial instrument available
  • Choosing an item (instrument) in a category by name
  • Looking at the properties associated with an item

The intellisense-like behaviour above is similar to what TimeScape's Query Explorer offers and it is great to see this implemented in an external run-time programming language such as F#. Don additionally made the point that each instrument only displays the data it individually has available, making it easy to understand what data you have to work with. This functionality is based on F#'s ability to make each item uniquely nameable, and optionally to assign each item (instrument) a unique type, where all the category properties (defined at the category schema level) that are not available for the item are hidden.

The next event for F# in Finance will take place in New York on Wednesday 11th of December 2013, so hope to see you there. We are currently working on a beta program for this functionality to be available early in the New Year so please get in touch if this is of interest via [email protected].

 

04 November 2013

Risk Data Aggregation and Risk Reporting from PRMIA

Another good event from PRMIA at the Harmonie Club here in NYC last week, entitled Risk Data Aggregation and Risk Reporting - Progress and Challenges for Risk Management. Abraham Thomas of Citi and PRMIA introduced the evening, setting the scene by referring to the BCBS document Principles for effective risk data aggregation and risk reporting, with its 14 principles to be implemented by January 2016 for G-SIBs (Globally Systemically Important Banks) and December 2016 for D-SIBs (Domestically Systemically Important Banks).

The event was sponsored by SAP and they were represented by Dr Michael Adam on the panel, who gave a presentation around risk data management and the problems of having data siloed across many different systems. Maybe unsurprisingly Michael's presentation had a distinct "in-memory" focus to it, with Michael emphasizing the data analysis speed that is now possible using technologies such as SAP's in-memory database offering "Hana".

Following the presentation, the panel discussion started with a debate involving Dilip Krishna of Deloitte and Stephanie Losi of the Federal Reserve Bank of New York. They discussed whether the BCBS document and compliance with it should become a project in itself or part of existing initiatives to comply with data intensive regulations such as CCAR and CVA etc. Stephanie is on the board of the BCBS committee for risk data aggregation and she said that the document should be a guide and not a check list. There seemed to be general agreement on the panel that data architectures should be put together not with a view to compliance with one specific regulation but more as a framework to deal with all regulation to come, a more generalized approach.

Dilip said that whilst technology and data integration are issues, people are the biggest issue in getting a solid data architecture in place. There was an audience question about how different departments need different views of risk and how were these to be reconciled/facilitated. Stephanie said that data security and control of who can see what is an issue, and Dilip agreed and added that enterprise risk views need to be seen by many which was a security issue to be resolved. 

Don Wesnofske of PRMIA and Dell said that data quality was another key issue in risk. Dilip agreed and added that the front office need to be involved in this (data management projects are not just for the back office in isolation) and that data quality was one of a number of needs that compete for resources/budget at many banks at the moment. Coming back to his people theme, Dilip also said that data quality needed intuition to be carried out successfully.

An audience question from Dan Rodriguez (of PRMIA and Credit Suisse) asked whether regulation was granting an advantage to "Too Big To Fail" organisations in that only they have the resources to be able to cope with the ever-increasing demands of the regulators, to the detriment of the smaller financial institutions. The panel did not completely agree with Dan's premise, arguing that smaller organizations were more agile and did not have the legacy and complexity of the larger institutions, so there was probably a sweet spot between large and small from a regulatory compliance perspective (I guess it was interesting that the panel did not deny that regulation was at least affecting the size of financial institutions in some way...)

Again focussing on where resources should be deployed, the panel debated trade-offs such as those between accuracy and consistency. The Legal Entity Identifier (LEI) initiative was thought of as a great start in establishing standards for data aggregation, and the panel encouraged regulators to look at doing more. One audience question was around the different and inconsistent treatment of gross notional and trade accounts. Dilip said that yes this was an issue, but came back to Stephanie's point that what is needed is a single risk data platform that is flexible enough to be used across multiple business and compliance projects.  Don said that he suggests four "views" on risk:

  • Risk Taking
  • Risk Management
  • Risk Measurement
  • Risk Regulation

Stephanie added that organisations should focus on the measures that are most appropriate to their business activity.

The next audience question asked whether the panel thought that the projects driven by regulation had a negative return. Dilip said that his experience was yes, they do have negative returns but this was simply a cost of being in business. Unsurprisingly maybe, Stephanie took a different view advocating the benefits side coming out of some of the regulatory projects that drove improvements in data management.

The final audience question was whether the panel thought it was possible to reconcile all of the regulatory initiatives like Dodd-Frank, Basel III, EMIR etc with operational risk. Don took a data angle to this question, talking about the benefits of big data technologies applied across all relevant data sets, and that any data was now potentially valuable and could be retained. Dilip thought that the costs of data retention were continually going down as data volumes go up, but that there were costs in capturing the data needed for operational risk and other applications. Dilip said that when compared globally across many industries, financial markets were way behind the data capabilities of many sectors, and that finance was more "Tiny Data" than "Big Data", and again he came back to the fact that people were getting in the way of better data management. Michael said that many banks and market data vendors are dealing with data in the 10's of TeraBytes range, whereas the amount of data in the world was around 8-900 PetaBytes (I thought we were already just over into ZetaBytes but what are a few hundred PetaBytes between friends...).

Abraham closed off the evening, firstly by asking the audience if they thought the 2016 deadline would be achieved by their organisation. Only 3 people out of around 50+ said yes. Not sure if this was simply people's reticence to put their hand up, but when Abraham asked one key concern for many was that the target would change by then - my guess is that we are probably back into the territory of the banks not implementing a regulation because it is too vague, and the regulators not being too prescriptive because they want feedback too. So a big game of chicken results, with the banks weighing up the costs/fines of non-compliance against the costs of implementing something big that they can't be sure will be acceptable to the regulators. Abraham then asked the panel for closing remarks: Don said that data architecture was key; Stephanie suggested getting the strategic aims in place but implementing iteratively towards these aims; Dilip said that deciding your goal first was vital; and Michael advised building a roadmap for data in risk. 

 

 

 

 

23 October 2013

Model Risk Management from PRMIA

Guest blog post by Qi Fu of PRMIA and Credit Suisse NYC with some notes on a model risk management event held earlier in September of this year. Big thank you to Qi for his notes and to all involved in organising the event:

The PRMIA event on Model Risk Management (MRM) was held in the evening of September 16th at Credit Suisse.  The discussion was sponsored by Ernst & Young, and was organized by Cynthia Williams, Regulatory Coordinator for Americas at Credit Suisse. 

As financial institutions have shifted considerable focus to model governance and independent model validation, MRM is as timely a topic as any in risk management, particularly since the Fed and OCC issued the Supervisory Guidance on Model Risk Management, also known as SR 11-7.

The event brought together a diverse range of views: the investment banks Morgan Stanley, Bank of America Merrill Lynch, and Credit Suisse were each represented; also on the panel were a consultant from E&Y and a regulator from the Federal Reserve Bank of NY.  The event was well attended with over 100 attendees.

Colin Love-Mason, Head of Market Risk Analytics at CS, moderated the panel, and led off by discussing his 2 functions at Credit Suisse, one being traditional model validation (MV), the other being VaR development and completing gap assessment, as well as compiling model inventory.  Colin made an analogy between model risk management and real estate.  As in real estate, there are three golden rules in MRM, which are emphasized in SR 11-7: documentation, documentation, and documentation.  Looking into the future, the continuing goals in MRM are quantification and aggregation.

Gagan Agarwala of E&Y’s Risk Advisory Practice noted that there is nothing new about many of the ideas in MRM.  Most large institutions already have in place guidance on model validation and model risk management.  In the past validation consisted of mostly quantitative analysis, but the trend has shifted towards establishing more mature, holistic, and sustainable risk management practices. 

Karen Schneck of FRBNY’s Models and Methodology Department spoke about her role at the FRB, where she is on the model validation unit for stress testing for Comprehensive Capital Analysis and Review (CCAR); thus part of her work was on MRM before SR 11-7 was written.  SR 11-7 is definitely a “game changer”; since its release, there is now more formalization and organization around the oversight of MRM, and rather than a rigid organization chart, the reporting structure at the FRB is much more open minded.  In addition, there is an increased appreciation of the infrastructure around the models themselves and the challenges faced by practitioners, in particular the model implementation component, which is not always immediately recognized.

Craig Wotherspoon of BAML Model Risk Management remarked on his experience in risk management, and commented that a new feature in the structure of risk governance is that model validation is turning into a component of risk management.  In addition, the people involved are changing: risk professionals with the combination of a scientific mind, business sense, and writing skills will be in as high demand as ever.

Jon Hill, Head of Morgan Stanley’s Quantitative Analytics Group, discussed his past experience in MRM since the 90’s, when the primary tools applied were “sniff tests”.  Since then, the landscape has completely changed.  In the past, focus had been on production, while documentation of models was an afterthought; now documentation must be detailed enough for a highly qualified individual to review.  In times past the focus was only around validating methodology; nowadays it is just as important to validate the implementation.  There is an emphasis on stress testing, especially for complex models, in addition to internal threshold models and independent benchmarking.  The definition of what a model is has also expanded to anything that takes numbers in and has numbers as output.  However, these increased demands require a substantial increase in resources; the difficulty of recruiting talent in these areas will remain a major challenge.

Colin noted a contrast in the initial comments of the panelists: on one hand some indicated that MRM is mostly common sense, but Karen in particular emphasized the “game-changing” implications of SR 11-7, with MRM becoming more process oriented, when in the past it had been more of an intellectual exercise.  With regards to recruitment, it is difficult to find candidates with all the prerequisite skill sets; one option is to split up the workload to make it easier to hire.

Craig noted the shift in the risk governance structure, the model risk control committees are defining what models are, more formally and rigorously.  Gagan added that models have lifecycles, and there are inherent risks associated within that lifecycle.  It is important to connect the dots to make sure everything is conceptually sound, and to ascertain that other control functions understand the lifecycles.

Karen admitted that additional process requirements carry the risk of process trumping value.  MRM should aim to maintain high standards while not getting overwhelmed by the process itself, to the point where some ideas become too expensive to implement.  There is also the challenge of maintaining the independence of the MV team.

Jon concurred with Karen on the importance of maintaining independence.  A common experience is that when validators find mistakes in the models, they become drawn into the development process with the modelers.  He also noted differences between the US, UK, and European MV processes, and asserted his view that the US is ahead of the curve and setting standards.

Colin noted the issue of the lack of an analogous PRA document to SR 11-7, that drills down into nuts and bolts of the challenges in MRM.  He also concurred on the difficulty of maintaining independence, particularly in areas with no established governance.  It is important to get model developers to talk to other developers about the definition and scope of the models, as well as possible expansion of scope.  There is a wide gamut of models: core, pricing, risk, vendor, sensitivity, scenarios, etc.  Who is responsible for validating which?  Who checks on the calibration, tolerance, and weights of the models?  These are important questions to address.

Craig commented further on the complexity and uncertainty of defining what a model is, and on whose job it is to determine that, amongst the different stakeholders.  It also needs to be taken into consideration that model developers may be biased towards limiting the number of models.

Gagan followed up by noting that while the generic definition of models is broad and will need to be redefined, analytics do not all need to have the same standards; the definition should leave some flexibility for context.  Also, the highest standard should be assigned to risk models.

Karen added that defining and validating models used to have a narrow focus, and be done in a tailored, controlled environment.  It would be better to broaden the scope, and to reexamine the question on an ongoing basis (it is however important to point out that annual review does not equal annual re-validation).  In addition to the primary models, some challenger models also need to be supported; developers should discuss why they’re happy with the primary model, how it is different from the challenger model, and how it impacts output.

Colin brought up the point of stress-testing.  Jon asserted that stress-testing is more important for stochastic models, which are more likely to break under nonsensical inputs.  Also any model that plugs into the risk system should require judicious decision-making, as well as annual reviews to look at changes since the previous review.

Colin also brought up the topic of change management: what are the system challenges when model developers release code, which may include experimental releases?  Often discussed are concepts of annual certification and checkpoints.  Jon commented that the focus should be on changes of 5% or more, with pricing models being less of a priority, and that firms should move towards centralized source code repositories.

Karen also added the question of what ought to be considered a material change: the more conservative answer is that any variation, even a pure code change that didn’t change model usage or business application, may need to be communicated to upper management.

Colin noted that developers often have a tendency to encapsulate intentions, and have difficulty with or reluctance to document changes, thus resulting in many grey areas.  Gagan added that infrastructure is crucial, especially when market conditions are rapidly changing: MRM needs to have controls in place.  Also, models in Excel make the change management process more difficult.

The panel discussion was followed by a lively Q&A session with an engaged audience, below are some highlights.

Q:  How do you distinguish between a trader whose model actually needs change, versus a trader who is only saying so because he/she has lost money?

Colin:  Maintain independent price verification and control functions.

Craig:  Good process for model change, and identify all stakeholders.

Karen:  Focus on what model outputs are being changed, what the trader’s assumptions are, and what is driving results.

Q:  How do you make sure models are used in business in a way that makes sense?

Colin:  This can be difficult, front office builds the models and states what they are good for; there is no simple answer from the MV perspective; usage means getting as many people into the governance process as possible, internal audit and setting up controls.

Gagan:  Have coordination with other functions, holistic MRM.

Karen:  Need structure, inventory a useful tool for governance function.

Q:  Comments on models used in the insurance industry?

Colin:  Very qualitative, possible to give indications, difficult to do exact quantitative analysis, estimates are based on a range of values.  Need to be careful with inputs for very complex models, which can be based on only a few trades.

Q:  What to do about big shocks in CCAR?

Jon:  MV should validate for severe shocks, and if model fails may need only simple solution.

Karen:  Validation tools, some backtesting data, need to benchmark, quant element of stress testing need to substantiated and supported by qualitative assessment.

Q:  How to deal with vendor models?

Karen:  Not acceptable just to say it’s okay as long as the vendor is reputable, want to see testing done, consider usage also compare to original intent.

Craig:  New guidance makes it difficult to buy vendor models, but if a vendor recognizes this, it will give them a competitive advantage.

Q:  How to define independence for medium and small firms?

Colin:  Be flexible with resources, bring in different people, get feedback from senior management, and look for consistency.

Jon:  Hire E&Y?  There is never complete independence even in a big bank.

Gagan:  Key is the review process.

Karen:  Consultants could be cost effective; vendor validation may not be enough.

Q:  At firm level, do you see practice of assessing risk models?

Jon:  Large bank should appoint Model Risk Officer.

Karen:  Just slapping on additional capital is not enough.

Q:  Who actually does MV?

Colin:  First should be user, then developer, 4 eyes principle.

Q:  Additional comments on change management?

Colin:  Ban Excel for anything official; need controlled environment.

 

21 October 2013

Credit Risk: Default and Loss Given Default from PRMIA

Great event from PRMIA on Tuesday evening of last week, entitled Credit Risk: The link between Loss Given Default and Default. The event was kicked off by Melissa Sexton of PRMIA, who introduced Jon Frye of the Federal Reserve Bank of Chicago. Jon seems to be an acknowledged expert in the field of Loss Given Default (LGD) and credit risk modelling. I am sure that the slides will be up on the PRMIA event page above soon, but much of Jon's presentation seems to be based around the following working paper. So take a look at the paper (which is good in my view) but I will stick to an overview and in particular any anecdotal comments made by Jon and other panelists.

Jon is an excellent speaker, relaxed in manner, very knowledgeable about his subject, humorous but also sensibly reserved in coming up with immediate answers to audience questions. He started by saying that his talk was not going to be long on philosophy, but very pragmatic in nature. Before going into detail, he outlined that the area of credit risk can and will be improved, but that this improvement becomes easier as more data is collected, and inevitably this data collection process may need to run for many years and decades yet before the data becomes statistically significant.

Which Formula is Simpler? Jon showed two formulas for estimating LGD, one a relatively complex looking formula (the Vasicek distribution mentioned in his working paper) and the other a simple linear model of the form a + b.x. Jon said that looking at the two formulas, many would hope that the second formula might work best given its simplicity, but he wanted to convince us that the first formula was in fact simpler than the second. He said that the second formula would need to be regressed on all loans to estimate its parameters, whereas the first formula depended on two parameters that most banks should have a fairly good handle on. The two parameters were Default Rate (DR) and Expected Loss (EL). The fact that these parameters were relatively well understood seemed to be the basis for saying the first formula was simpler, despite its relative mathematical complexity. This prompted an audience question on what is the difference between Probability of Default (PD) and Default Rate (DR). Apparently it turns out PD is the expected probability of default before default happens (so ex-ante) and DR is the realised rate of default (so ex-post).
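
For reference, the nonlinear formula has roughly the shape below - this is my own reconstruction from the Vasicek one-factor framework that the working paper builds on, so treat it as a sketch and check Jon's paper for the exact statement:

% Conditional LGD as a function of the default rate DR, parameterized only by
% PD, EL and the asset correlation rho (reconstruction, not a quote from the slides):
\[
  \mathrm{cLGD}(\mathrm{DR})
    = \frac{\Phi\!\left(\Phi^{-1}(\mathrm{DR}) - k\right)}{\mathrm{DR}},
  \qquad
  k = \frac{\Phi^{-1}(\mathrm{PD}) - \Phi^{-1}(\mathrm{EL})}{\sqrt{1-\rho}}
\]
% versus the simple linear alternative, whose a and b must be regressed from loan data:
\[
  \mathrm{cLGD}(\mathrm{DR}) = a + b\,\mathrm{DR}
\]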

Default and LGD over Time. Jon showed a graph (by an academic called Altman) of DR and LGD over time. When the DR was high (lots of companies failing, in a likely economic downturn) the LGD was also perhaps understandably high (a high number of companies failing, against an economic background that is both part of the cause of the failures and unhelpful to the loss recovery process). When DR is low, there is a disconnect between LGD and DR. Put another way, when the number of companies failing is low, the losses incurred by those companies that do default can be high or low; there is no discernable pattern. I guess I am not sure whether this disconnect is due to the smaller number of companies failing meaning the sample space is much smaller and hence the outcomes are more volatile (no averaging effect), or, more likely, that in healthy economic times the loss given a default is much more of a random variable, dependent on the defaulting company specifics rather than on the general economic background.

Conclusions Beware: Data is Sparse. Jon emphasised from the graph that the Altman data went back 28 years, of which 23 years were periods of low default, with 5 years of high default levels but only across 3 separate recessions. Therefore from a statistical point of view this is very little data, which makes drawing any firm statistical conclusions about default and levels of loss given default very difficult and error-prone.

The Inherent Risk of LGD. Jon here seemed to be focussed not on the probability of default, but rather on how LGD behaves once a default has occurred and the risk inherent in the different losses faced. He described how LGD affects i) Economic Capital - if LGD is more variable, then you need stronger capital reserves, ii) Risk and Reward - if a loan has more LGD risk, then the lender wants more reward, and iii) Pricing/Valuation - even if the expected LGD of two loans is equal, different loans can still default under different conditions and have different LGD levels.

Models of LGD

Jon showed a chart with LGD plotted against DR for 6 models (two of which I think he was involved in). All six models depend on three parameters - PD, EL and correlation - and all six seemed to produce almost identical results when plotted on the chart. Jon mentioned that one of his models had been validated (successfully I think, but with a lot of noise in the data) against Moody's loan data taken over the past 14 years. He added that he was surprised that all six models produced almost the same results, implying that either all models were converging around the correct solution or, in total contrast, that all six models were potentially subject to "group think" and were systematically wrong in the ways the problem should be looked at.

Jon took one of his LGD models and compared it against the simple linear model, using simulated data. He showed a graph of some data points for what he called a "lucky bank" with the two models superimposed over the top. The lucky bit came in since this bank's data points for DR against LGD showed lower DR than expected for a given LGD, and lower LGD for a given DR. In this specific case, Jon said that the simple linear model fits better than his non-linear one, but when done over many data sets his LGD model fitted better overall since it seemed to be less affected by random data.
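
To make that comparison concrete, here is a hedged Python sketch of the kind of exercise described: simulate noisy (DR, LGD) observations, fit the simple linear model by least squares, and evaluate the nonlinear function alongside it. The parameter values and the simulation itself are my own toy assumptions, not Jon's actual study.

import numpy as np
from scipy.stats import norm

def clgd(dr, pd_, el, rho):
    """Nonlinear LGD as a function of the default rate (reconstruction, see above)."""
    k = (norm.ppf(pd_) - norm.ppf(el)) / np.sqrt(1.0 - rho)
    return norm.cdf(norm.ppf(dr) - k) / dr

rng = np.random.default_rng(0)
pd_, el, rho = 0.02, 0.01, 0.15        # assumed portfolio parameters (hypothetical)

# simulate annual default rates from the one-factor model, then add LGD noise
z = rng.standard_normal(500)
dr = norm.cdf((norm.ppf(pd_) - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
lgd = np.clip(clgd(dr, pd_, el, rho) + rng.normal(0.0, 0.05, dr.size), 0.01, 1.0)

# fit the simple linear alternative a + b*DR by least squares
b, a = np.polyfit(dr, lgd, 1)
print(f"linear fit: LGD ~ {a:.3f} + {b:.3f} * DR")
print("nonlinear model at the median DR:", round(float(clgd(np.median(dr), pd_, el, rho)), 3))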

There were then a few audience questions as Jon closed his talk, one leading Jon to remind everyone of the scarcity of data in LGD modelling. In another Jon seemed to imply that he would favor using his model (maybe understandably) in the Dodd-Frank Annual Stress Tests for banks, emphasising that models should be kept simple unless a more complex model can be justified statistically. 

Steve Bennet and the Data Scarcity Issue 

Following Jon's talk, Steve Bennet of PECDC picked up on Jon's point about scarce data within LGD modelling. Steve is based in the US, working for PECDC, which is a cross-border initiative to collect LGD and EAD (exposure at default) data. The basic premise seems to be that in dealing with the scarce data problem, we do not have 100 years of data yet, so in the meantime let's pool data across member banks and hence build up a more statistically significant data set - put another way: let's increase the width of the dataset if we can't control the depth.

PECDC is a consortium of around 50 organisations that pool data relating to credit events. Steve said that they capture data fields per default at four "snapshot" times: origination, 1 year prior to default, at default and at resolution. He said that every bank that had joined the organisation had managed to improve its datasets. Following an audience question, he clarified that PECDC does not predict LGD with any of its own models, but rather provides the pooled data to enable the banks to model LGD better.

Steve said that LGD turns out to be very different for different sectors of the market, particularly between SMEs and large corporations (levels of LGD for large corporations being more stable globally and less subject to regional variations). But also there is great LGD variation across specialist sectors such as aircraft finance, shipping and project finance. 

Steve ended by saying that PECDC was originally formed in Europe, and was now attempting to get more US banks involved, with 3 US banks already involved and 7 waiting to join. There was an audience question relating to whether regulators allowed pooled data to be used under Basel IRB - apparently Nordic regulators allow this due to the need for more data in a smaller market, European banks use the pooled data to validate their own data in IRB, but in the US banks must use their own data at the moment.

Til Schuermann

Following Steve, Til Schuermann added his thoughts on LGD. He said that LGD varies over time and is not random, being worse in recession when DR is high. His stylized argument to support this was that in recession there are lots of defaults, leading to lots of distressed assets, and following the laws of supply and demand the assets used in recovery would be subject to lower prices. Til mentioned that the timing of recovery has a large effect, with recovery following default between 1 and 10 quarters later. He offered words of warning that not all defaults and not all collateral are created equal, emphasising that debt structures and industry stress matter. 

Summary

The evening closed with a few audience questions and a general summation by the panelists of the main issues of their talks, primarily around models and modelling, the scarcity of data and how to be pragmatic in the application of this kind of credit analysis. 

 

 

09 October 2013

And the winner of the Best Risk Data Management and Analytics Platform is...

...Xenomorph!!! Thanks to all who voted for us in the recent A-Team Data Management Awards, it was great to win the award for Best Risk Data Management and Analytics Platform. Great that our strength in the Data Management for Risk field is being recognised, and big thanks again to clients, partners and staff who make it all possible!

Please also find below some posts for the various panel debates at the event:

 Some photos, slides and videos from the event are now available on the A-Team site.

 

07 October 2013

#DMSLondon - Managed Services and the Utility Model

Andrew Delaney introduced the final panel of the day, involving Steve Cheng of Rimes, Jonathan Clark of Tech Mahindra, Tom Dalglish of UBS and Martijn Groot of Euroclear. Main points:

  • Andrew started by asking the panel for their definitions of managed data services and data utilities
  • Martijn said that a managed data service was usually the lifting out of a data process from within a company to be run by somebody else, whereas a data utility had many users.
  • Tom put it another way saying that a managed service was run for you whereas a utility was run for them. Tom suggested that there were some concerns around data utilities for the industry in terms of knowing/being transparent about data vendor affinity and any data monopoly aspects.
  • When asked why past attempts at data utilities had failed, Tom said that it must be frustrating to be right but at the wrong time; in addition to the timing being right just now (costs/regulations being drivers), the tech stack available is better and the appreciation of the importance of data usage is clearer.
  • Steve added a great point on the tech stack, in that it now made mass customisation much easier.
  • Jonathan made the point that past attempts at data utilities were built on product platforms used at clients, whereas the latest utilities were built on platforms specifically designed for use by a data utility.
  • Looking at the cost savings of using a data utility, Martijn said that the industry spends around $16-20B on data, and that with his Euroclear data utility they can serve 2000 clients with a staff level that is less than any one client employs directly.
  • Tom said that the savings from collapsing the data silos were primarily from more efficient/reduced usage of people and hardware to perform a specific function, and not data.
  • Steve suggested that some utilities offer incremental data services rather than taking on all data as in the old utility model, again coming back to his earlier point about mass customisation.
  • Tom mentioned it was a bit like cable TV, where you can subscribe to a set of services of your choice but where certain services cost more than others.
  • Martijn said that there were too many vested interests to turn data costs around quickly. He said that data utilities could go a long way however. 
  • Tom concluded by saying that it was about content not feeds, licensing was important as was how to segregate data.

Good panel - additionally one final audience question/discussion was around data utilities providing LEI data, and it was argued that LEI without the hierarchy is just another set of data to map and manage. 

 

#DMSLondon - Data Architecture: Sticks or Carrots?

Great day on Thursday at the A-Team Data Management Summit in London (personally not least because Xenomorph won the Best Risk Data Management/Analytics Platform Award but more of that later!). The event kicked off with a brief intro from Andrew Delaney of the A-Team talking through some of the drivers behind the current activity in data management, with Andrew saying that risk and regulation were to the fore. Andrew then introduced Colin Gibson, Head of Data Architecture, Markets Division at Royal Bank of Scotland.

Data Architecture - Sticks or Carrots? Colin began by looking at the definition of "data architecture" showing how the definition on Wikipedia (now obviously the definitive source of all knowledge...) was not particularly clear in his view. He suggested himself that data architecture is composed of two related frameworks:

  • Orderly Arrangement of Parts
  • Discipline 

He said that the orderly arrangement of parts is focussed on business needs and aims, covering how data is sourced, stored, referenced, accessed, moved and managed. On the discipline side, he said that this covered topics such as rules, governance, guides, best practice, modelling and tools.

Colin then put some numbers around the benefits of data management, saying that every dollar spent on centralising data saves 20 dollars, and mentioning a resulting 80% reduction in operational costs. Related to this he said that every dollar spent on not replicating data saved a dollar on reconciliation tools and a further dollar on the use of those tools (not sure how the two overlap but these are obviously some of the "carrots" from the title of the talk). 

Despite these incentives, Colin added that getting people to actually use centralised reference data remains a big problem in most organisations. He said he thought that people find it too difficult to understand and consume what is there, and faced with a choice they do their own thing as an easier alternative. Colin then talked about a program within RBS called "GoldRush" whereby there is a standard data management library available to all new projects in RBS which contains:

  • messaging standards
  • standard schema
  • update mechanisms

The benefit being that if the project conforms with the above standards then it has little work to do to manage reference data, since all the work is done once and centrally. Colin mentioned that there also needs to be feedback from the projects back to the central data management team around what is missing/needing to be improved in the library (personally I would take it one step further so that end-users and not just IT projects have easy discovery and access to centralised reference data). The lessons he took from this were that we all need to "learn to love" enterprise messaging if we are to get to the top down publish once/consume often nirvana, where consuming systems can pick up new data and functionality without significant (if any) changes (might be worth a view of this post on this topic). He also mentioned the role of metadata in automating reconciliation where that needed to occur.

Colin then mentioned that allocation of costs of reference data to consumers is still a hot topic, one where reference data lags behind the market data permissioning/metering insisted upon by exchanges. Related to this Colin thought that the role of the Chief Data Officer to enforce policies was important, and the need for the role was being driven by regulation. He said that the true costs of a tactical, non-standard approach need to be identifiable (quantifying the size of the stick I guess) but that he had found it difficult to eliminate the tactical use of pricing data sourced for the front office. He ended by mentioning that there needs to be a coming together of market data and reference data since operations staff are not doing quantitative valuations (e.g. does the theoretical price of this new bond look ok?) and this needs to be done to ensure better data quality and increased efficiency (couldn't agree more, have a look at this article and this post for a few of my thoughts on the matter). Overall very good speaker with interesting, practical examples to back up the key points he was trying to get across. 

 

25 June 2013

Matthew Berry on the Libor/OIS curve debate

Guest post today from Matthew Berry of Bedrock Valuation Advisors, discussing Libor vs OIS based rate benchmarks. Curves and curve management are a big focus for Xenomorph's clients and partners, so great that Matthew can shed some further light on the current debate and its implications:

New Benchmark Proposal’s Significant Implications for Data Management

During the 2008 financial crisis, problems posed by discounting future cash flows using Libor rather than the overnight index swap (OIS) rate became apparent. In response, many market participants have modified systems and processes to discount cash flows using OIS, but Libor remains the benchmark rate for hundreds of trillions of dollars worth of financial contracts. More recently, regulators in the U.S. and U.K. have won enforcement actions against several contributors to Libor, alleging that these banks manipulated the benchmark by contributing rates that were not representative of the market, and which benefitted the banks’ derivative books of business.

In response to these allegations, the CFTC in the U.S. and the Financial Conduct Authority (FCA) in the U.K. have proposed changes to how financial contracts are benchmarked and how banks manage their submissions to benchmark fixings. These proposals have significant implications for data management.

The U.S. and U.K. responses to benchmark manipulation

In April 2013, CFTC Chairman Gary Gensler delivered a speech in London in which he suggested that Libor should be retired as a benchmark. Among the evidence he cited to justify this suggestion:

  • Liquidity in the unsecured inter-dealer market has largely dried up.
  • The risk implied by contributed Libor rates has historically not agreed with the risk implied by credit default swap rates. The Libor submissions were often stale and did not change, even if the entity’s CDS spread changed significantly. Gensler provided a graph to demonstrate this.

Gensler proposed to replace Libor with either the OIS rate or the rate paid on general collateral repos. These instruments are more liquid and their prices more readily-observable in the market. He proposed a period of transition during which Libor is phased out while OIS or the GC repo rate is phased in.

In the U.K., the Wheatley Report provided a broad and detailed review of practices within banks that submit rates to the Libor administrator. This report found a number of deficiencies in the benchmark submission and calculation process, including:

  • The lack of an oversight structure to monitor systems and controls at contributing banks and the Libor administrator.
  • Insufficient use of transacted or otherwise observable prices in the Libor submission and calculation process.

The Wheatley Report called for banks and benchmark administrators to put in place rigorous controls that scrutinize benchmark submissions both pre and post publication. The report also calls for banks to store an historical record of their benchmark submissions and for benchmarks to be calculated using a hierarchy of prices with preference given to transacted prices, then prices quoted in the market, then management’s estimates.

Implications for data management

The suggestions for improving benchmarks made by Gensler and the Wheatley Report have far-reaching implications for data management.

If Libor and its replacement are run in parallel for a time, users of these benchmark rates will need to store and properly reference two different fixings and forward curves. Without sufficiently robust technology, this transition period will create operational, financial and reputational risk given the potential for users to inadvertently reference the wrong rate. If Gensler’s call to retire Libor is successful, existing contracts may need to be repapered to reference the new benchmark. This will be a significant undertaking. Users of benchmarks who store transaction details and reference rates in electronic form and manage this data using an enterprise data management platform will mitigate risk and enjoy a lower cost to transition.

Within the submitting banks and the benchmark administrator, controls must be implemented that scrutinize benchmark submissions both pre and post publication. These controls should be exceptions-based and easily scripted so that monitoring rules and tolerances can be adapted to changing market conditions. Banks must also have in place technology that defines the submission procedure and automatically selects the optimal benchmark submission. If transacted prices are available, these should be submitted. If not, quotes from established market participants should be submitted. If these are not available, management should be alerted that it must estimate the benchmark rate, and the decision-making process around that estimate should be documented.
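As a rough illustration of the submission hierarchy described above, the sketch below prefers transacted prices, then quotes from established market participants, and finally escalates to management for a documented estimate. The structure and names are my own illustrative assumptions, not any bank's actual submission system.

```python
# Hypothetical sketch of a benchmark submission waterfall:
# transactions first, then quotes, then a documented management estimate.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Submission:
    rate: float
    source: str   # "transaction", "quote" or "management_estimate"

def select_submission(transactions: List[float],
                      quotes: List[float],
                      alert_management: Callable[[], Optional[float]]) -> Optional[Submission]:
    if transactions:
        # Highest preference: average of observed transacted rates
        return Submission(sum(transactions) / len(transactions), "transaction")
    if quotes:
        # Next: quotes from established market participants
        return Submission(sum(quotes) / len(quotes), "quote")
    # No observable prices: alert management, whose estimate must be documented
    estimate = alert_management()
    return Submission(estimate, "management_estimate") if estimate is not None else None

# Example: no transactions today, two market quotes available
print(select_submission([], [0.53, 0.55], alert_management=lambda: None))
```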

Conclusion

These improvements to the benchmark calculation process will, in Gensler’s words, “promote market integrity, as well as financial stability.” Firms that effectively utilize data management technology, such as Xenomorph's TimeScape, to implement these changes will manage the transition to a new benchmark regime at a lower cost and with a higher likelihood of success.

 


07 May 2013

Big Data Finance at NYU Poly

I went over to NYU Poly in Brooklyn on Friday of last week for their Big Data Finance Conference. To get a slightly negative point out of the way early, I guess I would have to pose the question "When is a big data conference not a big data conference?". Answer: "When it is a time series analysis conference" (sorry if you were expecting a funny answer... but as you can see, what I occupy my time with professionally doesn't naturally lend itself to too much comedy). As I like time series analysis this was fine by me, but the event certainly wasn't fully "as advertised" in my view - and I guess other people are experiencing this problem too.

Maybe this slightly skewed agenda was due to the relative newness of the topic, the newness of the event and the temptation for time series database vendors to jump on the "Big Data" marketing bandwagon (what? I hear you say, we vendors jumping on a buzzword marketing bandwagon, never!...). Many of the talks were about statistical time series analysis of market behaviour and less about what I was hoping for, which was new ways in which empirical or data-based approaches to financial problems might be addressed through big data technologies (as an aside, here is a post on a previous PRMIA event on big data in risk management as some additional background). There were some good attempts at getting a cross-discipline fertilization of ideas going at the conference, but given the topic then representatives from the mobile and social media industries were very obviously missing in my view. 

So as a complete counterexample to the two paragraphs above, the first speaker at the event (Kevin Atteson of Morgan Stanley) was very much on theme with the application of big data technologies to the mortgage market. Apparently Morgan Stanley had started their "big data" analysis of the mortgage market in 2008 as part of a project to assess and understand more about the potential losses that Fannie Mae and Freddie Mac faced due to the financial crisis.

Echoing some earlier background I had heard on mortgages, one of the biggest problems in trying to understand the market according to Kevin was data, or rather the lack of it. He compared mortgage data analysis to "peeling an onion", saying that going back to the time of the crisis, mortgage data at an individual loan level was either not available or of such poor quality as to be virtually useless (e.g. hard to get accurate ZIP code data for each loan). Kevin described the mortgage data set as "wide" (lots of loans with lots of fields for each loan) rather than "deep" (lots of history), with one of the main data problems being trying to match nearest-neighbour loans. He mentioned that only post crisis have Fannie and Freddie been ordered to make individual loan data available, and that there is still no readily available linkage data between individual loans and mortgage pools (some presentations from a recent PRMIA event on mortgage analytics are at the bottom of the page here for interested readers). 

Kevin said that Morgan Stanley had rejected the use of Hadoop, primarily due to its write throughput limitations, which Kevin indicated was a limiting factor in many big data technologies. He indicated that for his problem type he still believed their own infrastructure to be superior to even the latest incarnations of Hadoop. He also mentioned the technique of having 2x redundancy or more on the data/jobs being processed, aimed not just at failover but also at using whichever instance of a job finished first. Interestingly, he also added that Morgan Stanley's infrastructure engineers have a policy of rebooting servers in the grid even during the day/in use, so fault tolerance was needed for both unexpected and entirely deliberate hardware node unavailability. 
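The "run it twice and take whichever finishes first" technique is easy to sketch with standard tooling; the toy example below uses Python's concurrent.futures purely as an illustration (Morgan Stanley's actual infrastructure was not disclosed).

```python
# Illustrative sketch: submit redundant replicas of a job and keep the first result.
import concurrent.futures as cf
import random
import time

def price_batch(batch_id: int, replica: int) -> str:
    # Stand-in for a long-running valuation job; a replica may be slow (or lost
    # entirely if its node is rebooted mid-run).
    time.sleep(random.uniform(0.1, 1.0))
    return f"batch {batch_id} priced by replica {replica}"

def run_with_redundancy(batch_id: int, replicas: int = 2) -> str:
    with cf.ThreadPoolExecutor(max_workers=replicas) as pool:
        futures = [pool.submit(price_batch, batch_id, r) for r in range(replicas)]
        # Take whichever replica completes first; the others are simply ignored.
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()

if __name__ == "__main__":
    for b in range(3):
        print(run_with_redundancy(b))
```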

Other highlights from the day:

  • Dennis Shasha had some interesting ideas on using matrix algebra to reduce the data analysis workload needed in some problems - basically he was all for "cleverness" over simply throwing compute power at some data problems. On a humorous note (if you are not a trader?), he also suggested that some traders had "the memory of a fruit-fly".
  • Robert Almgren of QuantitativeBrokers was an interesting speaker, talking about how his firm had done a lot of analytical work in trying to characterise possible market responses to information announcements (such as Friday's non-farm payroll announcement). I think Robert was not so much trying to predict the information itself, but rather trying to predict likely market behaviour once the information is announced. 
  • Scott O'Malia of the CFTC was an interesting speaker during the morning panel. He again acknowledged some of the recent problems the CFTC had experienced in terms of aggregating/analysing the data they are now receiving from the market. I thought his comment on the twitter crash was both funny and brutally pragmatic with him saying "if you want to rely solely upon a single twitter feed to trade then go ahead, knock yourself out."
  • Eric Vanden Eijnden gave an interesting talk on "detecting Black Swans in Big Data". Most of the examples were from current detection/movement in oceanography, but seemed quite analogous to "regime shifts" in the statistical behaviour of markets. Main point seemed to be that these seemingly unpredictable and infrequent events were predictable to some degree if you looked deep enough in the data, and in particular that you could detect when the system was on a possible likely "path" to a Black Swan event.

One of the most interesting talks was by Johan Walden of the Haas Business School, on the subject of "Investor Networks in the Stock Market". Johan explained how they had used big data to construct a network model of all of the participants in the Turkish stock exchange (both institutional and retail) and in particular how "interconnected" each participant was with other members. His findings seemed to support the hypothesis that the more "interconnected" the investor (at the centre of many information flows rather than at the edges) the more likely that investor would demonstrate superior return levels to the average. I guess this is a kind of classic transferral of some of the research done in social networking, but very interesting to see it applied pragmatically to financial markets, and I would guess an area where a much greater understanding of investor behaviour could be gleaned. Maybe Johan could do with a little geographic location data to add to his analysis of how information flows.

So overall a good day with some interesting talks - the statistical presentations were challenging to listen to at 4pm on a Friday afternoon but the wine afterwards compensated. I would also recommend taking a read through a paper by Charles S. Tapiero on "The Future of Financial Engineering" for one of the best discussions I have so far read about how big data has the potential to change and improve upon some of the assumptions and models that underpin modern financial theory. Coming back to my starting point in this post on the content of the talks, I liked the description that Charles gives of traditional "statistical" versus "data analytics" approaches, and some of the points he makes about data immediately inferring relationships without the traditional "hypothesize, measure, test and confirm-or-not" were interesting, both in favour of data analytics and in cautioning against unquestioning belief in the findings from data (feels like this post from October 2008 is a timely reminder here). With all of the hype and the hope around the benefits of big data, maybe we would all be wise to remember this quote by a certain well-known physicist: "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."

 

25 April 2013

The Anthropology, Sociology, and Epistemology of Risk

Background - I went along to my first PRMIA event in Stamford, CT last night, with the rather grandiose title of "The Anthropology, Sociology, and Epistemology of Risk". Stamford is about 30 miles north of Manhattan and is the home to major offices of a number of financial markets companies such as Thomson Reuters, RBS and UBS (who apparently have the largest column-less trading floor in the world at their Stamford headquarters - particularly useful piece of trivia for you there...). It also happens to be about 5 minutes drive/train journey away from where I now live, so easy for me to get to (thanks for another useful piece of information I hear you say...). Enough background, more on the event which was a good one with five risk managers involved in an interesting and sometimes philosophical discussion on fundamentally what "risk management" is all about.

Introduction - Marc Groz who heads the Stamford Chapter of PRMIA introduced the evening and started by thanking Barry Schwimmer for allowing PRMIA to use the Stamford Innovation Centre (the Old Town Hall) for the meeting. Henrik Neuhaus moderated the panel, and started by outlining the main elements of the event title as a framework for the discussion:

  • Anthropology - risk management is to what purpose?
  • Sociology - how does risk management work?
  • Epistemology - what knowledge is really contained within risk management?

Henrik started by taking a passage about anthropology and replacing human "development" with "risk management" which seemed to fit ok, although the angle I was expecting was much more about human behaviour in risk management than where Henrik started. Henrik asked the panel what results they had seen from risk management and what did that imply about risk management? The panelists seemed a little confused or daunted by the question prompting one of them to ask "Is that the question?".

Business Model and Risk Culture - Elliot Noma dived in by responding that the purpose of risk management obviously depended very much on the institutional goals of the organization. He said that it was as much about what you are forced to do as about what you try to do in risk management. Elliot said that the sell-side view of risk management was very regulatory and capital focused, whereas mutual funds look more at risk relative to benchmarks and performance attribution. He added that in the alternatives (hedge-fund) space there were no benchmarks and the focus was more about liquidity and event risk.

Steve Greiner said that it was down to the investment philosophy and how risk is defined and measured. He praised some asset managers where the risk managers sit across from the portfolio managers and are very much involved in the decision making process.

Henrik asked whether any of the panel had ever defined a “mission statement” for risk management. Marc Groz chipped in that he remembered once defining one, and that it was very different from what others in the institution were expecting and indeed very different from the risk management that he and his department subsequently undertook.

Mark Szycher (of GM Pension Fund) said that risk management split into two areas for him, the first being the symmetrical risks where you need to work out the range of scenarios for a particular trade or decision being taken. The second was the more asymmetrical risks (i.e. downside only) such as those found in operational risk where you are focused on how best to avoid them happening.

Micro Risk Done Well - Santa Federico said that he had experience of some of the major problems experienced at institutions such as Merrill Lynch, Salomon Brothers and MF Global, and that he thought risk management was much more of a cultural problem than a technical one. Santa said he thought that the industry was actually quite good at the micro (trade, portfolio) level of risk management, but obviously less effective at the large systematic/economic level. Mark asked Santa what was the nature of the failures he had experienced. Santa said that the risks were well modeled, but that maybe the assumptions around macro variables such as the housing market had proved to be extremely poor.

Keep Dancing? - Henrik asked the panel what might be done better? Elliot made the point that some risks are just in the nature of the business. If a risk manager did not like placing a complex illiquid trade and the institution was based around trading in illiquid markets then what is a risk manager to do? He quoted the Citi executive who said “whilst the music is still playing we have to dance”. Again he came back to the point that the business model of the institution drives its culture and the emphasis of its risk management (I guess I see what Elliot was saying, but taken one way it implied that regardless of what was going on risk management needs to fit in with it, whereas I am sure he meant that risk managers must fit in with the business model mandated to shareholders).

Risk Attitudes in the USA - Mark said that risk managers need to recognize that the improbable is maybe not so improbable, and should be more prepared for the worst rather than managing risk only under “normal” market and institutional behavior. Steven thought that a cultural shift was happening, where not losing money was becoming as important to an organization as making money. He said that in his view Europe and Asia had a stronger risk culture than the United States, with much more consensus, involvement and even control over the trading decisions taken. Put another way, the USA has more of a culture of risk taking than Europe. (I have my own theories on this. Firstly I think that people in the USA are generally much bigger risk takers than in the UK/Europe, possibly influenced in part by the relative lack of an underlying social safety net – whilst this is not for everyone, I think it produces a very dynamic economy as a result. Secondly, I do not think that the cultural desire in the USA for the much admired “presidential” leader is necessarily the best environment for sound, consensus based risk management. I would also like to acknowledge that neither of my two points above seems to have protected Europe much from the worst of the financial crisis, so it is obviously a complex issue!).

Slaves to Data? - Henrik asked whether the panel thought that risk managers were slaves to data? He expanded upon this by asking what kinds of firms encourage qualitative risk management and not just risk management based on Excel spreadsheets? Santa said that this kind of qualitative risk management occurred at a business level and less so at a firm wide level. In particular he thought this kind of culture was in place at many hedge funds, and less so at banks. He cited one example from his banking career in the 1980's, where his immediate boss was shouted off the trading floor by the head of desk, saying that he should never enter the trading floor again (oh those were the days...). 

Sociology and Credibility - Henrik took a passage on the historic development of women's rights and replaced the word "women" with "risk management" to illustrate the challenges risk management is facing in trying to get more say and involvement at financial institutions. He asked: who should the CRO report to? A CEO? A CIO? Or a board member? Elliot responded by saying this was really an issue of credibility with the business for risk managers and risk management in general. He made the point that often Excel and numbers were used to establish credibility with the business. Elliot added that risk managers with trading experience obviously had more credibility, and that to some extent where the CRO reported to was dependent upon the credibility of risk management with the business. 

Trading and Risk Management Mindsets - Elliot expanded on his previous point by saying that the risk management mindset thinks more in terms of unconditional distributions and tries to learn from history. He contrasted this with the "conditional mindset" of a trader, where the time horizon forwards (and backwards) is rarely longer than a few days and there is a strong belief that a trade will work today given that it worked yesterday. Elliot added that in assisting the trader, the biggest contribution risk managers can make is to be challenging/helpful on the qualitative side rather than just the quantitative.

Compensation and Transactions - Most of the panel seemed to agree that compensation package structure was a huge influencer of the risk culture of an organisation. Mark touched upon a pet topic of mine, which is that it is very hard for a risk manager to gain credibility (and compensation) when risk management is about what could happen as opposed to what did happen. A risk manager blocking a trade due to some potentially very damaging outcomes will not gain any credibility with the business if the trading outcome for the suggested trade just happened to come out positive. There seemed to be consensus that some of the traditional compensation models based on short-term transactional frequency and size were ill-formed (given the limited downside for the individual), and that whilst the panel reserved judgement on the effectiveness of recent regulation, moves towards longer-term compensation were to be welcomed from a risk perspective.

MF Global and Business Models - Santa described some of his experiences at MF Global, where Corzine moved what was essentially a broker into taking positions in European Sovereign Bonds. Santa said that the risk management culture and capabilities were not present to be robust against senior management over such a business model move. Elliot mentioned that he had been courted for trades by MF Global and had been concerned that they did not offer electronic execution, telling him that doing trades through a human was always best. Mark said that in the area of pension fund management there was much greater fiduciary responsibility (i.e. behave badly and you will go to jail) and maybe that kind of responsibility had more of a place in financial markets too. Coming back to the question of who a CRO should report to, Mark also said that questions should be asked to seek out those who are 1) less likely to suffer from the "agency" problem of conflicts of interest and, on a related note, those who are 2) less likely to have personal biases towards particular behaviours or decisions.

Santa said that in his opinion hedge funds in general had a better culture where risk management opinions were heard and advice taken. Mark said that risk managers who could get the business to accept moral persuasion were in a much stronger position to add value to the business rather than simply being able to "block" particular trades. Elliot cited one experience he had where the traders under his watch noticed that a particular type of trade (basis trades) did not increase their reported risk levels, and so became more focussed on gaming the risk controls to achieve high returns without (reported) risk. The panel seemed to be in general agreement that risk managers with trading experience were more credible with the business but also more aware of the trader mindset and behaviors. 

Do we know what we know? - Henrik moved to his third and final subsection of the evening, asking the panel whether risk managers really know what they think they know. Elliot said that traders and risk managers speak different languages, with traders living in the now, thinking only of the implications of possible events such as those we have seen with Cyprus or the fiscal cliff, whereas the risk management view is much less conditioned and more historical. Steven re-emphasised the earlier point that risk management at this micro trading level was fine but that this was not what caused events such as the collapse of MF Global.

Rational argument isn't communication - Santa said that most risk managers come from a quant (physics, maths, engineering) background and like structured arguments based upon well understood rational foundations. He said that this way of thinking was alien to many traders and as such it was a communication challenge for risk managers to explain things in a way that traders would actually spend some time considering. On the modelling side of things, Santa said that sometimes traders dismissed models as being "too quant" and sometimes traders followed models all too blindly without questioning or understanding the simplifying assumptions they are based on. Santa summarised by saying that risk management needs to be intuitive for traders and not just academically based. Mark added that a quantitative focus can sometimes become too narrow (modeler's manifesto anyone?) and made the very profound point that unfortunately precision often wins over relevance in the creation and use of many models. Steven added that traders often deal in absolutes, such as knowing the spread between two bonds to the nearest basis point, whereas a risk manager approaching them with a VaR number really means that this is an estimated VaR which should be thought of as sitting within a range of values. This is alien to the way traders think and hence harder to explain.

Unanticipated Risk - An audience member asked whether risk management should focus mainly on unanticipated risks rather than "normal" risks. Elliot said that in his trading he was always thinking and checking whether the markets were changing or continuing with their recent near-term behaviour patterns. Steven said that history was useful to risk management when markets were "normal", but in times of regime shifts this was not the case, and cited the example of the change in markets when Mario Draghi announced that the ECB would stand behind the Euro and its member nations. 

Risky Achievements - Henrik closed the panel by asking each member what they thought was their own greatest achievement in risk management. Elliot cited a time when he identified that a particular hedge fund had a relatively inconspicuous position/trade that he flagged as potentially extremely dangerous, and was proved correct when the fund closed down because of it. Steven said he was proud of some good work he and his team did on stress testing involving Greek bonds and the Eurozone. Santa said that some of the work he had done on portfolio "risk overlays" was good. Mark ended the panel by saying that he thought his biggest achievement was when the traders and portfolio managers started to come to the risk management department to ask opinions before placing key trades. Henrik and the audience thanked the panel for their input and time.

An Insured View - After the panel closed I spoke with an actuary who said that he had greatly enjoyed the panel discussions but was surprised that, when talking of how best to support the risk management function in being independent and giving "bad" news to the business, the role of auditors was not mentioned. He said he felt that auditors were a key support to insurers in ensuring any issues were allowed to come to light. So food for thought there as to whether financial markets can learn from other industry sectors.

Summary - great evening of discussion, only downside being the absence of wine once the panel had closed!

 


23 April 2013

PRMIA on ETFs #4 and #5 - ETF regulation and risk management

Katherine Moriaty was a very interesting speaker at the ETF event, and she talked us through some of the regulatory issues around ETFs, particularly non-transparent ETFs. Katherine provided some history on the regulation of the fund industry in the US, particularly the Investment Company Act of 1940, which was enacted to restore public confidence in the fund management industry following the troubled times of the late 1920's and through the 1930's.

The fundamental concern for the SEC (the regulatory body for this) is to ensure that the provider of the fund products cannot game investors by providing false or incorrect valuations to maximize profits. Based on the "'40 Act", as she termed it, the SEC has allowed exemptions to permit various index and fund products - for example, smart indices require full disclosure of the rules involved, and with active indices the constituents are published. However with active ETFs, retail investors are at a disadvantage to authorized participants (APs, the ETF providers) since there is no transparency around the constituents.

Obviously fund managers want to manage portfolios without disclosure (to maintain the "secrets" of their success, to keep trading costs low etc), but no solution has yet been found to allow this for ETFs that satisfies the SEC that the small guy is not at risk from this lack of transparency. Katherine said that participants were still trying to come up with solutions to this problem and the SEC is still open to an exemption for anything that, in their view, "works" (sounds like someone will make a lot of money when/if a solution is found). Solutions tried so far include using blind trusts and proxy or shadow portfolios. Someone from the audience asked about the relative merits of Active ETFs when compared to Active Mutual Funds - Katherine answered that the APs wanted an exchange traded product as a new distribution channel (and I guess us "Joe Soaps" want lower fees for active management...)

Vikas Kalra of MSCI had the unenviable position of giving the last presentation of the evening, and he said he would keep his talk short since he was aware he was standing between us and the cocktail reception to follow. Vikas described the problem that many risk managers face: doing risk management for a portfolio containing ETFs is fine when the ETF is of a "look through" type (i.e. constituents available), but when the ETF is opaque (no/little/uncertain constituent data) then the choices are usually 1) remove the ETF from the risk calculation or 2) substitute some proxy instrument.

Vikas said the Barra part of MSCI had come up with the solution to analyse ETF "styles". From what I could tell, this looked like some sophisticated form of 2) above, where Barra had done the analysis to enable an opaque ETF to be replaced by some more transparent proxy which allowed constituents to be analysed within the risk process and correlations etc recognised. Vikas said that 400 ETFs and ETNs were now covered in their product offering.

Conclusion - Overall a very interesting event that improved my knowledge of ETFs and had some great speakers.

PRMIA on ETFs #3 - Tradable Volatility Exposure in ETP Packaging

Joanne Hill of Proshare presented next at the event. Joanne started her talk by showing volatility levels from 1900 to the present day, illustrating how historic volatility over the past 10 years seems to be at pre-1950's levels. Joanne had a lot of slides that she took us through (to be available on the event link above) which would be challenging to write up in full (or at least that is my excuse and I am sticking to it...).

Joanne said that the VIX trades about 4% above realised volatility, which she described as being due to expectations that "something" might happen (so financial markets can be cautious it seems!). Joanne seemed almost disappointed that we seem now to have entered a period of relatively boring (?!) market activity following the end of the crisis given that the VIX is now trading at pre-2007 lows. In answer to audience questions she said that inverse volatility indices were growing as were products dependent on dynamic index strategies.

 

PRMIA on ETFs #2 - How the ETF Market Works: Quant for the Traders

Next up was Phil Mackintosh of Credit Suisse, who gave his presentation on trading ETFs, starting with some scene-setting for the market. Phil said that the ETP market had expanded enormously since its start in 1993, currently with over $2 trillion of assets ($1.3 trillion in the US). He mentioned that $1 in $4 of flow in the US was ETF related, and that the US ETF market was larger than the whole of the Asian equity market - although, to keep relative size in perspective, the US ETF market remains much smaller than the US equities and futures markets. 

He said that counter to the impression some have, the market is 52% institutional and only 48% retail. He mentioned that some macro hedge fund managers he speaks to manage all their business through ETPs. ETFs are available across all asset classes from alternatives, currencies, commodities, fixed income, international and domestic equities. Looking at fees, these tend to reside in the 0.1% to 1% bracket, with larger fees charged only for products that have specific characteristics and/or that are difficult to replicate.

Phil illustrated how funds have consistently flowed into ETFs over recent years, in contrast with the mutual funds industry, with around 25% in international equity and around 30% in fixed income. He said that corporate fixed income, low volatility equity indices and real estate ETFs were all on the up in terms of funds flow. 

He said that ETF values were calculated every 15 seconds and oscillated around their NAV, with arbitrage activity keeping ETF prices in line with underlying prices. Phil said that spreads in ETFs could be tighter than in their underlyings and that ETF spreads tightened for ETFs over $200m. 
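As a rough sketch of the arbitrage mechanism Phil described, the example below compares an ETF price against its intraday indicative value and signals a create or redeem trade once the premium or discount exceeds an assumed creation/redemption cost (the threshold and figures are purely illustrative).

```python
# Illustrative sketch of the arbitrage bound that keeps an ETF price near its
# intraday indicative value (iNAV); the cost threshold is an assumption.
def arbitrage_signal(etf_price: float, inav: float, create_redeem_cost: float = 0.001) -> str:
    premium = (etf_price - inav) / inav
    if premium > create_redeem_cost:
        return "sell ETF, buy basket, create units"    # ETF rich vs underlyings
    if premium < -create_redeem_cost:
        return "buy ETF, sell basket, redeem units"    # ETF cheap vs underlyings
    return "no arbitrage: premium within cost bounds"

print(arbitrage_signal(100.25, 100.00))   # ETF trading at a 0.25% premium to iNAV
```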

Phil warned of a few traps in trading ETFs. He illustrated the trading volumes of ETFs during an average day, which showed that they tended to be traded in volume in the morning but not in the (late) afternoon (need enlightening as to why..). He added that they were more specifically not a trade for a market open or close. He said that large ETF trades sometimes caused NAV disconnects, and mentioned deviations around NAV due to underlying liquidity levels. He also said that contango can become a problem for VIX futures related products.

There were a few audience questions. One concerned how fixed income ETFs were the price discovery mechanism for some assets during the crisis given the liquidity and timeliness of the ETF relative to its underlyings. Another question concerned why the US ETF market was larger and more homogeneous than in Europe. Phil said that Europe was not dominated by 3 providers as in the US, plus each nationality in Europe tended to have preferences for ETF products produced by its own country. There was also further discussion on shorting Fixed Income ETFs since they were more liquid than the primary market. (Note to self: need to find out more about the details of the ETF redemption and creation process).

Overall a great talk by a very "sharp" presenter (like a lot of good traders Phil seemed to understand the relationships in the market without needing to think about them too heavily). 

 

PRMIA on ETFs #1 - Index-Based Approaches for Risk Management in Wealth Management

It seems to be ETF week for events in New York this week, one of which was hosted by PRMIA, Credit Suisse and MSCI last night called "Risk Management of and with ETFs/Indices". The event was chaired by Seddik Meziani of Montclair State University, who opened with thanks for the sponsors and the speakers for coming along, and described the great variety of asset exposures now available in Exchange Traded Products (ETPs) and the growth in ETF assets since their formation in 1993. He also mentioned that this was the first PRMIA event in NYC specifically on ETFs. 

Index-Based Approaches for Risk Management in Wealth Management - Shaun Weuzbach of S&P Dow Jones Indices started with his presentation. Shaun's initial point was to consider whether "Buy & Hold" works given the bad press it received over the crisis. Shaun said that the peak to trough US equity loss during the recent crisis was 57%, but when he hears of investors that made losses of this order he thinks this was more down to a lack of diversification and poor risk management than to inherent failures in buy and hold. To justify this, he cited a simple example portfolio constructed of 60% equity and 40% fixed income, which only lost 13% peak to trough during the crisis. He also illustrated that equity market losses of 5% or more were far more frequent during the period 1945-2012 than many people imagine, and that investors should be aware of this in portfolio construction.

Shaun suggested that we are in the third innings of indexing:

  1. Broad-based benchmark indices
  2. Precise sector-and thematic-based indices
  3. Factor-based indices (involving active strategies)

Where the factor-based indices might include ETF strategies based on/correlated with things such as dividend payments, equity weightings, fundamentals, revenues, GDP weights and volatility. 

He then described how a simple strategy index based around lowering volatility could work. Shaun suggested that low volatility was easier to explain to retail investors than minimizing variance. The process for his example low volatility index was to take the 100 lowest volatility stocks out of the S&P500 and weight them by the inverse of their volatility, with rebalancing every quarter.
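A quick pandas sketch of that construction rule is below - select the 100 lowest-volatility constituents and weight them by inverse volatility, recomputing at each quarterly rebalance. The column layout and the one-year volatility window are my assumptions, not S&P's published methodology.

```python
# Illustrative low-volatility index construction: 100 least volatile names,
# inverse-volatility weighted, recomputed at each rebalance date.
import numpy as np
import pandas as pd

def low_vol_weights(prices: pd.DataFrame, n: int = 100, window: int = 252) -> pd.Series:
    """prices: daily close prices, one column per index constituent."""
    returns = prices.pct_change().dropna()
    vol = returns.tail(window).std() * np.sqrt(252)      # annualised volatility
    selected = vol.nsmallest(n)                          # n least volatile names
    inv_vol = 1.0 / selected
    return inv_vol / inv_vol.sum()                       # inverse-vol weights summing to 1

# At each quarterly rebalance date you would recompute the weights, e.g.:
# weights = low_vol_weights(prices.loc[:rebalance_date])
```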

He illustrated how this index exhibited lower volatility with higher returns over the past 13 years or so (this looked like a practical example illustrating some of the advantages of having a less volatile geometric mean of returns from what I could see). He also said that this index had worked across both developed and emerging markets.
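As a quick numerical aside on the geometric mean point (my own illustrative figures, not Shaun's): two return streams with the same arithmetic mean but different volatility compound to quite different geometric means.

```python
# Same 8% arithmetic mean, different volatility: the calmer stream compounds
# to a higher geometric mean (illustrative numbers only).
import numpy as np

calm   = np.array([0.06, 0.10, 0.08, 0.04, 0.12])
choppy = np.array([0.40, -0.24, 0.30, -0.20, 0.14])

for name, r in [("calm", calm), ("choppy", choppy)]:
    geo = np.prod(1 + r) ** (1 / len(r)) - 1
    print(f"{name}: arithmetic {r.mean():.1%}, geometric {geo:.1%}")
# calm:   arithmetic 8.0%, geometric ~8.0%
# choppy: arithmetic 8.0%, geometric ~4.8%
```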

Apparently this index has been available for only 2 years, so 11 years of the performance figures were generated from back-testing (the figures looked good, but a strategy theoretically backtested over historic markets when the strategy was not used and did not exist should always be examined sceptically).

Looking at the sector composition of this low volatility index, one of the very interesting points that Shaun made was that the index got out of the financials sector some two quarters before Lehman's went down (maybe the index was less influenced by groupthink or the fear of realising losses?).

Shaun then took a short look at VIX-based strategies, describing the VIX as the "investor fear gauge". In particular he considered the S&P VIX Short-Term Futures Index, which he said exhibits a high negative correlation with the S&P500 (around -0.8) and a high positive correlation with the VIX spot (approx +0.8). He said that explaining these products as portfolio insurance products was sometimes hard for financial advisors to do, and features such as the "roll cost" (moving from one set of futures contracts to others as some expire) were also harder to explain to non-institutional investors.

A few audience questions followed, one concerned with whether one could capture principal retention in fixed income ETFs. Shaun briefly mentioned that the audience member should look at "maturity series" products in the ETP market. One audience member had concerns over the liquidity of ETF underlyings, to which Shaun said that S&P have very strict criteria for their indices ensuring that the free float of underlyings is high and that the ETF does not dominate liquidity in the underlying market. 

Overall a very good presentation from a knowledgeable speaker.

 

 

27 March 2013

Spreadsheet control and contagion

Just caught a reference on LinkedIn to this FT article "Finance groups lack spreadsheet controls". I started to write a quick response and, given it is one of my major hobby-horses, ended up doing a bit of an essay, so I decided to post it here too:

"As many people have pointed out elsewhere, much of the problem with spreadsheet usage is that they are not treated as a corporate and IT asset, and as such things like testing, peer review and general QA are not applied (mind you, maybe more of that should still be applied to many mainstream software systems in financial markets...). 

Ralph and the guys at Cluster Seven do a great job in helping institutions to manage and monitor spreadsheet usage (I like Ralph's "we are CCTV for spreadsheets" analogy), but I think a fundamental (and often overlooked) consideration is to ask yourself why the business users involved decided that they needed spreadsheets to manage trading and risk in the first place. It is a bit like trying to address the symptoms of an illness without ever considering how we got the illness in the first place. 

Excel is a great tool, but to quote Spider-Man "with great power comes great responsibility" and I guess we can all see the consequences of not taking the usage of spreadsheets seriously and responsibly. So next time the trader or risk manager says "we've just built this really great model in Excel" ask them why they built it in Excel, and why they didn't build upon the existing corporate IT solutions and tools. In these cost- and risk- conscious times, I think the answers would be interesting..."

 

14 February 2013

Analytics Strategy from Numerix

Good post from Jim Jockle over at Numerix - main theme is around having an "analytics" strategy in place in addition to (and probably as part of) a "Big Data" strategy. Fits strongly around Xenomorph's ideas on having both data management and analytics management in place (a few posts on this in the past, try this one from a few years back) - analytics generate the most valuable data of all, yet the data generated by analytics and the input data that supports analytics is largely ignored as being too business focussed for many data management vendors to deal with, and too low level for many of the risk management system vendors to deal with. Into this gap in functionality falls the risk manager (supported by many spreadsheets!), who has to spend too much time organizing and validating data, and too little time on risk management itself.

Within risk management, I think it comes down to having the appropriate technical layers in place of data management, analytics/pricing management and risk model management. Ok it is a greatly simplified representation of the architecture needed (apologies to any techies reading this), but the majority of financial institutions do not have these distinct layers in place, with each of these layers providing easy "business user" access to allow risk managers to get to the "detail" of the data when regulators, auditors and clients demand it. Regulators are finally waking up to the data issue (see Basel on data aggregation for instance) but more work is needed to pull analytics into the technical architecture/strategy conversation, and not just confine regulatory discussions of pricing analytics to model risk. 

08 February 2013

Big Data – What is its Value to Risk Management?

A little late on these notes from this PRMIA Event on Big Data in Risk Management that I helped to organize last month at the Harmonie Club in New York. Big thank you to my PRMIA colleagues for taking the notes and for helping me pull this write-up together, plus thanks to Microsoft and all who helped out on the night.

Introduction: Navin Sharma (of Western Asset Management and Co-Regional Director of PRMIA NYC) introduced the event and began by thanking Microsoft for its support in sponsoring the evening. Navin outlined how he thought the advent of “Big Data” technologies was very exciting for risk management, opening up opportunities to address risk and regulatory problems that previously might have been considered out of reach.

Navin defined Big Data as structured or unstructured data received at high volumes and requiring very large data storage. Its characteristics include a high velocity of record creation, extreme volumes, a wide variety of data formats, variable latencies, and complexity of data types. Additionally, he noted that relative to other industries, financial services has in the past created perhaps the largest historical sets of data and continually creates enormous amounts of data on a daily or moment-by-moment basis. Examples include options data, high frequency trading, and unstructured data such as via social media. Its usage provides potential competitive advantages in trading and investment management. Also, by using Big Data it is possible to have faster and more accurate recognition of potential risks via seemingly disparate data - leading to timelier and more complete risk management of investments and firms’ assets. Finally, the use of Big Data technologies is in part being driven by regulatory pressures from Dodd-Frank, Basel III, Solvency II, the Markets in Financial Instruments Directives (1 & 2) as well as the Markets in Financial Instruments Regulation.

Navin also noted that we will seek to answer questions such as:

  • What is the impact of big data on asset management?
  • How can Big Data’s impact enhance risk management?
  • How is big data used to enhance operational risk?

Presentation 1: Big Data: What Is It and Where Did It Come From?: The first presentation was given by Michael Di Stefano (of Blinksis Technologies), and was titled “Big Data. What is it and where did it come from?”. You can find a copy of Michael’s presentation here. In summary, Michael started by saying that there are many definitions of Big Data, mainly centred on technology that deals with data problems that are either too large, too fast or too complex for conventional database technology. Michael briefly touched upon the many different technologies within Big Data, such as Hadoop, MapReduce and databases such as Cassandra and MongoDB. He described some of the origins of Big Data technology in internet search, social networks and other fields. Michael described the “4 V’s” of Big Data - Volume, Velocity, Variety and Value - with a key point from Michael being the “time to Value” in terms of what you are using Big Data for. Michael concluded his talk with some business examples around the use of sentiment analysis in financial markets and the application of Big Data to real-time trading surveillance.

Presentation 2: Big Data Strategies for Risk Management: The second presentation “Big Data Strategies for Risk Management” was introduced by Colleen Healy of Microsoft (presentation here). Colleen started by saying expectations of risk management are rising, and that prior to 2008 not many institutions had a good handle on the risks they were taking. Risk analysis needs to be done across multiple asset types, more frequently and at ever greater granularity. Pressure is coming from everywhere including company boards, regulators, shareholders, customers, counterparties and society in general. Colleen used to head investor relations at Microsoft and put forward a number of points:

  • A clear line of sight on one risk factor does not mean that we have a line of sight on the other risks around it.
  • Good risk management should be based on simple questions.
  • Reliance on third parties for understanding risk should be minimized.
  • Understand risk not just at the individual asset level, but also at the correlated asset level.
  • The world is full of fast markets, driving an even greater need for risk control.
  • Intraday and real-time risk measurement is now becoming necessary for line of sight and for dealing with the regulators.
  • Risk now needs to be managed at the most granular level.

Colleen explained some of the reasons why good risk management remains a work in progress, and said that data is a key foundation for better risk management. However, data has been hard to access, analyze, visualize and understand, and she used this to link to the next part of the presentation by Denny Yu of Numerix.

Denny explained that new regulations involving measures such as Potential Future Exposure (PFE) and Credit Value Adjustment (CVA) were moving the number of calculations needed in risk management to a level well above that required by methodologies such as Value at Risk (VaR). Denny illustrated how a typical VaR calculation on a reasonably sized portfolio might need 2,500,000 instrument valuations and how PFE might require as many as 2,000,000,000. He then explained more of the architecture he would see as optimal for such a process and illustrated some of the analysis he had done using Excel spreadsheets linked to Microsoft’s high performance computing technology.
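To make the scale difference concrete, the arithmetic below shows one plausible breakdown that reproduces the orders of magnitude Denny quoted: VaR scaling with instruments times historical scenarios, versus PFE scaling with instruments times Monte Carlo paths times future valuation dates. The portfolio size, path count and time grid are my assumptions, not figures from the presentation.

```python
# Illustrative arithmetic only: one possible breakdown consistent with the
# valuation counts quoted above. All inputs below are assumptions.
instruments = 5_000

# Historical-simulation style VaR: revalue every instrument under each scenario.
var_scenarios = 500
var_valuations = instruments * var_scenarios          # 2,500,000

# PFE/CVA: revalue every instrument on every Monte Carlo path at every future date.
mc_paths = 5_000
time_steps = 80
pfe_valuations = instruments * mc_paths * time_steps  # 2,000,000,000

print(f"VaR valuations: {var_valuations:,}")
print(f"PFE valuations: {pfe_valuations:,}")
```

However the inputs are chosen, the extra dimensions of simulation paths and future time steps are what push counterparty risk measures several orders of magnitude beyond a VaR run, hence the interest in grid and high performance computing.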

Presentation 3: Big Data in Practice: Unintentional Portfolio Risk: Kevin Chen of Opera Solutions gave the third presentation, titled “Unintentional Risk via Large-Scale Risk Clustering”. You can find a copy of the presentation here. In summary, the presentation was quite visual, illustrating how large-scale empirical analysis of portfolio data could produce some interesting insights into portfolio risk and how risks become “clustered”. In many ways the analysis was reminiscent of an empirical form of principal component analysis, i.e. one where you can see and understand more about your portfolio’s risk without necessarily being able to relate the main factors directly to any traditional factor analysis.
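For readers unfamiliar with the analogy, the sketch below illustrates the principal component idea on synthetic data: the leading eigenvectors of a returns covariance matrix show which positions move together, i.e. where risk “clusters”. This is only an illustration of the analogy, not Opera Solutions’ actual methodology, and all data and parameters are invented.

```python
# Sketch of the PCA analogy: find groups of positions driven by common factors.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_positions = 250, 20
# Synthetic returns: two hidden common factors plus idiosyncratic noise.
factors = rng.normal(size=(n_days, 2))
loadings = rng.normal(size=(2, n_positions))
returns = factors @ loadings + 0.5 * rng.normal(size=(n_days, n_positions))

cov = np.cov(returns, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]          # sort components by variance explained

explained = eigenvalues[order] / eigenvalues.sum()
print("Variance explained by top 3 components:", np.round(explained[:3], 3))

# Positions with large weights in the same leading eigenvector form a risk cluster.
top_component = eigenvectors[:, order[0]]
print("Positions in the dominant cluster:", np.argsort(-np.abs(top_component))[:5])
```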

Panel Discussion: Brian Sentance of Xenomorph and the PRMIA NYC Steering Committee then moderated a panel discussion. The first question was directed at Michael: “Is the relational database dead?” Michael replied that in his view relational databases were not dead, and indeed for problems well-suited to relational representation they were still, and would continue to be, very good. Michael said that NoSQL/Big Data technologies were complementary to relational databases, dealing with new types of data and new sizes of problem that relational databases are not well designed for. Brian asked Michael whether the advent of these new database technologies would drive the relational database vendors to extend the capabilities and performance of their offerings. Michael replied that he thought this was highly likely, but only time would tell whether this approach would be successful given the innovation in the market at the moment. Colleen Healy added that the advent of Big Data did not mean throwing out established technology, but rather integrating established technology with the new, such as Microsoft SQL Server working with the Hadoop framework.

Brian asked the panel whether they thought visualization would make a big impact within Big Data? Ken Akoundi said that the front end applications used to make the data/analysis more useful will evolve very quickly. Brian asked whether this would be reminiscent of the days when VaR first appeared, when a single number arguably became a false proxy for risk measurement and management? Ken replied that the size of the data problem had increased massively from when VaR was first used in 1994, and that visualization and other automated techniques were very much needed if the headache of capturing, cleansing and understanding data was to be addressed.

Brian asked whether Big Data would address the data integration issue of siloed trading systems? Colleen replied that Big Data needs to work across all the silos found in many financial organizations, or it isn’t “Big Data”. There was general consensus from the panel that legacy systems and people politics were also behind some of the issues found in addressing the data silo issue.

Brian asked if the panel thought the skills needed in risk management would change due to Big Data? Colleen replied that effective Big Data solutions require all kinds of people, with skills across a broad range of specific disciplines such as visualization. Generally the panel thought that data and data analysis would play an increasingly important part in risk management. Ken put forward his view that all Big Data projects should start with a business problem, not just a technology focus. For example, are there better ways to predict stock market movements based on the consumption of larger and more diverse sources of information? In terms of risk management skills, Denny said that risk management of 15 years ago was based on relatively simple econometrics. Fast forward to today, and risk calculations such as CVA are statistically and computationally very heavy, and trading is increasingly automated across all asset classes. As a result, Denny suggested that even the PRMIA PRM syllabus should change to focus more on data and data technology given the importance of data to risk management.

Asked how best Big Data should be applied, Denny echoed Ken in saying that understanding the business problem first was vital, but that obviously Big Data opened up the capability to aggregate and work with larger datasets than ever before. Brian then asked what advice the panel would give to risk managers faced with an IT department about to embark upon using Big Data technologies. Michael said that, assuming the business problem is well understood, the business needed some familiarity with the broad concepts of Big Data, what it can and cannot do and how it fits with more mainstream technologies. Colleen said that there are some problems that only Big Data can solve, so understanding the technical need is a first checkpoint. Obviously IT people like working with new technologies and this needs to be monitored, but so long as the business problem is defined and valid for Big Data, people should be encouraged to learn new technologies and new skills. Kevin also took a very positive view that IT departments should be encouraged to experiment with these new technologies and understand what is possible, but that projects should have well-defined assessment/cut-off points, as with any good project management, to decide whether the project is progressing well. Ken put forward that many IT staff were new to the scale of the problems being addressed with Big Data, and that his own company, Opera Solutions, had an advantage in its deep expertise of large-scale data integration, enabling it to deliver more quickly on project timelines.

Audience Questions: There then followed a number of audience questions. The first few related to other ideas/kinds of problems that could be analyzed using the kind of modeling that Opera had demonstrated. Ken said that there were obvious extensions that Opera had not got around to doing just yet. One audience member asked how well all the Big Data analysis could be aggregated/presented to make it understandable and usable to humans. Denny suggested that it was vital that such analysis was made accessible to the user, and there was general consensus across the panel that man vs. machine was an interesting issue to develop in considering what is possible with Big Data. The next audience question was around whether all of this data analysis was affordable from a practical point of view. Brian pointed out that there was a lot of waste in current practices in the industry, with duplication of ticker plants and other data infrastructure across many financial institutions, large and small. This duplication is driven primarily by the perceived need to implement each institution’s proprietary analysis techniques, and this kind of customization is not yet available from the major data vendors, but will become more possible as cloud technology such as Microsoft’s Azure develops further. There was a lot of audience interest in whether Big Data could lead to better understanding of causal relationships in markets rather than simply correlations. The panel responded that causal relationships were harder to understand, particularly in a dynamic market with dynamic relationships, but that insight into correlation was at the very least useful and could lead to better understanding of the drivers as more datasets are analyzed.
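As a toy illustration of that closing point, the sketch below constructs two synthetic series driven by a common hidden factor: they are strongly correlated with each other, which is genuinely useful for risk aggregation, yet neither one causes the other. The data and parameters are entirely made up.

```python
# Correlation without causation: two assets exposed to the same hidden factor.
import numpy as np

rng = np.random.default_rng(1)
n = 500
hidden_factor = rng.normal(size=n)                 # e.g. a broad market return
asset_a = hidden_factor + 0.5 * rng.normal(size=n) # both assets load on the factor
asset_b = hidden_factor + 0.5 * rng.normal(size=n) # plus independent noise

corr = np.corrcoef(asset_a, asset_b)[0, 1]
print(f"Correlation: {corr:.2f}")  # typically around 0.8 with these parameters
# The correlation is real and useful for aggregating risk, but it says nothing
# about whether asset_a drives asset_b or vice versa; the driver is hidden.
```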


01 February 2013

Rutgers Quantitative Finance Summit

I got my first tour around the NYSE trading floor on Wednesday night, courtesy of an event by Rutgers University on risk. It was a good event, based mainly around a panel discussion moderated by Nicholas Dunbar (Editor of the Bloomberg Risk newsletter), and involving David Belmont (Commonfund CRO), Adam Litke (Chief Risk Strategist for Bloomberg), Hilmar Schaumann (Fortress Investment CRO) and Sanjay Sharma (CRO of Global Arbitrage and Trading at RBC).

Nick first asked the panel how they define and measure risk. Hilmar responded that risk measurement is based around two main activities: 1) understanding how a book/portfolio is positioned (the static view) and 2) understanding sensitivities to risks that impact P&L (the dynamic view). Hilmar mentioned the use of historical data as a guide to current risks that are difficult to measure, but emphasised the need for a qualitative approach when looking at the risks being taken.

David said that he looks at both risk and uncertainty, with risk being defined as those impacts you can measure/estimate. He said that historical analysis was useful but limited, given it is based only on what has happened. He thought that scenario analysis was a stronger tool. (I guess with historical analysis you at least get some idea of the impact of things that could not be predicted, even if it is based on only one "simulation" path, i.e. reality, whereas scenario analysis gives you more flexibility to cover all bases, though only those bases you can imagine.) David said that path-dependent risks, such as those in the credit markets in the last crisis, were some of the most difficult to deal with.

Adam said that you need to understand why you are measuring risk and understand what risks you are prepared to take. He said that at Wachovia, prior to the 2008 crisis, they knew that a 25% house price fall in California would be a near-death experience for the bank, and in the event the losses were much greater than 25%. His point was really that you must decide what risks you want to survive and at what level. He said that sound common-sense judgement is needed to decide whether a scenario is realistic or not.

Sanjay said that risk managers need to maintain a lot of humility and not over-trust risk measurements. He described a little of the risk approach used at RBC, where he said they use over 80 different models and employ them as layers/different views on risk to be brought together. He said they start with VaR as a base analysis, but build on this with scenarios, greeks and then other more specific reports and analysis. He emphasised that communication is a vital skill for risk managers in getting their views and ideas across.
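By way of illustration of the "VaR as a base analysis" layer Sanjay described (and emphatically not RBC's methodology), a minimal historical-simulation VaR sketch might look like the following, with the position values and historical returns assumed purely for the example.

```python
# Minimal historical-simulation VaR: the "base layer" only. All inputs assumed.
import numpy as np

rng = np.random.default_rng(2)
position_values = np.array([1_000_000.0, 750_000.0, 500_000.0])  # current value per position
hist_returns = rng.normal(scale=0.01, size=(250, 3))             # 250 days of daily returns

scenario_pnl = hist_returns @ position_values       # portfolio P&L under each historical day
var_99 = -np.percentile(scenario_pnl, 1)            # 1-day 99% VaR, reported as a positive loss
print(f"1-day 99% historical VaR: {var_99:,.0f}")
# Scenario analysis, greeks and more specific reports would then be layered on
# top of this as further, complementary views on risk.
```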

Nicholas then moved on to ask how risk managers should take or reduce risks - getting away from risk measurement to risk management. Adam said that risks should be delegated out to those that manage them, but this needs to be combined with responsibility for the risks too. Keep people and departments within the bounds of their remit. Be prepared to talk a different business language to different stakeholders depending upon their understanding and their motivations. David gave some examples of this in his own case, where endowment funds want risk premiums over many years and risks are translated/quantified into practical terms, for example a new college building not going ahead.

Hilmar said that hedge funds are supposed to take risks, and that the key was not necessarily to avoid losses (although avoid them if you can) but rather to avoid surprises. Like the other speakers, Hilmar emphasised that communication of risks to key stakeholders was vital. He also added the key point that if you don't like a risk you have identified, then try first to take it off rather than hedging it, since hedging could potentially add basis risk and simply more complication.

Nicholas then asked Sanjay how risk managers should deal with bringing difficult news to the business. Sanjay suggested that any bad news should be approached in the form of "actionable transparency", i.e. not only do you communicate how bad the risk is to all stakeholders, but you also come along with actionable approaches to dealing with the risk. In all of his experience, and despite the crisis, Sanjay has found that traders do not want to lose money, and if you come with solutions they will listen. He concluded by saying that qualitative analysis should also be used, citing the hypothetical example that you should take notice of dogs (yes, the animal!) buying mortgages, whether or not the mortgages are AAA rated.

Nicholas asked the panel members in turn what risks they are concerned about currently. David said he believed that many risks were not currently priced into the market. He was concerned about the policy impacts of action by the ECB and the Fed, and thought the current and forward levels of volatility were low. In fixed income markets he thought that Dodd-Frank may have detrimental effects, particularly with the current lack of clarity about what is proprietary trading and what is market-making. Should policies and interest rates change, he thought that risk managers should look carefully at what will happen as funds flow out of fixed income and into equities.

Hilmar talked about the postponement of the US debt ceiling limits and said that US Government policy battles continue to be an obvious source of risk. In Europe, many countries have elections this year which will be interesting, and the problems in the Euro-zone are less than they were, but problems in Cyprus could fan the flames of more problems and anxiety. Hilmar said that Japan's new policy of targeting 2% inflation may affect the willingness of domestic investors to buy JGBs.

Sanjay said he was worried. In the "Greenspan Years" prior to 2008 a quasi-government guarantee on the banks was effectively put in place, and we continue to live with cheap money. When policy eventually changes and interest rates rise, Sanjay wondered whether the world was ready for the wholesale asset revaluation that would then be required.

Adam's concerns were mainly around identifying what will be the cause of the next panic in the market. Whilst he said he is in favour of central clearing for OTC derivatives, he thought that the changing market structure combined with implementing central clearing had not been fully thought through, and this was a worry to him.

Nicholas asked what the panelists thought of the regulation being implemented. David said that regulators face the same difficulty that risk managers face, in that nobody notices when you take sensible action to protect against a risk that doesn't then occur. He thinks that regulation of the markets is justified and necessary.

Sanjay said that in the airline and pharmaceutical industries regulatory approval was on the whole very robust, but that those regulators were dealing with approving designs (aeroplanes and drugs) that are reproduced once approved. He said that such levels of regulation in financial services were not yet possible due to the constant innovation found in the markets, and he wanted regulation to be more dynamic and responsive to market developments. Sanjay also joined those in the industry who are critical of the sheer size of Dodd-Frank.

Nicholas said that Adam was obviously keen on operational issues and wondered what plumbing in the industry he would change. Adam said that he is a big fan of automation but that operational risks are real and large. He thought that there were too many rules and regulations being applied, and that the regulators were not paying attention to the type of markets they want in the future, nor to the effects of current regulation and how people were moving from one part of the industry to another. Adam said that in relation to Knight Capital he was still a strong advocate of standing by the wall socket, ready to pull the plug on the computer. Adam suggested that regulators should look at regulating/approving software releases (I assume here he means for key tasks such as automated trading or risk reporting, not all software).

Given the large number of students present, Nicholas closed the panel by asking what career advice the panelists had for future risk managers? Adam emphasised flexibility in role, taking us through his career background as an equity derivatives and then fixed income trader before coming into risk management. Adam said it was highly unlikely over your career that you would stay with one role or area of expertise. 

Hilmar said that having risk managers independent of trading was vitally important for the industry. He thought there were many areas to work in, with operational risk being potentially the largest, but still with plenty more to do in market risk, compliance and risk modelling. He added that understanding the interdependencies between risks was key and an area for further development.

When asked by Nicholas, David said that risk managers should have a career path right through to CEO of an institution. He wanted to encourage risk management as a necessary level above risk measurement and control. He was excited about the potential of Big Data technologies to help in risk management. David gave some interesting background on his own career, initially as an emerging markets debt trader. He said that it is important to know yourself, and that he regarded himself as a sceptic, needing all the information available before making a decision. As such his performance as a trader was consistent but not as high as some, and this became one of the reasons he moved into risk management.

Sanjay said many of the systems used in finance are 20 years old, in complete contrast with the advances in mobile and internet technologies. As such he thought this was a great opportunity to be involved in the replacement and upgrading of this older infrastructure. Apparently one analyst had estimated that $65B will be spent on risk management over the next 4-5 years.

Adam thought that there was a need for a code of ethics for quants (see old post for some ideas). Sanjay added that the industry needed to move away from being involved primarily in attempting to optimise activity around gaming regulation. When asked by Nicholas about Basel III, Adam thought that improved regulation was necessary but that Basel III was not the right way to go about it and was way too complex.
