23 posts categorized "Events"

15 April 2014

Financial Markets Data and Analytics. Everywhere You Need Them.

Very pleased to announce that Xenomorph will be hosting an event, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square New York offices on May 9th.

This breakfast briefing includes Sang Lee of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be introducing the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. More background and updates on MarketPlace in coming weeks.

In the meantime, please take a look at the event and register if you can come along - it would be great to see you there.

12 March 2014

S&P Capital IQ Risk Event #2 - Enterprise or Risk Data Strategy?

Christian Nilsson of S&P CIQ followed up Richard Burtsal's talk with a presentation on data management for risk, containing many interesting questions for those considering data for risk management needs. Christian started his talk by taking a time machine back to 2006, and asking what were the issues then in Enterprise Data Management:

  1. There is no current crisis - we have other priorities (we now know what happened there)
  2. The business case is still too fuzzy (regulation took care of this issue)
  3. Dealing with the politics of implementation (silos are still around, but cost and regulation are weakening politics as a defence?)
  4. Understanding data dependencies (understanding this throughout the value chain, but still not clear today?)
  5. The risk of doing it wrong (there are risks that you will do data management wrong given all the external parties and sources involved, but what is the risk of not doing it?)

Christian then moved on to say the current regulatory focus is on clearer roadmaps for financial institutions, citing Basel II/III, Dodd-Frank/Volcker Rule in the US, challenges in valuation from the IASB and IFRS, fund management challenges with UCITS, AIFMD, EMIR, MiFID and MiFIR, and Solvency II in the insurance industry. He coined the phrase "Regulation Goes Hollywood", with regulation such as UCITS I, II, III, IV, V, VII having more versions than a set of Rocky movies.

He then touched upon some of the main motivations behind the BCBS 239 document and said that regulation had three main themes at the moment:

  1. Higher Capital and Liquidity Ratios
  2. Restrictions on Trading Activities
  3. Structural Changes ("ring fence" retail, global operations move to being capitalized local subsidiaries)

Some further observations were on what will be the implications of the effective "loss" of globalization within financial markets, and also on what can now be considered a risk-free asset (do such things still exist?). Christian then gave some stats on risk as a driver of data and technology spend, with $20-50B expected to be spent over the next 2-3 years (seems a wide range - nothing like a consensus from analysts I guess!).

The talk then moved on to what role data and data management plays within regulatory compliance, with for example:

  • LEI - Legal Entity Identifiers play out throughout most regulation, as a means to enable automated processing and as a way to understand and aggregate exposures.
  • Dodd-Frank - Data management plays a part within OTC processing and STP in general.
  • Solvency II - This regulation for insurers places emphasis on data quality and data lineage within capital reserve requirements.
  • Basel III - Risk aggregation and counterparty credit risk are two areas of key focus.

Christian outlined the small budget of the regulators relative to the biggest banks (a topic discussed in previous posts, how society wants stronger, more effective regulation but then isn't prepared to pay for it directly - although I would add we all pay for it indirectly but that is another story, in part illustrated in the document this post talks about).

In addition to the well-known term "regulatory arbitrage", dealing with different regulations in different jurisdictions, Christian also mentioned the increasingly used term "substituted compliance", where a global company tries to optimise which jurisdictions it and its subsidiaries comply within, with the aim of avoiding compliance in more difficult regimes through compliance within others.

I think Christian outlined the "data management dichotomy" within financial markets very well:

  1. Regulation requires data that is complete, accurate and appropriate
  2. Industry standards of data management and data are poorly regulated, and there is weak industry leadership in this area.

(I am not sure if it was quite at this point, but certainly some of the audience questions were about whether the data vendors themselves should be regulated, which was entertaining).

He also outlined the opportunity from regulation in that it could be used as a catalyst for efficiency, STP and cost base reduction.

Obviously "Big Data" (I keep telling myself to drop the quotes, but old habits die hard) is hard to avoid, and Christian mentioned that IBM say that 90% of the world's data has been created in the last 2 years. He described the opportunities of the "3 V's" of Volume, Variety, Velocity and "Dark Data" (exploiting underused data with new technology - "Dark" and "Deep" are getting more and more use of late). No mention directly in his presentation but throughout there was the implied extension of the "3 V's" to "5 V's" with Veracity (aka quality) and Value (aka we could do this, but is it worth it?). Related to the "Value" point Christian brought out the debate about what data do you capture, analyse, store but also what do you deliberately discard which is point worth more consideration that it gets (e.g. one major data vendor I know did not store its real-time tick data and now buys its tick data history from an institution who thought it would be a good idea to store the data long before the data vendor thought of it).

I will close this post taking a couple of summary lists directly from his presentation, the first being the top areas of focus for risk managers:

  • Counterparty Risk
  • Integrating risk into the Pre-trade process
  • Risk Aggregation across the firm
  • Risk Transparency
  • Cross Asset Risk Reporting
  • Cost Management/displacement

The second list outlines the main challenges:

  • Getting complete view of risk from multiple systems
  • Lack of front to back integration of systems
  • Data Mapping
  • Data availability of history
  • Lack of Instrument coverage
  • Inability to source from single vendor
  • Growing volumes of data

Christian's presentation then put forward a lot of practical ideas about how best to meet these challenges (I particularly liked the risk data warehouse parts, but I am unsurprisingly biased). In summary, if you get the chance then see or take a read of Christian's presentation - I thought it was a very thoughtful document with some interesting ideas and advice put forward.

03 March 2014

See you at the A-Team Data Management Summit this week!

Xenomorph is sponsoring the networking reception at the A-Team DMS event in London this week, and if you are attending then I wanted to extend a cordial invite to you to attend the drinks and networking reception at the end of the day at 5:30pm on Thursday.

In preparation for Thursday's agenda, the blog links below are a quick reminder of some of the main highlights from last September's DMS:

I will also be speaking on the 2pm panel "Reporting for the C-Suite: Data Management for Enterprise & Risk Analytics". So if you like what you have heard during the day, come along to the drinks and firm up your understanding with further discussion with like-minded individuals. Alternatively, if you find your brain is so full by then of enterprise data architecture, managed services, analytics, risk and regulation that you can hardly speak, come along and allow your cerebellum to relax and make sense of it all with your favourite beverage in hand. Either way you will leave the event more informed than when you went in...well that's my excuse and I am sticking with it!

Hope to see you there!

21 October 2013

Credit Risk: Default and Loss Given Default from PRMIA

Great event from PRMIA on Tuesday evening of last week, entitled Credit Risk: The link between Loss Given Default and Default. The event was kicked off by Melissa Sexton of PRMIA, who introduced Jon Frye of the Federal Reserve Bank of Chicago. Jon seems to be an acknowledged expert in the field of Loss Given Default (LGD) and credit risk modelling. I am sure that the slides will be up on the PRMIA event page above soon, but much of Jon's presentation seems to be based around the following working paper. So take a look at the paper (which is good in my view), but I will stick to an overview and in particular any anecdotal comments made by Jon and the other panelists.

Jon is an excellent speaker, relaxed in manner, very knowledgeable about his subject, humourous but also sensibly reserved in coming up with immediate answers to audience questions. He started by saying that his talk was not going to be long on philosophy, but very pragmatic in nature. Before going into detail, he outlined that the area of credit risk can and will be improved, but that this improvement becomes easier as more data is collected, and inevitably that this data collection process may need to run for many years and decades yet before the data becomes statistically significant. 

Which Formula is Simpler? Jon showed two formulas for estimating LGD, one a relatively complex-looking formula (the Vasicek distribution mentioned in his working paper) and the other a simple linear model of the form a + b·x. Jon said that looking at the two formulas, many would hope that the second formula might work best given its simplicity, but he wanted to convince us that the first formula was in fact simpler than the second. He said that the second formula would need to be regressed on all loans to estimate its parameters, whereas the first formula depended on two parameters that most banks should have a fairly good handle on: Default Rate (DR) and Expected Loss (EL). The fact that these parameters were relatively well understood seemed to be the basis for saying the first formula was simpler, despite its relative mathematical complexity. This prompted an audience question on what is the difference between Probability of Default (PD) and Default Rate (DR) - apparently PD is the expected probability of default before default happens (so ex-ante) and DR is the realised rate of default (so ex-post).
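
For reference, here is a sketch of the two functional forms as I read them - the precise parameterisation is in Jon's working paper, so treat this as my interpretation rather than a definitive statement. The "complex" formula comes from assuming that both the conditional default rate and the conditional expected loss follow the Vasicek distribution driven by the same systematic factor, which is why it only needs PD, EL and the correlation:

```latex
% Simple linear model: regress observed annual LGD on observed annual default rate
\mathrm{LGD} \approx a + b \cdot \mathrm{DR}

% Vasicek-based conditional LGD function (my reading of the working paper):
% only PD, EL and the asset correlation \rho are required
k = \frac{\Phi^{-1}(\mathrm{PD}) - \Phi^{-1}(\mathrm{EL})}{\sqrt{1-\rho}},
\qquad
\mathrm{LGD}(\mathrm{DR}) = \frac{\Phi\!\left[\Phi^{-1}(\mathrm{DR}) - k\right]}{\mathrm{DR}}
```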

Default and LGD over Time. Jon showed a graph (by an academic called Altman) of DR and LGD over time. When the DR was high (lots of companies failing, in a likely economic downturn) the LGD was also perhaps understandably high (so a high number of companies failing, against an economic background that is both part of the cause of the failures and not helping the loss recovery process). When DR is low, then there is a disconnect between LGD and DR. Put another way, when the number of companies failing is low, the losses incurred by those companies that do default can be high or low - there is no discernible pattern. I am not sure whether this disconnect is due to the smaller number of companies failing meaning the sample is much smaller and hence the outcomes are more volatile (no averaging effect), or, more likely, that in healthy economic times the loss given a default is much more of a random variable, dependent on the defaulting company specifics rather than on the general economic background.
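
On the sample-size point, a quick toy simulation (my own illustration, nothing presented on the night) shows why observed average LGD can look noisier in benign years - with only a handful of defaults, the idiosyncratic loss outcomes simply do not average out:

```python
import numpy as np

rng = np.random.default_rng(42)

def yearly_average_lgd(n_defaults, mean_lgd=0.4, sd_lgd=0.25, n_years=10_000):
    """Average LGD observed across n_defaults loans, repeated over many simulated years."""
    losses = rng.normal(mean_lgd, sd_lgd, size=(n_years, n_defaults)).clip(0, 1)
    return losses.mean(axis=1)

# few defaults (benign year) vs many defaults (recession year):
# the spread of the observed yearly average shrinks roughly as 1/sqrt(n)
for n in (5, 50, 500):
    sample = yearly_average_lgd(n)
    print(f"defaults per year={n:4d}  mean LGD={sample.mean():.3f}  std of yearly avg LGD={sample.std():.3f}")
```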

Conclusions Beware: Data is Sparse. Jon emphasised from the graph that the Altman data went back 28 years, of which 23 years were periods of low default, with 5 years of high default levels but only across 3 separate recessions. Therefore from a statistical point of view this is very little data, so makes drawing any firm statistical conclusions about default and levels of loss given default very difficult and error-prone. 

The Inherent Risk of LGD. Jon here seemed to be focussed not on the probability of default, but rather on the conditional risk that once a default has occurred then how does LGD behave and what is the risk inherent from the different losses faced. He described how LGD affects i) Economic Capital - if LGD is more variable, then you need stronger capital reserves, ii) Risk and Reward - if a loan has more LGD risk, then the lender wants more reward, and iii) Pricing/Valuation - even if the expected LGD of two loans is equal, then different loans can still default under different conditions having different LGD levels.

Models of LGD

Jon showed a chart with LGD plotted against DR for 6 models (two of which I think he was involved in). All six models were dependent on three parameters - PD, EL and correlation - and all six seemed to produce almost identical results when plotted on the chart. Jon mentioned that one of his models had been validated (successfully I think, but with a lot of noise in the data) against Moody's loan data taken over the past 14 years. He added that he was surprised that all six models produced almost the same results, implying either that all models were converging around the correct solution or, in total contrast, that all six models were potentially subject to "group think" and were systematically wrong in the way the problem should be looked at.

Jon took one of his LGD models and compared it against the simple linear model, using simulated data. He showed a graph of some data points for what he called a "lucky bank" with the two models superimposed over the top. The lucky bit came in because this bank's data points for DR against LGD showed lower DR than expected for a given LGD, and lower LGD for a given DR. In this specific case, Jon said that the simple linear model fits better than his non-linear one, but when done over many data sets his LGD model fitted better overall since it seemed to be less affected by random data.
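
As a rough sketch of the kind of comparison Jon described (emphatically not his code or his exact simulation design - just a toy under assumed PD, EL and correlation values), one can simulate annual DR/LGD pairs from a one-factor model, fit both the linear model and the conditional LGD function on a short "bank history", and see how each generalises out of sample:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
PD, EL, RHO = 0.02, 0.01, 0.15          # assumed portfolio parameters
K_TRUE = (norm.ppf(PD) - norm.ppf(EL)) / np.sqrt(1 - RHO)

def simulate(n_years, noise=0.05):
    """Simulate annual (DR, LGD) pairs from a one-factor model plus idiosyncratic noise."""
    z = rng.standard_normal(n_years)
    dr = norm.cdf((norm.ppf(PD) + np.sqrt(RHO) * z) / np.sqrt(1 - RHO))
    lgd = norm.cdf(norm.ppf(dr) - K_TRUE) / dr + rng.normal(0, noise, n_years)
    return dr, np.clip(lgd, 0.01, 1.0)

def fit_linear(dr, lgd):
    b, a = np.polyfit(dr, lgd, 1)       # LGD = a + b * DR
    return lambda x: a + b * x

def fit_lgd_function(dr, lgd):
    obj = lambda k: np.mean((norm.cdf(norm.ppf(dr) - k) / dr - lgd) ** 2)
    k_hat = minimize_scalar(obj, bounds=(0.0, 5.0), method="bounded").x
    return lambda x: norm.cdf(norm.ppf(x) - k_hat) / x

dr_in, lgd_in = simulate(25)            # one bank's short history
dr_out, lgd_out = simulate(5000)        # a much larger out-of-sample set
for name, model in [("linear", fit_linear(dr_in, lgd_in)),
                    ("lgd function", fit_lgd_function(dr_in, lgd_in))]:
    rmse = np.sqrt(np.mean((model(dr_out) - lgd_out) ** 2))
    print(f"{name:12s} out-of-sample RMSE: {rmse:.4f}")
```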

There were then a few audience questions as Jon closed his talk, one leading Jon to remind everyone of the scarcity of data in LGD modelling. In another Jon seemed to imply that he would favor using his model (maybe understandably) in the Dodd-Frank Annual Stress Tests for banks, emphasising that models should be kept simple unless a more complex model can be justified statistically. 

Steve Bennet and the Data Scarcity Issue 

Following Jon's talk, Steve Bennet of PECDC picked up on Jon's issue of scarce data within LGD modelling. Steve is based in the US, working for PECDC, which is a cross-border initiative to collect LGD and EAD (exposure at default) data. The basic premise seems to be that in dealing with the scarce data problem, we do not have 100 years of data yet, so in the meantime let's pool data across member banks and hence build up a more statistically significant data set - put another way: let's increase the width of the dataset if we can't control the depth.

PECDC is a consortium of around 50 organisations that pool data relating to credit events. Steve said that PECDC captures data fields per default at four "snapshot" times: origination, 1 year prior to default, at default and at resolution. He said that every bank that had joined the organisation had managed to improve its datasets. Following an audience question, he clarified that PECDC does not predict LGD with any of its own models, but rather provides the pooled data to enable the banks to model LGD better.

Steve said that LGD turns out to be very different for different sectors of the market, particularly between SMEs and large corporations (levels of LGD for large corporations being more stable globally and less subject to regional variations). But also there is great LGD variation across specialist sectors such as aircraft finance, shipping and project finance. 

Steve ended by saying that PECDC was originally formed in Europe, and was now attempting to get more US banks involved, with 3 US banks already involved and 7 waiting to join. There was an audience question relating to whether regulators allow pooled data to be used under Basel IRB - apparently Nordic regulators allow this due to the need for more data in a smaller market, European banks use the pooled data to validate their own data in IRB, but US banks must use their own data at the moment.

Til Schuermann

Following Steve, Til Schuermann added his thoughts on LGD. He said that LGD has a time variation and is not random, being worse in recession when DR is high. His stylized argument to support this was that in recession there are lots of defaults, leading to lots of distressed assets and that following the laws of supply and demand, then assets used in recovery would be subject to lower prices. Til mentioned that there was a large effect in the timing of recovery, with recovery following default between 1 and 10 quarters later. He offered words of warning that not all defaults and not all collateral are created equal, emphasising that debt structures and industry stress matter. 

Summary

The evening closed with a few audience questions and a general summation by the panelists of the main issues of their talks, primarily around models and modelling, the scarcity of data and how to be pragmatic in the application of this kind of credit analysis. 

09 October 2013

And the winner of the Best Risk Data Management and Analytics Platform is...

...Xenomorph!!! Thanks to all who voted for us in the recent A-Team Data Management Awards, it was great to win the award for Best Risk Data Management and Analytics Platform. Great that our strength in the Data Management for Risk field is being recognised, and big thanks again to clients, partners and staff who make it all possible!

Please also find below some posts for the various panel debates at the event:

 Some photos, slides and videos from the event are now available on the A-Team site.

07 October 2013

#DMSLondon - The Chief Data Officer Challenge

The first panel of the afternoon touched on a hot topic at the moment, the role of the Chief Data Officer (CDO). Andrew Delaney again moderated the panel, consisting of Rupert Brown of UBS, Patrick Dewald of Diaku, Colin Hall of Credit Suisse, Nigel Matthews of Barclays and Neill Vanlint of GoldenSource. Main points:

  • Colin said that the need for the CDO role is that someone needs to sit at the top table who is both nerdy about data but also can communicate a vision for data to the CEO.
  • Rupert said that the role of CDO was still a bit nebulous, covering data conformance, storage management, security and data opportunity (new functionality and profit). He suggested this role used to be called "Data Stewardship" and that the CDO tag is really a rename.
  • Colin answered that the role did use to be a junior one, but regulation and the rate of industry change demands a CDO, a point contact for everyone when anything comes up that concerns data - previously nobody knew quite who to speak to on this topic.
  • Patrick suggested that a CDO needs a long-term vision for data, since the role is not just an operational one. 
  • Nigel pointed out that the CDO needs to cover all kinds of data and mentioned recent initiatives like BCBS with their risk data aggregation paper.
  • Neill said that he had seen the use of a CDO per business line at some of his clients.
  • There was some conversation around the different types of CDO and the various carrots and sticks that can be employed. Neill made the audience laugh with his quote from a client that "If the stick doesn't work, I have a five-foot carrot to hit them with!"
  • Patrick said that CDO role is about business not just data.
  • Colin picked up on what Patrick said and illustrated this with an example of legal contract data feeding directly into capital calculations.
  • Nigel said that the CDO is a facilitator with all departments. He added that the monitoring tools from market data were also needed in reference data.

Overall a good debate, and I guess if you were starting from scratch (if only we could!) you would have to think that the CDO is a key role, given the finance industry is primarily built on the flow of data from one organisation to another.

#DMSLondon - What Will Drive Data Management?

The first panel of the day opened with an introductory talk by Chris Johnson of HSBC. Chris started his talk by proudly announcing that he drives a Skoda car, something that to him would have been unthinkable 25 years ago but with investment, process and standards things can and will change. He suggested that data management needs to go through a similar transformation, but that there remained a lot to be done. 

Moving on to the current hot topics of data utilities and managed services, he said that the reduced costs of managed services only become apparent in the long term and that both types of initiative have historically faced issues with:

  • Collaboration
  • Complexity
  • Logistical Challenges and Risks

Chris made the very good point that until service providers accept liability for data quality, clients must always check the data they use. He also mentioned, in relation to Solvency II (a hot topic for Chris at HSBC Securities Services), that EIOPA had recently suggested that managed services may need to be regulated. Chris mentioned the lack of time available to respond to all the various regulatory deadlines faced (a recurring theme) and that the industry still lacked some basic fundamentals such as a standard instrument identifier.

Chris then joined the panel discussion with Andrew Delaney as moderator and with other panelists including Colin Gibson (see previous post), Matt Cox of Denver Perry, Sally Hinds of Data Management Consultancy Services and Robert Hofstetter of Bank J. Safra Sarasin. The key points I took from the panel are outlined below:

  • Sally said that many firms were around Level 3 in the Data Management Maturity Model, and that many were struggling particularly with data integration. Sally added that utilities were new, as was the CDO role, and that the implications for data management were only just playing out.
  • Matt thought that reducing cost was an obvious priority in the industry at the moment, with offshoring playing its part but progress was slow. He believed that data management remains underdeveloped with much more to be done.
  • Colin said that organisations remain daunted by their data management challenges and that new challenges for data management were emerging with transactional data and derived data.
  • Sally emphasised the role of the US FATCA regulation and how it touches upon so many processes and departments including KYC, AML, Legal, Tax etc.
  • Matt highlighted derivatives regulation with the current activity in central clearing, Dodd-Frank, Basel III and EMIR.
  • Chris picked up on this and added Solvency II into the mix (I think you can sense regulation was a key theme...). He expressed the need for and desirability of a Unique Product Identifier (UPI, see report) as essential for the financial markets industry, and how we should not just stand still now that the LEI is coming. He said that industry associations really needed to pick up their game to get more standards in place, but added that the IMA had been quite proactive in this regard. He expressed his frustration at current data licensing arrangements with data vendors, with the insistence on a single point of use being the main issue (a big problem if you are in securities services serving your clients I guess)
  • Robert added that his main issues were data costs and data quality
  • Andrew then brought the topic around to risk management and its impact on data management.
  • Colin suggested that more effort was needed to understand the data needs of end users within risk management. He also mentioned that products are not all standard and data complexity presents problems that need addressing in data management.
  • Chris mentioned that there are 30 data fields used in Solvency II calculations and that if any are wrong this has a direct impact on the calculated capital charge (i.e. data is important!)
  • Colin got onto the topic of unstructured data and said how it needed to be tagged in some way to become useful. He suggested that there was an embryonic cross-over taking place between structured and unstructured data usage.
  • Sally thought that the merging of Business Intelligence into Data Management was a key development, and that if you have clean data then use it as much as you can.
  • Robert thought that increased complexity in risk management and elsewhere should drive the need for increased automation.
  • Colin thought cost pressures mean that the industry simply cannot afford the old IT infrastructure and that architecture needs to be completely rethought.
  • Chris said that we all need to get the basics right, with LEI but then on to UPI. He said to his knowledge data management will always be a cost centre and standardisation was a key element of reducing costs across the industry.
  • Sally thought that governance and ownership of data was woolly at many organisations and needed more work. She added this needed senior sponsorship and that data management was an ongoing process, not a one-off project.
  • Matt said that the "stick" was very much needed in addition to the carrot, advising that the proponents of improved data management should very much lay out the negative consequences to bring home the reality to business users who might not see the immediate benefits and costs.

Overall a good panel, with lots of good debate and exchange of ideas.

14 June 2013

Xenomorph at SIFMA Tech 2013 NYC

Quick note to say that Xenomorph will be exhibiting at this week's SIFMA Tech 2013 event in New York. You can find us on the Microsoft stand (booth 1507) on both Tuesday 18th and Wednesday 19th, and we can show you some of the work we have been doing with Microsoft on their Windows Azure cloud platform. 

I am also speaking at the event with Microsoft and a few other partners on Wednesday 19th at 11:40am:

"Managing Data Complexity in Challenging Times" - a panel with the following participants:

  • Rupesh Khendry – Head WW Capital Markets Industry Solutions, Microsoft Financial Services
  • Marc Alvarez - Senior Director, Interactive Data
  • Satyam Kancharla - SVP, Numerix
  • Dushyant Shahrawat – Senior Research Director, CEB TowerGroup
  • Brian Sentance - CEO, Xenomorph 

Hope to see you there!

16 October 2012

The Missing Data Gap

Getting to the heart of "Data Management for Risk", PRMIA held an event entitled "Missing Data for Risk Management Stress Testing" at Bloomberg's New York HQ last night. For those of you who are unfamiliar with the topic of "Data Management for Risk", then the following diagram may help to further explain how the topic is to do with all the data sets feeding the VaR and scenario engines.

Diagram: Data Flow for Risk Engines
I have a vested interest in saying this (and please forgive the product placement in the diagram above, but hey this is what we do...), but the topic of data management for risk seems to fall into a functionality gap between: i) the risk system vendors, who typically seem to assume that the world of data is perfect and that the topic is too low-level to concern them, and ii) the traditional data management vendors, who seem to regard things like correlations, curves, spreads, implied volatilities and model parameters as too business-domain focussed (see previous post on this topic). As a result, the risk manager is typically left with ad-hoc tools like spreadsheets and other analytical packages to perform data validation and the filling of any missing data found. These ad-hoc tools are fine until the data universe grows larger, leading to the regulators becoming concerned about just how much data is being managed "out of system" (see past post for some previous thoughts on spreadsheets).

The Crisis and Data Issues. Anyway, enough background, and on to some of the issues raised at the event. Navin Sharma of Western Asset Management started the evening by saying that pre-crisis people had a false sense of security around Value at Risk, and that the crisis showed that data is not reliably smooth in nature. Post-crisis, questions obviously arise around how much data to use, how far back to go, and whether you include or exclude extreme periods like the crisis. Navin also suggested that the boards of many financial institutions were now much more open to reviewing scenarios put forward by the risk management function, whereas pre-crisis their attention span was much more limited.

Presentation. Don Wesnofske did a great presentation on the main issues around data and data governance in risk (which I am hoping to link to here shortly...)

Issues with Sourcing Data for Risk and Regulation. Adam Litke of Bloomberg asked the panel what new data sourcing challenges were resulting from the current raft of regulation being implemented. Barry Schachter cited a number of Basel-related examples. He said that the cost of rolling up loss data across all operations was prohibitive, and hence there were data truncation issues to be faced when assessing operational risk. Barry mentioned that liquidity calculations were new and presenting data challenges. Non-centrally cleared OTC derivatives also presented data challenges, with initial margin calculations based on stressed VaR. Whilst on the subject of stressed VaR, Barry said that there were a number of missing data challenges, including the challenge of obtaining past histories and of modelling current instruments that did not exist in past stress periods. He said that it was telling on this subject that the Fed had decided to exclude tier 2 banks from stressed VaR calculations on the basis that they did not think these institutions were in a position to be able to calculate these numbers given the data and systems that they had in place.

Barry also mentioned the challenges of Solvency II for insurers (and their asset managers) and said that this was a huge exercise in data collection. He said that there were obvious difficulties in modelling hedge fund and private equity investments, and that the regulation penalised the use of proxy instruments where there was limited "see-through" to the underlying investments. Moving on to UCITS IV, Barry said that the regulation required VaR calculations to be regularly reviewed on an ongoing basis, and he pointed out one issue with much of the current regulation in that it uses ambiguous terms such as models of "high accuracy" (I guess the point being that accuracy is always arguable/subjective for an illiquid security).

Sandhya Persad of Bloomberg said that there were many practical issues to consider, such as exchanges that close at different times and the resultant misalignment of closing data, problems dealing with holiday data across different exchanges and countries, and the sourcing of factor data for risk models from analysts. Navin expanded more on his theme of which periods of data to use. Don took a different tack, and emphasised the importance of getting the fundamental data of client-contract-product in place, suggesting that this was still a big challenge at many institutions. Adam closed the question by pointing out the data issues in everyday mortgage insurance as an example of how prevalent data problems are.

What Missing Data Techniques Are There? Sandhya explained a few of the issues she and her team face at Bloomberg in making decisions about what data to fill. She mentioned the obvious issue of the distance between missing data points and the preceding data used to fill them. Sandhya mentioned that one approach to missing data is to reduce factor weights down to zero for factors without data, but this gives rise to a data truncation issue. She said that there were a variety of statistical techniques that could be used; she mentioned adaptive learning techniques and then described some of the work that one of her colleagues had been doing on maximum-likelihood estimation, whereby in addition to achieving consistency with the covariance matrix of "near" neighbours, the estimation also had greater consistency with the historical behaviour of the factor or instrument over time.
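
To make the covariance-consistency idea a little more concrete, here is a minimal sketch (my own illustration, not Bloomberg's methodology) of filling one missing factor observation with its conditional Gaussian expectation given "near" neighbours, using a covariance matrix estimated from the complete history:

```python
import numpy as np

def fill_missing(returns, miss_col, miss_row):
    """Fill one missing observation with its conditional (Gaussian) expectation
    given the other factors observed on the same date."""
    complete = np.delete(returns, miss_row, axis=0)       # history excluding the gap
    mu = complete.mean(axis=0)
    cov = np.cov(complete, rowvar=False)

    others = [j for j in range(returns.shape[1]) if j != miss_col]
    s_xy = cov[miss_col, others]                          # cov(missing, observed)
    s_yy = cov[np.ix_(others, others)]                    # cov(observed, observed)
    obs = returns[miss_row, others] - mu[others]
    # E[x | y] = mu_x + S_xy S_yy^{-1} (y - mu_y)
    return mu[miss_col] + s_xy @ np.linalg.solve(s_yy, obs)

# toy example: three correlated factor return series with one "hole" at row 100, column 0
rng = np.random.default_rng(1)
data = rng.multivariate_normal([0, 0, 0],
                               [[1.0, 0.8, 0.5],
                                [0.8, 1.0, 0.4],
                                [0.5, 0.4, 1.0]], size=250)
print("filled:", fill_missing(data, miss_col=0, miss_row=100), "actual:", data[100, 0])
```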

Navin commented that fixed income markets were not as easy to deal with as equity markets in terms of data, and that at sub-investment grade there is very little data available. He said that heuristic models were often needed, and suggested that there was a need for "best practice" to be established for fixed income, particularly in light of guidelines from regulators that are at best ambiguous.

I think Barry then made some great comments about data and data quality in saying that risk managers need to understand more about the effects (or lack of them) that input data has on the headline reports produced. The reason I say great is that I think there is often a disconnect or lack of knowledge around the effects that input data quality can have on the output numbers produced. Whilst regulators increasingly want data "drill-down" and justification of any data used to calculate risk, it is still worth understanding more about whether output results are greatly sensitive to the input numbers, or whether maybe related aspects such as data consistency ought to have more emphasis than, say, absolute price accuracy. For example, data quality was being discussed at a recent market data conference I attended and only about 25% of the audience said that they had ever investigated the quality of the data they use. Barry also suggested that you need to understand for what purpose the numbers are being used and what effect the numbers have on the decisions you take. I think here the distinction was around usage in risk, where changes/deltas might be more important, whereas in calculating valuations or returns price accuracy might receive more emphasis.
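
The kind of check Barry was hinting at need not be complicated. As a hedged illustration (my own toy example, nothing shown by the panel), one can perturb an input return history at increasing error levels and watch how much a simple historical VaR number actually moves:

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_t(df=4, size=1000) * 0.01      # stand-in for a P&L/return history

def hist_var(r, level=0.99):
    """One-day historical VaR as the loss at the given confidence level."""
    return -np.quantile(r, 1 - level)

print(f"baseline 99% VaR: {hist_var(returns):.4%}")
for err in (0.0001, 0.001, 0.01):                     # increasing levels of input data error
    noisy = returns + rng.normal(0, err, size=returns.shape)
    print(f"input noise sd={err:.4f} -> 99% VaR {hist_var(noisy):.4%}")
```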

How Extensive is the Problem? General consensus from the panel was that the issue's importance needed to be understood more (I guess my experience is that the regulators can make data quality important for a bank if they say that input data issues are the main reason for blocking approval of an internal model for regulatory capital calculations). Don said that any risk manager needed to be able to justify why particular data points were used, and there was further criticism from the panel around regulators asking for high quality without specifying what this means or what needs to be done.

Summary - My main conclusions:

  • Risk managers should know more of how and in what ways input data quality affects output reports
  • Be aware of how your approach to data can affect the decisions you take
  • Be aware of the context of how the data is used
  • Regulators set the "high quality" agenda for data but don't specify what "high quality" actually is
  • Risk managers should not simply accept regulatory definitions of data quality and should join in the debate

Great drinks and food afterwards (thanks Bloomberg!) and a good evening was had by all, with a topic that needs further discussion and development.

30 August 2012

Reverse Stress Testing at Quafafew

Just back from a good vacation (London Olympics followed by a sunny week in Portugal - hope your summer has gone well too) and enjoyed a great evening at a Quafafew event on Tuesday evening, entitled "Reverse Stress Testing & Roundtable on Managing Hedge Fund Risk".

Reverse Stress Testing

The first part of the evening was a really good presentation by Daniel Satchkov of Rixtrema on reverse stress testing. Daniel started the evening by stating his opinion that risk managers should not consider their role as one of trying to predict the future, but rather one more reminiscent of "car crash testing", where the role of the tester is one of assessing, managing and improving the response of a car to various "impacts", without needing to understand the exact context of any specific crash such as "Who was driving?", "Where did the accident take place?" or "Whose fault was it?". (I guess the historic context is always interesting, but will be no guide to where, when and how the next accident takes place). 

Daniel spent some of his presentation discussing the importance of paradigms (aka models) to risk management, which in many ways echoes many of the themes from the modeller's manifesto. Daniel emphasised the importance of imagination in risk management, and gave a quick story about a German professor of mathematics who, when asked the whereabouts of one of his new students, replied that "he didn't have enough imagination so he has gone off to become a poet".

In terms of paradigms and how to use them, he gave the example of Brownian motion and described how the probability of all the air in the room moving to just one corner was effectively zero (as evidenced by the lack of oxygen cylinders brought along by the audience). However such extremes were not unusual in market prices, so he noted how Black-Scholes was evidently the wrong model, but when combined with volatility surfaces the model was able to give the right results i.e. "the wrong number in the wrong formula to get the right price." His point here was that the wrong model is ok so long as you are aware of how it is wrong and what its limitations are (it might be worth checking out this post containing some background by Dr Yuval Millo about the evolution of the options market).
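
By way of illustration of the "wrong number in the wrong formula" point (a standard textbook exercise rather than anything Daniel showed, with a hypothetical market price), the Black-Scholes formula is routinely inverted to find the implied volatility that reproduces an observed price, and that number is then quoted and fed straight back into the same formula:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Back out the volatility that makes the model match the market price."""
    return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

market_price = 9.35     # hypothetical quoted price for this strike/expiry
iv = implied_vol(market_price, S=100, K=105, T=0.5, r=0.01)
print(f"implied vol: {iv:.2%}, repriced: {bs_call(100, 105, 0.5, 0.01, iv):.2f}")
```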

Daniel said that he disagreed with the premise by Taleb that the range of outcomes is infinite and that as a result all risk managers should just give up and buy a lottery ticket; however he had some sympathy with Taleb over the use of stable correlations within risk management. His illustration was once again entertaining, quoting a story where a doctor asks a nurse what the temperature is of the patients at a Russian hospital, only to be told that they are all "normal, on average", which obviously is not the most useful medical information ever provided. Daniel emphasised that, contrary to what you often read, correlations do not always move to one in a crisis, but there are often similarities from one crisis to the next (maybe history not repeating itself but rhyming instead). He said that accuracy was not really valid or possible in risk management, and that the focus should be on relative movements and the relative importance of the different factors assessed in risk.

Coming back to the core theme of reverse stress testing, Daniel presented a method by which, having categorised certain types of "impacts", a level of loss could be specified and the model would produce a set of scenarios producing that loss level. Daniel said that he had designed his method with a view to producing sets of scenarios that were:

  • likely
  • different
  • not missing any key dangers

He showed some of the result sets from his work which illustrated that not all scenarios were "obvious". He was also critical of addressing key risk factors separately, since hedges against different factors would be likely to work against each other in times of crisis and hedging is always costly. I was impressed by his presentation (both in content and in style) and if the method he described provides a reliable framework for generating a useful range of possible scenarios for a given loss level, then it sounds to me like a very useful tool to add to those available to any risk manager.
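
Daniel did not go into implementation detail, but to make the general idea concrete, here is a heavily simplified sketch of my own (under a multivariate normal factor assumption and made-up sensitivities, so nothing like the richness of his method): given a target loss, find the most likely factor scenario on the iso-loss surface, then sample other, "different" scenarios consistent with the same loss:

```python
import numpy as np

rng = np.random.default_rng(3)

# assumed inputs: factor covariance and portfolio sensitivities ($m P&L per unit factor move)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = np.array([5.0, -2.0, 3.0])
target_loss = -10.0                      # the loss level we want scenarios for ($m)

sigma_w = cov @ w
var_pnl = w @ sigma_w

# most likely scenario producing exactly the target loss
# (maximum of the N(0, cov) density subject to w.x = target_loss)
most_likely = sigma_w * target_loss / var_pnl
print("most likely scenario:", np.round(most_likely, 3), "P&L:", round(w @ most_likely, 2))

# other 'different' scenarios with the same loss: sample factor moves,
# then project each sample onto the iso-loss plane (conditional on w.x = target_loss)
for _ in range(3):
    z = rng.multivariate_normal(np.zeros(3), cov)
    scenario = z + sigma_w * (target_loss - w @ z) / var_pnl
    print("alt scenario:", np.round(scenario, 3), "P&L:", round(w @ scenario, 2))
```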

Managing Hedge Fund Risk

The second part of the evening involved Herb Blank of S-Network (and Quafafew) asking a few questions of Raphael Douady of Riskdata and Barry Schachter of Woodbine Capital. Raphael was an interesting and funny member of the audience at the Dragon Kings event, asking plenty of challenging questions, and the entertainment continued yesterday evening. Herb asked how VaR should be used at hedge funds, to which Raphael said that if he calculated a VaR of 2 and we lost 2.5, he would have been doing his job. If the VaR was 2 and the loss was 10, he would say he was not doing his job. Barry said that he only uses VaR when he thinks it is useful, in particular when the assumptions underlying VaR are to some degree reflected in the stability of the market at the time it is used.

Raphael then took us off on an interesting digression based on human perceptions of probability and statistical distributions. He told the audience that yesterday was his eldest daughter's birthday, and he wanted the members of the audience to write down on paper a lower and upper bound for her age that they were 99% confident would contain it. As background, Raphael looks like this. Raphael collected the results and found that out of 28 entries, the ranges provided by 16 members of the audience did not cover his daughter's age. Of the 12 successful entries (her age was 25), six had 25 as the upper bound. Some of the entries said that she was between 18 and 21, which Raphael took to mean that some members of the audience thought that they knew her if they assigned 99% confidence to their guess (they didn't). His point was that even for Quafafewers (or maybe Quafafewtoomuchers given the results...) guessing probabilities and appropriate ranges of distributions is not a strong point for much of the human race.

Raphael then went on to illustrate his point above by saying that if you asked him whether he thought the Euro would collapse, then on balance he didn't think it was very likely, since he thinks that when forced Germany would ultimately come to the rescue. However, if you were assessing the range of outcomes that might fit within a 99th percentile distribution of outcomes, then Raphael said that the collapse of the Euro should be included as a possible scenario, but that this possibility was not currently being included in the scenarios used by the major financial institutions. Off on another (related) digression, Raphael compared LTCM to having the best team of Formula 1 drivers in the world, who given an F1 track would drive the fastest and win everything, but if forced to drive an F1 car on a very bumpy road would crash much more than most, regardless of their talent or the capabilities of their vehicle.

Barry concluded the evening by saying that he would speak first, otherwise he would not get the chance to given Raphael's performance so far. Again it was a digression from hedge fund risk management, but he said that many have suggested that risk managers need to do more of what they were already doing (more scenarios, more analysis, more transparency etc). Barry suggested that maybe, rather than just doing more, the paradigm was wrong and risk managers should be thinking differently rather than just doing more of the same. He gave one specific example of speaking to a structurer in a bank recently and asking, given the higher hurdle rates for capital, whether the structurer should consider investing in riskier products. The answer from the structurer was that the bank was planning to meet about this later that day, so once again it would seem that what the regulators want to happen is not necessarily what they are going to get...

21 June 2012

SIFMA NYC 2012 - the event to be (outside) at

Quick note to say that it was the week of SIFMA in New York, what was once the biggest event in the fintech calendar. It unfortunately continues its decline, charging for entrance for the first time and with a continued reduction in the number of vendors exhibiting. Eli Manning of the New York Giants turned up to speak, but I guess he didn't have to pay the entrance fee (to say the least...).

Regardless of the exhibition's decline, perhaps the organisers should start charging for entrance to the Bridges Bar in the Hilton Hotel where the event is held? It seems like the world and his wife still want to meet up in New York around this time; it is just that the organisers need to find some better ways to tap into this enthusiasm to talk face to face.

14 June 2012

Paris Financial Information Summit 2012

I attended the Financial Information Summit event on Tuesday, organized in Paris by Inside Market Data and Inside Reference Data.

Unsurprisingly, most of the topics discussed during the panels focused on reducing data costs, managing the vendor relationship strategically, LEI and building sound data management strategies.

Here is a (very) brief summary of the key points touched upon, which generated good debate from both panellists and audience:

Lowering data costs and cost containment panels

  • Make end-users aware of how much they pay for that data so that they will have a different perspective when deciding if the data is really needed or a "nice to have"
  • Build a strong relationship with the data vendor: you work for the same aim and share the same industry issues
  • Evaluate niche data providers who are often more flexible and willing to assist while still providing high quality data
  • Strategic vendor management is needed within financial institutions: this should be an ongoing process aimed at improving contract management for data licenses
  • A centralized data management strategy and consolidation of processes and data feeds allow cost containment (something that Xenomorph have long been advocating)
  • Accuracy and timeliness of data is essential: make sure your vendor understands your needs
  • Negotiate redistribution costs to downstream systems

One good point was made by David Berry, IPUG-Cossiom, on the acquisition of data management software vendors by the data providers themselves (referring to the Markit-Cadis and PolarLake-Bloomberg deals), stating that it will be tricky to see how the two business units will be managed "separately" (if kept separate...I know what you are thinking!).

There were also interesting case studies and examples supporting the points above. Many panellists pointed out how difficult it can be to obtain high quality data from vendors and that only regulation can actually improve the standards. Despite the concerns, I must recognize that many firms are now pro-actively approaching the issue and trying to deal with the problem in a strategic manner. For example, Hand Henrik Hovmand, Market Data Manager, Danske Bank, explained how Danske Bank are in the process of adopting a strategic vendor system made up of 4 steps: assessing the vendor, classifying the vendor, deciding what to do with the vendor and creating a business plan. Vendors are classified as strategic, tactical, legacy or emerging. Based on this classification, the "bad" vendors are evaluated to verify whether they are enhancing data quality. This vendor landscape is used both internally and externally during negotiation, and Hovmand was confident it will help Danske Bank to contain costs and get more for the same price.

I also enjoyed the panel on Building a sound management strategy where Alain Robert-Dauton of Sycomore Asset Management was speaking. He highlighted how asset managers, in particular smaller firms, are now feeling the pressure of regulators but at the same time are less prepared to deal with compliance than larger investment banks. He recognized that asset managers need to invest in a sound risk data management strategy and supporting technology, with regulators demanding more details, reports and high quality data.

As for what was said on LEI, it seems most financial institutions are still unprepared for how it should be implemented, due to the uncertainty around it, but I refer you to an article from Nicholas Hamilton in Inside Reference Data for a clear picture of what was discussed during the panel.

Looking forward, the panellists agreed that the main challenge is and will be managing the increasing volume of data. Though, as Tom Dalglish affirmed, the market is still not ready for the cloud, given that not much has been done in terms of legislation. Watch out!

The full agenda of the event is available here.

08 June 2012

Federal Reserve beats the market (at ping pong...)

Thanks to all those who came along and supported "Ping Pong 4 Public Schools" at the AYTTO fundraiser event at SPiN on Wednesday evening. Great evening with participants in the team competition from TabbGroup, Jefferies Investment Bank, Toro Trading, MissionBig, PolarLake, AIG, Mediacs, Xenomorph and others. In fact the others included the Federal Reserve, who got ahead of the market and won the team competition...something which has to change next year! Additional thanks to SPiN NYC for hosting the event, and to Bonhams for conducting the reverse auction.

Some photographs from the event below:

Photo: Ben Nisbet of AYTTO trying to make order out of chaos at the start of the team competition...

Photo: One of the AYTTO students - glad none of us had to play her, we would have got wupped...

Photo: The TabbGroup strike a pose and look optimistic at the start of the evening...

Photo: Sidney, one of the AYTTO coaches, helping us all to keep track of the score...

Photo: This team got a lot of support from the audience, no idea why...

18 October 2011

A-Team event – Data Management for Risk, Analytics and Valuations

My colleagues Joanna Tydeman and Matthew Skinner attended the A-Team Group's Data Management for Risk, Analytics and Valuations event today in London. Here are some of Joanna's notes from the day:

Introductory discussion

Andrew Delaney, Amir Halton (Oracle)

Drivers of the data management problem – regulation and performance.

Key challenges that are faced – the complexity of instruments is growing, managing data across different geographies, an increase in M&A because of the volatile market, broader distribution of data and analytics required, etc. It’s a work in progress but there is appetite for change. A lot of emphasis is now on OTC derivatives (this was echoed at a CityIQ event earlier this month as well).

Having an LEI is becoming standard, but has its problems (e.g. China has already said it wants its own LEI which defeats the object). This was picked up as one of the main topics by a number of people in discussions after the event, seeming to justify some of the journalistic over-exposure to LEI as the "silver bullet" to solve everyone's counterparty risk problems.

Expressed the need for real time data warehousing and integrated analytics (a familiar topic for Xenomorph!) – analytics now need to reflect reality and to be updated as the data is running - coined as ‘analytics at the speed of thought’ by Amir. Hadoop was mentioned quite a lot during the conference, also NoSQL which is unsurprising from Oracle given their recent move into this tech (see post - a very interesting move given Oracle's relational foundations and history)

Impact of regulations on Enterprise Data Management requirements

Virginie O’Shea, Selwyn Blair-Ford (FRS Global), Matthew Cox (BNY Mellon), Irving Henry (BBA), Chris Johnson (HSBC SS)

Discussed the new regulations, how there is now a need to change practice as regulators want to see your positions immediately. Pricing accuracy was mentioned as very important so that valuations are accurate.

Again, said how important it is to establish which areas need to be worked on and make the changes. Firms are still working on a micro level, need a macro level. It was discussed that good reasons are required to persuade management to allocate a budget for infrastructure change. This takes preparation and involving the right people.

Items that panellists considered should be on the priority list for next year were:

· Reporting – needs to be reliable and meaningful

· Long term forecasts – organisations should look ahead and anticipate where future problems could crop up.

· Engage more closely with Europe (I guess we all want the sovereign crisis behind us!)

· Commitment of firm to put enough resource into data access and reporting including on an ad hoc basis (the need for ad hoc was mentioned in another session as well).

Technology challenges of building an enterprise management infrastructure

Virginie O’Shea, Colin Gibson (RBS), Sally Hinds (Reuters), Chris Thompson (Mizuho), Victoria Stahley (RBC)

Coverage and reporting were mentioned as the biggest challenges.

Front office used to be more real time, back office used to handle the reference data; now the two must meet. There is a real requirement for consistency: front office and risk need the same data so that they arrive at the same conclusions.

Money needs to be spent in the right way and firms need to build for the future. There is real pressure for cost efficiency and for doing more for less. Discussed that timelines should perhaps be longer so that a good job can be done, but there should be shorter milestones to keep the business happy.

Panellists described the next pain points/challenges that firms are likely to face as:

· Consistency of data including transaction data.

· Data coverage.

· Bringing together data silos, knowing where data is from and how to fix it.

· Getting someone to manage the project and uncover problems (which may be a bit scary, but problems are required in order to get funding).

· Don’t underestimate the challenges of using new systems.

Better business agility through data-driven analytics

Stuart Grant, Sybase

Discussed Event Stream Processing, that now analytics need to be carried out whilst data is running, not when it is standing still. This was also mentioned during other sessions, so seems to be a hot topic.

Mentioned that the buy side’s challenge is that their core competency is not IT. Now with cloud computing they are more easily able to outsource. He mentioned that buy side shouldn’t necessarily build in order to come up with a different, original solution.

Data collection, normalisation and orchestration for risk management

Andrew Delaney, Valerie Bannert-Thurner (FTEN), Michael Coleman (Hyper Rig), David Priestley (CubeLogic), Simon Tweddle (Mizuho)

Complexity of the problem is the main hindrance. When problems are small, it is hard for them to get budget so they have to wait for problems to get big – which is obviously not the best place to start from.

There is now a change in behaviour of senior front office management – now they want reports, they want a global view. Front office do in fact care about risk because they don’t want to lose money. Now we need an open dialogue between front office and risk as to what is required.

Integrating data for high compute enterprise analytics

Andrew Delaney, Stuart Grant (Sybase), Paul Johnstone (independent), Colin Rickard (DataFlux)

The need for granularity and transparency are only just being recognised by regulators. The amount of data is an overwhelming problem for regulators, not just financial institutions.

Discussed how OTCs should be treated more like exchange-traded instruments – need to look at them as structured data.

24 June 2011

PRMIA on Data and Analytics

Final presentation at the PRMIA event yesterday was by Clifford Rossi and was entitled "The Brave New World of Data & Analytics Following the Crisis: A Risk Manager's Perspective".

Clifford got his presentation going with a humorous and self-deprecating start by suggesting that his past employment history could in fact be the missing "leading indicator" for predicting organisations in crisis, having worked at CitiGroup, WaMu, Countrywide, Freddie Mac and Fannie Mae. One of the other professors present hoped that he wouldn't do the same to academia (University of Maryland beware maybe!).

Clifford said that the crisis had laid bare the inadequacy of, and underinvestment in, data and risk technology in the financial services sector. He suggested that the OFR had the potential to be a game changer in correcting this issue and in helping the role of CRO to gain in stature.

He gave an example of a project at one of the GSEs he had worked at called "Project Enterprise", which was to replace 40-year-old mainframe-based systems (systems that for instance only had 3 digits to identify a transaction). He noted that this project had recently been killed, having cost around $500M. With history like this, it is not surprising that enterprise risk data warehousing capabilities were viewed as black holes without much payoff prior to the crisis. In fact it was only due to Basel that data management projects in risk received any attention from senior management, in his view.

During the recent stress test process (SCAP) the regulators found just how woeful these systems were, as the banks struggled to produce the scenario results in a timely manner. Clifford said that many banks struggled to produce a consistent view of risk even for one asset type, and that in many cases corporate acquisitions had exacerbated this lack of consistency in obtaining accurate, timely exposure data. He said that the mortgage processing fiasco showed the inadequacy of these types of systems (echoing something I heard at another event about mortgage tagging information being completely "free-format", without even designated fields for "City" and "State" for instance).

Data integrity was another key issue that Clifford discussed, here talking about the lack of historical performance data leading to myopia in dealing with new products, and poor definitions of product leading to risk assessments based on the originator rather than on the characteristics of the product. (Side note: I remember prior to the crisis the credit derivatives department at one UK bank requisitioning all new server hardware to price new CDO squared deals given it was supposedly so profitable - it was at that point that maybe I should have known something was brewing...) Clifford also outlined some further data challenges, such as the changing statistical relationship between debt-to-income ratio and mortgage defaults once incomes were self-declared on mortgages.

Moving on to consider analytics and models, Clifford outlined a lot of the concerns covered by the Modeller's Manifesto, such as the lack of qualitative judgement and over-reliance on the quantitative, efficiency and automation superseding risk management, limited capability to stress test on a regular basis, regime change, poor model validation, and cognitive biases reinforced by backward-looking statistical analysis. He made the additional point that in relation to the OFR, they should concentrate on getting good data in place before spending resource on building models.

In terms of focus going forward, Clifford said that liquidity, counterparty and credit risk management were not well understood. Possibly echoing Ricardo Rebonato's ideas, he suggested that leading indicators need to be integrated into risk modelling to provide the early warning systems we need. He advocated that there was more to do on integrating risk views across lines of business, counterparties and between the banking and trading books.

Whilst being a proponent of the OFR's potential to mandate better analytics and data management, he warned (sensibly in my view) that we should not think that the solution to future crises is simply to set up a massive data collection and modelling entity (see earlier post on the proposed ECB data utility).

Clifford thinks that Dodd-Frank has the potential to do for the CRO role what Sarbanes-Oxley did in elevating the CFO role. He wants risk managers to take the opportunity presented in this post-crisis period to lead the way in promoting good judgement based on sound management of data and analytics. He warned that senior management buy-in to risk management was essential and could be forced through by regulatory edict.

This last and closing point is where I think the role of risk management (as opposed to risk reporting) faces its biggest challenge: how can a risk manager be supported in preventing a senior business manager from pursuing an overly risky new business opportunity based on what "might" happen in the future? We human beings don't think about uncertainty very clearly, and the lack of a resulting negative outcome will be seen by many to invalidate the concerns put forward before a decision was made. Risk management then risks becoming known as the "business prevention" department, rather than being regarded as the key role it should be.

28 October 2010

A French Slant on Valuation

Last Thursday, I went along to an event organized by the Club Finance Innovation on the topic of “Independent valuations for the buy-side: expectations, challenges and solutions”.

The event was held at the Palais Brongniart in Paris which, for those who don't know (like me until Thursday), was built between 1807 and 1826 by the architect Brongniart by order of Napoleon Bonaparte, who wanted the building to permanently host the Paris stock exchange.

Speakers at the roundtable included Francis Cornut (DeriveXperts), Jean-Marc Eber (LexiFi), Eric Benhamou (Pricing Partners), Claude Martini (Zeliade) and Patrick Hénaff (University of Bretagne).

The event focussed on the role of the buy-side in financial markets, looking in particular at the concept of independent valuations and how this has taken on an important role after the financial downturn. However, all the speakers agreed that there remains a large gap between the sell-side and buy-side in terms of competence and expertise in the field of independent valuations. The buy-side lacks the systems for a better understanding of financial products, and should align itself with the best practices of the sell-side and the bigger hedge funds.

The roundtable was started by Francis Cornut of DeriveXperts, who gave the audience a definition of independent valuation. Whilst a valuation could be defined as the "set of data and models used to explain the result of a valuation", Cornut highlighted that the difficulty lies in saying what independent means; there is in fact general confusion about what this concept represents: internal confusion, for example between the front office and risk control departments of an institution, but also external confusion, when valuations are done by third parties.

Cornut provided three criteria that an independent valuation should respect:

  • Autonomy, which should be both technical and financial;
  • Credibility and transparency;
  • Ethics, i.e. being able to resist market/commercial pressure and deliver a valuation which is free from external influences/opinions.

Independent valuations are the way forward for a better understanding of complex, structured financial products. Cornut advocated the need for financial parties (clients, regulators, users and providers) to invest more and understand the importance of independent valuations, which will ultimately improve risk management.

Jean-Marc Eber, President of LexiFi, agreed that the ultimate objective of independent valuations is to allow financial institutions to better understand the market. To accomplish this, Eber pointed out that when we speak about services to clients, we should first think about what their real needs are. The bigger umbrella of "buy-side" in fact covers different needs, and there is often a contradiction in what regulators want: on one side, having independent valuations provided by independent third parties; on the other, independent valuations really meaning that internal users and staff understand what lies underneath the products that a company holds. In the same way, we don't just need to value products, but also to measure their risk and to re-value them periodically. It is important, in fact, to have the whole picture of the product being valued in order to make the buy-side more competitive.

Another point on which the speakers agreed is traceability: as Eber said, financial products don't exist just as they are, but undergo transformation and change several times. Therefore, the market needs to follow a product across its life cycle through to maturity, and this poses a technology challenge in providing scenario analysis for compliance and keeping track of the audit trail.

When asked 'what has the crisis changed?', the panellists answered:

Eber: the crisis showed the need to be more competent and technical to avoid risk. He highlighted the need to understand the product and its underlying. Many speak of having a central repository for OTC derivatives, bonds, etc., but this needs more thinking from the regulators and the financial markets. Moreover, the markets should focus more on data quality and transparency.

Eric Benhamou, CEO of Pricing Partners, sees an evolution of the market, as the crisis exposed underestimated risks which are now being taken into consideration.

Claude Martini, CEO of Zeliade, advocated the need for financial markets to implement best practices for product valuations: the buy-side should apply the same practices already adopted by the sell-side and verify the hypotheses, price and risk related to a financial product.

Cornut admitted that things have changed since 2005, when DeriveXperts was launched and nobody seemed to be interested in independent valuations. People would ask what value they would get from an investment in independent valuations: yes, the regulators are happy, but what's the benefit for me?

This is changing now that financial institutions know that a deeper understanding of financial products increases their ability to push those products to their clients.

The speech I enjoyed the most was from Patrick Hénaff, associate professor at the University of Bretagne and formerly Global Head of Quantitative Analysis - Commodities at Merrill Lynch / Bank of America.

He took a more academic approach and challenged the assumption that having two prices to compare reduces the uncertainty about a product, highlighting that this is not always the case. I found interesting his idea of quoting a product price with a confidence interval, or a 'toxic index', which would represent the uncertainty about the product and capture the model risk that may originate from it.
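To make the confidence interval part of that idea concrete, here is a minimal sketch of my own (made-up parameters, and capturing only simulation uncertainty rather than the broader model uncertainty Hénaff had in mind): a Monte Carlo option price reported together with a 95% confidence interval rather than as a single number.

    # Minimal sketch (illustrative model and parameters only): a Monte Carlo call
    # price quoted with a 95% confidence interval derived from simulation error.
    import math, random

    def mc_call_price(spot, strike, rate, vol, expiry, n_paths=100_000, seed=42):
        random.seed(seed)
        payoffs = []
        for _ in range(n_paths):
            z = random.gauss(0.0, 1.0)
            st = spot * math.exp((rate - 0.5 * vol**2) * expiry + vol * math.sqrt(expiry) * z)
            payoffs.append(max(st - strike, 0.0))
        disc = math.exp(-rate * expiry)
        mean = disc * sum(payoffs) / n_paths
        var = sum((disc * p - mean) ** 2 for p in payoffs) / (n_paths - 1)
        stderr = math.sqrt(var / n_paths)
        return mean, (mean - 1.96 * stderr, mean + 1.96 * stderr)

    price, ci = mc_call_price(spot=100, strike=105, rate=0.02, vol=0.25, expiry=1.0)
    print(f"price {price:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")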

We speak too often about the risk associated with complex products, but Hénaff explained how the risk exists even for simpler products, for example in the calculation of VaR on a given stock position. A stock is extremely volatile and we cannot know its trend; providing a confidence interval is therefore crucial. What is new is the interest many are showing in assigning a price to a given risk, whereas before, model risk was considered a mere operational risk arising from the calculation process. Today, a good valuation of the risk associated with a product can result in less regulatory capital being used to cover that risk, and as such it is gaining much more interest from the market.
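In the same spirit, a minimal sketch of putting a confidence interval around a VaR number for a single stock position (again my own illustration, using made-up P&L data and a simple bootstrap):

    # Minimal sketch (illustrative only): historical-simulation VaR on a single
    # stock position, with a bootstrap confidence interval around the estimate.
    import random

    def hist_var(pnl, level=0.99):
        """VaR as the loss quantile of a P&L history (a positive number is a loss)."""
        losses = sorted(-p for p in pnl)
        idx = int(level * len(losses)) - 1
        return losses[max(idx, 0)]

    def var_confidence_interval(pnl, level=0.99, n_boot=2000):
        estimates = []
        for _ in range(n_boot):
            sample = [random.choice(pnl) for _ in pnl]   # resample history with replacement
            estimates.append(hist_var(sample, level))
        estimates.sort()
        return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

    random.seed(0)
    pnl = [random.gauss(0, 10_000) for _ in range(500)]  # made-up daily P&L for the position
    print("99% one-day VaR:", round(hist_var(pnl)))
    print("95% bootstrap CI on that VaR:", [round(x) for x in var_confidence_interval(pnl)])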

Hénaff described two approaches currently being taken by academic research on valuations:

1) Adoption of statistical simulation in order to identify the risk deriving from an incorrect calibration of the model. This consists of taking historical data and testing the model, through simulations and scenarios, in order to measure the risk associated with choosing one model instead of another (a toy sketch of this idea follows the list);

2) Obtaining better quality data. A lack of quality data means that the models chosen may be inaccurate, as it becomes difficult to identify exactly which model should be used to price a product.
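As promised, a toy sketch of the first approach (entirely illustrative: made-up returns, and two deliberately simple volatility estimators standing in for competing models), where the dispersion between models over historical windows gives a rough measure of model-choice risk:

    # Toy sketch of approach 1: run competing estimators over historical windows
    # and treat the dispersion of their results as an indication of model risk.
    import math, random, statistics

    random.seed(7)
    returns = [random.gauss(0.0002, 0.012) for _ in range(750)]   # made-up daily returns

    def vol_equal_weight(window):
        return statistics.stdev(window) * math.sqrt(252)

    def vol_ewma(window, lam=0.94):
        var = window[0] ** 2
        for r in window[1:]:
            var = lam * var + (1 - lam) * r ** 2
        return math.sqrt(var * 252)

    models = {"equal_weight": vol_equal_weight, "ewma": vol_ewma}

    # Roll a 250-day window through history and record the spread between models
    spreads = []
    for start in range(0, len(returns) - 250, 25):
        window = returns[start:start + 250]
        estimates = [m(window) for m in models.values()]
        spreads.append(max(estimates) - min(estimates))

    print("average inter-model spread in annualised vol:", round(statistics.mean(spreads), 4))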

 

Model risk, which as noted above was previously considered an operational risk, now becomes extremely important as quantifying it can free up capital. Hénaff suggested that the key is to find for model risk the equivalent of what VaR is for market risk: a normalised measure. He also spoke about the concept of a "model validation protocol", giving the example of what happens in the pharmaceutical and biological sectors: before a new pill is launched onto the market, it is tested several times.

Whilst in finance products are just delivered with their final valuation, the pharmaceutical sector provides a "protocol" which describes the calculations, analysis and processes used to arrive at the final value, and its systems are organised to provide a report showing all of the underlying detail. To reduce risk, valuation should be a pre-trade process and not a post-trade one.

This week, the A-Team Group published a valuations benchmarking study which shows how buy-side institutions are turning more and more often to third-party valuations, driven mainly by risk management, regulation and client needs. Many of the institutions interviewed also said that they will increase their spending on technology to automate and improve the pricing process, as well as on data source integration and workflow.

This is in line with what was said at the event I attended, and was confirmed by the technology representatives speaking at the roundtable.

I would like to end with what Hénaff said: there can’t be a truly independent valuation without transparency of the protocols used to get to that value.

Well, Rome wasn't built in a day (and as it is my city we're speaking about, I can say there is still much to build, but let's not get into that!), but there is a great debate going on, meaning that financial institutions are aware of the need to take a step forward. Much is being said about the need for more transparency and a better understanding of complex, structured financial products, and there is still a lot to debate. Easier said than done I guess but, as Napoleon would say, victory belongs to the most persevering!

20 October 2010

Analytics Management by Sybase and Platform

I went along to a good event at Sybase New York this morning, put on by Sybase and Platform Computing (the grid/cluster/HPC people, see an old article for some background). As much as some of Sybase's ideas in this space are competitive to Xenomorph's, some are very complementary, and I like their overall technical and marketing direction in focussing on the issue of managing data and analytics within financial markets (given that direction I would, wouldn't I?...). Specifically, I think their marketing pitch based on moving away from batch to intraday risk management is a good one, but one that many financial institutions are unfortunately (?) a long way away from.

The event started with a decent breakfast, a wonderful sunny window view of Manhattan, and then proceeded with the expected corporate marketing pitch for Sybase and Platform - this was OK but, to be critical (even of some of my own speeches), there is only so much you can say about the financial crisis. The presenters described two reference architectures that combined Platform's grid computing technology with Sybase RAP and the Aleri CEP engine, and from these two architectures they outlined four use cases.

The first use case was for strategy back-testing. The architecture for this looked fine, but some questions were raised from the audience about the need for distributed data caching within the proposed architecture to ensure that data did not become the bottleneck. One of the presenters said that distributed caching was one option, although data caching (involving "binning" of data) can limit the computational flexibility of a grid solution. The audience member also added that when market data changes, this can cause temporary but significant issues of cache consistency across a grid as the change cascades from one node to another.

Apparently a cache could be implemented in the Aleri CEP engine on each grid node, or, as the Platform representative said, it is also possible to hook a client's own C/C++ solution into Platform to achieve this, and their "Data Affinity" offering was designed to assist with this type of issue. In summary, their presentation would have looked better with the distributed caching illustrated, in my view, and it begged the question as to why they did not have an offering or partner in this technical space. To be fair, when asked whether the architecture had any performance issues in this way, they said that for the use case they had, no it didn't - so on that simple and fundamental aspect they were covered.
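None of the vendor internals were shown, so purely to illustrate the cache-consistency concern in generic terms (no vendor APIs implied, all names hypothetical), here is a sketch of a per-node cache keyed on a market data version number, so that a node can detect when the master data has moved on:

    # Generic sketch of the cache-consistency concern (no vendor APIs implied):
    # each grid node keeps a local cache of market data keyed on a version number,
    # and refreshes from the master store when its version falls behind.
    class MasterMarketData:
        def __init__(self):
            self.version = 0
            self.data = {}

        def update(self, key, value):
            self.version += 1
            self.data[key] = value

    class NodeCache:
        def __init__(self, master):
            self.master = master
            self.local_version = -1
            self.local_data = {}

        def get(self, key):
            # If the master has published a newer version, refresh the local copy
            if self.local_version != self.master.version:
                self.local_data = dict(self.master.data)
                self.local_version = self.master.version
            return self.local_data.get(key)

    master = MasterMarketData()
    master.update("EURUSD", 1.0850)
    node = NodeCache(master)
    print(node.get("EURUSD"))      # 1.085
    master.update("EURUSD", 1.0920)
    print(node.get("EURUSD"))      # refreshed to 1.092 on the next read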

They had three use cases for the second architecture: one was intraday market risk, one was counterparty risk exposure and one was intraday option pricing. On the option pricing case, there was some debate about whether the architecture could "share" real-time objects such as zero curves, volatility surfaces etc. Apparently this is possible, but again it would have benefited from being illustrated first as an explicit part of the architecture.

There was one question about the usage of the architecture applied to transactional problems, and as usual for an event full of database specialists there was some confusion as to whether we were talking about database "transactions" or financial transactions. I think it was the latter, but this wasn't answered too clearly - then again, neither was the question asked very clearly I guess. Maybe they could have explained the counterparty exposure use case a bit more to see if this met some of the audience member's needs.

The latter question on transactions got a conversation going about resiliency within the architecture, given that the Sybase ASE database engine is held in-memory for real-time updates whilst the historic data resides on shared disk in Sybase IQ, their column-based database offering. Again, full resilience is possible across the whole architecture (Sybase ASE, IQ, Aleri and the Symphony Grid from Platform) but this was not illustrated this time round.

Overall a good event with some decent questions and interaction.

21 May 2010

Counterparty Event

I went along to a morning panel on counterparty data management on Tuesday, sponsored by GoldenSource, Avox and Interactive Data, and hosted by Virginie O'Shea of the A-Team. Counterparty data obviously has a very high profile currently in light of recent events, however the advice from the panel fundamentally seemed to be get the basics of data management right (ownership, control, consistency, quality, transparency), rather than anything radically new.

There was some debate about the possible extension of BIC (Bank Identifier Code) to be used more generally as a standard for a unique business entity identifier - this seemed to be received well but there were concerns that such an initiative would not solve the problem but rather become an addition to the already complex entity-mapping process.

The "Data Utility" from the ECB was also debated, and it was refreshing to here some negative (realistic?) things said about it, such as the concern raised by Interactive that this might involve huge public spend without necessarily understanding why a new government sponsored entity would be able to do better than existing data providers. Obviously a data provider would say that, but I have to agree, it seems there is too much focus on having a data utility and not looking at the different options for solving industry data issues (one option obviously being a data utility, but lets not pre-package the problem with a solution but more of that in later posts...).

For more detail on the event, then take a look at Virginie's blog post.

09 December 2008

RiskMinds - from Blame to Bubble Indices...

I am attending the RiskMinds Conference in Geneva this week. Given what has happened over the past year, its somewhat intellectual name seems less appropriate than it once did, but I guess not many of us are smelling of roses on that point...

There seems to be very good attendance, with the main hall full to overflowing for the first full day of the conference - unsurprisingly, I think many people are looking for answers (from "what did I do wrong?" to "who can I blame?"). From a quick survey of the attendees, there seems to be no doubt that the regulators and the credit rating agencies are the favoured candidates for blame.

Robert Shiller (author of Irrational Exuberance) gave the opening talk on the current credit crisis and what to do about it. He made the point that behavioural finance (stock market psychology) is becoming much more integrated with financial markets theory, and put forward the positive point that financial theory needs to be expanded to encompass what we have experienced over the past year, not that all financial theory should be thrown away (a jibe at Taleb on this point?).

Much of Professor Shiller's talk was spent illustrating various "bubbles" in real asset prices in various markets against long-run trends, usually involving a comparison with the data of the Great Depression of the 1930s, and an occasional mention of his book (I haven't read it (yet) but I would guess it spends a lot of time on bubbles too). He is very keen on the democratisation of finance, more particularly of financial advice (it would seem that the FSA has been listening in the UK, with the recent action against commission-based financial advisors).

He also proposes greater usage of macro-economic indices and related derivatives to make the risks of house price falls, inflation, economic growth, employment etc. more transparent to all and to allow easier hedging of these risks. He raised the eyebrows of many banking staff by proposing mortgages whose payments went down when these factors moved against a house owner (with the originator hedging these risks using futures on the indices he proposes). He was not so clear about what should happen when these factors moved in favour of the house owner!

One thought struck me through the talk: if it is relatively easy to illustrate and calculate these real asset price bubbles as Professor Shiller did, then why not go further than just having indices on direct macro-economic variables and have indices based on these "bubble" calculations? If everyone could see that the "bubble" index for a particular risk factor was high then you could hedge your "irrational exuberance", or at the very least there would be a transparent indicator that a market was moving into dangerous price territory. Stupid idea? Maybe, but if it has legs please remember you heard the nickname "Aero" for the cocoa index here first!...
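Just to make the half-serious suggestion a little more concrete, here is a toy "bubble index" of my own (entirely illustrative, with made-up prices): the deviation of the latest log price from its recent long-run level, expressed as a z-score, so that high readings flag a market running well above its own history.

    # Toy "bubble index" (entirely illustrative): deviation of the latest log price
    # from its long-run level, expressed as a z-score over a rolling window.
    import math, statistics

    def bubble_index(prices, window=120):
        logs = [math.log(p) for p in prices]
        if len(logs) < window:
            return None
        recent = logs[-window:]
        level = statistics.mean(recent)
        spread = statistics.stdev(recent)
        return (logs[-1] - level) / spread if spread else 0.0

    # Made-up price history with an accelerating run-up at the end
    prices = [100 * (1.0005 ** t) for t in range(300)] + [120 * (1.01 ** t) for t in range(60)]
    print(round(bubble_index(prices), 2))   # a high reading flags the run-up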

01 October 2008

Here today. Where tomorrow?

These may be the words on the lips of many bankers today, as they survey the continuing turmoil in global financial markets. In fact, this was the incredibly apposite tagline on a recent magazine advertisement for a major bank which (maybe unsurprisingly) was subsequently nationalised.

In the fluid (many would say "bloody") landscape of financial services, with the next merger or acquisition just around the corner, data integration is now, more than ever, a growing challenge. Accompanying this activity is the ever-growing need for consistency, accuracy, transparency and control of both the data itself and the movement of that data.

Data architecture itself is an evolving discipline and one approach worth looking at is data federation – deftly described in an article by Dain Hansen. Basically, the approach is to leave the data where it is but aggregate it into a single view, available as a service to your applications. It is an approach that Xenomorph has advocated for many years, going back to our founding days in the mid-90s, with the normalized database driver approach implemented in our Connectivity Services.

Hansen’s article explains both the advantages (simplicity, no need to copy or synchronize) and the disadvantages (performance) of this approach, and argues for a solution that incorporates both federation and consolidation of data. He shows that it is possible to architect a system that will provide consistency and control as well as agility.
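As a minimal sketch of the federation idea (entirely hypothetical sources and field names, not Xenomorph's or Hansen's implementation), the example below leaves the data in its source systems and builds a single valued-position view on demand rather than copying everything into a central store:

    # Minimal sketch of data federation (hypothetical sources and field names):
    # the data stays where it is, and a thin service aggregates it into one view.
    class TradeDatabase:
        def positions(self, book):
            # in reality a query against the trading system
            return [{"isin": "XS0000000001", "quantity": 5_000_000}]

    class MarketDataService:
        def price(self, isin):
            # in reality a call out to a market data platform (price as % of par)
            return {"XS0000000001": 98.75}.get(isin)

    class FederatedPositionView:
        """Single view over several sources, assembled at query time."""
        def __init__(self, trades, market_data):
            self.trades = trades
            self.market_data = market_data

        def valued_positions(self, book):
            for pos in self.trades.positions(book):
                price = self.market_data.price(pos["isin"])
                yield {**pos, "price": price, "value": pos["quantity"] * price / 100}

    view = FederatedPositionView(TradeDatabase(), MarketDataService())
    print(list(view.valued_positions("CREDIT_BOOK_1")))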

It’s difficult to say whether better data management would have assisted the world’s banks in avoiding their current troubles, but greater transparency of where exactly their exposures lay would certainly have helped.

19 June 2008

Industry Trends - Larry Tabb at Sifma NY

Main points from a good talk given by Larry Tabb of the Tabb Group last week in New York:

  • The total loss so far from the sub-prime/credit crisis world-wide is around the USD 280 Billion mark – although the final figure could be as high as USD 400 Billion as in some cases the dust has not fully settled.
  • With a few notable exceptions, the majority of these losses have been incurred by the major US banks rather than their European counterparts.
  • The effect of the situation has hit a number of asset classes; the worst by far is fixed income, where trading is down by 80% at the moment, but it is coming back. Equities – particularly US – have also been hit pretty hard, but again they are showing resilience and starting to improve.
  • In terms of what is going to happen, IT spend is going to decrease from previous levels for a year or two by 10% or so, but is expected to rise after 2010.
  • The only areas where IT spend is expected to rise are communications and risk management.
  • Communications because there is much more activity in emerging markets, particularly Asia (excluding Japan - mainly India, China and HK) and also Latin America. Connectivity is needed to the exchanges within these markets so there will be IT spend to make this happen. Also, there is going to be more algorithmic trading as people try to get more from ever-tighter markets and reduce costs through lower trader headcount and paying less in broker commissions.
  • As manual trading decreases due to algo increases, brokerages may increase their commission rates to compensate and some of them are also going to introduce caps so that clients have to put so much trade traffic through them or else they won’t be interested - thus reducing numbers of clients and streamlining the process. This will force smaller firms to use smaller brokerages, thus splintering the market more.
  • Risk management spend is going to increase because firms want to understand why their models failed with respect to the sub-prime situation and why they didn’t see these huge losses coming earlier. They don’t want it to happen again, so they want tighter and better tools as a result.
  • The breakdown by asset class in algo trading is expected to change. At the moment it is largely Equity, but the biggest increases are expected to be derivatives, futures, fixed income (particularly credit), FX and a lot more OTC to make this automated rather than manual to get these exposures off the balance sheets. This obviously means that the exchanges and banks/funds etc. need the software and equipment necessary to do algo trading in these spaces which currently are quite embryonic so there will be a lot of development in this area. Of course, this means algo engines and risk management/portfolio software will need to be much more adept at handling mixed assets and not just equities.
  • All firms, banks and hedge funds alike, are going to place a lot more of their investments overseas rather than in the US, due to higher expected returns and less risk, so all of their systems will need global capability in terms of data acquisition and trading activities, with consolidation and risk management.
  • Larry also mentioned that he had been to TradeTech, and to a similar event in Asia, at both of which he had been involved in panel discussions with all of the exchange heads (e.g. LSE) – and when asked what the biggest headaches were, they all said that clearing was a massive problem, with the process being splintered and disparate, and that if trading levels are going to increase this process must become a lot more streamlined.

The last point is very topical in the news at the moment, given the LSE is assessing whether to get involved in doing its own clearing, plus the regulators' desire to get some form of clearing in place for the major classes of OTC derivatives.

28 April 2008

TradeTech Paris 2008 - Chi-X leads the waiting game

We exhibited at the TradeTech Paris event last week - this is a mainly equities/trade execution event and as such most of the speakers were playing the waiting game and not saying much. Everyone seems to want to see more of the new trading venues come on line before they put out any opinions of substance.

The main (only?) news item was the rapid uptake of Chi-X and its share of trading volumes against the exchanges. There was some "competitive" banter between Peter Randall of Chi-X and Eli Lederman of Project Turquoise (Chi-X saying that Chi-X is a live business and not just a "project"). The LSE was there too, not doing a great job of defending the record of exchanges - too much emphasis on defending their existence based on the past rather than the future, in my opinion.

17 March 2008

Higher quality data from the front office?

Sungard had a good event on Thursday night, with four risk managers taking the stage for a "thought leadership" seminar entitled "Regulatory Impact of Market Events" (if the advert is still around on their site, see http://www.sungard.com/ADAPTIV/default.aspx?id=4678&formAction=takeit&formid=48)

The Dresdner risk manager (Ted Macdonald, a good speaker) emphasised that data quality is a real issue for risk management, and all the participants thought that risk managers should spend more time on risk and less on validating/cleaning data (no great surprises there then, but interesting to hear it validated again as an issue).

He suggested that more pressure should be put on the front office to get data right first time (as opposed to leaving everyone else to sort out the mess!), even going so far as to suggest charging the front office for each wrongly-booked trade in the trading and risk management systems - not sure how that would go down with the trading desks, but it sounds a good approach if you could agree (and unambiguously measure) these mistakes!

It seems like transfer-costing is becoming a recurring theme - it was also recently mentioned by a grid computing specialist from Credit Suisse, talking about "metering" each desk for the amount of compute power used... anyone out there retraining as a management accountant? It sounds like the banks will be hiring soon!
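Once the measurement itself is agreed, the mechanics of this kind of transfer-costing could be trivial - a sketch of my own below, with all desk names and rates made up purely for illustration:

    # Trivial sketch of the transfer-costing idea (all rates and names made up):
    # charge each desk for wrongly-booked trades and for the compute it consumes.
    BOOKING_ERROR_CHARGE = 500.0      # per wrongly-booked trade
    COMPUTE_CHARGE = 0.10             # per core-hour on the grid

    booking_errors = {"rates_desk": 12, "credit_desk": 3}
    core_hours = {"rates_desk": 40_000, "credit_desk": 125_000}

    for desk in sorted(set(booking_errors) | set(core_hours)):
        charge = (booking_errors.get(desk, 0) * BOOKING_ERROR_CHARGE
                  + core_hours.get(desk, 0) * COMPUTE_CHARGE)
        print(f"{desk}: monthly internal charge {charge:,.2f}")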
