
3 posts from April 2012

25 April 2012

Dragon Kings, Black Swans and Bubbles

"Dragon Kings" is a new term to me, and the subject on Monday evening of a presentation by Prof. Didier Sornette at an event given by PRMIA. Didier has been working on the diagnosis on financial markets bubbles, something that has been of interest to a lot of people over the past few years (see earlier post on bubble indices from RiskMinds and a follow up here).

Didier started his presentation by talking about extreme events and how many of them have defined different epochs in human history. He placed a worrying question mark over where the European Sovereign Debt Crisis will sit in history, and showed a pair of particularly alarming graphs of the "Perpetual Money Machine" of financial markets. One chart plotted savings and rate of profit for the US, EU and Japan, with profit rising and savings falling from about 1980 onwards; a similar diverging chart showed consumption rising and wages falling in the US since 1980. Didier puts this down to finance allowing this increasing debt to occur and to perpetuate the "virtual" growth of wealth.

Corn, Obesity and Antibiotics - He put up one fascinating slide relating to positive feedback in complex systems and, effectively, the law of unintended consequences. After World War II, the US Government wanted to secure the US food supply and subsidised the production of corn. This resulted in an oversupply for humans -> so the excess corn was fed to cattle -> who can't digest starch easily -> who developed E. coli infections -> which prompted the use of antibiotics in cattle -> which prompted antibiotics as growth promoters for food animals -> which resulted in cheap meat -> leading to unsustainable meat protein consumption and under-consumption of vegetable protein. Whilst that is a lot of things to pull together, ultimately Didier suggested that the simple decision to subsidise corn had led to the current epidemic in obesity and the losing battle against bacterial infections.

Power Laws - He then touched briefly upon power law distributions, which are observed in many natural phenomena (city sizes, earthquakes etc.) and seem to explain the peaked means and long tails of financial distributions far better than the lognormal distribution of traditional economic theory (I need to catch up on some Mandelbrot I think). He explained that whilst many observations (city sizes for instance) fit a power law, there were observations that did not fit this distribution at all (in the cities example, many capital cities are much, much larger than a power law predicts). Didier then moved on to describe Black Swans, characterised as unknown, unknowable events that occur exogenously ("wrath of God" type events), with the one unique investment strategy of going long put options.
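
As an aside of my own rather than from the talk, the key difference between the two distributions is how fast the tails decay: a power law (Pareto) tail falls off only polynomially, whereas a lognormal tail falls off much faster for large x, so a power law assigns far higher probability to extreme events:

    p_{\text{power}}(x) = \frac{\alpha\, x_{\min}^{\alpha}}{x^{\alpha+1}} \quad (x \ge x_{\min}), \qquad
    p_{\text{lognormal}}(x) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln x - \mu)^{2}}{2\sigma^{2}}\right)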

Didier said that Dragon Kings are not Black Swans; the major crises we have observed are "endogenous" (i.e. they come from inside the system), do not conform to a power law distribution and:

  • can be diagnosed in advance
  • can be quantified
  • have (some) predictability

Diagnosing Bubbles - In terms of diagnosing Dragon Kings, Didier listed the following criteria that we should be aware of (later confirmed as a very useful and practical list by one of the risk managers on the panel; a rough sketch of tracking two of these indicators follows the list):

  • Slower recovery from perturbations
  • Increasing (or decreasing) autocorrelation
  • Increasing (or decreasing) cross-correlation with external driving
  • Increasing variance
  • Flickering and stochastic resonance
  • Increased spatial coherence
  • Degree of endogeneity/reflexivity
  • Finite-time singularities
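
As an illustration only (my sketch, not Sornette's methodology), rising autocorrelation and rising variance are classic "critical slowing down" signals (closely related to slower recovery from perturbations) and can be tracked on a return series with simple rolling-window statistics:

    import pandas as pd

    def early_warning_indicators(returns: pd.Series, window: int = 250) -> pd.DataFrame:
        """Rolling lag-1 autocorrelation and variance of a return series.
        A sustained rise in both is one of the early-warning signatures
        listed above. Illustrative sketch only."""
        lag1_autocorr = returns.rolling(window).apply(
            lambda x: x.autocorr(lag=1), raw=False)
        variance = returns.rolling(window).var()
        return pd.DataFrame({"lag1_autocorr": lag1_autocorr,
                             "variance": variance})

    # usage, given a pandas Series of daily prices:
    # indicators = early_warning_indicators(prices.pct_change().dropna())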

Didier finished his talk by describing the current work that he and his team at ETH Zurich are doing with real and ever-larger datasets, testing whether bubbles can be detected before they end and whether the prediction of the timing of their end can be improved.

So in summary, Didier's work on Dragon Kings involves the behaviour of complex systems, how the major events in these systems come from inside (e.g. the flash crash), and how positive feedback and system self-configuration/organisation can produce statistical behaviour well beyond that predicted by power law distributions, and certainly beyond that predicted by traditional equilibrium-based economic theory. Didier mentioned how the search for returns was producing more leverage and an ever more connected economy and financial markets system, and how this interconnectedness is unhealthy from a systemic risk point of view, particularly if overlaid by homogeneous regulation forcing everyone towards the same investment and risk management approaches (see the RiskMinds post for some early concerns on this and more recent ideas from Baruch College).

Panel Debate - The panel debate that followed was interesting. As mentioned, one of the risk managers confirmed the above statistical behaviours as useful in predicting that markets were unstable, and that detecting such behaviours across many markets and asset classes was an early warning sign of a potential crisis that could be acted upon. I thought a good point was made about the market post-crash: its behaviour has changed now that many big risk takers have been eliminated (backtesters beware!). It seems Bloomberg are also looking at some regime-switching models in this area, so it is worth looking out for what they are up to. Another panellist talked about the need to link these investigations across asset classes and markets, and emphasised the role of leverage in crisis events. One of the quants on the panel put forward a good analogy for "endogenous" vs. "exogenous" impacts on systems (comparing Dragon King events to Black Swans), and I paraphrase somewhat to add some drama to the end of this post, but here goes: "when a man is pushed off a cliff, how far he falls is not determined by the size of the push; it is determined by the size of the cliff he was standing on".

 

 

13 April 2012

CVA - a business driver for breaking down asset silos

Xenomorph's analytics partner Numerix sponsored a PRMIA event at New York's Harvard Club this week on Credit Valuation Adjustment (CVA). The event also involved Microsoft, with a surprisingly relevant contribution to the evening on CVA and "Big Data" (I still don't feel comfortable losing the quotes yet, maybe soon...). Credit Valuation Adjustment seems to be the hot topic in risk management and pricing at the moment, with Numerix's competitor Quantifi having held another PRMIA event on CVA only a few months back. 

The event started with an introduction to CVA from Aletta Ely of JP Morgan Chase. Aletta started by defining CVA as the market value of counterparty credit risk. I am new to CVA as a topic, and my own experience of any kind of valuation adjustment for an instrument goes back to JP Morgan in the mid-90s (those of you under 30 are allowed to start yawning at this point...). We used to maintain separate risk-free curves (what are they now?) and counterparty spread curves, which would be combined to discount the cashflows in the model.
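
In other words (a simplified sketch of my own, not anything shown on the night), each cashflow was discounted at the risk-free rate plus the counterparty spread, instrument by instrument:

    PV_{\text{adjusted}} = \sum_{i} CF_{i}\, e^{-\left(r(t_{i}) + s(t_{i})\right) t_{i}}

where r(t) is the risk-free zero rate, s(t) the counterparty spread and CF_i the cashflow at time t_i (continuous compounding assumed for brevity).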

Whilst such an adjustment could be calibrated to come up with a valuation that is better than having no counterparty risk modelled at all, one of the key ways in which CVA differs is that the credit valuation adjustment needs to be done in the context of the whole portfolio of exposures to the counterparty, and not in isolation instrument by instrument. The fact that a trader in equity derivatives is long exposure to a counterparty cannot be looked at in isolation from a short exposure to a portfolio of swaps with the same counterparty on the fixed income desk.

Put another way, CVA only has context if we stand to lose money when our counterparty defaults, and so an aggregated approach is needed to calculate the size of the positive exposures to the counterparty over the lifetime of the portfolio. Also, given this one-sided payoff aspect of the CVA calculation, instrument types such as vanilla interest rate swaps suddenly move from being relatively simple instruments that can be priced off a single curve to instruments that need optionality to be modelled for the purposes of CVA.
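
For reference, the standard textbook expression for unilateral CVA (my addition, not from the presentations) makes this portfolio-level, one-sided nature explicit: it is the expected discounted positive exposure to the counterparty, weighted by the counterparty's default probability and the loss given default:

    \text{CVA} = (1 - R) \int_{0}^{T} \mathbb{E}\!\left[ D(t)\, \max\big(V(t),\,0\big) \right] \mathrm{d}PD(t)
    \approx (1 - R) \sum_{i} D(t_{i})\, EE(t_{i})\, \Delta PD_{i}

where R is the recovery rate, D(t) the discount factor, V(t) the netted portfolio value against the counterparty, EE(t) the expected positive exposure and ΔPD_i the marginal default probability in each period.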

So why has CVA become such a hot topic at the banks? Prior to the 2008/2009 crisis CVA was already around (credit risk has existed for a long time, regardless of whether you regulate or report on it), but given that bank credit spreads were at that time consistently low and stable, CVA had minimal effect on valuations and P&L. Obviously with the collapse of Lehman this changed, and CVA has been pushed into prominence since it has directly affected P&L in a significant manner for many institutions (for example, see these FT articles on Citi and JPMorgan).

A key and I think positive point for the whole industry is that CVA requires a completely multi-asset view, and given the regulatory focus on CVA and capital adequacy, it will as a result drive banks away from a siloed approach to data and valuation management. If capital is scarcer and more costly, then banks will invest in understanding both their aggregate CVA and the incremental contribution to CVA of a new trade in the context of all exposures to the counterparty. Looking at incremental CVA, you can also see that this drives investment in real-time or near-real-time CVA calculation, which brings me on to the next talks of the evening: Numerix on CVA calculation methods and a surprisingly good presentation on CVA and "Big Data" from David Cox of Microsoft.

Denny Yu of Numerix did a good job of explaining some of the methods of calculating CVA. In addition to being cross-asset, with all the implications that has for needing the ability to price anything, CVA is expensive both in data and in computation. It requires both the simulation of counterparty default scenarios through time and the valuation of cross-asset portfolios at different points in time. Denny mentioned techniques such as American Monte Carlo to reduce the computation needed, by using the same simulation paths for both default scenarios and valuation.
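
To make the brute-force version of that calculation concrete (a toy sketch of the general simulation approach, with hypothetical function names - not Numerix's American Monte Carlo), expected exposure is just the average positive netted portfolio value across simulated paths at each time step, which then feeds the discretised CVA sum:

    import numpy as np

    def expected_positive_exposure(portfolio_values: np.ndarray) -> np.ndarray:
        """portfolio_values has shape (n_paths, n_timesteps): the simulated
        netted portfolio value against the counterparty on each path.
        Returns EE(t), the average positive exposure at each time step."""
        return np.maximum(portfolio_values, 0.0).mean(axis=0)

    def cva(ee: np.ndarray, discount: np.ndarray, marginal_pd: np.ndarray,
            recovery: float = 0.4) -> float:
        """Discretised unilateral CVA: (1 - R) * sum_i D(t_i) * EE(t_i) * dPD_i."""
        return (1.0 - recovery) * float(np.sum(discount * ee * marginal_pd))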

So on to Microsoft. I have seen some appalling presentations on "Big Data" recently, mainly from the larger software and hardware companies trying to jump on the marketing bandwagon (main marketing premise: the data problems you have are "Big"... enough said I hope). Surprisingly, David Cox of Microsoft gave a very good presentation around the computational challenges of CVA, and how technologies such as Hadoop take the computational power closer to the data that needs acting on, bringing the analytics and data together. (As an aside, his presentation was notably "Metro" GUI in style, something that seems to work well for PowerPoint where the slide is very visual and more emphasis is put on the speaker to overlay the information.) David was obviously keen to talk up some of the cloud technology that Microsoft is currently pushing, but he knew the CVA business topic well and did a good job of telling a good story around CVA, "Big Data" and cloud technologies. Fundamentally, his pitch was for banks and other institutions to become "Analytic Enterprises" with a common, scalable and flexible infrastructure for data management and analysis.

In summary it was a great event - the Harvard Club is always worth a visit (bars and grandiose portraits as expected, but also a barber shop in the basement and squash courts in the loft!), the wine afterwards was tolerably good and the speakers were informative without over-selling their products or company. A quick thank you to Henry Hu of IBM for transportation on the night, and thanks also to Henry for sending through this link to a great introductory paper on CVA and credit risk from King's College London. Whilst the title of the King's paper is a bit long and scary, it takes the form of a dialogue between a new employee and a CVA expert, and as such is very readable with lots of background links.

 

 

 

04 April 2012

NoSQL - the benefit of being specific

NoSQL is an unfortunate name in my view for the loose family of non-relational database technologies associated with "Big Data". NotRelational might be a better description (catchy eh? thought not...), but either way I don't like the negatives in both of these titles, partly on aesthetics and partly because they could be taken to imply that these technologies are critical of the SQL and relational technology that we have all been using for years. For those of you who are relatively new to NoSQL (which is most of us), this link contains a great introduction. Also, if you can put up with a slightly annoying reporter, the Cloudera CEO is worth a listen on YouTube.

In my view NoSQL databases are complementary to relational technology, and as many have said, relational tech and tabular data are not going away any time soon. Ironically, some of the NoSQL technologies need more standardised query languages to gain wider acceptance, and there are no prizes for guessing which existing query language will be mined for ideas in putting these new languages together (at this point I will offer SPARQL as an example, not that this should be taken to mean that I know a lot about it, but that has never stopped me before...)

Going back into the distant history of Xenomorph and our XDB database technology: when we started in 1995, the fact that we used a proprietary database technology was sometimes a mixed blessing in sales. The XDB database technology we had at the time was built around answering one specific question: "give me all of the history for this attribute of this instrument as quickly as possible".

The risk managers and traders loved the performance aspects of our object/time series database - I remember one client with a historical VaR calc that we got running in around 30 minutes on a laptop PC, when it had been taking 12 hours in an RDBMS on a (then quite meaty) Sun SPARC box. It was a great example of how specific database technology designed for specific problems could offer performance that was not possible from more generic relational technology. The use of the database for these problems was never intended as a replacement for relational databases dealing with relational, "set-based" problems though; it was complementary technology designed for very specific problem sets.
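
To make that query pattern and calculation concrete, here is a toy version (the names and structure are mine for illustration, not the TimeScape API): a store keyed by instrument and attribute returning the full history, feeding a simple historical-simulation VaR:

    import numpy as np

    # Toy store keyed by (instrument, attribute) -> time-ordered values,
    # mimicking "all of the history for this attribute of this instrument".
    history = {
        ("VOD.L", "close"): np.array([168.2, 169.0, 167.5, 170.1, 171.3, 169.8]),
    }

    def historical_var(prices: np.ndarray, confidence: float = 0.99) -> float:
        """One-day historical-simulation VaR, returned as a positive loss fraction."""
        returns = np.diff(prices) / prices[:-1]
        return -np.percentile(returns, (1.0 - confidence) * 100.0)

    var_99 = historical_var(history[("VOD.L", "close")])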

The technologists were much more reserved: some were more accepting and knew of products such as FAME around then, but others were sceptical over the use of non-standard DBMS tech. Looking back, I think this attitude was partly due to a desire to build their own vector/time series store, but also, understandably (though incorrectly), to a concern that our proprietary database would require specialist database admin skills. Not that the mainstream RDBMS systems were cheap or simple to maintain then (Oracle DBA anyone?), but many proprietary database systems with proprietary languages can require expensive and ongoing specialist consultant support even today.

The feedback from our clients and sales prospects - that our database performance was liked, but the proprietary database admin aspects were sometimes a sales objection - caused us to take a look at hosting some of our vector database structures in Microsoft SQL Server. A long time back we had already implemented a layer within our analytics and data management system where we could replace our XDB database with other databases, most notably FAME. You can see a simple overview of the architecture in the diagram below, where other non-XDB databases (and datafeeds) can be "plugged in" to our TimeScape system without affecting the APIs or indeed the object data model being used by the client:

[Diagram: TimeScape Data Unification Layer]
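
Conceptually the layer is just a common interface that each back end implements; a minimal sketch of the idea (hypothetical class and table names, not the actual TimeScape API) might look like this:

    from abc import ABC, abstractmethod
    from typing import Sequence

    class TimeSeriesStore(ABC):
        """Common interface the data unification layer programs against; each
        back end (XDB, FAME, SQL Server, a datafeed...) supplies an implementation."""

        @abstractmethod
        def get_history(self, instrument: str, attribute: str) -> Sequence[float]:
            ...

    class SqlServerStore(TimeSeriesStore):
        """Illustrative back end hosting the vector structures in SQL Server."""

        def __init__(self, connection):
            self.connection = connection

        def get_history(self, instrument: str, attribute: str) -> Sequence[float]:
            # Hypothetical schema: one row per observation, ordered by date.
            cursor = self.connection.cursor()
            cursor.execute(
                "SELECT value FROM history "
                "WHERE instrument = ? AND attribute = ? ORDER BY observation_date",
                (instrument, attribute))
            return [row[0] for row in cursor.fetchall()]

The client-facing API and object data model stay the same whichever store is plugged in underneath.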

Using this layer, we then worked with the Microsoft UK SQL team to implement and host some of our vector database structures inside Microsoft SQL Server. As a result, we ended up with a database engine that maintained the performance aspects of our proprietary database, but offered clients a standards-based DBMS for maintaining and managing the database. This is going back a few years, but we tested this database at Microsoft with a 12TB database (since this was then the largest disk they had available), and still it contained 500 billion tick data records, which even today could be considered "Big" (if indeed I fully understand "Big" these days). So you can see some of the technical effort we put into making non-mainstream database technology more acceptable to an audience adopting a "SQL is everything" mantra.

Fast forward to 2012, and the explosion of interest in "Big Data" (I guess I should drop the quotes soon?) and in NoSQL databases. It finally seems that, due to the use of these technologies on internet data problems that no relational database could address, the technology community is much more willing to accept non-RDBMS technology where the problem being addressed warrants it. For me and Xenomorph it has been a long (and mostly enjoyable) journey from 1995 to 2012, and it is great to see a more open-minded approach being taken towards database technology and a recognition of the benefits of specific databases for (some) specific problems. Hopefully some good news on TimeScape and NoSQL technologies will follow in coming months - this is an exciting time to be involved in analytics and data management in financial markets, and this tech couldn't come a moment too soon given the new reporting requirements being requested by regulators.

 

 

 
