21 posts categorized "Statistics"

11 December 2013

Aqumin visual landscapes for TimeScape

Very pleased that our partnership with Aqumin and their AlphaVision visual landscapes was announced this week (see the press release from Monday). Further background and visuals can be found at the following link, and for those of you who like instant gratification, please find a sample visual below showing some analysis of the S&P 500.

[Image: AlphaVision visual landscape analysis of the S&P 500]

27 November 2013

Putting the F# in Finance with TimeScape

Quick thank you to Don Syme of Microsoft Research for including a demonstration of F# connecting to TimeScape running on the Windows Azure cloud at the F# in Finance event this week in London. F# is a functional language that is developing a large following in finance due to its applicability to mathematical problems, its ease of development and its performance. You can find some testimonials on the language here.

Don has implemented a proof-of-concept F# type provider for TimeScape. If that doesn't mean much to you, then the practical example below will help, showing how the financial instrument data in TimeScape is exposed at runtime into the F# programming environment. I guess the key point is just how easy it looks to code with data, since effectively you get guided through what is (and is not!) available as you are coding (sorry if I sound impressed, I spent a reasonable amount of time writing mathematical C code using vi in the mid-90s - so any young uber-geeks reading this, please make allowances as I am getting old(er)...). Example steps are shown below:

Referencing the Xenomorph TimeScape type provider and creating a data context: 

[screenshot]

Connecting to a TimeScape database:

[screenshot]

Looking at categories (classes) of financial instrument available:

[screenshot]

Choosing an item (instrument) in a category by name:

[screenshot]

Looking at the properties associated with an item:

[screenshot]

The IntelliSense-like behaviour above is similar to what TimeScape's Query Explorer offers, and it is great to see this implemented in an external run-time programming language such as F#. Don additionally made the point that each instrument only displays the data it individually has available, making it easy to understand what data you have to work with. This functionality is based on F#'s ability to make each item uniquely nameable, and optionally to assign each item (instrument) a unique type, where all the category properties (defined at the category schema level) that are not available for the item are hidden.
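For the curious, below is a rough sketch of what the steps above might look like once typed out. To be clear, the namespace, type and member names in this sketch are my own illustrative guesses, not the actual API of the TimeScape type provider:

```fsharp
// Illustrative sketch only - the names below are guesses, not the provider's real API.
#r "Xenomorph.TimeScape.TypeProvider.dll"   // reference the Xenomorph TimeScape type provider
open Xenomorph.TimeScape

// Create a data context and connect to a TimeScape database
type Ts = TimeScapeProvider<"Server=timescape-demo;Database=Markets">
let db = Ts.GetDataContext()

// Categories (classes) of financial instrument appear as properties,
// so IntelliSense lists what is (and is not!) available as you type
let equities = db.Categories.Equities

// Choosing an item (instrument) in a category by name
let vod = equities.``Vodafone Group PLC``

// Only the properties this particular instrument actually has are exposed
printfn "Close: %f  Volume: %f" vod.Close vod.Volume
```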

The next F# in Finance event will take place in New York on Wednesday 11 December 2013, so I hope to see you there. We are currently working on a beta program for this functionality, to be available early in the New Year, so please get in touch via [email protected] if this is of interest.

 

19 November 2013

i2i Logic launch customer engagement platform based on TimeScape

An exciting departure from Xenomorph's typical focus on data management for risk in capital markets: one of our partners, i2i Logic, has just announced the launch of their customer engagement platform for institutional and commercial banks based on Xenomorph's TimeScape. The i2i Logic team have a background in commercial banking, and have put together a platform that allows much greater interaction with the corporate clients a bank is trying to engage with.

Hosted in the cloud, and delivered to sales staff through an easy and powerful tablet app, the system enables bank sales staff to produce analysis and reports that are very specific to a particular client, based upon predictive analytics and models applied to market, fundamentals and operational data, initially supplied by S&P Capital IQ. This allows the bank and the corporate to discuss and understand where the corporate stands when benchmarked against peers across a variety of current financial and operational performance metrics, and to provide insight on where the bank's services may be able to assist in the profitability, efficiency and future growth of the corporate client.

Put another way, it sounds like the corporate customers of commercial banks are in not much better a position than us individuals dealing with retail banks, in that currently the offerings from the banks are not that engaging, are generic and are very hard to differentiate. It sounds like the i2i Logic team are on to something, so I wish them well in trying to move the industry's expectations of customer service and engagement, and would like to thank them for choosing TimeScape as the analytics and data management platform behind their solution.

 

 

04 November 2013

Risk Data Aggregation and Risk Reporting from PRMIA

Another good event from PRMIA at the Harmonie Club here in NYC last week, entitled Risk Data Aggregation and Risk Reporting - Progress and Challenges for Risk Management. Abraham Thomas of Citi and PRMIA introduced the evening, setting the scene by referring to the BCBS document Principles for effective risk data aggregation and risk reporting, with its 14 principles to be implemented by January 2016 for G-SIBs (Globally Systemically Important Banks) and December 2016 for D-SIBs (Domestically Systemically Important Banks).

The event was sponsored by SAP and they were represented by Dr Michael Adam on the panel, who gave a presentation around risk data management and the problems of having data siloed across many different systems. Maybe unsurprisingly, Michael's presentation had a distinct "in-memory" focus to it, with Michael emphasizing the data analysis speed that is now possible using technologies such as SAP's in-memory database offering "Hana".

Following the presentation, the panel discussion started with a debate involving Dilip Krishna of Deloitte and Stephanie Losi of the Federal Reserve Bank of New York. They discussed whether the BCBS document and compliance with it should become a project in itself or part of existing initiatives to comply with data-intensive regulations such as CCAR and CVA. Stephanie is on the board of the BCBS committee for risk data aggregation and she said that the document should be a guide and not a checklist. There seemed to be general agreement on the panel that data architectures should be put together not with a view to compliance with one specific regulation but more as a framework to deal with all regulation to come, a more generalized approach.

Dilip said that whilst technology and data integration are issues, people are the biggest issue in getting a solid data architecture in place. There was an audience question about how different departments need different views of risk and how these were to be reconciled/facilitated. Stephanie said that data security and control of who can see what is an issue, and Dilip agreed and added that enterprise risk views need to be seen by many, which was a security issue to be resolved.

Don Wesnofske of PRMIA and Dell said that data quality was another key issue in risk. Dilip agreed and added that the front office needs to be involved in this (data management projects are not just for the back office in isolation) and that data quality was one of a number of needs competing for resources/budget at many banks at the moment. Coming back to his people theme, Dilip also said that data quality needed intuition to be carried out successfully.

An audience question from Dan Rodriguez (of PRMIA and Credit Suisse) asked whether regulation was granting an advantage to "Too Big To Fail" organisations, in that only they have the resources to be able to cope with the ever-increasing demands of the regulators, to the detriment of smaller financial institutions. The panel did not completely agree with Dan's premise, arguing that smaller organizations were more agile and did not have the legacy and complexity of the larger institutions, so there was probably a sweet spot between large and small from a regulatory compliance perspective (I guess it was interesting that the panel did not deny that regulation was at least affecting the size of financial institutions in some way...)

Again focussing on where resources should be deployed, the panel debated trade-offs such as those between accuracy and consistency. The Legal Entity Identifier (LEI) initiative was thought of as a great start in establishing standards for data aggregation, and the panel encouraged regulators to look at doing more. One audience question was around the different and inconsistent treatment of gross notional and trade accounts. Dilip said that yes, this was an issue, but came back to Stephanie's point that what is needed is a single risk data platform that is flexible enough to be used across multiple business and compliance projects. Don said that he suggested four "views" on risk:

  • Risk Taking
  • Risk Management
  • Risk Measurement
  • Risk Regulation

Stephanie added that organisations should focus on the measures that are most appropriate to their business activity.

The next audience question asked whether the panel thought that the projects driven by regulation had a negative return. Dilip said that in his experience, yes, they do have negative returns, but this was simply a cost of being in business. Unsurprisingly maybe, Stephanie took a different view, advocating the benefits coming out of some of the regulatory projects that drove improvements in data management.

The final audience question was whether the panel thought it was possible to reconcile all of the regulatory initiatives like Dodd-Frank, Basel III, EMIR etc with operational risk. Don took a data angle to this question, talking about the benefits of big data technologies applied across all relevant data sets, and that any data was now potentially valuable and could be retained. Dilip thought that the costs of data retention were continually going down as data volumes go up, but that there were costs in capturing the data needed for operational risk and other applications. Dilip said that when compared globally across many industries, financial markets were way behind the data capabilities of many sectors, and that finance was more "Tiny Data" than "Big Data", and again he came back to the fact that people were getting in the way of better data management. Michael said that many banks and market data vendors are dealing with data in the tens of terabytes range, whereas the amount of data in the world was around 8-900 petabytes (I thought we were already just over into zettabytes, but what are a few hundred petabytes between friends...).

Abraham closed off the evening, firstly by asking the audience if they thought the 2016 deadline would be achieved by their organisation. Only 3 people out of around 50+ said yes. Not sure if this was simply people's reticence to put their hand up, but when Abraham asked further, one key concern for many was that the target would change by then - my guess is that we are probably back into the territory of the banks not implementing a regulation because it is too vague, and the regulators not being too prescriptive because they want feedback too. So a big game of chicken results, with the banks weighing up the costs/fines of non-compliance against the costs of implementing something big that they can't be sure will be acceptable to the regulators. Abraham then asked the panel for closing remarks: Don said that data architecture was key; Stephanie suggested getting the strategic aims in place but implementing iteratively towards those aims; Dilip said that deciding your goal first was vital; and Michael advised building a roadmap for data in risk.

 

 

 

 

07 May 2013

Big Data Finance at NYU Poly

I went over to NYU Poly in Brooklyn on Friday of last week for their Big Data Finance Conference. To get a slightly negative point out of the way early, I guess I would have to pose the question "When is a big data conference not a big data conference?". Answer: "When it is a time series analysis conference" (sorry if you were expecting a funny answer... but as you can see, what I occupy my time with professionally doesn't naturally lend itself to too much comedy). As I like time series analysis, this was OK, but it certainly wasn't fully "as advertised" in my view, although I guess other people are experiencing this problem too.

Maybe this slightly skewed agenda was due to the relative newness of the topic, the newness of the event and the temptation for time series database vendors to jump on the "Big Data" marketing bandwagon (what? I hear you say, us vendors jumping on a buzzword marketing bandwagon, never!...). Many of the talks were about statistical time series analysis of market behaviour and less about what I was hoping for, which was new ways in which empirical or data-based approaches to financial problems might be addressed through big data technologies (as an aside, here is a post on a previous PRMIA event on big data in risk management as some additional background). There were some good attempts at getting a cross-discipline fertilization of ideas going at the conference, but given the topic, representatives from the mobile and social media industries were very obviously missing in my view.

So as a complete counterexample to the two paragraphs above, the first speaker at the event (Kevin Atteson of Morgan Stanley) was very much on theme with the application of big data technologies to the mortgage market. Apparently Morgan Stanley had started their "big data" analysis of the mortgage market in 2008 as part of a project to assess and understand more about the potential losses that Fannie Mae and Freddie Mac faced due to the financial crisis.

Echoing some earlier background I had heard on mortgages, one of the biggest problems in trying to understand the market according to Kevin was data, or rather the lack of it. He compared mortgage data analysis to "peeling an onion", and said that going back to the time of the crisis, mortgage data at an individual loan level was either not available or of such poor quality as to be virtually useless (e.g. hard to get accurate ZIP code data for each loan). Kevin described the mortgage data set as "wide" (lots of loans with lots of fields for each loan) rather than "deep" (lots of history), with one of the main data problems being trying to match nearest-neighbour loans. He mentioned that only post-crisis have Fannie and Freddie been ordered to make individual loan data available, and that there is still no readily available linkage data between individual loans and mortgage pools (some presentations from a recent PRMIA event on mortgage analytics are at the bottom of the page here for interested readers).
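As an aside, the nearest-neighbour matching Kevin mentioned is conceptually quite simple, even if doing it well at mortgage-market scale is not. A minimal sketch (with made-up loan fields, weights and data, purely for illustration) might look something like this:

```fsharp
// Toy nearest-neighbour loan matching - field names, weights and data are hypothetical.
type Loan = { Rate: float; Fico: float; Ltv: float }

// Weighted Euclidean distance over a few (crudely normalised) loan characteristics;
// real work would use many more of the "wide" set of loan attributes.
let distance (a: Loan) (b: Loan) =
    sqrt (((a.Rate - b.Rate) * 10.0) ** 2.0
          + ((a.Fico - b.Fico) / 100.0) ** 2.0
          + (a.Ltv - b.Ltv) ** 2.0)

// Find the historical loan closest to a target loan
let nearestNeighbour (target: Loan) (history: Loan list) =
    history |> List.minBy (distance target)

let newLoan = { Rate = 0.045; Fico = 690.0; Ltv = 0.85 }
let history =
    [ { Rate = 0.040; Fico = 700.0; Ltv = 0.80 }
      { Rate = 0.060; Fico = 620.0; Ltv = 0.95 } ]

printfn "%A" (nearestNeighbour newLoan history)
```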

Kevin said that Morgan Stanley had rejected the use of Hadoop, primarily due to its write throughput capabilities, which Kevin indicated was a limiting factor in many big data technologies. He indicated that for his problem type he still believed their infrastructure to be superior to even the latest incarnations of Hadoop. He also mentioned the technique of having 2x redundancy or more on the data/jobs being processed, aimed not just at failover but also at using whichever instance of a job finished first. Interestingly, he also added that Morgan Stanley's infrastructure engineers have a policy of rebooting servers in the grid even during the day while in use, so fault tolerance was needed for both unexpected and entirely deliberate hardware node unavailability.

Other highlights from the day:

  • Dennis Shasha had some interesting ideas on using matrix algebra for reducing down the data analysis workload needed in some problems - basically he was all for "cleverness" over simply throwing compute power at some data problems. On a humorous note (if you are not a trader?), he also suggested that some traders had "the memory of a fruit-fly".
  • Robert Almgren of Quantitative Brokers was an interesting speaker, talking about how his firm had done a lot of analytical work in trying to characterise possible market responses to information announcements (such as Friday's non-farm payroll announcement). I think Robert was not so much trying to predict the information itself, but rather trying to predict likely market behaviour once the information is announced.
  • Scott O'Malia of the CFTC was an interesting speaker during the morning panel. He again acknowledged some of the recent problems the CFTC had experienced in terms of aggregating/analysing the data they are now receiving from the market. I thought his comment on the Twitter crash was both funny and brutally pragmatic: "if you want to rely solely upon a single twitter feed to trade then go ahead, knock yourself out."
  • Eric Vanden Eijnden gave an interesting talk on "detecting Black Swans in Big Data". Most of the examples were from detecting currents/movement in oceanography, but they seemed quite analogous to "regime shifts" in the statistical behaviour of markets. The main point seemed to be that these seemingly unpredictable and infrequent events were predictable to some degree if you looked deep enough in the data, and in particular that you could detect when the system was on a likely "path" to a Black Swan event.

One of the most interesting talks was by Johan Walden of the Haas School of Business, on the subject of "Investor Networks in the Stock Market". Johan explained how they had used big data to construct a network model of all of the participants in the Turkish stock exchange (both institutional and retail), and in particular how "interconnected" each participant was with other members. His findings seemed to support the hypothesis that the more "interconnected" the investor (at the centre of many information flows rather than at the edges), the more likely that investor would demonstrate superior return levels to the average. I guess this is a kind of classic transferral of some of the research done in social networking, but it is very interesting to see it applied pragmatically to financial markets, and I would guess it is an area where a much greater understanding of investor behaviour could be gleaned. Maybe Johan could do with a little geographic location data to add to his analysis of how information flows.

So overall a good day with some interesting talks - the statistical presentations were challenging to listen to at 4pm on a Friday afternoon, but the wine afterwards compensated. I would also recommend taking a read through a paper by Charles S. Tapiero on "The Future of Financial Engineering" for one of the best discussions I have so far read about how big data has the potential to change and improve upon some of the assumptions and models that underpin modern financial theory. Coming back to my starting point in this post on the content of the talks, I liked the description that Charles gives of traditional "statistical" versus "data analytics" approaches, and some of the points he makes about data analytics immediately inferring relationships without the traditional "hypothesize, measure, test and confirm-or-not" cycle were interesting, both in favour of data analytics and in cautioning against unquestioning belief in the findings from data (feels like this post from October 2008 is a timely reminder here). With all of the hype and the hope around the benefits of big data, maybe we would all be wise to remember this quote by a certain well-known physicist: "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."

 

25 April 2013

The Anthropology, Sociology, and Epistemology of Risk

Background - I went along to my first PRMIA event in Stamford, CT last night, with the rather grandiose title of "The Anthropology, Sociology, and Epistemology of Risk". Stamford is about 30 miles north of Manhattan and is home to major offices of a number of financial markets companies such as Thomson Reuters, RBS and UBS (who apparently have the largest column-less trading floor in the world at their Stamford headquarters - a particularly useful piece of trivia for you there...). It also happens to be about a 5-minute drive/train journey away from where I now live, so easy for me to get to (thanks for another useful piece of information, I hear you say...). Enough background, more on the event, which was a good one, with five risk managers involved in an interesting and sometimes philosophical discussion on fundamentally what "risk management" is all about.

Introduction - Marc Groz, who heads the Stamford Chapter of PRMIA, introduced the evening and started by thanking Barry Schwimmer for allowing PRMIA to use the Stamford Innovation Centre (the Old Town Hall) for the meeting. Henrik Neuhaus moderated the panel, and started by outlining the main elements of the event title as a framework for the discussion:

  • Anthropology - risk management is to what purpose?
  • Sociology - how does risk management work?
  • Epistemology - what knowledge is really contained within risk management?

Henrik started by taking a passage about anthropology and replacing human "development" with "risk management", which seemed to fit OK, although the angle I was expecting was much more about human behaviour in risk management than where Henrik started. Henrik asked the panel what results they had seen from risk management and what that implied about risk management. The panelists seemed a little confused or daunted by the question, prompting one of them to ask "Is that the question?".

Business Model and Risk Culture - Elliot Noma dived in by responding that the purpose of risk management obviously depended very much on the institutional goals of the organization. He said that it was as much about what you are forced to do as about what you try to do in risk management. Elliot said that the sell-side view of risk management was very regulatory and capital focused, whereas mutual funds are looking more at risk relative to benchmarks and performance attribution. He added that in the alternatives (hedge fund) space there were no benchmarks and the focus was more about liquidity and event risk.

Steve Greiner said that it was down to the investment philosophy and how risk is defined and measured. He praised some asset managers where the risk managers sit across from the portfolio managers and are very much involved in the decision making process.

Henrik asked whether any of the panel had ever defined a "mission statement" for risk management. Marc Groz chipped in that he remembered he had once defined one, and that it was very different from what others in the institution were expecting, and indeed very different from the risk management that he and his department subsequently undertook.

Mark Szycher (of GM Pension Fund) said that risk management split into two areas for him, the first being the symmetrical risks where you need to work out the range of scenarios for a particular trade or decision being taken. The second was the more asymmetrical risks (i.e. downside only) such as those found in operational risk where you are focused on how best to avoid them happening.

Micro Risk Done Well - Santa Federico said that he had experience of some of the major problems experienced at institutions such as Merrill Lynch, Salomon Brothers and MF Global, and that he thought risk management was much more of a cultural problem than a technical one. Santa said he thought that the industry was actually quite good at the micro (trade, portfolio) level of risk management, but obviously less effective at the large systemic/economic level. Mark asked Santa about the nature of the failures he had experienced. Santa said that the risks were well modeled, but maybe the assumptions around macro variables such as the housing market proved to be extremely poor.

Keep Dancing? - Henrik asked the panel what might be done better. Elliot made the point that some risks are just in the nature of the business. If a risk manager did not like placing a complex illiquid trade and the institution was based around trading in illiquid markets, then what is a risk manager to do? He quoted the Citi executive who said "whilst the music is still playing, we have to dance". Again he came back to the point that the business model of the institution drives its culture and the emphasis of risk management (I guess I see what Elliot was saying, but taken one way it implied that regardless of what was going on, risk management needs to fit in with it, whereas I am sure that he meant that risk managers must fit in with the business model mandated to shareholders).

Risk Attitudes in the USA - Mark said that risk managers need to recognize that the improbable is maybe not so improbable, and should be more prepared for the worst rather than managing risk under "normal" market and institutional behavior. Steven thought that a cultural shift was happening, where not losing money was becoming as important to an organization as gaining money. He said that in his view, Europe and Asia had a stronger risk culture than the United States, with much more consensus, involvement and even control over the trading decisions taken. Put another way, the USA has more of a culture of risk taking than Europe. (I have my own theories on this. Firstly, I think that people are generally much bigger risk takers in the USA than in the UK/Europe, possibly influenced in part by the relative lack of an underlying social safety net - whilst this is not for everyone, I think it produces a very dynamic economy as a result. Secondly, I do not think that the cultural desire in the USA for the much-admired "presidential" leader is necessarily the best environment for sound, consensus-based risk management. I would also like to acknowledge that neither of my two points above seems to have protected Europe much from the worst of the financial crisis, so it is obviously a complex issue!)

Slaves to Data? - Henrik asked whether the panel thought that risk managers were slaves to data. He expanded upon this by asking what kinds of firms encourage qualitative risk management and not just risk management based on Excel spreadsheets. Santa said that this kind of qualitative risk management occurred at a business level and less so at a firm-wide level. In particular he thought this kind of culture was in place at many hedge funds, and less so at banks. He cited one example from his banking career in the 1980s, where his immediate boss was shouted off the trading floor by the head of desk, who said that he should never enter the trading floor again (oh, those were the days...).

Sociology and Credibility - Henrik took a passage on the historic development of women's rights and replaced the word "women" with "risk management" to illustrate the challenges risk management is facing in trying to get more say and involvement at financial institutions. He asked who the CRO should report to: a CEO? A CIO? Or a board member? Elliot responded by saying this was really an issue around credibility with the business for risk managers and risk management in general. He made the point that often Excel and numbers were used to establish credibility with the business. Elliot added that risk managers with trading experience obviously had more credibility, and to some extent where the CRO reported to was dependent upon the credibility of risk management with the business.

Trading and Risk Management Mindsets - Elliot expanded on his previous point by saying that the risk management mindset thinks more in terms of unconditional distributions and tries to learn from history. He contrasted this with the "conditional mindset" of a trader, where the time horizon forwards (and backwards) is rarely longer than a few days and the belief that a trade will work today given that it worked yesterday is strong. Elliot added that in assisting the trader, the biggest contribution risk managers can make is to be challenging/helpful on the qualitative side rather than just the quantitative.

Compensation and Transactions - Most of the panel seemed to agree that compensation package structure was a huge influence on the risk culture of an organisation. Mark touched upon a pet topic of mine, which is that it is very hard for a risk manager to gain credibility (and compensation) when what risk management is about is what could happen as opposed to what did happen. A risk manager blocking a trade due to some potentially very damaging outcomes will not gain any credibility with the business if the trading outcome for the suggested trade just happened to come out positive. There seemed to be consensus here that some of the traditional compensation models that were based on short-term transactional frequency and size were ill-formed (given the limited downside for the individual), and whilst the panel reserved judgement on the effectiveness of recent regulation, moves towards longer-term compensation were to be welcomed from a risk perspective.

MF Global and Business Models - Santa described some of his experiences at MF Global, where Corzine moved what was essentially a broker into taking positions in European sovereign bonds. Santa said that the risk management culture and capabilities were not present to be robust against senior management on such a business model move. Elliot mentioned that he had been courted for trades by MF Global and had been concerned that they did not offer electronic execution and told him that doing trades through a human was always best. Mark said that in the area of pension fund management there was much greater fiduciary responsibility (i.e. behave badly and you will go to jail) and maybe that kind of responsibility had more of a place in financial markets too. Coming back to the question of who a CRO should report to, Mark also said that questions should be asked to seek out those who are 1) less likely to suffer from the "agency" problem of conflicts of interest, and, on a related note, those who are 2) less likely to have personal biases towards particular behaviours or decisions.

Santa said that in his opinion hedge funds in general had a better culture where risk management opinions were heard and advice taken. Mark said that risk managers who could get the business to accept moral persuasion were in a much stronger position to add value to the business rather than simply being able to "block" particular trades. Elliot cited one experience he had where the traders under his watch noticed that a particular type of trade (basis trades) did not increase their reported risk levels, and so became more focussed on gaming the risk controls to achieve high returns without (reported) risk. The panel seemed to be in general agreement that risk managers with trading experience were more credible with the business but also more aware of the trader mindset and behaviors. 

Do we know what we know? - Henrik moved to his third and final subsection of the evening, asking the panel whether risk managers really know what they think they know. Elliot said that traders and risk managers speak a different language, with traders living in the now, thinking only of the implications of possible events such as those we have seen with Cyprus or the fiscal cliff, whereas the risk management view was much less conditioned and more historical. Steven re-emphasised the earlier point that risk management at this micro trading level was fine, but this was not what caused events such as the collapse of MF Global.

Rational argument isn't communication - Santa said that most risk managers come from a quant (physics, maths, engineering) background and like structured arguments based upon well-understood rational foundations. He said that this way of thinking was alien to many traders, and as such it was a communication challenge for risk managers to explain things in a way that traders would actually put some time into considering. On the modelling side of things, Santa said that sometimes traders dismissed models as being "too quant", and sometimes traders followed models all too blindly without questioning or understanding the simplifying assumptions they are based on. Santa summarised by saying that risk management needs to be intuitive for traders and not just academically based. Mark added that a quantitative focus can sometimes become too narrow (modeler's manifesto anyone?) and made the very profound point that unfortunately precision often wins over relevance in the creation and use of many models. Steven added that traders often deal in absolutes, such as knowing the spread between two bonds to the nearest basis point, whereas a risk manager approaching them with a VaR number really means an estimate that should be thought of as lying within a range of values. This is alien to the way traders think and hence harder to explain.

Unanticipated Risk - An audience member asked whether risk management should focus mainly on unanticipated risks rather than "normal" risks. Elliot said that in his trading he was always thinking and checking whether the markets were changing or continuing with their recent near-term behaviour patterns. Steven said that history was useful to risk management when markets were "normal", but in times of regime shifts this was not the case, and cited the example of the change in markets when Mario Draghi announced that the ECB would stand behind the Euro and its member nations.

Risky Achievements - Henrik closed the panel by asking each member what they thought was their own greatest achievement in risk management. Elliot cited a time when he identified that a particular hedge fund had a relatively inconspicuous position/trade that he judged potentially extremely dangerous, and he was proved correct when the fund closed down because of it. Steven said he was proud of some good work he and his team did on stress testing involving Greek bonds and the Eurozone. Santa said that some of the work he had done on portfolio "risk overlays" was good. Mark ended the panel by saying that he thought his biggest achievement was when the traders and portfolio managers started to come to the risk management department to ask opinions before placing key trades. Henrik and the audience thanked the panel for their input and time.

An Insured View - After the panel closed I spoke with an actuary who said that he had greatly enjoyed the panel discussions but was surprised that, when talking of how best to support the risk management function in being independent and giving "bad" news to the business, the role of auditors was not mentioned. He said he felt that auditors were a key support to insurers in ensuring any issues were allowed to come to light. So food for thought there as to whether financial markets can learn from other industry sectors.

Summary - great evening of discussion, only downside being the absence of wine once the panel had closed!

 


08 February 2013

Big Data – What is its Value to Risk Management?

A little late with these notes from this PRMIA event on Big Data in Risk Management that I helped to organize last month at the Harmonie Club in New York. A big thank you to my PRMIA colleagues for taking the notes and for helping me pull this write-up together, plus thanks to Microsoft and all who helped out on the night.

Introduction: Navin Sharma (of Western Asset Management and Co-Regional Director of PRMIA NYC) introduced the event and began by thanking Microsoft for its support in sponsoring the evening. Navin outlined how he thought the advent of “Big Data” technologies was very exciting for risk management, opening up opportunities to address risk and regulatory problems that previously might have been considered out of reach.

Navin defined Big Data as structured or unstructured data received at high volumes and requiring very large data storage. Its characteristics include a high velocity of record creation, extreme volumes, a wide variety of data formats, variable latencies, and complexity of data types. Additionally, he noted that relative to other industries, financial services has in the past created perhaps the largest historical sets of data and continually creates enormous amounts of data on a daily or moment-by-moment basis. Examples include options data, high frequency trading, and unstructured data such as via social media. Its usage provides potential competitive advantages in trading and investment management. Also, by using Big Data it is possible to have faster and more accurate recognition of potential risks via seemingly disparate data - leading to timelier and more complete risk management of investments and firms' assets. Finally, the use of Big Data technologies is in part being driven by regulatory pressures from Dodd-Frank, Basel III, Solvency II, the Markets in Financial Instruments Directives (1 & 2) as well as the Markets in Financial Instruments Regulation.

Navin also noted that we will seek to answer questions such as:

  • What is the impact of big data on asset management?
  • How can Big Data’s impact enhance risk management?
  • How is big data used to enhance operational risk?

Presentation 1: Big Data: What Is It and Where Did It Come From?: The first presentation was given by Michael Di Stefano (of Blinksis Technologies), and was titled "Big Data. What is it and where did it come from?". You can find a copy of Michael's presentation here. In summary, Michael started by saying that there are many definitions of Big Data, mainly defined as technology that deals with data problems that are either too large, too fast or too complex for conventional database technology. Michael briefly touched upon the many different technologies within Big Data such as Hadoop, MapReduce and databases such as Cassandra and MongoDB. He described some of the origins of Big Data technology in internet search, social networks and other fields. Michael described the "4 V's" of Big Data - Volume, Velocity, Variety and Value - with a key point from Michael being "time to Value" in terms of what you are using Big Data for. Michael concluded his talk with some business examples around the use of sentiment analysis in financial markets and the application of Big Data to real-time trading surveillance.

Presentation 2: Big Data Strategies for Risk Management: The second presentation “Big Data Strategies for Risk Management” was introduced by Colleen Healy of Microsoft (presentation here). Colleen started by saying expectations of risk management are rising, and that prior to 2008 not many institutions had a good handle on the risks they were taking. Risk analysis needs to be done across multiple asset types, more frequently and at ever greater granularity. Pressure is coming from everywhere including company boards, regulators, shareholders, customers, counterparties and society in general. Colleen used to head investor relations at Microsoft and put forward a number of points:

  • A long line of sight of one risk factor does not mean that we have a line of sight on other risks around.
  • Good risk management should be based on simple questions.
  • Reliance on 3rd parties for understanding risk should be minimized.
  • Understand not just the asset, but also at the correlated asset level.
  • The world is full of fast markets driving even more need for risk control.
  • Intraday and real-time risk is now becoming necessary for line of sight and dealing with the regulators.
  • Now need to look at risk management at a most granular level.

Colleen explained some of the reasons why good risk management remains a work in progress, and said that data is a key foundation for better risk management. However, data has been hard to access, analyze, visualize and understand, and she used this to link to the next part of the presentation by Denny Yu of Numerix.

Denny explained that new regulations involving measures such as Potential Future Exposure (PFE) and Credit Value Adjustment (CVA) were moving the number of calculations needed in risk management to a level well above that required by methodologies such as Value at Risk (VaR). Denny illustrated how a typical VaR calculation on a reasonably sized portfolio might need 2,500,000 instrument valuations, whereas PFE might require as many as 2,000,000,000. He then explained more of the architecture he saw as optimal for such a process and illustrated some of the analysis he had done using Excel spreadsheets linked to Microsoft's high performance computing technology.
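To give a feel for where numbers of that order come from, here is a back-of-the-envelope version of the arithmetic. The portfolio and simulation dimensions below are my own assumptions for illustration, not Denny's actual figures:

```fsharp
// Why PFE needs so many more valuations than VaR - illustrative dimensions only.
let instruments = 5000

// Historical or Monte Carlo VaR: revalue every instrument under each scenario
let varScenarios = 500
let varValuations = instruments * varScenarios                                // 2,500,000

// PFE: revalue every instrument on every simulation path at every future time step
let pfePaths = 4000
let pfeTimeSteps = 100
let pfeValuations = int64 instruments * int64 pfePaths * int64 pfeTimeSteps   // 2,000,000,000

printfn "VaR valuations: %d" varValuations
printfn "PFE valuations: %d" pfeValuations
```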

Presentation 3: Big Data in Practice: Unintentional Portfolio Risk: Kevin Chen of Opera Solutions gave the third presentation, titled "Unintentional Risk via Large-Scale Risk Clustering". You can find a copy of the presentation here. In summary, the presentation was quite visual, illustrating how large-scale empirical analysis of portfolio data could produce some interesting insights into portfolio risk and how risks become "clustered". In many ways the analysis was reminiscent of an empirical form of principal component analysis, i.e. where you can see and understand more about your portfolio's risk without actually being able to relate the main factors directly to any traditional factor analysis.

Panel Discussion: Brian Sentance of Xenomorph and the PRMIA NYC Steering Committee then moderated a panel discussion. The first question was directed at Michael: "Is the relational database dead?" Michael replied that in his view relational databases were not dead, and indeed for dealing with problems well suited to relational representation they were still, and would continue to be, very good. Michael said that NoSQL/Big Data technologies were complementary to relational databases, dealing with new types of data and new sizes of problem that relational databases are not well designed for. Brian asked Michael whether the advent of these new database technologies would drive the relational database vendors to extend the capabilities and performance of their offerings. Michael replied that he thought this was highly likely, but only time would tell whether this approach would be successful given the innovation in the market at the moment. Colleen Healy added that the advent of Big Data did not mean throwing out established technology, but rather integrating established technology with the new, such as Microsoft SQL Server working with the Hadoop framework.

Brian asked the panel whether they thought visualization would make a big impact within Big Data? Ken Akoundi said that the front end applications used to make the data/analysis more useful will evolve very quickly. Brian asked whether this would be reminiscent of the days when VaR first appeared, when a single number arguably became a false proxy for risk measurement and management? Ken replied that the size of the data problem had increased massively from when VaR was first used in 1994, and that visualization and other automated techniques were very much needed if the headache of capturing, cleansing and understanding data was to be addressed.

Brian asked whether Big Data would address the data integration issue of siloed trading systems? Colleen replied that Big Data needs to work across all the silos found in many financial organizations, or it isn’t “Big Data”. There was general consensus from the panel that legacy systems and people politics were also behind some of the issues found in addressing the data silo issue.

Brian asked if the panel thought the skills needed in risk management would change due to Big Data. Colleen replied that effective Big Data solutions require all kinds of people, with skills across a broad range of specific disciplines such as visualization. Generally the panel thought that data and data analysis would play an increasingly important part in risk management. Ken put forward his view that all Big Data problems should start with a business problem, not just a technology focus - for example, are there any better ways to predict stock market movements based on the consumption of larger and more diverse sources of information? In terms of risk management skills, Denny said that risk management of 15 years ago was based on relatively simple econometrics. Fast forward to today, and risk calculations such as CVA are statistically and computationally very heavy, and trading is increasingly automated across all asset classes. As a result, Denny suggested that even the PRMIA PRM syllabus should change to focus more on data and data technology, given the importance of data to risk management.

Asked how best Big Data should be applied, Denny echoed Ken in saying that understanding the business problem first was vital, but that obviously Big Data opened up the capability to aggregate and work with larger datasets than ever before. Brian then asked what advice the panel would give to risk managers faced with an IT department about to embark upon using Big Data technologies. Assuming that the business problem is well understood, Michael said that the business needed some familiarity with the broad concepts of Big Data, what it can and cannot do and how it fits with more mainstream technologies. Colleen said that there are some problems that only Big Data can solve, so understanding the technical need is a first checkpoint. Obviously IT people like working with new technologies and this needs to be monitored, but so long as the business problem is defined and valid for Big Data, people should be encouraged to learn new technologies and new skills. Kevin also took a very positive view that IT departments should be encouraged to experiment with these new technologies and understand what is possible, but that projects should have well-defined assessment/cut-off points, as with any good project management, to decide if the project is progressing well. Ken put forward that many IT staff were new to the scale of the problems being addressed with Big Data, and that his own company Opera Solutions had an advantage in its deep expertise of large-scale data integration to deliver quicker on project timelines.

Audience Questions: There then followed a number of audience questions. The first few related to other ideas/kinds of problems that could be analyzed using the kind of modeling that Opera had demonstrated. Ken said that there were obvious extensions that Opera had not got around to doing just yet. One audience member asked how well all the Big Data analysis could be aggregated/presented to make it understandable and usable to humans. Denny suggested that it was vital that such analysis was made accessible to the user, and there was general consensus across the panel that man vs. machine was an interesting issue to develop in considering what is possible with Big Data. The next audience question was around whether all of this data analysis was affordable from a practical point of view. Brian pointed out that there was a lot of waste in current practices in the industry, with wasteful duplication of ticker plants and other data types across many financial institutions, large and small. This duplication is driven primarily by the perceived need to implement each institution's proprietary analysis techniques, and this kind of customization is not yet available from the major data vendors, but will become more possible as cloud technology such as Microsoft's Azure develops further. There was a lot of audience interest in whether Big Data could lead to better understanding of causal relationships in markets rather than simply correlations. The panel responded that causal relationships were harder to understand, particularly in a dynamic market with dynamic relationships, but that insight into correlation was at the very least useful and could lead to better understanding of the drivers as more datasets are analyzed.

 

16 October 2012

The Missing Data Gap

Getting to the heart of "Data Management for Risk", PRMIA held an event entitled "Missing Data for Risk Management Stress Testing" at Bloomberg's New York HQ last night. For those of you who are unfamiliar with the topic of "Data Management for Risk", the following diagram may help explain how the topic concerns all the data sets feeding the VaR and scenario engines.

[Diagram: data flows feeding the VaR and scenario engines]
I have a vested interest in saying this (and please forgive the product placement in the diagram above, but hey, this is what we do...), but the topic of data management for risk seems to fall into a functionality gap between: i) the risk system vendors, who typically seem to assume that the world of data is perfect and that the topic is too low level to concern them, and ii) the traditional data management vendors, who seem to regard things like correlations, curves, spreads, implied volatilities and model parameters as too business domain focussed (see previous post on this topic). As a result, the risk manager is typically left with ad-hoc tools like spreadsheets and other analytical packages to perform data validation and filling of any missing data found. These ad-hoc tools are fine until the data universe grows larger, leading to the regulators becoming concerned about just how much data is being managed "out of system" (see past post for some previous thoughts on spreadsheets).

The Crisis and Data Issues. Anyway, enough background and on to some of the issues raised at the event. Navin Sharma of Western Asset Management started the evening by saying that pre-crisis people had a false sense of security around Value at Risk, and that the crisis showed that data is not reliably smooth in nature. Post-crisis, questions obviously arise around how much data to use, how far back to go and whether you include or exclude extreme periods like the crisis. Navin also suggested that the boards of many financial institutions were now much more open to reviewing scenarios put forward by the risk management function, whereas pre-crisis their attention span was much more limited.

Presentation. Don Wesnofske did a great presentation on the main issues around data and data governance in risk (which I am hoping to link to here shortly...)

Issues with Sourcing Data for Risk and Regulation. Adam Litke of Bloomberg asked the panel what new data sourcing challenges were resulting from the current raft of regulation being implemented. Barry Schachter cited a number of Basel-related examples. He said that the costs of rolling up loss data across all operations were prohibitive, and hence there were data truncation issues to be faced when assessing operational risk. Barry mentioned that liquidity calculations were new and presenting data challenges. Non-centrally-cleared OTC derivatives also presented data challenges, with initial margin calculations based on stressed VaR. Whilst on the subject of stressed VaR, Barry said that there were a number of missing data challenges, including the challenge of obtaining past histories and of modelling current instruments that did not exist in past stress periods. He said it was telling on this subject that the Fed had decided to exclude tier 2 banks from stressed VaR calculations, on the basis that they did not think these institutions were in a position to be able to calculate these numbers given the data and systems that they had in place.

Barry also mentioned the challenges of Solvency II for insurers (and their asset managers) and said that this was a huge exercise in data collection. He said that there were obvious difficulties in modelling hedge fund and private equity investments, and that the regulation penalised the use of proxy instruments where there was limited "see-through" to the underlying investments. Moving on to UCITS IV, Barry said that the regulation required VaR calculations to be regularly reviewed on an ongoing basis, and he pointed out one issue with much of the current regulation in that it uses ambiguous terms such as models of "high accuracy" (I guess the point being that accuracy is always arguable/subjective for an illiquid security).

Sandhya Persad of Bloomberg said that there were many practical issues to consider, such as exchanges that close at different times and the resultant misalignment of closing data, problems dealing with holiday data across different exchanges and countries, and sourcing of factor data for risk models from analysts. Navin expanded more on his theme of which periods of data to use. Don took a different tack and emphasised the importance of getting the fundamental data of client-contract-product in place, suggesting that this was still a big challenge at many institutions. Adam closed the question by pointing out the data issues in everyday mortgage insurance as an example of how prevalent data problems are.

What Missing Data Techniques Are There? Sandhya explained a few of the issues she and her team face working at Bloomberg in making decisions about what data to fill. She mentioned the obvious issue of distance between missing data points and the preceding data used to fill them. Sandhya mentioned that one approach to missing data is to reduce factor weights down to zero for factors without data, but this gives rise to a data truncation issue. She said that there were a variety of statistical techniques that could be used; she mentioned adaptive learning techniques and then described some of the work that one of her colleagues had been doing on maximum-likelihood estimation, whereby in addition to achieving consistency with the covariance matrix of "near" neighbours, the estimation also had greater consistency with the historical behaviour of the factor or instrument over time.
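For readers unfamiliar with this kind of technique, the basic idea of filling a missing value in a way that is consistent with the covariance of "near" neighbours can be illustrated very simply. The sketch below is a much-simplified, two-factor regression fill of my own, with made-up data, and is not the actual methodology described on the night:

```fsharp
// Fill a missing return for factor Y from an observed return of a "near" neighbour
// factor X, consistently with their historical covariance. Data here is hypothetical.
let mean xs = List.average xs
let cov xs ys =
    let mx, my = mean xs, mean ys
    List.map2 (fun x y -> (x - mx) * (y - my)) xs ys |> List.average

// Hypothetical daily return histories for the two factors
let xHist = [ 0.010; -0.004; 0.006; -0.002; 0.008 ]
let yHist = [ 0.012; -0.005; 0.007; -0.001; 0.009 ]

// Conditional (regression) fill: E[Y | X = x] = muY + (cov(X,Y) / var(X)) * (x - muX)
let fillMissingY xObserved =
    let beta = cov xHist yHist / cov xHist xHist
    mean yHist + beta * (xObserved - mean xHist)

printfn "Filled value for Y: %f" (fillMissingY 0.005)
```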

Navin commented that fixed income markets were not as easy to deal with as equity markets in terms of data, and that at sub-investment grade there is very little data available. He said that heuristic models were often needed, and suggested that there was a need for "best practice" to be established for fixed income, particularly in light of guidelines from regulators that are at best ambiguous.

I think Barry then made some great comments about data and data quality in saying that risk managers need to understand more about the effects (or lack of them) that input data has on the headline reports produced. The reason I say great is that I think there is often a disconnect or lack of knowledge around the effects that input data quality can have on the output numbers produced. Whilst regulators increasingly want data "drill-down" and justification for any data used to calculate risk, it is still worth understanding more about whether output results are greatly sensitive to the input numbers, or whether related aspects such as data consistency ought to have more emphasis than, say, absolute price accuracy. For example, data quality was being discussed at a recent market data conference I attended, and only about 25% of the audience said that they had ever investigated the quality of the data they use. Barry also suggested that you need to understand to what purpose the numbers are being used and what effect the numbers have on the decisions you take. I think here the distinction was around usage in risk, where changes/deltas might be more important, whereas in calculating valuations or returns price accuracy might receive more emphasis.

How Extensive is the Problem? General consensus from the panel was that the issue's importance needed to be understood more (I guess my experience is that the regulators can make data quality important for a bank if they say that input data issues are the main reason for blocking approval of an internal model for regulatory capital calculations). Don said that any risk manager needed to be able to justify why particular data points were used, and there was further criticism from the panel around regulators asking for high quality without specifying what this means or what needs to be done.

Summary - My main conclusions:

  • Risk managers should know more of how and in what ways input data quality affects output reports
  • Be aware of how your approach to data can affect the decisions you take
  • Be aware of the context of how the data is used
  • Regulators set the "high quality" agenda for data but don't specify what "high quality" actually is
  • Risk managers should not simply accept regulatory definitions of data quality and should join in the debate

Great drinks and food afterwards (thanks Bloomberg!) and a good evening was had by all, with a topic that needs further discussion and development.

 

 

11 October 2012

Regulation Increases Risk

"Any Regulation of Risk Increases Risk" is an interesting paper illustrating quantitatively what a lot of people already think qualitatively (see past post for example), which is that regulation nearly always falls fowl of the law of intended consequences. Through the use of regulatory driven capital charge calculations, banks are biassed towards investing in a limited and hence overly concentrated set of assets that at the time of investment exhibit abnormally low levels of volatility. Thanks to PRMIA NYC for suggesting this paper. 

17 July 2012

Charting, Heatmaps and Reports for TimeScape, plus new Query Explorer

We have a great new software release out today for TimeScape, Xenomorph's analytics and data management solution, more details of which you can find here. For some additional background to this release then please take a read below.

For many users of Xenomorph's TimeScape, our Excel interface to TimeScape has been a great way of extending and expanding the data analysis capabilities of Excel through moving the burden of both the data and the calculation out of each spreadsheet and into TimeScape. As I have mentioned before, spreadsheets are fantastic end-user tools for ad-hoc reporting and analysis, but problems arise when their very usefulness and ease of use cause people to use them as standalone desktop-based databases. The four-hundred or so functions available in TimeScape for Excel, plus Excel access to our TimeScape QL+ Query Language have enabled much simpler and more powerful spreadsheets to be built, simply because Excel is used as a presentation layer with the hard work being done centrally in TimeScape.

Many people like using spreadsheets, however many users equally do not and prefer more application based functionality. Taking this feedback on board has previously driven us to look at innovative ways of extending data management, such as embedding spreadsheet-like calculations inside TimeScape and taking them out of spreadsheets with our SpreadSheet Inside technology. With this latest release of TimeScape, we are providing much of the ease of use, analysis and reporting power of spreadsheets but doing so in a more consistent and centralised manner. Charts can now be set up as default views on data so that you can quickly eyeball different properties and data sources for issues. New Heatmaps allow users to view large colour-coded datasets and zoom in quickly on areas of interest for more analysis. Plus our enhanced Reporting functionality allows greater ease of use and customisation when wanting to share data analysis with other users and departments.

Additionally, the new Query Explorer front end really shows off what is possible with TimeScape QL+, in allowing users to build and test queries in the context of easily configurable data rules for things such as data source preferences, missing data and proxy instruments. The new auto-complete feature is also very useful when building queries, and automatically displays all properties and methods available at each point in the query, even including user-defined analytics and calculations. It also displays complex and folded data in an easy manner, enabling faster understanding and analysis of more complex data sets such as historical volatility surfaces.

25 April 2012

Dragon Kings, Black Swans and Bubbles

"Dragon Kings" is a new term to me, and the subject on Monday evening of a presentation by Prof. Didier Sornette at an event given by PRMIA. Didier has been working on the diagnosis on financial markets bubbles, something that has been of interest to a lot of people over the past few years (see earlier post on bubble indices from RiskMinds and a follow up here).

Didier started his presentation by talking about extreme events and how many have defined different epochs in human history. He placed a worrying question mark over the European Sovereign Debt Crisis as to its place in history, and showed a pair of particularly alarming graphs of the "Perpetual Money Machine" of financial markets. One chart was a plot of savings and rate of profit for US, EU and Japan with profit rising, savings falling from about 1980 onwards, and a similar diverging one of consumption rising and wages falling in the US since 1980. Didier puts this down to finance allowing this increasing debt to occur and to perpetuate the "virtual" growth of wealth.

Corn, Obesity and Antibiotics - He put up one fascinating slide relating to positive feedback in complex systems and, effectively, the law of unintended consequences. After World War II, the US Government wanted to secure the US food supply and subsidised the production of corn. This resulted in oversupply for humans -> so the excess corn was fed to cattle -> who can't digest starch easily -> who developed e-coli infections -> which prompted the use of antibiotics in cattle -> which prompted antibiotics as growth promoters for food animals -> which resulted in cheap meat -> leading to non-sustainable meat protein consumption and under-consumption of vegetable protein. Whilst that is a lot of things to pull together, ultimately Didier suggested that the simple decision to subsidise corn had led to the current epidemic in obesity and the losing battle against bacterial infections.

Power Laws - He then touched briefly upon power law distributions, which are observed in many natural phenomena (city size, earthquakes etc) and seem to explain the peaked centre and long tails of financial return distributions far better than the lognormal distribution of traditional economic theory. (I need to catch up on some Mandelbrot I think). He explained that whilst many observations (city size for instance) fitted a power law, there were observations that did not fit this distribution at all (in the cities example, many capital cities are much, much larger than a power law predicts). Didier then moved on to describe Black Swans, characterised as unknown, unknowable events, occurring exogenously ("wrath of god" type events) and with one unique investment strategy of going long put options.
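
For anyone wanting to play with the idea, below is a minimal sketch (my own toy example, nothing from Didier's talk) of estimating a power-law tail exponent from the largest observations of a heavy-tailed sample using the Hill estimator:

    import numpy as np

    rng = np.random.default_rng(1)
    losses = np.abs(rng.standard_t(df=3, size=10_000))        # heavy-tailed sample of "losses"

    tail = np.sort(losses)[-500:]                             # largest 5% of observations
    x_min = tail[0]                                           # tail threshold
    tail_index = len(tail) / np.sum(np.log(tail / x_min))     # Hill estimate of the tail exponent

    print(f"estimated tail exponent: {tail_index:.2f}")
    # A Student-t with 3 degrees of freedom has a true tail exponent of 3; a lognormal
    # has no power-law tail at all, which is why the two imply very different extremes.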

Didier said that Dragon-Kings were not Black Swans, but the major crises we have observed are "endogenous" (i.e. come from inside the system), do not conform to a power law distribution and:

  • can be diagnosed in advance
  • can be quantified
  • have (some) predictability

Diagnosing Bubbles - In terms of diagnosing Dragon Kings, Didier listed the following criteria that we should be aware of (later confirmed as a very useful and practical list by one of the risk managers on the panel; a rough sketch of two of these signals follows the list):

  • Slower recovery from perturbations
  • Increasing (or decreasing) autocorrelation
  • Increasing (or decreasing) cross-correlation with external driving
  • Increasing variance
  • Flickering and stochastic resonance
  • Increased spatial coherence
  • Degree of endogeneity/reflexivity
  • Finite-time singularities
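
Purely as an illustration of the second and fourth signals above (my own toy example, nothing to do with Didier's models), rolling measures of lag-1 autocorrelation and variance on a return series will pick up the kind of regime shift being described:

    import numpy as np

    rng = np.random.default_rng(7)
    returns = rng.normal(0, 0.01, 2000)
    # Inject persistence into the last quarter of the sample to mimic a destabilising market
    returns[1500:] += 0.6 * np.r_[0, returns[1500:-1]]

    def rolling_signals(x, window=250):
        ar1, var = [], []
        for i in range(window, len(x)):
            w = x[i - window:i]
            ar1.append(np.corrcoef(w[:-1], w[1:])[0, 1])      # lag-1 autocorrelation
            var.append(w.var())                               # rolling variance
        return np.array(ar1), np.array(var)

    ar1, var = rolling_signals(returns)
    print(f"autocorrelation, early vs late: {ar1[:250].mean():.2f} -> {ar1[-250:].mean():.2f}")
    print(f"variance, early vs late:        {var[:250].mean():.6f} -> {var[-250:].mean():.6f}")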

Didier finished his talk by describing the current work that he and ETH are doing with real and ever-larger datasets to test whether bubbles can be detected before they end, and whether the prediction of the timing of their end can be improved. So in summary, Didier's work on Dragon Kings involves the behaviour of complex systems, how the major events in these systems come from inside (e.g. the flash crash), and how positive feedback and system self-configuration/organisation can produce statistical behaviour well beyond that predicted by power law distributions, and certainly beyond that predicted by traditional equilibrium-based economic theory. Didier mentioned how the search for returns was producing more leverage and an ever more connected economy and financial markets system, and how this interconnectedness was unhealthy from a systemic risk point of view, particularly if overlaid by homogeneous regulation forcing everyone towards the same investment and risk management approaches (see RiskMinds post for some early concerns on this and more recent ideas from Baruch College).

Panel-Debate - The panel debate following was interesting. As mentioned, one of the risk managers confirmed the above statistical behaviours as useful in predicting that the markets were unstable, and that to detect such behaviours across many markets and asset classes was an early warning sign of potential crisis that could be acted upon. I thought a good point was made about the market post crash, in that the market's behaviour has changed now that many big risk takers were eliminated in the recent crash (backtesters beware!). It seems Bloomberg are also looking at some regime switching models in this area, so worth looking out for what they are up to. Another panelist was talking about the need to link the investigations across asset class and markets, and emphasised the role of leverage in crisis events. One of the quants on the panel put forward a good analogy for "endogenous" vs. "exogenous" impacts on systems (comparing Dragon King events to Black Swans), and I paraphrase this somewhat to add some drama to the end of this post, but here goes: "when a man is pushed off a cliff then how far he falls is not determined by the size of the push, it is determined by the size of the cliff he is standing on". 

 

 

27 March 2012

Data Visualisation from the FT

Data visualisation has always been an interesting subject in financial markets, one that seems always to have been talked about as the next big thing in finance, but one that always seems to fail to meet expectations (of visualisation software vendors mostly...). I went along to an event put on by the FT today about what they term "infographics", set in the Vanderbilt Hall at Grand Central Station New York:

FT1

One of my first experiences of data visualisation was showing a partner company, Visual Numerics (VNI), around the Bankers Trust London trading floor in 1995. The VNI folks were talking grandly about visualising a "golden corn field of trading opportunities, with the wind of market change forcing the blades of corn to change in size and orientation" - whilst maybe they had been under the influence of illegal substances when dreaming up this description, their disappointment was palpable at trading screen after trading screen full of spreadsheets containing "numbers". Sure there was some charting being used, but mostly and understandably the traders were very focussed on the numbers of the deal that they were about to do (or had just done).

I guess this theme ultimately continues today to a large extent, although given the (media-hyped) "explosion of data", visualisation is a useful technique for filtering down a large (er, can I use the word "big"?) data problem to get at the data you really want to work with (quick plug - the next version of our TimeScape product includes graphical heatmaps for looking for data exceptions, statistical anomalies and trading opportunities, which confirms Xenomorph buys into at least this aspect of the "filtering" benefits of visualisation).

Coming back to the presentation, Gillian Tett of the FT said at the event today that "infographics" is cutting-edge technology - not sure I would agree, although given the location some of the images were very good, like this one representing the stockpile of cash that major corporations have been hoarding (i.e. not spending) over recent years:

FT5


There were also some "interactive" aspects to the display, whereby stepping on part of the hall floor changed the graphic displayed. The biggest problem the FT had with this was persuading anyone to step into the middle of the floor to use it (more of an English reaction to such a request, so the reticence from New Yorkers surprised me):

FT2

Videos from the presentation can be found at http://ftgraphicworld.ft.com/ and the journalist involved, David McCandless, is worth a listen for the different ways he looks at data, both on the FT site and in a TED presentation.

17 June 2011

Taleb and Model Fragility - NYU-Poly

I went along to spend a day in Brooklyn yesterday at NYU-Poly, now the engineering school of NYU containing the Department of Finance and Risk Engineering. The event, called "The Post Crisis World of Finance", was sponsored by Capco.

First up was Nassim Taleb (he of Black Swan fame). His presentation was entitled "A Simple Heuristic to Assess Tail Exposure and Model Error". First time I had seen Nassim talk and like many of us he was an interesting mix of seeming nervousness and confidence whilst presenting. He started by saying that given the success and apparent accessibility to the public of his Black Swan book, he had a deficit to make up in unreadability in this presentation and his future books.

Nassim recommenced his ongoing battle with proponents of Value at Risk (see earlier posts on VaR) and economists in general. He said that economics continues to be marred by the lack of any stochastic component within the models that most economists use and develop. He restated his view that economists change the world to fit their choice of model, rather than the other way round. He mentioned "The Bed of Procrustes" from Greek mythology, in which a man made his visitors fit his bed to perfection by either stretching them or cutting off their limbs (a good analogy, but also a good plug for his latest book too I guess).

He categorized the most common errors in economic models as follows:

  1. Linear risks/errors - these were rare but show themselves early in testing
  2. Missing variables - rare and usually gave rise to small effects (as an aside he mentioned that good models should not have too many variables)
  3. Missing 2nd order effects - very common, harder to detect and potentially very harmful

He gave a few real-life examples of 3 above, such as how a 10% increase in traffic on the roads could result in doubled journey times whilst a 10% reduction would deliver very little benefit. He targeted Heathrow airport in London, saying that landing there was an exercise in understanding a convex function, in which you never arrive 2 hours early but arriving 2 hours later than scheduled is relatively common.

He described the effects of convexity firstly in "English" (his words):

"Don't try to cross a river that is on average 4ft deep"

and secondly in "French" (again his words - maybe a dig at Anglo-Saxon mathematical comprehension or in praise of French mathematics/mathematicians? Probably both?):

"A convex function of an average is not the average of a convex function"

Nassim then progressed to show the fragility of VaR models and their sensitivity to estimates of volatility. He showed that a 10% estimate error in volatility could produce a massive error in VaR level calculated. His arguments here on model fragility reflected a lot of what he had proposed a while back on the conversion of debt to equity in order to reduce the fragility of the world's economy (see post).

His heuristic measure mentioned in the title was then described, which is to perturb some input variable such as volatility by say 15%, 20% and 25%. If the average of the 15% and 25% results is much worse than the 20% result then you have a fragile system and should be very wary of the results and conclusions you draw from your model. He acknowledged that this was only a heuristic, but said that with complex systems/models a simple heuristic like this was both pragmatic and insightful. Overall he gave a very entertaining talk with something of practical value at the end.
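
A minimal sketch of the heuristic (my own toy model and parameter choices, not Nassim's worked example): take some model output - here the probability of a monthly loss worse than 10% under normal returns - bump the volatility estimate by 15%, 20% and 25%, and compare the middle result with the average of the two outer ones.

    import math

    def prob_big_loss(sigma, threshold=0.10):
        """P(loss > threshold) under zero-mean normal monthly returns with volatility sigma."""
        return 0.5 * math.erfc((threshold / sigma) / math.sqrt(2))

    sigma = 0.04                                              # base monthly volatility estimate
    p15, p20, p25 = (prob_big_loss(sigma * (1 + b)) for b in (0.15, 0.20, 0.25))

    fragility = 0.5 * (p15 + p25) - p20                       # > 0: output is convex in the estimate
    print(f"P(loss > 10%) at +15/+20/+25% vol bumps: {p15:.4%} / {p20:.4%} / {p25:.4%}")
    print(f"fragility (average of outer bumps minus middle): {fragility:.5%}")

With these toy numbers the gap is positive, signalling convexity: errors in the volatility estimate hurt more than they help, which is exactly the sensitivity Nassim was warning about.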

31 March 2011

Investment risk not rewarded

Interesting article from the FT, Reward for risk seems to be a chimera, effectively saying that more risky (volatile) equities do not necessarily provide higher returns than less risky equities. I like the suggestion that the reason for this is that "hope springs eternal" and investors buy more volatile stocks (pushing up the price) in the hope of higher returns. However, as yet another illustration of the law of unintended consequences, the article goes on to suggest that choosing a benchmark index to outperform, and the limitations on borrowing imposed by investment mandates, may both be driving this effect - interesting and challenging ideas for investment managers.

 

09 December 2010

RiskMinds 2010 - Day 2 - Perceptions of Risk

Very interesting presentation by David Spiegelhalter of Cambridge University on "Perceptions of Risk - Communicating Risks and Deeper Uncertainties In Words, Numbers & Pictures". David is the Winton Professor for the Public Understanding of Risk, Department of Mathematics at Cambridge University and is involved with the website understandinguncertainty.org.

David started by saying that when communicating risk what is needed is a user-friendly, easy to understand unit of risk. One example he gave was a one in a million chance of death, which he termed a "micromort". He said that on average around 50 people a day die of unnatural causes in England and Wales, so with a population of around 50 million a person's background exposure is roughly 1 micromort per day. He then compared various means of transport and the distance that would need to be travelled to reach a 1 micromort measure (converted into per-mile figures in the sketch below the list):

  • Walking - 12 miles
  • Cycling - 20 miles
  • Car - 217 miles
  • Motor Bike - 6 miles

So I guess those of you with motor bikes should take note above!
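
A quick back-of-the-envelope conversion of the figures above (my arithmetic, not David's slides):

    # Convert "distance per micromort" into micromorts per 100 miles travelled,
    # and recover the ~1 micromort/day background figure from the raw numbers.
    miles_per_micromort = {"walking": 12, "cycling": 20, "car": 217, "motorbike": 6}

    for mode, miles in miles_per_micromort.items():
        print(f"{mode:10s}: {100 / miles:6.1f} micromorts per 100 miles")

    print(f"background risk: {50 / 50_000_000 * 1_000_000:.1f} micromort per day")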

He emphasised that people's reactions to risk are also very interesting, and gave the example of a UK health official recently sacked for suggesting that the hobby (addiction?) of horse riding (he termed it "equasy") was just as risky as taking the drug ecstasy - statistically what the official said stacked up, but it was simply not culturally acceptable to suggest such an association. No great advertisement for the NHS, but there are around 3,753 deaths a year with an average of around 135,000 people in hospital at any one time; that is roughly 10 deaths a day across 135,000 in-patients, which works out at around 75 micromorts per day if you are in hospital - around twice the level faced by troops in Afghanistan!

Obviously the above has a number of biases, but David was trying to illustrate how to compare risks and how people are not used to assessing them objectively. In particular, given a choice between probabilities of 1 in 10, 1 in 100 and 1 in 1000, around a quarter of the public would choose 1 in 1000 as the highest probability given it contains the "highest" number.

Whilst the above refers to changing the denominator with a constant numerator, given the choice between drawing from a bowl containing 1 sweet and 8 marbles and another bowl containing 5 sweets and 45 marbles, 53% of people choose the one containing 5 sweets (because it contains "the most" chances of getting a sweet), even though the first bowl offers a 1 in 9 (about 11%) chance against 5 in 50 (10%) for the second.

David went on to test the audience on a few trivia questions using what he termed a "quadratic scoring" scale that asked the participant to select a multi-choice answer but to also associate a level of confidence with it. If right and confident the marks given would be high, but if wrong and confident the penalty mark would be much larger. He said that such scoring often produced interesting results and changed people's views, often with young men doing worse (testosterone not being good for risk seemingly!).
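
For the curious, here is a rough sketch of a quadratic (Brier-style) scoring rule of the sort described - my own scheme, not necessarily the one David used on the night - in which being confident and wrong is punished far harder than hedging:

    def quadratic_score(probabilities, correct_index):
        """Score a multiple-choice answer given the confidence spread across the options."""
        outcome = [1.0 if i == correct_index else 0.0 for i in range(len(probabilities))]
        return -sum((p - o) ** 2 for p, o in zip(probabilities, outcome))

    print(round(quadratic_score([0.9, 0.05, 0.05], correct_index=0), 3))   # confident and right: -0.015
    print(round(quadratic_score([0.9, 0.05, 0.05], correct_index=1), 3))   # confident and wrong: -1.715
    print(round(quadratic_score([1/3, 1/3, 1/3], correct_index=1), 3))     # hedged:              -0.667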

He showed how probability density seemed to be better understood if represented by density of ink rather than the usual bell curves etc. He suggested that results should come with more warning of how reliable they are, to stop simple acceptance of the numbers reported as "truth". He described how risk is measurable whereas uncertainty is not, which led to the inevitable references to the wisdom (?) of Donald Rumsfeld.

Good fun talk with some great points to make - how humans (and bank management boards?) understand risk is interesting and to some extent surprising (see earlier post for a different slant on human perception of maths). Obviously it is accepted now that simple VaR measures are not enough, but even with the move towards scenario-based methods, how to produce a simple but meaningful summary of risk for management is still challenging.

28 October 2010

A French Slant on Valuation

Last Thursday, I went along to an event organized by the Club Finance Innovation on the topic of “Independent valuations for the buy-side: expectations, challenges and solutions”.

The event was held at the Palais Brongniart in Paris, which, for those who don't know (like me till Thursday), was built in the years 1807-1826 by the architect Brongniart by order of Napoleon Bonaparte, who wanted the building to permanently host the Paris stock exchange.

Speakers at the roundtable were:

  • Francis Cornut, DeriveXperts
  • Jean-Marc Eber, President, LexiFi
  • Eric Benhamou, CEO, Pricing Partners
  • Claude Martini, CEO, Zeliade
  • Patrick Hénaff, University of Bretagne

The event focussed on the role of the buy-side in financial markets, looking in particular at the concept of independent valuations and how this has taken on an important role after the financial downturn. However, all the speakers agreed that there remains a large gap between the sell-side and buy-side in terms of competences and expertise in the field of independent valuations. The buy-side lacks the systems for a better understanding of financial products and should align itself with the best practices of the sell-side and the bigger hedge funds.

The roundtable was started by Francis Cornut of DeriveXperts, who gave the audience a definition of independent valuation. Whilst a valuation could be defined as the "set of data and models used to explain the result of a valuation", Cornut highlighted how the difficulty is in saying what independent means; there is in fact general confusion about what this concept represents: internal confusion, for example between the front office and risk control department of an institution, but also external confusion, when valuations are done by third parties.

Cornut provided three criteria that an independent valuation should respect:

  • Autonomy, which should be both technical and financial;
  • Credibility and transparency;
  • Ethics, i.e. being able to resist market/commercial pressure and deliver a valuation which is free from external influences/opinions.

Independent valuations are the way forward for a better understanding of complex, structured financial products. Cornut advocated the need for financial parties (clients, regulators, users and providers) to invest more and understand the importance of independent valuations, which will ultimately improve risk management.

Jean-Marc Eber, President of LexiFi, agreed that the ultimate objective of independent valuations is to allow financial institutions to better understand the market. To accomplish this, Eber pointed out that when we speak about services to clients, we should first think of what their real needs are. The bigger umbrella of "buy-side" in fact covers different needs, and there is often a contradiction in what regulators want: on one side, having independent valuations provided by independent third parties; on the other, independent valuations really mean that internal users/staff understand what underlies the products that a company holds. In the same way, we don't just need to value products but also to measure their risk and periodically re-value them. It is important, in fact, to have the whole picture of the product being valued in order to make the buy-side more competitive.

Another point on which the speakers agreed is traceability: as Eber said, financial products don't just exist as they are, but undergo transformation and change several times. Therefore, the market needs to follow a product across its life cycle to maturity, and this poses a technology challenge in providing scenario analysis for compliance and keeping track of the audit trail.

Asked what the crisis has changed, the panellists answered:

Eber: the crisis showed the need to be more competent and technical to avoid risk. He highlighted the need to understand the product and its underlying. Many speak of having a central repository for OTCs, obligations, etc but this needs more thinking from the regulators and the financial markets. Moreover, the markets should focus more on quality data and transparency.

Eric Benhamou, CEO of Pricing Partners, sees an evolution of the market, as the crisis revealed underestimated risks which are now being taken into consideration.

Claude Martini, CEO of Zeliade, advocated the need for financial markets to implement best practices for product valuations: the buy-side should apply the same practices already adopted by the sell-side and verify the hypotheses, price and risk related to a financial product.

Cornut admitted things have changed since 2005, when they launched DeriveXperts and nobody seemed to be interested in independent valuations. People would ask what value they would get from an investment in independent valuations: yes, regulators are happy, but what's the benefit for me?

This is changing now that financial institutions know that a deeper understanding of financial products increases their ability to push those products to their clients.

The speech I enjoyed the most was from Patrick Hénaff, associate professor at the University of Bretagne and formerly Global Head of Quantitative Analysis - Commodities at Merrill Lynch / Bank of America.

He took a more academic approach and challenged the assumption that having two prices to compare reduces the uncertainty around a product, highlighting that this is not always the case. I found interesting his idea of giving a product price with a confidence interval, or a 'toxic index', which would represent the uncertainty about the product and capture the model risk which may originate from it.

We speak too often about the risk associated with complex products, but Hénaff explained how the risk exists even for simpler products, for example in the calculation of VaR on a given stock position. A stock is extremely volatile and we can't know its trend; providing a confidence interval is therefore crucial. What is new is the interest that many are showing in assigning a price to a given risk, whereas before model risk was considered a mere operational risk coming out of the calculation process. Today, a good valuation of the risk associated with a product can result in less regulatory capital being used to cover the risk, and as such it is gaining much more interest from the market.
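
A minimal sketch of the "risk number with a confidence interval" idea (my own illustration, not Hénaff's method): bootstrap the historical VaR of a single stock position and report the estimation uncertainty alongside the point estimate.

    import numpy as np

    rng = np.random.default_rng(5)
    returns = rng.standard_t(df=4, size=750) * 0.015          # roughly 3 years of daily returns

    def hist_var(r, level=0.99):
        """One-day historical VaR as a positive loss fraction."""
        return -np.quantile(r, 1 - level)

    point_estimate = hist_var(returns)
    boot = np.array([hist_var(rng.choice(returns, size=returns.size, replace=True))
                     for _ in range(2000)])                   # resample the history with replacement
    lo, hi = np.quantile(boot, [0.05, 0.95])

    print(f"99% 1-day VaR: {point_estimate:.2%} (90% confidence interval {lo:.2%} to {hi:.2%})")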

Hénaff described two approaches currently taken by academic research on valuations:

1) Adoption of statistical simulation in order to identify the risk deriving from an incorrect calibration of the model. This consists of taking historical data and testing the model, through simulations and scenarios, in order to measure the risk associated with choosing one model instead of another;

2) Better quality data. A lack of quality data implies that the models chosen are inaccurate, as it is difficult to identify exactly what model we should be using to price a product.

 

Model risk, which as said above was previously considered an operational risk, now becomes extremely important as managing it can free up capital. Hénaff suggested that the key is to find, for model risk, the equivalent of VaR for market risk - a normalised measure. He also spoke about the concept of a "Model validation protocol", giving the example of what happens in the pharmaceutical and biological sectors: before a new pill is launched onto the market, it is tested several times.

Whilst in finance products are just given with their final valuation, the pharmaceutical sector provides a "protocol" which describes the calculations, analysis and processes used to get to the final value, and their systems are organised to provide a report showing all the deeper detail. To reduce risk, valuations should be a pre-trade process and not a post-trade one.

This week, the A-Team group published a valuations benchmarking study which shows that buy-side institutions are turning more and more often to third-party valuations, driven mainly by risk management, regulations and client needs. Many of the institutions interviewed also said that they will increase their spending on technology to automate and improve the pricing process, as well as data source integration and workflow.

This is in line with what was said at the event I attended, and was confirmed by the technology representatives speaking at the roundtable.

I would like to end with what Hénaff said: there can’t be a truly independent valuation without transparency of the protocols used to get to that value.

Well, Rome wasn’t built in a day (and as it is my city we’re speaking about, I can say there is still much to build, but let’s not get into this!) but there is a great debate going on, meaning that financial institutions are aware of the necessity to take a step forward. Much is being said about the need for more transparency and a better understanding of complex, structured financial products and still there is a lot to debate.  Easier said than done I guess but, as Napoleon would say, victory belongs to the most persevering!

11 December 2009

RiskMinds - VaR as simple as chartism?

Interesting panel debate at RiskMinds Wednesday morning, entitled "Sophisticated Complex Models vs. Crude Robust Risk Measures".

Riccardo Rebonato of RBS started off the debate in (untypically?) controversial style by saying that he thinks the risk management models (mostly VaR) used in financial markets are peculiar. Peculiar in that, coming from a physics background, he is used to models that have "causal" links between inputs and outputs, whereas VaR is based simply on the P&L distribution of a portfolio, i.e. all the information is contained in the data itself. Riccardo said the obvious analogy was with chartism, where decisions are made on the observed market data itself without any reference to external (exogenous) factors at all (perhaps he should have a discussion on endogenous risk with Jean-Philippe Bouchaud at Quant Invest). Riccardo suggested that in the range of models from those that are "over specified" with too many inputs to those in "reduced form", VaR was far too much at the reduced form end.

In response to Riccardo's proposal that risk models should involve more causal ("factor") effects, Andreas Gottschling of Deutsche Bank countered with the quote attributed to Harry S. Truman: "Give me a one-handed economist! All my economists say, on the one hand... on the other." To which Riccardo acknowledged that maybe economists and econometrics were less suited to trading/analyst reports (e.g. give me a single view of what the prospects/returns will be) and more suited to risk management (e.g. give me a range of scenarios with supporting assumptions for each).

Chris Finger of RiskMetrics moved on to put forward an argument for standardisation of risk reporting, saying that it was impossible to say what methodology was behind the VaR numbers disclosed by major financial institutions. He proposed that risk reporting needed to be standardised and obligatory, but emphasised that risk management should not be standardised. Paul Shotton of UBS agreed, saying that whilst the micro-prudential focus of Pillar I had decreased risk at an individual institution level, it had increased systemic (macro) level risk, and this was an area of failure for the regulators. On this the panel agreed, echoing a lot of what Avinash Persaud said in proposing that more diversity of risk management approaches was highly desirable.

On standardisation, Riccardo noted that many banks had switched from using a 10-day VaR to scaling up a 1-day VaR, and as a result were presenting a less risky picture to analysts and regulators, regardless of how risky the "tail" of each institution's P&L distribution is. Riccardo also proposed that there should be "constructive ambiguity" over what is asked of the banks by the regulators - put another way, he suggested the regulators should come up with the "curriculum" for risk but not the "questions", as definitive questions encourage arbitrage.
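
As a rough illustration of why the convention matters (my own toy example, not anything presented by the panel), compare a 1-day historical VaR scaled up by the square root of ten with a 10-day VaR estimated directly from overlapping 10-day returns - on fat-tailed or autocorrelated data the two can differ materially:

    import numpy as np

    rng = np.random.default_rng(9)
    daily = rng.standard_t(df=3, size=5000) * 0.01            # heavy-tailed daily log returns

    def hist_var(r, level=0.99):
        return -np.quantile(r, 1 - level)

    scaled = hist_var(daily) * np.sqrt(10)                    # the "adjusted up" 1-day number
    ten_day = hist_var(np.convolve(daily, np.ones(10), mode="valid"))   # overlapping 10-day returns

    print(f"sqrt(10)-scaled 1-day VaR:     {scaled:.2%}")
    print(f"directly estimated 10-day VaR: {ten_day:.2%}")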

Andreas then brought the debate back to its title, and put forward that maybe VaR should be replaced by simpler measures such as limits on notional traded. Paul suggested that VaR was only good for simpler products and portfolios, under "normal" market conditions. He said that he had been an advocate of more stress testing for a long time as a complementary approach to VaR, but also combined with the simpler approach of limits.

It was an interesting debate, particularly with Riccardo's proposal that VaR is too simple a measure based on statistics, and his wish for a more "causal" model to be developed. Using the example of June 2007, Riccardo said that everyone knew something big was about to happen but this was not reflected in VaR calculations, since they are statistically based, inherently backwards-looking and not predictive. The lack of prediction is a very valid point, but putting forward a counter-view: I get the argument about economists giving a range of outcomes, but surely these should be fed into the scenario engine rather than trying to develop econometric models of relationships between market variables. Econometric models are just as vulnerable as any other to the misbehaviour of markets (anyone seen a stable correlation lately?).

A few of the other risk managers there expressed other views, from the more buy-side folks who were more comfortable with factor-based modelling, to risk managers who said that VaR was already "structural", with explicit relationships between valuations and interest rate inputs for example. It would be good to understand more of Riccardo's ideas on this, since it appeals in making risk a more "forward-looking" process, but I find it difficult to grasp quite what "causal" model you can have of markets that is itself robust to changes in market behaviour.

08 December 2009

RiskMinds - The Failure of Risk Models

Avinash Persaud of Intelligence Capital gave the opening talk of the morning at RiskMinds (see first of set of posts from last year here) and put forward a lot of the very good ideas that he has contributed to in the recent Warwick Commission Report. Main points that Avinash made:

  • Regulators were admirably quick in working out where past regulation had gone wrong in focussing too much on micro (individual institution) rather than macro (whole market)/systemic risk.
  • The regulators then came out with promising papers on counter cyclical regulation and other positive ideas.
  • These new ideas do not win votes however and do not satisfy the public's desire to punish someone - Avinash called this the "Bad Apple" policy, with "bad bankers, bad products, bad jurisdictions" being the perceived guilty parties.
  • All past crises have resulted in demands for three things: i) more risk management; ii) more regulation; and iii) more transparency.
  • These are fine as demands but evidently do not prevent financial crises.
  • Avinash recalled his work back at JPMorgan in the early 90's when the 4:15 report was produced for Dennis Weatherstone, which eventually led to VaR reporting becoming widespread.
  • He then fast-forwarded to the Asian crisis of 1997, where he saw the failings of VaR (or rather of its widespread use) first hand: all players were using VaR, so when volatility increased, VaR increased, causing JPM (and everyone else) to sell, causing markets to fall, increasing volatility, causing more selling and increasing correlation - leading to what is called the "loss spiral".
  • In light of the recent crisis, Avinash said the public perception is that bankers created a load of toxic bombs (products), threw them at an unsuspecting public and ran away...
  • ...and in his opinion the reality is that banks created a load of toxic bombs and ran straight towards them i.e. this was a failure of risk management where bankers did not understand the risks they were buying and selling.
  • He then took us back to the 1950's and the formation of modern portfolio theory, with Markowitz and Dantzig working at the RAND Corporation.
  • At that time banks and insurers were still separate, with FX and capital controls still in place meaning that not only could the "efficient frontier" of investment portfolios be observed but it could also be acted upon.
  • Now that everyone has the same information, everyone can observe the efficient frontier of investment opportunities but cannot exploit or act upon it, since usually everyone moves in (the "herd") and the value observed is changed by this crowded participation in the market. Here he seems to be echoing a lot of what Bob Litterman said at QuantInvest last week over the "crowded trade" and that the barriers to market knowledge and our ability to act on this knowledge have been lowered forever.
  • Avinash put forward that many of the models we use today assume the statistical independence of decision-making processes, whereas the reality is that the market is homogeneous (everyone is thinking/acting the same) and hence these models are invalid in this "crowded" context.
  • In light of this, the problem of risk management is not about exogenous risk (risks from outside the market, from Black Swan events to normal distributions) but more about endogenous risk, i.e. people's behaviours upon seeing opportunities cause strategic risks. (Interesting given Jean-Philippe Bouchaud at QuantInvest commenting on what makes prices move). Put another way, behaviour is the issue, not the financial instruments themselves.
  • Avinash proposes that risk capacity (the ability of an institution to absorb a particular type of risk) should be thought through more fully, with for example insurance and pension institutions with long-term liabilities having a much greater capacity to absorb liquidity risk than banks, and banks with short-term funding being in a better position to manage a loan book.
  • He pointed out that regulation that uses market prices to protect us against movements in market prices is doomed to failure before it starts.
  • Booms occur due to some perceived "paradigm shift" technology leading to dramatically improved risk/return ratios - he cited things such as cars, electricity, rail, dotcom and the mantra from those involved that "This time it is different..." (see "bubble" post from last year)
  • Avinash thinks the regulators are significantly to blame for the last crisis since they themselves said the latest financial innovations in credit derivatives were making us safer through sharing out risk in the system.
  • He said that there is no theory for making a complex system "safe" as a whole and that the regulators did not/do not "get" this idea.
  • Diversity of approach and risks in a large system (macro financial markets) is our only current defence, and regulatory "best practice" has driven conformity not diversity in the market, making systemic risks higher not lower.
  • So the regulators are themselves creating a homogenised market.
  • In terms of solutions, he proposes that risk and audit committees need separating so that risk management does not become a "tick box" exercise.
  • He further proposes that the risk management function is given some capital so that it can place hedges at a macro level for the institution (i.e. looking at the resulting risk when divisional risks have been aggregated) - here he is proposing a move to risk "management" as opposed to the much more common risk "reporting" found in many institutions.
  • One risk management indicator idea he proposed was to put a portfolio management model together that was linked to VAR in order to see where the "herd" is moving to (e.g. low vol, high return Asian markets of the past etc) and to move or hedge against this.
  • He is concerned that applying Basel II regulation to the insurance industry with Solvency II will mean that all players will be dancing to the same VaR tune, which will introduce more risk as more institutions are forced to react in the same way to market movements and volatility.
  • On the same lines, Credit Rating Agency regulation will create barriers to changes in ratings methodology in response to endogenous market risk, again meaning that everyone will be forced to behave and act in the same ways.
  • He summarised that "endogenous risk" (movements in the market caused by the market) and not statistical distributions that are the key issue and diversity is the only solution.

Entertaining speaker with some interesting ideas that fly in the face of much of what is being done by the regulators today, and generally well received by many of the risk managers present. Behavioural finance and the "crowded trade" (i.e. everyone doing the same thing in the market causing movements within the market) seem to be key themes occurring in a lot of what academics and practitioners have said on risk management recently. Now what to do about it? Not sure that less (not more) regulation will find many fans at the moment...answers on a postcard please!

05 December 2009

Maths to Money - Quantitative Investment

I attended the Quant Invest 2009 event for the first time last week in Paris. The event is unsurprisingly about quantitative investment strategies, but with an institutional asset manager and hedge fund focus - so not so much about ultra-high frequency trading (although some present) but more about using quantitative techniques to manage medium/longer-term investment decisions and applied portfolio theory. A few highlights below that I found interesting:

  • Pierre Guilleman of Swiss Life Asset Management gave an interesting 1/2 day workshop entitled "A random walk through models":
  • He is a strong supporter of the need to understand more about the data and statistical assumptions upon which any quant investment model is based and how these fit with the desired investment objectives (similar to the Modeler's Manifesto)
  • He made the point that good models can sometimes be almost annoyingly simple, and cited the example of Professor Fair of Yale, who had determined that US elections were predictable based on simple parameters such as past results, inflation and GDP, and that policy did not seem to be a key factor at all - annoying for the politicians anyway!
  • Pierre seems very concerned that the Solvency II regulation applied to Life Institutions will negatively influence the investment policies of many institutions - applying sell-side risk measures like VAR to the insurance industry will drive a more short-term approach to investment. He strongly believes that VAR applied to his industry should have an expected return parameter introduced to fit with longer term investment horizons of 10 to 25 years.
  • Bob Litterman of Goldman Sachs Asset Management opened the first "official" day of the conference:
  • Bob put forward his "scientific" approach to investment modelling going through the stages of hypothesis, test and implement. He warned against overconfidence in investment (apparently 70% of us think we are "above average"...) and impulsiveness (quick impulsiveness test: "if a bat costs $1 more than the ball, and the bat and ball together cost $1.10 then how much does the bat cost?...") 
  • He said that the failure of quantitative investment models in 2007 needed to be understood given the success of quant models over past decades. In particular he thought that quant investment became the "crowded trade" of 2007 with every hedge fund having a quant investment strategy. In terms of why this became a "crowded trade" Bob thinks that the barriers to entry into quant investment (particularly technology) have lowered significantly recently.  
  • He noted that factor-based investment opportunities decay quicker than they used to due to increased competition - implying the need for a more dynamic and opportunistic investment approach.  
  • GSAM are now looking at new markets and new investment instruments, trying to find areas of market disruption but without following what others are doing in the market.  
  • He pointed out the conflict between investors wanting more transparency over what is done for them, against the need to be more proprietary about the investment models developed.  
  • Next there was a talk on regulation from the French regulator that was dull, dull, dull both in terms of content and presentation style (when will regulators actually prepare well for the talks they give?)
  • Panel debate was also pretty average, with the word "alpha" being used too much in my view - asset managers of a certain type seem to hide behind this word as an opaque "magic wand" to justify what they do.
  • Jean-Philippe Bouchaud of Capital Fund Management did a great talk called "Why do Prices Move?". Some points from the talk:
  • He started off with a reminder about the Efficient Markets Hypothesis (EMH) and how it says that crashes and market movements are caused by events outside (exogenous to) the market such as news, events etc.
  • He then said this was not borne out in the data, where extreme jumps in prices were related to news only 5% of the time.
  • Volatility looks like a long memory process with clustering of vol over time - similar to behaviour in complex systems
  • The sign of order flow is predictable but the price movement is not, with only 1% of daily order volume accounting for price movements over 5%
  • Even very liquid stocks have low immediate liquidity, meaning that price movements can play out over many hours and days as liquidity is sought to "play-out" some change in fundamental price levels.
  • Joseph Masri of the Canadian Pension Plan Investment Board then did a good talk on Risk Management:
  • Jo said that sell-side risk was easier to deal with in some ways since it involved fewer strategies in high volumes, and hence could be better resourced.
  • Buy-side quantitative risk was harder due to its reliance on sell-side research and risk tools and the outsourcing of credit assessment to the credit rating agencies, with the loss of Bear Stearns and Lehman having caused the buy-side to have to do more risk management itself (and through third parties) rather than rely on sell-side risk management tools.
  • He said that sell-side risk models are a good start for an asset manager, but need to be adapted to give both absolute and relative risk (to a benchmark fund for instance). All models are no substitute for risk governance.
  • He described the crossover between risk methods - VaR, stress testing, factor-based - and their applicability to market, credit and counterparty risk.
  • Like Pierre he was not a fan of 1 or 10 day trading VAR being applied to investment managers since this risk measure was not suitable for long term investment in his view.
  • On stress testing he said this needed to be top down (using historical events etc) as well as bottom up from knowing the detail of strategy/portfolio.
  • In terms of challenges in risk management, he said that VaR needed more stress testing to cope with the fat-tails effect in markets, that liquidity risk - both of counterparties and of illiquid products - was vital, and stressed the importance of stress testing (he mentioned reverse stress testing), plus also the feedback (crowding) effects of having similar investment strategies to others in the market.
  • Dale Gray of the IMF gave a very interesting talk on how he and Bob Merton have been applying the contingent claims model of a company (looking at equity in terms of option payoffs for shareholders and bondholders - see the sketch after this list) to whole economies:
  • He said that some of his work was being applied to produce a model for the pricing of the implicit guarantees offered by governments to banks
  • He said these models were also applicable to macro-prudential risk
  • Very interesting talk, and if he really has something on macro-level risk then this is great relative to the woolly approach taken by the regulators so far
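
For reference, the textbook version of the contingent claims idea mentioned above (a toy sketch with made-up numbers, nothing to do with Dale's sovereign framework) treats equity as a call option on the firm's assets, struck at the face value of its debt:

    import math

    def norm_cdf(x):
        return 0.5 * math.erfc(-x / math.sqrt(2))

    def merton_equity(assets, debt_face, asset_vol, r, t):
        """Equity value as a call on firm assets with strike equal to the face value of debt."""
        d1 = (math.log(assets / debt_face) + (r + 0.5 * asset_vol ** 2) * t) / (asset_vol * math.sqrt(t))
        d2 = d1 - asset_vol * math.sqrt(t)
        equity = assets * norm_cdf(d1) - debt_face * math.exp(-r * t) * norm_cdf(d2)
        default_prob = norm_cdf(-d2)          # risk-neutral probability that assets < debt at t
        return equity, default_prob

    equity, pd = merton_equity(assets=120.0, debt_face=100.0, asset_vol=0.25, r=0.02, t=1.0)
    print(f"equity value: {equity:.2f}, risk-neutral default probability: {pd:.2%}")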

There were some other good talks from Danielle Bernardi on Behavioural Finance, Martin Martens on Fixed Income Quant Investment, Vassilios Papathanakos on Stochastic Portfolio Theory (it seemed to be a "holy grail" of investment models, giving good returns even in the crisis - which begs the question of why he is telling everyone about it?), Claudio Albanese on unified derivative pricing/calibration across all markets (again another "holy grail" worth more investigation) and Terry Lyons on speeding up Monte Carlo simulations.

Overall a good conference, although the quality of the asset managers present seemed very digital, from those who really seemed to know what they were talking about to those who plainly did not (in my limited view!). Along this line of thought, I think it would be good to test whether there is an inverse relationship between the quality of the asset manager and the number of times they use the word "alpha" to explain what they are doing...

12 November 2009

It's in the news...

I went along to the Forum on News Analytics over in Canary Wharf on Monday evening, organised by Professor Gautam Mitra from OptiRisk / Carisma at Brunel University. We seem to be in the early days of transforming news articles into quantifiable/machine-readable data so that they can be processed automatically/systematically in trading and risk management. It was a good event with both vendors and practitioners attending, so was reasonably balanced between vendor hype and the current state of market practice.

As background on what is meant by news analytics data: for example, you might count the number of news articles about a particular company and look at whether the quantity of news articles might be a predictor of some change in the company's stock price or volatility. Moving on from this simple approach (assuming that you are clever enough to be certain about which news is about which company), you can then move towards assessing whether the news is negative, neutral or positive in sentiment about a company/stock.

The context here is about having the capability to automatically process/analyse any kind of text-based news story, not just those from research analysts that might be nicely tagged with such quantifiers of sentiment (see http://www.rixml.org/ on xml standards for analyst data). The way in which the meaning of the text is "quantified" uses some form of Natural Language Processing.
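
As a deliberately naive sketch of what "quantifying" news might look like (my own toy example with made-up headlines and companies, nothing like the vendors' production NLP), you can count stories per company and score each headline against small positive and negative word lists:

    from collections import defaultdict

    headlines = [
        ("ACME Corp", "ACME Corp beats profit forecasts and raises guidance"),
        ("ACME Corp", "Regulator opens probe into ACME Corp accounting"),
        ("Globex", "Globex wins record contract, shares surge"),
    ]

    POSITIVE = {"beats", "raises", "wins", "record", "surge", "upgrade"}
    NEGATIVE = {"probe", "misses", "downgrade", "lawsuit", "fraud", "cuts"}

    counts, sentiment = defaultdict(int), defaultdict(int)

    for company, text in headlines:
        words = {w.strip(",.") for w in text.lower().split()}     # crude tokenisation
        counts[company] += 1
        sentiment[company] += len(words & POSITIVE) - len(words & NEGATIVE)

    for company in counts:
        print(f"{company}: {counts[company]} stories, net sentiment {sentiment[company]:+d}")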

The event started with a brief talk by Dan diBartolomeo of Northfield Information Services. I hadn't heard of him or his company before (maybe I should pay more attention!) but he seemed a very solid speaker with a strong academic and practical background in investment management and modelling. He referenced a few academic papers (available via their web site) on news analytics, and how news analytics and implied volatility could provide better estimates of future volatility than implied volatility alone. He also made some good points about how investment "models" are calibrated to history and how such models need to adapt to "today" - he put it as "how are things different now from the past?" and put forward the idea of a framework for assessing and potentially modifying a model to respond to the "now" situation. He also suggested that the market can react very differently to "expected news" (having a range of investment "what ifs" planned for a known earnings announcement) as opposed to unexpected information (we are back into the realms of the Black Swan and the ultimate in uncertainty wisdom from Donald Rumsfeld).

Armando Gonzalez of RavenPack then began by explaining how RavenPack had become involved in applying text analysis to finance (it seems the subject has its origins, like a lot of things, in the military). RavenPack seem to be the highest profile quantified news vendor at the moment, and whilst Armando is obviously biased towards pushing the concept that money can be made by adding quantified news data to trading models, he said that not many firms are as yet systematically processing news and most people are relying upon manual interpretation of the news they buy/use. Some of the studies RavenPack have on market news and prices are very interesting, showing how a news event can take up to 20 mins before the market settles on a new "fair" price level for a stock. Additionally, and maybe an interesting reflection on human behaviour, in bull markets there are usually twice as many positive stories about companies as negative, but strikingly in a bear market there were still almost equal amounts of positive and negative news - so humans are basically optimists! (or delusional, or just plain greedy...take your pick!)

Mark Vreijling of Semlab followed Armando and suggested that a lot of their sales prospects understandably desire "proof" of the benefits of adding quantified news to trading, but this was a little ironic since most financial institutions have been paying to receive "raw" news for years, presumably because they perceive benefit from it. Mark also mentioned that the application of quantified news to risk management was a new but growing area for him and his colleagues.

Gurvinder Brar of Macquarie then went into some of the practicalities of quantifying and using news in automated trading. He suggested that you need to understand what is really "news" (containing information on something that has just happened) and what is merely a news "article" (like a "feature" in a magazine etc). Assessing the relevance of news was also difficult, and he added that setting a hierarchy of what kinds of events are important to your trading was a key step in dealing with news data. Fundamentally he asked: why wait five days for analysts to publish their assessment of a market or company-specific event when you could react to the event in near real-time?

The event then went into "panel" mode where the following points came out:

  • Dan thought that a real challenge was integrating quantified news with all of the other relevant datasets (market data, but also reference data etc)
  • Armando picked up on Dan's point by giving the example of news about Gillette, which at one point was about Gillette the company but then, on acquisition, became news about the Gillette "brand" which became a part of Procter and Gamble.
  • Dan said that a key problem with processing news was also understanding what news was simply ignored by the news wires i.e. we know what is being talked about, but what could have been talked about, why was it ignored and is it (even so) relevant to trading?
  • Mark and Armando said that the "context" for the news story was vital and that market expectations can turn many "negative" news stories into positive outcomes for trading e.g. the market likes bad news when it is not as "bad" as everyone thought.
  • Dan made a very interesting point about trading in terms of categorising trades as "want to" trades and "have to" trades. He gave the example of a trade being observed that seemingly has no news associated with/prompting it - so does this mean the trade is occurring because somebody "has to" make the trade (a fund facing an unwelcome client redemption for example?) or because there has been some information leak to a market participant and such a participant "wants to" make a trade before the news becomes available to the market as a whole.
  • I think all of the panel members then collectively hesitated before answering the next question from the audience, with Microsoft having one of their "text search" R&D team (think Bing...) asking about news categorisation and quantification.
  • Dan also mentioned something that I have only recently become more aware of, which is that apart from major markets in the US, most exchanges world-wide do not publish whether a trade was a "buy" or "sell" trade (they just publish the price and transaction size). Obviously knowing the direction of the trade would be useful to any trading model, and Dan referred to this as wanting to know the "signed volume".
  • A member of the audience then asked whether most quantified news had been based on just the English language, and the consensus was that most was based on English, but Natural Language Processing can be trained in other languages relatively easily. A few members of the panel pointed out that all languages change, even English, requiring constant retraining, and also that certain languages, countries and cultures added further complication to the recognition process.
  • The next question asked was whether the panel could outline the major areas in which quantified news is applied - the answer included intraday (but not quite real-time) trading, algorithmic execution, lower frequency portfolio rebalancing and compliance/risk/market abuse detection.
  • A good debate ensued about whether "news" was provided by the official newswires or by the web itself. The panel (and audience) consensus seemed to favour the premise that the news wires are the source of news and the web is a reflection/regurgitation of this news. That said, Gurvinder of Macquarie gave the nice counter-example of the analysts/news wires not making much of the new Apple iPod when, looking at the web, it was possible to see that the public were in contrast very enthusiastic about it.

Overall an interesting event. I think the application of "quantified news" to risk management is interesting - maths and financial theory is very interesting but markets are driven by people's behaviour and if "quantified news" can help us understand this better it has to help in avoiding (some!) of the future problems to be faced in the market.

10 September 2008

Tibco buys Insightful...

...it must be summer (maybe not in the UK according to the weather?), seems like I missed this but Tibco has just finalised its purchase of Insightful, the makers of the S-Plus statistical package. A release from Insightful explaining the deal can be found by clicking here. Not something that strikes me as an immediate "that makes obvious sense" but not a negative either, so let's see...

Xenomorph: analytics and data management

About Xenomorph

Xenomorph is the leading provider of analytics and data management solutions to the financial markets. Risk, trading, quant research and IT staff use Xenomorph’s TimeScape analytics and data management solution at investment banks, hedge funds and asset management institutions across the world’s main financial centres.
