20 posts categorized "Economics"

24 June 2014

Cloud, data and analytics in London. Tomorrow Wednesday 25th June.

One day to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylor's Hall tomorrow, Wednesday June 25th. With over ninety people registered so far it should be a great event, so if you can make it please register and come along; it would be great to see you there.

14 May 2014

Clients and Partners. Everywhere You Need Them.

Quick thank you to the clients and partners who took some time out of their working day to attend our breakfast briefing, "Financial Markets Data and Analytics. Everywhere You Need Them." at Microsoft's Times Square offices last Friday morning. Not particularly great weather here in Manhattan, so it was great to see around 60 folks turn up...

Photo 1

 
Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started the event and set out the agenda for the morning. Rupesh described the expense of data within financial markets, and the difficulties experienced by risk managers in pulling together all the data and analytics they need...

Photo 2
 
...and following Rupesh was Antonio Zurlo (below) of Microsoft (Senior Program Manager), who explained the fundamentals of Microsoft Azure and what services and infrastructure it offers, including public cloud, virtual private cloud and hybrid cloud architectures. Antonio also described a key usage pattern for HPC/grid on Azure, "bursting to the cloud" when on-premise infrastructure needs to be extended for end-of-day/intra-day risk calcs...
Photo 3
 
Sang Lee (below) of Aite Group (Managing Partner) then delivered his presentation "Floating in the Capital Markets Cloud: Moving Beyond Data Storage". Sang's main findings from the survey of 20 financial institutions were that concerns about security and SLAs relating to cloud usage remain, but even those that were concerned said they were planning to start a cloud project within the next 24 months. Cloud technology seems to be becoming more acceptable of late, and Sang said this seems to be due to regulation, cost pressures and the desire to offer better services to clients. Sang confirmed that HPC/Grid with "burst to the cloud" is a common usage pattern and that "Data as a Service" is becoming more popular...
Photo 4
 
Fred Veasley (below) of Microsoft (Tech Solutions Professional) was next up, introducing Microsoft Power BI and Office 365. Fred explained how Power BI extends the capabilities of Excel with data search (finding and retrieving published data sources both within an organization and over the web), its integration capabilities with standard databases, NoSQL databases, data standards such as OData and new APIs/sources of data such as Facebook. Once downloaded, the data can be shaped and merged with other datasets (for instance combining data from positions databases/systems with analytics and data from the cloud), and kept up to date automatically. In addition to Power BI, Power View enables great visualizations and interactive dashboards to be created, and once finalized these can be deployed centrally via web pages down to end users...
Photo 5
 
After Fred, Brian Sentance (below), CEO of Xenomorph, explained the origins of the TimeScape MarketPlace. Based on some discussions with Microsoft about 18 months back, the idea was effectively firstly to get TimeScape running in the Microsoft Azure cloud, secondly to turn the data management capabilities of TimeScape "upside-down" by using it as a means to upload and publish data to the cloud, and thirdly to provide one-to-many access to multiple sources of data via web interfaces and key delivery tools such as Microsoft Power BI. Put another way, without any local software or hardware infrastructure, both business users and IT staff can access multiple data sources in the same format and using the same data model wherever the data is needed. In addition to .NET and Java interfaces to the TimeScape MarketPlace via OData, web API delivery into F#, Python, R and MATLAB are all in development...
Photo 1 - Copy
 
...and in addition to downloading data via Power BI, Brian also demonstrated how you could build on the data using "Power View" to create powerful analytical dashboard functionality that could be built and tested in Excel, then deployed centrally within a browser for access by users outside of Excel. He added that partners were one of the key aspects of the platform, and introduced the TimeScape MarketPlace Partner Program to get data, analytics and model vendors, plus software and service vendors, involved and building on the platform. Andrew Tognela (below) of Microsoft (Worldwide Managing Director) closed the presentations...
Photo 4 - Copy

21 October 2013

Credit Risk: Default and Loss Given Default from PRMIA

Great event from PRMIA on Tuesday evening of last week, entitled Credit Risk: The link between Loss Given Default and Default. The event was kicked off by Melissa Sexton of PRMIA, who introduced Jon Frye of the Federal Reserve Bank of Chicago. Jon seems to be an acknowledged expert in the field of Loss Given Default (LGD) and credit risk modelling. I am sure that the slides will be up on the PRMIA event page above soon, but much of Jon's presentation seems to be based around the following working paper. So take a look at the paper (which is good in my view), but I will stick to an overview and in particular any anecdotal comments made by Jon and other panelists.

Jon is an excellent speaker, relaxed in manner, very knowledgeable about his subject, humorous but also sensibly reserved in coming up with immediate answers to audience questions. He started by saying that his talk was not going to be long on philosophy, but very pragmatic in nature. Before going into detail, he outlined that the area of credit risk can and will be improved, but that this improvement becomes easier as more data is collected, and inevitably this data collection process may need to run for many years and decades yet before the data becomes statistically significant.

Which Formula is Simpler? Jon showed two formulas for estimating LGD, one a relatively complex looking formula (based on the Vasicek distribution mentioned in his working paper) and the other a simple linear model of the form a + b·x. Jon said that looking at the two formulas, many would hope that the second formula might work best given its simplicity, but he wanted to convince us that the first formula was in fact simpler than the second. He said that the second formula would need to be regressed on all loans to estimate its parameters, whereas the first formula depended on two parameters that most banks should have a fairly good handle on. The two parameters were Default Rate (DR) and Expected Loss (EL). The fact that these parameters were relatively well understood seemed to be the basis for saying the first formula was simpler, despite its relative mathematical complexity. This prompted an audience question on the difference between Probability of Default (PD) and Default Rate (DR). Apparently it turns out PD is the expected probability of default before default happens (so ex-ante) and DR is the realised rate of default (so ex-post).
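
For reference, here is my own hedged reconstruction of the two candidate forms (my notation, pieced together from the working paper rather than Jon's slides): the linear model regresses LGD directly on the default rate, while the "complex" Vasicek-based formula maps a realised default rate to an LGD using only PD, EL and an asset correlation ρ, quantities a bank should already have, so no loan-level regression is needed.

```latex
% Simple linear model: a and b must be estimated by regression on loan data
\mathrm{LGD} = a + b \cdot \mathrm{DR}

% Vasicek-based LGD function (my reconstruction from the working paper):
% conditional LGD implied by a realised default rate DR
\mathrm{LGD}(\mathrm{DR}) = \frac{\Phi\!\left(\Phi^{-1}(\mathrm{DR}) - k\right)}{\mathrm{DR}},
\qquad
k = \frac{\Phi^{-1}(\mathrm{PD}) - \Phi^{-1}(\mathrm{EL})}{\sqrt{1-\rho}}
```

Here Φ is the standard normal cumulative distribution function; the point Jon was making is that once PD, EL and ρ are known or prescribed, the "complex" formula has nothing left to estimate.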

Default and LGD over Time. Jon showed a graph (by an academic called Altman) of DR and LGD over time. When the DR was high (lots of companies failing, in a likely economic downturn) the LGD was also perhaps understandably high (a high number of companies failing, against an economic background that is both part of the cause of the failures and not helping the loss recovery process). When DR is low, there is a disconnect between LGD and DR. Put another way, when the number of companies failing is low, the losses incurred by those companies that do default can be high or low; there is no discernible pattern. I guess I am not sure whether this disconnect is due to the smaller number of companies failing meaning the sample space is much smaller and hence the outcomes are more volatile (no averaging effect), or more likely that in healthy economic times the loss given a default is much more of a random variable, dependent on the specifics of the defaulting company rather than on the general economic background.

Conclusions Beware: Data is Sparse. Jon emphasised from the graph that the Altman data went back 28 years, of which 23 years were periods of low default, with the 5 years of high default levels spread across only 3 separate recessions. From a statistical point of view this is very little data, which makes drawing any firm statistical conclusions about default and levels of loss given default very difficult and error-prone.

The Inherent Risk of LGD. Jon here seemed to be focussed not on the probability of default, but rather on the conditional question: once a default has occurred, how does LGD behave and what risk is inherent in the range of losses faced? He described how LGD affects i) Economic Capital - if LGD is more variable, then you need stronger capital reserves, ii) Risk and Reward - if a loan has more LGD risk, then the lender wants more reward, and iii) Pricing/Valuation - even if the expected LGD of two loans is equal, different loans can still default under different conditions and have different LGD levels.

Models of LGD

Jon showed a chart with LGD plotted against DR for 6 models (two of which I think he was involved in). All six models were dependent on three parameters, PD, EL and correlation, and all six models seemed to produce almost identical results when plotted on the chart. Jon mentioned that one of his models had been validated (successfully I think, but with a lot of noise in the data) against Moody's loan data taken over the past 14 years. He added that he was surprised that all six models produced almost the same results, implying either that all models were converging around the correct solution or, in total contrast, that all six models were potentially subject to "group think" and were systematically wrong in the way the problem is looked at.

Jon took one of his LGD models and compared it against the simple linear model, using simulated data. He showed a graph of some data points for what he called a "lucky bank" with the two models superimposed over the top. The lucky bit came in because this bank's data points for DR against LGD showed lower DR than expected for a given LGD, and lower LGD for a given DR. In this specific case, Jon said that the simple linear model fits better than his non-linear one, but when the exercise is repeated over many data sets his LGD model fitted better overall, since it seemed to be less affected by random data.
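
As a rough illustration of the kind of comparison Jon described (this is my own sketch with made-up parameter values, not his methodology or data), the snippet below simulates annual (DR, LGD) observations from a one-factor model and then fits both the simple linear model and the LGD function sketched above:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Illustrative "true" portfolio parameters - made up, not from the talk
PD, EL, RHO = 0.03, 0.02, 0.15
N_YEARS = 28

def lgd_function(dr, pd, el, rho):
    """Vasicek-style conditional LGD implied by a realised default rate."""
    k = (norm.ppf(pd) - norm.ppf(el)) / np.sqrt(1.0 - rho)
    return norm.cdf(norm.ppf(dr) - k) / dr

# Simulate annual default rates from a one-factor model, then noisy realised LGDs
z = rng.standard_normal(N_YEARS)                      # systematic factor
dr = norm.cdf((norm.ppf(PD) + np.sqrt(RHO) * z) / np.sqrt(1.0 - RHO))
lgd = lgd_function(dr, PD, EL, RHO) + rng.normal(0.0, 0.05, N_YEARS)

# Fit the simple linear model LGD = a + b * DR
b, a = np.polyfit(dr, lgd, 1)

# Fit the non-linear LGD function, treating correlation as the free parameter
(rho_hat,), _ = curve_fit(lambda d, rho: lgd_function(d, PD, EL, rho),
                          dr, lgd, p0=[0.10], bounds=(0.001, 0.999))

print(f"linear fit:     LGD = {a:.3f} + {b:.3f} * DR")
print(f"non-linear fit: implied correlation = {rho_hat:.3f}")
```

Repeating this over many simulated "banks" (rather than one lucky draw) is the flavour of test Jon described when arguing that the non-linear model wins on average.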

There were then a few audience questions as Jon closed his talk, one leading Jon to remind everyone of the scarcity of data in LGD modelling. In another Jon seemed to imply that he would favor using his model (maybe understandably) in the Dodd-Frank Annual Stress Tests for banks, emphasising that models should be kept simple unless a more complex model can be justified statistically. 

Steve Bennet and the Data Scarcity Issue 

Following Jon's talk, Steve Bennet of PECDC picked up on Jon's issue of scarce data within LGD modelling. Steve is based in the US, working for PECDC, which is a cross-border initiative to collect LGD and EAD (exposure at default) data. The basic premise seems to be that in dealing with the scarce data problem, we do not have 100 years of data yet, so in the meantime let's pool data across member banks and hence build up a more statistically significant data set - put another way: let's increase the width of the dataset if we can't control the depth.

PECDC is a consortium of around 50 organisations that pool data relating to credit events. Steve said that they capture data fields per default at four "snapshot" times: origination, 1 year prior to default, at default and at resolution. He said that every bank that had joined the organisation had managed to improve its datasets. Following an audience question, he clarified that PECDC does not predict LGD with any of its own models, but rather provides the pooled data to enable the banks to model LGD better.

Steve said that LGD turns out to be very different for different sectors of the market, particularly between SMEs and large corporations (levels of LGD for large corporations being more stable globally and less subject to regional variations). But also there is great LGD variation across specialist sectors such as aircraft finance, shipping and project finance. 

Steve ended by saying that PECDC was originally formed in Europe, and was now attempting to get more US banks involved, with 3 US banks already involved and 7 waiting to join. There was an audience question relating to whether regulators allowed pooled data to be used under Basel IRB - apparently Nordic regulators allow this due to needing more data in a smaller market, European banks use the pooled data to validate their own data in IRB, but in the US banks must use their own data at the moment.

Til Schuermann

Following Steve, Til Schuermann added his thoughts on LGD. He said that LGD has a time variation and is not random, being worse in recession when DR is high. His stylized argument to support this was that in recession there are lots of defaults, leading to lots of distressed assets and that following the laws of supply and demand, then assets used in recovery would be subject to lower prices. Til mentioned that there was a large effect in the timing of recovery, with recovery following default between 1 and 10 quarters later. He offered words of warning that not all defaults and not all collateral are created equal, emphasising that debt structures and industry stress matter. 

Summary

The evening closed with a few audience questions and a general summation by the panelists of the main issues of their talks, primarily around models and modelling, the scarcity of data and how to be pragmatic in the application of this kind of credit analysis. 

 

 

26 June 2013

Macro Stress Testing

Great event from PRMIA on Macro Stress Testing at Moody's last night. A few quick highlights:

  • The role of the regulators is now not only to be sure that banks have enough capital to withstand a severe downturn, but that the banks have enough capital once the downturn has happened.
  • The Fed have a new whitepaper coming out in July on "Effective Capital Adequacy Process" that covers 7 different aspects from risk management foundations through to governance.
  • CCAR stress tests are thought by regulators to be easier to understand (e.g. if this happens, we get this loss) rather than VaR/risk sensitivities that do not capture tail risk.
  • Hedges that do not behave as hedges under times of stress are a major area of concern.
  • Assumptions of the stress tests, such as the second half of 2008 occurring instantaneously to the trading book, are not reasonable/representative, but it is hard to come up with credible/pragmatic alternatives.
  • One of the speakers put forward the following list of positives about the stress tests:
    • Restoration of market/public confidence in banks
    • Determination of the appropriate levels of capital adequacy
    • Understanding of risk profile
    • Identification of tail risks
    • Curbing of risk taking
    • Incentivising behaviours
  • Whilst banks and regulators are often in conflict over capital adequacy, banks do implement their own internal stress tests and do have a commercial interest in doing this well.
  • One panelist said that "the best hedge is to sell".
  • Some banks have switched accountancy standards to game capital requirements, and there was some later debate that Risk Weighted Assets were a controversial part of the calculations when analyzed against the NYU Stern V-Lab stress testing.
  • There is a danger that CCAR and stress testing drives or becomes an industry in itself, which is not good for markets, the banking system or the economy as a whole.
  • There was some debate about liquidity risk as it relates to solvency, and that it should be much more integrated with the stress tests. The panel expressed interest in the forthcoming CLAR stress tests and how they relate to CCAR.
  • The panel thought that the Federal Reserve is effectively challenging each bank to understand its own balance sheet better than the Fed can.
  • Given the state of systems and data management at many banks, this was a big challenge.
  • The panel thought that more open access to the data regulators are collecting would be great for academics to analyze given some of the big data technologies available to analyze such large datasets.
  • One speaker put forward that only a subsidized industry such as banking could afford to treat data so poorly.

Great event, knowledgeable speakers with strong opinions and good wine/food afterwards (thanks Moody's!).

25 June 2013

Matthew Berry on the Libor/OIS curve debate

Guest post today from Matthew Berry of Bedrock Valuation Advisors, discussing Libor vs OIS based rate benchmarks. Curves and curve management are a big focus for Xenomorph's clients and partners, so great that Matthew can shed some further light on the current debate and its implications:

New Benchmark Proposal’s Significant Implications for Data Management

During the 2008 financial crisis, problems posed by discounting future cash flows using Libor rather than the overnight index swap (OIS) rate became apparent. In response, many market participants have modified systems and processes to discount cash flows using OIS, but Libor remains the benchmark rate for hundreds of trillions of dollars worth of financial contracts. More recently, regulators in the U.S. and U.K. have won enforcement actions against several contributors to Libor, alleging that these banks manipulated the benchmark by contributing rates that were not representative of the market, and which benefitted the banks’ derivative books of business.
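
As a toy illustration of why the choice of discounting curve matters (flat illustrative rates, not market data), here is the same collateralised cash flow discounted off a Libor-based zero rate versus an OIS zero rate; with a crisis-style Libor-OIS spread of 50bp the valuation gap becomes material:

```python
# Discount a single collateralised cash flow of 10m in 5 years using flat
# zero rates -- purely illustrative numbers, not market data.
cashflow, years = 10_000_000, 5

libor_zero = 0.035   # Libor-based zero rate
ois_zero   = 0.030   # OIS zero rate (Libor-OIS spread of 50bp, crisis-like)

pv_libor = cashflow / (1 + libor_zero) ** years
pv_ois   = cashflow / (1 + ois_zero) ** years

print(f"PV discounted at Libor: {pv_libor:,.0f}")
print(f"PV discounted at OIS:   {pv_ois:,.0f}")
print(f"Difference:             {pv_ois - pv_libor:,.0f}")
```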

In response to these allegations, the CFTC in the U.S. and the Financial Conduct Authority (FCA) in the U.K. have proposed changes to how financial contracts are benchmarked and how banks manage their submissions to benchmark fixings. These proposals have significant implications for data management.

The U.S. and U.K. responses to benchmark manipulation

In April 2013, CFTC Chairman Gary Gensler delivered a speech in London in which he suggested that Libor should be retired as a benchmark. Among the evidence he cited to justify this suggestion:

- Liquidity in the unsecured inter-dealer market has largely dried up.

- The risk implied by contributed Libor rates has historically not agreed with the risk implied by credit default swap rates. The Libor submissions were often stale and did not change, even if the entity’s CDS spread changed significantly. Gensler provided a graph to demonstrate this.

Gensler proposed to replace Libor with either the OIS rate or the rate paid on general collateral repos. These instruments are more liquid and their prices more readily observable in the market. He proposed a period of transition during which Libor is phased out while OIS or the GC repo rate is phased in.

In the U.K., the Wheatley Report provided a broad and detailed review of practices within banks that submit rates to the Libor administrator. This report found a number of deficiencies in the benchmark submission and calculation process, including:

- The lack of an oversight structure to monitor systems and controls at contributing banks and the Libor administrator.

- Insufficient use of transacted or otherwise observable prices in the Libor submission and calculation process.

The Wheatley Report called for banks and benchmark administrators to put in place rigorous controls that scrutinize benchmark submissions both pre- and post-publication. The report also called for banks to store an historical record of their benchmark submissions, and for benchmarks to be calculated using a hierarchy of prices, with preference given to transacted prices, then prices quoted in the market, then management's estimates.

Implications for data management

The suggestions for improving benchmarks made by Gensler and the Wheatley Report have far-reaching implications for data management.

If Libor and its replacement are run in parallel for a time, users of these benchmark rates will need to store and properly reference two different fixings and forward curves. Without sufficiently robust technology, this transition period will create operational, financial and reputational risk given the potential for users to inadvertently reference the wrong rate. If Gensler’s call to retire Libor is successful, existing contracts may need to be repapered to reference the new benchmark. This will be a significant undertaking. Users of benchmarks who store transaction details and reference rates in electronic form and manage this data using an enterprise data management platform will mitigate risk and enjoy a lower cost to transition.

Within the submitting banks and the benchmark administrator, controls must be implemented that scrutinize benchmark submissions both pre and post publication. These controls should be exceptions-based and easily scripted so that monitoring rules and tolerances can be adapted to changing market conditions. Banks must also have in place technology that defines the submission procedure and automatically selects the optimal benchmark submission. If transacted prices are available, these should be submitted. If not, quotes from established market participants should be submitted. If these are not available, management should be alerted that it must estimate the benchmark rate, and the decision-making process around that estimate should be documented.
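
A minimal sketch of the kind of submission waterfall described above (hypothetical field and function names, not a real submission system): prefer transacted prices, fall back to market quotes, and escalate to a documented management estimate only when neither is available.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MarketObservations:
    transacted_price: Optional[float] = None   # price from an actual transaction
    quoted_price: Optional[float] = None       # quote from an established participant
    management_estimate: Optional[float] = None

def select_submission(obs: MarketObservations) -> Tuple[float, str]:
    """Pick the benchmark submission using the hierarchy: transaction > quote > estimate."""
    if obs.transacted_price is not None:
        return obs.transacted_price, "transaction"
    if obs.quoted_price is not None:
        return obs.quoted_price, "quote"
    if obs.management_estimate is not None:
        # In practice this branch should alert management and log the
        # decision-making process behind the estimate for later scrutiny.
        return obs.management_estimate, "management estimate (documented)"
    raise ValueError("no observable input available - escalate to management")

rate, source = select_submission(MarketObservations(quoted_price=0.0275))
print(f"submit {rate:.4%} (source: {source})")
```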

Conclusion

These improvements to the benchmark calculation process will, in Gensler’s words, “promote market integrity, as well as financial stability.” Firms that effectively utilize data management technology, such as Xenomorph's TimeScape, to implement these changes will manage the transition to a new benchmark regime at a lower cost and with a higher likelihood of success.

 


25 April 2013

The Anthropology, Sociology, and Epistemology of Risk

Background - I went along to my first PRMIA event in Stamford, CT last night, with the rather grandiose title of "The Anthropology, Sociology, and Epistemology of Risk". Stamford is about 30 miles north of Manhattan and is the home to major offices of a number of financial markets companies such as Thomson Reuters, RBS and UBS (who apparently have the largest column-less trading floor in the world at their Stamford headquarters - particularly useful piece of trivia for you there...). It also happens to be about 5 minutes drive/train journey away from where I now live, so easy for me to get to (thanks for another useful piece of information I hear you say...). Enough background, more on the event which was a good one with five risk managers involved in an interesting and sometimes philosophical discussion on fundamentally what "risk management" is all about.

Introduction - Marc Groz who heads the Stamford Chapter of PRMIA introduced the evening and started by thanking Barry Schwimmer for allowing PRMIA to use the Stamford Innovation Centre (the Old Town Hall) for the meeting. Henrik Neuhaus moderated the panel, and started by outlining the main elements of the event title as a framework for the discussion:

  • Anthropology - risk management is to what purpose?
  • Sociology - how does risk management work?
  • Epistemology - what knowledge is really contained within risk management?

Henrik started by taking a passage about anthropology and replacing human "development" with "risk management", which seemed to fit ok, although the angle I was expecting was much more about human behaviour in risk management than where Henrik started. Henrik asked the panel what results they had seen from risk management and what did that imply about risk management? The panelists seemed a little confused or daunted by the question, prompting one of them to ask "Is that the question?".

Business Model and Risk Culture - Elliot Noma dived in by responding that the purpose of risk management obviously depended very much on the institutional goals of the organization. He said that it was as much about what you are forced to do and what you try to do in risk management. Elliot said that the sell-side view of risk management was very regulatory and capital focused, whereas mutual funds are looking more at risk relative to benchmarks and performance attribution. He added that in the alternatives (hedge-fund) space there were no benchmarks and the focus was more about liquidity and event risk.

Steve Greiner said that it was down to the investment philosophy and how risk is defined and measured. He praised some asset managers where the risk managers sit across from the portfolio managers and are very much involved in the decision making process.

Henrik asked the panel whether any of them had ever defined a “mission statement” for risk management. Marc Groz chipped in that he remembered that he had once defined one, and that it was very different from what others in the institution were expecting and indeed very different from the risk management that he and his department subsequently undertook.

Mark Szycher (of GM Pension Fund) said that risk management split into two areas for him, the first being the symmetrical risks where you need to work out the range of scenarios for a particular trade or decision being taken. The second was the more asymmetrical risks (i.e. downside only) such as those found in operational risk where you are focused on how best to avoid them happening.

Micro Risk Done Well - Santa Federico said that he had experience of some of the major problems experienced at institutions such as Merrill Lynch, Salomon Brothers and MF Global, and that he thought risk management was much more of a cultural problem than a technical one. Santa said he thought that the industry was actually quite good at the micro (trade, portfolio) risk management level, but obviously less effective at the large systematic/economic level. Mark asked Santa what was the nature of the failures he had experienced. Santa said that the risks were well modeled, but maybe the assumptions around macro variables such as the housing market proved to be extremely poor.

Keep Dancing? - Henrik asked the panel what might be done better? Elliot made the point that some risks are just in the nature of the business. If a risk manager did not like placing a complex illiquid trade and the institution was based around trading in illiquid markets, then what is a risk manager to do? He quoted the Citi executive who said “whilst the music is still playing we have to dance”. Again he came back to the point that the business model of the institution drives its culture and the emphasis of risk management (I guess I see what Elliot was saying, but taken one way it implied that regardless of what was going on risk management needs to fit in with it, whereas I am sure that he meant that risk managers must fit in with the business model mandated to shareholders).

Risk Attitudes in the USA - Mark said that risk managers need to recognize that the improbable is maybe not so improbable, and should be more prepared for the worst rather than doing risk management only under “normal” market and institutional behavior. Steven thought that a cultural shift was happening, where not losing money was becoming as important to an organization as making money. He said that in his view, Europe and Asia had a stronger risk culture than the United States, with much more consensus, involvement and even control over the trading decisions taken. Put another way, the USA has more of a culture of risk taking than Europe. (I have my own theories on this. Firstly I think that people are generally much bigger risk takers in the USA than in the UK/Europe, possibly influenced in part by the relative lack of an underlying social safety net – whilst this is not for everyone, I think it produces a very dynamic economy as a result. Secondly, I do not think that the cultural desire in the USA for the much admired “presidential” leader is necessarily the best environment for sound, consensus-based risk management. I would also like to acknowledge that neither of my two points above seems to have protected Europe much from the worst of the financial crisis, so it is obviously a complex issue!)

Slaves to Data? - Henrik asked whether the panel thought that risk managers were slaves to data? He expanded upon this by asking what kinds of firms encourage qualitative risk management and not just risk management based on Excel spreadsheets? Santa said that this kind of qualitative risk management occurred at a business level and less so at a firm wide level. In particular he thought this kind of culture was in place at many hedge funds, and less so at banks. He cited one example from his banking career in the 1980's, where his immediate boss was shouted off the trading floor by the head of desk, saying that he should never enter the trading floor again (oh those were the days...). 

Sociology and Credibility - Henrik took a passage on the historic development of women's rights and replaced the word "women" with "risk management" to illustrate the challenges risk management is facing in trying to get more say and involvement at financial institutions. He asked who should the CRO report to? A CEO? A CIO? Or a board member? Elliot responded by saying this was really an issue of credibility with the business for risk managers and risk management in general. He made the point that often Excel and numbers were used to establish credibility with the business. Elliot added that risk managers with trading experience obviously had more credibility, and to some extent where the CRO reported to was dependent upon the credibility of risk management with the business.

Trading and Risk Management Mindsets - Elliot expanded on his previous point by saying that the risk management mindset thinks more in terms of unconditional distributions and tries to learn from history. He contrasted this with the "conditional mindset" of a trader, where the time horizon forwards (and backwards) is rarely longer than a few days and the belief that a trade will work today because it worked yesterday is strong. Elliot added that in assisting the trader, the biggest contribution risk managers can make is to be challenging/helpful on the qualitative side rather than just the quantitative.

Compensation and Transactions - Most of the panel seemed to agree that compensation package structure was a huge influence on the risk culture of an organisation. Mark touched upon a pet topic of mine, which is that it is very hard for a risk manager to gain credibility (and compensation) when risk management is about what could happen as opposed to what did happen. A risk manager blocking a trade due to some potentially very damaging outcomes will not gain any credibility with the business if the trading outcome for the suggested trade just happened to come out positive. There seemed to be consensus here that some of the traditional compensation models based on short-term transactional frequency and size were ill-formed (given the limited downside for the individual), and whilst the panel reserved judgement on the effectiveness of recent regulation, moves towards longer-term compensation were welcomed from a risk perspective.

MF Global and Business Models - Santa described some of his experiences at MF Global, where Corzine moved what was essentially a broker into taking positions in European Sovereign Bonds. Santa said that the risk management culture and capabilities were not present to be robust against senior management for such a business model move. Elliot mentioned that he had been courted for trades by MF Global and had been concerned that they did not offer electronic execution and told him that doing trades through a human was always best. Mark said that in the area of pension fund management there was much greater fiduciary responsibility (i.e. behave badly and you will go to jail) and maybe that kind of responsibility had more of a place in financial markets too. Coming back to the question of who a CRO should report to, Mark also said that questions should be asked to seek out those who are 1) less likely to suffer from the "agency" problem of conflicts of interest and, on a related note, those who are 2) less likely to have personal biases towards particular behaviours or decisions.

Santa said that in his opinion hedge funds in general had a better culture where risk management opinions were heard and advice taken. Mark said that risk managers who could get the business to accept moral persuasion were in a much stronger position to add value to the business rather than simply being able to "block" particular trades. Elliot cited one experience he had where the traders under his watch noticed that a particular type of trade (basis trades) did not increase their reported risk levels, and so became more focussed on gaming the risk controls to achieve high returns without (reported) risk. The panel seemed to be in general agreement that risk managers with trading experience were more credible with the business but also more aware of the trader mindset and behaviors. 

Do we know what we know? - Henrik moved to his third and final subsection of the evening, asking the panel whether risk managers really know what they think they know. Elliot said that traders and risk managers speak a different language, with traders living in the now, thinking only of the implications of possible events such as those we have seen with Cyprus or the fiscal cliff, where the risk management view was much less conditioned and more historical. Steven re-emphasised the earlier point that risk management at this micro trading level was fine but this was not what caused events such as the collapse of MF Global.

Rational argument isn't communication - Santa said that most risk managers come from a quant (physics, maths, engineering) background and like structured arguments based upon well understood rational foundations. He said that this way of thinking was alien to many traders and as such it was a communication challenge for risk managers to explain things in a way that traders would actually put some time into considering. On the modelling side of things, Santa said that sometimes traders dismissed models as being "too quant" and sometimes traders followed models all too blindly without questioning or understanding the simplifying assumptions they are based on. Santa summarised by saying that risk management needs to be intuitive for traders and not just academically based. Mark added that a quantitative focus can sometimes become too narrow (modeler's manifesto anyone?) and made the very profound point that unfortunately precision often wins over relevance in the creation and use of many models. Steven added that traders often deal in absolutes, such as knowing the spread between two bonds to the nearest basis point, whereas a risk manager approaching them with a VaR number really means an estimate that should be thought of as lying within a range of values. This is alien to the way traders think and hence harder to explain.

Unanticipated Risk - An audience member asked whether risk management should focus mainly on unanticipated risks rather than "normal" risks. Elliot said that in his trading he was always thinking and checking whether the markets were changing or continuing with their recent near-term behaviour patterns. Steven said that history was useful to risk management when markets were "normal", but in times of regime shifts this was not the case, and cited the example of the change in markets when Mario Draghi announced that the ECB would stand behind the Euro and its member nations.

Risky Achievements - Henrik closed the panel by asking each member what they thought was their own greatest achievement in risk management. Elliot cited a time when he identified that a particular hedge fund had a relatively inconspicuous position/trade that he identified as potentially extremely dangerous, and was proved correct when the fund closed down because of it. Steven said he was proud of some good work he and his team did on stress testing involving Greek bonds and the Eurozone. Santa said that some of the work he had done on portfolio "risk overlays" was good. Mark ended the panel by saying that he thought his biggest achievement was when the traders and portfolio managers started to come to the risk management department to ask opinions before placing key trades. Henrik and the audience thanked the panel for their input and time.

An Insured View - After the panel closed I spoke with an actuary who said that he had greatly enjoyed the panel discussions but was surprised that, when talking of how best to support the risk management function in being independent and giving "bad" news to the business, the role of auditors was not mentioned. He said he felt that auditors were a key support to insurers in ensuring any issues were allowed to come to light. So food for thought there as to whether financial markets can learn from other industry sectors.

Summary - great evening of discussion, only downside being the absence of wine once the panel had closed!

 


10 January 2013

Sovereign Credit Risk - Contingent Claims Analysis

Went along to a Quafafew event on Tuesday this week, mainly to hear Dan diBartolomeo of Northfield speak. I first heard Dan speak over in London a few years back at an event on quantified news sentiment, whereas on Tuesday he was giving a talk on applying Merton-like contingent claims analysis models to sovereign risk modelling.

I have always enjoyed (is that the right word?) Contingent Claims Analysis modelling of corporates, and Dan did an interesting talk in extending this methodology to look at sovereigns and the various contingent claims between sovereigns, banks and the "real" economy. I particularly like the concept that one of the main "assets" governments have is the ability to print money. In one of the concluding remarks, Dan said that it was clear to him what the US government was doing in effectively printing money, since local bond holders are effectively insulated (given they have US assets) from the effects of domestic inflation, whereas foreign bond holders are not. Anyway it was a good presentation by an entertaining and knowledgeable speaker. You can download Dan's presentation by clicking here and it is worth a look for a different view on sovereign risk modelling.

28 November 2012

PRMIA on Basel III, Volcker and the Fed

Just wanted to start this post with a quick best wishes to all affected by Hurricane Sandy in the New York area. Nature is an awesomely powerful thing and amply demonstrated that it is always to be respected as a "risk".

Good event on regulatory progress organised by PRMIA and hosted by Credit Suisse last night. Dan Rodriguez introduced the speakers and Michael Gibson of the Fed began with his assessment of what he thinks regulators have learned from the crisis. Mike said that regulators had not paid enough attention to the following factors:

  • Capital
  • Liquidity
  • Resolvability (managing the failure of a financial institution without triggering systemic risk) 

Capital - Mike said that regulators had addressed the quality and quantity of capital held by banks. With respect to Basel III, Mike said that the Fed had received around 2,500 comments that they were currently reviewing. In relation to supervision, he suggested that stress testing by the banks, the requirement for capital planning from banks and the independent stress tests undertaken by the regulators had turned the capital process into much more of a forward-looking exercise than it had been pre-crisis. The ability of regulators to limit dividend payments and request capital changes had added some "teeth" to this forward-looking approach. Mike said that the regulators are getting more information which is allowing them to look more horizontally across different financial institutions to compare and contrast business practices, risks and capital adequacy. He thought that disclosure to the public of stress testing results and other findings was also a healthy thing for the industry, prompting wider debate and discussion.

Liquidity - Mike said that liquidity stress testing was an improvement over what had gone before (which was not much). He added that the Basel Committee was working on a quantitative liquidity ratio and that in general regulators were receiving and understanding much more data from the banks around liquidity.

Resolvability - Mike said in addition to resolution plans (aka "living wills") being required by Dodd-Frank in the US, the Fed was working with other regulators internationally on resolvability.

There then followed a Q&A session involving the panelists and the audience:

Basel III Implementation Timeline - Dan asked Mike about the 2,500 comments the Fed had received on Basel III and when the Fed would have dealt with them, particularly given that compliance with Basel III for US banks had been delayed beyond Jan 1 2013. Dan additionally asked Mike whether implementing Basel III now was a competitive advantage or disadvantage for a bank.

Mike responded that the Fed had extended its review period from 90 days to 135 days, which was an unusual occurrence. He said that as yet the Fed had no new target date for implementation.

Brian of AIG on Basel III and Regulation -  Dan asked Brian Peters of AIG what his thoughts were on Basel III. Brian was an entertaining speaker and responded firstly that AIG was not a bank, it was an insurer and that regulators need to recognise this. He said regulators need to think of the whole financial markets and how they want them to look in the future. Put another way, he implied that looking at capital, liquidity and resolvability in isolation was fine at one level, but these things had much wider implications and without taking that view then there would be problems. 

Brian said he thinks of Basel III as a hammer, and that when people use a hammer everything starts to look like a "nail". He said that insurers write 50 year-long liabilities, and as a result he needs long term investments to cover these obligations. He added that the liquidity profile of insurers was different to banks, with life policies having exposures to interest rates more like bank deposits. He said that AIG was mostly dealing with publicly traded securities (I guess now AIG FP is no longer dominant?). Resolvability was a different process for insurers, with regulators forcing troubled insurers to limit dividends and build up cash reserves.

Brian's big concern for the regulators was that in his view they need to look at the whole financial system and what future they want for it, rather than dealing with one set of players and regulations in isolation. Seems Brian shares some similar concerns to Pierre Guilleman on applying banking regulation to the insurance industry, combined with the unintended consequences of current regulation on the future of the whole of financial markets (maybe the talk on diversity of approach is a good read on this, or maybe more recently "Regulation Increases Risk" for a more quantitative approach).

Steve of Credit Suisse on Basel III - Dan asked Steven Haratunian whether implementing Basel III was a competitive advantage or disadvantage for Credit Suisse. Steve said that regardless of competitive advantage, as a Swiss bank Credit Suisse had no choice in complying with Basel III by Jan 1 2013, that Credit Suisse had started its preparations in 2011 and had been Basel 2.5 compliant since Jan 1 2012. He said that Basel III compliance had effectively doubled their capital requirements, and had prompted a strategic review of all business activities within the investment banking arm.

This review had caused a reassessment of the company's involvement in areas such as fixed income, and risk weighted assets had been reduced by over $100 billion. Steven explained how they had looked at each business activity and assessed whether it could achieve a 15% return on equity over a business cycle, plus be able to withstand CCAR stress testing during this time. He said that Credit Suisse had felt lonely in the US markets in that there were many occasions where deals were lost due directly to consideration of Basel III capital requirements. Credit Suisse felt less lonely now given how regulation is affecting other banks, and that for certain markets (notably mortgages and credit) the effects of Basel III were very harsh.

Volcker Rule and Dodd-Frank - Dan asked Mike where did the Volcker Rule fit within Dodd-Frank, and does it make us safer? Mike didn't have a great deal to say on this, other than he thought it was all part and parcel of Congress's attempts to make the financial markets safer, that its implementation was being managed/discussed across an inter-agency group including the Fed, SEC and CFTC. Brian said that Dodd-Frank did not have a great deal of impact for insurers, the only real effects being some on swap providers to insurers. 

Steve said that many of the aspects or "spirit" of Volcker and Dodd-Frank had been internalised by the banks and were progressing despite Dodd-Frank not being finalised. He said that in particular the lack of certainty around extraterritoriality and margining in derivatives was not helpful. Mike added that in terms of progressing through Dodd-Frank, his estimate was that the Fed had one third of it finished, one third of the rules proposed, and one third not started or in very early stages. So still some work to be done.

Living Wills - Brian at this point referred to a recent speech by William C. Dudley of the Fed with the title "Solving the Too Big to Fail Problem" (haven't looked at this yet, but will). Mike said that the Fed was still learning in relation to "Living Wills" and eventually it will get down to a level of being very company specific. Brian asked whether this meant that "Living Wills" would be very specific to each company and not a general rule to be applied to all. Mike said it was too early to tell.

Extraterritoriality - On extraterritoriality Steve said that Credit Suisse was having to look at its subsidiaries globally more as standalone companies when dealing with regulators and capital requirements, which will greatly increase capital requirements if the portfolio effect of being a global company is not considered by regulators. Dan mentioned a forthcoming speech to be made by Dan Tarullo of the Fed, and mentioned how the Fed was looking at treating foreign subsidiaries operating in the US as bank holding companies not global subsidiaries, hence again causing problems by ignoring portfolio effect. Mike said that the regulators were working on this issue, and that unsurprisingly he couldn't comment on the speech Dan Tarullo had yet to make.

The Future Shape of the Markets - Brian brought up an interesting question for Mike in asking how the regulators wanted to see financial markets develop and operate in the future? Brian thought that current regulation was being implemented as almost the "last war" against financial markets without a forward looking view. He said that historically he could see Basel 1 being prompted by addressing some of the issues caused by Japanese banks, he saw Basel II addressing credit risk but what will the effects of Basel III ultimately be? 

This prompted an interesting response from Mike, in that he said that the Fed is not shaping markets and is dealing only with current rules and risks. He added that private enterprise would shape future markets. (difficult to see how that argument stacks up, regulation implemented now is surely not independent of private sector reaction/exploitation of it) Steve added that Basel III had already had effects, with Credit Suisse already reducing its activity in mortgage and fixed income markets. Steve said that non-banking organisations were now involved in these markets and that regulators have to be aware of these changes or face further problems. 

Did Regulators Fail to Enforce Existing US Regulation - one audience participant was strongly of the opinion that Basel III is not needed, that there was enough regulation in place to limit the crisis and that the main failing of the regulators was that they did not implement what was already there to be used. Mike said he thought that the regulators did have lessons to learn and that some of the regulation then in place needed reviewing.

Keep it Simple - another audience member asked about the benefits of simple regulation of simpler markets and mentioned an article by Andrew Haldane of the Bank of England on "The Dog and the Frisbee". Mike didn't have much to add on this other than saying it was a work in progress. 

Brian thought that the central failure behind the crisis was the mis-rating of credit instruments, with AAA products attracting a 4bp capital charge instead of a more realistic 3%.

Regulation's Effects on Market Pricing - Steve was the first to respond on this, pointing to areas such as CMBS and credit markets as being the best performing areas that also have the lower capital risk weights. Dan said he felt that equity markets had not fully adjusted yet, and ironically that financial equities had the highest risk weights. Combined with anticipated rises in tax, high risk weightings were taking capital out of the risk bearing/wealth generating parts of the economy and into low weighted instruments like US treasuries. Dan wondered whether regulation was one of the key dampening factors behind why the current record stimulus was not accelerating the economy in the US more quickly.

Derivatives Clearers and Clearing - this audience question asked how the regulators were dealing with the desire to encourage clearing of derivative trades whilst at the same time not incentivising the banks to set themselves up as clearers. Mike said that there was an international effort to look at this.

What Happens When the Stimulus Goes - an audience member asked what the panel thought would happen once the stimulus was removed from the markets. The panelists thought this was more of an economics question. However Dan said that the regulators were more sensitive to the markets and market participants when considering new stimulus measures, and cited problems in the fall of 2011 caused by Fed actions in the market crushing mortgage spreads. Brian said insurers need yield so the stimulus was obviously having an impact. Dan mentioned that given the low risk weighting of US Treasuries then everyone was holding them, and so the impact of a jump in rates would hurt many if done without preparation.

Wine Shortage and Summary - Just had to mention that there was no wine made available at the networking session afterwards. A sign of austere times or simply that it was too early in the week? Anyway it was a great discussion and raised some good points. In summary, all I hear still supports the premise that the "Law of Unintended Consequences" is ever-present, ever-powerful and looming over the next few years. Hearing regulators say that they are dealing with current risks only and are not shaping the future of financial markets smacks of either delusion or obfuscation to me. 


16 October 2012

The Missing Data Gap

Getting to the heart of "Data Management for Risk", PRMIA held an event entitled "Missing Data for Risk Management Stress Testing" at Bloomberg's New York HQ last night. For those of you who are unfamiliar with the topic of "Data Management for Risk", then the following diagram may help to further explain how the topic is to do with all the data sets feeding the VaR and scenario engines.

Data-Flow-for-Risk-Engines
I have a vested interest in saying this (and please forgive the product placement in the diagram above, but hey this is what we do...), but the topic of data management for risk seems to fall into a functionality gap between: i) the risk system vendors who typically seem to assume that the world of data is perfect and that the topic is too low level to concern them and ii) the traditional data management vendors who seem to regard things like correlations, curves, spreads, implied volatilities and model parameters as too business domain focussed (see previous post on this topic). As a result, the risk manager is typically left with ad-hoc tools like spreadsheets and other analytical packages to perform data validation and filling of any missing data found. These ad-hoc tools are fine until the data universe grows larger, leading to the regulators becoming concerned about just how much data is being managed "out of system" (see past post for some previous thoughts on spreadsheets).

The Crisis and Data Issues. Anyway, enough background and on to some of the issues raised at the event. Navin Sharma of Western Asset Management started the evening by saying that pre-crisis people had a false sense of security around Value at Risk, and that the crisis showed that data is not reliably smooth in nature. Post-crisis, questions obviously arise around how much data to use, how far back to go and whether you include or exclude extreme periods like the crisis. Navin also suggested that the boards of many financial institutions were now much more open to reviewing scenarios put forward by the risk management function, whereas pre-crisis their attention span was much more limited.

Presentation. Don Wesnofske did a great presentation on the main issues around data and data governance in risk (which I am hoping to link to here shortly...)

Issues with Sourcing Data for Risk and Regulation. Adam Litke of Bloomberg asked the panel what new data sourcing challenges were resulting from the current raft of regulation being implemented. Barry Schachter cited a number of Basel-related examples. He said that the costs of rolling up loss data across all operations were prohibitive, and hence there were data truncation issues to be faced when assessing operational risk. Barry mentioned that liquidity calculations were new and presenting data challenges. Non-centrally cleared OTC derivatives also presented data challenges, with initial margin calculations based on stressed VaR. Whilst on the subject of stressed VaR, Barry said that there were a number of missing data challenges, including the challenge of obtaining past histories and of modelling current instruments that did not exist in past stress periods. He said that it was telling on this subject that the Fed had decided to exclude tier 2 banks from stressed VaR calculations on the basis that they did not think these institutions were in a position to be able to calculate these numbers given the data and systems that they had in place.

Barry also mentioned the challenges of Solvency II for insurers (and their asset managers) and said that this was a huge exercise in data collection. He said that there were obvious difficulties in modelling hedge fund and private equity investments, and that the regulation penalised the use of proxy instruments where there was limited "see-through" to the underlying investments. Moving on to UCITS IV, Barry said that the regulation required VaR calculations to be regularly reviewed on an ongoing basis, and he pointed out one issue with much of the current regulation in that it uses ambiguous terms such as models of "high accuracy" (I guess the point being that accuracy is always arguable/subjective for an illiquid security).

Sandhya Persad of Bloomberg said that there were many practical issues to consider, such as exchanges that close at different times and the resultant misalignment of closing data, problems dealing with holiday data across different exchanges and countries, and sourcing of factor data for risk models from analysts. Navin expanded more on his theme of which periods of data to use. Don took a different tack, and emphasised the importance of getting the fundamental data of client-contract-product in place, and suggested that this was still a big challenge at many institutions. Adam closed the question by pointing out the data issues in everyday mortgage insurance as an example of how prevalent data problems are.

What Missing Data Techniques Are There? Sandhya explained a few of the issues she and her team face working at Bloomberg in making decisions about what data to fill. She mentioned the obvious issue of distance between missing data points and the preceding data used to fill them. Sandhya mentioned that one approach to missing data is to reduce factor weights down to zero for factors without data, but this gives rise to a data truncation issue. She said that there were a variety of statistical techniques that could be used; she mentioned adaptive learning techniques and then described some of the work that one of her colleagues had been doing on maximum-likelihood estimation, whereby in addition to achieving consistency with the covariance matrix of "near" neighbours, the estimation also had greater consistency with the historical behaviour of the factor or instrument over time.
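
As a much simpler sketch of the same flavour of idea (a plain least-squares fill against correlated "neighbour" series on synthetic data, not Bloomberg's maximum-likelihood methodology), missing returns in one factor can be estimated from factors that historically co-move with it:

```python
import numpy as np

def fill_missing_returns(target: np.ndarray, neighbours: np.ndarray) -> np.ndarray:
    """Fill NaNs in `target` using a least-squares fit on correlated neighbour series.

    target: shape (T,) return series with NaNs where data is missing
    neighbours: shape (T, K) fully-populated return series of "near" factors
    """
    filled = target.copy()
    observed = ~np.isnan(target)

    # Fit target ~ neighbours on the dates where the target is observed
    X = np.column_stack([np.ones(observed.sum()), neighbours[observed]])
    beta, *_ = np.linalg.lstsq(X, target[observed], rcond=None)

    # Predict the missing dates from the neighbours
    missing = ~observed
    X_miss = np.column_stack([np.ones(missing.sum()), neighbours[missing]])
    filled[missing] = X_miss @ beta
    return filled

# Tiny illustrative example on synthetic returns
rng = np.random.default_rng(1)
neigh = rng.normal(0, 0.01, size=(250, 2))
true = 0.6 * neigh[:, 0] + 0.3 * neigh[:, 1] + rng.normal(0, 0.002, 250)
target = true.copy()
target[::25] = np.nan                      # knock out some observations
print(np.nanmax(np.abs(fill_missing_returns(target, neigh)[::25] - true[::25])))
```

The more sophisticated approaches described on the night additionally constrain the fill to respect the factor's own historical behaviour over time, not just its cross-sectional neighbours.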

Navin commented that fixed income markets were not as easy to deal with as equity markets in terms of data, and that at sub-investment grade there is very little data available. He said that heuristic models where often needed, and suggested that there was a need for "best practice" to be established for fixed income, particularly in light of guidelines from regulators that are at best ambiguous.

I think Barry then made some great comments about data and data quality in saying that risk managers need to understand more about the effects (or lack of) that input data has on the headline reports produced. The reason I say great is that I think there is often a disconnect or lack of knowledge around the effects that input data quality can have on the output numbers produced. Whilst regulators increasingly want data "drill-down" and justfication on any data used to calculate risk, it is still worth understanding more about whether output results are greatly sensitive to the input numbers, or whether maybe related aspects such as data consistency ought to have more emphasis than say absolute price accuracy. For example, data quality was being discussed at a recent market data conference I attended and only about 25% of the audience said that they had ever investigated the quality of the data they use. Barry also suggested that you need to understand to what purpose the numbers are being used and what effect the numbers had on the decisions you take. I think here the distinction was around usage in risk where changes/deltas might be of more important, whereas in calculating valuations or returns then price accuracy might receieve more emphasis. 

How Extensive is the Problem? General consensus from the panel was that the issues importance needed to be understood more (I guess my experience is that the regulators can make data quality important for a bank if they say that input data issues are the main reason for blocking approval of an internal model for regulatory capital calculations). Don said that any risk manager needed to be able to justify why particular data points were used and there was further criticism from the panel around regulators asking for high quality without specifying what this means or what needs to be done.

Summary - My main conclusions:

  • Risk managers should know more of how and in what ways input data quality affects output reports
  • Be aware of how your approach to data can affect the decisions you take
  • Be aware of the context of how the data is used
  • Regulators set the "high quality" agenda for data but don't specify what "high quality" actually is
  • Risk managers should not simply accept regulatory definitions of data quality and should join in the debate

Great drinks and food afterwards (thanks Bloomberg!) and a good evening was had by all, with a topic that needs further discussion and development.

 

 

11 October 2012

Regulation Increases Risk

"Any Regulation of Risk Increases Risk" is an interesting paper illustrating quantitatively what a lot of people already think qualitatively (see past post for example), which is that regulation nearly always falls fowl of the law of intended consequences. Through the use of regulatory driven capital charge calculations, banks are biassed towards investing in a limited and hence overly concentrated set of assets that at the time of investment exhibit abnormally low levels of volatility. Thanks to PRMIA NYC for suggesting this paper. 

03 October 2012

The Financial Regulatory Tide: In or Out?

If you have ever wandered around the financial district in New York, then you may not have noticed the Museum of American Finance on the corner of Wall and William St. I tend to find there are lots of things I don't notice in New York, probably due to the fact that I am still doing a passable impression of a tourist and find myself looking ever upwards at the skyscrapers rather than at anything at ground level. Anyway MoAF is worth a look-in and having recently become a member (thanks Cognito Media!) I went along to one of their events last night on regulation.Richard Sylla was the moderator for the evening, with support from Hugh Rockoff, Eugene N. White and Charles Geisst.

Richard Sylla on Fractional Reserve Banking and Regulation

Richard started the evening by explaining some basics of bank balance sheets as a means for explaining why he feels banking needs regulation. He showed a simplified and conservative balance sheet for an example bank:

Liabilities

  • Deposits 85% (from the likes of you or I)
  • Capital 15% (shareholders including surpluses)

Assets

  • Earning Assets 80% (loans and investments)
  • Reserves 20% (cash and deposits at other banks/central banks)

Richard explained that the main point to note from the balance sheet was that the reserves did not match the depositors and hence there is not enough money to repay all the depositors if they asked for their money back all at once. Richard's example was a form of Fractional Reserve Banking and he explained that there were two main reasons why banking needs regulation. The first was the incentive for banks to reduce their reserves to increase profits (increasing risk re: depositors) and the second was to keep capital levels low in order to increase earnings per share.

He then went on to illustrate how at the time of the last crisis Fannie Mae and Freddie Mac had earning assets of 100%, reserves 0%, deposits of 96% and capital of 4%. Lehman and Bear Stearns both had zero reserves and capital of only 3%. He then went on to list a large number of well known financial institutions and showed how the equity of many was simply wiped out given falls in asset valuations, the lack of reserves and the very small levels of equity maintained.

Hugh Rockoff on Adam Smith and Banking Regulation

Hugh is apparently a big fan of free market economics and of Adam Smith in particular. Much as Smith is for the "Invisible Hand" of the free market and against regulation, Hugh was at pains to point out that even Smith thought of banking being a special case in need of regulation and referred to banking operations as "a sort of waggon-way through the air".

Apparently Smith lived through a banking crisis in 1772 involving the Ayr Bank - I think Hugh had misspelt this as "Air" which I not sure whether it was deliberate but made for some reasonable humour about the value of the notes issued by the bank. Apparently this was an international crisis involving many of the then major powers, was based on stock market and property speculation and indirectly lead to the Boston Tea Party so I guess many Americans should pay their respects to this failed bank that became a catalyst to the formation of their country. A key point to note was that the shareholders of the Ayr Bank were subject to unlimited liability and had to pay all obligations owing...not sure how that would go down today in our more enlightened (?) times but more of that later.

Hugh described how Smith thought there were many things that banks should not be allowed to do including investing in real-estate (!) and prohibitions on the "option" to repay monetary notes. Smith also suggested that the Government should set maximum interest rates. So for a free market thinker, Smith had some surprising ideas when it came to banking. Hugh also pointed out that another great free-marketeer, Milton Freedman, was also in favour of banking regulation and favoured both deposit insurance and 100% reserve banking

Eugene White on Regulatory History

At a guess I would say that Eugene is a big fan of the quote from Mark Twain that "History does not repeat itself, but it does rhyme". Eugene took us briefly through major financial regulations in American history such as the National Banking Act of 1864, Federal Reserve Act of 1913, The "New Deal" of 1932 and others. He notably had a question mark around whether Dodd-Frank was going to be a major milestone in regulatory history, as in his opinion Dodd-Frank treats the symptons and not the causes of the last financial crisis. Eugene spent some time explaining the cycle of regulation where governments go through stages of:

-> Regulation ->
-> Problems caused by Regulation->
-> De-regulation ->
-> Financial Crisis ->
-> back to Regulation ->

Charles Geisst on Dodd-Frank and the Volker Rule

Charles started by saying that he thought Dodd-Frank, and in particular the Volker Rule, might well still be being debated three years hence. As others have done, he contrasted the 2,300 pages of Dodd-Frank with the simplicity of the 72 pages of the Glass-Steagall Act. He believes that the Volker Rule is Glass-Steagall by another name, and believes that Wall St has only recently realised this is the case and has begun the big push back against it. 

He left the audience with the sobering thought that he thinks another financial crisis is needed in order to cut down Dodd-Frank from 2300 pages of instructions for regulators to put regulations in place to around 150 pages of meaningful descriptions of the kinds of things that banks can and cannot do. 

Audience Questions

Rules vs. Principals - One audience member wondered if the panel thought it better to regulate in terms of feduciary duties of the participants rather than in detailed rules that can be "worked around". Charles respond that he thought feduciary duties were better, and contrasted the strictness with which banking fraud has been treated in the USA with the relative lack of punishment and sentencing in the securities industry. Eugene added that the "New Deal" of 1932 took away limited liabiltiy for shareholders of banks, and with it the incentives for shareholders to monitor the risks being taken by the banks they own.

Basel Regulations - Another audience member wanted panel feedback on Basel. In summary the panel said that the Basel Committee got it wrong in thinking it knew for certain how risky certainy asset classes were for example thinking that a corporate bond from IBM was more risky than say an MBS or government debt.

Do Regulators deal with the Real Issues? - Charles again brought this question back to the desire for simplicity and clarity, something that is not found in Dodd-Frank in his view. Hugh mentioned that the USA has specific problems with simply the number of regulatory bodies, and contrasted this with the single regulator in Canada. He said he thought competition was good for businesses but bad for regulators.

Eugene and Charles put an interesting historical perspective on this question, in that it is more often the case that government and the finance work together in composing legislation and regulation. Eugene gave the example that in the financial crisis of the early 30s, banks that had combined both retail and investment banking operations had faired quite well. So why did Glass-Steagal come about? Apparently Senator Steagal wanted deposit-insurance to help the myriad number of small banks back home, and Senator Glass simply wanted investment banks and retail banks to be separated, so a deal was done. I found this surprising (maybe I shouldn't be) but G-S is put forward as good regulation yet it seems it was not treating the observed symptoms of the crisis being dealt with.

How are the regulators dealing with Money Market Funds? - Here the panel said this was a classic example of the industry fiighting the SEC becuase the proposed regulation would reduce the return on their operations. Eugene explained how MMFs resulted from the savings and loans industry complaining about depositors investing in T-Bills. So the government response was to increase T-Bill denomination from $1,000 to $10,000 to limit who could invest, but then this was circumvented by the idea of setting up funds to invest in these larger denomination assets. Charles added that he thought the next crisis would come from the Shadow Banking system and that a more balanced approach needed to be taken to regulate across both systems. Hugh added that Dodd-Frank thinks it can identify systematically important institutions and it would be his bet that the next crisis starts with an organisation that is below the radar and not on this list. The panel concluded with a brief discussion of pay and remuneration and said that this was a major problem that needed better solutions.

 

 



30 August 2012

Reverse Stress Testing at Quafafew

Just back from a good vacation (London Olympics followed by a sunny week in Portugal - hope your summer has gone well too) and enjoyed a great evening at a Quafafew event on Tuesday evening, entitled "Reverse Stress Testing & Roundtable on Managing Hedge Fund Risk".

Reverse Stress Testing

The first part of the evening was a really good presentation by Daniel Satchkov of Rixtrema on reverse stress testing. Daniel started the evening by stating his opinion that risk managers should not consider their role as one of trying to predict the future, but rather one more reminiscent of "car crash testing", where the role of the tester is one of assessing, managing and improving the response of a car to various "impacts", without needing to understand the exact context of any specific crash such as "Who was driving?", "Where did the accident take place?" or "Whose fault was it?". (I guess the historic context is always interesting, but will be no guide to where, when and how the next accident takes place). 

Daniel spent some of his presentation discussing the importance of paradigms (aka models) to risk management, which in many ways echos many of themes from the modeller's manifesto. Daniel emphasised the importance of imagination in risk management, and gave a quick story about a German professor of mathematics who when asked the whereabouts of one of his new students replied that "he didn't have enough imagination so he has gone off to become a poet".

In terms of paradigms and how to use them, he gave the example of Brownian motion and described how the probability of all the air in the room moving to just one corner was effectively zero (as evidenced by the lack of oxygen cylinders brought along by the audience). However such extremes were not unusual in market prices, so he noted how Black-Scholes was evidently the wrong model, but when combined with volatility surfaces the model was able to give the right results i.e. "the wrong number in the wrong formula to get the right price." His point here was that the wrong model is ok so long as you aware of how it is wrong and what its limatations are (might be worth checking out this post containing some background by Dr Yuval Millo about the evolution of the options market). 

Daniel said that he disagreed with the premise by Taleb that the range of outcomes was infinite and that as a result all risk managers should just give up and buy and a lottery ticket, however he had some sympathies with Taleb over the use of stable correlations within risk management. His illustration was once again entertaining in quoting a story where a doctor asks a nurse what the temperature is of the patients at a Russian hospital, only to be told that they were all "normal, on average" which obviously is not the most useful medical information ever provided. Daniel emphasised that contrary to what you often read correlations do not always move to one in a crisis, but there are often similarities from one crisis to the next (maybe history not repeating itself but more rhyming instead). He said that accuracy was not really valid or possible in risk management, and that the focus should be on relative movements and relative importance of the different factors assessed in risk.

Coming back to the core theme of reverse stress testing, then Daniel presented a method by which from having categorised certain types of "impacts" a level of loss could be specified and the model would produce a set of scenarios that produce the loss level entered. Daniel said that he had designed his method with a view to producing sets of scenarios that were:

  • likely
  • different
  • not missing any key dangers

He showed some of the result sets from his work which illustrated that not all scenarios were "obvious". He was also critical of addressing key risk factors separately, since hedges against different factors would be likely to work against each other in times of crisis and hedging is always costly. I was impressed by his presentation (both in content and in style) and if the method he described provides a reliable framework for generating a useful range of possible scenarios for a given loss level, then it sounds to me like a very useful tool to add to those available to any risk manager.

Managing Hedge Fund Risk

The second part of the evening involved Herb Blank of S-Network (and Quafew) asking a few questions to Raphael Douady, of Riskdata and Barry Schachter of Woodbine Capital. Raphael was an interesting and funny member of the audience at the Dragon Kings event, asking plenty of challenging questions and the entertainment continued yesterday evening. Herb asked how VaR should be used at hedge funds, to which Raphael said that if he calculated a VaR of 2 and we lost 2.5, he would have been doing his job. If the VaR was 2 and the loss was 10, he would say he was not doing his job. Barry said that he only uses VaR when he thinks it is useful, in particular when the assumptions underlying VaR are to some degree reflected in the stability of the market at the time it is used. 

Raphael then took us off on an interesting digression based on human perceptions of probability and statistical distributions. He told the audience that yesterday was his eldest daughter's birthday and what he wanted was for the members of the audience to write down on paper what was a lower and upper bound of her age to encompass a 99th percentile. As background, Raphael looks like this. Raphael got the results and found that out of 28 entries, the range of ages provided by 16 members of the audience did not cover his daughters age. Of the 12 successful entries (her age was 25) six entries had 25 as the upper bound. Some of the entries said that she was between 18 and 21, which Raphael took to mean that some members of the audience thought that they knew her if they assigned a 99th percentile probability to their guess (they didn't). His point was that even for Quafafewers (or maybe Quafafewtoomuchers given the results...) then guessing probabilities and appropriate ranges of distributions was not a strong point for many of the human race.

Raphael then went on to illustrate his point above through saying that if you asked him whether he thought the Euro would collapse, then on balance he didn't think it was very likely that this will happen since he thinks that when forced Germany would ultimately come to the rescue. However if you were assessing the range of outcomes that might fit within the 99th percentile distribution of outcomes, then Raphael said that the collapse of the Euro should be included as a possible scenario but that this possibility was not currently being included in the scenarios used by the major financial institutions. Off on another (related) digression, Raphael said that he compared LTCM with having the best team of Formula 1 drivers in the world that given a F1 track would drive the fastest and win everything, but if forced to drive an F1 car on a very bumpy road this team would be crashing much more than most, regardless of their talent or the capabilities of their vehicle.

Barry concluded the evening by saying that he would speak first, otherwise he would not get chance to given Raphael's performance so far. Again it was a digression from hedge fund risk management, but he said that many have suggested that risk managers need to do more of what they were already doing (more scenarios, more analysis, more transparency etc). Barry suggested that maybe rather than just doing more he wondered whether the paradigm was wrong and risk managers should be thinking different rather than just more of the same. He gave one specific example of speaking to a structurer in a bank recently and asking given the higher hurdle rates for capital whether the structurer should consider investing in riskier products. The answer from the structurer was the bank was planning to meet about this later that day, so once again it would seem that what the regulators want to happen is not necessarily what they are going to get... 

 

 

25 April 2012

Dragon Kings, Black Swans and Bubbles

"Dragon Kings" is a new term to me, and the subject on Monday evening of a presentation by Prof. Didier Sornette at an event given by PRMIA. Didier has been working on the diagnosis on financial markets bubbles, something that has been of interest to a lot of people over the past few years (see earlier post on bubble indices from RiskMinds and a follow up here).

Didier started his presentation by talking about extreme events and how many have defined different epochs in human history. He placed a worrying question mark over the European Sovereign Debt Crisis as to its place in history, and showed a pair of particularly alarming graphs of the "Perpetual Money Machine" of financial markets. One chart was a plot of savings and rate of profit for US, EU and Japan with profit rising, savings falling from about 1980 onwards, and a similar diverging one of consumption rising and wages falling in the US since 1980. Didier puts this down to finance allowing this increasing debt to occur and to perpetuate the "virtual" growth of wealth.

Corn, Obesity and Antibiotics - He put up one fascinating slide relating to positive feedback in complex systems and effectively the law of unintended consequencies. After World War II, the US Government wanted to ensure the US food supply and subsidized the production of corn. This resulted in over supply over for humans -> so the excess corn was fed to cattle -> who can't digest starch easily -> who developed e-coli infections -> which prompted the use of antibiotics in cattle -> which prompted antibiotics as growth promoters for food animals -> which resulted in cheap meat -> leading to non-sustainable meat protein consumption and under-consumption of vegetable protein. Whilst that is a lot of things to pull together, ultimately Didier suggested that the simple decision to subsidise corn had led to the current epidemic in obesity and the losing battle against bacterial infections.

Power Laws - He then touched briefly upon Power Law Distributions, which are observed in many natural phenomena (city size, earthquakes etc) and seem to explain the peaked mean and long-tails of distributions of finance far better than the traditional Lognormal distribution of traditional economic theory. (I need to catch up on some Mandelbrot I think). He explained that whilst many observations (city size for instance) fitted a power law, that the where observations that did not fit this distribution at all (in the cities example, many capital cities are much, much larger than a power law predicts). Didier then moved on to describe Black Swans, characterised as unknown unknowable events, occurring exogenously ("wrath of god" type events) and with one unique investment strategy in going long put options.

Didier said that Dragon-Kings were not Black Swans, but the major crises we have observed are "endogenous" (i.e. come from inside the system), do not conform to a power law distribution and:

  • can be diagnosed in advanced
  • can be quantified
  • have (some) predictability

Diagnosing Bubbles - In terms of diagnosing Dragon Kings, Didier listed the following criteria that we should be aware of (later confirmed as a very useful and practical list by one of the risk managers in the panel):

  • Slower recovery from perturbations
  • Increasing (or decreasing) autocorrelation
  • Increasing (or decreasing) cross-correlation with external driving
  • Increasing variance
  • Flickering and stochastic resonance
  • Increased spatial coherence
  • Degree of endogeneity/reflexivity
  • Finite-time singularities

Didier finished his talk by describing the current work that he and ETH are doing with real and ever-larger datasets to test whether bubbles can be detected before they end, and whether the prediction of the timing of their end can be improved. So in summary, Didier's work on Dragon Kings involves the behaviour of complex systems, how the major events in these systems come from inside (e.g. the flash crash), how positive feedback and system self-configuration/organisation can produce statistical behaviour well beyond that predicted by power law distributions and certainly beyond that predicted by traditional equilibrium-based economic theory. Didier mentioned how the search for returns was producing more leverage and an ever more connected economy and financial markets system, and how this interconnectedness was unhealthy from a systemic risk point of view, particularly if overlayed by homogenous regulation forcing everyone towards the same investment and risk management approaches (see Riskminds post for some early concerns on this and more recent ideas from Baruch College)

Panel-Debate - The panel debate following was interesting. As mentioned, one of the risk managers confirmed the above statistical behaviours as useful in predicting that the markets were unstable, and that to detect such behaviours across many markets and asset classes was an early warning sign of potential crisis that could be acted upon. I thought a good point was made about the market post crash, in that the market's behaviour has changed now that many big risk takers were eliminated in the recent crash (backtesters beware!). It seems Bloomberg are also looking at some regime switching models in this area, so worth looking out for what they are up to. Another panelist was talking about the need to link the investigations across asset class and markets, and emphasised the role of leverage in crisis events. One of the quants on the panel put forward a good analogy for "endogenous" vs. "exogenous" impacts on systems (comparing Dragon King events to Black Swans), and I paraphrase this somewhat to add some drama to the end of this post, but here goes: "when a man is pushed off a cliff then how far he falls is not determined by the size of the push, it is determined by the size of the cliff he is standing on". 

 

 

18 January 2012

The financial crisis and Andrew Lo's reading list

I spotted this in the FT recently - for those of you diligent enough to want to read more about the possible causes and possible solutions to the (ongoing) financial crisis, then Andrew Lo may have saved us all a lot of time in his 21-book review of the financial crisis. Andrew reviews 10 books by academics, 10 by journalists and one by former Treasury Secretary Henry Paulson.

Andrew finds a wide range of opinions on the causes and solutions to the crisis, which I guess in part reflects that regardless of the economic/technical causes, human nature is both at the heart of the crisis and evidently also at the heart of its analysis. He regards the differences in opinion quite healthy in that they will be a catalyst for more research and investigation. I also like the way Andrew starts his review with a description of how people's view of the same events they have lived through can be entirely different, something that I have always found interesting (and difficult!).

A quote from Napolean (that I am in danger of over-using) seems appropriate to Andrew's review: "History is the version of past events that people have decided to agree upon" but maybe Churchill wins in this context with: "History will be kind to me for I intend to write it.". Maybe we should all get writing now before it is too late...

17 June 2011

Taleb and Model Fragility - NYU-Poly

I went along to spend a day in Brooklyn yesterday at NYU-Poly, now the engineering school of NYU containing the Department of Finance and Risk Engineering. The event was called the "The Post Crisis World of Finance" was sponsored by Capco.

First up was Nassim Taleb (he of Black Swan fame). His presentation was entitled "A Simple Heuristic to Assess Tail Exposure and Model Error". First time I had seen Nassim talk and like many of us he was an interesting mix of seeming nervousness and confidence whilst presenting. He started by saying that given the success and apparent accessibility to the public of his Black Swan book, he had a deficit to make up in unreadability in this presentation and his future books.

Nassim recommenced his on-going battle with proponents of Value at Risk (see earlier posts on VaR) and economists in general. He said that economics continues to be marred by the lack of any stochastic component within the models that most economists use and develop. He restated his view that economists change the world to fit their choice of model, rather than the other way round. He mentioned "The Bed of Procrustes" from Greek mythology in which a man who made his visitors fit his bed to perfection by either stretching them or cutting their limbs (good analogy but also good plug for his latest book too I guess)

He categorized the most common errors in economic models as follows:

  1. Linear risks/errors - these were rare but show themselves early in testing
  2. Missing variables - rare and usually gave rise to small effects (as an aside he mentioned that good models should not have too many variables)
  3. Missing 2nd order effects - very common, harder to detect and potentially very harmful

He gave a few real-life examples of 3 above such as a 10% increase in traffic on the roads could result in doubling journey times whilst a 10% reduction would deliver very little benefit. He targeted Heathrow airport in London, saying that landing there was an exercise in understanding a convex function in which you never arrive 2 hours early, but arriving 2 hours later than scheduled was relatively common.

He described the effects of convexity firstly in "English" (his words):

"Don't try to cross a river that is on average 4ft deep"

and secondly in "French" (again his words - maybe a dig at Anglo-Saxon mathematical comprehension or in praise of French mathematics/mathematicians? Probably both?):

"A convex function of an average is not the average of a convex function"

Nassim then progressed to show the fragility of VaR models and their sensitivity to estimates of volatility. He showed that a 10% estimate error in volatility could produce a massive error in VaR level calculated. His arguments here on model fragility reflected a lot of what he had proposed a while back on the conversion of debt to equity in order to reduce the fragility of the world's economy (see post).

His heuristic measure mentioned in the title was then described which is to peturb some input variable such as volatility by say 15%, 20% and 25%. If the 20% result is much worse than the average of the 15 and 25 ones then you have a fragile system and should be very wary of the results and conclusions you draw from your model. He acknowledged that this was only a heuristic but said that with complex systems/models a simple heuristic like this was both pragmatic and insightful. Overall he gave a very entertaining talk with something of practical value at the end.

31 March 2011

Investment risk not rewarded

Interesting article from the FT, Reward for risk seems to be a chimera, effectively saying that more risky (volatile) equities do not necessarily provide higher returns than less risky equities. I like the suggestion that the reason for this is that "hope springs eternal" and investors buy more volatile stocks (pushing up price) in the hope of higher returns. However, as yet another illustration of the law of unintended consequences, the article goes on to suggest that choosing a benchmark index to outperform and limitations on borrowing imposed by investment mandates may both be driving this effect, are interesting and challenging ideas for investment managers.

 

15 December 2010

2010 Risk in Review NY

I went along to a a Prmia event last night "2010 - Risk Year in Review". The event started with a somewhat overwhelming brain dump of economic and credit statistics from John Lonski, Chief Capital Markets Economist at Moody's Analytics. In summary he seems very bullish about corporate credit spreads tightening given the way in which corporate profit growth is surging ahead of debt growth. His main concern for the economy was maybe unsurprisingly the US housing market and whether this will bottom out and start to rise in 2011. Given fiscal imbalances and competition from emerging markets he did not think that inflation was a big risk despite activity such as QE2.

Robert Iommazzo of search firm Seba International did a fairly dry presentation on industry compensation for risk managers. Seba seem to getting around having had a big presence at Riskminds in Geneva last week. This section only livened up when the questions started after the presentation, and is probably worth noting that the UK FSA is being perceived as a "Big Brother" with its involvement in setting compensation policies in financial markets. Obviously the FSA is not heading back to the heady days of the 1970's where central government set industry pay rises (journalists please note this meant you back then!), but it is also obvious that such control over an individual's remuneration is something that goes totally contrary to an American way of thinking. UK Government needs to be mindful of this perception particularly if it leaves itself open to arbitrage on compensation policy from other financial centres.

Panel debate followed, involving Ashish Das of Moody's, Yury Dubrovsky of Lazard Asset Management, Jan H. Voigts of the NY Fed and Christopher Whalen of Institutional Risk Analytics. Main points:

  • Chris said that he was one who was predicting a further fall in the housing market next year, and he asked the audience that when they looked at economic statistics, credit spreads,the Vix, bond spreads, did anyone getting the feeling the things are "normal" yet? Using these numbers and plugging them into a model does any believe the results are stable and can be relied upon? The audience fundamentally seemed to agree with these "warning" questions.
  • Jan asked the audience to consider how believable is your data and to try to understand what data is critical for your business and that is imperative to create tools to manage this data appropriately. Jan said that the biggest challenge for financial institutions going forward is how to calibrate what rate/volume/type of business you can transact safely and that this needed a lot more consideration.  
  • Yury said that he finds that the risks present in 2008 are still around in 2010, but now with the addition of European sovereign credit problems and the raft of regulation heading towards the industry. To add to this pessimistic note, he also said that some of the interest in "hot" emerging markets such as the BRICs was resulting in investments in lower quality IPOs relative to previous years.
  • Ashish thought that systemic risk was going to become more important for the industry. With the setting up of the Office of Financial Research (OFR) next year, he suggested that the industry needed to take much more of a lead in sorting out its own house in advance of letting the regulators do so. On the subject of models, he said that models should supplement human judgement but not replace it, and mentioned the quote by George E. P. Box that "all models are wrong, but some are useful".
  • Chris suggested that the role of risk managers will become more like that of a credit collector, with more involvement in actually seeing what can be recovered once a default has occurred. He also suggested that the industry should create its own consensus-based ratings (supplemented by the existing CRAs) to get a more reliable view of credit.
  • Ashish echoed some of the speakers last week at Riskminds in saying that regulatory compliance is not risk management, and that practitioners should do more to guide the regulators.
  • On the subject of risk culture, Yury asked how many risk managers knew data, quant, markets and how to deal with the egos of traders and senior management. This last point seemed to be conceded by the audience as a major weakness of the risk management profession and goes back to whether a risk manager is willing to put his career on the line to go against accepted business strategy.
  • Chris added that having worked at several investment banks he had not yet experienced a risk manager attending a senior committee, let alone a risk manager speaking up against a senior trader. He talked of two business models "Paranoid and Nimble" and "Well Documented and Pedantic" with the second one being the only one possible in his view once a business gets to a certain size.
  • On the subject of Government Sponsored Enterprises (GSEs like Fannie Mae and Freddie Mac) Chris said that the role of these will be up for review by the end of 2011. He thinks that the banks will head back towards actually holding mortgages and loans and the GSEs will become more conduits rather than direct sources of finance. This was news to me, given that so far the GSEs have been notably left out of recent reviews of what went wrong with the recent crisis.

Panel was very good, all speakers very knowledgeable. "Regulation is not risk", "models are not perfect", "risk governance" and "take control of your data" were all themes that echoed last week's RiskMinds event, allbeit with more of an American rather than international viewpoint on the economy, regulation and markets.

23 November 2010

The current bad luck of the Irish

If like me you are puzzled as to:

  • Why the Irish need a financing package now when they don't need to borrow for at least another 6 months?
  • Why adding more debt on top of bad debt makes things better?
  • Why bondholders of failed banks don't get forceably converted to holding equity and original equity holders get nothing?

Then take a read of the this article from the FT. We live in interesting economic times.

14 October 2010

Dodd Frank Regulation - being seen to be doing something?

I went along to a Six Telekurs event "Securities Valuations: Is the Price Right?" last week - good event with some interesting speakers, most notably Paul Atkins of Patomak Partners to talk about the Dodd-Frank Wall Street Reform and Consumer Protection Act 2010. Paul is based out of Washington and was not very complimentary about what has been going on.

He started by saying that the Act was very large in size, with over 2319 pages (compared to SarbOx with only 60) and given this size he suggested that you could guess how many in Congress had actually read it. Background to the Act were:

  • "Political Tailwinds" such as:
    • New Democrat Government with tenuous majority
    • Ambitious legislative plans
    • Bleak economic back-drop
  • An angry populace:
    • TARP bailouts/Wall St bonuses
    • Recession and high unemployment
    • Perception that Govt. contributed to crisis
  • Aggressive case for new regulation based on:
    • Lack of confidence in current systems and regulation
    • "Too big to fail" demonstrating that regulators lack the toolsets necessary to deal with such events
    • High leverage across the financial system and the economy
    • Poor risk management by existing participants
    • Opaque shadow banking system and opaque derivatives markets

He summarised that Housing and the Credit Rating Agencies were the key fundamentals behind the financial crisis.

Paul said that with the new regulation had the following features:

  • The Act is a sweeping revision of financial regulation in the US
    • few dodged the regulatory changes (notably insurance managed to do this)
  • The Federal Reserve has emerged pre-eminent amongst all regulatory bodies in the US.
  • Significant discretion has been yielded to regulators to work out specifics
  • Sheer size and ambiguous wording of the Act exacerbates the uncertainty in the market and economy and will require further fixes over coming years
  • The Act does not reform Government Sponsored Enterprises (Fannie Mae, Freddie Mac)
  • Far from reducing/simplifying the number of agencies involved in regulation the Act eliminated 1 agency and created 13 more
  • Paul asked the question whether spreads and volatility will rise in the market due to new regulation (such as the Volcker rule) and whether ultimately this will trickle down to hinder or benefit SMEs.
  • The Act will likely result in regulatory arbitrage opportunities and Paul said this was not a good thing for the United States

Paul said that in his view Congress learned the wrong lessons from the crisis:

  • No reform of Fannie Mae and Freddie Mac
  • Government Housing Policy left unaddressed
  • Transparency still lacking despite efforts from FASB on fair value
  • International Policy Co-ordination is still an open question as to its extent
  • No reform of existing regulator structures
  • The crisis has resulted in payoffs to favoured groups (Unions, Trial Lawyers etc)

Paul talked about how hedge funds and private equity funds were going to experienced increased regulation with them having to register if they have over $100M assets under management and future implications for systemic risk provisions. He mentioned that Venture Capital investments had escaped being required to register if the lock-up period was over 2 years.

He briefly discussed the coming changes in OTC derivatives on centralised clearing, post trade reporting and new liability provisions. Paul was also concerned about certain SEC related issues such as "Whistleblower" provisions which contain a bounty programme of about 10-30% of any fine subsequently awarded against a financial institution. He re-iterated that it was not yet clear what all of the bodies involved in regulation would be doing, and at the same time as this was the case the very same bodies were also being given very strong powers such as that of legal subpoena.

Paul was a very knowledgeable speaker and had some good points to make. Listening to him speak it would seem from my perspective that the Act is a prime example of "being seen to be doing something" to address the crisis rather than something better structured, with all of "law of unintended consequencies" risks that such an initiative entails.

 

 

 

27 May 2010

Of Grasshoppers and Ants...

...not sure what Martin Wolf of the FT has been drinking or smoking recently, but he has certainly put together a very different way of explaining some of the economic inbalances faced by the world at the moment in his latest article.

Xenomorph: analytics and data management

About Xenomorph

Xenomorph is the leading provider of analytics and data management solutions to the financial markets. Risk, trading, quant research and IT staff use Xenomorph’s TimeScape analytics and data management solution at investment banks, hedge funds and asset management institutions across the world’s main financial centres.

@XenomorphNews



Blog powered by TypePad
Member since 02/2008