
24 November 2014

PRMIA Risk Year in Review 2014

PRMIA put on their Risk Year in Review event at the New York Life Insurance Company on Thursday. Some of the main points from the panel, starting with trade:

  • The world continues to polarize between "open" and "closed" societies with associated attitudes towards trade and international exposure.
  • US growth at around 3% is better than the rest of the world, but this progress is not yet being seen by or benefitting much of the population.
  • This is against an economic background of Japan, Europe and China all struggling to maintain "healthy" growth (if any growth at all).
  • Looking back at the financial crisis of 2008/9, it was the WTO rules in place that kept markets open and prevented isolationist and closed policies from really taking hold - although such populist, inward-looking policies are still a major issue and risk for the global economy today.
  • There have, however, been some optimistic examples of progress on world trade recently.
  • The US Government is divided and needs to get back to pragmatic decision making.
  • The Federal Reserve currently believes that external factors/the rest of the world are not major risks to growth in the US economy.

James Church of sponsor FINCAD then gave a brief presentation on their recent experience and a recent survey of their clients in the area of valuation and risk management in financial markets:

  • Risk management is now considered a source of competitive advantage by many institutions
  • 63% of survey respondents are currently involved in replacing risk systems
  • James gave the example of Alex Lurye saying risk is a differentiator
  • Aggregate view of risk is still difficult due to siloed systems (hello BCBS239)
  • Risk aggregation also needs consistency of modelling assumptions, data and analytics all together if you are to avoid adding apples and pears
  • Institutions now need more flexibility in building curves post-crisis with OIS/Libor discounting (see FINCAD white paper)
    • 70% of survey respondents are involved in changes to curve basis
  • Many new calculations to be considered in collateralization given the move to central clearing
  • 62% of survey respondents are investing in better risk management processes, so not just technology but people and process as well

James was followed by a discussion on market/risk events this year:

  • Predictions are hard, but 50 years ago Isaac Asimov made 10 predictions for 2014 and 8 of them have come true
  • Bonds and the Dollar are still up but yields are low - this is a result of the relatively poor performance of other currencies and the underlying strength of the US economy. The US is firmly post-crisis economically and markets are anticipating both oil independence and future interest rate movements.
  • Employment level movements are no longer a predictor of interest rate moves; the balance of payments now matters more
  • On October 15th there was a 40bp movement in yields in 3 hours (a 7 standard deviation move) - this was more about positioning/liquidity risk in the absence of news, and an illustration of how regulation has moved power from banks to hedge funds
  • Risk On/Off - trading correlation is very difficult - the oil price normally tracks demand, but there has been a 30% dive in price over the past 6 months - the correlation has changed
  • On the movie Interstellar: on one planet an astronaut sees a huge mountain, while another sees that it is a wave larger than anything seen before - it all depends on forming your own view of the same information, as to what you perceive or understand as risk

Some points on macroeconomics:

  • A modest slowdown this quarter
  • Unemployment to drop to 5.2% in 2015 from 5.8%
  • CS see the Fed hiking rates in mid-2015 followed by 3 further hikes
    • The market does not yet agree, seeing a move in Q3 2015
  • Downside risks are inflation, slow US growth and anaemic wage growth
  • Upside risks - the oil price boost to spending, reducing the cost of gas from 3.2% to 2.4% of disposable income

Time for some audience questions/discussions:

  • One audience member asked the panel for thoughts on the high price of US Treasuries
  • Quantitative Easing (QE) was (understandably) targeted as having distorting effects
  • Treasury yields have been a proxy for the risk free rate in the past, but the volatility in this rate due to QE has a profound effect on equity valuations
  • Replacing maturing bonds with lower yielding instruments is painful
  • The Fed does not want to appear to lose control of interest rates, nor does it want to kill the fixed income markets, so rate rises will be slow.
  • One of the panelists said that all this had a human dimension, not just a markets one: effectively non-existent interest rates alongside negative equity still present in Florida, no incentive to save so money heads into stocks (which is risky), and low interest rates being of little benefit to senior citizens.
  • Taper talk last year saw a massive sell-off of emerging market currencies - one problem in assessing this is defining which economies count as emerging markets - but the key point is that current account deficits/surpluses matter, something the US escapes as the issuer of the world's reserve currency but emerging markets do not.
  • The emerging market boom of the past was really a commodities boom; the US still leads the world's economies and current challenges may expose the limits of authoritarian capitalism

The discussion moved onto central clearing/collateral:

  • Interest rate assets for collateral purposes are currently expensive
  • Regulation may exacerbate volatility, with unintended consequences
  • $4.5T of collateral is currently set aside, set to rise to $12-13T
  • The risk is that sovereign nations will target the production of "AAA" securities for collateral use that are not truly AAA
  • Banks will not be the place for risk, the shadow banking system will
  • Futures markets may be under collateralized and a source of future risk

One audience member was interested in downside risks for the US and couldn't understand why anyone was pessimistic given the stock market performance and other measures. The panel put forward the following as possible reasons behind a potential slow down:

  • Income inequality means the benefits are not spread throughout the economy
  • Corporations are making more and more money but without a proportionate increase in jobs
  • Wages are flat and senior citizens are struggling
  • (The financial district is not representative of the rest of the economy in the US however surprising that may be to folks in Manhattan)
  • The rest of the US does not have jobs that make them think the future is going to get better

Other points:

  • Banks have badly underperformed the S&P
  • Regulation is a burden on the US economy that is holding US growth back
  • Republicans and Democrats need to co-operate much more
  • House prices need more oversight
  • There is currently $1.2T in student loans and students are not expecting to earn more than their parents
  • Top 10 oil producers are all pumping full out
    • The Saudis are refusing to cut production
    • Venezuela is funding its policies from oil
    • Russia is desperately generating dollars from oil
    • Will the US oil bonanza break OPEC - will they be able to co-ordinate effectively given their conflicting interests?

Summary - overall a good event with a fair amount of economics to sum up the risks for 2014 and on into 2015. Food and wine tolerably good afterwards too!

 

16 October 2014

TabbForum MarketTech 2014: Game of Smarts

A great afternoon event put on by TabbFORUM in New York yesterday with a number of panels and one-on-one interviews (see agenda). You can see some of what went on at the event via the hashtag #TabbTech or via the @XenomorphNews feed.

[Photo]

"Death of Legacy" Panel Discussion

05 August 2014

A-Team DMS Data Management Awards 2014

Very pleased to announce that we have been nominated again this year in the A-Team’s DMS Data Management Awards. The categories we’ve been selected for are: 

  • Best Sell-Side Enterprise Data Management Platform
  • Best Buy-Side EDM Platform
  • Best EDM Platform (Portfolio Pricing & Valuations)
  • Best Risk Data Aggregation Platform
  • Best Analytics Platform.

Last year we were delighted to win the Best Risk Data Management/Analytics Platform award – even more so as the awards are voted for by our clients and industry peers.

So if you would like to support us again this year the voting is open now:

http://referencedatareview.hs-sites.com/data-management-summit-awards-2014-survey

and runs through to the 26th September. The award winners will be announced at A-Team’s Data Management Summit, at the America Square Conference Centre in London on October 8th.

14 July 2014

NoSQL Document Database - Manhattan MarkLogic

A bit late in posting this up, but given I did something about RainStor I thought I should write up my attendance at a MarkLogic event day in downtown Manhattan a few weeks back - their NoSQL database is used to serve up content on the BBC website, if you wanted some context. They are unusual in the NoSQL “movement” in that they are a proprietary vendor in a space dominated by open source databases and the companies that offer support for them. The database they seem to compete with most in the NoSQL space is MongoDB, since both have origins as “document databases” (managing millions of documents is one of the most popular uses for big data technology at the moment, though not as publicized as more fashionable things like swallowing a Twitter feed for sentiment analysis, for example).

In order to cope with the workloads needing to be applied to data, MarkLogic argue that data has escaped from the data centre, in the sense of needing separate data warehouses and ETL processes aligned with each silo of the business. Their marketing message is that MarkLogic allows the data to come back into the data centre, given it can be a single platform where all data lives and all workloads are applied to it. As such, it is easier to apply proper data governance if the data is in one place rather than distributed across different databases, systems and tools.

Apparently MarkLogic started out with the aims of offering enterprise search of corporate data content but has evolved much beyond just document management. Gary Bloom, their CEO, described the MarkLogic platform as the combination of:

• Database
• Search Engine
• Application Services

He said that the platform is not just the database but particularly search and database together, aligned with the aim of not just storing data and documents but with the aim of getting insights out of the data. Gary also mentioned the increasing importance of elastic compute and MarkLogic has been designed to offer this capability to spin up and down with usage, integrating with and using the latest in cloud, Hadoop and Intel processors.

Apparently one of the large European investment banks is trying to integrate all of their systems for post-trade analysis and regulatory reporting. The bank apparently tried doing this by adopting a standard relational data model but faced two problems: 1) the relational databases were not standard and 2) it was difficult to arrive at and manage an overarching relational schema. On the schema side of things, the main problem they were alluding to seemed to be one schema changing and having to propagate that change through the whole architecture. The bank now seems to be having more success since switching to MarkLogic for this post-trade analysis - from a later presentation it seems that things like trades are taken directly from the Enterprise Service Bus, saving the data in the message as-is (schema-less).

One thing that came up time and time again was their pitch that MarkLogic is “the only Enterprise NoSQL database”, with high availability, transactional support (ACID) and security built in. He criticized other NoSQL databases for offering “eventual consistency” and said that MarkLogic aspires to something better than that (to put it mildly). I thought it was interesting that over a lunch chat one of the MarkLogic guys said that "MongoDB does a lot of great pre-sales for MarkLogic", meaning I guess that MongoDB is the marketing "poster child" of NoSQL document databases so they get the early leads, but as the client widens the search they find that only MarkLogic is "enterprise" capable. You can bet that the MongoDB team disagree (and indeed they do...).

On the consistency side, Gary talked about “ObamaCare”, aka HealthCare.gov, which MarkLogic were involved in. First came some performance figures: they were handling 50,000 transactions/sec with 4-5ms response times for 150,000 concurrent users. This project suffered from a lot of technical problems which really came down to running the system on a fragile infrastructure with weaknesses in network, servers and storage. Gary said that the government technologists were expecting data consistency problems when things like the network went down, but the MarkLogic database is ACID and all that was needed was to restart the servers once the infrastructure was ready. Gary also mentioned that he spent 14 years working at Oracle (as a lot of the MarkLogic folks seem to have) but that it was not really until Oracle 7 that they could say they offered data consistency.

On security, again there was criticism of other NoSQL databases for offering access to either all of the data or none of it. The analogy used was one of going to an ATM and being offered access to everyone’s money and having to trust each client to only take their own. Continuing the NoSQL criticism, Gary said that he did not like the premise put around that “NoSQL is defined by Open Source” - his argument being that MarkLogic generates more revenue than all the other NoSQL databases on the market. Gary said that one client hosted a “lake of data” in Hadoop but found that, while Hadoop is a great distributed file system, it still needs a database to go with it.

Gary then talked about some of the features of MarkLogic 7, their current release. In particular, MarkLogic 7 offers scale-out elasticity but with full ACID support (achieving one is usually thought to preclude the other), high performance and a flexible schema-less architecture. Gary implied that the marketing emphasis had changed recently from the “big data” pitch of a few years back to include both unstructured and structured data within one platform, so dealing with heterogeneous data, which is a core capability of MarkLogic. Other features mentioned were support for XML and JSON, access through a REST API, and usage of MarkLogic as a semantic database (a triple store) with support for the semantic query language SPARQL. Gary mentioned that semantic technology was a big area of growth for them. He also mentioned support for tiered storage on HDFS.

The conversation then moved on to what’s next with version 8 of MarkLogic. The main theme for the next release is “Ease of Use”, with the following features:

• MarkLogic Developer – freely downloadable version
• MarkLogic Essential Enterprise – try it for 99c/hour on AWS
• MarkLogic Global Enterprise – 33% less (decided to spend less time on the sales cycle)
• Training for free – all classes sold out – instructor led online

Along this ease of use theme, MarkLogic acknowledged that using their systems needs to be easier and that in addition to XML/XQuery programming they will be adding native support for JavaScript, greatly expanding the number of people who could program with MarkLogic. In terms of storage formats, in addition to XML they will be adding full JSON support. On the semantics side they will offer full support for RDF, SPARQL 1.1 and inferencing. Bi-temporal support will also be added, with a view to answering the kind of regulatory-driven questions such as “what did they know and when did they know it?”.

Joe Pasqua, SVP of Product Strategy, then took over from Gary for a more technical introduction to the MarkLogic platform. He started by saying that MarkLogic is a schema-less database with a hierarchical, very document-centric data model, and can be used for both structured and unstructured data. Data is stored in compressed trees within the system. Joe then explained how the system is indexed, describing the “Universal Index” which records where to find the following kinds of data, much as in most good search engines:

• Words
• Phrases
• Stemmed words and phrasing
• Structure (this is indexed too as new documents come in)
• Words and phrases in the context of structure
• Values
• Collections
• Security Permissions

Joe also mentioned that a “range index” is used to speed up comparisons, apparently in a similar way to a column store. Geospatial indices are like 2D range indices for how near things are to a point. The system also supports semantic indices, indexing on triples of subject-predicate-object.
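To make the indexing ideas above a little more concrete, here is a minimal sketch in Python of a word-level inverted index plus a simple range index over a numeric field. This is purely illustrative of the general technique, not MarkLogic's actual implementation; the documents, field names and helper functions are all made up.

```python
from collections import defaultdict
from bisect import bisect_left

# Toy "documents": id -> text plus a numeric field
docs = {
    1: {"text": "credit default swap trade", "notional": 5_000_000},
    2: {"text": "interest rate swap trade", "notional": 12_000_000},
    3: {"text": "equity option trade", "notional": 750_000},
}

# Universal-index style word index: word -> set of document ids
word_index = defaultdict(set)
for doc_id, doc in docs.items():
    for word in doc["text"].split():
        word_index[word].add(doc_id)

# Range index: sorted (value, doc_id) pairs make comparisons cheap,
# similar in spirit to a column store
range_index = sorted((doc["notional"], doc_id) for doc_id, doc in docs.items())

def search_word(word):
    """Documents containing a given word."""
    return word_index.get(word, set())

def search_notional_at_least(threshold):
    """Documents whose notional is >= threshold, via binary search."""
    pos = bisect_left(range_index, (threshold, -1))
    return {doc_id for _, doc_id in range_index[pos:]}

print(search_word("swap"))                  # {1, 2}
print(search_notional_at_least(1_000_000))  # {1, 2}
```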

He showed how the system has failover replication within a database cluster for high availability, but also full replication for disaster recovery purposes. There were continual side references to Oracle as a “legacy database”.

On database consistency and the ACID capability, Joe talked about MVCC (Multi-Version Concurrency Control). Each “document” record in MarkLogic has a start and end time for how current it is, and these values are used when updating data to avoid any reduction in read availability. When a document is updated a copy of it is taken but kept hidden until ready - the existing document remains available until the update is ready, and then the “end time” on the old record is marked and the “start time” marked on the new record. So effectively the system is always appending in serial form rather than seeking on disk, and the start and end times on each record enable bitemporal functionality to be implemented. Whilst the new record is being created it is already being indexed, so there is zero-latency searching once the new document is live.
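As a rough sketch of the MVCC mechanism described above (a conceptual illustration of the general append-only versioning technique, not MarkLogic's internals; the class and method names here are hypothetical):

```python
import itertools
from dataclasses import dataclass

INFINITY = float("inf")
_clock = itertools.count(1)  # simplistic logical clock standing in for timestamps

@dataclass
class Version:
    content: dict
    start: int             # timestamp the version became visible
    end: float = INFINITY  # timestamp it was superseded (infinity = current)

class MVCCStore:
    """Append-only versioned document store (conceptual sketch only)."""
    def __init__(self):
        self._versions = {}  # doc_id -> list of Version, oldest first

    def put(self, doc_id, content):
        now = next(_clock)
        versions = self._versions.setdefault(doc_id, [])
        if versions:
            versions[-1].end = now                     # close off the old version
        versions.append(Version(content, start=now))   # append, never overwrite

    def get(self, doc_id, as_of=None):
        """Read the version current at time `as_of` (default: latest)."""
        as_of = as_of if as_of is not None else next(_clock)
        for v in reversed(self._versions.get(doc_id, [])):
            if v.start <= as_of < v.end:
                return v.content
        return None

store = MVCCStore()
store.put("trade-1", {"qty": 100})   # version written at t=1
store.put("trade-1", {"qty": 250})   # version at t=2 supersedes t=1
print(store.get("trade-1"))          # {'qty': 250}
print(store.get("trade-1", as_of=1)) # {'qty': 100}  (bitemporal-style lookup)
```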

One of the index types mentioned by Joe was a “Reverse Index”, where queries are indexed and each new document coming in is passed over these queries (which sounds like the same story as the complex event processing folks tell), triggering alerts based on which queries the document fits.
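A hedged sketch of the reverse index idea, register the queries up front and match each incoming document against them, might look something like the following. Again this is conceptual only, closer to a toy version of query "percolation" than to MarkLogic's API; the query names and word sets are invented.

```python
# Registered "queries" are just sets of required words here; in a real system
# these would be richer search expressions held in an index of their own.
registered_queries = {
    "fx-alert":     {"fx", "trade"},
    "credit-alert": {"credit", "default"},
}

def match_incoming_document(text):
    """Pass a new document over the registered queries and return matching alerts."""
    words = set(text.lower().split())
    return [name for name, required in registered_queries.items()
            if required <= words]   # alert if all required words are present

# As each document arrives it is evaluated against the stored queries
print(match_incoming_document("New FX trade booked for EURUSD"))
# -> ['fx-alert']
print(match_incoming_document("Credit default swap spread widened"))
# -> ['credit-alert']
```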

In summary, the event was a good one and MarkLogic seems like interesting technology; there seem to be a variety of folks using it in financial markets, with the post-trade analysis example (a bit like RainStor I think, though, as an archive) and others using it more in the reference data space. I am not sure how real-time capable MarkLogic is - there seems to be a lot of emphasis on post-trade. The event also brought home to me the importance of search and database together, which seems to be a big strength of their technology.

01 July 2014

Cloud, data and analytics in London - thanks for coming along!

We had over 60 folks along to our event at the Merchant Taylors' Hall last week in London. Thanks to all who attended and all who helped with the organization of the event, and sorry to miss those of you that couldn't come along this time.

Some photos from the event are below starting with Brad Sevenko of Microsoft (Director, Capital Markets Technology Strategy) in the foreground with a few of the speakers doing some last minute adjustments at the front of the room before the guests arrived:

[Photo]

 

Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started off the presentations at the event, introducing Microsoft's capital markets technology strategy to a packed audience:

[Photo]

 

After a presentation by Virginie O'Shea of Aite Group on Cloud adoption in capital markets, Antonio Zurlo (below) of Microsoft (Senior Program Manager) gave a quick introduction to the services available through the Microsoft Azure cloud and then moved on to more detail around Microsoft Power BI:

[Photo]

 

After Antonio, yours truly (Brian Sentance, CEO, Xenomorph) gave a presentation on what we have been building with Microsoft over the past 18 months: the TimeScape MarketPlace. At this point in the presentation I was giving some introductory background on the challenges of regulatory compliance and the pros and cons of point solutions versus having a more general data framework in place:

[Photo]

 

The event ended with some networking and further discussions. Big thanks to those who came forward to speak with me afterwards, great to get some early feedback.

[Photo]

 

24 June 2014

Cloud, data and analytics in London. Tomorrow Wednesday 25th June.

One day to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylors' Hall tomorrow, Wednesday June 25th. With over ninety people registered so far it should be a great event, so if you can make it please register and come along - it would be great to see you there.

11 June 2014

Financial Markets Data and Analytics. Everywhere London Needs Them.

Pleased to announce that our TimeScape MarketPlace event "Financial Markets Data and Analytics. Everywhere You Need Them" is coming to London, at Merchant Taylors' Hall on Wednesday June 25th.

Come and join Xenomorph, Aite Group and Microsoft for breakfast and hear Virginie O'Shea of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be demonstrating the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. 

In the meantime, please take a look at the event and register if you can come along, it would be great to see you there.

02 May 2014

7 days to go - Financial Markets Data and Analytics. Everywhere You Need Them.

Quick reminder that there are just 7 days left to register for Xenomorph's breakfast briefing event at Microsoft's Times Square offices on Friday May 9th, "Financial Markets Data and Analytics. Everywhere You Need Them."

With 90 registrants so far it looks to be a great event with presentations from Sang Lee of Aite Group on the adoption of cloud technology in financial markets, Microsoft showing the self-service (aka easy!) data integration capabilities of Microsoft Power BI for Excel, and introducing the TimeScape MarketPlace, Xenomorph's new cloud-based data mashup service for publishing and consuming financial markets data and analytics.

Hope to see you there and have a great weekend!

 

15 April 2014

Financial Markets Data and Analytics. Everywhere You Need Them.

Very pleased to announce that Xenomorph will be hosting an event, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square New York offices on May 9th.

This breakfast briefing includes Sang Lee of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be introducing the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. More background and updates on MarketPlace in coming weeks.

In the meantime, please take a look at the event and register if you can come along, it would be great to see you there.

12 March 2014

S&P Capital IQ Risk Event #2 - Enterprise or Risk Data Strategy?

Christian Nilsson of S&P CIQ followed up Richard Burtsal's talk with a presentation on data management for risk, containing many interesting questions for those considering data for risk management needs. Christian started his talk by taking a time machine back to 2006, and asking what were the issues then in Enterprise Data Management:

  1. There is no current crisis - we have other priorities (we now know what happened there)
  2. The business case is still too fuzzy (regulation took care of this issue)
  3. Dealing with the politics of implementation (silos are still around, but cost and regulation are weakening politics as a defence?)
  4. Understanding data dependencies (understanding this throughout the value chain, but still not clear today?)
  5. The risk of doing it wrong (there are risks that you will do data management wrong given all the external parties and sources involved, but what is the risk of not doing it?)

Christian then moved on to say that the current regulatory focus is on clearer roadmaps for financial institutions, citing Basel II/III, Dodd-Frank/Volcker Rule in the US, challenges in valuation from the IASB and IFRS, fund management challenges with UCITS, AIFMD, EMIR, MiFID and MiFIR, and Solvency II in the insurance industry. He coined the phrase "Regulation Goes Hollywood", with multiple versions of regulation like UCITS I, II, III, IV, V, VII for example having more versions than a set of Rocky movies.

He then touched upon some of the main motivations behind the BCBS 239 document and said that regulation had three main themes at the moment:

  1. Higher Capital and Liquidity Ratios
  2. Restrictions on Trading Activities
  3. Structural Changes ("ring fence" retail, global operations move to being capitalized local subsidiaries)

Some further observations were on what the implications will be of the effective "loss" of globalization within financial markets, and also on what can now be considered a risk-free asset (do such things still exist?). Christian then gave some stats on risk as a driver of data and technology spend, with some $20-50B expected to be spent over the next 2-3 years (a wide range - nothing like a consensus from analysts, I guess!).

The talk then moved on to what role data and data management plays within regulatory compliance, with for example:

  • LEI - Legal Entity Identifiers play out throughout most regulation, as a means to enable automated processing and as a way to understand and aggregate exposures.
  • Dodd-Frank - Data management plays within OTC processing and STP in general.
  • Solvency II - This regulation for insurers places emphasis on data quality/data lineage and within capital reserve requirements.
  • Basel III - Risk aggregation and counterparty credit risk are two areas of key focus.

Christian outlined the small budget of the regulators relative to the biggest banks (a topic discussed in previous posts: how society wants stronger, more effective regulation but then isn't prepared to pay for it directly - although I would add that we all pay for it indirectly, but that is another story, in part illustrated in the document this post talks about).

In addition to the well-known term "regulatory arbitrage", describing dealing with different regulations in different jurisdictions, Christian also mentioned the increasingly used term "substituted compliance", where a global company tries to optimise which jurisdictions it and its subsidiaries comply within, with the aim of avoiding compliance in more difficult regimes through compliance within others.

I think Christian outlined the "data management dichotomy" within financial markets very well:

  1. Regulation requires data that is complete, accurate and appropriate
  2. Industry standards of data management and data are poorly regulated, and there is weak industry leadership in this area.

(not sure if it was quite at this point, but certainly some of the audience questions were about whether the data vendors themselves should be regulated which was entertaining).

He also outlined the opportunity from regulation in that it could be used as a catalyst for efficiency, STP and cost base reduction.

Obviously "Big Data" (I keep telling myself to drop the quotes, but old habits die hard) is hard to avoid, and Christian mentioned that IBM say that 90% of the world's data has been created in the last 2 years. He described the opportunities of the "3 V's" of Volume, Variety, Velocity and "Dark Data" (exploiting underused data with new technology - "Dark" and "Deep" are getting more and more use of late). No mention directly in his presentation but throughout there was the implied extension of the "3 V's" to "5 V's" with Veracity (aka quality) and Value (aka we could do this, but is it worth it?). Related to the "Value" point Christian brought out the debate about what data do you capture, analyse, store but also what do you deliberately discard which is point worth more consideration that it gets (e.g. one major data vendor I know did not store its real-time tick data and now buys its tick data history from an institution who thought it would be a good idea to store the data long before the data vendor thought of it).

I will close this post taking a couple of summary lists directly from his presentation, the first being the top areas of focus for risk managers:

  • Counterparty Risk
  • Integrating risk into the Pre-trade process
  • Risk Aggregation across the firm
  • Risk Transparency
  • Cross Asset Risk Reporting
  • Cost Management/displacement

The second list outlines the main challenges:

  • Getting complete view of risk from multiple systems
  • Lack of front to back integration of systems
  • Data Mapping
  • Data availability of history
  • Lack of Instrument coverage
  • Inability to source from single vendor
  • Growing volumes of data

Christian's presentation then put forward a lot of practical ideas about how best to meet these challenges (I particularly liked the risk data warehouse parts, but I am unsurprisingly biased). In summary, if you get the chance then see or take a read of Christian's presentation - I thought it was a very thoughtful document with some interesting ideas and advice put forward.

03 March 2014

See you at the A-Team Data Management Summit this week!

Xenomorph is sponsoring the networking reception at the A-Team DMS event in London this week, and if you are attending then I wanted to extend a cordial invitation to the drinks and networking reception at the end of the day, at 5:30pm on Thursday.

In preparation for Thursday’s agenda, the blog links below are a quick reminder of some of the main highlights from last September’s DMS:

I will also be speaking on the 2pm panel “Reporting for the C-Suite: Data Management for Enterprise & Risk Analytics”. So if you like what you have heard during the day, come along to the drinks and firm up your understanding with further discussion with like-minded individuals. Alternatively, if you find your brain is so full by then of enterprise data architecture, managed services, analytics, risk and regulation that you can hardly speak, come along and allow your cerebellum to relax and make sense of it all with your favourite beverage in hand. Either way you will leave the event more informed than when you went in... well, that’s my excuse and I am sticking with it!

Hope to see you there!

21 October 2013

Credit Risk: Default and Loss Given Default from PRMIA

Great event from PRMIA on Tuesday evening of last week, entitled Credit Risk: The link between Loss Given Default and Default. The event was kicked off by Melissa Sexton of PRMIA, who introduced Jon Frye of the Federal Reserve Bank of Chicago. Jon seems to be an acknowledged expert in the field of Loss Given Default (LGD) and credit risk modelling. I am sure that the slides will be up on the PRMIA event page above soon, but much of Jon's presentation seemed to be based around the following working paper. So take a look at the paper (which is good in my view), but I will stick to an overview and in particular any anecdotal comments made by Jon and the other panelists.

Jon is an excellent speaker: relaxed in manner, very knowledgeable about his subject, humorous but also sensibly reserved in coming up with immediate answers to audience questions. He started by saying that his talk was not going to be long on philosophy, but very pragmatic in nature. Before going into detail, he outlined that the area of credit risk can and will be improved, but that this improvement becomes easier as more data is collected, and that inevitably this data collection process may need to run for many years and decades yet before the data becomes statistically significant.

Which Formula is Simpler? Jon showed two formulas for estimating LGD: one a relatively complex-looking formula (based on the Vasicek distribution mentioned in his working paper) and the other a simple linear model of the form a + b·x. Jon said that looking at the two formulas, many would hope that the second might work best given its simplicity, but he wanted to convince us that the first formula was in fact simpler than the second. He said that the second formula would need to be regressed on all loans to estimate its parameters, whereas the first formula depended on two parameters that most banks should already have a fairly good handle on: the Default Rate (DR) and Expected Loss (EL). The fact that these parameters are relatively well understood seemed to be the basis for saying the first formula was simpler, despite its relative mathematical complexity. This prompted an audience question on the difference between Probability of Default (PD) and Default Rate (DR): it turns out PD is the expected probability of default before default happens (so ex-ante) and DR is the realised rate of default (so ex-post).
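For reference, and with the caveat that this is my rough reading of the working paper rather than a verbatim reproduction of Jon's slides, the comparison is roughly between a conditional LGD curve of the following form (pinned down by portfolio-level parameters the bank already knows, and evaluated at the realised default rate) and a plain linear fit:

```latex
% Conditional default rate under the Vasicek single-factor model:
\mathrm{cDR}(z) \;=\; \Phi\!\left(\frac{\Phi^{-1}(\mathrm{PD}) + \sqrt{\rho}\,z}{\sqrt{1-\rho}}\right)

% Non-linear LGD as a function of the realised default rate (Frye-Jacobs style):
\mathrm{LGD}(\mathrm{DR}) \;=\; \frac{\Phi\!\left(\Phi^{-1}(\mathrm{DR}) - k\right)}{\mathrm{DR}},
\qquad
k \;=\; \frac{\Phi^{-1}(\mathrm{PD}) - \Phi^{-1}(\mathrm{EL})}{\sqrt{1-\rho}}

% The simple alternative, with a and b estimated by regression on loan data:
\mathrm{LGD}(\mathrm{DR}) \;=\; a + b\,\mathrm{DR}
```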

Default and LGD over Time. Jon showed a graph (by an academic called Altman) of DR and LGD over time. When the DR was high (lots of companies failing, in a likely economic downturn) the LGD was also, perhaps understandably, high (a high number of companies failing against an economic background that is both part of the cause of the failures and no help to the loss recovery process). When DR is low, there is a disconnect between LGD and DR: put another way, when the number of companies failing is low, the losses incurred by those companies that do default can be high or low, with no discernible pattern. I am not sure whether this disconnect is partly because the smaller number of companies failing means the sample is much smaller and hence the outcomes are more volatile (no averaging effect), or, more likely, because in healthy economic times the loss given a default is much more of a random variable, dependent on the defaulting company's specifics rather than on the general economic background.

Conclusions Beware: Data is Sparse. Jon emphasised that the Altman data went back 28 years, of which 23 years were periods of low default, with 5 years of high default levels spread across only 3 separate recessions. From a statistical point of view this is very little data, which makes drawing any firm statistical conclusions about default and levels of loss given default very difficult and error-prone.

The Inherent Risk of LGD. Jon here seemed to be focussed not on the probability of default, but rather on the conditional question of how LGD behaves once a default has occurred, and what risk is inherent in the different losses faced. He described how LGD affects: i) Economic Capital - if LGD is more variable, then you need stronger capital reserves; ii) Risk and Reward - if a loan has more LGD risk, then the lender wants more reward; and iii) Pricing/Valuation - even if the expected LGD of two loans is equal, different loans can still default under different conditions with different LGD levels.

Models of LGD

Jon showed a chart with LGD plotted against DR for 6 models (two of which I think he was involved in). All six models depend on three parameters - PD, EL and correlation - and all six seemed to produce almost identical results when plotted on the chart. Jon mentioned that one of his models had been validated (successfully I think, but with a lot of noise in the data) against Moody's loan data taken over the past 14 years. He added that he was surprised that all six models produced almost the same results, implying either that all the models were converging around the correct solution or, in total contrast, that all six were potentially subject to "group think" and systematically wrong in the way the problem is looked at.

Jon took one of his LGD models and compared it against the simple linear model, using simulated data. He showed a graph of some data points for what he called a "lucky bank", with the two models superimposed over the top. The "lucky" bit came in because this bank's data points for DR against LGD showed lower DR than expected for a given LGD, and lower LGD for a given DR. On this specific case, Jon said that the simple linear model fits better than his non-linear one, but when repeated over many data sets his LGD model fitted better overall, since it seemed to be less affected by random data.
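As a rough sketch of the kind of comparison described (entirely simulated, with hypothetical portfolio parameters; this is not Jon's actual experiment or data), one can generate default rates from a single-factor model, add noise around a non-linear LGD curve of the type above, and see how a fitted straight line compares with the parameter-free curve:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Assumed portfolio parameters (hypothetical): EL = PD x expected LGD of 0.40
PD, EL, rho = 0.03, 0.012, 0.15
k = (norm.ppf(PD) - norm.ppf(EL)) / np.sqrt(1 - rho)

def lgd_model(dr):
    """Non-linear LGD as a function of the realised default rate."""
    return norm.cdf(norm.ppf(dr) - k) / dr

# Simulate annual default rates via the Vasicek single-factor model
z = rng.standard_normal(5000)
dr = norm.cdf((norm.ppf(PD) + np.sqrt(rho) * z) / np.sqrt(1 - rho))

# Noisy "observed" LGDs scattered around the model curve
lgd_obs = np.clip(lgd_model(dr) + rng.normal(0, 0.05, size=dr.size), 0, 1)

# Simple linear alternative a + b*DR, fitted by least squares
b, a = np.polyfit(dr, lgd_obs, 1)

# Compare fit errors of the two approaches
err_linear = np.mean((lgd_obs - (a + b * dr)) ** 2)
err_model = np.mean((lgd_obs - lgd_model(dr)) ** 2)
print(f"linear fit MSE: {err_linear:.5f}, non-linear model MSE: {err_model:.5f}")
```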

There were then a few audience questions as Jon closed his talk, one leading Jon to remind everyone of the scarcity of data in LGD modelling. In another Jon seemed to imply that he would favor using his model (maybe understandably) in the Dodd-Frank Annual Stress Tests for banks, emphasising that models should be kept simple unless a more complex model can be justified statistically. 

Steve Bennet and the Data Scarcity Issue 

Following Jon's talk, Steve Bennet of PECDC picked up on Jon's issue of scarce data within LGD modelling. Steve is based in the US, working for PECDC, a cross-border initiative to collect LGD and EAD (exposure at default) data. The basic premise seems to be that in dealing with the scarce data problem we do not have 100 years of data yet, so in the meantime let's pool data across member banks and hence build up a more statistically significant data set - put another way: let's increase the width of the dataset if we can't control the depth.

PECDC is a consortium of around 50 organisations that pool data relating to credit events. Steve said that they capture data fields per default at four "snapshot" times: origination, 1 year prior to default, at default and at resolution. He said that every bank that had joined the organisation had managed to improve its datasets. Following an audience question, he clarified that PECDC does not predict LGD with any of its own models, but rather provides the pooled data to enable the banks to model LGD better.

Steve said that LGD turns out to be very different for different sectors of the market, particularly between SMEs and large corporations (levels of LGD for large corporations being more stable globally and less subject to regional variations). But also there is great LGD variation across specialist sectors such as aircraft finance, shipping and project finance. 

Steve ended by saying that PECDC was originally formed in Europe and is now attempting to get more US banks involved, with 3 US banks already participating and 7 waiting to join. There was an audience question on whether regulators allow pooled data to be used under Basel IRB - apparently Nordic regulators allow this due to the need for more data in a smaller market, European banks use the pooled data to validate their own data in IRB, but in the US banks must use their own data at the moment.

Til Schuermann

Following Steve, Til Schuermann added his thoughts on LGD. He said that LGD has a time variation and is not random, being worse in recession when DR is high. His stylized argument to support this was that in recession there are lots of defaults, leading to lots of distressed assets, and that following the laws of supply and demand the assets used in recovery will fetch lower prices. Til mentioned that there is a large effect from the timing of recovery, with recovery following default by between 1 and 10 quarters. He offered words of warning that not all defaults and not all collateral are created equal, emphasising that debt structures and industry stress matter.

Summary

The evening closed with a few audience questions and a general summation by the panelists of the main issues of their talks, primarily around models and modelling, the scarcity of data and how to be pragmatic in the application of this kind of credit analysis. 

 

 

09 October 2013

And the winner of the Best Risk Data Management and Analytics Platform is...

...Xenomorph!!! Thanks to all who voted for us in the recent A-Team Data Management Awards, it was great to win the award for Best Risk Data Management and Analytics Platform. Great that our strength in the Data Management for Risk field is being recognised, and big thanks again to clients, partners and staff who make it all possible!

Please also find below some posts for the various panel debates at the event:

 Some photos, slides and videos from the event are now available on the A-Team site.

 

07 October 2013

#DMSLondon - The Chief Data Officer Challenge

The first panel of the afternoon touched on a hot topic at the moment, the role of the Chief Data Officer (CDO). Andrew Delaney again moderated the panel, consisting of Rupert Brown of UBS, Patrick Dewald of Diaku, Colin Hall of Credit Suisse, Nigel Matthews of Barclays and Neill Vanlint of GoldenSource. Main points:

  • Colin said that the need for the CDO role is that someone needs to sit at the top table who is both nerdy about data but also can communicate a vision for data to the CEO.
  • Rupert said that the role of CDO was still a bit nebulous, covering data conformance, storage management, security and data opportunity (new functionality and profit). He suggested this role used to be called "Data Stewardship" and that the CDO tag is really a rename.
  • Colin answered that the role did use to be a junior one, but regulation and the rate of industry change demands a CDO, a point contact for everyone when anything comes up that concerns data - previously nobody knew quite who to speak to on this topic.
  • Patrick suggested that a CDO needs a long-term vision for data, since the role is not just an operational one. 
  • Nigel pointed out that the CDO needs to cover all kinds of data and mentioned recent initiatives like BCBS with their risk data aggregation paper.
  • Neill said that he had seen the use of a CDO per business line at some of his clients.
  • There was some conversation around the different types of CDO and the various carrots and sticks that can be employed. Neill made the audience laugh with his quote from a client that "If the stick doesn't work, I have a five-foot carrot to hit them with!"
  • Patrick said that CDO role is about business not just data.
  • Colin picked up on what Patrick said and illustrated this with an example of legal contract data feeding directly into capital calculations.
  • Nigel said that the CDO is a facilitator with all departments. He added that the monitoring tools used for market data were needed in reference data too.

Overall good debate, and I guess if you were starting from scratch (if only we could!) you would have to think that the CDO is a key role given the finance industry is primarily built on the flow of data from one organisation to another.

 

 

#DMSLondon - What Will Drive Data Management?

The first panel of the day opened with an introductory talk by Chris Johnson of HSBC. Chris started his talk by proudly announcing that he drives a Skoda car, something that to him would have been unthinkable 25 years ago but with investment, process and standards things can and will change. He suggested that data management needs to go through a similar transformation, but that there remained a lot to be done. 

Moving on to the current hot topics of data utilities and managed services, he said that the reduced costs of managed services only become apparent in the long term and that both types of initiative have historically faced issues with:

  • Collaboration
  • Complexity
  • Logistical Challenges and Risks

Chris made the very good point that until service providers accept liability for data quality then this means that clients must always check the data they use. He also mentioned that in relation to Solvency II (a hot topic for Chris at HSBC Security Services), that EIOPA had recently mentioned that managed services may need to be regulated. Chris mentioned the lack of time available to respond to all the various regulatory deadlines faced (a recurring theme) and that the industry still lacked some basic fundamentals such as a standard instrument identifier.

Chris then joined the panel discussion with Andrew Delaney as moderator and with other panelists including Colin Gibson (see previous post), Matt Cox of Denver Perry, Sally Hinds of Data Management Consultancy Services and Robert Hofstetter of Bank J. Safra Sarasin. The key points I took from the panel are outlined below:

  • Sally said that many firms were around Level 3 in the Data Management Maturity Model, and that many were struggling particularly with data integration. Sally added that utilities were new, as was the CDO role, and that the implications for data management were only just playing out.
  • Matt thought that reducing cost was an obvious priority in the industry at the moment, with offshoring playing its part but progress was slow. He believed that data management remains underdeveloped with much more to be done.
  • Colin said that organisations remain daunted by their data management challenges, and that new challenges for data management are arising with transactional data and derived data.
  • Sally emphasised the role of the US FATCA regulation and how it touches upon so many processes and departments including KYC, AML, Legal, Tax etc.
  • Matt highlighted derivatives regulation with the current activity in central clearing, Dodd-Frank, Basel III and EMIR.
  • Chris picked up on this and added Solvency II into the mix (I think you can sense regulation was a key theme...). He expressed the need for and desirability of a Unique Product Identifier (UPI - see report) as essential for the financial markets industry, and said that we should not just stand still now that the LEI is coming. He said that industry associations really needed to pick up their game to get more standards in place, but added that the IMA had been quite proactive in this regard. He expressed his frustration at current data licensing arrangements with data vendors, with the insistence on a single point of use being the main issue (a big problem if you are in security services serving your clients, I guess)
  • Robert added that his main issues were data costs and data quality
  • Andrew then brought the topic around to risk management and its impact on data management.
  • Colin suggested that more effort was needed to understand the data needs of end users within risk management. He also mentioned that products are not all standard and data complexity presents problems that need addressing in data management.
  • Chris mentioned that there are 30 data fields used in Solvency II calculations and that if any are wrong this has a direct impact on the calculated capital charge (i.e. data is important!)
  • Colin got onto the topic of unstructured data and said how it needs to be tagged in some way to become useful. He suggested that there is an embryonic cross-over taking place between structured and unstructured data usage.
  • Sally thought that the merging of Business Intelligence into Data Management was a key development, and that if you have clean data then use it as much as you can.
  • Robert thought that increased complexity in risk management and elsewhere should drive the need for increased automation.
  • Colin thought cost pressures mean that the industry simply cannot afford the old IT infrastructure and that architecture needs to be completely rethought.
  • Chris said that we all need to get the basics right, with LEI but then on to UPI. He said to his knowledge data management will always be a cost centre and standardisation was a key element of reducing costs across the industry.
  • Sally thought that governance and ownership of data was woolly at many organisations and needed more work. She added this needed senior sponsorship and that data management is an ongoing process, not a one-off project.
  • Matt said that the "stick" was very much needed in addition to the carrot, advising that the proponents of improved data management should very much lay out the negative consequences to bring home the reality to business users who might not see the immediate benefits and costs.

Overall good panel, lots of good debate and exchanging of ideas.

 

14 June 2013

Xenomorph at SIFMA Tech 2013 NYC

Quick note to say that Xenomorph will be exhibiting at this week's SIFMA Tech 2013 event in New York. You can find us on the Microsoft stand (booth 1507) on both Tuesday 18th and Wednesday 19th, and we can show you some of the work we have been doing with Microsoft on their Windows Azure cloud platform. 

I am also speaking at the event with Microsoft and a few other partners on Wednesday 19th at 11:40am:

"Managing Data Complexity in Challenging Times" - a panel with the following participants:

  • Rupesh Khendry – Head WW Capital Markets Industry Solutions, Microsoft Financial Services
  • Marc Alvarez - Senior Director, Interactive Data
  • Satyam Kancharla - SVP, Numerix
  • Dushyant Shahrawat – Senior Research Director, CEB TowerGroup
  • Brian Sentance - CEO, Xenomorph 

Hope to see you there!

16 October 2012

The Missing Data Gap

Getting to the heart of "Data Management for Risk", PRMIA held an event entitled "Missing Data for Risk Management Stress Testing" at Bloomberg's New York HQ last night. For those of you who are unfamiliar with the topic of "Data Management for Risk", the following diagram may help to explain how the topic relates to all the data sets feeding the VaR and scenario engines.

[Diagram: data flow for risk engines]
I have a vested interest in saying this (and please forgive the product placement in the diagram above, but hey, this is what we do...), but the topic of data management for risk seems to fall into a functionality gap between: i) the risk system vendors, who typically seem to assume that the world of data is perfect and that the topic is too low level to concern them, and ii) the traditional data management vendors, who seem to regard things like correlations, curves, spreads, implied volatilities and model parameters as too business-domain focussed (see previous post on this topic). As a result, the risk manager is typically left with ad-hoc tools like spreadsheets and other analytical packages to perform data validation and to fill any missing data found. These ad-hoc tools are fine until the data universe grows larger, leading to the regulators becoming concerned about just how much data is being managed "out of system" (see past post for some previous thoughts on spreadsheets).

The Crisis and Data Issues. Anyway, enough background, and on to some of the issues raised at the event. Navin Sharma of Western Asset Management started the evening by saying that pre-crisis people had a false sense of security around Value at Risk, and that the crisis showed that data is not reliably smooth in nature. Post-crisis, questions obviously arise around how much data to use, how far back to go, and whether to include or exclude extreme periods like the crisis. Navin also suggested that the boards of many financial institutions are now much more open to reviewing scenarios put forward by the risk management function, whereas pre-crisis their attention span was much more limited.

Presentation. Don Wesnofske did a great presentation on the main issues around data and data governance in risk (which I am hoping to link to here shortly...)

Issues with Sourcing Data for Risk and Regulation. Adam Litke of Bloomberg asked the panel what new data sourcing challenges were resulting from the current raft of regulation being implemented. Barry Schachter cited a number of Basel-related examples. He said that the cost of rolling up loss data across all operations was prohibitive, and hence there were data truncation issues to be faced when assessing operational risk. Barry mentioned that liquidity calculations were new and presented data challenges. Non-centrally cleared OTC derivatives also presented data challenges, with initial margin calculations based on stressed VaR. Whilst on the subject of stressed VaR, Barry said that there were a number of missing data challenges, including the challenge of obtaining past histories and of modelling current instruments that did not exist in past stress periods. He said that it was telling on this subject that the Fed had decided to exclude tier 2 banks from stressed VaR calculations, on the basis that they did not think these institutions were in a position to calculate these numbers given the data and systems they had in place.

Barry also mentioned the challenges of Solvency II for insurers (and their asset managers) and said that this was a huge exercise in data collection. He said that there were obvious difficulties in modelling hedge fund and private equity investments, and that the regulation penalised the use of proxy instruments where there was limited "see-through" to the underlying investments. Moving on to UCITS IV, Barry said that the regulation required VaR calculations to be regularly reviewed on an ongoing basis, and he pointed out one issue with much of the current regulation in that it uses ambiguous terms such as models of "high accuracy" (I guess the point being that accuracy is always arguable/subjective for an illiquid security).

Sandhya Persad of Bloomberg said that there were many practical issues to consider, such as exchanges that close at different times and the resultant misalignment of closing data, problems dealing with holiday data across different exchanges and countries, and sourcing of factor data for risk models from analysts. Navin expanded more on his theme of which periods of data to use. Don took a different tack and emphasised the importance of getting the fundamental data of client-contract-product in place, suggesting that this was still a big challenge at many institutions. Adam closed the question by pointing out the data issues in everyday mortgage insurance as an example of how prevalent data problems are.

What Missing Data Techniques Are There? Sandhya explained a few of the issues she and her team face at Bloomberg in making decisions about what data to fill. She mentioned the obvious issue of the distance between missing data points and the preceding data used to fill them. Sandhya mentioned that one approach to missing data is to reduce factor weights down to zero for factors without data, but this gives rise to a data truncation issue. She said that there were a variety of statistical techniques that could be used; she mentioned adaptive learning techniques and then described some of the work that one of her colleagues had been doing on maximum-likelihood estimation, whereby in addition to achieving consistency with the covariance matrix of "near" neighbours, the estimation also has greater consistency with the historical behaviour of the factor or instrument over time.
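As a very simplified illustration of that family of techniques (a generic conditional-Gaussian fill using neighbouring factor returns, not Bloomberg's actual methodology; the data, covariance numbers and fill_missing helper below are all made up), a missing return can be estimated so that it remains consistent with the covariance structure of its "near" neighbours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns for 4 correlated factors (rows = days)
true_cov = np.array([[1.0, 0.8, 0.6, 0.3],
                     [0.8, 1.0, 0.5, 0.2],
                     [0.6, 0.5, 1.0, 0.4],
                     [0.3, 0.2, 0.4, 1.0]]) * 1e-4
returns = rng.multivariate_normal(np.zeros(4), true_cov, size=500)

# Estimate the covariance matrix from history (in practice, from "near" neighbours)
cov = np.cov(returns, rowvar=False)

def fill_missing(observed, missing_idx, cov):
    """Conditional-mean estimate of one missing return given the others,
    i.e. E[x_m | x_o] under a joint Gaussian assumption with zero means."""
    obs_idx = [i for i in range(cov.shape[0]) if i != missing_idx]
    sigma_mo = cov[missing_idx, obs_idx]        # cross-covariances
    sigma_oo = cov[np.ix_(obs_idx, obs_idx)]    # covariance of observed factors
    weights = np.linalg.solve(sigma_oo, sigma_mo)
    return float(weights @ observed[obs_idx])

# Pretend factor 0 is missing on the last day and fill it from factors 1-3
actual = returns[-1, 0]
estimate = fill_missing(returns[-1], missing_idx=0, cov=cov)
print(f"actual: {actual:.6f}, filled estimate: {estimate:.6f}")
```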

Navin commented that fixed income markets were not as easy to deal with as equity markets in terms of data, and that at sub-investment grade there is very little data available. He said that heuristic models were often needed, and suggested that there was a need for "best practice" to be established for fixed income, particularly in light of guidelines from regulators that are at best ambiguous.

I think Barry then made some great comments about data and data quality in saying that risk managers need to understand more about the effects (or lack of them) that input data has on the headline reports produced. The reason I say great is that I think there is often a disconnect or lack of knowledge around the effects that input data quality can have on the output numbers produced. Whilst regulators increasingly want data "drill-down" and justification of any data used to calculate risk, it is still worth understanding more about whether output results are greatly sensitive to the input numbers, or whether related aspects such as data consistency ought to have more emphasis than, say, absolute price accuracy. For example, data quality was being discussed at a recent market data conference I attended and only about 25% of the audience said that they had ever investigated the quality of the data they use. Barry also suggested that you need to understand to what purpose the numbers are being used and what effect the numbers have on the decisions you take. I think here the distinction was around usage in risk, where changes/deltas might be more important, whereas in calculating valuations or returns price accuracy might receive more emphasis.

How Extensive is the Problem? The general consensus from the panel was that the issue's importance needed to be understood more (I guess my experience is that the regulators can make data quality important for a bank if they say that input data issues are the main reason for blocking approval of an internal model for regulatory capital calculations). Don said that any risk manager needed to be able to justify why particular data points were used, and there was further criticism from the panel of regulators asking for high quality without specifying what this means or what needs to be done.

Summary - My main conclusions:

  • Risk managers should know more about how and in what ways input data quality affects output reports
  • Be aware of how your approach to data can affect the decisions you take
  • Be aware of the context of how the data is used
  • Regulators set the "high quality" agenda for data but don't specify what "high quality" actually is
  • Risk managers should not simply accept regulatory definitions of data quality and should join in the debate

Great drinks and food afterwards (thanks Bloomberg!) and a good evening was had by all, with a topic that needs further discussion and development.

 

 

30 August 2012

Reverse Stress Testing at Quafafew

Just back from a good vacation (London Olympics followed by a sunny week in Portugal - hope your summer has gone well too) and enjoyed a great evening at a Quafafew event on Tuesday evening, entitled "Reverse Stress Testing & Roundtable on Managing Hedge Fund Risk".

Reverse Stress Testing

The first part of the evening was a really good presentation by Daniel Satchkov of Rixtrema on reverse stress testing. Daniel started the evening by stating his opinion that risk managers should not consider their role as one of trying to predict the future, but rather one more reminiscent of "car crash testing", where the role of the tester is one of assessing, managing and improving the response of a car to various "impacts", without needing to understand the exact context of any specific crash such as "Who was driving?", "Where did the accident take place?" or "Whose fault was it?". (I guess the historic context is always interesting, but will be no guide to where, when and how the next accident takes place). 

Daniel spent some of his presentation discussing the importance of paradigms (aka models) to risk management, which in many ways echoes many of the themes from the modeller's manifesto. Daniel emphasised the importance of imagination in risk management, and gave a quick story about a German professor of mathematics who, when asked the whereabouts of one of his new students, replied that "he didn't have enough imagination so he has gone off to become a poet".

In terms of paradigms and how to use them, he gave the example of Brownian motion and described how the probability of all the air in the room moving to just one corner was effectively zero (as evidenced by the lack of oxygen cylinders brought along by the audience). However such extremes were not unusual in market prices, so he noted how Black-Scholes was evidently the wrong model, but when combined with volatility surfaces the model was able to give the right results i.e. "the wrong number in the wrong formula to get the right price." His point here was that the wrong model is ok so long as you are aware of how it is wrong and what its limitations are (might be worth checking out this post containing some background by Dr Yuval Millo about the evolution of the options market).
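To illustrate the "wrong number in the wrong formula" idea (a minimal sketch of my own using the standard Black-Scholes formula, not anything presented on the night): the vol is backed out of the market price and then fed straight back into the same "wrong" formula, which by construction reproduces the market price:

```python
from math import exp, log, sqrt
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(s, k, t, r, vol):
    """Plain Black-Scholes European call price (the 'wrong' model)."""
    d1 = (log(s / k) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * norm.cdf(d1) - k * exp(-r * t) * norm.cdf(d2)

def implied_vol(market_price, s, k, t, r):
    """Back out the 'wrong number' that makes the formula hit the market price."""
    return brentq(lambda v: bs_call(s, k, t, r, v) - market_price, 1e-4, 5.0)

# an out-of-the-money call quoted in the market
mkt_price = 1.45
vol = implied_vol(mkt_price, s=100, k=120, t=0.5, r=0.02)
print(vol, bs_call(100, 120, 0.5, 0.02, vol))   # reproduces the market quote
```

Doing this strike by strike is what builds the volatility surface that makes the model usable in practice, even though the constant-volatility assumption behind the formula is known to be wrong.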

Daniel said that he disagreed with Taleb's premise that the range of outcomes was infinite and that as a result all risk managers should just give up and buy a lottery ticket; however he had some sympathy with Taleb over the use of stable correlations within risk management. His illustration was once again entertaining, quoting a story where a doctor asks a nurse what the temperature of the patients at a Russian hospital is, only to be told that they were all "normal, on average", which obviously is not the most useful medical information ever provided. Daniel emphasised that, contrary to what you often read, correlations do not always move to one in a crisis, but there are often similarities from one crisis to the next (maybe history not repeating itself but more rhyming instead). He said that accuracy was not really valid or possible in risk management, and that the focus should be on relative movements and the relative importance of the different factors assessed in risk.

Coming back to the core theme of reverse stress testing, Daniel presented a method whereby, having categorised certain types of "impacts", a level of loss could be specified and the model would produce a set of scenarios that generate that loss. Daniel said that he had designed his method with a view to producing sets of scenarios that were:

  • likely
  • different
  • not missing any key dangers

He showed some of the result sets from his work which illustrated that not all scenarios were "obvious". He was also critical of addressing key risk factors separately, since hedges against different factors would be likely to work against each other in times of crisis and hedging is always costly. I was impressed by his presentation (both in content and in style) and if the method he described provides a reliable framework for generating a useful range of possible scenarios for a given loss level, then it sounds to me like a very useful tool to add to those available to any risk manager.
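As a very much simplified sketch of the general idea (my own illustration, assuming a linear P&L in Gaussian factor shocks, and definitely not Daniel's method): for a given target loss, the single most likely scenario on the iso-loss plane can be written down in closed form, and searching around it is one way to start generating candidate scenarios that all hit the same loss:

```python
import numpy as np

def most_likely_loss_scenario(sensitivities, cov, target_loss):
    """For portfolio loss = sensitivities . shocks and Gaussian factor shocks
    with covariance cov, return the most likely shock vector (smallest
    Mahalanobis distance) that produces exactly the target loss."""
    sigma_w = cov @ sensitivities
    return (target_loss / (sensitivities @ sigma_w)) * sigma_w

sensitivities = np.array([5.0, -2.0, 3.0])      # loss per unit shock in each factor
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])            # factor shock covariance
shock = most_likely_loss_scenario(sensitivities, cov, target_loss=1.5)
print(shock, sensitivities @ shock)             # the shock reproduces the 1.5 loss
```

Daniel's criteria of "different" and "not missing any key dangers" are the hard part; they need something beyond this single closed-form point, such as searching for additional local modes or imposing diversity constraints between scenarios.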

Managing Hedge Fund Risk

The second part of the evening involved Herb Blank of S-Network (and Quafafew) asking a few questions of Raphael Douady of Riskdata and Barry Schachter of Woodbine Capital. Raphael was an interesting and funny member of the audience at the Dragon Kings event, asking plenty of challenging questions, and the entertainment continued yesterday evening. Herb asked how VaR should be used at hedge funds, to which Raphael said that if he calculated a VaR of 2 and the loss was 2.5, he would have been doing his job. If the VaR was 2 and the loss was 10, he would say he was not doing his job. Barry said that he only uses VaR when he thinks it is useful, in particular when the assumptions underlying VaR are to some degree reflected in the stability of the market at the time it is used.

Raphael then took us off on an interesting digression based on human perceptions of probability and statistical distributions. He told the audience that yesterday was his eldest daughter's birthday, and he asked the members of the audience to write down on paper a lower and upper bound for her age that they believed covered the 99th percentile. As background, Raphael looks like this. Raphael collected the results and found that out of 28 entries, the ranges of ages provided by 16 members of the audience did not cover his daughter's age. Of the 12 successful entries (her age was 25), six had 25 as the upper bound. Some of the entries said that she was between 18 and 21, which Raphael took to mean that some members of the audience thought that they knew her if they assigned a 99th percentile probability to their guess (they didn't). His point was that even for Quafafewers (or maybe Quafafewtoomuchers given the results...) guessing probabilities and appropriate ranges of distributions is not a strong point for many of the human race.

Raphael then went on to illustrate his point above by saying that if you asked him whether he thought the Euro would collapse, then on balance he didn't think it was very likely, since he thinks that when forced Germany would ultimately come to the rescue. However, if you were assessing the range of outcomes that might fit within the 99th percentile of the distribution of outcomes, then Raphael said that the collapse of the Euro should be included as a possible scenario, but that this possibility was not currently being included in the scenarios used by the major financial institutions. Off on another (related) digression, Raphael compared LTCM to having the best team of Formula 1 drivers in the world, who given an F1 track would drive the fastest and win everything, but if forced to drive an F1 car on a very bumpy road would be crashing much more than most, regardless of their talent or the capabilities of their vehicle.

Barry concluded the evening by saying that he would speak first, otherwise he would not get a chance to, given Raphael's performance so far. Again it was a digression from hedge fund risk management, but he said that many have suggested that risk managers need to do more of what they were already doing (more scenarios, more analysis, more transparency etc). Barry suggested that maybe, rather than just doing more, the paradigm itself might be wrong and risk managers should be thinking differently rather than just doing more of the same. He gave one specific example of speaking to a structurer at a bank recently and asking whether, given the higher hurdle rates for capital, the structurer should consider investing in riskier products. The answer from the structurer was that the bank was planning to meet about this later that day, so once again it would seem that what the regulators want to happen is not necessarily what they are going to get...

 

 

21 June 2012

SIFMA NYC 2012 - the event to be (outside) at

Quick note to say that it was the week of SIFMA in New York, what was once the biggest event in the fintech calendar. It unfortunately continues its decline, charging for entrance for the first time and with a continued reduction in the number of vendors exhibiting. Eli Manning of the New York Giants turned up to speak, but I guess he didn't have to pay the entrance fee (to say the least...).

Regardless of the exhibition's decline, perhaps the organisers should start charging for entrance to the Bridges Bar in the Hilton Hotel where the event is held? Seems like the world and his wife still want to meet up in New York around this time, it is just that the organisers need to find some better ways to tap into this enthusiasm to talk face to face.

14 June 2012

Paris Financial Information Summit 2012

I attended the Financial Information Summit event on Tuesday, organized in Paris by Inside Market Data and Inside Reference Data.

Unsurprisingly, most of the topics discussed during the panels focused on reducing data costs, managing the vendor relationship strategically, LEI and building sound data management strategies.

Here is a (very) brief summary of the key points touched upon, which generated good debate among both panellists and audience:

Lowering data costs and cost containment panels

  • Make end-users aware of how much they pay for that data so that they will have a different perspective when deciding if the data is really needed or a "nice to have"
  • Build a strong relationship with the data vendor: you work for the same aim and share the same industry issues
  • Evaluate niche data providers who are often more flexible and willing to assist while still providing high quality data
  • Strategic vendor management is needed within financial institutions: this should be an on-going process aimed at improving contract management for data licenses
  • A centralized data management strategy and consolidation of processes and data feeds allow cost containment (something that Xenomorph have long been advocating)
  • Accuracy and timeliness of data is essential: make sure your vendor understands your needs
  • Negotiate redistribution costs to downstream systems

One good point was made by David Berry, IPUG-Cossiom, on the acquisition of data management software vendors by the data providers themselves (referring to the Markit-Cadis and PolarLake-Bloomberg deals), stating that it will be tricky to see how the two business units will be managed "separately" (if kept separate at all...I know what you are thinking!).

There were also interesting case studies and examples supporting the points above. Many panellists pointed out how difficult it can be to obtain high quality data from vendors and that only regulation can actually improve the standards. Despite the concerns, I must recognize that many firms are now pro-actively approaching the issue and trying to deal with the problem in a strategic manner. For example, Hand Henrik Hovmand, Market Data Manager, Danske Bank, explained how Danske Bank are in the process of adopting a strategic vendor system made up of 4 steps: assessing the vendor, classifying the vendor, deciding what to do with the vendor and creating a business plan. Vendors are classified as strategic, tactical, legacy or emerging. Based on this classification, the "bad" vendors are evaluated to verify whether they are enhancing data quality. This vendor landscape is used both internally and externally during negotiation, and Hovmand was confident it would help Danske Bank contain costs and get more for the same price.

I also enjoyed the panel on Building a sound management strategy, where Alain Robert-Dauton of Sycomore Asset Management was speaking. He highlighted how asset managers, in particular smaller firms, are now feeling the pressure of regulators but at the same time are less prepared to deal with compliance than larger investment banks. He recognized that asset managers need to invest in a sound risk data management strategy and supporting technology, with regulators demanding more detail, reports and high quality data.

As for what was said on LEI, it seems most financial institutions are still unprepared for how it should be implemented, due to the uncertainty around it, but I refer you to an article from Nicholas Hamilton in Inside Reference Data for a clear picture of what was discussed during the panel.

Looking forward, the panellists agreed that the main challenge is, and will be, managing the increasing volume of data. Though, as Tom Dalglish affirmed, the market is still not ready for the cloud, given that not much has been done in terms of legislation. Watch out!

The full agenda of the event is available here.

08 June 2012

Federal Reserve beats the market (at ping pong...)

Thanks to all those who came along and supported "Ping Pong 4 Public Schools" at the AYTTO fund raiser event at SPiN on Wednesday evening. Great evening with participants in the team competition from the TabbGroup, Jefferies Investment Bank, Toro Trading, MissionBig, PolarLake, AIG, Mediacs, Xenomorph and others. In fact the others included the Federal Reserve, who got ahead of the market and won the team competition...something which has to change next year! Additional thanks to SPiN NYC for hosting the event, and to Bonhams for conducting the reverse auction.

Some photographs from the event below:

Photo

Ben Nisbet of AYTTO trying to make order out of chaos at the start of the team competition...

 

Photo

One of the AYTTO students, glad none of us had to play her, we would have got wupped...

 

Photo

The TabbGroup strike a pose and look optimistic at the start of the evening...

 

Photo

Sidney, one of the AYTTO coaches, helping us all to keep track of the score...

 

Photo

This team got a lot of support from the audience, no idea why...

 

18 October 2011

A-Team event – Data Management for Risk, Analytics and Valuations

My colleagues Joanna Tydeman and Matthew Skinner attended the A-Team Group's Data Management for Risk, Analytics and Valuations event today in London. Here are some of Joanna's notes from the day:

Introductory discussion

Andrew Delaney, Amir Halton (Oracle)

Drivers of the data management problem – regulation and performance.

Key challenges that are faced – the complexity of the instruments is growing, managing data across different geographies, an increase in M&As because of the volatile market, broader distribution of data and analytics required etc. It’s a work in progress but there is appetite for change. A lot of emphasis is now on OTC derivatives (this was echoed at a CityIQ event earlier this month as well).

Having an LEI is becoming standard, but has its problems (e.g. China has already said it wants its own LEI which defeats the object). This was picked up as one of the main topics by a number of people in discussions after the event, seeming to justify some of the journalistic over-exposure to LEI as the "silver bullet" to solve everyone's counterparty risk problems.

Expressed the need for real time data warehousing and integrated analytics (a familiar topic for Xenomorph!) – analytics now need to reflect reality and to be updated as the data is running - coined as ‘analytics at the speed of thought’ by Amir. Hadoop was mentioned quite a lot during the conference, also NoSQL which is unsurprising from Oracle given their recent move into this tech (see post - a very interesting move given Oracle's relational foundations and history)

Impact of regulations on Enterprise Data Management requirements

Virginie O’Shea, Selwyn Blair-Ford (FRS Global), Matthew Cox (BNY Mellon), Irving Henry (BBA), Chris Johnson (HSBC SS)

Discussed the new regulations, how there is now a need to change practice as regulators want to see your positions immediately. Pricing accuracy was mentioned as very important so that valuations are accurate.

Again, said how important it is to establish which areas need to be worked on and make the changes. Firms are still working on a micro level, need a macro level. It was discussed that good reasons are required to persuade management to allocate a budget for infrastructure change. This takes preparation and involving the right people.

Items that panellists considered should be on the priority list for next year were:

· Reporting – needs to be reliable and meaningful

· Long term forecasts – organisations should look ahead and anticipate where future problems could crop up.

· Engage more closely with Europe (I guess we all want the sovereign crisis behind us!)

· Commitment of firm to put enough resource into data access and reporting including on an ad hoc basis (the need for ad hoc was mentioned in another session as well).

Technology challenges of building an enterprise management infrastructure

Virginie O’Shea, Colin Gibson (RBS), Sally Hinds (Reuters), Chris Thompson (Mizuho), Victoria Stahley (RBC)

Coverage and reporting were mentioned as the biggest challenges.

Front office used to be more real time, back office used to handle the reference data, now the two must meet. There is a real requirement for consistency; front office and risk need the same data so that they arrive at the same conclusions.

Money needs to be spent in the right way and firms need to build for the future. There is real pressure for cost efficiency and for doing more for less. Discussed that timelines should perhaps be longer so that a good job can be done, but there should be shorter milestones to keep the business happy.

Panellists described the next pain points/challenges that firms are likely to face as:

· Consistency of data including transaction data.

· Data coverage.

· Bringing together data silos, knowing where data is from and how to fix it.

· Getting someone to manage the project and uncover problems (which may be a bit scary, but problems are required in order to get funding).

· Don’t underestimate the challenges of using new systems.

Better business agility through data-driven analytics

Stuart Grant, Sybase

Discussed Event Stream Processing: analytics now need to be carried out whilst the data is running, not when it is standing still. This was also mentioned during other sessions, so it seems to be a hot topic.
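For anyone unfamiliar with the incremental style of computation this implies, here is a toy sketch of my own (plain Python, not Sybase/Aleri syntax): the analytic is maintained event by event rather than recomputed over the full data set at the end of the day:

```python
class StreamingVWAP:
    """Incrementally maintained VWAP: the analytic is updated as each tick
    arrives rather than recomputed over the full data set in a batch."""
    def __init__(self):
        self.notional = 0.0
        self.volume = 0.0

    def on_tick(self, price, size):
        self.notional += price * size
        self.volume += size
        return self.notional / self.volume   # current value after every event

vwap = StreamingVWAP()
for price, size in [(100.10, 200), (100.30, 50), (99.90, 400)]:
    print(vwap.on_tick(price, size))
```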

Mentioned that the buy side’s challenge is that their core competency is not IT. Now, with cloud computing, they are more easily able to outsource. He suggested that the buy side shouldn’t necessarily build their own systems just to come up with a different, original solution.

Data collection, normalisation and orchestration for risk management

Andrew Delaney, Valerie Bannert-Thurner (FTEN), Michael Coleman (Hyper Rig), David Priestley (CubeLogic), Simon Tweddle (Mizuho)

Complexity of the problem is the main hindrance. When problems are small, it is hard for them to get budget so they have to wait for problems to get big – which is obviously not the best place to start from.

There is now a change in behaviour of senior front office management – now they want reports, they want a global view. Front office do in fact care about risk because they don’t want to lose money. Now we need an open dialogue between front office and risk as to what is required.

Integrating data for high compute enterprise analytics

Andrew Delaney, Stuart Grant (Sybase), Paul Johnstone (independent), Colin Rickard (DataFlux)

The need for granularity and transparency are only just being recognised by regulators. The amount of data is an overwhelming problem for regulators, not just financial institutions.

Discussed how OTCs should be treated more like exchange-traded instruments – need to look at them as structured data.

24 June 2011

PRMIA on Data and Analytics

Final presentation at the PRMIA event yesterday was by Clifford Rossi and was entitled "The Brave New World of Data & Analytics Following the Crisis: A Risk Manager's Perspective".

Clifford got his presentation going with a humorous and self-deprecating start by suggesting that his past employment history could in fact be the missing "leading indicator" for predicting organisations in crisis, having worked at CitiGroup, WaMu, Countrywide, Freddie Mac and Fannie Mae. One of the other professors present said he hoped Clifford wouldn't do the same to academia (University of Maryland beware maybe!).

Clifford said that the crisis had laid bare the inadequacy and underinvestment in data and risk technology in the financial services sector. He suggested that the OFR had the potential to be a game changer in correcting this issue and in helping the role of CRO to gain in stature.

He gave the example of a project at one of the GSEs he had worked at, called "Project Enterprise", which was to replace 40-year-old mainframe-based systems (systems that, for instance, only had 3 digits to identify a transaction). He noted that this project had recently been killed, having cost around $500M. With history like this, it is not surprising that enterprise risk data warehousing capabilities were viewed as black holes without much payoff prior to the crisis. In fact it was only due to Basel that data management projects in risk received any attention from senior management, in his view.

During the recent stress test process (SCAP) the regulators found just how woeful these systems were as the banks struggled to produce the scenario results in a timely manner. Clifford said that many banks struggled to produce a consistent view of risk even for one asset type, and that in many cases corporate acquisitions had exacerbated this lack of consistency in obtaining accurate, timely exposure data. He said that the mortgage processing fiasco showed the inadequacy of these types of systems (echoing something I heard at another event about mortgage tagging information being completely "free-format", without even designated fields for "City" and "State" for instance).

Data integrity was another key issue that Clifford discussed, here talking about the lack of historical performance data leading to myopia in dealing with new products, and poor definitions of product leading to risk assessments based on the originator rather than on the characteristics of the product. (Side note: I remember prior to the crisis the credit derivatives department at one UK bank requisitioning all new server hardware to price new CDO squared deals given it was supposedly so profitable; it was at that point that maybe I should have known something was brewing...) Clifford also outlined some further data challenges, such as the changing statistical relationship between Debt to Income ratio and mortgage defaults once incomes were self-declared on mortgages.

Moving on to consider analytics and models, Clifford outlined a lot of the concerns covered by the Modeller's Manifesto, such as the lack of qualitative judgement and over-reliance on the quantitative, efficiency and automation superseding risk management, limited capability to stress test on a regular basis, regime change, poor model validation, and cognitive biases reinforced by backward-looking statistical analysis. He made the additional point that, in relation to the OFR, they should concentrate on getting good data in place before spending resource on building models.

In terms of focus going forward, Clifford said that liquidity, counterparty and credit risk management were not well understood. Possibly echoing Ricardo Rebonato's ideas, he suggested that leading indicators need to be integrated into risk modelling to provide the early warning systems we need. He advocated that there was more to do on integrating risk views across lines of business, counterparties and between the banking and trading book.

Whilst being a proponent of the OFR's potential to mandate better analytics and data management, he warned (sensibly in my view) that we should not think that the solution to future crises is simply to set up a massive data collection and modelling entity (see earlier post on the proposed ECB data utility).

Clifford thinks that Dodd-Frank has the potential to do for the CRO role what Sarbanes-Oxley did in elevating the CFO role. He wants risk managers to take the opportunity presented in this post-crisis period to lead the way in promoting good judgement based on sound management of data and analytics. He warned that senior management buy-in to risk management was essential and could be forced through by regulatory edict.

This last and closing point is where I think the role of risk management (as opposed to risk reporting) faces its biggest challenge: how can a risk manager be supported in preventing a senior business manager from pursuing an overly risky new business opportunity based on what "might" happen in the future? We human beings don't think about uncertainty very clearly, and the lack of a resulting negative outcome will be seen by many to invalidate the concerns put forward before a decision was made. Risk management will become known as the "business prevention" department and not regarded as the key role it should be.

28 October 2010

A French Slant on Valuation

Last Thursday, I went along to an event organized by the Club Finance Innovation on the topic of “Independent valuations for the buy-side: expectations, challenges and solutions”.

The event was held at the Palais Brongniart in Paris, which, for those who don’t know (like me till Thursday), was built in the years 1807-1826 by the architect Brongniart by order of Napoleon Bonaparte, who wanted the building to permanently host the Paris stock exchange.

Speakers at the roundtable were:

The event focussed on the role of the buy-side in financial markets, looking in particular at the concept of independent valuations and how this has taken on an important role after the financial downturn. However, all the speakers agreed that there remains a large gap between the sell-side and buy-side in terms of competences and expertise in the field of independent valuations. The buy-side lacks the systems for a better understanding of financial products and should align itself to the best practices of the sell-side and bigger hedge funds.

The roundtable was started by Francis Cornut of DeriveXperts, who gave the audience a definition of independent valuation. Whilst valuation could be defined as the “set of data and models used to explain the result of a valuation”, Cornut highlighted how the difficulty is in saying what independent means; there is in fact a general confusion on what this concept represents: internal confusion, for example between the front office and risk control department of an institution, but also external confusion, when valuations are done by third-parties.

Cornut provided three criteria that an independent valuation should respect:

  • Autonomy, which should be both technical and financial;
  • Credibility and transparency;
  • Ethics, i.e. being able to resist market/commercial pressure and deliver a valuation which is free from external influences/opinions.

Independent valuations are the way forward for a better understanding of complex, structured financial products. Cornut advocated the need for financial parties (clients, regulators, users and providers) to invest more and understand the importance of independent valuations, which will ultimately improve risk management.

Jean-Marc Eber, President of LexiFi, agreed that the ultimate objective of independent valuations is to allow financial institutions to better understand the market. To accomplish this, Eber pointed to the fact that when we speak about services to clients, we should first think of what their real needs are. The bigger umbrella of “buy-side” in fact implies different needs, and there is often a contradiction in what regulators want: on one side, having independent valuations provided by independent third parties; on the other side, independent valuations really meaning that internal users/staff understand what underlies the products that a company holds. In the same way, we don’t just need to value products but also measure their risk and periodically re-value them. It is important, in fact, to have the whole picture of the product being evaluated in order to make the buy-side more competitive.

Another point on which the speakers agreed is traceability: as Eber said, financial products don’t exist just as they are, but they undergo transformation and change several times. Therefore, the market needs to follow a product across its life cycle to its maturity, and this poses a technology challenge in providing scenario analysis for compliance and keeping track of the audit trail.

When asked ‘what has the crisis changed?’, the panellists answered:

Eber: the crisis showed the need to be more competent and technical to avoid risk. He highlighted the need to understand the product and its underlying. Many speak of having a central repository for OTCs, obligations, etc but this needs more thinking from the regulators and the financial markets. Moreover, the markets should focus more on quality data and transparency.

Eric Benhamou, CEO of Pricing Partners, sees an evolution of the market, as the crisis showed underestimated risks which are now being taken into consideration.

Claude Martini, CEO of Zeliade, advocated the need for financial markets to implement best practices for product valuations: the buy-side should apply the same practices already adopted by the sell-side and verify the hypotheses, price and risk related to a financial product.

Cornut admitted things have changed since 2005, when they launched DeriveXperts and nobody seemed to be interested in independent valuations. People would ask what value they would get from an investment in independent valuations: yes, regulators are happy, but what’s the benefit for me?

This is changing now that financial institutions know that a deeper understanding of financial products increases their ability to push the products to their clients. The speech I enjoyed the most was from Patrick Hénaff, associate professor at the University of Bretagne and formerly Global Head of Quantitative Analysis - Commodities at Merrill Lynch / Bank of America.

He took a more academic approach and challenged the assumption that having two prices to compare reduces the uncertainty around a product, highlighting that this is not always the case. I found interesting his idea of giving a product price with a confidence interval, or a ‘toxic index’, which would represent the uncertainty about the product and capture the model risk which may originate from it.

We speak too often about the risk associated with complex products, but Hénaff explained how the risk exists even for simpler products, for example the calculation of VaR on a given stock position. A stock is extremely volatile and we can’t know its trend; providing a confidence interval is therefore crucial. What is new is the interest that many are showing in assigning a price to a given risk, whereas before model risk was considered a mere operational risk coming out of the calculation process. Today, a good valuation of the risk associated with a product can result in less regulatory capital used to cover the risk, and as such it is gaining much more interest from the market.

Hénaff described two approaches currently taken by academic research on valuations:

1) Adoption of statistical simulation in order to identify the risk deriving from an incorrect calibration of the model. This consists of taking historical data and testing the model, through simulations and scenarios, in order to measure the risk associated with choosing one model rather than another (a rough sketch of this idea follows below);

2) Use of higher quality data. A lack of quality data implies that the models chosen are inaccurate, as it is difficult to identify exactly which model we should be using to price a product.
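As a minimal sketch of the first approach (my own illustration, not Hénaff's research): resample a short return history, re-estimate the volatility input on each resample, and reprice the same option under every calibration. The resulting price spread gives a crude confidence interval, and its width relative to the mid price could serve as a rough ‘toxic index’ of the kind discussed above:

```python
import numpy as np
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(s, k, t, r, vol):
    d1 = (log(s / k) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    return s * norm.cdf(d1) - k * exp(-r * t) * norm.cdf(d1 - vol * sqrt(t))

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.20 / sqrt(252), 120)     # dummy daily return history

prices = []
for _ in range(2000):
    # re-estimate the volatility input on a bootstrap resample of the history
    sample = rng.choice(returns, size=returns.size, replace=True)
    vol_hat = sample.std(ddof=1) * sqrt(252)
    prices.append(bs_call(100, 105, 0.25, 0.01, vol_hat))

low, mid, high = np.percentile(prices, [5, 50, 95])
print(f"price {mid:.2f}, 90% interval ({low:.2f}, {high:.2f}), "
      f"toxic index {(high - low) / mid:.2f}")
```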

 

Model risk, which as said above was previously considered an operational risk, now becomes extremely important as it can free up capital. Hénaff suggested that the key is to find for model risk the equivalent of what VaR is for market risk: a normalized measure. He also spoke about the concept of a “Model validation protocol”, giving the example of what happens in the pharmaceutical and biological sectors: before launching a new pill onto the market, it is tested several times.

Whilst in finance products are just given with their final valuation, the pharmaceutical sector provides a “protocol” which describes the calculations, analysis and processes used in order to get to the final value, and their systems are organized to provide a report showing all the deeper detail. To reduce risk, valuation should be a pre-trade process and not a post-trade one.

This week, the A-Team group published a valuations benchmarking study which shows how buy-side institutions are turning more and more often to third-party valuations, driven mainly by risk management, regulations and client needs. Many of the institutions interviewed also admitted that they will increase their spending on technology to automate and improve the pricing process, as well as data source integration and workflow.

This is in line with what was said at the event I attended, and was confirmed by the technology representatives speaking at the roundtable.

I would like to end with what Hénaff said: there can’t be a truly independent valuation without transparency of the protocols used to get to that value.

Well, Rome wasn’t built in a day (and as it is my city we’re speaking about, I can say there is still much to build, but let’s not get into this!) but there is a great debate going on, meaning that financial institutions are aware of the necessity to take a step forward. Much is being said about the need for more transparency and a better understanding of complex, structured financial products and still there is a lot to debate.  Easier said than done I guess but, as Napoleon would say, victory belongs to the most persevering!

20 October 2010

Analytics Management by Sybase and Platform

I went along to a good event at Sybase New York this morning, put on by Sybase and Platform Computing (the grid/cluster/HPC people, see an old article for some background). As much as some of Sybase's ideas in this space are competitive to Xenomorph's, some are very complementary and I like their overall technical and marketing direction in focussing on the issue of managing data and analytics within financial markets (given that direction I would, wouldn't I?...). Specifically, I think their marketing pitch based on moving away from batch to intraday risk management is a good one, but one that many financial institutions are unfortunately (?) a long way away from.

The event started with a decent breakfast, a wonderful sunny window view of Manhattan and then proceeded with the expected corporate marketing pitch for Sybase and Platform - this was ok but to be critical (even of some of my own speeches) there is only so much you can say about the financial crisis. The presenters described two reference architectures that combined Platform's grid computing technology with Sybase RAP and the Aleri CEP Engine, and from these two architectures they outlined four usage cases.

The first use case was for strategy back testing. The architecture for this looked fine but some questions were raised by the audience about the need for distributed data caching within the proposed architecture to ensure that data did not become the bottleneck. One of the presenters said that distributed caching was one option, although data caching (involving "binning" of data) can limit the computational flexibility of a grid solution. The audience member also added that when market data changes, this can cause temporary but significant issues of cache consistency across a grid as the change cascades from one node to another.

Apparently a cache could be implemented in the Aleri CEP engine on each grid node, or the Platform guy said that it was also possible to hook a client's own C/C++ solution into Platform to achieve this, and that their "Data Affinity" offering was designed to assist with this type of issue. In summary their presentation would have looked better with the distributed caching illustrated, in my view, and it raised the question as to why they did not have an offering or partner in this technical space. To be fair, when asked whether the architecture had any performance issues in this way, they said that for the usage case they had, no it didn't - so on that simple and fundamental aspect they were covered.
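For readers less familiar with the cache consistency point above, here is a toy sketch of my own (plain Python, nothing to do with the Platform or Sybase products) of a per-node cache keyed on a data version: bumping the version when market data changes stops stale entries being served, at the cost of every node re-fetching after the bump, which is exactly the cascade the audience member was describing:

```python
import functools

class VersionedNodeCache:
    """Per-node market data cache keyed on (request, data version)."""
    def __init__(self, fetch):
        self.fetch = fetch                       # expensive call to the data store
        self.version = 0
        self._cached = functools.lru_cache(maxsize=None)(self._load)

    def _load(self, key, version):
        return self.fetch(key)                   # version only partitions the cache

    def get(self, key):
        return self._cached(key, self.version)

    def market_data_changed(self):
        self.version += 1                        # invalidates all earlier entries

calls = []
cache = VersionedNodeCache(fetch=lambda k: calls.append(k) or f"history for {k}")
cache.get("AAPL"); cache.get("AAPL")             # second call hits the node cache
cache.market_data_changed()
cache.get("AAPL")                                # re-fetched after the data change
print(calls)                                     # ['AAPL', 'AAPL']
```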

They had three usage cases for the second architecture: one was intraday market risk, one was counterparty risk exposure and one was intraday option pricing. On the option pricing case, there was some debate about whether the architecture could "share" real-time objects such as zero curves, volatility surfaces etc. Apparently this is possible, but again it would have benefitted from being illustrated first as an explicit part of the architecture.

There was one question about the usage of the architecture applied to transactional problems, and as usual for an event full of database specialists there was some confusion as to whether we were talking about database "transactions" or financial transactions. I think it was the latter, but this wasn't answered too clearly but neither was the question asked clearly I guess - maybe they could have explained the counterparty exposure usage case a bit more to see if this met some of the audience member's needs.

The latter question on transactions got a conversation going about resiliency within the architecture, given that the Sybase ASE database engine is held in-memory for real-time updates whilst the historic data resides on shared disk in Sybase IQ, their column-based database offering. Again, full resilience is possible across the whole architecture (Sybase ASE, IQ, Aleri and the Symphony Grid from Platform) but this was not illustrated this time round.

Overall good event with some decent questions and interaction.
