94 posts categorized "Derivatives"

28 April 2016

PRMIA and GFT event on FRTB

Good event last night from PRMIA London and GFT on FRTB. Seems like lots to be done with the biggest changes to market risk regulation since the advent of VaR. There are some tweets from the evening covering the Internal Model Approach (IMA), Standardised Approach (SA) and PnL attribution if you click here or paste https://twitter.com/search?q=%40TheLongSentance%20%23frtb&src=typd into your browser.

02 April 2015

GARP - Value, Financial Innovation and Regulation

I went along to a GARP event on Monday night, held at the Harmonie Club in NYC. The event was introduced by Stefan Magnusson, chapter co-director of GARP and MD Market Risk Americas at Rabobank. Jeremy Josse was the main speaker, doing a talk entitled "Value, Financial Innovation and Regulation" loosely based around his book "Dinosaur Derivatives and Other Trades".

Jeremy started with a quick introduction to himself, describing how he has worked for many financial institutions during his career, but that his educational background included philosophy and economics too. He suggested that much of risk management is about the math and the technical aspects, whereas he was going to focus more on meaning than on the underlying detail. Like many an Englishman (!) he almost seemed apologetic for suggesting this, asking us to bear with him as he pulled various strands of thought together.

Starting with Dinosaur Derivatives, Jeremy wondered whether there could be value today in a derivatives contract or option to buy a Megalodon. Given that a Megalodon is an extinct species of giant shark, you would think not, since physical delivery might be a problem. That said, Jeremy thinks there could be a market in such an option due to:

  • Brokers - generating demand
  • Control of supply
  • Liquidity - liquid assets can trade at around a 30% premium to illiquid ones
  • Arbitrage - based on some expectation of selling at a higher price
  • Habit

Jeremy then listed off some other assets and considered their value:

  • Gold - no utility in this asset class, reputed as a "safe-haven" asset
  • Diamonds - again no fundamental utility
  • $ - a fiat currency built on "trust" with no underlying asset, with fiat currencies being used first in China around 1000 years ago
  • Contracts for Difference - again no intrinsic value to this asset class
  • Internet/Social Media stocks - Jeremy thinks we are in internet bubble 2.0 with group think leading valuations astray

Looking at the Theory of Valuation, the comparison of market value versus intrinsic value is really analogous to technical analysis (charting/trending) versus fundamental analysis (balance sheets etc). Jeremy mentioned the Efficient Market Hypothesis (EMH) and said that anyone who has worked in the markets knows that EMH is not adhered to in the real world, i.e. assets do not always reflect all information known about them. In particular Jeremy sees the work of Shleifer and Shiller in behavioral finance as one of the most interesting areas of financial theory to work in, with the potential to quantify "irrationality". Jeremy put forward the following three choices that an individual could make, in terms of what money they would receive and what money another individual would receive:

  • $100 (me) and $0 (you) - some choose this but not all; we are not all greedy
  • $80 (me) and $80 (you) - most choose this but not all; we are not profit maximizers
  • $0 (me) and $150 (you) - a few choose this but not all; we are not all generous

Moving on to the Credit Crisis, Jeremy said that this was caused by the mispricing of assets such as CDOs/CLOs and CDS. This mispricing was driven by complexity and a lack of transparency, but such characteristics are fundamental to the nature of financial innovation. As an aside, Jeremy mentioned that smaller regional banks tend to trade at higher multiples than, say, universal banks due to investor perceptions of greater transparency about what is going on and what the risks are.

So moving on to Financial Innovation, Jeremy first asked what a financial instrument actually is:

  • Rights - to future cashflows etc
  • Contractual strings/permutations - choice but leading to complexity
  • Epicycles upon epicycles - derivatives but more general dependencies and links
  • Some form of legal fiction - to arbitrage regulation and prohibitions

Jeremy talked about prohibition (aka modern regulation) being a driver of innovation, starting in history with the prohibition of usury, which led financial innovation to find ways of replicating the returns of interest payments without there actually being any interest payments - through leasing, for example, or buying goods receivable at discounts. Looking at the timeline of financial products through history:

  • Loans - available in Babylonian times
  • Stocks - available in Roman times
  • Convertibles - developed as a form of finance for the creation of the US railroads in the 19th C
  • Derivatives - back to Babylon again with property options
  • Securitizations - late '90s
  • CDS - late '90s

Considering the Logic of Financial Innovation, Jeremy said that most professional disciplines use either empirical testing or deductive inference to innovate and to check that something "works". But there is really no "social laboratory" for financial innovation, or indeed for the regulation intended to control/shape it, so most things, including macroeconomic policy, are implemented without prior testing. Jeremy said that financial innovation is both critical to our economies and also very vulnerable due to this lack of testing.

Back to the Credit Crisis, Jeremy said that the 10 years running up to the crisis were the social laboratory for CDO/CLO/CDS products, but these were mispriced due to a lack of testing. This was a major cause of the crisis, but such innovation (and lack of testing) is fundamental to the nature of financial innovation itself. Coming forward to today, securitization is now better understood, biases by rating agencies have been controlled and counterparty risk is being reduced through clearing. So put another way, financial innovation has a stormy creative period where a new product morphs and evolves and pushes the limits of what people, corporations and governments find attractive or acceptable, until these limits are pushed too far and a crisis ensues - then finally maturity comes with experience and a better understanding of the risks.

Jeremy is a collector of Antique Maps, which he says have become an interesting asset class and listed their history:

  • 1970s - emerged as a new asset class
  • Asset subject to wild price movements/patterns of behavior
  • Now an established art form

Initially dealers would visit libraries containing maps (notably Harvard in the US) and simply rip out pages from them. The market had misrepresentations of authenticity (lying), theft, short-selling, insider trading and many other dubious practices. This initial period of innovation was very destructive without regulation, but now antique maps are an established art form and asset class. So there is a real dichotomy between that early destruction and the eventual outcome of people seeing antique maps as things of beauty, collecting them and hanging them on their walls.

So why are Regulations needed? Jeremy said that regulation was needed to control:

  • Dysfunctional patterns of behavior
  • Extreme value fluctuations - primarily due to greed
  • Wealth distribution - implementing social justice
  • Bubbles
  • Financial innovation

But what is the Right Kind of Regulation? Jeremy said that we should not hand over "the rule of law to the rule of lawyers". He said there had been 100 years of regulation, with a lot of focus (particularly in the US) on rules that micro-manage what institutions can and cannot do, built up by closing the door after each fraud/incident. Here he talked of Dodd-Frank with all of its detail, but particularly gave time to say he thought that the "Living Wills" regulation was a work of fiction and of little practical use - he quoted Hemingway's line about going bankrupt gradually, then suddenly.

Jeremy believes that Principles-Based Regulation is a better solution - although I would say that this proved no better when comparing UK and US regulation through the crisis? He advocates taking politicians out of the rule making, with judges making decisions based upon case law as it builds up over history. The issue of enforcement seems to loom large here: even if principles could work, they will not work without enforcement. Jeremy pulled up a diagram showing arrows between Value, Financial Innovation and Regulation, showing how intertwined they are. He suggested that "vexatious" litigation (the contract must cover every eventuality) was a problem in US regulation in particular. More fundamentally, creating regulation prior to knowing its effects is extremely difficult, since societies and economies are not bounded games, unlike chess say, where a computer can evaluate all possibilities.

There were some audience questions, firstly on the viability of bitcoin which Jeremy was negative on, saying that without trust, value can disappear and that "our" electronic money only works because it is backed by governments. Another question talked about the SIFIs and Jeremy said he favored breaking them up over more regulation to control them.

In summary, Jeremy was a great speaker with some good ideas. As he said, most were common sense, but I guess his main point was that financial innovation will not happen if it is regulated too quickly or too harshly. So new financial innovation can be destructive and painful, but regulation itself can stifle innovation and the creation of value and new markets.

 

25 March 2015

A few recent news articles out from yours truly.

First off, one about the Chief Data Officer in Money Management Executive magazine.

Second, one about Data trends of EDM in the Wall Street Letter.

Thirdly something on data quality and the CFTC getting more aggressive on Markets Media.

And as if you didn't know, today is the last day to vote for Xenomorph in the FTFNews Technology Innovation Awards, so please (pretty please!) take a minute to vote for Xenomorph. You know it makes sense (and big thank you! if you do have time...)

Vote by clicking here.

 

 

24 November 2014

PRMIA Risk Year in Review 2014

PRMIA put on their Risk Year in Review event at the New York Life Insurance Company on Thursday. Some of the main points from the panel, starting with trade:

  • The world continues to polarize between "open" and "closed" societies with associated attitudes towards trade and international exposure.
  • US growth at around 3% is better than the rest of the world, but this progress is not yet being seen by or benefitting a lot of the population.
  • This against an economic background of Japan, Europe and China all struggling to maintain "healthy" growth (if at all).
  • Looking back at the financial crisis of 2008/9 it was the WTO rules that were in place that kept markets open and prevented isolationist and closed policies from really taking hold - although such populist inward-looking policies are still a major issue and risk for the global economy today.
  • Some optimistic examples of progress on world trade recently, however:
  • US Government is divided and needs to get back to pragmatic decision making
  • The Federal Reserve currently believes that external factors/the rest of the world are not major risks to growth in the US economy.

James Church of sponsor FINCAD then gave a brief presentation on their recent experience and a survey of their clients in the area of valuation and risk management in financial markets:

  • Risk management is now considered a source of competitive advantage by many institutions
  • 63% of survey respondents are currently involved in replacing risk systems
  • James gave the example of Alex Lurye saying risk is a differentiator
  • Aggregate view of risk is still difficult due to siloed systems (hello BCBS239)
  • Risk aggregation also needs consistency of modelling assumptions, data and analytics all together if you are to avoid adding apples and pears
  • Institutions now need more flexibility in building curves post-crisis with OIS/Libor discounting (see FINCAD white paper)
    • 70% of survey respondents are involved in changes to curve basis
  • Many new calculations to be considered in collateralization given the move to central clearing
  • 62% of survey respondents are investing in better risk management processes, so not just technology but people and process as well

James was followed by a discussion on market/risk events this year:

  • Predictions are hard, but 50 years ago Isaac Asimov made 10 predictions for 2014, 8 of which have come true
  • Bonds and the Dollar are still up but yields are low - this is as a result of relatively poor performance of other currencies and the inward strength of US economy. US is firmly post-crisis economically and markets are anticipating both oil independence and future interest rate movements.
  • Employment level movements are no longer a predictor of interest rate moves, now more balance of payments
  • October 15th 40bp movement in yields in 3 hours (7 standard deviation move) - this was more positioning/liquidity risk in the absence of news - and an illustration of how regulation has moved power from banks to hedge funds
  • Risk On/Off - trading correlation is very difficult - an oil price move normally tracks demand, but with a 30% dive in price over the past 6 months the correlation has changed
  • On the movie Interstellar, on one planet an astronaut sees a huge mountain but another sees that it is a wave larger than anything seen before - it all depends on forming your own view of the same information as to what you perceive or understand as risk

Some points of macro economics:

  • Modest slow down this quarter
  • Unemployment to drop to 5.2% in 2015 from 5.8%
  • CS see the Fed hiking rates in mid-2015 followed by 3 further hikes
    • The market does not yet agree, seeing a move in Q3 2015
  • Downside risks are inflation, slow US growth and anaemic wage growth
  • Upside risks - oil price boost to spending reducing cost of gas from 3.2% down to 2.4% of disposable income

Time for some audience questions/discussions:

  • One audience member asked the panel for thoughts on the high price of US Treasuries
  • Quantitative Easing (QE) was (understandably) targeted as having distorting effects
  • Treasury yields have been a proxy for the risk free rate in the past, but the volatility in this rate due to QE has a profound effect on equity valuations
  • Replacing maturing bonds with lower yielding instruments is painful
  • The Fed is concerned not to appear to lose control of interest rates, nor does it want to kill the fixed income markets, so rate rises will be slow.
  • One of the panelists said that all this had a human dimension, not just markets, citing effectively non-existent interest rate levels but with negative equity still in Florida, no incentive to save so money heads into stocks (which is risky), and low rates being of little benefit to senior citizens etc.
  • Taper talk last year saw a massive sell-off of emerging market currencies - one problem in assessing this is defining which economies are emerging markets - but the key point is that current account deficits/surpluses matter, which the US escapes as the issuer of the world's reserve currency but emerging markets do not.
  • The emerging market boom of the past was really a commodities boom, and the US still leads the world's economies; current challenges may expose the limits of authoritarian capitalism

The discussion moved onto central clearing/collateral:

  • Interest rate assets for collateral purposes are currently expensive
  • Regulation may exacerbate volatility with unintended consequences
  • $4.5T of collateral set aside currently set to rise to $12-13T
  • Risk is that other sovereign nations will target the production of nominally AAA securities for collateral use that are not truly AAA
  • Banks will not be the place for risk, the shadow banking system will
  • Futures markets may be under collateralized and a source of future risk

One audience member was interested in downside risks for the US and couldn't understand why anyone was pessimistic given the stock market performance and other measures. The panel put forward the following as possible reasons behind a potential slow down:

  • Income inequality meaning benefits are not throughout the economy
  • Corporations making more and more money but not proportionate increase in jobs
  • Wages are flat and senior citizens are struggling
  • (The financial district is not representative of the rest of the economy in the US however surprising that may be to folks in Manhattan)
  • The rest of the US does not have jobs that make them think the future is going to get better

Other points:

  • Banks have badly underperformed the S&P
  • Regulation is a burden on the US economy that is holding US growth back
  • Republicans and Democrats need to co-operate much more
  • House prices need more oversight
  • Currently $1.2T in student loans and students are not expecting to earn more than their parents
  • Top 10 oil producers are all pumping full out
    • The Saudis are refusing to cut production
    • Venezuela funding policies from oil
    • Russia desperately generating dollars from oil
    • Will the US oil bonanza break OPEC - will they be able to co-ordinate effectively given their conflicting interests?

Summary - overall a good event with a fair amount of economics to sum up the risks for 2014 and on into 2015. Food and wine tolerably good afterwards too!

 

05 November 2014

Data Management Summit NYC from the A-Team

The A-Team put on another good event at DMS New York yesterday. Lots of good stuff was discussed and here are a few takeaways that I remember, after a photo of Ludwig D'Angelo of JPMorgan:

WP_20141104_12_33_59_Raw

  • Data Utilities - One of the presenters said that "Data Utility" was a really overused term, second only to "Big Data". My comment would be that a lot of the managed services folks seem to want to talk about "Data Utilities", seeming to prefer that term to describing what they actually are - maybe because they perceive it as better marketing and/or maybe because they hope to be anointed/appointed (how, I don't know) as an industry "Data Utility". Anyway for me they fail to address the issue of client-specific data and its management very well, much to the detriment of their argument imho - although SmartStream did say that client data can be mixed in with the data services they offer.
  • Andrew Gets Literaturally Physical - Andrew Delaney of the A-Team expressed a preference for "physical" books when talking about why the A-Team also prints the Regulatory Data Handbook 2 as well as making it available online. I have to agree that holding a book still beats my Kindle experience, but maybe I am just getting old. Andrew should check out this YouTube video on how the book was first introduced...
  • FIBO - The Financial Instrument Business Ontology (FIBO) was discussed in the context of trying to establish industry standards for data. As ever, the usage of words like "Ontology" I suspect leaves a lot of business folks looking for the nearest double shot of espresso, but that aside, it seems the EDM Council are making some progress on developing this standard. The main point from the event was that industry adoption is key. I found some of the comments during the day a bit schizophrenic, in that some said that the regulators should not mandate standards (i.e. leave it to industry adoption and principles) but then in the next breath discussed the benefits (or otherwise) of the LEI (OK, not mandated but specific and coming from the regulators). Certainly the industry needs "help" (is that a strong enough word?) to get standards in place.
  • Data Quality - Lots on data quality with assessing the business value of data quality initiatives being a key point. On the same subject, Predrag of element-22 announced that the EDM Council will soon be announcing adoption of the Data Quality Index, which could be used to correlate data quality with operational KPIs for the business. 
  • Regulation (doh!) - It wouldn't be a data management event without lots of discussion on regulation - a key point being that even those regulations that are not directly/explicitly about data still imply that data management is key (take CVA calcs for example) - and on a related note it was suggested that BCBS239 should be considered as a more general data management template for any business objective.
  • Entity Hierarchies/LEI - Ludwig D'Angelo of JPMorgan gave a great talk and said that vendors were missing a massive opportunity in delivering good hierarchy datasets to clients, and that the effort expended on this at firms was enormous. Ludwig said that the lack of hierarchies in the Legal Entity Identifier (LEI) is a gap that the private sector could and should fill. Ludwig also seemed initially to be thrown when one of the audience suggested that there were multiple "golden copies" of hierarchies needed, since definitions of ownership can differ depending on which department you are in (the old battle of risk and finance departments again). Good discussion later of how regulation was driving all systems to be much more entity-centric rather than portfolio-centric, emphasising the importance of getting entity hierarchies right.
  • DCAM - John Bottega did a great presentation on the Data Management Capability Model (DCAM). John asked Predrag of element-22 to speak about DCAM, and he said that unlike previous models (DMM) this framework would not only assess where you are in data management but will also show you where you need to go. DCAM covers data management strategy / operations / quality / business case / data architecture / tech architecture / governance / program. From what I could see it looked like a great framework - it appeared like common sense and obvious, but that is in itself difficult to achieve, so good effort I think. Element-22 will offer an online service around DCAM that will also allow anonymous benchmarking of data management capabilities as more institutions get involved (update: the service is called pellustro).
  • BCBS239 - Big thanks to John M. Fleming of BNY Mellon and Srikant Ganesan of Risk Focus for taking part in the panel with me. Less focus on spreadsheet use and abuse on this panel, unlike the London panel from last month. John had some very practical ideas, such as the use of wikis to publish/gather data dictionary information, and that with a large legacy infrastructure you are better off documenting differences in definitions across systems rather than trying to change the world from day one. Echoing some of the points from DMS London, it was thought that making the use of internal data standards part of project sign-off was very pragmatic data governance, but also that some systems should be marked/assessed as obsolete/declining and hence blocked from any additional usage in new project work. Bit of a plug for some of our recent work on data validation and exception management, but the panel said that BCBS239 needs to encompass audit/lineage on calculations/derived data/rules in addition to just the raw data.

You can get more on the day by taking a look at my feed @TheLongSentance and the wider conversation at #DMSNYC.

 

16 October 2014

TabbForum MarketTech 2014: Game of Smarts

A great afternoon event put on by TabbFORUM in New York yesterday with a number of panels and one-on-one interviews (see agenda). You can see some of what went on at the event via the hashtag #TabbTech or via the @XenomorphNews feed.

WP_20141015_16_36_01_Raw

"Death of Legacy" Panel Discussion

13 October 2014

A-Team DMS London Event and BCBS239 Panel

Good day at the A-Team's DMS London event last Wednesday. The day started with Tom Dalglish doing a pretty passable impression of a stand-up comedian in the morning keynote to open the day - not exactly an easy thing to do if 1) you are asked to do it very much at the last minute and 2) this is data management, not the subject that most comedians would immediately reach out for. So due kudos to Tom, and some of the comments he made about technology architects and technology builders were funny and resonated with the audience, such as this quote coming from a technologist: "How can I give you the requirements, I haven't finished the code yet?" (I think we have all been there on that one a few times in our careers...).

You can find some of the main points from the various panels via @XenomorphNews or more generally at #dmslondon (you could also find out a bit via my twitter account @TheLongSentance so long as you don't mind the odd photograph and a few bits of personal baggage now and again).

BCBS239 Panel - I took part in the panel on BCBS239 on risk data aggregation and reporting, something which I have written about before, and obviously a prime example of how regulation is influencing (dictating?) financial markets institutions to take data management seriously. Dennis Slattery of EDMWorks moderated the panel, and on the panel with me was Sally Hinds of DCMS, and Mikael Soboen, head of risk systems at BNP Paribas. 

IMG_3401

BCBS239 Panel at DMS London

Dennis started by outlining the four pillars of BCBS239:

  • Pillar 1 “Overarching Governance and Infrastructure.”
  • Pillar 2 “Risk aggregation” capabilities.
  • Pillar 3 “Risk reporting” capabilities.
  • Pillar 4 “Supervisory review, tools and cooperation.”

Regulatory Chicken - Dennis started by asking the panel whether BCBS239 was another game of regulatory "chicken", where the approach of "principles" means 1) the banks do the minimum and wait for the regulators to inspect and tell them what they specifically have to do, and 2) the regulators don't really want to be more specific beyond principles because they themselves are unsure of what is needed and want to learn from what different banks have done. The general consensus from the panel debate was that firms were not doing as much as they could, but that banks needed to show at least that they had a program in place and running by the January 2016 deadline or face big issues with the regulators (so the game of regulatory chicken is "on" seems to be the conclusion). Mikael Soboen added that he was unsure whether his regulator would have the time to conduct the BCBS239 review given the workload that the regulators currently face.

The End of Spreadsheets? - Dennis asked whether BCBS239 and the requirements for having a clear data lineage sounded the bell for the end of spreadsheet usage at banks. I said not - I personally feel that a lot of folks in technology underestimate how difficult using software is for many business users, and tools like spreadsheets that make manipulating data easy will have a role for the foreseeable future. I suggested that spreadsheets are a great ad hoc reporting and analysis tool, and that things mainly go wrong when they are used as a personal, "siloed" desktop database.

BCBS239 does not itself preclude the usage of spreadsheets and end user computing, but rather like a lot of regulation says that their usage must be taken seriously - in my view there is a tendency for some in IT to regard spreadsheets as someone else's problem, which is understandable but problematic for any CDO. Also there are approaches to spreadsheet usage that can help maintain data lineage, such as what Microsoft offers with web provision of spreadsheet dashboards using PowerView and PowerBI (used in our TimeScape MarketPlace offering), folks such as Cluster7 with their "closed circuit TV" for spreadsheet monitoring, and indeed Xenomorph with our SpreadSheet Inside approach of including centralised spreadsheet-like calculations as a supported data type within the audited data management process.

Data Dictionary - Mikael said that one responsibility he had was to represent the investment bank within the wider data dictionary initiatives due to BCBS239 at the retail bank, and said that this was challenging given the different terminology sometimes used. 

Is BCBS239 a Project or Data Governance? - The panel thought that the best approach was to use BCBS239 as a framework for compliance with current regulation and regulation to come, but that this needs to obviously be subject to having the budget to do so. There were some general comments on how the data management needs of the front office and risk were converging. Standards such as FIBO were also discussed, with feedback being that they are desirable but that it is early days where their immaturity means they are often used for specific areas such as modeling counterparty data. 

Overall a good panel (I hope!) with a good amount of audience questions and participation. Again you can find some of the main points from the various panels via @XenomorphNews or more generally at #dmslondon (you could also find out a bit via my twitter account @TheLongSentance so long as you don't mind the odd photograph and a few bits of personal baggage now and again).

IMG_3352

A bit of fun - Brian looking up to Ron Wilbraham at DMS London

11 September 2014

A-Team DMS Awards 2014 - Xenomorph on the Cloud

A-Team’s DMS Data Management Awards close on the 26th of September so if you haven't already, please vote for Xenomorph!

Xenomorph on the Cloud - First of a few look-backs at what we have been doing over the past year, starting with a short animation about one of our major initiatives this year: cloud provision of data management and a new venture into cloud-based data publishing with the TimeScape MarketPlace.

So it would be fantastic if you could support Xenomorph by voting here

Thank you!

01 July 2014

Cloud, data and analytics in London - thanks for coming along!

We had over 60 folks along to our event at the Merchant Taylors' Hall last week in London. Thanks to all who attended, all who helped with the organization of the event, and sorry to miss those of you that couldn't come along this time.

Some photos from the event are below starting with Brad Sevenko of Microsoft (Director, Capital Markets Technology Strategy) in the foreground with a few of the speakers doing some last minute adjustments at the front of the room before the guests arrived:

AzureUK-1

 

Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started off the presentations at the event, introducing Microsoft's capital markets technology strategy to a packed audience:

AzureUK-3

 

After a presentation by Virginie O'Shea of Aite Group on Cloud adoption in capital markets, Antonio Zurlo (below) of Microsoft (Senior Program Manager) gave a quick introduction to the services available through the Microsoft Azure cloud and then moved on to more detail around Microsoft Power BI:

AzureUK-5

 

After Antonio, yours truly (Brian Sentance, CEO, Xenomorph) gave a presentation on what we have been building with Microsoft over the past 18 months, the TimeScape MarketPlace. At this point in the presentation I was giving some introductory background on the challenges of regulatory compliance and the pros and cons between point solutions and having a more general data framework in place:

AzureUK-6

 

The event ended with some networking and further discussions. Big thanks to those who came forward to speak with me afterwards, great to get some early feedback.

AzureUK-8

 

24 June 2014

Cloud, data and analytics in London. Tomorrow Wednesday 25th June.

One day to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylors' Hall tomorrow, Wednesday June 25th. With over ninety people registered so far it should be a great event, but if you can make it please register and come along, it would be great to see you there.

19 June 2014

Cloud, data and analytics in London. Next Wednesday June 25th.

Less than one week to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylors' Hall on Wednesday June 25th.

Come and join Xenomorph, Aite Group and Microsoft for breakfast and hear Virginie O'Shea of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be demonstrating the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. 

In the meantime, please take a look at the event and register if you can come along, it would be great to see you there.

11 June 2014

Financial Markets Data and Analytics. Everywhere London Needs Them.

Pleased to announce that our TimeScape MarketPlace event "Financial Markets Data and Analytics. Everywhere You Need Them" is coming to London, at Merchant Taylors' Hall on Wednesday June 25th.

Come and join Xenomorph, Aite Group and Microsoft for breakfast and hear Virginie O'Shea of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be demonstrating the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. 

In the meantime, please take a look at the event and register if you can come along, it would be great to see you there.

14 May 2014

Clients and Partners. Everywhere You Need Them.

Quick thank you to the clients and partners who took some time out of their working day to attend our breakfast briefing, "Financial Markets Data and Analytics. Everywhere You Need Them." at Microsoft's Times Square offices last Friday morning. Not particularly great weather here in Manhattan so it was great to see around 60 folks turn up...

Photo 1

 
Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started the event and set out the agenda for the morning. Rupesh described the expense of data within financial markets, and the difficulties experienced by risk managers in pulling together all the data and analytics they need...  Photo 2
 
...and following Rupesh was Antonio Zurlo (below) of Microsoft (Senior Program Manager) who explained the fundamentals of Microsoft Azure and what services and infrastructure it offers, including public cloud, virtual private cloud and hybrid cloud architectures. Antonio also described a key usage pattern for HPC/grid on Azure being used to "burst to the cloud" when on-premise infrastructure needs to be extended for end-of-day/intra-day risk calcs...
Photo 3
 
Sang Lee (below) of Aite Group (Managing Partner) then delivered his presentation "Floating in the Capital Markets Cloud: Moving Beyond Data Storage". Sang's main findings from the survey of 20 financial institutions were that concerns about security and SLAs relating to cloud usage remain, but even those that were concerned about this also said they were planning to start a cloud project within the next 24 months. Cloud technology seems to be becoming more acceptable of late, and Sang said this seems to be due to regulation, cost pressures and the desire to offer better services to clients. Sang confirmed that HPC/Grid with "burst to the cloud" is a common usage pattern and that "Data as a Service" is becoming more popular...
Photo 4
 
Fred Veasley (below) of Microsoft (Tech Solutions Professional) was next up, introducing Microsoft Power BI and Office 365. Fred explained how Power BI extended the capabilities of Excel with data search (finding and retrieving published data sources both within an organization and over the web), its integration capabilities with standard databases, NoSQL databases, data standards such as OData and new APIs/sources of data such as Facebook. Once downloaded, the data can be shaped and merged with other datasets (for instance combining data from positions databases/systems with analytics and data from the cloud), and kept up to date automatically. In addition to Power BI, Power View enables great visualizations and interactive dashboards to be created, and once finalized these can be deployed centrally via web pages down to end users...
Photo 5
 
After Fred, Brian Sentance (below), CEO of Xenomorph, explained the origins of the TimeScape MarketPlace. Based on some discussions with Microsoft about 18 months back, the idea was effectively firstly to get TimeScape running in the Microsoft Azure cloud, secondly to turn the data management capabilities of TimeScape "upside-down" by using it as a means to upload and publish data to the cloud, and thirdly to provide one-to-many access to multiple sources of data via web interfaces and key delivery tools such as Microsoft Power BI. Put another way, without any local software or hardware infrastructure, both business users and IT staff can access multiple data sources in the same format and using the same data model wherever the data is needed. In addition to .NET and Java interfaces to the TimeScape MarketPlace via OData, web API delivery into F#, Python, R and MATLAB are all in development...
Photo 1 - Copy
 
...and in addition to downloading data via Power BI, Brian also demonstrated how you could build on the data using Power View to create powerful analytical dashboard functionality that could be built and tested in Excel, then deployed centrally within a browser for access by users outside of Excel. He added that partners were one of the key aspects for the platform, and introduced the TimeScape MarketPlace Partner Program to get data, analytics and model vendors, software and service vendors involved and building on the platform. Andrew Tognela (below) of Microsoft (Worldwide Managing Director) closed the presentations...
Photo 4 - Copy

15 April 2014

Financial Markets Data and Analytics. Everywhere You Need Them.

Very pleased to announce that Xenomorph will be hosting an event, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square New York offices on May 9th.

This breakfast briefing includes Sang Lee of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be introducing the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. More background and updates on MarketPlace in coming weeks.

In the meantime, please take a look at the event and register if you can come along, it would be great to see you there.

31 March 2014

Innovations in Liquidity Risk Management - PRMIA

PRMIA put on an event at MSCI on Wednesday, called "Innovations in Liquidity Risk Management".

 

WP_20140326_18_04_09_Raw

Melissa Sexton of Morgan Stanley introduced the agenda, saying that the evening would focus on three aspects of liquidity risk management:

  • methodology
  • industry practice
  • regulation

LiquidityMetrics by MSCI - Carlo Acerbi of MSCI then took over with his presentation on "LiquidityMetrics". Carlo said that he was pleased to be involved with MSCI (and RiskMetrics, acquired by MSCI) in that it had helped to establish and define standards for risk management that were used across the industry. He said that liquidity risk management was difficult because:

  • Clarity of Definition - Carlo suggested that if he asked the audience to define liquidity risk he would receive 70 differing definitions. Put another way, he suggested that liquidity risk was "a strange animal with many faces".
  • Data Availability - Carlo said that there were aspects of the market that were unobservable and hence data was scarce/non-existent, and as such this was a limit on the validity of the models that could be applied to liquidity risk.

Carlo went on to clarify that liquidity risk was different depending upon the organization type/context being considered, with banks obviously focusing on funding. He said that LiquidityMetrics was focused on asset liquidity risk, and as such was more applicable to the needs of asset managers and hedge funds given recent regulation such as UCITS/AIFMD/FormPF. The methodology is aimed at bringing traditional equity market impact models out from the trading floor across into risk management and across other asset classes. 

Liquidity Surfaces - LiquidityMetrics measures the expected price impact for an order of a given size, and as such has dimensions in:

  • order size
  • liquidity time horizon
  • transaction costs

The representation shown by Carlo was of a "liquidity surface" with an x dimension of order size (both bid and ask around 0), a y dimension of time horizon for liquidation and a z (vertical) dimension of transaction cost. The surface shown had a U-shaped cross section around zero order size, at which the transaction cost was half the bid-ask spread (this link illustrates my attempt at verbal visualization). The U-shaped cross section indicates "Market Impact", its shape over time "Market Elasticity" and the limits of what is observable "Market Depth".
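
Purely as an illustration of the surface shape being described (and emphatically not MSCI's actual LiquidityMetrics methodology), below is a minimal F# sketch in which every parameter name and value is hypothetical: transaction cost starts at half the bid-ask spread for a zero-size order, grows with order size, shrinks as the liquidation horizon lengthens, and is undefined beyond the observable market depth.

// Toy liquidity surface - illustrative only, with hypothetical parameters.
let halfSpread  = 0.0005        // half the bid-ask spread, in price terms
let impactCoeff = 0.02          // hypothetical impact coefficient
let marketDepth = 1000000.0     // hypothetical maximum observable order size

/// Expected transaction cost for a (signed) order size liquidated over a horizon in days.
let transactionCost (orderSize: float) (horizonDays: float) : float option =
    if abs orderSize > marketDepth then
        None                    // beyond "Market Depth" - the surface is not observable here
    else
        // "Market Impact" grows with order size and decays as the horizon lengthens ("Market Elasticity")
        let impact = impactCoeff * (abs orderSize / marketDepth) / sqrt (max horizonDays 1.0)
        Some (halfSpread + impact)

// Example: liquidating 100,000 units over 1 day versus 5 days
printfn "1 day:  %A" (transactionCost 100000.0 1.0)
printfn "5 days: %A" (transactionCost 100000.0 5.0)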

Carlo then moved to consider a portfolio of instruments, and how obligations on an investment fund (a portfolio) can be translated into the estimated transaction costs of meeting those obligations, so as to quantify the hidden costs of redemption in a fund. He mentioned that LiquidityMetrics could be used to quantify the costs of regulations such as UCITS/AIFMD/FormPF. There was some audience questioning about portfolios of foreign assets, such as holding Russian bonds (perhaps currently topical for some audience members?). Carlo said that you would use the liquidity surfaces for both the bond itself and the FX transaction (and in FX, there is much data available). He was however keen to emphasize that LiquidityMetrics was not intended to be used to predict "regime change", i.e. it is concerned with transaction costs under normal market conditions.

Model Calibration - In terms of model calibration, Carlo said that the established equity market impact models (see this link for some background for instance) have observable market data to work with. In equity markets, traditionally there was a "lit" central trading venue (i.e. an exchange) with a star network of participants fanning out from it. In OTC markets such as bonds, there is no star network but rather many-to-many linkages established between all market participants, where each participant may have a network of connections of a different size. As such there has not been enough data around to calibrate traditional market impact models for OTC markets. As a result, Carlo said that MSCI had implemented some simple models with a relatively small number of parameters.

Two characteristics of standard market impact models are:

  1. Permanent Effects - this is where the fair price is impacted by a large order and the order book is dragged along to follow this.
  2. Temporary Effects - this is where the order book is emptied but then liquidity regenerates.

Carlo said that these effects were obviously related to the behavioural aspects of market participants. He said that the bright side for bonds (and OTC markets) was that, given the trades are private, there is no public information, and price movements are often constrained by theoretical pricing; therefore permanent effects can be ignored and the fair price is insensitive to trading (again under "normal" market conditions). Carlo then moved on to talk about some of the research his team was doing looking at the shape of the order book and the time needed to regenerate it. He talked of "Perfectly Elastic" markets that digest orders immediately and "Perfectly Plastic" markets that never regenerate, and how "Relaxation Time" measures in days how long the market takes to regenerate the order book.
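
Again purely as a toy illustration (not the MSCI research model, and with hypothetical numbers), the temporary effect and relaxation time described above can be sketched as available depth recovering towards its normal level after a trade has consumed part of the order book:

// Toy order book regeneration - illustrative only. A relaxation time near zero
// behaves like a "Perfectly Elastic" market (orders digested immediately), while a
// very large relaxation time approximates a "Perfectly Plastic" market (never regenerates).
let availableDepth (normalDepth: float) (consumed: float) (relaxationDays: float) (daysSinceTrade: float) =
    let recovered = consumed * (1.0 - exp (-daysSinceTrade / relaxationDays))
    normalDepth - consumed + recovered

// Example: normal depth of 1,000,000; 400,000 consumed by a trade; 2-day relaxation time
[ 0.0; 1.0; 2.0; 5.0 ]
|> List.iter (fun t -> printfn "after %g days, depth = %.0f" t (availableDepth 1000000.0 400000.0 2.0 t))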

WP_20140326_18_34_29_Raw

Liquidity Observatory - Carlo described how the data was gathered from market participants on a monthly basis using a spreadsheet to categorize the bond/asset class type, again using simple parameters from active "expert" traders. Take a look at this link and sign up if this is you. (This sounded to me a lot like another "market consensus" data gathering exercise, which are proving increasingly popular, such as one of the first I had heard of many years back, Totem - we are not quite fully ready for "crowdsourcing" in financial markets maybe, but more people are seeing sense in sharing data.)

Panel Debate - Ron Papenek of MSCI was moderator of the panel, and asked Karen Cassidy of Morgan Stanley about her experiences in liquidity risk management.

Liquidity Risk Management at Banks - Karen started by saying that in liquidity management at Morgan Stanley they look at:

  • Funding
  • Operating Capital
  • Client Behaviour

Since 2008, Karen said that liquidity management had become a lot more rigorous and formalized, being rule based and using a categorisation of assets held from highly liquid to highly illiquid. She said that Morgan Stanley undertake stress testing by market and also by idiosyncratic risk over time frames of 1 month and 1 year. As part of this they are assessing the minimum operating liquidity needed based on working capital needs. 

Karen added that Morgan Stanley are expending a lot of effort currently on data collection and modelling given that their data is specific to a retail broker-dealer unit, unlike many other firms. They are also looking at metrics around financial advisors, and how many clients follow the financial advisor when he or she decides to switch firms.

Business or Regulation Driving Liquidity Risk Management - Ron asked Karen what were the drivers of their processes at Morgan Stanley. Karen said that in 2008 the focus was on fundability of assets, saying that the FED was monitoring this on a daily basis. She made the side comment that this monitoring was not unusual since "Regulators live with us anyway". Karen said that it was the responsibility of firms to come up with the controls and best practice needed to manage liquidity risk, and that is what Morgan Stanley do anyway.

Karen added that in her view the industry was over-funding and funding too long in response to regulation, and that funding would be at lower but still pragmatic levels in the absence of regulatory pressure. Like many in the industry, Karen thought the regulation had swung too far in response to the 2008 crisis and would eventually swing back to more normal levels. 

Carlo added that he had written an unintentionally prescient academic paper on liquidity management in 2008 just prior to the crisis hitting, and he thought the regulators certainly arrived "after" the crisis rather than anticipating it in any way. He thought that the banks have anticipated the regulators very well, with measures such as LCR and NSFR already in place.

In contrast, Carlo said that the regulators were lost in dealing with liquidity risk management for asset managers and hedge funds, with regulation such as UCITS being very vague on this topic and regulators themselves seeking guidance from the industry. He recounted a meeting he had with BaFin in 2009 where he told them that certain of their regulations made no sense, and he said they acknowledged this and said the asset management industry needed to tell them what to implement (sounds like the German regulator is using the same card as the UK regulators in keeping regulations vague when they are uncertain, waiting for regulated firms to implement them to see what the regulation really becomes...).

What Have We Learnt Since 2008 - Karen said that back in 2008 liquidity was not managed to term, funding basis was not rigorous and relied heavily on unsecured debt. She said that since then Morgan Stanley had been actively involved in shaping the requirements of better liquidity risk management with more rigorous analysis of counterparties and funding capacity. Karen said that stronger governance was a foundation for the creation of better policy and process. She said that regulators were receptive to new ideas and had been working with them closely.

What will be the effect of CCPs on OTC markets? Carlo said that when executing a large order, you have the choice between executing 1) multiple small orders with multiple counterparties or 2) a single large block order with one counterparty. In this regard, the equity and bond markets are very different. In lit equity venues, the best approach is 1), but in the bond markets approach 2) is taken since the trade information is not transparent to the market.

Obviously equity markets have become more fragmented, and this has resulted in improved market quality, since it is harder to get all market information and hence the market is less reactive to big events/orders. Carlo then asked whether the increased transparency proposed for OTC markets with CCPs etc. will improve them. His answer was that this is likely to improve the counterparty risk inherent in the market, but due to increased transparency is likely to have a negative effect on transaction costs (I guess another example of the law of unintended consequences for the regulators).

Audience Questions - there then followed some audience questions:

LiquidityMetrics extrapolation - one audience member asked about transaction cost extrapolation in Carlo's modelling. Carlo said that MSCI do not extrapolate and the liquidity surface terminates where the market terminates its liquidity. There was some extrapolation used along the time dimension, however, particularly in relation to the time-relaxation parameter.

LiquidityMetrics "Cross-Impact" - looking at applying LiquidityMetrics to a portfolio, one audience member wondered if an order for one asset distorted the liquidity surface for other, potentially related assets. Carlo said this was a very interesting area with little research done so far. He said that this "cross-impact" had not been detected in equity markets but that they were looking at it in other markets such as fixed income, where effectively two assets might be proxies for duration-related trading. Carlo put forward a simple model where the two assets are analogous to two species of animal feeding from the same source of food.

Long and short position liquidity modelling - one audience member asked Carlo what the effects would be of being long or short and that in a crisis you would prefer to be short (maybe obviously?) given the sell off by those with long positions. Carlo clarified that being "short" was not merely taking the negative number on a liquidity surface for a particular asset but rather a "short" is a borrowing position with an obligation to deliver a security at some defined point, and as such is a different asset with its own liquidity surface.  

Changing markets, changing participants - the final question of the evening was from one member of the audience who asked if the general move out of fixed income trading by the banks over recent years was visible in Carlo's data. Carlo said that MSCI only have around two years of data so far and as such this was not yet visible, but his team are looking for effects like this amongst others. He added that the August 2011 weak banks/weak sovereigns episode in Europe was visible, with signals present in the data.

WP_20140326_20_10_03_Raw

Good food and good (really good I thought) wine put on by MSCI at the event reception. Great view of Manhattan from the 48th floor of World Trade Centre 7 too.

WP_20140326_19_46_09_Raw

12 March 2014

S&P Capital IQ Risk Event #2 - Enterprise or Risk Data Strategy?

Christian Nilsson of S&P CIQ followed up Richard Burtsal's talk with a presentation on data management for risk, containing many interesting questions for those considering data for risk management needs. Christian started his talk by taking a time machine back to 2006, and asking what were the issues then in Enterprise Data Management:

  1. There is no current crisis - we have other priorities (we now know what happened there)
  2. The business case is still too fuzzy (regulation took care of this issue)
  3. Dealing with the politics of implementation (silos are still around, but cost and regulation are weakening politics as a defence?)
  4. Understanding data dependencies (understanding this throughout the value chain, but still not clear today?)
  5. The risk of doing it wrong (there are risks you will do data management wrong given all the external parties and sources involved, but what is the risk of not doing it?)

Christian then moved on to say that the current regulatory focus is on clearer roadmaps for financial institutions, citing Basel II/III, Dodd-Frank/Volcker Rule in the US, challenges in valuation from the IASB and IFRS, fund management challenges with UCITS, AIFMD, EMIR, MiFID and MiFIR, and Solvency II in the insurance industry. He coined the phrase "Regulation Goes Hollywood", with multiple versions of regulation like UCITS I, II, III, IV, V, VII for example having more versions than a set of Rocky movies.

He then touched upon some of the main motivations behind the BCBS 239 document and said that regulation had three main themes at the moment:

  1. Higher Capital and Liquidity Ratios
  2. Restrictions on Trading Activities
  3. Structural Changes ("ring fence" retail, global operations move to being capitalized local subsidiaries)

Some further observations were on what will be the implications of the effective "loss" of globalization within financial markets, and also what can now be considered as risk-free assets (do such things now exist?). Christian then gave some stats on risk as a driver of data and technology spend, with over $20-50B to be spent over the next 2-3 years (seems a wide range, nothing like a consensus from analysts I guess!).

The talk then moved on to what role data and data management plays within regulatory compliance, with for example:

  • LEI - Legal Entity Identifiers play out throughout most regulation, as a means to enable automated processing and as a way to understand and aggregate exposures.
  • Dodd-Frank - Data management plays within OTC processing and STP in general.
  • Solvency II - This regulation for insurers places emphasis on data quality/data lineage and within capital reserve requirements.
  • Basel III - Risk aggregation and counterparty credit risk are two areas of key focus.

Christian outlined the small budget of the regulators relative to the biggest banks (a topic discussed in previous posts, how society wants stronger, more effective regulation but then isn't prepared to pay for it directly - although I would add we all pay for it indirectly but that is another story, in part illustrated in the document this post talks about).

In addition to the well-known term "regulatory arbitrage" dealing with different regulations in different jurisdictions, Christian also mentioned the increasingly used term "substituted compliance", where a global company tries to optimise which jurisdictions it and its subsidiaries comply within, with the aim of avoiding compliance in more difficult regimes through compliance within others.

I think Christian outlined the "data management dichotomy" within financial markets very well:

  1. Regulation requires data that is complete, accurate and appropriate
  2. Industry standards of data management and data are poorly regulated, and there is weak industry leadership in this area.

(not sure if it was quite at this point, but certainly some of the audience questions were about whether the data vendors themselves should be regulated which was entertaining).

He also outlined the opportunity from regulation in that it could be used as a catalyst for efficiency, STP and cost base reduction.

Obviously "Big Data" (I keep telling myself to drop the quotes, but old habits die hard) is hard to avoid, and Christian mentioned that IBM say that 90% of the world's data has been created in the last 2 years. He described the opportunities of the "3 V's" of Volume, Variety, Velocity and "Dark Data" (exploiting underused data with new technology - "Dark" and "Deep" are getting more and more use of late). No mention directly in his presentation but throughout there was the implied extension of the "3 V's" to "5 V's" with Veracity (aka quality) and Value (aka we could do this, but is it worth it?). Related to the "Value" point Christian brought out the debate about what data do you capture, analyse, store but also what do you deliberately discard which is point worth more consideration that it gets (e.g. one major data vendor I know did not store its real-time tick data and now buys its tick data history from an institution who thought it would be a good idea to store the data long before the data vendor thought of it).

I will close this post taking a couple of summary lists directly from his presentation, the first being the top areas of focus for risk managers:

  • Counterparty Risk
  • Integrating risk into the Pre-trade process
  • Risk Aggregation across the firm
  • Risk Transparency
  • Cross Asset Risk Reporting
  • Cost Management/displacement

The second list outlines the main challenges:

  • Getting complete view of risk from multiple systems
  • Lack of front to back integration of systems
  • Data Mapping
  • Data availability of history
  • Lack of Instrument coverage
  • Inability to source from single vendor
  • Growing volumes of data

Christian's presentation then put forward a lot of practical ideas about how best to meet these challenges (I particularly liked the risk data warehouse parts, but I am unsurprisingly biased). In summary, if you get the chance then see or take a read of Christian's presentation - I thought it was a very thoughtful document with some interesting ideas and advice put forward.


27 November 2013

Putting the F# in Finance with TimeScape

Quick thank you to Don Syme of Microsoft Research for including a demonstration of F# connecting to TimeScape running on the Windows Azure cloud in the F# in Finance event this week in London. F# is a functional language that is developing a large following in finance due to its applicability to mathematical problems, the ease of development with F# and its performance. You can find some testimonials on the language here.

Don has implemented a proof-of-concept F# type provider for TimeScape. If that doesn't mean much to you, then a practical example below will help, showing how the financial instrument data in TimeScape is exposed at runtime into the F# programming environment. I guess the key point is just how easy it looks to code with data, since effectively you get guided through what is (and is not!) available as you are coding (sorry if I sound impressed, I spent a reasonable amount of time writing mathematical C code using vi in the mid 90's - so any young uber-geeks reading this, please make allowances as I am getting old(er)...). Example steps are shown below:

  • Referencing the Xenomorph TimeScape type provider and creating a data context (screenshot F_1)
  • Connecting to a TimeScape database (screenshot F_2)
  • Looking at categories (classes) of financial instrument available (screenshot F_3)
  • Choosing an item (instrument) in a category by name (screenshot F_4)
  • Looking at the properties associated with an item (screenshot F_5)

The IntelliSense-like behaviour above is similar to what TimeScape's Query Explorer offers and it is great to see this implemented in an external run-time programming language such as F#. Don additionally made the point that each instrument only displays the data it individually has available, making it easy to understand what data you have to work with. This functionality is based on F#'s ability to make each item uniquely nameable, and optionally to assign each item (instrument) a unique type, where all the category properties (defined at the category schema level) that are not available for the item are hidden. 
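
To give a flavour of what this looks like in code, here is a minimal sketch of type-provider style navigation over instrument data. Please treat it as purely illustrative: the TimeScape provider was a proof of concept, so the assembly, namespace, provider and property names below are my own inventions and not the real API.

    // Hypothetical sketch - all names below are invented for illustration,
    // they are not the actual TimeScape type provider API.
    #r "Xenomorph.TimeScape.TypeProvider.dll"      // assumed assembly name
    open Xenomorph.TimeScape                        // assumed namespace

    // Reference the provider and create a data context (connection string assumed)
    type TimeScape = TimeScapeProvider<"Server=myazurehost;Database=Markets">
    let ctx = TimeScape.GetDataContext()

    // Categories (classes) of instrument appear as properties, so IntelliSense
    // lists exactly what the connected database contains...
    let equities = ctx.Categories.Equity

    // ...items (instruments) within a category can be chosen by name...
    let vod = equities.``Vodafone Group PLC``

    // ...and only the properties the instrument actually has are exposed.
    let close = vod.``Close Price``

The point is not the specific names but the workflow: the compiler and editor guide you through what data is (and is not) there, which is exactly what the screenshots above show.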

The next event for F# in Finance will take place on Wednesday 11th of December 2013 in New York, so hope to see you there. We are currently working on a beta program for this functionality to be available early in the New Year so please get in touch if this is of interest via info@xenomorph.com.  

 

04 November 2013

Risk Data Aggregation and Risk Reporting from PRMIA

Another good event from PRMIA at the Harmonie Club here in NYC last week, entitled Risk Data Aggregation and Risk Reporting - Progress and Challenges for Risk Management. Abraham Thomas of Citi and PRMIA introduced the evening, setting the scene by referring to the BCBS document Principles for effective risk data aggregation and risk reporting, with its 14 principles to be implemented by January 2016 for G-SIBs (Globally Systemically Important Banks) and December 2016 for D-SIBs (Domestically Systemically Important Banks).

The event was sponsored by SAP and they were represented by Dr Michael Adam on the panel, who gave a presentation around risk data management and the problems of having data siloed across many different systems. Maybe unsurprisingly, Michael's presentation had a distinct "in-memory" focus to it, with Michael emphasizing the data analysis speed that is now possible using technologies such as SAP's in-memory database offering "Hana".

Following the presentation, the panel discussion started with a debate involving Dilip Krishna of Deloitte and Stephanie Losi of the Federal Reserve Bank of New York. They discussed whether the BCBS document and compliance with it should become a project in itself or part of existing initiatives to comply with data intensive regulations such as CCAR and CVA etc. Stephanie is on the board of the BCBS committee for risk data aggregation and she said that the document should be a guide and not a check list. There seemed to be general agreement on the panel that data architectures should be put together not with a view to compliance with one specific regulation but more as a framework to deal with all regulation to come, a more generalized approach.

Dilip said that whilst technology and data integration are issues, people are the biggest issue in getting a solid data architecture in place. There was an audience question about how different departments need different views of risk and how were these to be reconciled/facilitated. Stephanie said that data security and control of who can see what is an issue, and Dilip agreed and added that enterprise risk views need to be seen by many which was a security issue to be resolved. 

Don Wesnofske of PRMIA and Dell said that data quality was another key issue in risk. Dilip agreed and added that the front office need to be involved in this (data management projects are not just for the back office in isolation) and that data quality was one of a number of needs that compete for resources/budget at many banks at the moment. Coming back to his people theme, Dilip also said that data quality needed intuition to be carried out successfully. 

An audience question from Dan Rodriguez (of PRMIA and Credit Suisse) asked whether regulation was granting an advantage to "Too Big To Fail" organisations, in that only they have the resources to be able to cope with the ever-increasing demands of the regulators, to the detriment of the smaller financial institutions. The panel did not completely agree with Dan's premise, arguing that smaller organizations were more agile and did not have the legacy and complexity of the larger institutions, so there was probably a sweet spot between large and small from a regulatory compliance perspective (I guess it was interesting that the panel did not deny that regulation was at least affecting the size of financial institutions in some way...)

Again focussing on where resources should be deployed, the panel debated trade-offs such as those between accuracy and consistency. The Legal Entity Identifier (LEI) initiative was thought of as a great start in establishing standards for data aggregation, and the panel encouraged regulators to look at doing more. One audience question was around the different and inconsistent treatment of gross notional and trade accounts. Dilip said that yes this was an issue, but came back to Stephanie's point that what is needed is a single risk data platform that is flexible enough to be used across multiple business and compliance projects.  Don said that he suggests four "views" on risk:

  • Risk Taking
  • Risk Management
  • Risk Measurement
  • Risk Regulation

Stephanie added that organisations should focus on the measures that are most appropriate to their business activity.

The next audience question asked whether the panel thought that the projects driven by regulation had a negative return. Dilip said that his experience was yes, they do have negative returns but this was simply a cost of being in business. Unsurprisingly maybe, Stephanie took a different view advocating the benefits side coming out of some of the regulatory projects that drove improvements in data management.

The final audience question was whether the panel thought it was possible to reconcile all of the regulatory initiatives like Dodd-Frank, Basel III, EMIR etc with operational risk. Don took a data angle to this question, talking about the benefits of big data technologies applied across all relevant data sets, and that any data was now potentially valuable and could be retained. Dilip thought that the costs of data retention were continually going down as data volumes go up, but that there were costs in capturing the data needed for operational risk and other applications. Dilip said that when compared globally across many industries, financial markets were way behind the data capabilities of many sectors, and that finance was more "Tiny Data" than "Big Data", and again he came back to the fact that people were getting in the way of better data management. Michael said that many banks and market data vendors are dealing with data in the 10's of TeraBytes range, whereas the amount of data in the world was around 8-900 PetaBytes (I thought we were already just over into ZetaBytes but what are a few hundred PetaBytes between friends...).

Abraham closed off the evening, firstly by asking the audience if they thought the 2016 deadline would be achieved by their organisation. Only 3 people out of around 50+ said yes. Not sure if this was simply people's reticence to put their hand up, but when Abraham asked one key concern for many was that the target would change by then - my guess is that we are probably back into the territory of the banks not implementing a regulation because it is too vague, and the regulators not being too prescriptive because they want feedback too. So a big game of chicken results, with the banks weighing up the costs/fines of non-compliance against the costs of implementing something big that they can't be sure will be acceptable to the regulators. Abraham then asked the panel for closing remarks: Don said that data architecture was key; Stephanie suggested getting the strategic aims in place but implementing iteratively towards these aims; Dilip said that deciding your goal first was vital; and Michael advised building a roadmap for data in risk. 


23 October 2013

Model Risk Management from PRMIA

Guest blog post by Qi Fu of PRMIA and Credit Suisse NYC with some notes on a model risk management event held earlier in September of this year. Big thank you to Qi for his notes and to all involved in organising the event:

The PRMIA event on Model Risk Management (MRM) was held in the evening of September 16th at Credit Suisse.  The discussion was sponsored by Ernst & Young, and was organized by Cynthia Williams, Regulatory Coordinator for Americas at Credit Suisse. 

As financial institutions have shifted considerable focus to model governance and independent model validation, MRM is as timely a topic as any in risk management, particularly since the Fed and OCC issued the Supervisory Guidance on Model Risk Management, also known as SR 11-7.

The event brought together a diverse range of views: the investment banks Morgan Stanley, Bank of America Merrill Lynch, and Credit Suisse were each represented, and also on the panel were a consultant from E&Y and a regulator from the Federal Reserve Bank of NY. The event was well attended with over 100 attendees.

Colin Love-Mason, Head of Market Risk Analytics at CS, moderated the panel, and led off by discussing his 2 functions at Credit Suisse, one being traditional model validation (MV), the other being VaR development and completing gap assessment, as well as compiling model inventory. Colin made an analogy between model risk management and real estate. As in real estate, there are three golden rules in MRM, which are emphasized in SR 11-7: documentation, documentation, and documentation. Looking into the future, the continuing goals in MRM are quantification and aggregation.

Gagan Agarwala of E&Y’s Risk Advisory Practice noted that there is nothing new about many of the ideas in MRM.  Most large institutions already have in place guidance on model validation and model risk management.  In the past validation consisted of mostly quantitative analysis, but the trend has shifted towards establishing more mature, holistic, and sustainable risk management practices. 

Karen Schneck of FRBNY's Models and Methodology Department spoke about her role at the FRB, where she is on the model validation unit for stress testing for Comprehensive Capital Analysis and Review (CCAR); thus part of her work was on MRM before SR 11-7 was written. SR 11-7 is definitely a "game changer"; since its release, there is now more formalization and organization around the oversight of MRM; rather than a rigid organization chart, the reporting structure at the FRB is much more open minded. In addition, there is an increased appreciation of the infrastructure around the models themselves and the challenges faced by practitioners, in particular the model implementation component, which is not always immediately recognized.

Craig Wotherspoon of BAML Model Risk Management remarked on his experience in risk management, and commented that a new feature in the structure of risk governance is that model validation is turning into a component of risk management. In addition, the people involved are changing: risk professionals with the combination of a scientific mind, business sense, and writing skills will be in as high demand as ever.

Jon Hill, Head of Morgan Stanley's Quantitative Analytics Group, discussed his past experience in MRM since the 90's, when the primary tools applied were "sniff tests". Since then, the landscape has completely changed. In the past, focus had been on production, while documentation of models was an afterthought; now documentation must be detailed enough for a highly qualified individual to review. In times past the focus was only around validating methodology, nowadays it is just as important to validate the implementation. There is an emphasis on stress testing, especially for complex models, in addition to internal threshold models and independent benchmarking. The definition of what a model is has also expanded to anything that takes numbers in and produces numbers as output. However, these increased demands require a substantial increase in resources; the difficulty of recruiting talent in these areas will remain a major challenge.

Colin noted a contrast in the initial comments of the panelists: on one hand some were indicating that MRM is mostly common sense, but Karen in particular emphasized the "game-changing" implications of SR 11-7, with MRM becoming more process oriented, when in the past it had been more of an intellectual exercise. With regard to recruitment, it is difficult to find candidates with all the prerequisite skill sets; one option is to split up the workload to make it easier to hire.

Craig noted the shift in the risk governance structure: the model risk control committees are defining what models are, more formally and rigorously. Gagan added that models have lifecycles, and there are inherent risks associated within that lifecycle. It is important to connect the dots to make sure everything is conceptually sound, and to ascertain that other control functions understand the lifecycles.

Karen admitted that additional process requirements carry the risk of trumping value. MRM should aim to maintain high standards while not getting so overwhelmed by the process itself that some ideas become too expensive to implement. There is also the challenge of maintaining the independence of the MV team.

Jon concurred with Karen on the importance of maintaining independence. A common experience is that when validators find mistakes in the models, they become drawn into the development process with the modelers. He also noted differences between the US, UK, and European MV processes, and asserted his view that the US is ahead of the curve and setting standards.

Colin noted the issue of the lack of an analogous PRA document to SR 11-7 that drills down into the nuts and bolts of the challenges in MRM. He also concurred on the difficulty of maintaining independence, particularly in areas with no established governance. It is important to get model developers to talk to other developers about the definition and scope of the models, as well as possible expansion of scope. There is a wide gamut of models: core, pricing, risk, vendor, sensitivity, scenarios, etc. Who is responsible for validating which? Who checks on the calibration, tolerance, and weights of the models? These are important questions to address.

Craig commented further on the complexity and uncertainty of defining what a model is, and on whose job it is to determine that, amongst the different stakeholders. It also needs to be taken into consideration that model developers may be biased towards limiting the number of models.

Gagan followed up by noting that while the generic definition of models is broad and will need to be redefined, analytics do not all need to be held to the same standards; the definition should leave some flexibility for context. Also, the highest standard should be assigned to risk models.

Karen added that defining and validating models used to have a narrow focus, and was done in a tightly controlled environment. It would be better to broaden the scope, and to reexamine the question on an ongoing basis (it is however important to point out that annual review does not equal annual re-validation). In addition to the primary models, some challenger models also need to be supported; developers should discuss why they are happy with the primary model, how it differs from the challenger model, and how it impacts output.

Colin brought up the point of stress-testing. Jon asserted that stress-testing is more important for stochastic models, which are more likely to break under nonsensical inputs. Also, any model that plugs into the risk system should require judicious decision-making, as well as annual reviews to look at changes since the previous review.

Colin also brought up the topic of change management: what are the system challenges when model developers release code, which may include experimental releases? Often discussed are the concepts of annual certification and checkpoints. Jon commented that the focus should be on changes of 5% or more, with pricing models being less of a priority, and that firms should move towards centralized source code repositories.

Karen also added the question of what ought to be considered a material change: the more conservative answer is that any variation, even a pure code change that did not alter model usage or business application, may need to be communicated to upper management.

Colin noted that developers often have a tendency to encapsulate intentions, and have difficulty with or reluctance about documenting changes, thus resulting in many grey areas. Gagan added that infrastructure is crucial; especially when market conditions are rapidly changing, MRM needs to have controls in place. Also, models in Excel make the change management process more difficult.  

The panel discussion was followed by a lively Q&A session with an engaged audience, below are some highlights.

Q:  How do you distinguish between a trader whose model actually needs change, versus a trader who is only saying so because he/she has lost money?

Colin:  Maintain independent price verification and control functions.

Craig:  Good process for model change, and identify all stakeholders.

Karen:  Focus on what model outputs are being changed, what the trader’s assumptions are, and what is driving results.

Q:  How do you make sure models are used in business in a way that makes sense?

Colin:  This can be difficult; the front office builds the models and states what they are good for, so there is no simple answer from the MV perspective. Usage means getting as many people into the governance process as possible, plus internal audit and setting up controls.

Gagan:  Have coordination with other functions, holistic MRM.

Karen:  Need structure, inventory a useful tool for governance function.

Q:  Comments on models used in the insurance industry?

Colin:  Very qualitative, possible to give indications, difficult to do exact quantitative analysis, estimates are based on a range of values.  Need to be careful with inputs for very complex models, which can be based on only a few trades.

Q:  What to do about big shocks in CCAR?

Jon:  MV should validate for severe shocks, and if model fails may need only simple solution.

Karen:  Validation tools, some backtesting data, need to benchmark; the quant element of stress testing needs to be substantiated and supported by qualitative assessment.

Q:  How to deal with vendor models?

Karen:  Not acceptable just to say it’s okay as long as the vendor is reputable, want to see testing done, consider usage also compare to original intent.

Craig:  New guidance makes it difficult to buy vendor models, but if a vendor recognizes this, it will give them a competitive advantage.

Q:  How to define independence for medium and small firms?

Colin:  Be flexible with resources, bring in different people, get feedback from senior management, and look for consistency.

Jon:  Hire E&Y?  There is never complete independence even in a big bank.

Gagan:  Key is the review process.

Karen:  Consultants could be cost effective; vendor validation may not be enough.

Q:  At firm level, do you see practice of assessing risk models?

Jon:  Large bank should appoint Model Risk Officer.

Karen:  Just slapping on additional capital is not enough.

Q:  Who actually does MV?

Colin:  First should be user, then developer, 4 eyes principle.

Q:  Additional comments on change management?

Colin:  Ban Excel for anything official; need controlled environment.

 

21 October 2013

Credit Risk: Default and Loss Given Default from PRMIA

Great event from PRMIA on Tuesday evening of last week, entitled Credit Risk: The link between Loss Given Default and Default. The event was kicked off by Melissa Sexton of PRMIA, who introduced Jon Frye of the Federal Reserve Bank of Chicago. Jon seems to be an acknowledged expert in the field of Loss Given Default (LGD) and credit risk modelling. I am sure that the slides will be up on the PRMIA event page above soon, but much of Jon's presentation seems to be based around the following working paper. So take a look at the paper (which is good in my view) but I will stick to an overview and in particular any anecdotal comments made by Jon and other panelists.

Jon is an excellent speaker, relaxed in manner, very knowledgeable about his subject, humorous but also sensibly reserved in coming up with immediate answers to audience questions. He started by saying that his talk was not going to be long on philosophy, but very pragmatic in nature. Before going into detail, he outlined that the area of credit risk can and will be improved, but that this improvement becomes easier as more data is collected, and inevitably this data collection process may need to run for many years and decades yet before the data becomes statistically significant. 

Which Formula is Simpler? Jon showed two formulas for estimating LGD, one a relatively complex looking formula (the Vasicek distribution mentioned in his working paper) and the other a simple linear model of the form a + b·x. Jon said that looking at the two formulas, many would hope that the second formula might work best given its simplicity, but he wanted to convince us that the first formula was in fact simpler than the second. He said that the second formula would need to be regressed on all loans to estimate its parameters, whereas the first formula depended on two parameters that most banks should have a fairly good handle on. The two parameters were Default Rate (DR) and Expected Loss (EL). The fact that these parameters were relatively well understood seemed to be the basis for saying the first formula was simpler, despite its relative mathematical complexity. This prompted an audience question on what is the difference between Probability of Default (PD) and Default Rate (DR): apparently PD is the expected probability of default before default happens (so ex-ante) and DR is the realised rate of default (so ex-post). 
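
For reference, my reading of Jon's working paper is that the "complex" formula is a conditional LGD function of the realised default rate, derived by assuming that both the loss rate and the default rate follow Vasicek distributions with the same correlation, with the correlation typically taken as given rather than estimated. In my own notation (so treat this as a sketch rather than a definitive statement of the model):

\[
\mathrm{cLGD}(\mathrm{DR}) \;=\; \frac{\Phi\!\left(\Phi^{-1}(\mathrm{DR}) - k\right)}{\mathrm{DR}},
\qquad
k \;=\; \frac{\Phi^{-1}(\mathrm{PD}) - \Phi^{-1}(\mathrm{EL})}{\sqrt{1-\rho}}
\]

versus the simple linear alternative cLGD = a + b·DR, whose parameters a and b have to be estimated by regression on loan data - hence Jon's point that the "complex" formula is operationally the simpler one.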

Default and LGD over Time. Jon showed a graph (by an academic called Altman) of DR and LGD over time. When the DR was high (lots of companies failing, in a likely economic downturn) the LGD was also perhaps understandably high (so a high number of companies failing, against an economic background that is both part of the cause of the failures and no help to the loss recovery process). When DR is low, there is a disconnect between LGD and DR. Put another way, when the number of companies failing is low, the losses incurred by those companies that do default can be high or low; there is no discernible pattern. I guess I am not sure whether this disconnect is due in part to the smaller number of companies failing meaning the sample space is much smaller and hence the outcomes are more volatile (no averaging effect), or more likely that in healthy economic times the loss given a default is much more of a random variable, dependent on the defaulting company specifics rather than on the general economic background.

Conclusions Beware: Data is Sparse. Jon emphasised from the graph that the Altman data went back 28 years, of which 23 years were periods of low default, with 5 years of high default levels but only across 3 separate recessions. Therefore from a statistical point of view this is very little data, so makes drawing any firm statistical conclusions about default and levels of loss given default very difficult and error-prone. 

The Inherent Risk of LGD. Jon here seemed to be focussed not on the probability of default, but rather on the conditional question of how LGD behaves once a default has occurred and what risk is inherent in the different losses faced. He described how LGD affects i) Economic Capital - if LGD is more variable, then you need stronger capital reserves, ii) Risk and Reward - if a loan has more LGD risk, then the lender wants more reward, and iii) Pricing/Valuation - even if the expected LGD of two loans is equal, different loans can still default under different conditions having different LGD levels.

Models of LGD

Jon showed a chart with LGD plotted against DR for 6 models (two of which I think he was involved in). All six models were dependent on three parameters, PD, EL and correlation, and all six models seemed to produce almost identical results when plotted on the chart. Jon mentioned that one of his models had been validated (successfully I think, but with a lot of noise in the data) against Moody's loan data taken over the past 14 years. He added that he was surprised that all six models produced almost the same results, implying either that all models were converging around the correct solution or, in total contrast, that all six models were potentially subject to "group think" and were systematically wrong in the way the problem should be looked at.
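
As a purely illustrative sketch (not Jon's code, and using my own paraphrase of the LGD function given earlier), a model of this shape takes only a few lines - here in F#, assuming the MathNet.Numerics package for the normal distribution functions:

    open MathNet.Numerics.Distributions

    // Standard normal CDF and inverse CDF
    let phi x    = Normal.CDF(0.0, 1.0, x)
    let phiInv p = Normal.InvCDF(0.0, 1.0, p)

    // Conditional LGD as a function of the realised default rate dr,
    // driven only by the parameters PD, EL and correlation rho
    let conditionalLgd pd el rho dr =
        let k = (phiInv pd - phiInv el) / sqrt (1.0 - rho)
        phi (phiInv dr - k) / dr

    // The simple linear alternative, whose a and b must be regressed from loan data
    let linearLgd a b dr = a + b * dr

    // Example: PD = 2%, EL = 1%, rho = 10% - LGD rises as the default rate rises
    [ 0.005; 0.02; 0.05; 0.10 ]
    |> List.iter (fun dr ->
        printfn "DR = %4.1f%%  ->  LGD = %4.1f%%" (dr * 100.0) (100.0 * conditionalLgd 0.02 0.01 0.10 dr))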

Jon took one of his LGD models and compared it against the simple linear model, using simulated data. He showed a graph of some data points for what he called a "lucky bank" with the two models superimposed over the top. The lucky bit came in since this bank's data points for DR against LGD showed lower DR than expected for a given LGD, and lower LGD for a given DR. In this specific case, Jon said that the simple linear model fitted better than his non-linear one, but when done over many data sets his LGD model fitted better overall since it seemed to be less affected by random data.

There were then a few audience questions as Jon closed his talk, one leading Jon to remind everyone of the scarcity of data in LGD modelling. In another Jon seemed to imply that he would favor using his model (maybe understandably) in the Dodd-Frank Annual Stress Tests for banks, emphasising that models should be kept simple unless a more complex model can be justified statistically. 

Steve Bennet and the Data Scarcity Issue 

Following Jon's talk, Steve Bennet of PECDC picked up on Jon's issue of scarce data within LGD modelling. Steve is based in the US, working for his organisation PECDC, which is a cross-border initiative to collect LGD and EAD (exposure at default) data. The basic premise seems to be that in dealing with the scarce data problem, we do not have 100 years of data yet, so in the meantime let's pool data across member banks and hence build up a more statistically significant data set - put another way: let's increase the width of the dataset if we can't control the depth. 

PECDC is a consortium of around 50 organisations that pool data relating to credit events. Steve said that they capture data fields per default at four "snapshot" times: origination, 1 year prior to default, at default and at resolution. He said that every bank that had joined the organisation had managed to improve its datasets. Following an audience question, he clarified that PECDC does not predict LGD with any of its own models, but rather provides the pooled data to enable the banks to model LGD better. 

Steve said that LGD turns out to be very different for different sectors of the market, particularly between SMEs and large corporations (levels of LGD for large corporations being more stable globally and less subject to regional variations). But also there is great LGD variation across specialist sectors such as aircraft finance, shipping and project finance. 

Steve ended by saying that PECDC was originally formed in Europe, and was now attempting to get more US banks involved, with 3 US banks already involved and 7 waiting to join. There was an audience question relating to whether regulators allowed pooled data to be used under Basel IRB - apparently Nordic regulators allow this due to needing more data in a smaller market, European banks use the pooled data to validate their own data in IRB, but in the US banks must use their own data at the moment.

Til Schuermann

Following Steve, Til Schuermann added his thoughts on LGD. He said that LGD has a time variation and is not random, being worse in recession when DR is high. His stylized argument to support this was that in recession there are lots of defaults, leading to lots of distressed assets and that following the laws of supply and demand, then assets used in recovery would be subject to lower prices. Til mentioned that there was a large effect in the timing of recovery, with recovery following default between 1 and 10 quarters later. He offered words of warning that not all defaults and not all collateral are created equal, emphasising that debt structures and industry stress matter. 

Summary

The evening closed with a few audience questions and a general summation by the panelists of the main issues of their talks, primarily around models and modelling, the scarcity of data and how to be pragmatic in the application of this kind of credit analysis. 


09 October 2013

And the winner of the Best Risk Data Management and Analytics Platform is...

...Xenomorph!!! Thanks to all who voted for us in the recent A-Team Data Management Awards, it was great to win the award for Best Risk Data Management and Analytics Platform. Great that our strength in the Data Management for Risk field is being recognised, and big thanks again to clients, partners and staff who make it all possible!

Please also find below some posts for the various panel debates at the event:

 Some photos, slides and videos from the event are now available on the A-Team site.

 

07 October 2013

#DMSLondon - The Chief Data Officer Challenge

The first panel of the afternoon touched on a hot topic at the moment, the role of the Chief Data Officer (CDO). Andrew Delaney again moderated the panel, consisting of Rupert Brown of UBS, Patrick Dewald of Diaku, Colin Hall of Credit Suisse, Nigel Matthews of Barclays and Neill Vanlint of GoldenSource. Main points:

  • Colin said that the need for the CDO role is that someone needs to sit at the top table who is both nerdy about data but also can communicate a vision for data to the CEO.
  • Rupert said that the role of CDO was still a bit nebulous, covering data conformance, storage management, security and data opportunity (new functionality and profit). He suggested this role used to be called "Data Stewardship" and that the CDO tag is really a rename.
  • Colin answered that the role did use to be a junior one, but regulation and the rate of industry change demands a CDO, a point contact for everyone when anything comes up that concerns data - previously nobody knew quite who to speak to on this topic.
  • Patrick suggested that a CDO needs a long-term vision for data, since the role is not just an operational one. 
  • Nigel pointed out that the CDO needs to cover all kinds of data and mentioned recent initiatives like BCBS with their risk data aggregation paper.
  • Neill said that he had seen the use of a CDO per business line at some of his clients.
  • There was some conversation around the different types of CDO and the various carrots and sticks that can be employed. Neill made the audience laugh with his quote from a client that "If the stick doesn't work, I have a five-foot carrot to hit them with!"
  • Patrick said that CDO role is about business not just data.
  • Colin picked up on what Patrick said and illustrated this with an example of legal contract data feeding directly into capital calculations.
  • Nigel said that the CDO is a facilitator with all departments. He added that the monitoring tools used for market data were needed in reference data too.

Overall good debate, and I guess if you were starting from scratch (if only we could!) you would have to think that the CDO is a key role given the finance industry is primarily built on the flow of data from one organisation to another.


#DMSLondon - Big Data, Cloud, In-Memory

Andrew Delaney introduced the second panel of the day, with the long title of "The Industry Response: High Performance Technologies for Data Management - Big Data, Cloud, In-Memory, Meta Data & Big Meta Data". The panel included Rupert Brown of UBS, John Glendenning of Datastax, Stuart Grant of SAP and Pavlo Paska of Falconsoft. Andrew started the panel by asking what technology challenges the industry faced:

  • Stuart said that risk data on-demand was a key challenge, that there was the related need to collapse the legacy silos of data.
  • Pavlo backed up Stuart by suggesting that accuracy and consistency were needed for all live data.
  • Rupert suggested that there has been a big focus on low latency and fast data, but raised a smile from the audience when he said that he was a bit frustrated by the "format fetishes" in the industry. He then brought the conversation back to some fundamentals from his viewpoint, talking about wholeness of data and namespaces/data dictionaries - Rupert said that naming data had been too stuck in the functional area and not considered more in isolation from the technology.
  • John said that he thought there were too many technologies around at the moment, particularly in the area of Not Only SQL (NoSQL) databases. John seemed keen to push NoSQL, and in particular Apache Cassandra, as post relational databases. He put forward that these technologies, developed originally by the likes of Google and Yahoo, were the way forward and that in-memory databases from traditional database vendors were "papering over the cracks" of relational database weaknesses.
  • Stuart countered John by saying that properly designed in-memory databases had their place, but that some in-memory databases had indeed been designed to paper over the cracks and this was the wrong approach, exacerbating the problem sometimes.
  • Responding to Andrew's questions around whether cloud usage was more accepted by the industry than it had been, Rupert said he thought it was although concerns remain over privacy and regulatory blockers to cloud usage, plus there was a real need for effective cloud data management. Rupert also asked the audience if we knew of any good release management tools for databases (controlling/managing schema versioning etc) because he and his group were yet to find one. 
  • Rupert expressed that Hadoop 2 was of more interest to him at UBS than Hadoop, and as a side note mentioned that map reduce was becoming more prevalent across NoSQL, not just within the Hadoop domain. Maybe controversially, he said that UBS was using less data than it used to and as such it was not the "big data" organisation people might think it to be. 
  • As one example of the difficulties of dealing with silos, Stuart said that at one client it required the integration of data from 18 different systems to get an overall view of the risk exposure to one counterparty. Stuart advocated bringing the analytics closer to the data, enabling more than one job to be done on one system.
  • Rupert thought that Goldman Sachs and Morgan Stanley seem to do what is the right thing for their firm, laying out a long-term vision for data management. He said that a rethink was needed at many organisations since fundamentally a bank is a data flow.
  • Stuart picked up on this and said that there will be those organisations that view data as an asset and those that view data as an annoyance.
  • Rupert mentioned that in his view accountants and lawyers are getting in the way of better data usage in the industry.
  • Rupert added that data in Excel needed to be passed by reference and not passed by value. This "copy confluence" was wasting disk space and was a source of operational problems for many organisations (a few past posts here and here on this topic).
  • Moving on to describe some of the benefits of semantic data and triple stores, Rupert proposed that the statistical world needed to be added to the semantic world to produce "Analytical Semantics" (see past post relating to the idea of "analytics management").

Great panel, lots of great insight with particularly good contributions from Rupert Brown.

#DMSLondon - What Will Drive Data Management?

The first panel of the day opened with an introductory talk by Chris Johnson of HSBC. Chris started his talk by proudly announcing that he drives a Skoda car, something that to him would have been unthinkable 25 years ago but with investment, process and standards things can and will change. He suggested that data management needs to go through a similar transformation, but that there remained a lot to be done. 

Moving on to the current hot topics of data utilities and managed services, he said that the reduced costs of managed services only became apparent in the long term and that both types of initiative have historically faced issues with:

  • Collaboration
  • Complexity
  • Logistical Challenges and Risks

Chris made the very good point that until service providers accept liability for data quality then this means that clients must always check the data they use. He also mentioned that in relation to Solvency II (a hot topic for Chris at HSBC Security Services), that EIOPA had recently mentioned that managed services may need to be regulated. Chris mentioned the lack of time available to respond to all the various regulatory deadlines faced (a recurring theme) and that the industry still lacked some basic fundamentals such as a standard instrument identifier.

Chris then joined the panel discussion with Andrew Delaney as moderator and with other panelists including Colin Gibson (see previous post), Matt Cox of Denver Perry, Sally Hinds of Data Management Consultancy Services and Robert Hofstetter of Bank J. Safra Sarasin. The key points I took from the panel are outlined below:

  • Sally said that many firms were around Level 3 in the Data Management Maturity Model, and that many were struggling particularly with data integration. Sally added that utilities were new, as was the CDO role, and that the implications for data management were only just playing out.
  • Matt thought that reducing cost was an obvious priority in the industry at the moment, with offshoring playing its part but progress was slow. He believed that data management remains underdeveloped with much more to be done.
  • Colin said that organisations remain daunted by their data management challenges, and that new challenges were arising for data management with transactional data and derived data.
  • Sally emphasised the role of the US FATCA regulation and how it touches upon so many processes and departments including KYC, AML, Legal, Tax etc.
  • Matt highlighted derivatives regulation with the current activity in central clearing, Dodd-Frank, Basel III and EMIR.
  • Chris picked up on this and added Solvency II into the mix (I think you can sense regulation was a key theme...). He expressed the need for and desirability of a Unique Product Identifier (UPI, see report) as essential for the financial markets industry, and how we should not just stand still now that the LEI is coming. He said that industry associations really needed to pick up their game to get more standards in place, but added that the IMA had been quite proactive in this regard. He expressed his frustration at current data licensing arrangements with data vendors, with the insistence on a single point of use being the main issue (a big problem if you are in security services serving your clients I guess)
  • Robert added that his main issues were data costs and data quality
  • Andrew then brought the topic around to risk management and its impact on data management.
  • Colin suggested that more effort was needed to understand the data needs of end users within risk management. He also mentioned that products are not all standard and data complexity presents problems that need addressing in data management.
  • Chris mentioned that there are 30 data fields used in Solvency II calculations and that if any are wrong this would have a direct impact on the calculated capital charge (i.e. data is important!)
  • Colin got onto the topic of unstructured data and said how it needed to be tagged in some way to become useful. He suggested that there was an embryonic cross-over taking place between structured and unstructured data usage.
  • Sally thought that the merging of Business Intelligence into Data Management was a key development, and that if you have clean data then use it as much as you can.
  • Robert thought that increased complexity in risk management and elsewhere should drive the need for increased automation.
  • Colin thought cost pressures mean that the industry simply cannot afford the old IT infrastructure and that architecture needs to be completely rethought.
  • Chris said that we all need to get the basics right, with LEI but then on to UPI. He said to his knowledge data management will always be a cost centre and standardisation was a key element of reducing costs across the industry.
  • Sally thought that governance and ownership of data was woolly at many organisations and needed more work. She added this needed senior sponsorship and that data management was an ongoing process, not a one-off project.
  • Matt said that the "stick" was very much needed in addition to the carrot, advising that the proponents of improved data management should very much lay out the negative consequences to bring home the reality to business users who might not see the immediate benefits and costs.

Overall good panel, lots of good debate and exchanging of ideas.

 

#DMSLondon - Data Architecture: Sticks or Carrots?

Great day on Thursday at the A-Team Data Management Summit in London (personally not least because Xenomorph won the Best Risk Data Management/Analytics Platform Award but more of that later!). The event kicked off with a brief intro from Andrew Delaney of the A-Team talking through some of the drivers behind the current activity in data management, with Andrew saying that risk and regulation were to the fore. Andrew then introduced Colin Gibson, Head of Data Architecture, Markets Division at Royal Bank of Scotland.

Data Architecture - Sticks or Carrots? Colin began by looking at the definition of "data architecture" showing how the definition on Wikipedia (now obviously the definitive source of all knowledge...) was not particularly clear in his view. He suggested himself that data architecture is composed of two related frameworks:

  • Orderly Arrangement of Parts
  • Discipline 

He said that the orderly arrangement of parts is focussed on business needs and aims, covering how data is sourced, stored, referenced, accessed, moved and managed. On the discipline side, he said that this covered topics such as rules, governance, guides, best practice, modelling and tools.

Colin then put some numbers around the benefits of data management, saying that every dollar spent on centralising data saves 20 dollars, and mentioning a resulting 80% reduction in operational costs. Related to this he said that every dollar spent on not replicating data saved a dollar on reconciliation tools and a further dollar on the use of reconciliation tools (not sure how the two overlap but these are obviously some of the "carrots" from the title of the talk). 

Despite these incentives, Colin added that getting people to actually use centralised reference data remains a big problem in most organisations. He said he thought that people find it too difficult to understand and consume what is there, and faced with a choice they do their own thing as an easier alternative. Colin then talked about a program within RBS called "GoldRush" whereby there is a standard data management library available to all new projects in RBS which contains:

  • messaging standards
  • standard schema
  • update mechanisms

The benefit being that if the project conforms with the above standards then they have little work to do for managing reference data, since all the work is done once and centrally. Colin mentioned that there also needs to be feedback from the projects back to the central data management team around what is missing/needs to be improved in the library (personally I would take it one step further so that end-users and not just IT projects have easy discovery and access to centralised reference data). The lessons he took from this were that we all need to "learn to love" enterprise messaging if we are to get to the top down publish once/consume often nirvana, where consuming systems can pick up new data and functionality without significant (if any) changes (might be worth a view of this post on this topic). He also mentioned the role of metadata in automating reconciliation where that needed to occur.

Colin then mentioned that allocation of costs of reference data to consumers is still a hot topic, one where reference data lags behind the market data permissioning/metering insisted upon by exchanges. Related to this Colin thought that the role of the Chief Data Officer to enforce policies was important, and the need for the role was being driven by regulation. He said that the true costs of a tactical, non-standard approach need to be identifiable (quantifying the size of the stick I guess) but that he had found it difficult to eliminate the tactical use of pricing data sourced for the front office. He ended by mentioning that there needs to be a coming together of market data and reference data since operations staff are not doing quantitative valuations (e.g. does the theoretical price of this new bond look ok?) and this needs to be done to ensure better data quality and increased efficiency (couldn't agree more, have a look at this article and this post for a few of my thoughts on the matter). Overall very good speaker with interesting, practical examples to back up the key points he was trying to get across. 

 
