35 posts categorized "Spreadsheets"

01 July 2014

Cloud, data and analytics in London - thanks for coming along!

We had over 60 folks along to our event at Merchant Taylors' Hall in London last week. Thanks to all who attended and all who helped with the organization of the event, and sorry to have missed those of you who couldn't come along this time.

Some photos from the event are below, starting with Brad Sevenko of Microsoft (Director, Capital Markets Technology Strategy) in the foreground with a few of the speakers doing some last-minute adjustments at the front of the room before the guests arrived:

[Photo]

 

Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started off the presentations at the event, introducing Microsoft's capital markets technology strategy to a packed audience:

[Photo]

 

After a presentation by Virginie O'Shea of Aite Group on Cloud adoption in capital markets, Antonio Zurlo (below) of Microsoft (Senior Program Manager) gave a quick introduction to the services available through the Microsoft Azure cloud and then moved on to more detail around Microsoft Power BI:

[Photo]

 

After Antonio, yours truly (Brian Sentance, CEO, Xenomorph) gave a presentation on what we have been building with Microsoft over the past 18 months, the TimeScape MarketPlace. At this point in the presentation I was giving some introductory background on the challenges of regulatory compliance and the pros and cons of point solutions versus having a more general data framework in place:

[Photo]

 

The event ended with some networking and further discussions. Big thanks to those who came forward to speak with me afterwards, great to get some early feedback.

[Photo]

 

24 June 2014

Cloud, data and analytics in London. Tomorrow Wednesday 25th June.

One day to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylors' Hall tomorrow, Wednesday June 25th. With over ninety people registered so far it should be a great event, so if you can make it please register and come along - it would be great to see you there.

19 June 2014

Cloud, data and analytics in London. Next Wednesday June 25th.

Less than one week to go until our TimeScape MarketPlace breakfast briefing "Financial Markets Data and Analytics. Everywhere You Need Them" at Merchant Taylors' Hall on Wednesday June 25th.

Come and join Xenomorph, Aite Group and Microsoft for breakfast and hear Virginie O'Shea of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be demonstrating the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. 

In the meantime, please take a look at the event and register if you can come along - it would be great to see you there.

11 June 2014

Financial Markets Data and Analytics. Everywhere London Needs Them.

Pleased to announce that our TimeScape MarketPlace event "Financial Markets Data and Analytics. Everywhere You Need Them" is coming to London, at Merchant Taylors' Hall on Wednesday June 25th.

Come and join Xenomorph, Aite Group and Microsoft for breakfast and hear Virginie O'Shea of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be demonstrating the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. 

In the meantime, please take a look at the event and register if you can come along - it would be great to see you there.

14 May 2014

Clients and Partners. Everywhere You Need Them.

Quick thank you to the clients and partners who took some time out of their working day to attend our breakfast briefing, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square offices last Friday morning. Not particularly great weather here in Manhattan, so it was great to see around 60 folks turn up...

[Photo]

 
Rupesh Khendry of Microsoft (Head of World-Wide Capital Markets Solutions) started the event and set out the agenda for the morning. Rupesh described the expense of data within financial markets, and the difficulties experienced by risk managers in pulling together all the data and analytics they need...

[Photo]
 
...and following Rupesh was Antonio Zurlo (below) of Microsoft (Senior Program Manager), who explained the fundamentals of Microsoft Azure and what services and infrastructure it offers, including public cloud, virtual private cloud and hybrid cloud architectures. Antonio also described a key usage pattern for HPC/grid on Azure being used to "burst to the cloud" when on-premise infrastructure needs to be extended for end-of-day/intra-day risk calcs...
[Photo]
 
Sang Lee (below) of Aite Group (Managing Partner) then delivered his presentation "Floating in the Capital Markets Cloud: Moving Beyond Data Storage". Sang's main findings from the survey of 20 financial institutions were that concerns about security and SLAs relating to cloud usage remain, but even those that were concerned about this also said they were planning to start a cloud project within the next 24 months. Cloud technology seems to be becoming more acceptable of late, and Sang said this seems to be due to regulation, cost pressures and the desire to offer better services to clients. Sang confirmed that HPC/grid with "burst to the cloud" is a common usage pattern and that "Data as a Service" is becoming more popular...
[Photo]
 
Fred Veasley (below) of Microsoft (Tech Solutions Professional) was up next to introduce Microsoft Power BI and Office 365. Fred explained how Power BI extended the capabilities of Excel with data search (finding and retrieving published data sources both within an organization and over the web), its integration capabilities with standard databases, NoSQL databases, data standards such as OData and new APIs/sources of data such as Facebook. Once downloaded, the data can be shaped and merged with other datasets (for instance combining data from positions databases/systems with analytics and data from the cloud), and kept up to date automatically. In addition to Power BI, Power View enables great visualizations and interactive dashboards to be created, and once finalized these can be deployed centrally via web pages down to end users...
[Photo]
 
After Fred, Brian Sentance (below), CEO of Xenomorph, explained the origins of the TimeScape MarketPlace. Based on some discussions with Microsoft about 18 months back, the idea was effectively firstly to get TimeScape running in the Microsoft Azure cloud, secondly to turn the data management capabilities of TimeScape "upside-down" by using it as a means to upload and publish data to the cloud, and thirdly to provide one-to-many access to multiple sources of data via web interfaces and key delivery tools such as Microsoft Power BI. Put another way, without any local software or hardware infrastructure, both business users and IT staff can access multiple data sources in the same format and using the same data model wherever the data is needed. In addition to .NET and Java interfaces to the TimeScape MarketPlace via OData, web API delivery into F#, Python, R and MATLAB is all in development...
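As a rough illustration of what OData delivery can look like from Python, here is a minimal sketch - note that the endpoint URL and entity names below are hypothetical placeholders for illustration only, not the actual TimeScape MarketPlace API:

```python
import requests

# Hypothetical OData endpoint and entity names - placeholders only, not the
# real TimeScape MarketPlace API, which is not documented in this post.
BASE_URL = "https://example-marketplace.cloudapp.net/odata"

def get_timeseries(entity, instrument, top=10):
    """Query an OData entity set, asking for a JSON response."""
    params = {
        "$filter": f"InstrumentId eq '{instrument}'",
        "$orderby": "Date desc",
        "$top": str(top),
        "$format": "json",
    }
    response = requests.get(f"{BASE_URL}/{entity}", params=params, timeout=30)
    response.raise_for_status()
    payload = response.json()
    # OData v4 returns {"value": [...]}; older v2/v3 services wrap results
    # in {"d": {"results": [...]}} - handle both shapes here.
    return payload.get("value") or payload.get("d", {}).get("results", [])

if __name__ == "__main__":
    for row in get_timeseries("EndOfDayPrices", "XS0123456789"):
        print(row)
```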
[Photo]
 
...and in addition to downloading data via Power BI, Brian also demonstrated how you could build on the data using "Power View" to create powerful analytical dashboard functionality that could be built and tested in Excel, then deployed centrally within a browser for access by users outside of Excel. He added that partners were one of the key aspects of the platform, and introduced the TimeScape MarketPlace Partner Program to get data, analytics and model vendors, plus software and service vendors, involved and building on the platform. Andrew Tognela (below) of Microsoft (Worldwide Managing Director) closed the presentations...
[Photo]

15 April 2014

Financial Markets Data and Analytics. Everywhere You Need Them.

Very pleased to announce that Xenomorph will be hosting an event, "Financial Markets Data and Analytics. Everywhere You Need Them.", at Microsoft's Times Square New York offices on May 9th.

This breakfast briefing includes Sang Lee of the analyst firm Aite Group offering some great insights from financial institutions into their adoption of cloud technology, applying it to address risk management, data management and regulatory reporting challenges.

Microsoft will be showing how their new Power BI can radically change and accelerate the integration of data for business and IT staff alike, regardless of what kind of data it is, what format it is stored in or where it is located.

And Xenomorph will be introducing the TimeScape MarketPlace, our new cloud-based data mashup service for publishing and consuming financial markets data and analytics. More background and updates on MarketPlace in coming weeks.

In the meantime, please take a look at the event and register if you can come along - it would be great to see you there.

03 March 2014

See you at the A-Team Data Management Summit this week!

Xenomorph is sponsoring the networking reception at the A-Team DMS event in London this week, so if you are attending then I wanted to extend a cordial invite to you to join us for the drinks and networking reception at the end of the day, at 5:30pm on Thursday.

In preparation for Thursday's agenda, the blog links below are a quick reminder of some of the main highlights from last September's DMS:

I will also be speaking on the 2pm panel “Reporting for the C-Suite: Data Management for Enterprise & Risk Analytics”. So if you like what you have heard during the day, come along to the drinks and firm up your understanding with further discussion with like-minded individuals. Alternatively, if you find your brain is so full by then of enterprise data architecture, managed services, analytics, risk and regulation that you can hardly speak, come along and allow your cerebellum to relax and make sense of it all with your favourite beverage in hand. Either way you will leave the event more informed than when you went in...well that’s my excuse and I am sticking with it!

Hope to see you there!

04 November 2013

Risk Data Aggregation and Risk Reporting from PRMIA

Another good event from PRMIA at the Harmonie Club here in NYC last week, entitled Risk Data Aggregation and Risk Reporting - Progress and Challenges for Risk Management. Abraham Thomas of Citi and PRMIA introduced the evening, setting the scene by referring to the BCBS document Principles for effective risk data aggregation and risk reporting, with its 14 principles to be implemented by January 2016 for G-SIBs (Globally Systemically Important Banks) and December 2016 for D-SIBs (Domestically Systemically Important Banks).

The event was sponsored by SAP and they were represented on the panel by Dr Michael Adam, who gave a presentation around risk data management and the problems of having data siloed across many different systems. Maybe unsurprisingly, Michael's presentation had a distinct "in-memory" focus to it, with Michael emphasizing the data analysis speed that is now possible using technologies such as SAP's in-memory database offering HANA.

Following the presentation, the panel discussion started with a debate involving Dilip Krishna of Deloitte and Stephanie Losi of the Federal Reserve Bank of New York. They discussed whether the BCBS document and compliance with it should become a project in itself or part of existing initiatives to comply with data intensive regulations such as CCAR and CVA etc. Stephanie is on the board of the BCBS committee for risk data aggregation and she said that the document should be a guide and not a check list. There seemed to be general agreement on the panel that data architectures should be put together not with a view to compliance with one specific regulation but more as a framework to deal with all regulation to come, a more generalized approach.

Dilip said that whilst technology and data integration are issues, people are the biggest issue in getting a solid data architecture in place. There was an audience question about how different departments need different views of risk and how these were to be reconciled/facilitated. Stephanie said that data security and control of who can see what is an issue, and Dilip agreed, adding that enterprise risk views need to be seen by many, which was a security issue to be resolved.

Don Wesnofske of PRMIA and Dell said that data quality was another key issue in risk. Dilip agreed and added that the front office needs to be involved in this (data management projects are not just for the back office in isolation) and that data quality was one of a number of needs that compete for resources/budget at many banks at the moment. Coming back to his people theme, Dilip also said that data quality needed intuition to be carried out successfully.

An audience question from Dan Rodriguez (of PRMIA and Credit Suisse) asked whether regulation was granting an advantage to "Too Big To Fail" organisations, in that only they have the resources to be able to cope with the ever-increasing demands of the regulators, to the detriment of the smaller financial institutions. The panel did not completely agree with Dan's premise, arguing that smaller organizations were more agile and did not have the legacy and complexity of the larger institutions, so there was probably a sweet spot between large and small from a regulatory compliance perspective (I guess it was interesting that the panel did not deny that regulation was at least affecting the size of financial institutions in some way...).

Again focussing on where resources should be deployed, the panel debated trade-offs such as those between accuracy and consistency. The Legal Entity Identifier (LEI) initiative was thought of as a great start in establishing standards for data aggregation, and the panel encouraged regulators to look at doing more. One audience question was around the different and inconsistent treatment of gross notional and trade accounts. Dilip said that yes this was an issue, but came back to Stephanie's point that what is needed is a single risk data platform that is flexible enough to be used across multiple business and compliance projects.  Don said that he suggests four "views" on risk:

  • Risk Taking
  • Risk Management
  • Risk Measurement
  • Risk Regulation

Stephanie added that organisations should focus on the measures that are most appropriate to their business activity.

The next audience question asked whether the panel thought that the projects driven by regulation had a negative return. Dilip said that his experience was yes, they do have negative returns but this was simply a cost of being in business. Unsurprisingly maybe, Stephanie took a different view advocating the benefits side coming out of some of the regulatory projects that drove improvements in data management.

The final audience question was whether the panel thought it was possible to reconcile all of the regulatory initiatives like Dodd-Frank, Basel III, EMIR etc with operational risk. Don took a data angle to this question, talking about the benefits of big data technologies applied across all relevant data sets, and that any data was now potentially valuable and could be retained. Dilip thought that the costs of data retention were continually going down as data volumes go up, but that there were costs in capturing the data needed for operational risk and other applications. Dilip said that when compared globally across many industries, financial markets were way behind the data capabilities of many sectors, and that finance was more "Tiny Data" than "Big Data", and again he came back to the fact that people were getting in the way of better data management. Michael said that many banks and market data vendors are dealing with data in the tens of terabytes range, whereas the amount of data in the world was around 800-900 petabytes (I thought we were already just over into zettabytes, but what are a few hundred petabytes between friends...).

Abraham closed off the evening, firstly by asking the audience if they thought the 2016 deadline would be achieved by their organisation. Only 3 people out of around 50+ said yes. Not sure if this was simply people's reticence to put their hand up, but when Abraham asked why, one key concern for many was that the target would change by then - my guess is that we are probably back into the territory of the banks not implementing a regulation because it is too vague, and the regulators not being too prescriptive because they want feedback too. So a big game of chicken results, with the banks weighing up the costs/fines of non-compliance against the costs of implementing something big that they can't be sure will be acceptable to the regulators. Abraham then asked the panel for closing remarks: Don said that data architecture was key; Stephanie suggested getting the strategic aims in place but implementing iteratively towards these aims; Dilip said that deciding your goal first was vital; and Michael advised building a roadmap for data in risk.

07 October 2013

#DMSLondon - Big Data, Cloud, In-Memory

Andrew Delaney introduced the second panel of the day, with the long title of "The Industry Response: High Performance Technologies for Data Management - Big Data, Cloud, In-Memory, Meta Data & Big Meta Data". The panel included Rupert Brown of UBS, John Glendenning of Datastax, Stuart Grant of SAP and Pavlo Paska of Falconsoft. Andrew started the panel by asking what technology challenges the industry faced:

  • Stuart said that risk data on-demand was a key challenge, and that there was a related need to collapse the legacy silos of data.
  • Pavlo backed up Stuart by suggesting that accuracy and consistency were needed for all live data.
  • Rupert suggested that there has been a big focus on low latency and fast data, but raised a smile from the audience when he said that he was a bit frustrated by the "format fetishes" in the industry. He then brought the conversation back to some fundamentals from his viewpoint, talking about wholeness of data and namespaces/data dictionaries - Rupert said that naming data had been too stuck in the functional area and not considered more in isolation from the technology.
  • John said that he thought there were too many technologies around at the moment, particularly in the area of Not Only SQL (NoSQL) databases. John seemed keen to push NoSQL, and in particular Apache Cassandra, as post relational databases. He put forward that these technologies, developed originally by the likes of Google and Yahoo, were the way forward and that in-memory databases from traditional database vendors were "papering over the cracks" of relational database weaknesses.
  • Stuart countered John by saying that properly designed in-memory databases had their place, but that some in-memory databases had indeed been designed to paper over the cracks and this was the wrong approach, sometimes exacerbating the problem.
  • Responding to Andrew's questions around whether cloud usage was more accepted by the industry than it had been, Rupert said he thought it was although concerns remain over privacy and regulatory blockers to cloud usage, plus there was a real need for effective cloud data management. Rupert also asked the audience if we knew of any good release management tools for databases (controlling/managing schema versioning etc) because he and his group were yet to find one. 
  • Rupert expressed that Hadoop 2 was of more interest to him at UBS than Hadoop, and as a side note mentioned that MapReduce was becoming more prevalent across NoSQL, not just within the Hadoop domain. Maybe controversially, he said that UBS was using less data than it used to and as such it was not the "big data" organisation people might think it to be. 
  • As one example of the difficulties of dealing with silos, Stuart said that at one client it required the integration of data from 18 different systems to get an overall view of the risk exposure to one counterparty. Stuart advocated bringing the analytics closer to the data, enabling more than one job to be done on one system.
  • Rupert thought that Goldman Sachs and Morgan Stanley seem to do what is the right thing for their firm, laying out a long-term vision for data management. He said that a rethink was needed at many organisations since fundamentally a bank is a data flow.
  • Stuart picked up on this and said that there will be those organisations that view data as an asset and those that view data as an annoyance.
  • Rupert mentioned that in his view accountants and lawyers are getting in the way of better data usage in the industry.
  • Rupert added that data in Excel needed to be passed by reference and not passed by value. This "copy confluence" was wasting disk space and was a source of operational problems for many organisations (a few past posts here and here on this topic).
  • Moving on to describe some of the benefits of semantic data and triple stores, Rupert proposed that the statistical world needed to be added to the semantic world to produce "Analytical Semantics" (see past post relating to the idea of "analytics management").

Great panel, lots of great insight with particularly good contributions from Rupert Brown.

#DMSLondon - What Will Drive Data Management?

The first panel of the day opened with an introductory talk by Chris Johnson of HSBC. Chris started his talk by proudly announcing that he drives a Skoda car, something that to him would have been unthinkable 25 years ago but with investment, process and standards things can and will change. He suggested that data management needs to go through a similar transformation, but that there remained a lot to be done. 

Moving on to the current hot topics of data utilities and managed services, he said that the reduced costs of managed services only became apparent in the long term and that both types of initiative have historically faced issues with:

  • Collaboration
  • Complexity
  • Logistical Challenges and Risks

Chris made the very good point that until service providers accept liability for data quality, clients must always check the data they use. He also mentioned, in relation to Solvency II (a hot topic for Chris at HSBC Security Services), that EIOPA had recently mentioned that managed services may need to be regulated. Chris mentioned the lack of time available to respond to all the various regulatory deadlines faced (a recurring theme) and that the industry still lacked some basic fundamentals such as a standard instrument identifier.

Chris then joined the panel discussion with Andrew Delaney as moderator and with other panelists including Colin Gibson (see previous post), Matt Cox of Denver Perry, Sally Hinds of Data Management Consultancy Services and Robert Hofstetter of Bank J. Safra Sarasin. The key points I took from the panel are outlined below:

  • Sally said that many firms were around Level 3 in the Data Management Maturity Model, and that many were struggling particularly with data integration. Sally added that utilities were new, as was the CDO role, and that the implications for data management were only just playing out.
  • Matt thought that reducing cost was an obvious priority in the industry at the moment, with offshoring playing its part but progress was slow. He believed that data management remains underdeveloped with much more to be done.
  • Colin said that organisations remain daunted by their data management challenges, and that transactional data and derived data present new challenges for data management.
  • Sally emphasised the role of the US FATCA regulation and how it touches upon so many processes and departments including KYC, AML, Legal, Tax etc.
  • Matt highlighted derivatives regulation with the current activity in central clearing, Dodd-Frank, Basel III and EMIR.
  • Chris picked up on this and added Solvency II into the mix (I think you can sense regulation was a key theme...). He expressed the need for and desirability of a Unique Product Identifier (UPI - see report) as essential for the financial markets industry, and how we should not just stand still now that the LEI is coming. He said that industry associations really needed to pick up their game to get more standards in place, but added that the IMA had been quite proactive in this regard. He expressed his frustration at current data licensing arrangements with data vendors, with the insistence on a single point of use being the main issue (a big problem if you are in security services serving your clients, I guess).
  • Robert added that his main issues were data costs and data quality.
  • Andrew then brought the topic around to risk management and its impact on data management.
  • Colin suggested that more effort was needed to understand the data needs of end users within risk management. He also mentioned that products are not all standard and data complexity presents problems that need addressing in data management.
  • Chris mentioned that there are 30 data fields used in Solvency II calculations and that if any are wrong this has a direct impact on the calculated capital charge (i.e. data is important!).
  • Colin got onto the topic of unstructured data and said how it needed to be tagged in some way to become useful. He suggested that there was an embryonic cross-over taking place between structured and unstructured data usage.
  • Sally thought that the merging of Business Intelligence into Data Management was a key development, and that if you have clean data then use it as much as you can.
  • Robert thought that increased complexity in risk management and elsewhere should drive the need for increased automation.
  • Colin thought cost pressures mean that the industry simply cannot afford the old IT infrastructure and that architecture needs to be completely rethought.
  • Chris said that we all need to get the basics right, with the LEI first but then on to the UPI. He said that in his view data management will always be a cost centre, and standardisation was a key element of reducing costs across the industry.
  • Sally thought that governance and ownership of data was woolly at many organisations and needed more work. She added that this needed senior sponsorship and that data management was an ongoing process, not a one-off project.
  • Matt said that the "stick" was very much needed in addition to the carrot, advising that the proponents of improved data management should very much lay out the negative consequences to bring home the reality to business users who might not see the immediate benefits and costs.

Overall good panel, lots of good debate and exchanging of ideas.

 

#DMSLondon - Data Architecture: Sticks or Carrots?

Great day on Thursday at the A-Team Data Management Summit in London (personally not least because Xenomorph won the Best Risk Data Management/Analytics Platform Award but more of that later!). The event kicked off with a brief intro from Andrew Delaney of the A-Team talking through some of the drivers behind the current activity in data management, with Andrew saying that risk and regulation were to the fore. Andrew then introduced Colin Gibson, Head of Data Architecture, Markets Division at Royal Bank of Scotland.

Data Architecture - Sticks or Carrots? Colin began by looking at the definition of "data architecture", showing how the definition on Wikipedia (now obviously the definitive source of all knowledge...) was not particularly clear in his view. He himself suggested that data architecture is composed of two related frameworks:

  • Orderly Arrangement of Parts
  • Discipline 

He said that the orderly arrangement of parts is focussed on business needs and aims, covering how data is sourced, stored, referenced, accessed, moved and managed. On the discipline side, he said that this covered topics such as rules, governance, guides, best practice, modelling and tools.

Colin then put some numbers around the benefits of data management, saying that every dollar spent on centralising data saves 20 dollars, and mentioning a resulting 80% reduction in operational costs. Related to this, he said that every dollar spent on not replicating data saved a dollar on reconciliation tools and a further dollar on the use of those tools (not sure how the two overlap, but these are obviously some of the "carrots" from the title of the talk).

Despite these incentives, Colin added that getting people to actually use centralised reference data remains a big problem in most organisations. He said he thought that people find it too difficult to understand and consume what is there, and faced with a choice they do their own thing as an easier alternative. Colin then talked about a program within RBS called "GoldRush" whereby there is a standard data management library available to all new projects in RBS which contains:

  • messaging standards
  • standard schema
  • update mechanisms

The benefit is that if a project conforms with the above standards then it has little work to do to manage reference data, since all the work is done once and centrally. Colin also mentioned that there needs to be feedback from the projects back to the central data management team around what is missing or needs to be improved in the library (personally I would take it one step further so that end-users, and not just IT projects, have easy discovery of and access to centralised reference data). The lessons he took from this were that we all need to "learn to love" enterprise messaging if we are to get to the top-down publish once/consume often nirvana, where consuming systems can pick up new data and functionality without significant (if any) changes (might be worth a view of this post on this topic). He also mentioned the role of metadata in automating reconciliation where that needed to occur.
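As a toy sketch of the publish once/consume often idea (my own illustration, not the GoldRush library itself), the Python below shows a central hub validating reference data against a standard schema and fanning it out to any number of subscribing systems:

```python
from typing import Callable, Dict, List

# Minimal in-memory illustration of publish once/consume often reference data.
# A real implementation would sit on an enterprise message bus; this only
# shows the shape of the idea.

STANDARD_SCHEMA = {"instrument_id": str, "name": str, "currency": str}

class ReferenceDataHub:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[Dict], None]] = []

    def subscribe(self, handler: Callable[[Dict], None]) -> None:
        """Consuming systems register a handler once and receive every update."""
        self._subscribers.append(handler)

    def publish(self, record: Dict) -> None:
        """Validate against the standard schema, then fan out to all consumers."""
        for field, field_type in STANDARD_SCHEMA.items():
            if not isinstance(record.get(field), field_type):
                raise ValueError(f"record fails standard schema on field '{field}'")
        for handler in self._subscribers:
            handler(record)

if __name__ == "__main__":
    hub = ReferenceDataHub()
    hub.subscribe(lambda rec: print("risk system received:", rec))
    hub.subscribe(lambda rec: print("settlement system received:", rec))
    hub.publish({"instrument_id": "XS0123456789",
                 "name": "Example 5% 2020", "currency": "EUR"})
```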

Colin then mentioned that the allocation of the costs of reference data to consumers is still a hot topic, one where reference data lags behind the market data permissioning/metering insisted upon by exchanges. Related to this, Colin thought that the role of the Chief Data Officer in enforcing policies was important, and that the need for the role was being driven by regulation. He said that the true costs of a tactical, non-standard approach need to be identifiable (quantifying the size of the stick, I guess) but that he had found it difficult to eliminate the tactical use of pricing data sourced for the front office. He ended by mentioning that there needs to be a coming together of market data and reference data, since operations staff are not doing quantitative valuations (e.g. does the theoretical price of this new bond look ok?) and this needs to be done to ensure better data quality and increased efficiency (couldn't agree more, have a look at this article and this post for a few of my thoughts on the matter). Overall a very good speaker, with interesting, practical examples to back up the key points he was trying to get across.

 

25 April 2013

The Anthropology, Sociology, and Epistemology of Risk

Background - I went along to my first PRMIA event in Stamford, CT last night, with the rather grandiose title of "The Anthropology, Sociology, and Epistemology of Risk". Stamford is about 30 miles north of Manhattan and is home to major offices of a number of financial markets companies such as Thomson Reuters, RBS and UBS (who apparently have the largest column-less trading floor in the world at their Stamford headquarters - a particularly useful piece of trivia for you there...). It also happens to be about a 5 minute drive/train journey away from where I now live, so easy for me to get to (thanks for another useful piece of information I hear you say...). Enough background, more on the event, which was a good one with five risk managers involved in an interesting and sometimes philosophical discussion on fundamentally what "risk management" is all about.

Introduction - Marc Groz, who heads the Stamford Chapter of PRMIA, introduced the evening and started by thanking Barry Schwimmer for allowing PRMIA to use the Stamford Innovation Centre (the Old Town Hall) for the meeting. Henrik Neuhaus moderated the panel, and started by outlining the main elements of the event title as a framework for the discussion:

  • Anthropology - risk management is to what purpose?
  • Sociology - how does risk management work?
  • Epistemology - what knowledge is really contained within risk management?

Henrik started by taking a passage about anthropology and replacing human "development" with "risk management", which seemed to fit ok, although the angle I was expecting was much more about human behaviour in risk management than where Henrik started. Henrik asked the panel what results they had seen from risk management and what that implied about risk management. The panelists seemed a little confused or daunted by the question, prompting one of them to ask "Is that the question?".

Business Model and Risk Culture - Elliot Noma dived in by responding that the purpose of risk management obviously depended very much on the institutional goals of the organization. He said that it was as much about what you are forced to do as what you try to do in risk management. Elliot said that the sell-side view of risk management was very regulatory and capital focused, whereas mutual funds are looking more at risk relative to benchmarks and performance attribution. He added that in the alternatives (hedge-fund) space there were no benchmarks and the focus was more about liquidity and event risk.

Steve Greiner said that it was down to the investment philosophy and how risk is defined and measured. He praised some asset managers where the risk managers sit across from the portfolio managers and are very much involved in the decision making process.

Henrik asked whether any of the panel had ever defined a “mission statement” for risk management. Marc Groz chipped in that he remembered he had once defined one, and that it was very different from what others in the institution were expecting, and indeed very different from the risk management that he and his department subsequently undertook.

Mark Szycher (of GM Pension Fund) said that risk management split into two areas for him, the first being the symmetrical risks where you need to work out the range of scenarios for a particular trade or decision being taken. The second was the more asymmetrical risks (i.e. downside only) such as those found in operational risk where you are focused on how best to avoid them happening.

Micro Risk Done Well - Santa Federico said that he had experience of some of the major problems experienced at institutions such as Merrill Lynch, Salomon Brothers and MF Global, and that he thought risk management was much more of a cultural problem than a technical one. Santa said he thought that the industry was actually quite good at the micro (trade, portfolio) risk management level, but obviously less effective at the large systematic/economic level. Mark asked Santa what was the nature of the failures he had experienced. Santa said that the risks were well modeled, but maybe the assumptions around macro variables such as the housing market proved to be extremely poor.

Keep Dancing? - Henrik asked the panel what might be done better. Elliot made the point that some risks are just in the nature of the business. If a risk manager did not like placing a complex illiquid trade and the institution was based around trading in illiquid markets, then what is a risk manager to do? He quoted the Citi executive who said “whilst the music is still playing we have to dance”. Again he came back to the point that the business model of the institution drives its culture and the emphasis of its risk management (I guess I see what Elliot was saying, but taken one way it implied that regardless of what was going on risk management needs to fit in with it, whereas I am sure that he meant that risk managers must fit in with the business model mandated to shareholders).

Risk Attitudes in the USA - Mark said that risk managers need to recognize that the improbable is maybe not so improbable, and should be more prepared for the worst rather than managing risk only under “normal” market and institutional behavior. Steven thought that a cultural shift was happening, where not losing money was becoming as important to an organization as gaining money. He said that in his view, Europe and Asia had a stronger risk culture than the United States, with much more consensus, involvement and even control over the trading decisions taken. Put another way, the USA has more of a culture of risk taking than Europe. (I have my own theories on this. Firstly, I think that people are generally much greater risk takers in the USA than in the UK/Europe, possibly influenced in part by the relative lack of an underlying social safety net – whilst this is not for everyone, I think it produces a very dynamic economy as a result. Secondly, I do not think that the cultural desire in the USA for the much admired “presidential” leader is necessarily the best environment for sound, consensus-based risk management. I would also like to acknowledge that neither of my two points above seems to have protected Europe much from the worst of the financial crisis, so it is obviously a complex issue!)

Slaves to Data? - Henrik asked whether the panel thought that risk managers were slaves to data? He expanded upon this by asking what kinds of firms encourage qualitative risk management and not just risk management based on Excel spreadsheets? Santa said that this kind of qualitative risk management occurred at a business level and less so at a firm wide level. In particular he thought this kind of culture was in place at many hedge funds, and less so at banks. He cited one example from his banking career in the 1980's, where his immediate boss was shouted off the trading floor by the head of desk, saying that he should never enter the trading floor again (oh those were the days...). 

Sociology and Credibility - Henrik took a passage on the historic development of women's rights and replaced the word "women" with "risk management" to illustrate the challenges risk management faces in trying to get more say and involvement at financial institutions. He asked who the CRO should report to - a CEO? A CIO? Or a board member? Elliot responded by saying this was really an issue around credibility with the business for risk managers and risk management in general. He made the point that often Excel and numbers were used to establish credibility with the business. Elliot added that risk managers with trading experience obviously had more credibility, and to some extent where the CRO reported to was dependent upon the credibility of risk management with the business.

Trading and Risk Management Mindsets - Elliot expanded on his previous point by saying that the risk management mindset thinks more in terms of unconditional distributions and tries to learn from history. He contrasted this with the "conditional mindset" of a trader, where the time horizon forwards (and backwards) is rarely longer than a few days and the belief that a trade will work today given it worked yesterday is strong. Elliot added that in assisting the trader, the biggest contribution risk managers can make is to be challenging/helpful on the qualitative side rather than just the quantitative.

Compensation and Transactions - Most of the panel seemed to agree that compensation package structure was a huge influencer in the risk culture of an organisation. Mark touched upon a pet topic of mine, which is that it is very hard for a risk manager to gain credibility (and compensation) when what risk management is about is what could happen as opposed to what did happen. A risk manager blocking a trade due to some potentially very damaging outcomes will not gain any credibility with the business if the trading outcome for the suggested trade just happened to come out positive. There seemed to be consensus here that some of the traditional compensation models that were based on short-term transactional frequency and size were ill-formed (given the limited downside for the individual), and whilst the panel reserved judgement on the effectiveness of recent regulation, moves towards longer-term compensation were to be welcomed from a risk perspective.

MF Global and Business Models - Santa described some of his experiences at MF Global, where Corzine moved what was essentially a broker into taking positions in European Sovereign Bonds. Santa said that the risk management culture and capabilities were not present to be robust against senior management for such a business model move. Elliot mentioned that he had been courted for trades by MF Global and had been concerned that they did not offer electronic execution and told him that doing trades through a human was always best. Mark said that in the area of pension fund management there was much greater fiduciary responsibility (i.e. behave badly and you will go to jail) and maybe that kind of responsibility had more of a place in financial markets too. Coming back to the question of who a CRO should report to, Mark also said that questions should be asked to seek out those who are 1) less likely to suffer from the "agency" problem of conflicts of interest and, on a related note, those who are 2) less likely to have personal biases towards particular behaviours or decisions.

Santa said that in his opinion hedge funds in general had a better culture where risk management opinions were heard and advice taken. Mark said that risk managers who could get the business to accept moral persuasion were in a much stronger position to add value to the business rather than simply being able to "block" particular trades. Elliot cited one experience he had where the traders under his watch noticed that a particular type of trade (basis trades) did not increase their reported risk levels, and so became more focussed on gaming the risk controls to achieve high returns without (reported) risk. The panel seemed to be in general agreement that risk managers with trading experience were more credible with the business but also more aware of the trader mindset and behaviors. 

Do we know what we know? - Henrik moved to his third and final subsection of the evening, asking the panel whether risk managers really know what they think they know. Elliot said that traders and risk managers speak different languages, with traders living in the now, thinking only of the implications of possible events such as those we have seen with Cyprus or the fiscal cliff, whereas the risk management view is much less conditioned and more historical. Steven re-emphasised the earlier point that risk management at this micro trading level was fine, but this was not what caused events such as the collapse of MF Global.

Rational argument isn't communication - Santa said that most risk managers come from a quant (physics, maths, engineering) background and like structured arguments based upon well understood rational foundations. He said that this way of thinking was alien to many traders and as such it was a communication challenge for risk managers to explain things in a way that traders would actually put some time to considering. On the modelling side of things, Santa said that sometimes traders dismissed models as being "too quant" and sometimes traders followed models all too blindly, without questioning or understanding the simplifying assumptions they are based on. Santa summarised by saying that risk management needs to be intuitive for traders and not just academically based. Mark added that a quantitative focus can sometimes become too narrow (modeler's manifesto anyone?) and made the very profound point that unfortunately precision often wins over relevance in the creation and use of many models. Steven added that traders often deal with absolutes, such as knowing the spread between two bonds to the nearest basis point, whereas a risk manager approaching them with a VaR number really means that this is an estimated VaR which should be thought of as sitting within a range of values. This is alien to the way traders think and hence harder to explain.
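To make Steven's point a little more concrete (my own illustration, not something shown at the event), the Python sketch below calculates a historical VaR figure and then uses a simple bootstrap to show the range of values that single number really sits within:

```python
import random

# Illustrative only: a historical VaR figure is an estimate, and resampling
# the P&L history gives a feel for the range that estimate sits within.

def historical_var(pnl, confidence=0.99):
    """Return the loss threshold exceeded (1 - confidence) of the time."""
    losses = sorted(-p for p in pnl)          # losses as positive numbers
    index = int(confidence * len(losses)) - 1
    return losses[max(index, 0)]

def bootstrap_var_range(pnl, confidence=0.99, trials=1000, seed=42):
    """Resample the P&L history to put a rough range around the VaR figure."""
    rng = random.Random(seed)
    estimates = sorted(
        historical_var([rng.choice(pnl) for _ in pnl], confidence)
        for _ in range(trials)
    )
    return estimates[int(0.05 * trials)], estimates[int(0.95 * trials)]

if __name__ == "__main__":
    rng = random.Random(0)
    pnl = [rng.gauss(0.0, 1_000_000.0) for _ in range(500)]   # made-up daily P&L
    point = historical_var(pnl)
    low, high = bootstrap_var_range(pnl)
    print(f"99% 1-day VaR estimate: {point:,.0f} (roughly {low:,.0f} to {high:,.0f})")
```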

Unanticipated Risk - An audience member asked whether risk management should focus mainly on unanticipated risks rather than "normal" risks. Elliot said that in his trading he was always thinking and checking whether the markets were changing or continuing with their recent near-term behaviour patterns. Steven said that history was useful to risk management when markets were "normal", but in times of regime shifts this was not the case, and cited the example of the change in markets when Mario Draghi announced that the ECB would stand behind the Euro and its member nations.

Risky Achievements - Henrik closed the panel by asking each member what they thought was their own greatest achievement in risk management. Elliot cited a time when he identified that a particular hedge fund had a relatively inconspicuous position/trade that was potentially extremely dangerous, and he was proved correct when the fund closed down due to this. Steven said he was proud of some good work he and his team did on stress testing involving Greek bonds and the Eurozone. Santa said that some of the work he had done on portfolio "risk overlays" was good. Mark ended the panel by saying that he thought his biggest achievement was when the traders and portfolio managers started to come to the risk management department to ask opinions before placing key trades. Henrik and the audience thanked the panel for their input and time.

An Insured View - After the panel closed I spoke with an actuary who said that he had greatly enjoyed the panel discussions, but was surprised that when talking of how best to support the risk management function in being independent and giving "bad" news to the business, the role of auditors was not mentioned. He said he felt that auditors were a key support to insurers in ensuring any issues were allowed to come to light. So food for thought there as to whether financial markets can learn from other industry sectors.

Summary - great evening of discussion, only downside being the absence of wine once the panel had closed!

 


27 March 2013

Spreadsheet control and contagion

Just saw a reference on LinkedIn to this FT article, "Finance groups lack spreadsheet controls". I started to write a quick response and, given it is one of my major hobby-horses, ended up doing a bit of an essay, so I decided to post it here too:

"As many people have pointed out elsewhere, much of the problem with spreadsheet usage is that they are not treated as a corporate and IT asset, and as such things like testing, peer review and general QA are not applied (mind you, maybe more of that should still be applied to many mainstream software systems in financial markets...). 

Ralph and the guys at Cluster Seven do a great job in helping institutions to manage and monitor spreadsheet usage (I like Ralph's "we are CCTV for spreadsheets" analogy), but I think a fundamental (and often overlooked) consideration is to ask yourself why the business users involved decided that they needed spreadsheets to manage trading and risk in the first place. It is a bit like trying to address the symptoms of an illness without ever considering how we got the illness in the first place. 

Excel is a great tool, but to quote Spider-Man "with great power comes great responsibility" and I guess we can all see the consequences of not taking the usage of spreadsheets seriously and responsibly. So next time the trader or risk manager says "we've just built this really great model in Excel" ask them why they built it in Excel, and why they didn't build upon the existing corporate IT solutions and tools. In these cost- and risk- conscious times, I think the answers would be interesting..."

 

14 February 2013

Analytics Strategy from Numerix

Good post from Jim Jockle over at Numerix - the main theme is around having an "analytics" strategy in place in addition to (and probably as part of) a "Big Data" strategy. It fits strongly with Xenomorph's ideas on having both data management and analytics management in place (a few posts on this in the past, try this one from a few years back) - analytics generate the most valuable data of all, yet the data generated by analytics and the input data that supports analytics is largely ignored as being too business focussed for many data management vendors to deal with, and too low level for many of the risk management system vendors to deal with. Into this gap in functionality falls the risk manager (supported by many spreadsheets!), who has to spend too much time organizing and validating data, and too little time on risk management itself.

Within risk management, I think it comes down to having the appropriate technical layers in place: data management, analytics/pricing management and risk model management. OK, it is a greatly simplified representation of the architecture needed (apologies to any techies reading this), but the majority of financial institutions do not have these distinct layers in place, with each of these layers providing easy "business user" access to allow risk managers to get to the "detail" of the data when regulators, auditors and clients demand it. Regulators are finally waking up to the data issue (see Basel on data aggregation for instance), but more work is needed to pull analytics into the technical architecture/strategy conversation, and not just confine regulatory discussions of pricing analytics to model risk. 
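For the more technically minded, here is a deliberately over-simplified Python sketch of the layering idea (illustrative only, not any particular product), showing how separate data, analytics and risk model layers let a risk manager drill from a risk number back down to the data behind it:

```python
# Deliberately over-simplified sketch of the three layers discussed above:
# data management, analytics/pricing management and risk model management.

class DataLayer:
    """Holds validated market and reference data."""
    def __init__(self, prices):
        self.prices = prices                      # e.g. {"BOND_A": [100.0, ...]}

    def get_history(self, instrument):
        return self.prices[instrument]

class AnalyticsLayer:
    """Turns raw data into derived analytics, recording what it used."""
    def __init__(self, data):
        self.data = data

    def daily_returns(self, instrument):
        hist = self.data.get_history(instrument)
        returns = [(b - a) / a for a, b in zip(hist, hist[1:])]
        return {"instrument": instrument, "returns": returns, "inputs": hist}

class RiskModelLayer:
    """Consumes analytics to produce figures a regulator or auditor can drill into."""
    def __init__(self, analytics):
        self.analytics = analytics

    def worst_daily_return(self, instrument):
        result = self.analytics.daily_returns(instrument)
        return {"measure": min(result["returns"]), "lineage": result}

if __name__ == "__main__":
    data = DataLayer({"BOND_A": [100.0, 100.5, 99.8, 100.2]})
    risk = RiskModelLayer(AnalyticsLayer(data))
    report = risk.worst_daily_return("BOND_A")
    print("risk measure:", report["measure"])
    print("underlying data:", report["lineage"]["inputs"])
```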

08 February 2013

Big Data – What is its Value to Risk Management?

A little late on these notes from this PRMIA Event on Big Data in Risk Management that I helped to organize last month at the Harmonie Club in New York. Big thank you to my PRMIA colleagues for taking the notes and for helping me pull this write-up together, plus thanks to Microsoft and all who helped out on the night.

Introduction: Navin Sharma (of Western Asset Management and Co-Regional Director of PRMIA NYC) introduced the event and began by thanking Microsoft for its support in sponsoring the evening. Navin outlined how he thought the advent of “Big Data” technologies was very exciting for risk management, opening up opportunities to address risk and regulatory problems that previously might have been considered out of reach.

Navin defined Big Data as structured or unstructured data received at high volumes and requiring very large data storage. Its characteristics include a high velocity of record creation, extreme volumes, a wide variety of data formats, variable latencies, and complexity of data types. Additionally, he noted that relative to other industries, financial services has in the past created perhaps the largest historical sets of data and continually creates enormous amounts of data on a daily or moment-by-moment basis. Examples include options data, high frequency trading, and unstructured data such as that from social media. Its usage provides potential competitive advantages in trading and investment management. Also, by using Big Data it is possible to have faster and more accurate recognition of potential risks via seemingly disparate data - leading to timelier and more complete risk management of investments and firms’ assets. Finally, the use of Big Data technologies is in part being driven by regulatory pressures from Dodd-Frank, Basel III, Solvency II, the Markets in Financial Instruments Directives (1 & 2) as well as the Markets in Financial Instruments Regulation.

Navin also noted that we will seek to answer questions such as:

  • What is the impact of big data on asset management?
  • How can Big Data’s impact enhance risk management?
  • How is big data used to enhance operational risk?

Presentation 1: Big Data: What Is It and Where Did It Come From?: The first presentation was given by Michael Di Stefano (of Blinksis Technologies), and was titled “Big Data. What is it and where did it come from?”. You can find a copy of Michael’s presentation here. In summary, Michael started by saying that there are many definitions of Big Data, mainly defined as technology that deals with data problems that are either too large, too fast or too complex for conventional database technology. Michael briefly touched upon the many different technologies within Big Data such as Hadoop, MapReduce and databases such as Cassandra and MongoDB. He described some of the origins of Big Data technology in internet search, social networks and other fields. Michael described the “4 V’s” of Big Data - Volume, Velocity, Variety and Value - with a key point from Michael being the “time to Value” in terms of what you are using Big Data for. Michael concluded his talk with some business examples around the use of sentiment analysis in financial markets and the application of Big Data to real-time trading surveillance.
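For anyone unfamiliar with the MapReduce pattern Michael touched upon, the Python below is a minimal single-machine sketch of the idea (in Hadoop the map and reduce phases are distributed across a cluster) - here simply counting ticker mentions across a few news snippets, with the tickers and snippets invented for illustration:

```python
from collections import defaultdict
from functools import reduce

# Single-machine sketch of the MapReduce pattern: in Hadoop the map and
# reduce phases run in parallel across a cluster, but the shape is the same.

def map_phase(document):
    """Emit (key, 1) pairs - here, one pair per ticker mentioned in a snippet."""
    tickers = {"AAPL", "MSFT", "XOM"}             # invented watch list
    return [(word, 1) for word in document.upper().split() if word in tickers]

def shuffle(mapped_pairs):
    """Group values by key, as the framework does between map and reduce."""
    grouped = defaultdict(list)
    for pairs in mapped_pairs:
        for key, value in pairs:
            grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Combine the values for one key - here, a simple count of mentions."""
    return key, reduce(lambda a, b: a + b, values, 0)

if __name__ == "__main__":
    snippets = [
        "MSFT up on cloud growth",
        "AAPL and MSFT lead tech higher",
        "XOM slips as oil falls",
    ]
    grouped = shuffle(map(map_phase, snippets))
    print(dict(reduce_phase(k, v) for k, v in grouped.items()))
```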

Presentation 2: Big Data Strategies for Risk Management: The second presentation “Big Data Strategies for Risk Management” was introduced by Colleen Healy of Microsoft (presentation here). Colleen started by saying expectations of risk management are rising, and that prior to 2008 not many institutions had a good handle on the risks they were taking. Risk analysis needs to be done across multiple asset types, more frequently and at ever greater granularity. Pressure is coming from everywhere including company boards, regulators, shareholders, customers, counterparties and society in general. Colleen used to head investor relations at Microsoft and put forward a number of points:

  • A long line of sight of one risk factor does not mean that we have a line of sight on other risks around.
  • Good risk management should be based on simple questions.
  • Reliance on 3rd parties for understanding risk should be minimized.
  • Understand not just the asset, but also at the correlated asset level.
  • The world is full of fast markets driving even more need for risk control
  • Intraday and real-time risk now becoming necessary for line of sight and dealing with the regulators
  • Now need to look at risk management at a most granular level.

Colleen explained some of the reasons why good risk management remains a work in progress, and said that data is a key foundation for better risk management. However, data has been hard to access, analyze, visualize and understand, and she used this point to link to the next part of the presentation by Denny Yu of Numerix.

Denny explained that new regulations involving measures such as Potential Future Exposure (PFE) and Credit Value Adjustment (CVA) were moving the number of calculations needed in risk management to a level well above that required by methodologies such as Value at Risk (VaR). Denny illustrated how a typical VaR calculation on a reasonably sized portfolio might need 2,500,000 instrument valuations and how PFE might require as many as 2,000,000,000. He then explained more of the architecture he would see as optimal for such a process and illustrated some of the analysis he had done using Excel spreadsheets linked to Microsoft’s high performance computing technology.
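The difference in scale is easier to see with some back-of-the-envelope arithmetic - note that the position, scenario and time-step counts below are my own assumptions chosen to reproduce the headline figures above, not numbers taken from Denny's presentation:

```python
# Back-of-the-envelope arithmetic only: the position, scenario and time-step
# counts are assumptions picked to reproduce the headline figures quoted above.

positions = 5_000

# Historical VaR: revalue every position under each historical scenario.
var_scenarios = 500
print(f"VaR valuations: {positions * var_scenarios:,}")              # 2,500,000

# PFE: revalue every position on every Monte Carlo path at every future date.
mc_paths = 2_000
time_steps = 200
print(f"PFE valuations: {positions * mc_paths * time_steps:,}")      # 2,000,000,000
```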

Presentation 3: Big Data in Practice: Unintentional Portfolio Risk: Kevin Chen of Opera Solutions gave the third presentation, titled “Unintentional Risk via Large-Scale Risk Clustering”. You can find a copy of the presentation here. In summary, the presentation was quite visual, illustrating how large-scale empirical analysis of portfolio data could produce some interesting insights into portfolio risk and how risks become “clustered”. In many ways the analysis was reminiscent of an empirical form of principal component analysis, i.e. where you can see and understand more about your portfolio’s risk without actually being able to relate the main factors directly to any traditional factor analysis. 

Panel Discussion: Brian Sentance of Xenomorph and the PRMIA NYC Steering Committee then moderated a panel discussion. The first question was directed at Michael: “Is the relational database dead?” Michael replied that in his view relational databases were not dead, and indeed for problems well-suited to relational representation they were still, and would continue to be, very good. Michael said that NoSQL/Big Data technologies were complementary to relational databases, dealing with new types of data and new sizes of problem that relational databases are not well designed for. Brian asked Michael whether the advent of these new database technologies would drive the relational database vendors to extend the capabilities and performance of their offerings. Michael replied that he thought this was highly likely, but only time would tell whether this approach would be successful given the innovation in the market at the moment. Colleen Healy added that the advent of Big Data did not mean throwing out established technology, but rather integrating established technology with the new, such as Microsoft SQL Server working with the Hadoop framework.

Brian asked the panel whether they thought visualization would make a big impact within Big Data? Ken Akoundi said that the front end applications used to make the data/analysis more useful will evolve very quickly. Brian asked whether this would be reminiscent of the days when VaR first appeared, when a single number arguably became a false proxy for risk measurement and management? Ken replied that the size of the data problem had increased massively from when VaR was first used in 1994, and that visualization and other automated techniques were very much needed if the headache of capturing, cleansing and understanding data was to be addressed.

Brian asked whether Big Data would address the data integration issue of siloed trading systems? Colleen replied that Big Data needs to work across all the silos found in many financial organizations, or it isn’t “Big Data”. There was general consensus from the panel that legacy systems and people politics were also behind some of the issues found in addressing the data silo issue.

Brian asked if the panel thought the skills needed in risk management would change due to Big Data. Colleen replied that effective Big Data solutions require all kinds of people, with skills across a broad range of specific disciplines such as visualization. Generally the panel thought that data and data analysis would play an increasingly important part in risk management. Ken put forward his view that all Big Data problems should start with a business problem, not just a technology focus; for example, are there better ways to predict stock market movements based on consuming larger and more diverse sources of information? In terms of risk management skills, Denny said that risk management of 15 years ago was based on relatively simple econometrics. Fast forward to today, and risk calculations such as CVA are statistically and computationally very heavy, and trading is increasingly automated across all asset classes. As a result, Denny suggested that even the PRMIA PRM syllabus should change to focus more on data and data technology given the importance of data to risk management.

Asked how Big Data should best be applied, Denny echoed Ken in saying that understanding the business problem first was vital, but that Big Data obviously opened up the capability to aggregate and work with larger datasets than ever before. Brian then asked what advice the panel would give to risk managers faced with an IT department about to embark upon using Big Data technologies. Assuming that the business problem is well understood, Michael said that the business needed some familiarity with the broad concepts of Big Data, what it can and cannot do and how it fits with more mainstream technologies. Colleen said that there are some problems that only Big Data can solve, so understanding the technical need is a first checkpoint. Obviously IT people like working with new technologies and this needs to be monitored, but so long as the business problem is defined and valid for Big Data, people should be encouraged to learn new technologies and new skills. Kevin also took a very positive view that IT departments should be encouraged to experiment with these new technologies and understand what is possible, but that projects should have well-defined assessment/cut-off points, as with any good project management, to decide if the project is progressing well. Ken put forward that many IT staff were new to the scale of the problems being addressed with Big Data, and that his own company Opera Solutions had an advantage in its deep expertise of large-scale data integration, which allowed it to deliver more quickly on project timelines.

Audience Questions: There then followed a number of audience questions. The first few related to other ideas/kinds of problems that could be analyzed using the kind of modeling that Opera had demonstrated; Ken said that there were obvious extensions that Opera had not got around to doing just yet. One audience member asked how well all the Big Data analysis could be aggregated/presented to make it understandable and usable to humans. Denny suggested that it was vital that such analysis was made accessible to the user, and there was general consensus across the panel that man vs. machine was an interesting issue to develop in considering what is possible with Big Data. The next audience question was around whether all of this data analysis was affordable from a practical point of view. Brian pointed out that there was a lot of waste in current practices in the industry, with wasteful duplication of ticker plants and other data across many financial institutions, large and small. This duplication is driven primarily by the perceived need to implement each institution’s proprietary analysis techniques; this kind of customization is not yet available from the major data vendors, but will become more possible as cloud technology such as Microsoft’s Azure develops further. There was a lot of audience interest in whether Big Data could lead to better understanding of causal relationships in markets rather than simply correlations. The panel responded that causal relationships were harder to understand, particularly in a dynamic market with dynamic relationships, but that insight into correlation was at the very least useful and could lead to better understanding of the drivers as more datasets are analyzed.

 

22 January 2013

Chartis Research - Data Management for Risk White Paper

New whitepaper on data management for risk from the analysts Chartis Research, including a section on how Xenomorph's TimeScape solution addresses some of the key issues identified.

16 October 2012

The Missing Data Gap

Getting to the heart of "Data Management for Risk", PRMIA held an event entitled "Missing Data for Risk Management Stress Testing" at Bloomberg's New York HQ last night. For those of you unfamiliar with the topic of "Data Management for Risk", the following diagram may help to explain how the topic concerns all the data sets feeding the VaR and scenario engines.

Data-Flow-for-Risk-Engines
I have a vested interest in saying this (and please forgive the product placement in the diagram above, but hey this is what we do...), but the topic of data management for risk seems to fall into a functionality gap between: i) the risk system vendors, who typically seem to assume that the world of data is perfect and that the topic is too low level to concern them, and ii) the traditional data management vendors, who seem to regard things like correlations, curves, spreads, implied volatilities and model parameters as too business domain focussed (see previous post on this topic). As a result, the risk manager is typically left with ad-hoc tools like spreadsheets and other analytical packages to perform data validation and filling of any missing data found. These ad-hoc tools are fine until the data universe grows larger, leading to the regulators becoming concerned about just how much data is being managed "out of system" (see past post for some previous thoughts on spreadsheets).

The Crisis and Data Issues. Anyway, enough background; on to some of the issues raised at the event. Navin Sharma of Western Asset Management started the evening by saying that pre-crisis people had a false sense of security around Value at Risk, and that the crisis showed that data is not reliably smooth in nature. Post-crisis, questions obviously arise around how much data to use, how far back to go and whether you include or exclude extreme periods like the crisis. Navin also suggested that the boards of many financial institutions were now much more open to reviewing scenarios put forward by the risk management function, whereas pre-crisis their attention span was much more limited.

Presentation. Don Wesnofske did a great presentation on the main issues around data and data governance in risk (which I am hoping to link to here shortly...)

Issues with Sourcing Data for Risk and Regulation. Adam Litke of Bloomberg asked the panel what new data sourcing challenges were resulting from the current raft of regulation being implemented. Barry Schachter cited a number of Basel-related examples. He said that the costs of rolling up loss data across all operations were prohibitive, and hence there were data truncation issues to be faced when assessing operational risk. Barry mentioned that liquidity calculations were new and presenting data challenges. Non-centrally cleared OTC derivatives also presented data challenges, with initial margin calculations based on stressed VaR. Whilst on the subject of stressed VaR, Barry said that there were a number of missing data challenges, including obtaining past histories and modelling current instruments that did not exist in past stress periods. He said it was telling on this subject that the Fed had decided to exclude tier 2 banks from stressed VaR calculations, on the basis that it did not think these institutions were in a position to calculate these numbers given the data and systems they had in place.

Barry also mentioned the challenges of Solvency II for insurers (and their asset managers) and said that this was a huge exercise in data collection. He said that there were obvious difficulties in modelling hedge fund and private equity investments, and that the regulation penalised the use of proxy instruments where there was limited "see-through" to the underlying investments. Moving on to UCITS IV, Barry said that the regulation required VaR calculations to be regularly reviewed on an ongoing basis, and he pointed out one issue with much of the current regulation in that it uses ambiguous terms such as models of "high accuracy" (I guess the point being that accuracy is always arguable/subjective for an illiquid security).

Sandhya Persad of Bloomberg said that there were many practical issues to consider, such as exchanges that close at different times and the resultant misalignment of closing data, problems dealing with holiday data across different exchanges and countries, and sourcing of factor data for risk models from analysts. Navin expanded more on his theme of which periods of data to use. Don took a different tack, and emphasised the importance of getting the fundamental data of client-contract-product in place, and suggested that this was still a big challenge at many institutions. Adam closed the question by pointing out the data issues in everyday mortgage insurance as an example of how prevalent data problems are.

What Missing Data Techniques Are There? Sandhya explained a few of the issues she and her team face at Bloomberg when making decisions about what data to fill. She mentioned the obvious issue of the distance between missing data points and the preceding data used to fill them. Sandhya mentioned that one approach to missing data is to reduce factor weights down to zero for factors without data, but this gives rise to a data truncation issue. She said that there were a variety of statistical techniques that could be used; she mentioned adaptive learning techniques and then described some of the work that one of her colleagues had been doing on maximum-likelihood estimation, whereby in addition to achieving consistency with the covariance matrix of "near" neighbours, the estimation also had greater consistency with the historical behaviour of the factor or instrument over time.
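
For readers curious what "consistency with the covariance matrix of near neighbours" can look like in practice, below is a minimal, generic sketch of a Gaussian conditional-expectation fill. It is illustrative only and not Bloomberg's actual maximum-likelihood methodology.

```python
# Minimal sketch of filling a missing return using the conditional expectation
# implied by the covariance of "near" neighbours. Generic Gaussian imputation,
# with simulated data; not Bloomberg's actual methodology.
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily returns for three correlated instruments
true_cov = np.array([[1.0, 0.8, 0.6],
                     [0.8, 1.0, 0.7],
                     [0.6, 0.7, 1.0]]) * 1e-4
returns = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=250)

cov = np.cov(returns, rowvar=False)

# Suppose instrument 0 is missing today, but instruments 1 and 2 are observed
observed = np.array([0.012, 0.009])

# Conditional mean: E[x0 | x1, x2] = Cov(0, obs) @ Cov(obs, obs)^-1 @ observed
cov_0_obs = cov[0, 1:]
cov_obs_obs = cov[1:, 1:]
fill_value = cov_0_obs @ np.linalg.solve(cov_obs_obs, observed)
print(f"Imputed return for instrument 0: {fill_value:.4%}")
```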

Navin commented that fixed income markets were not as easy to deal with as equity markets in terms of data, and that at sub-investment grade there is very little data available. He said that heuristic models were often needed, and suggested that there was a need for "best practice" to be established for fixed income, particularly in light of guidelines from regulators that are at best ambiguous.

I think Barry then made some great comments about data and data quality in saying that risk managers need to understand more about the effects (or lack of them) that input data has on the headline reports produced. The reason I say great is that I think there is often a disconnect, or lack of knowledge, around the effects that input data quality can have on the output numbers produced. Whilst regulators increasingly want data "drill-down" and justification of any data used to calculate risk, it is still worth understanding whether output results are greatly sensitive to the input numbers, or whether related aspects such as data consistency ought to have more emphasis than, say, absolute price accuracy. For example, data quality was being discussed at a recent market data conference I attended and only about 25% of the audience said that they had ever investigated the quality of the data they use. Barry also suggested that you need to understand for what purpose the numbers are being used and what effect the numbers have on the decisions you take. I think here the distinction was around usage in risk, where changes/deltas might be more important, whereas in calculating valuations or returns price accuracy might receive more emphasis.
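
As a simple illustration of the kind of sensitivity check Barry's comment points towards, the sketch below perturbs a price history with small random errors and measures how much a 99% historical VaR moves. The price series and noise levels are illustrative assumptions.

```python
# Minimal sketch: how sensitive is a headline (historical, 99%) VaR number to
# errors in the input price history? All inputs are simulated and illustrative.
import numpy as np

rng = np.random.default_rng(2)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, size=500))  # clean price series

def hist_var(price_series, confidence=0.99):
    returns = np.diff(price_series) / price_series[:-1]
    return -np.quantile(returns, 1 - confidence)  # loss quantile, as a positive number

clean_var = hist_var(prices)

# Re-run VaR with random price errors of increasing size
for error_bps in (1, 10, 50):
    noisy = prices * (1 + rng.normal(0, error_bps / 10_000, size=prices.shape))
    noisy_var = hist_var(noisy)
    print(f"{error_bps:>3} bps input noise: VaR moves "
          f"{(noisy_var - clean_var) / clean_var:+.1%}")
```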

How Extensive is the Problem? General consensus from the panel was that the issue's importance needed to be understood better (I guess my experience is that the regulators can make data quality important for a bank if they say that input data issues are the main reason for blocking approval of an internal model for regulatory capital calculations). Don said that any risk manager needed to be able to justify why particular data points were used, and there was further criticism from the panel around regulators asking for high quality without specifying what this means or what needs to be done.

Summary - My main conclusions:

  • Risk managers should know more about how and in what ways input data quality affects output reports
  • Be aware of how your approach to data can affect the decisions you take
  • Be aware of the context of how the data is used
  • Regulators set the "high quality" agenda for data but don't specify what "high quality" actually is
  • Risk managers should not simply accept regulatory definitions of data quality and should join in the debate

Great drinks and food afterwards (thanks Bloomberg!) and a good evening was had by all, with a topic that needs further discussion and development.

 

 

11 September 2012

We Can’t Upgrade, the Data Model’s Changed!

New article with some of my thoughts on data models, interfaces and software upgrades has just gone up on the Waters Inside Reference Data site.

 

17 July 2012

Charting, Heatmaps and Reports for TimeScape, plus new Query Explorer

We have a great new software release out today for TimeScape, Xenomorph's analytics and data management solution, more details of which you can find here. For some additional background to this release then please take a read below.

For many users of Xenomorph's TimeScape, our Excel interface to TimeScape has been a great way of extending and expanding the data analysis capabilities of Excel through moving the burden of both the data and the calculation out of each spreadsheet and into TimeScape. As I have mentioned before, spreadsheets are fantastic end-user tools for ad-hoc reporting and analysis, but problems arise when their very usefulness and ease of use cause people to use them as standalone desktop-based databases. The four-hundred or so functions available in TimeScape for Excel, plus Excel access to our TimeScape QL+ Query Language have enabled much simpler and more powerful spreadsheets to be built, simply because Excel is used as a presentation layer with the hard work being done centrally in TimeScape.

Many people like using spreadsheets; however, many users equally do not, preferring more application-based functionality. Taking this feedback on board has previously driven us to look at innovative ways of extending data management, such as embedding spreadsheet-like calculations inside TimeScape and taking them out of spreadsheets with our SpreadSheet Inside technology. With this latest release of TimeScape, we are providing much of the ease of use, analysis and reporting power of spreadsheets but doing so in a more consistent and centralised manner. Charts can now be set up as default views on data so that you can quickly eyeball different properties and data sources for issues. New Heatmaps allow users to view large colour-coded datasets and zoom in quickly on areas of interest for more analysis. Plus our enhanced Reporting functionality allows greater ease of use and customisation when wanting to share data analysis with other users and departments.

Additionally, the new Query Explorer front end really shows off what is possible with TimeScape QL+, allowing users to build and test queries in the context of easily configurable data rules for things such as data source preferences, missing data and proxy instruments. The new auto-complete feature is also very useful when building queries, automatically displaying all properties and methods available at each point in the query, including user-defined analytics and calculations. It also displays complex and folded data in an easy manner, enabling faster understanding and analysis of more complex data sets such as historical volatility surfaces.

22 June 2012

Front to back office data management

Some recent thoughts in Advanced Trading on turning data management on its head, and how to extend data management initiatives from the back office into both risk management and the front office.

14 June 2012

Paris Financial Information Summit 2012

I attended the Financial Information Summit event on Tuesday, organized in Paris by Inside Market Data and Inside Reference Data.

Unsurprisingly, most of the topics discussed during the panels focused on reducing data costs, managing the vendor relationship strategically, LEI and building sound data management strategies.

Here is a (very) brief summary of the key points touched upon, which generated good debate from both panellists and audience:

Lowering data costs and cost containment panels

  • Make end-users aware of how much they pay for that data so that they will have a different perspective when deciding if the data is really needed or a "nice to have"
  • Build a strong relationship with the data vendor: you work for the same aim and share the same industry issues
  • Evaluate niche data providers who are often more flexible and willing to assist while still providing high quality data
  • Strategic vendor management is needed within financial institutions: this should be an on-going process aimed at improving contract management for data licenses
  • A centralized data management strategy and consolidation of processes and data feeds allow cost containment (something that Xenomorph have long been advocating)
  • Accuracy and timeliness of data is essential: make sure your vendor understands your needs
  • Negotiate redistribution costs to downstream systems

One good point was made by David Berry, IPUG-Cossiom, on the acquisition of data management software vendors by data providers themselves (referring to the Markit-Cadis and PolarLake-Bloomberg deals), stating that it will be tricky to see how the two business units will be managed "separately" (if kept separate...I know what you are thinking!).

There were also interesting case studies and examples supporting the points above. Many panellists pointed out how difficult it can be to obtain high quality data from vendors and that only regulation can actually improve the standards. Despite the concerns, I must recognize that many firms are now pro-actively approaching the issue and trying to deal with the problem in a strategic manner. For example, Hand Henrik Hovmand, Market Data Manager, Danske Bank, explained how Danske Bank is in the process of adopting a strategic vendor system made up of 4 steps: assessing the vendor, classifying the vendor, deciding what to do with the vendor and creating a business plan. Vendors are classified as strategic, tactical, legacy or emerging. Based on this classification, the "bad" vendors are evaluated to verify whether they are enhancing data quality. This vendor landscape is used both internally and externally during negotiation, and Hovmand was confident it would help Danske Bank to contain costs and get more for the same price.

I also enjoyed the panel on building a sound data management strategy, where Alain Robert-Dauton of Sycomore Asset Management was speaking. He highlighted how asset managers, in particular smaller firms, are now feeling the pressure of regulators but at the same time are less prepared to deal with compliance than larger investment banks. He recognized that asset managers need to invest in a sound risk data management strategy and supporting technology, with regulators demanding more details, reports and high quality data.

As for a summary of what was said on LEI, it seems that most financial institutions are still unprepared for how it should be implemented, due to the uncertainty around it, but I refer you to an article from Nicholas Hamilton in Inside Reference Data for a clear picture of what was discussed during the panel.

Looking forward, the panellists agreed that the main challenge is, and will be, managing the increasing volume of data. Though, as Tom Dalglish affirmed, the market is still not ready for the cloud, given that not much has been done in terms of legislation. Watch out!

The full agenda of the event is available here.

27 January 2012

PRMIA - Operational Risk, Big Data and Human Behaviour

I attended the Challenges and Innovations in Operational Risk Management event last night, which was surprisingly interesting. I say surprising since I must admit to some prejudice against learning about operational risk, which for me has the unfortunate historical reputation of being on the dull side.

Definition of Operational Risk

Michael Duffy (IBM GRC Strategy Leader, Ex-CEO of OpenPages) was asked by the moderator to define Operational Risk. Michael answered that he assumed that most folks attending already knew the definition (fair comment, the auditorium was full of risk managers...), but he sees it in practice as the definition of policy, the controls to enforce the policies and the ongoing monitoring of the performance of those controls. Michael suggested that many were looking to move the scope and remit of Operational Risk into business performance improvement, but clients are not there yet on this more advanced aspect.

Vik Panwar (Financial Services Industry Lead, SAS) added that Operational Risk was there to mitigate the risks of those unexpected future events (getting into the territory of Donald Rumsfeld's Unknown Unknowns, which I never tire of, particularly after a glass of wine).

Rajeev Lakra (Director Operational Risk Management, GE Treasury) took his definition from Basel II: Operational Risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. Coming from GE, he said that he thought of best practice Operational Risk as similar to another GE initiative, the use of Six Sigma for improving process management. Raj said that his operational risks were mainly concerned with trade execution, covering data quality/errors, human error and settlement errors.

Beyond Box Ticking for Operational Risk

Raj said that Operational Risk is treated seriously at GE with the Head of Operational Risk reporting into the CRO and leaders of Operational Risk in each business division.

Michael suggested that the "regulators force us to do it" motive for Operational Risk had reduced given some of the operational failures during the financial crisis and recent "rogue trader" events, with the majority of institutions post-2008 having created risk committees at the "C" level and being much more aware of tail events and the reputational damage that can hit shareholder value.

Vik said that Operational Risk is concerned primarily with "tail events" which by definition are not limited in size and therefore should be treated seriously. Pragmatically, he suggested that "the regulators need it" should be used as an excuse if there was no other way to get people to pay attention, but getting them to understand the importance of it was far more powerful.

The "What's in it for you" Approach to Operational Risk

Raj emphasised that it was possible to show the benefits of operational risk to people in their everyday jobs, explaining to operators/managers that if they get frustrated with failures/problems in the working day, then wouldn't it be great if these problems/losses were recorded so that they could justify a process change to senior management. He emphasised that this was a big cultural challenge at GE.

Michael suggested that his clients in financial markets had gone through risk assessment, controls and recording of losses, but had not yet progressed to the use of Operational Risk to improve business performance.

Duplication of Effort

A key thing that all the panelists discussed was the overlap at many organisations between Operational Risk, Audit and Compliance. They said that the testing of the controls used for each overlapped considerably, but was not based on a common nomenclature nor on common systems. For instance, Vik pointed out that many of the tests on controls in Sarbanes-Oxley compliance were re-usable in an Operational Risk context, but that this was not yet happening. Vik said that this pointed to the need for a comprehensive GRC platform rather than many siloed platforms.

Michael said that regulators want an integrated view, but no institution has an integrated nomenclature as yet. He recounted that one client sent 12 different control tests to branches that needed to be filled in for head office, which was a waste of resources and confusing/demotivating for staff. Raj said that the integration of Audit and Operational Risk at GE had proved to be a very difficult process. All agreed that senior management need to get involved and that a 5 year vision of how things should be incrementally integrated needs to be put in place.

Audience Questions:

Is business process risk different to business product risk? Michael said that Operational Risk certainly does and should cover both internal processes and also the risks produced by the introduction of a new financial product (is it well understood, for instance, and do clients understand what they are being sold?). He added that Operational Risk encompassed both the quantitative (the statistical number of failures, for instance) and the qualitative, for which statistics were either not available or not relevant to the risk.

Are there any surrogate measures for Operational Risk? Here a member of the audience was relaying senior management comments and frustration over the stereotyped red/amber/green traffic lights approach to reporting on operational risk. Michael mentioned the Operational Riskdata eXchange Association (ORX), where a number of financial institutions anonymously share operational risk loss data with a view to using this data to build better models and measures of operational risk. Apparently this has been going on since 2003 and the participants already have a shared taxonomy for Operational Risk. (My only comment on having a single measure for "operational riskiness" is: do you really want a "single number" approach to make things simple for C-level managers to understand, or should the C-levels be willing to understand more of the detail behind the number?)
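
For context on what "better models and measures" built from pooled loss data might look like, here is a minimal loss-distribution-style sketch (Poisson frequency, lognormal severity, Monte Carlo aggregation). The parameters are illustrative assumptions and nothing here is based on ORX data.

```python
# Minimal sketch of a loss-distribution style operational risk model: fit a
# frequency and a severity distribution to historical losses, then simulate the
# annual loss distribution. All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

lambda_events_per_year = 12              # assumed average number of loss events per year
severity_mu, severity_sigma = 11.0, 1.8  # assumed lognormal severity parameters

n_sims = 100_000
annual_losses = np.empty(n_sims)
for i in range(n_sims):
    n_events = rng.poisson(lambda_events_per_year)
    annual_losses[i] = rng.lognormal(severity_mu, severity_sigma, size=n_events).sum()

# A capital-style measure: the 99.9th percentile of simulated annual losses
print(f"Expected annual loss: {annual_losses.mean():,.0f}")
print(f"99.9% quantile of annual loss: {np.quantile(annual_losses, 0.999):,.0f}")
```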

Is "Rogue Trading" Operational Risk? Michael said that it definitely was, and that obviously each institution must control and monitor its trading policies to ensure they were being followed. The panel proposed that Operational Risk applied to trading activity could be a good application of "Big Data" (much hyped by industry journalists lately) to understand typical trading patterns and understand unusual trading patterns and behaviours. (Outside of bulk tick-data analysis this is one of the first sensible applications of Big Data so far that I have heard suggested so far given how much journalists seem to be in love with the "bigness" of it all without any business context to why you actually would invest in it...sorry, mini-rant there for a moment...)

Summary

Good event with an interesting panel; the GE speaker had lots of practical insight and the vendor speakers were knowledgeable without toeing the marketing line too much. Operational Risk seems to be growing up in its linkage into and across market, credit and liquidity risk. The panel agreed however that it is very early days for the discipline and a lot more needs to be done.

Given the role of human behaviour in all aspects of the recent financial crisis, then in my view Operational Risk has a lot to offer but also a lot to learn, not least in that I think it should market itself more aggressively along the lines of being the field of risk management that encompasses the study and understanding of human behaviour. Maybe there is a new career path looming for anthropologists in financial risk management...


14 December 2011

PRMIA - From Risk Measurement to Risk Management by Samuel Won

I attended the PRMIA event last night "Risk Year in Review" at Moody's New York offices. It was a good event, but by far the most interesting topic of the evening for me was from Samuel Won, who gave a talk about some of the best and most innovative risk management techniques being used in the market today. Sam said that he was inspired to do this after reading the book "The Information" by James Gleick, about the history of information and its current exponential growth. Below are some of the notes I took on Sam's talk; please accept my apologies in advance for any errors, but hopefully the main themes are accurate.

Early '80s ALM - Sam gave some context to risk management as a profession through his own personal experiences. He started work in the early '80s at a supra-regional bank, managing interest rate risk on a long portfolio of mortgages. These were the days before the role of "risk manager" was formally defined, and the work really revolved around Asset and Liability Management (ALM).

Savings and Loans Crisis - Sam then changed roles and had some first-hand experience in sorting out the Savings and Loans crisis of the mid '80s. In this role he became more experienced with products such as mortgage backed securities, and more familiar with some of the more data intensive processes needed to manage such products in order to account for factors such as prepayment risk, convexity and cashflow mapping.

The Front Office of the '90s - In the '90s he worked in the front office at a couple of tier one investment banks, where the role was more about the optimal allocation of the available balance sheet than "risk management" in the traditional sense. In order to do this better, Sam approached the head of trading for budget to improve and systemise this balance sheet allocation, but was questioned as to why he needed budget when the central Risk Control department already had a large staff and large budget.

Eventually, he successfully argued the case that Risk Control were involved in risk measurement and control, whereas what he wanted to implement was active decision support to improve P&L and reduce risk. He was given a total budget of just $5M (small for a big bank) and told to get on with it. These two themes of implementing active decision support (not just risk measurement) and of having a profit motive drive better risk management ran through the rest of his talk.

A Datawarehouse for End-Users Too - With a small team and a small budget, Sam made use of postgraduate students to leverage what his team could develop. They had seen that (at the time) getting systems talking to each other was costly and unproductive, and decided as a result to implement a datawarehouse for the front office, implementing data normalisation and data scrubbing, with a data dashboard over the top that was easy enough for business users to do data mining. Sam made the point that usability was key in allowing the business people to extract full value from the solution.

Sam said that the techniques used by his team and the developers were not necessarily that new; things like regression and correlation analysis were used at first. These were used to establish key variables/factors, with a view to establishing key risk and investment triggers in as near to real-time as possible. The expense of all of this development work was justified through its effects on P&L, which, given its success, resulted in more funding from the business.
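
As an illustration of this sort of correlation-based trigger (a generic sketch, not the actual system Sam's team built), the code below watches a short-window correlation between a position and a key factor and signals when it breaks away from its long-run level.

```python
# Generic sketch of a rolling correlation trigger: when the short-window
# correlation between a position and a key factor breaks away from its long-run
# level, the desk gets a signal to review the position. Data is simulated.
import numpy as np

rng = np.random.default_rng(5)
n = 750
factor = rng.normal(0, 0.01, size=n)
asset = 0.8 * factor + 0.005 * rng.normal(size=n)
asset[600:] = rng.normal(0, 0.01, size=150)   # relationship breaks down late in the sample

def rolling_corr(x, y, window):
    return np.array([np.corrcoef(x[i - window:i], y[i - window:i])[0, 1]
                     for i in range(window, len(x) + 1)])

short = rolling_corr(asset, factor, window=20)
long_run = np.corrcoef(asset[:500], factor[:500])[0, 1]

# Trigger when the short-window correlation drops well below its long-run level
triggers = np.where(short < long_run - 0.4)[0] + 20
print(f"Long-run correlation: {long_run:.2f}")
print(f"First trigger at observation: {triggers[0] if len(triggers) else 'none'}")
```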

Poor Sell-Side Risk Innovation - Sam has seen the most innovative risk techniques being used on the buy-side and was disappointed by the lack of innovation in risk management at the banks. He listed the following sell-side problems for risk innovation:

  • politically driven requirements, not economically driven
  • arbitrary increases in capital levels required is not a rigorous approach
  • no need for decision analysis with risk processes
  • just passing a test mentality
  • just do the marginal work needed to meet the new rules
  • no P&L justification driving risk management

Features of Innovative Approaches - Sam said that he had noted a few key features of some of the initiatives he admired at some of the asset managers:

  1. Based on a sophisticated data warehouse (not usually Oracle or Sybase, but Microsoft and other databases used - perhaps driven by ease of use or cost?)
  2. Traders/Portfolio Managers are the people using the system and implementing it, not the technical staff.
  3. Dedicated teams within the trading division to support this, so not relying on central data team.

A Forward-Looking Risk Model Example - The typical output from such decision analysis systems he found was in the form of scenarios for users to consider. A specific example was a portfolio manager involved in event-driven long-short equity strategies around mergers and acquisitions. The manager is interested in the risk that a particular deal breaks, and in this case techniques such as Value at Risk (VaR) do not work, since the arbitrage usually requires going long the company being acquired and short the acquiror (VaR would indicate little risk in this long-short case). The manager implemented a forward-looking model based on information relevant to the deal in question plus information from similar historic deals. The probabilities used in the model were gathered from a range of sources, and techniques such as triangulation were used to verify the probabilities. Sam's view is that forward-looking models to assist in decision support are real risk management, as opposed to the backward-looking risk measurement models implemented at banks to support regulatory reporting.
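
To make the contrast with VaR concrete, here is a minimal sketch of a deal-break scenario model with entirely hypothetical numbers; the probability blending stands in loosely for the triangulation across sources that Sam described.

```python
# Minimal sketch of a forward-looking, event-driven deal-break model. All numbers
# are hypothetical; in practice the probabilities would be triangulated from
# several sources and from similar historical deals.
p_break_sources = {          # hypothetical estimates from different sources
    "deal_spread_implied": 0.12,
    "historical_similar_deals": 0.08,
    "analyst_assessment": 0.10,
}
p_break = sum(p_break_sources.values()) / len(p_break_sources)

pnl_if_completes = +2.0e6    # assumed gain if the arbitrage converges
pnl_if_breaks = -15.0e6      # assumed loss if the deal breaks and the spread blows out

expected_pnl = (1 - p_break) * pnl_if_completes + p_break * pnl_if_breaks
tail_loss = pnl_if_breaks    # the scenario that matters, whatever its probability

print(f"Blended probability of deal break: {p_break:.1%}")
print(f"Expected P&L: {expected_pnl:,.0f}")
print(f"Loss in the deal-break scenario: {tail_loss:,.0f}")
```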

Summary - Sam was a great speaker, and for a change it was refreshing not to have presentation slides backing up what the speaker was saying. His thoughts on forward-looking models being true risk management, and on moving away from risk measurement, seem to echo those of Riccardo Rebonato of a few years back at RiskMinds (see post). I think his thoughts on P&L motivation being the only way that risk management advances are correct, although I think there is a lot of risk innovation at the banks, but at a trading desk level and not at the firm-wide level, which is caught up in regulation - the trading desks know that capital is scarce and want to use it better. I think this siloed risk management flies in the face of much of the firm-wide risk management, and indeed firm-wide data management, talked about in the industry, and potentially still shows that we have a long way to go in getting innovation and forward-looking risk management at a firm level, particularly when it is dominated by regulatory requirements. However, having a truly integrated risk data platform is something of a hobby-horse for me; I think it is the foundation for answering all of the regulatory and risk requirements to come, whatever their form.

Finally, I could not agree more that easy analysis for end-users is a vital part of data management for risk, allowing business users to do risk management better. Too many times IT is focussed on systems that require more IT involvement, when the IT investment and focus should be on systems that enable business users (trading, risk, compliance) to do more for themselves. Data management for risk is a key area for improvement in the industry, where many risk management system vendors assume that the world of data they require is perfect. Ask any risk manager - the world of data is not perfect, and manual data validation continues to be a task that takes time away from actually doing risk management.

27 July 2011

Data Unification - just when you thought it was safe to go back in the water...

Sitting by the sea, you have just finished your MATLAB reading and now are wondering what to read next?

No worries! 

We have just published our "TimeScape Data Unification" white paper. Not a pocket edition I am afraid, but some of you may find it interesting.

It describes how - post-crisis - a key business and technical challenge for many large financial institutions is to knit together their many disparate data sources, databases and systems into one consistent framework that can meet the ongoing demands of the business, its clients and regulators. It then analyses the approaches that financial institutions have adopted in response to this issue, such as implementing an ETL-type infrastructure or a traditional golden copy data management solution.

Building on an assessment of their effectiveness and constraints, it then shows how companies looking to satisfy the need for business-user access to data across multiple systems should consider a "distributed golden copy" approach. This federated approach deals with disparate and distributed sources of data and should also provide easy end-user interactivity whilst maintaining data quality and auditability.

The white paper is available here if you want to take a look and if you have any feedback or questions, drop us a line!

 

04 May 2011

More formal management of instrument valuation needed

Xenomorph has today released its white paper “Instrument Valuation Management: management of derivative and fixed income valuations in a multi-asset, multi-model, multi-datasource and multi-timeframe environment”.

The white paper expands on the “Rates, Curves and Surfaces – Golden Copy Management of Complex Datasets” white paper Xenomorph published recently (see earlier post) and describes how, despite the increasing importance of instrument valuation to investment, trading and risk management decisions, valuation management is not yet formally and fully addressed within data management strategies and remains a big concern for financial institutions.

Too often, says Xenomorph, valuations (and the analytics used to process input and calculate output data) fall between traditional data management providers and pricing model vendors. This leads to the over-use of tactical desktop spreadsheets where data “escapes” the control of the data management system, leading to increased operational risk.

Whilst instrument valuation is certainly not the primary cause of the recent financial crisis, the lack of high quality, transparent valuations of many complex securities resulted in market uncertainty and in the failure of many risk models fed by untrustworthy valuations.

“A deeper understanding of financial products reduces operational risk and promotes quality, consistency and auditability, ensuring regulatory compliance”, says Brian Sentance, CEO Xenomorph. “Clients’ requirements have evolved and portfolio managers, traders and risk managers recognize that it is no longer sufficient to treat valuation as an external, black-box process offered by pricing service providers”, he adds.

Nowadays, regulators, auditors, clients and investors demand even more drill-down to the underlying details of an instrument’s valuation. It is therefore important to implement an integrated, consistent analytics and data management strategy which cuts across different departments and glues together reference and market data, pricing and analytics models, for transparent, high quality, independent valuation management.

“Our TimeScape solution provides a valuation environment which offers rapid and timely support for even the most complex instruments, allowing our clients to check easily the external valuation numbers, based on their choice of model and data providers”, says Sentance. “Otherwise, what is the point of good data management if the valuations and the analytics used are not based on the same data management infrastructure principles?”

For those who are interested, the white paper is available here.

 
