34 posts categorized "Automated Trading"

06 December 2013

F# in Finance New York Style

Quick plug for the New York version of F# in Finance event taking place next Wednesday December 11th, following on from the recent event in London. Don Syme of Microsoft Research will be demonstrating access to market data using F# and TimeScape. Hope to see you there!

27 November 2013

Putting the F# in Finance with TimeScape

Quick thank you to Don Syme of Microsoft Research for including a demonstration of F# connecting to TimeScape running on the Windows Azure cloud in the F# in Finance event this week in London. F# is a functional language that is developing a large following in finance due to its applicability to mathematical problems, its ease of development and its performance. You can find some testimonials on the language here.

Don has implemented a proof-of-concept F# type provider for TimeScape. If that doesn't mean much to you, then a practical example below will help, showing how the financial instrument data in TimeScape is exposed at runtime into the F# programming environment. I guess the key point is just how easy it looks to code with data, since effectively you get guided through what is (and is not!) available as you are coding (sorry if I sound impressed, I spent a reasonable amount of time writing mathematical C code using vi in the mid 90's - so any young uber-geeks reading this, please make allowances as I am getting old(er)...). Example steps are shown below:

Referencing the Xenomorph TimeScape type provider and creating a data context: 

F_1

Connecting to a TimeScape database:

F_2

Looking at categories (classes) of financial instrument available:

F_3

Choosing an item (instrument) in a category by name:

F_4

Looking at the properties associated with an item:

F_5

The intellisense-like behaviour above is similar to what TimeScape's Query Explorer offers and it is great to see this implemented in an external run-time programming language such as F#. Don additionally made the point that each instrument only displays the data it individually has available, making it easy to understand what data you have to work with. This functionality is based on F#'s ability to make each item uniquely nameable, and optionally to assign each item (instrument) a unique type, where all the category properties (defined at the category schema level) that are not available for the item are hidden.
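For those who haven't seen an F# type provider in action, the few lines below give a flavour of what the steps above might look like in code. To be clear this is just my illustrative sketch - the assembly name, provider type, connection string and instrument/property names are all assumptions on my part, not the actual proof-of-concept API:

    // Purely illustrative sketch - the names below are assumptions, not the real proof-of-concept API
    #r "Xenomorph.TimeScape.TypeProvider.dll"
    open Xenomorph.TimeScape

    // Create a data context and connect to a TimeScape database (connection string assumed)
    type TS = TimeScapeProvider<"Server=timescape-demo.cloudapp.net">
    let ctx = TS.GetDataContext()

    // Categories (classes) of financial instrument appear as properties, so
    // autocomplete guides you through what is (and is not) available
    let equities = ctx.Categories.Equity

    // Choose an item (instrument) in the category by name
    let instrument = equities.``Vodafone Group PLC``

    // Only the properties this particular instrument actually has are exposed
    printfn "Close: %f" instrument.ClosePrice

The point is that the categories, instrument names and per-instrument properties are all discovered and offered up by the type provider as you type, rather than being strings you have to get right at runtime.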

The next F# in Finance event will take place in New York on Wednesday 11th of December 2013, so hope to see you there. We are currently working on a beta program for this functionality to be available early in the New Year so please get in touch if this is of interest via [email protected].

 

07 October 2013

#DMSLondon - Big Data, Cloud, In-Memory

Andrew Delaney introduced the second panel of the day, with the long title of "The Industry Response: High Performance Technologies for Data Management - Big Data, Cloud, In-Memory, Meta Data & Big Meta Data". The panel included Rupert Brown of UBS, John Glendenning of Datastax, Stuart Grant of SAP and Pavlo Paska of Falconsoft. Andrew started the panel by asking what technology challenges the industry faced:

  • Stuart said that risk data on-demand was a key challenge, and that there was a related need to collapse the legacy silos of data.
  • Pavlo backed up Stuart by suggesting that accuracy and consistency were needed for all live data.
  • Rupert suggested that there has been a big focus on low latency and fast data, but raised a smile from the audience when he said that he was a bit frustrated by the "format fetishes" in the industry. He then brought the conversation back to some fundamentals from his viewpoint, talking about wholeness of data and namespaces/data dictionaries - Rupert said that naming data had been too stuck in the functional area and not considered more in isolation from the technology.
  • John said that he thought there were too many technologies around at the moment, particularly in the area of Not Only SQL (NoSQL) databases. John seemed keen to push NoSQL, and in particular Apache Cassandra, as post relational databases. He put forward that these technologies, developed originally by the likes of Google and Yahoo, were the way forward and that in-memory databases from traditional database vendors were "papering over the cracks" of relational database weaknesses.
  • Stuart countered John by saying that properly designed in-memory databases had their place but that some in-memory databases had indeed been designed to paper over the cracks and this was the wrong approach, sometimes exacerbating the problem.
  • Responding to Andrew's questions around whether cloud usage was more accepted by the industry than it had been, Rupert said he thought it was, although concerns remain over privacy and regulatory blockers to cloud usage, plus there was a real need for effective cloud data management. Rupert also asked the audience if anyone knew of any good release management tools for databases (controlling/managing schema versioning etc) because he and his group were yet to find one.
  • Rupert expressed that Hadoop 2 was of more interest to him at UBS than Hadoop, and as a side note mentioned that MapReduce was becoming more prevalent across NoSQL, not just within the Hadoop domain. Maybe controversially, he said that UBS was using less data than it used to and as such it was not the "big data" organisation people might think it to be.
  • As one example of the difficulties of dealing with silos, Stuart said that at one client it required the integration of data from 18 different systems to get an overall view of the risk exposure to one counterparty. Stuart advocated bringing the analytics closer to the data, enabling more than one job to be done on one system.
  • Rupert thought that Goldman Sachs and Morgan Stanley seem to do the right thing for their firms, laying out a long-term vision for data management. He said that a rethink was needed at many organisations since fundamentally a bank is a data flow.
  • Stuart picked up on this and said that there will be those organisations that view data as an asset and those that view data as an annoyance.
  • Rupert mentioned that in his view accountants and lawyers are getting in the way of better data usage in the industry.
  • Rupert added that data in Excel needed to be passed by reference and not passed by value. This "copy confluence" was wasting disk space and a source of operational problems for many organisations (a few past posts here and here on this topic).
  • Moving on to describe some of the benefits of semantic data and triple stores, Rupert proposed that the statistical world needed to be added to the semantic world to produce "Analytical Semantics" (see past post relating to the idea of "analytics management").

Great panel, lots of great insight with particularly good contributions from Rupert Brown.

07 May 2013

Big Data Finance at NYU Poly

I went over to NYU Poly in Brooklyn on Friday of last week for their Big Data Finance Conference. To get a slightly negative point out of the way early, I guess I would have to pose the question "When is a big data conference not a big data conference?". Answer: "When it is a time series analysis conference" (sorry if you were expecting a funny answer...but as you can see, what I occupy my time with professionally doesn't naturally lend itself to too much comedy). As I like time series analysis, this was ok, but it certainly wasn't fully "as advertised" in my view, although I guess other people are experiencing this problem too.

Maybe this slightly skewed agenda was due to the relative newness of the topic, the newness of the event and the temptation for time series database vendors to jump on the "Big Data" marketing bandwagon (what? I hear you say, we vendors jumping on a buzzword marketing bandwagon, never!...). Many of the talks were about statistical time series analysis of market behaviour and less about what I was hoping for, which was new ways in which empirical or data-based approaches to financial problems might be addressed through big data technologies (as an aside, here is a post on a previous PRMIA event on big data in risk management as some additional background). There were some good attempts at getting a cross-discipline fertilization of ideas going at the conference, but given the topic then representatives from the mobile and social media industries were very obviously missing in my view. 

So as a complete counterexample to the two paragraphs above, the first speaker at the event (Kevin Atteson of Morgan Stanley) was very much on theme with the application of big data technologies to the mortgage market. Apparently Morgan Stanley had started their "big data" analysis of the mortgage market in 2008 as part of a project to assess and understand more about the potential losses that Fannie Mae and Freddie Mac faced due to the financial crisis.

Echoing some earlier background I had heard on mortgages, one of the biggest problems in trying to understand the market according to Kevin was data, or rather the lack of it. He compared mortgage data analysis to "peeling an onion", and said that going back to the time of the crisis, mortgage data at an individual loan level was either not available or of such poor quality as to be virtually useless (e.g. hard to get accurate ZIP code data for each loan). Kevin described the mortgage data set as "wide" (lots of loans with lots of fields for each loan) rather than "deep" (lots of history), with one of the main data problems being trying to match nearest-neighbour loans. He mentioned that only post crisis have Fannie and Freddie been ordered to make individual loan data available, and that there is still no readily available linkage data between individual loans and mortgage pools (some presentations from a recent PRMIA event on mortgage analytics are at the bottom of the page here for interested readers).

Kevin said that Morgan Stanley had rejected the use of Hadoop, primarily due to its write throughput capabilities, which Kevin indicated was a limiting factor in many big data technologies. He indicated that for his problem type he still believed their infrastructure to be superior to even the latest incarnations of Hadoop. He also mentioned the technique of having 2x redundancy or more on the data/jobs being processed, aimed not just at failover but also at using whichever instance of a job finished first. Interestingly, he also added that Morgan Stanley's infrastructure engineers have a policy of rebooting servers in the grid even during the day/in use, so fault tolerance was needed for both unexpected and entirely deliberate hardware node unavailability.

Other highlights from the day:

  • Dennis Shasha had some interesting ideas on using matrix algebra for reducing the data analysis workload needed in some problems - basically he was all for "cleverness" over simply throwing compute power at some data problems. On a humorous note (if you are not a trader?), he also suggested that some traders had "the memory of a fruit-fly".
  • Robert Almgren of QuantitativeBrokers was an interesting speaker, talking about how his firm had done a lot of analytical work in trying to characterise possible market responses to information announcements (such as Friday's non-farm payroll announcement). I think Robert was not so much trying to predict the information itself, but rather trying to predict likely market behaviour once the information is announced. 
  • Scott O'Malia of the CFTC was an interesting speaker during the morning panel. He again acknowledged some of the recent problems the CFTC had experienced in terms of aggregating/analysing the data they are now receiving from the market. I thought his comment on the Twitter crash was both funny and brutally pragmatic, with him saying "if you want to rely solely upon a single twitter feed to trade then go ahead, knock yourself out."
  • Eric Vanden Eijnden gave an interesting talk on "detecting Black Swans in Big Data". Most of the examples were from current detection/movement in oceanography, but seemed quite analogous to "regime shifts" in the statistical behaviour of markets. Main point seemed to be that these seemingly unpredictable and infrequent events were predictable to some degree if you looked deep enough in the data, and in particular that you could detect when the system was on a possible likely "path" to a Black Swan event.

One of the most interesting talks was by Johan Walden of the Haas Business School, on the subject of "Investor Networks in the Stock Market". Johan explained how they had used big data to construct a network model of all of the participants in the Turkish stock exchange (both institutional and retail) and in particular how "interconnected" each participant was with other members. His findings seemed to support the hypothesis that the more "interconnected" the investor (at the centre of many information flows rather than at the edges), the more likely that investor would demonstrate superior return levels to the average. I guess this is a kind of classic transferral of some of the research done in social networking, but very interesting to see it applied pragmatically to financial markets, and I would guess an area where a much greater understanding of investor behaviour could be gleaned. Maybe Johan could do with a little geographic location data to add to his analysis of how information flows.
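As a toy illustration of the general idea (and definitely not Johan's actual methodology), you could approximate "interconnectedness" by building a counterparty network from trade records and ranking investors by their number of distinct counterparties:

    // Toy sketch only - build a counterparty network from (buyer, seller) trade pairs
    // and rank investors by how many distinct counterparties they trade with
    let trades = [ "A","B"; "A","C"; "B","C"; "A","D" ]   // made-up participants

    let connectedness =
        trades
        |> List.collect (fun (b, s) -> [ b, s; s, b ])    // make the links symmetric
        |> List.distinct
        |> List.countBy fst                               // distinct counterparties per investor
        |> List.sortByDescending snd

    // The hypothesis discussed above is that investors near the top of such a ranking
    // tend to show better than average returns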

So overall a good day with some interesting talks - the statistical presentations were challenging to listen to at 4pm on a Friday afternoon but the wine afterwards compensated. I would also recommend taking a read through a paper by Charles S. Tapiero on "The Future of Financial Engineering" for one of the best discussions I have so far read about how big data has the potential to change and improve upon some of the assumptions and models that underpin modern financial theory. Coming back to my starting point in this post on the content of the talks, I liked the description that Charles gives of traditional "statistical" versus "data analytics" approaches, and some of the points he makes about data immediately inferring relationships without the traditional "hypothesize, measure, test and confirm-or-not" were interesting, both in favour of data analytics and in cautioning against unquestioning belief in the findings from data (feels like this post from October 2008 is a timely reminder here). With all of the hype and the hope around the benefits of big data, maybe we would all be wise to remember this quote by a certain well-known physicist: "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."

 

25 April 2013

The Anthropology, Sociology, and Epistemology of Risk

Background - I went along to my first PRMIA event in Stamford, CT last night, with the rather grandiose title of "The Anthropology, Sociology, and Epistemology of Risk". Stamford is about 30 miles north of Manhattan and is the home to major offices of a number of financial markets companies such as Thomson Reuters, RBS and UBS (who apparently have the largest column-less trading floor in the world at their Stamford headquarters - particularly useful piece of trivia for you there...). It also happens to be about 5 minutes drive/train journey away from where I now live, so easy for me to get to (thanks for another useful piece of information I hear you say...). Enough background, more on the event which was a good one with five risk managers involved in an interesting and sometimes philosophical discussion on fundamentally what "risk management" is all about.

Introduction - Marc Groz, who heads the Stamford Chapter of PRMIA, introduced the evening and started by thanking Barry Schwimmer for allowing PRMIA to use the Stamford Innovation Centre (the Old Town Hall) for the meeting. Henrik Neuhaus moderated the panel, and started by outlining the main elements of the event title as a framework for the discussion:

  • Anthropology - risk management is to what purpose?
  • Sociology - how does risk management work?
  • Epistemology - what knowledge is really contained within risk management?

Henrik started by taking a passage about anthropology and replacing human "development" with "risk management", which seemed to fit ok, although the angle I was expecting was much more about human behaviour in risk management than where Henrik started. Henrik asked the panel what results they had seen from risk management and what that implied about risk management. The panelists seemed a little confused or daunted by the question, prompting one of them to ask "Is that the question?".

Business Model and Risk Culture - Elliot Noma dived in by responding that the purpose of risk management obviously depended very much on the institutional goals of the organization. He said that it was as much about what you are forced to do as what you try to do in risk management. Elliot said that the sell-side view of risk management was very regulatory and capital focused, whereas mutual funds are looking more at risk relative to benchmarks and performance attribution. He added that in the alternatives (hedge-fund) space there were no benchmarks and the focus was more about liquidity and event risk.

Steve Greiner said that it was down to the investment philosophy and how risk is defined and measured. He praised some asset managers where the risk managers sit across from the portfolio managers and are very much involved in the decision making process.

Henrik asked the panel whether any of them had ever defined a “mission statement” for risk management. Marc Groz chipped in that he remembered he had once defined one, and that it was very different from what others in the institution were expecting and indeed very different from the risk management that he and his department subsequently undertook.

Mark Szycher (of GM Pension Fund) said that risk management split into two areas for him, the first being the symmetrical risks where you need to work out the range of scenarios for a particular trade or decision being taken. The second was the more asymmetrical risks (i.e. downside only) such as those found in operational risk where you are focused on how best to avoid them happening.

Micro Risk Done Well - Santa Federico said that he had experience of some of the major problems experienced at institutions such as Merrill Lynch, Salomon Brothers and MF Global, and that he thought risk management was much more of a cultural problem than a technical one. Santa said he thought that the industry was actually quite good at the micro (trade, portfolio) risk management level, but obviously less effective at the large systematic/economic level. Mark asked Santa what was the nature of the failures he had experienced. Santa said that the risks were well modeled, but maybe the assumptions around macro variables such as the housing market proved to be extremely poor.

Keep Dancing? - Henrik asked the panel what might be done better. Elliot made the point that some risks are just in the nature of the business. If a risk manager did not like placing a complex illiquid trade and the institution was based around trading in illiquid markets then what is a risk manager to do? He quoted the Citi executive who said “whilst the music is still playing we have to dance”. Again he came back to the point that the business model of the institution drives its culture and the emphasis of risk management (I guess I see what Elliot was saying but taken one way it implied that regardless of what was going on risk management needs to fit in with it, whereas I am sure that he meant that risk managers must fit in with the business model mandated to shareholders).

Risk Attitudes in the USA - Mark said that risk managers need to recognize that the improbable is maybe not so improbable, and should be more prepared for the worst rather than doing risk management under “normal” market and institutional behavior. Steven thought that a cultural shift was happening, where not losing money was becoming as important to an organization as making money. He said that in his view, Europe and Asia had a stronger risk culture than the United States, with much more consensus, involvement and even control over the trading decisions taken. Put another way, the USA has more of a culture of risk taking than Europe. (I have my own theories on this. Firstly I think that people are generally much bigger risk takers in the USA than in the UK/Europe, possibly influenced in part by the relative lack of an underlying social safety net – whilst this is not for everyone, I think it produces a very dynamic economy as a result. Secondly, I do not think that the cultural desire in the USA for the much admired “presidential” leader is necessarily the best environment for sound, consensus-based risk management. I would also like to acknowledge that neither of my two points above seems to have protected Europe much from the worst of the financial crisis, so it is obviously a complex issue!)

Slaves to Data? - Henrik asked whether the panel thought that risk managers were slaves to data? He expanded upon this by asking what kinds of firms encourage qualitative risk management and not just risk management based on Excel spreadsheets? Santa said that this kind of qualitative risk management occurred at a business level and less so at a firm wide level. In particular he thought this kind of culture was in place at many hedge funds, and less so at banks. He cited one example from his banking career in the 1980's, where his immediate boss was shouted off the trading floor by the head of desk, saying that he should never enter the trading floor again (oh those were the days...). 

Sociology and Credibility - Henrik took a passage on the historic development of women's rights and replaced the word "women" with "risk management" to illustrate the challenges risk management is facing in trying to get more say and involvement at financial institutions. He asked who should the CRO report to? A CEO? A CIO? Or a board member? Elliot responded by saying this was really an issue around credibility with the business for risk managers and risk management in general. He made the point that often Excel and numbers were used to establish credibility with the business. Elliot added that risk managers with trading experience obviously had more credibility, and to some extent where the CRO reported to was dependent upon the credibility of risk management with the business.

Trading and Risk Management Mindsets - Elliot expanded on his previous point by saying that the risk management mindset thinks more in terms of unconditional distributions and tries to learn from history. He contrasted this with the "conditional mindset" of a trader, where the time horizon forwards (and backwards) is rarely longer than a few days and the belief that a trade will work today given it worked yesterday is strong. Elliot added that in assisting the trader, the biggest contribution risk managers can make is to be challenging/helpful on the qualitative side rather than just the quantitative.

Compensation and Transactions - Most of the panel seemed to agree that compensation package structure was a huge influencer in the risk culture of an organisation. Mark touched upon a pet topic of mine, which is that it is very hard for a risk manager to gain credibility (and compensation) when what risk management is about is what could happen as opposed to what did happen. A risk manager blocking a trade due to some potentially very damaging outcomes will not gain any credibility with the business if the trading outcome for the suggested trade just happened to come out positive. There seemed to be consensus here that some of the traditional compensation models based on short-term transactional frequency and size were ill-formed (given the limited downside for the individual), and whilst the panel reserved judgement on the effectiveness of recent regulation, moves towards longer-term compensation were to be welcomed from a risk perspective.

MF Global and Business Models - Santa described some of his experiences at MF Global, where Corzine moved what was essentially a broker into taking positions in European Sovereign Bonds. Santa said that the risk management culture and capabilities were not present to be robust against senior management for such a business model move. Elliot mentioned that he had been courted for trades by MF Global and had been concerned that they did not offer electronic execution and told him that doing trades through a human was always best. Mark said that in the area of pension fund management there was much greater fiduciary responsibility (i.e. behave badly and you will go to jail) and maybe that kind of responsibility had more of a place in financial markets too. Coming back to the question of who a CRO should report to, Mark also said that questions should be asked to seek out those who are 1) less likely to suffer from the "agency" problem of conflicts of interest and, on a related note, 2) less likely to have personal biases towards particular behaviours or decisions.

Santa said that in his opinion hedge funds in general had a better culture where risk management opinions were heard and advice taken. Mark said that risk managers who could get the business to accept moral persuasion were in a much stronger position to add value to the business rather than simply being able to "block" particular trades. Elliot cited one experience he had where the traders under his watch noticed that a particular type of trade (basis trades) did not increase their reported risk levels, and so became more focussed on gaming the risk controls to achieve high returns without (reported) risk. The panel seemed to be in general agreement that risk managers with trading experience were more credible with the business but also more aware of the trader mindset and behaviors. 

Do we know what we know? - Henrik moved to his third and final subsection of the evening, asking the panel whether risk managers really know what they think they know. Elliot said that traders and risk managers speak a different language, with traders living in the now, thinking only of the implications of possible events such as those we have seen with Cyprus or the fiscal cliff, where the risk management view was much less conditioned and more historical. Steven re-emphasised the earlier point that risk management at this micro trading level was fine but this was not what caused events such as the collapse of MF Global.

Rational argument isn't communication - Santa said that most risk managers come from a quant (physics, maths, engineering) background and like structured arguments based upon well understood rational foundations. He said that this way of thinking was alien to many traders and as such it was a communication challenge for risk managers to explain things in a way that traders would actually put some time into considering. On the modelling side of things, Santa said that sometimes traders dismissed models as being "too quant" and sometimes traders followed models all too blindly without questioning or understanding the simplifying assumptions they are based on. Santa summarised by saying that risk management needs to be intuitive for traders and not just academically based. Mark added that a quantitative focus can sometimes become too narrow (modeler's manifesto anyone?) and made the very profound point that unfortunately precision often wins over relevance in the creation and use of many models. Steven added that traders often deal with absolutes, such as knowing the spread between two bonds to the nearest basis point, whereas a risk manager approaching them with a VaR number really means an estimate that should be thought of as sitting within a range of values. This is alien to the way traders think and hence harder to explain.

Unanticipated Risk - An audience member asked whether risk management should focus mainly on unanticipated risks rather than "normal" risks. Elliot said that in his trading he was always thinking and checking whether the markets were changing or continuing with their recent near-term behaviour patterns. Steven said that history was useful to risk management when markets were "normal", but in times of regime shifts this was not the case, and cited the example of the change in markets when Mario Draghi announced that the ECB would stand behind the Euro and its member nations.

Risky Achievements - Henrik closed the panel by asking each member what they thought was their own greatest achievement in risk management. Elliot cited a time when he identified that a particular hedge fund had a relatively inconspicuous position/trade that he identified as potentially extremely dangerous, and was proved correct when the fund closed down due to this. Steven said he was proud of some good work he and his team did on stress testing involving Greek bonds and the Eurozone. Santa said that some of the work he had done on portfolio "risk overlays" was good. Mark ended the panel by saying that he thought his biggest achievement was when the traders and portfolio managers started to come to the risk management department to ask opinions before placing key trades. Henrik and the audience thanked the panel for their input and time.

An Insured View - After the panel closed I spoke with an actuary who said that he had greatly enjoyed the panel discussions but was surprised that, when talking of how best to support the risk management function in being independent and giving "bad" news to the business, the role of auditors was not mentioned. He said he felt that auditors were a key support to insurers in ensuring any issues were allowed to come to light. So food for thought there as to whether financial markets can learn from other industry sectors.

Summary - great evening of discussion, only downside being the absence of wine once the panel had closed!

 


23 April 2013

PRMIA on ETFs #3 - Tradable Volatility Exposure in ETP Packaging

Joanne Hill of Proshare presented next at the event. Joanne started her talk by showing volatility levels from 1900 to the present day, illustrating how historic volatility over the past 10 years seems to be at pre-1950's levels. Joanne had a lot of slides that she took us through (to be available on the event link above) which would be challenging to write up in full (or at least that is my excuse and I am sticking to it...).

Joanne said that the VIX trades about 4% above realised volatility, which she described as being due to expectations that "something" might happen (so financial markets can be cautious it seems!). Joanne seemed almost disappointed that we seem now to have entered a period of relatively boring (?!) market activity following the end of the crisis given that the VIX is now trading at pre-2007 lows. In answer to audience questions she said that inverse volatility indices were growing as were products dependent on dynamic index strategies.

 

PRMIA on ETFs #2 - How the ETF Market Works: Quant for the Traders

Next up in the event was Phil Mackintosh of Credit Suisse who gave his presentation on trading ETFs, starting with some scene-setting for the market. Phil said that the ETP market had expanded enormously since its start in 1993, currently with over $2trillion of assets ($1.3trillion in the US). He mentioned that $1 in $4 of flow in the US was ETF related, and that the US ETF market was larger than the whole of the Asian equity market, but again emphasizing relative size the US ETF market was much smaller than the US equities and futures markets. 

He said that counter to the impression some have, the market is 52% institutional and only 48% retail. He mentioned that some macro hedge fund managers he speaks to manage all their business through ETPs. ETFs are available across all asset classes from alternatives, currencies, commodities, fixed income, international and domestic equities. Looking at fees, these tend to reside in the 0.1% to 1% bracket, with larger fees charged only for products that have specific characteristics and/or that are difficult to replicate.

Phil illustrated how funds have consistently flowed into ETFs over recent years, in contrast with the mutual funds industry, with around 25% in international equity and around 30% in fixed income. He said that corporate fixed income, low volatility equity indices and real estate ETFs were all on the up in terms of funds flow. 

He said that ETF values were calculated every 15 seconds and oscillated around their NAV, with arbitrage activity keeping ETF prices in line with underlying prices. Phil said that spreads in ETFs could be tighter than in their underlyings and that ETF spreads tightened for ETFs over $200m.
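As a rough sketch of the mechanism (my made-up numbers, not Phil's), the premium or discount that arbitrageurs watch is simply the ETF price relative to NAV:

    // Rough sketch of the premium/discount that arbitrageurs monitor - figures are made up
    let premium (etfPrice : float) (nav : float) = (etfPrice - nav) / nav

    // If the ETF trades rich to NAV, an authorized participant can sell the ETF, buy the
    // underlying basket and create new units (and the reverse at a discount), which is
    // what keeps the ETF price oscillating tightly around NAV
    let p = premium 50.25 50.00    // +0.5% premium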

Phil warned of a few traps in trading ETFs. He illustrated the trading volumes of ETFs during an average day, which showed that they tended to be traded in volume in the morning but not the (late) afternoon (need enlightening as to why..). He added that they were more specifically not a trade for a market open or close. He said that large ETF trades sometimes caused NAV disconnects, and mentioned deviations around NAV due to underlying liquidity levels. He also said that contango can become a problem for VIX futures related products.

There were a few audience questions. One concerned how fixed income ETFs were the price discovery mechanism for some assets during the crisis, given the liquidity and timeliness of the ETF relative to its underlyings. Another question concerned why the US ETF market was larger and more homogenous than in Europe. Phil said that Europe was not dominated by 3 providers as in the US, plus each nationality in Europe tended to have preferences for ETF products produced by its own country. There was also further discussion on shorting fixed income ETFs since they were more liquid than the primary market. (Note to self: need to find out more about the details of the ETF redemption and creation process.)

Overall a great talk by a very "sharp" presenter (like a lot of good traders Phil seemed to understand the relationships in the market without needing to think about them too heavily). 

 

PRMIA on ETFs #1 - Index-Based Approaches for Risk Management in Wealth Management

It seems to be ETF week for events in New York this week, one of which was hosted by PRMIA, Credit Suisse and MSCI last night called "Risk Management of and with ETFs/Indices". The event was chaired by Seddik Meziani of Montclair State University, who opened with thanks for the sponsors and the speakers for coming along, and described the great variety of asset exposures now available in Exchange Traded Products (ETPs) and the growth in ETF assets since their formation in 1993. He also mentioned that this was the first PRMIA event in NYC specifically on ETFs. 

Index-Based Approaches for Risk Management in Wealth Management - Shaun Weuzbach of S&P Dow Jones Indices started with his presentation. Shaun's initial point was to consider whether "Buy & Hold" works given the bad press it received over the crisis. Shaun said that the peak to trough US equity loss during the recent crisis was 57%, but when he hears of investors that made losses of this order he thinks that this was more down to a lack of diversification and poor risk management rather than inherent failures in buy and hold. To justify this, he cited an example simple portfolio constructed of 60% equity and 40% fixed income, which only lost 13% peak to trough during the crisis. He also illustrated that equity market losses of 5% or more were far more frequent during the period 1945-2012 than many people imagine, and that investors should be aware of this in portfolio construction.

Shaun suggested that we are in the third innings of indexing:

  1. Broad-based benchmark indices
  2. Precise sector-and thematic-based indices
  3. Factor-based indices (involving active strategies)

Where the factor-based indices might include ETF strategies based on/correlated with things such as dividend payments, equity weightings, fundamentals, revenues, GDP weights and volatility. 

He then described how a simple strategy index based around lowering volatility could work. Shaun suggested that low volatility was easier to explain to retail investors than minimizing variance. The process for his example low volatility index was to take the 100 lowest volatility stocks out of the S&P 500 and weight them by the inverse of volatility, with rebalancing every quarter.
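A minimal sketch of that selection and weighting step (my code, assuming constituent volatilities have already been estimated somewhere else) might look something like:

    // Minimal sketch of the rebalancing step: pick the 100 lowest-volatility stocks
    // and weight each by the inverse of its volatility (weights sum to 1.0)
    type Stock = { Ticker : string; Volatility : float }   // e.g. trailing one-year volatility

    let lowVolWeights (sp500 : Stock list) =
        let selected =
            sp500
            |> List.sortBy (fun s -> s.Volatility)
            |> List.truncate 100
        let invVol = selected |> List.map (fun s -> s.Ticker, 1.0 / s.Volatility)
        let total  = invVol |> List.sumBy snd
        invVol |> List.map (fun (t, iv) -> t, iv / total)

    // Re-run lowVolWeights on the refreshed universe each quarter to rebalance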

He illustrated how this index exhibited lower volatility with higher returns over the past 13 years or so (this looked like a practical example illustrating some of the advantages of having a less volatile geometric mean of returns from what I could see). He also said that this index had worked across both developed and emerging markets.

Apparently this index has been available for only 2 years, so 11 years of the performance figures were generated from back-testing (the figures looked good, but a strategy theoretically backtested over historic markets when the strategy was not used and did not exist should always be examined sceptically).

Looking at the sector composition of this low volatility index, one of the very interesting points that Shaun made was that the index got out of the financials sector some two quarters before Lehman's went down (maybe the index was less influenced by groupthink or the fear of realising losses?).

Shaun then took a short look at VIX-based strategies, describing the VIX as the "investor fear gauge". In particular he considered the S&P VIX Short-Term Futures Index, which he said exhibits a high negative correlation with the S&P 500 (around -0.8) and a high positive correlation with the VIX spot (approx +0.8). He said that explaining these products as portfolio insurance products was sometimes hard for financial advisors to do, and features such as the "roll cost" (moving from one set of futures contracts to others as some expire) were also harder to explain to non-institutional investors.

A few audience questions followed, one concerned with whether one could capture principal retention in fixed income ETFs. Shaun briefly mentioned that the audience member should look at "maturity series" products in the ETP market. One audience member had concerns over the liquidity of ETF underlyings, to which Shaun said that S&P have very strict criteria for their indices, ensuring that the free float of underlyings is high and that the ETF does not dominate liquidity in the underlying market.

Overall a very good presentation from a knowledgeable speaker.

 

 

08 February 2013

Big Data – What is its Value to Risk Management?

A little late on these notes from this PRMIA Event on Big Data in Risk Management that I helped to organize last month at the Harmonie Club in New York. Big thank you to my PRMIA colleagues for taking the notes and for helping me pull this write-up together, plus thanks to Microsoft and all who helped out on the night.

Introduction: Navin Sharma (of Western Asset Management and Co-Regional Director of PRMIA NYC) introduced the event and began by thanking Microsoft for its support in sponsoring the evening. Navin outlined how he thought the advent of “Big Data” technologies was very exciting for risk management, opening up opportunities to address risk and regulatory problems that previously might have been considered out of reach.

Navin defined Big Data as structured or unstructured data received at high volumes and requiring very large data storage. Its characteristics include a high velocity of record creation, extreme volumes, a wide variety of data formats, variable latencies, and complexity of data types. Additionally, he noted that relative to other industries, in the past financial services has created perhaps the largest historical sets of data and continually creates enormous amounts of data on a daily or moment-by-moment basis. Examples include options data, high frequency trading, and unstructured data such as via social media. Its usage provides potential competitive advantages in trading and investment management. Also, by using Big Data it is possible to have faster and more accurate recognition of potential risks via seemingly disparate data - leading to timelier and more complete risk management of investments and firms’ assets. Finally, the use of Big Data technologies is in part being driven by regulatory pressures from Dodd-Frank, Basel III, Solvency II, the Markets in Financial Instruments Directives (1 & 2) as well as the Markets in Financial Instruments Regulation.

Navin also noted that we will seek to answer questions such as:

  • What is the impact of big data on asset management?
  • How can Big Data’s impact enhance risk management?
  • How is big data used to enhance operational risk?

Presentation 1: Big Data: What Is It and Where Did It Come From?: The first presentation was given by Michael Di Stefano (of Blinksis Technologies), and was titled “Big Data. What is it and where did it come from?”. You can find a copy of Michael’s presentation here. In summary Michael started by saying that there are many definitions of Big Data, mainly defined as technology that deals with data problems that are either too large, too fast or too complex for conventional database technology. Michael briefly touched upon the many different technologies within Big Data such as Hadoop, MapReduce and databases such as Cassandra and MongoDB. He described some of the origins of Big Data technology in internet search, social networks and other fields. Michael described the “4 V’s” of Big Data: Volume, Velocity, Variety and Value, with a key point from Michael being “time to Value” in terms of what you are using Big Data for. Michael concluded his talk with some business examples around use of sentiment analysis in financial markets and the application of Big Data to real-time trading surveillance.

Presentation 2: Big Data Strategies for Risk Management: The second presentation “Big Data Strategies for Risk Management” was introduced by Colleen Healy of Microsoft (presentation here). Colleen started by saying expectations of risk management are rising, and that prior to 2008 not many institutions had a good handle on the risks they were taking. Risk analysis needs to be done across multiple asset types, more frequently and at ever greater granularity. Pressure is coming from everywhere including company boards, regulators, shareholders, customers, counterparties and society in general. Colleen used to head investor relations at Microsoft and put forward a number of points:

  • A long line of sight of one risk factor does not mean that we have a line of sight on other risks around.
  • Good risk management should be based on simple questions.
  • Reliance on 3rd parties for understanding risk should be minimized.
  • Understand not just the asset, but also at the correlated asset level.
  • The world is full of fast markets driving even more need for risk control
  • Intraday and real-time risk now becoming necessary for line of sight and dealing with the regulators
  • Now need to look at risk management at a most granular level.

Colleen explained some of the reasons why good risk management remains a work in progress, and that data is a key foundation for better risk management. However data has been hard to access, analyze, visualize and understand, and she used this to link to the next part of the presentation by Denny Yu of Numerix.

Denny explained that new regulations involving measures such as Potential Future Exposure (PFE) and Credit Value Adjustment (CVA) were moving the number of calculations needed in risk management to a level well above that required by methodologies such as Value at Risk (VaR). Denny illustrated how a typical VaR calculation on a reasonably sized portfolio might need 2,500,000 instrument valuations and how PFE might require as many as 2,000,000,000. He then explained more of the architecture he would see as optimal for such a process and illustrated some of the analysis he had done using Excel spreadsheets linked to Microsoft’s high performance computing technology.
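To give a feel for where numbers of that order might come from, here is some purely illustrative arithmetic - the portfolio and simulation sizes below are my assumptions, not Denny's figures:

    // Purely illustrative arithmetic - portfolio and simulation sizes are assumptions
    let instruments = 5000

    // VaR: revalue every instrument under every scenario
    let varScenarios  = 500
    let varValuations = instruments * varScenarios                 // 2,500,000

    // PFE: revalue every instrument on every simulated path at every future time step
    let pfePaths      = 2000
    let pfeTimeSteps  = 200
    let pfeValuations = instruments * pfePaths * pfeTimeSteps      // 2,000,000,000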

Presentation 3: Big Data in Practice: Unintentional Portfolio Risk: Kevin Chen of Opera Solutions gave the third presentation, titled “Unintentional Risk via Large-Scale Risk Clustering”. You can find a copy of the presentation here. In summary, the presentation was quite visual, illustrating how large-scale empirical analysis of portfolio data could produce some interesting insights into portfolio risk and how risks become “clustered”. In many ways the analysis was reminiscent of an empirical form of principal component analysis, i.e. where you can see and understand more about your portfolio’s risk without actually being able to relate the main factors directly to any traditional factor analysis.

Panel Discussion: Brian Sentance of Xenomorph and the PRMIA NYC Steering Committee then moderated a panel discussion. The first question was directed at Michael: “Is the relational database dead?” – Michael replied that in his view relational databases were not dead, and indeed for dealing with problems well-suited to relational representation they were still, and would continue to be, very good. Michael said that NoSQL/Big Data technologies were complementary to relational databases, dealing with new types of data and new sizes of problem that relational databases are not well designed for. Brian asked Michael whether the advent of these new database technologies would drive the relational database vendors to extend the capabilities and performance of their offerings. Michael replied that he thought this was highly likely but only time would tell whether this approach will be successful given the innovation in the market at the moment. Colleen Healy added that the advent of Big Data did not mean the throwing out of established technology, but rather an integration of established technology with the new, such as Microsoft SQL Server working with the Hadoop framework.

Brian asked the panel whether they thought visualization would make a big impact within Big Data? Ken Akoundi said that the front end applications used to make the data/analysis more useful will evolve very quickly. Brian asked whether this would be reminiscent of the days when VaR first appeared, when a single number arguably became a false proxy for risk measurement and management? Ken replied that the size of the data problem had increased massively from when VaR was first used in 1994, and that visualization and other automated techniques were very much needed if the headache of capturing, cleansing and understanding data was to be addressed.

Brian asked whether Big Data would address the data integration issue of siloed trading systems? Colleen replied that Big Data needs to work across all the silos found in many financial organizations, or it isn’t “Big Data”. There was general consensus from the panel that legacy systems and people politics were also behind some of the issues found in addressing the data silo issue.

Brian asked if the panel thought the skills needed in risk management would change due to Big Data. Colleen replied that effective Big Data solutions require all kinds of people, with skills across a broad range of specific disciplines such as visualization. Generally the panel thought that data and data analysis would play an increasingly important part in risk management. Ken put forward his view that all Big Data problems should start with a business problem, not just a technology focus - for example, are there any better ways to predict stock market movements based on the consumption of larger and more diverse sources of information? In terms of risk management skills, Denny said that risk management of 15 years ago was based on relatively simple econometrics. Fast forward to today, and risk calculations such as CVA are statistically and computationally very heavy, and trading is increasingly automated across all asset classes. As a result, Denny suggested that even the PRMIA PRM syllabus should change to focus more on data and data technology given the importance of data to risk management.

Asked how best Big Data should be applied, Denny echoed Ken in saying that understanding the business problem first was vital, but that obviously Big Data opened up the capability to aggregate and work with larger datasets than ever before. Brian then asked what advice the panel would give to risk managers faced with an IT department about to embark upon using Big Data technologies. Assuming that the business problem is well understood, Michael said that the business needed some familiarity with the broad concepts of Big Data, what it can and cannot do and how it fits with more mainstream technologies. Colleen said that there are some problems that only Big Data can solve, so understanding the technical need is a first checkpoint. Obviously IT people like working with new technologies and this needs to be monitored, but so long as the business problem is defined and valid for Big Data, people should be encouraged to learn new technologies and new skills. Kevin also took a very positive view that IT departments should be encouraged to experiment with these new technologies and understand what is possible, but that projects should have well-defined assessment/cut-off points as with any good project management to decide if the project is progressing well. Ken put forward that many IT staff were new to the scale of the problems being addressed with Big Data, and that his own company Opera Solutions had an advantage in its deep expertise of large-scale data integration to deliver quicker on project timelines.

Audience Questions: There then followed a number of audience questions. The first few related to other ideas/kinds of problems that could be analyzed using the kind of modeling that Opera had demonstrated. Ken said that there were obvious extensions that Opera had not got around to doing just yet. One audience member asked how well all the Big Data analysis could be aggregated/presented to make it understandable and usable to humans. Denny suggested that it was vital that such analysis was made accessible to the user, and there was general consensus across the panel that man vs. machine was an interesting issue to develop in considering what is possible with Big Data. The next audience question was around whether all of this data analysis was affordable from a practical point of view. Brian pointed out that there was a lot of waste in current practices in the industry, with wasteful duplication of ticker plants and other data types across many financial institutions, large and small. This duplication is driven primarily by the perceived need to implement each institution’s proprietary analysis techniques, and this kind of customization is not yet available from the major data vendors, but will become more possible as cloud technology such as Microsoft’s Azure develops further. There was a lot of audience interest in whether Big Data could lead to better understanding of causal relationships in markets rather than simply correlations. The panel responded that causal relationships were harder to understand, particularly in a dynamic market with dynamic relationships, but that insight into correlation was at the very least useful and could lead to better understanding of the drivers as more datasets are analyzed.

 

04 April 2012

NoSQL - the benefit of being specific

NoSQL is an unfortunate name in my view for the loose family of non-relational database technologies associated with "Big Data". NotRelational might be a better description (catchy eh? thought not...), but either way I don't like the negatives in both of these titles, due to aesthetics and in this case because it could be taken to imply that these technologies are critical of the SQL and relational technology that we have all been using for years. For those of you who are relatively new to NoSQL (which is most of us), then this link contains a great introduction. Also, if you can put up with a slightly annoying reporter, then the CloudEra CEO is worth a listen to on YouTube.

In my view NoSQL databases are complementary to relational technology, and as many have said relational tech and tabular data are not going away any time soon. Ironically, some of the NoSQL technologies need more standardised query languages to gain wider acceptance, and there are no prizes for guessing which existing query language will provide the ideas for putting these new languages together (at this point as an example I will now say SPARQL, not that this should be taken to mean that I know a lot about it, but that has never stopped me before...).

Going back into the distant history of Xenomorph and our XDB database technology, when we started in 1995 the fact that we used a proprietary database technology was sometimes a mixed blessing for sales. The XDB database technology we had at the time was based around answering a specific question, which was "give me all of the history for this attribute of this instrument as quickly as possible".

The risk managers and traders loved the performance aspects of our object/time series database - I remember one client with a historical VaR calc that we got running in around 30 minutes on a laptop PC that was taking 12 hours in an RDBMS on a (then quite meaty) Sun Sparc box. It was a great example of how specific database technology designed for specific problems could offer performance that was not possible from more generic relational technology. The use of the database for these problems was never intended as a replacement for relational databases dealing with relational-type "set-based" problems though; it was complementary technology designed for very specific problem sets.
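For anyone wondering why such a calculation is so time-series hungry, a single-asset historical VaR is conceptually as simple as the sketch below (illustrative only, not the client's actual calculation) - the expense comes from pulling long return histories for every instrument in the portfolio and repeating the exercise across positions and dates:

    // Sketch of a one-asset historical VaR from a vector of historical returns (illustrative)
    let historicalVaR (confidence : float) (portfolioValue : float) (returns : float[]) =
        let sorted = Array.sort returns                         // worst returns first
        let idx = int (float sorted.Length * (1.0 - confidence))  // approx (1 - confidence) percentile
        -portfolioValue * sorted.[idx]                          // loss expressed as a positive number

    // e.g. 99% one-day VaR on a 10m portfolio given 500 daily returns:
    // let var99 = historicalVaR 0.99 10000000.0 dailyReturns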

The technologists were much more reserved; some were more accepting and knew of products such as FAME around then, but some were sceptical over the use of non-standard DBMS tech. Looking back, I think this attitude was partly due to a desire to build their own vector/time series store, but also understandably (if incorrectly) they were concerned that our proprietary database would require specialist database admin skills. Not that the mainstream RDBMS systems were expensive or specialist to maintain then (Oracle DBA anyone?), but many proprietary database systems with proprietary languages can require expensive and on-going specialist consultant support even today.

The feedback from our clients and sales prospects - that our database performance was liked, but that the proprietary database admin aspects were sometimes a sales objection - caused us to take a look at hosting some of our vector database structures in Microsoft SQL Server. A long time back we had already implemented a layer within our analytics and data management system where we could replace our XDB database with other databases, most notably FAME. You can see a simple overview of the architecture in the diagram below, where other non-XDB databases (and datafeeds) can be "plugged in" to our TimeScape system without affecting the APIs or indeed the object data model being used by the client:

TimeScape-DUL

Data Unification Layer
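In code terms you can think of the unification layer as a narrow interface that each back end implements - the sketch below is purely illustrative and not the actual TimeScape API:

    // Illustrative only - not the actual TimeScape API
    open System

    // The narrow contract every back end (XDB, FAME, an RDBMS, a datafeed) implements
    type ITimeSeriesStore =
        abstract GetHistory : instrument:string * property:string -> (DateTime * float) []

    // Client code works against the interface, so swapping the underlying database
    // does not change the APIs or the object data model seen by the client
    let latestValue (store : ITimeSeriesStore) instrument property =
        store.GetHistory(instrument, property) |> Array.maxBy fst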

Using this layer, we then worked with the Microsoft UK SQL team to implement/host some of our vector database structures inside of Microsoft SQL Server. As a result, we ended up with a database engine that maintained the performance aspects of our proprietary database, but offered clients a standards-based DBMS for maintaining and managing the database. This is going back a few years, but we tested this database at Microsoft with a 12TB database (since this was then the largest disk they had available), but still this contained 500 billion tick data records which even today could be considered "Big" (if indeed I fully understand "Big" these days?). So you can see some of the technical effort we put into getting non-mainstream database technology to be more acceptable to an audience adopting a "SQL is everything" mantra.

Fast forward to 2012, and the explosion of interest in "Big Data" (I guess I should drop the quotes soon?) and in NoSQL databases. It finally seems that due to the usage of these technologies on internet data problems that no relational database could address, the technology community seems to have much more willingness to accept non-RDBMS technology where the problem being addressed warrants it - I guess for me and Xenomorph it has been a long (and mostly enjoyable) journey from 1995 to 2012 and it is great to see a more open-minded approach being taken towards database technology and the recognition of the benefits of specific databases for (some) specific problems. Hopefully some good news on TimeScape and NoSQL technologies to follow in coming months - this is an exciting time to be involved in analytics and data management in financial markets and this tech couldn't come a moment too soon given the new reporting requirements being requested by regulators.

20 January 2012

The Volcker Rule - aka one man's trade is another man's hedge

One of the PRMIA folks in New York kindly recommended this paper on the Volcker Rule, in which Darrell Duffie criticises this proposed new US regulation designed to drastically reduce proprietary ("own account") trading at banks.

As with all complex systems like financial markets, the more prescriptive the regulations become, the harder it is to "lock down" the principles that were originally intended. In this case the rules (due July 2012) make an exception to the proprietary trading ban where the bank is involved in "market-making", but Darrell suggests that the basis for deciding which trades are "market-making" and which are pure "proprietary trading" is problematic, as there will always be trades that are part of the "market-making" process (i.e. providing immediacy of execution to customers) that are not directly and immediately associated with actual customer trading requests.

He suggests that the consequences of the Volcker Rule as currently drafted will be higher bid-offer spreads, higher financing costs and reduced liquidity in the short term, and a movement of liquidity to unregulated entities in the medium term, possibly further increasing systemic risk rather than reducing it. Seems like another example of "one man's trade is another man's hedge" combined with "the law of unintended consequences". The latter law doesn't give me a lot of confidence about the Dodd-Frank regulations (of which the Volcker Rule forms part) - 2,319 pages of regulation probably have a lot more unintended consequences to come.

20 October 2010

Analytics Management by Sybase and Platform

I went along to a good event at Sybase New York this morning, put on by Sybase and Platform Computing (the grid/cluster/HPC people, see an old article for some background). As much as some of Sybase's ideas in this space are competitive to Xenomorph's, some are very complementary, and I like their overall technical and marketing direction in focussing on the issue of managing data and analytics within financial markets (given that direction I would, wouldn't I?...). Specifically, I think their marketing pitch based on moving away from batch to intraday risk management is a good one, but one that many financial institutions are unfortunately (?) a long way away from.

The event started with a decent breakfast, a wonderful sunny window view of Manhattan and then proceeded with the expected corporate marketing pitch for Sybase and Platform - this was ok but to be critical (even of some of my own speeches) there is only so much you can say about the financial crisis. The presenters described two reference architectures that combined Platform's grid computing technology with Sybase RAP and the Aleri CEP Engine, and from these two architectures they outlined four usage cases.

The first use case was for strategy back testing. The architecture for this looked fine, but some questions were raised from the audience about the need for distributed data caching within the proposed architecture to ensure that data did not become the bottleneck. One of the presenters said that distributed caching was one option, although data caching (involving "binning" of data) can limit the computational flexibility of a grid solution. The audience member also added that when market data changes, this can cause temporary but significant cache consistency issues across a grid as the change cascades from one node to another.

Apparently a cache could be implemented in the Aleri CEP engine on each grid node, or, as the Platform guy said, it is also possible to hook a client's own C/C++ solution into Platform to achieve this; their "Data Affinity" offering was designed to assist with this type of issue. In summary, their presentation would have looked better with the distributed caching illustrated, in my view, and it begged the question as to why they did not have an offering or partner in this technical space. To be fair, when asked whether the architecture had any performance issues in this area, they said that for the usage case they presented it did not - so on that simple and fundamental aspect they were covered.

They had three usage cases for the second architecture: one was intraday market risk, one was counterparty risk exposure and one was intraday option pricing. On the option pricing case, there was some debate about whether the architecture could "share" real-time objects such as zero curves, volatility surfaces etc. Apparently this is possible, but again it would have benefited from being illustrated as an explicit part of the architecture first.

There was one question about applying the architecture to transactional problems, and as usual for an event full of database specialists there was some confusion as to whether we were talking about database "transactions" or financial transactions. I think it was the latter, but neither the answer nor, to be fair, the question was particularly clear - maybe they could have explained the counterparty exposure usage case a bit more to see if this met some of the audience member's needs.

The latter question on transactions got a conversation going about resilience within the architecture, given that the Sybase ASE database engine is held in-memory for real-time updates whilst the historic data resides on shared disk in Sybase IQ, their column-based database offering. Again, full resilience is apparently possible across the whole architecture (Sybase ASE, IQ, Aleri and the Symphony Grid from Platform) but this was not illustrated this time round.

Overall good event with some decent questions and interaction.

19 February 2010

When is a trade, not a trade?...

...er, when it is a hedge? Adding to my current confusion over just how the Obama administration is going to define what is and is not "proprietary trading", Gillian Tett of the FT today has put together a good article on some of the unexpected effects that such a ban may have - my advice is don't mess with the all-powerful "Law of Unintended Consequences"...

04 February 2010

More CEP Events

Sybase have acquired Aleri according to Finextra. It was less than a year ago that the complex event processing ("CEP") vendors Aleri and Coral8 announced their merger (see press release); there was also a big buzz when Sybase announced a CEP capability based on Coral8 and Streambase decided to offer an Amnesty Program for Aleri-Coral8 customers (see earlier post 'Merging in public is difficult...'). And only a few months later, Microsoft announced that their CEP engine "Orinoco" (now integrated with SQL Server 2008 as StreamInsight) was heading to market (see post 'Microsoft CEP surfaces as "Orinoco"').

Another sign that CEP is moving more mainstream and that real-time everything is becoming more important? Or a good market for acquisitions?

05 December 2009

Maths to Money - Quantitative Investment

I attended the Quant Invest 2009 event for the first time last week in Paris. The event is unsurprisingly about quantitative investment strategies, but with an institutional asset manager and hedge fund focus - so not so much about ultra-high frequency trading (although some of that was present) but more about using quantitative techniques to manage medium/longer-term investment decisions and applied portfolio theory. A few highlights that I found interesting are below:

  • Pierre Guilleman of Swiss Life Asset Management gave an interesting 1/2 day workshop entitled "A random walk through models":
  • He is a strong supporter of the need to understand more about the data and statistical assumptions upon which any quant investment model is based and how these fit with the desired investment objectives (similar to the Modeler's Manifesto)
  • He made the point that good models can sometimes be almost annoyingly simple, and cited the example of Professor Fair of Yale, who had determined that US elections were predictable based on simple parameters such as past results, inflation and GDP, and that policy did not seem to be a key factor at all - annoying for the politicians anyway! 
  • Pierre seems very concerned that the Solvency II regulation applied to Life Institutions will negatively influence the investment policies of many institutions - applying sell-side risk measures like VAR to the insurance industry will drive a more short-term approach to investment. He strongly believes that VAR applied to his industry should have an expected return parameter introduced to fit with longer term investment horizons of 10 to 25 years.
  • Bob Litterman of Goldman Sachs Asset Management opened the first "official" day of the conference:
  • Bob put forward his "scientific" approach to investment modelling going through the stages of hypothesis, test and implement. He warned against overconfidence in investment (apparently 70% of us think we are "above average"...) and impulsiveness (quick impulsiveness test: "if a bat costs $1 more than the ball, and the bat and ball together cost $1.10 then how much does the bat cost?...") 
  • He said that the failure of quantitative investment models in 2007 needed to be understood given the success of quant models over past decades. In particular he thought that quant investment became the "crowded trade" of 2007 with every hedge fund having a quant investment strategy. In terms of why this became a "crowded trade" Bob thinks that the barriers to entry into quant investment (particularly technology) have lowered significantly recently.  
  • He noted that factor-based investment opportunities decay quicker than they used to due to increased competition - implying the need for a more dynamic and opportunistic investment approach.  
  • GSAM are now looking at new markets and new investment instruments, trying to find areas of market disruption but without following what others are doing in the market.  
  • He pointed out the conflict between investors wanting more transparency over what is done for them, against the need to be more proprietary about the investment models developed.  
  • Next there was a talk on regulation from the French regulator that was dull, dull, dull both in terms of content and presentation style (when will regulators actually prepare well for the talks they give?)
  • Panel debate was also pretty average, with the word "alpha" being used too much in my view - asset managers of a certain type seem to hide behind this word as an opaque "magic wand" to justify what they do.
  • Jean-Philippe Bouchaud of Capital Fund Management did a great talk called "Why do Prices Move?". Some points from the talk:
  • He started off with a reminder about the Efficient Markets Hypothesis (EMH) and how it says that crashes and market movements are caused by events from outside (exogenous to) the market, such as news and other events.
  • He then said this was not borne out in the data, where extreme jumps in prices were related to news only 5% of the time.
  • Volatility looks like a long memory process with clustering of vol over time - similar to behaviour in complex systems
  • The sign of order flow is predictable but the price movement is not, with only 1% of daily order volume accounting for price movements over 5%
  • Even very liquid stocks have low immediate liquidity, meaning that price movements can play out over many hours and days as liquidity is sought to "play-out" some change in fundamental price levels.
  • Joseph Masri of the Canadian Pension Plan Investment Board then did a good talk on Risk Management:
  • Jo said that sell-side risk was easier to deal with in some ways since it involved fewer strategies in high volumes, and hence could be better resourced.
  • Buy-side quantitative risk management is harder due to its reliance on sell-side research and risk tools and the outsourcing of credit assessment to the credit rating agencies; the loss of Bear Stearns and Lehman Brothers has forced the buy-side to do more risk management itself (and through third parties) rather than rely on sell-side risk management tools.
  • He said that sell-side risk models are a good start for an asset manager, but need to be adapted to give both absolute and relative risk (to a benchmark fund for instance), and that models are no substitute for risk governance.
  • He described the crossover between risk methods - VAR, stress testing and factor-based approaches - and their applicability to market risk, credit and counterparty risk.
  • Like Pierre he was not a fan of 1 or 10 day trading VAR being applied to investment managers since this risk measure was not suitable for long term investment in his view.
  • On stress testing he said this needed to be top down (using historical events etc) as well as bottom up from knowing the detail of strategy/portfolio.
  • In terms of challenges in risk management, he said that VAR needed to be complemented by more stress testing to cope with the fat-tails effect in markets, that liquidity risk - both of counterparties and of illiquid products - was vital (he mentioned reverse stress testing), and that the feedback (crowding) effects of having investment strategies similar to others in the market also mattered.
  • Dale Gray of the IMF gave a very interesting talk on how he and Bob Merton have been applying the contingent claims model of a company (looking at equity in terms of option payoffs for shareholders and bondholders) to whole economies:
  • He said that some of his work was being applied to produce a model for the pricing of the implicit guarantees offered by governments to banks
  • He said these models were also applicable to macro-prudential risk
  • A very interesting talk, and if he really has something on macro-level risk then this is great relative to the woolly approach taken by the regulators so far

There were some other good talks from Danielle Bernardi on Behavioural Finance, Martin Martens on Fixed Income Quant Investment, Vassilios Papathanakos on Stochastic Portfolio Theory (which seemed to be a "holy grail" of investment models, giving good returns even in the crisis - which begs the question of why he is telling everyone about it?), Claudio Albanese on unified derivative pricing/calibration across all markets (again another "holy grail" worth more investigation) and Terry Lyons on speeding up Monte Carlo simulations.

Overall a good conference, although the quality of the asset managers present seemed very digital, from those who really seemed to know what they were talking about to those who plainly did not (in my limited view!). Along this line of thought, I think it would be good to test whether there is an inverse relationship between the quality of an asset manager and the number of times they use the word "alpha" to explain what they are doing...

25 November 2009

It's in the hormones...

Taking the discussion on behavioural finance and news analytics a scientific step further, this article in the FT today on how increased testosterone equals an increased appetite for risk taking is interesting. Apparently experience of trading is also a big help in increasing a trader's Sharpe ratio, from which the authors suggest that markets are not efficient and the EMH does not hold. Now if only they could find a hormone that was correlated with increased returns, then I think they'd really have something...

12 November 2009

It's in the news...

I went along to the Forum on News Analytics over in Canary Wharf on Monday evening, organised by Professor Gautam Mitra from OptiRisk / Carisma at Brunel University. We seem to be in the early days of transforming news articles into quantifiable/machine-readable data so that news can be processed automatically/systematically in trading and risk management. It was a good event with both vendors and practitioners attending, so it was reasonably balanced between vendor hype and the current state of market practice.

As background on what is meant by news analytics data: for example, you might count the number of news articles about a particular company and look at whether the quantity of articles is a predictor of some change in the company's stock price or volatility. Moving on from this simple approach (assuming that you are clever enough to be certain about which news is about which company), you can then move towards assessing whether the news is negative, neutral or positive in sentiment about a company/stock.

The context here is about having the capability to automatically process/analyse any kind of text-based news story, not just those from research analysts that might be nicely tagged with such quantifiers of sentiment (see http://www.rixml.org/ on xml standards for analyst data). The way in which the meaning of the text is "quantified" uses some form of Natural Language Processing.
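
As a toy illustration of the simplest end of this (counting stories per company and crudely scoring their tone from word lists), the sketch below shows the basic idea of turning text into numbers a model can consume - real NLP engines are of course far more sophisticated, and the word lists and headlines here are invented:

```python
# Toy "news analytics" sketch: count stories per company and assign a crude
# sentiment score from word lists. Real vendors use proper NLP - this only
# illustrates the idea of quantifying text for a trading or risk model.
POSITIVE = {"beats", "upgrade", "profit", "growth", "wins"}
NEGATIVE = {"misses", "downgrade", "loss", "lawsuit", "default"}

def score_story(headline: str) -> int:
    words = set(headline.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def aggregate(stories):
    """stories: iterable of (company, headline) -> {company: (count, net sentiment)}"""
    summary = {}
    for company, headline in stories:
        count, net = summary.get(company, (0, 0))
        summary[company] = (count + 1, net + score_story(headline))
    return summary

stories = [
    ("ACME", "ACME beats estimates and raises profit outlook"),
    ("ACME", "Lawsuit filed against ACME unit"),
    ("GLOBEX", "GLOBEX downgrade follows weak quarter"),
]
print(aggregate(stories))  # {'ACME': (2, 1), 'GLOBEX': (1, -1)}
```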

The event started with a brief talk by Dan diBartolomeo of Northfield Information Services. I hadn't heard of him or his company before (maybe I should pay more attention!) but he seemed a very solid speaker with a strong academic and practical background in investment management and modelling. He referenced a few academic papers (available via their web site) on news analytics, and on how news analytics and implied volatility together could provide better estimates of future volatility than implied volatility alone. He also made some good points about how investment "models" are calibrated to history and how such models need to adapt to "today" - he put it as "how are things different now from the past?" and put forward the idea of a framework for assessing and potentially modifying a model to respond to the "now" situation. He also suggested that the market can react very differently to "expected news" (having a range of investment "what ifs" planned for a known earnings announcement) as opposed to unexpected information (here we are back into the realms of the Black Swan and the ultimate in uncertainty wisdom from Donald Rumsfeld).

Armando Gonzalez of RavenPack then began by explaining how RavenPack had become involved in applying text analysis to finance (it seems the subject has its origins, like a lot of things, in the military). RavenPack seem to be the highest-profile quantified news vendor at the moment, and whilst Armando is obviously biased towards pushing the concept that money can be made by adding quantified news data to trading models, he said that not many firms are as yet systematically processing news and most people are relying upon manual interpretation of the news they buy/use. Some of the studies RavenPack have done on market news and prices are very interesting, showing how a news event can take up to 20 mins before the market settles on a new "fair" price level for a stock. Additionally, and maybe an interesting reflection on human behaviour, in bull markets there are usually twice as many positive stories about companies as negative, but strikingly in a bear market there were still almost equal amounts of positive and negative news - so humans are basically optimists! (or delusional, or just plain greedy... take your pick!)

Mark Vreijling of Semlab followed Armando and suggested that a lot of their sales prospects understandably desire "proof" of the benefits of adding quantified news to trading, but that this was a little ironic since most financial institutions have been paying to receive "raw" news for years, presumably because they perceive benefit from it. Mark also mentioned that the application of quantified news to risk management was a new but growing area for him and his colleagues.

Gurvinder Brar of Macquarie then went into some of the practicalities of quantifying and using news in automated trading. He suggested that you need to understand what is really "news" (containing information on something that has just happened) and what is merely a news "article" (like a "feature" in a magazine). Assessing the relevance of news was also difficult, and he added that setting a hierarchy of which kinds of events are important to your trading was a key step in dealing with news data. Fundamentally, he asked why wait five days for analysts to publish their assessment of a market or company-specific event when you could react to the event in near real-time.

The event then went into "panel" mode where the following points came out:

  • Dan thought that a real challenge was integrating quantified news with all of the other relevant datasets (market data, but also reference data etc)
  • Armando picked up on Dan's point by giving the example of news about Gillette, which at one point was about Gillette the company but then, on acquisition, became news about the Gillette "brand" as part of Procter & Gamble.
  • Dan said that a key problem with processing news was also understanding what news was simply ignored by the news wires i.e. we know what is being talked about, but what could have been talked about, why was it ignored and is it (even so) relevant to trading?
  • Mark and Armando said that the "context" for the news story was vital and that market expectations can turn many "negative" news stories into positive outcomes for trading e.g. the market likes bad news when it is not as "bad" as everyone thought.
  • Dan made a very interesting point about trading in terms of categorising trades as "want to" trades and "have to" trades. He gave the example of an observed trade that seemingly has no news associated with or prompting it - does this mean the trade is occurring because somebody "has to" make the trade (a fund facing an unwelcome client redemption, for example?) or because there has been some information leak to a market participant who "wants to" trade before the news becomes available to the market as a whole?
  • I think all of the panel members then collectively hesitated before answering the next question from the audience, with Microsoft having one of their "text search" R&D team (think Bing...) asking about news categorisation and quantification.
  • Dan also mentioned something that I have only recently become more aware of, which is that apart from the major markets in the US, most exchanges world-wide do not publish whether a trade was a "buy" or a "sell" (they just publish the price and transaction size). Obviously knowing the direction of the trade would be useful to any trading model, and Dan referred to this as wanting to know the "signed volume" (a simple way of estimating trade direction is sketched just after this list).
  • A member of the audience then asked whether most quantified news had been based on just the English language, and the consensus was that most was based on English, but Natural Language Processing can be trained in other languages relatively easily. A few members of the panel pointed out that all languages change, even English, requiring constant retraining, and also that certain languages, countries and cultures add further complication to the recognition process.
  • The next question asked whether the panel could outline the major areas in which quantified news is applied - the answer included intraday (but not quite real-time) trading, algorithmic execution, lower frequency portfolio rebalancing and compliance/risk/market abuse detection.
  • A good debate ensued about whether "news" was provided by the official newswires or by the web itself. The panel (and audience) consensus seemed to favour the premise that the news wires are the source of news and the web is a reflection/regurgitation of this news. That said, Gurvinder of Macquarie gave the nice counter-example of the analysts/news wires not making much of the new Apple iPod when, looking at the web, it was possible to see that the public were in contrast very enthusiastic about it.
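
On Dan's "signed volume" point: where an exchange does not publish trade direction, a common workaround is to infer it from price movement with a simple tick rule - classify an uptick as buyer-initiated, a downtick as seller-initiated, and carry the previous classification on an unchanged price. A minimal sketch with made-up trades:

```python
# Tick-rule sketch for inferring trade direction ("signed volume") when the
# exchange only publishes price and size. Upticks are classed as buys,
# downticks as sells, and unchanged prices inherit the previous sign.
def sign_trades(trades):
    """trades: list of (price, size) -> list of (price, size, sign)."""
    signed, prev_price, prev_sign = [], None, 0
    for price, size in trades:
        if prev_price is None or price == prev_price:
            sign = prev_sign      # first trade or zero tick: carry forward
        elif price > prev_price:
            sign = +1             # uptick -> buyer-initiated
        else:
            sign = -1             # downtick -> seller-initiated
        signed.append((price, size, sign))
        prev_price, prev_sign = price, sign
    return signed

trades = [(10.00, 100), (10.01, 50), (10.01, 200), (9.99, 75)]
print(sign_trades(trades))
# [(10.0, 100, 0), (10.01, 50, 1), (10.01, 200, 1), (9.99, 75, -1)]
```

This only approximates the true direction (refinements such as the Lee-Ready rule also use the prevailing quote midpoint), but it is often good enough to build a signed-volume series where the exchange does not provide one.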

Overall an interesting event. I think the application of "quantified news" to risk management is particularly interesting - maths and financial theory are all very well, but markets are driven by people's behaviour, and if "quantified news" can help us understand this better it has to help in avoiding some (!) of the future problems to be faced in the market.

02 October 2009

High Frequency Trading vs Flash Trading

Economist Tim Worstall has a distinction to make on the differences between high frequency trading and flash trading in a recent article.

Essentially it is the difference between getting your orders in quicker than everyone else, and having a peek at what everyone else is doing before putting your money down. The SEC appears to be conflating the two and has concerns.

Given the current state of the banking world, could we see some poorly thought out legislation rushed through so that regulators can be seen to be "doing something"? Or would it level the playing field a little, so that those trading operations that cannot afford the overhead of super-fast computers and networks are not excluded?

09 July 2009

Tick Size Harmony...

...in a rare show of co-operation (I wonder what carrot or (regulatory) stick is motivating this?), European exchanges and MTFs seem to have agreed on standardising tick sizes (or at least on having two standards rather than twenty-five!). Extract from an article on AutomatedTrader:

"From the perspective of each trading venue, strong incentives exist to undercut others in terms of tick sizes, which is not in the interest of market efficiency or the users and end investors. This might, in turn, lead to excessively reduced tick sizes in the market. Excessively granular tick sizes in securities can have a detrimental effect to market depth (i.e. to liquidity). An excessive granularity of tick sizes could lead to significantly increased costs for the many users of each exchange throughout the value chain; and have spillover costs for the derivatives exchanges' clients."

02 July 2009

Best execution 2009 - July 1st 2009

A few summary points I took from the Best Execution Europe 2009 event courtesy of Incisive Media that I attended yesterday morning.

The event started with a presentation by Michael Fridrich, Legal and Policy Affairs Officer of the European Commission:

  • From what Michael was saying, it seems to me that the EU is using the G20 declaration on financial stability in April as a remit to regulate in many areas (not all of which relate to the current crisis - see the last paragraph in this post)
  • He said that the EU is currently working on removing national options/discretions with respect to financial markets in order to create a single EU rule book and combining this with stronger powers for supervisors including much harsher sanctions against offending institutions
  • They are also reviewing the necessary information provided to investors in OTCs, even if the investors qualify as "professional investors" under Mifid.
  • The EU is currently reviewing Mifid and the Market Abuse Directive (called "MAD" which is at least humorous...)
  • EU is also unsurprisingly looking at the regulation of Credit Ratings Agencies (CRAs) given their involvement in rating CDOs and other structured products

So in summary it was a civil servant PR exercise with few surprises, other than that the EU is going to regulate anything that moves. Then on to a panel debate on "build vs. buy" for execution management software. I will try to put my obvious vendor bias to one side in summarising this one:

  • The panel summarised that this decision was about the usual issues of time to market and what is an institution's core IP
  • A senior IT manager from JPMorgan said they both build and buy - but given the size of their organisation and the need to innovate they do build a lot
  • The COO of Majedie Asset Management said that "build" was "20th Century" and that IT should now focus on "assembly"
  • He added that if IT leads a procurement process, he finds this tends to lead to more proprietary solutions than if the business is managing it.
  • He summarised that business people should have the mandate to define inputs/outputs to a requirement and that IT were not qualified to do this.
  • Putting it more controversially, he suggested that IT people should work for IT companies
  • The JPMorgan manager responded that "assembly" of external components can lead to excessive staffing to manage all the plumbing, and that building in-house could produce a more generic and targeted platform that would need less management
  • The moderator summarised the build vs. buy decision as one of balancing time to market and how bespoke a solution is, alongside weighing the risks of buying (integration risk and vendor risk) against the risks of building (delivery risk and key man risk)

The debate on this was pretty standard, but the guy from Majedie was at least controversial in what he was saying (including at one point that "investment management does not scale"). I assume he is trading simple products and as such is able to outsource more than the JPMorgan manager can. My own slant is that more vendor products need to be designed to integrate easily with the IPR of a financial institution, i.e. to be less of a black box.

Tom Middleton of Citi then did a presentation on (equity) market liquidity and market fragmentation:

  • He started by saying that Smart Order Routing (SOR) was like "Putting Humpty-Dumpty back together again" from all the sources of liquidity now available under Mifid (a stripped-down sketch of what an SOR actually does follows just after this list).
  • Being no expert in SOR, I was excited (?) to learn a new term which was "finding Icebergs" - apparently an "Iceberg" is a large non-public ("dark")  order being posted with a much smaller public trade order.
  • He said that market fragmentation will increase further but there will be fewer trading venues as the market consolidates.
  • New algorithms will be developed more specifically for trading on dark pools of liquidity
  • Clearing and settlement costs are still high across Europe, which limits the use of small-size orders in trading, but trading volumes will continue to grow
  • The drive to ever-lower latency will also continue
  • Usage of SOR will grow
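
For anyone new to the topic, the core job of an SOR is easy to caricature: sweep the best-priced displayed liquidity across venues until the order is filled. A deliberately stripped-down sketch - it ignores fees, latency, fill probability, dark liquidity and everything else that makes real SOR hard, and the venues and quotes are made up:

```python
# Stripped-down smart order router: fill a buy order by sweeping venues in
# price order until the quantity is done. Real SORs also weigh fees, latency,
# fill probability, dark liquidity etc. - this only shows the basic idea.
def route_buy_order(quantity, offers):
    """offers: list of (venue, ask_price, ask_size) -> list of child orders."""
    child_orders = []
    remaining = quantity
    for venue, price, size in sorted(offers, key=lambda o: o[1]):  # best price first
        if remaining <= 0:
            break
        take = min(remaining, size)
        child_orders.append((venue, price, take))
        remaining -= take
    return child_orders  # may be a partial fill if displayed liquidity runs out

offers = [("LSE", 100.02, 300), ("Chi-X", 100.01, 200), ("Turquoise", 100.02, 150)]
print(route_buy_order(500, offers))
# [('Chi-X', 100.01, 200), ('LSE', 100.02, 300)]
```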

Tom's presentation was then followed by a panel debate on Smart Order Routing:

  • A manager from Baader said that the German part of the European market was not yet very sophisticated, with most German clients specifying exactly where a trade should be executed, hence nullifying the need for SOR.
  • Deutsche Bank (DB) mentioned that having both US and EU operations had helped them get SOR in place for the EU quicker given their US experience.
  • UBS and Baader both said that algo trading and SOR are increasingly integrated and will merge, with the algo defining what and how to trade and the SOR component determining where
  • DB said that a "tipping point" towards usage of SOR in the EU will occur when more than 20% of trading occurs away from the primary exchanges.
  • DB said that 60% of US liquidity was due to algorithmic trading and that there were now no EU barriers to this happening in European markets and bringing with it increased liquidity, although issues such as not having a consolidated market tape for trading made things more difficult
  • Neonet said that clearing and settlement costs were still a barrier to widescale SOR adoption.
  • IGNIS Asset Management said that SOR was a "high touch" service for them, requiring SOR vendors to be very responsive and client focussed. In selecting SOR vendors they were concerned with data privacy and also with having a real-time reporting facility to see how orders were being filled.

And finally (at least before I had to leave) there was a presentation by Richard Semark of UBS on Transaction Cost Analysis (TCA):

  • He was surprised to find that there were not many presentations around on TCA
  • TCA vendors are behind the times and are not up to date with current developments
  • Historically TCA was about what had happened (about 3-4 months ago!)
  • Mifid has driven fund managers and traders to talk more and TCA is a key part of this conversation
  • It is hard to look bad against traditional TCA measures such as VWAP if a stock is always rising or always falling, and this can hide a lack of performance and "value add" (a bare-bones VWAP slippage calculation is sketched just after this list)
  • Using "Dark" for non-displayed liquidity has been a publicity disaster for the electronic trading industry
  • Much Smart Order Routing (SOR) is still based on static tables of trading venues that are updated on a monthly or quarterly basis
  • Market share by volume of a venue is not necessarily correlated with obtaining the best prices in the market
  • TCA should be based upon a dynamic benchmark that responds to the market and trades done not against a static one
  • Trade performance is not linear with trade size, an assumption incorrectly made in much of TCA
  • Trade risk (variability in outcomes) deserves more focus
  • Portfolio TCA is much more complicated where the trading of a single stock cannot be looked at in isolation of its effects on the whole portfolio
  • Real-Time TCA is becoming ever more important to clients since it allows them to understand more of what is going wrong/right with filling an order
  • TCA providers are not doing a good job for clients, not using the right data or answering the right questions for clients
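
To make the VWAP criticism above concrete, the traditional TCA measure is just the difference between your average execution price and the market's volume-weighted average price over the same interval, usually quoted in basis points. A bare-bones sketch with made-up fills and market prints:

```python
# Bare-bones TCA: slippage of a buy order's average fill price versus the
# market VWAP over the same interval, in basis points. Illustrative only -
# real TCA uses dynamic benchmarks, market impact models and so on.
def vwap(trades):
    """trades: list of (price, volume) -> volume-weighted average price."""
    total_volume = sum(v for _, v in trades)
    return sum(p * v for p, v in trades) / total_volume

def slippage_bps(my_fills, market_trades):
    """Positive = paid more than the market VWAP (bad for a buy order)."""
    benchmark = vwap(market_trades)
    return (vwap(my_fills) - benchmark) / benchmark * 10_000

my_fills = [(25.11, 400), (25.14, 600)]                           # our executions
market_trades = [(25.10, 5_000), (25.12, 8_000), (25.15, 3_000)]  # all market prints
print(round(slippage_bps(my_fills, market_trades), 2))            # about 3.4 bps
```

The weakness Richard points out is visible even here: in a steadily trending stock an early (or simply lucky) fill beats this benchmark regardless of trading skill, which is the argument for dynamic, risk-aware benchmarks instead of a single static one.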

Not sure who the TCA providers he refers to are, but maybe I should find out to see what they offer...

25 June 2009

Twittering the Wisdom of Crowds

Deserving an award for title alliteration, an article on Finextra has announced that Streambase Systems have connected their system to Twitter, the fashionable microblogging site. Regardless of the intent, it is an excellent marketing exercise by Streambase (er, maybe one that I should remember for the future!...).

Reasonable comments from Finextra at the end of the article, saying that Twitter is a notoriously bad source of information, very open to (designed for?) rumour, and as such it would be difficult to see what real information traders could extract from the noise. At one level, rumour and counter-rumour are the basis of markets, although the recent financial crisis has illustrated just how powerful rumours can be. I would suggest it begs the question as to when rumour and counter-rumour are part of the price formation process, and when they become market manipulation.

On a related note, the Efficient Market Hypothesis (EMH), the financial theory that all information (including rumours) is reflected in current prices, has been coming under some attack in the press recently. With a fund-management and Monty-Pythonesque slant, James Montier of Société Générale takes EMH to task in his recent article in the FT (see Pablo Triana for an alternative view).

My opinion is that EMH has still got some legs in it as a model, but behavioural finance probably has a lot more to explain (or rationalise?) about this theory and others in light of recent events. Anyone got a different opinion, or do I need to open a Twitter account to find out?...

19 May 2009

Alternatives Need a Bigger Umbrella?

Interesting article in the FT today about why the US exodus from traditional exchanges might not be repeated here in Europe, which is contrary to the recent marketing mantra of alternative trading venues such as Chi-X, Turquoise and Equiduct. If correct, the economics outlined in the article look prohibitive:

"Merely to break even, an alternative platform with a cost base of about €10m would need to do 100m trades a year. Quite a task, given that the 208-year-old London Stock Exchange, which reports full-year figures on Wednesday, said in March it was on course for about 190m in its UK orderbook."

The article points out the difficulty of starting an alternative trading venue against a dire economic background and emphasises this by ending with:

“Xavier Rolet, the LSE’s new chief executive, should be praying for rain.”

14 May 2009

Microsoft CEP Surfaces as "Orinoco"

Seems like Microsoft have now gone public on the Microsoft TechEd site that they have a Complex Event Processing (CEP) engine coming to market shortly (see MagmaSystems blog post). One of my colleagues, Mark Woodgate, attended a briefing event at Microsoft for this technology back in February this year - here's an extract from some internal notes that Mark made back then:

"Microsoft CEP is very similar to StreamBase conceptually (and not unsurprisingly), in the sense that there are adapters and streams and how you merge and split them via some kind of query language is the same. However, StreamBase uses the StreamSQL which as we have seen is SQL-like in syntax but Microsoft CEP uses LINQ and .NET and although conceptually it is doing the same thing, it does not look the same. StreamBase’s argument was you can be an SQL programmer to use it and don’t need lower-level like .NET; however, it’s not SQL really as it has all these ‘extensions’ you have to learn so using .NET might look more tricky but in fact it makes sense. They don’t have a sexy GUI yet for designing CEP applications like StreamBase but it will be done in Visual Studio 2008.

Currently, you build various assemblies (I/O adapters, queries and functions) and then bolt them all together, called ‘binding’ by command line tool. You then deploy the application onto one or more machines using another tool so it’s a manual process right now. They are aware this needs to be made easier and more visual. They are allowing other libraries to be bolted in via the various SDKs so it’s pretty open and flexible. It works well with HPC and clusters/grids (or so they say) and of course can be used with SQL Server. The CEP engine also has a web interface based on SOAP so at least non-Windows based systems can talk to it"
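
As a purely conceptual illustration of the adapter/stream/query pattern Mark describes - this is not Microsoft's LINQ-based API or StreamSQL, just the general shape of a windowed query over an event stream, sketched in Python for brevity:

```python
# Conceptual sketch of a CEP-style windowed query: events arrive from an
# input "adapter", a query consumes the stream and emits derived events
# (here a per-symbol moving average). Invented names and data, for illustration.
from collections import defaultdict, deque

def price_adapter():
    # Stand-in for an input adapter; a real one would wrap a market data feed.
    for event in [("VOD", 178.1), ("VOD", 178.4), ("BP", 452.0), ("VOD", 177.9)]:
        yield event

def moving_average_query(events, window=3):
    history = defaultdict(lambda: deque(maxlen=window))
    for symbol, price in events:
        history[symbol].append(price)
        prices = history[symbol]
        yield (symbol, sum(prices) / len(prices))  # derived output event

for output in moving_average_query(price_adapter()):
    print(output)
```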

The release of this technology will be an interesting addition to the CEP market and to the Microsoft technology stack in general. Assuming performance is at credible levels (i.e. not necessarily leading, but not appalling either) it will certainly bring both technical and commercial pressure to bear on existing CEP vendors (see earlier post on Aleri/Coral8) and has the potential to broaden the usage of CEP. Obviously Linux-Lovers (sorry, I didn't mean to be personal...) will not agree with this, but Microsoft is putting together an interesting stack of technology when you see this CEP engine, Microsoft HPC and Microsoft Velocity coming together under .NET.

20 March 2009

Merging in public is difficult...

Sounds like Aleri and Coral8 in the CEP (Complex Event Processing) market are not doing the best job they could of managing the publicity surrounding their recent merger, not helped by the announcement of a CEP capability by Sybase, based on Coral8 source code.

Explained more in a post on the Magmasystems Blog, and made more entertaining by the aggressive marketing tactics of Streambase in responding to the merger by offering a software trade-in facility for clients of Aleri and Coral8 (see press release).

25 January 2009

CEP in 2009

Interesting predictions for complex event processing (CEP) in 2009 (click here for link) - sounds like some form of reality is appearing in this area of the market, accelerated by the current financial crisis. Entry of bigger players and usage of LINQ in CEP will be interesting too.

