This post was originally published on LinkedIn Pulse on June 4, 2017
What is the most significant contributing factor to the performance of a quantitative fund: its signal generators or its risk allocators? Can we still succeed if we have good signal generators but poor risk management?
While preparing this presentation, I came across a recent insightful interview with ex-LTCM trader Victor Haghani. Although we may feel faint-hearted about following advice from an ex-LTCM trader, there is a lot of wisdom in Haghani’s words:
- “I realised that investing involves two problems: the first one is identifying attractive investment opportunities and the second one is sizing them”
- “Ninety per cent of the literature out there is all about how you can find the gems, whether they are strategies or actual investments”
- “The second problem seems pretty pedestrian but, actually, that is the critical one”
- “The sizing of the trade is what resulted in the failure of LTCM”
After many years, I have come to the same conclusion:
The sizing of trades and the ability to dynamically manage our risk exposures, along with sound infrastructure, are what contribute most to our performance in the long run.
Indeed, all “attractive” investment opportunities are well known and documented in the academic and industry literature. For an encyclopaedia of quant strategies, I refer to the excellent book “Expected Returns” by Antti Ilmanen. In fact, I think that more than 95% of all quant investment solutions follow some modification of the base strategies presented and analysed in that encyclopaedia. Moreover, a few key quant funds are now also keen to share some of their expertise. As very good examples, I refer to “Two Centuries of Trend Following” by the CFM quant team and “Trend Following: Equity and Bond Crisis Alpha” by AHL quants.
In light of the amount of high-calibre research in the public domain, I don’t think there is much edge in trying to “discover” a new sustainable strategy. Of course, there are many ways to generate signals for, say, carry or trend-following strategies. However, what these signals provide are entry/exit points, not ways of managing our portfolio across multiple strategies.
Could we instead “discover” an edge in developing models for risk-managing our strategies?
We all know what happened to LTCM in the end. It is commonly mentioned that LTCM failed despite having two Nobel prize winners on board. However, I believe that what killed LTCM was not its academic “alpha” models but its poor risk management. Ultimately, LTCM may have failed in understanding the cyclicality and the liquidity risk of its strategies, as well as by taking too much leveraged risk on carry strategies with strong tail correlation in a high-volatility regime (even though these strategies appeared to be uncorrelated in a low-volatility regime).
What should then be considered when designing and running a portfolio of quantitative investment strategies? I find that we need to consider the following three key components.
1) At the level of individual strategies: we should aim at understanding the cyclicality risk of each individual strategy. For example, does a strategy outperform or underperform in a trending, mean-reverting, or low-volatility regime? We should then ensure that the strategy has a positive “alpha” over a long-term period that includes its unfavourable cycle.
We should never rely on our ability to time the market cycle for an individual strategy, although we could try to time cycles at the portfolio level. At this stage, we should filter out strategies that have too strong an exposure to cyclicality risk with too little compensation for bearing this risk.
2) At the level of a strategy class: after we have built models for generating signals in individual strategies, we can build an allocator for a portfolio of strategy classes. For example, for a portfolio of carry strategies, we would aim at diversifying the signals and risks of strategy components by considering a multi-asset universe along with a risk-based allocation.
In this way, we aim to allocate to components with the highest ratios of expected reward to risk. It goes without saying that traditional covariance-based risk allocators can only diversify the idiosyncratic risk of strategy components.
Although risk parity may be a sound framework, it is still a backward-looking assessment of risk. Within a strategy class (say, carry or trend-following), we can only aim at diversifying the idiosyncratic risk of strategy components, not the systemic risk of the strategy class.
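To make the idea of a backward-looking, covariance-based risk allocation concrete, here is a minimal sketch of an inverse-volatility allocator within a strategy class. It is illustrative only: the return series and component count are assumed, and a full risk-parity allocator would use the complete covariance matrix rather than volatilities alone.

```python
import numpy as np

def inverse_vol_weights(returns: np.ndarray) -> np.ndarray:
    """Backward-looking risk-based weights: allocate inversely to each
    component's sample volatility, normalised to sum to one."""
    inv = 1.0 / returns.std(axis=0, ddof=1)
    return inv / inv.sum()

# toy example: three strategy components with different volatilities
rng = np.random.default_rng(0)
component_returns = rng.normal(0.0, [0.01, 0.02, 0.04], size=(252, 3))
weights = inverse_vol_weights(component_returns)
```

The lowest-volatility component receives the largest weight, which equalises the stand-alone risk contributions but, as noted above, cannot diversify away the systemic risk of the strategy class itself.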
3) At the portfolio level: the ultimate step involves the allocation to the strategy classes identified in the second step, incorporating some forward-looking measures of risk. For example, how should we allocate between carry and trend-following strategies in the current market cycle?
We can consider two approaches: top-down and bottom-up allocation. For the top-down allocation, we would estimate and forecast market regimes and make tactical allocations to our strategy classes. For the bottom-up allocation, we would generate scenarios for risk factors, project the P&L of live strategies, and finally balance their exposures to the risk factors.
Therefore, if we understand the cyclicality risk of strategies at the individual level and we can forecast the cycles, we can dynamically reduce our exposure to underperforming strategies when faced with their unfavourable cycles. At the same time, we can increase the exposure to strategies that are expected to outperform in the current cycle.
Interested readers can find more details on this approach in my presentation for the buy-side summit at Global Derivatives.
I have also led a webinar on this topic. I thank the participants for their interesting questions. The webinar can be viewed on YouTube.
I wrote some notes for the Q&A part.
Q&A from the webinar
1) How do you reduce cyclicality risk? Do you forecast cyclicality regimes?
How do you forecast and hedge it? Do you use Markov models?
How sensitive are the results to the regime identification?
Yes, I use a Markov-chain model with three regimes: a cycle with positive drift and small volatility, a cycle with range-bound dynamics, and a stressed cycle with large volatility. I apply this model to produce regular forecasts for a multi-asset universe and use some combination of the inferred probabilities to make tactical allocations.
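As an illustration of the filtering step (not the production model), here is a minimal forward filter for a three-regime Gaussian Markov-chain model. All parameter values below are assumed for the example and would, in practice, be estimated from data.

```python
import numpy as np

def forward_filter(returns, mu, sigma, trans, init):
    """Forward filter for a Gaussian Markov-chain model:
    returns the filtered regime probabilities after each observation."""
    probs = np.zeros((len(returns), len(mu)))
    pred = init
    for t, r in enumerate(returns):
        lik = np.exp(-0.5 * ((r - mu) / sigma) ** 2) / sigma  # Gaussian likelihoods
        post = pred * lik
        post /= post.sum()          # Bayes update, normalised
        probs[t] = post
        pred = trans.T @ post       # predict next-period regime
    return probs

# illustrative parameters (assumed, not estimated):
mu = np.array([0.0005, 0.0, -0.001])    # trend / range-bound / stressed drifts
sigma = np.array([0.006, 0.01, 0.025])  # small / medium / large volatility
trans = np.array([[0.98, 0.01, 0.01],   # sticky regime transitions
                  [0.02, 0.96, 0.02],
                  [0.03, 0.07, 0.90]])
init = np.array([1 / 3, 1 / 3, 1 / 3])
```

Feeding a stretch of large negative returns through the filter would push the inferred probability mass into the stressed regime; it is these filtered probabilities that drive the tactical allocation.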
I would not hedge the cycle risk directly because of the forecasting risk and trading costs. Some hedging comes from overlaying strategies that perform differently in different cycles. At the level of strategy design, we need to make sure that the strategy produces a positive alpha over a long-term period which also includes its unfavourable cycle. We cannot rely on our ability to time the cycles to turn a highly cyclical strategy into a gem. At the strategy-class level, we can partially rely on diversification along with a risk-parity method that helps to reduce the exposure to underperforming components of the strategy class.
The key to reducing the sensitivity to the regime identification is to allocate to strategies which outperform in different regimes. As I show, even a static combination of carry, trend-following, and mean-reversion strategies produces a significant improvement in the risk-reward profile.
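A minimal sketch of why the static combination helps, assuming three purely illustrative, uncorrelated daily return streams of equal quality: the equal-weight blend keeps the average return while shrinking the volatility, which lifts the risk-reward ratio without any regime timing.

```python
import numpy as np

def sharpe(returns: np.ndarray, periods: int = 252) -> float:
    """Annualised Sharpe ratio of a daily return series (zero risk-free rate)."""
    return np.sqrt(periods) * returns.mean() / returns.std(ddof=1)

# toy daily returns for three uncorrelated strategies (illustrative only)
rng = np.random.default_rng(42)
carry = 0.0004 + 0.01 * rng.standard_normal(2520)
trend = 0.0004 + 0.01 * rng.standard_normal(2520)
mrev = 0.0004 + 0.01 * rng.standard_normal(2520)

blend = (carry + trend + mrev) / 3.0  # static equal-weight combination
```

With independent streams, the blend's volatility falls by roughly the square root of the number of strategies while its mean return stays the average of the components.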
2) Could you expand a bit more on relative value filtering for volatility carry strategies?
What kind of filter do you use?
For volatility trading, I apply a stochastic volatility model with heavy tails, which is estimated using time-series data. Using this model, I infer the statistical value of an option, the associated delta-hedging costs, and the gap risk. Finally, I compare the statistical values against market traded prices across multiple assets and create a ranking based on reward-to-risk ratios, which are then utilised for trading signals. The advantage of the relative-value approach is that it needs no external variables to make trading decisions. I have seen a lot of volatility strategies which are conditioned on various external variables, for example, the recent performance of the S&P 500 index, the VIX, the term structure of VIX futures, etc. None of this makes sense to me, because such deterministic rules are designed to optimise the back-test, and the strategy cannot adapt itself in the future. The only reasonable approach is to estimate the relative value of the option replication and build a model which can produce accurate forecasts for the volatility and the distribution of price returns.
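To illustrate the ranking step, here is a hypothetical sketch. The `OptionQuote` fields are assumed inputs: the statistical value, hedging cost, and gap-risk estimate would come from the stochastic volatility model, which is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class OptionQuote:
    asset: str
    market_price: float  # traded option price
    model_value: float   # statistical value from the vol model (assumed given)
    hedge_cost: float    # estimated delta-hedging cost
    risk: float          # e.g. a gap-risk estimate from the model

def rank_by_relative_value(quotes: list) -> list:
    """Rank short-volatility candidates by expected reward per unit of risk:
    reward = premium received minus statistical value and hedging costs."""
    def score(q: OptionQuote) -> float:
        return (q.market_price - q.model_value - q.hedge_cost) / q.risk
    return sorted(quotes, key=score, reverse=True)
```

Options whose market price most exceeds their model value net of hedging costs, per unit of gap risk, rank first as candidates for selling volatility; no external market variable enters the decision.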
3) Isn’t risk premium size model-dependent for options?
Yes, the risk premium is model-dependent for options. The reason is simple. On the one hand, when we delta-hedge, our delta and, as a result, the realised P&L will depend on the choice of our hedging model and its estimation method, as well as on our specifications for executing the delta-hedges. On the other hand, when we apply a model for delta-hedging and signal generation, we need to estimate the model parameters, and there is no unique way of doing the estimation. As a result, the proper way to analyse the risk premium is to develop a statistical model which can reproduce the empirical features of asset dynamics along with the hedging methods. Of course, there is model risk. We can reduce the model risk by training our model to make forecasts of realised statistics of price returns and by analysing its predictive ability.
4) Do you use option implied information to infer probability of regime? Thank you!
Not at the moment, but I do believe that option-implied data can be used to quantify the risk premia conditional on the regime, and this can be applied in trading strategies. I implemented a similar model for risk premia in credit default swaps a while ago. I really liked the implications of that model because it showed that a significant part of the risk premia is attributed to the stressed regime, where credit spreads widen by a larger amount due to the systemic risk. The risk premia realised in normal regimes on all carry strategies can be viewed as the expected compensation for bearing losses in stressed regimes. As a result, understanding the cyclicality of risk premia from the implied market data is important for designing trading strategies. It is at the top of my list to implement a regime-conditional model for the volatility risk premia, where I would also apply options market data for the inference.
5) What maturity is the short straddle strategy in slide 8? I’m surprised the strategy has stalled for 6 years
It is an illustration of a short volatility strategy that rolls at-the-money straddles on the S&P 500 index. It is a simple volatility strategy which is about delta-neutral at the inception of each roll but does not apply delta-hedging until option expiry in one month. The strategy has stalled for the past few years because of the positive trend in the S&P 500 index: most of the time, the call leg has been exercised, with exercise costs exceeding the premium received. In fact, the strategy returned 40% during the market recovery in 2009, so it is the level of the volatility risk premia that matters the most. It is true that strategies which sell either naked puts or VIX futures without delta-hedging have been performing well over the last few years. However, these strategies are always long index delta with a very strong correlation to the index, so their performance is largely attributed to the positive performance of the S&P 500 index. The point of illustrating this strategy is that it is a simple volatility strategy which outperforms in the mean-reversion regime but underperforms in the low-volatility regime. The popular belief that short volatility strategies outperform in a low-volatility regime is not correct in my experience. In my implementations, the volatility carry strategies are designed to benefit from the mean-reversion and from the dispersion, not from low volatility directly.
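The per-roll mechanics can be sketched as follows; the numbers are purely illustrative and ignore transaction costs.

```python
def short_straddle_pnl(premium: float, strike: float, spot_at_expiry: float) -> float:
    """P&L per roll of an unhedged short ATM straddle held to expiry:
    premium received minus the intrinsic value paid on the exercised leg."""
    return premium - abs(spot_at_expiry - strike)

# in a steady up-trend the call leg finishes in the money, and the
# exercise cost can exceed the premium received:
pnl = short_straddle_pnl(premium=3.0, strike=100.0, spot_at_expiry=105.0)  # -2.0
```

The seller profits only when the index stays within a band of the strike whose width equals the premium, which is why the strategy favours mean-reverting rather than trending regimes.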
6) Do you use any kind of Bayesian filters? Like Kalman filters?
Yes, since my PhD studies in statistics I have favoured Bayesian inference and robust statistics. A Kalman filter is necessary for the estimation and forecasting of market regimes. For the estimation of the stochastic volatility model for volatility trading, I apply Bayesian estimation, which allows me to introduce a prior distribution for the volatility. In my experience, by appropriately conditioning the model parameters, we can achieve a more stable estimation and more reliable forecasts.
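As a generic illustration of Kalman filtering in this setting (not the author's production model), here is a minimal scalar filter that tracks a slowly varying latent drift in noisy returns; the state and observation noise variances `q` and `r` are assumed values.

```python
import numpy as np

def kalman_drift(returns, q: float = 1e-6, r: float = 1e-4) -> np.ndarray:
    """Scalar Kalman filter: track a latent drift mu_t in
    r_t = mu_t + noise, with random-walk state dynamics."""
    mu, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for obs in returns:
        p = p + q                 # predict: state variance grows
        k = p / (p + r)           # Kalman gain
        mu = mu + k * (obs - mu)  # update with the new observation
        p = (1.0 - k) * p
        estimates.append(mu)
    return np.array(estimates)
```

The gain `k` balances the prior against each new observation, so the filtered drift adapts smoothly rather than chasing every data point; the same filter-then-update logic extends to the multi-state regime models discussed above.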
Artur Sepp works as a Quantitative Strategist at the Swiss wealth management company Julius Baer in Zurich. His focus is on quantitative models for systematic trading strategies, risk-based asset allocation, and volatility trading. Prior to that, Artur worked as a front-office quant in equity and credit at Bank of America, Merrill Lynch, and Bear Stearns in New York and London, with an emphasis on volatility modelling and on the valuation, trading, and risk management of multi- and cross-asset derivatives. His research areas and expertise are in econometric data analysis, machine learning, and computational methods, with applications to quantitative trading strategies, asset allocation, and wealth management. Artur has a PhD in Statistics focused on stopping-time problems of jump-diffusion processes, an MSc in Industrial Engineering from Northwestern University in Chicago, and a BA in Mathematical Economics. Artur has published several research articles on quantitative finance in leading journals and is known for his contributions to stochastic volatility and credit risk modelling. He is a member of the editorial board of the Journal of Computational Finance. Artur keeps a regular blog on quant finance and trading at http://www.artursepp.com.
The views and analysis presented in this article are those of the author alone and do not represent the views of his employer. This article does not constitute investment advice.