Artur Sepp Blog on Quantitative Investment Strategies

    • Log-normal Stochastic Volatility Model for Assets with Positive Return-Volatility Correlation – research paper

      Posted at 3:04 pm by artursepp, on August 10, 2022

      I am introducing my most recent research on the log-normal stochastic volatility model with applications to assets with positive implied volatility skews, such as the VIX index, short index ETFs, cryptocurrencies, and some commodities.

      Together with Parviz Rakhmonov, we have extended my earlier work on the Karasinski-Sepp log-normal volatility model into an extensive paper, with an extra focus on modelling implied volatilities of assets with positive return-volatility correlation, in addition to deriving a closed-form solution for option valuation under this model.

      Assets with positive implied volatility skews and return-volatility correlations

      While it is typical to observe negative correlation between returns of an asset and changes in its implied and realized volatilities, there are in fact many assets with positive return-volatility correlation and, as a consequence, with positive implied volatility skews. In the figure below, I show some representative examples.

      (A) The VIX index provides protection against corrections in the S&P 500 index, so that out-of-the-money calls on VIX futures are valuable and command extra risk-premia compared to puts.

      (B) Short and leveraged short ETFs on equity indices have positive implied volatility skews because of their anti-correlation with the underlying equity indices. I use the 3x Short Nasdaq ETF with NYSE ticker SQQQ, which is the largest short ETF in the US equity market and has a very liquid listed options market.

      (C) Cryptocurrencies, including Bitcoin and Ethereum, and “meme” stocks, such as AMC, have positive skews during speculative phases when positive returns feed speculative demand for upside. These self-feeding price dynamics increase the demand for calls following a period of rising prices. However, the positive return-volatility correlation tends to reverse once the “greed” regime is over and a “risk-off” regime prevails.

      (D) Gold and commodities in general may have positive volatility skews depending on supply-demand imbalances, seasonality, etc.

      Importantly, the valuation of options on these assets is not feasible using conventional stochastic volatility models applied in practice, such as the Heston, SABR, and Exponential Ornstein-Uhlenbeck stochastic volatility models, because these models fail to be arbitrage-free (forwards and call prices are not martingales). Curiously enough, the topic of no-arbitrage for SV models with positive return-volatility correlation has not received attention in the literature, despite the large number of assets with positive return-volatility correlation.

      Applications to Options on Cryptocurrencies

      An additional, yet important, application of our work is the pricing of options on cryptocurrencies, where call and put options with inverse pay-offs are dominant. The advantage of inverse pay-offs for cryptocurrency markets is that all option-related transactions can be handled using units of the underlying cryptocurrencies, such as Bitcoin or Ethereum, without using fiat currencies. Critically, since both inverse options (traded on the Deribit exchange) and vanilla options (traded on the CBOE) are traded for cryptocurrencies, a stochastic volatility model must satisfy the martingale condition under both the money-market-account and inverse measures to exclude arbitrage opportunities between vanilla and inverse options. We show that price dynamics in our model are martingales under both the inverse and money-market-account measures.
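As a concrete illustration of the inverse pay-off: an inverse call settles max(S_T − K, 0) US Dollars, but delivered in units of the coin at the settlement price. The sketch below shows this standard Deribit-style pay-off (the function name is mine, not code from the paper):

```python
import numpy as np

def inverse_call_payoff_coin(s_t, strike):
    """Pay-off of an inverse call in units of the coin.

    An inverse call settles max(S_T - K, 0) USD, delivered in coin at
    the settlement price S_T, i.e. max(S_T - K, 0) / S_T coins.
    """
    s_t = np.asarray(s_t, dtype=float)
    return np.maximum(s_t - strike, 0.0) / s_t

# Example: BTC settles at 60,000 USD with a 50,000 USD strike:
# the holder receives 10,000 / 60,000 = 1/6 BTC per contract.
```

Because the pay-off is denominated in coin, pricing it as an expectation requires the martingale property under the inverse (coin) measure, which is exactly the condition discussed above.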

      In the figure below, I show the model fit to Bitcoin options observed on 21-Oct-2021 (a period with positive skew) for the most liquid maturities of 2 weeks, 1 month, and 2 and 3 months. We see that the model calibrated to Bitcoin options data captures the market implied skew very well across the most liquid maturities with only 5 model parameters. The average mean squared error (MSE) is about 1% in implied volatilities, which is mostly within the quoted bid-ask spread. Calibration to the ATM region can be further improved using a term structure of the mean volatility or by augmenting the SV model with a local volatility part to fit the implied volatility surface accurately.
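For reference, a fit error of "1% in implied volatilities" is most naturally computed as a root-mean-squared deviation between model and market implied vols. A small sketch (the function name is mine; weighting quotes by vega or bid-ask width is a common refinement):

```python
import numpy as np

def implied_vol_rmse(market_vols, model_vols):
    """Root-mean-squared error between model and market implied vols,
    in the same units as the vols (0.01 corresponds to 1 vol point)."""
    diff = np.asarray(model_vols, float) - np.asarray(market_vols, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Three quotes each mis-fit by one vol point give an error of 0.01.
```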

      Model applications

      The quality of model fit is similar for other assets with either positive or negative skews. The main strength of our model is that it can be used for the following purposes.

      1. Cross-sectional no-arbitrage model for different exchanges and options referencing the same underlying.
      2. Model for time series analysis of implied volatility surfaces.
      3. Dynamic valuation model for structured products and option books.

      Further resources

      SSRN paper Log-normal Stochastic Volatility Model with Quadratic Drift https://ssrn.com/abstract=2522425

      Github project with the example of model implementation in Python: https://github.com/ArturSepp/StochVolModels

      Youtube video of the lecture I gave at Imperial College on model applications for Bitcoin volatility surfaces: https://youtu.be/dv1w_H7NWfQ

      Youtube podcast introducing the paper and reviewing the Github project with Python analytics for the model implementation: https://youtu.be/YHgw0zyzT14

      Disclaimer

      The views and opinions presented in this article and post are mine alone. This research is not investment advice.

      Posted in Crypto, Python, Volatility Modeling, Volatility Trading | 1 Comment
    • Developing systematic smart beta strategies for crypto assets – QuantMinds Presentation

      Posted at 3:09 pm by artursepp, on February 23, 2022

      I am delighted to share the video from the QuantMinds presentation that I made in Barcelona in December 2021. Many thanks to the QuantMinds organizers for allowing me to share this video. It was nice to attend an onsite conference again after a long while and to meet old friends and colleagues, and I was positively surprised by how many people attended. Many thanks to the organizers for making it happen during these uncertain times!

      I presented a framework for the design of sector-based smart beta indices and products for diversified investing in crypto assets. There are three challenges to account for when designing a systematic strategy on crypto assets.

      First, the data quality is poor indeed. We need to tackle the enormous challenge of accommodating and filtering data from multiple data providers. Unlike in traditional asset classes, publicly available market data (such as market cap and traded volumes) can be a source of alpha for systematic strategies.

      Second, the time history of data is very short. For example, most protocol tokens for Decentralized Finance (DeFi) applications were listed during the second half of 2020, which means that we have to ascertain the design and risk-reward profile of a strategy using about one year of data.

      Third, the liquidity of crypto assets may be insufficient when contrasted with traditional assets. Therefore, we need to design strategies carefully by screening for and incorporating liquidity into the process. One of the challenges is that most crypto exchanges (there are about 30 tier-one exchanges) tend to over-estimate their traded volumes.

      To overcome these challenges, I constructed a bootstrapping simulation engine which generates joint paths of price and fundamental data from the empirical distributions without breaking the correlation and auto-correlation structure of dependencies in the data.
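The engine itself is not published in this post, but its core idea can be sketched with a standard moving-block bootstrap: resampling whole rows jointly preserves the cross-sectional correlation between series, and resampling them in consecutive blocks preserves (most of) the autocorrelation. All names below are illustrative, not the actual engine:

```python
import numpy as np

def block_bootstrap_paths(data, path_len, n_paths, block_len=10, seed=42):
    """Joint moving-block bootstrap of a multivariate time series.

    data: (T, n_series) array of joint observations, e.g. returns and
    changes in fundamental data observed on the same dates.
    Returns an array of shape (n_paths, path_len, n_series).
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    t, n_series = data.shape
    n_blocks = int(np.ceil(path_len / block_len))
    paths = np.empty((n_paths, path_len, n_series))
    for p in range(n_paths):
        # sample random block start points and splice the blocks together
        starts = rng.integers(0, t - block_len + 1, size=n_blocks)
        sample = np.concatenate([data[s:s + block_len] for s in starts])
        paths[p] = sample[:path_len]
    return paths
```

Longer blocks preserve more of the autocorrelation structure at the cost of fewer independent resamples, so the block length is a tuning parameter.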


      Posted in Asset Allocation, Crypto, Decentralized Finance, Quantitative Strategies | 2 Comments
    • Toward an efficient hybrid method for pricing barrier options on assets with stochastic volatility – research paper

      Posted at 2:00 pm by artursepp, on February 23, 2022

      I am excited to share the latest paper with Prof. Alexander Lipton.

      https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4035813

      We find a semi-analytical solution to one of the unsolved problems in Quantitative Finance: computing survival probabilities and barrier option values for two-dimensional correlated dynamics of stock returns and the stochastic volatility of returns.

      An analytical solution to such a problem does not appear feasible because the valuation equation is asymmetric in the log-price variable when the correlation between returns and the volatility of returns is non-zero. In the case of zero correlation, an analytic closed-form solution is achievable, involving a numerical integration in Fourier space.

      In this article, we combine one-dimensional Monte Carlo simulations and the semi-analytical one-dimensional heat potential method (MHP) to design an efficient technique for pricing barrier options on assets with correlated stochastic volatility. Our approach to barrier options valuation utilizes two loops. First, we run the outer loop by generating volatility paths via the Monte Carlo method. Second, we condition the price dynamics on a given volatility path and apply the method of heat potentials to solve the conditional problem in closed-form in the inner loop. Next, we illustrate the accuracy and efficacy of our semi-analytical approach by comparing it with the two-dimensional Monte Carlo simulation and a hybrid method, which combines the finite-difference technique for the inner loop and the Monte Carlo simulation for the outer loop. Finally, we apply our method to compute state probabilities (Green function), survival probabilities, and the value of call options with barriers.
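The two-loop structure can be sketched in a few lines for the zero-correlation case. Here the heat-potential inner solver is replaced by the standard Brownian-bridge crossing probability between grid points, purely as an illustrative stand-in for the conditional inner problem (all parameters are toy values; this is not the MHP method from the paper):

```python
import numpy as np

def barrier_survival_mc(s0=1.0, barrier=0.8, t=1.0, n_steps=100,
                        n_paths=20_000, v0=0.2, mean_vol=0.2,
                        kappa=2.0, vol_of_vol=0.5, seed=1):
    """Down-and-out survival probability under a toy log-normal SV model
    with zero return-volatility correlation.

    Outer loop: simulate log-normal (exp-OU) volatility paths.
    Inner step: conditional on the vol path, the log-price is Gaussian;
    a Brownian-bridge correction accounts for barrier crossings between
    grid points (stand-in for the paper's heat-potential inner solver).
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    b = np.log(barrier / s0)            # barrier in log-price terms
    log_v = np.full(n_paths, np.log(v0))
    x = np.zeros(n_paths)               # log-price relative to s0
    p_survive = np.ones(n_paths)        # running survival probability
    for _ in range(n_steps):
        sigma = np.exp(log_v)
        var = sigma ** 2 * dt
        # conditional Gaussian log-price step (independent of vol shock,
        # i.e. correlation rho = 0 in this sketch)
        x_next = x - 0.5 * var + np.sqrt(var) * rng.standard_normal(n_paths)
        hit = x_next <= b
        # probability the bridge between x and x_next did NOT cross b
        bridge = np.where(
            hit, 0.0,
            1.0 - np.exp(-2.0 * np.maximum(x - b, 0.0)
                         * np.maximum(x_next - b, 0.0) / var))
        p_survive *= bridge
        x = x_next
        # outer loop update: mean-reverting log-volatility
        log_v += (kappa * (np.log(mean_vol) - log_v) * dt
                  + vol_of_vol * np.sqrt(dt) * rng.standard_normal(n_paths))
    return float(p_survive.mean())
```

Conditioning on the volatility path and solving the inner problem (semi-)analytically is what removes most of the discretization noise relative to a brute-force two-dimensional simulation.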

      As a byproduct of our analysis, we generalize Willard’s (1997) conditioning formula for valuation of path-independent options to path-dependent options. Additionally, we derive a novel expression for the joint probability density for the value of drifted Brownian motion and its running minimum or maximum in the case of time-dependent drift.
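For orientation, the constant-drift counterpart of this joint density is a classical reflection-principle result (stated here as a reference point; the paper's novel expression replaces the constant drift with a time-dependent one):

```latex
% Joint density of X_T = \mu T + W_T and its running minimum
% m_T = \min_{0 \le t \le T} X_t, constant drift \mu, unit volatility,
% valid on the region m \le \min(0, x):
f(x, m) = \frac{2\,(x - 2m)}{\sqrt{2\pi T^{3}}}
          \exp\!\Big( \mu x - \tfrac{1}{2}\mu^{2} T
                      - \frac{(x - 2m)^{2}}{2T} \Big)
```

The driftless density follows from the reflection principle, and the drift enters through a Girsanov change of measure, which contributes the factor $\exp(\mu x - \mu^2 T / 2)$.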

      Our approach provides better accuracy and is orders of magnitude faster than the existing methods. The methodology is general and can manage all known stochastic volatility models equally efficiently. Moreover, relatively simple extensions (to be described elsewhere) can also handle rough volatility models. With minimal changes, one can use the method to price popular double-no-touch options and other similar instruments.

      Posted in Quantitative Strategies, Volatility Modeling | 0 Comments
    • Paper on Automated Market Making for DeFi: arbitrage-free exchange between on-chain and traditional markets

      Posted at 2:37 pm by artursepp, on September 29, 2021

      I have been delighted to collaborate with Alexander Lipton on a paper where we develop a quantitative approach to arbitrage-free pricing between decentralized exchanges (DEX), which rely on Automated Market Making (AMM), and traditional exchanges, which rely on the order book. As a very relevant use case for developing central bank digital currencies (CBDC) on interoperable blockchains, we simulated our model using high-frequency FX data from a traditional exchange to validate our approach.

      This post is a short communication of the background and key results from our paper, which can be downloaded from SSRN: https://ssrn.com/abstract=3939695

      Automated Market Making

      Automated market making (AMM) for crypto assets has become one of the most interesting developments in the Decentralized Finance (DeFi) space.

      Vitalik Buterin, the founder of the Ethereum protocol, originally proposed AMM in 2016 as a concept for exchanging on-chain assets on decentralized exchanges which operate entirely on-chain. The purpose was to reduce the spreads and gas fees, which had been in excess of 10% at the time. The suggested solution was to create two-sided pools of different coins (for example, ETH vs BTC) and to fix the exchange rate relative to the pool depth (liquidity).

      This concept was formalized by the Uniswap protocol, which introduced the so-called constant function market maker (CFMM) using the product rule for marginal pricing of one token versus the other by means of smart contracts (SC).

      The AMM is an interesting concept, like a dark pool (in a good sense), where investors can place large orders and get immediate execution without revealing their intentions prior to their trades.

      In Figure 1, I show the relative pricing of a representative USDC-EUDC (US Dollar – Euro) pool (with initial parameters set by an EUR/USD rate of 1.25) using the three CFMM rules:

      1. Sum rule, which allows one to swap the full balance of one token into the other so that the change in the relative rate is constant.
      2. Product rule, which fixes the relative exchange rate inversely proportional to the pool balances. Outside of the equilibrium rate of 0.8 EUDC per 1.0 USDC, the relative rate of EUDC will decline or increase faster than the constant exchange rate.
      3. Mixed rule, with a parameter alpha, which blends the sum and the product rules.
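The product rule is the simplest to sketch in code. The function names below are mine, and the reserves follow the representative pool above:

```python
def cpmm_swap_out(x_reserve, y_reserve, dx, fee=0.0):
    """Constant-product rule: tokens Y received for swapping in dx of X.

    The invariant x * y = k is preserved, so the realized rate worsens
    (slippage) as the order grows relative to the pool depth.
    """
    k = x_reserve * y_reserve
    dx_net = dx * (1.0 - fee)  # transaction fee charged on the way in
    return y_reserve - k / (x_reserve + dx_net)

def cpmm_marginal_rate(x_reserve, y_reserve):
    """Marginal exchange rate for an infinitesimal order: y / x."""
    return y_reserve / x_reserve

# A pool with 1,000,000 USDC and 800,000 EUDC quotes a marginal rate of
# 0.8 EUDC per USDC, consistent with an EUR/USD rate of 1.25.
```

Swapping a non-infinitesimal order always realizes a rate worse than the marginal one, which is the slippage feature of the product rule.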

      Bid/Ask marginal rates

      Using the CFMM, we can derive the marginal exchange rates as functions of the ratio of the order size to the pool liquidity. This is a very convenient feature that enables us to explicitly assign an exchange rate to each order size.

      In Figure 2, I show the marginal AMM rates as functions of the CFMM specification. I use an EUR-USD FX spot of 1.25 and the equivalent USD-EUR spot of 0.8. We can then present a representative bid/ask book for trading both EUDC and USDC from the same USDC-EUDC pool.

      It follows that the sum rule produces no feedback from pool liquidity on the marginal exchange rate (zero slippage costs), while the product rule produces strong feedback from the pool liquidity (slippage costs proportional to the ratio of the traded order to the pool liquidity). By introducing the mixed rule with a parameter alpha between 0 (product rule) and infinity (sum rule), we can design a flexible CFMM.


      Pool arbitrage

      One of the most interesting challenges for the on-chain exchange of different CBDCs is how to avoid arbitrage opportunities between on-chain exchanges and traditional markets. We solve this problem by introducing a pool arbitrageur (either a pool operator or a designated market-maker) who solves an optimization problem to exploit arbitrage opportunities between the on-chain pool and traditional markets. Because of the pool arbitrageur, the pool bid/ask spreads for small orders are consistent with a traditional exchange.

      We apply our model to the simulation of hypothetical CBDC pools using actual high-frequency FX data. In Figure 3, I show the simulation of the USDC-EUDC pool using the intraday EUR-USD FX spot rate on 3rd June 2021. For convenience, I normalize the spot FX rate to 1.0 at the start of the trading session. I apply the constant product CFMM.

      In the first panel, I show the optimal pool balances that are determined by the pool arbitrageur to exclude arbitrage between the pool and the FX spot rate. In the second panel, I show the bid/ask spreads for trading 1bp of the pool liquidity. We see that the actual FX spot rate is sandwiched between the AMM bid/ask rates. The final panel shows the arbitrage profits.
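For the constant-product rule, the arbitrage-free balances have a simple closed form: aligning the marginal rate y/x with the external market rate S, while holding the invariant x·y = k, gives x = sqrt(k/S) and y = sqrt(kS). A sketch (function name is mine; fees and discrete order sizes are ignored):

```python
import math

def no_arb_reserves(k, external_rate):
    """Constant-product reserves whose marginal rate y / x matches the
    external market rate, holding the invariant x * y = k.

    Solving y / x = S and x * y = k gives x = sqrt(k / S), y = sqrt(k * S).
    """
    x = math.sqrt(k / external_rate)
    y = math.sqrt(k * external_rate)
    return x, y

# With k = 1e6 * 8e5 and an external rate of 0.8 EUDC/USDC, this
# recovers the balanced pool of 1,000,000 USDC and 800,000 EUDC.
```

As the external rate moves, the arbitrageur's trades push the reserves toward these values, which is why the pool's quotes stay sandwiched around the spot rate.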


      Application to G-10 currencies

      As a final validation, we also included the volumes for simulations of CBDC pools using actual FX buy and sell orders. Intraday volumes are normalized so that the pool daily turnover is 100% for each day in our sample of the last 3 years of FX data.

      In Figure 4, I show the boxplot of key variables from the simulation of the CBDC pools for G-10 currencies including the Chinese Yuan. I apply the mixed-rule CFMM with alpha equal to 5 and transaction fees of 1bp.

      In the first panel, I show the volume-weighted average bid-ask spread. The average spread is about 1.3 across all FX pairs, which is competitive with traditional FX markets. The second panel shows the annual P&L (daily P&L multiplied by 260). The last panel shows the hedged P&L, which is produced by hedging the spot exposure, or equivalently by allocating to the pool using borrowed CBDCs. It is clear that liquidity providers benefit from both pool fees and the convexity generated by the trading volumes.


      Summary

      Automated market making is one of the core elements for the on-chain exchange of digital assets. Of course, one of the most important questions is the arbitrage between on-chain and off-chain exchanges. Alexander Lipton and I have developed a quantitative approach in this direction.


      References

      Lipton, A. and Sepp, A., Automated Market-Making for Fiat Currencies (2021). Working Paper, available at SSRN: https://ssrn.com/abstract=3939695

      Posted in Crypto, Decentralized Finance, Uncategorized | 2 Comments
    • Tail risk of systematic investment strategies and risk-premia alpha

      Posted at 2:55 pm by artursepp, on April 9, 2019

      Everyone knows that the risk profile of systematic strategies can change considerably when equity markets turn down and volatilities spike. For example, the smooth profile of a short volatility delta-hedged strategy in normal regimes becomes highly volatile and correlated to equity markets in stressed regimes.

      Is there a way to systematically measure the tail risk of investment products including hedge funds and alternative risk premia strategies? Further, how do we measure the risk-premia compensation after attribution for tail risks? Finally, would we discover patterns in cross-sectional analysis of different hedge fund strategies?

      I have been working for years on a quantitative framework to analyse the questions raised above, and recently I wrote two articles on the topic:

      1. The regime-conditional regression model is introduced in The Hedge Fund Journal (online paper).
      2. A short review of the methodology and results is presented for QuantMinds.

      I would like to highlight the key results of the methodology so that interested readers can follow up with the original sources.

      Regime conditional index betas

      In the top figure, I show the regime-conditional betas for a selection of hedge fund styles from HFR indices data, using the S&P 500 index as the equity benchmark.

      We can classify the strategies into defensive and risk-seeking based on their return profile in bear market regimes:

      1. Defensive strategies (long volatility, short bias, trend-following CTAs) have negative equity betas in the bear regime, so these strategies serve as diversifiers of the equity downside risk.
      2. Risk-seeking strategies (short volatility, risk-parity) have positive and significant equity betas in the bear regime. Equity betas of most risk-seeking strategies are relatively small in normal and bull periods but increase significantly in bear regimes. I term these strategies risk-seeking risk-premia strategies.
      3. I term strategies with insignificant betas in both normal and bear regimes diversifying strategies. Examples include equity market neutral and discretionary macro strategies because, even though these strategies have positive betas to the downside, the beta profile does not change significantly between normal and bear regimes. As a result, the marginal increase in beta exposure between normal and bear periods is insignificant.

      Risk-premia alpha vs marginal bear beta

      I define the risk-premia alpha as the intercept of the regime-conditional regression model for strategy returns regressed on returns of the benchmark index. To show a strong relationship between the risk-premia alpha and the marginal bear beta (computed as the difference between the betas in bear and normal regimes), I apply a cross-sectional analysis of risk premia for the following sample of hedge fund indices and alternative risk premia (ARP) products, using quarterly returns from 2000 to 2018 against the S&P 500 total return index:

      1. HF: hedge fund indices from major index providers including HFR, SG, BarclayHedge, and Eurekahedge, with a total of 73 composite hedge fund indices excluding CTA indices;
      2. CTA: 7 CTA indices from the above providers and 15 CTA funds specializing in trend-following;
      3. Vol: 28 CBOE benchmark indices for option- and volatility-based strategies;
      4. ARP: ARP indices using HFR Bank Systematic Risk-premia Indices, with a total of 38 indices.
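The regression itself is straightforward to sketch: interacting the benchmark return with regime indicator dummies gives each regime its own beta, and the intercept is the risk-premia alpha. In this illustrative sketch the regimes are defined by return quantiles of the benchmark (the thresholds and function name are mine; the articles define the regimes more carefully):

```python
import numpy as np

def regime_conditional_regression(strategy, benchmark, q=0.16):
    """OLS of strategy returns on benchmark returns with separate betas
    in bear / normal / bull regimes.

    Regimes are defined here by the q and 1-q return quantiles of the
    benchmark (illustrative choice). Returns the tuple
    (alpha, beta_bear, beta_normal, beta_bull), where alpha is the
    risk-premia alpha: the intercept after regime-conditional betas.
    """
    strategy = np.asarray(strategy, float)
    benchmark = np.asarray(benchmark, float)
    lo, hi = np.quantile(benchmark, [q, 1.0 - q])
    bear = benchmark <= lo
    bull = benchmark >= hi
    normal = ~bear & ~bull
    design = np.column_stack([
        np.ones_like(benchmark),   # intercept = risk-premia alpha
        benchmark * bear,          # bear-regime beta
        benchmark * normal,        # normal-regime beta
        benchmark * bull,          # bull-regime beta
    ])
    coef, *_ = np.linalg.lstsq(design, strategy, rcond=None)
    return tuple(coef)
```

The marginal bear beta is then the difference beta_bear − beta_normal, i.e. the extra equity exposure a strategy takes on when markets move from the normal to the bear regime.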

      In the figure below, I plot risk-premia alphas against marginal bear betas grouped by strategy styles. For defensive strategies, the marginal bear betas are negative; for risk-seeking strategies, the marginal bear betas are positive and statistically significant.


      We see the following interesting conclusions.

      1. For volatility strategies, the cross-sectional regression has the strongest explanatory power of 90%. Because a rational investor should require higher compensation for taking on equity tail risk, we observe a clear linear relationship between the marginal tail risk and the risk-premia alpha. Defensive volatility strategies that buy downside protection have negative marginal betas at the expense of negative risk-premia alpha.
      2. For alternative risk premia products, the dispersion is higher (most of these indices originate from 2007), yet we still observe the pattern distinguishing defensive and risk-seeking risk-premia strategies, with negative and positive risk-premia alpha, respectively.
      3. For hedge fund indices, the dispersion of their marginal bear betas is smaller. As a result, most hedge funds serve as diversifiers of the equity risk in normal and bear periods; typical hedge fund strategies are not designed to diversify the equity tail risk.
      4. All CTA funds and indices have negative bear betas with insignificant risk-premia alpha. Even though their risk-premia alpha is negative and somewhat proportional to the marginal bear beta, it is not statistically significant. In this sense, CTAs represent defensive active strategies. The contributors to the slightly negative risk-premia alpha may include transaction costs and management fees.


      References

      Sepp A., Dezeraud L., (2019), “Trend-Following CTAs vs Alternative Risk-Premia: Crisis beta vs risk-premia alpha”, The Hedge Fund Journal, Issue 138, page 20-31, https://thehedgefundjournal.com/trend-following-ctas-vs-alternative-risk-premia/

      Sepp, A. (2019), “The convexity profile of systematic strategies and diversification benefits of trend-following strategies”, QuantMinds, April 2019

      Posted in Asset Allocation, Quantitative Strategies, Trend-following, Uncategorized, Volatility Modeling | 1 Comment
    • Trend-Following CTAs vs Alternative Risk-Premia (ARP) products: crisis beta vs risk-premia alpha

      Posted at 3:00 pm by artursepp, on February 5, 2019

      The year 2018 was an eye-opener for investors in alternative risk-premia products. A lot of these products had been sold as market-neutral, but they did not live up to expectations… I think the reason is simple: most ARP products have been driven by marketing, with nice-looking back-tested results obtained by over-fitted models. I made a presentation on this topic back in early November 2018. Yet traditional alternatives had a bad year too.

      We published an article in The Hedge Fund Journal to explain the difference between traditional trend-following CTAs and alternative risk premia. Here I post the introduction and the key insight from our model used to define the risk-premia alpha.


      Introduction

      The turbulence of 2018 made it a difficult year for most systematic investment products. To the surprise of several investors, many of these quant products had been sold as market neutral. In particular, the new breed of alternative risk-premia (ARP) products – that had flooded the market a few years prior to 2018 – performed exceptionally badly. For example, the composite HFR Bank Systematic Risk-premia Multi-Asset Index lost -18%, in comparison with a loss of -4% on the S&P 500 total return index. However, traditional alternative asset classes also underperformed, with the flagship HFRX Global Hedge Fund Index losing -7% and the SG Trend Index losing -8%.

      In the face of such losses, both investors and managers are asking how and why so many quant strategies underperformed. Still more importantly, what are the implications for the diversification of traditional equity-bond portfolios and alternative investments? In particular, since trend-following CTAs belong to a handful of tried-and-tested diversifiers, why did trend-followers not diversify in 2018?

      To address such questions, we first intend to look at how trend-following programs are expected to perform when crises last for extended periods of at least two months, because trend-followers need to adjust to profit from sustained crises in equity markets. Second, we shall focus on the way in which the risk profile of ARP products, hedge funds, and trend-following CTAs can change in bear and bull market regimes because of their potential exposures to tail-risk. We analyse the risk-premia alpha in these products by taking into account regime-conditional risk.

      For this analysis, we are proposing a new quantitative model to explain the risk of investment strategies by accounting for extreme market conditions and for their exposure to tail risk, such as selling volatility and credit protection. We apply this model to the cross-sectional risk attribution of about 200 composite indices of hedge funds and ARP products. We show that there is a strong linear relationship between risk-premia alpha and the tail risk of systematic ARP strategies. We can demonstrate that our model explains nearly 90% of the risk-premia for volatility strategies and about 35% of the risk-premia for hedge fund and ARP products. In this way, most ARP and hedge fund type products can be seen as risk-seeking strategies. Importantly, our model predicts that ARP products offer smaller risk-premia compensation compared to hedge funds.

      We are able to illustrate that, interestingly, trend-following CTAs are exceptions since they belong to defensive strategies with negative market betas in bear regimes, yet risk-premia alphas for CTAs are insignificant. CTAs cannot be seen either as ARP products with positive risk-premia alpha from exposures to tail risk, or as defensive products with negative risk-premia designed to reduce tail risk, such as long volatility strategies. Instead, trend-following CTAs should be viewed as an actively managed defensive strategy with the goal to deliver protective negative market betas in strongly downside markets along with risk-seeking positive market betas in strongly upside markets. Overall, after adjusting for the downside and upside betas, the risk-premia alpha of CTAs is insignificant. Yet, because of the negative protective betas in bear markets, trend-followers well deserve their place as diversifiers in alternative portfolios to improve risk-adjusted performance and capture risk-premia alpha on a portfolio level, as we will show in the last section.

      Finally, since our risk-attribution model assumes conditional equity betas in specific market regimes, we are able to illustrate the misunderstanding behind strategies claiming to be “zero-correlated” and “market-neutral”. Given a specific market regime, most typically in the bear regime, many risk-premia strategies tend to produce a strong exposure to equity markets because of their hidden tail exposures. For example, a strategy selling delta-hedged put options would have a small market beta during normal regime; yet the strategy would exhibit a significant market beta during crisis periods because of its negative gamma and vega exposures. When we analyse systematic strategies unconditional to market regimes, the performance may appear to be smooth and uncorrelated because of the aggregation across different regimes.

      We will conclude the introductory section and our article by answering the above questions in the following way. First, ARP strategies are expected to perform well during normal regimes. However, since the excess performance of these strategies is derived from a hidden tail risk, these strategies are expected to underperform during turbulent markets, as in 2018. To earn risk-adjusted alpha from these products, investors need to look at long time horizons that include both bull and bear markets. Second, while the performance of trend-following CTAs is not derived from risk-premia alpha as compensation for hidden tail risks, the performance of trend-followers is conditional on trends lasting for sustained periods. Since trends reversed rapidly multiple times during 2018, trend-followers underperformed. As a result, in what proved to be an extraordinary year, both ARP products and trend-followers underperformed, but for different reasons.

      Going forward, investors and allocators need to understand how different strategies are expected to perform during bear and normal markets and how to diversify their portfolios accordingly. Our results provide a valuable aid in quantifying the hidden tail behaviour of systematic strategies as well as suggesting an approach for the risk attribution and diversification of alternative portfolios.

      Risk-premia Alpha

      Risk-premia alpha measures the excess return on a strategy after adjusting for conditional beta exposures. According to the regime-conditional CAPM, a strategy should produce higher risk-premia alpha if it assumes higher equity risk in a bear market, as measured by the marginal bear market beta.

      The figure at the top illustrates the different risk profiles of hedge fund and ARP products. We apply the regime-conditional model to a large universe of indices grouped into three categories:

      1. Hedge fund indices from major index providers including HFR, SG, BarclayHedge, and Eurekahedge, with a total of 73 composite hedge fund indices excluding CTA indices;
      2. 7 CTA indices from the above providers; and
      3. ARP indices using HFR Bank Systematic Risk-premia Indices, with a total of 38 indices.

      According to our model, we see a clear differentiation among risk-seeking strategies, defensive strategies, and trend-following CTAs.

      Risk-seeking strategies: the marginal bear beta is positive (increased risk in the bear regime), compensated by positive risk-premia alpha. Most hedge fund and ARP products are risk-seeking strategies with tail risk. We observe an almost linear relationship between risk-premia alpha and marginal bear betas across the cross-section of hedge fund and ARP indices. ARP products deliver less risk-premia alpha for the same level of tail risk than hedge funds.

      Defensive strategies: the marginal bear beta is negative (reduced risk in the bear regime), compensated by negative risk-premia alpha. Defensive strategies diversify equity risk in bear regimes but deliver negative risk-premia alpha.

      Trend-following CTAs: produce negative marginal bear-market betas and hence strongly diversify equity risk in bear-market regimes. Because their risk-premia alpha is flat, trend-followers can be considered an anomaly, differentiating them from ARP and traditional hedge fund products.

      Posted in Uncategorized | 1 Comment
    • My talk on Machine Learning in Finance: why Alternative Risk Premia (ARP) products failed

      Posted at 2:56 pm by artursepp, on November 27, 2018

      I have recently attended and presented at the Swissquote Conference on Machine Learning in Finance. With over 250 participants, the event was a great success, offering the chance to hear from industry leaders and to see the recent developments in the field.

      The conference featured very interesting talks, ranging from applications of natural language processing (NLP) for industry classification to systematic trading of structured products using deep learning. For those interested, the slides and videos are available on the conference page.

      I would like to introduce my talk presented at the conference on applications of machine learning for quantitative strategies (the video of my talk is available here).

      In my talk, I address the limitations of applying machine learning (ML) methods to quantitative trading given the limited sample sizes of financial data. I illustrate the concept of probably approximately correct (PAC) learning, which serves as a foundation for the complexity analysis of machine learning.

      In particular, PAC learning establishes model-free bounds on the sample size needed to estimate a parametric function from sample data at a specified level of approximation and estimation error. I recommend the excellent textbooks An Elementary Introduction to Statistical Learning Theory and The Nature of Statistical Learning Theory for studying PAC learning further.

      I also present an example of using supervised learning for the selection of volatility models for systematic trading from my earlier presentation.

      Finally, I touch on the important topic of the risk profile of quantitative investment strategies and, in particular, Alternative Risk Premia (ARP) products. For the past few years, since about 2015, the sell side has been marketing a plethora of ARP products as “cheap” substitutes for hedge fund strategies. However, ARP products fared miserably throughout 2018, despite the fact that most of these products were marketed as market-neutral. I want to share my view on why ARP products failed…

      The typical creation process of ARP products is as follows. First, a research team runs multiple back-tests of “academic” risk factors (value, carry, momentum, etc.) across many markets until a specific parametrization of their strategy produces a satisfactory Sharpe ratio (around 1.0 or so). Once the performance target is achieved in the back-test, the research team, along with a marketing team, writes a research paper with an economic justification of the strategy. The marketing team then pitches the strategy to institutional clients and, if successful, raises money for it. Finally, the successful strategy (out of dozens attempted) reaches the execution team, who implement it in a trading system and execute it on behalf of clients.

      The creation of ARP products serves as a prime example of why we need to understand the limitations of statistical learning given the limited sample sizes of financial data. There is also an incentive to fit a rich model to the limited sample in order to optimize the in-sample performance. For example, using PAC learning, to estimate a model with 10 parameters at an approximation error within 10%, we need about 2,500 daily observations!
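      To illustrate the flavour of such a bound, here is a hedged sketch using one common agnostic-PAC sample-complexity formula, m ≥ (d + ln(1/δ)) / (2ε²), where d proxies model complexity; the constants differ across bounds in the literature, so treat this as an order-of-magnitude illustration rather than the exact calculation behind the figure quoted above:

```python
import math

def pac_sample_bound(n_params, eps, delta=0.05):
    # One common agnostic-PAC sample-size bound (assumption: model
    # complexity proxied by the number of parameters d = n_params):
    #   m >= (d + ln(1/delta)) / (2 * eps**2)
    # Exact constants vary across bounds; order of magnitude only.
    return math.ceil((n_params + math.log(1.0 / delta)) / (2.0 * eps ** 2))

print(pac_sample_bound(10, 0.1))   # 10 parameters, 10% error
print(pac_sample_bound(10, 0.05))  # 10 parameters, 5% error
```

      Under this particular bound, 10 parameters at a 5% error require roughly 2,600 observations, the same order of magnitude as the figure above; halving the tolerated error quadruples the required sample.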

      It is no coincidence that ARP products suffered a major blow once market conditions changed. As we speak, post October 2018, quants are facing a crisis of confidence.

      In hindsight, 2018 brought the failure of two very popular strategies:

      1) The short volatility ETNs: the figure at the top of the post illustrates how a naive 5-parameter regression fits the in-sample data of the past two years with 98% accuracy, yet the fitted model failed miserably in February 2018 (I posted a detailed statistical analysis of the crash).

      2) The alternative risk-premia products: the figure below shows the risk profile of the Bank Systematic Risk Premia Multi-Asset Index compiled by Hedge Fund Research.

      In the figure below, I use the quarterly returns on the S&P 500 index as the predictor, which I partition into three regimes: bear (16% of the sample), normal (68%), and bull (16%). I then consider the quarterly returns on the HFR index conditional on these regimes and illustrate the corresponding regression of HFR index returns predicted by returns on the S&P 500 index.
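      A minimal sketch of this conditioning in Python (assuming numpy; the variable names and quantile cutoffs are illustrative). On synthetic noise-free data, the per-regime regressions recover exactly a put-like downside beta and a call-like upside beta:

```python
import numpy as np

def conditional_betas(spx, hfr, q=0.16):
    # Partition benchmark quarters into bear / normal / bull regimes by
    # the q and 1-q sample quantiles (matching the 16% / 68% / 16% split
    # above), then estimate a separate OLS beta within each regime.
    spx, hfr = np.asarray(spx, float), np.asarray(hfr, float)
    lo, hi = np.quantile(spx, [q, 1.0 - q])
    regimes = {"bear": spx <= lo,
               "normal": (spx > lo) & (spx < hi),
               "bull": spx >= hi}
    betas = {}
    for name, mask in regimes.items():
        x, y = spx[mask], hfr[mask]
        X = np.column_stack([np.ones_like(x), x])  # intercept + slope
        (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
        betas[name] = beta
    return betas
```

      A strategy with a bear beta of 3 and a bull beta of 5, for instance, behaves like a portfolio that is short 3 puts and long 5 calls on the benchmark.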

      It is clear that the HFR index effectively sells 3 puts to buy 5 calls to obtain leveraged exposure to the S&P 500 index. Over the past decade, these models learned to leverage the upside at the cost of selling the downside.

      [Figure: risk profile of the HFR Bank Systematic Risk Premia Multi-Asset Index]

      The key message of my talk is that we can avoid the traps of applying machine and statistical learning methods to systematic trading strategies by understanding the theoretical foundations of ML methods and the limitations of estimating these models from limited sample sizes.

       

      Disclaimer

      All statements in this presentation are the author's personal views. The information and opinions contained herein have been compiled or arrived at in good faith based upon information obtained from sources believed to be reliable. However, such information has not been independently verified and no guarantee, representation or warranty, express or implied, is made as to its accuracy, completeness or correctness. Investments in Alternative Investment Strategies are suitable only for sophisticated investors who fully understand and are willing to assume the risks involved. Alternative Investments by their nature involve a substantial degree of risk and performance may be volatile.

       

       

      Posted in Quantitative Strategies, Uncategorized, Volatility Modeling, Volatility Trading | 2 Comments
    • Why Python for quantitative trading?

      Posted at 12:41 pm by artursepp, on October 24, 2018

      “Today a new language is overtaking French as the most popular language taught in primary school. Its name is Python… 6 out of 10 parents want their kids to learn Python”, Joel Clark.

      Well, when I attended school, I learnt BASIC… But I must confess, I do share the excitement surrounding the Python language.

      I have recently taken part in a webinar organised by Risk.net and Fincad where we discussed the advantages and challenges in using Python for developing quantitative trading applications. The panel included experts from various corners of the industry including myself and:

      1. Joel Clark, contributing editor, Risk.net (Moderator)
      2. Gary Collier, CTO, Man Group Alpha Technology
      3. Per Eriksson, senior executive, enterprise risk and valuation solutions, FINCAD
      4. Ronnie Shah, head of US quantitative research and quantitative investment solutions, Deutsche Bank

      The webinar was a success with over 500 participants. Since Python is on everyone’s mind, I wanted to highlight some interesting questions and thoughts from our discussion. The audio of the webinar is available here.

       

      Why has Python become an increasingly popular programming language in financial markets?

      One of the major advantages of using Python is the ease to interconnect different systems with data feeds and databases, to process data, and to output results into user and trading applications.

      My first experience with Python came in 2012, when Bank of America Merrill Lynch, where I worked as a front office quant strategist, introduced the Quartz system developed in Python. Quartz was supposed to be the bank-wide solution for sharing data and trading risks. The motivation was that insufficient centralization and aggregation of positions and risks across all trading books (traditionally separated by geography and asset class) was one of the key weaknesses shared by large investment banks during and in the aftermath of the 2008 financial crisis. As a result, Quartz and its Python-based analytics were conceived as a bridge connecting different parts of analytics, data centres, and development teams. A daunting task for any large organization employing hundreds of developers and users!

      Fast forward to today: Python has been widely applied by major financial institutions for developing tools to connect different parts of analytics and to increase collaboration within a firm. Over time, people have also started to do more core development in Python in addition to using Python as a glue language.

      New development in the Python language has been accelerated by the rich Python ecosystem, with a huge number of libraries for data analytics and visualization. For example, Man AHL illustrated how they benefited from moving both research and production code to Python.

      Summarising their paper and our panel, Python has become increasingly popular because:

      1. Python enhances the communication between different teams.
      2. Python provides an advanced ecosystem with packages for numerical and statistical analysis, data handling and visualization.
      3. Python is easy to learn, flexible to apply, and actually fun to program in. As someone with many years of quantitative modelling in C++ and Matlab, I fully support this view.

       

      How does Python compare with other languages for data analysis?

      Since data analytics is currently one of the key drivers across all industries, including finance and investment management, choosing the right ecosystem for development may have a crucial impact on business development and success.

      Presently, three development tools are widely applied for data analytics.

      1. Python, along with pandas for tabular data structures and multiple packages for data analysis (statsmodels for statistical analysis, matplotlib for data visualization, scikit-learn for machine learning, etc.). The advantage is that Python provides a free and open-source solution with plentiful resources for data fetching, processing, and visualization. Python can be easily deployed on either a PC or a server to build scalable firm-wide solutions.
      2. Traditionally, Matlab has been widely used in academic and research labs, but it comes with a heavy cost for commercial firms. Matlab has numerous packages for data processing, analysis, and visualization; however, each package is sold at a separate price. Personally, I have used Matlab extensively, along with its capabilities for object-oriented programming. While I value some of Matlab's capabilities, its major drawback, apart from licensing cost, is that the deployment of Matlab-based analytics is problematic and incurs separate fees. Matlab applications can be compiled and deployed on a server, but the deployment process is complex, not well documented, and may be costly if external consultancy is needed. In my opinion, the insufficient portability and scalability are major obstacles to developing firm-wide solutions in Matlab.
      3. R, along with its multiple packages for statistical data analysis. While R is free and has many packages for various statistical analyses, deploying R across a firm-wide platform may not be as efficient. In my opinion, the R language is suitable mainly for developing stand-alone tools for statistical analysis. In fact, JupyterLab makes it possible to use R functionality within the Python ecosystem.
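      As a tiny self-contained illustration of the pandas-based workflow described in point 1 (all tickers and numbers are synthetic, for illustration only):

```python
import numpy as np
import pandas as pd

# Build a small return table, compute per-asset summary statistics,
# and a rolling correlation -- the kind of glue-plus-analytics task
# where the Python ecosystem shines.
rng = np.random.default_rng(42)
dates = pd.bdate_range("2018-01-01", periods=250)
returns = pd.DataFrame(rng.normal(0.0, 0.01, size=(250, 2)),
                       index=dates, columns=["SPX", "NDX"])
stats = returns.agg(["mean", "std"])                      # summary table
rolling_corr = returns["SPX"].rolling(60).corr(returns["NDX"])
print(stats.round(4))
print(rolling_corr.dropna().tail(3).round(3))
```

      The same few lines would run unchanged on a PC or a server, which is precisely the portability argument made above.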

       

      How long would it take to convert Matlab production code to Python?

      Given the advantages of Python over Matlab, most firms would now employ Python to start any new development from scratch. What about converting legacy code and systems?

      Gary Collier gave one example of AHL converting a fairly complicated trading system for single stock equities to Python within 8-9 months.

      In fact, my friend Saeed Amen has just written a short overview paper on moving from Matlab to Python. The transition is feasible: while there will be short-term costs, the long-term benefit is a firm-wide solution developed in one multi-purpose language that everyone can understand and contribute to.

       

      Python everywhere?

      To conclude, the top figure shows the share of questions about various programming languages asked each month on Stack Overflow, the largest online community for developers. We clearly see the growing trend for Python against all other major programming languages. Perhaps soon enough Python will overtake all other languages, not only those taught in primary school but those employed everywhere else…

      Posted in Python, Uncategorized | 1 Comment
    • Machine Learning for Volatility Trading

      Posted at 6:33 am by artursepp, on May 29, 2018

      Recently I have been working on applying machine learning for volatility forecasting and trading. I presented some of my findings at QuantMinds Conference 2018 which I wanted to share in this post.

      My presentation is available on SSRN, with the video of the talk on YouTube.

      Continue reading →

      Posted in Asset Allocation, Quantitative Strategies, Uncategorized, Volatility Modeling, Volatility Trading | 3 Comments
    • Trend-following strategies for tail-risk hedging and alpha generation

      Posted at 11:39 am by artursepp, on April 24, 2018

      Because of the adaptive nature of their position sizing, trend-following strategies can generate positive skewness of returns, with infrequent large gains compensating for frequent small losses. Further, trend-followers can produce positive convexity of their returns with respect to stock market indices, with large gains realized during either very bearish or very bullish markets. The positive convexity, along with the overall positive performance, makes trend-following strategies viable diversifiers and alpha generators for both long-only portfolios and alternative investments.

      I provide a practical analysis of how the skewness and convexity profiles of trend-followers depend on the trend smoothing parameter, differentiating between slow-paced and fast-paced trend-followers. I show how the frequency of returns measurement affects the realized convexity of trend-followers. Finally, I discuss an interesting connection between trend-following and stock momentum strategies and illustrate the benefits of an allocation to trend-followers within an alternatives portfolio.
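      The two risk-profile metrics can be sketched in a few lines of Python (assuming numpy; the quadratic-regression definition of convexity is one common estimator, not necessarily the paper's exact specification):

```python
import numpy as np

def skew_and_convexity(strategy, benchmark):
    # Two risk-profile metrics (illustrative estimators):
    # - realized skewness of strategy returns;
    # - convexity: the quadratic coefficient c in the regression
    #   strategy ~ a + b * benchmark + c * benchmark**2.
    # Positive c => gains in both strongly bearish and bullish markets.
    y = np.asarray(strategy, float)
    x = np.asarray(benchmark, float)
    skew = np.mean((y - y.mean()) ** 3) / np.std(y) ** 3
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    (_, _, convexity), *_ = np.linalg.lstsq(X, y, rcond=None)
    return skew, convexity
```

      Applied to monthly or quarterly returns of a trend-following index against an equity benchmark, positive values of both metrics indicate the straddle-like payoff profile described above.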

      Interested readers can download the pdf of my paper on SSRN.

      Key takeaway

      1. Risk-profile of quant strategies

      The skewness and convexity of strategy returns with respect to the benchmark are the key metrics for assessing the risk profile of quant strategies. Strategies with significant positive skewness and convexity are expected to generate large gains during market stress periods; as a result, convex strategies can serve as robust diversifiers. Using benchmark Eurekahedge indices on major hedge fund strategies, I show the following.

        • While long volatility hedge funds produce positive skewness, they do not produce positive convexity.
        • Tail risk hedge funds can generate significant skewness and convexity, however at the expense of strongly negative overall performance.
        • Trend-following CTAs can produce significant positive convexity similar to tail risk funds, and yet trend-followers can deliver positive overall performance and alpha over long horizons.
        • At the other end of the spectrum, short volatility funds exhibit significant negative convexity in tail events.

      [Figure: convexity of hedge fund strategy returns with respect to the benchmark]

      [Figure: skewness of hedge fund strategy returns]

      Continue reading →

      Posted in Asset Allocation, Quantitative Strategies, Trend-following, Uncategorized | 1 Comment
 
