So I decided to revisit this to see for myself, since AQR is a big proponent of volatility targeting and they're very empirically inclined. Here's what I did. I want to see how it compares to what you've done:

Uncorrelated wrote: ↑Sun Aug 30, 2020 4:33 pm
I'm aware of volatility clustering. According to my tests, using either past volatility or VIX as an estimate of future volatility doesn't work well enough to beat the market. Only with a high-quality volatility forecast (better than VIX), Merton's portfolio theory, and a trading strategy that limits the turnover rate was I able to obtain higher utility than a constant allocation (semi out-of-sample).

langlands wrote: ↑Sun Aug 30, 2020 2:55 pm
You don't have to use recent volatility as a measure of future volatility. First of all, because of volatility clustering, it's not that bad an estimate and is the baseline in any predictive model. Second of all, you can just use the VIX or any other measure of implied volatility. I have no idea why you would say that it's absurd to claim 1/σ is a compromise between 1 and 1/σ^2. I mean, suppose your prediction of future stock volatility doubles while your return expectations haven't changed. Merton tells you to decrease your stock allocation by a factor of 4; you decrease it by a factor of 2. That's absurd? It's simply a fact that applying TV will lead to an allocation in between constant and Merton's.

I've noticed a rather dogmatic tendency in your way of thinking, where you feel the need to put every problem in a precise framework and then find the optimal solution within that framework. Coming from a math background, I can of course sympathize. However, finance is quite a bit messier than math or physics, and there is always the danger of model error lurking in the background. When it comes to these murkier topics, I find it much more illuminating to get the "gist" of what each model is telling me rather than actually believing the numerical answer it spits out. So, for example, the idea that when volatility increases, it makes sense to decrease the allocation to that asset.
1) I used VXO and VIX end-of-month closing prices going back to 1986. For US TSM returns, I'm using FF data.
2) The strategy is brutally simple. The point is to target ~20% volatility, since that was the volatility of US stocks from 1926 to 1986. The strategy uses the previous month's VIX and either invests the remainder in T-Bills or leverages in order to target 20% volatility. So if the VIX was 40, it would invest 50/50 stocks/T-Bills. If the VIX was 15, it would invest 133/-33 stocks/T-Bills. I give it a 1% penalty for any short T-Bills (borrowing friction).
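The allocation rule above can be sketched in a few lines of Python. To be clear, this is my own minimal reconstruction, not the actual backtest code; the function names and the way I spread the 1%/yr borrowing penalty over months are assumptions:

```python
import numpy as np

def vol_target_weights(prev_vix, target_vol=20.0):
    """Stock weight sized off last month's VIX to target `target_vol`
    annualized volatility; the remainder goes to T-Bills (a negative
    T-Bill weight means borrowing)."""
    prev_vix = np.asarray(prev_vix, dtype=float)
    w_stock = target_vol / prev_vix      # VIX 40 -> 0.50, VIX 15 -> 1.33
    w_tbill = 1.0 - w_stock
    return w_stock, w_tbill

def monthly_return(stock_ret, tbill_ret, w_stock, w_tbill, borrow_penalty=0.01):
    """One month's strategy return. Borrowed T-Bills (w_tbill < 0) pay the
    T-Bill rate plus a 1%/yr friction, applied here as penalty/12 per month."""
    friction = np.where(w_tbill < 0.0, -w_tbill * borrow_penalty / 12.0, 0.0)
    return w_stock * stock_ret + w_tbill * tbill_ret - friction

w_s, w_t = vol_target_weights([40.0, 15.0])
# VIX 40 -> 50/50 stocks/T-Bills; VIX 15 -> 133/-33 stocks/T-Bills
```

Run month by month over the VXO/VIX series, this reproduces the rule described above (rebalancing once per month at the prior month's close).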
I get a CAGR of 11.57% and St Dev. (monthly, annualized) of 14.81%, for a Sharpe of 0.61.
US TSM, OTOH, got a CAGR of 10.87%, a St Dev of 15.39%, and a Sharpe of 0.547.
I didn't even rescale VIX to account for the volatility premium, since realistically, people back in 1986 wouldn't have known how to do that. But I found that even if I did, the results barely change (the Sharpe basically stays at 0.61). The above doesn't include transaction costs, though I have reason to believe those don't matter a whole lot either.
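For reference, one rough way to do that rescaling is to shrink VIX by the historical average ratio of realized to implied volatility. This is my own illustrative sketch, and it uses a full-sample ratio, which, as noted, an investor in 1986 couldn't have known:

```python
import numpy as np

def rescale_vix(vix, realized_vol):
    """Shrink VIX by the average realized/implied ratio, a rough correction
    for the volatility risk premium (VIX typically overstates subsequent
    realized vol). Full-sample average used here purely for illustration."""
    vix = np.asarray(vix, dtype=float)
    realized_vol = np.asarray(realized_vol, dtype=float)
    scale = np.mean(realized_vol / vix)
    return vix * scale

adj = rescale_vix([20.0, 25.0], [16.0, 20.0])  # ratio 0.8 -> [16.0, 20.0]
```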
Is this roughly consistent with what you've found?
EDIT: Once I look at the Sharpe on an annual basis, it's basically neck and neck with US TSM. In other words, (avg monthly return - avg monthly T-Bill return) / (annualized monthly St Dev) is better for volatility targeting (0.611 vs 0.547), but (avg annual return - avg annual T-Bill return) / (annual St Dev) is basically the same (0.537 vs 0.511). Transaction costs would surely eat that tiny advantage, and it just doesn't seem big enough to matter anyway.
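The two Sharpe ratios in the edit differ only in the horizon over which returns are aggregated before dividing. A sketch of both calculations, with simulated returns standing in for the actual series (the numbers here are made up, not the backtest results):

```python
import numpy as np

def sharpe_monthly(ret, tbill):
    """Annualized Sharpe from monthly data: 12 * mean excess return
    over sqrt(12) * monthly std."""
    ex = np.asarray(ret) - np.asarray(tbill)
    return ex.mean() * 12.0 / (ex.std(ddof=1) * np.sqrt(12.0))

def sharpe_annual(ret, tbill):
    """Sharpe from calendar-year returns; compounding and any
    autocorrelation in monthly returns make this differ from the
    annualized monthly figure."""
    ex = np.asarray(ret) - np.asarray(tbill)
    return ex.mean() / ex.std(ddof=1)

rng = np.random.default_rng(0)
r_m = rng.normal(0.009, 0.043, 360)        # 30 years of monthly returns
rf_m = np.full(360, 0.002)
s_m = sharpe_monthly(r_m, rf_m)

r_y = (1.0 + r_m.reshape(30, 12)).prod(axis=1) - 1.0   # compound to years
rf_y = np.full(30, 1.002 ** 12 - 1.0)
s_y = sharpe_annual(r_y, rf_y)
# the two figures generally differ even on the same underlying series
```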
So I guess my conclusion actually is similar to yours. Not sure there's much excitement to be had here. Not that I volatility target anyway; it's just an interesting topic is all.