sean.mcgrath wrote:
AlohaJoe wrote:
I don't think standard deviation is a useful metric for non-normal returns that exhibit kurtosis and skew and when upside volatility isn't a bad thing.

Hi Joe, no offense, as your posts have been among the most useful for me to read here, but isn't that a bit of a cop-out? I mean, there's enough data: it's pretty easy to assume a different distribution with whatever kurtosis and skew you'd like and calculate the probabilities.
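A minimal sketch of what I mean, assuming a skew-normal as the "different distribution" (any fat-tailed family would do). The return series here is synthetic, purely for illustration, not real market data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Stand-in for a real annual-return series (synthetic, negatively skewed).
returns = stats.skewnorm.rvs(a=-4, loc=0.12, scale=0.20, size=2000,
                             random_state=rng)

# Fit both a normal and a skew-normal to the same data.
mu, sigma = stats.norm.fit(returns)
a, loc, scale = stats.skewnorm.fit(returns)

# Probability of losing more than 20% in a year under each model.
p_normal = stats.norm.cdf(-0.20, mu, sigma)
p_skew = stats.skewnorm.cdf(-0.20, a, loc, scale)
print(f"P(return < -20%): normal={p_normal:.3%}, skew-normal={p_skew:.3%}")
```

The point is just that once you've fit the distribution, tail probabilities fall out directly; you're not stuck with standard deviation.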

Here are two different distributions:

[image: chart comparing two return distributions]

Even though the red line has a lower standard deviation, clearly no one would ever pick it. Obviously that's a contrived example but we can also see something similar with more realistic data.

Between 1950 and 1980, Japanese equity real returns had a standard deviation of around 35%, which is huge even compared to US equities (around 20%). But 20% of that standard deviation (i.e. 7 percentage points) came from a single year, 1950, when equities returned 134%. No one would consider that a bad outcome, yet it increased "risk" if risk means standard deviation.
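A toy sketch of that outlier effect, with made-up numbers rather than the actual Japanese series:

```python
import numpy as np

rng = np.random.default_rng(0)
# 30 made-up annual real returns centered near 9% with ~25% dispersion.
returns = rng.normal(0.09, 0.25, size=30)
returns[0] = 1.34  # one +134% year, like Japan's 1950 in the text above

sd_with = returns.std(ddof=1)
sd_without = returns[1:].std(ddof=1)
print(f"std dev with the outlier year: {sd_with:.1%}")
print(f"std dev without it:           {sd_without:.1%}")
```

One spectacular upside year inflates the measured "risk" of the whole series, even though nobody who lived through it would call it risky.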

The problem is exacerbated when you consider that standard deviation is reported annually, but few (no?) human beings experience returns that way. We all check our portfolios more than once a year, and we don't do it only in January.

If you only check your portfolio every 7 years then a 100% equity portfolio will have the same standard deviation as a 100% bond portfolio that you check every year. Surely how often you check shouldn't affect the risk of your portfolio?

Similarly, if you check your portfolio once a week (which is probably closer to what most people do, at least on Bogleheads!) then a 100% bond portfolio has the same "risk" (19% standard deviation) as a 100% equity portfolio checked every year.
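One back-of-the-envelope way to see the frequency effect: for roughly i.i.d. returns, the dispersion of the returns you actually observe scales with the square root of the interval between looks. This simplified sketch doesn't reproduce the exact figures above (the annual numbers are assumptions I picked, not calibrated to real assets), but it shows the mechanism:

```python
import math

annual_sd_equity = 0.20  # assumed annual std dev for equities
annual_sd_bond = 0.075   # assumed annual std dev for bonds

for label, sd in [("equity", annual_sd_equity), ("bond", annual_sd_bond)]:
    weekly = sd / math.sqrt(52)      # dispersion per weekly observation
    seven_year = sd * math.sqrt(7)   # dispersion per 7-year observation
    print(f"{label}: weekly {weekly:.1%}, annual {sd:.1%}, "
          f"7-year {seven_year:.1%}")
```

Same portfolio, very different "volatility experience" depending purely on how often you look.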

You can try to account for the things I mentioned, but then you get pretty far from anyone having an intuitive grasp of what's going on. It's already hard enough with standard deviation: how much extra return makes extra standard deviation "worth" it?

Here are two distributions that have the same standard deviation and the same average return. Which one is "better"? What does that even mean when we've made the two usual differentiators identical?

sean.mcgrath wrote:
The upside volatility I do account for: it's a normal distribution, so I divide by two.

But that's not how it works, especially for skewed distributions, and it's one reason I increasingly dislike standard deviation as a metric.

sean.mcgrath wrote:
There does seem to be quite a bit of "the past can't predict the future, so why model it" in the replies.

I agree that people are often selectively dismissive about past returns. Disliking standard deviation doesn't mean there aren't other measures out there that I like better. I mentioned one: the length and breadth of drawdowns. Semideviation is an improvement on standard deviation, especially if paired with some goal (e.g. semideviation of returns below inflation or some such).
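Both measures are straightforward to compute. A sketch with a synthetic return series and an assumed 3% inflation rate as the target:

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0.07, 0.18, size=40)  # made-up annual returns
target = 0.03  # minimum acceptable return, e.g. inflation

# Semideviation: dispersion of the shortfalls below the target only.
shortfalls = np.minimum(returns - target, 0.0)
semidev = np.sqrt(np.mean(shortfalls**2))

# Drawdown: depth and length of dips below the running peak of wealth.
wealth = np.cumprod(1 + returns)
peak = np.maximum.accumulate(wealth)
drawdown = wealth / peak - 1.0
max_depth = drawdown.min()  # breadth: the deepest dip below a prior peak
# Length: the longest consecutive run of years spent below a prior peak.
max_length = max(len(run) for run in
                 "".join("d" if d < 0 else "u" for d in drawdown).split("u"))

print(f"semideviation below {target:.0%}: {semidev:.1%}")
print(f"max drawdown depth: {max_depth:.1%}, longest drawdown: "
      f"{max_length} years")
```

Unlike standard deviation, neither measure penalizes a great year: only returns below the target, or time spent underwater, count against you.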

Ultimately, I'm in the camp of those who think the real risk is that we run out of money before we expect to. Standard deviation is, at best, a very, very imperfect indicator of that.