There's some compelling math behind the common practice of using standard deviation as a measure of the informal notion of "risk," at least in contexts compatible with the exposition that follows.

Define a *risk averse investor* to be any investor who has the following two properties:

1. No matter how much money he has, the investor would always be happy to have another dollar. This property is called *non-satiation*.

2. The more money he has, the less important it becomes to the investor to have one more dollar. For example, if he has only $1 to start with, getting another $1 is more important than it would be if he had $1 million to start with. This property is called *decreasing marginal utility of wealth*.
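The two properties can be made concrete with any increasing, concave utility function. Log utility is one standard example (my choice here for illustration only; the definition above allows any utility with these two properties):

```python
import math

# Log utility u(w) = ln(w): one example of a utility function with
# both properties in the definition of a risk averse investor.
def u(w):
    return math.log(w)

# Non-satiation: another dollar always increases utility.
assert u(2) > u(1)
assert u(1_000_001) > u(1_000_000)

# Decreasing marginal utility of wealth: the extra dollar matters
# less to an investor who starts with $1 million than to one who
# starts with $1.
gain_poor = u(2) - u(1)                    # from $1 to $2
gain_rich = u(1_000_001) - u(1_000_000)    # from $1M to $1M + 1
assert gain_poor > gain_rich
```

In calculus terms these are just u'(w) > 0 and u''(w) < 0.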

Note that this definition does not use the notion of standard deviation or any other statistical concepts. It's quite simple. It seems clear that nearly all investors are "risk-averse" in this sense.

We need one more definition. An investment I1 is defined to be *more efficient than* an investment I2 if **all** risk averse investors (in the sense defined above) prefer I1 to I2.

With these two quite reasonable definitions, we can prove the following theorem:

Suppose I1 and I2 are two investments with lognormally distributed returns, with instantaneous yearly expected returns alpha1 and alpha2 and standard deviations of instantaneous yearly returns sigma1 and sigma2 respectively. Then I1 is more efficient than I2 if and only if:

alpha1 >= alpha2 and sigma1 <= sigma2

with strict inequality holding in at least one of the inequalities. This result holds for all time horizons and initial levels of wealth.

This theorem from mathematical finance is at the heart of all those efficient frontier graphs. In more-or-less plain English, it tells us that risk-averse investors like expected return (they want to increase it) and they dislike standard deviation (they want to decrease it).

Put another way, if we think informally of "return" as something we like (and would like to increase all else being equal), and if we think informally of "risk" as something we dislike (and would like to decrease all else being equal), then in the context of comparing competing feasible lognormally distributed investment alternatives, the theorem tells us that standard deviation is the proper way to measure "risk".
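As a numerical sanity check (not a proof), we can sketch the theorem's conclusion for a few investors from the CRRA utility family, which is one well-known family of risk averse utilities in the sense defined above. The closed-form expected utilities below assume the parameterization where expected wealth after one year is e^alpha, so log wealth is normal with mean alpha - sigma^2/2 and standard deviation sigma; the specific numbers are made up for illustration:

```python
import math

def expected_crra_utility(alpha, sigma, gamma):
    """Closed-form E[u(W)] for lognormal terminal wealth W with
    E[W] = e**alpha (log W ~ N(alpha - sigma**2/2, sigma**2)) and
    CRRA utility u(w) = w**(1-gamma)/(1-gamma); log utility when
    gamma == 1. Any gamma > 0 gives a risk averse investor."""
    if gamma == 1.0:
        return alpha - sigma**2 / 2          # E[log W]
    a = 1.0 - gamma
    return math.exp(a * alpha - gamma * a * sigma**2 / 2) / a

# I1 dominates I2: higher expected return AND lower volatility.
alpha1, sigma1 = 0.07, 0.15
alpha2, sigma2 = 0.05, 0.25

# Every sampled risk aversion level prefers I1, as the theorem says.
for gamma in (0.5, 1.0, 2.0, 5.0):
    u1 = expected_crra_utility(alpha1, sigma1, gamma)
    u2 = expected_crra_utility(alpha2, sigma2, gamma)
    assert u1 > u2
    print(f"gamma={gamma}: I1 preferred")
```

Of course this only samples a few investors; the theorem's "if and only if" covers every utility function with the two properties, which is what the proof in the reference establishes.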

Don't worry about the notion of "instantaneous return" in this theorem. I won't define it here; it's a technical detail. For our purposes it's OK, as an approximation, to think of it as the more familiar "simply compounded yearly return".

The theorem holds for all time horizons. Contra Dick, there's nothing special about short time horizons.

Note the two big assumptions in the theorem. First, investors are risk-averse. This seems more than reasonable. Second, the two investments are lognormally distributed. That's a bit more controversial, but still not unreasonable as a first-order rough approximation to how most "normal" investments behave.

For a proof, see Theorem 4.2 on page 20 of *An Introduction to Portfolio Theory*.

With best wishes to all of my fellow risk-averse investors for a very happy and prosperous new year, I remain faithfully yours,

John Norstad

p.s. I have no comment on the original topic of "risk" versus "uncertainty". That's philosophy and above my pay grade. My remarks here are much more modest. They are just about math facts, which is an area where I can sometimes manage to muddle through and have at least some confidence that I actually know what I'm talking about.