A recent thread discussed the 4% SWR in light of current market conditions, mostly interest rates. One concern raised repeatedly in that thread was the lack of independence among overlapping 30-year stretches of market returns: if one 30-year stretch resulted in a successfully funded retirement, there is a good chance the 30-year stretch starting a year later will too.

Years ago, I wrote a comprehensive financial tracking and planning program, which I still use today. It tracks my investments (updating their values after the market close every day) and expenses, allows for entry of Social Security, estimates taxes, and has a Monte Carlo retirement simulator built into it that simulates three values: inflation, investment return, and interest rates. Since the programming language (Java) has a built-in normal distribution function, I used the normal distribution as the shape of all three simulated values. This is a shortcoming of the model, but I don't think it is fatal, and it does remove the "lack of independent observations" objection.

According to my current estimates, my withdrawal rate is 3.5%. The model maintains that level of inflation-adjusted spending all the way to the end. While the model doesn't explicitly address potential nursing home expenses, I attempt to approximate them by not reducing spending as a function of age.

Attempting to simulate a low interest rate, low return, reasonably high inflation environment, I set the distributions to

Inflation mean: 3.5% variance 1.5% (note can't go below -2%)

Investment mean: 5% variance 10% (nominal returns)

Interest mean: 2% variance 1%

The results from 10,000 runs of this model show a 10% chance of running out of money within 30 years. However, in all of those "out of money" runs, stock returns averaged 2.6% or less, inflation averaged no less than 3.5%, and interest rates were never higher than 2.2%.
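To make the setup concrete, here is a minimal Java sketch of this kind of Monte Carlo run, using the three distributions listed above (including the -2% inflation floor). The starting portfolio size, the fixed seed, and the simplification of a single blended "investment" return are my own assumptions for illustration, not details of the actual program:

```java
import java.util.Random;

public class SwrMonteCarlo {
    static final Random RNG = new Random(42); // fixed seed for repeatability

    // Draw one annual value from N(mean, sd), with an optional floor
    // (used to keep simulated inflation from going below -2%).
    static double draw(double mean, double sd, double floor) {
        return Math.max(mean + sd * RNG.nextGaussian(), floor);
    }

    // True if the portfolio survives `years` of inflation-adjusted
    // withdrawals; spending grows with each year's simulated inflation.
    static boolean survives(double portfolio, double withdrawalRate, int years) {
        double spend = portfolio * withdrawalRate;
        for (int y = 0; y < years; y++) {
            double inflation = draw(0.035, 0.015, -0.02);
            double ret = draw(0.05, 0.10, Double.NEGATIVE_INFINITY);
            portfolio = (portfolio - spend) * (1.0 + ret);
            if (portfolio <= 0) return false;
            spend *= 1.0 + inflation;
        }
        return true;
    }

    // Fraction of runs that fail within 30 years at a 3.5% withdrawal rate.
    static double failureRate(int runs) {
        int failures = 0;
        for (int i = 0; i < runs; i++)
            if (!survives(1_000_000, 0.035, 30)) failures++;
        return (double) failures / runs;
    }

    public static void main(String[] args) {
        System.out.printf("30-year failure rate: %.1f%%%n", 100 * failureRate(10_000));
    }
}
```

The failure rate this toy version reports won't match the 10% above, since the real program also models interest income, taxes, and Social Security; the sketch just shows the mechanism.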

If I raise the investment mean to 7%, the percentage of 30-year failures falls to 1.3%, and in all the failures the average investment return was 3.5% or less.

The model has shortcomings. Inflation, investment returns, and interest rates are not normally distributed. When interest rates rise, the value of the bond portion of the retirement savings doesn't go down (bond duration is not part of the model). There is no way to predict income tax rates, or changes to them, 30 years hence. Do these problems invalidate the model's results? It is hard to know whether these problems all skew the results in the same direction or cancel one another out.

A 10% chance of financial failure is nothing to feel safe about. But, between the model's limitations and the financial conditions necessary to cause failure, I am not too concerned, and I believe I could adjust my expenses down if necessary.

I just wanted to add another data point to the SWR debate.

No matter how long the hill, if you keep pedaling you'll eventually get up to the top.

You've put a lot of effort into your model and it looks like it's serving you well. 3.5% SWR looks pretty safe to me.

I regard historical data and Monte Carlo techniques as useful "smoke tests" of a plan, since the future tends to play tricks on us (some bad, some good). Why not replace your:

"believe that I could adjust my expenses down, if necessary"

with an actual algorithm in the model? Have a band around the portfolio size you'd like to have in year N, and if a run of the model escapes the band (up or down), adjust your expenses up or down likewise for that year. You could play with how wide the band is and how severe the response to portfolio gyrations is. Then, instead of calculating a failure rate, you could measure how many times you had to tighten your belt and how many times you got to take that trip to Hawaii you always wanted.
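The band rule suggested here can be sketched as a small adjustment function. The straight-line target path, the 85%/115% band, and the 10% adjustment step below are all hypothetical choices for illustration, not anything from the original model:

```java
public class SpendingBand {
    // One simple choice of target path: straight-line depletion of the
    // starting balance over `horizon` years.
    static double target(double start, int year, int horizon) {
        return start * (1.0 - (double) year / horizon);
    }

    // Nudge spending when the portfolio leaves [lo*target, hi*target].
    static double adjust(double spend, double portfolio, double target,
                         double lo, double hi, double step) {
        if (target <= 0) return spend;
        double ratio = portfolio / target;
        if (ratio < lo) return spend * (1.0 - step); // tighten the belt
        if (ratio > hi) return spend * (1.0 + step); // trip to Hawaii
        return spend;
    }

    public static void main(String[] args) {
        double start = 1_000_000, spend = 35_000;
        // Year 10, portfolio well below the straight-line target:
        double t = target(start, 10, 30);
        System.out.println(adjust(spend, 450_000, t, 0.85, 1.15, 0.10));
    }
}
```

Calling `adjust` once per simulated year inside the Monte Carlo loop would turn "I could cut expenses" into a measurable policy, as suggested.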

- Peter Foley
Posts: 2607 | Joined: Fri Nov 23, 2007 10:34 am | Location: Lake Wobegon

Interesting analysis, and thanks for your efforts.

What AA did you use?

(or) Did you use a number of AA's with different results?

What AA did you use?

My current AA is 40% stocks (50/50 US/international) / 55% bonds / 5% cash.

The model doesn't do any rebalancing, though it does take the money for expenses in proportion to the existing allocation.

I don't do anything to change this, as it gets too complicated to try to follow how things are going. My main reason for doing this was to see what happens if my current plan is moved forward through time.

These things are fun and shed some interesting light.

I would suggest your stock market variance is rather too low compared to historical data. More like 20% than 10% (I seem to remember last I looked it was 18% a few years ago, don't recall the start date).

This might make things a little worse, but may not change the overall story.

Thanks for the work.

We live in a world where knowledge of future markets has less than one significant figure. And people will still and always demand answers to three significant digits.

- Peter Foley

I'm not sure how difficult it is for you to run the simulation, but it would be interesting to see such a simulation for the very common 50% equities portfolio.

Before I retired, I built my own Monte Carlo engine in Matlab, a mathematical modeling tool.

I verified the correctness of the code partly by generally duplicating Bernstein's numbers in the retirement calculator from hell:

http://www.efficientfrontier.com/ef/101/hell101.htm

Also, he suggests a standard deviation of 12% there for stock returns and explains why (explaining why it is not even higher, that is), and a standard deviation of 5% for intermediate bond instruments.

kramer wrote:Before I retired, I built my own monte carlo engine in Matlab, a mathematical modeling tool.

I verified the correctness of the code partly by generally duplicating Bernstein's numbers in the retirement calculator from hell:

http://www.efficientfrontier.com/ef/101/hell101.htm

Also, he suggests a Standard Deviation of 12% in there for stock returns and explains why (explaining why it is not even higher, that is). And a Standard Deviation of 5% for intermediate bond instruments.

Bernstein's comment

I assumed a relatively low stock SD of 12%. This is worth a comment here. When one runs Monte Carlo simulations of stock returns using the historical 15%-20% SD, one comes up with higher long-period (>20 years) variability than is actually observed. The reason is, over the long haul stock returns have a tendency to mean-revert. To correct for this, I’ve lowered the SD a bit.

Basically he is assuming a fairly strong form of mean reversion. This is plausible, but perhaps optimistic; after all, it is derived from looking at things like (anti)correlation at approximately the 20-year time scale, of which we have only about 4 or 5 independent observations. Correlation across 20 years is not very helpful if you only live from 65 to 85, and 4 or 5 data points hardly make a strong case.

I prefer to err on the side of caution, so I use the full value of the SD. Note that one can also properly include a correlation in returns across time, which would be a better approach, and which I have done. But at any rate these methods tend to be both optimistic and a bit arbitrary, so I don't like them.

While we are on the subject, the largest source of error in all this is the choice of the mean return.

Picking nice round numbers: if you have an unknown distribution, and you use a finite set of measurements to estimate that distribution, you introduce errors. If you flip a fair coin 100 times, you most likely will not get 50 heads and 50 tails. Say you get 53 heads. If you build a coin-flip simulator with the assumption that a fair coin lands heads up 53% of the time, you get poor results. Same thing here. If you assume 6% real annual returns with 20% annual SD and you have 100 years of data (assuming a normal distribution, which itself is optimistic), there is about a 1 in 3 chance that the actual mean you want is less than 4% or greater than 8%. Note this is huge uncertainty: the top of that range is double the bottom! There is about a 1 in 20 chance the mean is less than 2% or greater than 10%. And note: this is not the mean you happen to get due to random chance; this is the actual mean of the hidden market return distribution. Random chance is a variation on top of this that is modeled by the Monte Carlo simulation.

Given that level of uncertainty in the most fundamental input, it does not make sense to me to fiddle with second order and third order corrections.
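The 1-in-3 and 1-in-20 figures above come straight from the standard error of the mean, sd / sqrt(n). With the 20% SD and 100 yearly observations from the paragraph above, a quick check:

```java
public class MeanUncertainty {
    // Standard error of the sample mean for n independent observations.
    static double standardError(double sd, int n) {
        return sd / Math.sqrt(n);
    }

    public static void main(String[] args) {
        double se = standardError(0.20, 100); // = 0.02, i.e. +/-2%
        // ~68% of estimates fall within +/-1 SE and ~95% within +/-2 SE,
        // so roughly 1 in 3 land outside 4%-8% and 1 in 20 outside 2%-10%
        // around a 6% true mean.
        System.out.printf("standard error = %.1f%%%n", 100 * se);
        System.out.printf("1-SE band: %.0f%%..%.0f%%%n",
                100 * (0.06 - se), 100 * (0.06 + se));
    }
}
```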

One approach I've used to get conservative estimates of returns is to pick the 35th or 40th percentile of historical returns. I do this with the asset allocation I plan to use during retirement: I take historical values of equity, bond, and cash returns, weight them by my asset allocation, and pick the 35th-percentile values.

I'll mention one other rather crazy thing I've done to back-test plans: I run historical data in reverse order. So instead of the data running 1978, 1979, 1980, I run the test with 1980, 1979, 1978... as the sequence. For some reason that I can't explain, this gives a lower SWR. Go figure.
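One plausible explanation is sequence-of-returns risk: with withdrawals in the picture, the order of returns matters even though the compound growth is the same in both directions. A toy illustration with made-up returns (not actual historical data):

```java
public class SequenceOfReturns {
    // Ending balance after applying returns in order, withdrawing a
    // fixed amount at the start of each year.
    static double endBalance(double start, double withdrawal, double[] returns) {
        double p = start;
        for (double r : returns) p = (p - withdrawal) * (1.0 + r);
        return p;
    }

    static double[] reversed(double[] a) {
        double[] b = new double[a.length];
        for (int i = 0; i < a.length; i++) b[i] = a[a.length - 1 - i];
        return b;
    }

    public static void main(String[] args) {
        double[] rets = { -0.20, 0.05, 0.10, 0.15, 0.25 }; // bad year first
        double fwd = endBalance(100_000, 5_000, rets);
        double rev = endBalance(100_000, 5_000, reversed(rets));
        // Same returns, same CAGR, different ending balances.
        System.out.printf("bad year first: %.0f  bad year last: %.0f%n", fwd, rev);
    }
}
```

With zero withdrawals the two orderings end at the same balance; with withdrawals, taking the bad year first ends lower. If the reversed history happens to front-load the bad stretches relative to the retirement start years tested, the reversed SWR will come out lower.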

I've done some more work on the simulation code and have added an independent bond-return variable to the model. Now the changes in value of investments, IRA, and Roth IRA are calculated as a function of both stock returns and bond returns. I maintain the stock/bond split throughout the model's run, so there isn't any change in AA as I age.

The results are interesting.

Using these distributions (same as above but Bonds now added):

Inflation mean: 3.5% variance 1.5% (note can't go below -2%)

Investment mean: 5% variance 10% (nominal returns)

Interest mean: 2% variance 1%

Bond mean: 1% variance 3%

Not surprisingly, due to the lower returns of bonds, my 30-year failure rate has now gone up to 37%! But the first year in which the model ran out of money is now later, as the lower bond variance also means fewer really bad years.

Does this worry me? A bit. But, I'm not yet ready to change my plan because I think it is unlikely that bonds will return 2.5% below inflation for the next 30 years. The model's results are highly sensitive to the mean investment return, though in most reasonable future scenarios, there is always some chance that I will run out of money in 30 years.

How sensitive is the model? Changing the investment value to a mean of 7% with a variance of 13% cuts the failure rate almost in half, from 38% to 20%! Keeping this new investment distribution and changing the bond value to a mean of 2% with a variance of 4% cuts the failure rate almost in half again, to 12%. Note that this last scenario calls for a mean of 3.5% real growth in stocks and a mean real bond return of -1.5%.

I've found the previous discussion useful and hope others find this information of value.

Raybo,

Useful results. They illustrate the points I have been making for a few years after doing many such runs.

No one knows the correct mean to use for stocks to better than +/-2%. That uncertainty leads to the sorts of results you posted: a particular plan may have a 38% failure rate or a 20% one (and that does not yet include a look at 2% below your baseline run).

Then factor in that, for the same reasons, we don't know the standard deviation to use, which adds more error.

We don't know what shape the distribution takes: normal, lognormal, or something else (some awful fractal thing?). If the correct answer is something other than normal or lognormal, the failure rate is almost certain to go up.

Now instead of 38% vs. 20%, a spread of 18 percentage points, the spread will be rather wider.

Given that huge level of error or uncertainty, I don't see the point in fiddling with shaving the SD or adding the tiny amount of serial correlation that, while seen in the little data we have, may or may not be real.

What your numbers show is the truth.

About the best we can say from either modeling or historical data analysis is that a plan is "very likely to work", "fairly likely to work", "maybe it will work and maybe not", "fairly likely to fail" and "very likely to fail". Less than a significant digit of accuracy.

Rod

Raybo wrote:

Attempting to simulate a low interest rate, low return, reasonably high inflation environment, I set the distributions to

Inflation mean: 3.5% variance 1.5% (note can't go below -2%)

Investment mean: 5% variance 10% (nominal returns)

Interest mean: 2% variance 1%

The results from 10,000 runs of this model show a 10% chance of running out of money in 30 years. However, in all of those "out of money" years, stock returns averaged 2.6% or less, inflation averaged no less than 3.5%, and interest rates were never higher than 2.2%.

If I raise the investment mean to 7%, the percentage of 30 year failures falls to 1.3% and in all the failures, the average investment return was 3.5% or less.

When you say variance 10%, do you really mean standard deviation = 10%?

Because variance = sigma^2, and standard deviation = sigma.

So if variance = sigma^2 = 0.10, then standard deviation = sigma = sqrt(0.10) = 0.316

Do you want to use 31.6% standard deviation of investment return or 10% standard deviation?

A variance of 10% of annual return would imply huge volatility. Maybe emerging markets or some risky individual stock might have a standard deviation of annual returns = 31.6%.

Gott mit uns.

Have you tried varying the standard deviation? I hypothesize that changes in variance are likely to have a larger impact on success rates. One of the two main causes of portfolio failure in this kind of simulation is poor market returns early in retirement; the higher the variance, the more likely this is to occur.

I'm glad to see that you explicitly model the effects of inflation. Wade Pfau, in his simulators, uses just real returns. When I inquired about this, the response was that they'd done some tests and it didn't seem to make much difference in the outcomes. I'm sceptical; as can be verified with even a spreadsheet, there's definitely a "sequence of returns" effect with inflation: for the same mean inflation, it matters whether the high-inflation periods occur early or late in retirement. I don't see how subtracting the inflation from nominal returns captures this, unless the variance of the real returns is increased to compensate.

Sorry for the poor use of statistical terms. While I used the term "variance," it is the standard deviation that the model uses.

As you might expect, a larger standard deviation results in a wider distribution of returns (higher highs and lower lows) but doesn't change the failure percentages much.

The next question is, when you say investment mean, do you mean geometric mean or arithmetic mean?

The geometric mean, which is CAGR, will be less than the arithmetic mean. For instance, I saw one number where the arithmetic mean for U.S. stocks over some period was 8.5%, and the geometric mean was 6.52%. Another article showed 12.30% arithmetic mean and 10.19% geometric mean from 1926 to 1994. Big difference!

So if your arithmetic mean mu = 5% and standard deviation sigma = 10%, then your expected CAGR is less. The bigger the standard deviation, the smaller the geometric mean. **

In Monte Carlo simulation, the arithmetic mean and standard deviation would be used.

** I looked this up, and it's called volatility drag. There is a formula: Volatility Drag = 0.5 * sigma^2.

So if the arithmetic mean is 5% and the sd = 10%, then the geometric mean ≈ 0.05 - (0.5 * 0.10^2) = 0.045 = 4.5%.

I don't know how accurate the formula is. I think it is just the expected volatility drag, because volatility drag is just another random variable.
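The approximation is easy to check by simulation. Here is a sketch comparing the formula against the geometric mean of a long stream of normally distributed simple returns, using the 5%/10% numbers above (the harness itself is my own, not from any post here):

```java
import java.util.Random;

public class VolatilityDrag {
    // Geometric mean ~= arithmetic mean - sigma^2 / 2.
    static double approxGeometricMean(double mu, double sigma) {
        return mu - 0.5 * sigma * sigma;
    }

    // Simulated geometric mean of n years of N(mu, sigma) simple returns.
    static double simulatedGeometricMean(double mu, double sigma, int n, long seed) {
        Random rng = new Random(seed);
        double logSum = 0;
        for (int i = 0; i < n; i++)
            logSum += Math.log1p(mu + sigma * rng.nextGaussian());
        return Math.expm1(logSum / n);
    }

    public static void main(String[] args) {
        System.out.printf("approx:    %.4f%n", approxGeometricMean(0.05, 0.10));
        System.out.printf("simulated: %.4f%n",
                simulatedGeometricMean(0.05, 0.10, 1_000_000, 1));
    }
}
```

The two agree to within a few basis points at this volatility; the formula is only a second-order approximation, so the gap widens as sigma grows.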

I'm not sure how to respond.

The formula I am using is for the normal distribution, where there is a mean and a standard deviation. Getting into distinctions about what kind of mean is beyond what I am trying to do.

By saying a mean of 5%, I am expecting that the average of the specified returns will be 5%. In the case of stocks, I assume (and see in the results) that overall the average yearly return for stocks will be 5%.

Note that the program adds up all the returns at the end of the entire simulation and does an arithmetic mean calculation (add all of them together and divide by the number of observations) that usually equals the stated mean, or is very close.

What I'm saying is that when people report that, over the long term, stocks have returned about 10% per year, they are referring to the annualized return, which is the geometric mean. Same thing when Jeremy Siegel says that stocks had 7% real return per year over the long term. They are reporting the annualized return, i.e. geometric mean.

But because of volatility drag, the annualized return is less than the average annual return. On average by that formula: Volatility Drag = .5 * sigma^2. [I think volatility drag is a random variable, so the number is not exact.] In other words, the underlying probability distribution doesn't have a mean of 10%. The underlying distribution might have mean return mu = 12% and standard deviation sigma = 20%

Now a lot of people are forecasting lower returns for stocks going forward, say only 4.5% real return instead of the historical 7%. But again, that forecast is for the annualized return. The underlying distribution they are forecasting might have a mean mu = 6.5% and sigma = 20%. But because of volatility drag, you will only get about 4.5% annualized return.

So if you were going to generate random returns from a normal distribution and you were forecasting a 4.5% real annualized return, you might use mu = 6.5% and sigma = 20%. After you run it, you can calculate the average return and the annualized return. If you do enough runs, the average return should be close to 6.5%, but the annualized return would be less, probably averaging closer to the 4.5% forecast.

grayfox wrote:What I'm saying is that when people report that, over the long term, stocks have returned about 10% per year, they are referring to the annualized return, which is the geometric mean. Same thing when Jeremy Siegel says that stocks had 7% real return per year over the long term. They are reporting the annualized return, i.e. geometric mean.

But because of volatility drag, the annualized return is less than the average annual return. On average by that formula: Volatility Drag = .5 * sigma^2. [I think volatility drag is a random variable, so the number is not exact.] In other words, the underlying probability distribution doesn't have a mean of 10%. The underlying distribution might have mean return mu = 12% and standard deviation = 20%

Now a lot of people are forecasting lower returns for stocks going forward, say only 4.5% real return instead of the historical 7%. But again, that forecast is for the annualized return. The underlying distribution they are forecasting might have a mean mu = 6.5% and sd = 20%. But because of volatility drag, you will only get about 4.5% annualized return.

So if you were going generate random returns from a normal distribution, and you were forecasting 4.5% real annualized return, you might use mu=6.5% and sd=20%. After you run it, you can calculate average return and annualized return. If you do enough runs, the average return should be close to 6.5%, but the annualized return would be less, probably averaging closer to 4.5%

Raybo,

Just to chime in: the above is correct. If you are using some code for a normal distribution (or lognormal), the "mean" is the standard vanilla average of annual returns, not the annualized return, geometric mean, or average compound growth rate (all the same thing), just as grayfox says.

You can use the stock return data from this site to compute both arithmetic mean and geometric mean: http://www.econ.yale.edu/~shiller/data.htm

Rod

Just so I understand what grayfox and rodc are saying: it isn't that I need to change how my simulation works; it's that the mean and standard deviation I've chosen might not correspond to how the market numbers are actually reported. In fact, the numbers I am using might be too low for what I am trying to simulate.

Is this correct?

Raybo wrote:Just so I understand what grayfox and rodc are saying. It isn't that I need to change how my simulation works, it is that the mean and standard deviation I've chosen might not be accurate to how the market numbers actually work. In fact, the numbers I am using might be too low for what I am trying to simulate.

Is this correct?

That would be my understanding of what they are saying. Your simulation and the data have to be according to consistent definitions of the statistics used.

Here's an experiment I put together.

I simulated 30 annual returns drawn from a normal distribution with mean mu = 0.065 and sigma = 0.20, and used simple returns.

I calculated the mean return, annualized return, and volatility drag for each sample.

Then I ran 1000 simulations, calculated summary statistics, and drew a histogram of volatility drag.

Here are results:


Code: Select all

          ret.avg  ret.ann  vol.drag
Mean        6.546    4.512   2.03454
SD          3.894    3.845   0.04874

Histogram of Volatility Drag for 1000 Runs

Midpoint  Count
    0.75      3
    1.25    144
    1.75    383
    2.25    279
    2.75    149
    3.25     34
    3.75      4
    4.25      2
    4.75      2

Over 1000 runs, the estimate of the average return was 6.546% with standard deviation 3.894% (the true mean was 6.5%).

The estimate of the annualized return was 4.512% with sd 3.845% (the theoretical formula gives 4.5%).

Volatility drag had a mean of 2.03% (the theoretical formula gives 2.0%). But you can see in the histogram that the bulk of the volatility drags ranged from 1.25% to 2.75%.

One thing to note is how big the standard deviation is on the estimates of the average return and annualized return. I think that is because there are only 30 years in each simulation. For example, the true mean was 6.5%, but the estimate is 6.546% with sd 3.894%. The sd is more than 50% of the mean. Just by chance, some run may have only, say, a 3% average return, and you would think that was the mean if that were the only one you saw.
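An experiment like the one above can be sketched in a few lines of plain Python (my own reconstruction, not the original code; the clamp at -99% is an assumption I added so compound growth stays positive on extreme draws):

```python
import random

random.seed(42)
MU, SIGMA, YEARS, RUNS = 0.065, 0.20, 30, 1000

avg_sum = ann_sum = 0.0
for _ in range(RUNS):
    # 30 annual simple returns drawn from a normal distribution.
    # Clamp at -99%: a simple-return draw below -100% is meaningless.
    rets = [max(random.gauss(MU, SIGMA), -0.99) for _ in range(YEARS)]
    avg = sum(rets) / YEARS                # arithmetic average return
    growth = 1.0
    for r in rets:
        growth *= 1 + r                    # compound the portfolio
    ann = growth ** (1 / YEARS) - 1        # annualized (geometric) return
    avg_sum += avg
    ann_sum += ann

print(f"mean of average returns:    {100 * avg_sum / RUNS:.2f}%")   # near 6.5%
print(f"mean of annualized returns: {100 * ann_sum / RUNS:.2f}%")   # near 4.5%
print(f"mean volatility drag:       {100 * (avg_sum - ann_sum) / RUNS:.2f}%")
```

With these parameters the average drag should come out near the 2% that 0.5 * sigma^2 predicts, though any single 30-year run can land well away from it.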

Gott mit uns.

I started this thread with these numbers:

Inflation mean: 3.5% sd 1.5% (note can't go below -2%)

Investment mean: 5% sd 10% (nominal returns)

Interest mean: 2% sd 1%

Bond mean: 1% sd 3%

If I understand you correctly, if I am trying to simulate 5% annualized returns, I should be using a higher mean, say 7% but with a wider variance. What about inflation and bond values?


No matter how long the hill, if you keep pedaling you'll eventually get up to the top.

I don't want to tell you what parameters you should use. It's your forecast.


And that's the beauty of forecasting. Everybody can have their own beliefs and their own forecast.

Ex-ante, everybody is right and nobody is wrong.

However, you do want your simulation to be consistent with your beliefs.

So, for instance, if you believe that annualized returns for your investment over 30 years will be 5% and standard deviation of annual returns will be 10%, then you want your simulation to reflect that.

But if I try mu = 0.05 and sigma = 0.10, because of volatility drag, over 1,000 runs I got an annualized return of 4.393%, on average, which is not consistent with your 5% forecast. Volatility drag averaged 0.623%. [The theoretical formula says 0.5%]

Code: Select all

          ret.avg  ret.ann  vol.drag
Mean        5.016    4.393   0.62320
SD          1.831    1.785   0.04535

Histogram of Volatility Drag

Midpoint  Count
    0.25      2
    0.35     39
    0.45    152
    0.55    256
    0.65    279
    0.75    173
    0.85     71
    0.95     21
    1.05      7

By trial and error, I tried mu = 5.7%, sigma = 10%, and it got pretty close to 5% annualized, on average. This time volatility drag averaged 0.642%.

There is still a large error in the estimate, and it varied from run to run.

Code: Select all

          ret.avg  ret.ann  vol.drag
Mean        5.724    5.082   0.64240
SD          1.836    1.787   0.04976

Histogram of Volatility Drag

Midpoint  Count
    0.35     21
    0.45    121
    0.55    238
    0.65    315
    0.75    193
    0.85     74
    0.95     27
    1.05     10
    1.15      1
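The trial-and-error step above has a quick closed-form starting guess: invert the drag approximation, so the required underlying mean is roughly the target annualized return plus 0.5 * sigma^2. This is only the approximation (it gives 5.5% here, where the simulation needed about 5.7%), and the function below is my own sketch, not anything from the thread's code:

```python
def required_mu(target_annualized: float, sigma: float) -> float:
    """Back out the underlying (arithmetic) mean needed to hit a target
    annualized return, using annualized ≈ mu - 0.5 * sigma^2."""
    return target_annualized + 0.5 * sigma ** 2

print(required_mu(0.05, 0.10))   # 0.055: first guess for a 5% target at sd 10%
print(required_mu(0.045, 0.20))  # 0.065: matches the 6.5%/20% example earlier
```

Since the approximation understates the drag somewhat, a simulation calibrated this way will still land slightly below target, which is why the trial-and-error value (5.7%) ended up above the formula's 5.5%.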

Gott mit uns.

I appreciate your taking the time to do a bit of numbers work for me. It helps make the distinction between basic statistics and financial calculation.

In truth, raising the investment mean by .75% doesn't make all that much difference, just a few percent in 30 years.

The best news is that out of 5000 runs, no plan failed before I was 81 (21 years from now). That gives me a long time to adjust my burn rate and other factors before it gets critical.


No matter how long the hill, if you keep pedaling you'll eventually get up to the top.

Probability of Success isn't the only important output when analyzing Monte Carlo simulation (MCS) results. Two plans with different inputs that each show a 90% probability of success can have meaningfully different levels of risk that probability of success doesn't capture.

I added an output to The Flexible Retirement Planner called Average Spending Shortfall to help show this.


Along with probability of success, another important output is the size of the shortfall for those simulation iterations where the plan failed (ran out of money). A plan with a reasonably high probability of success that usually fails in the early years may be less robust than a plan with a lower probability of success that usually fails near the end of the plan. The Average Spending Shortfall shows the average percent of total planned retirement spending that couldn’t be funded in those simulation iterations that failed.

For example, consider a retirement plan with level spending planned for 40 years. Further, assume that when the retirement plan fails, on average it fails in simulation year 30. Such a plan would have an average spending shortfall of 25%, because on average 1/4 of the retirement plan’s spending wouldn’t get funded in those iterations that failed.
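The worked example above can be sketched as follows (my own reconstruction of the idea for a level-spending plan, not The Flexible Retirement Planner's actual code):

```python
def average_spending_shortfall(failure_years, plan_years):
    """failure_years: the year each failed iteration ran out of money.
    Returns the average fraction of planned (level) spending left unfunded
    across the failed iterations; 0.0 if no iteration failed."""
    if not failure_years:
        return 0.0
    shortfalls = [(plan_years - y) / plan_years for y in failure_years]
    return sum(shortfalls) / len(shortfalls)

# The worked example: a 40-year plan whose failures happen, on average, in year 30.
print(average_spending_shortfall([30], 40))  # 0.25, i.e. a 25% shortfall
```

Note this measure is computed only over the failed iterations, so two plans with identical probabilities of success can still differ sharply on it.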

Below are images from a run of the planner's sensitivity analysis feature. This analysis was done using a standard 30 year retirement plan with a 4% withdrawal rate as the base. The sensitivity analysis heat map shows the results of running several hundred different simulations, each with different settings for the portfolio return and standard deviation.

The two images below show details (on the right) from two different cells on the heat map. Both cells are from simulation runs that had a 90% probability of success. However, you can see that the run with a low return/low standard deviation had a much lower Average Spending Shortfall (7% shortfall) than the run with the higher return and std deviation (17% shortfall).

Moshe Milevsky introduced a measure called SORDEX (Sequence of Returns Downside Exposure) to help quantify a retirement plan's sensitivity to downside risk from high volatility. I described a way to compute SORDEX using The Flexible Retirement Planner here.

Jim

The above phenomenon is often overlooked. It is a general consequence of high-risk, high-return investing that the worst cases, though rare, will be much worse than the worst cases of low-risk, low-return investing. A boundary point that illustrates the issue: a zero-risk investment returns a certain result, with no chance of any outcome being worse than the average outcome. A risky investment will always have some chance of doing worse than that.
