lack_ey wrote: I don't get why you're upset by this. That's how these things work. Of course this stuff isn't all that accurate, and it's not too controversial that new information can improve forecasts.
I'm upset by this because leading financial experts are stating that we should set our AA and withdrawal strategy based on valuations. Valuations are HUGELY important, we are told.
But like you said, this stuff isn't all that accurate, and changes year-to-year. Should we change our AA every year based on valuations?
Who has a list of the expected returns MADE AT THE TIME (not backwards looking) for the past 10-20 years? I'd love to see how accurate they turned out to be.
They keep tweaking the model, so it looks better.
The only example I know is Shiller himself forecasting 0% real returns in 1996 and instead we got 6%+ real over the past 20 years.
Yet when I bring this up, experts today state, "Oh no, our latest models forecast 4.4% expected returns in 1996, so we weren't that far off".
But yes, they were. The models have been adjusted AFTER the fact, and then the experts gloss over the failure and continue to state that valuations are great predictors. They'll bring up a list of extenuating circumstances to explain away the failure of the model. But who knows what variables will change next? They'll admit valuations are only 40% effective as a predictor, but then state that people should change their AA based on valuations and work 7 years longer to get their withdrawal rate down to 3%.
But I probably should just let it all go. My apologies to everyone for getting too worked up....
I have a few notes in response that don't all address specific points, so I'll just enumerate them below.
1. If we have information that is actionably reliable enough, sure, it makes sense to adjust AA or the withdrawal rate somewhat from year to year. The discussion should be about how much and how useful the information is. If 30-year TIPS were yielding 3% I'd feel a lot better about a given withdrawal rate than if they were yielding -1% (a rough sketch of that arithmetic follows this point). With the moderate-to-low changes in stock valuations we see most of the time, there's not enough signal, and not enough new or changed information, to justify any significant change in AA. But this is not necessarily always the case.
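To put rough numbers on the TIPS point: this is just the standard annuity/amortization formula with made-up inputs, not anybody's published withdrawal method, but it shows why the level of real yields is actionable information.

    def level_real_withdrawal(real_yield, years=30):
        # Level real withdrawal rate (as a fraction of the starting balance)
        # that exactly exhausts a riskless real-bond ladder over `years`.
        # Standard annuity/amortization formula; ignores taxes, fees, and the
        # fact that real portfolios aren't riskless.
        if real_yield == 0:
            return 1.0 / years
        return real_yield / (1.0 - (1.0 + real_yield) ** -years)

    print(f"{level_real_withdrawal(0.03):.1%}")   # ~5.1% per year at a 3% real yield
    print(f"{level_real_withdrawal(-0.01):.1%}")  # ~2.8% per year at a -1% real yield

Roughly 5.1% versus 2.8% over 30 years is a big, concrete difference, in a way that modest valuation wiggles usually aren't.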
2. With stocks, I believe a lot of the academics and financial writers are likely overstating the predictability of returns based on valuations, even though they already admit it's not very high.
3. Evaluating forecasts for accuracy is very important, good. Agreed there.
4. You're right to be suspicious of people changing the models they use over time (which is different from the same model being updated over time with new data, which I think you might have also criticized? not sure). Of course a newer model is going to be tuned so it doesn't do so poorly over the past, so that doesn't really say anything.
But your one anecdote of Shiller in 1995 speaks to an analysis that was objectively aggressive, potentially prone to overfitting the past and relatively poor at predicting the future. This is not just some canned excuse we can pull out in hindsight; setting aside all the context, from a purely statistical point of view that would have been clear at the time in 1995, although it was debatable which technique would turn out best. If today you have a forecast that assumes on average that valuations stay put (which you derided earlier), this would be less aggressive in (over?)fitting the past data, for better or worse. There's a tradeoff between overreacting to noise and underreacting to signal; a stylized comparison of the two approaches is sketched below.
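To make the "aggressiveness" tradeoff concrete, here's a stylized sketch with assumed numbers (CAPE of 30, historical-average CAPE of 17, 1.5% real earnings growth); neither formula is anyone's actual published model.

    # Two stylized 10-year real-return forecasts for stocks, illustrative only.
    cape_now = 30.0
    cape_mean = 17.0
    horizon = 10           # years
    earnings_growth = 0.015

    # Forecast A: valuations stay put -> return ~ earnings yield + real growth.
    forecast_a = 1.0 / cape_now + earnings_growth

    # Forecast B: CAPE reverts fully to its historical mean over the horizon,
    # adding a (here negative) valuation-change term.
    valuation_drag = (cape_mean / cape_now) ** (1.0 / horizon) - 1.0
    forecast_b = forecast_a + valuation_drag

    print(f"no-reversion forecast:   {forecast_a:.1%} per year real")
    print(f"full-reversion forecast: {forecast_b:.1%} per year real")

With these inputs you get roughly +4.8% versus -0.7% per year. The full-reversion version hugs the historical record but swings wildly with the starting valuation; the no-reversion version barely reacts. Where you land between them is exactly that noise-versus-signal tradeoff.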
5. Sometimes a model can be right and operating properly within the range of expectations, but the world just surprises with an outcome on the fringes. If the weather forecast gives the chance of rain as 30% and it rains, then gives the chance of rain the next day as 20% and it rains, and the chance of rain the day after that as 40% and it rains again, maybe the forecast is broken. Or maybe it was right and the 30% probability was hit, then the 20%, then the 40%. Over the long haul, with a lot of data, you can see whether a forecast is getting things about right. For example, if it rains about 20% of the time that the forecast says there's a 20% chance of rain, then it's working properly.
The problem in finance is relatively slow updating and a paucity of data, so here we're all basically just guessing a lot with models that are difficult to evaluate. It's possible even Shiller's 0% was right as an average and what actually happened was just a big surprise on the upside. I don't think that sounds right, but we'll never really know. Going back to the rain example, the difficulty is in assessing, if it actually rained three days in a row, the probability that our model is messed up and by how much (a toy example follows this point). Without much data it's just hard to say. Some will suspect the model is bad; others will suspect we just got unlucky with rain. It might actually be something in between.
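A minimal sketch of both points, with made-up numbers: first, how you'd check calibration when you have lots of data; second, how little the three-rainy-days anecdote tells you on its own.

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend we have 5,000 daily forecasts, and (in this toy world) rain really
    # is drawn from those same probabilities, i.e. the forecaster is well calibrated.
    forecasts = rng.uniform(0.05, 0.6, size=5000)
    rained = rng.random(5000) < forecasts

    # Calibration check: within each forecast bucket, did it rain about as
    # often as the forecast said it would?
    bins = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
    bucket = np.digitize(forecasts, bins) - 1
    for b in range(len(bins) - 1):
        mask = bucket == b
        if mask.any():
            print(f"forecast {bins[b]:.0%}-{bins[b + 1]:.0%}: "
                  f"rained {rained[mask].mean():.0%} of {mask.sum()} days")

    # The anecdote: rain on all three days, given forecasts of 30%, 20%, 40%,
    # has probability 0.3 * 0.2 * 0.4 = 2.4% if the days are independent.
    # Unlikely, but three data points can't separate "broken model" from "bad luck".
    print(f"P(rain all three days | forecasts correct) = {0.3 * 0.2 * 0.4:.1%}")

With thousands of observations the buckets line up (or don't) and you learn something; with three observations, almost anything is consistent with both "the model is fine" and "the model is junk", which is roughly where we sit with 20-year stock return forecasts.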
6. Even though there are problems with the models, you can't avoid using one, implicitly or explicitly. If you want to criticize a certain prediction, you need to offer your own. If you assume historical returns, that is itself a model with certain strong assumptions, one that is also probably wrong to some degree and can be attacked; a quick back-of-the-envelope example is below.
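For instance, even "just assume the historical average" comes with a lot of statistical uncertainty. A back-of-the-envelope sketch with assumed numbers (roughly 95 years of data, ~18% annual volatility, 7% average real return; none of these are pulled from an actual data set here):

    import math

    years = 95          # assumed length of the historical record
    mean_real = 0.07    # assumed historical average real return
    vol = 0.18          # assumed annual standard deviation of real returns

    # Standard error of the sample mean, treating annual returns as roughly
    # independent; two standard errors gives a crude confidence band.
    se = vol / math.sqrt(years)
    print(f"historical mean: {mean_real:.1%} +/- {2 * se:.1%} (2 standard errors)")
    # -> roughly 7% +/- 3.7%, before even asking whether the future resembles the past

So the "no model" option is really a model with wide error bars of its own.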