It's probably debatable whether the loss in utility from 1M to 500k or 100k to 50k is greater, but I think it would be tough to argue the loss from 10k to 5k is as great, or from 2k to 1k. At some point the utility also relates to the absolute loss and not just the relative loss. The 99% loss scenarios that become more likely on longer horizons don't necessarily have the massive loss to utility indicated by RRA (constant RRA is a necessary assumption of Samuelson's proof).

comeinvest wrote: ↑Sun Sep 22, 2024 11:57 pm
All your paragraphs except the second are implications of your assumption that the "final deathblow" after an already near-complete wipeout has less negative utility than the same relative loss from a higher starting point. I'm not sure about that. I think many people who own "only" $100k would see that as a big safety buffer and a source of possible spending and financial independence in excess of government benefits, with equal if not higher relative risk aversion. But that is subjective and everybody can make their own determination. I personally would want to avoid both a 90% and a 95% final wipeout at nearly all cost, so it is rather irrelevant to me. Let's go back to the objective part, your second paragraph.

skierincolorado wrote: ↑Sat Sep 21, 2024 8:54 am
Your first sentence mixes up risk aversion and relative risk aversion. All the people I know with $50k have much less relative risk aversion than I do. Their loss of utility from losing half their money is much less than mine. This is because loss aversion is not just relative, it is absolute. They have more risk aversion about losing $50k than I do. I've lost $50k multiple times and not even noticed. But their relative risk aversion is surely higher.

comeinvest wrote: ↑Thu Sep 19, 2024 12:58 am
We are getting on slippery slopes here. I would disagree that a person with $100k has less risk aversion than someone with $2M; I know people in either category. But it is rather irrelevant for the topic at hand.

skierincolorado wrote: ↑Wed Sep 18, 2024 10:45 pm
I agree with most of this but want to point out that decreasing RRA (relative risk aversion) with higher wealth is an argument for less leverage, not more. Decreasing RRA with high wealth is equivalent to having smaller changes in utility from relative changes in wealth. For example, doubling from 5M to 10M or halving from 10M to 5M makes very little difference in utility. Taking risk today (for someone with no future additional savings) can help increase the odds of very high wealth, but if there is little additional utility there is little incentive. A person with decreasing RRA at high wealth must have increasing RRA at low wealth. Taking lots of leverage increases the risk of very large loss at long horizons. If the person has increasing RRA at lower wealth they would view these outcomes extremely negatively.

comeinvest wrote: ↑Sun Sep 15, 2024 6:23 pm

I would agree with all of this. But I would argue that both the time-in-the-market and the reduced risk aversion conditions are likely fulfilled for most people reading this thread and seriously taking care of their financial independence, so the formula given is obsolete in most cases. I don't mean to make statements that include or exclude more or less affluent readers; I think what matters is the savings rate relative to current consumption (which, b.t.w., is often worse for very affluent people, who occasionally also go broke). That is simply because the savings grow exponentially, while the earnings from your career typically just grow with inflation after the first few years. Someone with less than 15-20 years time in the market can use the formula; probably only for less than 15, because the yellow or blue curve with the earlier upswings would apply. 15-20 years is probably also the minimum to seriously benefit from mHFEA. Probably not coincidentally, 15 years is about the maximum that momentum or "trending" market dislocations of valuation ratios last, or at least before the exponential function "wins" and dominates the returns.

I think a potential argument against using RRA to calculate the Samuelson share is that RRA actually decreases at low wealth for most people. To take it to an extreme, losing half your money when you have $2 isn't as bad as losing half when you have $2M. Risk aversion is both absolute and relative.

Because Samuelson assumes risk aversion is relative and constant, the worst-case scenarios cost more utility than they would for most people. Even if the assumption of normality isn't perfect, the odds of losing 99% of your money are surely higher over 20 years than over 1 year. Assuming constant RRA of 1 means that losing 99% instead of 98% has the same effect on utility as doubling your money (in opposite directions). I don't think that's true.
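The claim about constant RRA of 1 is just the arithmetic of log utility, and can be checked directly. A minimal sketch, assuming CRRA with gamma = 1 (i.e. u(W) = ln W) and an arbitrary starting wealth:

```python
import math

def log_utility(wealth):
    # CRRA utility with relative risk aversion gamma = 1 (log utility)
    return math.log(wealth)

w0 = 100_000  # arbitrary starting wealth

# Utility change from ending with 1% of wealth instead of 2%
# (i.e. a 99% loss instead of a 98% loss)
worse_loss = log_utility(0.01 * w0) - log_utility(0.02 * w0)

# Utility change from doubling wealth
doubling = log_utility(2 * w0) - log_utility(w0)

# worse_loss is -ln(2) and doubling is +ln(2): equal magnitude, opposite sign
print(worse_loss, doubling)
```

Under log utility only the ratio of wealth levels matters, which is exactly why the 98% → 99% deterioration is scored as severely as a doubling is scored positively.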

You are repeating your thesis "losing 99% of your money are surely higher over 20 years" that contradicts your charts (for moderate leverage). History has no precedent for losing 99% either in one year or in 20 or 30 years; in fact not only the probability of a 99% loss, but even the probability of a 1% loss was zero for ca. 20-year or longer horizons with no or moderate leverage, if I read your chart right. So you are wildly speculating here on something with no factual support whatsoever. The question becomes: what would the world look like if that were to happen? In many such catastrophic scenarios I think your portfolio would outlive you, in which case the utility of $1, $100k, and $1M would all be zero; you could then argue that the $2M -> $4M increment, which you would likely realize in case a global catastrophic event does not happen, has more utility, even if the additional utility is small. (Obviously not saving at all but consuming everything might have been best in a catastrophic scenario; but that is off topic.) Another scenario that I could imagine you would survive is political upheaval with a full or partial expropriation of property, in which case the utility of any amount of assets is pure speculation. To stay on topic: whether a loss larger than historical precedent over your investment horizon can happen even though the system survives is also speculation. There are a lot of empirical and theoretical arguments that capital will keep growing exponentially in the long run, as long as the system survives; although of course I would always use leverage ratios way below the historical optimum. Other than that, the empirical evidence per your charts is that the risk of loss (and/or the retirement target shortfall risk, if applicable) decreases with increasing time after 15-20 years.

Can you still answer my question about the ratios in your charts please?

It is also absolutely true that losing 99% is more likely over 20 years than over 1 year. Both the charts I posted and common sense tell me this. The charts show that the probability of any loss rises at first and then falls, which establishes that returns are close (not perfectly) to the i.i.d. assumption even at 20 years. If returns are i.i.d., the risk of losing 99% is much higher at 20 years than in 1 year. This is also just common sense, I think: the risk of catastrophe over 20 years is higher than over 1 year. A way to visualize it is that the 30th percentile goes down at first and then improves. The 10th percentile goes down longer, and then improves. The 5th percentile goes down even longer but eventually improves. Etc. This is backed mathematically under the approximate assumption of normality, but also by common sense. The higher risk of a 99% loss at 20 years would be true even if returns were autocorrelated; they would have to be extremely autocorrelated/mean-reverting for it not to be true. And the charts show that while there might be some autocorrelation/mean reversion by year 20, it is by no means extreme.
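Under an i.i.d. lognormal model, both claims (rising tail-loss probability and percentiles that dip and then recover) can be checked in closed form. A minimal sketch, assuming 5% annual arithmetic drift and 20% volatility; these parameters are illustrative, not the thread's calibration:

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 0.05, 0.20          # assumed annual drift and volatility
g = mu - 0.5 * sigma**2         # geometric (log) drift

def prob_loss_worse_than(frac, years):
    # P(terminal wealth < frac * initial wealth) under i.i.d. lognormal returns
    z = (math.log(frac) - g * years) / (sigma * math.sqrt(years))
    return norm_cdf(z)

def percentile(z_q, years):
    # q-th percentile of terminal wealth (z-score z_q), initial wealth = 1
    return math.exp(g * years + sigma * math.sqrt(years) * z_q)

# The probability of a 99% loss rises with horizon:
p1, p20 = prob_loss_worse_than(0.01, 1), prob_loss_worse_than(0.01, 20)

# The 10th percentile (z ≈ -1.2816) falls at first, then recovers:
p10 = [percentile(-1.2816, t) for t in (1, 5, 18, 30)]
```

With these parameters the 10th percentile of terminal wealth bottoms out around year 18 and improves thereafter, while the probability of a 99% loss keeps rising over any realistic horizon, matching the percentile picture described above.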

The higher probability of a 99% loss at 20 years than at 1 year is a big problem for risk taking and leverage IF relative risk aversion is constant, in other words, if the loss of utility from $100k to $50k is the same as the loss of utility from $1M to $500k. This is the part that I think is the bigger flaw. Relative risk aversion is surely lower for somebody with $10 than for somebody with $1M.

At longer time horizons, proving the fallacy of time diversification relies more and more on very low probability events with extremely negative utility. But if the loss in utility has a lower bound, we start to care less about these very low probability events and more about the 10th percentile, which is likely improving at 20 years even though the extreme tail is still deteriorating.
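One way to formalize a lower bound on utility is to floor the CRRA function. A toy sketch; the floor level (2% of initial wealth) is an arbitrary assumption for illustration, not something from the thread:

```python
import math

FLOOR = 0.02  # assumed wealth fraction below which utility stops falling

def log_u(w):
    # Plain log utility (constant RRA = 1)
    return math.log(w)

def floored_u(w):
    # Same as log utility above the floor, constant below it
    return math.log(max(w, FLOOR))

# Under pure log utility, sliding from a 98% loss to a 99% loss
# costs a full ln(2) of utility, as much as a doubling gains:
cost_log = log_u(0.02) - log_u(0.01)

# With the floor, the same deterioration costs nothing extra,
# so the extreme tail stops driving the optimization:
cost_floored = floored_u(0.02) - floored_u(0.01)
```

With a floored utility the very low probability, near-total-wipeout scenarios no longer dominate expected utility, which is exactly why attention shifts to percentiles like the 10th.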

So I think that's the bigger issue over longer horizons. The assumption of normality and the rising probability of 99% losses are likely pretty accurate; maybe the odds are a bit better on longer horizons, but I'm not sure there's enough data on long horizons to be sure. On shorter horizons, returns over 15 years are quite close to what normality predicts.

What was your question about the ratios?

Like I said, I find that extrapolating a mathematical formula to derive probabilities of a 90% or 99% (or whatever) drawdown over some time horizon is completely futile and pure speculation, when that formula is a closed-form approximation of empirical data that showed a 0% probability of even a 1% drawdown, let alone any larger drawdowns.

For this reason alone I'm tossing out almost all your sentences in your second paragraph.

Secondly and independently, I think the fact that the solid and the dashed lines in your chart show significant and consistent (across leverage ratios) divergence from each other after about 15-20 years, is prima facie evidence that the returns process is not i.i.d. (Loosely speaking the returns appear to have negative correlation with probably 5-10 year lags, with the negative cumulative correlation increasing from there to 15-20 year lags, before they probably become less correlated again as the exponential growth dominates the mean-reverting underlying components of returns; but that interpretation and analysis is rather irrelevant; your charts that show an important portfolio level "end user" risk metric speak for themselves.)

Not only the amount, but the directions of the solid and the dashed lines in the second chart become consistently opposite starting between ca. year 6 and year 17 depending on leverage. In short, the solid and dashed lines are diametrically opposed for investment horizons longer than ca. 15-20 years.

"They would have to be extremely autocorrelating/mean reverting for it Not to be true. And the charts show that while there might be some autocorrelation/mean reversion by year 20 it is by no means extreme." - I disagree. I acknowledge that the historical data points are few, which might prompt you to extrapolate the <15 years behavior. But the divergence after 15-20 years seems significant and consistent. Short of more sophisticated parameterization of the empirical data and more sophisticated statistical hypothesis tests, if anything I would extrapolate the divergent trend after 20 years. Plot the 35 year chart and it becomes more clear.

My conclusion is also consistent with the plots that show 10-year and 15-year returns against CAPE ratios for many decades worth of data. There would be no visible correlation if the return process were i.i.d. (It's more complex than this and not a direct proof, as I'm jumping from autocorrelation to CAPE ratios; but you get the general idea regarding the existence of mean-reverting components of the returns process.)

I think the yellow, blue, and red lines are the relevant ones, as nobody would leverage at, near, or above the Kelly optimal in the first place. I am magnifying the relevant parts of your charts.

Like I said, just like you, I would use leverage ratios way below the historical optimum, to reflect an unknown probability of tail risk that might not have been exposed in history. But that is completely independent of the evidence regarding the question at hand: that risk decreases with time horizon after ca. 15-20 years (less than 15 years for the low-leverage yellow line), and that the return distributions are not i.i.d. but have some mean-reverting characteristics. You can create a chart of 30 or 35 years and it becomes even more evident. I think your mistake is jumping from < 15-20 years to > 15-20 years (less for no or lower leverage). For short periods, the returns appear almost i.i.d.; the non-i.i.d. behavior is not evident. For longer investment horizons and lag times, the non-i.i.d. behavior shows and eventually dominates, or else the solid lines in your charts would not diverge and eventually turn in the opposite direction from the dashed lines in the average loss chart.

An intuitive explanation is that the portfolio has more time to recover from a catastrophic event the longer the time horizon after ca. 15-20 years (less than 15 years for the yellow line), as the exponential growth will eventually dominate the end results. Economic downturns and surprise effects, as well as dislocations of financial valuation multiples, can develop some momentum or trend in the same direction, but typically last no more than a few years and are, loosely speaking, "naturally" range-bound and mean-reverting.

Another possible intuitive explanation would be that stocks as real assets cannot just "evaporate" into nothing or into almost nothing. I know reality is more complicated; there is book value and goodwill etc.; but you get the idea. I guess individual productive assets can become obsolete, but hardly the aggregate of productive assets.
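The "mean-reverting components" point can be made concrete with an AR(1) toy model for annual log returns. A sketch; the autocorrelation value phi = -0.15 is purely illustrative, not estimated from the thread's data:

```python
def cum_return_variance(T, phi, var1=1.0):
    # Variance of the sum of T AR(1) log returns with lag-1 autocorrelation phi.
    # The autocovariance at lag k is var1 * phi**k, so:
    #   Var(sum) = var1 * (T + 2 * sum_{k=1}^{T-1} (T - k) * phi**k)
    return var1 * (T + 2 * sum((T - k) * phi**k for k in range(1, T)))

phi = -0.15  # assumed mild mean reversion (negative autocorrelation)

for T in (1, 5, 20, 35):
    # Ratio < 1 means cumulative-return variance grows more slowly
    # than the linear (i.i.d.) rate, i.e. long horizons are less risky
    # than the i.i.d. extrapolation implies.
    ratio = cum_return_variance(T, phi) / T
    print(T, round(ratio, 3))
```

Even mild negative autocorrelation makes the per-year variance of cumulative returns shrink with horizon, which is the mechanism behind the divergence of the solid and dashed lines after 15-20 years.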

My other question was:I noticed that you selected the proportion of Kelly optimal leverage ratios (if I understand it right) for your legend. Is this because the charts would scale similarly for varying expected returns and volatilities, as long as this proportion stays constant? Could you please generate charts or explain how the charts would be different for our consensus 4% expected real returns, 20% volatility assumptions, and what would be the Kelly optimal leverage ratio in this case; or what leverage ratios do the ratios in your charts correspond to?

There's really not enough data to say how much more likely (if at all) a 95% or 99% loss becomes on longer time horizons. It's true the probability of loss diverges somewhat from the normal distribution after 15 years, but there are only 5 non-overlapping 20-year periods in the data. Anything regarding normality or autocorrelation beyond 10 or 15 years is speculation based on too little data.

However, I don't need data to tell me the probability of a 90% loss is greater over 30 years than it is over 5. The probability of a major natural disaster or social/political upheaval is greater over 30 years than over 5. Maybe it's not exactly as much more likely as extrapolating a normal distribution of returns would predict, but I think the overall point is very much intact. I do agree with your point that real assets are unlikely to evaporate and would have some mean-reverting properties, so it's not perfectly normal. But I'd also speculate that while the partial/total collapse of the U.S. government (let's say it is replaced with some sort of violent populist/fascist/socialist government that doesn't respect private property rights) only has a 1-5% chance in the next 5 years, the probability over the next 30 years might be as high as 10%. The probability of an asteroid large enough to mostly destroy the economy might be 0.1% in 5 years but 1% in 50 years (or maybe it's 0% for both and this is a bad example). The risk of extreme events large enough to mostly or totally destroy the stock market certainly increases over longer horizons.
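The rare-event point is just compounding of per-period probabilities. A sketch assuming the event arrives independently each year (the 0.6% annual figure is an illustrative assumption roughly matching a ~3% chance per 5 years, in the spirit of the guesses above):

```python
def prob_over_horizon(annual_p, years):
    # Probability of at least one occurrence in `years` independent years
    return 1.0 - (1.0 - annual_p) ** years

annual_p = 0.006  # assumed annual probability of a catastrophic event

p5 = prob_over_horizon(annual_p, 5)    # roughly 3% over 5 years
p30 = prob_over_horizon(annual_p, 30)  # roughly 16.5% over 30 years
```

Even without any return model, a fixed small annual hazard compounds to a materially larger probability over 30 years than over 5, which is all the argument needs.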

At a certain point it stops mattering, though, and this is where I have more of an issue with Samuelson's assumptions. I don't know that I care that much about a 99% loss vs. a 98% loss (or a 100% loss).

For your question about the chart: they're not my charts; the link is below. The post says that 100% stock was 0.38 of the Kelly criterion, so full Kelly (1.0 on the legend) was 2.63x leverage. The most leveraged line is about 5.5x leverage, while the least leveraged line is a 0.65x stock allocation.
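For the 4% expected real return / 20% volatility assumption raised earlier, the continuous-time Kelly-optimal leverage can be sketched as excess return over variance. Treating the 4% as the excess return over the financing rate is my assumption here, not something the linked post states:

```python
def kelly_leverage(excess_return, volatility):
    # Continuous-time Kelly-optimal leverage for a single risky asset:
    #   f* = (mu - r) / sigma**2
    return excess_return / volatility**2

# Under the assumed 4% excess return and 20% volatility:
f_star = kelly_leverage(0.04, 0.20)  # about 1.0x, i.e. unlevered stock

# By contrast, the linked charts put full Kelly at ~2.63x leverage,
# which at 20% volatility implies a much higher assumed excess return:
implied_excess = 2.63 * 0.20**2  # about 10.5%
```

So under the thread's more conservative 4%/20% consensus assumptions, the legend's Kelly fractions would correspond to much lower absolute leverage ratios than in the original charts.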

https://outcastbeta.com/time-diversific ... ventually/