In the discussions of Traditional vs. Roth IRA, all the examples (that I found) assume a person starts with no more than the maximum contribution. E.g., start with $5000 and either invest it all in a tIRA and pay tax later, or pay taxes now and invest the remainder in a Roth tax-free. Then, if the marginal tax rate doesn't change, define:

G = Gross starting amount, $

i = Annual investment return, fraction

n = Number of years

T = Marginal tax rate, fraction

Roth = G * (1 - T) * (1 + i)^n

tIRA = G * (1 + i)^n * (1 - T)

...and the results are identical.
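A quick numeric sanity check of that equivalence, with illustrative values for G, i, n, and T (any values give the same conclusion, since the two expressions differ only in the order of multiplication):

```python
# Basic equivalence: with a constant marginal tax rate T, taxing G
# up front (Roth) or at withdrawal (tIRA) gives the same result.
G = 5000.0   # gross starting amount, $
i = 0.07     # annual investment return, fraction
n = 30       # number of years
T = 0.25     # marginal tax rate, fraction

roth = G * (1 - T) * (1 + i) ** n
tira = G * (1 + i) ** n * (1 - T)

print(roth, tira)  # identical, by commutativity of multiplication
```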

So far so good and that horse has been beaten to death. But...what if...the comparison is for a maximum Roth contribution? In that case (both options starting with equal amount "G"):

a = Fraction of marginal tax rate applied to investment returns in taxable accounts

Roth = G * (1 - T) * (1 + i)^n    (same as above, with G * (1 - T) = the IRS maximum)

tIRA = (G - G*T) * (1 + i)^n * (1 - T)    (same as above, except starting with G * (1 - T) to observe the IRS limit)

     + G*T * (1 - T) * (1 + i*(1 - a*T))^n    (the leftover amount G*T is taxed up front, and its annual returns in a taxable account are taxed at rate a*T)

Dividing tIRA by Roth, we get

tIRA/Roth = 1 - T * (1 - x), where x = (1 + i - i*a*T)^n / (1 + i)^n

If a = 0 (so x = 1) or T = 0 then tIRA = Roth. But if a > 0 and T > 0 then x < 1 and Roth > tIRA.
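The same check for the maximum-contribution case with annual taxation of the side account. The parameter values below are illustrative; the point is that the brute-force tIRA/Roth quotient matches the closed form 1 - T*(1 - x), and is below 1 whenever a > 0 and T > 0:

```python
# Maximum-contribution case, annual taxation of taxable returns.
G, i, n, T, a = 8000.0, 0.07, 30, 0.25, 1.0  # illustrative values

roth = G * (1 - T) * (1 + i) ** n
tira = (G - G * T) * (1 + i) ** n * (1 - T) \
     + G * T * (1 - T) * (1 + i * (1 - a * T)) ** n

x = (1 + i - i * a * T) ** n / (1 + i) ** n
ratio = 1 - T * (1 - x)

print(tira / roth, ratio)  # the two agree, and both are < 1
```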

One gets a similar result if, instead of annual taxation of returns, a capital gains tax is applied at the end of the taxable account growth. In that case, tIRA/Roth = 1 - a*T^2*(1-1/(1+i)^n).

Again if a = 0 or T = 0 then tIRA = Roth. But if a > 0 and T > 0 then Roth > tIRA.
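And the capital-gains variant: the side account grows untaxed, then the accumulated gain is taxed once at rate a*T at the end. Again with illustrative values, the brute-force quotient matches 1 - a*T^2*(1 - 1/(1+i)^n):

```python
# Maximum-contribution case, capital gains taxed once at the end.
G, i, n, T, a = 8000.0, 0.07, 30, 0.25, 0.6  # illustrative values

roth = G * (1 - T) * (1 + i) ** n

side = G * T * (1 - T) * (1 + i) ** n   # taxable account, pre-tax
gains = side - G * T * (1 - T)          # growth subject to cap-gains tax
tira = (G - G * T) * (1 + i) ** n * (1 - T) + side - a * T * gains

ratio = 1 - a * T**2 * (1 - 1 / (1 + i) ** n)

print(tira / roth, ratio)  # the two agree, and both are < 1
```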

In other words, when the marginal tax rate *does not change*, and one has enough income to reach the maximum IRA contribution, *the Roth is better than the traditional.* Not what I expected, but that is what the math seems to say.

Has this been covered elsewhere? Is the algebra wrong above? Something else wrong with the analysis?