LadyGeek wrote: Simba 15e, row 161 (Sharpe Ratio): the entire row is blank. I don't understand why each fund in this row points to Column K, SCG / VISGX (Vanguard Small Capitalization Growth) for the ratio calculation. Is the intent to compare each fund to VISGX?

Well, you found an old bug which has been lingering for quite a while... The intent was to compare to T-bills, of course. I fixed the formula in the currently official version of the Simba spreadsheet (v15d), and in a couple of other temporary versions of the spreadsheet. Good catch, LadyGeek, thank you.
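For what it's worth, the intended computation (each fund's excess return over T-bills, divided by the volatility of that excess) can be sketched as below under one common convention. The fund and T-bill numbers are made up for illustration, not actual spreadsheet data:

```python
import statistics

def sharpe_ratio(fund_returns, tbill_returns):
    """Annual Sharpe ratio: mean excess return over T-bills divided by
    the (sample) standard deviation of those excess returns."""
    excess = [f - rf for f, rf in zip(fund_returns, tbill_returns)]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical annual returns (%) for one fund and for T-bills
fund = [10.0, -5.0, 8.0, 12.0, 3.0]
tbills = [2.0, 1.5, 1.0, 1.2, 1.8]
print(round(sharpe_ratio(fund, tbills), 2))
```

The key point of the bug fix is only that the second argument should be the T-bill series, not some other fund's column.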
Simba's backtesting spreadsheet [a Bogleheads community project]
Re: Spreadsheet for backtesting (includes TrevH's data)
siamond wrote: (I am stuck on MSCI Emerging, as the index was created in 1988, but somehow the MSCI website only provides returns since 2000 for the Net series while the Gross series does start in 1988, and I can't find more on Morningstar. Arumph. The 1988-1994 numbers in our Simba spreadsheet are gross numbers, coming from the DFA Matrix document, rounded up from the corresponding MSCI data series.)

Sigh. After a rather protracted exchange with the MSCI support folks, I finally got this weird answer:
Kindly note there is no data available before December 29, 2000 for the MSCI Emerging Markets Index in Net variant. Data can be created as back tested, but it would require a subscription.
Seems to me that they are trying to make me pay for what is essentially a bug on their Web site... Yeah, well, maybe not.
I compared the EM net and gross series for the known returns (2000-2015), and there is a delta of roughly 0.3%. I also compared to the actual VEIEX returns, and there is no question that the net series is a better match. Oh, and I did verify that Vanguard was using the Net numbers as the benchmark of reference when applicable (they recently switched to FTSE). So it seems to me that we have two choices:
1. we stay factual and stick to the gross series for 1988-1994
2. we try to get it 'right' and subtract 0.3% from the gross series to approximate a net data series
Thoughts?
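To make option 2 concrete, here is a minimal sketch of the adjustment; the wedge value comes from the 0.3% average gross-vs-net delta mentioned above, and the sample returns are made up, not actual MSCI data:

```python
# Approximate a net-return series from gross returns by subtracting a
# fixed wedge (the ~0.3%/yr average gross-vs-net delta seen 2000-2015).
NET_WEDGE = 0.3  # percentage points; estimated, not an official figure

def approximate_net(gross_returns, wedge=NET_WEDGE):
    """Subtract a constant annual drag from each gross annual return (%)."""
    return [g - wedge for g in gross_returns]

gross_1988_1994 = [40.0, 65.0, -10.0, 60.0, 11.0, 75.0, -7.0]  # hypothetical
print(approximate_net(gross_1988_1994))
```

Whether a constant wedge is defensible depends on how stable the delta really is, which is exactly the question at hand.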
PS. All details are in my comparison spreadsheet, which I am beefing up with various findings as I plow through the numbers:
https://drive.google.com/open?id=0B0svR ... XJaLTN3c3M
Re: Spreadsheet for backtesting (includes TrevH's data)
Hm. Not a lot of feedback on my pending questions (see the past few posts, and also this thread about VTRIX). Some opinions would really be very welcome as I am in the process of making multiple decisions (and I have a couple more I didn't even document yet)... Help!
I need to put my scrubbing project on hold for a little while, so here is the current status for those of you who might be interested in details.
- the current *draft* for the new 16a version of the Simba spreadsheet can be found here.
- backup material (the Vanguard data I received, plus more stuff about indexes, NR vs GR returns, etc.) can be found here (aka the comparison spreadsheet)
Last edited by siamond on Fri Oct 28, 2016 12:20 pm, edited 1 time in total.
Re: Spreadsheet for backtesting (includes TrevH's data)
siamond wrote: I compared the EM net and gross series for the known returns (2000-2015), and there is a delta of roughly 0.3%. I also compared to the actual VEIEX returns, and there is no question that the net series is a better match. Oh, and I did verify that Vanguard was using the Net numbers as the benchmark of reference when applicable (they recently switched to FTSE). So it seems to me that we have two choices:
1. we stay factual and stick to the gross series for 1988-1994
2. we try to get it 'right' and subtract 0.3% from the gross series to approximate a net data series

I'm having a tough time directly interpreting your spreadsheet comparison (there's a lot of data!).
Assuming the 0.3% delta is consistent, then option 2 makes sense. But from your earlier post, I got the impression that the net vs. gross delta isn't quite that consistent. In that case, I can see the argument for keeping it "factual" with the best data we have available.
There's also the matter of the older DFA EM data representing gross numbers, and they're sketchy enough as-is without also adding additional modifiers. So we're still likely to have a gross/net change in that data set either way.
Re: Spreadsheet for backtesting (includes TrevH's data)
Tyler9000 wrote: I'm having a tough time directly interpreting your spreadsheet comparison (there's a lot of data!).

Feel free to reach out by private message if you want to understand more details. I admittedly didn't try to make the comparison spreadsheet friendly (it isn't intended to become a sustained public document); I'm just trying to capture all the data points I gather, mostly for myself, to avoid redoing the same research two years down the road!
I went through all the International series (except for the IFA part), and you're right, the NR vs GR delta is NOT consistent at all, and at times truly puzzling. Note that the MSCI Emerging Markets data series is the *only* one suffering from lack of NR history. And I checked, Vanguard always used those as benchmarks. So I'm pretty set on the idea of using NR data whenever possible. And yet I do understand your point. It does bug me to provide EM returns that are artificially 'rosy' though. Hm, that one is tough.
=> Other views?
Re: Spreadsheet for backtesting (includes TrevH's data)
siamond wrote: => Other views?

I am in favor of estimating net index returns for MSCI Emerging, but the fundamental question that remains (and that applies to all backtesting) is: how useful are the historical returns of asset classes in which it was nearly impossible for retail investors to invest at the time?
Examples: gold returns in years it was illegal to own gold, S&P 500 returns before Jack Bogle created VFINX, Barclays Aggregate Bond Index before the creation of VBMFX, TIPS returns before the creation of TIPS in 1997 (or the introduction of TIPS funds and ETFs), etc.
I guess that my question does not belong to this technical thread, though.
Re: Spreadsheet for backtesting (includes TrevH's data)
longinvest wrote: <snip> but the fundamental question that remains (and that applies to all backtesting) is: how useful are the historical returns of asset classes in which it was nearly impossible for retail investors to invest at the time?

It's an excellent question, and one worth discussing.
On the plus side, I think the idea is that the historical data has merit because we can now easily invest in these things, so it's worth looking at the "what if we could have invested in them back then" scenario. Or maybe it's this: if future economic conditions are similar to economic conditions at various times in the past, then past historical returns should contribute to the sample population we can use to develop estimates of future returns. Of course nothing is exactly the same as in the past, but there are fundamentals like interest rates, inflation, valuations, growth rates, etc. that can be compared.
On the minus side, if something was harder and/or more expensive to trade in the past, then you might expect an illiquidity premium in past returns. I believe this argument has been used to explain, for example, the historical small-cap value premium, especially for the smallest decile.
There's also the argument that the world is generally a safer place than it was in the past, and risk premiums decline with increasing safety, so we should expect a smaller equity risk premium, for example, going forward.
Kevin
Re: Spreadsheet for backtesting (includes TrevH's data)
From my perspective, index funds don't make markets; they simply track them. So reconstructing the performance of an index prior to its inception using the same methods as the index is a perfectly reasonable thing to do.
Note that indices themselves also do this. For example, the Russell 2000 index was founded in 1984. But if you search online for Russell 2000 data, they provide numbers since 1979. Like any company selling a new product, they reconstructed the previous 5 years of data to demonstrate how the index worked. The most important thing from a backtesting perspective is the methodology of the index rather than the fund itself.
Whether the performance of an index changed over time due to a myriad of factors is a different matter. Uncovering those trends is why we need the data to begin with.
Re: Spreadsheet for backtesting (includes TrevH's data)
Ok, I think I'll stick with a 'Net' estimate for Emerging Markets. Doing otherwise would be inconsistent and misleading, I'm afraid. Yes, this implies a slightly clunky computation for 1988-1994, but so be it. Hopefully, at some point, MSCI will fix their data gap. I'll make a note to monitor that from time to time.
As to the previous years (1972-1987), we currently use the IFA EM index, which is defined as:
* January 1970 – December 1987: 50% IFA Int'l Value and 50% IFA Int'l Small Cap
Well, instead of reusing those IFA EM numbers, I'd rather simply follow the corresponding core logic and compute 50% of our Int'l Value series and 50% of our Int'l Small Cap series. This core logic sounds rather dubious, but we don't have anything better, and at least the Int'l Value data series is more solid, cf. the MSCI EAFE Value (net) index for 1975+. Anyhoo, I haven't dived deep enough into the IFA numbers yet (I know they've been updated), but this is my current line of thinking, trying to make the best of lemons...
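For what it's worth, the 50/50 construction amounts to averaging the two annual return series year by year, i.e. an annually rebalanced half-and-half mix. A minimal sketch, with made-up returns:

```python
def blend_50_50(series_a, series_b):
    """Year-by-year 50/50 blend of two annual return series (%),
    i.e. an annually rebalanced half-and-half portfolio."""
    return [0.5 * a + 0.5 * b for a, b in zip(series_a, series_b)]

# Hypothetical annual returns (%) for Int'l Value and Int'l Small Cap
intl_value = [12.0, -4.0, 20.0]
intl_small = [18.0, -10.0, 25.0]
print(blend_50_50(intl_value, intl_small))
```

Note the averaging is done on each year's return, not on the end-of-period CAGRs, which is what "annually rebalanced" implies.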
Re: Spreadsheet for backtesting (includes TrevH's data)
Moving on, let me share another tentative change... Let's discuss International Value. Here are our data sources in the official v15d Simba spreadsheet:
- IFA website (IFA IV, added back expenses): 1972-1974
- MSCI EAFE Value 1975-1996 ==> gross returns
- Vanguard Intl Value (VTRIX) 1997+
As discussed in this thread, VTRIX has a complicated history: it wasn't always a value fund (it pivoted in 1997/98), and it remains an active fund to this day, with no less than three independent active managers! We also have solid history for the MSCI EAFE Value index (from 1975 till now, net and gross returns). VTRIX can be compared to such an index, but it doesn't track it (being an active fund).
Now stlutz and I had the same thought. There is a passive fund for Int'l Value, namely iShares EFV, which tracks the MSCI EAFE Value index and seems to do a good job of it. EFV was launched in 2005, so we have annual returns since 2006. Therefore, we could combine the MSCI EAFE Value (net) series with EFV, and this would be a better and more consistent match for the International Value asset class. It's a bit of a departure from the Simba spreadsheet being centered on Vanguard funds, but this wouldn't be the only exception.
Moving from EAFE(gross)+VTRIX to EAFE(net)+VTRIX would be a significant change (the 1975+ CAGR drops by 0.7%), and we should minimally do that. Moving to EAFE(net)+EFV goes down another notch (0.3% more), but we're staying in the same order of magnitude, and this seems an approach more consistent with passive investing principles. Interested folks can compare the exact numbers in the comparison spreadsheet I posted before.
If we were to do that, I would also provide the entire VTRIX history (starting from 1984) in the new Data_Misc tab, for those interested. So if somebody is truly attached to the old way of combining numbers, a simple copy & paste will do the trick. Feedback welcome.
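Mechanically, combining the MSCI EAFE Value (net) history with EFV is just a splice of per-year return series from different sources. A minimal sketch (the years and returns below are made up, not the actual data):

```python
def splice(*segments):
    """Concatenate annual return segments (dicts of year -> return %)
    from different sources into one continuous series, sorted by year.
    Later segments override earlier ones on any overlapping year."""
    combined = {}
    for segment in segments:
        combined.update(segment)
    return [combined[year] for year in sorted(combined)]

# Hypothetical returns (%): index history spliced with a live fund
eafe_value_net = {2004: 24.0, 2005: 14.0}   # MSCI EAFE Value (net), made up
efv = {2006: 30.0, 2007: 2.0}               # iShares EFV, made up
print(splice(eafe_value_net, efv))
```

The "override on overlap" rule reflects the stated preference for the live fund's returns once they exist.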
Last edited by siamond on Mon Oct 31, 2016 3:11 pm, edited 1 time in total.
Re: Spreadsheet for backtesting (includes TrevH's data)
Now, as to the use of some of the IFA data series for some of the early years... Some IFA data series were themselves derived from other sources (e.g. MSCI, the DFA Matrix), either fairly directly, or by doing some crude math to fill historical gaps. The MSCI original sources were gross series. So I elected to go back to the original sources, including MSCI net numbers.
IFA EM (Emerging): 1972-1987
=> IFA does crude math here, i.e. 50% IFA Int'l Value Index and 50% IFA Int'l Small. Not having a better idea for methodology (MSCI didn't track Emerging until 1988), I kept the same formula, while applying it directly to our Int'l Value and Int'l Small series.
IFA IV (Int'l Value): 1972-1974
=> here this is simple: IFA uses the MSCI EAFE (gross) data series, and I simply switched to MSCI EAFE (net). Note that MSCI EAFE Value starts in 1975.
IFA (Int'l Small): 1972-1996
=> here IFA uses the DFA International Small Cap Index, so I went back to the corresponding DFA Matrix data. There is no corresponding MSCI data series until the end of the 90s.
So overall, the only IFA dependency which remains is the crude methodology used to approximate Emerging returns between 1972-1987, which is really quite dubious, so I added a **CRUDE DATA** warning in the corresponding Data_Source description of the Emerging Markets data series.
PS. I am thinking of introducing a new IFA dependency though. We will have 1970-71 data for pretty much every data series that used to start in 1972 (as this old post from Trev_H had already documented). The main gap is REITs. IFA approximates pre-1972 REITs as 50% Small Caps, 50% Small Cap Value. Why not...
Re: Spreadsheet for backtesting (includes TrevH's data)
IMO, you're probably going too far in trying to generate numbers. Longinvest's question is valid: does it make sense to backtest against something that you wouldn't have even thought to invest in back at the time? And I'm not talking about the S&P 500 vs. an actively-managed large-cap fund. If you can't find REIT data, it's probably because it wasn't investible to individuals back then, in index or actively-managed form.
In the cases where you are saying the data is questionable (i.e. the IFA data), I would just fill it in with the "default" choice and then leave notes as to what you did. For example, before emerging markets funds were around, you would have just invested it all in an EAFE fund. Before REITs, you probably would have just used additional normal stocks, so just fill those gaps in with TSM.
My $.02.
Re: Spreadsheet for backtesting (includes TrevH's data)
stlutz wrote: IMO I think you're probably going too far in trying to generate numbers. Longinvest's question is valid: does it make sense to backtest against something that you wouldn't have even thought to invest in back at the time?

For better or worse, the Simba spreadsheet is out there and in the wild. I think it is better for the data to be coherent and correct than otherwise.
There were no real investable indexes before the late 1980s (December 1986 for bonds, June 1990 for Europe & Pacific). There was no such thing as defined-maturity bond index funds before 1994 (so no backtesting against "intermediate" this or "short" that).
Though it pays to be humble in the face of those massive changes, throwing out the previous 100+ years of market data seems like an overcorrection to that uncertainty. And I don't see a good reason to allow it for some asset classes (say, the "S&P 500" and "intermediate government bonds") but not others (international value and EM).
longinvest's own spreadsheet performs backtesting on things that no human could have ever purchased (there was no "index fund" in 1966, after all, and no actual person used bonds in the 1930s the way Shiller's data does), so it seems he's already decided it is acceptable to backtest something people couldn't, and didn't, invest in. I think that's the right choice.
Some people spend a lot of time chasing up obscure details. Think of the original Cowles Commission poring over historical stock data in the US, or James Murray's herculean effort in compiling the Oxford English Dictionary. I'm glad those people did it; not all of it turns out to be worthwhile, but who am I to say which efforts are futile, which merely quixotic, and which will pay unanticipated dividends? It isn't as if every minute of my day is devoted to "worthwhile" pursuits.
Re: Spreadsheet for backtesting (includes TrevH's data)
stlutz wrote: For example, before emerging markets funds were around, you would have just invested it all in an EAFE fund. Before REITs, you probably would have just used additional normal stocks, so just fill those gaps in with TSM.

There was no TSM fund before 1992, so that doesn't help very much if we're trying to avoid synthetic, uninvestable things. The same applies to EAFE funds, the first of which didn't exist until 2001, when iShares created EFA.
Most of the things we invest in today are extraordinarily recent, which is probably part of the urge to try to understand how they may have performed in very different circumstances.
Re: Spreadsheet for backtesting (includes TrevH's data)
stlutz wrote: In the cases where you are saying the data is questionable (i.e. the IFA data), I would just fill it in with the "default" choice and then leave notes as to what you did. For example, before emerging markets funds were around, you would have just invested it all in an EAFE fund. Before REITs, you probably would have just used additional normal stocks, so just fill those gaps in with TSM.

As long as we have a decent index, personally, I am fairly comfortable using those historical numbers. This isn't perfect, but it does remain informative. Plus, being quite blunt, the Simba spreadsheet would be completely gutted and pretty much useless if we were to only use numbers coming from real funds. So I'm trying to stick to its core philosophy.
But I do agree that those IFA 50/50 tricks are more than dubious. For REITs, it's just a couple of years, so I don't think this really matters, and I have sympathy for your 'just use TSM' suggestion. For EM, this is trickier as we're speaking of a much longer period of time (1970-1987). Also, I checked the 50/50 IFA EM formula against known numbers (1988+) last night, and the match is absolutely horrendous (while there might be a small kernel of truth to the REIT formula).
I also had the opportunity to compare the IFA numbers to the GFD EM numbers (which we can't quote), and the 50/50 rule's 1970-87 CAGR seems way too rosy. On the other hand, backtracking to EAFE (Developed) would seem quite at odds with what appears to have happened in the 70s and 80s, cf. *the* primary reference, the DMS/Credit Suisse study from 2014.
Or... we could just call a cat a cat, and acknowledge that EM is an asset class that should be part of the 1985+ data set instead of the 1972+ (or 1970+) data set. And then using EAFE for the 1985-87 gap would be fine.
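For anyone wanting to repeat this kind of sanity check of a proxy formula against known returns, comparing CAGRs over the overlap years is the core of it. A minimal helper, with made-up numbers:

```python
def cagr(returns_pct):
    """Compound annual growth rate (%) of an annual %-return series."""
    growth = 1.0
    for r in returns_pct:
        growth *= 1.0 + r / 100.0
    return (growth ** (1.0 / len(returns_pct)) - 1.0) * 100.0

# Hypothetical overlap-period returns: a proxy formula vs. the real index
proxy = [20.0, -5.0, 12.0]
actual = [35.0, -15.0, 10.0]
print(cagr(proxy) - cagr(actual))  # CAGR gap, in percentage points
```

A small CAGR gap alone doesn't prove a proxy is good (year-by-year tracking matters too), but a large gap, as with the 50/50 EM formula, is disqualifying.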
Re: Spreadsheet for backtesting (includes TrevH's data)
Just to make sure I was clear above: I'm not arguing that anything that could not have been invested in index form should be excluded from the spreadsheet. If one couldn't buy any kind of mutual fund, active or passive, that targeted the asset class, then I do raise a serious question.
The oldest "emerging markets" mutual fund launched in 1989. So how exactly could any of us have invested in the asset class before it even existed?
A Google search turned up this interesting article about such funds from 1998: http://www.nytimes.com/1998/01/11/busin ... tcome.html
Not too many years ago, such countries were labeled "undeveloped countries." Then, in a bit of verbal political correctness, they became "less developed countries," or L.D.C.'s. That name was often used in discussions of bad bank loans, which did not help their image.
But starting little more than a decade ago, people began to speak of "emerging markets." There were two nice aspects to that. The emerging part implied progress, and the markets part assumed that countries were following the capitalist path. That assumption could become widespread only with the fall of Communism in most of the world, and with the general view that even Communist countries would adopt market economies.
Re: Spreadsheet for backtesting (includes TrevH's data)
Ok, I made a final call on International Value based on feedback received privately and publicly. Data Sources will be the following:
MSCI EAFE NR USD 1970-1974
MSCI EAFE Value NR USD 1975-2005
iShares MSCI EAFE Value (EFV) 2006+
This drops the 1972-2015 CAGR by nearly a point, but such is the price of seeking better data sources...
Re: Spreadsheet for backtesting (includes TrevH's data)
Makes sense to me! I always prefer the most accurate data possible.
Re: Spreadsheet for backtesting (includes TrevH's data)
Let's discuss T-Bills now, which we primarily use as the risk-free benchmark for Sharpe ratios. In Simba 15d, the data sources are:
- prior to 1972, coarse assumption that the 1-yr interest rate (from Prof. Shiller, then FRED) provides a decent estimate for T-Bills
- T-Bills 1972-1983
- Vanguard Treasury Money Market Fund (VMPXX) 1984+
- VG Treasury (VMPXX) merged with Admiral Treasury (VUSXX) in Aug 2009
The 1972-83 description wasn't quite specific, but I finally figured out that this came (via indirect means) from the SBBI/Ibbotson T-Bills 30-day data series. Ahem.
At the time of the merger between VMPXX and VUSXX, the respective ERs were 0.28% and 0.15% (and VUSXX is now 0.09%), according to the Vanguard press release (hence a discontinuity in our Simba series). With the recent historically low yields, VMPXX was clearly not sustainable any more. As to VUSXX, its first annual return was in 1993. Note that the average duration for VUSXX bills is 71 days, hence between 2 and 3 months. Oh, and VMPXX is kind of lost in history since the fund disappeared (Morningstar doesn't have a trace of it any more).
Ok, this all seems pretty clunky and inconsistent. Discussing with Kevin_M, we came up with the idea of using the FRED T-Bill 3-month data series, which provides the corresponding interest rates on a monthly basis starting from 1934. This is also what Prof. Damodaran uses for his own T-Bill data series (see here). Prof. Damodaran approximates the T-Bill annual return as the arithmetic average of the Jan, Apr, Jul, Oct rates, though. It would seem more accurate to do a geometric average. So the plan is to streamline:
- prior to 1934, keep the 1-yr interest rate coarse estimate (Prof. Shiller)
- 1934-1992: T-Bill returns as the geometric average of the Jan, Apr, Jul, Oct rates provided by FRED (TB3MS)
- 1993+: use VUSXX (and use its latest ER to adjust the synthetic returns pre-1993)
As usual, feedback welcome.
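To illustrate the arithmetic vs. geometric question, here is a sketch of both estimates from four quarterly TB3MS rates. The rates are made up, and the geometric version assumes (as a simplification) that rolling a 3-month bill earns roughly a quarter of its annualized rate each quarter:

```python
def tbill_year_arithmetic(q_rates_pct):
    """Damodaran-style estimate (%): arithmetic mean of the four
    annualized 3-month rates (Jan, Apr, Jul, Oct)."""
    return sum(q_rates_pct) / len(q_rates_pct)

def tbill_year_geometric(q_rates_pct):
    """Compounded estimate (%): roll a 3-month bill each quarter; an
    annualized rate of r% earns roughly r/4 % over its quarter."""
    growth = 1.0
    for r in q_rates_pct:
        growth *= 1.0 + r / 400.0
    return (growth - 1.0) * 100.0

# Hypothetical TB3MS rates (%) for Jan, Apr, Jul, Oct of one year
rates = [5.0, 5.2, 5.1, 4.9]
print(tbill_year_arithmetic(rates))
print(tbill_year_geometric(rates))
```

At typical bill yields the two estimates differ only by a few basis points (the compounding cross-terms), which is why the choice matters little in practice but the geometric version is conceptually cleaner for an investable roll.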
Re: Spreadsheet for backtesting (includes TrevH's data)
As I was reading your post I was thinking to myself, "Just use the 3-month T-bill rate."
Re: Spreadsheet for backtesting (includes TrevH's data)
siamond wrote: Let's discuss T-Bills now.

If this is the 52-week T-Bill then shouldn't we be using FRED's 1-year data series[1] instead of trying our own interpolation?
At the very least, the 1-year series seems like a benchmark for the averaged 3-month series.
It would also be interesting to ask Damodaran why he used the arithmetic average, just to make sure there isn't some deeper reason that isn't apparent (at least not apparent to me!)
[1]: https://fred.stlouisfed.org/series/DTB1YR
Re: Spreadsheet for backtesting (includes TrevH's data)
AlohaJoe wrote: It would also be interesting to ask Damodaran why he used the arithmetic average, just to make sure there isn't some deeper reason that isn't apparent (at least not apparent to me!)

If simple interest is used, an arithmetic average is correct. If compound interest is used, geometric averages are correct (due to the exponential function).
Bonds use simple interest to determine accrued interest between coupon periods: Bond pricing (Price between coupon periods)
I'm going by textbook theory, which may be different than how the returns are used in this spreadsheet. I'm also not an expert on bonds, but I wanted to explain where one would use arithmetic vs. geometric return calculations.
Re: Spreadsheet for backtesting (includes TrevH's data)
Are you suggesting to completely skip VUSXX? Not too sure about that... For me, a riskfree benchmark should map to something investable, at least in the years where we have such direct history. Or did you mean something else?stlutz wrote:As I was reading your post I was thinking to myself, "Just use the 3 month Tbill rate.".
If we keep VUSXX as the fund of reference, then we're speaking of 13 months shortterm bills, not 1yr bills. From the VUSXX description:AlohaJoe wrote:At the very least, the 1year series seems like a benchmark for the averaged 3month series.
The average maturity typically ranges from 30–60 days, and the fund maintains a dollarweighted average maturity of 60 days or less, and a dollarweighted average life of 120 days or less.
I am really not savvy about bonds & bills, but if I were to use 3month bills as a riskfree investment, I would buy such bills in January, get the principal back plus interest 3 months later, buy new bills on this basis, get principal/interest 3 months later, etc. Therefore I would use a geometric average to compute my annual return. Or am I missing something?LadyGeek wrote:If simple interest is used, an arithmetic average is correct. If compounding interest is used, geometric averages are correct (due to the exponential function).
Good idea. Will do.AlohaJoe wrote:It would also be interesting to ask Damodaran why he used the arithmetic average, just to make sure there isn't some deeper reason that isn't apparent (at least not apparent to me!)
Re: Spreadsheet for backtesting (includes TrevH's data)
Treasury bills are investable, and as a matter of fact Bill Bernstein recommends buying T bills directly rather than using a fund, since the ER is 0 and there is no credit risk to diversify away. Well, Bernstein actually recommends buying all Treasuries directly, not just bills. But I think the argument for just using something like the 3month T bill to represent T bills is better than for using Treasuries for intermediateterm or longterm bonds, since the short term nature makes it likely that the returns will be quite close to a Treasury money market fund. Have you checked this?siamond wrote:Are you suggesting to completely skip VUSXX? Not too sure about that... For me, a riskfree benchmark should map to something investable, at least in the years where we have such direct history. Or did you mean something else?stlutz wrote:As I was reading your post I was thinking to myself, "Just use the 3 month Tbill rate.".
Having said that, I think it's fine to use the fund where data is available if you think it makes more sense, and most Bogleheads probably use a fund rather than following Bernstein's advice. I think using data from FRED makes sense for when fund data is not available and FRED data is. FRED is a reliable source and used by Damodaran, Shiller, et al.
Either 3-month or 1-month T bills would be more consistent with what typically seems to be used for the risk-free rate by academics. Damodaran uses 3-month and Fama-French use 1-month. The idea for the RF rate, at least the nominal rate, is that there is little-to-no term risk in addition to no credit risk. This argues for using the 1-month rate, but there is much more history for the 3-month rate (1-month only goes back to 2001). On the other hand, I guess you could use RF from the French data as the 1-month T bill.siamond wrote:If we keep VUSXX as the fund of reference, then we're speaking of 1-3 month short-term bills, not 1-yr bills. From the VUSXX description:AlohaJoe wrote:At the very least, the 1-year series seems like a benchmark for the averaged 3-month series.
The average maturity typically ranges from 30–60 days, and the fund maintains a dollarweighted average maturity of 60 days or less, and a dollarweighted average life of 120 days or less.
If you sort FRED T Bills series by popularity, the 3month, secondary market rate is the most popular, probably for exactly the reasons we're discussing here.
My thinking is the same as siamond's. Each bill pays its principal and interest at the end of the term, at which point you need to do something with it, which in the case we're discussing here would be to reinvest it in a new bill of the same term. Since you would be reinvesting the interest as well as the prinicpal, it makes sense to compound by the term of the bill (so quarterly for the 3month bill, monthly for a 1month bill, annually for a 1year bill).siamond wrote:I am really not savvy about bonds & bills, but if I were to use 3month bills as a riskfree investment, I would buy such bills in January, get the principal back plus interest 3 months later, buy new bills on this basis, get principal/interest 3 months later, etc. Therefore I would use a geometric average to compute my annual return. Or am I missing something?LadyGeek wrote:If simple interest is used, an arithmetic average is correct. If compounding interest is used, geometric averages are correct (due to the exponential function).
Couldn't hurt. While you're at it, you might ask him where he got his tbill rates prior to 1934, since he only mentions FRED as his data source for t bills, but FRED 3month secondary rates only go back to 1934.siamond wrote:Good idea. Will do.AlohaJoe wrote:It would also be interesting to ask Damodaran why he used the arithmetic average, just to make sure there isn't some deeper reason that isn't apparent (at least not apparent to me!)
Kevin
....... Suggested format for Asking Portfolio Questions (edit original post)
Re: Spreadsheet for backtesting (includes TrevH's data)
The wiki has some some background info: Aswath DamodaranKevin M wrote:...Couldn't hurt. While you're at it, you might ask him where he got his tbill rates prior to 1934, since he only mentions FRED as his data source for t bills, but FRED 3month secondary rates only go back to 1934.siamond wrote:Good idea. Will do.AlohaJoe wrote:It would also be interesting to ask Damodaran why he used the arithmetic average, just to make sure there isn't some deeper reason that isn't apparent (at least not apparent to me!)
Kevin
His website: Data
The spreadsheet is downloaded from here: Useful Data Sets > Historical Returns on Stocks, Bonds and Bills  United States: (html), (Spreadsheet)
Re: Spreadsheet for backtesting (includes TrevH's data)
Personally, I would look at the data source to try to identify what kind of numbers FRED gives us.LadyGeek wrote: If simple interest is used, an arithmetic average is correct. If compounding interest is used, geometric averages are correct (due to the exponential function).
Bonds use simple interest to determine accrued interest between coupon periods: Bond pricing (Price between coupon periods)
I'm going by textbook theory, which may be different than how the returns are used in this spreadsheet. I'm also not an expert on bonds, but I wanted to explain where one would use arithmetic vs. geometric return calculations.
On https://fred.stlouisfed.org/series/TB3MS, there's a link to http://www.federalreserve.gov/releases/h15/ which says:
Treasury bills (secondary market) 3 4
...
6-month ...
...
3. Annualized using a 360-day year or bank interest.
At first sight, I don't see an explanation telling us whether FRED makes any adjustment to the H.15 series. If not, and if I were to translate these numbers into annual returns, I would try to estimate cumulative total returns over a 365.25-day period, but that's just me.
Bogleheads investment philosophy 
Lifelong Portfolio: 25% each of (domestic/international)stocks/(nominal/inflationindexed)bonds 
VCN/VXC/VLB/ZRR
Re: Spreadsheet for backtesting (includes TrevH's data)
Perhaps this has something to do with the definition of Bond Equivalent Yield (BEY)?
The wiki has the formula: Treasury bill
Now, look at the US Treasury: Daily Treasury Bill Rates Data, fine print:
The Bank Discount rate is the rate at which a Bill is quoted in the secondary market and is based on the par value, amount of the discount and a 360day year. The Coupon Equivalent, also called the Bond Equivalent, or the Investment Yield, is the bill's yield based on the purchase price, discount, and a 365 or 366day year. The Coupon Equivalent can be used to compare the yield on a discount bill to the yield on a nominal coupon bond that pays semiannual interest.
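The quoted definitions can be illustrated with a small sketch converting a bank discount rate to its coupon (bond) equivalent for a hypothetical 91-day bill (the 4.50% quote is made up for illustration):

```python
# A bill quoted at a 4.50% bank discount rate (360-day year, discount off par)
# versus its coupon/bond-equivalent yield (365-day year, off the purchase price).
d, t = 0.0450, 91                          # bank discount rate, days to maturity
price = 100 * (1 - d * t / 360)            # purchase price per $100 of par
bey = (100 - price) / price * (365 / t)    # coupon equivalent (a.k.a. BEY)
```

The coupon equivalent comes out slightly higher than the discount rate (here roughly 4.6% vs. 4.5%), since it is measured against the discounted purchase price and a longer year.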
The above comments frame this in a different context. Long story short, the calculations are simple (not exponential) math, which is why an arithmetic average is used.*longinvest wrote:...
3. Annualized using a 360-day year or bank interest.
*This is my guess, but it seems right to me.
Re: Spreadsheet for backtesting (includes TrevH's data)
I don't think this addresses what we're talking about. This applies to the way the annualized yield for a single bill is calculated. Siamond is not proposing to use a geometric mean for the calculation of the yield of a single bill; I believe he would simply divide the annualized yield of the 3-month T bill by 4 to get the return for three months (so no assumption of compounding here). The compounding comes in when you use the proceeds of the matured bill (say after 3 months) to buy the next bill.LadyGeek wrote:Long story short, the calculations are simple (not exponential) math, which is why an arithmetic average is used.*
*This is my guess, but it seems right to me.
I don't think there's enough precision in what we're talking about here for it to matter whether you use 360 or 365 days in the calculations. Similarly, it's actually a 13-week bill, which is 91 days rather than exactly three months, so 364 days to roll four 3-month bills in about a year.
My 2 cents.
Kevin
Re: Spreadsheet for backtesting (includes TrevH's data)
Yes, this is exactly what I did. I took the geometric average of the Jan interest rate divided by 4, the April interest rate divided by 4, etc.Kevin M wrote:I believe he would simply divide the annualized yield of the 3month T bill by 4 to get the return for three months (so no assumption of compounding here).
Yup.Kevin M wrote:The compounding comes in when you use the proceeds of the matured bill (say after 3 months) to buy the next bill.
Re: Spreadsheet for backtesting (includes TrevH's data)
Thanks, that helps. I misunderstood where the compounding would be applied.Kevin M wrote:I don't think this addresses what we're talking about.
(And noting siamond's agreement just after Kevin M's post.)
Re: Spreadsheet for backtesting (includes TrevH's data)
Since we're talking about T bills as the risk-free rate, for example to calculate the Sharpe ratio, it's worth remembering that the definition of risk-free is that there is no uncertainty in return for a specified holding period. So if we're dealing in annual returns, and calculating Sharpe ratios based on annual returns, the holding period is one year, and the risk-free rate in nominal terms is a 1-year T bill. For monthly returns the risk-free rate in nominal terms is a 1-month T bill, which might be why Fama/French use the 1-month T bill as RF in their monthly return series.
We're still faced with the issue that the longest data series from FRED is for the 3-month T bill. There are 1-year T bill rates back to 1959, but 3-month rates go back to 1934.
Kevin
Re: Spreadsheet for backtesting (includes TrevH's data)
Thanks, Kevin, I keep forgetting that one can directly buy bills and bonds...Kevin M wrote:Treasury bills are investable, and as a matter of fact Bill Bernstein recommends buying T bills directly rather than using a fund, since the ER is 0 and there is no credit risk to diversify away. Well, Bernstein actually recommends buying all Treasuries directly, not just bills. But I think the argument for just using something like the 3month T bill to represent T bills is better than for using Treasuries for intermediateterm or longterm bonds, since the short term nature makes it likely that the returns will be quite close to a Treasury money market fund. Have you checked this?siamond wrote:Are you suggesting to completely skip VUSXX? Not too sure about that... For me, a riskfree benchmark should map to something investable, at least in the years where we have such direct history. Or did you mean something else?stlutz wrote:As I was reading your post I was thinking to myself, "Just use the 3 month Tbill rate.".
Having said that, I think it's fine to use the fund where data is available if you think it makes more sense, and most Bogleheads probably use a fund rather than following Bernstein's advice. I think using data from FRED makes sense for when fund data is not available and FRED data is. FRED is a reliable source and used by Damodaran, Shiller, et al.
Still, this isn't possible in every type of investment vehicle (e.g. a 401k). Also, I'd rather stay consistent with the similar data series we have (e.g. long-term Treasuries, intermediate-term Treasuries, etc.) where the same consideration would apply. The T-Bill series is used for Sharpe ratio computations, but it is ALSO a regular asset class in its own right, one that could be part of a portfolio's AA. So I'd rather stick with the use of a fund in all cases.
PS. As a side note, I don't believe that Vanguard has a money market fund holding 1-year T-bills?
Re: Spreadsheet for backtesting (includes TrevH's data)
No, but per my PM, a Treasury MM fund also may not be available in 401k/403b plans, and certainly few plans have the VG Treasury fund.siamond wrote:Thanks, Kevin, I keep forgetting that one can directly buy bills and bonds...
Still, this isn't possible in every type of investment vehicle (e.g. 401k).
Seems reasonable.siamond wrote:Also I'd rather stay consistent with similar data series we have (e.g. longterm treasuries, intermediateterm treasuries, etc) where the same consideration would apply. The TBill series is used for Sharpe ratios computations, but it is ALSO a regular asset class on its own right, that could be part of a portfolio's AA. So I'd rather stick with the use of a fund in all cases.
No, it looks like max maturity for bills in the VG fund is a little less than five months, and as you said, average maturity is closer to two months.siamond wrote:PS. as a side note, I don't believe that Vanguard has a money market fund holding 1year tbills?
Kevin
Re: Spreadsheet for backtesting (includes TrevH's data)
I was just comparing 1-year to 3-month Treasuries where there is overlap, which is 1960-2016 with a gap from 9/2001 to 5/2008 when there were no 1-year Treasuries. While at it, I compared the arithmetic mean to the geometric mean for quarterly 3-month T bills over one-year periods, and the average difference is only 2 basis points with a standard deviation of 4 basis points. Bottom line is that it doesn't make enough difference to matter.
Back to 1-year vs. 3-month, I was wondering if these were close enough that it doesn't matter. Looking at a chart of both it kind of looks like it, but the average of the differences (using January values each year) is about 10 basis points and the standard deviation is about 50 basis points. The difference between January 1-year bills and the average of Jan, Apr, Jul, Oct 3-month bills is 13 basis points with an SD of 62 basis points, so even worse. These differences seem like too much to ignore. Just FYI.
Kevin
Re: Spreadsheet for backtesting (includes TrevH's data)
Agreed on the overall trajectory, but in the years of high inflation (e.g. the early 80s), the difference gets more significant, and a tad harder to ignore.Kevin M wrote:While at it, I compared the arithmetic mean to the geometric mean for quarterly 3-month T bills over one-year periods, and the average difference is only 2 basis points with a standard deviation of 4 basis points. Bottom line is that it doesn't make enough difference to matter.
Anyhoo, at the risk of splitting hairs, we should just try to "do the right thing" here. I am no longer sure if the right thing to do is to use the 3-month rates, or the one-year rates... On one hand, the 3-month rates seem closer to VUSXX. On the other hand, your reasoning that an annual risk-free benchmark should be equated to a one-year T-Bill is intriguing...
UPDATE: barring some new inputs (e.g. from Prof. Damodaran), I am going to stick with the simple approach suggested at the end of this post. Seems good enough.
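For what it's worth, the arithmetic-vs-compounded gap scales roughly with the square of the rate level, which is consistent with both observations above (tiny on average, noticeable in the late 70s / early 80s). A quick sketch with hypothetical flat rate levels:

```python
# Gap between compounding four rolled 3-month bills and the simple annualized
# rate, at various (hypothetical, flat) rate levels. The gap is ~0.375 * r^2.
for r in (0.02, 0.05, 0.10, 0.15):
    compounded = (1 + r / 4) ** 4 - 1      # roll four 3-month bills
    gap_bp = (compounded - r) * 10000      # vs. the uncompounded annual rate
    print(f"rate {r:.0%}: gap {gap_bp:.1f} bp")
```

At a 2% rate the gap is about 1.5 bp; at a 10% rate it is closer to 40 bp, so the averaging method starts to matter in high-rate years.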
Re: Spreadsheet for backtesting (includes TrevH's data)
Now as to commodities... This one is the most puzzling entry in the 2015 Simba spreadsheet. Current data sources are:
1) Collateralized Chase Index 1972-1990
2) DJAIJ plus T-Bills 1991-1996
3) DJAIG plus Yahoo's IPS 'category' returns 1997-2001 (prior to VIPSX)
4) DJAIJ plus VIPSX 2001-2002
5) Pimco PCRIX 2003+
This refers to an Excel model (http://gnobility.com/Syn_Comm/CCF_19722007_san.xls) which has been lost in history... See old post.
I assume that DJAIJ is a typo, and line items 2 to 4 actually refer to the DJAIG commodity index. Wikipedia documents that it actually got renamed a couple of times and is now known as the Bloomberg Commodity Index (Morningstar ticker: XIUSA04GUL, 1991+ returns). DJAIG is a bit of a strange index, the weightings for each commodity are based on market liquidity, and subject to caps.
Browsing a classic book from Roger Gibson, he's somewhat dismissive about DJAIG and clearly prefers the more venerable S&P GSCI (Morningstar ticker: XIUSA0001Q). The author has for decades been a big advocate of including commodities in one's portfolio. And we have S&P GSCI historical returns since 1970, which is very convenient. Its weightings are based on world production.
As far as I understand, the Excel model was trying to emulate the fact that PCRIX (which is a Pimco fund) uses TIPS as a collateral reserve for active management of commodity securities. More information can be found in threads like Rethinking commodities. Note that the Total Return flavor of indexes like S&P GSCI or DJAIG does measure the returns of collateralized commodity futures (as explained here).
So I would suggest simplifying the whole thing and doing the following (which would lead to a fairly similar 1972-2015 CAGR as before):
1) S&P GSCI TR USD 1970-2002
2) Pimco PCRIX 2003+
We could also replace PCRIX by iShares GSG, which uses S&P GSCI as its benchmark, but GSG is fairly new (2006) and a bit of a strange ETF (check the factsheet). Feedback welcome.
EDIT: see more discussion in this thread.
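Mechanically, the splice just chains one series' annual returns into the other's; a minimal sketch (the return values are made-up placeholders, not actual GSCI/PCRIX data):

```python
# Chain two annual-return series end to end and compute the CAGR of the
# combined period. Values below are hypothetical, for illustration only.
gsci = {1970: 0.15, 1971: 0.21}     # stands in for S&P GSCI TR (through 2002)
pcrix = {2003: 0.28, 2004: 0.17}    # stands in for PCRIX (2003+)

combined = {**gsci, **pcrix}
growth = 1.0
for year in sorted(combined):
    growth *= 1 + combined[year]
cagr = growth ** (1 / len(combined)) - 1
```

(In the real spreadsheet the years are of course contiguous; only the source of each year's return changes at the splice point.)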
Re: Spreadsheet for backtesting (includes TrevH's data)
After nagging Prof. Damodaran a second time, I did catch his attention, and he was kind enough to give me this answer:siamond wrote:Agreed on the overall trajectory, but in the years of high inflation (e.g. early 80s), the difference is getting more significant, and a tad harder to ignore.Kevin M wrote:While at it I compared arithmetic mean to geometric mean for quarterly 3month T bills over oneyear periods, and the average difference is only 2 basis points with a standard deviation of 4 basis points. Bottom line is that it doesn't make enough difference to matter.
Anyhoo, to the risk of splitting hairs, we should just try to "do the right thing" here. I am no longer sure if the right thing to do is to use the 3months rates, or the oneyear rates... On one hand, the 3months rates seem closer to VUSXX. On the other hand, your reasoning that an annual riskfree benchmark should be equated to a oneyear TBill is intriguing...
UPDATE: barring some new inputs (e.g. from Prof. Damodaran), I am going to stick with the simple approach suggested at the end of this post. Seems good enough.
First, it depends on how FRED annualizes the T.Bill returns in the first place (since the 3-month T.Bill returns are reported in annualized form). If they multiply by four, your point is well taken and you should compound. If not, you should not. FRED is very cagey about this. Second, if the rate is not compounded, you should compute the compounding effect, but the effect on the equity risk premium (my primary output from these tables) will be in the second decimal point and not significant. But you are right! I should be more precise and I will try to see if the annualized rates already include compounding or not.
I don't fully agree about the significance of the issue, cf. the late 70s as previously discussed. Anyhoo, I guess we'll stick to our geometric math for now, and stay tuned for further updates from the good professor. Very nice of him to answer in such an open manner.
Re: Spreadsheet for backtesting (includes TrevH's data)
This doesn't make sense to me. We get T-bill rates from FRED, not T-bill returns. For the series going back to 1934, the rates are monthly. For the series starting in 1954 you can get daily rates, and aggregate them to either average or month-end monthly rates.siamond wrote: First, it depends on how FRED annualizes the T.Bill returns in the first place (since the 3-month T.Bill returns are reported in annualized form). If they multiply by four, your point is well taken and you should compound. If not, you should not. FRED is very cagey about this. Second, if the rate is not compounded, you should compute the compounding effect, but the effect on the equity risk premium (my primary output from these tables) will be in the second decimal point and not significant. But you are right! I should be more precise and I will try to see if the annualized rates already include compounding or not.
Typically when you select a longer time period than the finest granularity you get an average by default (which you can switch to end of period), so I would guess that the monthly rates for the 1934 series are monthly averages of daily rates, but I guess they didn't save the actual daily values. We can verify that the 1934 series is an average of daily rates by adding the 1954 daily series, calculating a − b, and seeing that the line is horizontal at 0 for average daily but not for end-of-period daily.
Using average daily: [chart]
Using end of period daily: [chart]
Kevin
Re: Spreadsheet for backtesting (includes TrevH's data)
Yes, I would tend to agree that there is a bit of confusion. It could make sense to use a mathematical formula to go from annual returns to monthly returns, and one has to be careful with that, as LadyGeek pointed out. But one should go from historical daily/monthly rates to annual rates when annualizing, not the other way around. And I don't really care how FRED does it, as it seems to me that the historical monthly rate (or purchase-date daily rate) is what it is; this is what applies when buying a 3-month T-bill.Kevin M wrote:This doesn't make sense to me. We get T-bill rates from FRED, not T-bill returns. For the series going back to 1934, the rates are monthly. For the series starting in 1954 you can get daily rates, and aggregate them to either average or month-end monthly rates.
Actually, thinking again about the topic made me realize that we're assuming coupon payment reinvestment when possible (i.e. when buying the next 3-month T-bill), while the rest of the Simba returns, at least the US ones, are expressed as total returns (TR series), which include end-of-month dividend reinvestment (see here). So we're consistent. Prof. Damodaran's Excel spreadsheet is *also* consistent, but the other way around: it does NOT assume any form of intra-year dividend reinvestment in his S&P 500 computation as a case in point, nor intra-year coupon reinvestment for bonds, etc. But in both cases, we really should do a better job of documenting what type of data series we're providing...
Re: Spreadsheet for backtesting (includes TrevH's data)
I swapped a couple more emails with Prof. Damodaran, and we basically agreed on what I stated in the previous post: his spreadsheet (T-bills as well as bonds and stocks) documents returns *without* intra-year reinvestment (e.g. of coupon payments and dividends), while our Simba spreadsheet documents returns *with* intra-year reinvestment (e.g. TR series). Which explains his way of estimating T-bills (using arithmetic averages) and our way of estimating T-bills (using geometric averages). Those are just two distinct data series, with different goals.
I added a couple of lines of explanation to this effect in my working version of the spreadsheet, and this should close the T-bill topic for now.
Last edited by siamond on Wed Nov 16, 2016 12:14 pm, edited 1 time in total.
Re: Spreadsheet for backtesting (includes TrevH's data)
Back to commodity futures (aka CCFs), based on the discussion in this thread, I think we'll settle on a more 'passive' way of viewing things, and replace the Chase/DJAIG/PCRIX approach with the following:
1) S&P GSCI TR USD 1970-2006
2) iShares GSG 2007+ (passive ETF, tracking the S&P GSCI)
I did the update in my working version of the spreadsheet; this makes the overall CAGR drop significantly (so be it), while still displaying the low (slightly negative) correlation between commodities and regular equities that one can expect.
As a side note and for the archives, I stumbled upon this write-up, which provides extensive background and thought process about the previous way of doing things (Chase/DJAIG/PCRIX). Perfectly understandable, but still overly complicated, imho.
Re: Spreadsheet for backtesting (includes TrevH's data)
AlohaJoe wrote:I noticed today that AQR has a number of datasets that may be interesting to add in. From what I can tell, AQR is fine with including them in things like this spreadsheet so long as they are credited. (The full requirements on their website when you download something.)
 Momentum Indexes back to 1980: https://www.aqr.com/library/datasets/m ... esmonthly
Back to AQR and momentum. I wasn't too hot about including them, but I decided to give it a fair try in my working spreadsheet. I spliced the AQR indexes for large-cap and small-cap momentum with the actual funds (AMOMX and ASMOX), using the actual funds' ERs for the entire time series (1980-2015). Then I compared to TSM (Total US), LCV (Large-Cap Value) and SCV (Small-Cap Value). Looks like a nice trajectory.Fryxell wrote:In my Google backtesting spreadsheet I use MTUM and AMOMX for large momentum. I use DWAS and ASMOX for small momentum. The AQR funds only go back to 2010. [...] There is no Vanguard momentum fund.
A closer look at the telltale chart shows that the time period with the real-life returns isn't so convincing, though. Let's focus on the 2005-2015 period (see below). Meh, doesn't look interesting anymore. Admittedly, this is a very short period of time, and other more established factors like SCV didn't make much of a difference either, but I don't know, I have some trouble convincing myself that those AQR data series are worth keeping in there. Note that the value factor appears to be cyclical, and maybe there are good reasons for that, but it is harder to believe that MoM is cyclical. Thoughts?
PS. Hm, those telltale charts would look cleaner if I were to start one year earlier (all lines aligned on the same starting point). Will do.
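The splicing described above can be sketched as follows (the ER value and return numbers are hypothetical, not actual AQR data):

```python
# Approximate pre-inception fund returns by subtracting the live fund's
# expense ratio from the corresponding index returns, then splice in the
# actual fund once it exists. All numbers are made-up placeholders.
er = 0.0040                                   # assumed fund ER (~40 bp)
index_returns = {1980: 0.22, 1981: -0.04}     # momentum index, pre-inception
fund_returns = {2010: 0.11}                   # actual fund, once live

synthetic = {y: r - er for y, r in index_returns.items()}
spliced = {**synthetic, **fund_returns}
```

This mirrors the series construction used elsewhere in the spreadsheet: index minus ER before the fund's inception, actual fund returns afterwards.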
Re: Spreadsheet for backtesting (includes TrevH's data)
It is probably worth trying to think about "what are the criteria for adding something new", since there is an ongoing maintenance cost. Both in terms of the updates (which is small), trying to extend the series back further historically, and in arguing about whether the data series chosen is appropriate/realistic/whatever.siamond wrote:Back to AQR and momentum. I wasn't too hot about including them, but I decided to give it a fair try in my working spreadsheet. [...] I have some troubles to convince myself that those AQR data series are worth keeping in there. Note that the value factor appears to be cyclical, and maybe there are good reasons for that, but it is harder to believe that MoM is cyclical. Thoughts?
I think there is as much (or as little) value in having momentum as having commodities. Actually, I think there is more value in momentum, since most academics consider it on par with value and size. For instance, in a recent paper "Five concerns with the fivefactor model" one of the concerns is that it leaves out momentum:
On the other hand, I am wary of including factors given the current factor hype. I am a moderate believer in factors but I'm suspicious of the fact that the entire world seems to have jumped on the bandwagon right now. Having talked myself through it, I'd leave momentum out this year but probably add it in next year unless the factor-craze totally implodes.Because momentum is too pervasive and important to ignore, most studies do not only report 3-factor alphas, but also 4-factor alphas, based on the 3-factor model augmented with a momentum factor (WML, winners minus losers).
However, the momentum premium has turned out to be too “pervasive” (Fama and French, 2008, p. 1653) and strong to simply ignore, and is by now an established factor. The 4-factor model, i.e. the 3-factor model augmented with a momentum factor, is as popular in the asset pricing literature as the 3-factor model.
Re: Spreadsheet for backtesting (includes TrevH's data)
Commodities were already in the spreadsheet, and were very popular a few years ago (e.g. before 2008 disproved some of their vaunted properties!). So I'd rather keep them in there (hype comes and goes), but I see your point, momentum has much more... momentum nowadays!AlohaJoe wrote:I think there is as much (or as little) value in having momentum as having commodities. Actually, I think there is more value in momentum, since most academics consider it on par with value and size. [...]
On the other hand, I am wary of including factors given the current factor hype. I am a moderate believer in factors but I'm suspicious of the fact that the entire world seems to have jumped on the bandwagon right now. Having talked myself through it, I'd leave momentum out this year but probably add it in next year unless the factor-craze totally implodes.
I confess that I am a big skeptic about momentum. It seems to me that this is one of those things which may disappear once identified and overhyped. Maybe I am biased by my intuition here. Additional maintenance cost is minimal, don't worry about that. I don't mind adding those in the next official release if there is more than one individual asking!
Re: Spreadsheet for backtesting (includes TrevH's data)
Yeah, that's the scenario I'm thinking about: that the spreadsheet accumulates the detritus of all kinds of "once popular" things that no one uses any more. On the other hand, if there is little upkeep cost, maybe it is worth being able to look back at various out-of-favor things. It seems unlikely that we'd add 5-10 things every year, so the scope creep is likely to be quite minimal.siamond wrote:Commodities were already in the spreadsheet, and were very popular a few years ago (e.g. before 2008 disproved some of their vaunted properties!). So I'd rather keep them in there (hype comes and goes), but I see your point, momentum has much more... momentum nowadays!AlohaJoe wrote:I think there is as much (or as little) value in having momentum as having commodities. Actually, I think there is more value in momentum, since most academics consider it on par with value and size. [...]
On the other hand, I am wary of including factors given the current factor hype. I am a moderate believer in factors but I'm suspicious of the fact that the entire world seems to have jumped on the bandwagon right now. Having talked myself through it, I'd leave momentum out this year but probably add it in next year unless the factor-craze totally implodes.
Asness & AQR have a paper on the "myths of momentum": www.iinews.com/site/pdfs/JPM_40th_AQR.pdf that you might (or might not!) find persuasive.I confess that I am a big skeptic about momentum. It seems to me that this is one of those things being arbitraged into nothingness.
Re: Spreadsheet for backtesting (includes TrevH's data)
Thank you for sharing, this was a good read. Ok, so I'm prone to myth #7. And no, I didn't find the answer terribly persuasive, but nice try! This being said, you did convince me that I have to put my bias aside in the decision-making process for inclusion or not, fair point.AlohaJoe wrote:Asness & AQR have a paper on the "myths of momentum": http://www.iinews.com/site/pdfs/JPM_40th_AQR.pdf that you might (or might not!) find persuasive.
Another reason for my reluctance was to use a vehicle which seems more active than passive, and to have the fund provider (AQR) be the same as the index provider (AQR!). This being said, if we were to track iShares MTUM (as Fryxell suggested a while ago), it relies on the MSCI USA Momentum Index. Both the fund and the index were launched in 2013, which is very recent, but the index was backfilled to 1981 (Morningstar symbol F00000Q9HR). And amazingly, the MTUM expense ratio is very low, 0.15%. So I entered the numbers and gave it a quick try, starting in 1985. (SCMOM and LCMOM refer to the small-cap and large-cap AQR data series.)
Ok, I am starting to think that we have a possible candidate for an addition to the spreadsheet. I am a little bothered by the heavy backfilling, though. Suppose the index had actually been created in 1981: over all this time, wouldn't the index definition have changed with all the latest and greatest formulas to capture momentum? (In all fairness, the same happened to value, to a certain extent.)
Re: Spreadsheet for backtesting (includes TrevH's data)
With respect to iShares MTUM using the MSCI index and AQR, I'd say that AQR's momentum definition is a bit more traditional, vanilla, etc. They're doing the equivalent of using straight-up P/B for value rather than what is generally recommended by people these days, which is to use a mix of multiple measures.siamond wrote:Ok, I am starting to think that we have a possible candidate for an addition to the spreadsheet. I am a little bothered by the heavy backfilling, though. Suppose the index had actually been created in 1981: over all this time, wouldn't the index definition have changed with all the latest and greatest formulas to capture momentum? (In all fairness, the same happened to value, to a certain extent.)
Docs:
https://www.aqr.com/~/media/files/fund ... dology.pdf
https://www.msci.com/eqb/methodology/me ... dology.pdf
For example, using multiple X-month measures (MSCI does 12-month and 6-month) is a common improvement over just the single 12-month-except-the-last-one. I know AQR uses multiple measures for some of their other funds. MSCI also adjusts for volatility, which kind of makes sense, though I don't know off the top of my head the historical effect of that change.
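For concreteness, the classic "12-month-except-the-last-one" measure mentioned above ranks stocks by their trailing 12-month return while skipping the most recent month (to avoid short-term reversal). A rough sketch, with invented monthly prices; this is not MSCI's or AQR's exact methodology:

```python
def momentum_12_1(monthly_prices):
    """Classic 12-1 momentum: return from 13 months ago to 1 month ago."""
    if len(monthly_prices) < 13:
        raise ValueError("need at least 13 monthly prices")
    # Skip the most recent month: compare price 1 month ago vs 13 months ago
    return monthly_prices[-2] / monthly_prices[-13] - 1

# 14 months of made-up prices; the final month's move is deliberately ignored
prices = [100 + i for i in range(14)]  # 100, 101, ..., 113
m = momentum_12_1(prices)
```

A multi-measure variant (as MSCI does) would average this with a 6-1 version of the same calculation before ranking.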
Re: Spreadsheet for backtesting (includes TrevH's data)
This is certainly one of the downsides of including new things: we have to decide which flavor to include! My vote would be to pick MTUM over AQR, mostly because the existing choices already have what I'll call a "non-purist Vanguard slant". By that I mean: the spreadsheet doesn't include the "best" (or more pure) small or value factor funds; it includes Vanguard's offerings. I think MTUM's low ER makes it more Vanguard-ish than AQR's offerings and more in keeping with the other entries in the spreadsheet. I think it more likely that Vanguard brings to market an MTUM competitor than an AQR competitor; if that ever happens, I imagine the spreadsheet would start tracking the Vanguard fund.lack_ey wrote:With respect to iShares MTUM using the MSCI index and AQR, I'd say that AQR's momentum definition is a bit more traditional
Re: Spreadsheet for backtesting (includes TrevH's data)
Several of iShares' factor offerings are surprisingly cheap: USMV (US low volatility), QUAL (US quality factor), MTUM, VLUE (US value), SIZE (US size), and EUSA (equal weighted) all have an ER of 0.15%. The international variants are a bit more expensive, but not by a lot: generally another 0.05% for global and 0.10% for emerging markets. So the US low-vol is 0.15%, the global low-vol is 0.20%, and the emerging markets low-vol is 0.25%.siamond wrote:And amazingly, the MTUM expense ratio is very low, 0.15%.
I can only assume these are priced as loss-leaders to try to capture some of the factor market, since the emerging markets dividend ETF has an ER that is double that.
Re: Spreadsheet for backtesting (includes TrevH's data)
Yup, agreed, same thinking here. Done, MTUM added. Thanks for the friendly 'push'.AlohaJoe wrote:My vote would be to pick MTUM over AQR, mostly because the existing choices already have what I'll call a "non-purist Vanguard slant". By that I mean: the spreadsheet doesn't include the "best" (or more pure) small or value factor funds; it includes Vanguard's offerings. I think MTUM's low ER makes it more Vanguard-ish than AQR's offerings and more in keeping with the other entries in the spreadsheet.