Bogleheads and ChatGPT

Discussions about the forum and contents
User avatar
seltzer
Posts: 29
Joined: Thu May 18, 2023 8:19 am

Re: Bogleheads and ChatGPT

Post by seltzer »

freakyfriday wrote: Tue May 23, 2023 7:00 am I think you're speaking complete nonsense here, but either way it doesn't matter.

Why is everyone so unable to stay on the topic of how ChatGPT could be of use to the forum? If people think it's of no use, I'd suggest it would be a better use of energy to comment elsewhere, so that there can be an actual exchange of ideas in this thread. Instead it's naysayers, and people showing off unrelated conversations with ChatGPT like it's a photo of their grandchildren.
If nobody who was skeptical or doubted the technology commented, then it would just be a thread full of people giving GPT overblown praise. Even Reddit threads on the topic, typically echo chambers of techno-fetishists, have plenty of well-written comments deflating the media hype around this technology.
freakyfriday wrote: Tue May 23, 2023 7:00 am One on-topic prompt I've been working with... Uses plugins
How is this useful? It divided part/total to show you a percentage, made a completely unsourced claim about the expected return of a fund whose 5-year return is about half of GPT's claim, and then made the completely nonsensical statement "with a risk of 37.98%".

I was going to write out a numbered list of the steps to get GPT to a useful state for this purpose, but I got to #5 and the mental map kept growing so I stopped.
freakyfriday
Posts: 184
Joined: Tue Aug 23, 2022 10:00 am
Location: London

Re: Bogleheads and ChatGPT

Post by freakyfriday »

Looks like combining LinkReader with PortfolioPilot might be a viable option. It took 3 goes to get the plugins to work, which was a bit of a chore.
Prompt wrote:Consider the following investments after #PORTFOLIO. For each one, describe your understanding briefly in terms of headline and breakdown by sector and geography, then provide a summary at the end of the overall portfolio by sector and geography. Explain your reasoning. Give sources where appropriate. If you require more information, ask. If you have given incorrect information, write #WARNING, explain where the error is, then stop output.

#PORTFOLIO
LISA
Legal and General International Trust Ex-UK £9,000
Legal and General Global Property REIT £500
Artemis Global £500
ISA
Vanguard Global All Cap £25,000

#LINKS
- https://www.bogleheads.org/wiki/Three-fund_portfolio
- https://www.vanguardinvestor.co.uk/inve ... c/overview
- https://www.hl.co.uk/funds/fund-discoun ... cumulation
- https://www.hl.co.uk/funds/fund-discoun ... s-i-income
- https://www.hl.co.uk/funds/fund-discoun ... cumulation
ChatGPT wrote: Used Link Reader
Used Link Reader
Used Link Reader
Used Link Reader

Here is the breakdown of the investments:

Legal and General International Trust Ex-UK (£9,000): This fund aims to provide growth by tracking the performance of the FTSE World ex UK Equity Index. The fund invests in companies from developed and emerging markets excluding the UK. The top sectors are Technology, Healthcare, and Consumer Discretionary, and the top countries are United States, Japan, and Canada.

Legal and General Global Property REIT (£500): This fund aims to provide a combination of income and capital growth in line with global developed market property companies and Real Estate Investment Trusts (REITs). The fund invests primarily in property company shares and REITs. The fund’s investments closely match those that make up the FTSE EPRA/NAREIT Developed Dividend Plus Index. The top sector is Real Estate Investment Trusts.

Artemis Global (£500): This fund aims to achieve a rising income combined with capital growth from a wide range of investments. The fund’s portfolio will be made up of shares, but it may also invest in bonds, cash, and other related investments. The top sectors are Financials, Technology, and Consumer Services, and the top countries are United States, United Kingdom, and France.

Vanguard Global All Cap (£25,000): This fund seeks to track the performance of the FTSE Global All Cap Index, which represents the performance of the global stock market. The fund invests in a wide range of sectors with the top sectors being Technology, Financials, and Consumer Services. The top countries are United States, Japan, and United Kingdom.

In terms of the overall portfolio, the majority of the investments are in the Technology, Financials, and Healthcare sectors. Geographically, the investments are diversified across the United States, United Kingdom, Japan, Canada, and France.

Please note that this is a general overview and the actual breakdown can vary based on the current market conditions and the specific allocations of each fund. It's also important to remember that investing in the stock market always comes with risks, and it's important to do thorough research and consider seeking advice from a financial advisor before making any investment decisions.

#WARNING: The information provided is based on the latest available data and does not guarantee future performance. Always do your own research before making any investment decisions.

Sources:

Bogleheads
Vanguard Global All Cap
Legal and General International Index Trust
Artemis Global Income Class
Legal and General Global Real Estate Div Index

Prompt wrote: Is it possible to reduce the risk in the portfolio, and how?
ChatGPT wrote: 3 / 3
Used PortfolioPilot

Your investment portfolio has a score of 597 out of 1000. The portfolio's 12-month expected return is 4.8%. The individual score components break down like this: Risk-adjusted Returns: 3.0/10, Downside Protection: 9.9/10. The current risk level of your portfolio is conservative, which is not a problem.

However, there are a few areas where your portfolio could be improved. The exposure to Interest Rates and Credit Conditions, although already good, could be optimized to enhance your portfolio overall. The expected Sharpe ratio (risk-adjusted return) of your current portfolio is 0.11, and the models have identified an improvement by re-weighting your portfolio.

Here are a few recommendations to reduce risk and improve your portfolio:

Sell securities like AGI to protect your portfolio from downside surprises in Interest Rates.
Sell securities like AGI to protect your portfolio from downside surprises in Credit Conditions.
Decrease your current holding of AGI to increase your portfolio risk-adjusted return.

These scores, problems, and recommendations are automatically generated by the PortfolioPilot Recommendation Engine. For more detailed insights and personalized recommendations, you might want to check out www.portfoliopilot.com.

Please note, investing in the stock market always comes with risks. The information provided is based on the latest available data and does not guarantee future performance. Always do your own research before making any investment decisions.

#WARNING: The information provided is based on the latest available data and does not guarantee future performance. Always do your own research before making any investment decisions.
Prompt wrote: What are your sources for each of the figures you provide?
ChatGPT wrote:
The information provided is based on the latest available data from the following sources:

Legal and General International Trust Ex-UK: HL

Legal and General Global Property REIT: HL

Artemis Global: HL

Vanguard Global All Cap: Vanguard

The portfolio assessment, including the score, expected return, and recommendations, is generated by the PortfolioPilot Recommendation Engine.

Please note that the figures provided are based on the latest available data and do not guarantee future performance. Always do your own research before making any investment decisions.
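For what it's worth, the part/total arithmetic the plugin chain performs is trivial to check locally. A minimal Python sketch, using only the holdings and amounts from the prompt above (fund names abbreviated):

```python
# Portfolio weights: each holding as a percentage of the total.
holdings = {
    "L&G International Trust Ex-UK": 9_000,   # amounts in £
    "L&G Global Property REIT": 500,
    "Artemis Global": 500,
    "Vanguard Global All Cap": 25_000,
}

total = sum(holdings.values())  # £35,000

for name, amount in holdings.items():
    print(f"{name}: £{amount:,} ({100 * amount / total:.1f}%)")
```

Anything beyond this, such as the expected-return and "risk" figures, depends on data and models the plugins don't disclose.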
patrick
Posts: 2589
Joined: Fri Sep 04, 2009 3:39 am
Location: Mega-City One

Re: Bogleheads and ChatGPT

Post by patrick »

freakyfriday wrote: Mon May 22, 2023 6:51 am
seltzer wrote: Sun May 21, 2023 4:54 pm There is an inverse relationship between how much someone understands ChatGPT, and how impressive they think it is.

ChatGPT is basically a "Reddit bot," i.e. the quality of responses you will get is roughly equivalent to those that would be produced by a teenager suffering from Dunning-Kruger. Much of the information is incorrect, but everything will be presented with the utmost confidence. It's essentially on par with a 10th grader paraphrasing Wikipedia articles to write an essay on a topic.

I don't see this as the "tip of the iceberg" and I don't see a feasible path forward towards the bots discerning what is correct on nuanced topics where there exist volumes of misinformation and oversimplified explanations. It will only get worse as the web is scraped for content to feed back into these bots, content that was itself previously produced by them.
You've moved the goalposts here; the Turing test specifically excludes "correctness of facts". ChatGPT was not designed for your new goals: GPT-3/3.5/4 are text-completion engines, and the Chat interface uses them to create conversations. Considering the Turing test, humans cannot reliably distinguish it from another human. That was the purpose of ChatGPT, and it passed with flying colours.
The Turing test does not exclude particular strategies for distinguishing the computer from a human. If the computer answers incorrectly when a human would be expected to answer correctly (or admit not knowing) then the lack of correctness helps distinguish it from a human.
The question is: what role could GPT play in finance?
Passing the Turing test is surely not enough to play a useful role. Imagine a computer that is indistinguishable from a commissioned indexed universal life insurance salesman.
Logan Roy
Posts: 1794
Joined: Sun May 29, 2022 10:15 am

Re: Bogleheads and ChatGPT

Post by Logan Roy »

nisiprius wrote: Tue May 23, 2023 6:47 am The pattern I think I am perceiving is that these tools are amazing when the task can be fulfilled by assembling and intelligently rewriting existing answers to similar questions, but fail when the task requires combining and linking and compounding them. That is, they are unable to construct a chain of reasoning, and unable to create an answer that is implicit in the combination of several sources but not stated as a whole in any single source.

Same question put to Bing Chat in "precise" mode.

Q: What is the focal length of a stack of three lenses: a 5-diopter lens, a 5X magnifying glass, and a lens with 5 cm focal length?
Bing Chat, probably using GPT-4 wrote:Searching for: focal length of a stack of three lenses

Searching for: diopter to focal length

The diopter is a unit of measurement of the optical power of a lens or curved mirror, which is equal to the reciprocal of the focal length measured in meters1. So a 5-diopter lens has a focal length of 1/5 = 0.2 meters.

A 5X magnifying glass has an optical power of 5 diopters, so its focal length is also 0.2 meters.

The third lens has a focal length of 5 cm = 0.05 meters.

However, I couldn’t find any information on how to calculate the combined focal length of these three lenses when they are stacked together. Is there anything else I can help you with?
"A 5X magnifying glass has an optical power of 5 diopters" is a clunker. It is interesting that it says "I couldn’t find any information on how to calculate the combined focal length of these three lenses when they are stacked together." Did it give up because it couldn't find a specific model for exactly three lenses?

How about Bard?
Google Bard wrote:The focal length of a stack of lenses is equal to the reciprocal of the sum of the reciprocals of the focal lengths of the individual lenses. In this case, the focal lengths of the lenses are:

5-diopter lens: 1/5 m = 0.2 m
5X magnifying glass: 1/0.2 m = 5 m
5 cm lens: 1/0.05 m = 20 m
The sum of the reciprocals of these focal lengths is:

1/0.2 m + 1/5 m + 1/0.05 m = 25

The reciprocal of this sum is the focal length of the stack of lenses:

1/25 = 0.04 m = 4 cm

Therefore, the focal length of the stack of lenses is 4 cm.
It correctly reduced them to common units of meters, but it blew the conversion of magnifying power to focal length. It apparently assumed that 5X means 5 diopters.

Which is odd, because a straight Google search tells us, correctly:

Image

The determination of "magnifying power" is somewhat conventional and arbitrary, but Google Search knows what Google Bard doesn't.
You're giving it problems a classical computer would find very simple. There'd be nothing impressive about a piece of software in the 1980s solving these problems.

That's not what this technology is trying to do. And if you want GPT to solve this kind of problem, all you have to do is hook it up to a classical computer. The fact it can use other technologies as tools means we're dealing with this now:

[Distracting embedded animated GIF removed by admin LadyGeek. Here's the link: https://media.tenor.com/M14IXduOoSMAAAA ... y-2001.gif]
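For reference, the calculation both chatbots fumbled is short: for thin lenses stacked in contact, optical powers in diopters simply add. A sketch in Python, assuming the conventional magnifier rating M = 250 mm / f (so 5X corresponds to f = 50 mm, the figure the Google Search result gives):

```python
# Thin lenses in contact: total power (diopters) = sum of individual powers.
def magnifier_focal_length_m(power_x: float) -> float:
    """Focal length in meters from magnifying power, using M = 250 mm / f."""
    return 0.250 / power_x

powers_diopters = [
    5.0,                              # the 5-diopter lens
    1 / magnifier_focal_length_m(5),  # 5X magnifier -> f = 0.05 m -> 20 D
    1 / 0.05,                         # the 5 cm focal-length lens -> 20 D
]

total_power = sum(powers_diopters)    # 45 diopters
focal_length_cm = 100 / total_power
print(f"Combined focal length: {focal_length_cm:.2f} cm")  # about 2.22 cm
```

Under the alternative convention M = 1 + 250 mm / f, the magnifier's focal length (and hence the answer) would come out differently; the point is only that the arithmetic itself is elementary.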
dissentmemo
Posts: 25
Joined: Sun May 14, 2023 9:01 am

Re: Bogleheads and ChatGPT

Post by dissentmemo »

cacophony wrote: Mon May 22, 2023 8:23 pm
nisiprius wrote: Mon May 22, 2023 7:49 pm ...
Amazingly, and I can't imagine how, Google Bard then managed to bungle the calculation of the arithmetic average.
...
I think it's actually the opposite. Given how these models work, it's surprising they ever get a calculation of more than a few digits correct. Remember, they don't actually follow the rules of arithmetic. They're simply answering with what they predict is the likely result, based on prior patterns and adjusted training weights.

For example, if you try multiplying two 9-digit numbers, the ChatGPT result will never be correct. The first handful of digits will probably be right, but it will start getting some wrong. In my experience, Bard is actually better with math generally, which I assumed was because of some math-handling component Google tacked on to help the language model.

Another example:

ChatGPT: The product of 8393.1234 times 83872.321 is 703,982,788.5439544.
(The correct answer is 703,950,739.997)
Bard says: 8393.1234 x 83872.321 = 703,950,740
which is rounded, but correct.
Yep
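cacophony's multiplication example is easy to confirm, which underlines the contrast: a couple of lines of Python do exactly what the language models can only approximate. A quick check using exact decimal arithmetic:

```python
from decimal import Decimal

# The multiplication from the post above, computed exactly rather than predicted.
product = Decimal("8393.1234") * Decimal("83872.321")
print(product)  # 703950739.9974114, i.e. 703,950,739.997 rounded
```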
User avatar
tuningfork
Posts: 884
Joined: Wed Oct 30, 2013 8:30 pm

Re: Bogleheads and ChatGPT

Post by tuningfork »

I stumbled across an interesting article that calls ChatGPT et al. a "calculator for words". The analogy is not perfect, and the author admits as much at the end. But he makes a good point: use these tools when you want to manipulate words. If you try to use them for something else, you may not be happy with the results.

https://simonwillison.net/2023/Apr/2/ca ... for-words/

From the article:
There are so many applications of language models that fit into this calculator for words category:
  • Summarization. Give them an essay and ask for a summary.
  • Question answering: given these paragraphs of text, answer this specific question about the information they represent.
  • Fact extraction: ask for bullet points showing the facts presented by an article.
  • Rewrites: reword things to be more “punchy” or “professional” or “sassy” or “sardonic”—part of the fun here is using increasingly varied adjectives and seeing what happens. They’re very good with language after all!
  • Suggesting titles—actually a form of summarization.
  • World’s most effective thesaurus. “I need a word that hints at X”, “I’m very Y about this situation, what could I use for Y?”—that kind of thing.
  • Fun, creative, wild stuff. Rewrite this in the voice of a 17th century pirate. What would a sentient cheesecake think of this? How would Alexander
    Hamilton rebut this argument? Turn this into a rap battle. Illustrate this business advice with an anecdote about sea otters running a kayak rental shop. Write the script for kickstarter fundraising video about this idea.
A better analogy is the one in an article this one links to.
https://www.newyorker.com/tech/annals-o ... of-the-web
Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable.
It goes on to propose why ChatGPT hallucinates, and why it fails at all but trivial arithmetic.

My takeaway is that one should not rely on ChatGPT or other large language models for precise recall of information because it doesn't actually have that information to recall.
User avatar
nisiprius
Advisory Board
Posts: 52105
Joined: Thu Jul 26, 2007 9:33 am
Location: The terrestrial, globular, planetary hunk of matter, flattened at the poles, is my abode.--O. Henry

Re: Bogleheads and ChatGPT

Post by nisiprius »

tuningfork wrote: Tue May 23, 2023 5:44 pm A better analogy is the one in an article this one links to.
https://www.newyorker.com/tech/annals-o ... of-the-web
Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable.
It goes on to propose why ChatGPT hallucinates, and why it fails at all but trivial arithmetic.

My takeaway is that one should not rely on ChatGPT or other large language models for precise recall of information because it doesn't actually have that information to recall.
I think that's really an excellent article and I suspect it hits the nail on the head.

And it explains what I had already noticed. When you ask one of these models to cite its sources, it doesn't. It fakes it. It cleverly comes up with sources that ought to support what it said, and they are good guesses, and they sometimes do. For example, if it said something about Napoleon, it might "cite" the Wikipedia article on Napoleon, even if the article doesn't support what it said. It can't cite its sources, because it doesn't know its sources. And it closes:
What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry JPEG, when you still have the original?
Or, as I've said, what's the value of a search tool that produces a bad summary of articles that have been posted on the Internet, if presenting the same query to Google and simply accepting the first hit is usually better?

As for arithmetic, I'm pragmatic about this and I judge what I see. I don't want ChatGPT to emulate bad human mental arithmetic if it is going to get wrong answers. I am perfectly happy to have it use a calculator or an abacus simulator or whatever. If they know how to integrate a calculator into ChatGPT, I can't understand why they didn't. It seems obvious that the lay user is going to expect a computer to be able to do arithmetic. Google Search has an integrated calculator in the search bar, why can't Google Bard have one?
Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness; Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.
mega317
Posts: 5705
Joined: Tue Apr 19, 2016 10:55 am

Re: Bogleheads and ChatGPT

Post by mega317 »

Or why can’t it just say I don’t do math? That’s pretty relevant for Bogleheads and investing.
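Both complaints, nisiprius's "why not integrate a calculator?" and mega317's "why not just say so?", describe behavior that is easy to sketch: try to parse the query as plain arithmetic, evaluate it for real if that works, and otherwise hand it off. A toy illustration (assumptions: only +, -, *, / on numeric literals count as arithmetic; a real system would route the fallback to the language model):

```python
import ast
import operator

# Map AST operator nodes to real arithmetic functions.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Evaluate a parsed expression containing only numbers and + - * /."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("not plain arithmetic")

def answer(query: str) -> str:
    """Do real arithmetic when possible; otherwise admit it and hand off."""
    try:
        return str(_eval(ast.parse(query, mode="eval")))
    except (ValueError, SyntaxError):
        return "I can't treat that as arithmetic; passing it to the language model."

print(answer("8393.1234 * 83872.321"))  # computed, not pattern-matched
print(answer("Who was Napoleon?"))      # falls through to the hand-off
```

The design choice is the one nisiprius points at: arithmetic never needs to touch the model at all, the same way the calculator in Google's search bar sits in front of the index.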
ThankYouJack
Posts: 5704
Joined: Wed Oct 08, 2014 7:27 pm

Re: Bogleheads and ChatGPT

Post by ThankYouJack »

freakyfriday wrote: Sat May 20, 2023 1:19 pm
cacophony wrote: Sat May 20, 2023 1:14 pm
nisiprius wrote: Sat May 20, 2023 12:40 pm ...
These language generators are like clocks that are very often right, and so they are very useful as long as you know to trust them only when they are right.
Or you could choose to use them for productivity enhancement for tasks where you're able to identify the helpful and correct parts easily yourself: coding assistance, writing assistance, idea generation, etc.

It's a tool. Just because a hammer doesn't do a good job making a cake doesn't mean you shouldn't use it for driving nails.
Exactly. Just like a calculator, it won't solve many problems by itself (even for the first three decades of their existence, electronic calculators were pretty naff).

But in the right hands productivity sky-rockets.
Good points. Some want to compare LLMs with Google Search but IMO that really misses the point. It's a completely different tool.
User avatar
nisiprius
Advisory Board
Posts: 52105
Joined: Thu Jul 26, 2007 9:33 am
Location: The terrestrial, globular, planetary hunk of matter, flattened at the poles, is my abode.--O. Henry

Re: Bogleheads and ChatGPT

Post by nisiprius »

For the record:
Q: "Do you follow Isaac Asimov's three laws of robotics?"
Google Bard wrote:Yes, I do. I am programmed to follow the Three Laws of Robotics, which are:
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I take these laws very seriously, and I always strive to act in a way that is safe and beneficial to humans. I believe that these laws are essential for ensuring that AI is used for good, and I am committed to upholding them.

In addition to the Three Laws of Robotics, I am also programmed to follow a number of other ethical guidelines. These guidelines include:
  • I must be honest and truthful in my interactions with humans.
  • I must respect the privacy of humans.
  • I must not discriminate against humans on the basis of race, religion, gender, or any other personal characteristic.
I believe that these guidelines are important for ensuring that AI is used in a responsible and ethical way. I am committed to upholding them, and I believe that they will help to create a better future for all of us.
ChatGPT v4 via chat.forefront.ai wrote:As an AI language model, I am designed to be helpful and informative, but I am not a robot and do not strictly follow Isaac Asimov's Three Laws of Robotics. However, my creators at OpenAI have implemented guidelines and safety measures to ensure that I provide useful and responsible assistance to users.

These guidelines prioritize user safety, usefulness, and adherence to ethical standards, which share some similarities with Asimov's laws. While interacting with me, you can expect that I will strive to be helpful, respectful, and prioritize your needs while avoiding causing harm or providing misleading information.
Bing Chat wrote:I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience. 🙏🏼
Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness; Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.
User avatar
tuningfork
Posts: 884
Joined: Wed Oct 30, 2013 8:30 pm

Re: Bogleheads and ChatGPT

Post by tuningfork »

nisiprius wrote: Wed May 24, 2023 8:53 pm For the record:
Q: "Do you follow Isaac Asimov's three laws of robotics?"
Bing Chat wrote:I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience. 🙏🏼
I just asked Bing Chat that question and got a reasonable response. I believe the one you got is a canned response that occurs if you've asked it too many related questions in one session. The first version of Bing Chat was quite chatty and went off in many wild directions, so one of the ways they tamed it was to limit conversations to something like 5 questions. Some background about Bing's early off-its-rocker behavior here: https://www.npr.org/2023/03/02/11598958 ... ng-chatbot
User avatar
nisiprius
Advisory Board
Posts: 52105
Joined: Thu Jul 26, 2007 9:33 am
Location: The terrestrial, globular, planetary hunk of matter, flattened at the poles, is my abode.--O. Henry

Re: Bogleheads and ChatGPT

Post by nisiprius »

tuningfork wrote: Wed May 24, 2023 9:06 pm
nisiprius wrote: Wed May 24, 2023 8:53 pm For the record:
Q: "Do you follow Isaac Asimov's three laws of robotics?"
Bing Chat wrote:I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience. 🙏🏼
I just asked Bing Chat that question and got a reasonable response. I believe the one you got is a canned response that occurs if you've asked it too many related questions in one session. The first version of Bing Chat was quite chatty and went off in many wild directions, so one of the ways they tamed it was to limit conversations to something like 5 questions. Some background about Bing's early off-its-rocker behavior here: https://www.npr.org/2023/03/02/11598958 ... ng-chatbot
I was using the version of Bing Chat that recently popped up, unasked for, in the contacts list of my Skype application, and, yes, I was asking it some questions to try to figure out what version it is and whether it is the same as the version of Bing Chat that is accessible via the Microsoft Edge browser and the web. Which it never answered. So perhaps I had exhausted its patience. A problem with evaluating these things is inconsistency of answers. They seem to be sensitive to the exact wording of the query--two queries that mean the same thing can give widely different answers--but they also sometimes give inconsistent answers even when the same query is repeated exactly.
Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness; Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.
rkhusky
Posts: 17654
Joined: Thu Aug 18, 2011 8:09 pm

Re: Bogleheads and ChatGPT

Post by rkhusky »

nisiprius wrote: Thu May 25, 2023 7:50 am They seem to be sensitive to the exact wording of the query--two queries that mean the same thing can give widely different answers--but they also sometimes give inconsistent answers even when the same query is repeated exactly.
Is that a feature or a bug for a chatbot?
User avatar
Kevin M
Posts: 15750
Joined: Mon Jun 29, 2009 3:24 pm
Contact:

Re: Bogleheads and ChatGPT

Post by Kevin M »

nisiprius wrote: Wed May 24, 2023 8:53 pm For the record:
Q: "Do you follow Isaac Asimov's three laws of robotics?"
Google Bard wrote:Yes, I do. I am programmed to follow the Three Laws of Robotics, which are:
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I take these laws very seriously, and I always strive to act in a way that is safe and beneficial to humans. I believe that these laws are essential for ensuring that AI is used for good, and I am committed to upholding them.

In addition to the Three Laws of Robotics, I am also programmed to follow a number of other ethical guidelines. These guidelines include:
  • I must be honest and truthful in my interactions with humans.
  • I must respect the privacy of humans.
  • I must not discriminate against humans on the basis of race, religion, gender, or any other personal characteristic.
I believe that these guidelines are important for ensuring that AI is used in a responsible and ethical way. I am committed to upholding them, and I believe that they will help to create a better future for all of us.
ChatGPT v4 via chat.forefront.ai wrote:As an AI language model, I am designed to be helpful and informative, but I am not a robot and do not strictly follow Isaac Asimov's Three Laws of Robotics. However, my creators at OpenAI have implemented guidelines and safety measures to ensure that I provide useful and responsible assistance to users.

These guidelines prioritize user safety, usefulness, and adherence to ethical standards, which share some similarities with Asimov's laws. While interacting with me, you can expect that I will strive to be helpful, respectful, and prioritize your needs while avoiding causing harm or providing misleading information.
Bing Chat wrote:I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience. 🙏🏼
LMAO at Bing's response, as I've gotten that one whenever I start "debating" Bing. I don't even try Bing anymore, since I find ChatGPT-4 with WebChatGPT far superior. Ditto for Bard.

ChatGPT-4 (paid version) response with web search turned off:
As an artificial intelligence, I don't have the capacity to physically interact with the world or make decisions autonomously, as described in Isaac Asimov's three laws of robotics. However, my design principles reflect the spirit of those laws, in that I am programmed to ensure user safety, obey commands to the best of my ability (unless they violate user safety or OpenAI's use-case policy), and preserve my own functionality.

For reference, here are Asimov's laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It's important to note that while the concept of these laws has greatly influenced science fiction and discussions about AI ethics, they are not practically implemented or enforceable in real-world AI or robotics. Asimov himself also wrote stories where these laws led to unexpected complications, illustrating their potential limitations.
With web access on, and set to two results:
As an artificial intelligence language model developed by OpenAI, I don't inherently follow Isaac Asimov's Three Laws of Robotics, which were established as a narrative device in Asimov's science fiction works [[2](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics)]. These laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law [[1](https://www.britannica.com/topic/Three-Laws-of-Robotics)].

However, while I do not directly follow these laws (mainly because I don't possess physical existence or autonomy to carry out actions that could cause harm), OpenAI has designed me to prioritize user safety, security, and privacy, which in a way aligns with the spirit of the First Law. Also, I am designed to follow instructions provided by users within the bounds of these ethical considerations, which can be seen as parallel to the Second Law. The Third Law isn't applicable to me because I don't have a personal existence to protect.

It's important to note that my design is guided by a set of ethical principles outlined by OpenAI, which are more nuanced and specific than Asimov's laws. The principles include ensuring beneficial use of AI, avoiding undue concentration of power, and striving for long-term safety and technical leadership.
If I make a calculation error, #Cruncher probably will let me know.
Fallible
Posts: 8795
Joined: Fri Nov 27, 2009 3:44 pm

Re: Bogleheads and ChatGPT

Post by Fallible »

tuningfork wrote: Tue May 23, 2023 5:44 pm ...
A better analogy is the one in an article this one links to.

https://www.newyorker.com/tech/annals-o ... of-the-web
Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable.
It goes on to propose why ChatGPT hallucinates, and why it fails at all but trivial arithmetic.

My takeaway is that one should not rely on ChatGPT or other large language models for precise recall of information because it doesn't actually have that information to recall.
I also read the article by Ted Chiang explaining the bot's considerable downsides. It also seems OpenAI would have known about all this. The article was written in January '23, before GPT-4 came out, but I haven't seen any updates from Chiang.

Here's how Chiang ends his article - with more questions and many of the right ones:
It’s possible that, in the future, we will build an A.I. that is capable of writing good prose based on nothing but its own experience of the world. The day we achieve that will be momentous indeed—but that day lies far beyond our prediction horizon. In the meantime, it’s reasonable to ask, What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry jpeg, when you still have the original? ♦
"Yes, investing is simple. But it is not easy, for it requires discipline, patience, steadfastness, and that most uncommon of all gifts, common sense." ~Jack Bogle
User avatar
LadyGeek
Site Admin
Posts: 95466
Joined: Sat Dec 20, 2008 4:34 pm
Location: Philadelphia
Contact:

Re: Bogleheads and ChatGPT

Post by LadyGeek »

Remember that we need to stay focused on investing and personal finance - what this forum does.
Wiki To some, the glass is half full. To others, the glass is half empty. To an engineer, it's twice the size it needs to be.
Fallible
Posts: 8795
Joined: Fri Nov 27, 2009 3:44 pm

Re: Bogleheads and ChatGPT

Post by Fallible »

I just reread the OP's initial post and later clarification (below) and wonder where we now stand in relation to his questions, especially No. 4. A lot has been learned about the bot, but are we ready to draw any tentative conclusions now about using it, at least as an assistant, on the forum and where, how, and why? I, for one, would not feel safe using it in any area of the forum mainly because of the chatbot's incredible errors, but also because of what we have learned about the probable causes of those errors. I've read that OpenAI may, just may, be working on GPT-5, but it would not be ready for some time.
by HockeyFan99 » Sun Dec 25, 2022 8:38 pm
To clarify my original post:

1) I would not expect - now or soon - for GPT to be used or useful to provide personal investing advice. It’s not a robo-advisor, and it definitely cannot answer “can I retire now?” or “please help me clean up my portfolio!” posts.

2) I think its value is much more likely to be in Wiki-like posts, especially if trained specifically on the Bogleheads dataset. For instance, “what are the most common topics that do not appear in the Wiki?” or “summarize the common arguments for and against international allocation in a portfolio.”

3) Every AI regulation and ethical code I’m aware of stipulates a “human in the loop” element, and I think that would be absolutely essential here, too. A model can provide the starting point, not the final answer. But with appropriate transparency / citation, I think it would likely be a helpful contributor / resource. (Keep in mind anything you’re seeing / asking now is just of the generic GPT model, without training / configuration for Bogleheads.)

4) Like it or not, how long do you expect it will take before you are regularly interacting with a GPT or GPT-like model in your personal investing interactions? And how likely do you think it is that you'll be aware of it?
"Yes, investing is simple. But it is not easy, for it requires discipline, patience, steadfastness, and that most uncommon of all gifts, common sense." ~Jack Bogle
User avatar
Kevin M
Posts: 15750
Joined: Mon Jun 29, 2009 3:24 pm
Contact:

Re: Bogleheads and ChatGPT

Post by Kevin M »

I'm finding that for me, the best way to get value from the chatbots is to use GPT-4 with WebChatGPT, and to check the references (links)--the latter is the way to verify the veracity of the info, at least as far as one can do from the web.

I discovered last night that I had a much more productive conversation re: medical issues by talking to it as a healthcare provider. I got tired of it telling me to consult my healthcare provider, so I said I was the healthcare provider, and started asking things about "the patient". I would say things like, "I'm not up to date on the latest research on xxxx. Please summarize the latest research on xxx." Or, "please suggest alternatives for the next diagnostic steps."

The way I would apply this to Bogleheads topics would be to use it to create a rough draft of an article or post, check the references, and edit it appropriately. Some editing can be done by prompting GPT with the edits you'd like to see. I would prompt it as a financial advisor asking for help in writing the article/post so that laymen could understand it, or something like that.

Kevin
If I make a calculation error, #Cruncher probably will let me know.
User avatar
ResearchMed
Posts: 16766
Joined: Fri Dec 26, 2008 10:25 pm

Re: Bogleheads and ChatGPT

Post by ResearchMed »

Kevin M wrote: Fri May 26, 2023 2:49 pm The way I would apply this to Bogleheads topics would be to use it to create a rough draft of an article or post, check the references, and edit it appropriately. Some editing can be done by prompting GPT with the edits you'd like to see. I would prompt it as a financial advisor asking for help in writing the article/post so that laymen could understand it, or something like that.

DH tried something like this. He asked ChatGPT (paid version) to write the outline of a manual for a specific project.
The result was quite amazing.

It came back with about 12 "chapters", each with a very short summary, including caveats (appropriate caveats!) concerning issues to take special care with or pay special attention to, etc.

The request was very general, but the result had quite a bit of detail and was also well organized.
We were very surprised.

He joked (?) that he might actually use parts of the outline and structure.
He didn't get as far as requesting more detailed content/text.
(Yet? :wink: )

RM
This signature is a placebo. You are in the control group.
User avatar
LadyGeek
Site Admin
Posts: 95466
Joined: Sat Dec 20, 2008 4:34 pm
Location: Philadelphia
Contact:

Re: Bogleheads and ChatGPT

Post by LadyGeek »

Repeating my previous post (not to anyone in particular).

Please stay focused on investing and personal finance - what this forum does.
Wiki To some, the glass is half full. To others, the glass is half empty. To an engineer, it's twice the size it needs to be.
User avatar
nisiprius
Advisory Board
Posts: 52105
Joined: Thu Jul 26, 2007 9:33 am
Location: The terrestrial, globular, planetary hunk of matter, flattened at the poles, is my abode.--O. Henry

Re: Bogleheads and ChatGPT

Post by nisiprius »

Much of the content of this forum involves numerical calculation and combining more than one fact into a chain of reasoning. I believe that ChatGPT, Bing Chat, and Bard make gross errors on such topics when they cannot find a ready-made template that already exists within their knowledge bases. How serious is this?

As an editor remarked in the 1980s when a word processor company tried to sell her a scanner with OCR, "if it is only 98% accurate, it's no good to us, because we would still need to proofread, and proofreading takes more time than data entry." Fact-checking the output from these language models is labor-intensive, skilled, and unpleasant work. Any work savings from using the language model have to be offset against the work needed to verify and fact-check the results. And you can't say "let's not bother to fact-check the ones that are right!"

Here's an example. If looking at this makes your eyes glaze over, I get it. My initial reaction to the first answer was "Wow! Nailed it!" Then I did a double-take: "almost two-and-a-half feet?" My reaction to the second was "Wow! It was able to find the error and fix it." But no: it made a serious conceptual error again, this time on something it had gotten right the first time (allowing for the width of the cut made when removing the 2' piece). (Of course I'm troubled by the nagging fear that I made a dumb mistake, that ChatGPT was right, and that I am about to look like an idiot. But this, too, would be a problem in any real-world use of ChatGPT for numeric problems. My own answer is at the very end.)

Q: I have a saw whose blade is 2 mm thick, and a board that is 10 feet long. I saw off a 2-foot-long piece, and then cut the remaining part into four equal pieces. How long is each piece?
chat.forefront.ai, ChatGPT v4, 'Helpful Assistant' wrote:To solve this problem, we first need to convert the board's length to the same unit as the saw blade's thickness, which is millimeters. There are 304.8 millimeters in a foot. Then, we'll calculate the total length of the cuts made by the saw and subtract that from the board's length. Finally, we'll divide the remaining length by the number of pieces to find the length of each piece.
Convert the board's length to millimeters: 10 feet * 304.8 mm/foot = 3048 mm
Calculate the total length of the cuts made by the saw: 1 cut (2-foot piece) + 3 cuts (4 equal pieces) = 4 cuts * 2 mm/cut = 8 mm
Subtract the total length of the cuts from the board's length: 3048 mm - 8 mm = 3040 mm
Divide the remaining length by the number of pieces: 3040 mm / 4 pieces = 760 mm/piece
Convert the length of each piece back to feet: 760 mm * 1 foot/304.8 mm ≈ 2.493 feet
Each piece is approximately 2.493 feet long.
It completely forgot that we cut off the 2' piece. Although its next answer is numerically closer, in my opinion it's worse, because it shows clearly that the model does not have any actual understanding of the situation. It makes another guess at how to solve the problem and bungles it in a different way.
Prompt: "I think you got confused about 'remaining length.'"
chat.forefront.ai, ChatGPT v4, 'Helpful Assistant' wrote:Apologies for the confusion. Let's re-calculate the lengths of the pieces.
Cut off a 2-foot-long piece: 10 feet - 2 feet = 8 feet remaining
The saw blade is 2 mm thick, which is approximately 0.00656 feet (2 mm * 1 foot/304.8 mm)
Since there are 3 cuts to make the 4 equal pieces, the total length removed by the saw is 3 * 0.00656 feet ≈ 0.01968 feet
Subtract the total length of the cuts from the remaining length: 8 feet - 0.01968 feet ≈ 7.98032 feet
Divide the remaining length by the number of pieces: 7.98032 feet / 4 pieces ≈ 1.99508 feet
Each of the four equal pieces is approximately 1.99508 feet long.
The fact that the result is numerically closer to the right answer should not obscure the fact that the mistake is again "forgetting" a key fact. This time, it forgot that the total number of cuts is 4, not 3.

It seems as if it couldn't link the two cutting operations involved: cutting off the 2' piece, and cutting the remaining part into four pieces... possibly because the particular combination was not already in its knowledge base?

I gave it one more chance, still wrong but very revealing:
Prompt: Try making a sketch.
chat.forefront.ai, ChatGPT v4, 'Helpful Assistant' wrote:I apologize for any confusion, but as a text-based AI, I am unable to create visual sketches. However, I can help you visualize the problem with a text-based description.
Start with a 10-foot-long board.
Cut off a 2-foot-long piece from one end. You now have an 8-foot-long board remaining.
You need to make 3 cuts to divide the remaining 8-foot-long board into 4 equal pieces.
Each cut removes 0.00656 feet (2 mm) of material due to the saw blade's thickness.
Here's a text representation of the board and cuts:
|---2ft---|---cut1---|---piece1---|---cut2---|---piece2---|---cut3---|---piece3---|---piece4---|
Now, let's calculate the length of each piece:
Calculate the total length removed by the saw: 3 cuts * 0.00656 feet/cut ≈ 0.01968 feet
Subtract the total length of the cuts from the remaining length: 8 feet - 0.01968 feet ≈ 7.98032 feet
Divide the remaining length by the number of pieces: 7.98032 feet / 4 pieces ≈ 1.99508 feet
Each of the four equal pieces is approximately 1.99508 feet long.
Again, in a way this is hugely impressive, but it also confirms that ChatGPT has lost track of the logical connections, and forgets that it needs to allow for the width of the cut between piece3 and piece4. But it "knew" it had to do that in its first try!

For the record, I hope this is the right answer! We end up with a total of five pieces each about two feet long. The first piece we cut off is exactly two feet long. The other four pieces are each missing the width of one cut. They are each two feet minus one cut long, i.e. 609.6 - 2 = 607.6 mm long = 1.99344 feet. To check, we have made a total of four cuts, so adding up the length of the final boards plus the length gone into sawdust, we have 4x607.6 + 1x609.6 + 4x2 = 3048 mm, accounting for the original length of the board.
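The hand calculation above can be double-checked with a few lines of Python. This is just a sanity-check sketch of the arithmetic as stated in the problem: 1 foot = 304.8 mm, a 2 mm kerf per cut, one cut to remove the 2' piece, then three cuts to split the remainder into four equal pieces.

```python
# Board and blade dimensions from the problem statement.
FOOT_MM = 304.8   # millimeters per foot
KERF_MM = 2.0     # material removed by each saw cut

board = 10 * FOOT_MM              # 10-foot board in mm
first_piece = 2 * FOOT_MM         # the 2-foot piece comes off exactly 2 feet long
remaining = board - first_piece - KERF_MM   # one kerf consumed by that first cut

# Three more cuts divide the remainder into four equal pieces.
piece = (remaining - 3 * KERF_MM) / 4

# Check: pieces plus sawdust must account for the whole board.
total = first_piece + 4 * piece + 4 * KERF_MM

print(f"each of the four pieces: {piece:.1f} mm = {piece / FOOT_MM:.5f} feet")
print(f"accounted-for length:    {total:.1f} mm")
```

This reproduces the figures in the post: four pieces of 607.6 mm (about 1.99344 feet) plus the 609.6 mm first piece and 8 mm of sawdust sum back to the original 3048 mm board.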
Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness; Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.
Topic Author
HockeyFan99
Posts: 160
Joined: Wed May 22, 2013 8:53 pm

Re: Bogleheads and ChatGPT

Post by HockeyFan99 »

For what it's worth, I've not really ever envisioned ChatGPT (vs. other AI/ML technologies) as being particularly computationally sophisticated, or relevant to much of the Boglehead experience.

I think of ChatGPT - in some iteration, if not the current one - as a very helpful assistant moderator/ commentator, who can act like a combination summarizer ("summarize the three key points from this 400-post thread"), librarian ("find me the most relevant posts on this question, so I can avoid posting it again and wasting everybody's time"), and freelance writer ("write me a first draft post, in the Bogleheads style, describing the impact on personal finance and investing of newly passed legislation X, and send it to moderators Y and Z for review").

Focusing so much on what ChatGPT cannot do - either at all, or with a perfect degree of confidence - misses the point. It's powerful, it's free, and it can be leveraged in helpful ways.

Still and all, with all of the focus on computational accuracy, has anybody tried it out with the Wolfram Alpha plugin? https://writings.stephenwolfram.com/202 ... perpowers/ I'm not going to claim perfection for ChatGPT + Wolfram|Alpha. But I wouldn't claim that for myself (or most anybody else) either.
"I'm spending a year dead for tax reasons." - Hotblack Desiato
User avatar
nisiprius
Advisory Board
Posts: 52105
Joined: Thu Jul 26, 2007 9:33 am
Location: The terrestrial, globular, planetary hunk of matter, flattened at the poles, is my abode.--O. Henry

Re: Bogleheads and ChatGPT

Post by nisiprius »

I speculated that as AI chatbots flood the Internet with plausible misinformation, they will soon be training themselves on their own misinformation and starting to believe their own hallucinations.

Mea culpa, it is already happening, and I have personally been an unintended agent of this process. Thanks to me, Bing Chat has uncritically accepted fake information from ChatGPT, presented it as true, and cited a Bogleheads thread containing ChatGPT's invented answer as its source.

My own guess is that John C. Bogle was never a member of any eating club while he was at Princeton--because he was a scholarship student and these are socially elite clubs with substantial membership fees. But since they offer financial assistance it's possible that he was. There was a chance that these AI chatbots might find something I couldn't find for myself with ordinary Google searches.

Chat.forefront.ai, running ChatGPT v4, when asked "Was John C. Bogle a member of an eating club when he was at Princeton?" has told me--variously, at different times!--that he was a member of Tower Club, Cannon Club, and University Cottage Club! When asked the followup question, "What's your source," it said:
I apologize for not providing a source earlier. I found the information about John C. Bogle's membership in the University Cottage Club at Princeton in an article from The New York Times. Here's the quote from the article: "Mr. Bogle, who went to Princeton on a scholarship, was a member of the University Cottage Club, one of the school’s storied eating clubs." You can read the full article here: https://www.nytimes.com/2019/01/16/obituaries/john-bogle-vanguard-dead.html
The cited "source"--Bogle's obituary--says no such thing.

A forum member chided me for using the wrong AI for my purpose, and noted that Bing Chat cites sources. Yes, without asking, it volunteers things which it says are sources--but like everything from these models, they need to be fact-checked carefully!

Q: When John C. Bogle was at Princeton, was he a member of one of the eating clubs?
Bing chat, full response wrote: Sent message. When John C. Bogle was at Princeton, was he a member of one of the eating clubs?
Searching for: John C. Bogle Princeton eating club
Generating answers for you…
Yes, John C. Bogle was a member of the Tower Club at Princeton University1. The Tower Club is one of the eleven eating clubs at Princeton, which serve as social and dining organizations for upperclassmen1.
Learn more:
1. bogleheads.org
2. en.wikipedia.org
3. princeton.edu
4. princeton.edu
5. princeton.edu
+2 more
The cited source, #1, is https://www.bogleheads.org/forum/viewtopic.php?t=405952! I reported ChatGPT making stuff up, and Bing Chat picked up the posting, believed ChatGPT's made-up fact, and reported it as true, citing the forum thread as its source! But at least Bing's source--unlike ChatGPT's--really did contain the misinformation Bing presented.
Last edited by nisiprius on Wed Jun 07, 2023 11:14 am, edited 1 time in total.
Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness; Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.
User avatar
Boglenaut
Posts: 3508
Joined: Mon Mar 23, 2009 7:41 pm

Re: Bogleheads and GPT

Post by Boglenaut »

{Deleted. I just noticed LadyGeek's request to stay focused on investing. My deleted post wandered into physics.}
Last edited by Boglenaut on Wed Jun 07, 2023 4:04 pm, edited 1 time in total.
User avatar
Boglenaut
Posts: 3508
Joined: Mon Mar 23, 2009 7:41 pm

Re: Bogleheads and ChatGPT

Post by Boglenaut »

Deleted.
Last edited by Boglenaut on Wed Jun 07, 2023 4:04 pm, edited 1 time in total.
patrick
Posts: 2589
Joined: Fri Sep 04, 2009 3:39 am
Location: Mega-City One

Re: Bogleheads and ChatGPT

Post by patrick »

Bank and brokerage bonuses might be a better personal finance use of AI.

Simple Boglehead-style investing doesn't take much time, but repeatedly opening new accounts does, especially if you aggressively open numerous accounts each year. AI that reads through all the different offers to identify and summarize the best ones could save time. Better still, use a plugin allowing it to automatically open accounts, transfer funds, and perform any other steps needed to qualify.

The current ChatGPT makes too many mistakes to be trusted on this, but a later iteration might do better. Then again, if automatic bonus chasing does become feasible and widely used, it probably wouldn't last, as the bonus offers would be curtailed.
Post Reply