It doesn't loop, though -- it has continuously updating context -- and if that context keeps heading in one direction, it will eventually break down.
My own personal experience with LLMs is that after enough context they just become useless -- starting to make stupid mistakes that they successfully avoided earlier.
I'm not gonna watch a video but did you even read that article before you linked to it?
"Fink is still a big believer in the U.S. economy and argues things are looking mostly constructive at this point. He feels the bull story is still intact, but its durability matters a lot more."
It's not even just an atheist issue. You have to have spiritual beliefs that value the specific repetitive church rituals so as not to be bored out of your mind.
Unless you grew up surrounded by nonbelievers, I'm guessing half a year ago wasn't the first time you'd ever been to a church, and there's a little more to this anecdote.
I don't understand how this claim can even be tested:
> In 2029, AI will not be able to read a novel and reliably answer questions about plot, character, conflicts, motivations, etc. Key will be going beyond the literal text, as Davis and I explain in Rebooting AI.
Once you are "going beyond the literal text," the standard is the usefulness of your insight about the novel, not whether your insight is "right" or "wrong".
> I don't think the article said anything about statistics?
I don't think I said or implied that it did. It's merely one of the many positions that people commonly (and defensively) take for why LLMs aren't and/or can't be intelligent like humans, ignoring that humans exhibit exactly the same patterns.
>The best thing about a good deep conversation is when the other person gets you: you explain a complicated situation you find yourself in, and find some resonance in their replies.
>That, at least, is what happens when chatting with the recent large models.
The first sentence says a good conversation is between two people. The author then pulls the rug out and says "Psych. A good conversation is when I use LLMs."
The author points out that humans have decades of memories, yet is surprised that when they tell someone they're wrong, that person doesn't immediately agree and sycophantically mirror the author's point of view.
The author thinks it's weird that people don't know when the next eclipse is, as if they should know this info intuitively.
The author claims humans have a habit of being wrong even in issues of religion but models have no such flaw. If only humans embraced evidence based religious opinions like LLMs.
The author wonders why they bothered writing this article instead of asking ChatGPT to write it.
Did you ask an LLM if this is satire?
I did and Opus said it wasn't satire.
This was clearly a hallucination, so I informed it that it was incorrect, and it changed its opinion to agree with me, so clearly I know what I'm talking about.
I'll spare you the entire output, but among other things, after I corrected it, it said:
The "repeating the same mistakes" section is even better once you see it. The complaint is essentially: "I told someone they were wrong, and they didn't immediately capitulate. Surely pointing out their error should rewire their brain instantly?" The author presents this as a human deficiency rather than recognizing that disagreement isn't a bug.