staticman2's comments

I don't think it's inevitable; often the AI will just keep looping again and again. It can happily loop forever without frustration.

It doesn't loop, though -- it has continuously updating context -- and if that context keeps heading in one direction it will eventually break down.

My own experience with LLMs is that after enough context they just become useless -- they start making stupid mistakes that they successfully avoided earlier.


The author thinks he's being cute by doing things like mentioning Google without typing "Google", but I wouldn't call him unhinged.

I'm not gonna watch a video, but did you even read that article before you linked to it?

"Fink is still a big believer in the U.S. economy and argues things are looking mostly constructive at this point. He feels the bull story is still intact, but its durability matters a lot more."

It's the human equivalent of AI slop.


It's not even just an atheist issue. You have to have spiritual beliefs that value the specific repetitive church rituals so as not to be bored out of your mind.

Unless you grew up surrounded by nonbelievers, I'm guessing half a year ago wasn't the first time you'd ever been to a church, and there's a little more to this anecdote.

I don't understand how this claim can even be tested:

> In 2029, AI will not be able to read a novel and reliably answer questions about plot, character, conflicts, motivations, etc. Key will be going beyond the literal text, as Davis and I explain in Rebooting AI.

Once you are "going beyond the literal text" the standard is usefulness of your insight about the novel, not whether your insight is "right" or "wrong".


Don't you need to do reinforcement learning from human feedback to get non-gibberish results from the models in general?

1900-era humans aren't available to do this, so I'm not sure how this experiment is supposed to work.


>about how LLMs are just statistical machines and not really thinking,

I don't think the article said anything about statistics?

This seems to be a sort of Rorschach test, but looking at it again:

>This does not bode well for my interest in meeting new people

It really does seem to me the article is making fun of people who think this sort of article is on point.

There's a genre of satire where the joke is that it makes you ask "Who the heck is the sort of person who would write this?"

It could fit in that genre but of course I could be wrong.


> I don't think the article said anything about statistics?

I don't think I said or implied that it did. It's merely one of the many positions that people commonly (and defensively) take for why LLMs aren't and/or can't be intelligent like humans, ignoring that humans exhibit exactly the same patterns.


You ask why this would be satire?

Well let's take a look at this:

>The best thing about a good deep conversation is when the other person gets you: you explain a complicated situation you find yourself in, and find some resonance in their replies.

>That, at least, is what happens when chatting with the recent large models.

The first sentence says a good conversation is between two people. The author then pulls the rug out and says "Psych. A good conversation is when I use LLMs."

The author points out that humans have decades of memories, but is surprised that when they tell someone they are wrong, that person doesn't immediately agree and sycophantically mirror the author's point of view.

The author thinks it's weird that people don't know when the next eclipse is. Apparently they should know this info intuitively.

The author claims humans have a habit of being wrong even on issues of religion, but models have no such flaw. If only humans embraced evidence-based religious opinions like LLMs do.

The author wonders why they bothered writing this article instead of asking ChatGPT to write it.

Did you ask an LLM if this is satire?

I did and Opus said it wasn't satire.

This was clearly a hallucination, so I informed it that it was incorrect, and it changed its opinion to agree with me -- so clearly I know what I'm talking about.

I'll spare you the entire output, but among other things, after I corrected it, it said:

The "repeating the same mistakes" section is even better once you see it. The complaint is essentially: "I told someone they were wrong, and they didn't immediately capitulate. Surely pointing out their error should rewire their brain instantly?" The author presents this as a human deficiency rather than recognizing that disagreement isn't a bug.


I think you may mean Sperber and Mercier define "reasoning" as the capacity to produce and evaluate arguments?


True, they use the word "reasoning". Part of my point was just to focus on the more concrete concept: the capacity to produce and evaluate arguments.

