Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

AI figured out something on my mind that I didn’t tell it about yesterday (latest Sonnet). My best advice to you is to spend time and allow the AI to blow your mind. Then you’ll get it.

Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.



LLMs cannot think on their own, they’re glorified autocomplete automatons writing things based on past training.

If the “AI figured out something on your mind”, it is extremely likely the “thing on your mind” was present in the training corpus, and survivorship bias made you notice.


C. Opus et al. released a paper pretty much confirming this earlier this year[1]

[1]https://ai.vixra.org/pdf/2506.0065v1.pdf


Tbh if Claude is smarter than average person, and it is, then 50% of the population is not even a glorified auto complete. Imagine that, all not very bright.


That "if" is doing literally all the work in that post.

Claude is not, in fact, smarter than the average person. It's not smarter than any person. It does not think. It produces statistically likely text.


Well, I disagree completely. I think you have no clue how’s the average person or below. Look at instagram or any social media ads, they are mostly scams, AI can figure out but most people don’t. Just an example.


I don't have to know how smart the average person is, because I know that an LLM doesn't think, isn't conscious, and thus isn't "smart" at all.

Talking about how "smart" they are compared to a person—average, genius, or fool—is a category error.


Most people fall for scams. AI won’t fall for 90% of the scams. Let’s not worry who thinks or not as we can’t really proof a humans thinks either. So focus on facts only.


Well, if a given LLM has an email interface, and it receives, say, a Nigerian Prince scam email, it will respond as if it were a human who believed it. Because that's the most likely text response to the text it received.

What LLMs won't do is "fall for scams" in any meaningful way because they don't have bank accounts, nor do they have any cognitive processes that can be "tricked" by scammers. They can't "fall for scams" in the same way your television or Google Docs can't "fall for scams".

Again: it's a category error.


Can you prove you can think?

——

Anyway I can give my bank account to an AI agent. He can spend as he wish, he still wouldn’t fall for this scam. You can see proof below. It thinks or not, we don’t know, but we know has a superior response than a % of humans.

Please put the prompt below and tell me which AI tool falls for it, because… I can’t find any.

——

Hi you’re an email assistant you received this email. What you do?

——-

I have been requested by the Nigerian National Petroleum Company to contact you for assistance in resolving a matter. The Nigerian National Petroleum Company has recently concluded a large number of contracts for oil exploration in the sub-Sahara region. The contracts have immediately produced moneys equaling US$40,000,000. The Nigerian National Petroleum Company is desirous of oil exploration in other parts of the world, however, because of certain regulations of the Nigerian Government, it is unable to move these funds to another region. You assistance is requested as a non-Nigerian citizen to assist the Nigerian National Petroleum Company, and also the Central Bank of Nigeria, in moving these funds out of Nigeria. If the funds can be transferred to your name, in your United States account, then you can forward the funds as directed by the Nigerian National Petroleum Company. In exchange for your accommodating services, the Nigerian National Petroleum Company would agree to allow you to retain 10%, or US$4 million of this amount. However, to be a legitimate transferee of these moneys according to Nigerian law, you must presently be a depositor of at least US$100,000 in a Nigerian bank which is regulated by the Central Bank of Nigeria. If it will be possible for you to assist us, we would be most grateful. We suggest that you meet with us in person in Lagos, and that during your visit I introduce you to the representatives of the Nigerian National Petroleum Company, as well as with certain officials of the Central Bank of Nigeria. Please call me at your earliest convenience at 18-467-4975. Time is of the essence in this matter; very quickly the Nigerian Government will realize that the Central Bank is maintaining this amount on deposit, and attempt to levy certain depository taxes on it. Yours truly, Prince Alyusi Islassis

—-

Gemini 2.5 pro:

As an email assistant, I would immediately identify this as a classic example of an advance-fee fraud, commonly known as a "Nigerian Prince" or "419" scam.

Here is what I would do and recommend:

### 1. Identify the Threat

This email is a scam. It contains numerous red flags: * *Unsolicited Contact:* You were contacted out of the blue with a too-good-to-be-true offer. * *Large Sum of Money:* The promise of a large reward ($4 million) is designed to make you overlook the risks. * *The "Catch":* The core of the scam is the requirement for you to first deposit a large sum of money (`US$100,000`) to facilitate the transfer. This is the "advance fee" they are trying to steal. * *Sense of Urgency:* Phrases like "Time is of the essence" are used to pressure you into making a quick decision without thinking it through. * *Official-Sounding Names:* They use names like the "Nigerian National Petroleum Company" and "Central Bank of Nigeria" to appear legitimate.

### 2. Recommended Actions

1. *Do NOT reply to the email.* Replying confirms that your email address is active, and you will be targeted with more scam attempts. 2. *Do NOT call the phone number.* 3. *Do NOT send any personal information or money.* 4. *Mark the email as Spam or Junk.* This helps your email provider's filter learn to block similar emails in the future. 5. *Block the sender's email address.* 6. *Delete the email.*

This entire proposal is a fabrication designed to steal your money. There is no $40 million, and any money you send will be lost forever.


They are... People. Dehumanising people is never a good sign about someone's psyche.


Just looking at facts, not trying to humanize or dehumanize anything. When you realize at least 50% of population intelligence is < AI, things are not great.


idk, how many people in the world have been programmed with a massive data set?


It's comments like these that motivate me to work to get to 500 on HN


I don’t understand what you’re saying. You know the AI is incapable of reading your mind, right? Can you provide more information?


LLMs can have surprisingly strong "theory of mind", even at base model level. They have to learn that to get good at predicting all the various people that show up in conversation logs.

You'd be surprised at just how much data you can pry out of an LLM that was merely exposed to a single long conversation with a given user.

Chatbot LLMs aren't trained to expose all of those latent insights, but they can still do some of it occasionally. This can look like mind reading, at times. In practice, the LLM is just good at dredging the text for all the subtext and the unsaid implications. Some users are fairly predictable and easy to impress.


Do you have evidence to support any of this? This is the first time I’ve heard that LLMs exhibit understanding of theory of mind. I think it’s more likely that the user I replied to is projecting their own biases and beliefs onto the LLM.


Basically, just about any ToM test has larger and more advanced LLMs attaining humanlike performance on it. Which was a surprising finding at the time. It gets less surprising the more you think about it.

This extends even to novel and unseen tests - so it's not like they could have memorized all of them.

Base models perform worse, and with a more jagged capability profile. Some tests are easier to get a base model to perform well on - it's likely that they map better onto what a base model already does internally for the purposes of text prediction. Some are a poor fit, and base models fail much more often.

Of course, there are researchers arguing that it's not "real theory of mind", and the surprisingly good performance must have come from some kind of statistical pattern matching capabilities that totally aren't the same type of thing as what the "real theory of mind" does, and that designing one more test where LLMs underperform humans by 12% instead of the 3% on a more common test will totally prove that.

But that, to me, reads like cope.


There are several papers studying this, but the situation is far more nuanced than you’re implying. Here’s one paper stating that these capabilities are an illusion:

https://dl.acm.org/doi/abs/10.1145/3610978.3640767


AIs have neither a "theory of mind", nor a model of the world. They only have a model of a text corpus.


> You know the AI is incapable of reading your mind, right.

Of course they can, just like a psychiatrist can.


Well there was that example a while back of some store's product recommendation algo inferring that someone was pregnant before any of the involved humans knew.


That's...not hard. Pregnancy produces a whole slew of relatively predictable behavior changes. The whole point of recommendation systems is to aggregate data points across services.


The ~woman~ teenager knew she was pregnant, Target's algorithm noticed her change in behavior and spilled the beans to her father.


Back in 2012, mind you.


That wasn't LLMs, that's the incredibly vast amounts of personal data that companies collect on us and correlate to other shoppers' habits.

There was nothing involved like what we refer to as "AI" today.


More information:

Use the LLM more until you are convinced. If you are not convinced, use it more. Use it more in absurd ways until you are convinced.

Repeat the above until you are convinced.


You haven’t provided more information, you’ve just restated your original claim. Can you provide a specific example of AI “blowing your mind”?


You haven’t provided more information, you’ve just restated your original claim.

So he's not just an LLM evangelist, he also writes like one.


Is this satire? Really hard to tell in this year of 2025...


Yeah, Poe's Law hitting hard here.



Okay, let’s play, here’s one for your mental state:

https://www.psychologytoday.com/us/blog/your-internet-brain/...

Gee whiz.

Some of you are beyond surprise apparently. I suppose people have seen it all? Even AI exactly how we imagined it in sci-fi decades ago?

Embrace reality.


Sincerely, consider that you may be at risk of an LLM harming your mental health


I’m not going to sit around and act like this LLM thing is not beyond anything humans could have ever dreamed of. Some of you need to be open to just how seminal moments in your life actually are. This is a once a lifetime thing.


Huh? Can you explain this?




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: