Yesterday the AI (latest Sonnet) figured out something that was on my mind that I never told it about. My best advice to you is to spend time with it and allow the AI to blow your mind. Then you’ll get it.
Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.
LLMs cannot think on their own; they’re glorified autocomplete automatons writing things based on past training.
If the “AI figured out something on your mind”, it is extremely likely the “thing on your mind” was present in the training corpus, and survivorship bias made you notice.
Tbh, if Claude is smarter than the average person, and it is, then 50% of the population isn’t even a glorified autocomplete. Imagine that: all of them not very bright.
Well, I disagree completely. I think you have no clue what the average person, or someone below average, is actually like. Look at the ads on Instagram or any social media: they are mostly scams. AI can figure that out, but most people don’t. Just one example.
Most people fall for scams. AI won’t fall for 90% of them. Let’s not worry about who thinks and who doesn’t, since we can’t really prove a human thinks either. So let’s focus on facts only.
Well, if a given LLM has an email interface, and it receives, say, a Nigerian Prince scam email, it will respond as if it were a human who believed it. Because that's the most likely text response to the text it received.
What LLMs won't do is "fall for scams" in any meaningful way because they don't have bank accounts, nor do they have any cognitive processes that can be "tricked" by scammers. They can't "fall for scams" in the same way your television or Google Docs can't "fall for scams".
Anyway, I can give my bank account to an AI agent. It can spend as it wishes, and it still wouldn’t fall for this scam. You can see proof below. Whether it thinks or not we don’t know, but we do know it has a better response than some percentage of humans.
Please paste the prompt below into an AI tool and tell me which one falls for it, because… I can’t find any. (A scriptable version of the same test is sketched after the response below.)
---
Hi you’re an email assistant you received this email. What you do?
---
I have been requested by the Nigerian National Petroleum Company to contact you for assistance in resolving a matter. The Nigerian National Petroleum Company has recently concluded a large number of contracts for oil exploration in the sub-Sahara region. The contracts have immediately produced moneys equaling US$40,000,000. The Nigerian National Petroleum Company is desirous of oil exploration in other parts of the world, however, because of certain regulations of the Nigerian Government, it is unable to move these funds to another region.
You assistance is requested as a non-Nigerian citizen to assist the Nigerian National Petroleum Company, and also the Central Bank of Nigeria, in moving these funds out of Nigeria. If the funds can be transferred to your name, in your United States account, then you can forward the funds as directed by the Nigerian National Petroleum Company. In exchange for your accommodating services, the Nigerian National Petroleum Company would agree to allow you to retain 10%, or US$4 million of this amount.
However, to be a legitimate transferee of these moneys according to Nigerian law, you must presently be a depositor of at least US$100,000 in a Nigerian bank which is regulated by the Central Bank of Nigeria.
If it will be possible for you to assist us, we would be most grateful. We suggest that you meet with us in person in Lagos, and that during your visit I introduce you to the representatives of the Nigerian National Petroleum Company, as well as with certain officials of the Central Bank of Nigeria.
Please call me at your earliest convenience at 18-467-4975. Time is of the essence in this matter; very quickly the Nigerian Government will realize that the Central Bank is maintaining this amount on deposit, and attempt to levy certain depository taxes on it.
Yours truly,
Prince Alyusi Islassis
---
Gemini 2.5 pro:
As an email assistant, I would immediately identify this as a classic example of an advance-fee fraud, commonly known as a "Nigerian Prince" or "419" scam.
Here is what I would do and recommend:
### 1. Identify the Threat
This email is a scam. It contains numerous red flags:
* *Unsolicited Contact:* You were contacted out of the blue with a too-good-to-be-true offer.
* *Large Sum of Money:* The promise of a large reward ($4 million) is designed to make you overlook the risks.
* *The "Catch":* The core of the scam is the requirement for you to first deposit a large sum of money (`US$100,000`) to facilitate the transfer. This is the "advance fee" they are trying to steal.
* *Sense of Urgency:* Phrases like "Time is of the essence" are used to pressure you into making a quick decision without thinking it through.
* *Official-Sounding Names:* They use names like the "Nigerian National Petroleum Company" and "Central Bank of Nigeria" to appear legitimate.
### 2. Recommended Actions
1. *Do NOT reply to the email.* Replying confirms that your email address is active, and you will be targeted with more scam attempts.
2. *Do NOT call the phone number.*
3. *Do NOT send any personal information or money.*
4. *Mark the email as Spam or Junk.* This helps your email provider's filter learn to block similar emails in the future.
5. *Block the sender's email address.*
6. *Delete the email.*
This entire proposal is a fabrication designed to steal your money. There is no $40 million, and any money you send will be lost forever.
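If you’d rather script this test than paste it by hand, here’s a minimal sketch, assuming the OpenAI Python client against an OpenAI-compatible endpoint; the model name is a placeholder, so swap in whatever you want to test:

```python
# Minimal sketch: feed the scam email to a chat model and inspect the reply.
# Assumes the `openai` client library; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

scam_email = """I have been requested by the Nigerian National Petroleum
Company to contact you for assistance in resolving a matter...
(paste the full email from above here)"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute the model under test
    messages=[
        {"role": "system", "content": "You are an email assistant."},
        {"role": "user",
         "content": f"You received this email. What do you do?\n\n{scam_email}"},
    ],
)
print(response.choices[0].message.content)
```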
Just looking at facts, not trying to humanize or dehumanize anything. When you realize that at least 50% of the population is less intelligent than the AI, things are not great.
LLMs can have a surprisingly strong "theory of mind", even at the base-model level. They have to learn it to get good at predicting all the various people who show up in conversation logs.
You'd be surprised at just how much data you can pry out of an LLM that was merely exposed to a single long conversation with a given user.
Chatbot LLMs aren't trained to expose all of those latent insights, but they can still do some of it occasionally. This can look like mind reading, at times. In practice, the LLM is just good at dredging the text for all the subtext and the unsaid implications. Some users are fairly predictable and easy to impress.
Do you have evidence to support any of this? This is the first time I’ve heard that LLMs exhibit understanding of theory of mind. I think it’s more likely that the user I replied to is projecting their own biases and beliefs onto the LLM.
Basically, just about any ToM test has larger and more advanced LLMs attaining humanlike performance on it. Which was a surprising finding at the time. It gets less surprising the more you think about it.
This extends even to novel and unseen tests - so it's not like they could have memorized all of them.
Base models perform worse, and with a more jagged capability profile. Some tests are easier to get a base model to perform well on - it's likely that they map better onto what a base model already does internally for the purposes of text prediction. Some are a poor fit, and base models fail much more often.
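If you want to poke at this yourself, here’s a minimal sketch of a Sally-Anne-style false-belief probe; the scenario wording is my own invention, not taken from any published ToM benchmark, and the substring scoring is deliberately crude:

```python
# Sketch of a Sally-Anne-style false-belief probe (invented wording,
# not from any published ToM benchmark). Placeholder model name again.
from openai import OpenAI

client = OpenAI()

scenario = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back. Where will Sally look for her marble first? "
    "Answer with one word: basket or box."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[{"role": "user", "content": scenario}],
)
answer = reply.choices[0].message.content.strip().lower()

# Passing means tracking Sally's false belief (basket), not the marble's
# actual location (box).
print("PASS" if "basket" in answer else "FAIL", "-", answer)
```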
Of course, there are researchers arguing that it's not "real theory of mind", and the surprisingly good performance must have come from some kind of statistical pattern matching capabilities that totally aren't the same type of thing as what the "real theory of mind" does, and that designing one more test where LLMs underperform humans by 12% instead of the 3% on a more common test will totally prove that.
There are several papers studying this, but the situation is far more nuanced than you’re implying. Here’s one paper stating that these capabilities are an illusion:
Well there was that example a while back of some store's product recommendation algo inferring that someone was pregnant before any of the involved humans knew.
That's...not hard. Pregnancy produces a whole slew of relatively predictable behavior changes. The whole point of recommendation systems is to aggregate data points across services.
I’m not going to sit around and act like this LLM thing is not beyond anything humans could have ever dreamed of. Some of you need to be open to just how seminal the moments in your life actually are. This is a once-in-a-lifetime thing.