


Grasping nuance and implications of human replies is also something LLMs struggle with.


You're claiming there's a detectable "mental signature" but dodging the fact that this claim is inherently testable.

Either the signature is recognizable enough to put you in that "offended 22%," or it isn't. You can't invoke pattern recognition to justify your irritation, then hide behind "nuance" when the logical implication—that you should be able to spot it blind—gets pointed out.

Turns out humans are just as evasive as LLMs when pressed to back up what they actually said.


I am not arguing with bots, sweetie. Sorry, little model that almost could: I'm flagging your replies here.



