Hacker News

Not me. If I'm going to take the time to read something, I want it to have been written, reviewed, and edited by a human. There is far too much high-fidelity information I'm already missing out on to spend time on low-fidelity stuff.


Most human authors are frankly far too stupid to be worth reading, even if they do put care into their work.

This, IMO, is the actual biggest problem with LLMs training on whatever the biggest available text corpus is: they don't account for the fact that not all text is equally worthy of next-token prediction. This problem is completely solvable, almost trivially so, but I haven't seen anyone publicly describe a (scaled, in-production) solution yet.


> This problem is completely solvable, almost trivially so, but I haven't seen anyone publicly describe a (scaled, in-production) solution yet.

Can you explain your solution?


I imagine it looks something like "Censor all writing that contradicts my worldview"


It hardly matters what sources you use if you filter them through something that has less understanding than a two-year-old (if any at all), no matter how eloquently it can express itself.


Then don't copy and paste your copy of Thinking, Fast and Slow into your AI along with my prompt?


(My comment was less about my own behavior than an attempt to encourage others to evaluate my thinking, in the hope that they might apply it to their own and thereby benefit our collective understanding.)



