Hacker News — mucha's comments

Adam added a comment to that thread with the 75% number and more context.

LLMs are non-deterministic for everyone. Give it time.


I'll be the first to say I've abandoned a chat and started a new one to get the result I want. I don't see that as a net negative though -- that's just how you use it.



That's actually an ad for X Premium+. No embarrassing ads!


What Satya says: “I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software,”

First line from the article: In April, Microsoft’s CEO said that artificial intelligence now wrote close to a third of the company’s code.

Software != AI

Source: https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

CNBC misquotes Satya in the same article that contains his actual quote.


Interesting. How do existing systems catch Task Requirement hallucinations?


They don't. My comment was about "hallucinations in generated code".



Aphantasia solved hallucinations for me.


What happens when the LLM responsible for checking decides to ignore your explicit conditions?


You bury your head in the sand and pretend the 2nd LLM magically lacks the limitations of the first.

Which raises the question: why not use the 2nd LLM in the first place, if it's the one that actually "knows" the answer?


That's happening. Unmodified LLM outputs aren't copyrightable.

