A type of reasoning. It's still bad at mathematical reasoning and advanced programming, or at least at translating very complicated written instructions into working code without any human intervention. We also don't know how good it is at reasoning about the physical world, although I think Microsoft was doing some research on that. Then there's theory of mind and the reasoning that goes along with it. Then there's reasoning about the future: how one's actions will affect outcomes, and then reasoning about that subsequent future.


Not even advanced programming.

ChatGPT is impressive, but it gets many things wrong. If you know what you are doing, it's an amazing programming assistant; it makes me noticeably more productive. However, it may lead someone who doesn't know what they are doing down weird rabbit holes that go nowhere.

One silly example: I was using a library I hadn't used before, and I asked how I could get certain attributes. It gave me an answer that wouldn't compile at all; the imports didn't exist.

Then when I mentioned that it didn't work, it gave me a slightly different answer, which also didn't work, and explained that the previous answer was valid for 3.x, and in 1.x or 2.x the new answer was the correct one.

But there's the catch: there's no version 3.x. There's not even a 2.x. Its language model just statistically arrived at that conclusion.

Doesn't make it any less impressive to me. It gets things right often enough, or at least points me in a good direction; I've effectively learned new things using it. But it can't replace a developer.

Using ChatGPT as if it were General AI is like eating a meal with a hammer and a screwdriver as utensils. You can probably do it, but nobody will have a good time.


