I've seen a lot of positivity about ChatGPT's output on coding tasks at my workplace. And it does seem to have some use in that area. But there is just no way in hell it's replacing a human in its current state.

If you ask it for boilerplate or for something that's a basic combination of things it's seen before, it can give you something decent, possibly even usable as-is. But as soon as you step into more novel territory, forget it.

There was one case where I wanted it to add an async method to an interface, as a way of seeing whether it "understood" the limitations of covariant type parameters in C# with regard to Task<T>. It did not. I replied explaining the issue, and it did come back with a solution, but not a good one. I told it very specifically to instead create a second interface to hold the async method. It did that, but repeated the original mistake, even though my message about covariance was still in the context fed back in for that response. I corrected it again, but the output from that ended up being so stupid I stopped trying.
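
Concretely, the problem looks something like the sketch below (the interface names are made up for illustration, not the actual code in question). Task<T> is an invariant class, so using T as its type argument doesn't count as an output-only position, and the compiler rejects it in a covariant interface:

    using System.Threading.Tasks;

    // This fails with CS1961 (invalid variance), because Task<T> is
    // invariant and so T is not in a covariant-safe position:
    //
    //   public interface IReader<out T>
    //   {
    //       Task<T> ReadAsync();
    //   }

    // The split I asked for: keep covariance on the synchronous interface
    // and move the async method to a second, invariant interface.
    public interface IReader<out T>
    {
        T Read(); // fine: T appears only in an output position
    }

    public interface IAsyncReader<T>
    {
        Task<T> ReadAsync(); // legal here: IAsyncReader<T> is invariant
    }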

And at no point did it do something that's very important when a task isn't precisely specified: ask me questions back. This seems just as likely to be a problem for one of these language models replacing a doctor. It doesn't request more context to answer better, so the only way to know it needs more is if you already know enough to recognize that the output doesn't make sense. It basically ends up working like a search engine that can't actually give you sources.



You should try the model not in isolation but hooked up to search, so it stuffs the context with current and verifiable data. Check out phind.com.
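
In case the mechanics aren't obvious, here's a rough sketch of the idea (a hypothetical helper, not phind.com's actual pipeline): search first, then paste the hits into the prompt so the model answers from current, checkable text instead of from memory.

    using System;
    using System.Collections.Generic;

    // Hypothetical prompt-stuffing helper, not a real API: the point is
    // just "retrieve, then put the sources in front of the question".
    static string BuildPrompt(string question, IReadOnlyList<string> hits)
    {
        var sources = string.Join("\n---\n", hits); // retrieved snippets
        return "Answer from the sources below and cite them.\n\n"
             + $"Sources:\n{sources}\n\nQuestion: {question}";
    }

    // In a real setup the hits would come from a search API; hard-coded here.
    Console.WriteLine(BuildPrompt(
        "Is Task<T> covariant in C#?",
        new[] { "docs: Task<T> is a class; classes cannot declare variance." }));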



