
My experience has been the polar opposite with GPT4. As long as I structure my thoughts and present it with what needs to be done - not like a product manager but like a development lead - it spits out stuff that works on the first try. It also writes code with a lot of best practices baked in (like better error handling, comments, descriptive names, variable initialization).

Sometimes presenting the problem to it means I spend anywhere from 5-10 minutes actually writing down the points that describe the requirement - which results in a working component/module (UI/backend).
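If it helps, here is a minimal sketch of what I mean by "writing the points down", assuming the OpenAI Python SDK; the component name and requirement bullets are invented for illustration:

    # Rough sketch of a "dev lead" style prompt, using the OpenAI Python SDK.
    # The component name and requirements below are made up for illustration.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    requirement = """
    Build a React component named UserInviteForm.
    - Fields: email (validated), role (dropdown: admin, member, viewer).
    - On submit, POST to /api/invites; show a spinner while the request is pending.
    - Handle a 409 (already invited) response with an inline error message.
    - Use descriptive variable names and handle network failures gracefully.
    """

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a senior front-end engineer."},
            {"role": "user", "content": requirement},
        ],
    )
    print(response.choices[0].message.content)

The point is less the API call and more the shape of the input: concrete fields, concrete endpoints, concrete failure cases.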

We have been trialing GPT4 at my company and unfortunately almost everyone's experience is more along the lines of yours than mine. I know it shouldn't, but honestly it frustrates me a lot when I see people complain that it doesn't work :). It definitely works, but it depends on the problem domain and the inputs. People often forget that it has no other context about the problem than the input you provide. It pays to be descriptive.



LLMs currently have this problem where they will give confident-sounding responses to prompts where they lack enough context. Humans are built to read that confidence as accuracy. It’s wholly a human interface problem.


Some humans find that style of response infuriating, but apparently we are in the minority.

It's almost like "AI hacking people's brains" turned out to happen accidentally, and a huge number of supposedly smart people are getting turned into mindless enthusiasts by nothing more than computer generated bullshit.


Agreed. It would be great if it could ask questions about missing context, but maybe language models are bad at that task. Maybe one needs a second run: evaluate the answer, then have it look for info that could improve it.
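Something like this two-pass sketch, assuming the OpenAI Python SDK (the task string is made up):

    # Toy sketch of the two-pass idea. Pass 1: ask the model what context is
    # missing. Pass 2: answer once the gaps are filled in by a human (or by
    # another model call).
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(messages):
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        return resp.choices[0].message.content

    task = "Write a migration script for our orders table."

    # First run: have the model list the questions it needs answered.
    questions = ask([
        {"role": "system", "content": "List the questions you would need answered "
                                      "before attempting this task. Do not attempt it yet."},
        {"role": "user", "content": task},
    ])

    # A human answers the questions here; this could also be a second model.
    answers = input(f"Model asks:\n{questions}\n\nYour answers: ")

    # Second run: answer with the extra context folded back in.
    print(ask([
        {"role": "user", "content": f"{task}\n\nAdditional context:\n{answers}"},
    ]))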


I wonder which personality profiles interact with it best. Probably some function of which abstraction layers people start from when thinking.


For code, yeah - if you know its pitfalls you can get it correct pretty fast. But I just don't believe it gives better answers than doctors; it makes silly mistakes that signal it doesn't understand things deeply.

I'd only believe it if they actually trained ChatGPT on those types of tests specifically.

Not on the actual dynamic nature of dealing with patients & lawsuits.


I’d say keep quiet and even join them in their incessant whining, meanwhile building your skills. Use this advantage.


There's definitely an art to asking the questions, likely because of subtle differences in how a lot of people communicate in writing.

NLP can recognize alt accounts of individuals on places like HN and Reddit, but a person would probably need to study the comments pretty hard to determine the same thing. It's not natural for people, IMO, but it seems to be a foremost capability of any kind of model that processes human writing.
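To give a feel for it, here's a toy stylometry sketch with scikit-learn; the comment strings and accounts are invented, and real attribution models use far richer features than this:

    # Toy stylometry sketch: character n-gram TF-IDF plus cosine similarity,
    # the kind of writing-style signal a model picks up that a casual reader
    # would miss. All comment text here is made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    comments_account_a = ["tbh this is overhyped... works fine for boilerplate tho",
                          "tbh the latency is the real problem here imo"]
    comments_account_b = ["tbh people underestimate how much prompt phrasing matters imo"]
    comments_account_c = ["In my experience, careful specification yields better results."]

    docs = [" ".join(comments_account_a),
            " ".join(comments_account_b),
            " ".join(comments_account_c)]

    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    tfidf = vectorizer.fit_transform(docs)

    # Higher similarity between A and B than between A and C hints at a shared author.
    print(cosine_similarity(tfidf[0], tfidf[1]))  # A vs B
    print(cosine_similarity(tfidf[0], tfidf[2]))  # A vs C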


How do you do that for patients?




