
I had a pretty cool experience with that the other day. I wrote some production code (LLM had no idea what was going on), then I measured the coverage and determined a test case that would increase it (again, just using my brain), BUT when I typed "testEmptyString" or whatever, the LLM filled in the rest of the test. Not a massive change to the way I work, but it certainly saved me a bunch of time.
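Concretely, it went something like this (JUnit 5; names changed, and slugify here is a hypothetical stand-in for the actual production code):

  import static org.junit.jupiter.api.Assertions.assertEquals;
  import org.junit.jupiter.api.Test;

  class SlugifierTest {
      // Hypothetical stand-in for the production code under test.
      static String slugify(String s) {
          return s.trim().toLowerCase().replaceAll("\\s+", "-");
      }

      // I typed only the signature; the LLM completed the body.
      @Test
      void testEmptyString() {
          // The branch the coverage report showed as unexercised.
          assertEquals("", slugify(""));
      }
  }

I typed the method name, hit tab, and got a working assertion for exactly the branch the coverage report flagged.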


I swear half the people in this thread spent five minutes with the first ChatGPT, pre-3.5, wrote it off, and are so convinced of their superiority that they won't spend the time required to even see where it's at now.

Ever seen someone really bad at googling? It's the exact same thing with LLMs (for now). They're not magic crystal balls, and they certainly can't read everyone's mind at the same time. But give them a bunch of context, and they'll surprise you.


Sshhh, we've still got like a year of advantage over the folks who haven't learned about searching the Internet and still have to drive to their local university library... don't squander it!


Engineers are also notorious for having hit-and-miss soft skills, and the interface to an LLM is natural language. I wouldn't be surprised if much of the variance in usefulness boils down to how effectively you can communicate what you want it to do.



