Yes, I think using it for code that you could write yourself fairly easily is a sweet spot, since you can quickly check it over and are unlikely to be fooled by hallucinations. It can save significant time on typing out boilerplate, refreshing yourself on API calls and type signatures, error handling, and so on.
It’s a save 15 minutes here, 20 minutes there kind of thing that can add up to hours saved over the course of a day.
One of my go-to examples is that I asked ChatGPT to write a DNS server. It didn't get it perfect and needed follow-up questions, but 1) it did a better job than I could have (having written a couple in the past) without spending time reading the RFCs, and 2) because it's something I've done before, even though I'd have to look up docs to do it again, I could instantly tell where it was doing the right thing and where I needed to check the specs and adjust details.
But if I didn't know how a DNS server works, on the other hand, it'd have been of little help to do it that way, because I'd have had no idea what it got right or not. It'd have been far more productive in that case to ask it for a step-by-step guide of what to do and which portions of the specs to look up.
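To make the "easy to validate" point concrete, here's a minimal sketch (not the code ChatGPT actually produced) of the kind of thing it hands you: parsing a DNS message header per RFC 1035 in Python. If you've written one before, you can eyeball the bit offsets against section 4.1.1 in seconds; if you haven't, you can't tell whether the shifts are right or hallucinated.

```python
import struct

def parse_dns_header(data: bytes) -> dict:
    # RFC 1035 section 4.1.1: the header is 12 bytes, six 16-bit
    # big-endian fields: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT.
    ident, flags, qd, an, ns, ar = struct.unpack("!6H", data[:12])
    return {
        "id": ident,
        "qr": (flags >> 15) & 0x1,      # query (0) or response (1)
        "opcode": (flags >> 11) & 0xF,  # kind of query
        "aa": (flags >> 10) & 0x1,      # authoritative answer
        "tc": (flags >> 9) & 0x1,       # truncated
        "rd": (flags >> 8) & 0x1,       # recursion desired
        "ra": (flags >> 7) & 0x1,       # recursion available
        "rcode": flags & 0xF,           # response code
        "qdcount": qd, "ancount": an, "nscount": ns, "arcount": ar,
    }

# A standard recursive query header: ID 0x1234, RD set, one question.
hdr = parse_dns_header(bytes.fromhex("123401000001000000000000"))
assert hdr["rd"] == 1 and hdr["qdcount"] == 1
```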
You have to treat it as a junior:
Either give it tasks you can trivially validate and improve because you know the subject, or ask it to help you condense and explore the search space.
Don't try to get it to complete a task you don't know how to quickly validate, because you'll waste tremendous amounts of time trying to figure out whether its answer is right.
Most of the people complaining seem to expect it to work well for the latter, even when told it's not (yet, anyway) a good use.