I think in either case, GC or not, you write something intuitively, and then when it becomes an actual problem, you study the patterns and improve them. Most of the time, though, you can leave it alone; the general-purpose thing is good enough.
Or if you are in a niche like audio/video or something, you avoid allocations altogether during the bulk of the code.
Oh for sure. Premature optimizing is usually bad form.
What's tricky about allocation-related performance issues is that they can be hard to profile. GC'd languages have tools for analyzing GC performance, but malloc/free can sometimes just be a big black box.
Premature optimization is not bad form when it's re-framed as good architecture. It's not 'usually bad form' to architect something well from the outset using your experience, and that's something everyone understands. This pervasive disdain for premature optimization leads to bad architecture, which often leads to expensive rewrites. Because the word 'optimization' is so overloaded and treated with such disdain, it feels like we need new language for what's really meant by 'premature optimization' (the bad kind).
That's one reason I point out a few comments above that certain niches will need to take it into account ahead of time. E.g., an A/V application will typically allocate all buffers up front and reuse them frequently rather than returning them to the allocator. A lot of server applications will want to keep per-client memory usage low. For general purposes, there's the general-purpose allocator.