That's actually a bit of an interesting question. General intelligence and human-like intelligence are not necessarily the same, and I'm not 100% convinced they fully overlap. We can solve lots of the problems it occurs to us to try to solve, but that alone doesn't prove we're "general intelligences". There are probably categories of problems we can't solve or even properly conceive of, just like every other specialized intelligence. In short, be careful with your definitions. :)
Good point and an interesting idea, but it's well established in common usage and in the AGI field that the term refers to human-like intelligence. That's the default understanding, so you'd need to qualify the term if you mean something more general, as you're describing.
Are these the same people in the field who worry about runaway self-improvement and paperclip optimization? If so, then someone is being sloppy with their definitions, because those are not properties of human-like intelligence.
Human-like in capability, not in goals. Doing AI research and crafting paperclips are both human abilities. If the AI has human-like goals, then there's very little need to worry.