Are these the same people in the field who worry about runaway self-improvement and paperclip optimization? If so, then someone is being sloppy with their definitions, because those are not properties of human-like intelligence.
Human-like in capability, not in goals. Doing AI research and crafting paperclips are both human abilities. If the AI has human-like goals, then there's very little need to worry.