If you say a whale is stronger than a submarine or vice versa, it's misleading unless you unpack the sub-claims, whereas you can say a blue whale is stronger than a killer whale, or that submarine X is stronger than the Titan, without unpacking anything. Saying ChatGPT is more intelligent than Claude makes some sense, but there's no real point in comparing the intelligence of a person vs. an LLM, any more than you would ask whether a hummingbird flies better or worse than a Cessna.
> no real point in comparing the intelligence of a person vs. an LLM
Say you're deciding whether to employ an AI on a fairly open-ended family of tasks, or put out a want-ad to hire someone. What do you call what you're doing?
You’re not comparing their intelligence in some general sense; you’re comparing their ability to perform a collection of tasks that define a role. And more likely, you’ll do both, because each is better at some of the tasks that make up the role. E.g., instead of hiring two positions, you’ll hire one person and task the LLM with some of the work.
I said "fairly open-ended" and didn't limit the question to a mid-2023 LLM.
The point I'm aiming at is that an increasing ability to solve an open-ended range of problems affects the choices we have to make, regardless of whether you object to calling it "intelligence".
Yeah, I agree, and that's one problem with focusing on what "intelligence" means: it kind of connotes, as you say, abstract or academic problems over practical shrewdness. (It doesn't have to carry this restriction, but people sure seem to take it that way a lot.)
earthboundkid is arguing that there's no point in claiming that one flies better than the other, because that claim breaks down into many sub-claims; some support one position and some support the other.
You named some sub-claims that support the position that a hummingbird flies better than a Cessna.
But there are also many that support the other position: range, speed, load...