Intelligence, whether you use it as a fuzzy concept or a narrow one, assumes a goal. It's impossible to say if an actor acts intelligently, without assuming what it's trying to achieve.
It's peak this guy, and peak "rationalists" in general, to defend the concept of "objective" intelligence, intelligence towards unspecified goals. This is what they actually care about: that there's an objective scale of intelligence, and that they're above most people on it. Everything else they believe is tangential. They choose other beliefs according to how they can strengthen the core belief, that they are the intelligent ones. (And you bet they downvote people challenging it).
What goals their intelligence is directed at, they're evasive and inconsistent about, and that others might have different goals is certainly not a permissible option. In THAT area, they are Platonists, no matter how fuzzy their forms are: playing the flute is the task of a professional flautist, and steering the ship of humanity is a task for professional captain smart. Where the ship should be heading is not a subject of debate.
If a process has in it a place where a representation of a goal is stored, and if, for a wide range of goals, storing a representation of the goal in that format in that location would make the process tend to effectively achieve the goal, then it seems like “what the goal is” (within the aforementioned wide (but not necessarily universal) range of goals) and the “capability of achieving goals” (within that range of goals) would be things that could be factored out as separate properties.
Do you think that that condition doesn’t occur?
Or, do you think that “the range of goals for which this works” being limited makes the “how capable is this process at achieving goals” concept illegitimate, no matter how broad that limited range is?
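To make that condition concrete, here is a minimal sketch (in Python, with invented names; not anything from the original discussion) of a process with a swappable goal slot. The search machinery is identical no matter which goal is stored, so "which goal it has" and "how capably it pursues goals" come apart as separate properties, at least within the (narrow) range of goals the slot can represent.

```python
import random

class GoalDirectedProcess:
    """A toy process with a 'goal slot': the stored goal (a scoring
    function over integer states) can be swapped out without changing
    the search machinery that does the achieving."""

    def __init__(self, goal):
        self.goal = goal  # the stored goal representation (here: a callable)

    def run(self, start=0, steps=200):
        # Generic hill-climbing "capability", identical for every goal.
        state = start
        for _ in range(steps):
            candidate = state + random.choice([-1, 1])
            if self.goal(candidate) >= self.goal(state):
                state = candidate
        return state

# The same machinery pointed at two different stored goals:
wants_42 = GoalDirectedProcess(lambda s: -abs(s - 42))
wants_minus_7 = GoalDirectedProcess(lambda s: -abs(s + 7))

print(wants_42.run())       # tends towards 42
print(wants_minus_7.run())  # tends towards -7
```

The range here is deliberately tiny (score functions over integers), but within it the "what the goal is" and the "capability of achieving goals" properties factor apart cleanly.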
Also, the reason you are getting downvoted is presumably in part because you talked about being downvoted.
Your accusations are false and uninteresting, so I’m only addressing the object-level topics I can extract from your comment.
You can't pursue all goals equally well, unless you don't pursue goals at all. So when you start talking about "goal representations", you implicitly talk about a set of possible sub-goals - the set you would like to be able to represent, together with the process by which you plan to pick them, implicitly defines the actual goal.
You're not interesting either, "rationalist", and I think you believe a lot of false things too. But unfortunately people who should know better listen to you.
I said the accusations weren't interesting, not that you weren't interesting. I'm sure you have plenty of interesting things to say on other topics, and the other parts of what you are saying on this topic are (clearly) interesting enough to engage with. I'm just not interested in mud-slinging.
I'm not entirely sure what it would mean to say that something is equally competent at pursuing any possible goal, but it does seem very likely to me that, whatever it might mean, things can't be equally competent at all possible goals (for reasons of description length, if nothing else).
Ah, hm, it seems you are phrasing things in terms of simultaneously pursuing all possible goals?
I think that's a rather different thing than what I'm describing. Do you agree?
Something can be highly capable with regards to each of two goals which are each other's opposite (while perhaps not in fact pursuing either), but it certainly can't simultaneously be effective in pursuing two goals that are in direct opposition.
*None of these things are obstacles to it being possible to say of two mechanisms W and Z that, for every goal, if W has any competency towards achieving that goal at all, then Z is at least as competent towards achieving it.*
(where what it means for W or Z to have competency towards a goal is that the goal can be encoded in whatever goal-representation system the mechanism uses, and if that representation were what is stored in the part of the mechanism that determines what goal is pursued, then the mechanism would be effective at achieving that goal.
Though I suppose in some cases it might be unclear whether two representations in different systems are representing "the same goal"?)
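As a rough sketch of that comparison (Python, with invented names; a toy model rather than anything stated in the thread), the claim is just a pointwise comparison over goals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mechanism:
    # Hypothetical stand-in: the set of goals this mechanism is competent
    # towards, i.e. goals it can encode and, with that encoding stored in
    # its goal slot, would effectively achieve.
    competencies: frozenset

def at_least_as_capable(z: Mechanism, w: Mechanism) -> bool:
    """True iff, for every goal W has any competency towards at all,
    Z is competent towards that goal too."""
    return w.competencies <= z.competencies

# Toy example: Z dominates W without either being competent at everything.
w = Mechanism(frozenset({"fetch coffee", "play chess"}))
z = Mechanism(frozenset({"fetch coffee", "play chess", "prove theorems"}))
print(at_least_as_capable(z, w))  # True
print(at_least_as_capable(w, z))  # False
```

Representing a mechanism by the bare set of goals it is competent towards obviously throws away everything about how it achieves them; the point is only that the "at least as capable, for every goal" relation is well-defined even when neither mechanism is competent towards every possible goal.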
You say that talking about "goal representations" implicitly talks about a set of possible sub-goals. I guess you mean, like, goals that would be used to achieve some larger fixed goal? I'm not convinced that has to be true. It could be that the end-goal is exactly the one which is represented.
Additionally, I don't see a reason why there can't be any languages capable of representing arbitrary goals.
At the least, it seems pretty clear to me that there is a language capable of describing any particular goal you could communicate to me, and so any refutation that you might provide me, of the claim that there are such universal goal description languages, would have to be non-constructive, and this makes me doubt the relevance of any such counterexamples.
You need to be really careful about not assuming what you're trying to show when talking about teleology. I know teleology is something like a swear word in old new-atheist circles, but when we're talking about goals, it's teleology as literal as it gets.
You brought up goal representations. Can you explain what a representation is, without sneaking in the notion of "meaning", and thus the notion of "goal"? I certainly can't!
I’m not an atheist. I’m a Christian. I don’t have an objection to bringing up teleology.
Are you asking if I can (without sneaking in a reference to meaning) explain what a representation of something is, or specifically what a representation of a goal is?
It seems possible that my choice of the word “representation” gave a different impression than I intended. I meant basically the same thing as “encoding”. If that’s the meaning you got from it, then cool, my word choice didn’t cause a miscommunication.
If I have a computable function from finite bit strings to Turing machines, this defines an encoding scheme for Turing machines. A system of representations of Turing machines.
Is that “sneaking in a reference to meaning”? In a sense that implies a notion of “goal”? If so, not in a way that I think matters to the point.
Perhaps one could say that, by describing it as being an encoding scheme of Turing machines, I am therefore saying it is an encoding scheme for Turing machines, as in, with the purpose/goal of specifying a Turing machine. This, I think, has some truth to it, but it doesn’t imply that some artifact which relates to such an encoding scheme in some way has that as its goal, so much as that my describing something in terms of an encoding of Turing machines says something about my goals. Namely, I want to talk about the thing in a way relating to Turing machines.
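For concreteness, here is a toy sketch of such an encoding scheme (Python, with an invented format; an illustration only, not any standard encoding): a computable, total map from finite bit strings to descriptions of tiny Turing machines.

```python
def decode(bits: str) -> dict:
    """Toy computable map from finite bit strings to descriptions of tiny
    two-state, binary-alphabet Turing machines.  Every bit string decodes
    to *some* machine; this is one arbitrary scheme among many."""
    bits = (bits + "0" * 8)[:8]  # pad/truncate so every string decodes
    table = {}
    # Two states x two read symbols = four transitions, each encoded by
    # two bits: (symbol to write, head move direction); the next state
    # simply alternates, to keep the toy format small.
    for i, (state, read) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        write = int(bits[2 * i])
        move = "R" if bits[2 * i + 1] == "1" else "L"
        table[(state, read)] = (write, move, 1 - state)
    return table

print(decode("10110001"))
```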
If what you were challenging me to define without a hidden reference to meaning/purpose was specifically a system of representations of goals, then,
well, if by “meaning” you just mean like “statements and predicates and such”, then I would say that defining what it means for something to be a scheme for representing goals should, yes, require referring to something like a correspondence between (encodings/representations) and something like predicates or conditions about the world, or orderings on potential configurations of the world, or something like this. Which, in some sense of “meaning”, would, I think, include at least an implicit reference to that sense of “meaning”.
So, if that’s your claim, then I would agree?
But, I don’t think that would imply much about the overall goal of a system.
If I purchase a DVD player which has, as parts, various microcontrollers that are technically capable of general computation, the fact that it has some processors in it that could hypothetically execute general-purpose programs doesn’t prevent the overall purpose of the DVD player from being “to play DVDs”.
Of course, there’s a (probably big) difference between “encoding a program” and “encoding a goal”.
But, in the same way that a device can have components which would be capable of general computation if only the program were swapped out, without the manufacturer’s intended use of the device being “do general computation”,
I would think that a system could be such that, viewed in terms of a particular encoding scheme for goals, some part of it stores an encoding of some goal, and modifying that part to hold an encoding of any of a variety of other goals (viewed in terms of that encoding scheme) would result in that other goal being furthered by the system,
and that this doesn’t imply that the goal the system was designed to achieve, nor the goal which it currently pursues, is “be able to pursue any goal in this encoding scheme”?
(... wow I phrased that badly... I should edit the wording of this, but going to post it as is first because phone low on power.)
It seems fairly likely to me at this point that I’ve misunderstood you? If you notice a way I’ve likely misunderstood, please point it out.