AI will overtake all human capabilities on some timescale. (We just don't know whether that's a matter of years, decades, or centuries.) It... doesn't make sense that an entire human couldn't eventually be perfectly replicated artificially (and then, obviously, surpassed immediately), unless you believe in a non-physical soul that powers some aspect of our conscious thought.
Even if you believe in such a soul, does it govern the non-conscious systems that do almost all of the work of thinking and feeling? If not - once again, artificial systems will certainly surpass natural ones.
In other words, it's not a question of whether humans will always be the masters, but a question of whether the "gods" we create will love us or hate us. Anthropic is aiming for the former.
My point was not about potential future AI (and not about current NNs). My point was that Anthropic's MARKETING team CLAIMS it is aiming for an AI that would love humans, and that is a commendable effort. But I think that, just like any other corporation in history, Anthropic cares only about unrestricted growth and quarterly revenue/profit numbers. Any "ethics" bs they throw around is a marketing gimmick, designed to improve their image while they create human job replacements right here, right now. That's both through direct job replacement and through trampling all over older industries, ignoring copyright law and stealing private IP just because the law doesn't mention NNs specifically.
> but a question of whether the "gods" we create will love us or hate us.
You (and pretty much the entire debate around AI safety) have smuggled in the notion that these AIs even have the capacity to "love" and "hate," and have the agency to perhaps act vindictively. How we get from predicting the most plausible next token to a god-like entity with agency is... not clear at all.
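For concreteness, here's a toy Python sketch of what "predicting the most plausible next token" means mechanically (the vocabulary and scores are made up for illustration): the model assigns a score to each candidate token, softmax turns those scores into probabilities, and greedy decoding just picks the top one. There's no goal or intent anywhere in that loop, which is exactly the gap I'm pointing at.

```python
import math

# Hypothetical vocabulary and model scores, purely for illustration.
vocab = ["love", "hate", "ignore", "serve"]
logits = [2.1, 0.3, 1.2, 0.8]

# Softmax: turn raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: emit the single most plausible next token.
next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(next_token)  # -> "love"
```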
Do people think that we're actually sleepwalking towards Skynet? Or are they just saying that so Sam Altman can get the government to impose rules that bind everyone else, letting OpenAI proceed without competition and capture the extremely mundane real market of writing ad copy and such?
> You (and pretty much the entire debate around AI safety) have smuggled in the notion that these AIs even have the capacity to "love" and "hate," and have the agency to perhaps act vindictively.
It's a metaphor, actually. I guess I needed heavier use of quotes.
The one thing we can be sure of is that if goal-oriented AI exists, it will follow goals.
> How we get from predicting the most plausible next token to a god-like entity with agency is... not clear at all.
It doesn't have to be clear. If it were clear, we'd be working on it already. That's... how invention works.