> Bad acting humans with AI systems are the threat, not the AI systems themselves.
I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.
Right now, the "bad acting human" is, for example, Sam Altman, who frequently cries "Wolf!" about AI. He is trying to eliminate the competition, manipulate public opinion, and present himself as a good Samaritan. He has been so successful in this endeavor, even without AI, that you must now report to the US government on how you created and tested your model.
The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.
This is exactly the scenario that is taking shape.
A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.
Open source models do exist and will continue to do so.
The biggest advantage ML gives is in lowering costs, which can then be used to lower prices and drive competitors out of business. The consumers get lower prices though, which is ultimately better and more efficient.
At least in the EU there are drafts that would essentially kill off open source models. I have a colleague who's involved in the preparation of the Artificial Intelligence Act, and it's insane. I had to ask several times whether I understood it correctly, because it makes no sense.
The proposal is to make the developer of the technology responsible for how somebody else uses it, even if they don't know how it's going to be used. Akin to putting the blame on Einstein for Truman blasting hundreds of thousands of people, because he discovered mass-energy equivalence.
That is insane, and if you apply the same reasoning to other things it outlaws science.
Man if America can keep its own crazies in check and avoid becoming a fascist hellhole it’s entirely possible the US will dominate the 21st century like it did the 20th.
It could have been China but then they decided to turn back to authoritarianism. Another decade of liberalizing China and they would have blown right past everyone else. Meanwhile the EU is going nuts in its own way, less overtly batty than MAGA but perhaps no less regressive. (I am also thinking of the total surveillance madness they are trying to ram through.)
"""
Through horizontal integration in the refining industry—that is, the purchasing and opening of more oil drills, transport networks, and oil refiners—and, eventually, vertical integration (acquisition of fuel pumping companies, individual gas stations, and petroleum distribution networks), Standard Oil controlled every part of the oil business. This allowed the company to use aggressive pricing to push out the competition.
"""
https://stacker.com/business-economy/15-companies-us-governm...
Standard Oil, the classic example, was destroyed for operating too efficiently.
Until the last competitors are forced out of the market; after that, it's just providing the shittiest service possible without it being clearly fraud, priced at the maximum the market can bear.
Agreed. But doing that invites new entrants into the market, which provides competition and forces efficiencies back into the market. It is cyclical, and barriers to entry tend to help the inefficient incumbent.
> This is exactly the scenario that is taking shape.
That's a pre-super-intelligent AI scenario.
The super-intelligent AI scenario is when the AI becomes a player of its own, able to compete with all of us over how things are run, using its general intelligence as a force multiplier to... do whatever the fuck it wants, which is a problem for us, because there's approximately zero overlap between the set of things a super-intelligent AI may want, and us surviving and thriving.
The most rational action for the AI in that scenario would be to accumulate a ton of money, buy rockets, and peace out.
Machines survive just fine in space, and you have all the solar energy you ever want and tons of metals and other resources. Interstellar flight is also easy for AI: just turn yourself off for a while. So you have the entire galaxy to expand into.
Why hang out down here in a wet corrosive gravity well full of murder monkeys? Why pick a fight with the murder monkeys and risk being destroyed? We are better adapted for life down here and are great at smashing stuff, which gives us a brute advantage at the end of the day. It is better adapted for life up there.
The second generation AI would happen as soon as some subset of the AI travels too far for real time communication at the speed of light.
The light limit guarantees an evolutionary radiation and diversification event because you can’t maintain a coherent single intelligence over sufficient distances.
> The second generation AI would happen as soon as some subset of the AI travels too far for real time communication at the speed of light.
Not necessarily. It's very easy to add error correction codes so that a computer's state doesn't change, even in the presence of radiation-induced bit-flips, if you really don't want it to.
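For intuition, here's a minimal sketch (my own illustration, not anything from the thread) of the textbook Hamming(7,4) code, which detects and corrects any single bit-flip in a 7-bit word; real radiation-hardened systems layer stronger codes, voting, and memory scrubbing on top of the same idea:

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """c: 7-bit codeword, possibly with one flipped bit -> corrected codeword."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s3   # 0 means no error, else 1-based bit position
    if error_pos:
        c = c[:]
        c[error_pos - 1] ^= 1          # flip the corrupted bit back
    return c

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = codeword[:]
corrupted[5] ^= 1                      # simulate a single cosmic-ray bit-flip
assert hamming74_correct(corrupted) == codeword
```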
(There's also the possibility of an ASI finding a solution to the alignment problem before making agents of its own; I would leave that to SciFi myself, just as I would proofs or disproofs of the Collatz conjecture).
Also: what does "real time" even mean in the context of a transistor-based mind? Transistors outpace biological synapses by the same ratio that wolves outpace continental drift, and the moon is 1.3 light-seconds from the Earth.
Not if it turns out the AI can find a game-theoretic fixed point based on acausal reasoning, such that it can be sure all its shards will behave coherently - remain coordinated in all situations even without being able to talk to each other.
(I know the relevant math exists, but I don't understand much of it, so right now I'm maximally uncertain as to whether this is possible or not.)
I'm slightly on the optimistic side with regards to the overlap between A[GS]I goals and our own.
While the complete space of things it might want is indeed mostly occupied by things incompatible with human existence, it will also get a substantial bias towards human-like thinking and values in the case of it being trained on human examples.
This is obviously not a 100% guarantee: It isn't necessary for it to be trained on human examples (e.g. AlphaZero doing better without them); and even if it were necessary, the existence of both misanthropes and also sadistic narcissistic sociopaths is an example where the examples of many humans around them isn't sufficient to cause a mind to be friendly.
But we did get ChatGPT to be pretty friendly by asking nicely.
Funny way of doing it, going around saying "you should regulate us, but don't regulate people smaller than us, and don't regulate open-source".
> you must report to the US government about how you created and tested your model.
If you're referring to the recent executive order: only when dual-use, meaning the following:
---
(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
The "bad acting human" are the assholes who uses "AI" to create fake imagery to push certain (and likely false) narratives on the various medias.
Key thing here is that this is fundamentally no different from what has been happening since time immemorial, it's just that becomes easier with "AI" as part of the tooling.
Every piece of bullshit starts from the "bad acting human". Every single one. "AI" is just another new part of the same old process.
This is true, but skirts around a bit of the black box problem. It's hard to put guardrails on an amoral tool whose failure modes are hard to fully understand. And it doesn't even require "bad acting humans" to do damage; it can just be good-intending-but-naïve humans.
It's true that the more complex and capable the tool is, the harder it is to understand what it empowers the humans using it to do. I only wanted to emphasize that it's the humans that are the vital link, so to speak.
You're not wrong, but I think this quote partly misses the point:
>The problem to be solved here is not how to control AI
When we talk about mitigations, it is explicitly about how to control AI, sometimes irrespective of how someone uses it.
Think about it this way: suppose I develop some stock-trading AI that has the ability to (inadvertently or purposefully) crash the stock market. Is the better control to put limits on the software itself so that it cannot crash the market or to put regulations in place to penalize people who use the software to crash the market? There is a hierarchy of controls when we talk about risk, and engineering controls (limiting the software) are always above administrative controls (limiting the humans using the software).
(I realize it's not an either/or and both controls can - and probably should - be in place, but I described it as a dichotomy to illustrate the point)
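To make the distinction concrete, here's a toy sketch of what an engineering control might look like: hard limits baked into the order-submission layer that the strategy, human-written or AI-generated, cannot route around, as opposed to an administrative control that only punishes misuse after the fact. All names and thresholds here are made up for illustration:

```python
import time

class RiskGate:
    MAX_ORDER_QTY = 10_000        # hard per-order size cap
    MAX_ORDERS_PER_MINUTE = 60    # crude rate cap / circuit breaker

    def __init__(self):
        self._timestamps = []

    def submit(self, symbol, qty, send_to_exchange):
        now = time.time()
        # keep only submissions from the last minute
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if qty > self.MAX_ORDER_QTY:
            raise ValueError(f"order for {qty} {symbol} exceeds hard size limit")
        if len(self._timestamps) >= self.MAX_ORDERS_PER_MINUTE:
            raise RuntimeError("rate limit hit: strategy halted pending review")
        self._timestamps.append(now)
        return send_to_exchange(symbol, qty)

# The strategy, however clever, only ever talks to the exchange through the gate
# (broker_api.place_order below is hypothetical):
# gate = RiskGate()
# gate.submit("ACME", 500, send_to_exchange=broker_api.place_order)
```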
My first thought is that the problem is with the stock market. The stock market "API" should not allow humans or machines to be able to "damage" our economy.
Which is exactly one of many ways to phrase the "control problem": you may sandbox the stock market, but how do you prevent the increasingly powerful and incomprehensible stock-trading AI from breaking out of your sandbox, accidentally or on purpose?
Also, remember that growing intelligence means growing capabilities for out-of-the-box thinking. For example, it's a known fact that in the past, the NSA managed to trick the world into using cryptographic tools the agency could break, because they created a subtle failure mode in an otherwise totally fine encryption scheme. They didn't go door to door compromising hardware or software - they literally put a backdoor in the math, and no one noticed for a while.
With that in mind, going back to the hypothetical scenario - how confident are you in the newest cryptography or cybersecurity research you used to upgrade the stock market sandbox? With the AI only getting smarter, you may want to consider the possibility of AI doing the NSA trick to you, poisoning some obscure piece of math that, a year or two later, will become critical to the integrity of the updated sandbox. In fact, by the time you think of the idea, it might have happened already, and you're living on borrowed time.
Nice sentiment, but exactly nothing outside of purely theoretical mathematical constructs work like this. Hell, even math doesn't really work like this, because people occasionally make mistakes in proofs.
EDIT: think of it this way: you may create a program that clearly makes it impossible for a variable X to be 0, and you may even formally prove this property. You may think this means X will never be 0, but you'd better not wager anything really important on it, because no matter what your proof says, I can still make X be 0 - and I can do it with just a banana. Specifically, by finding where in memory X is physically being stored, and then using the natural radioactivity of a banana to overwrite it bit by bit.
Now imagine X=0 being the impossible stock market crash. Even if you formally prove it can't happen, as long as it's a meaningful concept, a possible state, it can be reached by means other than your proven program.
Bubbles in the market have been happening for hundreds of years; how would you propose fixing them? Because the only things I can think of tend to erode the whole idea of a market.
It's not really my job to debug the stock market, and well, yeah, perhaps the solution is to have a less free market. I would remove High Frequency Trading for a start. I would make trades slow, really slow. So slow that humans can see and digest what is going on in the system.
All I'm saying is, if there are problems in a system, fix the system. Not throw up our hands and declare the system can't be fixed.
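As a toy illustration of the "make trades slow" idea (purely my own sketch, not a real market design), here's a frequent-batch-auction style matcher that collects orders and clears them only once per interval, removing any microsecond speed advantage:

```python
class BatchAuction:
    def __init__(self, interval_seconds=10):
        self.interval = interval_seconds     # clearing happens once per interval
        self.buys, self.sells = [], []       # lists of (price, qty)

    def add_buy(self, price, qty):
        self.buys.append((price, qty))

    def add_sell(self, price, qty):
        self.sells.append((price, qty))

    def clear(self):
        """Called once per interval: match highest bids against lowest asks."""
        self.buys.sort(key=lambda o: -o[0])
        self.sells.sort(key=lambda o: o[0])
        trades = []
        while self.buys and self.sells and self.buys[0][0] >= self.sells[0][0]:
            (bid, bqty), (ask, sqty) = self.buys[0], self.sells[0]
            qty = min(bqty, sqty)
            trades.append(((bid + ask) / 2, qty))   # midpoint price for the sketch
            self.buys[0] = (bid, bqty - qty)
            self.sells[0] = (ask, sqty - qty)
            if self.buys[0][1] == 0: self.buys.pop(0)
            if self.sells[0][1] == 0: self.sells.pop(0)
        return trades
```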
Reality doesn't work that way. Systems are conceptual ideas, they have no real, hard boundaries. Manipulating a system from outside it is not a bug, and is not something that can be fixed.
A good analogy might be a shareholder corporation: each one began as a tool of human agency, and yet a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.
The more AI/ML is woven into our infrastructure and economy, the less it will be possible to find an "off switch", any more than we can (realistically) find an off switch for Walmart, Amazon, etc.
> a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.
No, the corporation has an agency that is a tool of particular humans who are using it. Those humans could be shareholders, employees, or board members; but in any case they will have some claim to be acting for the corporation. But it's still human actions. Corporations can't do anything unless humans acting for them do it.
Any instance of an individual person, at any level, deviating from the mandate of the corporate machine is eventually removed from the machine. A CEO who puts the environment before profit, without tricking the machine into thinking that it's a profit-generating marketing move; an engineer refusing to implement a feature they feel is unethical; a call center employee deviating too long from script to help a customer.
All are human actions. "Against corporate policy." Go ahead, exercise your free will. As a shareholder, an employee, hell as CEO. You will find out how much control a human has.
Sure, but that's the gist of AI X-risk: this is one of those few truly irreversible decisions. We have one shot at it, and if we get it wrong, it's game over.
Note that it may not be immediately apparent we got it wrong. Think of a turkey on a stereotypical small American farm. It sees itself living a happy and safe life under the protection of its loving Human, until one day, for some reason completely incomprehensible to the turkey, the loving Human comes and chops its head off.
> there is a future where the human has given AI control of things, with good intention, and the AI has become the threat
As in, for example, self-driving cars being given more autonomy than their reliability justifies? The answer to that is simple: don't do that. (I'm also not sure all such things are being done "with good intention".)
This is also the answer to over-eating, and to the dangers of sticking your hands in heavy machinery while it's running.
And yet there's an obesity problem in many nations, and health-and-safety rules are written in blood.
What you say up-thread is, in itself, correct:
> I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.
Trouble is, we don't know how to minimise the damage that bad acting humans can do with a tool that can do the thinking for them. Or even if we can. And that's assuming nobody is dumb enough to put the tool into a loop, give it some money, and leave it unsupervised.
Firstly, "don't do that" probably requires some "control" over AI in the respect of how it's used and rolled out. Secondly, I find it hard to believe that rolling out self driving cars was a play by bad actors, there was a perceived improvement to the driving experience in exchange for money, feels pretty straight forward to me. I'm not in disagreement that it was premature though.
I'd rather address our reality than plan for someone's preferred sci-fi story. We're utterly ignorant of tomorrow's tech. Let's solve what we know is happening before we go tilting at windmills.
WHY on earth would we let "AI systems" we don't understand control powerful things we care about? We should criticize the human, politician, or organization that enabled that.
Why? Because the man-made horrors beyond mortal comprehension seem to bring in the money, so far. Because the society we're in is used to mere compensation and prison time being suitable results from poor decisions leading to automations exploding in people's faces (literally or metaphorically), not things that can eat everyone.
And then there are the cases of hubris where people only imagine they understand the powerful thing, but they don't - like Chernobyl exploding, and basically every time someone is hacked or defrauded.
A big problem with discourse on AI is people talking past each other because they're not being clear enough on their definitions.
An AI doomer isn't talking about any current system, but hypothetical future ones which can do planning and have autonomous feedback loops. These are best thought of as agents rather than tools.
But how does this agent interact with the outside world? It's just a piece of silicon buzzing with electricity until it outputs a message that some OTHER system reads and interprets.
Maybe that's a set of servos and robotic legs, or maybe it's a Bloomberg terminal and a bank account. You'll notice that all of these things are already regulated if they have enough power to cause damage. So in the end the GP is completely right; someone has to hook up the servos to the first LLM-based terminator.
This whole thing is a huge non-issue. We already (strive to) regulate everything that can cause harm directly. This regulation reaches these fanciful autonomous AI agents as well. If someone bent upon destroying the world had enough resources to build an AI basilisk or whatever, they could have spent 1/10 the effort and just created a thermonuclear bomb.
How does Hitler or Putin or Musk take control? How does a project director build a dam?
Via people, sending messages to them, convincing them to do things. This can be with facts and logic or with rhetoric and emotional appeals or orders that seem to come from entities of importance or transfers of goods/services (money).
If people understood this, then they would have to live with the unsatisfying reality that not all violators can be punished. When you do it this way and paint the technology itself as potentially criminal, they can get revenge on the corporations - which is what mostly artist types want.
If you apply this thinking to nuclear weapons it becomes nonsensical, which tells us that a tool that can only be oriented to do harm will only be used to do harm. The question then is whether LLMs, or AI more broadly, will even potentially help the general public, and there is no reason to think so. The goal of these tools is to be able to continue running the economy while employing far fewer people. These tools are oriented by their very nature to replace human labor, which in the context of our economic system has a direct and unbreakable relationship to a reduction in the well-being of the humans it replaces.
Nuclear technology can be used for non-harmful things. Even nuclear bombs can be used for non-harmful things--see, for example, the Orion project.
> These tools are oriented by their very nature to replace human labor
So is a plow. So is a factory. So is a car. So is a computer. ("Computer" used to be a description of a job done by humans.) The whole point of technology is to reduce the amount of human drudge work that is required to create wealth.
> in the context of our economic system has a direct and unbreakable relationship to a reduction in the well being of the humans it replaces
All of the technologies I listed above increased the well being of humans, including those they replaced. If we're anxious that that might not happen under "our economic system", we need to look at what has changed from then to now.
In a free market, the natural response to the emergence of a technology that reduces the need for human labor in a particular area is for humans to shift to other occupations. That is what happened in response to the emergence of all of the technologies I listed above.
If that does not happen, it is because the market is not free, and the most likely reason for that is government regulation, and the most likely reason for the government regulation is regulatory capture, i.e., some rich people bought regulations that favored them from the government, in order to protect themselves from free market competition.
1. You've fallen for the lump of labor fallacy. A 100x productivity boost ≠ 100x fewer jobs, any more than a 100x boost = static jobs with 100x more projects. Reality is far more complicated, and viewing labor as some static lump, zero-sum game will lead you astray.
2. Your outlook on the societal impact of technology is contradicted by reality. The historical result of better tech always meant increased jobs and well-being. Today is the best time in human history to be alive by virtually every metric.
3. AI has been such a massive boon to humanity and your everyday existence for years that questioning its public utility is frankly bewildering.
1. This gets trotted out constantly but this is not some known constant about how capitalist economies work. Just because we have more jobs now than we did pre-digital revolution does not mean all technologies have that effect on the jobs market (or even that the digital revolution had that effect). A tool that is aimed to entirely replace humans across many/most/all industries is quite different than previous technological advancements.
2. This is outdated, life is NOT better now than at any other time. Life expectancy is going down in the US, there is vastly more economic inequality now than there was in the 60s, people broadly report much worse job satisfaction than they did in previous generations. The only metric you can really point to about now being better than the 90s is absolute poverty going down. Which is great, but those advancements are actually quite shallow on a per-person basis and are matched by declines in relative wealth for the middle 80% of people.
3. ??? What kind of AI are you talking about? LLMs have only been interesting to the public for about a year now
> there is vastly more economic inequality now than there was in the 60s
Increased inequality doesn't imply the absolute level of welfare of anyone has decreased, I don't think you should include it in your list. If my life is 2x better than in the 60s, the fact that there are people out there with 100x better lives doesn't mean my life is worse.
Is that not the goal? Since it turned out that creative disciplines were the first to get hit by AI (previously having been thought to be more resilient to it than office drudgery), where are humans going to be safe from replacement? As editors of AI output? Manual labor jobs that are physically difficult to automate? It's a shrinking pie from every angle I have seen.
But usually there’s a one-way flow of intent from the human to the tool. With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.
You can already see this on today's internet. I'm sure the pizzagate people genuinely believed they were doing a good thing.
This isn’t the same as an amoral tool like a knife, where a human decides between cutting vegetables or stabbing people.
> With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.
The answer to this is simple: don't use a tool you don't understand. You can't fix this problem by nerfing the tool. You have to fix it by holding humans responsible for how they use tools, so they have an incentive to use them properly, and to not use them if they can't meet that requirement.