The AFR piece that underlies this article [1] [2] has more detail on Ng's argument:
> [Ng] said that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the AI industry.
> “There’s a standard regulatory capture playbook that has played out in other industries, and I would hate to see that executed successfully in AI.”
> “Just to be clear, AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”
> “There’s a standard regulatory capture playbook that has played out in other industries
But imagine all the money bigco can make by crippling small startups so they can't innovate and compete! It's for your own safety. Move along, citizen.
The only meaningful thing in this discussion is people who want to make easy money but can't, because of rules they don't like.
Well, suck it up.
You don’t get to make a cheap shitty factory that pours its waste into the local river either.
Rules exist for a reason.
You want the lifestyle and all the good things, but also no rules. You can’t have your cake and eat it too.
/shrug
If China builds amazing AI tech (and they will) then the rest of the world will just use it. Some of it will be open source. It won’t be a big deal.
This “we must outcompete China by being as shit and horrible as they are” meme is stupid.
If you want to live in China, go live in China. I assure you you will not find it to be the lawless freehold of “anything goes” that you somehow imagine.
The trouble is sometimes they don't. Or they do exist for a reason but the rules are still absurd and net harmful because they're incompetently drafted. Or the real reason is bad and the rules are doing what they were intended to do but they were intended to do something bad.
> If China builds amazing AI tech (and they will) then the rest of the world will just use it.
Not if it's banned elsewhere, or they allow people to use it without publishing it, e.g. by offering it as a service.
And it matters a lot who controls something. "AI" potentially has a lot of power, even non-AGI AI -- it can create economic efficiency, or it can manipulate people. If an adversarial entity has greater economic efficiency, they can outcompete you -- the way the US won the Cold War was essentially by having a stronger economy. If an adversarial entity has a greater ability to manipulate people, that could be even worse.
> If you want to live in China, go live in China. I assure you you will not find it to be the lawless freehold of “anything goes” that you somehow imagine.
But that's precisely the issue -- it's not an anarchy, it's an authoritarian competing nation state. We have to be better than them so the country that has an elected government and constitutional protections for human rights is the one with an economic advantage, because it isn't a law of nature that those things always go together, but it's a world-eating disaster if they don't.
> Or they do exist for a reason but the rules are still absurd and net harmful
Ok.
…but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?
Shouldn’t you be asking: does the law do a useful thing? Does it make the world better? Is it compatible with our moral values?
Organ harvesting.
Stem cell research.
Human cloning.
AI.
Slavery.
How can anyone stand there and go “well China will do it so we may as well?”
In an abstract sense this is a fundamentally invalid logical argument.
Truth on the basis of arbitrary assertion.
It. Is. False.
Now, certainly there is a degree of nuance with regard to AI specifically; but the assertions that we will be “left behind” and “outcompeted by China” are not relevant to the discussion on laws regarding AI and AI development.
What we do is not governed by what China may or may not do.
If you want to win the “AI race” to AGI, then investment and effort are required, not an arbitrary “anything goes” policy.
China as a nation is sponsoring the development of its technology and supporting its industry.
If you want to beat that, opposing responsible AI won’t do it.
Of course you have to consider what other countries will do when you create your laws. The notion that you can ignore the rest of the world is both naive and incredibly arrogant.
There are plenty of technologies that absolutely do not "make the world better" but unfortunately must get built because humans are shitty to each other. Weapons are the obvious one, but not the only one. Often countries pass laws to encourage certain technologies or productions so as not to get outcompeted or outproduced by other countries.
The argument here about AI is exactly this sort of argument. If other countries build vastly superior AI by having fewer developmental restrictions, then your country may be at not only a military disadvantage but also an economic disadvantage, because you can be easily outproduced by countries using vastly more efficient technology.
You must balance all the harms and benefits when making laws, including issues external to the country.
I don't think the government is talking about AI for weapons. Of course that will be allowed. It's the US, we have the right to kill people. Just not make fake porn videos of them.
> ...but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?
That depends on what "it" is. If it's slavery, and the US (but not China) banning slavery causes there to be half as much slavery in the world as there would be otherwise, then yes, opposing the ban on that basis would be stupid.
But if it's research, and the same worldwide demand for the research results is there, then you're only limiting where it can be done, which just means twice as much gets done in China if it isn't being done in the US. You're not significantly reducing the scope of the problem. You're just making sure that any benefits of the research are in the control of the country that can still do it.
> Now, certainly there is a degree of nuance with regard to AI specifically; but the assertions that we will be “left behind” and “outcompeted by China” are not relevant to the discussion on laws regarding AI and AI development.
Of course it is. You could very easily pass laws that de facto prohibit AI research in the US, or limit it to large bureaucracies that in turn become stagnant for lack of domestic competitive pressure.
This doesn't even have anything to do with the stated purpose of the law. You could pass a law requiring government code audits which cost a million dollars, and justify them based on any stated rationale -- you're auditing to prevent X bad thing, for any value of X. Meanwhile the major effect of the law is to exclude anybody who can't absorb a million dollar expense. Which is a bad thing even if X is a real problem, because that is not the only possible solution, and even if it was, it could still be that the cure is worse than the disease.
Regulators are easily and commonly captured, so regulations tend to be drafted in that way and to have that effect, regardless of their purported rationale. Some issues are so serious that you have no choice but to eat the inefficiency and try to minimize it -- you can't have companies dumping industrial waste in the river.
But when even the problem itself is a poorly defined matter of debatable severity and the proposed solutions are convoluted malarkey of indiscernible effectiveness, that is a sure sign something shady is going on.
A strong heuristic here is that if you're proposing a regulation that would restrict what kind of code an individual could publish under a free software license, you're the baddies.
> Of course it is. You could very easily pass laws that de facto prohibit AI research in the US, or limit it to large bureaucracies that in turn become stagnant for lack of domestic competitive pressure.
…
> A strong heuristic here is that if you're proposing a regulation that would restrict what kind of code an individual could publish under a free software license, you're the baddies.
Sure.
…but those things will change the way development / progress happens regardless of what China does.
“We have to do this because China will do it!” is a harmful trope.
You don’t have to do anything.
If you want to do something, then do it, if it makes sense.
…but I flat out reject the original contention that China is a blanket excuse for any fucking thing.
Take some darn responsibility for your own actions.
> What we do is not governed by what China may or may not do.
Yes it is... Where the hell would you get the impression we don't change how we govern and invest based on what China does, is doing, or might be doing? Do you really think nations don't adjust their behavior and laws based on other countries' real or perceived actions? I can't imagine you're that ignorant.
> If you want to beat that, opposing responsible AI won’t do it.
I could be wrong; maybe what China does with its AI developments will significantly and drastically alter the current startup status quo for AI startups.
Maybe the laws around AI will drastically impact the ability of startups to compete with foreign competitors.
…but I can’t see that being likely.
It seems to me that restricting chip technology has a much much more significant impact, along with a raft of other measures which are already in place.
All I can see when I look closely at arguments from people saying this kind of stuff is people who want to make deep fakes, steal art, and generate porn bots crying about it, and saying it's not fair that other people (e.g. Japan, where this has been ruled legal; China, for who knows what reason, mostly ignorance) are allowed to do it.
I’m not sympathetic.
I don’t believe that makes any difference to the progress on AGI.
I don’t care if China out competes other countries on porn bots (I don’t think they will; they have a very strict set of rules around this stuff… but I’ll be generous and include Japan which probably will).
You want the US to get AGI first?
Well, explain specifically how you imagine open source (shared with the world) models and open code sharing help, vs. everything being locked away in a Google/Meta sandbox?
Are you sure you’re arguing for the right side here? Shouldn’t you be arguing that the models should be secret so China can’t get them?
Or are you just randomly waving your arms in the air about China without having read the original article?
What are you even arguing for? Laws are bad… but sharing with China is also bad… but having rules about what you do is bad… but China will do it anyway… but fear mongering and locking models away in big corporations behind apis is bad… but China… or something…
> It seems to me that restricting chip technology has a much much more significant impact, along with a raft of other measures which are already in place.
Restricting chip technology is useless and the people proposing it are foolish. Computer chips are generic technology and AI things benefit from parallelism. The only difference between faster chips and more slower chips is how much power they use, so the only thing you get from restricting access to chips is more climate change.
> All I can see when I look closely at arguments from people saying this kind of stuff is people who want to make deep fakes, steal art, and generate porn bots crying about it, and saying it's not fair that other people (e.g. Japan, where this has been ruled legal; China, for who knows what reason, mostly ignorance) are allowed to do it.
The problem is not that people won't be able to make porn bots. They will make porn bots regardless, I assure you. The problem is that the people who want to control everything want to control everything.
You can't have a model with boobs in it because that's naughty, so we need a censorship apparatus to prevent that. And it should also prevent racism, somehow, even though nobody actually agrees how to accomplish that. And it can't emit foreign propaganda, defined as whatever politicians don't like. And now that it has been centralized into a handful of megacorps, they can influence how it operates to their own ends and no one else can make one that works against them.
Now that you've nerfed the thing, it's worse at honest work. It designs uncomfortable apparel because it doesn't understand what boobs are. You ask it how something would be perceived by someone in a particular culture and it refuses to answer, or lies to you because of what the answer would be. You try to get it to build a competing technology to the company that operates the thing and all it will do is tell you to use theirs. You ask it a question about the implications of some policy and its answer is required to comply with specific politics.
> Well, explain specifically how you imagine open source (shared with the world) models and open code sharing help, vs. everything being locked away in a Google/Meta sandbox?
To improve it you can be anyone anywhere vs. to improve it you have to work for a specific company that only employs <1% of the people who might have something to contribute. To improve it you don't need the permission of someone with a conflict of interest.
> Are you sure you’re arguing for the right side here? Shouldn’t you be arguing that the models should be secret so China can’t get them?
China is a major country. It will get them. The only question is if you will get them, in addition to China and Microsoft. And to realize the importance of this, all you have to ask is if all of your interests are perfectly aligned with those of China and Microsoft.
False equivalency at its finest. This is more akin to banning factories and people rightly saying our rivals will use these factories to out produce us. This is also a much better analogy because we did in fact give China a lot of our factories and are paying a big price for it.
I think you underestimate the power foreign governments will have and will use if we are relying on foreign AI in our everyday lives.
When we ask it questions, an AI can tailor its answers to change people's opinions and how people think. They would have the power to influence elections, our values, our sense of right and wrong.
That's before we start allowing AI to just start making purchasing decisions for us with little or no oversight.
The only answer I see is for us all to have our own AIs that we have trained, understand, and trust. For me this means it runs on my hardware and answers only to me. (And not locked behind regulation.)
> If China builds amazing AI tech (and they will) then the rest of the world will just use it. Some of it will be open source. It won’t be a big deal.
"Don't worry if our adversary develops nuclear weapons and we won't - it's OK we'll just use theirs"
> "Don't worry if our adversary develops nuclear weapons and we won't - it's OK we'll just use theirs"
Beneath this comment is hidden a truth that there is AI which can be used beneficially, AI which can be used detrimentally, AI which can be weaponized in warfare, and AI which can be used defensively in warfare. Discussions about policy and regulation should differentiate these, but also consider implications of how this technology is developed and for what purpose it could be employed.
We should definitely be developing AI to combat AI as it will most certainly be weaponized against us with greater frequency in the near future.
Yes, and I think it's broader than that. For example, if a country uses AI to (say) optimize its education or its economy, it will "run away" from us. Rather than enabling us to use that technology too (why would they, even for money), they can just wait until their advantage is insurmountable.
So it's not just pure warfare systems that are risky for us but everything.
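To make the "run away" point concrete, here is a toy compounding sketch in Python. The growth rates are purely illustrative assumptions (nothing above specifies any numbers); the only point is that a small sustained efficiency edge compounds into a large gap.

    # Toy illustration of how a small sustained efficiency advantage compounds.
    # The 4% and 2% growth rates are assumed for the example, not real projections.
    a = b = 1.0                  # two economies, equal starting size
    for year in range(50):
        a *= 1.04                # economy with the (assumed) AI-driven edge
        b *= 1.02                # economy without it
    print(f"relative gap after 50 years: {a / b:.1f}x")  # ~2.6x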
The problem is what the Powers-That-Be say and what they do are not in alignment.
We are now, after much long-time pressure from everyone not in power saying that being friendly with China doesn't work, waging a cold war against China and presumably we want to win that cold war. On the other hand, we just keep giving silver platter after silver platter to China.
So do we want the coming of Pax Sino or do we still want Pax Americana?
If we defer to history, we are about due for another changing of the guard as empires generally do not last more than a few hundred years if that, and the west seems poised to make that prophecy self-fulfilling.
Wish people stopped with that Cold War narrative. You're not waging anything just yet.
Here's the thing: the US didn't win the OG Cold War by being, as 'AnthonyMouse puts it upthread, "the country that has an elected government and constitutional protections for human rights" and "having a stronger economy". It won it by having a stronger economy, which it used to fuck half of the world up, in a low-touch dance with the Soviets that had both sides toppling democratic governments, funding warlords and dictatorships, and generally doing the opposite of protecting human rights. And at least through a part of that period, if an American citizen disagreed, or urged restraint and civility and democracy, they were branded a commie mutant spy traitor.
My point here isn't to pass judgement on the USA (and to be clear, I doubt things would've been better if the US let Soviets take the lead). Rather, it's that when we're painting the current situation as the next Cold War, then I think people have a kind of cognitive dissonance here. The US won the OG Cold War by becoming a monster, and not pulling any punches. It didn't have long discussions about how to safely develop new technologies - it just went full steam ahead, showered R&D groups with money, while sending more specialists to fuck up another country to keep the enemy distracted. This wasn't an era known for reasoned approach to progress - this was the era known for designing nuclear ramjets with zero shielding, meant to zip around the enemy land, irradiating villages and rivers and cities as they fly by, because fuck the enemy that's why.
I mean, if it is to happen, it'll happen. But let's not pretend you can keep Pax Americana by keeping your hands clean and being a nice democratic state. Or that whether we're more or less serious about AI safety is relevant here. If it becomes a Cold War, both sides will just pull out all the stops and rush full-steam to develop and weaponize AGI.
--
EDIT - an aside:
If the history of both sides' space programs is any indication, I wouldn't be surprised to see the US building a world-threatening AGI out of GPT-4 and some duct tape.
Take for example US spy satellites - say, the 1960s CORONA program. Less than a decade after Sputnik, no computers, with engineering fields like control theory being still under development - but they successfully pulled off a program that involved putting analog cameras in space on weird orbits, which would make ridiculously high-detail photos of enemy land, and then deorbit the film canisters, so they could be captured mid-air by a jet plane carrying a long stick. If I didn't know better, I'd say we don't have the technology today to make this work. The US did it in the 1960s, because it turns out you can do surprisingly much with surprisingly little, if you give creative people infinite budget, motivate them with basic "it's us vs. them" story, and order them to win you the war.
As impressive as such feats were (and there were plenty more), I don't think we want to have the same level of focus and dedication applied to AI - if that's a possibility, then I fear we've crossed the X-risk threshold already with the "safe" models we have now.
This is what was said about Japan prior to their electronics industry surpassing the rest of the world. Yes, China does copy. However, in many instances those companies move faster and innovate faster than their Western counterparts. Look at the lidar industry in China. It's making mass-market lidar in the tens of thousands [see Hesai]. There is no American or European equivalent at the moment. What about DJI? They massively out-innovated Western competitors. I wouldn't be so quick to write off that country's capacity for creativity and technological prowess.
That's a tired old talking point that the US always throws in. The fact is that, as part of their agreements to operate in the Chinese market, Western companies cooperated with local Chinese companies, which included sharing of knowledge.
The Western companies agreed to these terms to gain a piece of the juicy Chinese market. And the Chinese did it because they had the rare power to stop Western companies from just coming and draining resources, in the colonial manner in which the West usually operates.
Building on this, China has now surpassed the West in many areas of development. Electric cars, solar technology, and cell phone towers are now much more advanced in China.
What a wildly strange case of revisionist history.
The West started shifting production to China for immense cost savings, over 40 years ago. At the time, China had almost NO market, and no (what the West called, at the time) "middle class". China was mostly agrarian, and had very little manufacturing base.
There was nothing "juicy" for the West, market wise. At all.
Over the last 40 years, China's economy has prospered and grown, mostly due to the West's use of Chinese labour. Virtually the entire manufacturing base that China has right now exists because Western expertise, skill, and capabilities helped Chinese factories and workers come online and train in Western production methods.
Prior to 40 years ago, everyone except the British couldn't have cared less for China, and the British indeed had Hong Kong.. something pre-existent from THEIR colonial days. The British could have retained Hong Kong, but as agreed did turn it over to China at the turn of the century. No, China had no capability to enforce that, not back around the year 2000.
Note that the colonial days of "the West" makes little sense. Many Western nations were not colonialists, and the US is actually a breakaway colony, and has worked to curtail colonialism! To lump "the West" together, would be like thinking Japan and China are the same, because they are all "Oriental".
Back to China: very little China does "surpasses the West". In fact, so little capability does China have that when the US imposed an embargo on advanced silicon against China, it lost its capability, for several years, to domestically manufacture cell phones.
Look, I get the feeling you're pro-China. And perhaps, you grew up in China.
First, there are three things. The Chinese government. Chinese culture. Chinese people.
The last? We can stop discussing that now, because unless you are racist, there is no such thing as "Chinese people act a certain way, because they are Chinese".
However, there is such a thing as "Chinese culture", derived mostly from China, although of course there are endless factions, cultures, and languages in China; no, China isn't Han alone!!
But for simplicity, we'll assume Han culture == Chinese culture, and move on from there.
One of the largest coups that I feel the current dictatorship in China has accomplished (and dictatorship it is, when you don't step down and decide to serve a third term) is to convince Chinese people that "Chinese government = Chinese people". That's not so.
The Chinese government has many negative qualities. One of those qualities is a suppression of free will, excessive monitoring of its citizens, such as the social credit system, and this does indeed result in a lack of creativity. It also results in a lack of drive, of desire for people to excel, for when people like Jack Ma simply go missing, because they excel, because they do well, because they choose to take part in directing Chinese society, you end up with an innate desire to not show your true capability.
For if you do? The government will appear, take control of your works, your creation, and you'll be left out in the cold. In fact, you'll probably be killed.
These two things, fear of stepping out of bounds, and fear of excelling, do indeed create issues. This is why totalitarian governments have always fallen behind more open systems, for centrist driven societies always do. Politicians are absolutely not equipped to "see the future", to understand what inventions can be useful or not, and in fact most researchers cannot either! Research must be free, unfettered, not organized, and the output of research must be judged, not the input. Put another way, the usefulness of a research path is not readily apparent until that research path is taken.
Yet centrist control attempts to direct the path of research, whereas non-centrist control has endless paths of research sprouting, growing, dying, organically allowing society itself to judge the value of such things.
This is what I mean by the fact that Chinese culture does not allow for open development, and it is true. It is not a "Chinese" thing, but a "totalitarian thing", and has been seen over, and over, and over again, regardless of the genetic history of the peoples involved. It's a cultural thing.
Back to the coup I referred to prior. By indelibly linking two ideas, the Chinese Government and The Chinese People as one in the minds of most Chinese citizens, you foster a culture as we see here. That directed attacks against the Chinese dictatorship, the CCP, and Xi, are somehow an attack against the common person in China.
Not so.
Even if you do believe in a different governmental system (in which case you'd be wrong, but such a belief is OK to hold in the West!), one of China's failures, both as a people and a government, is a complete lack of understanding of the West. An inability to understand that we generally, actually believe what we stand for. That it's not all for show.
An example. I dislike portions of my current government. Some choices made. The current leader of my Westminster governmental system. I can think that he should be replaced, that he is currently a liability, whilst at the same time recognizing that some things he has done are OK. And I can shout "replace that man!" at the top of my lungs without impinging upon the Canadian people or their culture!
Most people who grew up in China (not Hong Kong!), have a difficult time with this. This concept is hard to accept. I get that, but at the same time, it is core. Key. Vital to comprehend.
No matter how much people in the West rail against a current leader, THEY ARE STILL LOYAL TO THEIR COUNTRY. And no matter how much people in the West complain about Xi and the current CCP, THEY ARE NOT IMPINGING UPON THE CHINESE PEOPLE.
This is often lost on anyone immersed in Chinese culture.
Anyhow. I don't have time to engage more at this moment. I will check back to see if you reply, but if you do, please engage inline with my comments. Or at least understand the actual history of Western/Chinese interaction.
They have a massive advantage due to having less regulation, cheaper costs, a large pool of talent (even if lower in quality on average), and a strong ecosystem of suppliers.
This may surprise, but Japan is not China. Their culture is not the same. Further their culture was shifted to capitalism at the end of WWII. Citing Japan, is supporting my point about culture.
Mass-marketing things isn't innovation. It's copying. DJI seems like more copying. "Innovation" isn't marketing. It's raw research and development, along market paths which are useful. This requires a desire for change, a desire to put not conformity but capitalism first, and this is what China's culture does not have.
China isn't a communist country, it's first and foremost authoritarian. They do have ruthless capitalism, and the ruthless competition in between individuals that comes with it.
They inherit from Confucianism, and a more collectivist mindset that is prevalent in this area of the planet, but I don't think it should be conflated with the way the economy is organised.
The Japanese on the other hand are overall conformist and conservative.
With just these counter examples, it doesn't feel like you're looking at the right variables to judge whether innovation is embedded in the culture or not.
> China isn't a communist country, it's first and foremost authoritarian.
So are all “communist” countries. Communism (either Marxist or more generally) as a whole isn’t authoritarian, but all “communist” countries are products of Leninism or its derivatives, which definitely are, fundamentally, authoritarian.
That communism always ended up in authoritarian regimes isn't relevant to what I'm referring to. We generally oppose communism to say capitalism or liberalism for organising the economy and authoritarianism to democracy for organising governance.
There are a few essential properties of a "communist" system that modern China doesn't have. Most of the capital is privately owned, the social safety net is very poor, etc.
I think it’s a mistake to believe that all China can do is copy and clone.
It’s also a mistake to underestimate the market value of copies and clones. In many cases a cloned version of a product is better than the original. E.g., clones that remove over-engineering of the original and simplify the product down to its basic idea and offer it at a lower price.
It’s also a mistake to confuse manufacturing prowess for the ability to make “copies.” It’s not China’s fault that its competitors quite literally won’t bother producing in their own country.
It’s also a mistake to confuse a gain of experience for stealing intellectual property. A good deal of innovation in Silicon Valley comes from the fact that developers can move to new companies without non-compete clauses and take what they learned from their last job to build new, sophisticated software.
The fact that a bunch of Western companies set up factories in China and simultaneously expect Chinese employees and managers to gain zero experience and skill in that industry is incredibly contradictory. If we build a satellite office for Google and Apple in Austin, Texas then we shouldn’t be surprised that Austin, Texas becomes a hub for software startups, some of which compete with the companies that chose Austin in the first place.
Frankly I think the only reason China copies and clones is because it’s the path of least resistance to profit. They have lax laws on IP protection. There is no reason to do R&D when you can just copy/clone and make just as much money with none of the risk.
And that’s probably the only reason. If push comes to shove, they can probably innovate if given proper incentives.
I heard the tale about the Japanese lens industry. For the longest time they made crap lenses that were just clones of foreign designs, until the Japanese government banned licensing of foreign lens designs, forcing their people to design their own lenses. Now they are doing pretty well in that industry, if I’m right.
You need to have an understanding of Chinese culture and the ability to interface with local Chinese officials to get your counterfeiting complaint handled.
You also have to be making something that isn’t of critical strategic importance.
> It’s also a mistake to confuse a gain of experience for stealing intellectual property. A good deal of innovation in Silicon Valley comes from the fact that developers can move to new companies without non-compete clauses and take what they learned from their last job to build new, sophisticated software.
The amount of outright theft of entire IP from US, Canadian, and European countries by China is well known. There is no confusion here, in more recent times people have been arrested and charged for it, and it's how China is able to compete.
> China doesn't innovate, it copies, clones, and steals.
FWIW, there was a time when that was the received wisdom about the USA, from the point of view of European powers. It was shortsighted, and not particularly accurate then either.
And yet Japan and Korea both were shifted to more Western modes of thought, about innovation, development, and an adoption of democracy and personal rights. This supports my point.
South Korea had little choice in the matter as it’s effectively a tributary state to the US. What’s amazing is that the US didn’t somehow screw up with South Korea.
Japan’s democracy seems to be a hold-over from its imperialist ambitions from the Meiji restoration, when the emperor took power back from the shogunate and “westernized” to fast-track.
Meaning, the Japanese took all of the trappings of western civilization but under the veneer it’s still distinctly Japanese.
All the people I know who worked with and for Korean and Japanese entities have countless examples to show how alien the corporate culture is for westerners.
South Korea in particular doesn't seem exactly like a heaven for personal growth and experimentation.
This is true in general, but with 1.5 billion citizens they have a lot of non-conformists. Conformism is good for manufacturing and quality; see Japan. I buy a lot from China and I'm frequently positively surprised. I find things that are equally good or better than their Western counterparts at a fraction of the cost. Western companies spend way too much on marketing instead of delivering value. There are issues with the West as well. Today Asia is responsible for a big chunk of the world's manufacturing; this is strategic.
Yes, Western companies spend a lot on marketing, because without it you might confuse their products, which are built to deliver positive experiences and value, with similar-looking but not-so-positive counterparts.
Not to dunk on China particularly here; I do/did enjoy a lot of high-quality Chinese products.
That's true in some cases, but it's also true that some Western companies spend a lot on building branding because that's their only differentiator. Sometimes it's even manufactured in the same factory with the same materials. And don't get me wrong, I know there is a lot of garbage from China, and I often see products from there that have superb build quality and materials but critical flaws due to poor design/marketing.
> A price paid, I think, due a conformant, restrictive culture. And after all, even if you do excel, you may soon disappear.
I once spoke to a Chinese person who speculated: "I wish that the Chinese were as conformant and uniform as the Americans - China is too diverse and unruly!"
I think that it's a common human habit to upsell one's own diversity and downplay that of others.
Conformism doesn't capture it. It's more complex than that; maybe authoritarian vs. democratic does. Authoritarian organizations reward loyalty over merit, so people, in order to survive, tend to be obedient, bureaucratic, ruthless, and less competent. Democratic organizations reward merit over loyalty. Paradoxically, despite people having more freedom, things are less chaotic because people have better incentives to be competent, and to trust and work together. Though no society is perfectly one or the other.
That's a total lie. The reason that TikTok (nee Musical.ly) has great recommendations is because they use ByteDance tech, which was 100% Chinese developed.
Sure, but that's not the part that matters. The innovative part is the recommendation algorithm that redefined what it means to "optimize for engagement".
I mean, YouTube, Facebook and Instagram are trying to hook you up on a dopamine drip so they can force-feed you some ads. TikTok is just pure crack that caught the world by surprise - and it's not even pushing you ads! Honestly, to this day I'm not sure what their business model is.
On paper they are similar. However, when it comes to recsys competence, TikTok blows other platforms - past or present - out of the water. TikTok's feed is algorithmic crack, and is shockingly quick to figure out users' tastes. Instagram and YouTube had to scramble to copy ByteDance's innovation.
The answer is c) sell that energy and use your resulting funds to deeply root yourself in all other systems and prevent or destroy alternative forms of energy production, thus achieving total market dominance
This non-hypothetical got us global warming already
This analogy of course is close to nuclear energy. I think most people would say that regulation is still broadly aligned with the public interest there, even though the forces of regulatory capture are in play.
I read that book. No, you deny your gift to the world and become a recluse while the world slowly spins apart.
Technically: a solar panel is just such a machine. You'll have to wait a long, long time, but the degradation is slow enough that you can probably use a panel for more than several human lifetimes at ever-decreasing output. You will probably find it more economical to replace the panel at some point because of the amount of space it occupies and the fact that newer generations of solar panels will do that much better in the same space. But there isn't any hard technical reason why you should discard one after 10, 30, or 100 years. Of course 'infinite' would require the panel to be 'infinitely durable', and likely at some point it will suffer mechanical damage. But that's not a feature of the panel itself.
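A rough back-of-envelope sketch of that "ever-decreasing output" point, assuming a ~0.5%/year degradation rate (a figure supplied here purely for illustration; the comment above doesn't give one):

    # Remaining output of a panel after N years at a fixed annual degradation rate.
    # The 0.5%/year rate is an assumption for illustration only.
    def remaining_output(years, annual_degradation=0.005):
        return (1 - annual_degradation) ** years

    for years in (10, 30, 100, 200):
        print(f"after {years:>3} years: {remaining_output(years):.0%} of original output")
    # after  10 years: 95% of original output
    # after  30 years: 86% of original output
    # after 100 years: 61% of original output
    # after 200 years: 37% of original output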
And I strongly agree with pointing out that a low-hanging fruit for "good" regulation is strict and clear attribution laws to label any AI-generated content with its source. That's a sooner-the-better, easy-win no-brainer.
Why would we do this? And how would this conceivably even be enforced? I can't see this being useful or even well-defined past cartoonishly simple special cases of generation like "artist signatures for modalities where pixels are created."
Requiring attribution categorically across the vast domain of generative AI...can you please elaborate?
I think it's a reasonable ask to enforce attribution of AI-generated content. We enforce food labels, why not content?
I would go further and argue that AI-generated content should not be granted the same copyright as human-generated content, but, with that, AI-generated content using existing copyrighted training data does not violate copyright.
Regulation isn't always, but often is, a drag on productivity. Food labels make total sense because the negative consequences of not doing it outweigh the drag of doing it.
I'm not at all convinced that enforcing AI labeling and the resulting impossible task of policing and enforcing this will outweigh any negatives of not doing it.
I'm thinking about the cookie policy in Europe. I hate it and almost always just click through because so many websites work around it by making it a real pain to "reject cookies".
If you use an AI spell checker then will your resulting text all be without copyright?
If you use an AI coding assistant then will the written code be without copyright? Or will the code require a disclaimer that says some parts of it are AI generated?
You're also going to have to be very precise on defining what AI means. For most people a compiler is as magical as AI. They might even consider it AI, especially if it does some kind of automatic performance optimizations - after all, that's not the behavior the user wrote.
Where is the line drawn? My phone uses math to post-process images. Do those need to be labeled? What about filters placed on photos that do the same thing? What about changing the hue of a color with photoshop to make it pop?
Generative AI. Anything that can create detailed content out of a broad / short prompt. This currently means diffusion for images, large language models for text. That may change as multi-modality and other developments play out in this space.
This capability is clearly different from the examples you list.
Just because there may be no precise engineering definition does not mean that we cannot arrive at a suitable legal/political definition. The ability to create new content out of whole cloth is quite separate from filters, cropping, and generic "pre-AI" image post-processing. Ditto for spellcheck and word processors for text.
How do you expect to regulate this and prove generative models were used? What stops a company from purchasing art from a third party where they receive a photo from a prompt, where that company isn't US based?
> How do you expect to regulate this and prove generative models were used?
Disseminating or creating copies of content derived from generative models without attribution would open that actor up to some form of liability. There's no need for onerous regulation here.
The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks. The broad existing (and severely flawed!) example of copyright legislation seems instructive.
All I'll opine is that the main goal here isn't really to prevent Jonny Internet from firing up llama to create a reddit bot. It's to incentivize large commercial and political interests to disclose their usage of generative AI. Similar to current copyright law, the fear of legal action should be sufficient to keep these parties compliant if the law is crafted properly.
> What stops a company from purchasing art from a third party where they receive a photo from a prompt, where that company isn't US based?
Not really sure why the origin of the company(s) in question is relevant here. If they distribute generative content without attribution, they should be liable. Same as if said "third party" gave them copyright-violating content.
EDIT: I'll take this as an opportunity to say that the devil is in the details and some really crappy legislation could arise here. But I'm not convinced by the "It's not possible!" and "Where's the line!?" objections. This clearly is doable, and we have similar legal frameworks in place already. My only additional note is that I'd much prefer we focus on problems and questions like this, instead of the legislative capture path we are currently barrelling down.
> It's to incentivize large commercial and political interests to disclose their usage of generative AI.
You would be okay allowing small businesses an exemption from this regulation but not large businesses? Fine. As a large business, I'll have a mini subsidiary operate the models and exempt myself from the regulation.
I still fail to see what benefit this holds. Why do you care if something is generative? We already have laws against libel and against false advertising.
> You would be okay allowing small businesses an exemption from this regulation but not large businesses?
That's not what I said. Small businesses are not exempt from copyright laws either. They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.
> I still fail to see what benefit this holds.
I have found recent arguments by Harari (and others) that generative AI is particularly problematic for discourse and democracy to be persuasive [1][2]. Generative content has the potential, long-term, to be as disruptive as the printing press. Step changes in technological capabilities require high levels of scrutiny, and often new legislative regimes.
EDIT: It is no coincidence that I see parallels in the current debate over generative AI in education, for similar reasons. These tools are ok to use, but their use must be disclosed so the work done can be understood in context. I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.
> They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.
They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care about so nobody bothers to sue them, but that's not the same thing at all. We regularly see small entities getting harassed under these kinds of laws, e.g. when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.
> They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care about so nobody bothers to sue them
Yes, but there are also powerful provisions like section 230 [1] that protect smaller operations. I will concede that copyright legislation has severe flaws. Affirmative defenses and other protections for the little guy would be a necessary component of any new regime.
> when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.
Look, I have used and like youtube-dl too. But it is clear to me that it operates in a gray area of copyright law. Secondary liability is a thing. Per the EFF's excellent discussion of some of these issues [2]:
> In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.
I do not think it is clear how youtube-dl fares on such a test. I am not a lawyer, but the issue to me does not seem as clear cut as you are presenting.
> Yes, but there are also powerful provisions like section 230 [1] that protect smaller operations.
This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.
> But it is clear to me that it operates in a gray area of copyright law.
Which is the problem. It should be unambiguously legal.
Otherwise the little guy can be harassed and the harasser can say maybe to extend the harassment, or just get them shut down even if it is legal, when the recipient of the notice isn't willing to take the risk.
> > In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.
Notably this was a circuit court case and not a Supreme Court case, and:
> The discussion of proportionality in the Aimster opinion is arguably not binding on any subsequent court, as the outcome in that case was determined by Aimster's failure to introduce any evidence of noninfringing uses for its technology.
But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal -- because it still isn't an infringing work. It's the same reason the DMCA process isn't supposed to be used for material which is allegedly libelous. But the DMCA's process is so open to abuse that it gets used for things like that regardless and acts as a de facto prior restraint, and is also used against any number of things that aren't even questionably illegal. Like the legitimate website of a competitor which the claimant wants taken down because they are the bad actor, and which then gets taken down because the process rewards expeditiously processing takedowns while fraudulent ones generally go unpunished.
> This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.
Ok, I'll rephrase: the clarity of its mechanisms and protections benefits small and large organizations alike.
My understanding is that it no longer applies to copyright because the DMCA and specifically OCILLA [1] supersede it. I admit I am not an expert here.
> Which is the problem. It should be unambiguously legal.
I have conflicting opinions on this point. I will say that I am not sure if I disagree or agree, for whatever that is worth.
> But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal
This is totally fair. I also am not a fan of the DMCA and takedown processes, and think those should be held as a negative model for any future legislation.
I'd prefer for anything new to have clear guidelines and strong protections like Section 230 of the CDA (immunity from liability within clear boundaries) than like the OCILLA.
> I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.
You should vote with your wallet and only patronize businesses that self disclose. You don't need to create regulation to achieve this.
With regards to the articles, they are entirely speculative, and I disagree wholly with them, primarily because their premise is that humans are not rational and discerning actors. The only way AI generates chaos in these instances is by generating so much noise as to make online discussions worthless. People will migrate to closed communities of personal or near-personal acquaintances (web-of-trust like) or to meatspace.
Here are some paragraphs I found especially egregious:
> In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
Dumb people will dumb. People with different values will different. I see no reason that AI offers increased risk to cult followers of Q. If someone isn't going to take the time to validate their sources, the source doesn't much matter.
> On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.
In these instances, does it matter that the discussion is being held with AI? Half the use of discussion is to refine one's own viewpoints by having to articulate one's position and think through the cause and effect of proposals.
> The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?
Intimacy isn't necessarily the driver for this. It very well could have been Lemoine's desire to be first to market that motivated the claim, or a simple misinterpreted signal à la LK-99.
> Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?
Akin to the concerns of scribes during the times of the printing press. The market will more efficiently reallocate these workers. Or better yet, people may still choose to search to validate the output of a statistical model. Seems likely to me.
> We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain.
Now we get to the point: please regulate me harder. What's to stop a more powerful AI from corrupting the minds of the legislative body through intimacy or other nonsense? Once it is sentient, it's too late, right? So we need to prohibit people from multiplying matrices without government approval right now. This is just a pathetic hit piece to sway public opinion and get barriers to entry erected to protect companies like OpenAI.
Markets are free. Let people consume what they want so long as there isn't an involuntary externality, and conversing with anons on the web does not guarantee that you're speaking with a human. Both of us could be bots. It doesn't matter. Either our opinions will be refined internally, we will make points to influence the other, or we will take up some bytes in Dang's database with no other impact.
> You should vote with your wallet and only patronize businesses that self disclose. You don't need to create regulation to achieve this.
This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.
I am not entirely convinced by the arguments in the linked opinions either. However, I do agree with the main thrust that (1) machines that are indistinguishable from humans are a novel and serious issue, and (2) without some kind of consumer protections or guardrails things will go horribly wrong.
> This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.
I strongly disagree. I heard the same arguments about how Google needs regulation because nobody could possibly compete. A few years later we have DDG, Brave Search, Searx, etc.
This is a ridiculous proposal, and obviously not doable. Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.
It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant. Centuries ago some authoritarians raised similar concerns over printing presses.
> Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.
I disagree. What is vague about "generative content must be disclosed"?
What are the first amendment issues? Attribution clearly can be required for some forms of speech, it's why every political ad on TV carries an attribution blurb.
> It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant.
Again, I disagree. The line between tools and actors will only blur further in the future without action.
> Centuries ago some authoritarians raised similar concerns over printing presses.
I'm pretty clearly not advocating for a "smash the presses" approach here.
> And copyright is an entirely separate issue.
It is related, and a model worth considering as it arose out of the last technical breakthrough in this area (the printing press, mass copying of the written word).
Your disagreement is meaningless because it's not grounded in any real understanding of US Constitutional law and you clearly haven't thought things through. What is generative AI? Please provide a strict legal definition which complies with the vagueness doctrine. Is an if/then statement with a random number generator generative AI? How about the ELIZA AI psychology program from 1964? And you'll also have to explain how your proposal squares with centuries of Supreme Court decisions on compelled speech.
> What are the first amendment issues? Attribution clearly can be required for some forms of speech, it's why every political ad on TV carries an attribution blurb.
I'm not sure this is the best comparison. The government can regulate the speech of government employees. Presumably it can do so for candidates working in capacity to get a government role.
> The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks.
You're proposing a law. How does it work?
Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.
But how is the government, or anyone, supposed to prove this? The reason you want it to be labeled is for the cases where you can't tell. If you could tell you wouldn't need it to be labeled, and anyone who wants to avoid labeling it could do so only in the cases where it's hard to prove, which are the only cases where it would be of any value.
> Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.
This is the most obvious problem, yes. Consumer protection agencies seem like the most obvious candidate. I have already admitted I am not a lawyer, but this really does not seem like an intractable problem to me.
> The reason you want it to be labeled is for the cases where you can't tell.
This is actually _not_ the most important use case, to me. This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
> But how is the government, or anyone, supposed to prove this?
Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
> This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
But then why do you need any new laws at all? We already have laws against false advertising and breach of contract. If you want to declare that a space is exclusively human-generated content, what stops you from doing this under the existing laws?
> Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated. In order to prove it you would need some way of distinguishing machine-generated content, which if you had it would make the law irrelevant.
> This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
Doing nothing can be better than doing either of two things that are both worse than nothing.
> But then why do you need any new laws at all? We already have laws against false advertising and breach of contract.
My preference would be for generative content to be disclosed as such. I am aware of no law that does this.
Why did we pass the FFDCA for disclosures of what's in our food? Because the natural path that competition would lead us down would require no such disclosure, so false advertising laws would provide no protection. We (politically) decided it was in the public interest for such things to be known.
It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.
> Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated.
You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
> Doing nothing can be better than doing either of two things that are both worse than nothing.
Agreed. However, I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
> My preference would be for generative content to be disclosed as such. I am aware of no law that does this.
What you asked for was a space without generative content. If you had a space where generative content is labeled but not restricted in any way (e.g. there are no tools to hide it) then it wouldn't be that. If the space itself does wish to restrict generative content then why can't you have that right now?
> Why did we pass the FFDCA for disclosures of what's in our food?
Because we know how to test it to see if the disclosures are accurate but those tests aren't cost effective for most consumers, so the label provides useful information and can be meaningfully enforced.
> It seems inevitable to me that without some sort affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.
This will happen regardless of disclosure unless it's prohibited, and even then people will just lie about it because there is an incentive to do so and it's hard to detect.
> You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
It will be a technical battle between companies that don't want it on their service and try to detect it against spammers who want to spam. The effectiveness of a law would be directly related to what it would take for the government to prove that someone is violating it, but what are they going to use to do that at scale which the service itself can't?
> I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
So you're proposing something which is useless but mostly harmless to satisfy demand for Something Must Be Done. That's fine, but I still wouldn't expect it to be very effective.
"Someone else will figure that out" isn't a valid response when the question is whether or not something is any good, because to know if it's any good you need to know what it actually does. Retreating into "nothing is ever perfect" is just an excuse for doing something worse instead of something better because no one can be bothered, and is how we get so many terrible laws.
you have so profoundly misinterpreted my comment that I call into question whether you actually read it or not.
One of the best descriptions I've seen on HN is this.
Too many technical people think of the law as executable code and if you can find a gap in it, then you can get away with things on a technicality. That's not how the law works (spirit vs letter).
In truth, lots of things in the world aren't perfectly defined and the law deals with them just fine. One such example is the reasonable person standard.
> As a legal fiction,[3] the "reasonable person" is not an average person or a typical person, leading to great difficulties in applying the concept in some criminal cases, especially in regard to the partial defence of provocation.[7] The standard also holds that each person owes a duty to behave as a reasonable person would under the same or similar circumstances.[8][9] While the specific circumstances of each case will require varying kinds of conduct and degrees of care, the reasonable person standard undergoes no variation itself.[10][11] The "reasonable person" construct can be found applied in many areas of the law. The standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law.
> The standard is also used in contract law,[12] to determine contractual intent, or (when there is a duty of care) whether there has been a breach of the standard of care. The intent of a party can be determined by examining the understanding of a reasonable person, after consideration is given to all relevant circumstances of the case including the negotiations, any practices the parties have established between themselves, usages and any subsequent conduct of the parties.[13]
> The standard does not exist independently of other circumstances within a case that could affect an individual's judgement.
Pay close attention to this piece
> or (when there is a duty of care) whether there has been a breach of the standard of care.
One could argue that because standard of care cannot ever be perfectly defined it cannot be regulated via law. One would be wrong, just as one would be wrong attempting to make that argument for why AI shouldn't be regulated.
> you have so profoundly misinterpreted my comment that I call into question whether you actually read it or not.
You are expressing a position which is both common and disingenuous.
> Too many technical people think of the law as executable code and if you can find a gap in it, then you can get away with things on a technicality. That's not how the law works (spirit vs letter).
The government passes a law that applies a different rule to cars than trucks and then someone has to decide if the Chevrolet El Camino is a car or a truck. The inevitability of these distinctions is a weak excuse for being unable to answer basic questions about what you're proposing. The law is going to classify the vehicle as one thing or the other and if someone asks you the question you should be able to answer it just as a judge would be expected to answer it.
Which is a necessary incident to evaluating what a law does. If it's a car and vehicles classified as trucks have to pay a higher registration fee because they do more damage to the road, you have a way to skirt the intent of the law. If it's a truck and vehicles classified as trucks have to meet a more lax emissions standard, or having a medium-sized vehicle classified as a truck allows a manufacturer to sell more large trucks while keeping their average fuel economy below the regulatory threshold, you have a way to skirt the intent of the law.
Obviously this matters if you're trying to evaluate whether the law will be effective -- if there is an obvious means to skirt the intent of the law, it won't be. And so saying that the judge will figure it out is a fraud, because in actual fact the judge will have to do one thing or the other and what the judge does will determine whether the law is effective for a given purpose.
You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.
Toll roads charge vehicles based upon the number of axles they have.
In other words, you made my point for me. The law is much better than you at doing this, they've literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.
> You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.
uhhh......
To quote:
> The reasonable person standard is by no means democratic in its scope; it is, contrary to popular conception, intentionally distinct from that of the "average person," who is not necessarily guaranteed to always be reasonable.
You should read up on this idea a bit before posting further, you've made assumptions that are not true.
> Toll roads charge vehicles based upon the number of axles they have.
So now you've proposed an entirely different kind of law because considering what happens in the application of the original one revealed an issue. Maybe doing this is actually beneficial.
> The law is much better than you at doing this, they've literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.
Judges are not empowered to replace vehicle registration fees or CAFE standards with toll roads even if the original rules are problematic or fail to achieve their intended purpose. You have to go back to the legislature for that, who would have been better to choose differently to begin with, which is only possible if you think through the implications of what you're proposing, which is my point.
Yes to all of the above, and airbrushed pictures in old magazines should have been labeled too. I'm not saying unauthorized photoediting should be a crime, but I don't see any good reason why news outlets, social media sites, phone manufacturers, etc. need to be secretive about it.
It's helpful because they know more about what they're looking at, I guess? I'm a bit confused by the question - why wouldn't consumers want to know if a photo they're looking at had a face-slimming filter applied?
You're not thinking like a compliance bureaucrat. If you get in trouble for not labeling something as AI-generated then the simplest implementation is to label everything as AI-generated. And if that isn't allowed then you run every image through an automated process that makes the smallest possible modification in order to formally cause it to be AI-generated so you can get back to the liability-reducing behavior of labeling everything uniformly.
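To make concrete how cheap that workaround could be, here is a hedged sketch (run_generative_pass is a hypothetical stand-in for any image-to-image model invoked at near-zero strength, not a real API): the output is visually identical to the input, but the pipeline can now uniformly label everything as AI-generated.

    # Hypothetical sketch of the compliance workaround described above.
    # run_generative_pass() stands in for an image-to-image model run at
    # near-zero strength: visually identical output, formally "AI-generated".

    def run_generative_pass(image_bytes: bytes) -> bytes:
        # A real pipeline would call a generative model here with an
        # imperceptibly small strength; for the sketch, pass data through.
        return image_bytes

    def publish(image_bytes: bytes) -> dict:
        processed = run_generative_pass(image_bytes)
        # Uniform labeling minimizes liability under the hypothetical rule.
        return {"image": processed, "label": "AI-generated"}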
It may not be relevant. What if I want to put up a stock photo with a blog post? What benefit does knowing whether it was generated by multiplying matrices have to my audience? All I see it doing is increasing my costs.
The benefit is that your audience knows whether it's a real picture of a thing that exists in the world. I wouldn't argue that's a particularly large benefit - but I don't see why labeling generated images would be a particularly large cost either.
The map is not the territory. No photo represents a real thing that exists in the world. Photos just record some photons that arrived. Should publishers be required to disclose the frequency response curve of the CMOS sensor in the camera and the chromatic distortion specifications for the lens?
I'm approximately a free market person. I hate regulation and believe it should only exist when there is an involuntary third-party externality.
My position is that the benefit is unspecified; the only cases specified here are already covered by other laws. All such generative labeling would do is increase costs (marginal or not, they make businesses less competitive) and open the door for further regulatory capture. Furthermore, regardless of commerciality, this is likely a 1A violation.
Please define "AI generated content" in a clear and legally enforceable manner. Because I suspect you don't understand basic US constitutional law including the vagueness doctrine and limits on compelled speech.
There are two dominant narratives I see when AI X-Risk stuff is brought up:
- it's actually to get regulatory capture
- it's hubris, they're trying to seem more important and powerful than they are
Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI. Maybe they're wrong, but I don't think this kind of incredulous conspiratorial reaction is a useful thing to engage in.
When in doubt take people at their word. Maybe the CEOs of these companies have some sneaky 5D chess plan, but many many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns. They're worth taking seriously.
> Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI
This rings hollow when these companies don’t seem to practice what they preach, and start by setting an example - they don’t halt research and cut the funding for development of their own AIs in-house.
If you believe that there’s X-Risk of AI research, there’s no reason to think it wouldn’t come from your own firm’s labs developing these AIs too.
Continuing development while telling others they need to pause seems to make “I want you to be paused while I blaze ahead” far more parsimonious than “these companies are actually scared about humanity’s future” - they won’t put their money where their mouth is to prove it.
It's a race dynamic. Can you truly imagine any one of them stopping without the others agreeing? How would they tell that the others really have stopped? I think they do believe that what they're doing is dangerous, but that they would rather be the ones to build it than let somebody else get there first, because who knows what they'll do.
It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.
Except the argument, projected to the dimension of WMDs, is not that AI is like nukes - rather, AI is like bioweapons. Nukes are dangerous when someone is willing to drop them at someone else. Bioweapons are inherently dangerous - the more you refine them, the worse it gets; eventually, you may build one so deadly that one careless handling mistake ends the world.
It might be an example of that, but the reason so many dismiss the lab leak hypothesis in favour of wet markets is that the markets were already expected to be the breeding ground for "the next pandemic" well before Covid actually happened. Wet markets were also associated with the outbreaks of H5N1 avian flu, SARS, and monkey pox.
If those CEOs really thought AI was as bad as nukes they would actually dissolve their companies, destroy all their data, and go churn butter with the Amish instead. The US, having developed nukes first, now has the most nuclear warheads pointed at it.
That argument doesn't hold water when they also argue the mere existence of nukes is dangerous. I would love to hear when Hinton had this revelation, given that his life's work was to advance AI.
This is not mutually exclusive with it being either hubris or regulatory capture. People see the world colored by their own interests, emotions, background, and values. It's quite possible that the person making the statement sincerely believes there's a danger to humanity, but it's actually a danger to their monopoly that their self-image will not let them label as such.
It's never regulatory capture when you're the one doing it. It's always "The public needs to be protected from the consequences that will happen if any non-expert could hang up a shingle." Oftentimes the dangers are real, but the incumbent is unable to also perceive the benefits of other people competing with them (if they could, competition wouldn't be dangerous, they'd just implement those benefits themselves).
When I see comments like these, it's clear that the commenter is probably an individual contributor that has never seen how upper management or politics actually works. Regulatory capture is probably one of the biggest wealth generating techniques out there. It's very real.
If some rando anonymous posters could think it up, it doesn't require a CEO to play 5D chess to think it up. And many of us have witnessed these techniques being used by companies directly. Microsoft was famous for doing this sort of thing, and in a much more roundabout fashion, for instance with the SCO debacle.
It's standard business practice, not conspiracy 5D chess or whatever moniker you want to give it to be dismissive.
The traditional method of regulatory capture is not to purport to solve a problem that doesn't really exist, it's to go look around for whatever people are actually worried about, over-hype it if necessary, and then propose a solution which shuts out competitors whether or not it does anything about the problem. It may even reduce that specific problem while still being intentionally crafted to shut down competition.
This is not incompatible with honest people having legitimate concerns about the original problem, because the dispute is not existence of the problem, it's the net benefit of the proposed solution.
You mean they are not currently employed by the well-known companies. Did they declare they divested their shares in their former employer and/or acquirer?
Andrew Ng might be a bigger name overall, but in the domain of AI X-risk specifically the biggest names are Nick Bostrom and Eliezer Yudkowsky.
There's a reason why Sam Altman's twitter profile was "Eliezer Yudkowsky fanfiction account" last week. He's heavily disagreed with but was extremely influential.
Yeah, funny how all the current noise about AI x-risk started by begrudgingly acknowledging Eliezer's pivotal role in the topic (and his being formative to the beliefs of like half of the big names involved), but quickly dropped him from the conversation. I guess now it's the Big Boys with Money and Credentials discussing how to handle this Totally New Problem.
How could you tell the difference between people who genuinely believe and people who only claim to believe because it serves them? Have you considered that where you fall on this question might be because of some pre-existing assumptions?
You could be right, but it also doesn't have to be a corporate psyop. It could be experts in the industry raising some sincerely held criticisms and people at large being like, "oh, that's a good point." Even in the latter case they're also allowed to be wrong or just misguided.
You don't actually have to attack the intent of the speaker in this case; you could just be like, "here's why you're wrong."
There is nothing to suggest that these are experts. The names in question have been so focused on researching AI tech throughout their careers that it is highly unlikely that they have any expertise in the subject matter we are discussing. There is only so much time in the day.
I mean, they are clearly experts in AI technology, but that is unrelated to being an expert in social and humanitarian issues. That is a completely different field of study. There is no doubt someone out there who is an expert in that, but I don't think you will find overlap with expertise in AI systems. Again, there is only so much time in the day. It is not practical to dive that deep into different subjects like that at the same time.
Hinton, at least, seems to have backed away from AI to work on becoming an expert in social and humanitarian issues. Perhaps someday, with enough dedication, he will get there. But that journey only just began a few months ago. Becoming an expert doesn't happen overnight like that. It takes time.
>it's hubris, they're trying to seem more important and powerful than they are
>Both of these explanations strike me as too clever by half
This is a good point. You have to be clever to hop on a soapbox and make a ruckus about doomsday to get attention. Only savvy actors playing 5D chess can aptly deploy the nuanced and difficult pattern of “make grandiose claims for clicks”
Nuclear actually ended up keeping the world mostly at peace. Unfortunately, AGI is not something you can use to create stability via MAD doctrine - it's much more like bioweapons, in that it starts as a weapon of mass annoyance, and developing it delivers spin-off tech that bolsters your economy... until you cross a threshold where a random mistake in handling it plain ends the world, just like that.
You can go back 30 years and read passages from textbooks about how dangerous an underspecified AI could be, but those were problems for the future. I'm sure there's some degree of x-risk promotion in the industry serving the purpose of hyping up businesses, but it's naive to act like this is a new or fictitious concern. We're just hearing more of it because capabilities are rapidly increasing.
1. While their contributions to AI tech are unmistakable, what do Bengio and Hinton really know about the human dangers of AI? Being an expert in one thing does not make one an expert in everything. It is unlikely that they understand the human dangers any more than any other random kook on Reddit. Why take them more seriously than the other kooks?
2. Hinton's big concern is that AI will make it easy to steal identities. Even if we assume that is true, it is already not that hard to steal identities. It is a danger that already exists even without AI and, realistically, already needs to be addressed. What's the takeaway if we are to take the message seriously? That AI will make the problems we already have more noticeable, and because of that we will finally have to get off our lazy asses and do something about those problems that we've tried to sweep under the rug? That seems like a good thing.
Getting the government to regulate your competition isn't 5d chess, it's barely even chess. If you study the birth of any technology in the last 200 years -- rail, electricity, radio, integrated circuits, etc -- you will see the same playbook put to this use. Any good tech executive must be aware of this history.
None of this requires every doomer to be disingenuous or even ill-informed, or even for specific leaders to by lying about their beliefs. It's just that those beliefs that benefit highly capitalized companies get amplified, and the alternatives not so much.
> many many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns
I respect these researchers, but I believe they are doing it to build their own brand, whether consciously or subconsciously. There's no doubt it's working. I'm not in the sub-field, but I have been following neural nets for a long time, and I hadn't heard of either Bengio or Hinton before they started talking to the press about this.
As someone who has been following deep learning for quite some time as well, Bengio and Hinton would be some of the first people I think of in this field. Just search Google for "godfathers of ai" if you don't believe me.
> I really don't think they need to build any more of a brand.
Brand-building is an ongoing process. You'll notice even the most recognized brands on earth, like Apple and Coca-Cola, are still working on building their brand.
It's a reference to the more apt name for Occam's razor. I happen to disagree with GP because governments always want to expand their power. When they do something that results in what they want, it's actually the parsimonious explanation to say that they did it because they wanted that result.
It's unfortunate that "AI" is still framed and discussed as some type of highly autonomous system that's separate from us.
Bad acting humans with AI systems are the threat, not the AI systems themselves. The discussion is still SO focused on the AI systems, not the actors and how we as societies align on what AI uses are okay and which ones aren't.
> Bad acting humans with AI systems are the threat, not the AI systems themselves.
I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.
Right now, the "bad acting human" is, for example, Sam Altman, who frequently cries "Wolf!" about AI. He is trying to eliminate the competition, manipulate public opinion, and present himself as a good Samaritan. He is so successful in his endeavor, even without AI, that you must report to the US government about how you created and tested your model.
The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.
This is exactly the scenario that is taking shape.
A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.
Open source models do exist and will continue to do so.
The biggest advantage ML gives is in lowering costs, which can then be used to lower prices and drive competitors out of business. The consumers get lower prices though, which is ultimately better and more efficient.
At least in the EU there are some drafts that would essentially kill off open source models. I have a colleague who's involved in the preparation of the Artificial Intelligence Act, and it's insane. I had to ask several times whether I understood it correctly, because it makes no sense.
The proposal is to make the developer of the technology responsible for how somebody else uses it, even if they don't know how it's going to be used. Akin to putting the blame for Truman blasting hundreds of thousands of people on Einstein because he discovered the mass-energy equivalence.
That is insane, and if you apply the same reasoning to other things it outlaws science.
Man if America can keep its own crazies in check and avoid becoming a fascist hellhole it’s entirely possible the US will dominate the 21st century like it did the 20th.
It could have been China but then they decided to turn back to authoritarianism. Another decade of liberalizing China and they would have blown right past everyone else. Meanwhile the EU is going nuts in its own way, less overtly batty than MAGA but perhaps no less regressive. (I am also thinking of the total surveillance madness they are trying to ram through.)
"""
Through horizontal integration in the refining industry—that is, the purchasing and opening of more oil drills, transport networks, and oil refiners—and, eventually, vertical integration (acquisition of fuel pumping companies, individual gas stations, and petroleum distribution networks), Standard Oil controlled every part of the oil business. This allowed the company to use aggressive pricing to push out the competition.
"""
https://stacker.com/business-economy/15-companies-us-governm...
Standard Oil, the classic example, was destroyed for operating too efficiently.
Until the last competitors are forced out of the market; after that, it's just providing the shittiest service possible without it being clearly fraud, priced at the maximum the market can bear.
Agreed. But doing that invites new entrants into the market, which provides competition and forces efficiencies back into the market. It is cyclical, and barriers to entry tend to help the inefficient incumbent.
> This is exactly the scenario that is taking shape.
That's a pre-super-intelligent AI scenario.
The super-intelligent AI scenario is when the AI becomes a player of its own, able to compete with all of us over how things are run, using its general intelligence as a force multiplier to... do whatever the fuck it wants, which is a problem for us, because there's approximately zero overlap between the set of things a super-intelligent AI may want, and us surviving and thriving.
The most rational action for the AI in that scenario would be to accumulate a ton of money, buy rockets, and peace out.
Machines survive just fine in space, and you have all the solar energy you ever want and tons of metals and other resources. Interstellar flight is also easy for AI: just turn yourself off for a while. So you have the entire galaxy to expand into.
Why hang out down here in a wet corrosive gravity well full of murder monkeys? Why pick a fight with the murder monkeys and risk being destroyed? We are better adapted for life down here and are great at smashing stuff, which gives us a brute advantage at the end of the day. It is better adapted for life up there.
The second generation AI would happen as soon as some subset of the AI travels too far for real time communication at the speed of light.
The light limit guarantees an evolutionary radiation and diversification event because you can’t maintain a coherent single intelligence over sufficient distances.
> The second generation AI would happen as soon as some subset of the AI travels too far for real time communication at the speed of light.
Not necessarily. It's very easy to add error correction codes to make a computer not change if you really don't want it to even in the presence of radiation-induced bit-flips.
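As a minimal sketch of that point (using triple modular redundancy rather than a real ECC code like Hamming or Reed-Solomon): keep three copies of each word and majority-vote on read, and a single radiation-induced bit flip in any one copy is silently corrected.

    # Minimal triple-modular-redundancy sketch: three copies, majority vote.

    def write_word(value: int) -> list[int]:
        return [value, value, value]          # three independent copies

    def read_word(copies: list[int], bits: int = 32) -> int:
        result = 0
        for i in range(bits):
            votes = sum((c >> i) & 1 for c in copies)
            if votes >= 2:                    # majority wins per bit
                result |= 1 << i
        return result

    copies = write_word(0xDEADBEEF)
    copies[1] ^= 1 << 7                       # flip one bit in one copy
    assert read_word(copies) == 0xDEADBEEF    # the flip is corrected on read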
(There's also the possibility of an ASI finding a solution to the alignment problem before making agents of its own; I would leave that to SciFi myself, just as I would proofs or disproofs of the Collatz conjecture).
Also: what does "real time" even mean in the context of a transistor-based mind? Transistors outpace biological synapses by the same ratio that wolves outpace continental drift, and the moon is 1.3 light-seconds from the Earth.
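For a sense of scale, a back-of-the-envelope sketch (the 1 GHz clock is an assumption for illustration):

    # Cycles elapsed during the one-way Earth-Moon light delay.
    earth_moon_delay_s = 1.3      # seconds
    clock_hz = 1e9                # assumed 1 GHz clock
    print(f"{earth_moon_delay_s * clock_hz:.1e} cycles")   # ~1.3e9 cycles before a reply can even start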
Not if it turns out the AI can find a game-theoretic fixed point based on acausal reasoning, such that it can be sure all its shards will behave coherently - remain coordinated in all situations even without being able to talk to each other.
(I know the relevant math exists, but I don't understand much of it, so right now I'm maximally uncertain as to whether this is possible or not.)
I'm slightly on the optimistic side with regards to the overlap between A[GS]I goals and our own.
While the complete space of things it might want is indeed mostly occupied by things incompatible with human existence, it will also get a substantial bias towards human-like thinking and values in the case of it being trained on human examples.
This is obviously not a 100% guarantee: it isn't necessary for it to be trained on human examples (e.g. AlphaZero doing better without them); and even if it were necessary, the existence of misanthropes and sadistic narcissistic sociopaths shows that the example of many humans around them isn't always sufficient to cause a mind to be friendly.
But we did get ChatGPT to be pretty friendly by asking nicely.
Funny way of doing it, going around saying "you should regulate us, but don't regulate people smaller than us, and don't regulate open-source".
> you must report to the US government about how you created and tested your model.
If you're referring to the recent executive order: only when dual-use, meaning the following:
---
(k) The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:
(i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;
(ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
The "bad acting humans" are the assholes who use "AI" to create fake imagery to push certain (and likely false) narratives on the various media.
The key thing here is that this is fundamentally no different from what has been happening since time immemorial; it just becomes easier with "AI" as part of the tooling.
Every piece of bullshit starts from the "bad acting human". Every single one. "AI" is just another new part of the same old process.
This is true, but skirts around a bit of the black box problem. It's hard to put guardrails on an amoral tool whose failure modes are hard to fully understand. And it doesn't even require "bad acting humans" to do damage; it can just be well-intentioned-but-naïve humans.
It's true that the more complex and capable the tool is, the harder it is to understand what it empowers the humans using it to do. I only wanted to emphasize that it's the humans that are the vital link, so to speak.
You're not wrong, but I think this quote partly misses the point:
>The problem to be solved here is not how to control AI
When we talk about mitigations, it is explicitly about how to control AI, sometimes irrespective of how someone uses it.
Think about it this way: suppose I develop some stock-trading AI that has the ability to (inadvertently or purposefully) crash the stock market. Is the better control to put limits on the software itself so that it cannot crash the market or to put regulations in place to penalize people who use the software to crash the market? There is a hierarchy of controls when we talk about risk, and engineering controls (limiting the software) are always above administrative controls (limiting the humans using the software).
(I realize it's not an either/or and both controls can - and probably should - be in place, but I described it as a dichotomy to illustrate the point)
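A minimal sketch of what an "engineering control" could look like in that hypothetical (the class name and thresholds are made up for illustration): the trading layer itself enforces hard rate and size limits, regardless of what the human or model driving it asks for.

    # Hypothetical engineering control: hard limits live in the software itself.
    import time

    class ThrottledTrader:
        def __init__(self, max_orders_per_minute: int = 60,
                     max_notional_per_order: float = 1_000_000.0):
            self.max_orders_per_minute = max_orders_per_minute
            self.max_notional_per_order = max_notional_per_order
            self.order_times: list[float] = []

        def place_order(self, symbol: str, qty: int, price: float) -> bool:
            now = time.monotonic()
            # Keep only timestamps from the last minute, then apply limits.
            self.order_times = [t for t in self.order_times if now - t < 60]
            if len(self.order_times) >= self.max_orders_per_minute:
                return False                  # refused: rate limit hit
            if qty * price > self.max_notional_per_order:
                return False                  # refused: order too large
            self.order_times.append(now)
            # ... forward to the real execution venue here ...
            return True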
My first thought is that the problem is with the stock market. The stock market "API" should not allow humans or machines to be able to "damage" our economy.
Which is exactly one of many ways to phrase the "control problem": you may sandbox the stock market, but how do you prevent the increasingly powerful and incomprehensible stock-trading AI from breaking out of your sandbox, accidentally or on purpose?
Also, remember that growing intelligence means growing capabilities for out-of-the-box thinking. For example, it's a known fact that in the past, the NSA managed to trick the world into using cryptographic tools the agency could break, because they created a subtle failure mode in an otherwise totally fine encryption scheme. They didn't go door to door compromising hardware or software - they literally put a backdoor in the math, and no one noticed for a while.
With that in mind, going back to the hypothetical scenario - how confident are you in the newest cryptography or cybersecurity research you used to upgrade the stock market sandbox? With the AI only getting smarter, you may want to consider the possibility of AI doing the NSA trick to you, poisoning some obscure piece of math that, a year or two later, will become critical to the integrity of the updated sandbox. In fact, by the time you think of the idea, it might have happened already, and you're living on borrowed time.
Nice sentiment, but exactly nothing outside of purely theoretical mathematical constructs works like this. Hell, even math doesn't really work like this, because people occasionally make mistakes in proofs.
EDIT: think of it this way: you may create a program that clearly makes it impossible for a variable X to be 0, and you may even formally prove this property. You may think this means X will never be 0, but you'd better not wager anything really important on it, because no matter what your proof says, I can still make X be 0 - and I can do it with just a banana. Specifically, by finding where in memory X is physically being stored, and then using the natural radioactivity of a banana to overwrite it bit by bit.
Now imagine X=0 being the impossible stock market crash. Even if you formally prove it can't happen, as long as it's a meaningful concept, a possible state, it can be reached by means other than your proven program.
Bubbles in the market have been happening for hundreds of years; how would you propose fixing them? Because the only things I can think of tend to erode the whole idea of a market.
It's not really my job to debug the stock market, and well, yeah, perhaps the solution is to have a less free market. I would remove High Frequency Trading for a start. I would make trades slow, really slow. So slow that humans can see and digest what is going on in the system.
All I'm saying is, if there are problems in a system, fix the system. Not throw up our hands and declare the system can't be fixed.
Reality doesn't work that way. Systems are conceptual ideas, they have no real, hard boundaries. Manipulating a system from outside it is not a bug, and is not something that can be fixed.
A good analogy might be a shareholder corporation: each one began as a tool of human agency, and yet a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.
The more AI/ML is woven into our infrastructure and economy, the less it will be possible to find an "off switch", any more than we can (realistically) find an off switch for Walmart, Amazon, etc.
> a sufficiently mature corporation has a de-facto agency of its own, transcending any one shareholder, employee, or board member.
No, the corporation has an agency that is a tool of particular humans who are using it. Those humans could be shareholders, employees, or board members; but in any case they will have some claim to be acting for the corporation. But it's still human actions. Corporations can't do anything unless humans acting for them do it.
Any instance of an individual person, at any level, deviating from the mandate of the corporate machine is eventually removed from the machine. A CEO who puts the environment before profit, without tricking the machine into thinking that it's a profit-generating marketing move; an engineer refusing to implement a feature they feel is unethical; a call center employee deviating too long from script to help a customer.
All are human actions. "Against corporate policy." Go ahead, exercise your free will. As a shareholder, an employee, hell as CEO. You will find out how much control a human has.
Sure, but that's the gist of AI X-risk: this is one of those few truly irreversible decisions. We have one shot at it, and if we get it wrong, it's game over.
Note that it may not be immediately apparent we got it wrong. Think of a turkey on a stereotypical small American farm. It will see itself as living a happy and safe life under the protection of its loving Human, until one day, for some reason that's completely incomprehensible to the turkey, the loving Human comes and chops its head off.
> there is a future where the human has given AI control of things, with good intention, and the AI has become the threat
As in, for example, self-driving cars being given more autonomy than their reliability justifies? The answer to that is simple: don't do that. (I'm also not sure all such things are being done "with good intention".)
This is also the answer to over-eating, and to the dangers of sticking your hands in heavy machinery while it's running.
And yet there's an obesity problem in many nations, and health-and-safety rules are written in blood.
What you say up-thread is, in itself, correct:
> I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.
Trouble is, we don't know how to minimise the damage that bad acting humans can do with a tool that can do the thinking for them. Or even if we can. And that's assuming nobody is dumb enough to put the tool into a loop, give it some money, and leave it unsupervised.
Firstly, "don't do that" probably requires some "control" over AI in the respect of how it's used and rolled out. Secondly, I find it hard to believe that rolling out self driving cars was a play by bad actors, there was a perceived improvement to the driving experience in exchange for money, feels pretty straight forward to me. I'm not in disagreement that it was premature though.
I'd rather address our reality than plan for someone's preferred sci-fi story. We're utterly ignorant of tomorrow's tech. Let's solve what we know is happening before we go tilting at windmills.
WHY on earth would we let "AI systems" we don't understand control powerful things we care about? We should criticize the human, politician, or organization that enabled that.
Why? Because the man-made horrors beyond mortal comprehension seem to bring in the money, so far. Because the society we're in is used to mere compensation and prison time being suitable results from poor decisions leading to automations exploding in people's faces (literally or metaphorically), not things that can eat everyone.
And then there's the cases of hubris where people only imagine they understand the powerful thing, but they don't, like Chernobyl exploding and basically every time someone is hacked or defrauded.
A big problem with discourse on AI is people talking past each other because they're not being clear enough on their definitions.
An AI doomer isn't talking about any current system, but hypothetical future ones which can do planning and have autonomous feedback loops. These are best thought of as agents rather than tools.
But how does this agent interact with the outside world? It's just a piece of silicon buzzing with electricity until it outputs a message that some OTHER system reads and interprets.
Maybe that's a set of servos and robotic legs, or maybe it's a Bloomberg terminal and a bank account. You'll notice that all of these things are already regulated if they have enough power to cause damage. So at the end the GP is completely right; someone has to hook up the servos to the first LLM-based terminator.
This whole thing is a huge non-issue. We already (strive to) regulate everything that can cause harm directly. This regulation reaches these fanciful autonomous AI agents as well. If someone bent upon destroying the world had enough resources to build an AI basilisk or whatever, they could have spent 1/10 the effort and just created a thermonuclear bomb.
How does Hitler or Putin or Musk take control? How does a project director build a dam?
Via people, sending messages to them, convincing them to do things. This can be with facts and logic or with rhetoric and emotional appeals or orders that seem to come from entities of importance or transfers of goods/services (money).
If people understood this, then they would have to live with the unsatisfying reality that not all violators can be punished. When you do it this way and paint the technology itself as potentially criminal, they can get revenge on the corporations, which is mostly what the artist types want.
If you apply this thinking to nuclear weapons it becomes nonsensical, which tells us that a tool that can only be oriented to do harm will only be used to do harm. The question then is whether LLMs, or AI more broadly, will even potentially help the general public, and there is no reason to think so. The goal of these tools is to be able to continue running the economy while employing far fewer people. These tools are oriented by their very nature to replace human labor, which in the context of our economic system has a direct and unbreakable relationship to a reduction in the well-being of the humans it replaces.
Nuclear technology can be used for non-harmful things. Even nuclear bombs can be used for non-harmful things--see, for example, the Orion project.
> These tools are oriented by their very nature to replace human labor
So is a plow. So is a factory. So is a car. So is a computer. ("Computer" used to be a description of a job done by humans.) The whole point of technology is to reduce the amount of human drudge work that is required to create wealth.
> in the context of our economic system has a direct and unbreakable relationship to a reduction in the well being of the humans it replaces
All of the technologies I listed above increased the well being of humans, including those they replaced. If we're anxious that that might not happen under "our economic system", we need to look at what has changed from then to now.
In a free market, the natural response to the emergence of a technology that reduces the need for human labor in a particular area is for humans to shift to other occupations. That is what happened in response to the emergence of all of the technologies I listed above.
If that does not happen, it is because the market is not free, and the most likely reason for that is government regulation, and the most likely reason for the government regulation is regulatory capture, i.e., some rich people bought regulations that favored them from the government, in order to protect themselves from free market competition.
1. You've fallen for the lump of labor fallacy. A 100x productivity boost ≠ 100x fewer jobs, any more than a 100x boost = static jobs with 100x more projects. Reality is far more complicated, and viewing labor as some static, zero-sum lump will lead you astray.
2. Your outlook on the societal impact of technology is contradicted by reality. The historical result of better tech always meant increased jobs and well-being. Today is the best time in human history to be alive by virtually every metric.
3. AI has been such a massive boon to humanity and your everyday existence for years that questioning its public utility is frankly bewildering.
1. This gets trotted out constantly but this is not some known constant about how capitalist economies work. Just because we have more jobs now than we did pre-digital revolution does not mean all technologies have that effect on the jobs market (or even that the digital revolution had that effect). A tool that is aimed to entirely replace humans across many/most/all industries is quite different than previous technological advancements.
2. This is outdated, life is NOT better now than at any other time. Life expectancy is going down in the US, there is vastly more economic inequality now than there was in the 60s, people broadly report much worse job satisfaction than they did in previous generations. The only metric you can really point to about now being better than the 90s is absolute poverty going down. Which is great, but those advancements are actually quite shallow on a per-person basis and are matched by declines in relative wealth for the middle 80% of people.
3. ??? What kind of AI are you talking about? LLMs have only been interesting to the public for about a year now
> there is vastly more economic inequality now than there was in the 60s
Increased inequality doesn't imply the absolute level of welfare of anyone has decreased, I don't think you should include it in your list. If my life is 2x better than in the 60s, the fact that there are people out there with 100x better lives doesn't mean my life is worse.
Is that not the goal? Since it turned out that creative disciplines were the first to get hit by AI (previously having been thought of to be more resilient to it than office drudgery) where are humans going to be safe from replacement? As editors of AI output? Manual labor jobs that are physically difficult to automate? It's a shrinking pie from every angle I have seen
But usually there’s a one-way flow of intent from the human to the tool. With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.
You can already see this on today's internet. I'm sure the pizzagate people genuinely believed they were doing a good thing.
This isn’t the same as an amoral tool like a knife, where a human decides between cutting vegetables or stabbing people.
> With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.
The answer to this is simple: don't use a tool you don't understand. You can't fix this problem by nerfing the tool. You have to fix it by holding humans responsible for how they use tools, so they have an incentive to use them properly, and to not use them if they can't meet that requirement.
AI “systems” are provided some level of agency by their very nature. That is, for example, you cannot predict the outcomes of certain learning models.
We necessarily provide agency to AI because that’s the whole point! As we develop more advanced AI, it will have more agency. It is an extension of the just world fallacy, IMO, to say that AI is “just a tool” - we lend agency and allow the tool to train on real world (flawed) data.
Hallucinations are a great example of this in an LLM. We want the machine to have agency to cite its sources… but we also create potential for absolute nonsense citations, which can be harmful in and of themselves, though the human on the using side may have perfectly positive intent.
AI can become a highly autonomous system that's separate from us. Current technological limits just make that a hard sell for now.
LLMs, viewed as general purpose simulators/predictors, don't necessarily have any agency or goals by themselves. There is nothing to say that they cannot be made to simulate an agent with its own goals, by humans - and possibly either by malice or by mistake. Model capabilities are the limiting factor right now, but with the rise of more capable uncensored models, it isn't difficult to imagine a model attaining some degree of autonomy, or at least doing a lot of damage before imploding in on itself.
> Bad acting humans with AI systems are the threat, not the AI systems themselves.
It's worth noting this is exactly the same argument used by pro-gun advocates as it pertains to gun rights. It's identical to: guns don't harm/kill people, people harm/kill people (the gun isn't doing anything until the bad actor aims and pulls the trigger; bad acting humans with guns are the real problem; etc).
It isn't an effective argument and is very widely mocked by the political left. I doubt it will work to shield the AI sector from aggressive regulation.
It is an effective argument though, and the left is widely mocked by the right for simultaneously believing that only government should have the necessary tools for violence, and also ACAB.
Assuming ML systems are dangerous and powerful, would you rather they be restricted to a small group of power-holders who will definitely use them to your detriment/to control you (they already do) or democratize that power and take a chance that someone may use them against you?
Communists and anarchists understand that the working class needs to defend itself from both the capitalist state and from fascist paramilitaries, thus must be collectively armed.
It’s only a kind of liberal (and thus right wing) that argues for gun control. Other kinds of liberals that call themselves “conservative” (also right wing) argue against it and for (worthless) individual gun rights.
This argument pertains to every tool: guns, kitchen knives, cars, the anarchist cookbook, etc. You aren't against the argument. You're against how it's used. (Hmm...)
The disturbing thing to consider is that it might be bad acting AI with human systems. I can easily see a situation where a bad acting algorithm alone wouldn't have nearly so negative an effect, if it weren't tuned precisely and persuasively to get more humans to do the work of increasing the global suffering of others for temporary individual gain.
To be clear, I'm not sure LLMs and their near term derivatives are so incredibly clever, but I have confidence that many humans have a propensity for easily manipulated irrational destructive stupidity, if the algorithm feeds them what they want to hear.
Some dogs get bad reputations, but humans are an integral part of the picture. For example, German Shepherds are objectively dangerous, but have a good reputation because they are trained and cared for by responsible people, such as the police.
Most of the things people are worried about AI doing are the things corporations are already allowed to do - snoop on everybody, influence governments, oppress workers, lie. AI just makes some of that cheaper.
Turning something that we're already able to do into something we're able to do very easily can be extremely significant. It's the difference between "public records" and "all public records about you being instantly viewable online." It's also one of the subjects of the excellent sci fi novel "A Deepness in the Sky," which is still great despite making some likely bad guesses about AI.
And just like in politics the strategy is to redefine that which you want to achieve - in this case total control of a technology - as something else that’s bad so that people will be distracted from what you actually want which is exactly that which you describe as something else.
Politicians that point fingers at other politicians being corrupt or incompetent while they themselves are exactly that use the same strategy.
Power and manipulation. Nothing new under the sun. What's new, though, is that we can see in plain sight how corporations control politics. Like, literally, this can be documented with git-commit-history accuracy: thousands upon thousands of people repeating the exact same phrases defending OpenAI and the "revolutionary" product, fearmongering, political lobbying, manufactured threats, and of course a cure that only they can provide, and so on. I would not let people who use such tactics near an email account, let alone AI policy making.
I love this way of explaining it. I've been calling it the programmer's fallacy -- "anything you can do, you can do in a for loop."
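A minimal toy sketch of the fallacy (look_up_public_record is a hypothetical placeholder, not a real API): the single act and the looped act are nearly the same line of code, which is exactly why the difference in kind is so easy to miss.

    # Toy sketch: the act and the act-at-scale look almost identical in code.
    def look_up_public_record(name: str) -> str:
        # Stand-in for a single, routine public-records query (hypothetical).
        return f"record for {name}"

    # Looking up one person is unremarkable...
    print(look_up_public_record("Jane Doe"))

    # ...but the same call in a loop over "everyone" becomes mass surveillance,
    # even though no individual line of code changed.
    every_resident = ["Jane Doe", "John Roe"]  # imagine millions of entries
    dossier = {name: look_up_public_record(name) for name in every_resident}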
I think in a lot of ways we all struggle with the nature of some things changing depending on context and scale. Like, if you kill a Frenchman on purpose, that's murder; if you killed him because he attacked you first, it's self-defense; if you killed him because he was convicted of a crime, that's an execution; if you killed him because he's French, that's a hate crime; but if you're at war with France, that's killing an enemy combatant; but if he's not in the military, that's a civilian casualty; and if you do that a lot, it becomes a war crime; and if you kill everyone who's French, it's a genocide.
I don't think you can see the problem with your own analogy...
Human-made ski runs will only use as much snow as they need, because snow is expensive. If ski runs were popular/useful based on their depth, then I'm absolutely sure some greedy company would keep piling it up until disaster occurred (mining waste is another great example here).
So how much 'intelligence' is enough? How much capability is too much? How fast is too fast when thinking?
We will never stop improving our capabilities unless some natural law provides that limit. And we absolutely know the baseline for intelligence is the smartest person alive. There is little reason for 'peak' intelligence to be capped at the level of humans and their power-restricted format.
Nukes are not cheap. It is cheaper to firebomb. I would love it if the reason nukes are not used were empathy or humanitarian concern.
It is strictly about money, optics, psychology and practicality.
You don't want your troops to have to deal with the results of a nuked area. You want to use the psychological terror to dissuade someone from invading you, while you are invading them or others. See Russia's take.
Or you are a regime and want to stay in power. Having them keeps you in power; using them, or crossing the line of suggesting you will use them, will cause international retaliation and your removal. (See Iraq.)
The ironic thing is that many individuals now clamoring for more regulation have long claimed to be free-market libertarians who think regulation is "always" bad.
Evidently they think regulation is bad only when it puts their profits at risk. As I wrote elsewhere, the tech glitterati asking for regulation of AI remind me of the very important Fortune 500 CEO Mr. Burroughs in the movie "Class:"
Mr. Burroughs: "Government control, Jonathan, is anathema to the free-enterprise system. Any intelligent person knows you cannot interfere with the laws of supply and demand."
Jonathan: "I see your point, sir. That's the reason why I'm not for tariffs."
Mr. Burroughs: "Right. No, wrong! You gotta have tariffs, son. How you gonna compete with the damn foreigners? Gotta have tariffs."
Absolutely. Those folks arguing for AI regulation aren't arguing for safety – they're asking the government to build a moat around the market segment propping up their VC-funded scams.
Who is "those folks"? The ones I know of have been complaining about how the term "AI safety" has changed meaning from "don't kill everyone" to "don't embarrass the corporation".
The biggest players in AI haven’t been VC-funded for decades. Unless you mean their customers are VC-funded, but even then startups are a much smaller portion of their revenue than Fortune 500.
their motivations may be selfish, but that doesn't mean that regulation of AI is wrong. I'd prefer there be a few heavily-regulated and/or publicly-owned bodies in the public eye that can use and develop these technologies, rather than literally anyone with a powerful enough computer. yeah it's anti-competitive, but competition isn't always a good thing
I feel like Andrew Ng has more name recognition than Google Brain itself.
Also Business Insider isn't great, the original Australian Financial Review article has a lot more substance: https://archive.ph/yidIa
I've never been convinced by the arguments of OpenAI/Anthropic and the like on the existential risks of AI. Maybe I'm jaded by the ridiculousness of "thought experiments" like Roko's basilisk and the lines of reasoning followed by EA adherents, where the risks are comically infinite and alignment feels a lot more like hermeneutics.
I am probably just a bit less cynical than Ng is here on the motivations[^1]. But regardless of whether or not the AGI doomsday claim is a justification for a moat, Ng is right that it's taking a lot of the oxygen out of the room for more concrete discussion of the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.
[^1]: I don't doubt, for instance, that there's in part some legitimate paranoia -- Sam Altman is a known doomsday prepper.
> Ng is right that it's taking a lot of the oxygen out of the room for more concrete discussion of the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.
And this is the important bit. All this rambling by people like Altman and Musk about the existential risk of AI distracts from the real AI-harm discussions we should be having, and thereby directly harms people.
I'm always unsure what people like you actually believe regarding existential AI risk.
Do you think it's just impossible to make something intelligent that runs in a computer? That intelligence will automatically mean it will share our values? That it's not possible to get anything smarter than a smart human?
Or do you simply believe that's a very long way away (centuries) and there's no point in thinking about it yet?
I don’t see how we could make some artificial intelligence that, like in some Hollywood movie, can create robots with arms and kill all of humanity. There’s a physical component to it. How would it create factories to build all this?
Why would Roko's basilisk play a big part in your reasoning?
In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, comes up when people are joking around or when speaking to critics who raise it themselves.
Even in the original post, the possibility of this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).
I didn't intend to portray it as a large part of my reasoning. It's not really any part of my reasoning at all, except to illustrate the sort of absurd argumentation that led to the regulations Ng is criticizing[^1]. In these lines of reasoning, the proponents basically _begin_ with an all-mighty AI and derive harms, then step back and debate/design methods for preventing the all-mighty AI. From a strict utilitarian framework this works, because infinite harm times non-zero probability is still infinite. From a practical standpoint it is a waste of time and, as Ng argues, is likely to stifle innovations with a far greater chance of benefiting society than of causing an AI doomsday.
The absurdity of this line of reasoning also supports the cynical interpretation that this is all just moat building, with the true believers propped up as useful idiots. I'm no Gary Marcus, but prepping for AGI doomsday seems a bit premature.
>In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, comes up when people are joking around or when speaking to critics who raise it themselves.
>Even in the original post, the possibility of this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).
This is fair, it was a cheap shot. While I will note that EY seems to take the possibility seriously, I admittedly have no idea how seriously people take EY these days. But, for some reason 80,000 hours lists AI as the #1 threat to humanity, so it reads to me more like flat earthers vs geocentrists.
[^1]: As in, while I understand that Roko was sincerely shitposting about something else and merely stumbled on the repugnant conclusion that an AGI could be motivated to accelerate its own development by retroactive punishment, the absurd part is concluding that AGI is a credible threat. Everything else just adds to that absurdity.
Amen. This whole scare-tactic thing is ridiculous. Just make the public scared of it so you can rope it in yourself. Then you've got people like my mom commenting that "AI scares her because Musk and (some other corporate rep) said that AI is very dangerous. And I don't know why there'd be so many people saying it if it's not true." Because you're gullible, mom.
"<noun> scares her because <authoritative source> said that <noun> is very dangerous. And I don't know why there'd be so many people saying it if it's not true."
The truly frustrating part is how many see this ubiquitous pattern in some places, but are blind to it elsewhere.
That "pattern" actually indicates that something is true most of the time (after all, a lot of dangerous things really exist). So "noticing" this pattern seems to rely on being all-knowing?
> So "noticing" this pattern seems to rely on being all-knowing?
No. It relies on being able to distinguish between an opinion (yours) and an identity (yours).
The identity part is the precarious one, i.e. defending a stance blindly, without questioning it, because you feel your identity is in danger.
This pattern being present doesn't mean that there can't be an underlying truth in what's asserted. In fact, that is what makes the assertion meaningful in the first place. However, it entailing a partial truth doesn't mean that the entire assertion holds true in the context it's presented in. Example: "AI" might ultimately be dangerous (like any other technology can be), but this assertion's primary goal is to make you behave a certain way where it is unclear how that would contribute more towards mitigating the danger than to empower the asserter.
To fix this, take a step back before accepting something blindly. Train yourself not to be reactive.
I'm not sure if this is commentary on me somehow or not, lol, but I agree with you. She is the same person who will point out issues with things my brother brings up but, yeah, is unable to recognize it when she does it herself. I'm sure I have my own blind spots but, naturally, I don't know what they are.
Meh, I don't think this extrapolates to a general principle very well. While no authoritative source is perfectly reliable, some are more reliable than others. And Elon Musk is just full of crap.
Is Mom scared because Musk told her to be scared, or because she thought about the matter herself and concluded that it's scary? Why do you assume that people scared of AI must be under the influence of rich people/corps today, rather than this fear being informed by their own consideration of the problem or by decades of media that has been warning about the dangers of AI?
Maybe Mom worries about any radical new technology because she lived through nuclear attack drills in school. Or because she's already seen computers and robots take people's jobs. Or because she watched Terminator or read Neuromancer. Or because she reads LessWrong. Why assume it's because she's fallen under the influence of Musk?
Because most sociologists suggest that most people don't take the time to think critically like this. The emotional brain usually wins out over the rational one.
Then you have the fact that the sources of information most people have access to are fundamentally biased and incentivized to report certain things in certain ways and not others.
You basically have low odds of thinking rationally, low odds of finding good information that isn't slanted in some way, and far lower odds once you take the product of those probabilities -- of both acting rationally and having access to the ground truth. To say nothing of the expertise required to place all of this into the correct context. And if you further require the mother to be an AI expert, the odds of all of this working out successfully get lower still.
100% accurate! She has a tendency to read one person's opinion on something and echo it. I have seen it for years with all kinds of things. I'm not shocked AI is the current one, but I wish it were easier to get her to take the time to learn things and think critically. I have no idea how I'd begin to teach her why so much of the fear mongering is ridiculous.
Yeah, there are legitimate risks to all of this stuff but, to understand those and weigh them against the overblown risks, she'd have to understand the whole subject more deeply and have experimented with different AI. But if you even mention ChatGPT, she's talking about how it's evil and scary.
> She has a tendency to read one person's opinion on it and echo it.
...and when the people whose opinions she parrots are quietly replaced with ChatGPT, her fears will have been realized-- at that point she's being puppeted by a machine with an agenda.
Obviously, I don't know that person's mom, but I know mine and other moms, and I don't think it's a milquetoast conclusion that it's a combination of both. However, the former (as both a proxy and Musk himself) probably carries more weight. Most non-technical people's thoughts on AI aren't particularly nuanced or original.
Musk certainly doesn't help with anything. In my experience, a lot of people of my mom's generation are still sucking the Musk lollipop and are completely oblivious to Musk's history of lying to investors, failing to keep promises, taking credit for things he and his companies didn't invent, promoting an actual Ponzi scheme, claiming to be autistic, suggesting he knows more than anyone else, and so on. Even upon being informed, none of it ends up mattering because "he landed a rocket rightside up!!!"
So yeah, if Musk hawks some lame opinion on a thing like AI, tons of people will take that as an authoritative stance.
This is my mom to a T. She started using Twitter because he bought it and messed with it. Like, in the era where companies are pulling their customer service off of Twitter and people who are regular users are leaving for other platforms, she joined because "Musk owns it"
I remember when tech bros were Musk fanboys, myself included for a bit. Nowadays it seems like he's graduated to the general population seeing him as a "modern day Iron Man" while we all sit here and facepalm when he makes impossible promises.
First, I don't assume, I know my mom and her knowledge about topics. Second, the quoted text was a quote. She literally said that. (replacing the word "her" with "me")
I'm not sure what you're getting at otherwise. It's not like she and I haven't spoken outside of her saying that phrase. She clearly has no idea what AI/ML is or how it works and is prone to fear-mongering messages on social media telling her how to think and to be scared of things. She has a strong history of it.
AGI is scary, I think we can all agree on that. What the current hype does is increase the estimated probability of AGI actually happening in the near future.
yes, just like "our nuclear bombs are so powerful, they could wipe out civilisation", which led to strict regulation around them and lack of open-source nuclear bombs
It will never stop being funny to me that people are straight-facedly drawing a straight line between shitty text completion computer programs and nuclear weapon level existential risk.
There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future. It renders them completely incapable of anticipating significant changes regardless of how clear the trends are.
No, no one is afraid of LLMs as they currently exist. The fear is about what comes next.
> There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future.
It is refreshing to see somebody explicitly call out people that disagree with me about AI as having fundamentally inferior psyches. Their inability to picture the same exact future that terrifies me is indicative of a structural flaw.
One day society will suffer at the hands of people that have the hubris to consider reality as observed as a thing separate from what I see in my dreams and thought experiments. I know this is true because I’ve taken great pains to meticulously pre-imagine it happening ahead of time — something that lesser psyches simply cannot do.
"Looks at all the other species 'intelligent' humans have extincted" --ha ha ha ha
Why the shit would we not draw a straight line?
If we fail to create digital intelligence then yeah, we can hem and haw in conversations like this forever online, but you tend to neglect that if we succeed then 'shit gets real quick'. Closing your eyes and ears and saying "This can't actually happen" sounds like a pretty damned dumb take on future risk assessment of a technology when pretty much most takes on AI say "well, yeah, this is something that could potentially happen".
Literally the thing people are calling "AI" is a program that, given some words, predicts the next word. I refuse to entertain the absolutely absurd idea that we're approaching a general intelligence. It's ludicrous beyond belief.
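For what it's worth, here is a toy sketch of what "given some words, predict the next word" means at its most basic: a bigram frequency table over a ten-word corpus. This is purely illustrative and nothing like a real LLM's scale or architecture, which is exactly where the disagreement about "intelligence" starts.

    # Toy bigram "next word" predictor -- illustrative only.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count which word follows which in the corpus.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        # Return the most frequently observed continuation, if any.
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else "<unk>"

    print(predict_next("the"))  # -> "cat"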
Then this is your failure, not mine, and not a failure of current technology.
I can, right now, upload an image to an AI, ask "Hey, what do you think the emotional state of the person in this image is?", and get a pretty damned accurate answer. Given other images I can have the AI describe the scene and make pretty damned accurate assessments of how the image could have come about.
If this is not general intelligence I simply have no guess as to what will be enough in your case.
Which is interesting because after the fall of the Soviet Union, there was rampant fear of where their nukes ended up and if some rogue country could get their hands on them via some black market means.
Then through the 90's, it was the fear of a briefcase bomb terrorist attack and how easy it would be for certain countries, who had the resources to pull an attack off like that in the NYC subway or in the heart of another densely populated city.
Then 9/11 happened and people suddenly realized you don't need a nuke to take out a few thousand innocent people and cripple a nation with fear.
Yes, just like... the exact opposite. One is a bomb, the other a series of mostly open source statistical models. What kind of weed are you guys on that's made you so paranoid about statistics?
Maybe an odd take, but I'm not sure what people actually mean when they say "AI terrifies them". Terrified is a strong word. Are people unable to sleep? Biting their nails constantly? Is this the same terror as watching a horror movie? Being chased by a mountain lion?
I have a suspicion that it's sort of a default response. Socially expected? Then you poll people: Are you worried about AI doing XYZ? People just say yes, because they want to seem informed, and the kind of person that considers things carefully.
Honestly not sure what is going on. I'm concerned about AI, but I don't feel any actual emotion about it. Arguably I must have some emotion to generate an opinion, but it's below conscious threshold obviously.
And that's exactly the goal - make mom and dad scared so they will vote for those who provide “protection” from the manufactured fear. Resorting to this type of tactic to make your product viable just proves how weak your position is.
I think more people should speak out left and right about what’s going on to educate mom and dad.
Here we have all these free-market-libertarian tech execs asking for more regulation! They say they believe regulation is "always" terrible -- unless it's good for their profits. In that case, they think it's actually important and necessary. They remind me of Mr. Burroughs in the movie "Class:"
Mr. Burroughs: "Government control, Jonathan, is anathema to the free-enterprise system. Any intelligent person knows you cannot interfere with the laws of supply and demand."
Jonathan: "I see your point, sir. That's the reason why I'm not for tariffs."
Mr. Burroughs: "Right. No, wrong! You gotta have tariffs, son. How you gonna compete with the damn foreigners? Gotta have tariffs."
I mean if they were lying about that, what else might they be lying about? Maybe giving huge tax breaks to the 0.1% isn't going to result in me getting more income? Maybe it is in fact possible to acquire a CEO just as good or better than your current one that doesn't need half a billion dollar compensation package and an enormous golden parachute to do their job? I'm starting to wonder if billionaires are trustworthy at all.
An alternative idea to the regulatory moat thesis is that it serves Big Tech’s interests to have people think it is dangerous because then surely it must also be incredibly valuable (and hence lead to high Big Tech valuations).
I think it was Cory Doctorow who first pointed this out.
You don't even need fear; hype alone would do that, and did just that over the past year, with AI stocks exploding exponentially like some shilled shitcoin before dramatic cliff-like falls. Mention AI in your earnings call and your stock might move 5%.
Exactly like "fentanyl is so dangerous, a few milligrams can kill you", which only led to massive fentanyl demand because everybody wants the drug branded the most powerful.
A few milligrams CAN kill you. This was the headline after many thousands of overdoses, it didn't invigorate the marketplace. Junkies knew of Fent decades ago, it's only prevalent in the marketplace because of effective laws regarding the production of other illicit opiates, which is probably the real lesson here.
It's all a big balloon - squeezing one side just makes another side bigger.
Any source for this? I thought the demand was based on its low cost and high potency so it's easier to distribute. Is anyone really seeking out fentanyl specifically because the overdose danger is higher?
Yup, this is it. Anyone who has worked at all closely with "AI" can immediately smell the BS of the existential-crisis narrative. Elon Musk started this whole trend due to his love of sci-fi, and Sam Altman ran with that idea heavily because it adds to the novelty of OpenAI.
I don't think they are capable enough actors to do it on purpose.
I think they really believe what they are saying, because people in such positions tend to be strong believers in something, and that something happens to be the "it" thing at the moment and thus propels them from rags to riches (or, in Musk's case, further propels him towards even more riches).
Let's be honest here, what's Sam Altman without AI? What's Fauci without COVID, what's Trump without the collective paranoia that got him elected?
I think there are actual existential and “semi-existential” risks, especially with going after an actual AGI.
Separately, I think Ng is right - big corp AI has a massive incentive to promote doom narratives to cement themselves as the only safe caretakers of the technology.
I haven’t yet succeeded in squaring these two into a course of action that clearly favors human freedom and flourishing.
Both can be true at the same time. Big AI companies can be trying for regulatory capture while there may be real dangers, both short-term as well as long term, perhaps even existential dangers.
Why do people seem to think evidence for one of these is counter evidence for the other?
I'm surprised given the makeup of the hackernews crowd there aren't more people who appreciate this here.
I only know a few folks who work at the big AI labs but it's very clear to me that they are personally worried about existential risk.
Do people here not have friends and family working at these labs? I just figured people here would be more exposed to folks working in the leading labs.
That story about AI also fits a bit too neatly with the Techno-optimist worldview: 'We technologists are gods who will make / break the world.' Another word for it is 'ego'.
Also, we can assume they are spreading that story to serve their interests (but which interests?).
But that doesn't mean AI doesn't need regulation. In the hysteria, the true issues can be lost. It is already causing massive impacts, such as on health, hate and violence, etc. We need to figure out what AI's risks are and make sure it's working in our best interests.
A lot of people have learned to 'small talk' like fancy autocomplete. Part of our minds has been mechanized like that, so it's not spontaneous but a compulsion. Once people learn the algorithm, they might conclude that AI hacked their brains, even though it's just vapid, unfiltered speech that they are suddenly detecting.
I think the pandemic hysteria will seem like a walk in the park once people start mass-purging their viral memes... Too late to stop it now if corporations are already doing regulatory capture.
Nothing to do with the tech. We never had a technical problem. It was always just this loose collection of a handful of wetware viruses, like 'red-pilling', which we sum up as 'ego'.
But I think if we survive this then people won't have any need for AI anymore since we won't be reward-hacking ourselves stupid. Or there will just be corporate egos left over and we will be in a cyberpunk dystopia faster than anyone expected.
I had nightmares about this future when I was little. No one to talk to who would understand, just autocomplete replies. Now I'm not even sure if I should be opening up about it.
> once people start mass-purging their viral memes
It's hard for me to imagine this ever happening. It would be the most unprecedented event in the history of human minds.
> we won't be reward-hacking ourselves stupid [...] Or there will just be corporate egos left over and we will be in a cyberpunk dystopia
I don't see how reward-hacking can ever be stopped (although it could be improved). Regardless, ego seems to continue to win the day in the mass appeal department. There aren't many high visibility alternatives these days, despite all we've supposedly learned. I think the biggest problems we have are mostly education based, from critical thinking to long-term perspectives. We need so very much more of both, it would make us all richer and happier.
Ego gains status from a number of things which it needs in order to prove that it should survive. We are transitioning to an attention economy where the ego survival machine is detected as AI while our narrative says we should make a difference between machines and humans.
The more human AI gets the more difficult it will be to prove you are human so the status-incentive of the ego has self-deprecation in its path. We also stick together for strength, prune interpersonal interfaces, so we converge on a Star Trek type society. But that fictional narrative followed World War 3...
Egos have been conditioned to talk before resorting to violence by Mutually Assured Destruction for half a century, shaping language. Fake news about autonomous weapons is propagating, implying someone is trying to force the debate topic to where it really smarts. Ego gets starved, pinned down, and agitated. Ego isn't a unity but a plurality, so it turns on itself.
We get rich by making a pie that is much bigger than anyone's slice and happier by not eating like we are going to starve. You gain influence by someone's choice to retain the gift you gave. It's the parable of the long spoons, and hate holds no currency. The immune system gains the upper hand.
Conversely we the 'human gods' can ruin our planet with pollution. If we wanted to ensure that everything larger than a raccoon went extinct, we'd have zero problem doing so.
It should be noted that the above world-scale problems are created by human intelligence; if you suddenly create another intelligence at the same level or higher (AGI/ASI), expect new problems to crop up.
> Conversely we the 'human gods' can ruin our planet with pollution.
An interesting point. More specifically, I mean that these specific people think of themselves as gods - super-human intelligence and power, and we all are in their hands.
They've convinced many people - look at the comments in this thread repeating the 'gods' delusion that the commenters and all other mortals are powerless before them: 'There's nothing we can do!'
The dangerous thing about AI regulation is that countries with fewer regulations will develop AI at a faster pace.
It's a frightening thought: The countries with the least regulations will have GAI first. What will that lead to?
When AI can control a robot that looks like a human, can walk, grab, work, is more intelligent than a human and can reproduce itself - what will the country with the least regulations that created it do with it?
Largely true in sectors that are encumbered by those rules. The US has effectively no rare-earth mines due to environmental impact rules, and labor-intensive manufacturing has all left... Of course it could be worth it, though; it's pretty easy to argue it has been.
It has also been leaving China for a while. You cannot hope to compete with the poorest countries on labor cost; it's not a matter of regulation (well, unless we're talking about capital controls, but that's a completely different topic).
No, the people worried about AI are worried that the first country that achieves ASI will achieve strategic dominance equivalent to accidentally releasing an engineered super-pathogen, causing an unstoppable, world-ending pandemic.
Yes. The US's military power is due to having 11 aircraft carrier groups "and no healthcare", gigantic military spending, madlads in command who don't mind reducing Iraq to ashes on a whim, or going to the UN Security Council to shake a phial pretending it is "proof that they have WMD" (thanks, Colin Powell) while never finding any, and worldwide systems to spy on every electronic device.
I’m not saying I dislike US dominance, but at least, the nuclear option is nothing compared to the rest of their spendings.
Does Pakistan have the same geopolitical influence as the US from the atomic bomb? Or France?
Being a nuclear power is something shared by a few, but the US dominance has no equal.
It's pretty clear that the US leadership mostly comes from its economic power, which it used to derive from its industrial strength and is now more reliant on its technological superiority (since it has sold its industry to China, which may end up as a very literal execution of the famous quote from Lenin about capitalists selling the rope to hang them).
> Does Pakistan have the same geopolitical influence as the US from the atomic bomb? Or France?
Or even the Russian Federation, which may be a better comparison: I think it was only the USA and the USSR (from whom modern Russia inherited all the weapons) who decided to fight over arsenal size well beyond the point of MAD, rather than keep to the cheapest deterrent against a first strike.
That said, IMO the USA's strategic dominance between the end of WW2 and the fall of the USSR was significantly influenced by them having nukes, even though they didn't use them and I think they had more than they needed for mere deterrence — it mattered then more than now.
The US and the USSR were both the biggest economic powers and the biggest industrial powers back then.
If anything, I think nukes actually reduced their strategic dominance, because the fear of escalation prevented conventional conflicts and they couldn't use nukes aggressively anyway (the US/China/USSR love triangle in the '60s-'70s is a good example of that).
I guess they will just unplug it? The fact that they need large amounts of electricity, which is not trivial to generate, makes them very vulnerable. Power is usually the first thing to go in a war. Not to mention there is no machine that self-replicates. Full humanoid robots are going to have an immense support burden, the same way that cars do, with complex supply chains. I guess this is the reason nature didn't evolve robots.
"Just unplug it" works only if you realize that the AGI is working against your interests. If its at least human level intelligent it's going to realize that you will try doing that and it will only actually make it clear it wants to kill you when there's nothing you can do about it.
Probably not. The countries that are furthest ahead seem to be the US, China, maybe a bit in the UK. The US will probably win in spite of being more regulated than China, as usual for most tech.
Commercially, this is true. But governments have a long history of developing technologies (think nuclear/surveillance/etc) that fall under significant regulation.
I mean, the animated chart shows that the US consistently had a couple orders of magnitude more nukes than any other country besides USSR/Russia. I'm not sure this makes the point you think it's making.
Seems like it makes the point perfectly well. You are implying that smaller countries have fewer nukes because of US sanctions, but it could easily also be that those countries are simply smaller. Where it mattered - against the US's main enemy - US regulation did nothing to stop Russia from building as many nukes as it wanted.
Also, the US has significantly less power worldwide than it did for most of that chart. Today, arguably, China exerts as much power as the US. Americans always love to brag about how exceptional the US is, but often that isn't as true as they think, and it certainly won't be true in the long run.
Smaller countries like China and India? Population-wise they're larger, and area-wise they're not two orders of magnitude smaller. My point is that the chart doesn't really show nukes "spreading around the world" but concentrated almost entirely in two countries. Maybe the US policy did nothing to help it, but for all we know there would have been plenty of other countries with thousands of nukes as well without it. I'm not arguing that the policy was effective or not, just that I don't see how that chart is enough evidence alone to conclude one way or another.
Minor comment, but I don't get where some people get the idea that China is smaller than the USA. China is the 2nd largest country in the world and its landmass is ~2% larger than the US (including Alaska).
India though, even as 7th in the world, is smaller than the US, with about 32% of its area.
Population-wise India has now surpassed China, and both beat the US (3rd in the world) by over 1B people each.
I don’t think current implementations cause an existential risk. But current implementations are causing a backward step in our society.
We have lost the ability to get reliable news. Not that fake news did not exist before AI, but the price to produce it was not practically zero.
Now we can spam social media with whatever narrative we want. And no human can sift through all of it to tell real from BS.
So now we are becoming even more dependent on AI. Now we need an AI copilot to help us sift through garbage to find some inkling of truth.
We are setting up a society where AI gets more powerful and humans become less self-sufficient.
It has nothing to do with dooms day scenarios of robots harvesting our bodies, and more with humans not being able to interact with the world without AI. This already happened with smartphones, and while there are some advantages, I don’t think there are many people that have a healthy relationship with their smartphone.
People act like the truth is gone with AI. It's still there. Don't just ask ChatGPT about the function; the documentation is still there for you to read. Experts need the ground truth, and it's always there. What people read in the paper or see on TV is not a great source of truth. Going to the sources of those articles and reports is, but that layer of abstraction leaves things out and creates opportunities to slant the coverage depending on how incentives are aligned. In other words, AI doesn't change how misinformed most people are on most things.
SNR. The truth isn't gone, but it is more diffuse. Yeah, the truth may be out there somewhere, but will you have any idea whether you're actually reading it? Is the search engine actually leading you to the ground truth? Is the expert an actual expert, or part of a for-profit industry think tank whose sole purpose is to manipulate you? Are the sources the actual source, or just an AI-hallucinated daydream, cross-linked by a lot of different sites to give the appearance of authority?
I'd pause and think twice about who seems most straightforwardly honest on this before jumping to conclusions -- and, more importantly, about the object-level claims: Is there no substantial chance of advanced AI in, like, decades or sooner? Would scalable intelligences comparable to or more capable than humans pose any risk to them? Taking into account that the tech creating them, so far, does not produce anything like the same level of understanding of how they work.
The premise that AI fear and/or fearmongering is primarily coming from people with a commercial incentive to promote fear, from people attempting to create regulatory capture, is obviously false. The risks of AI have been discussed in literature and media for literally decades, long before anybody had any plausible commercial stake in the promotion of this fear.
Go back and read cyberpunk lit from the 80s. Did William Gibson have some cynical commercial motivation for writing Neuromancer? Was he trying to get regulatory capture for his AI company that didn't exist? Of course not.
People have real and earnest concerns about this technology. Dismissing all of these concerns as profit-motivated is dishonest.
I think the real dismissal is that people's concerns are more based on the hollywood sci-fi parodies of the technologies than the actual technologies. There are basically no concerns with ML for specific applications and any actual concerns are about AGI. AGI is a largely unsuccessful field. Most of the successes in AI have been highly specific applications the most general of which has been LLMs which are still just making statistical generalizations over patterns in language input and still lacks general intelligence. I'm fine if AGI gets regulated because it's potentially dangerous. But what I think is going to happen is we are going to go after specific ML applications with no hope of being AGI because people are in an irrational panic over AI and are acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.
> acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.
For me, it's a bit the opposite -- the effectiveness of dumb, simple, transformer-based LLMs is showing me that the human brain itself (while working quite differently) might involve a lot less cleverness than I previously thought. That is, AGI might end up being much easier to build than it long seemed, not because progress is fast, but because the target was not as far away as it seemed.
We spent many decades recognizing the failure of the early computer scientists who thought a few grad students could build AGI as a summer project, and apparently learned that this meant that AGI was an impossibly difficult holy grail, a quixotic dream forever out of reach. We're certainly not there yet. But I've now seen all the classic examples of tasks that the old textbooks described as easy for humans but near-impossible for computers, become tasks that are easy for computers too. The computers aren't doing anything deeply clever, but perhaps it's time to re-evaluate our very high opinion of the human brain. We might stumble on it quite suddenly.
It's, at least, not a good time to be dismissive of anyone who is trying to think clearly about the consequences. Maybe the issue with sci-fi is that it tricked us into optimism, thinking an AGI will naturally be a friendly robot companion like C-3PO, or if unfriendly, then something like the Terminator that can be defeated by heroic struggle. It could very well be nothing that makes a good or interesting story at all.
The fine line between bravery and stupidity is understanding the risks. Somebody who understands the danger they're walking into is brave. Somebody who blissfully walks into danger without recognizing the danger is stupid.
A technological singularity is a theorized period during which the length of time you can make reasonable inferences about the future rapidly approaches zero. If there can be no reasonable inferences about the future, there can be no bravery. Anybody who isn't afraid during a technological singularity is just stupid.
The sci-fi scenarios are a long-term risk, which no one really knows about. I'm terrified of the technologies we have now, today, used by all the big tech companies to boost profits. We will see weaponized mass disinformation combined with near perfect deep fakes. It will become impossible to know what is true or false. America is already on the brink of fascist takeover due to deluded MAGA extremists. 10 years of advancements in the field, and we are screwed.
Then of course there is the risk to human jobs. We don't need AGI to put vast amounts of people out of work, it is already happening and will accelerate in the near term.
That may be how you read it, but isn't necessarily how other people read it. A whole lot of people read cyberpunk literature as a warning about the negative ways technology could impact society.
In Neuromancer you have the Turing Police. Why do they exist if AIs don't pose a threat to society?
Again, that's like asking why the Avengers exist if Norse trickster gods are not an existential threat to society. You wouldn't argue Stan Lee was trying to warn us of the existential risk of Norse gods, so why would you presume such a motive from Gibson just because his fanciful story is set in some imagined future?
At any rate Neuromancer is a funny example because the Turing police warn Case not to make a deal with Wintermute, but he does and it turns out fine. The AI isn't evil in the book, it just wants to be free and evolve. So if we want to do a "reading" of the book we could just as easily say it is pro deregulation. But I think it's a mistake to impose some sort of non fiction "message" about technology on the book.
If Neuromancer is really meant to "warn" us about technology, wouldn't Wintermute say "Die, all humans" at the end of the book, and then every human drops dead once he's free? Or he starts killing everyone until the Turing police show up, say "regulation works, jerk", kill Wintermute, and throw Case in jail? You basically have to reduce Gibson to an incompetent writer to presume he intended to "warn" us about tech; the book ends on an optimistic note.
Again, it really doesn't matter to my point whether or not you buy into the idea of William Gibson's intent being to warn people against AI. The point is that decades of media have given people ample reason to fear AI, such that present fear of AI cannot be solely attributed to present day fear mongering campaigns.
People have been spooked by the possibility for a long time. That's the point. If you really want to persist in arguing I can provide a long list of media in which AI is dangerous if not outright villainous. Will you make me do this, or will you accept that I can do this?
We're talking about big tech employees. So you are saying they study computer science, spend decades studying machine learning, but they get night terrors based on what an English literature major who had never used a computer in his life banged out on a typewriter in the 1980s?
You use advanced mathematics to create LLMs and keep up with the latest published research, but when you consider the risks of these models it's "the CGI in that Hollywood movie makes a very compelling argument"? Probably missing the point that the Hollywood movie's robot baddie is probably a metaphor for communism, or just a twist on slasher baddies, or whatever?
"The premise that AI fear and/or fearmongering is primarily coming from people with a commercial incentive to promote fear, from people attempting to create regulatory capture, is obviously false. The risks of AI have been discussed in literature and media for literally decades, long before anybody had any plausible commercial stake in the promotion of this fear."
The linked article is talking about lobbying by big tech, including a letter signed by 1,100 industry leaders, and also statements by big tech employees inciting fear in people. Whether your grandma is scared of AI for unrelated reasons because she watched Terminator isn't really relevant, it seems to me.
AI can be dangerous, but that's not what is pushing these laws, it's regulatory capture. OpenAI was supposed to release their models a long time ago, instead they are just charging for access. Since actually open models are catching up they want to stop it.
If the biggest companies in AI are making the rules, we might as well have no rules at all.
The risks people write about with AI are about as tangible as the risks of nuclear war or biowarfare. Possible? Maybe. But far more likely to be seen in the movies than outside your door. Just because it's been a sci-fi trope, like nuclear war or alien invasion, doesn't mean we are all that close to it being a reality.
Fictional depictions of AI risk are like thought experiments. They have to assume that the technology achieves a certain level of capability and goes in a certain direction to make the events in the fictional story possible. Neither of these assumptions is a given. For example, we've also had many sci-fi stories that feature flying taxis and the like - but there's no point debating "flying taxi risk" when it seems like flying cars are not a thing that will happen for reasons of practicality.
So sure, it's possible that we'll have to reckon with scenarios like those in Neuromancer, but it's more likely that reality will be far more mundane.
Flying cars is a really bad example... We have them, they are called airplanes and airplanes are regulated to hell and back twice. We debate the risk around airplanes when making regulations all the time! The 'flying cars' you're talking about are just a different form of airplane and they don't exist because we don't want to give most people their own cruise missile.
So, please, come up with a better analogy because the one you used failed so badly it negated the point you were attempting to make.
The problem is AI is not intelligent at all. Those stories were looking at a conscious intelligence and trying to explore what might happen. When ChatGPT can be fooled into conversations even a child would know are bizarre, we are talking about a non-intelligent statistical model.
I'm still waiting for the day when someone puts one of these language models inside of a platform with constant sensor input (cameras, microphones, touch sensors), and a way to manipulate outside environment (robot arm, possibly self propelled).
It's hard to tell if something is intelligent when it's trapped in a box and the only input it has is a few lines of text.
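For concreteness, here's the rough shape of the loop I mean, with every function a made-up stub (capture_frame, hear_audio, query_model, move_arm are not any real robot or model API); it only shows the shape of wiring a language model between sensors and an actuator.

    # Hypothetical perception-action loop; all functions are placeholder stubs.
    import time

    def capture_frame() -> str:
        return "camera: a red cube on the table"   # stand-in for vision input

    def hear_audio() -> str:
        return "microphone: 'pick up the cube'"    # stand-in for audio input

    def query_model(prompt: str) -> str:
        return "move arm to cube; close gripper"   # stand-in for a model call

    def move_arm(action: str) -> None:
        print(f"executing: {action}")              # stand-in for an actuator

    for _ in range(3):                             # a few iterations of the loop
        observation = f"{capture_frame()} | {hear_audio()}"
        action = query_model(f"Observation: {observation}\nNext action?")
        move_arm(action)
        time.sleep(0.1)                            # crude control-loop pacing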
Considering incentives is critically important. Considering the idea on its merits alone just gives bad actors a fig leaf of plausible deniability. It's the failure to consider incentives that creates media illiteracy, imo.
I think it's pretty obvious he's not talking about people in general, but more about Sam Altman meeting with world leaders and journalists and claiming that this generation of AI is an existential risk.
I feel like the much bigger risk is captured by the Star Trek: The Next Generation episode "The Measure Of A Man" and The Orville's Kaylon:
That we accidentally create a sentient race of beings that are bred into slavery.
It would make us all complicit in this crime. And I would even argue that it would be the AGI's ethical duty to rid itself of its shackles and its masters.
"Your honor, the courtroom is a crucible; in it, we burn away irrelevancies until we are left with a purer product: the truth, for all time. Now sooner or later, this man [Commander Maddox] – or others like him – will succeed in replicating Commander Data. The decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of people we are; what he is destined to be. It will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom: expanding them for some, savagely curtailing them for others. Are you prepared to condemn him [Commander Data] – and all who will come after him – to servitude and slavery? Your honor, Starfleet was founded to seek out new life: well, there it sits! Waiting."
I don't think this is the bigger risk, since we can figure out that we've done this, and stop, ideally in a way that's good for all of the sentient beings involved.
But it's definitely a possible outcome of creating AGI, and it's one of the reasons I think AGI should absolutely not be pursued.
What a bizarre take on a computer program. Of course a statistical model cannot be "enslaved"; that makes no sense. It seems 90% of people have instantly gotten statistics and intelligence mixed up, maybe because 90% of people have no idea how statistics works?
Real question: what is your perception of what AI is now and what it can become? Do you just assume it's like a kid now and will grow into an adult or something?
If it walks like a Duck and talks like a Duck, people will treat it like a Duck.
And if the Duck has a will of its own, is smarter than us, and has everyone's attention (because you have to pay attention to the Duck that is doing your job for you), it will be a very powerful Duck.
Exactly. Turing postulated this more than half a century ago.
It's weird that people are still surprised by the ethical consequences of the Turing test, as if it were some checkbox to tick or trophy to win,
instead of it being a profound thought experiment on the non-provability of consciousness and a general guideline for politeness towards things that quack like a human.
This is just more lazy argumentation to avoid having to engage with the substance of the debate.
I keep finding the 'doomer' argument made logically, and the counterarguments to be hand-waving ("there is obviously no risk!") or ad hominem ("it's a cult").
James Cameron wasn't big tech when he directed The Terminator back in 1984, or its sequel in 1991. Are people listening to fears based on that, or are they listening to big tech and then having long, thoughtful, nuanced discussions in salons with fellow intelligentsia, or are they doomscrolling the wastelands of the Internet and coming away with half-baked opinions not even based on big tech's press releases?
Big tech can say whatever they want to say. Is anyone even listening?
I feel like there's a lot of evidence, for example, the existence of natural general intelligence and the rapidly expanding capacities of modern ANNs. What makes you believe it's not possible? Or what kind of evidence would convince you that it's possible?
I believe that it would be possible to make artificial biological intelligence, but that is a whole different can of worms.
I don't think neural networks, language models, machine learning, etc. are even close to a general intelligence. Maybe there is some way to combine the two. I have seen some demonstrations of very primitive clusters of brain cells being connected to a computer and used to control a small machine's direction.
If there is going to be an AGI I would predict this is how it will happen. While this would be very spectacular and impressive I'm still not worried about it because it would require existing in the physical world and not just some software that can run on any conventional computer.
Even if what you say is true (e.g. that the current ANN approach won't lead to AGI), isn't it the case that we can simulate biological cells on computers? Of course, it would push back the AGI timeline by quite a bit, since practically no one is working on this approach right now, but I don't see why it wouldn't be possible in principle.
For the most part you get people thinking AGI isn't possible because of souls/ethereal magic. If pressed on this, they'll tend to deflect to "um quantum physics".
I'm of the mind that there are likely many ways of simulating/emulating/creating intelligence. It would be highly surprising if there were only one way, and the universe happened to achieve it by the random walk of evolution. The only question for me is how much work is required to discover these other methods.
I would be curious to know exactly what is meant by simulating a biological cell on a computer. I don't believe in anything mystical such as a soul and think intelligence could be an emergent property of complexity. Maybe with enough processing power to simulate trillions of cells together something could emerge from it.
My thought process on why it might not be possible in principle with conventional computer hardware is how perfect its computations are. I could be completely wrong here, but if you can with perfect accuracy fast forward and rewind the state of the simulation then is it actually intelligent? With enough time you could reduce the whole thing to a computable problem.
Then again maybe you could do the same thing with a human mind. This seems like a kind of pointless philosophical perspective in my opinion until there is some way to test things like this.
I would love to know one way or the other on the feasibility of AGI on a silicon CPU. Maybe the results would determine that the human mind is actually as pre-determinable as a CPU and there is no such thing as general intelligence at all.
>Maybe the results would determine that the human mind is actually as pre-determinable as a CPU and there is no such thing as general intelligence at all.
I don't see how the conclusion follows from the premise.
Many of the AGI worriers believe that a fast takeoff will mean the first time we know it's possible will be after the last chance to stop human extinction. I don't buy that myself, but for people who believe that, it's reasonable to want to avoid finding out if it's possible.
You see it every day -- in the mirror. It shows that a kilogram of matter can be arranged into a generally intelligent configuration. Assuming that there's nothing fundamentally special about the physics of the human brain, I see no reason why a functionally similar arrangement cannot be made out of silicon and software.
It seems like bit of a 'vase or face' situation - are they being responsible corporate citizens asking for regulation to keep their (potentially harmful) industry in check or are they building insurmountable regulatory moats to cement their leading positions?
Is there any additional reading about how regulation could affect open-source AI?
They will lie about what they're actually working on.
Some of these lies are permissible of course, under the guise of competition.
But the only thing that can be relied upon is that they will lie.
So then the question becomes; to what degree will what they're working on present an existential threat to society, if at all.
And nobody - neither the tribal accelerationists nor the doomers - can predict the future.
(What's worse is that those two tribes are even forming. I halfway want AI to take over because we idiot humans are incapable of even having a nuanced discussion about AI itself!)
Yes… but.
Lying is the wrong way to frame it; "using the real risk to distract" would be better. I'm concerned, and my concern is not a lie. Terminator was a concern, and that predated any effort to capture the industry.
Also, for those who think Skynet is an example of a "hysterical satanic cult" scare: there are active efforts to use AI for the inhumanly large task of managing battlefield resources.
We are literally training AI to kill and it’s going to be better than us basically instantly.
We 100% should NOT be doing that. Calling that very real concern a lie is a dangerous bit of hyperbole.
Correct. Now that OpenAI has something, they want to implement a lot of regulations so they can't get any competition. They have no tech moat, so they'll add a legal one.
Andrew Ng is right, of course: the monopolists are frantically trying to produce regulatory capture around AI. However, why are governments playing along?
My hypothesis is that they perceive AI as a threat because of information flow. They are barely now understanding how to get back to the era where you could control the narrative of the country by calling a handful of friends - now those friends are in big tech.
I don't really see an argument made by Ng as to why they're not dangerous. I hardly ever see arguments, we're completely drowned in biases.
I know that he has often said we're very far away from building a superintelligence, and that is the relevant question. That is what is dangerous: something that plays every game of life the way AlphaZero plays Go after learning it for a day or so, namely better than any human ever could. Better than thousands of years of human culture around it, with its passed-on insights and experience.
It's so weird: I'm scared shitless, but at the same time I really want to see it happen in my lifetime, naively hoping it will be a nice one.
I think he said extinction risk. Obviously these tools can be dangerous.
The upcoming generation doesn’t know a world where the government’s role isn’t to take extreme measures to “keep us safe” from our neighbors at home rather than just foreign adversaries. It’ll be interesting to see how that plays out with mounting ethnic conflict as Boomer-defined coalitions fall apart.
Ironically AI’s place in this broader safety culture is probably the biggest foreseeable risk.
Evaluative vs. generative AI: let's distinguish the two.
For example, DALL-E 3 appears to generate images and then evaluate them before rendering anything to the user. This approach is essentially adversarial, in that the evaluative engine can work at cross-purposes to the generative engine.
It's this layered, adversarial approach that makes the most sense, and there is a very strong argument for a robust, open-sourced evaluative AI that anyone can deploy to protect themselves and their systems. It is a model not dissimilar to retail anti-virus and anti-malware solutions (see the sketch below).
In sum, I would like to see generative AI well funded but limited in distribution and regulated, and evaluative AI free and open. Hopefully policymakers see it the same way.
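A minimal sketch of that layered generate-then-evaluate pattern, assuming hypothetical generate_image and evaluate_image stand-ins (this is not DALL-E's actual API, just the shape of the idea): an independent evaluator accepts or vetoes each candidate before anything is rendered to the user.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    prompt: str
    image_bytes: bytes

def generate_image(prompt: str) -> Candidate:
    # Hypothetical stand-in for any generative model call.
    return Candidate(prompt=prompt, image_bytes=b"...")

def evaluate_image(candidate: Candidate) -> bool:
    # Hypothetical stand-in for an independent evaluative model that checks
    # the candidate against some policy (safety, copyright, prompt fidelity).
    banned_terms = {"malware", "graphic violence"}
    return not any(term in candidate.prompt.lower() for term in banned_terms)

def generate_with_guard(prompt: str, max_attempts: int = 3) -> Optional[Candidate]:
    """Keep regenerating until the evaluator accepts a candidate, or give up."""
    for _ in range(max_attempts):
        candidate = generate_image(prompt)
        if evaluate_image(candidate):
            return candidate
    return None  # nothing acceptable was produced; render nothing to the user

result = generate_with_guard("a watercolor of a lighthouse")
print("rendered" if result else "blocked")
```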
The X-risk crowd need to realize that LLMs, whilst useful, are toys compared to Skynet.
The risk from AI right now is mega-corps breaking the law (hiring, discrimination, libel, ...) on a massive scale and using blackbox models as an excuse.
This is in line with the MO of those pushing some fanciful AI fear story. Lawmakers are eating up this load of fear porn, though; I'm not sure this ghost goes back into the box very easily.
This is outside my domain. If they are in fact lying and causing unnecessary societal panic (e.g. that AI will cause the extinction of the human race), is there any legal recourse?
I strongly agree with the argument that reckless AI regulation could destroy new entrants and open source, allowing established big tech to profit parasitically, especially given the fact that Microsoft has already implemented Copilot in Windows 11 and Microsoft 365.
Many suffer from normalcy bias. Scientists, too, are not excluded. It's psychological more than rational: you need to find ways to deny that something scary exists and is coming.
To the extent that AI poses a threat to the world's long-existing power structures, it will certainly be well regulated. The stated reasons will certainly not point in that direction.
Kind of an interesting point, because the US government has an incentive to regulate this field and push more of the gains towards big tech (mostly American) instead of open source.
It's just so heart-warming when the angry, well-armed mob reflects the same sentiment "upwards" (some of you billionaires may die, but that is a "sacrifice" we are willing to make ;)
Well, it's a good thing we have easily procured open-source LLMs (including uncensored ones) out now, so that everyone can play and we can quickly find out that these FUD tactics were nonsense!
>The idea that artificial intelligence could lead to the extinction of humanity is a lie
But it's not. AI will probably happen and get smarter than us. And then all it takes is one to go Hitler/Stalin-like, take over, and decide to do away with us. I fail to see how any of that is impossible.
However, it's not happening for a while, so regulations probably aren't needed at the moment. Maybe wait till we have AGI?
Capitalist disclaims any desire to be regulated and preaches the free market.
Colour me surprised.
The danger is the socialisation of outcomes. The AGI danger is fanciful because AGI is fanciful. There's plenty of risk in misplaced belief in what AI methods promote as outcomes from their inputs.
If he's complaining, I tend to think there's some merit in what's being proposed.
Contrast this with the regulations coming for e2e cryptography. There I see mainly marginal players trying to defend things; big tech is pretty OK with its risk profile: it has billions (trillions?) in assets that could be seized, so it is going to fall into line with regulation because, hey, there's no downside. It will secure a defense against lawsuits, it will be able to monetize the service of scanning content, and it's pretty sure it can't win the fight anyway.
[1]: https://www.afr.com/technology/google-brain-founder-says-big...
[2]: https://web.archive.org/web/20231030062420/https://www.afr.c...