
The AFR piece that underlies this article [1] [2] has more detail on Ng's argument:

> [Ng] said that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the AI industry.

> “There’s a standard regulatory capture playbook that has played out in other industries, and I would hate to see that executed successfully in AI.”

> “Just to be clear, AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”

[1]: https://www.afr.com/technology/google-brain-founder-says-big...

[2]: https://web.archive.org/web/20231030062420/https://www.afr.c...




> “There’s a standard regulatory capture playbook that has played out in other industries

But imagine all the money bigco can make by crippling small startups' ability to innovate and compete with them! It's for your own safety. Move along, citizen.


Even better if (read: when) China, which gives exactly zero damns about such concerns, can take charge of the industry that we willingly and expediently relinquish.


…and the problem with that is what, exactly?

The only meaningful thing in this discussion is people who want to make easy money but can’t, because of rules they don’t like.

Well, suck it up.

You don’t get to make a cheap shitty factory that pours its waste into the local river either.

Rules exist for a reason.

You want the lifestyle and all the good things, but also no rules. You can’t have your cake and eat it too.

/shrug

If China builds amazing AI tech (and they will) then the rest of the world will just use it. Some of it will be open source. It won’t be a big deal.

This “we must outcompete China by being as shit and horrible as they are” meme is stupid.

If you want to live in China, go live in China. I assure you, you will not find it to be the lawless freehold of “anything goes” that you somehow imagine.


> Rules exist for a reason.

The trouble is sometimes they don't. Or they do exist for a reason but the rules are still absurd and net harmful because they're incompetently drafted. Or the real reason is bad and the rules are doing what they were intended to do but they were intended to do something bad.

> If China builds amazing AI tech (and they will) then the rest of the world will just use it.

Not if it's banned elsewhere, or they allow people to use it without publishing it, e.g. by offering it as a service.

And it matters a lot who controls something. "AI" potentially has a lot of power, even non-AGI AI -- it can create economic efficiency, or it can manipulate people. If an adversarial entity has greater economic efficiency, they can outcompete you -- the way the US won the Cold War was essentially by having a stronger economy. If an adversarial entity has a greater ability to manipulate people, that could be even worse.

> If you want to live in China, go live in China. I assure you, you will not find it to be the lawless freehold of “anything goes” that you somehow imagine.

But that's precisely the issue -- it's not an anarchy, it's an authoritarian competing nation state. We have to be better than them so the country that has an elected government and constitutional protections for human rights is the one with an economic advantage, because it isn't a law of nature that those things always go together, but it's a world-eating disaster if they don't.


> Or they do exist for a reason but the rules are still absurd and net harmful

Ok.

…but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?

Shouldn’t you be asking: does the law do a useful thing? Does it make the world better? Is it compatible with our moral values?

Organ harvesting.

Stem cell research.

Human cloning.

AI.

Slavery.

How can anyone stand there and go “well China will do it so we may as well?”

In an abstract sense this is a fundamentally invalid logical argument.

Truth on the basis of arbitrary assertion.

It. Is. False.

Now, certainly there is a degree of nuance with regard to AI specifically; but the assertions that we will be “left behind” and “outcompeted by China” are not relevant to the discussion of laws regarding AI and AI development.

What we do is not governed by what China may or may not do.

If you want to win the “AI race” to AGI, then investment and effort are required, not an arbitrary “anything goes” policy.

China as a nation is sponsoring the development of its technology and supporting its industry.

If you want to beat that, opposing responsible AI won’t do it.


Of course you have to consider what other countries will do when you create your laws. The notion that you can ignore the rest of the world is both naive and incredibly arrogant.

There are plenty of technologies that absolutely do not "make the world better" but unfortunately must get built because humans are shitty to each other. Weapons are the obvious one, but not the only one. Often countries pass laws to encourage certain technologies or productions so as not to get outcompeted or outproduced by other countries.

The argument here about AI is exactly this sort of argument. If other countries build vastly superior AI by having fewer developmental restrictions, then your country may be at both a military and an economic disadvantage, because you can be easily outproduced by countries using vastly more efficient technology.

You must balance all the harms and benefits when making laws, including issues external to the country.


I don't think the government is talking about AI for weapons. Of course that will be allowed. It's the US, we have the right to kill people. Just not to make fake porn videos of them.


> ...but if you have a law and you’re opposed to it on the basis that “China will do it anyway”, you admit that’s stupid?

That depends on what "it" is. If it's slavery and the US but not China banning slavery causes there to be half as much slavery in the world as there would be otherwise, it would be stupid.

But if it's research and the same worldwide demand for the research results are there so you're only limiting where it can be done, which only causes twice as much to be done in China if it isn't being done in the US, you're not significantly reducing the scope of the problem. You're just making sure that any benefits of the research are in control of the country that can still do it.

> Now, certainly there is a degree of nuance with regard to AI specifically; but the assertions that we will be “left behind” and “outcompeted by China” are not relevant to the discussion of laws regarding AI and AI development.

Of course it is. You could very easily pass laws that de facto prohibit AI research in the US, or limit it to large bureaucracies that in turn become stagnant for lack of domestic competitive pressure.

This doesn't even have anything to do with the stated purpose of the law. You could pass a law requiring government code audits which cost a million dollars, and justify them based on any stated rationale -- you're auditing to prevent X bad thing, for any value of X. Meanwhile the major effect of the law is to exclude anybody who can't absorb a million dollar expense. Which is a bad thing even if X is a real problem, because that is not the only possible solution, and even if it was, it could still be that the cure is worse than the disease.

Regulators are easily and commonly captured, so regulations tend to be drafted in that way and to have that effect, regardless of their purported rationale. Some issues are so serious that you have no choice but to eat the inefficiency and try to minimize it -- you can't have companies dumping industrial waste in the river.

But when even the problem itself is a poorly defined matter of debatable severity and the proposed solutions are convoluted malarkey of indiscernible effectiveness, this is a sure sign that something shady is going on.

A strong heuristic here is that if you're proposing a regulation that would restrict what kind of code an individual could publish under a free software license, you're the baddies.


> Of course it is. You could very easily pass laws that de facto prohibit AI research in the US, or limit it to large bureaucracies that in turn become stagnant for lack of domestic competitive pressure.

> A strong heuristic here is that if you're proposing a regulation that would restrict what kind of code an individual could publish under a free software license, you're the baddies.

Sure.

…but those things will change the way development / progress happens regardless of what China does.

“We have to do this because China will do it!” is a harmful trope.

You don’t have to do anything.

If you want to do something, then do it, if it makes sense.

…but I flat out reject the original contention that China is a blanket excuse for any fucking thing.

Take some darn responsibility for your own actions.


> What we do is not governed by what China may or may not do.

Yes it is... Where the hell would you get the impression we don't change how we govern and invest based on what China does, is doing, or might be doing? Do you really think nations don't adjust their behavior and laws based on other countries' real or perceived actions? I can't imagine you're that ignorant.

> If you want to beat that, opposing responsible AI won’t do it.

Not opposing it guarantees you lose though.


I could be wrong; maybe what China does with its AI developments will significantly and drastically alter the current status quo for AI startups.

Maybe the laws around AI will drastically impact the ability of startups to compete with foreign competitors.

…but I can’t see that being likely.

It seems to me that restricting chip technology has a much much more significant impact, along with a raft of other measures which are already in place.

All I can see, when I look closely at arguments from people saying this kind of stuff, is people who want to make deep fakes, steal art, and generate porn bots crying that it's not fair that other people (e.g. Japan, where this has been ruled legal; China, for who knows what reason, mostly ignorance) are allowed to do it.

I’m not sympathetic.

I don’t believe that makes any difference to the progress on AGI.

I don’t care if China out competes other countries on porn bots (I don’t think they will; they have a very strict set of rules around this stuff… but I’ll be generous and include Japan which probably will).

You want the US to get AGI first?

Well, explain specifically how you imagine open-source models (shared with the world) and open code sharing help, versus everything being locked away in a Google/Meta sandbox?

Are you sure you’re arguing for the right side here? Shouldn’t you be arguing that the models should be secret so China can’t get them?

Or are you just randomly waving your arms in the air about China without having read the original article?

What are you even arguing for? Laws are bad… but sharing with China is also bad… but having rules about what you do is bad… but China will do it anyway… but fear mongering and locking models away in big corporations behind apis is bad… but China… or something…

????

It’s really not a compelling argument.


> ...but I can’t see that being likely.

Why not? It's what happens with other things.

> It seems to me that restricting chip technology has a much much more significant impact, along with a raft of other measures which are already in place.

Restricting chip technology is useless and the people proposing it are foolish. Computer chips are generic technology and AI things benefit from parallelism. The only difference between faster chips and more slower chips is how much power they use, so the only thing you get from restricting access to chips is more climate change.
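
As a back-of-envelope sketch of that claim (all chip numbers below are made-up illustrative assumptions, not real part specs), hitting the same total throughput with slower chips just means more chips and more power:

    # Hypothetical accelerator specs; illustrative only, not real parts.
    target_flops = 1e18  # desired cluster throughput, FLOP/s

    chips = {
        "fast (export-restricted)": {"flops": 1e15,   "watts": 700},
        "slow (still available)":   {"flops": 2.5e14, "watts": 400},
    }

    for name, c in chips.items():
        n = target_flops / c["flops"]  # chips needed for the target throughput
        print(f"{name}: {n:,.0f} chips, {n * c['watts'] / 1e6:.2f} MW")

Under those assumed numbers you reach the same 1e18 FLOP/s either way: ~1,000 fast chips at ~0.70 MW, or ~4,000 slow chips at ~1.60 MW. Same compute, roughly double the power draw.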

> All I can see, when I look closely at arguments from people saying this kind of stuff, is people who want to make deep fakes, steal art, and generate porn bots crying that it's not fair that other people (e.g. Japan, where this has been ruled legal; China, for who knows what reason, mostly ignorance) are allowed to do it.

The problem is not that people won't be able to make porn bots. They will make porn bots regardless, I assure you. The problem is that the people who want to control everything want to control everything.

You can't have a model with boobs in it because that's naughty, so we need a censorship apparatus to prevent that. And it should also prevent racism, somehow, even though nobody actually agrees how to accomplish that. And it can't emit foreign propaganda, defined as whatever politicians don't like. And now that it has been centralized into a handful of megacorps, they can influence how it operates to their own ends and no one else can make one that works against them.

Now that you've nerfed the thing, it's worse at honest work. It designs uncomfortable apparel because it doesn't understand what boobs are. You ask it how something would be perceived by someone in a particular culture and it refuses to answer, or lies to you because of what the answer would be. You try to get it to build a competing technology to the company that operates the thing and all it will do is tell you to use theirs. You ask it a question about the implications of some policy and its answer is required to comply with specific politics.

> Well, explain specifically how you imagine open-source models (shared with the world) and open code sharing help, versus everything being locked away in a Google/Meta sandbox?

To improve it you can be anyone anywhere vs. to improve it you have to work for a specific company that only employs <1% of the people who might have something to contribute. To improve it you don't need the permission of someone with a conflict of interest.

> Are you sure you’re arguing for the right side here? Shouldn’t you be arguing that the models should be secret so China can’t get them?

China is a major country. It will get them. The only question is if you will get them, in addition to China and Microsoft. And to realize the importance of this, all you have to ask is if all of your interests are perfectly aligned with those of China and Microsoft.


False equivalency at its finest. This is more akin to banning factories and people rightly saying our rivals will use these factories to outproduce us. This is also a much better analogy because we did in fact give China a lot of our factories and are paying a big price for it.


I think you underestimate the power foreign governments will have and will use if we are relying on foreign AI in our everyday lives.

When we ask it questions, an AI can tailor its answers to change people's opinions and how people think. They would have the power to influence elections, our values, our sense of right and wrong.

That's before we start allowing AI to just start making purchasing decisions for us with little or no oversight.

The only answer I see is for us all to have our own AIs that we have trained, understand, and trust. For me this means it runs on my hardware and answers only to me. (And is not locked behind regulation.)


// If China builds amazing AI tech (and they will) then the rest of the world will just use it. Some of it will be open source. It won’t be a big deal.

"Don't worry if our adversary develops nuclear weapons and we won't - it's OK we'll just use theirs"


> "Don't worry if our adversary develops nuclear weapons and we won't - it's OK we'll just use theirs"

Beneath this comment is hidden a truth that there is AI which can be used beneficially, AI which can be used detrimentally, AI which can be weaponized in warfare, and AI which can be used defensively in warfare. Discussions about policy and regulation should differentiate these, but also consider implications of how this technology is developed and for what purpose it could be employed.

We should definitely be developing AI to combat AI as it will most certainly be weaponized against us with greater frequency in the near future.


Yes and I think it's broader than that. For example, if a country uses AI to (say) optimize their education or their economy - they will "run away" from us. Rather than enabling us to use that technology too (why would they, even for money) they can just wait until their advantage is insurmountable.

So it's not just pure warfare systems that are risky for us but everything.


>…and the problem with that is what, exactly?

The problem is what the Powers-That-Be say and what they do are not in alignment.

We are now, after long-standing pressure from everyone not in power saying that being friendly with China doesn't work, waging a cold war against China, and presumably we want to win that cold war. On the other hand, we just keep giving silver platter after silver platter to China.

So do we want the coming of Pax Sino or do we still want Pax Americana?

If we defer to history, we are about due for another changing of the guard as empires generally do not last more than a few hundred years if that, and the west seems poised to make that prophecy self-fulfilling.


Wish people stopped with that Cold War narrative. You're not waging anything just yet.

Here's the thing: the US didn't win the OG Cold War by being, as 'AnthonyMouse puts it upthread, "the country that has an elected government and constitutional protections for human rights" and "having a stronger economy". It won it by having a stronger economy, which it used to fuck half of the world up, in a low-touch dance with the Soviets that had both sides toppling democratic governments, funding warlords and dictatorships, and generally doing the opposite of protecting human rights. And at least through a part of that period, if an American citizen disagreed, or urged restraint and civility and democracy, they were branded a commie mutant spy traitor.

My point here isn't to pass judgement on the USA (and to be clear, I doubt things would've been better if the US let Soviets take the lead). Rather, it's that when we're painting the current situation as the next Cold War, then I think people have a kind of cognitive dissonance here. The US won the OG Cold War by becoming a monster, and not pulling any punches. It didn't have long discussions about how to safely develop new technologies - it just went full steam ahead, showered R&D groups with money, while sending more specialists to fuck up another country to keep the enemy distracted. This wasn't an era known for reasoned approach to progress - this was the era known for designing nuclear ramjets with zero shielding, meant to zip around the enemy land, irradiating villages and rivers and cities as they fly by, because fuck the enemy that's why.

I mean, if it is to happen, it'll happen. But let's not pretend you can keep Pax Americana by keeping your hands clean and being a nice democratic state. Or that being more or less serious about AI safety is relevant here. If it becomes a Cold War, both sides will just pull out all the stops and rush full-steam to develop and weaponize AGI.

--

EDIT - an aside:

If the history of both sides' space programs is any indication, I wouldn't be surprised to see the US building a world-threatening AGI out of GPT-4 and some duct tape.

Take for example US spy satellites - say, the 1960s CORONA program. Less than a decade after Sputnik, no computers, with engineering fields like control theory still under development - but they successfully pulled off a program that involved putting analog cameras in space on weird orbits, which would take ridiculously high-detail photos of enemy land, and then deorbit the film canisters, so they could be captured mid-air by a jet plane carrying a long stick. If I didn't know better, I'd say we don't have the technology today to make this work. The US did it in the 1960s, because it turns out you can do surprisingly much with surprisingly little, if you give creative people infinite budget, motivate them with a basic "it's us vs. them" story, and order them to win you the war.

As impressive as such feats were (and there were plenty more), I don't think we want to have the same level of focus and dedication applied to AI - if that's a possibility, then I fear we've crossed the X-risk threshold already with the "safe" models we have now.


China doesn't innovate, it copies, clones, and steals. Without the West to innovate, they won't take charge of anything.

A price paid, I think, due to a conformant, restrictive culture. And after all, even if you do excel, you may soon disappear.


This is what was said about Japan prior to their electronics industry surpassing the rest of the world. Yes, China does copy. However, in many instances those companies move faster and innovate faster than their Western counterparts. Look at the lidar industry in China: it's making mass-market lidar in the tens of thousands [see Hesai]. There is no American or European equivalent at the moment. What about DJI? They massively out-innovated Western competitors. I wouldn't be so quick to write off that country's capacity for creativity and technological prowess.


That's a tired old talking point that the US always throws in. The fact is that, as part of their agreements to operate in the Chinese market, Western companies cooperated with local Chinese companies, which included sharing of knowledge.

The Western companies agreed to these terms to gain a piece of the juicy Chinese market. And the Chinese did it because they had the rare power to stop Western companies from just coming and draining resources, in the colonial manner the West usually operates.

Building on this, China has now surpassed the West in much of this development. Electric cars, solar technology, and cell phone towers are now much more advanced in China.


What a wildly strange case of revisionist history.

The West started shifting production to China for immense cost savings, over 40 years ago. At the time, China had almost NO market, and no (what the West called, at the time) "middle class". China was mostly agrarian, and had very little manufacturing base.

There was nothing "juicy" for the West, market wise. At all.

Over the last 40 years, China's economy has prospered and grown, again mostly due to the West's use of Chinese labour. Virtually the entire manufacturing base that China has right now exists because Western expertise, skill, and capabilities helped Chinese factories and workers come online and train in Western production methods.

Prior to 40 years ago, everyone except the British couldn't have cared less about China, and the British indeed had Hong Kong, something pre-existent from THEIR colonial days. The British could have retained Hong Kong, but as agreed turned it over to China at the turn of the century. No, China had no capability to enforce that, not back around the year 2000.

Note that talk of the colonial days of "the West" makes little sense. Many Western nations were not colonialists, and the US is actually a breakaway colony, and has worked to curtail colonialism! To lump "the West" together would be like thinking Japan and China are the same because they are all "Oriental".

Back to China: very little that China does "surpasses the West". In fact, so little capability does China have that when the US kicked off an embargo on advanced silicon against China, it lost, for several years, its capability to domestically manufacture cell phones.

Look, I get the feeling you're pro-China. And perhaps, you grew up in China.

First, there are three things. The Chinese government. Chinese culture. Chinese people.

The last? We can stop discussing that now, because unless you are racist, there is no such thing as "Chinese people act a certain way, because they are Chinese".

However, there is such a thing as "Chinese culture", derived mostly from China, although of course there are endless factions, cultures, and languages in China; no, China isn't Han alone!!

But for simplicity, we'll assume Han culture == Chinese culture, and move on from there.

One of the largest coups that I feel the current dictatorship in China has accomplished, and dictatorship it is when you don't step down and decide to serve a third term, is to convince Chinese people that "Chinese government = Chinese people". That's not so.

The Chinese government has many negative qualities. Among them are suppression of free will and excessive monitoring of its citizens, such as the social credit system, and this does indeed result in a lack of creativity. It also results in a lack of drive, of desire for people to excel, for when people like Jack Ma simply go missing because they excel, because they do well, because they choose to take part in directing Chinese society, you end up with an innate desire not to show your true capability.

For if you do? The government will appear, take control of your works, your creation, and you'll be left out in the cold. In fact, you'll probably be killed.

These two things, fear of stepping out of bounds, and fear of excelling, do indeed create issues. This is why totalitarian governments have always fallen behind more open systems, for centrally driven societies always do. Politicians are absolutely not equipped to "see the future", to understand which inventions can be useful or not, and in fact most researchers cannot either! Research must be free, unfettered, not organized, and the output of research must be judged, not the input. Put another way, the usefulness of a research path is not readily apparent until that research path is taken.

Yet centralized control attempts to direct the path of research, whereas non-centralized control has endless paths of research sprouting, growing, and dying, organically allowing society itself to judge the value of such things.

This is what I mean when I say that Chinese culture does not allow for open development, and it is true. It is not a "Chinese" thing, but a "totalitarian thing", and has been seen over, and over, and over again, regardless of the genetic history of the peoples involved. It's a cultural thing.

Back to the coup I referred to prior. By indelibly linking two ideas, the Chinese Government and The Chinese People, as one in the minds of most Chinese citizens, you foster a culture as we see here: one where directed attacks against the Chinese dictatorship, the CCP, and Xi are somehow an attack against the common person in China.

Not so.

Even if you do believe in a different governmental system (in which you'd be wrong, but such belief is OK in the West!), one of China's failures, both as a people and as a government, is a complete lack of understanding of the West. An inability to understand that we generally, actually believe what we stand for. That it's not all for show.

An example. I dislike portions of my current government. Some choices made. The current leader of my Westminster governmental system. I can think that he should be replaced, that he is currently a liability, whilst at the same time recognizing that some things he has done are OK. And I can shout "replace that man!" at the top of my lungs, without impinging upon the Canadian people, or their culture!

Most people who grew up in China (not Hong Kong!) have a difficult time with this. This concept is hard to accept. I get that, but at the same time, it is core. Key. Vital to comprehend.

No matter how much people in the West rail against a current leader, THEY ARE STILL LOYAL TO THEIR COUNTRY. And no matter how much people in the West complain about Xi and the current CCP, THEY ARE NOT IMPINGING UPON THE CHINESE PEOPLE.

This is often lost on anyone immersed in Chinese culture.

Anyhow. I don't have time to engage more at this moment. I will check back to see if you reply, but if you do, please engage inline with my comments. Or at least understand the actual history of West/China interaction.


They have a massive advantage due to having less regulation, cheaper costs, a large pool of talent (even if of lower quality on average), and a strong ecosystem of suppliers.


This may surprise, but Japan is not China. Their culture is not the same. Further, their culture was shifted to capitalism at the end of WWII. Citing Japan supports my point about culture.

Mass marketing things isn't innovation. It's copying. DJI seems like more copying. "Innovation" isn't marketing. It's raw research and development along market paths which are useful. This requires a desire for change, a desire not to conform, and this is what China's culture does not have.


China isn't a communist country, it's first and foremost authoritarian. They do have ruthless capitalism, and the ruthless competition in between individuals that comes with it.

They inherit from Confucianism, and a more collectivist mindset that is prevalent in this area of the planet, but I don't think it should be conflated with the way the economy is organised.

The Japanese on the other hand are overall conformist and conservative.

With just these counterexamples, it doesn't feel like you're looking at the right variables to judge whether innovation is embedded in the culture or not.


> China isn't a communist country, it's first and foremost authoritarian.

So are all “communist” countries. Communism (either Marxist or more generally) as a whole isn’t authoritarian, but all “communist” countries are products of Leninism or its derivatives, which definitely are, fundamentally, authoritarian.


That communism always ended up in authoritarian regimes isn't relevant to what I'm referring to. We generally oppose communism to, say, capitalism or liberalism for organising the economy, and authoritarianism to democracy for organising governance.

There are a few essential properties of a "communist" system that modern China doesn't have. Most of the capital is privately owned, the social safety net is very poor, etc.


> This may surprise, but Japan is not China. Their culture is not the same.

If you can look at a DJI drone and maintain this opinion...


Who exactly are DJI/Hesai copying? They are the market leaders by a mile.


I think it’s a mistake to believe that all China can do is copy and clone.

It’s also a mistake to underestimate the market value of copies and clones. In many cases a cloned version of a product is better than the original. E.g., clones that remove over-engineering of the original and simplify the product down to its basic idea and offer it at a lower price.

It’s also a mistake to confuse manufacturing prowess for the ability to make “copies.” It’s not China’s fault that its competitors quite literally won’t bother producing in their own country.

It’s also a mistake to confuse a gain of experience for stealing intellectual property. A good deal of innovation in Silicon Valley comes from the fact that developers can move to new companies without non-compete clauses and take what they learned from their last job to build new, sophisticated software.

The fact that a bunch of Western companies set up factories in China and simultaneously expect Chinese employees and managers to gain zero experience and skill in that industry is incredibly contradictory. If we build a satellite office for Google and Apple in Austin, Texas then we shouldn’t be surprised that Austin, Texas becomes a hub for software startups, some of which compete with the companies that chose Austin in the first place.


Frankly I think the only reason China copies and clones is that it's the path of least resistance to profit. They have lax laws on IP protection. There is no reason to do R&D when you can just copy/clone and make just as much money with none of the risk.

And that’s probably the only reason. If push comes to shove, they can probably innovate if given proper incentives.

I heard a tale about the Japanese lens industry. For the longest time they made crap lenses that were just clones of foreign designs, until the Japanese government banned licensing of foreign lens designs, forcing their people to design their own. Now they are doing pretty well in that industry, if I'm right.


You need to have an understanding of Chinese culture and the ability to interface with local Chinese officials to get your counterfeiting complaint handled.

You also have to be making something that isn’t of critical strategic importance.

Example: glue https://www.npr.org/transcripts/702642262


> It’s also a mistake to confuse a gain of experience for stealing intellectual property. A good deal of innovation in Silicon Valley comes from the fact that developers can move to new companies without non-compete clauses and take what they learned from their last job to build new, sophisticated software.

The amount of outright theft of entire IP portfolios from US, Canadian, and European companies by China is well known. There is no confusion here; in more recent times people have been arrested and charged for it, and it's how China is able to compete.


> China doesn't innovate, it copies, clones, and steals.

FWIW there was a time when that was the received wisdom about the USA, from the point of view of the European powers. It was shortsighted, and not particularly accurate then either.


And in more recent times, how the USA first viewed Japan and later Korea.


And yet Japan and Korea were both shifted to more Western modes of thought about innovation and development, and to an adoption of democracy and personal rights. This supports my point.


South Korea had little choice in the matter as it’s effectively a tributary state to the US. What’s amazing is that the US didn’t somehow screw up with South Korea.

Japan’s democracy seems to be a hold-over from its imperialist ambitions from the Meiji restoration, when the emperor took power back from the shogunate and “westernized” to fast-track modernization.

Meaning, the Japanese took all of the trappings of western civilization but under the veneer it’s still distinctly Japanese.

Agile civil development.


Where do you get that from?

All the people I know who worked with and for Korean and Japanese entities have countless examples to show how alien the corporate culture is for westerners.

South Korea in particular doesn't exactly seem like a haven for personal growth and experimentation.


This is true in general, but with 1.5 billion citizens they have a lot of non-conformists. Conformism is good for manufacturing and quality; see Japan. I buy a lot from China and I'm frequently positively surprised. I find things that are equally good or better than their Western counterparts at a fraction of the cost. Western companies spend way too much on marketing instead of delivering value. There are issues with the West as well. Today Asia is responsible for a big chunk of the world's manufacturing; this is strategic.


Yes, Western companies spend a lot on marketing, because without it you might confuse their products, which are built to deliver positive experiences and value, with similar-looking but not-so-positive counterparts.

Not to dunk on China particularly here; I do/did enjoy a lot of high-quality Chinese products.


That's true in some cases, but it's also true that some Western companies spend a lot on branding because that's their only differentiator. Sometimes the product is even manufactured in the same factory with the same materials. And don't get me wrong, I know there is a lot of garbage from China, and I often see products from there that have superb build quality and materials but critical flaws due to poor design/marketing.


> A price paid, I think, due to a conformant, restrictive culture. And after all, even if you do excel, you may soon disappear.

I once spoke to a Chinese person who speculated: "I wish that the Chinese were as conformant and uniform as the Americans - China is too diverse and unruly!"

I think that it's a common human habit to upsell one's own diversity and downplay that of others.


Conformism doesn't capture it. It's more complex than that; maybe authoritarian versus democratic. Authoritarian organizations reward loyalty over merit, so people, in order to survive, tend to be obedient, bureaucratic, ruthless, and less competent. Democratic organizations reward merit over loyalty. Paradoxically, despite people having more freedom, things are less chaotic because people have better incentives to be competent and to trust and work together. Though no society is perfectly one or the other.


> China doesn't innovate, it copies, clones, and steals

Explain DJI and Douyin/TikTok.


TikTok is Chinese owned. Its algorithm was not a Chinese invention.


That's a total lie. The reason TikTok (née Musical.ly) has great recommendations is that it uses ByteDance tech, which was 100% Chinese-developed.


TikTok is just full-screen Vine.


Sure, but that's not the part that matters. The innovative part is the recommendation algorithm that redefined what it means to "optimize for engagement".

I mean, YouTube, Facebook and Instagram are trying to hook you up on a dopamine drip so they can force-feed you some ads. TikTok is just pure crack that caught the world by surprise - and it's not even pushing you ads! Honestly, to this day I'm not sure what their business model is.


On paper they are similar. However, when it comes to recsys competence, TikTok blows other platforms - past or present - out of the water. TikTok's feed is algorithmic crack, and is shockingly quick to figure out users' tastes. Instagram and YouTube had to scramble to copy ByteDance's innovation.


And there were smartphones before Apple, but Apple got the formula right.


Industrial espionage happens everywhere; the US does it as well. At some point this excuse starts becoming cope.


Haha. I can tell you're obviously not Chinese, and have no understanding of Chinese culture at all.


US, Japan, Taiwan, Korea, then China. Toyota, Foxconn, Samsung, and Huawei all grew with it.


Maybe they don't today, but tomorrow? Giving them the chance is poor policy.


Ok, we've changed the URL to that from https://www.businessinsider.com/andrew-ng-google-brain-big-t.... Thanks!

Submitters: "Please submit the original source. If a post reports on something found on another site, submit the latter." - https://news.ycombinator.com/newsguidelines.html


Here's what makes it worse imo.

Imagine someone invents a machine that can give infinite energy.

Do you

a) sell that energy, or b) give the technology to build the machine to everyone.

Clearly b is better for society, a is locking up profits.


The answer is c) sell that energy and use your resulting funds to deeply root yourself in all other systems and prevent or destroy alternative forms of energy production, thus achieving total market dominance

This non-hypothetical got us global warming already


In this case the machine also has negative and as-yet-unknown side effects. We don't give nuclear power to everyone.


This analogy of course is close to nuclear energy. I think most people would say that regulation is still broadly aligned with the public interest there, even though the forces of regulatory capture are in play.


I read that book. No, you deny your gift to the world and become a recluse while the world slowly spins apart.

Technically: a solar panel is just such a machine. You'll have to wait a long, long time, but the degradation is slow enough that you can probably use a panel for more than several human lifetimes at ever-decreasing output. You will probably find it more economical to replace the panel at some point because of the amount of space it occupies and the fact that newer generations of solar panels will do that much better in the same space. But there isn't any hard technical reason why you should discard one after 10, 30, or 100 years. Of course 'infinite' would require the panel to be 'infinitely durable', and likely at some point it will suffer mechanical damage. But that's not a feature of the panel itself.
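
To put rough numbers on the degradation point (a minimal sketch; the ~0.5%/year rate is a common industry ballpark I'm assuming, not a figure from any particular panel):

    # Compound output decay of a hypothetical panel at ~0.5%/year.
    rated_watts = 400        # assumed panel rating
    degradation = 0.005      # assumed fraction of output lost per year

    for years in (10, 30, 100, 300):
        remaining = rated_watts * (1 - degradation) ** years
        print(f"after {years:3d} years: ~{remaining:.0f} W "
              f"({remaining / rated_watts:.0%} of rated)")

Under that assumption a panel still puts out roughly 61% of its rated power after a century, and about 22% after three centuries: ever decreasing, but never hitting zero short of mechanical damage.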


And I strongly agree with pointing out that a low-hanging fruit for "good" regulation is strict and clear attribution laws to label any AI-generated content with its source. That's a sooner-the-better, easy-win no-brainer.


Why would we do this? And how would this conceivably even be enforced? I can't see this being useful or even well-defined past cartoonishly simple special cases of generation like "artist signatures for modalities where pixels are created."

Requiring attribution categorically across the vast domain of generative AI...can you please elaborate?


> Why would we do this?

I think it's a reasonable ask to enforce attribution of AI-generated content. We enforce food labels, why not content?

I would go further and argue that AI-generated content should not be granted the same copyright as human-generated content, but with that, AI-generated content using existing copyrighted training data would not violate copyright.


> We enforce food labels, why not content?

Regulation isn't always, but often is, a drag on productivity. Food labels make total sense because the negative consequences of not having them outweigh the drag of providing them.

I'm not at all convinced that the benefits of enforcing AI labeling will outweigh the near-impossible task of policing and enforcing it.

I'm thinking about the cookie policy in Europe. I hate it and almost always just click through because so many websites work around it by making it a real pain to "reject cookies".


If you use an AI spell checker then will your resulting text all be without copyright?

If you use an AI coding assistant then will the written code be without copyright? Or will the code require a disclaimer that says some parts of it are AI generated?

You're also going to have to be very precise in defining what AI means. For most people a compiler is as magical as AI. They might even consider it AI, especially if it does some kind of automatic performance optimization - after all, that's not the behavior the user wrote.


Where is the line drawn? My phone uses math to post-process images. Do those need to be labeled? What about filters placed on photos that do the same thing? What about changing the hue of a color with Photoshop to make it pop?


Generative AI. Anything that can create detailed content out of a broad / short prompt. This currently means diffusion for images, large language models for text. That may change as multi-modality and other developments play out in this space.

This capability is clearly different from the examples you list.

Just because there may be no precise engineering definition does not mean that we cannot arrive at a suitable legal/political definition. The ability to create new content out of whole cloth is quite separate from filters, cropping, and generic "pre-AI" image post-processing. Ditto for spellcheck and word processors for text.

The line actually is pretty clear here.


How do you expect to regulate this and prove generative models were used? What stops a company from purchasing art from a third party that turns prompts into photos, where that third party isn't US-based?


> How do you expect to regulate this and prove generative models were used?

Disseminating or creating copies of content derived from generative models without attribution would open that actor up to some form of liability. There's no need for onerous regulation here.

The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks. The broad existing (and severely flawed!) example of copyright legislation seems instructive.

All I'll opine is that the main goal here isn't really to prevent Jonny Internet from firing up llama to create a reddit bot. It's to incentivize large commercial and political interests to disclose their usage of generative AI. Similar to current copyright law, the fear of legal action should be sufficient to keep these parties compliant if the law is crafted properly.

> What stops a company from purchasing art from a third party that turns prompts into photos, where that third party isn't US-based?

Not really sure why the origin of the company(s) in question is relevant here. If they distribute generative content without attribution, they should be liable. Same as if said "third party" gave them copyright-violating content.

EDIT: I'll take this as an opportunity to say that the devil is in the details and some really crappy legislation could arise here. But I'm not convinced by the "It's not possible!" and "Where's the line!?" objections. This clearly is doable, and we have similar legal frameworks in place already. My only additional note is that I'd much prefer we focus on problems and questions like this, instead of the legislative capture path we are currently barrelling down.


> It's to incentivize large commercial and political interests to disclose their usage of generative AI.

You would be okay allowing small businesses an exemption from this regulation but not large businesses? Fine. As a large business I'll have a mini subsidiary operate the models and exempt myself from the regulation.

I still fail to see what benefit this holds. Why do you care if something is generative? We already have laws against libel and against false advertising.


> You would be okay allowing small businesses an exemption from this regulation but not large businesses?

That's not what I said. Small businesses are not exempt from copyright laws either. They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.

> I still fail to see what benefit this holds.

I have found recent arguments by Harari (and others) that generative AI is particularly problematic for discourse and democracy to be persuasive [1][2]. Generative content has the potential, long-term, to be as disruptive as the printing press. Step changes in technological capabilities require high levels of scrutiny, and often new legislative regimes.

EDIT: It is no coincidence that I see parallels in the current debate over generative AI in education, for similar reasons. These tools are ok to use, but their use must be disclosed so the work done can be understood in context. I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.

1. https://www.economist.com/by-invitation/2023/04/28/yuval-noa... 2. https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-c...


> They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.

They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care about so nobody bothers to sue them, but that's not the same thing at all. We regularly see small entities getting harassed under these kinds of laws, e.g. when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.


> They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care about so nobody bothers to sue them

Yes, but there are also powerful provisions like section 230 [1] that protect smaller operations. I will concede that copyright legislation has severe flaws. Affirmative defenses and other protections for the little guy would be a necessary component of any new regime.

> when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.

Look, I have used and like youtube-dl too. But it is clear to me that it operates in a gray area of copyright law. Secondary liability is a thing. Per the EFF's excellent discussion of some of these issues [2]:

> In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.

I do not think it is clear how youtube-dl fares on such a test. I am not a lawyer, but the issue to me does not seem as clear cut as you are presenting.

1. https://www.eff.org/issues/cda230 2. https://www.eff.org/pages/iaal-what-peer-peer-developers-nee...


> Yes, but there are also powerful provisions like section 230 [1] that protect smaller operations.

This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.

> But it is clear to me that it operates in a gray area of copyright law.

Which is the problem. It should be unambiguously legal.

Otherwise the little guy can be harassed, and the harasser can lean on that "maybe" to extend the harassment, or just get them shut down even when it is legal, because the recipient of the notice isn't willing to take the risk.

> > In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.

Notably this was a circuit court case and not a Supreme Court case, and:

> The discussion of proportionality in the Aimster opinion is arguably not binding on any subsequent court, as the outcome in that case was determined by Aimster's failure to introduce any evidence of noninfringing uses for its technology.

But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal -- because it still isn't an infringing work. It's the same reason the DMCA process isn't supposed to be used for material which is allegedly libelous. But the DMCA's process is so open to abuse that it gets used for things like that regardless and acts as a de facto prior restraint, and is also used against any number of things that aren't even questionably illegal. Like the legitimate website of a competitor which the claimant wants taken down because they are the bad actor, and which then gets taken down because the process rewards expeditiously processing takedowns while fraudulent ones generally go unpunished.


> This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.

Ok, I'll rephrase: the clarity of its mechanisms and protections benefits small and large organizations alike.

My understanding is that it no longer applies to copyright because the DMCA and specifically OCILLA [1] supersede it. I admit I am not an expert here.

> Which is the problem. It should be unambiguously legal.

I have conflicting opinions on this point. I will say that I am not sure if I disagree or agree, for whatever that is worth.

> But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal

This is totally fair. I also am not a fan of the DMCA and takedown processes, and think those should be held as a negative model for any future legislation.

I'd prefer for anything new to have clear guidelines and strong protections like Section 230 of the CDA (immunity from liability within clear boundaries) than like the OCILLA.

1. https://en.wikipedia.org/wiki/Online_Copyright_Infringement_...


> I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.

You should vote with your wallet and only patronize businesses that self disclose. You don't need to create regulation to achieve this.

With regards to the articles, they are entirely speculative, and I disagree wholly with them, primarily because their premise is that humans are not rational and discerning actors. The only way AI generates chaos in these instances is by generating so much noise as to make online discussions worthless. People will migrate to closed communities of personal or near-personal acquaintances (web-of-trust like) or to meatspace.

Here are some paragraphs I found especially egregious:

> In recent years the qAnon cult has coalesced around anonymous online messages, known as “q drops”. Followers collected, revered and interpreted these q drops as a sacred text. While to the best of our knowledge all previous q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

Dumb people will dumb. People with different values will different. I see no reason that AI offers increased risk to cult followers of Q. If someone isn't going to take the time to validate their sources, the source doesn't much matter.

> On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually ai. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an ai bot, while the ai could hone its messages so precisely that it stands a good chance of influencing us.

In these instances, does it matter that the discussion is being held with AI? Half the use of discussion is to refine one's own viewpoints by having to articulate one's position and think through the cause and effect of proposals.

> The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the ai chatbot. If ai can influence people to risk their jobs for it, what else could it induce them to do?

Intimacy isn't necessarily the driver for this. It very well could have been Lemoine's desire to be first to market that motivated the claim, or a simple misinterpreted signal, à la LK-99.

> Even without creating “fake intimacy”, the new ai tools would have an immense influence on our opinions and worldviews. People may come to use a single ai adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

Akin to the concerns of scribes during the times of the printing press. The market will more efficiently reallocate these workers. Or better yet, people may still choose to search to validate the output of a statistical model. Seems likely to me.

> We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai. The first crucial step is to demand rigorous safety checks before powerful ai tools are released into the public domain.

Now we get to the point: please regulate me harder. What's to stop a more powerful AI from corrupting the minds of the legislative body through intimacy or other nonsense? Once it is sentient, it's too late, right? So we need to prohibit people from multiplying matrices without government approval right now. This is just a pathetic hit piece to sway public opinion to get barriers of entry erected to protect companies like OpenAI.

Markets are free. Let people consume what they want so long as there isn't an involuntary externality, and conversing with anons on the web does not guarantee that you're speaking with a human. Both of us could be bots. It doesn't matter. Either our opinions will be refined internally, we will make points to influence the other, or we will take up some bytes in Dang's database with no other impact.


> You should vote with your wallet and only patronize businesses that self disclose. You don't need to create regulation to achieve this.

This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.

I am not entirely convinced by the arguments in the linked opinions either. However, I do agree with the main thrust that (1) machines that are indistinguishable from humans are a novel and serious issue, and (2) without some kind of consumer protections or guardrails things will go horribly wrong.


> This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.

I strongly disagree. I heard the same arguments about how Google needs regulation because nobody could possibly compete. A few years later we have DDG, Brave Search, Searx, etc.


You mean the market will sacrifice people in order to optimize!?!?!?!

say it ain't so bobby, say it ain't so!


There are no machines that are indistinguishable from humans. That is science fiction.


This is a ridiculous proposal, and obviously not doable. Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.

It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant. Centuries ago some authoritarians raised similar concerns over printing presses.

And copyright is an entirely separate issue.


> Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.

I disagree. What is vague about "generative content must be disclosed"?

What are the first amendment issues? Attribution clearly can be required for some forms of speech; that's why every political ad on TV carries an attribution blurb.

> It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant.

Again, I disagree. The line between tools and actors will only blur further in the future without action.

> Centuries ago some authoritarians raised similar concerns over printing presses.

I'm pretty clearly not advocating for a "smash the presses" approach here.

> And copyright is an entirely separate issue.

It is related, and it's a model worth considering, since it arose out of the last technological breakthrough in this area (the printing press and the mass copying of the written word).


Your disagreement is meaningless because it's not grounded in any real understanding of US Constitutional law and you clearly haven't thought things through. What is generative AI? Please provide a strict legal definition which complies with the vagueness doctrine. Is an if/then statement with a random number generator generative AI? How about the ELIZA AI psychology program from 1964? And you'll also have to explain how your proposal squares with centuries of Supreme Court decisions on compelled speech.
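
To make the definitional problem concrete, here's a toy sketch (hypothetical code, not anyone's real product). Is this "generative AI"?

    # A complete "text generator" built from an if/then statement and a
    # random number generator. Toy example for the definitional point only.
    import random

    def generate_reply(prompt: str) -> str:
        if "?" in prompt:
            return random.choice(["Yes.", "No.", "Hard to say."])
        return random.choice(["Interesting.", "Tell me more."])

    print(generate_reply("Is this generative AI?"))

Any statutory definition has to either cover this program or exclude it with precise language, and the vagueness doctrine lives in exactly that gap.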


> What are the first amendment issues? Attribution clearly can be required for some forms of speech, it's why every political ad on TV carries an attribution blurb.

I'm not sure this is the best comparison. The government can regulate the speech of government employees. Presumably it can do the same for candidates campaigning for a government role.


> The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks.

You're proposing a law. How does it work?

Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.

But how is the government, or anyone, supposed to prove this? The reason you want it to be labeled is for the cases where you can't tell. If you could tell, you wouldn't need the label, and anyone who wants to avoid labeling could do so precisely in the cases where it's hard to prove; those are the only cases where the label would have any value.


> Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.

This is the most obvious problem, yes. Consumer protection agencies seem like the most obvious candidate. I have already admitted I am not a lawyer, but this really does not seem like an intractable problem to me.

> The reason you want it to be labeled is for the cases where you can't tell.

This is actually _not_ the most important use case, to me. This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.

> But how is the government, or anyone, supposed to prove this?

Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.

This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.


> This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.

But then why do you need any new laws at all? We already have laws against false advertising and breach of contract. If you want to declare that a space is exclusively human-generated content, what stops you from doing this under the existing laws?

> Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.

Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated. In order to prove it you would need some way of distinguishing machine-generated content, which if you had it would make the law irrelevant.

> This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.

Doing nothing can be better than doing either of two things that are both worse than nothing.


> But then why do you need any new laws at all? We already have laws against false advertising and breach of contract.

My preference would be for generative content to be disclosed as such. I am aware of no law that does this.

Why did we pass the FFDCA for disclosures of what's in our food? Because the natural path that competition would lead us down would require no such disclosure, so false advertising laws would provide no protection. We (politically) decided it was in the public interest for such things to be known.

It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.

> Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated.

You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.

> Doing nothing can be better than doing either of two things that are both worse than nothing.

Agreed. However, I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.


> My preference would be for generative content to be disclosed as such. I am aware of no law that does this.

What you asked for was a space without generative content. If you had a space where generative content is labeled but not restricted in any way (e.g. there are no tools to hide it) then it wouldn't be that. If the space itself does wish to restrict generative content then why can't you have that right now?

> Why did we pass the FFDCA for disclosures of what's in our food?

Because we know how to test it to see if the disclosures are accurate but those tests aren't cost effective for most consumers, so the label provides useful information and can be meaningfully enforced.

> It seems inevitable to me that without some sort affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.

This will happen regardless of disclosure unless it's prohibited, and even then people will just lie about it because there is an incentive to do so and it's hard to detect.

> You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.

It will be a technical battle between companies that don't want it on their service and try to detect it against spammers who want to spam. The effectiveness of a law would be directly related to what it would take for the government to prove that someone is violating it, but what are they going to use to do that at scale which the service itself can't?

> I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.

So you're proposing something which is useless but mostly harmless to satisfy demand for Something Must Be Done. That's fine, but I still wouldn't expect it to be very effective.


> You're proposing a law. How does it work?

The same way law works today, or do you think this is the first time the law has had to deal with fuzziness?


"Someone else will figure that out" isn't a valid response when the question is whether or not something is any good, because to know if it's any good you need to know what it actually does. Retreating into "nothing is ever perfect" is just an excuse for doing something worse instead of something better because no one can be bothered, and is how we get so many terrible laws.


you have so profoundly misinterpreted my comment that I call into question whether you actually read it or not.

One of the best descriptions I've seen on HN is this.

Too many technical people think of the law as executable code: if you can find a gap in it, you can get away with things on a technicality. That's not how the law works (spirit vs letter).

In truth, lots of things in the world aren't perfectly defined and the law deals with them just fine. One such example is the reasonable person standard.

> As a legal fiction,[3] the "reasonable person" is not an average person or a typical person, leading to great difficulties in applying the concept in some criminal cases, especially in regard to the partial defence of provocation.[7] The standard also holds that each person owes a duty to behave as a reasonable person would under the same or similar circumstances.[8][9] While the specific circumstances of each case will require varying kinds of conduct and degrees of care, the reasonable person standard undergoes no variation itself.[10][11] The "reasonable person" construct can be found applied in many areas of the law. The standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law.

> The standard is also used in contract law,[12] to determine contractual intent, or (when there is a duty of care) whether there has been a breach of the standard of care. The intent of a party can be determined by examining the understanding of a reasonable person, after consideration is given to all relevant circumstances of the case including the negotiations, any practices the parties have established between themselves, usages and any subsequent conduct of the parties.[13]

> The standard does not exist independently of other circumstances within a case that could affect an individual's judgement.

Pay close attention to this piece:

> or (when there is a duty of care) whether there has been a breach of the standard of care.

One could argue that because standard of care cannot ever be perfectly defined it cannot be regulated via law. One would be wrong, just as one would be wrong attempting to make that argument for why AI shouldn't be regulated.


> you have so profoundly misinterpreted my comment that I call into question whether you actually read it or not.

You are expressing a position which is both common and disingenuous.

> Too many technical people think of the law as executable code and if you can find a gap in it, then you can get away with things on a technicality. That's not how the law works (spirit vs letter).

The government passes a law that applies a different rule to cars than trucks and then someone has to decide if the Chevrolet El Camino is a car or a truck. The inevitability of these distinctions is a weak excuse for being unable to answer basic questions about what you're proposing. The law is going to classify the vehicle as one thing or the other and if someone asks you the question you should be able to answer it just as a judge would be expected to answer it.

Which is a necessary incident to evaluating what a law does. If it's a car and vehicles classified as trucks have to pay a higher registration fee because they do more damage to the road, you have a way to skirt the intent of the law. If it's a truck and vehicles classified as trucks have to meet a more lax emissions standard, or having a medium-sized vehicle classified as a truck allows a manufacturer to sell more large trucks while keeping their average fuel economy below the regulatory threshold, you have a way to skirt the intent of the law.

Obviously this matters if you're trying to evaluate whether the law will be effective -- if there is an obvious means to skirt the intent of the law, it won't be. And so saying that the judge will figure it out is a fraud, because in actual fact the judge will have to do one thing or the other and what the judge does will determine whether the law is effective for a given purpose.

You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.


Toll roads charge vehicles based upon the number of axles they have.

In other words, you made my point for me. The law is much better than you at doing this; courts have literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.

> You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.

uhhh......

To quote:

> The reasonable person standard is by no means democratic in its scope; it is, contrary to popular conception, intentionally distinct from that of the "average person," who is not necessarily guaranteed to always be reasonable.

You should read up on this idea a bit before posting further, you've made assumptions that are not true.


> Toll roads charge vehicles based upon the number of axles they have.

So now you've proposed an entirely different kind of law because considering what happens in the application of the original one revealed an issue. Maybe doing this is actually beneficial.

> The law is much better than you at doing this, they've literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.

Judges are not empowered to replace vehicle registration fees or CAFE standards with toll roads even if the original rules are problematic or fail to achieve their intended purpose. You have to go back to the legislature for that, who would have been better to choose differently to begin with, which is only possible if you think through the implications of what you're proposing, which is my point.


> So now you've proposed an entirely different kind of law because considering what happens in the application of the original one revealed an issue.

https://www.youtube.com/watch?v=15_-cKwNWDA


Yes to all of the above, and airbrushed pictures in old magazines should have been labeled too. I'm not saying unauthorized photo editing should be a crime, but I don't see any good reason why news outlets, social media sites, phone manufacturers, etc. need to be secretive about it.


But how on earth is that helpful for consumers?


It's helpful because they know more about what they're looking at, I guess? I'm a bit confused by the question - why wouldn't consumers want to know if a photo they're looking at had a face-slimming filter applied?


You're not thinking like a compliance bureaucrat. If you get in trouble for not labeling something as AI-generated then the simplest implementation is to label everything as AI-generated. And if that isn't allowed then you run every image through an automated process that makes the smallest possible modification in order to formally cause it to be AI-generated so you can get back to the liability-reducing behavior of labeling everything uniformly.
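
As a sketch of how trivial that automated pass could be (hypothetical code, assuming any programmatic modification counts as "generated" and that Pillow is available):

    # Hypothetical compliance hack: flip one bit of one pixel so every
    # file is formally machine-modified, then label everything uniformly.
    from PIL import Image

    def make_formally_generated(src: str, dst: str) -> None:
        img = Image.open(src).convert("RGB")
        px = img.load()
        r, g, b = px[0, 0]
        px[0, 0] = (r ^ 1, g, b)  # visually invisible change
        img.save(dst)

    make_formally_generated("photo.jpg", "photo_labeled.png")

The result: every image carries the same label, and the label carries no information.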


In fact this is exactly what happened recently with sesame labeling requirements: https://apnews.com/article/sesame-allergies-label-b28f8eb3dc...


It may not be relevant. What if I want to put up a stock photo with a blog post? What benefit does knowing whether it was generated by multiplying matrices have to my audience? All I see it doing is increasing my costs.


The benefit is that your audience knows whether it's a real picture of a thing that exists in the world. I wouldn't argue that's a particularly large benefit - but I don't see why labeling generated images would be a particularly large cost either.


The map is not the territory. No photo represents a real thing that exists in the world. Photos just record some photons that arrived. Should publishers be required to disclose the frequency response curve of the CMOS sensor in the camera and the chromatic distortion specifications for the lens?


I'm approximately a free-market person. I hate regulation and believe it should exist only when there is an involuntary third-party externality.

My position is that the benefit is unspecified; the only cases specified here are already covered by other laws. All such generative labeling would do is increase costs (marginal or not, they make businesses less competitive) and open the door to further regulatory capture. Furthermore, regardless of commerciality, this is likely a 1A violation.


There are already laws against murder, but this doesn't stop communities from passing new laws when a cop gets murdered.

These arguments hold no water.


True. It just seems redundant.

I still foresee 1A issues. What about user-uploaded content?


Please define "AI generated content" in a clear and legally enforceable manner. Because I suspect you don't understand basic US constitutional law including the vagueness doctrine and limits on compelled speech.


Human-driven cars kill people all the time too. And the stock thing from 2010 isn't AI, just algorithmic trading.

Not the most convincing of arguments.



