Will AI obliterate the rule of law? (matthewbutterick.com)
46 points by mmphosis on April 22, 2023 | 75 comments



The OP is a bit too long-winded for my taste, but this passage struck a chord with me:

> If AI companies are allowed to market AI systems that are essentially black boxes, they could become the ultimate ends-justify-the-means devices. Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything.

Corporations routinely get caught doing things that are illegal, but employees rarely suffer any legal consequences because it proves really hard or impossible to pin responsibility on any one person. The responsibility is diffuse; it falls on the system, not on the individuals that work for it. AI will make it more so.


The entire point of corporations is diffuse legal liability. We already see this in the law, no AI needed; it is why it is rare for an individual police officer, judge, or prosecutor to face any accountability. The system is designed to ensure no personal accountability via overlapping, nearly impenetrable liability shields (aka immunity).

The mantra then becomes "we have a systemic problem", and no one can agree on either the actual problem or the solution, when in reality we have a liability issue.

This will only grow with AI. But as long as people do not see that we had already obliterated the rule of law long before AI touched it, we have no hope of stopping the abuse enabled by AI.


That lack of personal accountability would be no problem, though, if the corporate sanctions for lawbreaking had sufficient teeth. Corporations will always treat fines etc. as a cost of doing business; the cost should be high enough to deter lawbreaking.


While that is true, I have seen executives' attitudes change when a regulation held them personally liable, either economically or under threat of jail time. Those regulations are taken far, far more seriously than when the punishment is just a fine for the company, which then becomes a math problem: is the fine more than the cost of compliance?


The police should absolutely not operate as anything like a corporation. Policing is a fundamental pillar of democracy and should be absolutely accountable for any wrongdoing. What individual officers can get away with in the US is just completely insane.


I don't think I claimed they should; the juxtaposition was with the parent comment about corporations, tying that into the wider concept of law.

Corporations are legal constructions that limit the liability not only of investors but of the corporation's employees; that is their function in law.

Similarly, the law has other legal constructions that exempt and limit liability for other positions inside that system.

So it is not that police are operating as corporations; rather, the same legal construction that created corporations also created police forces.

The root of the problem for both is the same (more on that in a second)

>>it is a fundamental pillar of democracy

I disagree here, kind of. A fundamental pillar of democracy is equality under the law, and the root of the problem for police, corporations, politicians, etc. is the ability of the law to make some people "more equal" than others... i.e. not equal at all.

This is done by making police officers exempt from some laws, some regulations, some liability even if the city is not... this is done by making individuals exempt from regulations even if the corporation is not...

So you are correct that police should not be operated like a corporation, but at the same time we can look at the law, see where the two are treated similarly, and see why that is a problem.

I will refrain from my general negativity about democracy and just say that for what you call democracy to hold, everyone must be treated equally under the law... and I do mean everyone, and I do mean equal. No exemptions, no special privileges. Most political ideologies have a problem with true equality.


A lot of crimes require intent. Perhaps using AI could actually circumvent the intent requirement, thus making a lot of otherwise illegal things totally legal.

That is to say, it would not only make it hard to pin responsibility; it would make the act no longer a crime at all.


I think this is actually a good thing in the long run because “requiring intent” creates a clearly perverse incentive where the organization may be able to do illegal things so long as it can delude itself about them, for instance by keeping inaccurate books or allowing broken processes to remain broken because fixing them would shed light on something illegal.

Instead I think it would be better for organizations to be approximately as liable for their mistakes as for their crimes. In that case it doesn't matter whether an employee does something illegal or an AI does something illegal on behalf of the company; the company remains liable.


> I think this is actually a good thing in the long run because “requiring intent” creates a clearly perverse incentive where the organization may be able to do illegal things so long as it can delude itself about them

This doesn't really match how intent is handled in (at least American) law. There are reasonable-person tests. It is subpar, and so I agree with your second paragraph, but it isn't as cut and dried as the first paragraph suggests.


Intent/mens rea doesn't work like that unless the law explicitly specifies so. By default intent simply means intent to perform an act, as opposed to intent to cause harm. In some specific cases, like murder, intent to cause harm is an element, but that's the exception not the rule.

For example, if you intentionally take someone's property (maybe you took their phone away because you genuinely thought their phone was causing them harm and wanted to help them), you have the mens rea for theft.

However, if you unintentionally took someone's phone, like you mistook someone else's phone for your phone, then you don't have the mens rea for theft.


We've had software making business decisions for decades now, so this really isn't a new problem. When it comes to finance laws, intent doesn't work like that; the onus is on the company and its employees to prevent the criminal activity. Lack of intent, or even knowledge, is not sufficient. You have a responsibility to implement processes that give you the knowledge you need.

I wrote the above before reading the article and wondered how much the author knows about corporate criminal and civil responsibility. I've worked for finance institutions, so I've been through the training on this. The author, it turns out, is a graphic designer. Right. I completely understand and appreciate the problems with generative AI; those are points well taken.

I mean, I've got nothing against graphic designers, and I'm not saying there are no risks with AI. There are many. But the risk assessment, particularly in finance and likely in other business areas, is based on a fundamental misunderstanding of the way regulations in this area already work today.


When a company does something illegal, the employees are also punished in a "diffuse" manner, as a result of economic sanctions. When a big company has to pay an X million fine, that is X million less for paying employees, so fewer raises, maybe layoffs, etc... People who invested in that unlawful company will also lose money.

When it is not just a money matter (like when people may die), we usually designate one person and say something like "if people die, you go to jail; make sure it doesn't happen". It is up to him to do what is necessary, including punishing employees who do not follow the rules.


Counterpoint: when the fine is less than the profit made from the illegal action, it is just the cost of doing business.


Same thing for individuals.

I know some people who never pay for parking, public transport, or tolls, because in their case, the fine for not paying times the chance of getting caught is less than the ticket price.


I recall that some Scandinavian countries base fine amounts for traffic violations like speeding on income.


I guess that makes it economically viable for people with less income not to follow the law.


On the contrary, the point is to correct for the fact that a flat-fee fine is a complete non-issue if your income is high enough. There have been times in my life when an accidental or unexpected $50 fine would impact my ability to pay rent or other essential bills that month. But now I'm not even sure I'd bother going to court to contest something that small, because I just wouldn't care enough to spend the time or emotional investment.


It's actually the other way around. It's high enough to not be economically viable for anybody.


If a person's income is low enough that probability of being fined * percentage of income per fine * income is less than the price of parking, then it's economically viable for them not to pay for parking and risk the fine.
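
For concreteness, here is that inequality as a quick back-of-the-envelope Python check (all numbers are invented for illustration):

    # Expected cost of skipping payment vs. paying, with an income-scaled fine.
    p_fined = 0.05             # assumed probability a violation gets fined
    fine_pct_of_income = 0.02  # assumed fine: 2% of monthly income per violation
    monthly_income = 2000.0    # assumed low monthly income
    parking_price = 3.0        # assumed price of paying for parking once

    expected_fine = p_fined * fine_pct_of_income * monthly_income  # 0.05*0.02*2000 = 2.0
    if expected_fine < parking_price:
        print("skipping payment is cheaper in expectation")  # this branch at low income
    else:
        print("paying up front is cheaper in expectation")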


Human behavior laundering is a magnificent phrase.

The only thing is, you don't have to worry about this in the future; it's already here. [1]

1 - https://www.reuters.com/article/us-amazon-com-jobs-automatio...


First, AI being a black box isn’t a legal shield. See Tesla Autopilot.

Second, corporations and employees have a specifically unique legal relationship, but employees can be held criminally responsible for their actions if those actions are intentionally illegal. Often a corporation's actions are illegal in the aggregate even though no one person did anything illegal.

https://en.m.wikipedia.org/wiki/Trial_of_Kenneth_Lay_and_Jef...


Eh, we’ve had black boxes for a while. It’s just a non-human “fall guy.”

The law deals with this with outcome tests: if the outcome would have been illegal had a person done it, then the person using the black box is liable.

E.g., if you hire on non-skill-based criteria and the outcome is not similar to the racial distribution of the applicant pool, you're liable for discriminatory hiring. This is a common problem with using personality-based hiring assessments.
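
A minimal sketch of what such an outcome test can look like, assuming the EEOC "four-fifths rule" (each group's selection rate should be at least 80% of the highest group's rate) as the threshold; the applicant and hire counts here are invented:

    # Disparate-impact check in the spirit of the four-fifths rule (counts invented).
    applicants = {"group_a": 200, "group_b": 150}
    hired = {"group_a": 40, "group_b": 12}

    rates = {g: hired[g] / applicants[g] for g in applicants}  # selection rates
    highest = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / highest
        verdict = "ok" if ratio >= 0.8 else "potential disparate impact"
        print(f"{group}: rate {rate:.0%}, ratio to top {ratio:.2f} -> {verdict}")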


One of the biggest problems with the concept of corporate personhood is that if a corporation commits a crime you can't just throw it in jail like a normal person.

At some point perhaps we should just consider the "death penalty" for corporations that break the law.

That or we start holding executives and/or shareholders accountable for the actions that happen in organisations they're supposed to control.


Not only corporations. States (government and police and law) will also use this BS...


> For instance, to run a typography business, I have to stay conversant in at least four programming languages—Python, JavaScript, Racket, and Swift

The latter two are relatively uncommon. I can see why Swift might be needed for Apple's platform, but I was curious what the functional language Racket is doing in that list. A bit of Googling revealed that the author himself wrote a book publishing tool in Racket:

https://docs.racket-lang.org/pollen/


Butterick has written several typography related tools in Racket:

https://git.matthewbutterick.com/explore/repos?sort=recentup...


What is the modus operandi for upvoting links on HN these days? Is it to upvote interesting titles, so we can discuss certain topics? Or to upvote interesting content?

The topic (AI and the future of copyright law) is interesting. But the content seems to be an angry rant without any merit or structure.

    many generative AI products are based on
    massive violations of law
If so, which law was violated?


The term you are looking for is modus operandi (not operandus).

Modus operandi literally means "manner of working," so you want the genitive of operandus (operandi) because the genitive in Latin corresponds (roughly) to the English preposition "of."


(That made me think of that Life of Brian scene with the Roman guards :D)


Thanks, updated.


> Is it to upvote interesting titles, so we can discuss certain topics? Or to upvote interesting content?

Why not both? If I see something on the top of HN, I know it's probably worth checking out, and I read the comments first.


It would be nice if we could separately upvote the discussion or the article, so a reader knew whether to ignore the article.


Off the top of my head (and IANAL):

1. Copyright laws

2. Data protection/privacy laws e.g. national implementations of GDPR, medical privacy laws


Why? How does training a neural network violate copyright law? Which paragraph?

Same question for "data protection/privacy".


IANAL but a quote from a Cornell article says

> The five fundamental rights that the bill gives to copyright owners—the exclusive rights of reproduction, adaptation, publication, performance, and display—are stated generally in section 106.

Seems it can be argued that generative AI violates many of these rights. Again I’m not a lawyer so I’m sure there’s some historic legal stuff that impacts the interpretation - but taking a copyrighted work and allowing a neural network to adapt it in any way seems like a clear violation?


Does AI really "adapt" a "work"? Or does it learn from looking at the world and then create its own works, just like a human does?


That's the trillion dollar question.

My belief is that generative AI probably breaks both the letter and the spirit of the law, but it also renders the law pointless because its existence, at whatever quality level, means there's no longer a commercial justification for copyright to exist to protect creative output of that level or lower.

But there might be an evolutionary/signalling reason (think peacock tail) that overrides the commercial.

https://kitsunesoftware.wordpress.com/2022/10/09/an-end-to-c...


An AI learns by looking at the world the same way my camcorder does when I take it to a movie theater and point it at the screen.


It's actually the other way round.

The camcorder can reproduce every scene in detail. But it can not tell you the plot.

AI can tell you about the plot, but not reproduce detailed scenes.

So this is a prime example of how AI is humanlike and not copying anything.


The analogy falls apart when you consider that for instance stable diffusion was trained on 2.3 billion images, but the model itself is only around 10GB. That only works out to be 4 bytes per image. Clearly it's not the same as pointing a camcorder at a movie screen.
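
The bytes-per-image figure is simple division; a quick sanity check using the approximate sizes quoted above:

    # Rough bytes-per-training-image from the figures quoted in the comment.
    model_size_bytes = 10e9    # ~10 GB model, approximate
    training_images = 2.3e9    # ~2.3 billion training images
    print(f"{model_size_bytes / training_images:.1f} bytes per image")  # ~4.3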


You could argue that training a model is essentially a lossy compression algorithm resulting in a compressed artifact that can be non-deterministically decompressed.


So are humans. If I were to draw a Mickey Mouse from memory, would it be subject to copyright laws?


Well, yes.


That’s 10GB with everything included, including training weights. The models most people use are 2 or 4 GB.


If current AIs don't actually violate copyright law (doubtful), then the law should be modified, since AI is obviously harmful to actual creators and will likely result in shitty outcomes for existing artists, fewer future artists, and less compelling art overall. If copyright law isn't intended to protect artistry in the case of existential threat, what exactly is the point of copyright law, anyway?

I personally think an unstated goal of society is that it's primarily for the benefit of humans, and the major problem with AI is it ignores that. Art in particular is special, and if AI cheapens art, it cheapens the human experience.


I disagree with this. AI makes making art easier, and by doing so it both increases the supply of art and enhances the human experience by enabling anyone to create art. As an example, my drawing skills are so bad that I couldn't express myself visually before, but now Midjourney enables me to express myself. I don't see how that could be a bad thing.

Does it harm existing creators? Probably, in the sense that it enables new entrants to the market and creates more competition, but that should be acceptable. We live in a market society (since every other economic form has been a failure), and we need to acknowledge that sometimes that means people get their livelihoods destroyed. If we start picking and choosing who deserves to keep their livelihood, we fall into political traps, which aren't ideal (why do we care about artists but not coal miners?).

Another point I want to get at here is that in a world of abundant art, I would expect art to be more compelling. I don’t know about you but one thing I find a bit frustrating about Anglophone art is that too often it follows the same tropes and happens in the same locations. A world of abundant art can solve that.


> if anything it both increases the supply of art and enhances the human experience by enabling anyone to create art.

It increases the supply of derivative art. I agree to some extent it "enhances the human experience by enabling anyone to create art" in the sense that they can adorn their environment or works with pretty things that they like, but they aren't actually creating anything in the traditional sense.

Barring major rapid advancements, I expect art generated by AI (and anything else) to be derivative of what it's consumed. I wouldn't expect for it to develop its own style of painting or music, I would just expect to see the same stuff rehashed over and over again. Stagnation. That's not something I want.

Again, what is the point of copyright, exactly?


I don't think it would be derivative — sure, for those who go to Midjourney, try out a single query to feel good, and then never generate another image, it won't be "art". But these things already have integrations into Photoshop and such, and a more collaborative approach between human and AI could easily result in actual art.

Think of something like: I ask for a portrait of a woman, similar in layout to the Mona Lisa, then click on some area I don't like too much and ask the AI to change it to something else. You don't have to imagine many iterations before it could give you "real art", especially since you are free to draw/edit/add layers at any step of the process, or even draw the initial sketch.


> Another point I want to get at here is that in a world of abundant art, I would expect art to be more compelling

I expect the exact opposite will be true.

Consider for a moment that most people who are naturally inclined to art, even with years of practice and training, do not produce much that is novel or of any real interest. If we’re talking about truly great art, even accounting for all the inherent subjectivity in that classification, it’s a tiny fraction of these people.

Now, let’s take masses of people with no particular knowledge of art or ability to reason about how good images are constructed and set them loose typing into “magic” image producing boxes, responding to whatever appeals to them in the most uncritical fashion imaginable. How would this make art more compelling? All it will do is make art “more” in the sense that food is cheaper and comes in bigger packages at a Walmart than a farmers’ market.


I have no idea how you write such a law in a way that does not also punish other humans/businesses and hand a massive windfall of power to large businesses that buy up copyrights.


Most artists can't actually make a living from their work; it's only a tiny minority that can.

I think a future with UBI is inevitable, and then (most[1]) copyright laws could be erased.

[1] maybe they should apply when they protect an individual against a company


Should we also retroactively introduce laws to make shovels and pencils illegal? Those also made a lot of people obsolete. Let alone tractors and cameras. Do we have to go back to a time before flint tools?


I don't know, is there something about not being able to dig or draw that is essential to the human experience?

It's funny to see people act like AI is just another simple tool instead of an unprecedented fundamental shake-up of how we think about intelligence, ourselves, and our role in society.


I think it's particularly interesting to see this debate play out as AI starts to reproduce code. As in this post: https://news.ycombinator.com/item?id=35643715 -- I find that people who work on these systems are fine with exposing all kinds of artists and writers to the dangers of automation, but when that comes home to roost and AI starts to be able to reproduce their own work, suddenly it's a problem.

It just makes me wonder why software engineers should represent some special class, protected from the dangers of automation.

Not saying this author is 100% right, just saying I understand the spirit of it.


Political influence, like just about everything, is distributed unequally. I doubt software engineers have much more pull than artists. But once AI is threatening to obsolete lots of labor, it might run into problems.

The strategic way to introduce it, therefore, would be in ways that help a lot of people without (at least initially) destroying many jobs. But it seems like firms would have to cooperate beyond what they are capable of to pull that off.


Most software engineers I know are happy to use ChatGPT.


All of these "will AI destroy X" articles are so weird. It's not as if unbridled late stage capitalism [0] hasn't been undermining our longstanding institutions and values. It's not as if corporate bureaucracies aren't artificial intelligences that we've been ceding control to for decades, while they increasingly become paperclip maximizers. I fully expect "AI" to turn the screws on existing power imbalances even harder, but to frame things as if we're at some watershed moment strikes me as utterly myopic.

[0] as distinct from distributed free market activity. I'm coming from a libertarian perspective here.


Yep, you can probably dig up some Victorian-era writings of the same nature, raging against whatever was new then. People not liking change and projecting their irrational fears is nothing new. They'll spell out all sorts of negative outcomes, most of which never come even close to happening. Some of the Victorian-age writings on this are quite hilarious to read today because of how ludicrously wrong and cringeworthy they were. The shelf life of these articles isn't very good.

AI is providing us with new tools. That's all. And there are lots of people that will use those tools in all sorts of ways. Some interesting, some misguided, some dangerous, but mostly mundane things. Some people will be more skilled using tools than others. Or have better access.

I had an interesting conversation with a friend whose kids are in college and are adapting to having chat gpt available. It's great. They are using it for all sorts of things. AI won't replace us. The next generation using it will. They'll be more familiar and comfortable with it. They'll have never known a world without it. They won't get hung up on the past. And they'll move us aside without much thought. This happens every generation.


Perhaps off topic, but I thought late-stage capitalism is what you get as a result of libertarianism.

To avoid wealth and power imbalances you need to effectively wield our democratic powers and take money from winners and redistribute it to the losers. (As taxes and regulation)


Thorough corporate welfare and regulatory capture across all industries, including the government routinely giving away trillions of dollars to banks, is not what I'd call libertarian at all. I'm open to the possibility that centralized capital accretion is inevitable, it's just not my null hypothesis and it's not the experiment we're running.

Overall I identify with libertarianism because individual freedom is a powerful perspective. It's most certainly not the only perspective, but it seems to be one of the easiest to lose sight of. And notice how I left the 'l' lower case - I'd say that the US "Libertarian" party is set up to be the exact same type of corporatist bait and switch as the other parties.


Corporate welfare and regulatory capture is just corruption in my mind. It's a symptom of individuals having too much power.

When I think of late-stage capitalism I imagine a world where all our millions of individual businesses are merged into just a few super-conglomerates. Winners snowballing power, hoovering up smaller players, all perfectly legal. That's the result of not taxing enough and not blocking mergers.

I don't identify with libertarianism because, although I acknowledge that some people work harder than others, there is a multitude of factors outside our own control that affect our ability to succeed.


Related: ChatGPT Says What Our Unconscious Radically Represses by Slavoj Žižek <https://www.sublationmag.com/post/chatgpt-says-what-our-unco...>


Zizek is a slippery faux intellectual who says very little with a lot of flowery academic language, making his money by grifting midwits who can't see past his use of language. He knows absolutely nothing about what he's talking about here, absolutely nothing, but thinks he has an insight because he's done so much cocaine he thinks he's God's gift to modern philosophy.


Ok sure but tell me how you feel.


"Unconscious" is far too generous a term, and it is already loaded with an implication of the possibility of consciousness (given that consciousness arose from an unconscious substrate).

LLMs can be (at best) considered intellectual zombies until proven otherwise.


Everyone except me is an intellectual zombie until proven otherwise.


That is a reasonable position only if one ignores that others are of the same species (mechanisms, if you wish). In fact your position is the very position of a racist mind that refuses to project the implicit "this other is just like me" and thus refuses to acknowledge the humanity of the other.


It was sarcasm. There's no way to prove whether something is or isn't an intellectual zombie. I definitely was not expecting to get an accusation of racism out of the deal, but I guess all bets are off on the world wide web!


I did not accuse you of racism. I merely pointed out that that position -- everyone is an intellectual zombie until proven otherwise -- is identical to that of a racist: a refusal to assume an identity of internal phenomena given identity of natural composition and being.

The other side of this coin is that to afford this projection of identity (here 'consciousness') to an entirely dissimilar other is an un-reasonable projection, as it is based entirely on superficial/externalized information.


Identity is not provable. A being cannot access anything beyond its own senses, so who says you aren't the only one of your kind, stuck in some kind of a VR set?

Heck, identity is not even right. Even twins with identical DNA are not identical. They might have some similar aspects, but similarity is not enough to rule out zombies.


This is true. Note words "reasonable" & "assume" in my comment. [Also, identity of "composition", and "nature", not molecular identity.]

(I remember having this conversation back in college; there were the three of us, a friend and his girlfriend. He was saying something along those lines, claiming us to be in his imagination. I told him I could prove he is wrong to his girlfriend, but alas he wouldn't be around to know.)


Identity of composition implies molecular identity :) Or some other kind of physical identity, if you happen to be made of something other than molecules (plasma?). The question of identity of nature is more interesting, but it's also hard to define.

To address your other point, I don't think it's the flip side of the coin: a coin has two sides, and similarity is a continuum. Rejecting consciousness to "dissimilar" things would require that we can split "similarity" and "dissimilarity" and compare which dominates for a particular subject.

I think it's more reasonable to see similarity as a continuum, and extend the assumption of - make it consciousness - to less and less similar subjects... And now we have arrived at panpsychism.


This essay advances, but does not really analyze, the legalist argument against generative AI. I found it uncompelling. It would be more interesting to discuss the moral and ethical arguments rather than discuss old laws that were never designed to answer questions about AI.


Betteridge's Law of Headlines is wrong here because the answer is "yes, unless we do something about it."

The problem is that AI centralizes control [1], even "democratic" uses of AI, and the scale of AI will make it so entities that have resources will be able to scale their use of AI more.

And once an entity becomes large enough, they become a law unto themselves. Thus, they will ignore the rule of law and impose rules on us.

To those who claim that AI "Luddites" are hindering AI: would you be so eager to go in that direction if you could see the future, and the future with AI was dystopian?

I promise you that it will be, because those with control will toss us aside once they think they don't need us.

[1]: https://gavinhoward.com/2022/10/dispelling-ai-myths-and-rhet...


[flagged]


Please keep comments constructive and engage with the subject of discussion rather than making ad hominem attacks.



