Reading comments like this, it feels as though no ambitious startup in Europe can become one of the large companies anymore. A startup now has less than 3 years to add this content filtering, which, whether provided as a service or not, is going to cost €€€.

Do you think Spotify would have been able to grow if it had been created on March 27, 2019 instead of in 2008?

A successful content-filtering-as-a-service business (compliance.ly & co. in your example), assuming it gets adopted by all major websites, seems like it would shift the problem to an even bigger gatekeeper than YouTube. How is that a good thing?




What? Spotify has no uploading features; all their content comes from license holders.


Strictly speaking that’s not quite true. A user can upload playlist covers and a text description for that playlist. Both the image and text could fall under copyright.

In 2013/2014 Ministry of Sound sued Spotify over not removing playlists based on Ministry compilations, created by Spotify’s users. Ministry claimed that its compilations qualified for copyright protection due to the selection and arrangement involved. [1] [2]

[1] - https://www.theguardian.com/technology/2014/feb/27/spotify-m... [2] - https://www.theguardian.com/technology/2013/sep/04/ministry-...


All the content on YouTube is nominally licensed too. But what happens when someone submits someone else's music without permission?


I could publish someone else's music as my own, on Spotify.


>Because now a startup has less than 3 years to add this content filtering, which provided as a service or not, is going to cost €€€.

Not really? This isn't a flat 'you need to pay 10k a year regardless of your size' imposition. Proportionality is important.

The articles, as written, are interesting because they already mention a ton of the balancing considerations. All of those are completely absent in these conversations.

Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.

And that's how the internet will die.

>Do you think Spotify would be able to grow if it was created on March 27 2019 instead of 2008?

If the competitive landscape were the same? Yes. In fact, Spotify's arc is exactly what this law is attempting to encourage. As they grew, they became a quasi-licensing clearinghouse instead of another Napster or LimeWire. That's the entire point.

>how is this a good thing?

Because you don't end up with 1 compliance service, and you can litigate against the compliance service if they're inappropriately killing your content creation business. As it stands now, if you try to fight YouTube or the content delivery pipeline itself on the basis of their filters, you die. That's not necessarily the case if there's a healthy competitive filter ecosystem. Whether or not we get to that point is another question, though.
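
To make the "healthy competitive filter ecosystem" idea concrete, here's a rough sketch (my own invention, not anything the directive prescribes) of what filtering-as-a-swappable-dependency could look like from a small platform's side. The provider interface, verdict fields and names are entirely hypothetical:

    # Hypothetical sketch: copyright filtering as a swappable third-party
    # service instead of a YouTube-style in-house system. The
    # FilterProvider interface and FilterVerdict fields are invented
    # purely for illustration.
    from dataclasses import dataclass
    from typing import Optional, Protocol

    @dataclass
    class FilterVerdict:
        blocked: bool
        matched_work: Optional[str]  # rightsholder claim, if any
        confidence: float            # provider's match confidence, 0..1

    class FilterProvider(Protocol):
        def check(self, upload_bytes: bytes) -> FilterVerdict: ...

    class UploadPipeline:
        def __init__(self, provider: FilterProvider):
            # The provider is injected, so a platform unhappy with one
            # vendor's false-positive rate can switch vendors (or sue
            # them) without building its own Content ID.
            self.provider = provider

        def handle_upload(self, upload_bytes: bytes) -> str:
            verdict = self.provider.check(upload_bytes)
            if verdict.blocked:
                return f"rejected: possible match with {verdict.matched_work}"
            return "published"

The point being: a bad filter vendor in that model is a dependency you can drop or litigate against, which is a very different position from having to fight the distribution platform itself.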


> Not really? This isn't a flat 'you need to pay 10k a yr regardless of your size' imposition. Proportionality is important.

The problem is the proportionality requirements are poorly designed. It would be one thing if requirements increased solely with revenue, but increasing with time or user count is purely destructive.

Plenty of small services will hit the time limit before they're big, and then the costs destroy them before they have a chance to be. And the fact that that's likely to happen will keep many people from even trying to begin with.

And user count doesn't mean anything if the profit per user is low. Many side projects have a million users; that doesn't mean they're making any money that could be spent on filters -- many of them are lucky to even cover their own hosting costs.

> Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.

That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.


>The problem is the proportionality requirements are poorly designed.

I don't think this is the issue. The requirements aren't set out in detail, and will largely be fleshed out by the courts. This is where the reality of Art. 13 will be set - in the rulings which follow.

Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.

The mental calculus I see here just doesn't take into account how courts work.

>That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.

I think most people can agree that the cut and dry abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.

Justice doesn't scale linearly, which is a very, very big problem -- but not one that's unique to the Art 11/13 debate.


> The requirements aren't set out in detail, and will largely be fleshed out by the courts. This is where the reality of Art. 13 will be set - in the rulings which follow.

But that's part of the problem. It means a service you operate today is subject to a law that will be decided on tomorrow. So you either make the conservative choice, which is onerously expensive and may put you out of business immediately, or you risk being the case of first impression where the more cost effective choice you made is decided to be insufficient, and that too puts you out of business -- but only after you've dedicated years of your life to it.

> Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.

Users don't scale linearly either. Things have network effects. Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.

And again, just because you have a lot of users doesn't mean you make a lot of money. Your project may have had a million users for a decade, but if the revenue from those users is only just covering your hosting costs as it is, now you're out of business.

> I think most people can agree that the cut and dry abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.

Which means that it isn't, because then nobody does that and there is no penalty in practice for continuing to do it. And the solution to that is quite straightforward -- make the penalty for a false claim sufficiently large, and the process for having it enforced sufficiently simple, that it justifies the victim in spending that amount of time to enforce the penalty.

Moreover, even the existing penalties are quite useless because the biggest problem isn't overtly fraudulent claims, it's the extremely high volume of false positives the claimants have no real incentive to reduce.


>But that's part of the problem.

No, it isn't. Tech changes rapidly, and legislation quite simply isn't going to be able to encode a specific contextual mutating standard. Law isn't wrong to offload that analysis to an institution that is in the thick of it, with access to expert testimony and amicus information to inform it. You WANT the EFF and other advocates being able to weigh in on how the balancing factors should work and you want the courts to listen.

>Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.

Yes, and then 95% of those go back down to pre-spike levels of interest. If they're the odd exception with a massive, sustained uptick for a service that promotes copyright-protected works, they can think about licensing and formalizing their processes to protect all stakeholders now that they're a success.

Just because Napster was once small doesn't mean their business model was going to be exempt from attention forever.

> And the solution to that is quite straight forward -- make the penalty for a false claim sufficiently large, and the process for having it enforced sufficiently simple, that it justifies the victim in spending that amount of time to enforce the penalty.

That's not simple. Courts do not afford less due process to larger penalties. The cost is in the complexity: who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, and so on.

We like to believe there's no Kolmogorov complexity associated with getting justice, but getting justice requires translating reality into consensus at some level of fidelity. That process is EXPENSIVE.

>the biggest problem isn't overtly fraudulent claims, it's the extremely high volume of false positives the claimants have no real incentive to reduce

Maybe on YouTube that's the case, but that's more a problem with having a system of private algorithmic arbitration, which is a separate issue. The courts are too expensive to follow up on individual claims, and the only alternative is for content holders to sue YouTube for big $$$ through content collectives (the threat of which is why we are where we are).


> Tech changes rapidly, and legislation quite simply isn't going to be able to encode a specific contextual mutating standard. Law isn't wrong to offload that analysis to an institution that is in the thick of it, with access to expert testimony and amicus information to inform it. You WANT the EFF and other advocates being able to weigh in on how the balancing factors should work and you want the courts to listen.

That is separate from the problem that the "new law" created by the court is being imposed ex post facto on actions you've already taken.

It means you don't know what the law actually is yet when you're trying to comply with it. That kind of uncertainty leads people to make overly conservative choices that make beneficial projects uneconomical, or just to give up, because it's not worth investing years of your life in something the courts might unexpectedly blow apart.

And if you want someone to take input from the EFF et al then why should we wait until it's already in court instead of doing that in the legislature before passing a bad law to begin with?

> Yes, and then 95% of those go back down to pre-spike levels of interest.

But the fact that they did have a million users for twelve months may get them hauled into court.

> If they's the odd exception which has a massive sustained uptick for their service which promoted copyright protected works, now they can think about licensing and formalizing their processes to protect all stakeholders now that they're a success.

Again, you're assuming that success comes with popularity. If you're losing money on every user you can't make it up on volume.

There are projects operated by individuals with a large number of users that operate at a net loss. If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.

And the projects that actually are successful would have high revenue, so the only projects ensnared by a user count limit but not a revenue limit are the ones that are barely making it as it is.

> Courts do not afford less due process to larger penalties. The cost is in the complexity; who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, etc.

Yes, exactly, so if that process is used then the penalty would need to be sufficient to justify the victim in going through that process.

But now let me ask you this. How is it that we're willing to impose a prior restraint without going through that process but not a penalty for false claims?


>It means you don't know what the law actually is yet when you're trying to comply with it.

Yes, this happens in all industries that have cases being litigated all the time. In some instances, areas of settled law are completely upended by new rulings that change the status quo and force people to spend money on complying with the new state of affairs.

Yes, it sucks, but this is business as usual. The tension between certainty and flexibility in the law is a longstanding one.

You want these elements decided at the court level because these elements change, and legislation needs to be good law for a looooong time, whereas a shitty ruling can be blown up in months (sometimes in days).

>But the fact that they did have a million users for twelve months may get them hauled into court.

If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.

> If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.

Why would they need to implement Content ID...? That's the nuclear option in the field.

Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material? It doesn't.

The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.


> Yes, this happens in all industries that have cases being litigated all the time. In some instances, areas of settled law are completely upended by new rulings that change the status quo and force people to spend money on complying with the new state of affairs.

And court decisions that make major changes like that are rare, exactly because they result in widespread burdensome changes to existing behavior that would have been less burdensome if what was required had been better specified to begin with.

If you pass a law that requires such a court decision to happen before anybody knows how to comply with the law, what is anyone supposed to do in the meantime?

And when many of the questions are obvious, not bothering to answer them is just punting because they know the answers will be problematic.

> If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.

Everything with user generated content is "a platform that shares and promotes other people's copyrighted works" and they're intended to be licensed from the user/creator. That the platform has no good way to know when what the user uploads is unlicensed is the whole problem.

And if they didn't have some way to do that when they were small then they don't have it when they first become big either. If you need a solution before you have a million users then you need a solution before you have a million users -- and then we're imposing the same burden on the little guy as on Google, if the little guy ever hopes to become Google without promptly getting sued into the ground.

I also reiterate that user count is unrelated to resource level. An individual can operate a platform with a million users and make no profit from it, but impose a laborious content filtering requirement and that platform is gone.

That is presumably the sort of thing they're trying to protect with language about non-profits, but this is where the ambiguity bites us again. If an individual operates a forum as a labor of love where the ads break even with the hosting costs, is that non-profit or not? What if some years there is a "profit" of $200/year? An individual who doesn't want to be bankrupted by lawsuits is not going to enjoy rolling the dice there.

> Why would they need to implement Content ID...?

We don't know what they would need.

> Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material?

Are blog comments not copyrighted material?

How is the platform supposed to know what is being shared there without reading it all?

> The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.

The objective of DMCA 1201 wasn't to keep farmers from repairing their tractors.

The issue is the divergence between their stated objective and what they did.


Getting dragged through the courts is going to kill numerous startups regardless of how legally right they are, because their investors will drop them and they'll go bankrupt.


> Not really? This isn't a flat 'you need to pay 10k a yr regardless of your size' imposition. Proportionality is important.

In practice, it will all be up to the judge:

1. Was your AI filter adequate to properly filter the content?

2. If not, how high can the fine be?

There is one easy solution to all of this: incorporate outside of the EU.


There's another independent criterion that will cause lots of trouble/legal uncertainty:

1b. Regardless of (1), can you prove you made "best efforts" to acquire licenses for the content that was later found on your platform?

It's not specified who you should be seeking deals with, how you're supposed to know ahead of time what a user will upload, how you're supposed to identify the true rightsholders of an uploaded work, etc.

That criterion must even be fulfilled when you're less than 3 years old, by the way!


You are forgetting that parody is legal. That means the AI will have to understand the difference, which not even humans can reliably do.
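
As a toy illustration of why (nothing here is a real matching system, and the numbers are made up): fingerprint-style filters only score similarity against a reference work, so a parody that reuses the original backing track can score essentially the same as a verbatim re-upload. Nothing in the signal tells the filter about parody or fair-use context.

    # Toy example: a similarity-based filter treats a parody and an
    # unlicensed re-upload the same, because it compares signals,
    # not legal context. Feature vectors and threshold are invented.

    def similarity(a, b):
        """Crude 'fingerprint' similarity: 1.0 means identical features."""
        diffs = [abs(x - y) for x, y in zip(a, b)]
        return 1.0 - sum(diffs) / len(diffs)

    reference = [0.90, 0.10, 0.4, 0.7]  # rightsholder's original
    reupload  = [0.90, 0.10, 0.4, 0.7]  # verbatim copy
    parody    = [0.88, 0.12, 0.4, 0.7]  # same backing track, new lyrics

    THRESHOLD = 0.8
    for name, features in [("reupload", reupload), ("parody", parody)]:
        score = similarity(reference, features)
        print(name, "blocked" if score >= THRESHOLD else "allowed", round(score, 2))
    # Both get blocked: the score carries no notion of parody or quotation.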


>In practice, it will all be up to the judge

That's the case for any piece of legislation.

The test isn't 'was your AI good enough'. For the majority of people the most important part is: 'is it proportional to even use AI at your size?'

To which the answer is no.

If you're running a stream or YouTube channel of self-created content, the cost of moving dramatically exceeds the total legal risk you're eating by staying put.


The problem for streamers is not the legal part, it's the filtering part.


Let's be precise then. Streamers are already getting abused by Content ID.

How does the EU legislation change how that works? It already exists.

Edit: Content ID already covers the requirements of Art. 13 under any reasonable reading of the legislation. Things aren't going to get worse because of the legislation. They'll get worse because of pressure from their content partners and because they refuse to spend on human support. Why spend when you can do nothing instead?

Your speculation doesn't make legal or business sense.


Since YouTube itself can now be sued, it will lean towards stricter filtering that produces more false positives. If you think Content ID is bad, this will be way worse, because letting copyrighted material through can be more costly than blocking new content.

But hey, if you are outside of the EU, no problem. So guess what streamers will do.

This is not rocket science, you know. It's just simple cause and effect.

Stricter filters for EU citizens. And hey, maybe if we're lucky, YouTube decides the EU isn't worth the effort anymore and just blocks the region entirely.
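
A back-of-the-envelope version of that cause and effect, with entirely made-up numbers: once the platform itself carries liability, a missed infringement costs orders of magnitude more than a wrongly blocked upload, so the cost-minimising threshold slides towards blocking aggressively.

    # Invented figures and a deliberately crude linear model, just to
    # show the asymmetry; not a claim about YouTube's actual costs.
    COST_FALSE_POSITIVE = 1.0        # lost revenue/goodwill per wrongly blocked upload
    COST_FALSE_NEGATIVE = 50_000.0   # potential liability per missed infringing upload

    def expected_cost(block_threshold, p_infringing=0.01):
        """Expected cost per upload if everything scoring above the
        threshold is blocked: a lower threshold blocks more legitimate
        uploads but misses fewer infringing ones."""
        p_blocked_legit = (1 - p_infringing) * (1 - block_threshold)
        p_missed_infringing = p_infringing * block_threshold
        return (p_blocked_legit * COST_FALSE_POSITIVE
                + p_missed_infringing * COST_FALSE_NEGATIVE)

    for t in (0.95, 0.8, 0.5, 0.2):
        print("threshold", t, "-> expected cost", round(expected_cost(t), 1))
    # The lower (more aggressive) the threshold, the cheaper it gets for
    # the platform -- i.e. more false positives for uploaders.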


The problem is that there's absolutely nothing in there that explains how to balance anything. There's nothing in favor of moderate regulation.

Also: https://torrentfreak.com/german-data-privacy-commissioner-so...


I agree that there's an obvious risk here, but this is a burden for the courts to bear.

The concern over data use at filtering-service companies is new to me and interesting, but it's substantially mitigated if they're compliant with GDPR. I haven't seen this argument before, so I'll have to take a look. Thanks!



