It's funny that everyone says "that's not the reason why it got banned".
Yes. We all know that. We all know that it was some mistake by some AI. Even the author knows that.
But we also know that the only way to get customer support from YouTube is to get some social media outrage.
It's so common, Google should put on their website: "If you need customer support, please try to create some outrage on social media. We do not provide other channels of customer support."
> It's so common, Google should put on their website: "If you need customer support, please try to create some outrage on social media. We do not provide other channels of customer support."
Not only Google, though. I closed my Twitter account a few years back, and one of the few arguments in favour of leaving it open was that I had a sizeable number of followers (6k+) and could leverage that if I needed to complain to some company for one reason or another. Social media is the world's support service, provided you carry enough eyeballs.
I find that in many cases you don't even need many followers. It's weird how much power a company's social media team tends to wield. I think it's because companies fear the potential for social media posts to go viral.
It's also quite literally an advertisement against the company. Similarly to how you can make people believe something if you just constantly bombard them with the message, a company can get a pretty bad image if people regularly see "my account was closed w/o recourse". And if people realize how volatile their accounts really are, Google (and Facebook etc) will have a massive problem.
I think that's what's behind the big movement to "de-Googlify". Losing all your email, Drive files, etc. merely because you logged in from a shared IP that someone else used to break the TOS would be a pretty crappy deal. It makes you begin to really value data custody.
I was almost at 5k followers when Twitter suspended 12 of my accounts, and I could do nothing - I got them unsuspended once by filling out the same form 12 times, and then everything got suspended again within a week. I gave up.
I'm on the flip side here. Those company-complaint incidents are some of the few occasions when I regret never having cultivated a social media presence: when I'm on the hard end of some business or corporate injustice, I realise exactly how expendable and voluntary most of my so-called consumer's or citizen's rights are.
A lone voice desperately holding his bowl out to a corporate giant, like a modern-day Oliver Twist - with no army of social media friends behind me...
The YT ban here was probably indiscriminate in itself, but the existence of special immunity tokens for certain creators makes the bans, as a system, not indiscriminate. And while he may not have been targeted because of his "anti-Google" content, it's plausible that an immunity token would be harder for him to get because of it.
It's unfortunately a pattern we see outside digital, in particular with law enforcement. It's a force that undermines society in a very destructive and pernicious way. People need to feel that there is some fundamental fairness in society or they will stop contributing and start destroying.
It's really too bad that these large tech companies haven't maintained their early prioritization of fairness. It's the root cause of frustration with their search engine as well.
> It's so common, Google should put on their website: "If you need customer support, please try to create some outrage on social media. We do not provide other channels of customer support."
Personally, I see nothing funny in it. It's been a sad reality for many years. Every time I see a comment here under one of these posts along the lines of "Hi, I'm Eric from Google, I'll try to check it with our X department", it's a sad reminder that you have a chance of solving your problem only if you manage to make enough fuss about it in public.
One day, internet outrage will not be enough and prospective customers who need support will have to show up with a marching band outside of the Google HQ.
In Google's eyes ads are the product YouTube sells to advertisers. Creators just produce the content that keeps viewers around between ads. Google does not see creators or those watching the content as customers.
That's naive at best. I flatly refuse to believe that the AI overlord would make a mistake as blatant, and as impossible to fall for, as flagging this particular video as harmful and dangerous content.
I believe they're imposing their tyrannical will piecemeal until they're let loose for good when the time is right. They've already made obvious how they like to treat our freedom of expression, be it discussing political ideas, presenting medical information, researching intel about conspiracies, you name it.
I'd suggest you reconsider using the platform in general, or proxy it via an Invidious instance until you do.
Still, the meat of the article is the insinuation that YouTube removed the video because of some nefarious motive. So either the author knows otherwise and the article is written in bad faith, or the author genuinely believes his theories. Neither option reflects well on the author.
> So either the author knows otherwise and the article is written in bad faith, or the author genuinely believes his theories.
That's a false dichotomy. I see no reason why Google earns any benefit of the doubt here. Letting Google off the hook by saying "it was just an AI mistake" when we don't actually know just gives Google more room to be draconian with no consequences. We really don't know why Google made this mistake, and until Google provides a transparent and plausible explanation (hint: that won't happen even if this decision is reversed), there is nothing wrong with assuming that Google is acting in bad faith.
If Google wants the benefit of the doubt here, they need to earn it back.
If pretending it was done for reasons other than a grotesquely failing AI is the workaround Google bases its "good enough" on, then why wouldn't you play along?
I disagree that it reflects poorly on the author to write an article in bad faith. There are literally zero other ways to get Google's attention than to make as big a noise as possible. If people are forced to resort to absurd lies to get some customer support, then that is a game Google has created for itself. An actual support chat would have meant this article was never written.
Yeah, Google not having any support channel or recourse for content providers, and having Kafkaesque policies around this kind of thing, is the first bit of bad faith going on here.
If Google gives zero information on what the real problem was, and people want to scream about how it's a deliberate policy of suppression on Google's part, we should let them and not feel bad about amplifying it.
Google can fix this all just by being more transparent and providing support.
I have at least a possible explanation. I bet there's a reasonable chance that if we saw the auto-generated transcript (which I'm sure is generated even if it's not presented), we'd all be able to scan through it and say "Oh, yeah, I bet this mis-transcribed phrase is what did it."
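To make that concrete, here's a toy sketch of how a naive phrase filter over auto-generated captions could trip on a single mis-heard word. The blocklist entry and both transcripts are invented for illustration; nothing about YouTube's actual pipeline is public.

```python
# Toy illustration of the mis-transcription theory. The blocklist entry
# and both transcripts are invented; YouTube's real pipeline is not public.
BANNED_PHRASES = ["how to make a bomb"]  # hypothetical blocklist entry

def flagged_phrases(transcript: str) -> list[str]:
    """Return any banned phrases found in a (possibly mis-heard) transcript."""
    text = transcript.lower()
    return [phrase for phrase in BANNED_PHRASES if phrase in text]

spoken   = "how to make a bound volume for your photo album"
misheard = "how to make a bomb volume for your photo album"  # ASR error

print(flagged_phrases(spoken))    # [] -- the actual words are harmless
print(flagged_phrases(misheard))  # ['how to make a bomb'] -- false positive
```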
The core problem remains what people have observed before, which is that Google, and the tech industry in general, seals people off from any support mechanism, other than "generate outrage on social media". Little by little, this becomes a bit more abusive every year.
I'm down with the mistranscription-into-something-offensive theory, but then why would the appeal fail under the same conditions? Could the appeal button be a placebo?
It wouldn't be the first time we've collectively thought the button was a placebo.
We also don't know what "human review" constitutes. It may well constitute a human simply reading the incorrect transcription and agreeing that it violates policy, without any reference to the audio track. The details of how these tech companies are overdependent on AI are not publicly known; we merely observe the consequences that can only be explained with reference to it. So I can't tell you whether there was never even a human involved in the appeal, or a bad human, or one who didn't care, or whatever. We can only infer the possibility space from the outcome, not nail down the exact details.
Hacker News shouldn't be used as a customer support outrage sink. This just results in heated but uninteresting threads, half filled with a pitchfork-wielding mob reacting to clickbait and half filled with people talking them down. Why does everyone who gets banned, demonetized or downvoted on any other social media platform have to come here to complain about it? Do that on Twitter!
There should be a rule against using Hacker News for customer support. Also using Hacker News to talk about sites being up or down.
That’s not a customer support channel… that’s a paid service for getting bigger disk quotas and other trinkets and perks… basic customer support should not be a fucking perk.
Why hasn't a competitor arisen who uses customer support as a competitive advantage? Could it be because it is competitively disadvantageous to pay humans to support millions of non-paying "customers"?
A video of myself and my kids singing a 100+ year-old shanty was flagged on YouTube for DMCA.
A modern folk art drawing I made of a squid was blocked on Instagram for a content violation. I joked that the AI must be afraid of tentacles. The only explanation that made sense was that it looked vaguely phallic in the way that any of nature's "bulbed cylinders" do, but even that seemed like a stretch -- I should think they would have flagged an earlier painting of mushrooms.
The lesson is to make everything look like a butt in a thong. None of the content moderation anywhere seems to have a problem with butts in thongs. Bulbed cylinders: bad. Smooshed jiggle orbs: good. Just no nipples.
This is the most damning observation of Silicon Valley. They’ve killed art. Social media is notably lacking original visual and musical arts, and it’s because they’ve happily focused on selling butts and boobs using copyright and DMCA as cover.
Combined with the incessant reliance on AI and no hope of humanity ever connecting with the customers, we have a perfect shitstorm.
> The lesson is to make everything look like a butt in a thong.
The lesson is to not use Google, and to only depend on structures/people you can rely on. Either yourself or a collective without profit motives, for example.
Instagram is Meta, but the point still stands that you can't count on 'em.
My long-term goal is indeed to stop relying on the MAANG companies and publish first to my own site. I'm not really that interested in building a following. I have a small audience of friends who like to see what I'm up to, and I genuinely like it that way.
The major barrier is having to set it up in my free time, since it feels like doing work.
I can only recommend buying a VPS and slapping YunoHost on it. Installation is dead easy, and applications are just a few clicks away. It takes 30 minutes to get an equivalent of Twitter, Instagram, Facebook Events and YouTube under your total control.
Same with my 10-year-old daughter's violin performance of Elgar's Salut d'Amour. The issue now, a few years later, is that pre-screen auditions for some competitions must be uploaded to YouTube. Fortunately she hasn't had problems, but it's a worry. I hate that we have to rely on capricious infrastructure.
I'm not saying this is what's happening here, but here's a playbook I could see behind it:
1. Automatic/ML/AI/buzzword filtering that is, in blunt terms, kinda crappy
2. Preemptively and diligently correct for any such crappy decisions that would result in PR nightmares or credible lawsuits
3. Apply some portion of your effort to reviewing some portion of the cases that appeal. Hope (nudge nudge, wink wink) that those cases that you couldn't review, because you simply don't have the resources to review everything in depth, are those of your "enemies." You're not suppressing them; you just didn't have the resources to help them.
Result: The system punishes your enemies by design and you and your enemies can just blame the AI and your front line support, which everyone already loves to do. People who are suspicious can be painted as nutjobs, because it's obviously just the unfortunate but classic case of the little guy against the giant indifferent bureaucracy.
Thus any sufficiently influential entity can wield Hanlon's Razor as a weapon. You don't even need to plan for this; it is bound to happen. Parts 1 and 2 are foregone conclusions. When the opportunity for part 3 arises, I think people at all levels of the organization would quickly and tacitly realize that uneven review is in their best interest, if they don't just do it automatically out of laziness or tribal thinking.
If a startup, new business, or small org tried something like this, they would not make it at all. Is this because of monopolistic practices? Why do they believe this behavior is good for business?
My question is: does the creator video space need more competitors? Some other products, such as Rumble and Vimeo, are making options available.
Google basically owns all the advertising space now so they're restricted in their growth to the overall growth of the segment. So they squeeze out their overhead wherever they can. This is how managers protect their numbers. Anyone who suggests spending the money on customer support for creators is going to get laughed out of the room when they run the numbers on the cost and what it'll do to the quarterly numbers. Meanwhile there's no viable competitor to do it because they're such a monopoly due to network effects.
Google needs an appeal process. They have too much responsibility; they can't do this to people. If they don't want to do it, the law must force them to set one up.
Just an idea that popped into my head: at the scale Google operates, a manual review process wouldn't scale well. I think they could offload the evaluation to a jury of randomly selected, opted-in users, where a quorum plus a supermajority gets content unblocked.
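A minimal sketch of what that could look like, purely to illustrate the idea; the jury size, quorum, and supermajority threshold are arbitrary placeholders:

```python
import random

def jury_verdict(votes: list[bool], quorum: int = 9,
                 supermajority: float = 2 / 3) -> str:
    """votes: True = unblock, False = keep blocked (abstainers excluded)."""
    if len(votes) < quorum:
        return "no quorum -- escalate"
    return "unblock" if sum(votes) / len(votes) >= supermajority else "keep blocked"

opted_in = [f"user{i}" for i in range(10_000)]  # hypothetical opted-in pool
jury = random.sample(opted_in, 12)              # random one-time selection of jurors
# Suppose 9 of the 12 jurors vote to unblock:
print(jury_verdict([True] * 9 + [False] * 3))   # 75% >= 2/3 -> "unblock"
```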
What is their profit margin again? About 30%, right? They are definitely profitable enough to offer proper customer service, and if they don't do it on their own, I bet there will come a day when regulation forces them to.
Some things don't scale well, and guess what: that is the cost of doing business. Imagine a company suddenly removing seats from cars just to increase their profit margin. People would think they are crazy. So why are we accepting this when tech companies do it with customer service?
>So why are we accepting this when tech companies do it with customer service?
I promise you that the man on the street doesn't consider this acceptable. That so many in tech believe it is fills me with despair. Our industry isn't alone in having scalability problems, but we do stand alone in the belief that it's then fine to ignore the customer.
Please add "non paying" before "customer" and morenpeople wouldnprobably believe it's OK. Heck, banks where you actually keep money there are cutting down on services...
I think you're still going to bump up against scaling issues with this solution. YouTube has something like 500 hours of new content uploaded to it every minute, and I've heard speculation that this problem couldn't possibly be solved by humans because there simply aren't enough of us on earth.
In a way I sympathize with Google, YouTube, Facebook and all the other companies operating at mass scale. On the one hand their algorithms suck at moderation, but on the other hand they are the most advanced yet created, and there is simply no alternative. I would imagine that in the next 10-15 years the players will get moderation under control and be able to build AI that moderates fairly. But that's going to be a slow process, because there isn't any alternative to AI in this case, at least as I see it.
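For scale, a rough back-of-envelope on that 500 hours/minute figure (the upload rate is from the comment above; the reviewer throughput is my own assumption):

```python
# Back-of-envelope on full manual review. The 500 hrs/min upload rate is
# from the comment above; an 8-hour reviewer shift is an assumption.
upload_hours_per_day = 500 * 60 * 24     # 720,000 hours of new video per day
reviewer_hours_per_day = 8               # assumed full-time shift at 1x playback
reviewers_needed = upload_hours_per_day / reviewer_hours_per_day
print(f"{reviewers_needed:,.0f}")        # 90,000 reviewers to watch it all once
```

Enormous, but not literally more humans than exist; the binding constraint is cost rather than headcount.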
> I think you're still going to bump up against scaling issues with this solution. YouTube has something like 500 hours of new content uploaded to it every minute, and I've heard speculation that this problem couldn't possibly be solved by humans because there simply aren't enough of us on earth.
That's not the problem that needs solving. A manual review of the videos that are flagged by the AI snitch is all that's needed. Google can very easily afford this level of manual oversight.
> I think you're still going to bump up against scaling issues with this solution. YouTube has something like 500 hours of new content uploaded to it every minute, and I've heard speculation that this problem couldn't possibly be solved by humans because there simply aren't enough of us on earth.
Doesn't this mean they've scaled beyond what's sustainable? As I read it, it's a good argument for breaking them up.
Breaking them up won't reduce the total number of new videos or increase the number of humans on earth, so breaking them up doesn't help.
I think a 'bond' system, where you put up money to get a human appeal and get half back if the appeal is successful (maybe with an increasing bond as you lose more appeals), is a much better system.
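A minimal sketch of that bond idea, with the base amount and the escalation rule as made-up placeholders:

```python
BASE_BOND = 5.00  # hypothetical base deposit, in dollars

def appeal_bond(lost_appeals: int) -> float:
    """Bond doubles for each previously lost appeal (placeholder rule)."""
    return BASE_BOND * (2 ** lost_appeals)

def refund(bond: float, appeal_won: bool) -> float:
    """Half the bond comes back on a successful appeal, per the idea above."""
    return bond / 2 if appeal_won else 0.0

print(appeal_bond(0), refund(appeal_bond(0), True))   # 5.0 2.5
print(appeal_bond(3), refund(appeal_bond(3), False))  # 40.0 0.0
```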
> In a way I sympathize with Google, YouTube, Facebook and all the other companies operating at mass scale. On the one hand their algorithms suck at moderation, but on the other hand they are the most advanced yet created, and there is simply no alternative.
I don't. There's another alternative.
If you're growing too big to moderate, stop growing. Turn off signups. It's irresponsible to do anything else, and "but we'll make less money" isn't a valid argument against this.
Because then you're putting a price of entry on getting basic support even when they fucked up. Any user from a country where even 5 bucks is a big deal is basically locked out of it. Any user from a country Google doesn't take payments from is locked out of it.
When Google provides dogshit services, you expect people to pay just to remind them to fix it?
They don't have to charge the same price everywhere, as the work doesn't cost the same everywhere either.
Having to refund people in cases where Google’s AI fails would create an incentive for them to improve the AI and set higher thresholds for content to get flagged.
I think my idea would make the system work better overall without making it worse for anyone.
>Having to refund people in cases where Google’s AI fails would create an incentive for them to improve the AI and set higher thresholds for content to get flagged.
Here's how it would actually go:
* AI flags your video
* You pay 5 dollars to have it reviewed by a human
* Either immediately as it's conceived, or a few months down the line when some manager figures out they could lower costs this way, an AI reviews your video again, and never finds Google at fault.
Never underestimate Google's absolute disdain for its users.
How are there enough people on earth to create all that uploaded content? Even cutting out and uploading a quick clip from a show or something takes time.
Different idea: a union model. New users can choose to be unaffiliated, but their videos don't get boosted and they have no recourse against AI moderation.
Uploaders form associations: if one uploader gets flagged, the union can agree with the moderation decision (possibly kicking the member for repeat violations), or, if it disagrees, threaten to disable all of its members' videos until the decision is reversed.
Google will have to listen to suitably large uploader unions, and it's mutually beneficial: moderation gets decentralized.
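Roughly, the mechanics could look like this; the simple-majority rule and the three-strike limit are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class UploaderUnion:
    """Sketch of the union idea above; all thresholds are placeholders."""
    members: set[str]
    strikes: dict[str, int] = field(default_factory=dict)

    def handle_flag(self, member: str, votes_uphold: int, votes_total: int) -> str:
        if votes_uphold * 2 > votes_total:        # simple majority upholds the flag
            self.strikes[member] = self.strikes.get(member, 0) + 1
            if self.strikes[member] >= 3:         # hypothetical repeat-offender limit
                self.members.discard(member)
                return "flag upheld; member expelled from union"
            return "flag upheld"
        # Union disagrees: collective leverage against the platform.
        return "flag disputed; union threatens to disable all members' videos"

union = UploaderUnion(members={"alice", "bob", "carol"})
print(union.handle_flag("bob", votes_uphold=1, votes_total=3))  # flag disputed
```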
I don't think unblocking would be good (if I'm putting myself in Google's shoes). But it would at least then be reviewed by a human with sense and influence.
I think the randomly-selected, one-time jury is an important part of that idea. If you let people self-select as moderators, you're going to end up with a lot of people with way too much time on their hands trying to push an agenda instead of moderating.
> Google needs an appeal process. They have too much responsibility; they can't do this to people. If they don't want to do it, the law must force them to set one up.
They "have" one; it's mostly useless if you are a small channel, akin to a fake panic button. They have that responsibility because they have a huge audience. Only laws could coerce them into having a real appeal process, but then it might not be economically viable at their scale either.
This is a complex issue: a significant number of people have their livelihoods tied to some Google service, in subtle ways, but Google also provides these services at mostly no direct cost to them. I don't think that internet paradigm will last, though.
> but then it might not be economically viable at their scale either.
Google has a net income of well over 20% of revenue in their last quarterly earnings, even over 30% last March. This is ridiculously profitable. So I don't buy that argument at all.
And even if it's not economically viable to have customer support (or just a proper and timely process for appeals of AI decisions), then their business model probably shouldn't exist in its current form to begin with.
Viability isn't whether they can afford it, sadly. It's whether they consider the negative effects of not doing it to overwhelm the cost of doing it.
In this case they won't lose much income from a small channel and people will continue to use their service so purely financially it's a sound decision.
It's also a dick move to their users but big corporations don't have morals. Just shareholders.
Remember both the content creators and consumers aren't their customers. We're just assets for the real customers, the advertisers. Google is the farmer and we're the cattle.
I mean I don't disagree with your reasoning, but the economic viability was brought up as an argument against legislation forcing them to do it. So in that context it's certainly about affordability.
In theory, in the EU there is a provision under the GDPR giving the right to a human review of algorithmic decisions.
However, in practice big tech companies either respond saying it doesn't apply, or that it has already been reviewed by a human - which, of course, there's no way to disprove from the outside.
That provision is for important stuff like jobs, rent, credit and the like. Look at the proposals in the Digital Services Act; that might be a more appropriate place to introduce moderation and ban standards.
But how? It's a free service; you got what you paid for. You might have damages, but they amount to your disappointment at counting your free eggs from the free chicken.
I've been wondering about that. What happens if people coordinate across the world to take Google to small claims court for various infractions all at the same time. Every day. For months.
What happens then? If they don't have enough legal representatives and can't show up, they lose by default in many jurisdictions, right?
The problem is often there's just no way to get the attention of a real human through the support site. It's becoming more common that "support" is just a page where you have to choose "your issue" from a prebaked FAQ list. With automated answers. And if your issue isn't frequently asked? Well good luck.
The other option is a boneheaded chatbot that's so scripted it doesn't understand the real issue.
> If they don't want to do it, the law must force them to set one up.
Which law currently on the books would force them to do this? If there is such a law on the books, don't you think it would be applied to every company?
Disclaimer: I'm new to SEO so I might have done something wrong
I submitted a sitemap to Google Search Console and they indexed all my blog articles... except for the one about using Plausible instead of Google Analytics
Please never post content you hold dear exclusively on a platform that can cancel you. Post it first on your own property and additionally on any number of less trustworthy places.
How, exactly, is that person going to get views? If they are supported by advertisements or sponsorships, do you think that is going to happen?
Probably not.
The biggest problem really is this: A lot of folks have never heard of Wistia (I hadn't until reading your comment). I'm probably not going to go there on my own - you know, unless they advertise on a youtube channel that I'm watching or something similar. And honestly, why would I believe that the smaller place is going to be any more fair to content producers, especially if they happen to pick up a lot of market share and can no longer provide the personal assistance they gave when small?
And a website isn't exactly a substitute either: Who is going to find your website, anyway? Part of what makes large platforms popular is discoverability. Now, you might have some luck with some topics - the sort that people actually search for - but I'm not usually looking up any infotainment topics in a general search engine. I just go to youtube for that.
I can't vouch for the quality, but the consensus opinion in a recent thread is that the things you mention aren't actually problems, or at the very least they are the problems of the publisher and no one else, not worthy of discussion, etc:
The problem is that Wistia (and the like) is crazy expensive if one of your videos goes viral, in addition to the streamer having to handle monetization, promotion, DMCA complaints, and audience interaction. People became captive to YouTube because YouTube made complicated and expensive things easy and cheap. People got used to how practical it is, even if the price is putting their livelihood at the whims of Google.
The ideal world would be one where video hosting was like websites: multiple separate, competing companies for domain names, DNS, file hosting, databases, load balancing, ads, CDNs, email marketing... And if you're not happy with one of them, you can easily switch to another, all while remaining in full control of your content and where it is displayed.
There was no way to host video cheaply enough for ad revenue to cover it. Video just requires too much compute, storage and bandwidth compared to the few cents you get from an occasional ad click.
Google could offer YouTube cheap bandwidth and storage. They could use consumers' love of Google products to force ISPs to carry YouTube video traffic cheaply when ISPs threatened to block YouTube because it used too much bandwidth. Even with that, it wasn't profitable for 10+ years!
This is why I think all these platforms that become the norm / de facto monopolies should be classed as "public utilities" and be subject to audits on takedowns, neutrality and such.
They should, and in the past, legislators had the intestinal fortitude to break up companies of this size. It's clear to everyone that they're stifling competition at this point.
Your issue is probably with the links in your description. It's probably meant to prevent the spread of malware: since the majority of YouTube users are random people, they'd run whatever .exe/.apk they are told to open.
And the fact that he has to guess at what the problem may or may not be is the core of the issue here. A simple straightforward response clearly indicating why the content is removed is not so far-fetched an idea.
If they can automate flagging/removing content, they can automate providing relevant information to the posters of said content. I'm not suggesting they throw people at the problem and provide on-call personal support to everyone.
The only scaling issue my amateur brain can fathom is the manner in which the information is provided (data in a database then shown in a user interface, email notification, etc). And it's Google -- experts in AI, ML, data, and automation. Scaling sounds like an excuse, not a reason.
So people with honorable intentions should be punished and have no recourse? Security through obscurity is not security. I know this is a complicated subject with numerous methods of acting on fraud, abuse, or even simple mistakes -- but I don't see any reason why more information about exactly why something is removed in an automated fashion is bad for the user or bad for the business; assuming, of course, that Google even care to support their product.
I doubt this is the real reason for the cancellation, although it's probably something equally ridiculous, and it's good that we keep shining a spotlight on these YouTube abuses.
The AI works in mysterious ways. It took them 10 years to block my "Everybody Draw Mohammed Day!" video in Pakistan and another year to completely remove it on grounds of "child protection". The video just showed a stick figure being drawn.
Unlike some other comments, I do not think legislating once more against Google is the answer.
I think it is up to people to educate themselves, promote better alternatives to people they know, and use them (an audience drop WILL make Google change its ways).
One of my geeky friends once told me, "I am no longer sure the internet going to the masses is a good thing."
EDIT: to clarify the comment above, I think the internet is a wonderful tool, but through our choices and lack of action, we have allowed it to become what it is now.
I feel like this argument is in line with people who are against unions and tell employees of huge corporations "you can stand up to your employer on your own!".
The idea that even a fraction of YouTube's global audience would collectively choose to make an uncoordinated move - even over time - to something else because of YT moderation policy, is extremely unlikely, when compared to the coordinated marketing effort of YouTube to attract and retain viewers and content producers.
The coordinated side will always win here.
Other than important social issues, when has leaving things "up to the people" ever led to meaningful change?
It's hard enough when we're talking about actual issues that affect people, you think that choice of video platform on the Internet, a luxury if I've ever heard one, is going to shift because of some kind of popular movement?
Although I see the parallel in the comparison with unions, I would say there is a difference: for most people, YouTube does not put bread on their tables.
I agree with you that it is unlikely that even a fraction of the YouTube audience will make the switch. It is up to them.
> Other than important social issues, when has leaving things "up to the people" ever led to meaningful change?
I can only speak for myself here, since "up to the people" means up to every single individual, not a collective. If YouTube goes down tomorrow, I will have to find another source of music, that's all. I do not care whether there is a meaningful change or not. And I totally agree that a video streaming platform is just a luxury...
Depends on the legislation. Something along the lines of the recent EU DMA, forcing giants to open up APIs, interoperability (federation...?), and portability (for real this time, not the Takeout thing; even Takeout was a result of legislation), could do good, I think.
I don't think a nefarious explanation is necessary for this kind of thing. I think it is most likely simply that false positives on their harm/spam filters are much less annoying to the average viewer than are false negatives, and so they are tuned to avoid false negatives even if it means more false positives.
Do Google guidelines forbid links or references to other hosting platforms in videos?
Let's say I host a video on PeerTube, then make a short teaser and post it on Youtube with a link pointing to it, would that be acceptable?
Edit: note that PeerTube is 100% free and funded by a non-profit organization.
It is most likely that the AI mis-captioned your accent and heard something you are not allowed to say. There's no way it's because Google sees the rise of Nextcloud as a threat to Google Drive, as the post suggests.
I'm terrified of publishing on YouTube or using any Google service that can get me banned, like AdSense. I'm terrified it will cascade into my other accounts and get me locked out of everything.
These automated algorithms that ban content are really very poor, and they make a lot of mistakes. The review process being an equal mess doesn't help. Stuff like this just looks intentional at this point; all the more reason for people to start using software like Nextcloud and federated video alternatives to YouTube. Every one of these zero-cost business model companies is doing the same thing, and none of them has an algorithm that does anything but annoy genuine customers.
The problem is that private companies that become a de facto standard for content sharing can't unilaterally decide what users can and can't publish. We need new laws, the same laws that limit what you can publish anywhere else: is it illegal? Then it's banned; otherwise, it's allowed.
Update: the video has been restored. I did not receive any further correspondence from YouTube. I still have no idea why it was classified as "harmful or dangerous" or why it was restored again.
YouTube and Google will get their lumps when the Democrats are swept out of power. Forward this information to Rand Paul's office, the Republicans will need a justification for the policy changes they make once they are installed.
Rand Paul has been opening his recent videos by talking about how YouTube is cancelling him (a US Senator), and he advertises for Rumble.
Lots is being said about how bad the DMCA moderation process is - but really, let's try to empathize with the BigCo, give them the benefit of the doubt, and think about why they might do it like this even if they hate it too. Maybe they are under a legal guillotine. Maybe it's the politicians' fault for passing dumb laws like the DMCA. Maybe it's our fault for continuing to elect them.
The fact is, we have allowed the government to outsource the enforcement of their stupid laws to large companies who end up taking the blame. Copyright is one, but there are many more.
A better pattern would be to absolve BigCo from responsibility for your copyright violations, and if the government doesn't like what you're doing, then THEY can enforce it against YOU. This would force the government to take responsibility for the laws that it passes, and we would have a clearer idea of who to actually blame for poor enforcement or stupid laws.