I'd take it a step further and borrow an adage to say platforms and media mainly say they are fighting misinformation to convince you that the rest of their content is meaningful.
Base reality is, they all make money on selling your attention to advertisers, and they need you to believe that your encounters with them are "real" and serendipitous, and that you aren't just being managed like an animal in an existential zoo. Sure, there are true believers who think abusing trust and technical power to shape narratives is honest and necessary work, and via Twain, there is no argument that will change their minds so long as they are being paid to believe in it. These would not be the first managers to mistake their own credulity for skill.
I'd reject that any of the platforms are seriously combating misinformation, and say it is just another trope in support of the same end, which is the global brand-safing of all public discourse on behalf of their main essential coalition members: the worst, widely acknowledged, most objectively evil and mendacious organizations in the history of the world. If you don't like how a platform puts you on a target list, actively wipes out your social capital, and isolates you from your relationships and your ability to make a living, well, people are perfectly free to just go and start up their own purges...
I agree with your thesis in general, with several nitpicks:
> Sure, there are true believers who think abusing trust and technical power to shape narratives is honest and necessary work
Narrative warfare exists and is not a single-agent game. I am not saying this makes narrative shaping defensible, but in the absence of some meaning-making work, opportunist opponents will happily fill the vacuum. This is the slippery slope that escalates into being a true believer in power (ab)use.
In the past, opponent processes of meaning-making ran between differing, well-paced journalistic outlets somewhat within consilience distance. Today a) the inter-narrative drift is impossibly larger, b) it is heavily clustered around partisan dimensions which are themselves being pushed further apart, and c) it comes out of a 24hr infotainment firehose because the finances favor that. We're denial-of-service'd out of thinking for ourselves, and the narratives we're given are hopelessly irreconcilable.
> most objectively evil and mendacious organizations in the history of world
There is no such cabal of coordinated evil. In fact, this gnostic belief is part of the narrative that creates ressentiment and nihilism. Emergent foolishness can be just as evil. And it can be countered with emergent, distributed wisdom, not necessarily another few billion dollars' worth of a "good" platform.
> Base reality is, they all make money on selling your attention to advertisers,
Selling your attention to advertisers is a relatively minor harm. We are actually pretty good at suppressing noise and getting to the signal we want. But the shaping of your attention and emotions to maximize time on site is harmful in a much more durable way. Not only does it directly attack your noise-suppression and relevance-detection heuristics, but the more you stay on site, the more you're actively trained to be more foolish, reactive and receptive. This bleeds into the rest of your life. It is like an anti-mindfulness training that stupefies you.
The solution is bound to be in the form of a suite of counter-trainings that give each individual back the agency to realize this is not good for us.
There is no Evil Incorporated®, but there are certainly moneyed interests in every major industry and countless examples of more or less directed propaganda campaigns from oil to defense contractors.
I agree with the solution though, which shouldn't rely on platform moderation whose customers are advertising and PR. The platforms won't have leverage against their own customers here.
This isn't nihilism; nihilism would be ignoring the problem. And one angle to tackle the specific problem of parties wanting to clean up the net for their own purposes was the insistence on neutrality and laissez-faire moderation.
I think it still is the best course but requires education like you suggested.
There is no Evil Incorporated, nor is there a conspiracy, but when you are a company who has all of them as clients, from energy, chemical, lobbyists, to pharma, defense, and political organizations, it is a who's who of really serious baddies. This is normal, except your business is shaping the attention of children and young people based on the intentions of these organizations.
I'm sure the platforms manage their ethics, but the idea that we should trust them with moral authority because they deal with evil more than anyone else should make us more suspicious, not less. I think it is far more likely that they are compromised by interests than that they are resolute defenders of principle. At the end of the day, social media companies are just euphemistic peddlers of nuanced variations of porn, and we probably shouldn't be relying on them for judgment on the quality of public discourse.
Censorship (to say nothing of the coordinated censorship between platforms) is a blunt instrument with a lot of side effects, and people are surely going to start up their own version of something. I'm saying we should think a lot harder about these issues, because we really don't want to be around when people finally get sick of it and start erecting their own platforms.
Thank you, this is one of the best posts I've read on here.
>on behalf of their main essential coalition members: the worst, widely acknowledged, most objectively evil and mendacious organizations in the history of the world.
Very few people seem willing to say this out loud. These groups are not your friends, and are often the biggest offenders of many of the things they claim they are trying to prevent. 'The coalition' is social engineering on an industrial scale, and it's disgusting regardless of the stated or unstated intentions.
Some of the smartest people I know went to work for Facebook or Google. Heck, most people on this site probably would too if offered the chance. Suggesting that thousands of highly skilled, well paid employees who could have their choice of a job anywhere are just "paid to believe in it" is frankly insulting to a lot of people.
The fact is we are in the middle of a pandemic and spreading some types of misinformation is directly akin to facilitating the spread of the virus. If you were in a situation where your basic choices were a) use whatever means you have to limit the spread of harmful misinformation or b) do nothing and watch as the thousands of extra dead and suffering pile up, I hope you would have the same basic human decency as these "true believers who abuse technical power" and choose the former.
Not sure if there's a more eloquent name for these kinds of "throw enough shit and hope some of it sticks" arguments, but the article is very confused, from claiming ad hominem attacks are as bad as racism, to ranting about millennials, to throwing in WMDs in Iraq for good measure.
The actual key issue is somewhere in the middle of that article:
>"When tech companies take it upon themselves to arbitrate what is or isn’t misinformation, or taking action based on events that occur outside of their own platforms what they are doing, at its core, is adjudicating international law"
No, it isn't. It's a private company determining how and who it conducts business with. Being a 'patrician' of your own property is completely fine and in fact a basic right, and if you don't like Mailchimp's content policies, go to a competitor.
Unless someone explains to me, in plain English, why we ought to abolish the very important freedom of deciding for themselves who and how private entities do business with, I don't want to read any more essay-length pieces about the Gulag Archipelago, Orwell, or how his friend being cancelled by Mailchimp is like being stuck in Soviet Russia.
And to preempt the inevitable "but they're so large!": this is not an indicator of anything. Tech companies are large, yes, but competitors are, as Google actually correctly put it at one point, one click away. Size is not an indicator of market power; the inability to choose alternatives is.
> Tech companies are large yes, but competitors are, as Google actually correctly put it at one point, one click away.
Except they're wrong. There are no meaningful competitors to choose from. The big tech platforms, and arguably most SaaS software, are not commoditized. There's no competitor you can just switch to. This is achieved mostly through purposeful incompatibility and all kinds of network effects.
> Size is not an indicator of market power; the inability to choose alternatives is.
Exactly. But in winner-take-all markets, the two strongly correlate.
>Unless someone explains to me, in plain English, why we ought to abolish the very important freedom of deciding for themselves who and how private entities do business with, I don't want to read any essay length pieces any more about the Gulag archipelago, Orwell or how his friend being cancelled by Mailchimp is like being stuck in Soviet Russia.
Because the government is supposed to enforce antitrust laws to make sure there is always a handful of private companies competing with each other and taking different stances on various topics, so that consumers can find the one that suits them.
Currently, if you want others online to hear your opinion, you need to rely on a very limited set of free-to-use services that have an increasing appetite for deciding which opinions should be heard. You cannot easily outcompete them unless you can find billions in venture capital to subsidize your service until it grows large enough for the big advertisers (which are part of the same club) to consider it interesting.
I would say, if a private company doesn't directly charge its users for the services it renders, it should be treated as a government service.
Except that isn't true. You can fire up a BBS from your home machine. Anyone who wants to hear your opinion on things can read it. Freedom of speech is NOT the same as freedom to force people to listen to you.
Not really, given most ISPs' ToS; you'll have to use a provider. Then what if this provider cuts you off? Spin up your own DC? What if you can't get peering? How will you connect your DC?
If you effectively create a town square you shouldn’t be able to ban someone because you don’t like the ideas. How will others know these ideas even exist? (which is the point of censorship)
> Unless someone explains to me, in plain English, why we ought to abolish the very important freedom of deciding for themselves who and how private entities do business with,
Is there no room for nuance here? Can't we support freedom of association without encouraging expanding levels of censorship?
The reason big tech platforms have been moving towards more censorship isn't just natural inclination, but people directly blaming those companies for allowing certain types of speech.
There are strong arguments, some of which the article puts forward, for the value of free flowing information. This doesn't mean we have to abrogate freedom of association, but it does mean that we should push back against censorship.
We could also limit freedom of association without abolishing it. (There are already laws restricting freedom of association to prevent discrimination against protected classes.)
> Tech companies are large yes, but competitors are, as Google actually correctly put it at one point, one click away.
It often is far more than "one click". If your email provider terminates your account, you'll have huge problems everywhere you signed up with that email. If your twitter account is revoked, you lose all your followers and history. If you have digital "purchases" on an account, you permanently lose access to them if the account is revoked.
There are all kinds of ways that a corporation, exerting their freedom of association, can have huge negative impacts on their users. The power imbalance here is sufficient to warrant concerns about abuse.
I don't think we should revoke freedom of association from tech companies, but I do think we need to create limitations on how they can exercise that right and provide checks and balances in that process. I am not certain what those limitations should be but I think that is the direction the discussion should move.
> [explain] to me, in plain English, why we ought to abolish the very important freedom of deciding for themselves who and how private entities do business with
The explanation for this is the same explanation for why eminent domain exists. If you own a piece of land that will be in the middle of a busy road, in the US you likely can't construct that part of the road yourself and charge people or arbitrarily restrict people from using it - you likely will have to sell it to the US government.
I'm not trying to argue about whether this is the ideal way to do things, but the more society views tech companies owning platforms like landowners owning parts of roads, the more government control there will be.
Eminent domain exists because land use is exclusionary. Land owners have the ability to hamper development by exercising coercive power. If AT&T or your phone provider start to cancel you, you have a pretty good analogy.
Services and products on the internet, however, are not land. The reason eminent domain exists is because land is not fungible. Mailchimp, however, does not sit on mail-land. There is nothing that prevents dozens of Mailchimp competitors from existing, offering whatever content policies people prefer. Google does not sit on search-land and somehow exclude anyone from building a search engine. Tech companies, in almost all cases, derive their value from the intellectual property that they built, not from some sort of analog to land they stumbled upon. If that were the case we would be on MySpace.
> [Owners of services] and products on the internet however are not landowners. The reason eminent domain exists is because land is not fungible.
I agree. But fungibility isn't a binary thing, it's a spectrum. And how society chooses to regulate goods and services also varies between extremes.
> Google does not sit on search-land and somehow excludes anyone from building a search engine.
Owning a piece of land that will be in the middle of the road doesn't mean the government can't build a different road right? Both google and the landowner just make it harder.
> Tech companies, in almost all cases, derive their value from the intellectual property that they built, not some sort of analog to land they stumbled upon. If that was the case we would be on MySpace.
Couldn't your arguments be applied just as well against standard eminent domain? I feel like you are saying something analogous to: most businesses derive their value from the goods and services they developed, not from some logo or piece of land they own; that's why the government shouldn't be able to take anyone's land.
I don't know if your defense applies as much to Google search, since it does sit on something like existing search-land: if you scrape someone's website like Google does, you'll find different policies apply. On this website alone, people link sites using referral tricks to accomplish the same thing, scraping as if they were Googlebot.
You are referring to freedom of association, and you are correct that platforms should be able to choose their customers if they want to. If they do that, however, they will be held to account for the choices they make, in contrast to platforms that allow more freedom of opinion. Makes sense, doesn't it?
Some people say allowing certain content is a statement of endorsement ("silence is violence" and similar arguments), but I disagree with that line of reasoning.
"Throwing shit at people and hoping it sticks" is something I do associate with accusations of racism to be honest.
> No, it isn't. It's a private company determining how and who it conducts business with. Being a 'patrician' of your own property is completely fine and in fact a basic right, and if you don't like Mailchimp's content policies, go to a competitor.
Correct. They can do what they want. However, the title of the article is still correct.
That part isn’t the problem. But the part where we make it clear that by carrying the wrong speech, tech companies are sticking their necks out for antitrust enforcement and other regulatory curtailment, is a bit worrying.
If I wanted to and could find the funds, would I in practice (as opposed to just in theory) be legally allowed to create my own financial services organization that would provide financial services like payment processing and banking to people who have been banned by other financial services providers for their theoretically legally protected speech? Honest question - I do not know the answer.
Yes. Legally you would. Now, would your partners and customers and all the people who buy your products be OK with it? Hosting companies have popped up for this exact scenario. Most businesses sit on a stack of dependencies (supply chain, vendors), and all of those people need to support you for you to stay afloat.
The problem is twofold. One is that the party in power can use a variety of methods to coerce platforms to do its bidding. This is not a supposition; it has been going on all over the world for a while. For all practical purposes, FB and Twitter act as an arm of the US government when it comes to COVID. Remember when you would get insta-banned for suggesting that maybe a virus that arose 10 minutes away from a lab where they used to work on viruses like that might, could, maybe somehow be related to said lab?
Second is that the platforms themselves now have massive political power. Twitter or FB can easily suppress any fact or opinion it does not like (see Twitter and Biden's crackhead of a son), and it can also easily trade that ability to suppress for political favours.
In a society as divided as the US is right now, it is inevitable that they take the 51% side to suppress the 49% side. The result will be a bloody mess.
There is no solution to it either; shit is in full flight towards the proverbial fan.
> Unless someone explains to me, in plain English, why we ought to abolish the very important freedom of deciding for themselves who and how private entities do business with
Just to set some parameters here, you would be fine with, say, a landlord refusing to rent an apartment to a black person?
>Unless someone explains to me, in plain English, why we ought to abolish the very important freedom of deciding for themselves who and how private entities do business with
Not a lawyer, but I wonder if the Commerce Clause and Necessary and Proper clause give regulatory power to the government under the guise of promoting the general welfare. After all, this has been applied to commodities in the past, even when said commodities weren’t even used in trade/commerce.
In plain English, there’s precedent in the constitution and law for the government to regulate business if it promotes the general welfare.
> In plain English, there’s precedent in the constitution and law for the government to regulate business if it promotes the general welfare.
There are a lot of things that "private entities" aren't allowed to do when doing business with the public in the United States.
They can't discriminate based on race, or religion, or national origin, or handicaps, or (in civilized areas) sexual orientation.
It wasn't always that way. At one time all of those things were perfectly legal. But we, as a society, after a great deal of argument (sometimes contentious) decided that they were wrong and changed the laws to prohibit those things.
We're having essentially the same arguments about online speech right now.
Regulation of private companies is sane and normal; there is no need to appeal to their inherent freedoms and such, and everyone should be on the same page about that. If you ideologically believe that no regulation should ever be imposed on a company or individual, then follow that idea to its logical conclusion across problems like ecology, climate, transportation, and yes, control over information.
The problem is rather __who__ will regulate a __global company__, when in fact companies like Google arguably hold more political power than many smaller countries do.
Regulation of private companies is very sane and normal, I never said anything to the contrary. But yes there is a need to appeal to our fundamental freedoms apparently because you still need to actually justify and make a coherent argument why a given piece of regulation or limitation of freedom is reasonable, rather than just being mad that you got kicked off Twitter.
It's very important to distinguish between the actual right you have, namely that nobody should be able to compel you to speak or dictate how you speak (a right that in this case Mailchimp has), and the complete reversal of this: forcing private entities to host the speech of others, and having the government coerce platforms on behalf of third parties.
The author wants nothing less than a complete reversal of how we treat association and speech, on the grounds that, in his perception, 'purple-haired millennials' run a business in a way that he doesn't approve of.
There is a simple, straightforward answer here - prevent companies from amassing such power in the first place.
It is far more obvious to me that there is an inherent right to choose whom you do business with and whom you don't than that there is an inherent right to have a single corporate entity employing unbounded numbers of employees and making unbounded amounts of money. The individual person can buy and sell on their own, without the government; the corporate entity only really works because it is recognized by the government.
To be clear, I'm not claiming that regulations are illegitimate. We do regulate, say, the individual farmer to make sure they're selling actual milk and not melamine. We do regulate the individual storekeeper to obligate them not to discriminate on race. But those regulations are all very measured and designed to solve specific problems (ultimately - protecting those who are even weaker than the individual businessperson).
Mailchimp is not some tech titan. They do not have more political power than any functional government. They are in no sense a monopoly in their industry, and they have exactly one industry, unlike the usual juggernauts that come up here. Calling them "Big Tech" is a clear example of the terrible slippery slope of adding regulations about obligating companies to do business vs. regulating their accrual of power, if that's the real problem.
Any regulation that would apply to Mailchimp would - if it's going to be effective - also have to apply to a startup that wants to be in the same industry. (It may not apply immediately, but it will certainly apply to the startup's medium-term goals, and thus affect its profitability.) If you just want to regulate Google and Facebook, I could almost believe that, but if "Big Tech" includes Mailchimp, who doesn't it include?
What we'll end up with is a system where anyone who wants to do business must have their business plan approved by Big Government, and where a professional specialization in complying with those regulations arises, and where there's a nice revolving door between implementing and enforcing them, and where Google continues to hold more political power than several countries and can engage in regulatory capture. It's hard to imagine that this aligns with the political desires of anyone except the current leadership at Big Tech. (Even the aspirational leadership of the next generation of Big Tech will find themselves shut out by this regulatory regime.)
So we’re on to time machines this early in the discussion?
Government is toothless and unmotivated to rein them in, for the same revolving-door issues you wrote about already happening today. The only thing that changes any of this will be whistleblowers and exposés from inside these companies. Then you have the issue of which media company will run that story. Think about how much the media rely on Twitter for "content" now.
I am skeptical of any positive changes for quite some time.
The issue with policing is not "muh private property, people's choice", but about controlling a "public utility" like a nationwide communication medium. Here society needs to hamper the private business (or monopolization in general) side, like in journalism, food safety, and health services. I would dare say that any social medium that gets mass adoption and can massively influence society needs public oversight and regulation (with good regulators). Facebook & Co. need to comply, as Purdue had to when society realized their shenanigans in a "laissez-faire" environment.
I'm not going to comment on the author's credibility or overall quality of his arguments.
However, I think he brings up a very important point: who gets to decide what's misinformation?
* Private organizations are run by fallible humans, chasing money and reduced liability
* Governments lie. If you don't believe me, read any history book
* Scientists are not a spherical mass floating in a vacuum with a unified opinion, nor are they correct by default
Furthermore, only engineers believe most of the important issues in the world actually have an objective answer to them.
So, who gets to decide? Who gets the power of saying what thoughts are bad and what thoughts are good? Is it someone you agree with? What if it changes to someone you don't?
Most evil in the world is perpetrated not by evil people, but by people with good intentions who believe they know what is best for others, and believe that justifies forcing it on those others.
I disagree: we live in societies and can vote. In my country we decided that trading, joking about, lauding, or repeating anything Nazi can be prosecuted, so we do prosecute it; a judge decides, according to past judgments and his reading of the law, and done.
No need to give everyone the job of being clever: we can actually blacklist any opinion we don't want to see circulated and vote on it.
Some societies have limits on these votes as a means to prevent the tyranny of the majority.
In the U.S. the Bill of Rights outlines specific rights that cannot be infringed, including the freedom of speech. Now, this amendment could technically be repealed but that is a very high bar since it’s considered a fundamental right by which the others are derived.
Edit: downvotes are fine but at least add to the discussion. I would be interested in hearing what parts you think I’m not interpreting correctly
In Manufacturing Consent, Chomsky and Herman argue that the American system of propaganda is more effective than the Soviet one, precisely because there is no commissar explicitly telling you what you can and can't say, or any other kind of overt and explicit censorship, making people blind to the possibility that they could be being manipulated.
Freedom of speech is a natural right that just so happens to be enshrined in the constitution. It's very disturbing to see how modern discourse ignores the existence of natural rights. I don't see how the 1st amendment is a tenable position anymore when the government is now actively censoring by proxy. https://thehill.com/opinion/white-house/563547-hypocritical-...
The problem is that you're not conceiving the boundaries of the system properly. It's best, when creating an authoritarian mechanism to remove liberty, to assume that control of that mechanism will fall to the faction you're least aligned with. This is why the US is liberal before it is a democracy. It has meta-code defining the boundaries of the democracy, and in the case of speech, it asserts that the liberty to speak supersedes any democratic authority.
Obviously people looking for an expedient authoritarian mandate to limit speech within the boundaries of what they consider to be acceptable see the existence of free speech as the allowance of that which is outside their boundaries, but they don't actually understand the purpose. The purpose is a hedge against tyranny with a trade-off. People get to say horrible, stupid, counterproductive things, but your political enemies don't get to constrain your ability to express yourself.
I also don't understand why the rest of the world is so eager to see this right stripped in America... I would feel extremely claustrophobic if there were no place on earth in which free communication was enshrined in law. Even if Europeans want to mandate their own speech rules, they still benefit from the hedge against tyranny provided by a place like America existing.
I think his point was that you don't always get to decide for yourself, and voting was an example. Whether it be a majority, a dictator, or a corporation, someone, to a degree, has the power to tell you what's right and wrong, what's real and not.
I think you're right in your assessment of "evil" but pragmatically wrong (or maybe just oversimplifying) about the solution.
When there are large asymmetries (for example, in information or risk) regulation can be warranted. For example, most people feel regulating fissile material is prudent because the risk of it coming into the wrong hands is too high to just let unchecked freedom ring when it comes to owning it or not.
The U.S. Constitution enumerates other powers for the government, including the power to tax and to regulate commerce.
Sometimes I think people confuse what they wish/think the government role should be with what is explicitly defined within the social contract between the government and the governed.
If anti-vaxxers just hurt themselves, there wouldn't be nearly as much animosity around these issues. They pose a legitimate risk to other people. Not just through direct infection of other people, but by offering themselves up as a breeding ground for more effective versions of the virus.
Are you saying the vaccine only becomes effective if everyone has it? What good is it then, since we will never reach 100% vaccination?
The claim that unvaccinated people are a breeding ground for more effective versions of the virus sounds like a Big Pharma talking point, rather than something with solid science behind it. My understanding is that environmental pressure results in mutated variants becoming widespread. Couldn't vaccines themselves cause this? The virus hardly mutated for a whole year, but then once inoculations became available it seems that more infectious variants are popping up left and right.
Vaccines are most effective when they enable you to reach herd immunity which extends protection to those who cannot be vaccinated because they are immunocompromised to one degree or another. We don’t need 100%. We need something like 85% for Covid. You’re right that it will never happen because conservatives have weaponized stupidity and selfishness.
Every chance to breed is a chance to mutate. This isn't a binary factor; it's a continuum. At 1% of the population, there is likely insufficient breeding ground and the virus population would collapse exponentially. At 70% vaccinated, with the unvaccinated clustered in large contiguous regions, there may be enough to keep the pandemic going continuously if successive variants emerge that are sufficiently effective at evading existing immunity.
One person has no meaningful effect, but collectively their actions are capable of doing great harm to the rest of us, even if the rest of us act in our own best interests by getting vaccinated and participating in other public health measures.
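As a back-of-the-envelope illustration of where figures like the 85% mentioned above tend to come from, the classic herd-immunity threshold from a simple SIR model is 1 - 1/R0. This is a big simplification (it ignores vaccine efficacy, waning immunity, and population mixing), and the R0 values below are illustrative assumptions, not authoritative estimates:

```python
# Classic herd-immunity threshold: the immune fraction needed so that
# each infection causes, on average, fewer than one new infection.
# Threshold = 1 - 1/R0. (Simplified SIR approximation; real dynamics
# also depend on vaccine efficacy, waning immunity, and mixing patterns.)

def herd_immunity_threshold(r0: float) -> float:
    """Immune fraction required for the effective R to drop below 1."""
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 dies out on its own
    return 1.0 - 1.0 / r0

# Illustrative (assumed) R0 values, not settled epidemiological figures:
for label, r0 in [("less transmissible strain", 2.5),
                  ("more transmissible strain", 6.5)]:
    print(f"{label}: R0 = {r0} -> threshold ~ {herd_immunity_threshold(r0):.0%}")
```

With an assumed R0 around 6.5, the threshold works out to roughly 85%, which is the kind of arithmetic behind such targets; a more transmissible variant pushes the required coverage higher.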
If your freedom to act irrationally allows you to cause my untimely death, I don't understand how my inability to go on living wouldn't represent a net decrease in freedom versus forcing you to vaccinate.
If you look at the breakdown of anti-vaxxer sentiment, only 9% are dead set against vaccination under any circumstances. These people's perception of their freedom isn't worth hurting everyone over.
All evidence shows it to be the other way around, the vaccinated are far more dangerous to the unvaccinated than vice versa, it isn't even close. Leaky vaccines create more virulent and deadlier viruses in the unvaccinated.
This is incorrect. A vaccine is not an antibiotic; the presence of one does not cause virulent strains. Vaccinations trigger the body's immune response to the virus; nothing else. They provide for a more effective response faster, but do not otherwise provide any forcing function.
Mutations and variants occur all the time as the virus reproduces. Variants are not a response to any vaccine; they have appeared primarily (entirely? I haven't been following all of them) in regions where the vaccine is not yet widely available because that is where the virus is able to multiply (and thus mutate) the fastest.
The general theory is that a virus avoids evolving in a way that is too dangerous to its host because the sooner the host dies the less potential the virus has to spread. As the vaccinated have a weaker response to the virus there is no longer as strong evolutionary pressure against the virus becoming more lethal.
The counter claim seems to be that because vaccination reduces the abundance of the virus there is a lower chance of mutations. It's hard to say which effect is stronger and it probably varies greatly by virus and by vaccine!
Disclaimer: I'm not an expert, just an armchair enthusiast.
I think it's too soon to say yet if the vaccines encourage mutation. But my understanding is quite a few of the significant variants come from places where there was significant vaccine testing:
- Brazil (Gamma)
- South Africa (Beta)
- India (Delta)
- UK (Alpha)
I’m not qualified to really discuss this, but here’s a study I recently heard brought up. It would be interesting to hear an unbiased opinion from someone with a medical background.
“Vaccines that let the hosts survive but do not prevent the spread of the pathogen relax this selection, allowing the evolution of hotter pathogens to occur. This type of vaccine is often called a leaky vaccine.”
This is exactly why I disagree with everyone who argues it’s an issue of personal freedom.
You may have the personal freedom to get drunk, but that doesn’t give you the freedom to drive around while under the influence and endanger everyone else on the road.
The choice to be vaccinated or not is not a choice that affects merely the individual; its consequences affect communities. Regardless of whether it is valid or not to refuse vaccination, it is not solely within the realm of personal freedom and should not be treated that way.
This is not true, any more than it is for vaccinated people.
Viruses will still spread through a fully-vaccinated population, and they will still mutate in a fully-vaccinated population.
The CDC released a report [1] showing proportional numbers of cases coming from fully vaccinated people (74% of cases against 69% of the population fully vaccinated).
Treating people who are not getting the vaccine as enemies is not going to convince us. In fact, this whole thing has made me, in particular, suspicious of other vaccines that I will research and make decisions for myself about.
The CDC study gives some credence to your theory but your conclusion is by all evidence wrong. It shows nothing about the transmissibility of vaccinated people who test positive. It also disregards previous evidence before the delta variant that vaccinations widely prevented positive tests. So there is SOME effect there. Taking a study that says “there’s definitely some chance of transmission” and making the conclusion “there’s no difference” only happens due to your predetermined conclusion. A real look at the actual body of evidence on vaccines necessarily draws different conclusions.
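One way to see what those two headline proportions do and don't imply is to invert them: given the share of cases among the vaccinated and the share of the population that is vaccinated, you can back out the implied per-person risk ratio. This is a hedged consistency check, not an effectiveness estimate; as noted above, it ignores testing rates, exposure differences, and the outbreak setting the report covered.

```python
# Sketch: back out the implied per-person case risk ratio
# (vaccinated vs. unvaccinated) from two aggregate proportions.
# Inputs are the figures quoted in the parent comment (74% of cases,
# 69% of the population); the arithmetic makes no causal claim.

def implied_risk_ratio(case_share_vax: float, pop_share_vax: float) -> float:
    """Ratio of per-person case rates, vaccinated / unvaccinated."""
    risk_vax = case_share_vax / pop_share_vax
    risk_unvax = (1 - case_share_vax) / (1 - pop_share_vax)
    return risk_vax / risk_unvax

rr = implied_risk_ratio(0.74, 0.69)
print(f"implied risk ratio (vax/unvax): {rr:.2f}")
```

The point of the exercise is that raw case shares are dominated by base rates: when most of the population is vaccinated, the vaccinated can account for most cases even under a range of per-person risk assumptions, so the proportion alone supports neither "no difference" nor any specific effectiveness figure.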
One need only look at the overlay of cases and vaccination rates across different US states to show you’re just flat out wrong.
Most of what you say is correct, but saying absolutely that unvaccinated people are endangering other people and vaccinated people are not is also false.
Don't get me started on how I'm a better driver when tipsy than the average 80 year old that still has their license...
The tongue in cheek point I'm trying to make is that too much of the conversation focuses on relative risk. Relative risk obsession has people not taking more risky, but still incredibly safe (at least in the short term) vaccines here in Australia. Absolute risk (or its delta) tends to be a much more salient number to the victim.
It does help the antivaxx crowd in some circuitous arguments however, as the chance of an unvaccinated person killing a vaccinated person is low if vaccines work!
Do you feel the same animosity towards the bats and/or pangolins which originally produced SARS-CoV-2? Because they're still out there and still incubating new variants as well. Eventually some of those will inevitably jump to humans.
>Most evil in the world is perpetrated not by evil people, but by people with good intentions who believe they know what is best for others, and believe that justifies forcing it on those others.
Other way around. Evil would convince you that leaded gasoline is fine, and that "good intentions" of regulating lead out of gas are evil because government power.
That's the excuse given, not the reason. It's like saying "evil starts with science papers" because some scientists affiliated with the Nazi party published fabricated findings that made racism look scientific and correct.
The bigger instruments of justice, like the U.S. Bill of Rights - that's for the greater good. Having law and order - that's for the greater good too. As are judges, police, government, etc. All the greater good that requires people to comply.
Private companies justify their abuse by playing the "greater good" line too. From the "we're gonna change the world" slogan that was in every 2010s startup's marketing copy, through spinning "disruption" as something good, all the way to the usual corporate talk about creating jobs and innovation.
This is not what is currently up for debate. The current conflict is over control of private forums.
It's not a cut-and-dried issue either. If you owned a site, anyone forcing you to post things to your site you don't believe would also be an infringement of your rights. It cuts both ways.
In my eyes the major issue is the consolidation of discourse onto a few private forums, not whether or not those forums should be able to police themselves.
That was until we collectively decided harm reduction is a greater cause than freedom preservation. Now any and all freedoms we previously had can be temporarily, permanently, or conditionally suspended in the name of harm reduction (saved lives, fewer offended, etc.)
It’s not some recent idea that freedoms need to be balanced against the harm they can cause others. The harm principle has been around at least as long as On Liberty (1859), and it wasn’t really a new idea then.
While this is great in principle, some people's voice is much louder, and their words can devastate the lives of individuals, the economy and our society. Should there not be something in place to prevent people from spreading falsehoods that can lead to harming others? The solution is not as easy as what you've stated. In fact, I don't think there is a solution, only a continual rebalancing.
Whoever wants to. Then people can compare the competing arbiters and decide who earns their trust. That used to work with mass media until the internet came along. Now we no longer have multiple newspapers, but "The Newspaper", "The TV" and "The Telegraph" (Twitter?) - like 2-3 news sources in total, which are basically politically aligned. The name of the game changed from "popularity and copy sales" to "the network effect". I'm not sure if this can ever change within a centralizing communication medium like the internet, but perhaps at some point people will realize that the network effect is evil and it will get a bad name.
Wrong or right? The internet was designed to be "decentralized" in the sense of robust communications, but ultimately the goal was to convey the commands of a single, centralized US army.
The packet switching network was designed to be decentralized. That's layers 2-4 of the ISO/OSI model. Layer 7, the application level, is and always was a force of centralization.
I think the better answer is not “whom” but “what process”.
People will always be fallible. Processes, however, can layer checks and balances in a way that people often can’t. Processes are also easier to change than people in my experience.
Processes certainly aren’t perfect but considering your examples are past tense, they can tend to arc towards “more perfect.”
I like this idea. Perhaps I'm committing the same fallacy that the Romans did around the fall of the Republic. The prevailing idea was that their problems were caused by a "failure of Roman morals", rather than by a failure of their system, even as they realized the inevitability of their situation and tried to enact changes.
> Processes certainly aren’t perfect but considering your examples are past tense, they can tend to arc towards “more perfect.”
Agreed. My use of the past tense was to illustrate a basic truth: given the big (colossal) failures of the past, is it more likely we have largely found the truth, or that we are still very far away from it? It's impossible to know, of course. That said, the number of people who will claim their truth is ultimate and that all who do not see it their way must be idiots is staggering. It's human nature though; I literally did that one comment above.
Ideally? A jury of your peers, supported by a system that at least purports to provide checks and balances against being dominated by any single person's or group's judgment.
If factual veracity faced a similar system that tries to balance the needs and desires of all participants, then censoring misinformation would be a lot more palatable (and would still, like our justice system, often be wrong). The current system of suppressing misinformation has none of the checks and balances contained in the legal system, and adding those in would do a great deal to facilitate trust in that system.
You're getting mixed up. Good and bad isn't the same as misinformation. The intent is important. Bad faith arguments and presenting misleading conclusions to further an agenda is misinformation.
Yes, sometimes it is hard to distinguish between extremely dumb and dishonest behavior. For example, a few years ago in my native language someone made a news piece about some women working for Google suing the company over unfair wages (according to them, they were making less for the same work), and whoever edited the piece decided to include that, according to Glassdoor, women in tech make less than men, but completely forgot that the same Glassdoor data said Google was an exception (based on the information they had).
I think it's a good point, but... what information do we use to determine intent? And aren't they subject to the errors of normal information in addition to possible ill-intended misinformation?
If you are judging a single action from someone, it is impossible to distinguish between "dishonest behavior" and "stupid behavior" (assuming you disagree with what they did), so more often than not you need to dig deeper and compare what they said prior to the incident with what they have to gain from a change. But to be clear: it doesn't mean that you are wrong if what they said aligns with what you believe; it just means that some people are trying to get the same thing you want for other reasons.
Tactics used are a place to start. Leading questions, things like "you can draw your own conclusions", blatantly non-credible sources, taking quotes out of context, a history of being caught being misleading. There are patterns frequent misinformation spreaders use, I think.
If the government enacts a law to ban misinformation, I am sure there will also be a ministry of truth by that government, which will decide what information is truthful and what information is not.
OK, but, we already have social/legal/cultural norms for deciding what thoughts are bad and what thoughts are good. Here are some things I can't do via Mailchimp for either legal or cultural reasons (see also https://mailchimp.com/legal/acceptable_use/):
- Advertise pornography
- Plan attacks against the United States government
- Advertise medical cures that the government doesn't approve of
- Sell counterfeit products
- Pump and dump a stock
- Reveal the identities of CIA operatives
All of these are thoughts, just as much as advocating against vaccines or for alternative cures or what-have-you are thoughts.
Who gets to decide any of the above?
Would it be a good idea to set up society such that all of the above thoughts are protected speech?
But this article isn't complaining about Mailchimp shutting people down for thinking things in their own mind, it's complaining about Mailchimp shutting people down for publishing and advocating those thoughts.
What is the argument that saying "Ivermectin is being suppressed by Big Pharma, anyone who takes it will get cured of covid" is simply a thought whose publication is worthy of both social and legal protection, but saying "Contoso has a groundbreaking product they're going to reveal next week, anyone who buys the stock now is sure to make a profit" is illegal securities fraud? How do you distinguish those cases?
The comment I'm replying to is asking where, if at all, society should be structured to prevent people from spreading bad thoughts, and it's clearly leaning towards the angle that nothing should prevent people from spreading bad thoughts, and listeners should be free to make up their own minds.
If certain types of bad thoughts are illegal to spread, the argument, I would think, is they should stop being illegal.
(On a more practical note, Mailchimp's legal department is not a court of law. If freedom of speech / of thought-spreading is important enough to your society that you want to place legal obligations on private companies' platforms not to interfere with it, as TFA advocates, I think a natural consequence is that private companies should not decide what crosses the line and what doesn't, and probably that private companies should not be liable for what's said on their platform. In fact, that's more or less what Section 230 gives us! In a world with strong free-speech protections, it seems to me that Mailchimp should say that you can send whatever you want through their service, and should be encouraged by the government to take that approach, and that if you send something illegal, that liability is on you alone, after due process of law.)
> Advertise medical cures that the government doesn't approve of
There is nothing wrong with advertising the truth that certain medical treatments exist which are not government-approved, or not legal, in the US, but are definitely viable and actively used in other parts of the world, or could be used by physicians off-label.
Domperidone [1] is a drug that is used for a few indications overseas. There is an active discussion around it on a number of social media sites, but the FDA warns against importing it, for violating an FDA Act. It is possible to get an exception for one indication in the US, potentially. But many women want it because it helps with lactation, and for a woman with a hungry infant, this is a big issue. You get a screaming baby that wants 8 oz more of milk a day, and lemme tell you, you can't negotiate with a screaming baby! Off-label use of medications is also common among physicians; Viagra is one drug that has a number of off-label uses.
BigTech squelching the conversation around domperidone is fairly similar to BigTech squelching the standard COVID treatment protocols [2], from Zelenko [3] & McCullough [4], which will likely become standard of care fairly shortly, and are now in evidence on CDC websites too [5], though significantly trailing the most updated protocols. But they don't squelch domperidone talk, and the reason is simple. Pushing vaccines is a big priority for the govt, while going after the discussion of secrets for increasing breastmilk output would probably be bad optics.
> actual violations of the law, at least in the United States.
Covid protocols have nothing to do with actual violations of law, and they will be standard of care shortly, as the CDC website writing [5] is on the wall.
It's a playful jab at a category of people I am part of too, although I do not share that belief. The statement I made is wrong on purpose regardless (for light comedic effect): not all engineers hold that belief, nor are they the only ones who do.
There is a significant overlap (higher than a random person's picked from the population) between engineers and what I would call "fundamentalist rationalists". That is, people who believe in reason and science to the point they wrap the speedometer back around to religious belief again. Those people become unable to think outside that framework and consider nuanced solutions. Not everything worth solving has measurable or falsifiable solutions. The world can be very uncertain and work against all common sense at times. This doesn't mean one shouldn't attempt to make sense of it obviously, but science does not a god make. It may, hopefully, but pretending it is that way now is no good.
I'd honestly consider your take as lacking nuance. Or lacking a dimension.
Accepting uncertainty doesn't mean abandoning objectivity. On the contrary - difficulties in objectively measuring things or falsifying hypotheses are best handled with mathematical tools, which absolutely can work with uncertain quantities (that's what probability theory is for). Throwing hands in the air and saying "it's too difficult therefore there's no objectivity", or "it can go whichever way, so it may as well go the way I like", is not the solution.
I don't disagree with that notion actually. I personally do not believe in objective truth, philosophically speaking. I do believe in creating ever more useful incarnations of a relative truth, and that so far the methods which you mentioned have been the most useful.
There is no golden hammer, however. To give a concrete example, I believe our understanding of mental issues using a scientific framework is woefully inadequate. In the future, we may possess enough information to be able to apply scientific principles. In the present, we have managed to control some mental illnesses to a degree using pharmaceutics.
Talking with various people who have had the misfortune of being born with troublesome minds, and experiencing my own mind, has led me to believe that a large amount of mental illness, especially depression and anxiety, is misidentified due to our infatuation with treating the mental plane as analogous to the physical body, so as to be able to apply our scientific knowledge to it. The body is hardware, hard and physical; the mind is software, and it is changeable. Many are born into this world, however, without the necessary tools or aptitude to actually go through this process of change and heal their mind when it gets into a broken state. Their frustrations pile up, making them ever more ineffective, all while they are blind to the fact that they could wake up the next day and... feel completely fine.
It is none of their own fault in a way, as these skills are barely ever taught, and not too many people seem even aware that it's something just as possible as moving your arm. They treat it as if the only way they can "get fixed" is external, as if their car broke down.
Humanity did, and still does, have ways of undergoing these processes. Rituals of progression from one stage of life to another, as well as yearly and monthly rituals affirming one's position in life, were more important than they are nowadays. As much as "they do nothing", the mental effect was most definitely not nothing. The human brain always seeks meaning, and lack of stimulus is its greatest enemy.
That is not to suggest that the world was an idyllic paradise full of meaning before. Unlikely. But it's hard to deny God did die somehow.
> Furthermore, only engineers believe most of the important issues in the world actually have an objective answer to them.
No true engineer believes that. All engineers must make compromises. Anyone that uses black and white thinking simply cannot engineer (unless you are using the term in a strictly legal sense).
The answer to this question is obvious - it's an exercise of pure power. Any time you hear anything about "misinformation", the real thing being talked about is power. Whoever has the power can censor those who do not have it and deny them the chance to express their views. The guise under which it is done is immaterial - today it's health, tomorrow it's climate change, the next day it's electoral politics or terrorism or vegetarianism or gender issues or bee colony collapse. Anything can be used as a basis for censorship, once you have the power and the will to censor. And the power to define what "misinformation" is and the power to censor are one and the same. It's not about seeking the truth; it's about seeking control and domination.
Once, in America, it was universally accepted that censoring speech is not a power anybody should get. Neither the government, nor anybody else. Of course, as with every ideal, reality sometimes fell short of it - there are many infamous examples of censorship, both governmental and otherwise, that happened in America. But those were ultimately recognized as conflicting with the ideal - and ultimately, the ideal won and they lost.
Today, this ideal is increasingly being abandoned - large parts of the Left have already abandoned it completely - for the ideas of "safety" and narrow partisan political considerations. The nation - and the whole culture we call "Western culture" - will suffer a lot and regret a lot if we don't find a way to stop it.
"Misinformation" is one of the most brilliant propaganda coups of my life time.
It does not exist. There is no such thing as "misinformation".
There is only misinformation according to some party.
"Google is going to remove misinformation from YouTube." sounds incredibly laudable to the modern ear.
"Google is going to remove things that are misinformation according to Google corporate policy." is much less compelling.
A challenge for anyone reading this: From now on, whenever you read the word "misinformation", fill out the "according to ..." yourself. I've been doing it for a while now. It really improves your perspective.
(I do believe in objective truth. However, if we had a mechanism for objectively determining it in a way we all agreed upon, we wouldn't have this problem in the first place. Whatever the objective truth may be, I seriously doubt "according to the corporate policy of X" is it.)
> It does not exist. There is no such thing as "misinformation".
I'm sorry, but this is complete BS. If someone makes statements that they know are false, then they are a liar and are spreading misinformation. If someone makes statements for which they have incontrovertible proof, this is absolutely not misinformation.
Just because the gradients between those two are complex, doesn't mean that misinformation doesn't exist, just that the definition of "misinformation" is fuzzy (like so many other words.)
(I don't believe in objective truth or meaning. I do believe that communication is impossible without mutually shared subjective truths and meanings.)
Edit: The reality is that there is a significant amount of misinformation put out by all kinds of people for all kinds of reasons. Some of it (e.g. some press releases) is widely recognized but also generally tolerated. Some of it is partisan. Some of it is widely accepted as truth by almost everyone.
I was, perhaps, unclear. Of course objective falsehoods exist. However, it's a motte & bailey argument to talk about how objective falsehoods exist, but then as soon as we're talking about policy to pretend that we've got some kind of objective way of determining what they are.
My point is that "misinformation" should always come attached with an "according to X". Some Xs are more reliable than others.
However, as I said and will certainly stand by, "according to Google corporate policy" or "according to Facebook corporate policy" is not one of the more reliable of them. Google and Facebook corporate policy is not in a position to be making that determination, and whatever metrics they are using it is certainly not any definition of truth I recognize.
There's also going to be quite a lot of disagreement about which of those Xs are reliable and how much. This disagreement is not suddenly resolved by stripping away the provenance of statements and letting the word "misinformation" float free without worrying about provenance. It is absolutely valid to discuss how much to trust a given source, but the propaganda word "misinformation" is like the passive voice of propaganda... it isn't misinformation according to X or according to Y, it's just... "misinformation", floating free and objectively determinable. It's a pretty serious step back in philosophical sophistication.
I also think there's a lot of people accepting "misinformation according to our global elite" who, if they thought about it a bit more clearly, would wonder why they're accepting that definition so easily and thoroughly.
> My point is that "misinformation" should always come attached with an "according to X". Some Xs are more reliable than others.
The value of specifying epistemic sources is as important when putting forward information as it is when labeling misinformation. Yet most people are generally pretty lax at including and verifying epistemic justifications (especially for claims they are inclined to agree with.)
It is true that the act of labeling things as misinformation can itself be a form of misinformation.
The issue, as I see it, is that we have accepted the practice of deliberate misinformation as long as it is "for a good cause". We don't place as much value on people trying to accurately convey information and nuance as we place on the function that information is serving.
I see this everywhere: in the statements about masks that were made early in the pandemic to make sure healthcare workers had supplies. You see it in narratives about the last election. You see it in a lot of the reporting about Russiagate. There are little to no consequences or reprobation from people who agree with the end goals.
The incentives are all wrong and I don't know how they get fixed, but the problem is much bigger than just "Big Tech", "Social Media" or a bunch of stupid people on the other side of the partisan divide.
And this isn't even limited to the hypothetical. Just in the last year we've had completely egregious examples of abuse of misinformation classifications. An article about Hunter Biden's laptop, posted by the oldest newspaper in the country, the New York Post, weeks before the election, got them banned from Twitter and Facebook. Any mention of Hunter Biden's laptop would get the post flagged as misinformation and banned.
Now, I don't care about Hunter Biden's laptop. What I do care about is that this was absolutely not misinformation. And then the same happened again with the lab-leak hypothesis: given a pants-on-fire dismissal and declared "debunked," when it wasn't then, and has been shown very clearly not to be now.
So, yes, the abuse of the authority is obvious in the hypothetical, but the relegation to the abstract isn't even necessary to condemn the legitimacy of these self-proclaimed institutions.
There is objective truth. Trump did not win the last election, no matter what light you look at it in. The absolute truth is, he lost. That isn't misinformation. The same can be applied to a myriad of other things. The earth is not flat. The vaccine does not contain microchips. Masks do prevent the spread of airborne disease. Jan 6th was an insurrection by Republicans and not a tourist trip.
a violent uprising against an authority or government.
They violently forced their way into our Capitol with the idea of intimidating the government to prevent it from handling the lawful business of succession, namely counting and acknowledging the lawfully cast electoral votes.
This isn't subjective; it's the simple, literal dictionary definition of the word.
The peaceful transfer of power based on democratic elections is at the very heart of democracy and they gathered to obstruct it by force in order to substitute their will backed by violence for the will of the people.
They didn't just fight with the cops; they beat and murdered cops in order to destroy democracy itself.
A single gunshot at the alleged insurrectionists. Getting shot does not make one an organized rebellion attempt.
In a nation with as many guns as America, especially in the right wing demographic, I'd expect some shooting and a lot more organization before I upgrade from 'riot'. Insurrection is just political spin. Look at that picture with the goofball holding the speaker's podium and tell me these people were serious.
That goofball who is currently in prison awaiting trial and will end up in federal prison for a long time to come? We must have different definitions of serious.
So insurrections were only possible after the invention of firearms? That's weird.
And I'm assuming you mean one gunshot "from the aggressors" as one of the Capitol Police officers did fire their weapon in defense of the Capitol.
What would you call a group of armed people who forcibly enter a building in order to disrupt the work of a government and install an unelected leader to a position of power?
"Supposed to be" according to whom? It's fine that the author wants to run a social media company like an email service, and of course he's free to do that. But none of the major social media sites share that business model. Their revenue is based on engagement, and that model is deeply and inextricably entangled with their content and how it is pushed out to users. Are social media sites qualified to be arbiters of truth? Of course not, that's not their business model. But they're well within their rights to decide what content gets exposed to who on their service, and they've done that from the very beginning. People who are upset at being "deplatformed" are in the exact same position as SEO optimizers whose business gets gouged when Google changes their search algorithm.
If you want to make Facebook some kind of common carrier, you're going to have to outlaw all of its engagement-optimizing AI and targeted marketing, the very core of its business model. When you get on a platform to take advantage of exactly those tools, you can't complain when they change the rules on you - you knew the score when you signed up, so you can save the crocodile tears.
I'm also gonna flag this, after careful deliberation. I'm gonna leave a comment, because I'm not the shadow powers that be, I'm just a dude who thinks this is an incohesive rambling that's a waste of everybody's time and, despite the promising title, ultimately rehashed flame-baity non-tech non-news.
It was kind of amusing that in his section giving examples of what he said were various logical fallacies in the "vast majority of all media narrative around COVID-19" he committed several of those same fallacies, plus a few more.
It would be funnier if it weren't so common. Everything is a cudgel to people like him. Fallacies exist, not to introspect on one's own arguments, but to assail arguments you disagree with. Data exists only so long as it supports your conclusion and any data to the contrary or supports other conclusions is wrong. Never defend, always attack.
Yep, another day, another random dude who considers himself an expert on what freedom is, what tech platforms are, and the dangers of private businesses refusing to amplify his friends. Even a plumber will refuse to route your sewage back to the public water main...
There was another submission a few minutes ago in the same light. It linked to an article about how Big Tech companies are censoring doctors and decide what is accurate medical information. It was removed.
I emailed dang and he kindly unflagged it on request, but it got flagged again soon after. Maybe he can permanently unflag it?
He also said:
'I've unkilled it so you can respond now. It's not really on topic for HN though. Too much of a provocative opinion piece and not enough significant new information (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). That guarantees a flamewar, which we're trying to avoid on HN.'
I thought it was worth posting because it was written back in August 2020, so was quite prescient.
Maybe it was flagged to death. Maybe it will be back. All I know is when these type of stories come out two things are assured. 1) Shenanigans do take place. 2) The powers that be definitely don’t like it if you notice and talk about these shenanigans.
Says who? We're seeing some negative side effects to certain kinds of content flowing through social networks, and perhaps more importantly, from the tech company's perspective, politicians do not actually want them to sit back and not touch this content. If they did that, they'd likely end up getting regulated.
Also, a truly unmoderated social network would quickly become a clearing house for garbage content nobody wants to look at.
Maybe that says a lot about the real value of digital social networks as we know them now.
Perhaps we should just have dumb pipes, and let people network over them without a MITM running psychology and behavioral economics experiments on people who merely want to connect with friends and family
The power to block content should be given to state governments, because they are supervised by the people. Big tech companies are not, and they probably use this power as a bargaining chip in under-the-table deals with politicians.
Exactly, tech giants are now a natural monopoly, which means we should force them to provide more utility than they want to give in exchange for tolerating their privilege. Either that or break them up.
I guess it depends on how you define 'political'. Almost all censorship is related to politics in one way or another. The cycle is X minority is mad and gets a news article written and then there's a new wave of disappearances. Really anything at all controversial. Often it's something not child friendly, which ignores the mass of the internet that is also not taken down for this. If you have kids do you really want their content moderation farmed out to some megacorp?
Since the beginning of forums there have been trolls and misinformation. There has always been the option to ignore, either by not replying or by blocking an offender from view. However, my experience has been that 1) people are almost universally incapable of ignoring trolls, and 2) it is somehow almost impossible for people to stomach banning a troll knowing the troll is still posting.
The closest system that seems to work is for people to downvote whatever it is they don't like and for that to cause posts to have lower priority.
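The downvote-driven priority described above can be sketched in a few lines — this is a minimal illustration, not any real site's ranking algorithm, and all names here are made up:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        # Net score: community downvotes push a post down the feed
        return self.upvotes - self.downvotes

def rank(posts):
    # Highest net score first; trolls sink in the feed rather than being banned
    return sorted(posts, key=lambda p: p.score, reverse=True)

feed = rank([
    Post("thoughtful reply", upvotes=10, downvotes=1),
    Post("obvious troll bait", upvotes=2, downvotes=30),
    Post("mild hot take", upvotes=5, downvotes=4),
])
```

The point of the design is that nobody has to make a ban decision: the troll remains free to post, and the community's aggregate reaction simply determines visibility.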
sometimes I wonder how people and politicians would react to a user-friendly decentralized and distributed social network with strong crypto in the right places and no hierarchy.
if such a thing were ever made, it would be very difficult to monitor or moderate.
That's what Mastodon is. It looks good and is about as easy to sign up for as it gets with a federated network. I don't think there's much demand for it. A lot of people seem to overestimate how relevant decentralization is to the vast majority of users.
The inherent complications or disadvantages of decentralization aren't worth it for most people.
It's easy to sign up, but then you don't know what to do because you have no way of finding people you know on the fediverse. Any social media platform that has had any semblance of success had at least one of two ways to bootstrap your social graph: either by importing your contact list from somewhere else, or through global search with enough parameters to find someone by name and city. Mastodon has neither.
This is one of my favorite thought experiments, except in my version it is guaranteed that nobody can moderate anything, ever. My best guess is: There is a threshold for how much of a problem something can become before congress makes a law prohibiting "access" to this network or running the client, or any analogue thereof. Then there's another threshold at which the government ratchets up resources for real enforcement.
In short, the conclusion I always come to is that no matter what the math and the game theory say, if the legislative, executive, and judicial all agree that it's a big enough problem, it's dead. There's nothing technologists can do about this. At least, this is the conclusion I always run into whenever I've tried to brainstorm technical solutions over the years.
It'll immediately fill up with pirated content, porn, porn of dubious legality, spam, and harassment, ensuring that all the normies would stay on Facebook.
It'll be a social network for people who were banned from all the other social networks, which is not a great town to live in.
it could, but the internet in general is like this and everyone still uses it. imagine if user/content discoverability is impossible without a fingerprint or token.
It also sounds difficult to curate (just another kind of moderation), and much of the success of the current social media heavyweights is due to their curation.
It's a fallacy. A good social media platform doesn't need curation. It doesn't need any recommendation system whatsoever. Every single thing you see in your feed is the result of your conscious decision to follow someone. It's just you and your friends, and exactly zero content from people you don't know.
Current social media doesn't respect you because being respectful doesn't drive metrics.
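The follow-only feed described above needs no recommender at all — just a reverse-chronological merge of the accounts a user explicitly follows. A sketch with invented data, to show how little machinery is required:

```python
# Hypothetical in-memory store: every post on the platform, keyed by author
posts_by_author = {
    "alice": [("2021-07-01", "hello"), ("2021-07-03", "again")],
    "bob": [("2021-07-02", "hi")],
    "spam_network": [("2021-07-04", "ENGAGE NOW")],
}

def feed(following):
    # Only posts from people the user consciously chose to follow,
    # newest first -- zero recommended content from strangers.
    items = [
        (timestamp, author, text)
        for author in following
        for timestamp, text in posts_by_author.get(author, [])
    ]
    return sorted(items, reverse=True)

my_feed = feed({"alice", "bob"})
```

Note that "spam_network" never appears in the result, no matter how engaging its posts are: with no recommendation layer, there is simply no path from unfollowed accounts into the feed.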
That started when politicians shifted their responsibilities of governance and policy making to private entities to cover for their inefficiencies and their vote bank politics.
These guys are parody-proof. "In my blog which is indexed by Google and Microsoft I reiterate the arguments from my book, available on Amazon, about how big tech is deplatforming me."
It's honestly deeply comical. It's a classic example of wanting to have both free speech and freedom from consequences.
It seems to me the religion of capitalism has a solve, here, and it's called competition.
If they really believe in freedom, they should support the freedom of Twitter to kick them off the platform, and these folks have the freedom to find another platform elsewhere.
But, of course, freedom isn't what these people are after. They just want to manufacture a different kind of consent.
Odd critique coming from the head of a DNS company, that could easily offer a competitive service with MailChimp.
The obsession with class (in the title, and content) and misunderstood genius reads more like within-class squabble than an argument for a different order.
I did some freelance work helping someone worried about Google censorship move their services to more independent alternatives, including EasyDNS. This post makes a lot more sense to me as signaling, in that respect.
> Odd critique coming from the head of a DNS company, that could easily offer a competitive service with MailChimp.
I think you missed the author's point. His point is not that someone needs to start a competing service; his point is that the feeling that tech companies are patricians rather than plumbers is widespread and commonplace.
An attack on the title: what the author does not realize is that plumbing is not limited to the person who comes to your house to fix pipes. The majority of plumbing runs from a water source to a treatment facility to your house, then back to a treatment facility, and finally back into nature. And that major part is actually mostly managed by "patricians".
Not that I support tech giants' invasion into private lives. But when thought through, the analogy works against the author's favor.
> You’re using their mailservers, and in their mind that’s what gives them the right and the moral authority to monitor the content of your communications to your own audience.
Yes, how strange that somebody would believe they should have a say in how you use their services.
> We see Fauci removing his mask the moment the cameras stop rolling. We see AOC sitting amongst a fairly close knit crowd put one on for the express purpose of a photo op and then take it off again. But if you send an email to your own subscribers via Mailchimp wondering out loud if masks are nothing more than performative theatre, you’ll get shut down.
no, when you nitpick an _extremely complicated scientific matter_ to where you are actively going against CDC guidelines for the sake of pushing your own argument, you get shut down. imagine that. it's almost as if Mailchimp would rather you believe scientists than some tech bro with a podcast.
I'd like to be able to export data easily, run my own programs easily, write software that doesn't get taxed or asked to be resubmitted (since when did Microsoft charge for Windows app development?)
Every company purports to be for you, but it's a big toothy grinned lie. Some of them sell you, others sell you out.
It also sucks that these companies surface the worst of humanity to drive engagement. Are people really this terrible?
When companies market themselves as the "medium for public discourse", it is easy to cross lines between staying "neutral" or influence it the way they see fit. "Social Media" has served no useful purpose beyond generating these kind of moralistic arguments. We are better off without them.
Isn’t this just a consequence of American capitalism? Incentivise companies to optimize for profit, and they’ll do anything to rake in more ad money including violating people’s privacy and curating user-submitted content. We’ll have to fundamentally rethink entire economies before we could see large companies (if they would be, at all, allowed to exist) acting ethically.
I don't know why you're downvoted, because I think this theory on the cause is among the more interesting and nuanced comments. I think you've got an interesting observation there worth considering more deeply. Perhaps they disagree with your Marxist conclusion:
> We’ll have to fundamentally rethink entire economies before we could see large companies (if they would be, at all, allowed to exist) acting ethically.
Then I got to the part where he suddenly rants that content moderators are "probably...teams of purple-haired Millennials with nose hoops and personal pronoun mood-rings."
And calls out the AP for marking a medical claim "false" just because no one has been able to produce any evidence for it.
And says that calling anti-vaxxers nasty names is literally just as bad as racism.
I'm looking at the comment I just typed and it sounds like I'm exaggerating to make him look bad. But I'm not. That's what he's actually claiming.
I think the title is distasteful, but the title tag of the article matches the hn title and is also the first title displayed above the fold (on a mobile browser, at least).
James Charles, a very popular individual on Twitter, has been cancelled about 5 times. Guess what? They're still there. That's because "Cancel Culture" is largely made up by talking heads. Very few people actually get "cancelled", and when they do, it's not because of posts on Twitter by themselves.
Youtube houses plenty of conspirators, right wing news, gun channels, and they're monetized. Not only that, but youtube comment sections are notorious for being inline with "conservative" views.
Some gun channels have been demonetized. Not because Youtube doesn't want the money, but because Google doesn't control what advertisers do. In one of the demonetization dramas it was revealed that Youtube as a company would stop generating revenue if a big advertiser like Pepsi or Disney stopped doing business with Youtube.
If you want to see "cancelling" how about the crowds of hate that target minority groups with significant harassment? Why is online canceling from Twitter talked about, but not actual real life militia groups and violent groups that gather to kill transgender individuals? One of those actually harms people.
Do you see these "cancellers" showing up to city council meetings, forming actual physical protests? Nope. Everyone is actually quite free to ignore it. The only time things change is when those in positions of power decide to change it. They make those changes not on the outrage on Twitter, but on their own personal beliefs on whom they want to do business with.
On top of this, "cancel culture" can be translated to "protest culture". Somehow criticizing and stopping business with people is "cancelling" now, and not "protest"?
>No tech company should be enforcing their Terms of Service based on what they think their users have done or might do off of their own platforms. Yet Twitter, Facebook, Patreon and who knows who else do that.
Reworded: No individual should be enforcing their morals based on what they think someone has done or might do when not next to them.
This "I should be able to say anything I want on someone else's platform" attitude is uniquely American privilege talking. If Youtube, or Twitter, or Facebook allowed whatever people want, guess what? They'd be regulated tomorrow to not allow it. Words actually do have harm, and words do not inherently have 1st Amendment protections.
There are already plenty of misinformed people who want a "repeal of 230". These companies have both a moral and financial responsibility to enforce their terms of service.
>A mailer company like Mailchimp has no business even parsing the content of their paying clients, let alone summarily judging whether it is misinformation or not. Mail providers should care about two things and two things only:
Yes, they do. They have to parse them for illegal content, and then they have to parse for content that might have their services used in ways that harm others. Disinformation can easily be harmful, and a company like Mailchimp does not want to be implicated in it regardless of 230 protections or not.
The part where he says that being snarky to vaccine deniers is literally just as bad as racism.
> “Covidiots”, “Deniers” these are not rational counter-arguments, they’re slurs. Anybody employing them is not engaging in discourse but rather bigotry and prejudice. This is as inexcusable as racism. Over the past few years many have been challenged to examine their own biases and privilege, in certain contexts for perfectly valid reasons. Anybody engaging in this type of othering toward skeptics and contrarians lacks self-awareness and empathy to the same degree as a racist.
He uses the words "slur", "bigotry and prejudice", "as inexcusable as racism", and "lacks self-awareness and empathy to the same degree as a racist".
Does that seem any more reasonable and rational than "as bad as racism"?
I could even get behind the basic idea that ad hominem attacks are always bad, except that he's already launched an ad hominem attack against what he assumes are "purple-haired Millennials with nose hoops and personal pronoun mood-rings" in the same article! I don't see how to read this as anything but "snarky insults are reprehensible when they happen to me and totally justified when I throw them at someone else."
If the guy you responded to is not the author of that piece, he's connected to the site in some way.
I do not believe his question is in good faith, but is an attempt to deflect any criticism by instead forcing his critic to defend and explain in detail any such criticism before he even attempts to defend his half-baked ideas.
Yeah, there's definitely someone mass-downvoting anyone who criticizes the piece in any way. Dunno if it's that guy or not. But it doesn't hurt to have clear, concise answers to deflecting questions in the comments for skim-readers to see.
Eh. Maybe not. As of late, in the past year or so, I forget the actual timeline, I've noticed a certain direction the overall tone of the comments and voting have taken.
It's been a little frustrating (I guess that's the word closest to what I'm feeling) as I've typically viewed HN as a place that went more for technology and science rather than... other things.
> As of late, in the past year or so, I forget the actual timeline, I've noticed a certain direction the overall tone of the comments and voting have taken.
I've noticed the same thing. It's unsettling.
I haven't always agreed with the HN consensus on things, but until quite recently that consensus seemed to be honest and well-intentioned.
I remain confused as to why anyone thinks they have unlimited first amendment rights on social media. It sounds a lot like thinking one has first amendment rights at a private employer. Let's clear this up right now. TLDR: you don't unless you explicitly do.
Edit: keep bringing those freethinker(tm) downvotes. It won't change the law one bit, but I guess it gives good feels.
I mean you could also try starting your own social media network that absolutely guarantees first amendment rights, but so far only hilarity has ensued from that premise along the lines of tragedy plus time equals comedy. You could be the first one to prove them all wrong. What's stopping you?
Or if that's too hard, why don't you all apply to work at Facebook and Twitter? Then you can work your way up the ladder until you can depose Jack Dorsey and Mark Zuckerberg. And on the first day of your reign of your new empire you can proclaim to all shareholders and employees that your social media site is entirely unregulated. Otherwise WTH should these companies care about you? You won't build anything and you won't do the work to change the path of something already built.
And yet still not a single counter-argument explaining why you guys don't build your own social media site instead of whining about the old and stupid leading brands. There's a lesson in the irony.
You're making a lot of assumptions. One of the saddest and most frustrating aspects of big tech's narrowing of permissible discourse is the association of defenders of discourse with red ties, right wing ideas and all of that.
An Intro to Constitutional Law course or book would demonstrate how false that is: up until only a decade or less ago, conservatives were the ones assaulting free speech, and Progressives were vigorously defending freedom of speech, including hate speech. They were preventing the "consequences" that people like yourself like to remind us we have no protection from. I can suggest materials to read, but what has degenerated is this thread, so it's a pointless effort unless someone wants it.
The philosophical arguments made by the Progressives in those days in defense of speech resonate timelessly, from the 1600s to the 2020s. The arguments resonate also beyond narrow ideas of what constitutes government restrictions.
Today the question is around whether the FB's and Twitter's and YouTube's should be regulated as utilities and in the public interest, rather than an outmoded ideal around private ownership rights and the 1st Amendment not applying. From misinformation to viewpoint discrimination, it's the real question of our present time.
Cool, how about some specific references to settled cases that are relevant to the situation rather than vague references to law books?
Back in the day, I remember how Tipper Gore was all about warning us of the dangers of explicit lyrics and videos and she wasn't a conservative or have I lost track of who's on which team?
Disliking what became of one political party because of the embrace of a demagogue does not necessarily make one a strong adherent of the other leading brand or is that too nuanced for you?
I support the rights of social media sites to moderate content as they see fit and I support the rights of their customers to take their business elsewhere if they don't like how it's run. I won't be going into Hobby Lobby anytime soon nor will I be going to Carl's Jr, but I'm also not going to tell them how to run their businesses. I think that's where we really differ here. Facebook and Twitter will ultimately fall to superior successors that catch the zeitgeist just like they did. But not on any of our schedules.
> Back in the day, I remember how Tipper Gore was all about warning us of the dangers of explicit lyrics and videos and she wasn't a conservative
Al Gore was long known as a conservative Democrat, and his taking up his wife's singular political issue was in line with that; in fact the WSJ complained about his supposed reinvention as a liberal in 2000 during the campaign. (Weirdly, complaining that he had done so in aligning with Clinton, who also was, even as President, a conservative Democrat; this was a partisan campaign hitpiece more than anything, Gore’s actual reinvention as anything but a conservative Democrat didn't really happen until after he was out of electoral politics, when he became a single-issue campaigner on climate change.)
I have no problem with social media companies kicking off anyone they please. I do have a problem when the lines between government and private get blurred, and a government can indirectly or directly influence the censorship decisions of private companies through threat of regulation, a revolving door, or by explicitly flagging certain posts. That's a grey area 1A violation in my mind.
Tech gets all sorts of tax breaks to innovate. That's okay, but if they proactively kick people off the platform so as to head off government regulation, that's not okay?
Or wait, let me guess, all tax breaks are good! How'd I do?
No, it's definitely not okay for a government to threaten a private company (through which most public discourse flows) into censoring its political opponents. It is no different in practice to a 1A violation.
The extent to which this happens at all can be debated. But the principle of the matter should be clear, if it is shown to be happening, it is certainly morally wrong, it's anti-democracy and (although it's difficult to prove in court) perhaps unconstitutional.
Here's an example of the grey area I'm talking about in my previous post. The executive branch is flagging posts for "COVID misinformation" to Facebook. We don't know who is doing the flagging or what criteria they're employing, and Facebook has an incentive to comply with the take-down requests so as to avoid potentially costly regulation. The sum total of this is the government censoring speech (well, with one layer of indirection: a compliant and scared Facebook) in a way that's completely unaccountable and not auditable.
That's funny, I didn't hear any of this righteous self indignation during the last administration over its bogus antitrust threats. Were those okay with you or did it take you this long to work up a really good lather?
This is presumptive whataboutism with fairly incendiary language. You don't know my views on this question and have lumped me in with an amorphous blob you've labelled "conservative". You might be surprised to learn that I am not against social media censoring hate speech, for example.
For the record, what Trump was doing was clearly authoritarian and had the intention of controlling moderation decisions on these platforms. It was anti-democratic and awful due to its underlying intentions. Ron DeSantis is trying a similar thing to Ben and Jerry's now.
However, I was and remain less concerned by this than I am with the influence of the Democratic Party on tech firms, mainly because the Republicans have so little influence despite the threats. You can see the power asymmetry (and revolving door) on display through a single example, the censorship of the Hunter Biden story on the eve of the election.
They didn't censor it; nothing really significant has come of it since, other than that he was exploiting existing law for his own personal gain. So they decided to mostly ignore Giuliani's fantastical tales of the stolen laptop. Seems like a good call to me. Imagine a congress willing to work together to change the laws to make what he did illegal in the future, though. That's how things used to mostly work IMO.
We seem to be in a quarterly pandemic wave cycle right now and it's overwhelming our healthcare system and it's killing Americans and ultimately depressing the economy. Given all the changes and adaptations we made as a society post 9/11, attempting to control the flow of covid disinfo seems minimal to me. And it's easy to criticize our leaders. But just like a true first amendment zealot should build their own social media site centered around the first amendment to find out why that's impractical, try leading this country when it's on a course to be an indefinite pandemic purgatory.
But you raise a valid point. I'm very concerned with YouTube's slut shaming through demonetization of Naomi Wu's YouTube channel. It's not a good look for them but then very little is these days.
They did censor the original New York Post story. The media chose to ignore it, but the social media companies were censoring any links to the story, actively encouraged by Democratic Party operatives such as Schiff who were falsely labelling it as Russian disinformation and encouraging a blackout. This has potentially election moving consequences, so coercive Democratic influence over social media is absolutely a real threat to democracy.
Regarding your comment about a 1A-friendly platform, I don't advocate for a censorship-less social media platform. It's stupid on its face. But moderation should either be voluntary with no coercive influence of government (whether that influence is achieved by carrot or stick), or it should be defined by the legislative body (hate speech laws) instead of implied threat of regulation or via the executive branch.
Regarding the flagging of COVID misinformation, that's fairly lower on my list of worries, and it's not that I don't favor removing that toxic information, it's that the executive branch is the one coordinating it. The encouragement of political censorship (the Hunter Biden case above) is much higher on my list of concerns and much more directly anti-democratic.
And then they reversed their censorship. It's hard to make perfect decisions in real time. The information got out there, it was bogus, and they faced criticism for doing it in the first place.
So they chose to eliminate its effect on the election based (I would assume) on the bogus stuff about Hillary's email server four years earlier. And if this is the end of October surprises I can only say good riddance, but it probably won't be.
Try walking a mile in their shoes before holding them accountable to infallible behavior is my only suggestion here.
We're going to have to disagree about a first amendment friendly platform. I'm a huge fan of letting the marketplace of ideas work these things out lawsuits and all. I 100% expect a dumpster fire every single time, but then most tech companies end up dumpster fires anyway so why not? There's probably some epsilon incremental revenue pulling this off and capturing the single digit percentage audience that demands these things, no?
If Twitter's moderation team made that decision because they genuinely thought it was Russia disinfo, then I have no problem with it. But the waters are so muddied when we have a chorus of policymakers applying pressure and encouraging that decision, both through the rhetoric in that moment and leading up to it. It's that kind of behavior/rhetoric (similar to Trump's anti-trust threats) which is the threat to democracy, not the decision itself in the abstract which may have been perfectly defensible.
I can only conclude that Schiff et al were being intentionally dishonest about the Russian disinfo line in an attempt to apply the maximum amount of influence at the most critical time before the election. It was a cynical attempt by elected officials to sway an election through censorship. They knew it wasn't Russian disinfo at the time. Joe Biden's own comments revealed that knowledge about a day into the saga.
> Edit: keep bringing those freethinker(tm) downvotes
Sauce. Goose.
Edit: Maybe you should start your own tech news site. Or if that's too hard, why don't you apply to work at YC and work your way up to a position where you control the moderation for this site?
You're assuming I'm not okay with the content moderation of Hacker News. You should never assume.
If anything you should jump with joy at the rights of people who disagree with this article to flag it and at your right to register discontent without any substantive argument as to why you are registering your discontent with but a simple down vote.
That's freedom of speech in action! And while my tone here is sarcastic, I believe sincerely that this moderation system is currently superior to any of the ham fisted bad AI deployed on either of the two leading social media sites.
Mailchimp is "big tech" now? Can't these folks just throw a RSS feed together (hosted on the cheapest static webhost around) and tell people to subscribe to it? This is a big fat nothingburger if I ever saw one.
Because to a first approximation everyone understands, and can get, email, while maybe 5% of the population, if that, understand RSS feeds. My 80 year old aunt can read email. While I'm sure I could teach her to use RSS feeds, it would not be a task I would look forward to. And she has me. Most people don't have me, or anyone like me (or, likely, you, or the majority of other users of this site).
What you're saying is roughly equivalent to "So what if they can't use the telephone or television? Can't these folks just use Morse code?"
Or they could use email but via someone other than Mailchimp, such as ActiveCampaign, MailerLite, Sendinblue, GetResponse, Moosend, EngageBay, Omnisend, Zoho, Mailjet, ConvertKit, Benchmark, AWeber, HubSpot, Constant Contact, DotDigital, Campaign Monitor, Sender, SendPulse, Klaviyo, Jilt, Autopilot, Drip, Keap, and probably many others.
Email is not like, say, Facebook where to reach someone on Facebook you need to send via Facebook. How you send email is largely irrelevant as long as you pick a provider that doesn't send so much spam that their outgoing mail gets blocked.
> Because to a first approximation everyone understands, and can get, email, while maybe 5% of the population, if that, understand RSS feeds.
At some point in history, 5% or less of the population understood email. RSS feeds are not inherently more complex than email, in fact they're quite a bit simpler. (And even if you don't grok RSS feeds, you'll probably grok a weblog; RSS is only relevant here because it gives the exact same affordances as something like a Mailchimp newsletter, even though it works differently under the hood.)
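To make the "simpler than email" claim concrete: an RSS feed is just a small XML file, which a newsletter author could generate with nothing but the standard library. A sketch with made-up feed details:

```python
import xml.etree.ElementTree as ET

def make_feed(title, link, posts):
    # Build a minimal RSS 2.0 document: one <channel> holding <item> entries
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for post_title, post_url in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post_title
        ET.SubElement(item, "link").text = post_url
    return ET.tostring(rss, encoding="unicode")

xml = make_feed(
    "My Newsletter",
    "https://example.com",
    [("First post", "https://example.com/1")],
)
```

Serve the resulting file from any static host and any feed reader can subscribe — no intermediary parses, approves, or delivers the content.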
This is rehashing the old "common carrier" debate again, isn't it? Which of these are the common carriers: Physical wire/connectivity provider? ISP? Tier1 ISPs? Social media sites connecting messages with other folks?
Where is the line drawn, and what responsibility does each level have?
You can't use the common carrier argument to force someone to be a common carrier. If you pick and choose your clients based on policy you are by definition not a common carrier. Mailchimp is a private carrier. They choose what to carry.
Agreed -- but a lot of social media sites say "we just connect people" and then turn around and are forced to play moderator so things don't burn out of control...
I don't know why social media companies haven't tried letting users opt out of different categories of censorship.
Presumably every user appreciates the anti-spam filtering these sites do, and anti-scam/phishing, but there should be a separate checkbox (ticked by default) for "Remove misinformation about covid/vaccines", and one for "Remove misinformation claiming the election was stolen", etc.
Then the social media companies can say to governments "If you would like all users in your jurisdiction to be unable to opt out from these categories, please pass a law requiring us to hardcode these settings for them, and we'll update our UIs to tell users why they can't opt out."
Better yet, the social media companies should say to governments "Please provide your own list of which posts are misinformation, and we'll make sure that users see a blank page saying 'Your government has deemed this post to be misinformation' instead of the content they want".
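The opt-out scheme described above is easy to sketch in code. Everything here is hypothetical, category names, the country code, and the override rule are all invented for illustration, but it shows the shape of the idea: user preferences layered over defaults, with legally mandated categories re-applied last.

```python
# Hypothetical per-user moderation settings with jurisdiction overrides.

# Categories every user can toggle; all ticked (filtered) by default.
DEFAULT_FILTERS = {
    "spam": True,
    "scam_phishing": True,
    "covid_misinfo": True,
    "election_misinfo": True,
}

# Categories a government has required to stay on in its jurisdiction.
# "EX" is a made-up country code.
JURISDICTION_MANDATES = {
    "EX": {"covid_misinfo"},
}

def effective_filters(user_prefs, jurisdiction):
    """Merge user opt-outs over the defaults, then re-apply legal mandates."""
    filters = {**DEFAULT_FILTERS, **user_prefs}
    for category in JURISDICTION_MANDATES.get(jurisdiction, set()):
        filters[category] = True  # user opt-out overridden by law
    return filters

# A user in "EX" opts out of both misinformation filters;
# the mandated category stays on regardless.
prefs = {"covid_misinfo": False, "election_misinfo": False}
print(effective_filters(prefs, "EX"))
```

The design choice worth noticing is that the mandate layer is explicit and per-jurisdiction, which is exactly what would let the platform tell users *why* a particular checkbox is greyed out.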
You could do that, but the end result would be the same as on the social media sites that don't do moderation.
They turn to shit, extremely fast.
It isn't that there's been a lack of social platforms that tried to remove moderation "because of free speech", but they don't last long before they collapse under their own craziness, and the craziness of their users.
I don't think such a site would collapse. Instead, it would fragment into different bubbles, some of which would be toxic to mainstream users, but users wouldn't see any of that toxicity without specifically opting in to it.
I imagine such a platform would turn out like Reddit, only better, since people who opt out of the consensus reality would probably have their messages only visible to like-minded users, unlike on Reddit where each user is trusted by default in each subreddit.
A horrible person got deplatformed and they're all angry with the world.
Companies get to decide how to run their platforms; if you don't like it, send your money to other horrible companies, or try and convince lawmakers that your horrible case should be protected.
Ultimately though, stopping being horrible is probably the easiest option but these people never seem to want to take it?
HN doesn't allow downvotes so I upvoted literally everything else on the front page to compensate. Thx
Should big internet companies be plumbing and not exert control over content on their systems? As much as some of the things Google, for example, does in exercising its power annoy me, it does have the right to control what is on there, to some extent. As I interpret it, the OP is saying it is not right to censor, but I am not sure how this applies to a private company and content on its own system.
I agree a big company shouldn't have _complete_ power to control content on their site for reasons the OP says because at some point this becomes a danger to society. At the same time it is a danger to society to freely let users promote whatever content they want, for the same reasons.
For all the examples of people who have been persecuted (or whatever word would fit better) for saying the "truth", like scientists who said the Earth was not at the center of the universe, there are countless others who, for the better, were quieted from issuing harmful information.
The OP says people should be taught a lesson in critical thinking, but I don't think that is the problem. I think most people are pretty logical; they are just dealing with misinformation, no matter what "side" they are on.
So this is a real problem. It is good that the OP raises these questions, but I don't think it offers the solution.
I think this article is spot-on. In my opinion, there is very little (but not zero) actual misinformation on the Internet. Most of what we call “misinformation” is an influence campaign by one side, which is thoroughly disliked by the opposing side. When Big Tech labels something “misinformation” they are essentially taking one side or the other on the issue at hand.
I watched my mom’s* Facebook feed for a bit on Jan 6. In a span of 30 minutes she was bombarded by claims that the people invading the Capitol in order to overturn the election were actually antifa, complete with a veneer of evidence**, a passage misattributed to Dave Ramsey***, along with all the usual claims about a “stolen” election.
The misinformation may be coming out of an organized campaign or it may be arising organically, but it definitely exists in droves and is spread by social media. It exists on both sides, but it is not balanced.
* thankfully, she’s pretty discerning and doesn’t believe any of this, but believes it is important not to let this sort of behavior cause her to “unfriend” people she’s known for decades.
** just to be clear, they weren’t, and the evidence was that pictures of insurrectionists appeared on an antifa web site. That was true, but that’s because they appeared on the part of the website dedicated to documenting people as fascists
*** I found the original source in about a minute, and a public rebuttal from Dave Ramsey in two
This comment shows the power of disinformation — pure lies like “the election was stolen” or “vaccines cause autism” if repeated enough become “differences of opinion” with two different “sides.”
> there is very little (but not zero) actual misinformation on the Internet.
It’s not the amount to take issue with but the spread of its influence. Anti-vaccine ideology was given a platform by big tech and now we have an endemic disease because folks believe that nonsense and let it hold sway over their medical decisions.
Edit: for folks voting this down I’d be interested in a rebuttal or at least an explanation of how this doesn’t add to the conversation. I try to be a good citizen here and thought I was engaging in good faith.
> Edit: for folks voting this down I’d be interested in a rebuttal or at least an explanation of how this doesn’t add to the conversation. I try to be a good citizen here and thought I was engaging in good faith
"Misinformation", someone might say, and we wouldn't have to give you any justification. No freedom from consequences, they say.
> These companies think they’re the patricians of internet discourse. The reality is they’re the plumbing.
That was true until they started promoting and demoting content in order to drive engagement and juice their ad revenues.
Big tech is not plumbing. The minute Facebook invented the news feed they became editors and content promoters.
If they really want to just be plumbing, they can shut down their algorithms any time.
As an aside, I picked on that part of the post because the rest of it quickly pivots away from tech into a self-promoting, semi-unhinged rant about covid that reads like the drunken ramblings of a Q-adjacent anarcho-libertarian, complete with the big hits: references to the "Davos elite", hydroxychloroquine, ivermectin, Evil Dr. Fauci, etc, etc.
Which reminds me, I should probably flag this post given its relevance to HN is glancing at best...
> The minute Facebook invented the news feed they became editors and content promoters.
Ok. If they are editors and publishers then perhaps the law should change so that they are treated like the other content editors that already exist.
We could do this by treating them legally equal to things like the New York Times, with all the legal responsibilities, freedoms, and liabilities that such a thing entails.
And for the companies that don't want to be treated like editors, we could create new provisions in the law that give them protections if they act the way the phone company acts.
If an organisation provides a service transparently, such as Akamai, Level3, DNS root servers etc. then they're plumbing and they should shift packets unless you or your systems are trying to break their plumbing.
If they run a service that you patronise as an end user then they can do what they want because if you don't like their terms of service you can go elsewhere.
Don't like f*c*book's terms or behaviour? Leave. YouTube keeps mistreating you? Leave. AWS repeatedly asks you to stop your users breaking the terms you agreed to? Leave.
I don't care if you can't get the clout or customers you want on Vimeo or Mastodon. I don't care if you think there is no viable alternative. You obey the contract you agreed to or you get kicked off. Organisations are not obliged to agree with you or serve you.
Lumping Internet-based organisations into "Big Tech" is a misnomer and I find it is frequently a sign that someone's about to go on a rant about something they either don't understand or are purposefully being obstinate about.
By about the second sentence I was able to deduce pretty much everything this guy was going to complain about and which side he was on for every issue.
I do miss the days before "big tech" and social media when every asshole with half an opinion wasn't given a bullhorn to shout it from the rooftops as if they had something worthwhile to say.
When you are complaining about the Associated Press, maybe take a step back and consider whether or not you're just wrong.
Because the Associated Press is one of the most milquetoast, boring, middle-of-the-road, non-opinion-giving, dry ass, plain organizations out there. They really don't give a shit. They report on things so other newspapers can run the story.
And it seems like all he wants to do is commit the fallacy fallacy. He thinks by saying something is a fallacy, that makes them wrong. It doesn't.
Not to mention, the authorities people are "appealing" to are experts in this field. Not to mention, we can turn that "appeal to authority" around on Ivermectin as well. He's just pimping it because "a doctor said so". And when you point that out, they'll point out why we should blah blah blah. But that's the rub. It's something they want you to do for them, but won't do for you. No one is listening to these people simply because they're in charge, but because they're proven experts in their field.
And so on, with the rest. When he's not just completely misrepresenting a situation to claim something is fallacious when it's not.
Why?
The media companies have always done so.
The press is and always was the "patricians" of printing-press discourse. You can't say whatever you like in a newspaper.
The TV companies and cable companies are and always were the "patricians" of TV programming. You can't say whatever you like on a TV program.
The publishing companies are and always were the "patricians" of books. You can't say whatever you like in a book and get them to print it, promote it, and distribute it if they disagree.
Radio companies are and always were the "patricians" of the radio waves; you need a license to operate a radio station, and they decide what gets on air.
Why the double standard?
Now, you can say whatever you like on the internet; you just are not entitled to force other people and companies to give you a platform to do so. Facebook, YouTube, this site, etc., don't owe you sh#t. If you want to publish something they don't like, host it and promote it yourself, just as if you wanted to put something in the news that they don't like, or on TV, or on the radio, or in a book.
The insane and childish authoritarianism of the author of this article, who believes that Mailchimp doesn't get to tell him what to do with their service but that he instead tells them how to run their private for-profit company, is both comical and sad. If you don't like the terms of service, stop using the service. That's how that works. If you think they are infringing on your rights, then take them to court; don't write clickbait about it.