Hacker News | jonstokes's comments

I don't know LeCun personally, but there's a lot of backstory here that this polemical clickbait is leaving out.

- LeCun has a history of getting mobbed by "AI ethics" types on Twitter, and in the past he was very deferential to these folks, and even left Twitter for a while. I wrote about some of that here: https://www.jonstokes.com/p/googles-colosseum

- The MIT Tech Review, which is the author's main source here apart from Twitter, is a techlash rag, and they went through a long phase where they only published anti-AI stuff from the "AI ethics" people. Most of those writers I used to follow there on this topic have since moved on to other pubs, and the EIC responsible for this mess has moved on to run WIRED. But it seems they're still publishing the same kind of stuff even with new staff and management. They have exactly one editorial line on AI in general and LeCun in particular, and that is "lol AI so racist and overhyped!" It's boring and predictable.

- LeCun has a longstanding beef with Marcus, and the two treat each other pretty poorly in public. Marcus seems to have a personal axe to grind with LeCun. Given that Marcus has been leading the mob on this, it's not shocking that LeCun got crappy with him.

- Emily Bender, Grady Booch, and the other folks cited in the MIT Tech Review piece all, to a person, have exactly one line on AI, everywhere at all times and in all circumstances, and it's the same one I mentioned above. You could code a bot with a lookup table to write their tweets about literally anything AI-related.

- Yeah, LeCun is a prickly nerd who gets his back up when certain people with a history of attacking him come after him yet again. He should probably stay chill.

- "AI so overhyped" is a pose, not an argument, an investment thesis, or a career plan. But hey, you do you.

Anyway, I hate to be defending anything Meta-related, but this article is slanted trash, its sources are haters who have only one, incredibly repetitive thing to say about AI, and the author is a hater.


Thanks for posting this, it gives good added context that helped me change my original opinion.

I was quite familiar with LeCun's dust-up with Timnit Gebru on Twitter, and I had a lot of sympathy for him in that situation.

I think it's quite sad that so much bad-faith argument has infiltrated academia to the extent that it has. Some may say it's always been that way, but it feels worse to me now. One of my "heroes" of unbiased rationality, Zeynep Tufekci, wrote a really good Twitter thread recently about how some of these flat out liars in academia manage to continue their lies unscathed with little pushback: https://twitter.com/zeynep/status/1592210111359250432


Yes, "AI has too much control over our lives, and it's biased" is a claim made over and over, but it's also, you know, true: biased AI is ruining lives right now, and we should keep repeating it until that stops.

Every week there's a random post here about "some AI detection system closed my Gmail account / took down my Android app / froze my Square funds", and Hacker News is seen as the semi-official tech support line for companies who have turned to biased AI to cut costs.

A lot of what AI ethicists are saying is that "if we hook these AI systems up to safety-critical systems, anyone who doesn't fit the model is going to be labeled an outsider", and I don't know why we shouldn't repeat it as many times as it takes to get people to listen... accounts are still being banned, lives are still being ruined.

To counter this with "think about what progress AI has been making!" is missing the point. "Sure, it Markov-chain'd some random facts about space bears and cited random people with papers it made up who are now caught in the cross-fire of machine hallucination, but think about the progress! It could format its fiction to look like a TeX paper and add some random squiggles that look like math expressions!" is not the slam-dunk defense you think it is.


> Every week there's a random post here about "some AI detection system closed my Gmail account / took down my Android app / froze my Square funds", and Hacker News is seen as the semi-official tech support line for companies who have turned to biased AI to cut costs.

I would agree with this if I ever saw these self-appointed AI ethicists focus on these kinds of harms. But, at least in my experience, it's usually focused on the exact same set of concerns, which 90% of the time have "intersectionality" somewhere in the criticism.

Yes, I'm being a bit unfair and snarky, but I'd be more willing to pay attention to some of these criticisms if I felt they included more of the harms you bring up than just what I feel has become a constant bone to pick. I agree with the GP when he wrote "You could code a bot with a lookup table to write their tweets about literally anything AI-related."


> But, at least in my experience, it's usually focused on the exact same set of concerns, which 90% of the time have "intersectionality" somewhere in the criticism.

You have the choice to avoid Google accounts and limit the destruction a Google AI system can do to you.

You don't have a choice to not be born black, and not be put in jail for longer just because you are black.

Why don't you care that millions of people will be hurt by these things, and care more that an app developer gets locked out of the app store? Apple hasn't put anyone in jail.


Yes, I used a nerdy example because I figured it would appeal more closely to the computer dork crowd of Hacker News, hoping that, by metaphor and extrapolation, you could imagine all sorts of ways that AI biased against sex or race would be immensely damaging to the fabric of society as bias gets built into it. This is already happening, as biased AI is used to estimate how much jail time someone will get [0]. Or to push rents higher [1]. Or to allocate healthcare [2].

These AI ethicists are complaining about all of this, but of course they yell more loudly about sexism and racism, because, you know, those are fairly serious things that should be addressed first???

I don't think they need to be original, I think they need to bang their drum loudly. "Oh, that women's suffrage movement won't shut up about how they don't have a voice in policy that governs their life, can't they talk about something else for once" isn't an indictment of the people complaining, it's an indictment of the people not listening.

[0] https://www.propublica.org/article/machine-bias-risk-assessm...

[1] https://www.propublica.org/article/yieldstar-rent-increase-r...

[2] https://www.science.org/doi/10.1126/science.aax2342


> this article is slanted trash, its sources are haters who have only one, incredibly repetitive thing to say about AI, and the author is a hater.

It doesn't "source haters". It quotes MIT Tech Review and Gary Marcus precisely in order to provide context for the subject of the “article” (blog post): LeCun's bizarre "this is why we can't have nice things" statement. This petulant remark seeks to shut down negative feedback as a class, regardless of its merits. That's what the blog post is about. The quotes are there so that the quote from LeCun makes sense, not because they're legitimate criticism.

> Emily Bender, Grady Booch...

These people are not mentioned in the linked post, which tells me you're pattern matching on "techlash" and posting a bunch of only vaguely related context (and a medium self-link).

> He should probably should stay chill.

Exactly?! That's the point of the blog post. It's in the title of the article. It seems obviously true, and I'm not sure how any of what you say adds up to a robust conclusion that "the article is slanted trash" and "the author (Andrew Gelman?!?!) is a hater" other than you don't like some of the quotes.

> and in the past he was very deferential to these folks, and even left Twitter for a while

How many times has he quit Twitter now? Three, IIRC. Seems like he needs some coaching on following through with promises.


The article they quote from,

https://www.technologyreview.com/2022/11/18/1063487/meta-lar...

Twitter-quotes both of them and others


Yes I know. They are irrelevant to the point of Gelman’s blog post, which is why he doesn’t mention them. I assume they’re being brought up here out of some kind of guilt by association thing, or to make the self-link seem more on topic.


50% of TFA is quotations from the TR article


Yes. To provide the context necessary to understand the "this is why we can't have nice things" quote, you have to say what "this" is. That's how pronouns work.


Thanks for taking the time to share your thoughts in such clear and lengthy terms. This article seems more about drama than scientific merit.

> Second, what’s the endgame here? What’s LeCun’s ideal?

This section, I think, is a particularly good example: regardless of your take here, it's pretty ridiculous and uncharitable to interpret "criticism of science is immoral" from LeCun's quoted statements.


> "AI so overhyped" is a pose, not an argument, an investment thesis, or a career plan. But hey, you do you.

> Anyway, I hate to be defending anything Meta-related, but this article is slanted trash, its sources are haters who have only one, incredibly repetitive thing to say about AI, and the author is a hater.

I was hoping someone else would also find the actual context and background for what is clearly a poorly written hate piece.

I have very little interest in Meta, but the amount of FAANG hate that the media knowingly perpetuates, and the amount of criticism that anything launched receives, would absolutely wear someone down. LeCun is no saint, but the article is very unfairly written.


Pro-AI articles are all the same as well. "AI will be so useful!" It's boring and predictable.


Wrong. "Here is a totally mind-blowing new thing AI will let you do this week that humanity could not do last week" is an endless source of novelty and eyeballs. Source: am a publisher in this space and have seen the engagement numbers. You are the guy in the meme standing in the corner of the party, with the words "They don't know AI is just math" printed over your head.


Could you make your post more substantive by e.g. sharing rough engagement numbers? Otherwise as an outsider this just reads as pretty mean-spirited.

Sad because I am an Ars Technica fan, but can empathize with GP's point about AI coverage being repetitive and often over-hyping results.


Yeah and if I told everyone I could turn lead into gold I bet that would get a lot of clicks too.


That was already done in 1941.


I wish AI was overhyped, sadly it is not. If it was overhyped, we would have more time to solve the alignment problem.


"AI ethics"

Are these a new type of luddite? Why aren't there "computer ethicists" complaining about real issues with the use of technology in general? I'm being tracked at all times without my consent. These "AI ethicists" are happy to use platforms like Twitter that track you, and even reward you for giving them more info (e.g. phone number).


If I were a lot smarter and had a lot more time, I'd love to paraphrase the article using, instead of AI, a legacy technology that at the time struck many as useless, but after development and acceptance became indispensable. Maybe the internet itself?


[flagged]


Really? You can't see any basis to criticize a "scientific tool" released into the wild that spits out convincing falsehoods?


The person you’re responding to dodged (with a very fair argument), but I’ll bite the bullet here: not really, and I’m curious to hear what you think the damage could be.

I mean SOME criticism always has a basis, but that seemed to be a large part of the reason they published this technical demo: to get feedback and spark scientific discussion on the state of the art. They did publish it with prominent warnings to not trust the output as necessarily true, after all.

If the worry isn’t with primary users but with people using it to intentionally generate propaganda/falsehoods for others to consume… idk it seems like we’ve long passed that point with GPT-3.


So their goal was to gather feedback (read: criticism), but they took it down after 3 days? Absent some sort of coercion (and idk how you'd coerce Meta), it seems like they weren't all that interested in feedback and discussion.

The fact that people responded negatively to a bad model (where "bad" can vary from unethical to dangerous to useless depending on your vantage) has little to do with the anti-AI cottage industry.

Portraying criticisms as necessarily stemming from bad faith actors is exactly the opposite of fostering feedback and improvement.




I'll take "strawman fallacy" for $1000, Alex.

Not sure how you made the leap from "There is cottage anti-AI industry from people who couldn't do real AI and hoping the next best thing to do is label it as 'racist'" to "You can't see any basis to criticize a "scientific tool" released into the wild that spits out convincing falsehoods?"


Well I'm pretty sure we're on a thread about an AI tool that was released as a scientific tool and it spits out convincing falsehoods. So maybe the cottage industry comment was truly just a non-sequitur, or maybe it was in reference to the topic of this entire HN post?


I'll give you another option: There are valid concerns with the type of output that large model AI generates, and there are experts working in the field who are trying to improve the state of the art by researching and implementing solutions to these valid concerns. There are also a subset of academics whose "one trick pony" is just "veto, veto, veto", without providing valid solutions, or worse, not taking a good faith understanding of "yes, this may not be perfect yet, but that doesn't mean we have to shut the whole thing down."

I'm not as familiar with how this culture works in the AI field, but I absolutely have seen it in the world of open source: people who have little to no programming skill who do nothing but grep repos for instances of "whitelist" and "blacklist" and pretend they are doing God's greatest work by changing these terms, and then cause typical faux-outrage storms on Twitter when their PRs are met with eyerolls.


Like GP I was replying to, it sounds like you’re mostly looking to air grievances here rather than discuss the topic at hand. Thank you for your work on OSS in any case, I imagine that’s a very frustrating experience.


What's the purpose of this organization that gets grant money?

https://www.dair-institute.org/


What is their actual reach and impact to the AI field?


I don't know who this "we" is you're talking about. Many of us are outraged over Saudi atrocities going all the way back to 9/11.


The reason being an anti-vaxxer is bad is because of a concept called "herd immunity." There is no analog, here. I think that not only do you not understand much about kids, but you also don't understand the analogy you're trying to use.


I guess I mean it’s more along the lines of crazy-parenting that hurts kids.

> I think that not only do you not understand much about kids, but you also don't understand the analogy you're trying to use.

I understand herd immunity lol but thank you for the lesson


I'm a parent of three girls, the oldest of which is 10. None of them will have smartphones, probably not even in high school. I run my house pretty much like the author of the article -- there are gadgets on the weekend, only. And even then the internet-connected ones are very carefully controlled and there are a ton of rules.

But you know what? My kids aren't babied. My oldest and I just went to a two-day rifle shooting clinic, and the two oldest have knives of their own that are very sharp and that they can use whenever they want.

They hike in the woods by our house, unsupervised, and they ride horses and swim. They climb trees. They camp in a tent in the woods by the house.

As for their peers? Those poor kids have never even touched a sharp knife, much less been given one of their own. My kids are well aware that they're allowed to take a lot more risks than their peers, and that they're given more responsibility for their own safety.

They're not babied. Rather, the kids who stay indoors on a gadget are the ones who are babied and stunted. They're the ones whose parents have infantilized them.

A kid doesn't "deserve" a smartphone. What they deserve is a childhood. They deserve to be bored for long stretches and to have to make up their own games and stuff to do. They deserve the privacy of their own thoughts, and to not be tethered to a gadget that they can't put down. They deserve flesh-and-blood relationships, instead of jerky pixels and audio. They deserve a life, and not just an existence.


There is some huge conflation there. I have three kids 10, 12, 15 and all three have cell phones. There are rules regarding use.

1. No phones at dinner EVER

2. No electronics (of any kind) except low music on school nights without explicit approval or when the device is being used specifically for school work

3. Free rein on weekend electronics, once prior commitments are met and all chores are done

Weekends roll around and you would think they would be glued to the electronics based on everything people post here, but usually it ends up being a last resort. They would much prefer to go play soccer with friends, practice their artwork, go shooting or give each other facials.


Sounds like you are nicely mixing in opportunities for both connected and "unplugged" experiences. I believe both are important. Everyone's experiences, opportunities and abilities will likely differ.

I suspect you use the knife example as an analogy. My daughter has a hatchet. It scares the heck out of me. I don't let her use it when I am not around or children other than our own are with her. But she loves it, and I have attempted to teach her correct and safe usage.

Technology can be as dangerous as a "sharp knife." Being given safe and monitored access to it and becoming familiar and comfortable with it will likely result in a more positive and healthy experience with it when unfettered access is suddenly thrust upon them.


I think it's amazing that on a site that's supposedly filled with tech geeks, so few commenters seem to know the difference between spending time alone in your room in front of a non-networked PC learning to code, and being glued to a networked smartphone no matter where you are, taking in a feed and waiting for the dopamine hit from a "like" or a share.

Anyway, the public-spirited side of me sees responses like the ones here and is depressed, but my secret inner libertarian sees them and thinks: "Score! My three kids will have attention spans and social skills, and will out-compete the smartphone-addicted children of these fools in every arena of adult life. So by all means, cripple your kids by handing them one of these pocket slot machines. Mwuahahaha"


> My three kids will have attention spans and social skills, and will out-compete the smartphone-addicted children of these fools in every arena of adult life.

until, you know, they decide to buy a smartphone as an adult, and over consume then


They will still have had almost an 18-year learning advantage at that point.


Yeah, cripple your kids by allowing them to participate in adolescent social life the same way their peers do, making it easier to make and maintain such ties later in life. What a horrible fate.


Looks like Zawinski's Law is still 100% true:

“Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.” Coined by Jamie Zawinski (who called it the “Law of Software Envelopment”) to express his belief that all truly useful programs experience pressure to evolve into toolkits and application platforms (the mailer thing, he says, is just a side effect of that). It is commonly cited, though with widely varying degrees of accuracy.

http://www.catb.org/jargon/html/Z/Zawinskis-Law.html


So why are there no really good email clients?


Because email clients can only be as good as email itself, which isn't really good. ;)


I know you're being facetious but email clients fail to satisfy me primarily when it comes to search and organization, remaining performant when dealing with large archives of mail, good UI/UX and robustness / reliability. None of these are really problems inherent to email, though they are perhaps inherent to software.


I think the same problems appear in other domains with large libraries. Music players have also been historically terrible with bad UI, slow searching and many of the same issues you mentioned.


Surprisingly, I have found Rhythmbox by the GNOME devs to be pretty good. It does what it's supposed to very well, and it works, which is all I want out of a digital music player.


Back around 2000 programs were instead expanding until they were able to burn CDs. And apps a few years ago - until they included Snapchat-like stories.


I've never heard of this, thanks for sharing. So true.


My first reaction to "could this be related to the storm," was "oh no, now this QAnon stuff has spread to HN."


What's wrong with SES? (Not a rhetorical question. I've played around with it and it seems ok, so looking to hear about downsides.)


Many of their IPs have been blacklisted because of spammers.

When I last tried to use them for marketing purposes with paying clients, it was really hit and miss.

See below. They are old links, but they serve my point.

[0]: https://forums.aws.amazon.com/thread.jspa?threadID=233001

[1]: https://forums.aws.amazon.com/thread.jspa?messageID=538207

[2]: https://forums.aws.amazon.com/thread.jspa?threadID=220517


Also, if you're sending that amount of email, you want your own IP. Maybe even two IPs: one for transactional and one for marketing.


SES allows you to purchase dedicated IPs


This ^^. The "OMG... HORROR" in this article about the mere fact of x86 usage is deeply silly. The stuff about C64 and so on at the end... there's just no redeeming this mess. The mods should nuke it.


I found it to be interesting. Don't be a gatekeeper.


So, I just want to throw out there that your Ars Technica uarch articles are one of the things that pushed me into computer engineering rather than straight CS. And Inside The Machine I consider to be up there with Patterson & Hennessy. Thanks for all of that! : )


Thanks!


Hey, same here! Your articles are what inspired me to go into chip design /computer engineering. Still have my signed copy of Inside the Machine on my bookshelf :)

Thanks a ton!


Thanks for the info! Now there's a pundit I can follow and read the books he authored.


Uh, it's technically interesting. If HN consisted of just articles like this, it would be a better site.


Is it? It's 3 pages of overly emphatic text revealing that an Intel chip is based around an x86 CPU. By the author's own admission, the conclusion is: "Nothing really, I just found this funny and wanted to share".

Also, I'm a bit baffled that they wrote a tool to measure the entropy of the machine code, tried hand-disassembling, and considered that it might have been an encrypted binary format before guessing that an Intel chip could be running an Intel CPU. But they did try "EVERY POSSIBLE RISC ARCHITECTURE [they] KNOW", because apparently nobody ever used CISC on embedded devices. Nobody tell him about the Game Boy.

Of course, I'm a bit harsh; it's easy to mock in hindsight, but it's still not very interesting technically.
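To be fair to the author, the entropy check is a standard first-pass test in firmware reversing: encrypted or well-compressed blobs approach 8 bits of entropy per byte, while raw machine code typically sits well below that. A minimal sketch of the idea (function name is my own, not the author's tool):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    # H = -sum(p * log2(p)) over observed byte frequencies
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Rule of thumb: values near 8.0 suggest encryption/compression;
# typical machine code usually lands noticeably lower.
```

Run over a sliding window of the firmware image, this quickly tells you whether hand-disassembly is even worth attempting.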


Obviously you're not into baseband reversing, otherwise you would have known that for the past 10+ years, basebands have almost always been RISC CPUs, and almost always ARM...

Moreover, all previous iterations of Intel basebands were custom ARM cores based around Infineon IP acquired by Intel to be competitive in the baseband market... you did not even read my document, because I said this about the old baseband version.

Moreover, by the nature of baseband itself, it requires a CPU capable of real-time or near real-time processing; as a matter of fact, other vendors are using Cortex-R CPUs, which are ARM cores made for real-time OSes, giving you predictable timings, especially for interrupt processing and memory access.

For example, Cortex-R gives you a special kind of memory called TCM (Tightly-Coupled Memory), which gives you predictable memory access timings, something that you cannot obtain with a simple cache.

By the way, Cortex-R is also used in WiFi chipsets, because the type of processing required is very similar (check the excellent writeup done by Google's Project Zero about this).

So yes, it is interesting to see how Intel managed to implement these kinds of features in an x86 CPU, which was never designed for such requirements.

I suggest you take a look at the references in my document; they might provide some useful information on the matter.

Of course, if you're not interested in baseband reversing, then I guess you're right, it's not technically interesting material.


It's interesting that x86 is making inroads into mobile, yes.


Sorry, having learned on VAX and 68K, followed by several years of ARM, I will never lose my instinctive revulsion to the creeping unkillable multitentacled horror that is x86. :)


It was a great read, I wouldn't nuke it.


AWS has absolutely taken down websites because they didn't like what was on them, just recently, in fact:

https://freebeacon.com/issues/gun-rights-activists-posted-gu...

I'm a journalist and have seen a screencap of the takedown that Amazon issued to the Firearms Policy Coalition, which started and maintains the censored site (CodeIsFreeSpeech.com). The takedown erroneously cited a temporary restraining order issued against an entirely different site (defcad.com) as the reason for the sudden, no-warning booting of the site from AWS.

And as you mentioned, Azure just now threatened to pull Gab.ai off its platform over a pair of anti-Semitic posts.

https://www.businessinsider.com/microsoft-gab-azure-cloud-an...

So these infrastructure providers are absolutely involved in censorship right now.


I wrote about this last year:

https://fightthefuture.org/article/the-new-era-of-corporate-...

It's easy to side with Cloudflare when they go against a site like The Daily Stormer (which is so out there it might just fall into Poe's Law).

The fact is that most decent hosting is only available from a handful of providers. Even the CF CEO has had misgivings about his decision, and it gets us into a really questionable space.

Platforms should be free to do what they want, right? They should be able to deny customers... just like an airline should be allowed to keep people with crazy political opinions from boarding planes, right? ...oh, and black people too. Oh wait... what?

Freedom of speech in the US is pretty much limited to protection from government censorship. But we don't let businesses do whatever they want. They can't keep a certain ethnic group from eating at their restaurant, and in many states they can't choose whether their venue allows smoking. The big question is, does speech need to fit into this same framework?

With the recent child protection act that gutted Craigslist and took down Backpage (an act that is leading to more violence against sex workers in the US, and one that the EFF and ACLU are actively fighting as unconstitutional), we see the US government holding content-hosting companies liable for the criminal actions of their user base. That is disturbing, and already a form of government control over what customers a business is allowed to have.

It'd be one thing if censored sites could just go to another provider, but there are only a couple of big providers and their mass has the ability to crush anything they find questionable.


It's actually worse than there being only a couple of big providers.

The problem is that the same sort of people who are trying to shut down sites through legal pressure tactics against Amazon, Google etc are absolutely happy to use illegal tactics too. In particular once sites are booted off large providers onto smaller ones or self hosted sites, that's when the DDoS attacks start. Infowars already saw one, for instance. How many firms can sink large DDoS attacks without needing to kick out the target? Not many.

If a pressure group or activist employee base can get content off CloudFlare, Google, Amazon and Microsoft then DDoS-wielding ideological zealots will do the rest and then the site is gone for good.

Where does speech go then?

It's a very dangerous game for the people at these content platforms to play. I don't see Republicans sitting back doing nothing as their worldview and voter base is systematically wiped off the internet. Legislation seems likely.


I agree. I see this all the time when otherwise reasonable people spout "oh well, Google/FB/Twitter are private companies so free speech argument does not apply to them and that racists/nazis etc aren't owed anything by social media platforms."


Step 1 is to normalize censorship for racists. Step 2 is to redefine racism until it captures most of your political opponents, up to and including "supports free speech" as a racist viewpoint.


This is based on a slippery slope argument: if the major platforms can ban speech inciting violence against Jews and African-Americans, then what's to stop them from doing it for other classes of speech? The answer is that the public outcry for kicking off other kinds of users is likely to be more pronounced and more justified. I'm not shedding any tears for the Daily Stormer or Gab, and I don't view them as canaries in the coal mine.


Relying on public outcry to defend free speech is by definition guaranteed not to work, because the only speech that needs protecting is unpopular speech.


Is it? The public outcry against what these firms are doing is pretty loud. Members of Congress have expressed concern. Even Vox has published stories about Twitter suppressing content from conservative politicians (but not Democrats). It has no effect.

Moreover how do you measure "public outcry"? The very point of censorship is to stop public outcry. If the media aren't writing stories and anyone who expresses concern is deemed to be supporting hate speech and banned, then it will look a lot like nobody cares even if many people do.


> Where does speech go then?

Federated / p2p systems like Mastodon? Non-Web-proper sites based on Dat and IPFS?

/*

If I were an adviser to the conspiracy theorists' insidious world government, I would suggest that pressure on non-consenting opinions be applied carefully: strong enough to securely remove them from the normal mass Web, but not so strong as to push normal users away from the (controlled) Web toward harder-to-control media. One way to achieve this is to allow some mild fringe content and actively demonize any serious non-consenters, so that they'd look way off the chart to the general public, and thus "worthy" of being censored out.

*/


Dat/IPFS are peer to peer protocols. There is nothing that makes them DDoS resistant, you can just locate each peer rehosting content and blast each one off the net.

But more to the point, being forced onto Dat or IPFS is equivalent to being erased, given that nobody would know how to find or access the new location (Google doesn't index such net spaces).


Finding and blasting every peer is a bit harder, especially since pieces of content are encrypted on peer nodes, AFAICT. It's no easier than blasting every peer torrenting chunks of a particular file (which, AFAICT, is still unheard of).

> being forced onto Dat or IPFS is equivalent to being erased

Yes, for now it is! It's a wild frontier without amenities for a normal netizen, such as a decent search engine. So the point of censorship is to force every important non-consenter into that wilderness without enough other people going there to civilize it, as they civilized the web, online music access, etc.


Interesting, thanks! I hadn't heard of that.

