
There's a surprising amount of insecure code in the wild, and no shortage of naive engineers who are willfully ignorant of security practices.

I'd assume that Parler's engineers' motivations had more to do with politics than with providing a secure platform for protecting dissidents under duress.

(Or, if we look at the history of a recent major war, the mediocre engineers working for the other side thought they were the good guys.)



>and naive engineers who are willfully ignorant of security practices.

Fairly sure we could replace algorithm and data structure whiteboard interviews with security interviews and we'd all be better off


I'm all in favor of getting rid of leetcode interviews, but it's not an either-or: coding competency is still the primary concern, with security a secondary one.

And I don't think an individual developer would have prevented this; it's an issue with the general security and monitoring policies at Parler. I mean, how could they create millions of admin accounts and extract 70 TB of data without any alarm bells, flood controls, or circuit breakers engaging?


When I interview candidates and their solution encounters an unexpected condition, I typically park the original question to briefly discuss how to handle that exception.

This, and more generally their thoughts on how to handle other kinds of unexpected scenarios, is an important part of delivering real-world solutions. I'm shocked by the number of engineers who have no thoughts on this topic.


The popular way of quizzing developers during interviews is more about throwaway leetcode than about experience.

What you're digging for is their experience. To me, what you're finding comes as no shock: the industry has for years been punishing people who have experience and are willing to show it.


I think it depends on the level you're interviewing at. New college hires rarely have much of a clue about this, but engineers and senior engineers should be able to have a conversation about it. To be fair, because of time constraints I don't have candidates code this, but I do want to entertain the discussion. For example: what happens if I pass "illegalValue" to your method, or what do you think your solution should do when the service dependency you call fails to return?

The sentiment is important because it shows how a candidate ties the requirements to the customer use case they serve. Not being able to connect the two is a concern in my mind, especially for those who aren't new college hires with little real-world experience.


It is an important part of delivering real world solutions. How often is it a part of being hired to deliver real world solutions?


I'm not sure I understand the question. In my mind, if you cannot reason through how to handle unexpected conditions in your code, I'm going to think twice about hiring you. Code that accounts for erroneous situations is as important as your golden path because, as we know, software should be predictable but regularly is not. Especially so in the world of microservices, where so many integration paths are changing and in flux.

Are you suggesting we should only interview candidates to code the happy case and not consider how they will reason through what to do when the input isn't what they expected or a dependency fails to produce results?


I think you should hire for that skill.

In practice, I do not think candidates are hired for that skill. I have never been asked about error handling in an interview.


Generalizing because you've never experienced something on an individual level seems like a bit of a stretch. Anyway, it may not be commonly asked in interviews, but I've interviewed ~300 engineers for a FAANG company and it's something I cover for the reasons described.


I saw a tweet claiming the IP rate limiting fell over because the X-Forwarded-For header wasn't handled correctly, allowing that circuit breaker to be bypassed.
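For anyone unfamiliar with this bug class: X-Forwarded-For is just a request header, so any rate limiter that keys on it blindly can be reset by the client at will. A generic sketch (function names and addresses invented; this is the bug class, not Parler's actual code):

```python
def client_ip_naive(headers, peer_ip):
    # BAD: trusts the client-supplied header wholesale, so an
    # attacker can put a fresh fake address in X-Forwarded-For
    # on every request and never trip a per-IP rate limit.
    xff = headers.get("X-Forwarded-For")
    return xff.split(",")[0].strip() if xff else peer_ip

def client_ip_checked(headers, peer_ip, trusted_proxies):
    # Honor X-Forwarded-For only when the direct TCP peer is a
    # proxy we operate; otherwise the header is attacker-controlled.
    xff = headers.get("X-Forwarded-For")
    if peer_ip in trusted_proxies and xff:
        # The rightmost hop is the one appended by our own proxy.
        return xff.split(",")[-1].strip()
    return peer_ip

spoofed = {"X-Forwarded-For": "1.2.3.4"}
print(client_ip_naive(spoofed, "203.0.113.9"))                 # 1.2.3.4 (spoofed)
print(client_ip_checked(spoofed, "203.0.113.9", {"10.0.0.1"})) # 203.0.113.9
```

In practice this logic usually lives in the load balancer or framework config, but the principle is the same: only trust forwarding headers added by hops you control.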


I am certain most software engineers would abysmally fail the Security+ exam, which is the entry level for security practice.

But a far more expedient process is to just give candidates an essay exam to see if they are functionally literate in their profession.


My go-to question for interviewing candidates is "What trends do you see in web development now, or do you see being important in the next few years?". There's no right answer; I'm just looking for something relevant that shows they've got some knowledge of the field beyond being able to bash out code to order. For juniors I don't really expect much, while seniors should at least be able to talk a bit about a couple of things, but having interviewed prospective head-of-development candidates I was amazed that two out of four just tanked on the question; one couldn't answer at all.

Candidates who know there are things they don't know about I don't mind, but candidates who are unaware of these things at all are typically uninterested in broadening their knowledge, and not someone I'd like on my team.


Can you outline what you might expect in an essay exam for a software engineer position?

Would it take the form of a standard interview question:

"Write a few paragraphs on what happens when you press a key on the keyboard"

Or would it use the medium to ask a question less appropriate in an oral interview process?

Genuinely curious what you envision with this format.


The goal is to assess a candidate’s ability to communicate technical matters quickly with structure, organization, and planning. Secondary considerations include the ability to follow simple instructions, command of written language, and accurate descriptions of technical subjects.


Replace fizz buzz with spotting SQL injection issues.


There is plenty of use for coders who can't spot SQL injection issues. I can't think of much use for a coder who can't do fizzbuzz.


How?

SQL injection is so basic that if you can't spot it, you probably can't do fizzbuzz.


It more likely means that you just aren't familiar with SQL.
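Either way, the exercise proposed upthread is easy to make concrete. A minimal sketch using Python's built-in sqlite3 (the table and payload are invented for illustration) of spotting, and fixing, the classic injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def find_user_vulnerable(name):
    # BAD: string interpolation lets input become part of the SQL.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # GOOD: a placeholder keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # [('alice',)] -- every row leaks
print(find_user_safe(payload))        # []
```

A candidate who can explain why the first version returns every row, and why the placeholder version doesn't, has demonstrated something fizzbuzz never tests.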


Bingo. They most likely didn’t care. It was all a means to an end. I would be combing this data to see if any active users that were inciting a call to violence are employees or contractors of say: Epoch Times, Members of Congress or their staff, members of law enforcement (especially capital police), select corporations or donors.


I would also be checking to see if they were employees or contractors of NY Times, CNN, known agents of China/Russia/Iran.

There are too many groups that are enjoying seeing the division of the American people.


If you’re referring to World War II, Axis engineers were in no way “mediocre”.


Systems can be complicated and even the smallest detail can be dangerously revealing to a scrutinizing eye.

"Allied intelligence noticed each captured tank had a unique serial number. With careful observation, the Allies were able to determine the serial numbers had a pattern denoting the order of tank production. Using this data, the Allies created a mathematical model to determine the rate of German tank production. They used it to estimate that the Germans produced 255 tanks per month between the summer of 1940 and the fall of 1942."

One source of many: https://www.wired.com/2010/10/how-the-allies-used-math-again...

This information was used to estimate force size and thus counter it, and it turns out this method was surprisingly accurate.
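The estimator behind that story is simple enough to sketch. A minimal version of the frequentist "German tank problem" estimate, N ≈ m(1 + 1/k) − 1, where m is the largest serial observed and k the sample size (the serial numbers below are invented for illustration):

```python
def estimate_total(serials):
    # Frequentist "German tank problem" estimate:
    # N ~ m(1 + 1/k) - 1, with m the largest serial observed
    # and k the sample size (written as m + m/k - 1 here).
    m = max(serials)
    k = len(serials)
    return m + m / k - 1

# Ten invented serial numbers from "captured" tanks:
captured = [19, 40, 42, 60, 80, 110, 150, 170, 200, 240]
print(estimate_total(captured))  # 263.0
```

The intuition: the largest observed serial is a lower bound on the total, and the average gap between observed serials tells you how far past it the true maximum likely is.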


Indeed. If you look at any report/book/etc about the strategic production of war goods in WW2, you'll quickly realize that the Germans over-engineered most of their equipment. This resulted in fewer weapons and more maintenance for said weapons. The famous Tiger II tank (and most of their other planes/tanks) took longer to make, required more maintenance, consumed more fuel (a precious commodity for Germany at the time), and required more one-off spare parts (even the tracks were designed for one specific side of the tank). On top of this, Germany had more tank models than all of the Allies combined. Allied tanks were simpler and could be mass-produced in insane quantities, parts were interchangeable, and parts could more easily be taken from disabled machines.

The Russians went even further: during the first half of their involvement in the war, they specifically engineered their tanks so that QA only required them to last a very short time (as little as a few dozen km of use), because on average they'd be destroyed before then anyway.


It should be noted, however, that the Axis could never compete with the Allies in terms of quantity - America alone had over 5 times the industrial capacity of the entire Axis in 1944, and Germany was critically limited in resources like oil. Germany needed weapons that could get 10+:1 kill ratios. Further, most of the late-war German equipment was designed during the early war, when they were doing well: it looked like their industrial base was expanding and they mostly needed equipment for well-supported offensive actions. If Germany had spammed tanks like the Russians did, they'd just have run out of fuel sooner. It was a gamble to go for over-engineered equipment, but it was rational even if it ultimately didn't pan out.


This and similar stories should really be interpreted more as British intelligence being brilliant than the Germans being dumb. It's almost scary how many times the Allies produced paradigm-shifting hacks in record time throughout the war.


Nice, I hadn't heard that one. I did hear that they always ended encoded messages with "heil hitler", giving the decoders a solid lead and verification that the key used was correct.

On that note, using UUIDs would be more 'secure' than auto incremented numbers, wouldn't it? I don't like how much space they take up in my URLs though.


> On that note, using UUIDs would be more 'secure' than auto incremented numbers, wouldn't it? I don't like how much space they take up in my URLs though.

Or just assign them in blocks that are out of order. Any intelligence gained from the leakage of such blocks would be misleading. Misleading is often even better than non-existent.
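For a concrete comparison (using Python's standard uuid module; the /posts/ paths are invented):

```python
import uuid

# Sequential IDs leak volume and ordering: seeing /posts/1002
# tells you /posts/1000 exists and roughly how many posts there
# are -- the tank problem, applied to a web app.
sequential = [f"/posts/{i}" for i in range(1000, 1003)]
print(sequential)

# Random v4 UUIDs carry 122 random bits: no ordering, no count,
# and guessing a valid identifier is infeasible.
opaque = [f"/posts/{uuid.uuid4()}" for _ in range(3)]
print(opaque)
```

Opaque IDs only hide information, though; they're no substitute for actual authorization checks on each request.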


That, and always beginning them with a weather report (at least for U-Boats).


It seems unlikely anyone would describe WW2 as recent.


On a tangent: this hacking of Parler might be the reason a more recent WW never comes to fruition.


But it is, as is WW I. The latter ended 102 years ago and set in motion many of the technological developments which define our current world; WW II refined these to a level which is recognisable and often still usable today. Electronic warfare, programmable computers, jet-powered aircraft, nuclear weapons - all of these were used in WW II. Modern computers are faster, modern jets are more reliable and more fuel efficient, modern nuclear weapons are more compact, and modern electronic warfare has kept up with the development of computers and electronics, but as wars go, WW I and WW II were the first - and possibly last [1] - "modern" large wars.

[1] - modern weaponry makes large-scale land war difficult to survive, e.g. the average survival time of a main battle tank is counted in minutes.


And many smart Germans (engineers) left when they figured out what was going on.


"Cool, I found a package for this"


Or how about a system where pretty much the entire thing is built upon packages? Even for the most minute things like left-padding..


Hello fellow follower of the cult of cargo!


Scariest comment so far!


> I'd assume that Parler's engineers motivations had more to do with politics than providing a secure platform for protecting dissidents under duress.

If one is to look at the LinkedIn for the tech leadership of Parler it would not be a stretch to say that they are way outside of their depth technologically speaking.


To be fair, it's probably hard for a network like Parler to attract top talent. I mean, they explicitly advertised themselves as the "free speech social network" (i.e. "all hate speech welcome, we won't censor anyone except maybe Trump parody accounts") - would you want to work for such a company, or have it on your resume in the future?


> would you want to work for such a company, or have it on your resume in the future?

Does top talent only work for giant ad networks that thrive through undermining privacy (and hence, free speech) while manipulating public discourse to the point where these companies hardly have any defenders left? I suppose so, money will easily trump other considerations, especially among the naive, ignorant or just plain venal.

The sad fact is your comment exposes how difficult it is for anyone in the tech industry to hold a sincere conviction that free speech is a good thing, which until recently would've been astounding. It's a giant backwards step.


> The sad fact is your comment exposes how difficult it is for anyone in the tech industry to hold a sincere conviction that free speech is a good thing

Twitter occasionally labels disputed/debunked political claims as such (but still lets them be published) and, after literally years of doing little more than that, finally took action to ban a half-dozen high-profile accounts that kept pushing such claims after they arguably led to an armed insurrection. Parler was literally designed with suppression of political viewpoints they disagree with in mind from the start. It should be crystal clear which of those networks "values free speech" to a higher degree.

So, no, your implicit claim that it's sad that top talent wouldn't work for Parler because that would demonstrate their commitment to free speech is silly at best and disingenuous at worst. Parler has demonstrably less commitment to free speech than Twitter does.

I'll be blunt: my sincere conviction is that "if you moderate anything it means you are not for free speech" is not a viable operational principle. It's a rhetorical device. Trolls -- alt-right or otherwise -- have always claimed that moderation suppresses their free speech. If you listen to them, you are running a forum for trolls, whether or not that is your intent. It is not Parler's publicly claimed intent to be doing so, but -- even based on the content on their site, let alone their ideologically-driven moderation which, again, goes far beyond anything Twitter, Facebook, et al. have actually done -- it is painfully obvious it is their actual intent.


> Parler was literally designed with suppression of political viewpoints they disagree with in mind from the start.

"We're a community town square, an open town square, with no censorship... If you can say it on the street of New York, you can say it on Parler."

That's a quote from the CEO I just grabbed from CNBC[1] and there are others floating around about the lack of censorship.

How does that lead to suppression of opposing political viewpoints?

[1] https://www.cnbc.com/2020/06/27/parler-ceo-wants-liberal-to-...


This is just talk. You can scroll up to the head of the thread and see how the platform was designed to censor in order to maintain a certain ideology.


> This is just talk

If you can't construct a proper response don't denigrate mine, thanks.

> You can scroll up to the head of the thread

Again, if it's too much effort for you to construct proper responses then you shouldn't respond at all.

> see how the platform was designed to censor in order to maintain a certain ideology

That is not what the tweets show. They show that there are moderation tools. There is no proof that any view was censored, you, along with the tweeter, have inferred that. If you have a statement sent out to mods about what should be allowed, or you have something that shows certain types of speech were suppressed then you can make the claim. "Social network has moderation tools and mods" does not cut it.

Now, if you're going to reply again I'd ask that you try harder to maintain some civility and up the quality, probably through greater effort.


> The sad fact is your comment exposes how difficult it is for anyone in the tech industry to hold a sincere conviction that free speech is a good thing, which until recently would've been astounding. It's a giant backwards step.

Free speech is a good thing. And if algorithmic ranking weren't involved, I would probably still hold to the notion that the appropriate cure for bad speech is usually more speech.

But I've come to understand that even when some of a thing is good, more of it is not always better.

By all means we need vigorous debate and principled stances arguing about where the line should be drawn, and what are the appropriate consequences for stepping over it, but "fire in a crowded theater" is just the obvious case when we have people nattering on about how it's stupid to prevent people from walking around with spare cans of gasoline, just in case and ... "hey, did you know that if the fuel-to-air mixture is rich enough, gasoline actually won't catch fire? Here is your complimentary box of matches BTW."

We desperately need better models and mechanisms for regulating speech that don't require heavy handed censorship by the government or outright bans by private parties, and it would be great if these mechanisms can be meaningfully exercised at the edge of the network (or social graph) rather than centrally deployed, but I'm not sure how to get there from here without the various failure modes bringing everything crashing down around us anyway.


Speech is not gasoline, and ignoring the fact that "shouting fire in a crowded theatre" is not the correct quote (it's falsely shouting it, and I only know one case of it happening[1]), you've placed the cause of this on algorithmic ranking as much as speech itself.

Great, let's stop interfering with speech, it leads to bad things.

[1] https://www.youtube.com/watch?v=X3Hg-Y7MugU


> Speech is not gasoline

It's a metaphor. Let me extend it a bit: Speech is the drought of repeated lies about the illegitimacy of an unwelcome outcome drying out the underbrush of widespread discontent.

Speech gathers the tinder of a crowd committed to "stop the steal".

Speech is the accelerant convincing those so predisposed that violence or the threat of violence is acceptable and even to be admired.

And speech is the match tossed offhandedly aiming the mob you primed and egged on at the target you want to intimidate (and that's the most charitable interpretation).

> you've placed the cause of this on algorithmic ranking as much as speech itself.

To be clear, I find algorithmic ranking to largely be the cause of mostly the 'widespread discontent based on repeated lies' part, although there are clearly also various 'automated radicalization' effects happening as well.

> Great, let's stop interfering with speech, it leads to bad things.

I don't think we get to unring that particular bell (especially since such ranking efforts predate both Twitter and Facebook). Applying ranking techniques to the prioritization and selection of a subset of items from a variety of sources is one of those ideas that was inevitable because it was obvious to a sufficiently large number of people, whether the result is a live feed, an automatically arranged 'front page', a playlist, or some other format. It doesn't even matter which specific techniques happened to be used, they were all going to be tried by a lot of folks until somebody got the results they wanted (more engagement / time on screen). Tech companies like Twitter and Facebook are going to continue to try to mitigate the toxic side effects on public discourse, but I don't think there is a way for them to back away from algorithmic ranking unless government regulation simply forbids it outright (which isn't too likely). For that matter, I'm not sure how useful HN itself would be if the front page wasn't being automatically ranked based on engagement+recency with editorial decisions being made to nuke certain types of posts/topics for being attractive nuisances.


> It's a metaphor.

Clearly, here's a better beginning:

“Let me extend the metaphor a bit".

> Speech is the drought of repeated lies about the illegitimacy of an unwelcome outcome drying out the underbrush of widespread discontent.

Did you mean hate speech? A drought is usually a negative. Sorry, I just can't make sense of that.

> And speech is the match tossed offhandedly aiming the mob you primed and egged on at the target you want to intimidate (and that's the most charitable interpretation).

All very poetic but, again, I'm sorry but I can't find any substance to it.

> I don't think we get to unring that particular bell

I wasn't the one suggesting we do. You presented two causes - algos and lies, the negative outcome of the latter being magnified by the former - and then chose to suppress one. I think that's a false choice as I can think of other possible ways to remedy the situation and would pursue them instead, but even if I only had the false choice then I wouldn't pick suppression of speech. I can manage with a chronological feed, I was on Twitter and Facebook pretty early, it wasn't bad enough for me to give up the foundation of liberal society. I even remember a time before the internet, it really wasn't that bad.

> I don't think there is a way for them to back away from algorithmic ranking unless government regulation simply forbids it outright (which isn't too likely)

What's likelihood got to do with it? You're just shrugging your shoulders and choosing to support the suppression of speech instead. It's not as if you've made the argument that you care about free speech but the alternatives are unlikely. I don't even see any evidence as to why it's unlikely. Making a decision based on the likelihood makes little sense, it's not a horse race.

> I'm not sure how useful HN itself would be if the front page wasn't being automatically ranked…

I agree. Why would legislation of algorithms mean no algorithms? It doesn't and it wouldn't. Here are some viable alternatives:

- Social media sites give users control over the algorithms that decide their feed.

- Users get access to the tools for blocking et al., and to the scoring of users. Why can't I know that the person replying to me is likely to be a bot? Or has scored highly for trolling? Why can't I know my own score? Why can't I choose my own shadowbanning? No good reason other than the centralisation of power.

- Companies over a certain size lose Section 230 protections. This will encourage greater moderation on the large sites and foster competition from smaller ones.

I'm sure there are more.


> No good reason other than the centralisation of power.

Hiding some implementation details from bad actors isn't necessarily a bad idea, though of course you have to figure out whether you're doing the security equivalent of hiding a proprietary algorithm (bad idea) or hiding your secret key (good idea).


I think we have all unfortunately seen what unbridled free speech on the internet looks like.

I think we have reached a good balance (for now). Government has to be hands off, and platforms can censor as they wish.

It’s hard to say whether this will work long term, though.


Have you compared the unbridled free speech on the internet, which is (or was) available to US citizens, with that of bridled speech available to citizens of other places around the world?

Which place has more violence as a result?


Have you used an email with no spam filter?


I still get to see what's in my spam folder.


And you can still go to dark corners of the internet and view all sorts of hate speech.


You're stepping well outside the bounds of your analogy. Incoming email is like a social media feed, hence the comparison. It fails when you compare things like shadowbanning with spam, as I pointed out.

Going out onto the "dark corners" of the internet though, that would be equivalent, perhaps, to signing up for an equivalent email provider without a spam filter?

Nope, that doesn't work. Hard to tell what you could mean other than "my initial analogy didn't work so I'm going to move the goalposts".


Promoting hate speech to be published in the same spots as not hate speech because "free speech" is similar to putting the spam in with the regular mail. You're causing damage and actually preventing "free speech" because your speech incites action against those who they are speaking against. It also simply drowns out regular speech. Nobody wants to use a platform that has child porn or white supremacists plotting murder.


Free speech is one thing, building an echo chamber for lunatics is something completely different, at least in my book. In principle, we probably also agree that all people should be equal, but if you follow that principle to the end, you get Communism. That's why there is a need for supreme courts to interpret each country's constitution, which are basically just a list of simple principles that are acceptable to everyone, but the devil's always in the details...


> building an echo chamber for lunatics is something completely different,

Perhaps it would've been better for Twitter to support free speech then and they'd (the Parler users in question) have remained a fringe voice completely overwhelmed by opposition on a mainstream platform.

Even then, the main problem I see driving all of this is the lack of competition, so I fully support building "echo chambers" if that means competition for platforms like Twitter that are actively working to create echo chambers that they control.

Edit: clarity


> In principle, we probably also agree that all people should be equal, but if you follow that principle to the end, you get Communism.

That's a caricature. The actual principle is "equal justice before the law" (there are variations). Justice is an important part of the principle since otherwise "The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal loaves of bread". Not to mention such evils as selective enforcement and prosecutorial discretion.

Few people these days would insist that equality of outcomes is a legitimate goal, though the extent to which disparity of outcomes is treated as a "code smell" at least potentially indicating a societal problem worth examining does vary fairly predictably across the US political spectrum. The appetite for instituting solutions when a systemic problem is demonstrably found also varies fairly predictably.


It'd be interesting, in any case. I recall tales of former colleagues who used to work for a dodgy online casino; fancy office or mansion on an island, extravagant parties, etc.

And online I think I read something about porn sites, who were working on large scale video streaming well before Youtube and Netflix (as streaming service) were a thing.


So it’s now just a given that all of HN objects to the phrase “free speech” without limits?

Seems like a pretty radical assumption and a pretty sad sign of the times if true.

Note: this message has nothing to do with Parler and being right or left. Just about the phrase “free speech” and what it now connotes.


The only reason you can even post here is because HN stops spam and DDOS attacks. Your entire post is only possible because there are limits on free speech.


Free speech has always had limits...


Well, maybe in the eyes of US law, but that's not a given for, well, me or perhaps others on HN. Take "the US highway network" for example. It can just as easily be used to transport black-market items, child porn, weapons, drugs, etc. as it can ice cream bars. I support it. Tor supports drug sales just as easily as it does Star Trek reviews. I support it.

Can't say for sure, but I think it's hazardous to assume we all subscribe to the exact same notion of the limits on free speech.


Oh, it's not just the US. Other countries have restrictions as well. Societal norms impose restrictions too. Unless you're an anarchist, there have always been (and probably always will) be restrictions on speech and free speech is typically meant in the domain of political speech.


Even anarchists believe there should be restrictions on speech. Just not that the power to restrict speech should be concentrated.

The way humans socially operate inherently restricts speech.


There are many talented engineers all across the political spectrum.


This isn’t untrue but the distribution matters. Most of the conservative-leaning engineers I’ve known tended to be libertarian and/or rule of law types who wouldn’t work for a place like Parler. If you’re a devoted capitalist, you might favor larger companies with better pay. And, of course, if you’re not all of white, straight, Christian, and male you might reasonably have concerns which would not have stopped you from taking a job from, say, Mitt Romney.

Each degree you move to an extreme has a fair impact on your ability to hire the best in a very competitive market. Even if you’re a movement conservative a prudent question is how something on your resume might affect your future earning potential.


Jack Dorsey: "We are the free speech wing of the free speech party"

Mark Zuckerberg: "Trump says Facebook is against him, liberals say we helped Trump. Both sides are upset about ideas and content they don’t like. That’s what running a platform for all ideas looks like.”

Matt Cutts: "We don't condone the practice of googlebombing, or any other action that seeks to affect the integrity of our search results, but we're also reluctant to alter our results by hand in order to prevent such items from showing up"


> would you want to work for such a company, or have it on your resume in the future?

How much are they paying, again? If they pay on par with FAANG, I'm sure they would have no problem attracting top tier talent. If they are paying multiples of FAANG, they would attract top of the FAANG talent. Of course if they are paying a fraction of FAANG, they are going to get a very mediocre talent.


> would you want to work for such a company, or have it on your resume in the future

And even if you did want to work for such a company, the impact on your resume in the future alone might be enough to deter you.


Yes? I wouldn't work for Parler, but I fail to see how the phrase "free speech social network" should elicit some negative emotion. Parler sucks because they are a haven to right wing extremists, not because of their marketing.

It's like being angry at Signal because their encryption allows terrorists to communicate securely.


Everywhere I go, I feel like I have free speech by default. I suppose it's my privilege that I feel like that, but I digress. When free speech is explicitly advertised, it smells.


Yes, this, exactly. It's sad that "free speech" currently feels like a dog whistle for the alt right, but it's disingenuous to ignore the reality that social media sites and forums that have sprung up in the last few years explicitly advertising this have very much been going after an explicitly far right audience. The explicit promise is "we won't suppress your speech like those other platforms do," but Twitter, Facebook, et al. demonstrably suppress very little speech: there are high-profile cases of people who have been kicked off after repeated warnings, but that's not actually the same claim. The real promise of Parler and friends is "you'll be surrounded by people who agree with you, unlike those other platforms."

(There are lots of anecdotes of individual users who get temporary bans on Twitter for political speech, but I have heard those anecdotes across the political spectrum. I suspect conservatives grumpy at Twitter would be very surprised how much left-wing discourse there is about how Twitter protects TERFs, how they pay lip service to banning Nazis but don't really do it, how Jack Dorsey is probably a crypto-fascist, and so on. The parallel -- "I know of people who agree with me who have been moderated and people who disagree with me who have not, ergo Twitter is obviously biased in favor of The Other Side" -- is kind of fascinating.)


I agree the “free speech” label has been taken over by these content-outcasts and turned into a dog whistle. Today, if a platform markets itself as “The Free Speech version of X” it seems to always mean “The platform that hosts only content so bad it’s banned from X”.


Well, free speech is a good thing, if done responsibly. In practice though, "free speech" as used by Parler means no moderation at all, so the most blatant lies and the craziest conspiracy theories can run unchecked. And since mainstream platforms are cracking down on extremists, your platform will inevitably become a haven (and echo chamber) for them, even if you didn't intend to be one.


There was/is extremely heavy moderation on Parler.


Yeah, I checked it out for a bit a few days after it launched, scrolled around for 10-20 min to see if it'd turn out like Twitter, 8chan, or the_donald in terms of discussion, and it was really weird. IDK how to even describe it other than that it seemed to have that MLM-esque or Truman Show vibe where everything seemed strangely personal but also really shallow and performative? None of the discussions I saw felt natural. It was all super identity focused with very little policy discussion, let alone material disagreement.


As others have pointed out, yes they did moderate, just for different things.


Almost everything on Parler and similar sites that is not explicit calls to violence against specific targets, does not describe black people using the n-word, and does not talk about things like how the Nazis were right when it comes to Jews could be posted on Reddit in /r/conservative without violating any rules of the subreddit or of Reddit itself.

Most of it could also be posted on Twitter and Facebook, although there it might get labeled as misinformation.

It's actually fairly difficult for the overwhelming majority of people to get legitimately kicked off of most mainstream social media. By "legitimately" I mean by actually violating the site's published rules. At the scale of these sites there are occasional mistakes made where someone gets banned who shouldn't, and it can be difficult to get that reviewed, but nevertheless for most people those sites are "free speech social networks".

Because of this, when you start a site like Parler you get almost all of your initial membership from those people who got kicked off of Reddit, Twitter, etc., or who are having to work at not getting kicked off because they want to post calls to violence, etc.

That sets the tone for the site from then on. Hence, when a site is specifically selling itself as a "free speech social network" it almost always can correctly be interpreted as "a social network for <X> extremists who could not follow basic norms for civilized discourse" for some X.


> I fail to see how the phrase "free speech social network" should elicit some negative emotion.

Can you name a "free speech social network" that isn't overrun by white supremacists and Nazis?

It turns out that if you prioritize free speech, then the people who congregate on your site are mostly those with beliefs that are sufficiently repugnant that decent humans don't want to be associated with them.


The description "free speech social network" is very heavily associated with far right extremists.

Much like, for example, the Gadsden flag.


The yellow field, "don't tread on me" banner.

https://en.m.wikipedia.org/wiki/Gadsden_flag


if free speech is only for the enemy, I fear what victory looks like.


Someone recommends you some games. About one they say "it has a simulated theme park with an intricate rollercoaster-building engine and you compete with other theme parks for customers", about another they say "you're trying to build a rocket to the moon but it's really challenging", and about another they say "the interface responds to mouse clicks".

"if a working interface is only for bad games, I fear what good games look like"

Good games have working interfaces too, but they have a lot more worth talking about.


Top talent works at PornHub so I imagine Parler would have done all right. Perhaps not in the area of security, but we can point at plenty of other companies that were discovered to be lacking in this area at relatively early stages of their lifecycles (e.g., Zoom), not to mention a few very mature organisations (e.g., Intel!).

One of the things that's incredibly unhelpful in our current political debates is that there exists a very noisy (at least) minority on both sides of every one of those debates that assumes all the people on the other side are idiots. In general this is not true[0] and so, yes, even though Parler was a social network explicitly for conservatives, they would still have been able to hire smart people.

I'm not saying that Parler was for extremists, although an extremist contingent was certainly present, but it's worth remembering that even those who are unequivocally and uncontroversially agreed to be extremists by the vast majority of people (Bin Laden, Stalin, Hitler[1], et al.) were always able to "hire", or perhaps disciple, very smart people.

Being smart is not the same thing as being ethical, by which what I really mean in this context is sharing the same set of ethics that you or I have.

(On a tangentially related note to both my first and last paragraphs, Boeing employ a very large number of very smart people and yet, as the 737 Max debacle clearly illustrates, they were nursing some absolutely horrendous cultural issues that led to a situation where that airliner was certified and sold even though it contained systems that incorporated severe safety failings.)

[0] And the culture of endless cheap shots, snobbish intellectualism, and disrespectful dismissiveness that surrounds political debate these days is not a force for good in the world.

[1] At the risk of invoking Godwin's law.


For the very reasons people work in ad tech, Facebook, et al. Not everyone is a wannabe politician or a wannabe future founder/leader/influencer/celebrity with the accompanying delusions of grandeur.

A lot of people just want to lead normal lives with their friends and family. I envy them. Truly.



