
Good old personal responsibility. It's your responsibility not to walk into traps laid out to catch you.

Why stop people from laying out the traps? They're just trying to make a living, it's perfectly fine!




Absolutely. Let's lay out traps for the trap layers then, and find a way to do it for a living. You're saying that's fine, so let's do it.


Except the trap layers are bigger (more money), stronger (good lawyers from that money), and there are more of them than you (lots of people and lawyers).

How do you fight that? With a bigger group of people. Since they are so large, that means a nation or state (and a big one at that; if you're small they probably won't care, that's how big they are).

If there's a pride of man-eating lions roaming about, you don't set out by yourself to fix the problem. That's just an imaginative attempt at suicide. You get a large group of people. That's why regulation is what we need, because that's what it means when you get a really large group of people to impose their will in the modern age.


Except you don't lay out traps for the trap layers. You make it very clear that if they lay out traps then they will face consequences.

They are the ones acting in bad faith and trying to extract resources from you via subversion.


What do you propose to solve this problem besides teaching others to take personal responsibility?


Maybe we can look for historical examples of how legislation had a positive impact on a widespread harmful habit.

  Early on, the U.S. Congress adopted the Federal Cigarette Labeling and Advertising Act of 1965 and the Public Health Cigarette Smoking Act of 1969. These laws—

  * Required a health warning on cigarette packages
  * Banned cigarette advertising in the broadcasting media
  * Called for an annual report on the health consequences of smoking
https://www.cdc.gov/tobacco/data_statistics/sgr/history/inde...


Some basic rules and regulations would be great :)

This should of course not replace taking personal responsibility, but humans are way too easy to influence / hack to leave it entirely to market forces. Some common ground and basic rules are at the very core of any healthy society or community.

Speed limits, traffic lights and bike lanes create great conditions for a healthy marketplace. An absence of regulations benefits only the biggest and meanest, and usually ends up creating unsustainable systems that are crushed under their own weight.


> Some basic rules and regulations would be great :)

Any specific ones? I personally can't think of any at all that would work.


I can see two obvious interpretations of what you might be implying:

1. That, because you are unable to think of rules and regulations that would work, there must not be any.

2. That, in order for anyone to legitimately suggest that something might be worth attempting, they must already have a working solution.

Is it one of those two, or something else?


Some (including myself) would argue that a repeal of section 230 would go a long way towards solving some of these issues.

A large part of the problem is that social media companies algorithmically serve content to further their own purposes, but are not liable for the consequences of that algorithmically served content.

To me, using an algorithm to select what content to show a user is exactly the same thing newspapers do: editorializing. I don't understand why it's treated any differently.


Those people who say that about section 230 are better known as either dishonest or complete fucking morons.

The entire mutually exclusive division between platform and publisher is a zombie propaganda lie that won't stay dead.


> I personally can't think of any at all that would work.

This rhetorical technique is called 'poisoning the well,' preemptively injecting doubt into whatever follows. The GP gave several examples of regulatory mechanisms that already work to some degree, and an argument for why, but you chose to ignore that completely for some reason.


The GP gave examples of regulatory mechanisms that work for other situations, but I also am unsure of what specific regulatory mechanism would work well for this, and those examples don't really bring anything to mind, even though I am in favor of a regulatory solution. I think coloring the question as poisoning the well might be a bit premature.

That is, what are traffic lights and safety lanes for social networks? How do we enforce those for private products when generally they are things that apply to our public spaces? A social network is much more of a mall than a public park, so what restraints are we willing to impose on private property, and what will actually work? I think those are very valid questions, and while the argument from blueterminal may have gotten there in a roundabout way (and in a way that some consider not in good faith), I think they are well worth considering in detail.

To me it's blatantly obvious something needs to be done, I'm just not sure what that is, and am slightly afraid we'll implement a fix that if not as bad as the problem, is still much worse than it needs to be unless we consider it carefully.


It's the pre-emptive foreclosure of discussion I find objectionable, negating the attempt rather than exploring the problem space.

I would personally reduce the private property privileges substantially since a social network by definition derives its value from the number and variety of people that use it. I'd like it if FB were at least as searchable to its users as it is to advertisers, for example; arguably FB knows more about many of its users than they know about themselves.


> I would personally reduce the private property privileges substantially since a social network by definition derives its value from the number and variety of people that use it.

That does make sense, and also fits with how private property can't entirely be viewed in isolation if it has negative externalities. I.e. you shouldn't be able to pollute the air immediately above your land as much as you want, because that pollution doesn't just stay on your land.

> I'd like it if FB were at least as searchable to its users as it is to advertisers, for example; arguably FB knows more about many of its users than they know about themselves.

That might help in some small amount (and as long as people actually reviewed the info and requested removal, and that had to be honored, it would help), but I'm not sure the companies in question wouldn't just turn their neuroscience divisions to the task of making people want to allow the data for some reason.

To me this feels more like a drunk driving type situation, or age of consent, or ability to sign away your rights. We believe some things should be disallowed because the combined cost to society of allowing them is much greater than the sum of the individual costs of disallowing them. But even if that can be sold as a good idea for this, I'm not sure what specific things we would do to block it that aren't nebulous and can be gamed. Maybe disallowing advertising, but that seems to be targeting one industry, and I suspect something else with a similar negative and the same incentives might take its place for this area.


Yes, make Facebook's news feed default to first in / last out, and let users upload community-created filtering algorithms so they can decide whether they see certain news/sites etc. or just their grandkid's pictures (and if there are none, they would probably log off of FB quite quickly, which is why FB does not offer this as an option now).
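A minimal Python sketch of what that could look like, assuming a hypothetical pluggable-filter interface (none of these names correspond to any real Facebook API): a filter is just a function over the list of feed items, and the default is plain reverse-chronological ordering.

  # Hypothetical sketch; FeedItem, FeedFilter etc. are invented names.
  from dataclasses import dataclass
  from typing import Callable, List

  @dataclass
  class FeedItem:
      author: str
      posted_at: float  # Unix timestamp
      kind: str         # e.g. "photo", "news_link", "status"

  # A filter is just a function from items to items, which is what would
  # let users swap in community-written ones.
  FeedFilter = Callable[[List[FeedItem]], List[FeedItem]]

  def chronological(items: List[FeedItem]) -> List[FeedItem]:
      """The proposed default: newest first, no engagement ranking."""
      return sorted(items, key=lambda i: i.posted_at, reverse=True)

  def photos_only(items: List[FeedItem]) -> List[FeedItem]:
      """Example community filter: just the grandkid pictures."""
      return [i for i in items if i.kind == "photo"]

  def build_feed(items: List[FeedItem], filters: List[FeedFilter]) -> List[FeedItem]:
      """Apply the user's chosen filters in order."""
      for f in filters:
          items = f(items)
      return items

  # A user who only wants recent family photos would pick:
  #   feed = build_feed(all_items, [photos_only, chronological])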


@blueterminal: There are many proposed solutions, some better than others. GDPR is a step in the right direction; it should not be easy or trivial to collect huge amounts of data on people, and there is no good rational justification for this behavior. Consumer protection is important and should be extended to how algorithms are used. Fine-grained targeting of ads (political or not) should not be allowed, due to the obvious negative consequences for societies. An expert panel could be given the task of making some sensible compromises.

The irony is that such regulations would ultimately benefit the companies that are currently being criticized, making their long-term survival more likely. Regular people would be better off too, and I would not need to discuss movies like The Social Dilemma on HN.


Stop people from laying out the traps, and say that it is not really fine. I guess that means regulation and enforcement thereof. Things go both ways, of course; there is a certain level of personal responsibility to teach and take into account.


> Stop people from laying out the traps, and say that it is not really fine. I guess that means regulation and enforcement thereof.

What kind of regulations would you impose? Any specifics?


How about we stop the hysterical hyperbole of calling speech "traps" instead?


Not the original poster - just replying with some ideas...

I would propose that Google in particular be broken into many different companies. The fact that they own such a large part of mobile, browser, search, video, email, news aggregation, and online office products is hugely problematic. I believe one of the issues that was not highlighted is the overall reach of these companies: the film focuses in on the mobile aspect, but there are huge swathes of data being collected via other means. Beyond a breakup:

* Search should not have a bias in its autocomplete; search should be agnostic, like Wikipedia. When I search, it should show the same results as when you search (a sketch of this follows the list).

* These companies should be required to give users access to their own data, along with the analytical output that creates their digital avatar, and there should be the ability to FULLY delete the data on request.

* Sites should be required to remain usable with browser anonymity enabled. Much of the data leakage is harvested automatically from the mobile device or browser; the ability to slow that trickle of information should be required by law.

* Perhaps the price an advertiser paid for each ad should be shown, with an opportunity for the user to opt out and instead pay for the service at the same rates in a truly anonymized fashion.

* More extreme options: restrictions on access based on age. We do this for cigarettes and alcohol; perhaps it should be applied to social media as well.

* Perhaps there should be a throttle on data gathering, or hard limits placed on how much data can be pulled in and kept, with maximum retention periods to make the models dumber. We have retention periods set for health data; why not restrict it for social media?
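To make the "agnostic search" bullet concrete, here is a minimal Python sketch, with all names invented for illustration (this is not any real system's API): ranking is a function of the query and the documents only, never of the user.

  # Hypothetical illustration: deterministic, user-independent ranking.
  from typing import List

  def rank_results(query: str, documents: List[str]) -> List[str]:
      """Every user who types the same query sees the same ordering,
      because no user profile enters the computation."""
      terms = query.lower().split()

      def score(doc: str) -> int:
          # Naive relevance: count query-term occurrences in the document.
          text = doc.lower()
          return sum(text.count(t) for t in terms)

      # sorted() is stable, so ties keep a fixed order for everyone.
      return sorted(documents, key=score, reverse=True)

  # Personalized search would instead be something like
  # rank_results(query, documents, user_profile); the proposal amounts
  # to forbidding that extra argument.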


Time and time again I see it suggested that these "let's be evil" giant companies like Google be split up, and I'm coming around to this. They aren't able to be accountable because they are far too giant (not necessarily in the head-count sense) to be influenced by human conscience. They take on a life of their own, unconstrained by their externalities, adapting like DNA, optimizing for their survival at any cost. Force these tech giants to split up, because we agree as a society that they are evil (or at least problematic). The individual people that make up the resulting mini-corps will then have more influence to steer these mini-hydras. What about shareholders?





