
I'm a fan of the chemistry analogy here: you can catalyze (reduce barriers) and you can incentivize (increase rewards). If the incentives point the wrong way, catalyzing won't help. If the barriers are too high, incentivizing won't help.

I strongly suspect we're in a "barriers too high" regime rather than an "incentives are too low" regime, and that our money would be better spent reducing barriers than increasing incentives.



I think I agree, but the question seems to be: how do we lower the barriers smartly while still preserving the intent of the rules and regulations? Add to that our own bias toward weighting the regulations that concretely prevent us from doing what we want right now over those that abstractly benefit us, and it's easy to point to regulation as the problem.

It's like the "one man's trash is another man's treasure" cliche: something that you see as an impediment, I see as a social good. Finding the right balance is tricky, and unfortunately, in a polarized political environment, it's easy for people to fall to the far sides of the argument.


The real question is: why did we install the protections (barriers) in the first place? I assume a major component was a reaction to the increasing complexity of modern life. Self-sufficiency is fine, but it is difficult to knowingly accept the consequences of many things. So we created, one at a time, experientially driven barriers designed to protect people from new, novel issues.

I don’t think we need to offer more incentives or reduce barriers per se, but I think we really ought to be talking about refactoring our laws. The amount of legislative “technical debt” is enormous and problematic.


Underneath many of the regulations in American society, there is usually a dead body buried, possibly many dead bodies, including those of women and children.

For each and every one, someone suffered terribly or died.


Unfortunately, many times the dead body belonged to an idiot whose foolishness won them a Darwin Award and a legacy of inconveniencing millions of people.

We place too high a value on human life, and it's no wonder that we can't accomplish anything when a non-zero level of risk is unacceptable.


> We place too high a value on human life

I don't think this is quite right. More accurately, I think the risk assumer and the risk accepter need to be the ones making the informed decision.

Too often there is someone making a decision that someone else pays for. If you want to strap yourself to a rocket and go into space, and the risks have been communicated to you and you have accepted them, that's one thing. It's entirely another when you get into a car running on software that you have no clue about. In the latter case, you aren't making a risk-informed decision, because the person or corporation determining that the risk is acceptable doesn't flow that information down to you (and there's a limit to how many such decisions any one person has the bandwidth to make in an informed way). That's where regulation levels the playing field somewhat, ensuring that perverse incentives don't cause risks to be accepted by those who don't have to bear them. Neither I nor [insert relevant company name here] should be the decider on what risk you are (perhaps unwittingly) accepting.


> We place too high a value on human life, and it's no wonder that we can't accomplish anything when a non-zero level of risk is unacceptable.

This is too close for comfort to another formulation of the same thought: "we should allow corporations to kill people for profit, with impunity."


No, it's really, really not. It would be nice if we could have a discussion of acceptable levels of risk versus inefficiency without immediately taking a left turn into "Soylent Green is People!" territory.


The "lefty" epithet, how handy.

You could start by telling us why the statements aren't the same, using examples and facts.

You seem to want to have an abstract, bloodless, clinical discussion of when it is acceptable to kill people in the name of "efficiency" (I will interpret this as "profit").

I can tell you that is a dishonest discussion, if that's what is intended. If you want to have it honestly, you will first have to indicate an understanding of exactly, in detail, the most evil things that will happen as a result, and the most devastating things that will happen and be shielded from proportionate legal recourse under your proposed or imagined regime.

It won't be a bloodless conversation.


Evil, and freedom from blame, do not necessarily follow from acknowledging that some level of risk is acceptable.


This is a non-statement intended to deflect into the abstract and away from actual consequences.

Policies and regulations aren't made in some Platonic ideal universe, they are made in specific, factual circumstances.

Come back to us when you can talk about how a specific "acceptable" level of risk, for a specific policy area, does not involve enabling evil actors and indemnifying wrongdoers.

Then we'll talk about the specific things you find acceptable, and just whose death and suffering you will trade for profit.


I don't read their statements the same way you do. The way they come across to me is that there needs to be a discussion of risk grounded in the understanding that in many (most?) domains, zero risk is unattainable.

It's like the idea of the FDA setting limits on the amount of insect contaminants allowed in food. At first it seems disturbing, until you realize that if they put a zero-tolerance policy in place, there would be near-endless grounds to sue every food manufacturer out of business.


> in many (most?) domains, zero risk is unattainable.

Not only is it unattainable, it's actively counterproductive to try to achieve, because of unexamined and unintended consequences.


Yes, but that doesn't mean the regulations are still relevant, or that they were ever good regulations to begin with. The PATRIOT Act came from a horrible tragedy, and many people will tell you how terrible it is.


“Refactoring” is a great word choice. It implies there might be a set of test scenarios we check against before revising a law, to make sure we don’t break the intent. It also sets an expectation of continuous improvement.
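To make the metaphor concrete, here is a toy sketch in Python (the "statute", its names, and its thresholds are all invented purely for illustration, not a real legal framework): treat a law as a function and its intent as a small regression suite, so a rewrite only ships if the intent tests still pass.

    # Toy illustration only: model a statute as a function and its intent
    # as regression tests; a refactored statute must pass the same tests.

    def quiet_hours_violation(hour, decibels):
        # Original "statute": noise over 60 dB between 22:00 and 07:00.
        return (hour >= 22 or hour < 7) and decibels > 60

    def quiet_hours_violation_v2(hour, decibels):
        # Refactored wording, intended to have the same effect.
        night = not (7 <= hour < 22)
        return night and decibels > 60

    def check_intent(statute):
        # The "test scenarios" that encode legislative intent.
        assert statute(23, 80)       # loud at night: violation
        assert not statute(14, 80)   # loud at midday: allowed
        assert not statute(23, 40)   # quiet at night: allowed

    for law in (quiet_hours_violation, quiet_hours_violation_v2):
        check_intent(law)
    print("refactor preserved intent")

If the rewrite silently narrowed or widened the rule, a test would fail before the change shipped, which is exactly the continuous-improvement loop being described.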



