
I don't really want to engage with the RESF. We have the level of safety that we feel is appropriate. Believe me, we do feel responsible for quality, working code: but we take responsibility for it personally, as programmers, and culturally, as a community, and let the language help us: not mandate us.

Give us some time to see how Hare actually performs in the wild before making your judgements, okay?



I'm a security professional, and I'm speaking as a security professional, not as an evangelist for any language's approach.

> Give us some time to see how Hare actually performs in the wild before making your judgements, okay?

I'm certainly very curious to see how the approach plays out, but only intellectually. As a security professional, I already strongly suspect that improvements in spatial safety won't be sufficient to change the types of threats a user faces. I could justify this point, but since I suspect there's no desire for that here, I'll leave it as a hand-wave from an authority position.

But we obviously disagree, and I'm not expecting to change your mind. I just wanted to comment publicly that I hope we developers will build a culture where we think about the safety of users first and foremost and, as a community, prioritize that over our own preferences about the programming experience.


I am not a security maximalist: I will not pursue it at the expense of everything else. There is a trend among security professionals, as it were, to place anything on the chopping block in the name of security. I find this is often counter-productive, since the #1 way to improve security is to reduce complexity, which many approaches (e.g. Rust) fail at. Security is one factor which Hare balances with the rest, and I refuse to accept a doom-and-gloom the-cancer-which-is-killing-software perspective on this approach.


You can paint me as an overdramatic security person all you like, but it's really quite the opposite. I'd just like developers to think more about reducing harm to users.

> to place anything on the chopping block in the name of security.

Straw man argument. I absolutely am not a "security maximalist", nor am I unwilling to make tradeoffs - any competent security professional makes them all the time.

> the #1 way to improve security is to reduce complexity

Not really, no. Even if "complexity" were a defined term I don't think you'd be able to support this. Python's pickle makes things really simple - you just dump an object out, and you can load it up again later. Would you call that secure? It's a rhetorical question, to be clear, I'm not interested in debate on this.

> I refuse to accept a doom-and-gloom the-cancer-which-is-killing-software perspective on this approach

OK. I commented publicly that I believe developers should care more about harm to users. You can do with that what you like.

Let's end it here? I don't think we're going to agree on much.


> There is a trend among security professionals, as it were, to place anything on the chopping block in the name of security.

I really have to disagree with this, in spite of not being a security professional, because history has proven that even a single byte written where it shouldn't be, whether via buffer overflow or dangling pointer, can be disastrous. Honestly I'm not very interested in the other aspects of memory safety; it would even be okay if such an unexpected write reliably crashed the process, or the equivalent. But that single aspect of memory safety is absolutely crucial, and disavowing it is not a good response.
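
To make this concrete (a contrived C sketch of my own, not taken from Hare or from anyone in this thread): a single stray byte past the end of a buffer can silently flip security-relevant state. The overflow below is undefined behavior; on typical struct layouts the extra byte lands in the neighboring field.

    /* Off-by-one copy: a single NUL byte written past name[] disables an
     * adjacent security check. */
    #include <stdio.h>
    #include <string.h>

    struct session {
        char name[8];        /* fixed-size name buffer             */
        char checks_enabled; /* lives immediately after the buffer */
    };

    /* Bug: allows n == sizeof(name), leaving no room for the NUL. */
    static void set_name(struct session *s, const char *name)
    {
        size_t n = strlen(name);
        if (n > sizeof(s->name))   /* should be n >= sizeof(s->name) */
            n = sizeof(s->name);
        memcpy(s->name, name, n);
        s->name[n] = '\0';         /* when n == 8 this writes one byte
                                      past name[], into checks_enabled */
    }

    int main(void)
    {
        struct session s = { .checks_enabled = 1 };
        set_name(&s, "exactly8");  /* exactly 8 characters */
        printf("checks_enabled = %d\n", s.checks_enabled); /* 0 on typical layouts */
        return 0;
    }

A bounds-checked language, or even a write that reliably traps as I said above, turns this into a crash instead of a silent change to a security check.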

> [...] the #1 way to improve security is to reduce complexity, [...]

I should also note that many seemingly simple approaches are complex in other ways. Reducing apparent complexity may or may not reduce latent complexity.


History has also proven that every little oversight in a Wordpress module can lead to an exploit. Or in a Java Logger. Or in a shell script.

And while a Wordpress bug might "only" lead to a leaked user password database rather than a fully compromised system, it's a valid question which of the two is actually worse from case to case.

The point is just that, from a different angle, things are maybe not so clear.

Software written in memory-unsafe languages is among the most used on the planet, and in many cases could not realistically be replaced by safer languages today. It could also be that while bugs per line might be higher in unsafe languages, the bang for the buck (useful functionality per line) is often higher as well (I seriously think that could be true).


Two of your three examples are independent of programming languages. Wordpress vulnerabilities are dominated by XSS and SQL injection, both of which are natural issues arising at the boundary between multiple systems. The Java logger vulnerability was mostly about unjustified flexibility. These bugs can occur in virtually any other language; solutions to them generally increase complexity, and Hare doesn't seem to significantly improve on them over C, probably for that reason.
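
To illustrate why these are language-independent, here is a minimal C sketch against SQLite's C API (my own example; Wordpress is PHP/MySQL, but the mechanics are identical). The bug comes from splicing untrusted input into the query text at the boundary between the program and the database; the fix is to bind the input as data, which essentially every language's database interface offers.

    /* Compile with: cc inject.c -lsqlite3 */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        char sql[256];
        const char *attacker = "nobody' OR '1'='1";   /* untrusted input */

        sqlite3_open(":memory:", &db);
        sqlite3_exec(db,
            "CREATE TABLE users(name TEXT, secret TEXT);"
            "INSERT INTO users VALUES('alice','hunter2');",
            NULL, NULL, NULL);

        /* Vulnerable: the input becomes part of the SQL text itself. */
        snprintf(sql, sizeof(sql),
                 "SELECT secret FROM users WHERE name = '%s'", attacker);
        sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("leaked: %s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);

        /* Safer: the input is bound as data and never parsed as SQL. */
        sqlite3_prepare_v2(db,
            "SELECT secret FROM users WHERE name = ?", -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, attacker, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) != SQLITE_ROW)
            printf("parameterized query matches no user\n");
        sqlite3_finalize(stmt);

        sqlite3_close(db);
        return 0;
    }

Neither version has anything to do with memory safety, which is why a new systems language can't be expected to fix this class of bug: it lives at the seam between two languages (the host language and SQL), not inside either one.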

By comparison, memory safety bugs and shell script bugs mostly occur in specific classes of languages. It is therefore natural to ask new languages in those classes to pay more attention to eliminating those sorts of bugs. And it is, while not satisfying, okay to answer in the negative while acknowledging those concerns; Hare is not my language, after all. Drew didn't, and I took great care to say just "not a good response" instead of something stronger for that reason.


> #1 way to improve security is to reduce complexity

If managing memory lifetime is an inherently complex problem (which it is), the complexity has to live somewhere.

That somewhere is either in the facilities the language provides, or in user code and manual validation.
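
A small C sketch of my own to show what the second option looks like in practice: the lifetime rules exist only as comments and conventions that every caller must re-verify by hand, which is where the complexity, and the validation burden, ends up living.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct user {
        char *name;   /* owned by the struct; released in user_free() */
    };

    /* Convention, not enforced: the caller owns the result and must call
     * user_free() exactly once. */
    static struct user *user_new(const char *name)
    {
        struct user *u = malloc(sizeof *u);
        if (!u)
            return NULL;
        size_t n = strlen(name) + 1;
        u->name = malloc(n);
        if (!u->name) {
            free(u);
            return NULL;
        }
        memcpy(u->name, name, n);
        return u;
    }

    static void user_free(struct user *u)
    {
        if (!u)
            return;
        free(u->name);
        free(u);
    }

    int main(void)
    {
        struct user *u = user_new("alice");
        if (!u)
            return 1;
        const char *name = u->name;  /* "borrowed": only valid while u lives */

        user_free(u);
        /* printf("%s\n", name);        use-after-free: nothing in the
                                        language stops this; the guard is
                                        the comment above and code review */
        (void)name;
        return 0;
    }

A borrow checker or a garbage collector doesn't make the problem disappear; it moves the same complexity into the language's facilities so that user code no longer has to carry it.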



