umvi's comments | Hacker News

> Well designed security models don't sell computers/operating systems, apparently.

Well, more like it's hard to design software that is both secure by default and non-onerous for end users (including devs). Every time I've tried to deploy non-trivial software systems to highly secure setups it's been a tedious nightmare. Nothing can talk to anything else by default. Sometimes the filesystem is immutable and executables can't run by default. Every hole through every layer must be meticulously punched; miss one layer and things don't work, and you have to trace calls through the stack, across sockets and networks, etc. to see where the holdup is. And that's not even counting all the certificate/CA baggage that comes with deploying TLS-based systems.


> Every time I've tried to deploy non-trivial software systems to highly secure setups it's been a tedious nightmare.

I don't know exactly which "secure setups" you're talking about, but the false equivalence between security and complexity mostly comes from security theater. If you start with insecure systems and then do extra things to make them secure, that additional complexity interacts with the thing you're trying to do. That's how we got into the mess with SELinux, syscall interception, firewalls, and all the other add-ons that pile on complexity in order to claw back as much security as possible. It doesn't have to be that way; it's just a matter of knowing how.

If you start with security (meaning isolation) then passing resource capabilities in and out of the isolation boundary is no more complex than configuring the application to use the resources in the first place.


Look at how people have responded to Rust. On the one hand, the learning curve for memory safety (with lifetimes and the borrow checker) can feel exhausting when moving from something like Ruby. But once you internalize the rules, you're generally cooking: it stops getting in your way and the benefits come naturally.

Writing secure systems feels similar. Retrofitting security onto something, as you said, can be a pain in the ass - and that retrofit-it-later mindset is still an engineer's default behavior even when building something new.


What's wrong with firewalls?

Or, what does the alternative world look like, where network security is more pleasant?


Firewalls are a fundamentally bad approach and are avoidable with good design.

Nothing should have access to the network by default. You can either get that right by limiting resource access (which is the job of the operating system) or you can get it wrong and have to expose new APIs and hooks to invite an ecosystem of many, slightly different, complicated tools to configure network access.

To give access to the network, you spawn the process with a handle to the port it can listen on, or a handle to a dynamically allocated port that it can only dial out of. This is no more complicated than configuration, and it doesn't have to be difficult for users. It can bubble up to a GUI very similar to what the iPhone has for giving access to location, contacts list, etc.
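
To make that concrete, here's a rough sketch of the shape of it in Node (not a real capability OS; the file names, port, and element of ceremony are all made up): the parent binds the port, then hands the child the listening socket as the only network handle it ever sees.

    // parent.ts (hypothetical name): bind the port here, then hand the child
    // the listening socket itself -- no config file, no firewall rule.
    import { fork } from "node:child_process";
    import { createServer } from "node:net";

    const listener = createServer();
    listener.listen(8080, "127.0.0.1", () => {
      const child = fork("./worker.js");
      child.send("listener", listener); // the handle crosses the process boundary
    });

    // worker.ts (hypothetical name): never calls listen() or connect();
    // it can only accept connections on the handle it was given.
    import type { Server, Socket } from "node:net";

    process.on("message", (_msg, handle) => {
      const server = handle as Server;
      server.on("connection", (conn: Socket) => {
        conn.end("hello from the confined worker\n");
      });
    });

On a stock OS the child could of course still open other sockets, so this only shows the ergonomics of the idea; in a capability system, "the handle you were given" is the only way to touch the network at all.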

The fact that most "security" people have a knee-jerk reaction to "firewalls are bad" is exactly the cultural problem I'm talking about. It's not a technical problem anymore; the solutions are known, but they aren't widely known, and they aren't known by decision makers. We've been doing it the wrong way for so long that even highly trained people reliably have bad taste.


There's a reason why all security professionals I know use an iPhone.

To my knowledge there hasn't been a single case of an iOS application being able to read the data of another application - or OS files it wasn't explicitly given authorisation to access.

It can be done, but for desktop it has never been a priority.

A bit like the earliest versions of Windows encountering The Internet for the first time. They were built with the assumption they'd be on a local network at best, where clients could be trusted. Then The Internet happened and people plugged their computers directly into it.


Lots of sandbox escapes on iOS, but my favorite was https://blog.siguza.net/psychicpaper/

> Well more like it's hard to design software that is both secure-by-default and non-onerous to the end users (including devs).

Doesn't Qubes OS count?


Yeah, would be nice to have a "start at level 100" button so you can skip to the challenging part


It starts getting tricky at 50; 100 is kinda like the end, you win.


I got to 77 on my first try and my loss was dumb. I could definitely do better, but I'm like, "It took so long to get to 77."


> manipulating vulnerable viewers

It always rubs me the wrong way when people infantilize the masses. The "vulnerable" masses already partake in plenty of harmful substances and practices (tobacco, alcohol, drugs, gambling/lotto); AI videos are just another potential pitfall people will need to learn to be wary of.


I think "learning that absolutely nothing you see with your own eyes can be trusted as reality despite looking completely real" is a valid problem and that we are all somewhat vulnerable there.


External C++ code never has CVEs? Or I guess since you are manually managing it, you are just ignorant of any CVEs?


I suppose this largely depends on the kind of software you write. Ideally, you also extract only the part of the external code that you need, audit it, and integrate it into your own code. This way you minimize the attack surface. I don't work on software that is exposed to the Internet, however, so admittedly the importance of security vulnerabilities is low.



Discussed here!

The best things and stuff of 2025 - https://news.ycombinator.com/item?id=46365726 - Dec 2025 (97 comments)


Not just programming either; he invented a mathematical technique for calculating the nth hex digit of pi
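
(Presumably a digit-extraction formula in the BBP family; the classic Bailey-Borwein-Plouffe identity is

    \pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)

which lets you compute the nth hexadecimal digit without computing any of the digits before it.)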


Atlassian recently did this with Bitbucket self-hosted runners. Is there a CI/CD cartel or something?


SVGs have a lot of security landmines; it's simplest to just disallow them, especially if they're untrusted (user-provided).


Definitely! In 2020, I reported an XSS vulnerability in GitLab using the onLoad attribute to run arbitrary JavaScript, and I was able to perform user actions without requiring any user interaction. For some reason it took them months to fix it after I reported it to them.
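
The general shape of that class of bug looks something like this (an illustrative sketch only, not the actual GitLab issue; the element id and payload are made up): if an app injects an untrusted SVG inline, attribute handlers run in the app's origin, whereas serving it through an <img> keeps scripting disabled.

    // Illustrative only: a hypothetical attacker-supplied "image" that carries
    // an event handler instead of pixels.
    const untrustedSvg =
      '<svg xmlns="http://www.w3.org/2000/svg" onload="alert(document.domain)"></svg>';

    // Dangerous: the SVG joins the page's DOM and its onload handler fires.
    document.getElementById("avatar")!.innerHTML = untrustedSvg;

    // Safer: rendered via <img>, browsers ignore scripts and event handlers in the SVG.
    const img = document.createElement("img");
    img.src = "data:image/svg+xml;base64," + btoa(untrustedSvg);
    document.body.appendChild(img);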


Protobufs are a pain to debug and maintain compared to JSON, and modern browsers support zstd compression, making JSON "efficient".


Have any videos or pictures you can share?


