We've learned nothing from the SolarWinds hack (macchaffee.com)
143 points by dvfjsdhgfv on Nov 13, 2023 | 110 comments


> If we could agree on a good, standardized capabilities model for software and everyone starts using it, we will have reached security Nirvana.

I want to get there, but I also don't trust anyone except (maybe) myself to determine what capabilities are "good" versus "bad".

I literally just spent most of today messing with my 2018 Samsung TV trying to get rid of the bloatware they pre-installed. I had to do that because the apps took up 96% of the storage within. These apps cannot be deleted normally.

If we're heading towards a place where vendors are the ones deciding what users are allowed to do, as well as what the software is allowed to do and at what level of granularity, I have no reason to trust them any more than I trust a Russian state-sponsored hacker.


But capabilities are not good or bad, any more than, say, integers are.

What the post talks about is "a good system" of capabilities, that is, a comprehensive, logically sound, lean way to describe capabilities that would be widely accepted and applied. With that, indeed a lot could be achieved.


I think you can already see the issue with that proposition in Google's attempts to prevent ad-blocking in MV3. Google went so far as to design a brand new API for handling web requests in such a way as to make it difficult or impossible to do proper blocking. They "reworked" the capability.

If we assume that the "owner" of this list of capabilities is the same party that's writing the software, you better believe they don't (or won't eventually) have the user's needs in mind. Enshittification makes a fool of us all.


We've seen this elsewhere. "Security" becomes not about protecting the user's data from malicious operators, but protecting the manufacturer's data from the user.

If we do agree on a standard, it'll need to be about actually protecting the user, but implemented by manufacturers who have no incentive to actually do that, and lots of incentive to protect their data, sorry, business model.


"I literally just spent most of today messing with my 2018 Samsung TV trying to get rid of the bloatware they pre-installed removed."

Can you elaborate?


Buy a brand new TV.

TV comes with a 5 GB hard drive.

TV comes with 4.75 GB of OS + bloatware apps.

Watch YouTube, and YouTube caches 0.25 GB for a 2-hour-long video.

TV complains of no storage.

Rinse. Repeat.


He did elaborate. "I had to do that because the apps took up 96% of the storage within." Literally the line right after the one you quoted...


I second this, please. I can't even update the preinstalled apps I use on my 2018 Samsung TV because they keep adding more default apps and there's literally zero free space with nothing but their default apps installed.


I combined a bunch of awful "tech tips" websites into a working approach:

1. Reset my TV to factory settings.

2. Go into the "Apps" item while it desperately tries to update itself.

3. Go into "Developer Mode" by pressing 12345 on the remote.

4. Restart the TV by holding down the power button for 2 seconds.

5. Open "Apps" again

6. Open the cog.

7. Under the app you want to remove, press "Deep Link Testing" and then hit cancel.

8. If the "Apps" app hasn't been updated yet, your delete button should magically reactivate.

I was able to delete the product manual, which is like 90 MB. All the others were reloaded after another restart, but it gave me enough wiggle room to update and install other apps.


Thank you!


this is madness


As a tech literate person, it was by far the most infuriating experience in recent memory, but achieving some level of victory was fun! Reminded me of hacking my xbox as a teen.


> I want to get there, but I also don't trust anyone except (maybe) myself to determine what capabilities are "good" versus "bad".

The illusion of control and oversight is why blue teams always lose the cyber game.

It's not feasible, and you effectively don't have time for it. The millions of alerts per day are the result of people still spinning the hamster wheel because they never take a break and think about how this is going to end up.

Don't get me wrong, but I think this is a clustering and statistics problem. You can never be 100% secure and sure about everything, but you can always push the odds in your favor on a larger scale of nodes.

Nobody cares whether a system got hacked if there was no compromise of the database and it can be rolled back easily afterwards. That's why I believe in a fully automated approach based on a strong(er) inventory and stronger communication between systems with regard to incidents and responses.


Off-topic tip: buy a PC monitor instead and attach an Nvidia Shield or your own box. I'll never go back to normal TV screens. I also like the more natural color feel of PC monitors. TVs are bloated with color filters, frame interpolators, and more.


PC monitors are dramatically more expensive though. A regular 55-inch TV is like $300-$500, while a monitor that size is gonna be thousands of dollars. (Edit: And you can usually tinker with the color profiles or just turn on Game Mode if you hate the enhancements. It'll never be as good as a properly calibrated PC monitor with ultra color range or whatever, but it's fine for entertainment.)

Same with the Shield, it's like $150-$200 while a Chromecast with Google TV is $50 and does pretty much the same thing. Either would be a huge upgrade over integrated smart TV UIs.


There are "dumb TVs" - https://www.tomsguide.com/how-to/how-to-buy-a-dumb-tv-and-wh...

I have the latest LG OLED and I would have happily paid more to get it without the smarts built in. I'm using it with an attached Apple TV, and the experience is sublime compared to the built-in crap.


You can also just get any regular "smart" TV and never set up its wifi. I plugged in my Chromecast as soon as I got it, and never used the TV's built-in stuff. It works fine and has no ads or anything. It just boots directly into the Chromecast.


That's the only way.

Except using a Chromecast is just undoing what you thought you had accomplished.


I don't care about the tracking, if that's what you mean? The TV UIs are just so slow and bloated and terrible. The Chromecast is a little faster. Apple TV is even more performant, but their content hierarchy and recommendations suck compared to Google.

I think the Shield was nice for a while too but then Nvidia started adding their own ads.

The Chromecast with Google TV is as close to a "Pixel for TV" as we're gonna get, I think. Wish they'd make a version with a faster processor though. The current model is still noticeably laggy.


> then Nvidia started adding their own ads

It has definitely been a series of paper cuts on my Shield, which I've had for a long time (2018?).

Originally awesome, but then the last major Android update added the advert banner. It was defended because it basically only showed adverts for shows and movies.

In the past fortnight it has started showing other adverts, perfume I think.


People forget how much the cost of a SmartTV is subsidized with the data collection. PC monitors are not doing that, yet. I'm guessing refresh rates and some other features are playing into the pricing difference, but without them being subsidized with your analytics, it's unlikely they'll ever be as cheap.


> People forget how much the cost of a SmartTV is subsidized with the data collection.

People keep saying this with zero proof or evidence.

We used to be able to buy TVs that were pretty nice and intelligent and didn't have all the data collection and they didn't cost that much more. I'd like to be able to buy those again.


> People keep saying this with zero proof or evidence.

There is so much available data on this subject I assume at this point it is common enough knowledge people don't feel the need to cite sources.

https://www.washingtonpost.com/technology/2019/09/18/you-wat...

https://tv-watches-you.princeton.edu/

https://www.consumerreports.org/electronics/privacy/how-to-t...


Your honor, I'd like to show the jury this link we've marked as Exhibit A to counter the claim of the witness:

https://www.theverge.com/2021/11/10/22773073/vizio-acr-adver...


Yeah so Vizio made $502.5M in device revenue and $57.3M in Platform+ gross profit.

So if the devices cost 11.4% more that'd compensate for the elimination of Platform+ profit.

And depending on the accounting it could be much less than that since the cost of the devices would go back to being much cheaper if they were just dumb TVs with HDMI ports and no spyware/adware.

So that's about what I thought.
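For anyone who wants to check the arithmetic, here's the back-of-the-envelope version, using only the two figures quoted above:

    # Quick sanity check on the Vizio numbers quoted above (figures in USD).
    device_revenue = 502.5e6    # device revenue
    platform_profit = 57.3e6    # Platform+ gross profit

    # Price increase needed for hardware margin to replace Platform+ profit entirely.
    increase = platform_profit / device_revenue
    print(f"{increase:.1%}")    # -> 11.4%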


I'd give them all my viewing data if they'd give me a free TV, lol. In this case it's actually helpful for generating recommendations anyway.


Luckily there are still some companies building dumb TVs, and with reasonably good quality.

I personally have been buying Swedx for 5 years or so.


Yeah this was a bad purchase overall, but I figured if worse comes to worst, I can just hook up a media box of some sort and entirely ignore the rest of it. Definitely getting close to that call.


They're like $50 and DRAMATICALLY better than what the TVs offer. It makes a huge difference :)

There is still some bloat in them (bundled apps) but it's much less intrusive and crappy than the TV ones, IMO.

Everyone makes them now... Google, Amazon, Apple, Nvidia, Roku...


Knowing your supply chain is still going to be an important aspect of fixing this. The author is absolutely right that our build systems can be compromised, but securing them is another part of securing the supply chain. Managing updates carefully is yet another aspect of securing the supply chain. Auditing (or 'deploy another agent') is still going to be an important aspect of verifying that various parties are operating as we would expect. Making all of these better is pretty important toward hardening our systems.

The proposed solution - 'run software in a way where we don't really care if it has a vulnerability, because it will happen' - is close to reality, but not quite there. The reality is that no matter how good our technical solutions are, trust ends up being a people problem, which means that compromises of that trust are going to happen. No matter how fine-grained your capability-based system is, if a system has the capability to do anything, then you're going to suffer loss when it is compromised. If a system gets compromised or corrupted and you don't care, then it's not a system you were using for anything useful in the first place.

While capability-based systems are another important part of the solution, I would counter-propose that what we're actually looking for is compartmentalization. Acknowledging the reality that parts of our system are going to be compromised, and that we are going to suffer some loss when that happens, carefully controlled capabilities can help scope the blast radius of those compromises.
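To make that concrete, here's a toy sketch in Python of the capability-passing style I mean. Python itself doesn't enforce any of this, so treat it purely as an illustration of the design idea: the component gets a handle to exactly what it needs, and the blast radius of a compromise is that handle, not the whole machine.

    # Toy illustration of capability-style compartmentalization (Python does not
    # actually enforce this; it only shows the shape of the design).

    def generate_report(records, log_sink):
        """A potentially-compromised component: it receives an already-opened,
        append-only sink rather than a path plus ambient filesystem authority,
        so the worst it can do is write garbage to that one sink."""
        for record in records:
            log_sink.write(f"{record}\n")

    # The caller decides exactly which capability to grant.
    with open("report.log", "a") as sink:
        generate_report(["ok", "ok", "degraded"], sink)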


We are still installing obscure binary-only datacenter-wide software with full system permissions for security reasons, and powerful organizations still require that from third parties.

If we can't change that, what hope do we have for a sane, small-kernel, capability-based system?


> We are still installing obscure binary-only datacenter-wide software with full system permissions for security reasons, and powerful organizations still require that from third parties.

Do you mean macOS? It falls under this definition from what I can see.


No, I mean software like SolarWinds' IDS and system auditing tools. But yes, proprietary OSes satisfy that too.


Darwin has been released as open source. I haven't looked at it in a really long time. Has its availability situation changed? Is it woefully out of date compared to current releases? Hell, does macOS still use Darwin underneath?


The problem is that “Darwin” really means “whatever random subset of projects Apple decided to open source”. Perhaps once it was more of a cohesive thing; I know there were attempts by others to build a Darwin-only operating system, though I wasn’t around then. But even then it was never anything more than a toy.

In any case, today Darwin really is just a subset of projects, which can’t be used (or sometimes, can’t even be compiled) without other projects that are closed source. The projects that are open source are still updated, but new projects are rarely open sourced. And that includes rewrites: when a project goes through the rewrite treadmill and gets replaced with a shiny new codebase written from scratch, that new codebase is often placed in a new project, which like any other new project is usually not open source. Thus, even though open source projects usually aren’t directly changed to closed source, the amount of functionality provided by open source projects shrinks.


Chef and Puppet and other "control plane" software would satisfy that definition.


Chef and Puppet are binary-only?


Chef's omnibus installation is a full Ruby interpreter along with associated support libraries like OpenSSL and everything more or less above glibc necessary to run the client. It may not be binary-only, but there's ~100 MB or so of binaries in it.

Pretty sure Puppet has something similar with pre-built binaries.


I believe most orgs have switched to Ansible already, not because of binary blobs or their size, but because it removes the need to install anything more than an SSH server.


Yeah, I worked for 12 years on the chef-client so I'm well aware of that.


>The Inconvenient Truth about how to actually fix this

The security game is just too fast paced for a profitable business. Until we reach the point where it makes financial sense to move slowly and take extra time to ensure critical business systems are secure, nothing will be fixed.


A lot of times people compare software development in general to the amount of effort that goes into software for safety critical systems in things like airplanes or manned spaceflight. What's less talked about is that in those same areas, overall system design also focuses on reducing critical failure points through redundancy, backups, cross-checking between different sources, manual alternatives, fail-safes, etc.

I totally agree that putting more thought and care into software development could certainly be beneficial, especially where security is critical. But I think the point of the article is that an alternative, and maybe more realistic, approach is to make critical systems and networks less brittle so an intentional or unintentional software issue isn't the end of the world. Even in airplanes, computers don't always work right. But it (mostly) doesn't result in planes falling out of the sky because the safety of the system as a whole isn't reliant on one piece of software working perfectly 100% of the time.


You mean we should make companies pay hefty fines?


That sounds great(-ish) when the company in question is one like SolarWinds that does hundreds of millions in revenue, but people and companies are equally reliant on open source software or software from tiny businesses. Trying to fine random Github users or the Linux foundation or whoever seems like a losing strategy.

It also ignores the point from the article that the solution can't be trying to make all software secure. Whatever the incentive mechanism and whatever the source of the insecurity, it's unrealistic to think we're ever going to be able to make that a reality. But designing the environments software is used in such that broken software isn't the end of the world seems doable, if also incredibly difficult.


You do not fine Home Depot if a suspension bridge fails because the builder decided to go there and buy the cheapest rivets to hold up their suspension bridge instead of sourcing components that meet the design specifications. You fine the bridge builder for using components unfit for purpose.

For that matter, even considering the use of components that are unfit for purpose is inconceivable in most professions. Only in software do people use systems with no specifications, no guarantees, and where even the component makers did not intend or design them to be load-bearing.


The inconvenient truth is that people say they want more, better security but they don't want to pay for it. So to survive you need "just enough" security to get by, and that "just enough" turns out to be quite low. It's also not clear that significantly more proactive security is necessarily better - humanity is still collectively figuring out how to balance security/privacy against convenience, and we do risk fossilized information technology stacks if we lean on security too hard.

That said, it's often easy things that get the most benefit and we are collectively a long way away from getting the basic security things done properly across the board.


> The inconvenient truth is that people say they want more, better security but they don't want to pay for it.

They are already paying for it, and not getting the security.


That probably costs us as a society more man hours than it saves.

Particularly in reduced competition due to deep pockets being required to even play the game.


How do you differentiate between honest to goodness mistakes and a willful/malicious attitude toward just getting (product) done to make money at all costs?

It seems like a fine structure would have to take that into account? Or am I way off base?


It's all the same to the users, and making mistakes impossible is also part of quality. I'd say go for it.


You can reduce mistakes, but you can't make mistakes impossible. This is a very rough take.

We wouldn't have the level of tech we have today if we were to require 'mistakes to be impossible'. Rapid growth requires mistakes to be acceptable in some situations.


It also entrenches Big Tech, because what FAANG and the big banks can pay, no startup or small company can afford.

On the other hand, quality needs fines. Otherwise, it's too easy to "forget" an inconvenient quality requirement to make a short-term profit (and sometimes compromise users' data).


We need to recognise that features bring bugs, and if we want fewer bugs we need to settle for fewer features. Same goes for security bugs as well.

You can have devs work on bug fixing or you can have them work on features. Whichever brings in more money will be prioritized.


I dunno...It definitely makes financial sense to make sure stuff you work on doesn't appear in someone else's DEFCON presentation.


I'm not sure that's actually true, since basically every company and every project of significant size has been the subject of DEFCON presentations. Sure, it's temporarily embarrassing and unfortunate, but it's not like customers are going to move to the non-existent alternatives that have never had a security vulnerability.

There's certainly degrees of CVE counts and the like, but as long as it's not egregiously worse than everyone else, being the subject of a DEFCON presentation puts you in pretty good company with literally everyone else.


That... just isn't true. I spent a dozen years working on AI-enabled surveillance cameras, and there were some vendors whose cameras would continually show up, and vendors (including ours :) whose cameras didn't.

And while there are customers who are aware of things like this and shop accordingly, I'd personally be more concerned about my own reputation damage. It's not the sort of thing I'd like to get questions about in a job interview.


I'm not so sure. Atlassian software comes to mind as a counter-example.


Businesses will lose money though, if not from the initial hack then eventually due to loss of the public's trust, etc., if the hack is made public knowledge.

Surely the government lost a lot of money from this hack, assuming they had to hire consultants, etc., just to deal with this massive oversight.


Nah, they really don’t.

The fact that SolarWinds and Equifax still exist after such egregious security errors, with a botched response at best or an intentional attempt at covering up the size/scope of the problems at worst, should tell you everything you need to know.


Over time (particularly since iOS 6), fewer and fewer permissions have been granted to iOS apps by default, instead requiring apps to request those permissions from users explicitly. It's still not perfect (like access to contacts still being a binary "yes/no"), but every permission clawed back from the default set required breaking backwards compatibility, a phrase rarely uttered in regard to the Linux and Windows kernels.

If you have been an iOS developer since 2012, I'm sorry you had to go through that, but your extra work has been profoundly important to the privacy and security of mobile OSes. I'd like to see that same principled energy brought to desktop and server OSes.

Or, demand the EU let you side-load so you don't have to bother.


On Android at least sideloaded apps still have to ask for permissions. Is this not the same on iOS? Is the permission model not enforced by the OS? I would be shocked.


But there's no way to side-load on iOS, AFAICT.


You can do it already; it is just unreasonably cumbersome.

https://altstore.io/

Or you can install a company cert profile, or pay $100 a year for a dev account.


Side-loaded apps still have the same permission model?


Permissions I've read side-loaders insisting are crucial to them:

- permission to ignore my subscription intents

- permission to bypass my private relay prefs

- permission to share my PII and purchase profile to third parties

- permission to share my browsing with adtech

- permission to dynamically load unvetted code

- permission to bully the user into letting the app out of the sandbox model

- permission to bully users into granting elevated privs

There are any number of permissions devs think side-loading will give them, and any number of these are actively misused by app devs today on the macOS side. A majority of the side-loading table thumpers build business models on the user as the product, the user's data as the thing of value. These firms are not on the user's side.

Of course not you, gentle reader. You only want side-loading for good. The government only wants encryption backdoors for the children. If we think back doors and side doors weaken security posture or tempt legislatures, we should maintain that stance.

Instead of building holes we pinky swear are only for the good guys, imagine there were no holes.


You are hallucinating holes.

I don't even know exactly, on an OS-permission level, what you mean by some of these, but at least for some of them I know for a fact you currently can't do them by sideloading an app.


Didn't say you could. I said these are among the reasons growth hackers cite in public for side-loading.

I'd be happiest if even payments and subscriptions were still required to go through the customer-centric system Apple set up, the number one dev complaint.

For users, ideally Apple would relax nothing for a side-loaded app, and would maintain a stringent review process for whose certs can authorize apps, and a zero strikes policy for revoking such certs. Unfortunately, some of these will be relaxed.


> I'd be happiest if even payments and subscriptions were still required to go through the customer-centric system Apple set up, the number one dev complaint.

It is the number one complaint due to the 15-30% cut they take, but even now, entities such as Amazon, Uber, and others dealing in real-life goods don't have to use Apple's system. It is not customer-centric; it is profit-centric.

> For users, ideally Apple would relax nothing for a side-loaded app, and would maintain a stringent review process for whose certs can authorize apps, and a zero strikes policy for revoking such certs. Unfortunately, some of these will be relaxed.

Could Microsoft release an xCloud client under these rules? Torrent apps? Emulators? These are all perfectly legal, and some of them don't even require payments. Could Apple still collect the $100 per year from these devs?


Yes. There's no way for an app to sidestep that.

AFAICT the only way to sidestep things on Android is to root the phone, then explicitly install some native-code binaries that directly access things normally controlled by permissions. This is not something that can easily happen by mistake, let alone by normal operation.


If I have to choose between a phone that's "secure" and one that I can actually run the apps I want to on, guess which I'm choosing. What's the point in a phone that you can't even look at porn on?


> There is absolutely no way we can perform a full binary analysis of every new version of every binary blob that powers modern IT

I wonder if there's enough demand for a service like this to be viable. Bundled with a universal package manager, signed and verified binaries, caching mechanisms, etc.


I'm sure there's enough demand for someone to sell a service that attempts this, but for several of the reasons mentioned in the post I expect it would be ineffective.

When you run all your code and all its dependencies with full authority, it only takes one tiny piece of malicious code to blow the whole system wide open. I think scanning will always be a losing battle.


Plug: we've been building Packj [1] to detect malicious Python/NPM/Ruby/Rust/Java/PHP packages. It carries out static/dynamic/metadata analysis to look for "suspicious" attributes such as spawning of shell, use of files, network communication, use of decode+eval, mismatch of GitHub code vs packaged code, and several more.

1. https://github.com/ossillate-inc/packj
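For anyone curious, here's a deliberately simplified toy sketch of what a static "suspicious attribute" check looks like. Our actual analysis is far more involved, so treat this as an illustration of the idea only:

    # Toy sketch of a static "suspicious attribute" scan over a package's source
    # tree -- an illustration of the idea, not Packj's actual implementation.
    import re
    from pathlib import Path

    SUSPICIOUS = {
        "shell spawn": re.compile(r"os\.system|subprocess\.(Popen|run|call)"),
        "decode+eval": re.compile(r"eval\s*\(.*b64decode|exec\s*\(.*b64decode"),
        "network use": re.compile(r"socket\.socket|urllib\.request|requests\.(get|post)"),
    }

    def scan_package(root: str):
        findings = []
        for path in Path(root).rglob("*.py"):
            text = path.read_text(errors="ignore")
            for label, pattern in SUSPICIOUS.items():
                if pattern.search(text):
                    findings.append((str(path), label))
        return findings

    # Example: point it at an unpacked copy of the package you're about to install.
    for path, label in scan_package("some_vendored_package/"):
        print(f"{path}: {label}")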


Making and maintaining a universal package manager then getting it deployed completely enough at any large org to make a difference strikes me as on the order of “let’s build a perpetual motion machine that’s also a fusion reactor” as far as feasibility level.

… now, that doesn’t mean one couldn’t make money while utterly failing to ultimately deliver the promised value.


I'd settle for source analysis and reproducible builds for just our myriad open-source dependencies. All it takes is a single compromised developer among the thousands throughout a typical stack.


Working on this right now.


Oh?


> "We should sign and verify all our dependencies"

pypi: "YOLO, let us deprecate signatures!"


If pypi won't let you add signatures, can you generate one of your own (for internal use) using a repository manager like jfrog, operating as a pypi proxy?
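For what it's worth, even without upstream signatures you can get part of the way with hash pinning (pip supports --require-hashes against a requirements file), or roll the same idea yourself against an internal mirror. A minimal sketch of that, with a placeholder file name and digest:

    # Minimal sketch of internal hash pinning: record the sha256 of each artifact
    # when you vet it, then refuse anything that doesn't match.
    # (Package name and digest below are placeholders, not real values.)
    import hashlib

    PINNED = {
        "example_pkg-1.2.3-py3-none-any.whl":
            "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
    }

    def verify(path: str) -> bool:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return PINNED.get(path.rsplit("/", 1)[-1]) == digest

    if not verify("downloads/example_pkg-1.2.3-py3-none-any.whl"):
        raise SystemExit("hash mismatch -- refusing to install")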


New signatures cannot be uploaded to pypi https://blog.pypi.org/posts/2023-05-23-removing-pgp/


They email me at every upload telling me to stop sending them my archaic signature.

It seems the focus now is more to keep the secrets on github and authenticate with those.

https://discuss.python.org/t/2fa-usability-on-pypi-and-with-...


The sheer amount of eggs in the GitHub basket is absolutely terrifying to me. If something happens to that thing, the entire computing world will halt.

It reminds me a bit of what the journalism world did by becoming so reliant on Twitter for sourcing and communication, but in our case I think it's deeper and worse. We've hard coded GitHub into countless systems. I'd venture to say that most CI systems, package managers, build farms, etc. would break.


Debian would keep working :D

And at work my team uses a mirror on LAN. The other teams on the other hand…


Not sure about jfrog, but Sonatype does something similar. They basically hash all components/packages from a bunch of different repositories, and then tag the hash with various metadata you can use to create policies.

I started using this in my org a couple years back, and we've ended up using it to check commercial software as well just to get an overview of known components, vulnerabilities and things to watch out for.

I really wish the big repositories would invest more in useful mechanisms; when we looked into this before making our decision, the only repository with any kind of checking was Maven Central. Nuget had support for author signing and repository signing, and Pypi (at the time) author signing. As far as I remember none of the other repositories had any verification of anything including the git repo, so you couldn't even determine what commit hash the code was based on or who was behind it.


I use .deb packages mostly rather than pip.

Signatures are checked if present, at least.

Also, packages that stop working because Python removes stdlib modules they use tend to get found and patched in distributions. On PyPI, if they are abandoned, they will be unusable for all eternity.


I partially disagree with this post.

What the author wants is unrealistic, and their understanding of detections isn't ideal. I assume they know that a SIEM doesn't need agents. But an EDR does. An EDR can be a well-configured auditd/sysmon plus a response agent, or you pay up and get a "nobody gets fired for buying CrowdStrike".

One reality is that APT hacks are out of scope for most orgs.

Another reality is that for at least a decade and a half the security industry has accepted that prevention is not a good strategy against APTs. This is why you need that other EDR agent and a SIEM. You focus your detection on what the threat actors do after compromise and around your important data. For most APTs this is effective, but it is far from perfect. The biggest problem is people like the author who are not seeing actual compromises happen all the time.

Security costs money and it can only cost as much as the risk tolerance of the company. It can't cost so much that the company's profit margins take a beating for example. That's why NIST recommendations are important, so execs can say we are spending enough.

Another side of prevention is that it needs to be as invisible as possible. If users notice it or it gets in their way, by default it is bad, you have to do things to compensate for the deteriorated user experience. You can add MFA for example but better make up for it by making it a yubikey and doing SSO.

Keep in mind that corporate networks are a hodge-podge patchwork of random things pieced together over time. Threat actors need only one mistake to abuse. And the P in APT means they'll try until they find that mistake; it's their day job, quite literally lol.


I'm currently trying to get through an InfoSec questionnaire for a large company; one of their requirements has been that we purchase over $1M of cyber insurance coverage.

The questions that we have to answer are heavily focused on the assumptions that we:

1. Have a network.

2. Have a physical premises.

3. Don't use API keys.

I'd happily attest to using AWS IAM properly. I'd happily be told that we aren't using it properly and asked to make changes.


These repetitive one-size-fits-all InfoSec questionnaires are a big part of the reason we started Platformed (https://platformed.com) to automate the process for vendors.

As a smaller vendor selling into a larger organisation you end up spending a lot of time rephrasing the same answer, which often boils down to "We don't have this very specific control you're asking for, but we do have an equivalent more appropriate for our size or business which is ...".


I had a look at the site. Putting the answers into the questions isn't where we struggle; it's trying to give sensible answers at all to questions that assume a 1990s network topology.


A deeper root cause arguably of this and countless other security incidents involving software:

https://xkcd.com/2030/

> I don't quite know how to put this, but our entire field is bad at what we do, and if you rely on us, everyone will die.


Putting all your eggs in one basket sure saves money as you don't need to buy extra baskets :P


I think if we can sufficiently isolate the build process we can solve this problem. There's lots of opportunity with our project Witness to add extra isolation, and it is something we are working on. However, the real supply chain security "business problem" is just tracking everything in a standardized way. This is what the in-toto project helps with. I wrote about it here: https://www.cncf.io/blog/2023/08/17/unleashing-in-toto-the-a... We also wrote Witness and Archivista to help solve this problem. We have lots of work to do. https://github.com/in-toto/witness

Full disclosure, I am a member of the steering committee for in-toto and the CEO of TestifySec which is the main contributor to Witness.
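To make the "tracking everything in a standardized way" part concrete, here is a conceptual sketch of what per-step link metadata records. This is just the shape of the idea, not the actual in-toto or Witness API (the real thing handles signing, layouts, and verification properly):

    # Conceptual sketch of per-step supply chain metadata: record hashes of the
    # inputs (materials), run the step, record hashes of the outputs (products).
    # Illustrates the idea behind in-toto-style links; it is not the real API.
    import hashlib
    import json
    import subprocess

    def sha256(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def run_step(name, cmd, materials, products):
        material_hashes = {m: sha256(m) for m in materials}   # state before the step
        subprocess.run(cmd, check=True)                        # the build step itself
        product_hashes = {p: sha256(p) for p in products}      # state after the step
        # In a real system this record is signed by the step's functionary key and
        # later checked against a signed layout/policy during verification.
        return json.dumps({"step": name, "command": cmd,
                           "materials": material_hashes,
                           "products": product_hashes}, indent=2)

    print(run_step("build", ["make", "all"], materials=["main.c"], products=["app"]))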


How much experience do you have with embedded? From small IoT white-label stuff like small-business alarm systems to behemoths like Samsung... the only constant is that they ship whatever and the lowest interns handle the build.


Step one is actually wanting to improve security. Those IoT companies have no motivator. Most of our business is with Federal/Defense and Finance. Those companies will only change if liability changes or the regulatory environment forces them to.


> If we could agree on a good, standardized capabilities model for software and everyone starts using it, we will have reached security Nirvana.

That's about as likely as everyone agreeing on a good, standardized operating system architecture, API, and driver model that also happens to be Written In Rust™ for good measure.


> Some people carry around headphones, smartwatches, etc - but some don't carry any devices at all (or keep Bluetooth off on their phone).

I wonder if one could do better by using a HackRF and spying on LTE+WiFi communications. You wouldn't need to decrypt them, just identify how many different devices there are.


A lot of devices, especially phones, will randomize the hardware MAC address when on an unknown network or doing open pings, but if a known SSID is found they will use a consistent, and often correct, MAC.


Ah, but I've heard from Reliable Sources that CMMC will fix everything.


There's actually another possible approach to blocking SolarWinds style hacks that doesn't require pervasive sandboxing. Sandboxing is very hard with the current set of primitives operating systems expose, e.g. macOS doesn't expose low level sandboxing primitives as a documented API at all, Windows barely has a sandboxing API, Linux has several but they're all extremely low level and complicated.

That other approach is to rely on confidential computing technologies, in particular, the remote attestation capabilities of Intel SGX and AMD SEV. These technologies are often associated with DRM or multi-party computation, but they can be used in a different way. In this alternative use case you statically attach a remote attestation to a piece of data as a signature of computation.

A signature of computation is a proof that a piece of data was derived by running a specific program with a specific input. If that program is deterministic, and has been verified by yourself or a third party, then it means the correctness of the transform can be mechanically verified in such a way that it's much harder to hack. In particular if you use SGX then root exploits on the machine doing the computation won't help the attacker due to the hardware level protections. If you use SEV it's harder but can AFAIK still be done with some specialized OS hacking.

Concretely this means that a shipped software artifact could come with a signature proving not only the identity of the developers, as is the case today, but that those files were produced using a known third party compiler and linker applied to e.g. a source tree at a specific git commit hash. Because those files can be verified by anyone including the developers themselves, it means that a compromised CI system is very limited in what it can do: every file has a cryptographic proof tracing it backwards to the input source tree, which in turn is replicated independently on developer's workstations and laptops so it can be checked by many semi-independent actors. And for extra value the company can hire a third party auditor who verifies that the source code at hash X does in fact meet the vendor's description of what it does.
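In code, the verifier's side of that "signature of computation" looks roughly like this. It's a sketch only; verify_quote() is a stand-in for the real SGX/SEV attestation verification, which is considerably more involved:

    # Sketch of checking a "signature of computation": the enclave attests that
    # running a known compiler on the source tree at a given commit produced a
    # given output hash. verify_quote() is a placeholder for real SGX/SEV
    # remote attestation checking.
    import hashlib

    def verify_quote(quote: bytes, trusted_compiler_measurement: str) -> dict:
        """Placeholder: validate the hardware attestation and, only if genuine,
        return the report data the enclave bound into it (commit, output hash)."""
        raise NotImplementedError("real attestation verification goes here")

    def artifact_is_genuine(artifact_path, quote, trusted_compiler_measurement,
                            expected_source_commit):
        report = verify_quote(quote, trusted_compiler_measurement)
        artifact_hash = hashlib.sha256(open(artifact_path, "rb").read()).hexdigest()
        return (report["source_commit"] == expected_source_commit
                and report["output_hash"] == artifact_hash)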

I've experimented with this sort of approach in the past, but unfortunately CC as a tech gets sort of repeatedly stuck in the domain gap between low level systems programmers and the kind of wideband ecosystem building you need to deliver genuine business value. People aren't aware of what it can do or how to use it.

Also, this approach has a subtle side effect: it means the compiler is now a valid target for attack. For example if source code can corrupt compiler internals sufficiently to inject code, then you could potentially cause a miscompile that invalidates the proof. One solution for that is to use memory-safe compilers but the only one I know of is Graal.


Interesting. I presume harder to tamper with (due to CC) attestation wouldn't help if the developer were compromised/had ill intent/wrote an exploitable bug, while sandboxing or other confinement might (depending on the malware/exploit, and coarseness/robustness of the confinement), so they are complements?

(I'm not sure whether SolarWinds involved any malware or unintended vulnerabilities; from my casual reading it seems to have been a bunch of different attacks... in any case, these supply chain attacks/vulnerabilities exist whether they were used in SW or not... I'm just always slightly uncertain about what is meant by SW-style; not a criticism of this comment, just laying out my ignorance!)

Re memory-safe compilers, is Graal the known one because it is (I guess it must be given the claim) pure Java? As opposed to relying on eg LLVM or GCC which are heaps of C and C++.


Re: Graal, yes exactly.

Yes attestation has to attest to the execution of a program and there's no program that can tell you the developer was competent/honest/not compromised. But what it can do is, for example, tell you that a build was done in a clean and verified environment i.e. one free of attackers/malware. And importantly this guarantee can be made also considering the cloud vendor as an attacker. Think about how horrific it'd be if an attacker got privileged access to AWS or GitHub Actions, for example.


Indeed, thanks for explaining.

Oh, and your mention of Graal reminds me of reading here 56 days ago that it has some support (JavaScript-only at that time) for sandboxing itself (https://news.ycombinator.com/item?id=37572536), presumably not nearly as fine-grained as the capability-based schemes mentioned in the OP, but still a useful step perhaps.


It's actually usable as a capability system. You can just run code in a context without permissions and then pass in cap objects that are exposed to that context.


>One solution for that is to use memory-safe compilers but the only one I know of is Graal.

The Rust compiler is another (being written in Rust).


Thanks. I thought the Rust backend was still LLVM.


It is LLVM.


Did this whole site/blog use to be on a different domain? cyrnel.net?


Related ongoing thread:

Discouraging the use of web application firewalls - https://news.ycombinator.com/item?id=38255004 - Nov 2023 (125 comments)


Are you sure this is related? It’s the same author and general topic of IT security, but that’s where similarity seems to end.


You're right! Sorry



