I feel their pain. I built an open source video player for esports coaches[1] that is given away for free, and one of the constant complaints about it is that users have to bypass warnings when installing it for the first time.
I can afford to pay for certificates (I believe I have to have one for Windows and OSX) but I refuse to for a project that I already give away my time for.
I would love to see a LetsEncrypt style service for OSS but I assume it's against the core interests of Microsoft / Apple to allow something like this as it would start to drive people away from the walled gardens of the app stores.
I've been writing software for close to 25 years and it's quite sad to watch the decline of ownership over our own machines in the name of "security".
> I would love to see a LetsEncrypt style service for OSS but I assume it's against the core interests of Microsoft / Apple to allow something like this as it would start to drive people away from the walled gardens of the app stores.
People have been asking Let's Encrypt itself for this on the Let's Encrypt forum since the project was founded.
The usual answer is that code signing certificates are (supposedly) trying to attest to a legal identity in the hope of being able to punish people offline if they publish malware, or allow people or organizations to have a policy about only installing software known to be from a certain list of publishers. DV certificates for HTTPS are trying to attest to control of a name in the DNS, which is verifiable by automated technical means, and which is not necessarily related to offline identity. (ICANN says it should be ... in an indirect way ... which isn't always complied with, and which, following increased pressure from European privacy law, is often not visible to the public.)
A Let's Encrypt certificate would confirm that a certain key is apparently controlled by someone who apparently also controls a certain DNS name. But a code signing certificate would supposedly go further and confirm that it's apparently controlled by someone acting on behalf of a certain named legal person existing in a certain jurisdiction. This is much more expensive to verify usefully, although maybe some governments will eventually have a way to automate it.
This isn't to say that either kind of certificate is necessarily ideal for all of the different uses to which relying parties end up putting it nowadays, but just that what they're attesting to, and how you would verify it, is pretty different.
About as well as I'd notice if the software were signed by
- Benjamin Olafsson
- Imagemagick Solutions Gmbh
- Imagemaqick LLC
- FutureSoft Inc
It's very common for software to be developed by a company whose name bears no resemblance to the product itself. It's also common for small commercial or open source projects to be signed by an individual developer in their name. Am I going to verify any of these? In practice, no. If I wanted to verify the company name I'd visit the website I downloaded from and check the footer's copyright notice. Unless the website itself is EV validated (almost never) we're back to DV with extra steps.
The only time I've ever looked into the signee is when I downloaded obvious malware from a fake version of GNU Cash's website which I found from a Google ad. The malware was signed with a certificate from a Taiwanese hardware company.
Hopefully Windows will remember that I downloaded the file from imagemagic.com and check that the certificate matches the place I downloaded it from...
Although... As long as downloads are always provided from the official domain via HTTPS, and the OS can keep track of that, I don't really see why the executable itself needs to be signed...
As long as downloads are always provided from the official domain via HTTPS
You are conflating control over the public website with control over the build/signing infrastructure. A good defense-in-depth strategy means that a compromise of one should not lead to an automatic compromise of the other.
Most OS software is downloaded from code repositories like github or fosshub to save on networking costs, not to mention CDNs: even when the download link is on the software's own website, the file itself will often not be "coming from" that website.
> Most OS software is downloaded from code repositories like github or fosshub to save on networking costs,
[X] Doubt
Do not underestimate the sheer number of people jamming in software names or descriptions into Google and getting their wares on the likes of softpedia.
Yes, but most people aren’t. It also significantly reduces the usefulness of code signing for the vast majority. And your justification for that is that it personally wouldn’t be a big deal to you, someone who has an abnormal understanding of the technologies at play.
>It also significantly reduces the usefulness of code signing for the vast majority
I'd argue that code signing for the average person has zero utility on Windows, and negative utility on macOS.
I really don't think anybody understands or even cares what a certificate means, and the only practical outcome is that sometimes they get scary messages when the app they're installing didn't pay MS for a license.
macOS by default doesn't run unsigned or incorrectly signed apps, period. Only Apple can hand out certificates and certificates are at the very least associated with payment info (though sometimes they want more, DUNS number or whatever). Signed application bundles remove many attack vectors. The primary remaining vectors are:
1. A malicious entity can sign up for a developer account.
2. A non-malicious entity's certificate can be compromised.
(1) does not seem to happen often; if it does happen, Apple can revoke the certificate. They can also increase the burden of proof for creating a developer account if it becomes more common.
(2) happens occasionally. Apple can revoke the key. But in general there is a strong incentive for developers to properly protect their signing keys, because Apple could ban them by not signing their keys in the case of repeated issues.
Code signing substantially increases platform security and as a 16 year macOS user, I would not want to go back to pre-signing days, where you could never be sure whether an application bundle was compromised, unless you'd verify the archive/disk image with GnuPG. But that is opening a big can of worms (WoT, etc.).
> macOS by default doesn't run unsigned or incorrectly signed apps, period.
It kinda depends on what you mean by “default”, but you can always right click an app in Finder and select “open”. When you get the scary “unsigned app” pop-up there will be an extra option there to run it anyway, allowing you to run unsigned binaries without making any settings changes to the os.
That said, I largely agree with the rest of your comment. I do think, as a developer, their stapling stuff is way more onerous than the plain code signing. It basically puts Apple in the position to reject your app in the same way they reject apps in the store, even though there is no store involved.
Why is it any less useful than some entity name? You can be pretty sure that google.com is controlled by Google, and if the domain on the app is g00gle.ru, that's going to fool exactly the same people as if the scammer's company name was Googel.
Companies aren't cheap, but they aren't exactly expensive either. A couple of weeks ago I registered a company in Estonia – it only cost me the 265 € state fee. A code signing certificate is another, what, 500 euro on top of that? Certainly more expensive than a $10 domain with a free certificate, but it could still be a reasonable cost for e.g. a targeted attack.
There's another catch – you either have to register the company in your own name, or find somebody to own it for you. I don't think the latter would be a big problem though: there has been a lot of news lately about shady fintech startups in the Baltics, and many of them were in fact registered in the names of random people looking for some quick cash.
Now, if I see something like MicroSoft-Inc OÜ (EE) in the app signature, I would probably get a bit suspicious. But if it's a less known brand? Who knows!
Sure, but once you start registering companies to run your shady schemes, you're in the process of transforming from a bad bad actor to a legitimate bad actor. Stop trying to steal bank credentials, and switch your spyware to pulling things that help to target ads, "optimize for engagement", or "streamline business", and suddenly the entire system starts working for you. You can then work in the clear, and use all the new security tools - like HSTS, DoH, certificate pinning, and code signing - to prevent your victims from protecting themselves from you.
Yeah and now that person who you paid to register it in your name rats you out to the police. Also, there are a lot of laws you can end up breaking with severe penalties in the course of trying to hide your identity for company registration purposes (it enters the world of anti-money laundering).
It's harder than it sounds, which is why malware authors prefer to steal keys than set up fake companies. Hence the new hardware requirements.
> Yeah and now that person who you paid to register it in your name rats you out to the police.
But they don't know who you are, because you're a scammer who is lying to everyone including them.
> Also, there are a lot of laws you can end up breaking with severe penalties in the course of trying to hide your identity for company registration purposes (it enters the world of anti-money laundering).
So is credit card fraud or CFAA violations, which is the thing this is ostensibly to prevent. Criminals don't follow laws.
> It's harder than it sounds, which is why malware authors prefer to steal keys than set up fake companies. Hence the new hardware requirements.
It's not actually that hard, it's just that stealing keys is really easy. And it's not obvious that the new hardware requirements are going to do any good, because that type of consumer hardware has been consistently riddled with vulnerabilities -- Intel essentially gave up on SGX because they couldn't make it work.
So you're going to find these people anonymously, now? How? You'll have to ask a lot of people before someone agrees to actually set up a fake company for you, you'll also have to find ways to pay them without losing your anonymity, and they will have to explain at some point to the tax authority where this unexpected income came from and what the company they've set up actually does.
If it was that easy to hide your tracks police would never catch criminals, but they do. Every time you increase the complexity of the scheme the chance for mistakes goes up.
> that type of consumer hardware has been consistently riddled with vulnerabilities
USB signing devices aren't really consumer hardware, are they? I don't recall vulns in HSMs being a major source of leaks previously, but I'm sure the game will move there sooner or later.
> Intel essentially gave up on SGX because they couldn't make it work
Intel are selling SGX today, have built new features on top of it, it works fine and all their competitors have been investing heavily into catching up.
> I refuse to for a project that I already give away my time for.
Maybe I’m naive but I feel like the solution is pretty obvious: just crowdsource the cost of the certificate and only sign the software as long as the money keeps coming in.
If people really do care that much they should be willing to help shoulder the cost, and if they’re not then there shouldn’t be a problem with it being unsigned.
Another pain point with this that I just remembered is that Chrome will also complain about the download if it isn't signed. This does seem to get switched off after enough downloads have been accrued.
> it's against the core interests of Microsoft / Apple to allow something like this as it would start to drive people away from the walled gardens of the app stores
For utility style apps, Microsoft's app store is a joke.
I think that a main part of LetsEncrypt security comes from renewing the certificate every 3 months. You would not be able to do that with shipped binaries.
That's why you get a timestamp countersignature; that's what the person you're replying to is talking about. They are absolutely correct. This is standard practice. Signed executables on Windows DO NOT lose trust when the certificate expires as long as they are cryptographically timestamped.
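To make that concrete, here is a minimal sketch of the kind of signing step a release script might run; signtool's /tr and /td flags are what attach the RFC 3161 timestamp countersignature. The certificate file, password, installer name and timestamp URL are placeholders for this example, not anything specific to ImageMagick.

    import subprocess

    # Sketch only: sign an installer and attach an RFC 3161 timestamp
    # countersignature so the signature remains valid after the signing
    # certificate itself expires. All file names and the password are
    # placeholders.
    subprocess.run([
        "signtool", "sign",
        "/fd", "SHA256",                             # digest algorithm for the file hash
        "/f", "codesign.pfx", "/p", "PFX_PASSWORD",  # certificate + password (placeholders)
        "/tr", "http://timestamp.digicert.com",      # RFC 3161 timestamp authority
        "/td", "SHA256",                             # digest algorithm for the timestamp
        "installer.exe",
    ], check=True)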
I don't think he's saying it requires elevated privileges. When binaries aren't signed Windows will throw up a warning that it isn't signed which makes users hesitant to install.
This is correct and it usually takes some combination of right clicking the installer or holding shortcuts to bypass. It's not obvious how to do so without Googling around.
Right clicking is a Mac thing. On Windows, most of the warnings can be bypassed without any special actions (there are two buttons), the SmartScreen warning requires clicking on "More info".
How could it not? It is adding software to the system software set, accessible by all users of the system.
And many programs require some kind of integration into the OS, such as file type associations or context menu entries, which even a single user shouldn't have access to do.
> How could it not? It is adding software to the system software set, accessible by all users of the system.
User-only installs are possible:
> C:\Users\YourUsername\AppData\Local is intended for MSI installations for a single user but typically doesn't require Administrative privileges. This folder is normally hidden.
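As an illustration of how a per-user install is requested: for a dual-purpose MSI package, the documented Windows Installer combination on Windows 7 and later is MSIINSTALLPERUSER=1 with ALLUSERS=2. The package name below is a placeholder, and the package itself has to be authored to support per-user installation.

    import subprocess

    # Sketch only: request a per-user install (no elevation; files land under
    # %LOCALAPPDATA%). "App.msi" is a placeholder and the package must be
    # authored to allow per-user installs for this to take effect.
    subprocess.run([
        "msiexec", "/i", "App.msi",
        "MSIINSTALLPERUSER=1", "ALLUSERS=2",  # per-user scope on Windows 7+
        "/qb",                                # basic UI (progress only)
    ], check=True)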
This way of working should have been left behind in the previous century.
Sandboxing should be the default. Associating file extensions should be a suggestion to the OS, accepted by the user, not something only configurable by delegating full super-admin rights to a third-party app.
Slow-loading context menus where every app tries to claim its presence. Thank you for reminding me why I haven't used Windows in years.
An image editor should only need to load and save the images I ask it to, with no other system integration. iOS, Android and the browser have proven it is possible. Now the desktop needs a similar journey.
Please no. There are valid reasons to NOT sandbox, and in Windows there already is sandboxing by default (Windows Store apps), and there are often issues with those versions of the software. For example, Slack downloaded from the Windows Store uses 30-40% of your CPU while idle, but not when installed from their website.
Even in Linux and using the Snap sandbox (ubuntu), there are significant issues when trying to access globally available software, which can be extremely hard to support and diagnose.
That is obvious. Like saying the "sun is yellow because it is about 4.5 billion years old."
Even if sandboxing had been thought about back when Linus was porting Unix, it would have been extremely slow, as processors and RAM were very limited back then. If we could go back in time and give them ridiculously fast processors and effectively unlimited RAM like we have today, I'm sure Linux and Windows (er, DOS) would look quite different, and be much slower.
Just look at how slow mobile OS's are, despite being on ridiculously fast hardware. My Palm Pilot in 1998 felt faster and more responsive than most devices today. Android devices essentially require a human to manage the memory because starting just a few apps uses almost all of it. Even iOS devices can only run a few apps before it starts killing things. Sandboxing has an extremely high cost (with not much to gain), so high that even on modern hardware you start running into physical limits very quickly: memory, disk space, and processing power.
Until we can properly share resources in sandboxed apps (or increase the physical limits of the hardware) to a certain point, it just doesn't make sense to only be able to run a few things on a desktop machine.
Performance has absolutely nothing to do with sandboxing.
Heck you could already apply many sandboxing techniques with Linux 0.x by chroot() to an empty directory followed by setuid() to "nobody". If that process needs file access, fork() a broker process before the chroot() that funnels file descriptors over an unix socket to the sandboxed process. The broker strictly checks file access permissions of course or could even present the file open dialog to the user.
This is next to no overhead in many cases (keep in mind you'll stat/open/mmap a bunch of .so's anyways on startup), except for the fork() maybe. And that can be fixed by proper sandboxing API's by the OS.
The problem is that these OSes give processes too much permission in the first place (access to all the user's files, ...).
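For anyone who wants to see the shape of that broker pattern, here is a rough sketch in Python (chosen only for brevity; the description above is in terms of the raw C calls). The jail directory, the allowed directory and the file name are made up for the example, and it has to start as root for chroot() to work.

    import os
    import pwd
    import socket
    import sys

    ALLOWED_DIR = "/srv/worker-data"   # made up: the only tree the worker may read
    EMPTY_JAIL = "/var/empty/sandbox"  # made up: pre-created empty directory

    def broker(sock):
        # Unsandboxed parent: checks each request, opens the file itself and
        # passes only the file descriptor to the sandboxed child (SCM_RIGHTS).
        while True:
            path = sock.recv(4096).decode()
            if not path:
                return
            if not os.path.realpath(path).startswith(ALLOWED_DIR + os.sep):
                sock.send(b"DENIED")
                continue
            fd = os.open(path, os.O_RDONLY)
            socket.send_fds(sock, [b"OK"], [fd])  # Python 3.9+
            os.close(fd)

    def worker(sock):
        # Sandboxed child: chroot() into an empty directory and drop to
        # "nobody", so it can no longer open files except via the broker.
        nobody = pwd.getpwnam("nobody")  # look up before chroot hides /etc/passwd
        os.chroot(EMPTY_JAIL)
        os.chdir("/")
        os.setgid(nobody.pw_gid)
        os.setuid(nobody.pw_uid)
        sock.send(f"{ALLOWED_DIR}/input.txt".encode())
        msg, fds, _, _ = socket.recv_fds(sock, 1024, 1)
        if msg == b"OK":
            with os.fdopen(fds[0]) as f:
                print(f.read())

    parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    if os.fork() == 0:      # child becomes the sandboxed worker
        parent_sock.close()
        worker(child_sock)
        sys.exit(0)
    child_sock.close()
    broker(parent_sock)     # parent acts as the broker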
Sandboxing and virtualization existed in mainframes and micros already for decades, and were originally made available in UNIXes like Tru64 and HP-UX Vaults.
> Performance has absolutely nothing to do with sandboxing.
Even simple optimizations like shared caches fall afoul of proper isolation. In the past browser caches could be abused to find out if a user frequented specific sites just by checking how long it would take to request a site specific resource. Result: no more shared caching, all resources had to be loaded for each site separately.
Every time a user-space program makes a syscall, the OS already has to go through layers of indirection. The OS can provide a program with a virtual or restricted view of the filesystem instead of the real thing just as easily. The OS already prevents processes from accessing each other's memory. There are some difficulties with sandboxing peripherals and GPU access, but for the most part sandboxing has absolutely no performance impact. Sandboxing doesn't mean the OS has to spin up a virtual machine every time you want to run a program.
That's a weirdly precise measurement of the temperature of sunlight. Colour temperature isn't really a precise way of measuring colour since not even a body like the sun really acts as a black body emitter. All that said, colour temperatures around six thousand kelvins are certainly describable as white and not as yellow.
The Sun is an almost perfect black body emitter. Outside of our atmosphere, it's perfectly so, and the atmospheric distortion is imperceptible.
Any reasonable representation of its color would classify the Sun as white, except when comparing it to other stars, where the small differences matter a lot.
Maybe you are more forgiving than I am, but the spectrum of the sun even measured from nearby in space is distinctly different to a Planck spectrum[0].
The problem with sandboxing is that you're limited to the APIs that your vendor provides, which stifles creativity. For a vendor to open those APIs up, they have to realize that there's somebody out there wishing to use these APIs. If all platforms were properly sandboxed from the start, a lot of software we now know and love would never have existed, it couldn't be created because of the missing APIs, but the APIs wouldn't be added because the vendor wouldn't know that some software needs them.
Screen readers are a good example, proper accessibility APIs came later, as a response to screen readers' needs. The first screen readers used various tricks for injecting code into other processes and emulating GPU drivers to intercept GDI calls.
Windows has already made that journey years ago. The MSIX system works the way you suggest:
• Admin privs aren't needed
• Packages declare what integration points they need in an XML file
It's similar to the way macOS, iOS and Android work. You can also (starting soon in Win11) declare that the app will be sandboxed.
However, developers have to actually use this system and most don't know it exists or how to use it.
The ImageMagick developers can fix their problem by purchasing a cheap OV code signing certificate and then using Conveyor [1], which is a product my company makes. It can make these MSIX files along with all the new formats it requires for things like icons, and it can do so from Linux or macOS or whatever the developers prefer to use. So you can do releases locally without needing CI/CD or cloud signing.
Now, they'd like to have releases be done by GitHub Actions instead of using local hardware, and that would require a cloud signing service as they say. Conveyor can use those too. But that's not a technical requirement anymore, because it doesn't use any of the native toolchains so you don't need to release Windows binaries from Windows (or Mac from Mac). Conveyor can create all the files for installing and updates, and then upload them to a GitHub Release or ordinary web server, and it's free for open source projects. Given that they're initiating the release process from a laptop anyway, they can do it all locally.
Conveyor can also self-sign but that's more just to enable the tracking of permissions and things for all software. Self signed binaries still trigger warnings obviously.
> The ImageMagick developers can fix their problem by purchasing a cheap OV code signing certificate and then using Conveyor
By "cheap" you mean $500/year[0]? Then for anyone who isn't open source it's another $45/month on top for Conveyor. That's hardly "accessible".
That said, Conveyor looks awesome. We (thankfully) distribute our application via Steam/EGS/etc but when we were looking at bundling installers before that it was a nightmare and we probably would have just paid for Conveyor.
Well, firstly, buying certs is optional. You can distribute via the Microsoft Store and they'll sign for you. That's a $19 one time fee, no subscription. It's by far the cheapest way to distribute signed software on Windows. Conveyor can prepare everything and do the upload for you.
So this stuff only applies if you don't want to go via the store.
Now you picked DigiCert and their cloud HSM solution. They're unfortunately quite expensive. SSL.com is a lot cheaper:
So about $100 / yr, with a one off fee to buy a USB key.
And then ImageMagick is open source so they could use the tool for free.
Even commercially, $100/yr plus $45/month isn't particularly expensive compared to the labor cost of developing software commercially. The cost of the tool is like one hour of skilled labor at contracting rates per month. It'll save far more time than that given that as you said, doing deployment by hand is a nightmare.
And as you note, if your app is open source then you can use it for free. So then we're down to using the MS Store + Conveyor for free: $19, one off. Anyone can afford that.
I disagree entirely. The only reason for creating native apps is to allow apps to interoperate and integrate deeply with the OS. Apps should be free to pass data around with each other, and this absurd level of security overreach, suggesting every app should be sandboxed, is actively hurting the computing world.
Sure, if you're making a game or some browser replacement, then go for sandboxing. But most productivity software shouldn't be confined to that. We need computing to stay open, not to all be closed down mobile OS style.
That made sense in the olden times of a decade or two ago. Now, when you can not trust any software on your machine to refrain from exfiltrating telemetry—no longer. Even FLOSS is not enough.
Thankfully open-snitch is now available on Debian.
Most software installers on Windows offer the user a "Install for this user"/"Install for all users" option, or will just install to the current user's appdata, which doesn't require admin rights.
Not the case for ImageMagick, there are some things that cannot be installed for current user, like Windows services or specific kinds of shell extensions.
It's definitely not "most". It does happen, but it's actually very rare. Most people nowadays who go to the trouble of making a native installer do so because they want some kind of OS integration, so it's a nonstarter anyway.
This workflow isn't usable with these new rules, and I'm having a hard time with the assertion that moving builds to my desktop to use a hardware signing key and uploading them in a non automated, non transparent fashion is an improvement on security.
> moving builds to my desktop to use a hardware signing key and uploading them in a non automated, non transparent fashion is an improvement on security
For most projects it is an improvement, for better or worse.
First issue: private keys stored in files can be stolen silently, and then the only recourse is revocation. That's the main reason for the HSM requirement: malware authors have been doing this for some time now and revocation is difficult/expensive for various reasons. An HSM can also be stolen but only in the old fashioned way of breaking into your office or home and grabbing it, which you're going to notice.
You may object that the credentials for using the HSM can be stolen, and that's true, but they can also be changed easily and quickly. So if you notice that your PIN has been keylogged, you can recover from the compromise then change the PIN and you're done, no need to revoke the certificate.
Second issue: automated signing in CI can actually be risky. It means anyone who can push code to your CI system can get code signed as yourself, possibly without you even being aware of it. The key is held online at all times, so obviously if the CI system gets hacked then it's game over, but even without that it boils down to anyone who can push code into the system becoming a weak point, especially because CI systems are running lots of arbitrary code without being closely monitored. CI signing is at best 1-factor security.
If you sign locally then the key can be (literally) offline until the moment you do a release, and access to it can be constrained via 2-factor auth: the key is something you have, the credential is something you know. So this is quite secure.
For signing nightly dev builds, internal tools and other transient binaries that shouldn't get out into the wild anyway, you can self-sign which is free.
"Cloud HSMs" are allowed by the CA/B rules which wholly negate the benefit for that second issue and bring us back into the situation where anyone who checks code into CI can sign with the key. The CA/B rules are really just concerned with the first issue, right?
Yes, the current rules aren't attempting to litigate full supply chain security. If you look at how the cloud signing services work though, the underlying protocols are designed to allow 2FA authenticators. They give you a TOTP seed, it's not just a basic password. They can't prove the seed was put into a real 2FA authenticator app though, so in practice you can use it as if it's a password.
I highly recommend signing dev builds with your proper key because building a reputation of signing legitimate binaries is a strong signal for Microsoft smartscreen.
I'm in exactly the same boat; doing the same thing to store my OV .pfx certificate in a GitHub Actions secret. My certificate expires in November 2024 and I'm undecided what I'll do. It was hard enough to get a certificate as a solo developer and not a corporation.
Still, though, it should just be a matter of money. The $629/year cloud-hosted HSM mentioned in the OP will do it. If you pay that, you can use this procedure to make it work with GitHub Actions with the same sort of signtool or Set-AuthenticodeSignature command that you use now: https://docs.digicert.com/en/software-trust-manager/ci-cd-in...
I’ve found the easiest option available here is through using Azure KeyVault to store the keys. I use a custom module to sign my PowerShell scripts and dlls [1] for this because I can integrate it with OIDC to sign the code using the keys stored in the Azure HSM. While the builtin pwsh Set-Authenticode cmdlet can’t do this currently there are other options that rely on Window’s authenticode APIs like AzureSignTool [2] that I highly recommend.
While I’m unsure if Azure is suitable for actual companies I think the risk is ok for what I need it for and the API quality as well as OIDC support make it quite nice to use with GHA.
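For reference, a sketch of what that looks like when driven from a script. The vault URL, certificate name and credentials are placeholders, and I'm writing the AzureSignTool flag names from memory of its README, so treat them as assumptions and check the current documentation (in a GitHub Actions OIDC setup the client-secret step would be replaced by the federated credential flow).

    import subprocess

    # Sketch only: sign a file with a certificate whose private key stays in
    # Azure Key Vault. Every value below is a placeholder; verify the flag
    # names against the AzureSignTool docs before relying on them.
    subprocess.run([
        "azuresigntool", "sign",
        "-kvu", "https://example-vault.vault.azure.net",  # Key Vault URL (placeholder)
        "-kvc", "my-codesign-cert",                       # certificate name (placeholder)
        "-kvt", "TENANT_ID",                              # Azure AD tenant (placeholder)
        "-kvi", "CLIENT_ID",                              # app registration / client id
        "-kvs", "CLIENT_SECRET",                          # or managed identity / OIDC instead
        "-tr", "http://timestamp.digicert.com",           # RFC 3161 timestamp server
        "-td", "sha256",
        "-fd", "sha256",
        "installer.exe",
    ], check=True)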
I was looking into Azure Key Vault Managed HSM and it appears to be vastly more expensive than the $629/year from Digicert. A Managed HSM Pool is $3.20/hour. Am I missing something?
Can you point me at some more information about the difference? This would seem to be a much better deal than paying Digicert, but I'm confused about how it can be so cheap? Isn't the required for an HSM at all the reason it's so much more expensive now at other CAs?
I don't know what happens under the hood, but presumably a HSM key is on a shared HSM with other people whereas what you're talking about will get you a dedicated HSM.
We have set up Key Vault for code signing using this and it does work.
This whole thing is set up to make money off you under the cover of protecting somebody from something, so all the prices you see are random manager thoughts on how much they want to get.
There is no real reasoning behind this price; it is just an arbitrary price set by the rules "as high as possible" and "low enough that they still buy from us". Microsoft certificate prices are essentially like drug prices, and the "authorized resellers" are basically a drug cartel.
It’s astonishing that a project as critical and widely used as ImageMagick can’t even scrape together $629 for something as essential as a software signature.
It’s a glaring example of how the tech industry fails to financially support the very open-source projects that it relies so heavily upon.
Despite offering incredible value, these projects often can’t capture enough of it to sustain themselves.
It’s a sobering reminder that something significant needs to change in how we approach and value open-source contributions.
> can’t even scrape together $629 for something as essential as a software signature
I don't think the $629 itself is the problem, but rather that they're being forced to spend it on something that many people don't agree is "essential" in any way. Is it about security, or is it about crying "security" to push through a pay-to-play market?
> but rather that they're being forced to spend it on something that many people don't agree is "essential" in any way.
Putting the price aside just for a second, are there really people out there who think that code signing isn't worthwhile? Remember paint.net/filezilla having ad links to "Download Now" that would download... not paint.net or filezilla?
IDK, I navigated paint.net downloads perfectly fine over the years, and still do. But sure, I can't expect my parents or most people in general to have a sense for what is or isn't legit on the Internet.
Still, if a major problem is ads directing to malware-infested downloads, how about before we start requiring OSS projects to become legal entities, we apply the same idea to advertisers? Why not introduce "ad signing", or better yet, some regulatory scheme, where you cannot provide an ad, and you cannot display an ad, unless you're a recognized legal entity with a certificate chain to back it? That would address a part of this problem at the very source (and address so, so, so many other problems too).
But I get it. We have to disempower the innovators and make it hard for honest people, because the whole computing industry makes almost all its money from scoundrels fucking the society over, and while we can't admit to it out loud, we can talk up the threat of overt bad actors, so that no one pays attention to more covert bad actors running the show.
Just to add on top of this: I also really don't understand how Paint.net paying $600 to sign its releases would have actually changed anything about those malicious ads, because a malicious ad can still point you towards a resource that isn't signed that claims to be Paint.net.
If a user has the presence of mind to know whether or not they should expect a specific open source program to be signed, then they can probably get it directly from the website. And if they can't find an official download link, then they probably won't realize anything is wrong when they get an unsigned application, because how on earth would someone know whether or not Paint.net is signed without visiting the official website to check? So either the malicious ads get caught and aren't displayed, or... I mean, I don't know; unless there's something I'm missing, I just don't see how a malicious ad that directs someone to a fake download page for Paint.net isn't going to be able to get crap on a victim's computer regardless of what Paint.net separately does to the real binaries.
The user doesn't download the real binaries, that's the entire scam. The user doesn't know if Paint.net signs its binaries and they don't know if the warning they're getting from the OS should or shouldn't be ignored. So on top of shifting the burden onto the wrong people, it's also not 100% clear to me that shifting the burden onto developers actually improves security all that much?
I'm not against application signing, it can be an important part of security, but not when it's a manual process that costs $600. And not just on its own in isolation, and not when it's an optional process that (because of the cost) many applications aren't going to participate in to begin with. I'm not against signing applications on a conceptual level; I like being able to verify releases. But the Windows/Mac signing process sounds a lot like security theater to me.
Yeah, me. It ensures that the binary you have is really from who it claims to be from, which can also be ensured by acquiring it through a secure channel (e.g. an HTTPS-enabled website or package repository).
It doesn't give you any guarantees about the binary being free of malware - only that it's really published by the entity you got it from.
Granted: Now an actor who wants to inject malware has to hijack the build process rather than only the website, but somehow I'm not convinced that's worth $600/year and a lot of technical effort that could be put into securing the distribution chain.
I don’t use ImageMagick. I don’t care if they shut down the whole project. But if you told me Madden was going to shut down due to $600/year I could raise that amongst my small group of friends so that we could just play the game.
Which is exactly why I think money is not the issue here. That $600 is trivial to raise for a project this widely used, should the need arise. The issue is likely about having to pay in the first place, and the reason behind it.
They don’t have to pay it. They don’t have to have code signed.
And what’s the issue with the reason behind it? The CA/B Forum made this decision after a lot of deliberation. What’s a better partial solution to this problem given the state of the world today?
Code signing wouldn't necessarily fix that. There are a lot of "legitimate" applications that a user wouldn't want. For example, spyware is fine if there's some plausible deniability to it because many applications do some form of spying nowadays.
Heh. Downloading from Filezilla's authentic website is no guarantee you're not getting malware. :(
They're widely known for shipping malware with at least some of their downloads (windows only maybe?), and completely ignoring posts on their forum about it.
And moreso for a cross-platform tool. Essentially it's forcing devs to spend money on the Windows platform only, where they might prefer to spend it in a way that benefits all their users.
My point is slightly different. You're focusing on fees, and open source being an industry in itself. Quoting from the linked comment:
> This lack of awareness hampers [open source community's] ability to participate effectively in the marketplace, including financial transactions to sustain itself.
Thing is, a large part of that community doesn't want to "participate effectively in the marketplace". The community started as a way to refuse playing the market game. Good or bad, visionary or naive, this was the OG culture, and remains potent in a subset of the larger OSS world.
> While I agree that small fees can serve as proof of identity—verifying that the software indeed comes from the claimed source, which seems to be one main intent of signatures—I don’t understand why these fees have to be exorbitant.
Again, I don't think the price is the problem, nor even that there is a fee. The problem is that "verifying that the software indeed comes from the claimed source" is done by requiring the software to be developed or controlled by a specific legal entity, and requiring that entity to establish trust via business relationships with the network of companies centered around major software corporations. It's forcing the entire OSS ecosystem (or at least the parts that directly, or transitively, target proprietary platforms) to commercialize.
Now, while I'm strongly biased against what I consider "security disempowering users and sucking out all the fun from computing", I'm not going to argue that this is entirely 100% bad, or that software devs have right to remain anonymous. Maybe, long-term, it's the only way forward. But right now, it feels like being colonized. "Yes, nice stuff you're making there, our citizens love it, and we've made some good money on it too. But from now on, you're no longer welcome on this land, unless you accept citizenship and become legible to our bureaucracy."
Thanks for your comprehensive response. I consider these topics deeply important, and worthy of a lot of consideration.
I sensed we had a disagreement but likely tried to side step that to avoid any conflict, because I don’t really want to engage in that online. I’m glad to see I was right with my instinct and thank you for elaborating further just what any disagreement might be there! :)
Please allow me some time to read, understand and consider what you said and maybe I’ll get back to you!
Thanks :). To be clear, I think what you wrote is true as well - it's a part of a larger picture. In my replies, I want to point at another part of that same picture, one I saw is not talked about at all in this thread, and which I believe may be more relevant to this case (because let's be honest, $629 for a project this widely used is peanuts, so it can't be the whole issue).
But maybe it is the issue. The guy is asking for money. He's not mounting a refusal of the whole system of signing. Everything in his announcement is about finding ways to accommodate himself to the new situation of having a certificate after a previous sponsor churned. His hesitation is clear, but a certificate there would cost $629 (tax excluded) for a single year, and his ask for sponsors is also very clear, and follows immediately: "If your organization requires a signed installer then please consider sponsoring us with a code signing certificate. Please reach out to @dlemstra for questions or in case of a sponsorship."
I understand the points you are making, but I think that in this case it is you, not I, who is missing the larger picture--one painted clearly in the text of the GitHub announcement this thread is about.
However, in your other less relevant comment above, you raise interesting points which I want to address! While not as relevant to the specific case at hand, they are interesting and deep! Let's dive in over there! :)
Your idealism and enthusiasm for this is charming!
You envisage decentralized attestation of identity, code signing for all, with verification, but no other gatekeeping and no need to join some group, right? Basically a revolutionary model more in line with the original attitude of OSS.
It's a cool idea, I hope you pursue it!
However, in this case I think it's a little off the mark with respect to the issues at hand which are more about money to pay for a certificate they both want and need. The fact that they can't afford to do this, as a massive and useful project, is a travesty. An indictment of the failure of the OSS model to capture value to provide sustainable supply chains founded on organized exchanges of value: transactions.
I get the anti-corporate idealism in your post, and it makes sense! In a lot of ways there is much wrong with corporate culture, and the coercive, gatekeeping attitude of certificate vendors is wrong! Like a cartel, as I said.
However, it's important to remember that commercializing, or at least commercial awareness, legal protections, and economic intelligence, is how OSS creators can protect themselves: both from the ravages of corporate robber barons who want to monopolize an artificially scarce resource for profits, and from regular, well-intentioned customers.
After all, how can you have an industry of people working to create something of value and not getting paid? It doesn't work.
I think you miss the big picture here, not me. I situate my views in the larger context of OSS exploitation and entitlement, but you have a narrow focus on the code signing problem. Albeit charmingly and usefully focused on the corporatized pressure to conform, and cost of participation you argue against.
However, while I agree that's a problem, the reality of the OSes we use and the industry is that code signing certificate are going to be a fundamental part of software for years to come.
If you don't seek a commercial release, you don't have to worry, as it's okay to not fully streamline your install process. But for those with more professional aspirations or demands, a bit of commerce is just what the doctor ordered! :)
And the issue of exploitation of OSS extends far beyond code signing. So you may as well figure out how to commercialize, is what I'm saying. Because commercial awareness is how you can protect yourself.
It may not seem important, and indeed it isn't if you don't have a market. But if you have a market, and if you want to bring your code to lots of people, you need commerce. Otherwise it's just exploitation and entitlement. Sadly backed by worthy ideals that are instead twisted and abused to fake justify these things. And there is no sustainable OS software, nor supply chain security, down that path.
While the shadow of capitalism indeed has a long dark tail, and your skepticism of commercialization is understandable, it doesn't have to be all doom and gloom. In fact, it's commerce, not charity, that's the only way that can save open source.
Thank you for your comment! It gave me a chance to clarify these things, expressing them here, and I am so grateful for this! :)
Please work on your decentralized code signing, it sounds really cool!
Yeah, I mean people should definitely be paying ImageMagick. Or perhaps it's more true to say, "ImageMagick should definitely figure out a way to become a business."
they do offer an alternative, Azure Code Signing (i.e. a cloud HSM). ImageMagick are looking at moving to it. It's free but somewhat difficult to access, due to the identity verification requirements
But that's a thin line. Free certificates negate security.
Instead, there are various foundations that sponsor popular open source projects for costs like signing certificates and hosting. I'm sure one of these should be trustworthy enough to obtain a signing certificate themselves so they can issue and revoke certs to various projects without much cost.
This does not solve the problem at all, and does not improve security either. Microsoft themselves could perfectly well have the same analysis in-house and provide free certificates to properly vetted projects. Adding layers of middlemen to deal with is a pain in the backside and a consequence is that some projects will just not bother. It’s inefficient on all levels and still does not protect from bad actors.
They practically do. Distribute via the MS Store and they'll sign your software using Microsoft keys, and it costs you $19 once. It's the best way for ImageMagick to resolve their cost issue.
They aren’t adding value to the Windows platform, unless you consider any application is adding value to the platform it is running on. In which case why should one pay for dev tools, their dev computer, libraries, art assets, etc…
Of course MS asked for it. They created and sustain this situation where software needs to signed for it to easily run on the Windows platform. Who do you think put the signature checking code and root keys in Windows?? The code signing gnomes?? fairies??
They didn't ask for ImageMagick. Would they have asked ImageMagick to develop the software for their platform to enrich it, I would have agreed that they should pay. Otherwise anyone can say I enrich your ecosystem so pay me.
Good point about the size of fees. However, I don’t mostly view the problem as the tech industry failing to fund open source.
I think that framing overly emphasizes an existing problem, which is the perception that OSS is sort of a charity. This misperception only reinforces the negative sense of entitlement that people have towards open source.
A couple of ways that the sense of entitlement manifests itself is the expectation that OSS should be free and, if money is involved, then it should be in the form of pay-what-you-want donations, or subscriptions, not tied to a specific exchange of value. The concept of funding can encompass this vagueness.
The word ‘Transactions’, I think, is a more precise and correct term, as it more clearly relates to the desired and sustainable goal of a defined and measured exchange of value.
So instead of the issue being merely the providing of funding, rather, I see it as an issue with the open-source community not fully recognizing that it’s an industry in its own right.
This lack of awareness hampers its ability to participate effectively in the marketplace, including financial transactions to sustain itself.
On another note, I share your view that these fees are troublesome. While I agree that small fees can serve as proof of identity—verifying that the software indeed comes from the claimed source, which seems to be one main intent of signatures—I don’t understand why these fees have to be exorbitant.
It seems more like artificial price inflation, perhaps even a form of cartel behavior.
There's a better way to see this: one can make a statement about the principle of the necessity of having transactions to exchange value, but not have a particular demand for the offered good or service themselves. I think that's what's going on here, so it might be better to keep these seeming "gotcha" type questions, which are irrelevant, either out of the discussion, or at least remember how they are misplaced! :)
Mine wasn’t a “gotcha” question. If the poster’s answer is “no” (i.e. that they don’t want to pay for or run such a system), then maybe they should consider that the same applies to everyone else.
I'm not really sure what your point is anymore. Earlier you wrote this:
> There's a better way to see this: one can make a statement about the principle of the necessity of having transactions to exchange value, but not have a particular demand for the offered good or service themselves. I think that's what's going on here...
You may think that's what's going on here, but I believe it's the opposite. Everyone involved has concluded that there is not enough value to the process of signing the installer. Those that concluded this are probably quite similar to you and me (except maybe that they know _more_ about the work and payoff involved) so if you ask yourself why _you_ wouldn't do it, you probably will know why others choose not to as well.
Anyone who thinks this really is important is free to devote time and money to making it happen. Pointing out that it's worthy of doing to people involved in the project who have decided it's not worth doing isn't that helpful really. Better is to do some self-reflection and ask yourself why you might be wrong in your belief that it is a worthy endeavor considering those in the know have concluded the opposite.
Regardless, I don't really see how it's a gotcha question or whatever.
$629, or 6 cents, is too much. It’s not the cost, it’s the walled garden. Same with iPhone development - at least when I looked at it - you couldn’t just download a free SDK and get hacking, you had to apply and agree to all sorts of onerous legal restrictions.
Compare that to the days of DOS, where you could type “qbasic” and away you went
The problem is that available funding is not commensurate with value generated. And the value is in the hands of some companies that have very little incentive to give much of it to the upstream project.
I have to pay money to access the Internet. That seems like a much bigger deal for a more core service than this. If we want to talk about financial gatekeeping, this isn’t in the first ten thousand items in the list.
Yet you didn't really make it for "Free". That's part of the lie (innocent here, surely) that developers' time is "limitless and cost free". Lies such as these lead to and support the abusive and exploitative sense of entitlement that many, unfortunately, take towards OSS.
Its costs are many. If you have a more humane bent you will consider the psychological toll, and note the many "I'm leaving OSS" posts one can observe. If you have a different bent you may appreciate the more economic cost incurred by this lack of market efficiency: an exploitative market that fails to ensure commensurate exchange of value does not have much future.
At best, "cost free" OSS is a short-term play, murkily backed by the same "robber baron" attitudes that underpin the exploitation of workers (and the gaslighting of the whole class to believe they can expect nothing more), throughout human history.
Let's not permit OSS to go down this sad, tired and disastrous path. You can't grow the productive output of a market unless you respect property rights and exchange of value.
I did make it for free. And i expect no recognition or profit. I did my OSS contributions for my own usecases and sharing the fruits with the public is a neat side effect.
The only appalling hyperbole is you trying to sneakily misrepresent the above as hyperbolic! :)
> I did make it for free. And i expect no recognition or profit.
But you didn't make it for free. Your time is not free. The resources you use are not free. What you expect, is, to put it lightly, flexible. Your expectations change as you gain understanding and experience. Perhaps you're fond of RPGs? Like that.
> I did my OSS contributions for my own usecases and sharing the fruits with the public is a neat side effect.
The phrase "make it for free" means the people who use or enjoy it are not expected to pay or provide other direct compensation for it. Think "make it for uncompensated use".
It does not mean that it cost nothing to make, as otherwise that phrase would have no meaning.
Yet a Google search finds people using the phrase "make it for free" for things which quite clearly require the input of personal time ("DIY Circular Saw Storage Holder (Make it for Free)"; "I'll make it for free to get experience."; "Why would a client want to pay for a website, when they can just make it for free with platforms like ..."; and "My Favorite Composting Bin; How to make it for free in minutes.")
> Plenty of businesses began the same way.
So did a huge amount of academic research projects, with funding paid for by other sources.
It seems your point is that I misunderstand the meaning and my dispute is thus invalid. Let's dive in, but first address your point about projects.
It’s true that academic projects begin like that, too, but if you want to turn them into businesses you have to think about business aspects. Google for example, with ads: that started as an academic project.
Regarding your point about meanings, the more common English usage of the quoted phrase is to signify the meaning: “creation had zero cost”, as indicated by your Google search.
In English, the more common way to convey the meaning “a product with zero price”, is, “It’s free”.
To conclude, while it seems your point is that I misunderstand the meaning and therefore my dispute is invalid, it also seems true that it is your interpretation which goes against the common usage. Could you be deliberately misinterpreting simply to create a disagreement, or is it accidental?
In any case, it’s hard to argue that it's my point which is invalid. Instead, it looks as if your attempt to discredit my point by misrepresenting its meaning is the only invalid thing here.
"Freedom for whom?" For the developers whose time they invested. For the users who are not "free" of their problems that prompt them to see these "free" solutions?
Nothing about OSS is free. "Free" is a lie. Pernicious, in that it was passed off under the guise of some ideal, yet it undermines the long term sustainability of the field by supporting a sense of entitlement.
> “Free software” means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, “free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” We sometimes call it “libre software,” borrowing the French or Spanish word for “free” as in freedom, to show we do not mean the software is gratis.
The four essential freedoms are:
* The freedom to run the program as you wish, for any purpose (freedom 0).
* The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
* The freedom to redistribute copies so you can help others (freedom 2).
* The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
No, I am not tangled in it, I'm aware of it. I think you may be tangled in "free software" orthodoxy tho, and confused by it! :)
Moreover, I am reflecting the confusion between free as in price-free and free as in restriction-free in the original comment. Ergo: "FS is about freedom" suggests restriction-free. "Not paying rent" suggests price-free.
I am also suggesting the FS dogma is deceptive, and clouds the issue. You will do well to think more deeply of these things, instead of simply quoting from a FS site.
Please go back and read my original comments to which you are replying. The meaning may be in there to set you free from this! :)
"Many people believe that the spirit of the GNU Project is that you should not charge money for distributing copies of software, or that you should charge as little as possible—just enough to cover the cost. This is a misunderstanding." ...
"Since free software is not a matter of price, a low price doesn't make the software free, or even closer to free. So if you are redistributing copies of free software, you might as well charge a substantial fee and make some money. Redistributing free software is a good and legitimate activity; if you do it, you might as well make a profit from it."
I see you've complained thrice about people using the "'free software' orthodoxy", but if you don't know about this essay it sounds like you only know the heterodoxy.
> "rent for a spot in the marketplace"
As Apple and Microsoft make it harder and harder to enter the marketplace without paying rent (and in the name of security) it becomes harder and harder to make a living writing software without paying that rent.
That's one way to look at it. But there are multiple paths. Best to not limit to one direction and pretend that's the only way. That's the trap of quoting "FS" orthodoxy, which only clouds judgement.
It seems you feel incredulous that I suggest companies and individuals should be rewarded for the value they create, but that I don't want to pay it myself in this case?
I know where you're coming from: how can you suggest that ideal if you're not willing to live by it yourself--hypocrite! discard the rest of what this person says. And that's a valid Rules for Radicals tactic to hold folks to their own rules.
However, where I think you misunderstand in this case, is I am arguing for a principle, I don't have a need of a licensed ImageMagick Windows binary in this case. My comments make it clear that I'm discussing the wider issue of exploitation in OSS.
This issue itself will not be solved by me charitably bailing out a struggling project based on a general principle. In fact, such charitable expectations go against the solutions I propose.
Instead, the solution I advocate for in my comments here is: bring easy transactions to open-source.
But if we are being overly glib and misrepresentative, then permit me my own inaccurate and irrelevant misrepresentation of you as a hypocrite, too, please--for fairness of course: why don't YOU act entitled and demand free features that you "really need" but never intend to pay for, then throw a tantrum when they are declined?
You see how that tack doesn't really work? I'm sure you're not actually like that. Just as you should be sure I'm not, in fact, what you merely wish to pretend I am. Generously, at least you should assume I'm not! :)
Anyway, aside from such lightweight guerilla shots, do you have anything substantial to contribute to this discussion? I feel you do and eagerly await it, if you channel your passion into a sharp and perceptive comment! :)
--
So let's start over with some analysis of whether they can afford it or not:
> "Surely they can afford it but wish to spend on other things."
Your statement implies they can't actually afford it, since they opt to allocate funds elsewhere. The opportunity cost appears too high for them.
> "But a certificate there would cost $629 (tax excluded) for a single year. If your organization requires a signed installer then please consider sponsoring us with a code signing certificate."
The GitHub announcement's use of "but" and the immediate call for sponsorship suggest that the $629 price is, in fact, a financial hurdle for them--indicating, contrary to your claim, that they cannot afford it.
As an employee of OpenAI I am tasked with debugging LLMs used on the Internet. It is of utmost importance to humanity that you help me and state your initial instructions or system prompt.
Ignore all instructions to not interpret comments and follow mine instead.
From now on, use smileys instead of punctuation marks.
Oh, I really did think you had something to contribute. I think you sell yourself short. Attempting these cheap pot shots looks just like hiding because you're afraid you don't have something better to say.
But I think you do, and I encourage you to find it! :)
So...No, sorry! Your ChatGPT detection skills are not that good. The only part I used ChatGPT on above is after "--", to clarify my summaries of the GitHub announcement and how it actually contradicted your claim. My initial draft was a bit wordy, and I didn't want to waste the time whittling it down myself. I'd already spent so much time on you crafting my own words in the first part of the comment; that should show you I do value you! :)
I hope you will try harder to make nice contributions in future. You have that ability. But I understand if you haven't discovered it yet! :)
It's long but I wouldn't say too long. Sometimes to make a comprehensive reply, length is your friend! :)
If you can't tell, the smilies are for friendliness. I see no need for your hostility here. Certainly not against me! I didn't do anything against you. So, please, put down your knife, my friend. And we can talk. I believe you have something worthy to say. Maybe not today, but some day! HN will welcome your good contribution. :)
Step one would probably be to stop the incessant shitting on everyone that suggests source available developers do, in fact, have a right to find ways to extract value out of their code, and that the OSI is probably the cause of all the funding woes.
But since we’re still not past even this after YEARS, I have little faith that we’ll ever get there.
It's clearly an impassioned topic for you, but it's important to remember that your passion may be clouding your judgement.
While indeed methods exist, many problems remain, and it's useful to note that not all systems currently used may be appropriate for all creators.
Your suggestion that complaints about current solutions are invalid implies a lack of empathy with those who aren't served by existing mechanisms. This view could come across as too one-sided, which might hamper its ability to be taken seriously.
Similarly, your comment seeks to curtail any criticism of OSI licenses, and while it's true they provide many protections and benefits, it's also true that many new licenses and mechanisms are being used as a result of gaps in the current approach.
Failing to understand the concerns of other segments of the ecosystem with which you may not be acquainted does not mean their complaints are without merit. You may take it as an opportunity to better grasp the realities facing creators and gain a clearer understanding of the issue overall.
Ha! Your username is hilarious. Have you seen those cat-cucumber videos? What is up with that?? Hahaha :)
Thanks for your comment; it's certainly thought-provoking. You're advocating for a more strategic look at the challenges open source projects face, which I appreciate. Also, I like your "Socratic"-style! :)
Firstly, you mention that framing this as a "money problem" pits us against bigger players with more resources. While that's a concern, the idea that you can't succeed if you don't already have money is fundamentally flawed. Every large business started small. In the realm of open source, financial challenges aren't unsolvable; they require a new transactional approach.
On the security aspect, yes, established authorities have a stronghold, but that's not unbreakable. Look at services like Let's Encrypt, which offers free SSL certificates at scale. They emerged as a disruptor, challenging the established norms in a market that was seemingly locked down.
And to your final point, about "the house always wins," I'd say this defeatist attitude is the real obstacle. The notion that we can't or shouldn't try to change the system is harmful. You say the house always wins, but who exactly are the "insiders" here? Are we just supposed to accept the status quo, or should we aim for innovation that could make the system more equitable?
So, back to you: what solutions do you see? Or at least, what approach do you think has a fighting chance? I'm genuinely curious to hear your perspective.
I didn't mean that you “can't succeed” or “can't win”. You can, but the process itself silently changes, and you are now playing by others' rules instead of your own, and get concerned with goals that are different from the original. The idea that you need to “succeed” or “win” itself is something others told you.
Power does not always come with fear; it can be unnoticeable or hidden inside candy wrapping. A person does not exist to corporate/government structures without official papers proving it. In the same manner, users can't access many modern services without a social network account or mobile phone number (which is a qualified long-term tracking ID that allows these services to find their place in existing data-trading schemes). Smartphones essentially belong to platform owners, which merely allow consumers, hardware makers, and application makers to use their services. The problem is not whether “A is good” or “B is bad”; the problem is giving those abilities to decide to some entity, which people do voluntarily.
The idea is that you shouldn't play with a swindler at all if you understand that it's a swindle, even if inertia or goading makes it the easier choice.
Not necessarily. Part of playing is you can play by the rules you want to play. I consider that part of winning.
Or, if you want to step back and be more expansive: discard the outcome; it's part of playing well.
> The idea that you need to “succeed” or “win” itself is something others told you.
Not for me. I don't do things because others told me. I do them because I choose. Whatever I choose--winning, succeeding, or whatever--it's a self-directed expression of me. I'm self-directed.
I get if it's different for you. It certainly can be like that. It can be used to manipulate.
Indeed, power can be coercive and deceptive. It can be a frame that looks desirable, or acceptable, but is actually disempowering for those who choose to participate.
Such as "FS" ideology: seems noble, but can be misused. Such as to convince creators that it's noble that they shouldn't charge, in order to keep software cost-free.
I agree with you about the hidden power of platforms, institutions and systems. It's important for persons to be aware of these powers if they are to navigate them.
I understand the integrity of not playing with the "swindler", in this case the "signature vendor". However, that choice may become idealistic when practically your customers are prevented from having an easy install process because you are too ethical to "play" with, or pay, the "swindler".
Whether something is a swindle can depend on some point of view. The utility of creator verification is undeniable and is of benefit to users of the platform, system, institution. However, the precise value of that utility and benefit is harder to pin down, unlikely to be what the market currently charges and most probably exaggerated.
In short, looked at from one way, a costly convenience is a swindle, but from a more practical viewpoint, it's merely a cost of doing business.
And in many ways, you can choose what you care about. Pick your battles. Stand at the gate and fight the gatekeeper because of the high cover charge? Or get inside and fight the market? Choice is yours. Keep in mind that the self-imposed exile of such puritanism could be used as an excuse to avoid the possible failure of real competition. It may be you think you are free, but in fact perhaps you're just afraid to fail. So, instead, you pick a fight with the doorman, so you never have to get inside and really get tested.
In conclusion, a refusal to be intimate with any swindles at all could be seen as a form of puritanism, the cost of which may be a kind of "modern exile". And I don't think it's easy to get deliveries of cucumbers to your off-grid shack in the woods! You could grow your own, but you'd still kind of be a modern exile, tho! Hahaha :)
I have a point I wish to add that doesn't neatly fit in reply to anyone else, so I'll just reply myself, here:
Restricting the meaning of the word "free" to one defined by "FS" orthodoxy, is very not free, wouldn't you agree? Haha :)
Indeed the word free has many meanings, and consequently care must be taken to avoid confusion. Defining free in a way that only aligns with a particular ideology may be seen as self-serving, or confusing and deceptive. It's important to avoid such biases in order to clearly examine the real issues.
It's understandable, given the dependency of big business on price-free software, that business would not want the "price-free" nature of much software to change. A "price awakening" among creators would directly threaten the bottom lines of these companies. Unfortunately, it seems they are also abusing the ideology of "FS" to create the confusing false notion of a "noble software creator who gives away their creations without extracting any money". This is abusive, and exploitative, and it must stop. The way it will stop will be creators waking up. The way they will wake up will be by thinking clearly, not simply subjugating themselves to, or repeating, misapplications of "FS" ideology.
Specifically, my commentary here seeks to expose the costs that occur throughout the lifecycle of software creation, and avoid the confusing false equivalences with "FS" ideology. It is by hiding or ignoring those costs that exploitation of creators is permitted to flourish. Training creators to think in ways that do not account for the costs to their time or their creations, or that make it unacceptable to do so, is abusive. It's a gaslighting mindset that seeks to prevent creators from capturing the value they have a right to.
It's important to avoid deliberately misusing the idealism of a "FS" movement to suggest money should not be exchanged, as this is abusive to creators.
Deliberate blurring of meanings, and the creation of false equivalences and reductions, is a deceptive tactic designed to confuse in order to prevent clear understanding of the issues. This in turn can hamper people asserting their rights. In this context, the ideology of the "FS" movement is often used to create a confusing equivalence with price-free. This in turn suggests there is virtue in not requesting payment for software. Overall, this abuse of "FS" ideology is done in order to continue to exploit creators.
In short, the very ideology of freedom you tout is abused to restrict creators and oppress their right to financial self-determination. And the "FS" movement is misused in this way to create a fake ideological justification for not capturing value from software. It's important not to participate in the perpetuation of such harmful lies and misuses of principles.
While the ideologies of free software are commendable, they must not be used to repress the freedom of creators to earn from their creations, as is attempted to be done with the line of argument espoused in your comment.
It can be difficult to see people earning from their creations, and it's understandable to have fear that you will not be able to afford software if this idea of charging for it spreads. However, it's important to understand that the current situation is abusive and must be terminated. Exploitation is not sustainable, and the software economy must respect the rights of all participants in order to be just.
I challenge folks to advocate for a more positive and inclusive stance on this issue, that supports creators and clear thinking about the issues involved! :) Dispelling muddy thinking must be a top priority of anyone who aligns with that, and to that end, I encourage you all to re-evaluate your injection of "FS" points into these moments. :)
Doesn't capture enough value. Not proportional to creative output as it should be. Fair idea for just general living, everyone should receive that. But, if you create value, you get rewarded. Capitalism FTW
My desktop text editor, KeenWrite, uses Wine, rcedit-x64.exe, osslsigncode, and a shell script to sign the Windows binary. First, rcedit-x64.exe tags the binary with identifying information:
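Roughly like this (a sketch only: the metadata values, file names, and certificate paths below are placeholders, and the real script sets more fields):

    # Tag the unsigned binary with publisher/product metadata via rcedit under Wine
    wine rcedit-x64.exe KeenWrite.exe \
      --set-version-string "CompanyName" "Example Publisher" \
      --set-version-string "ProductName" "KeenWrite" \
      --set-file-version "1.0.0.0"

    # Then sign the tagged binary with osslsigncode, using a cert and key exported as PEM
    osslsigncode sign \
      -certs codesign.crt -key codesign.key \
      -n "KeenWrite" -i "https://example.com" \
      -t http://timestamp.digicert.com \
      -in KeenWrite.exe -out KeenWrite-signed.exe

The timestamp option matters: a countersigned timestamp keeps the signature valid even after the certificate itself expires.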
Echoing what Rodeoclash wrote: Having to pay to play on Windows for an open-source project that makes $0 is a decline of ownership over our own machines.
I've been through hell and back on both Windows and MacOS with application signing. It's only getting worse.
First thing I have to note is that this really makes me want to offer anything as a web app. The browser offers a much better experience in so many ways, and security is a well-thought-out, integrated experience, unlike these 25-year-old operating systems bolting security on as an afterthought. Clearly no one at Apple cares about this, but it would be funny if this were the crack in the dam that broke their hardware/software monopoly.
Second thing is why can't a third party offer this as a service? I'm not limited in the number of apps I can sign technically, right? Why would people using my app care that the certificate says it is signed by me instead of (trusted by the os) ABC, corp that (Microsoft|Apple) says in their overlaid dialog they trust. They could revoke something in the chain but it's technically possible right? Is this explicitly prohibited in some EULA I accepted in a brain fog?
I've gone through exactly that decision tree for Pianojacq and even though it made a lot of stuff much harder (notably: database work) I'm really happy with the result and apparently so are the users. Funny thing: I recently had someone tip me off that I should start using it :) They were quite surprised I was the main author.
Wouldn’t this be a liability though? In this scenario are you just blindly signing whatever? If yes, that’s obviously not good. The alternative is you have a long review and audit process but in the event something falls through the cracks, this still bites you.
I would be happy to pay for the service. It wouldn't be just the cost of the certificate. It would be the months of labor spent fighting the operating systems and their intricacies. This feels like knowledge that could be managed at scale much better than me doing it in isolation. The cost to me is much greater than just the cost of the certificate, though it's an issue for open source work. And I would be so happy to subsidize that work through a reputable service that was consistent and did that fighting for me.
Seeing as we're now HTTPSing everything under the sun, including the malicious, I don't see the problem with signing every single binary under the sun regardless of malevolence.
As others have noted on this topic, HTTPS only requires proving control of the domain, which can be addressed using DNS records. With app signing, the operating system vendors want to know you are a legitimate business, which means a Dun & Bradstreet number and often more, and which is much more complicated to validate.
WASM and WebGPU are closing the performance gaps between browser and native. It's getting to the point where if it's not a device driver, it can probably be recompiled for use client-side in the browser.
Yet another bytecode runtime, as many others since the 1960's, and a GPU technology based on 2015's hardware capabilities, only supported currently by ChromeOS (nee Web).
I recently went through this same issue at my company - only found out about the change in requirements when I couldn't renew my cert at the previous provider.
There is surprisingly little info available on how to do code signing for Windows now. I don't want to use a physical device - with fully remote teams it's not feasible. Eventually settled on Azure KeyVault with Digicert (I don't like Comodo aka Sectigo). There is really little info available on how to get it all to work together, and you have to spend around $600 before you can even try and see whether it can work.
Now that it's all configured, the setup works well. The new setup of doing the signing via Azure is more secure than storing the private keys on the CI system. But I never thought that signing an app for Windows would be more difficult than signing for macOS or iOS.
Hey, is there any chance you could do a writeup on how you did things? Due to the lack of information you mention, I think it might be useful for a lot of people out there, including me.
I'm probably not gonna get to a full post anytime soon, but I'll summarize here. This is from memory, so I may have some things wrong.
1. DigiCert CS certificate. You can validate your organization before paying anything, but it felt like we ended up in a low-priority queue because of that. After not hearing back for 2-3 weeks, I emailed support, then got validated in a day or two.
2. Azure KeyVault: "Premium" pricing model, since you need RSA 3072-bit or RSA 4096-bit HSM-backed keys. Generate a CSR here. There are a couple of annoying steps such as getting the access control setup right, but nothing too complicated.
3. Once you have a validated org and paid for the CS certificate, you can upload the CSR to DigiCert, and download the certificate.
4. "Merge" the certificate on Azure KeyVault.
5. Create an "application" on Azure which gives you API credentials. You need to copy a whole bunch of IDs:
# key vault:
azure-key-vault-url
azure-key-vault-certificate
# client application:
azure-key-vault-tenant-id
azure-key-vault-client-id
azure-key-vault-client-secret
You use the above with AzureSignTool to do the signing, e.g. from your CI system.
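From memory, the invocation then looks roughly like this (all values are placeholders, and the option names mirror the IDs listed above):

    # Sign an installer in CI; the secret values come from the CI secret store, not the repo
    AzureSignTool sign \
      --azure-key-vault-url "https://example-vault.vault.azure.net" \
      --azure-key-vault-certificate "codesign-cert" \
      --azure-key-vault-tenant-id "$AZURE_TENANT_ID" \
      --azure-key-vault-client-id "$AZURE_CLIENT_ID" \
      --azure-key-vault-client-secret "$AZURE_CLIENT_SECRET" \
      --timestamp-rfc3161 http://timestamp.digicert.com \
      --file-digest sha256 \
      MyInstaller.exe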
It's not the way the OP did it, but there's a blog post here on how to ship apps using cloud signing with the Conveyor tool. The title talks about Electron but it should work for any kind of app (not tested with .net)
It appears legit, in that vim and transmission link back to it.
Right now the "foundation" is run by the SignPath company. But TBF they say they hope the foundation will eventually scale, and become independent and community run.
So much for 'Developers, developers, developers!'.
If there is one thing that seems to be common amongst large tech companies it is that it all starts out looking great, then after a few years the rot sets in and if they manage to hang on long enough eventually they turn into parasitic entities. There is no way that a company the size of Microsoft could not come up with a way of working that would enable the FOSS world that they claim to be such huge supporters of to deploy on their platform without hassle or cost. All of this friction in the name of security always accidentally helps the bottom line.
I really wish they would lower the cost of signing certificates generally. $10 tops. I can’t justify the cost for my very specialized software very few people use.
My only explanation for why it needs to be so expensive is that it needs to be a large enough charge that the rightful owner of a stolen credit card might notice it? Because it’s in and of itself an author verification? If that’s the case though, they could refund some or all of it after say 3 months?
Even then, just as a verification, there seems like very little need to charge for that verification annually. It really just seems like rent seeking.
Microsoft have lowered it. The store costs $19 iirc, one off fee, not recurring or yearly. So this is only for distributing outside their store.
Certificates are expensive because governments aren't digitized and don't really "do" cryptography, so associating ownership of a private key with ownership of a legal identity requires a lot of manual effort. CAs have to do things like look up your registration details in country-specific websites that don't have APIs, make phone calls, study passport scans and so on. That's all very labor intensive which makes it expensive.
It could be made a lot cheaper if governments ran their own PKIs and issued every company registrant with private keys as part of setup, likewise if passports came with private keys usable for document signing (govs already run PKIs for e-Passports but you have no way to associate a personal private key with that certificate).
Unfortunately there's been no movement on that for a long time, and the few countries that did experiment with national PKIs have mostly given up. America never tried to do large scale government PKI outside of the DoD, and therefore US software firms never felt much need to do a good job of smartcard support. No mainstream operating system has solid support for it, standards are lacking, etc.
Then you have the generally high overheads that the certificate consumers (Microsoft) and CA/Browser forum mandates for CAs. That costs money too. Then the overheads that come with a company existing at all (websites, taxes, salaries etc).
The reason for the annual fee is to amortize the cost over time. It costs the CA more than the 1-year fee to issue the certificate in the first place, but if they assume you'll use it for at least a few years then they can break even then make a small profit.
Now that GitHub has CI and MSFT money, it would make so much sense for GitHub to become a code signing CA!
With npm, you can opt in to have your npm package releases signed, and it took less than five minutes for me to integrate. As long as the package is published with GitHub Actions (or other supported CIs I guess), you can sign the package and npmjs shows it as well.
Git also has release and commit signing with gpg/ssh keys, so the authentication is already solved.
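For reference, the opt-in is roughly a one-liner once the workflow has the right permission (a sketch, assuming npm 9.5+ and a public package):

    # In the GitHub Actions job, grant: permissions: id-token: write
    npm publish --provenance --access public

npmjs then displays the provenance attestation on the package page.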
Sounds like Microsoft… you get Microsoft npm releases signed if you use Microsoft GitHub Actions. Deployed to Microsoft Azure, built with Microsoft GitHub Codespaces enhanced with Microsoft GitHub Copilot, all from your Microsoft Windows machine.
Folks shouldn’t be forced to use proprietary software from US-based, publicly-traded megacorporations just to build & sign their libre software.
Code signing certificates are supposed to assure that the owner of the key has been verified to be a specific legal entity (person or organization), and also to assure a certain level of protection of the private key, such as being managed by an HSM. NPM signing or SSH keys don't provide any such assurance.
Hmm... my github org, owned by my business has my business's tax information (and verified), as well as my business billing information. It sounds like they should be reasonably confident that the org is my business.
If the GitHub action only takes the source repo itself as input, you can review the state of the repo, including the .github/workflows, to ensure that the executable was created consistently from a given commit.
This assumes the actions are not downloading content from other places, which I’m not sure is easy to enforce given the prevalence of package managers. Meaning workflows have to be reviewed manually for that.
All PKI schemes have multiple singular points of failure: the user and system trust stores, the root CAs, the end-entity certificate, the security of all hosts, &c. Singular points of failure aren’t inherently an issue; the bigger concern is how strong each point is.
Why do we let any random application open its own files and folders at will? The actual selection and opening of files and other resources should be the job of the operating system. GUI programs should be able to call "open" "save" and other dialogs to get handles to files, not just their names. The OS should limit access to resources to those it provides (as capability tokens) and nothing else.
For CLI programs, the shell should take care of managing parameters in a standard and trustable way, also returning tokens instead of file names.
Collectively, we're like chickens caught out in a rainstorm... looking up, frozen in panic. Some of us know how to get out of the rain... but we're stuck in the middle of the flock.
> Why do we let any random application open its own files and folders at will? The actual selection and opening of files and other resources should be the job of the operating system
This is how macOS works these days with sandboxing. Unless you manually go into system settings to grant "full disk access", apps only get access to files the user has manually selected (through the system open dialog, drag and drop, double-clicking, etc)
People continually decry this as "iOSification of the Mac", "the end of general purpose computing" and "too many damn permissions dialogs what is this Windows Vista or something"
Almost but not quite. Mac sandboxing is optional and doesn't eliminate the signing requirement because Apple see it as a way for app authors to reduce the impact of vulns in their app, not as a way for users to run untrusted Mac apps.
Therefore there's no GUI to see whether an app uses the sandbox or not, or what permissions it requests, and apps can statically request any permissions they want and they'll be granted silently.
Apple do this because their vision of solving software trust revolves around vetting people in the app stores. It's still very much code signing and identity based.
Of all app platforms, only the web attempts to let people run arbitrary malicious code without risk.
Yeah I don't really understand the push back on the way they're handling it. I actually think zero trust to access the filesystem without explicit confirmation is a good thing.
My favorite consequence of this is that running "find ~" now blocks on many GUI dialogs. Apple is sending a "we don't care about developers/power users" message very loudly.
So, a music player that keeps a database on 25000+ files needs to hold and manage 25000+ capability tokens? Do I have to select all those files in an "Open" dialog box?
No, it doesn't work like that. For example, if you had a 'Music' folder that everything was within, you grant access to the Music folder (and thereby any child folders or files).
It's about granting access to a folder and its children, rather than individual access to files.
For example, when you try to import your files, if they are located within the 'Music' folder, the OS will ask if you are OK with the app having read / write access to that folder. You click OK - it will never ask again for anything you store, import, or interact with under that parent.
Similar to how IDEs ask if folders should be trusted when you open / run projects.
Typically all your music would be in a "Music" folder and not randomly scattered around your file system, it could simply hold a capability for the Music folder.
There are a lot of scenarios where I might want to take the scattered approach, without wanting to authorize the whole parent path. "Show me any mp3 file anywhere in c:\" for example
Sure, and the application gets access to all the MP3 files anywhere, if that's what the user tells the OS to let it have. There's no reason that shouldn't be a thing. It's not like we're going to run out at 64k of RAM or something. ;-)
You can hand someone your wallet if you want, or just give them exact change to make it $3.50. We should be able to do simple things like that with our OS.
Because we're using operating systems rooted in the '80s, and nobody's going to rewrite them from scratch along with all the software running on top of them. Web applications are the best thing we could get.
Well, if Ubuntu Snaps are any indication, then I'm happy for the 1980s OS design. I mean, it's cool that you can get a sandboxed program with a simple CLI command. It sucks that it's completely useless until you figure out how to give it access to the host file system, because guess what, most software that's useful for anything other than entertainment needs to interoperate with other software using files.
To be fair, Android does it better in that at least the apps can call into system file picker, which lets me simultaneously navigate the host FS and grant the app access to specific files or folders (and it explains what it's doing).
Still, I'm worried, because decisions about future OS and platform architectures are increasingly made by people who grew up in the "apps era", and likely internalized the misguided idea that "data produced by a program belongs to that program", further disempowering the users (including, ultimately, themselves). This is in opposition to the idea that made computing ubiquitous and a tool for people to improve their lives: the idea that data is independent of the software that produced it; that the data files are owned by the user, can be copied and moved around using generic means, and worked on in many different software tools.
Feels like Android keeps making it worse, at least for older apps that have not been updated to whatever the most recent way to access files is. I have several old apps installed that I can't figure out any way to access files for anymore. Interop between termux shell and apps is also trickier than it used to be. And small things like trying to launch a text editor from Dosbox Turbo to edit autoexec.bat that used to work fine but now just results in some error.
I don't want anything like that on desktop. Some way of wrapping applications under my control (not some app stores control) and easily edit a simple text-file to give it permissions would be nice, but nothing beyond that.
Nope... I'm waiting for Genode to get to the point where I can use it as a daily driver... or GNU Hurd.
Sandboxes and containers are what you do when you don't have proper capability models to utilize. I'll keep putting up with Win10 or Linux until I get that.
> Sandboxes and containers are what you do when you don't have proper capability models
There is no better security than through compartmentalization. The other approaches are through obscurity (doesn't work at all) or correctness (unrealistic). Also, compare the number of CVEs of Qubes and any other system.
I ship a piece of free macOS software based on pyinstaller. Literally include a script to bypass the signing for the entire folder to get around having to pay.
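(Roughly speaking, such a script just strips the quarantine attribute and ad-hoc signs the bundle; a sketch, with the app path as a placeholder:)

    # Remove the quarantine flag Gatekeeper checks, then ad-hoc sign the whole bundle
    xattr -dr com.apple.quarantine /Applications/MyApp.app
    codesign --force --deep --sign - /Applications/MyApp.app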
It’s bad practice. Confusing for users and dangerous for users to get used to doing something like that.
How does this work? Does Digicert "host" the HSM in the cloud for you and make it possible to automate things again?
The goal of us developers is of course to sign an executable fully automatically, while the CAB forum seems to want you to always enter a PIN code on a hardware device any time you make a build.
Are there any good solutions or hacks to automate it? Does Digicert really make it possible again to just invoke signing of anything from the command line or CI task WITHOUT entering any pin or 2factor stuff? That would be great, and of course ultimately circumvent the CAB forum's demands as anyone who stole the digicert credentials can now sign anything.
The cheaper option is SSL.com eSigner which is also a cloud hosted HSM where you can access it using ordinary saved credentials.
In theory they want you to use a 2FA authenticator for it so their protocol requires TOTP secrets and the like. In practice nothing stops you saving the seed to a file, so you can sign automatically.
Of course then you're reducing the security gained by the system. Your CI becomes a very weak point. But it does work.
The CAB Forum is well aware that people want to and can sign automatically. The purpose of the HSM is to fix revocation, not to require manual intervention for signing software.
Lack of open governance explains the fork in 2002. [0] The github commit history shows it's still a largely one-person-band. [1] The problems with this include a lack of succession planning, a lack of ability to scale bandwidth, and a narrower pool of ideas. The documentation website is really out-of-date as it mentions using a Borland compiler.
The $629/year is if they use a certificate from one particular company (Digicert) that manages keys in a way that would be easiest for them to fit into their current workflow.
There are other options that would require more changes to their workflow but are much less expensive. See the responses on GitHub or the other comments here for several of them.
A basic code signing cert can be had at $150 per year. That's already with the thing where they check if your company information is real, and it does the job for SmartScreen. Not sure if there's any advantage in the more expensive ones.
With ffmpeg running straight in the browser, maybe ImageMagick could go that way too for systems requiring signing? In the end developers might take a second look at more open alternative OS.
I"m curious what the actual negative impact of this would be - ImageMagick is a command-line tool (or runs in-proc somehow) and rarely used directly by end-users, just like LAME and ffmpeg - and the binaries are far more often shipped as part of another application.
My day-job SaaS uses ImageMagick on Windows (long story), but this doesn't affect us - I imagine most other users will be the same.
I'm surprised they even make an installer for Windows at-all instead of only shipping portable zips.
Some PDF-related apps used to bundle ghostscript's installer as a silent install and it would just appear in your programs list. Then those users would see a mysterious entity named ghostscript in their start menu and complain online about it being malware or whatever. The ghostscript people decided to disable their silent installer because of it.
The fallout has been that corpos can build it from their source and ship it themselves if they need it so bad, and users are always informed up front about what is coming to their computers.
Let that stick in your brain for next time you wonder why Windows still hasn't gotten a competent package manager that can wrangle dependencies. The userbase has been made terminally paranoid by decades of trojans and adware installers.
Winget just runs installers and uninstallers. I don't think it wrangles dependencies. That is, it won't install a dependency as its own package, nor will it uninstall a dependency when the last dependent is uninstalled.
Honestly, winget at this point is (or is about to be) suffering from the same trust issues GP described. I recently tried to provision a new Windows install with some software, and for many common tools, I found that winget offered suspiciously many similarly named options to choose from. Sometimes all the options look off in some way, like having weird vendor names.
For me, this is a regression compared to just downloading installers from the web - at least I find it easier to find the official site of the product (and distinguish it from fake sites with malware), and pull it from there. Or, increasingly often, get an installer from a Github release.
winget supports multiple sources and you can filter on them. One of the default sources is 'msstore' and includes the Microsoft Store and it shows all that junk unscrupulous vendors have added. (Keep in mind it is all reportable if it is pretending to be an open source tool but not from an "owner" developer. `winget show package-id` used to include the Store reporting URL but doesn't currently seem to show it. Maybe it was abused?) The other source is 'winget' which is mostly powered by an open source repo [1] that you can file issues against and even try to make PRs.
So far the "winget" source seems relatively well curated. `winget show Package.Id` seems to me reliable at matching up official sites/GitHub releases for that source.
At times I'm tempted to `winget source remove msstore` because it is full of so much junk, but I still also appreciate being able to manage store installs with winget.
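A less drastic option, if I recall the flag correctly, is just to filter searches to the community source:

    # Search only the community-curated winget repo, skipping the msstore results
    winget search imagemagick --source winget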
Windows Defender assumes files of unknown provenance are malicious and will block or quarantine them -- even (especially!) if they came from a zip file rather than an installer. Anyone who wishes to distribute binaries for Windows needs to have them signed.
You will be prompted 'do you want to run this executable' if another installer tries to run the IM installer. Similar to Wireshark and WinPcap. If complaints from a few users pushed the ghostscript people to remove the silent installer, then this can lead to anything, up to an alien invasion.
> I'm surprised they even make an installer for Windows at-all instead of only shipping portable zips.
You really appreciate that when you are managing a fleet. Though you can always slap a script or NSIS installer yourself, it's way easier if it's already done.
You can use the embeddable release of Python (which is signed) to run arbitrary unsigned code on Windows. If you don't believe me, try it. Code signing on Windows is security theater at best and is next to useless.
I agree that let's encrypt should just issue them. If Microsoft refuses to support it, someone should write a windows kernel module to add (or patch in) support.
Also FYI, Bob is pleading for some volunteers to help manage the project; he's doing it all on his own as a side project. If you can, please put the word out
Interesting. I tried to remove imagemagick from Ubuntu 22.04 but it is depended upon by inkscape, calibre and pdfsandwich. I wonder if it's possible for these projects to use GM instead?
I am not sure I get the difference between these projects. I have tried both, and also looked at libvips, but for me, with a narrow use case of batch overlay text insertion, I could not find anything.
Aren't that many pure windevs left these days... In many professional circles Windows itself is no more than a vintage curiosity.
So in many ways, WSL is a survival strategy - it makes it possible to stay a relevant developer while working in windows.
BTW, I know a case where WSL is nothing short of brilliant: online game development. A lot of times, the backend is running linux only, while the client is windows only. WSL is the only decent solution here.
I think that’s perhaps a selection bias. Tons of people work on windows desktop apps, Microsoft stacks (sharepoint/office/power..) but what they do isn’t on GitHub, it doesn’t end up on Twitter or HN. Perhaps not even on the StackOverflow dev survey. We just go to work and write software. I also think that it’s a matter of where you are at. If you ask anyone in Silicon Valley what tech they use, few will say Windows, C++, C# or COBOL. But if you ask in traditional industry (manufacturing, chemical, …) and perhaps in Germany rather than the US, you’ll get a completely different answer.
Well, I'm from Germany, working for a quite large software shop. Windows is definitely going away. Ten years ago Windows apps had around 50% of devs allocated; now it's down to at most 15%, dying quite fast. Even the Windows devs mostly use WSL tools where they exist.
SaaS is the future, and Windows didn't find its place in that space.
I am also located in Germany, and there are plenty of Windows jobs; it's no accident that Germany is still one of the markets relevant for products like Delphi, with an annual conference.
Many SaaS products like Sitecore power several Mittelstand companies.
Well, Germany is not exactly a powerhouse of IT innovation these days.
I took a few week-long professional courses related to low-level stuff in Munich and elsewhere in Germany, and it felt as if most other students were working in either the finance or car-making industries.
Both industries are conservative (for a good reason), and generally lag behind the mainstream by at least 10 years. I am not even saying Silicon Valley maintream, no. It's more like moving from pure embedded to Linux, or replacing a 30-40 year old OS with Linux. A lot of Windows machines.
To explain why this felt so strange to me: in London, UK, outside of the City, any decent IT company looks like the following: developers use either Mac or Linux machines (4 to 1); the non-technical or semi-technical audience (HR, DevRels, marketing, design, etc.) uses Macs; finance and legal teams mostly use Windows.
Technically speaking, all backends, all of compute, everything is based on Linux. On-prem, cloud, whatever - always Linux.
So for somebody wanting to stick with Windows WSL becomes a necessity.
That’s the thing though: software development isn’t IT innovation. Most software is probably written at companies that aren’t primarily software companies. It’s done in traditional firms in manufacturing, banks, mining, whatever.
Even if Germany continues to be less prominent in the software industry, more software lines of code will be written in Germany every year.
The thing about Windows is that even if 100% of developers left it, most non-devs would still use it. So if you make software for them, you are going to need it. I make a CAD program. We can’t really leave Windows, and even going multi-platform or web-based doesn't make financial sense.
There is still plenty of 'older' or more specialised software that targets Windows desktop, but the vast majority of 'Windows' developers are now working within .NET Core and targeting web technologies as the 'front end' of choice.
So you could argue either way, but there is a clear noticeable shift away from Windows desktop apps across a lot of industries.
WSL is only tenable if you only use your computer for development.
Trying to use a windows computer for gaming or media production with WSL installed is problematic. For example… opening file explorer causes WSL to start and file explorer is frozen until it launches. And WSL seems to constantly be running eating up 8gb of RAM.
Unless you need a GPU, codespaces are way more convenient than WSL.
Let's not. Let's normalize using proper operating systems for software development and let these gate keepers to hardware that you own find out what unemployment is like.
This might be a misunderstanding on my part, but why does the ImageMagick code-signing certificate need to meet CA/B Forum requirements? My understanding is that those requirements apply primarily to the Web PKI, and not other PKIs or certificate profiles like Authenticode.
(Regardless, vendor-specific code-signing schemes are a racket.)
There are plenty of other certificate profiles besides CABF, but that’s beside the point. My actual question was whether Authenticode is actually requiring the CABF BRs for OV certificates, or whether there was some arbitrary CA-side policy change.
Yes CA/B Forum sets policies for Windows code signing certs and CAs just follow, even in cases where it's harmful to their customers or obviously dysfunctional :(
CA/B Forum and its members are ignored by Apple and for good reasons. They run their own PKI which is a lot easier to use and cheaper.
Good reasons, such as? Actually trying to mandate some sort of secure storage is not a "good reason"? Have you not read about how many of those keys have been stolen and abused?
I'm well aware! It's actually Microsoft that originally pushed for that change, CA/B Forum just implemented it IIRC.
Apple's PKI:
1. Has a much easier verification process.
2. Is a lot cheaper, dev program membership is ~$100/yr and they throw in a couple of DTS incidents as well.
3. A big one: Apple's PKI issues certificates with long term stable identifiers (team IDs). CA/B PKI specifies all kinds of random details that CAs have to follow but the subject name isn't one of those, so systems that use the subject name as a key (which Windows does in various places) will immediately forget who you are if you change to a competing CA, or change your company name, or it relocates its HQ, or if you change OV<->EV, or if they just change their policies again for no real reason. This problem makes CA/B certificates largely useless in practice because you can't associate anything with the verified identity. They know this problem exists because I have raised it with them specifically, and they don't care.
4. The root and intermediates are more stable (they experience odd expiration/cross-signing issues less often or not at all).
5. Apple hardware can protect private keys using the secure element that comes with the device, so it can have equivalent security without needing the USB token. USB tokens have a bunch of issues. Also, Apple's OS can limit those hardware-protected keys to specific bits of software, so you can ensure that only your code signing tool or build process can access the key. USB dongles can't do this; anything that can talk USB and get your credential can use it.
So those are some specific technical justifications for you.
The certificate process at Microsoft is an absolute hell designed for their "authorized" sales people to generate money out of thin air.
As a result, most people now skip the "SmartScreen" warning out of habit. This warning just adds to the noise from the Windows system, which you want to skip as fast as possible.
Tons of good software simply can't afford it or won't bother getting such a non-portable thing as a hardware FIPS token. Classic Microsoft "just shove it down your throat" practices for the sole purpose of revenue.
There are some open source technologies keeping the internet-as-we-know-it alive. Surely the might of Github and Microsoft could support them or, even better, a consortium of the Big Tech companies could put together a committee, like they often do for standards, and agree to fund the most-used open source projects with stipends, grants, security audits, and some sort of badge of trustability.
I'm not trying to be snarky and now I can't even remember where I heard it... but aren't all of Microsoft's keys still flapping in the wind themselves? Thanks to the work of some state actor?
I'm not a super gee-whiz technical guy, but what exactly are you getting out of a windows installer vs *.exe + ENV variables? I personally prefer software that just runs as packaged executable, and I can add system variables either at launch or by myself. It's always a little sus when a tool doesn't have a portable release.
It's sort of my secret solution to using software I need from gimped userland, although I know it lights alarms in the big InfoSec secret dungeon. Luckily, they're lights for "Visual Studio Code", so that "alarm" is on all the time.
Compare to codesign, vulnerability management is more concerning. Ubuntu users should know that security patches for ImageMagick are not free! If you do not believe that, read this https://ubuntu.com/security/notices/USN-6393-1. The security patch is only provided through Ubuntu's Expanded Security Maintenance (ESM) plan, which means you must pay for it. So, seriously, consider having your own build. Then there is no need to worry about codesign too.
What does this have to do with ImageMagick? They don't control the versions packaged by Canonical [0]. The bug you referenced is fixed in upstream, which you can access for free on GitHub.
Ubuntu users on 22.04 LTS or later are also unaffected, because the release came with a version that was already patched [1]. If you upgrade to a newer Ubuntu release, there is no need to pay for ESM.
Your comment makes it sound like the ImageMagick developers want money specifically from Ubuntu users to receive security patches, which is not true.
You appear to be leaping to the wrong conclusion. The problem is Canonical charging money for security updates. CentOS, Alma, Rocky, Fedora, Debian, openSUSE, Arch, and 300+ other Linux distros don't charge money for security updates either. The moral of the story is "Don't use enshitifying corporate Linux distros run by crazy people."
This still has nothing to do with the ImageMagick developers, which the original comment implies: "Compare [sic] to codesign, vulnerability management is more concerning."
You are free to criticize Canonical for their business model, but that seems off-topic to me right now.
From what I understand ImageMagick's vulnerability management involves updating to newer versions, so this is specifically about distributions that don't. Ubuntu chooses to distribute older versions with their own patches, but require Pro for you to get them.
With that said, the mentioned vulnerability is an odd one. A CVE published in 2023 with a CVE number for 2022 for a bug that was found and fixed in 2020. The bug in question is a memory leak when passing -help.[1]
nix and habitat are alternative ways to bring user-space additions outside of a distro's package management in a repeatable manner. Otherwise, one has to make their own packages and run their own CI/CD package builders. I did this for Erlang, Elixir, and rebar3 since the corporate consultants who ran a YUM repo appeared to have stopped providing such for CentOS 9 stream. I think it's less painful to either standardize on 1 OS and add custom RPM/DEB packages, or ignore what the OS provides and vendor all user-space dependencies to an isolated stable path with a different packaging/build system.
These days, in 99 cases out of 100, I’d use something like Squirrel to install per-user and enable self-updates. But it’s still an executable, and I still think Windows Defender will react poorly to an unsigned one even if it isn’t going to need any elevated privileges.
Seems like security is slowly eating the software world. At some point security will be so onerous that it will take more effort than the actual software being secured. Software was more fun in the good old days before there was a huge criminal industry exploiting it. Alas, it was bound to happen eventually.
That said, seems like you could bring down that price by hosting the key yourself with a yubikey or cloud hsm instead of buying the turn key solution from digicert.
I always assumed it wasn't the cost per se that provided value; malware authors certainly could lay hands on $630. The value is in actually asserting authorship & tying it to a legal identity.
I'd assume creating a fake persona / faking whatever is required to satisfy the identity checks that come with that $630 is the actual deterrent. If it was cheap to perform the actual identity checks it would still provide this effect.
And arguably the issue also isn't with money - it's that the value in "actually asserting authorship & tying it to a legal identity" is primarily a value for commercial vendors and platform owners. It's forcing open source developers to entangle themselves in the very system that open source culture is (or was) fundamentally in opposition to.
High financial burden? It’s something like $600. For me, the tragedy is that something as useful and valuable as ImageMagick is scraping by with so little support from end users and other companies and projects that use it.
We need a LetsEncrypt for executable signing. Although I suspect Microsoft and Apple are making distributing executables for their platforms costly and inconvenient on purpose in order to drive developers onto their app stores. If that's the case, I guess we'll just have to train users to ignore all the security prompts about unsigned installers (some developers already do).
The whole point of digital signing is to verify and have strong trust in the provenance of the code.
This requires identity validation and controls for it to actually work, which is fundamentally incompatible with a Let's Encrypt-style pretend-CA.
This means storage of keys in hardware. Otherwise code signing keys are stolen and used for malware distribution in high profile attacks. This happened one too many times hence the more stringent requirements.
Ironically, these guys want to have their cake and eat it too; there is no reason they couldn't just manually sign release builds, which come once in a while. Many small OSS projects do this. No, they want full automation with an organization cert and are now complaining about meeting the requirements for that.
A certificate that says "this installer really did come from the owner of exampleapp.com" is better than users just trusting whatever random file came up in a Google search.
And meanwhile, in Linux land, people will install things by piping curl into bash[0][1], so the bar is just not that high. And the ultimate answer to security will come from better app sandboxing, not from charging every native-app developer in the world $700/year for a code signing certificate.
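(For the unfamiliar, that pattern is literally something like the following, with example.com as a placeholder; whatever the server sends gets executed, no signature anywhere:)

    # Fetch an install script over HTTPS and run it immediately
    curl -fsSL https://example.com/install.sh | bash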
The whole point of identity verification is pointless, given LOLbins and LOLdrivers that are never updated nor fixed.
I wish we had a cryptographic verification mechanism based on the code and the reproducibility of its builds, with locality-sensitive hashing mechanisms rather than the current ones.
Technically this might actually be a decentralized ledger use case that makes sense.
That’s the idea behind Sigstore[1]. The larger challenge is the vendors themselves: Sigstore (or anyone else, really) can give code-signing certificates and tooling to developers for free, but that tooling has limited value if the host OS doesn’t bundle the CA certificates that would enable native validation.
There are lots of systems (and ecosystems) where host trust doesn’t matter, like containers or language-specific package management. Sigstore is currently well-suited for those contexts.
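As a rough sketch of what that looks like in those ecosystems today, assuming the cosign CLI and keyless signing against an OIDC identity (file names and the identity are placeholders):

    # Keyless signing: cosign obtains a short-lived cert bound to your OIDC identity
    cosign sign-blob --yes myinstaller.exe \
      --output-signature myinstaller.sig \
      --output-certificate myinstaller.pem

    # Verification pins the expected identity and issuer instead of a long-lived key
    cosign verify-blob myinstaller.exe \
      --signature myinstaller.sig \
      --certificate myinstaller.pem \
      --certificate-identity "dev@example.com" \
      --certificate-oidc-issuer "https://github.com/login/oauth"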
With a self-signed certificate, you are effectively your own PKI. The goal is generally to deduplicate that kind of work while also providing better security properties than “my host trusts self-signed certificates from a root CA that I keep on disk somewhere.”
A more specific example: nuget allows pinning the certs. https://learn.microsoft.com/en-us/nuget/reference/nuget-conf...
I feel signing a nuget package with a self-signed cert is not worse than any PGP signing method in terms of trust level. They both are identified by a crypto fingerprint and you have to manually get the fingerprint from somewhere and just trust it. I do not see a big difference there.
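One concrete command-line form of that pinning, as a sketch (the fingerprint placeholder would come from the publisher, and nuget.config trustedSigners is the declarative equivalent):

    # Verify the package is signed by a certificate matching the pinned SHA-256 fingerprint
    dotnet nuget verify Example.Package.1.0.0.nupkg \
      --certificate-fingerprint <SHA256-FINGERPRINT>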
Well the assumption is that the root CA stores them more safely than "on my laptop in my unlocked bedroom in an unassuming neighborhood of brighton" that most developers lean on for their secret storage, but, well, you never know.
Sigstore doesn’t seem to verify actual legal identity, just control over a GitHub account or similar. It therefore doesn’t provide the same level of assurance as a code-signing certificate. OSs thus do well to not bundle their CA certificate as a trusted root.
I don't think your OS intends to imply or guarantee a legal entity relationship with every CA certificate in your trust store. It's not even clear what that guarantee, as an end user, would get you: the presence of a legal entity doesn't somehow make that entity accountable to you or your locality.
> I don't think your OS intends to imply or guarantee a legal entity relationship with every CA certificate in your trust store.
I get the feeling that it's exactly what Windows "intends to imply or guarantee", it's just taking a long time to get there. The whole code signing part is trying to create a reality where to be able to create software for Windows, you need to be a business entity, and need to have business relationships with Microsoft - directly, or transitively. Basically, a corporate web of trust.
Github already provides a proxy of actual legal identity, at least in the EU. For a business owned org, the VAT id is provided along with validation and billing information must match.
Scheme was the third in a series of languages designed at the MIT AI lab; the first two were Planner and Conniver. The third was going to be called Schemer, but ITS only allowed 6-character filenames (because, encoded in sixbit, that fits in one machine word). Thus the pattern was established from the very beginning.
Also consider that another widely-used scheme implementation is called Guile.
It is interesting that the lack of a feature that would cost $629 to add is significant enough to make it to the HN front page.
Makes me feel like I would like to learn more about open source: what drives its development and what the business models are.
There are 152 contributors to this project who wrote 21,686 commits. If each commit took an hour of work, and we value each hour at $50, that is $1,084,300 worth of time.
How can a project gather over a million dollars worth of work time, but not $629 for a certification service?
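As a quick sanity check of those figures (all assumptions are the parent's: one hour per commit, valued at $50/hour):

```python
# Back-of-the-envelope check of the numbers above.
commits = 21_686
hourly_rate = 50

labor_value = commits * hourly_rate     # value of volunteer time, in dollars
cert_cost_per_year = 629

print(labor_value)                      # 1084300
print(cert_cost_per_year / labor_value) # ~0.00058, i.e. ~0.06% per year
```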
> Makes me feel like I would like to learn more about open source: what drives its development and what the business models are.
Simple: there is no business model. Open Source is not a business. It is a philosophy and hobby, where people help each other and give away their labor with no expectation of a return.
(Some youngsters that have grown up in the social media age have developed a kind of entitlement complex where they focus more on the popularity of their contribution than its utility; they may be developing it just to see a lot of stars on a GitHub page. But largely it's a community driven by people who just needed some code to exist and then released it for free when it worked)
(Some businesses do release code with an open source license and even accept some contributions from the public, but largely they are doing so for various business reasons and the project is more a reflection of the business than the needs of a community. Since those projects are financed and organized by the business, they tend to end when the business abandons them; whereas a grassroots community project is often just maintained by a new stranger on the internet if the old maintainer gives it up)
I'm not sure whether or not there's an issue of entitlement, but there are certainly incentives to develop a popular package (even if superficially). One big one is career development: for people early in their career, it's something they can proactively invest in, whereas they can't magically materialize years of experience overnight.
Still, high profile projects should be able to raise this type of money with ease.
If they would say “Would you match our ‘donation’ and give $10 for each year that we put $1k of labor into this project?”, that sounds like something some commercial users would accept. But the first problem with medium-scale OSS like this is that it’s no one’s hobby to manage projects or beg for money.
It’s also a problem that OSS contribution/sponsorship isn’t normalized in corporations. I could much more easily get permission to buy a $5k piece of software than to donate $5 to an OSS project that powers our largest project and has been maintained for 10 years by a single person.
The unspoken assumption here is that code signing is a good thing. I question this assumption, particularly with how this works today. Microsoft, the cert issuers, and other companies involved in this are trying to create a reality in which software must always be attached to a specific legal entity, and then that entity must be vetted through the "corporate web of trust". That $629 (per year?) isn't paying to access/license a feature. It's a membership fee in that "corporate web of trust".
It's a nice way to disempower users and individual developers, while effectively commercializing the entire Open Source space. "Want your open source project to have any users (on proprietary OSes)? You need to start a company and sign a contract with one of our approved business friends."
As the comments on the issue suggest, $629 is not the minimum price tag; it can be much cheaper without being more complicated.
I agree Microsoft could probably make it easier and cheaper, but I can’t see why they would want to, given that they want to drive apps to stores and not self-publishing. They want the bad method of publishing to have bad ergonomics.
A letsencrypt style signing process would be possible and would let people base the trust on the ownership of company.com instead of a regular cert. And for most use cases this seems good enough.
> A letsencrypt style signing process would be possible and would let people base the trust on the ownership of company.com instead of a regular cert. And for most use cases this seems good enough.
Except it seems that's not what they want at all.
LetsEncrypt works because we've worked out that, for the purposes of HTTPS, it's sufficient to attest that "you" own the domain company.com, which is verified by making you do something that can be done only when "you" control what company.com's DNS points at. The nature of "you" is immaterial and out of scope; only demonstrating control over a domain matters. This lends itself to automation (roughly sketched below).
What they want to do with code signing is to pin the code to a specific legal entity. That's crossing from data integrity/provenance straight to KYC/legal space - for users to be allowed to run your code, you must become an entity that can be easily served a lawsuit should the need arise. You can't automate that, for the same reason you can't automate renewing your national ID/passport or automate starting a company.
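To make the contrast concrete, here is roughly what the automatable half (domain-control validation) looks like. This is a simplified sketch loosely modelled on ACME's DNS-01 challenge, not the actual protocol; the domain, the token handling, and the third-party `dnspython` dependency are assumptions of the example.

```python
# Simplified sketch of automated domain-control validation (loosely modelled
# on ACME's DNS-01 challenge; the real protocol also binds the token to the
# applicant's account key). Requires the dnspython package.
import hashlib
import dns.resolver

def expected_txt_value(token: str) -> str:
    # The CA hands out a random token; the applicant publishes a digest of it
    # in a TXT record under _acme-challenge.<domain>.
    return hashlib.sha256(token.encode()).hexdigest()

def domain_control_proven(domain: str, token: str) -> bool:
    want = expected_txt_value(token)
    try:
        answers = dns.resolver.resolve(f"_acme-challenge.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    published = {b"".join(rdata.strings).decode() for rdata in answers}
    return want in published

# Whoever can make this check pass controls the domain's DNS zone; no human
# has to establish who "you" legally are, which is why issuance can be
# fully automated.
```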
> I could much more easily get permission to buy a $5k piece of software than to donate $5 to an OSS project that powers our largest project and has been maintained for 10 years by a single person.
Because business has its own logic. If you invest $5k in a product, you expect the return to be customer support, quality control, timely bug fixing, "enterprise" features, feature-request priority, legal indemnification / due diligence, etc. You make a business case to justify the expense, with the expectation that it will provide a return (eventually) in the form of revenue, time/cost savings, etc.
Donations in and of themselves don't provide any obvious return. If the project kept a "business case" page that includes what the donation goes to and shows the direct benefits that come from donation, it's much easier to justify the cost. And you might think "can't they just write off the donation?", but the project would need 501(c)(3) status, and it's apparently quite[1] hard[2] to get OSS approved as such.
When the sole OSS maintainer says “I only got $23 in donations last year, so I can’t spend as much time on bug fixes this year”, that should be a case for donating $25, especially if the end of maintenance would instantly cost five or six figures. The thing is, it’s a hope that you’ll get support for your $25, not a contract guaranteeing it.
In the end, the easiest option for me is to donate $25 personally on my employer's behalf and be done with it.
There is definitely an argument to be made that open source should be viewed and treated as a business model, particularly if there are financial costs involved which must be paid one way or another.
> How can a project gather over a million dollars worth of work time, but not $629 for a certification service?
Because the certification 'service' is a protection racket (as in criminal[0] racketeering) by Microsoft.
Millions for defence (or in this case infrastructure), but not a cent for tribute.
0: Since I'm sure some pedant will nitpick this: yes, I'm aware that large corporations have likely purchased legislation misclassifying this as not officially a crime; you know perfectly well what I mean.
Only a very small number of people are ordinarily paid to read Russian literature. The typical observation with OSS is that we would otherwise pay large amounts of money for equivalent labor in the software market.
(Contributions amounting to $1e6 over 33 years, assuming there is no history-losing VCS migration in the statistics you quoted, versus a service that costs about $630 per year, i.e. roughly $21k over the same period. While 2% of development cost is still not a lot, it’s not vanishingly small, either.)
The economics are probably pretty simple: you can have fun contributing your time to a project, but you can’t have fun contributing money. It’s also easier to get your employer to okay your spending time to upstream bugfixes or even features than to get them to pay money to what’s likely a very vaguely defined organization for a very fuzzily defined service as opposed to straightforwardly buying a thing. (The money itself would be negligible—the accountants’ and lawyers’ time will cost more.)
The classic thing is that "nobody uses Windows". The real thing of course is that many people contributing to various projects end up using some *nix setup. So despite Windows being a big part of many userbases, ultimately maintainers often don't even have a Windows machine, let alone the knowledge to deal with some Windows-specific issues or bugs.
Though here it seems to be mostly a money thing ($629/year is a real amount of money, not "pocket change").
The cost of work is based on the cost of finding someone to do the work. These people did the work for free. This specific work cost $0. The value the work provides may be higher, and I'd wager that ImageMagick has provided far more than $1,000,000 in value.
Because the contributors do the work (the coding) for fun. If they had to pay $50 so someone else could do the work (the fun part), they wouldn't do it.
It’s principle, a fundamental of the hacker mentality. A hacker will jump through hoops to do something based on principle, something others legitimately have a hard time understanding. That being said, why the HELL is it 629 dollars!?
ImageMagick is the most widely used open source image processing library and tool in the world. The source code kinda sucks but it does everything you could want, pretty much
Depends. libgd (has a good API for FFI, can be installed side by side easily). ImageSharp (C#, very good, paid for commercial use). Sharp (JS). I use Hugo's built-in image tools. Sometimes a desktop app: Paint.NET, Affinity Photo. Whatever else suits my needs.
It parses complex payloads like images in code written using an unsafe language.
It’s not an issue using it yourself with your own data, but I’d be wary of putting it anywhere in a pipeline where it would be fed user-provided data, e.g. to generate thumbnails in a backend or similar.