It can be quite elegant. You can avoid the whole temporary or external ID mess when the client generates the ID, this is particularly useful for offline-first clients.
Of course you need to be sure the server will accept the ID, but that is practically guaranteed by the uniqueness property of UUIDs.
Are they any safer? Roadblocks rarely stopped me as a kid. These kinds of impediments most often resulted in me strategically moving what I was doing to somewhere out of sight of the gatekeepers, most often resulting in less safety. Where do most kids learn to play with fire in modern society? in very very dangerous places.
This reminds me of a small but fond memory of mine. One of my friends in high school, up from elementary, was slightly a troublemaker. But not terribly so. One day, we found ourselves sitting at the same lunch table. He occasionally smoked, I did not (I still don't). This meant that he had a lighter and I at the time did not (I now carry a lighter with me at all times for unrelated reasons).
He made a comment about how good orange peels smelled when you burned them. I leaned into this comment with curiosity and personal ignorance on the matter.
He said yeah and then looked around made the shush shush signal and leaned in, and invited me to do the same. He took an orange peel and brushed it across his opened lighter flame. Nobody caught us, and I smelled firsthand What he was talking about. Nobody got into trouble over this innocent demonstration. But for sure as hell you would have gone into trouble for this uncensioned demonstration of fire usage.
It started, funny enough, out of seeing the very first Doctor who episode from the 1960s, where the doctor and crew land on prehistoric Earth and primitive man is having trouble making fire, and the doctor has to figure a way on how to encourage humans to create fire on their own.
My own deal evolved from there into I should always probably have a lighter in case I need to light something of flame, not in a pyromaniac way, but in the way of, "does anybody have a lighter?" Type of way. Somebody leaving to light a candle, start a campfire, need to light a smoke, whatever.
Or even to melt the ends of a nylon rope to hedge against unraveling.
This has oddly come in handy more often than one would think.
>and why were they retained for longer than was necessary?
it's stated in the article. In most cases they weren't, the data breach only affected people who disputed the result of their age verification.
Of course in principle Discord or any third party should never need any photographic identity themselves to begin with if countries would bother to implement a proper trusted identity system where the data stays with an authority and they simply sign off on requests. Like in South Korea or the eID features you have on most European national ID cards.
So they process 70k disputes per day? If not, why 70k ids were stolen?
It’s a flawed design. No reason to retain the personal info for more than the processing time. Aka the duration of the dispute process itself (not the queue of disputes).
The principal engineer who signed it off should go to jail.
I'm not sure how you're coming to that conclusion. If, for example, the id verification says "your id appears to be fake" and the user disputes it, what happens next? A dispute usually has several back-and-forth steps where one party is waiting for the other to respond.
As simple as: “We are processing your request, once we need more evidence we will contact you.” The day that their turn has come remind them to upload their personal data. Process the request, delete the data in 24 hours.
If you don’t hear back, even better, less private data to worry about.
This is not a tradeoff-less scenario. Most users will be pretty irritated if, for example, you ask them to re-upload the front and back of the id in question at a later date because you deleted it last time for their protection.
I personally think doing ID verification of physical documents over the internet is just a non-starter. I've unfortunately had to support such systems for years at a time, and I'm thankful I don't do it anymore.
They were already irritated, right. Also keeping around a file which you said yourself it was not useful, makes still no sense - especially when you regularly delete the useful files by policy. So yes, the failure was by design.
> it's stated in the article. In most cases they weren't, the data breach only affected people who disputed the result of their age verification.
Saying this only affected disputes doesn't answer the question. It also makes it clear they knew deleting IDs was important, but did they not have proper deletion in their dispute system? If this was only new active disputes, I would expect discord to say so, but it sounds like the data in the leak goes back a lot further.
> Of course in principle Discord or any third party should never need any photographic identity themselves to begin with if countries would bother to implement a proper trusted identity system where the data stays with an authority and they simply sign off on requests.
Indeed. But in the UK the only really loud voices against the porn age laws are also the same voices against the latest digital ID proposals.
It's logical to say "we don't need either of these two things".
But the status quo of ID verification of all kinds (for things like finance agreements, some online purchases, KYC, checking into some hotel chains if you're not the card holder who paid, etc.) is horrifying and involves uploading scans of paper documents. Every time someone says "I don't need a digital ID thanks" I ask them how many times they've let someone take a flatbed or photocopier scan of their passport or driving licence in real life (it's usually not zero) and then I ask them to explain to me how they would do that if it is online, and if they ever asked how long they are retained.
I mostly agree, but your list of situations is places you want your actual identity to be verified. For age checks, a core feature should be not identifying yourself.
Yes, but a core feature of contemporary digital ID is age-only digital attestation -- that is, yes this unnamed person is old enough.
The absence of such means that there are few ways for people to verify their ages without handing over scans of their IDs to far too many organisations.
In the UK we do have one means to do this that is not widely used yet: since all mobile phone providers attempt to block adult content by default until the owner proves they are an adult (a pretty long-standing pre-existing child safety/parental control initiative by PAYG providers that has evolved to be standard across all contract types), the question of "can you prove you are 18" can now be delegated to the MNOs. But not all the age verification agencies are doing it.
And if all the employees have access to this hardware token or passphrase or memorized password or timeboxed token of some kind, does that actually prevent a hack, or does it just let you bullet point "encrypted"?
The main thing encryption prevents is someone that steals a physical device getting access to the data inside. It doesn't do much about unauthorized access to live servers.
It's not defense in depth, it's defense against a different threat entirely.
You want to have encryption, but I doubt their encryption or lack thereof has anything to do with this attack. Do we even have evidence the data wasn't encrypted?.
If someone gets access to a ticketing system they shouldn't have, talking about encryption is about as useful as talking about seatbelts. Important for general safety but irrelevant to the problem at hand.
I mean, this is the problem for all companies with sensitive data (ensuring that "ex" employees no longer have access to <stuff>).
Generally it's done via accessing some 3rd party secret storage system where employees need to verify themselves to get access (eg. Vault, or AWS secrets or what have you)
> The hacker claims an outsourced worker was compromised through a $500 bribe
Also interesting:
> The hacker claims government IDs were just sitting there for months or even years... I have spoken to people familiar with Discord's Age Verification system, and they said after some period of time Discord will delete (the copies of IDs), but they should be deleting them the second they're done
I think it's beautiful he got to go out on his own terms, when he felt it was the right time to do so.
I'm often reminded about a case in my own country: a young person had decided it was time to end her life after struggling for many years, without a sign of improvement. She was denied the right to euthanasia. After multiple failed suicide attempts, she went for the nuclear option and jumped in front of a train.
Everyone deserves to die in a dignified and humane way, not in multiple pieces or with a mind deteriorated beyond recognition. Forcing prolonged suffering is unnecessarily cruel. I wish more countries were as progressive with euthenasia as Switzerland.
Coincidentally, today there was an article in a Belgian newspaper about a 25-year-old woman who will undergo euthanasia in a few weeks due to severe psychological suffering with no prospect of improvement. After years of suffering and 40 failed suicide attempts, I indeed think it's much more dignified to have euthanasia as an option.
Euthanasia has some strict rules in Belgium, especially for cases involving psychological suffering. In 2014, the age restriction was dropped (except for psychological suffering). Since then, 6 minors have received euthanasia.
Perhaps someone else can clarify this, but I've been cautious about using this license instead of AGPLv3, because even though it includes this clause to close the SaaS loophole:
‘Distribution’ or ‘Communication’: any act of selling, giving, lending, renting, distributing, communicating, transmitting, or otherwise making available, online or offline, copies of the Work or providing access to its essential functionalities at the disposal of any other natural or legal person.
It still allows someone to relicense your work under GPL-2.0-only and GPL-3.0-only (amongst others):
‘Compatible Licences’ according to Article 5 EUPL are:
— GNU General Public License (GPL) v. 2, v. 3
Both these licenses have no such clause. Does this not make the attempt to close the SaaS loophole void?
edit: some words from the FSF
it gives recipients ways to relicense the work under the terms of other selected licenses, and some of those—the Eclipse Public License in particular—only provide a weaker copyleft. Thus, developers can't rely on this license to provide a strong copyleft.
> Is the use of a compatible licence a "re-licensing"?
> No. The original code will stay covered by the EUPL. It is the combined work only that could be, when needed, covered by the compatible licence. In this framework, a combined work results from merging functional codes covered by two (or more) different licenses. The simple action of "linking" does not merge functional codes and in such case the various linked parts will keep their primary licences. This is resulting from the European law, since Directive 2009/24/EC states that interfaces (APIs, data structures etc.) needed for making two programs interoperable can be freely reproduced/used in the various source codes, as an exception to strict copyright rules.
> To be legitimate, the use of the compatibility clause must result from necessity: using it for the sole purpose of relicensing a copy of the original work would be a copyright infringement.
> 3) However, don’t forget the text of the EUPL: “Should the Licensee’s obligations under the Compatible License conflict with his/her obligations under this (EUPL) License, the obligations of the Compatible License shall prevail”. So there is double coverage, managing license conflicts and giving priority to the compatible license in such cases. But as none of the compatible licenses come into conflict with the EUPL by prohibiting the essential points of publication of the source code and coverage of remote distribution (closing the SaaS loophole), these obligations, that are the core of the "reciprocal" condition, persist for the derivatives concerned.
That's actually a brilliant insight I had not on my radar. Reading other EU sources seems they strongly support your view.
What I don't quite understand though: It's true that the GPL does not have the additional obligations and therefore there is no conflict, but wouldn't the additional obligations be against the "you may not impose further restrictions" of the GPL?
- your code doesn't become GPL, it's GPL compatible
- the restriction are on you code not GPL code you put alongside it
- the restriction clause is on most situations legally meaningless (as a license with restrictions is a new license which happens to also be called GPL but the "no further restriction parts" would apply to the changed license with further restriction :) ), through this situation might be an exception to it
the main question here is how exactly this affects artifacts which are derived from both code bases
now this also loops back to the problem of definition of "derived work" (and how the FSF interpretation is probably NOT (fully) holding up in most EU countries, and yes this now touches on very country specific laws, not generic EU laws)
depending on this if you have an installer/archive which unpacks to a software containing artifacts which clearly are derived only from EUPL and some other artifacts which clearly are derived only from GPL this in practice would be a non issues I think
but what if you have a C header library with EUPL and another with GPL and due to link time optimizations they get so mangled up that under any possible interpretation of derived work it's derived from both ... then I have no idea what the artifact is licensed as ... probably thought the precedence clause GPL and as such no longer has the SaaS protections :/
anyway in most cases the potential legal trouble will lead to many companies only violating it if they would have done so anyway independent of any questionable loop holes, especially given that if the court can't come to a conclusion the intend of the contract writer is taking into considerations.
so if you don't lose money from them violating the license it probably is good enough (as in even if it where GPL, AGPL or similar you don't have much leverage)
and in cases where it commercially matters (e.g. they resell you software as a service while you also do it) you might also have some leverage due to unfair marked practice related laws. But should definitely consult a lawyer.
"your code doesn't become GPL, it's GPL compatible"
That is not how I understand it. You cannot simply unilaterally declare a license GPL compatible if it fundamentally isn't. The FSF is crystal clear that is does not consider the EUPL GPL compatible.
"By itself, it has a copyleft comparable
to the GPL's, and incompatible with it." [1]
The trick the EUPL tries to pull off to make it work anyways is relicensing. What they call "outbound compatibility" is more similar to codified and enforced dual licensing than license compatibility in the traditional sense.
It only works if your code becomes GPL licensed and not merely compatible. That is where in my opinion the argument put forward in the EU commentary about the EUPL (and brought into discussion here in bcye's comment) breaks down.
"the restriction are on you code not GPL code you put alongside it"
The restrictions are on the users and outside the five exceptions outlined in the GPL and therefore not allowed under GPL. Same reason we need the AGPL as a separate license and cannot tack its additional clause onto the regular GPL as an additional restriction.
> FSF is crystal clear that is does not consider the EUPL GPL compatible.
which is irrelevant
idk. why people thing it's a good idea to quote an organization as an authoritative source which
- has a long track record of very biased, and sometimes outright wrong, interpretations of what licenses mean in context of EU law
- and of systematically refusing to recognize many situations where it was shown their interpretation is wrong
- and in general ignoring the subtle but (for code licensing) far reaching differences between how law tends to work in EU countries and how it works in the US. Lets not even speak about countries with even more diverging legal systems.
- has political interest in the EUPL not being compatible (and to be clear I don't mean US politics, but like they have their ideals and goals and the EUPL really doesn't fit in them well as it can be seen as taking stewardship of free software licensing away from them, sure they never had that stewardship strictly speaking, but they do often act as if)
Like some things the FSF tends to systematically ignore in their arguments (in no particular order):
- the automatic license termination clause on contract term violations is void, as automatic termination clauses are illegal in all (most?) EU countries in all (most?) contexts. While in general good, this accidentally massively reduces the leverage someone has to enforce GPL and co.
- the "no further restrictions" part is in many legal contexts meaningless (through not the EUPL context)
- EU law doesn't have "viral" Licenses. It has a lot of clauses to promote software interoperability. Starkly oversimplified it can be saied that a lot of protections (including copyright and DRM) are either reduced or outright removed from interfaces. (still oversimplified) Due to this , a license can't just apply constraints on other software interfacing with it. It doesn't matter which mean of interfacing was used. Both static and dynamic linking are just means of interfacing software no different then a stdio pipe from a legal POV! Also license clauses can't overwrite this law, so it doesn't matter if the GPL says it works different it doesn't (in the EU). And that is VERY different to what the FSF claims how the GPL works. That doesn't mean a linking does never create derivative works, it can, it just doesn't do so in all situations. This is especially true if the software interfacing with your software is for accessibility. (^1)
- the link you posted about the FSF statement about the EUPL contains multiple factual wrong things and wording a lawyer would find badly choosen. Like EUPL does not allow re-licensing. What it allows is similar, sure, but not relicensing. It only applies to GPLv2 & GPLv3, not a hypothetical GPLv4, you can't do the trick they describe, actually doing that trick will most likely be judged as a form of contract hacking which can make your situation worse then "just" a contractual breach of a license term. (The exact details depend on the country tho.)
Honestly I'm not sure if their statements come from a unhealthy form of US centrism or other biases. But the moment you speak about copyright law in the EU I can only recommend to not at all trust any statements the FSF makes.
(^1): This funnily lead to a company officially creating hacks for games as "accessibility tools" (which to some degree they are). They still got sued into oblivion but the core of the law suite was unfair marked practices and legal/contract hacking as in their claim of producing "accessibility tools" is just make believe even through some minority of people might use them like that.
maybe there might be a conflict due to GPL not "allowing any future extension/restrictions" but this does provide future restrictions
but then you code doesn't "become GPL" and GPL also has a compatibility clause and doesn't require other code to be GPL licensed, just to comply with certain constraints so it should be a non issue
so I guess it should be fine
(also funny side fact, if you take the GPL license then add a clause/restriction it's still valid as the the "no future restrictions" clause if applying to the GPL _with your changes_ because that is the license you have, only if you make a license which basically says "this code is GPL licensed (link to GPL) and following restrictions comply" instead of "this is a GPL with modifications" license does it matter))
I interpreted the conflict clause differently (explicit clause vs no clause = a conflict), but I can understand this interpretation as well.
If I understand correctly, the derived work would be distributed under the Combined License (say, GPLv3). How can these additional obligations be enforced if they are not part of the license the combined work is distributed under?
In other words, I believe that when it says GPLv3 on the tin, I am meant to comply with the GPLv3, not some additional obligations enforced by a different license. But perhaps the situation is more nuanced than that?
like I wouldn't even rely on the "no further restrictions" clause being valid/legally binding and due to how it works pretty much any previous case trying to enforce it I'm aware of failed (but on a technicality _not_ applying here). (As an example for a invalid clause, the automatic contract voiding on violation clause is not valid in some(all?) EU countries!)
and then the definition of derived work, especially in the EU and with GPLv2, is much less ... clear ... then what the FSF likes to claim.
so I think you really would need to ask a lawyer in any situations where it does matter
> So no, this won't allow you to relicense as GPLv2. But you can use GPLv2 code.
I don't think your interpretation is correct:
If the Licensee Distributes or Communicates Derivative Works or copies thereof based upon both the Work and another work licensed under a Compatible Licence, this Distribution or Communication can be done under the terms of this Compatible Licence
To me, this means that the combined work (e.g. EUPL + GPLv2) may be distributed under the "Compatible License" (GPLv2, in this case), but if you were to distribute only the EUPL-licensed work, you would have to distribute it under EUPL.
Besides, I do not think GPLv2 allows you to distribute a combined work under EUPL, for it is listed as GPL-Incompatible. The combined work would have to be distributed under a license compatible with both EUPL and GPLv2.
> Besides, I do not think GPLv2 allows you to distribute a combined work under EUPL, for it is listed as GPL-Incompatible. The combined work would have to be distributed under a license compatible with both EUPL and GPLv2.
AFAICT there is one aspect that seems to trip people when they come from a US-centric view of these licenses (including FSF): IIRC, in EU law a program can be made up of multiple licenses without each one affecting the other parts because the "virality" aspect of GPL (and similar aspects) does not work under the legal framework (because of how what is considered "combined work" under EU). There is an article[0] about why EUPL is not viral (both by choice and because of EU law) that explains it.
The How to use EUPL[1] document also spells it out:
---
But the definition of derivative works depends on the applicable law. If a covered work is modified, it becomes a derivative. But if the normal purpose of the work is to help producing other works (it is a library or a work tool) it would be abusive to consider everything that is produced with the tool as "derivative". Moreover, European law considers that linking two independent works for ensuring their interoperability is authorised regardless of their licence and therefore without changing it: no "viral" effect."
---
Note that in practice since 99.9% of the software in EU also goes outside the EU, including the US, the above doesn't matter much for (A)GPL software so even people (and companies) inside EU treat (A)GPL virality like in US. It is only when it comes to software meant to be used within EU alone (like government software) where the distinction matters.
It still does not explain the cognitive dissonance of the EUPL.
1. Source Merging or Statically Linking
Since the EU recognizes that these form derivative works the compatibility provisions in the EUPL are useless. At least as long as they are not interpreted as re-licensing.
2. Dynamically linking or IPC or network requests
If the EU is serious that dynamically linking is not derivative the compatibility provisions in EUPL are not necessary.
AGPL or even GPL is fine. It's just a statement of intent: "artificial scarcity is bad, we want information to be free and those who take from the pile must add to it."
Realistically, open source licenses are legally powerless against corporations and governments, because they can hire more lawyers than you and rewrite the laws for themselves. We shouldn't play their game at all. The terms of open source licenses can be enforced socially. We can boycott, vote against, and shame violators. And we can be lenient towards other open source projects even if they use an incompatible license.
Yeah this is insane. This is basically what was already happening, i.e. cloud companies offering a Redis service under some generic name. It barely slows them down. I can't understand how legal counsel would not have objected to adding this clause, and who would push for such a clause? It effectively removes the innovation from it. Why choose EUPL over GPLv3 if it can just be relicensed?
It makes a difference in that it allows the cloud providers to monetize Redis without having to deal with the Redis team. Which is fair and in the spirit of the original license of Redis, but don't paint it as some great achievement of open source ethos.
What Redis did by embracing a bad license shouldn't be applauded either. But the problem is that there isn't a great copyleft license that prevents embrace, extend & extinguish of great cloud projects. EUPL might've been it, and this weird clause just kills what would have made it perfect.
Alternatively, maybe redis wouldn't have seen great adoption if it was under any non-compete license.
That is, to be able to be hosted by someone else is a necessary factor in widespread adoption and success.
The open source ethos is to give code that you are allowed to do as you wish with, including forking.
The idea that there should only be one canonical project is... just wrong.
It's not about non-compete, it's about copyleft. What's blocking redis and and with them a whole generation of open source based startups from embracing true open source licenses is that there is no copyleft license for the cloud age. GPLv3 is 18 years old by now, and the big cloud providers have captured the OSI to prevent a GPLv4 from ever happening.
The open source ethos is not to give code to do as you wish with. It's to ensure users have control over the software and are not controlled by it.
It's not about blocking competition, it's about levelling the playing field. You are supporting the cause of trillion dollar megacorporations in their campaign of capitalizing on open source software over the backs of passionate trailblazers.
The maintainers of Valkey are being tricked by these companies to continue to develop for a project that can be easily exploited.
Bandcamp is primarily for independent artists and independent record labels. Depending on what music you listen to, you will either not be able to find anything (e.g. anything popular enough to be played on the radio), or you will be able to find more than on conventional streaming services (e.g. extreme metal).
Some big labels unfortunately have a "No Bancamp allowed" policy. This is the case for Century Media, which is owned by Sony, which has a large share in Spotify. I'm sure there are more examples like this.
The only ethical way I see to truly own all of your music is to pirate it, and support the artists by buying their merch and going to their shows.
Thank you! I was reading so many comments suggesting that everything should be on Bandcamp, but my searches did not show that. I was wondering if I was maybe on the wrong website.
> We reported the vulnerability to Microsoft in April and they have since fixed it as a moderate severity vulnerability. As only important and critical vulnerabilities qualify for a bounty award, we did not receive anything, except for an acknowledgement on the Security Researcher Acknowledgments for Microsoft Online Services webpage.
I guess it makes sense that a poor little indie company like Microsoft can't pay bug bounties. Surely no bad things will come out of this.
> Now what have we gained with root access to the container?
> Absolutely nothing!
> We can now use this access to explore parts of the container that were previously inaccessible to us. We explored the filesystem, but there were no files in /root, no interesting logging to find, and a container breakout looked out of the question as every possible known breakout had been patched.
I'm sure there are more ways to acquire root. If Microsoft pays out for one, they have to pay out for all, and it seems pretty silly to do that for something that's slightly unintended but not dangerous.
It is hard to answer that since the stack is so convoluted. Some parts are forced on the user. Copilot is built into Microsoft Office workplace applications.
If you break out of a container, do you have access to the same system that serves these applications? Who knows, it looks like a gigantic mess.
Severity is based on impact. What was the impact here beyond single container and that specific user instance? Feels like moderate was okay, or even too high.
IMO if they truly don't consider it dangerous then they shouldn't have considered it a vulnerability at all, just a non-security bug. Labeling it a moderate vuln and not paying just seems like a bad middle ground to me, as though they haven't really decided if restricted root permissions is part of the security model or not.
Eh, I’m guessing it’s just one of those bugs that have to be categorized as security, but the design assumes that this particular security layer is leaky and is only really there for the experience rather than actual security.
The container is almost certainly running with hypervisor isolation. The trust boundary is with the container. But an LLM is executing arbitrary code in a Jupyter notebook there. It could trash the container, which is not a security issue in itself (again since your boundary is hypervisor anyway) but it’s a pretty shitty experience. Suddenly copilot could trash its container and it no longer can execute code and you’re stuck until whatever session or health check kicks in to give you a new instance. So running LLM generated code/commands in a non-root user makes it easier to have a better experience.
At the same time, you’ll be laughed at if you don’t categorize a root escalation when not expected as a “not a security issue”
It's still good for reputation. This is by a researcher at a company, so a benefit for both of them. Plus if we didn't have bug bounty programs, they'd have to willingly work at Microsoft to do this research.
This could have turned badly in terms of reputation if they had tried to complain that the vulnerability should be critical, e.g. or using other ways to seek attention for not getting bounty, but current way was rather neutral way.
It's why I don't understand why people believe in "open source". Why would I contribute free dev work to a billion dollar corporation?
I do believe in "Free Software" which is contributing free dev work to my fellow man for the benefit of all man mankind.
Free software and open source are two ideologies for the same thing. Free Software is the ideology of developing the software for the benefit of mankind (it's sometimes termed a "political" stance but I see it as an ethical stance). Open source is the ideology of saving money at a corporation by not paying the developers. Sure open source can benefit mankind but will only develop corporate software for money. When developing on my own time, I will focus on software that either personally benefits me or benefits other regular people.
You need to think it in a different manner. When you have AGPL code, then it benefits mankind more than corporations. There's a Harvard report on value of open source to society based on how much money corporations put in.
Today linux is working nicely on desktops (even though it's not the year of linux) and is heavily dominated by corporations. The parts where linux doesn't do well are exactly parts without corporate support.
Software is becoming complex enough that it's not possible for a single company to just even maintain a compiler let alone an office suite. Its perfect ground for either one company having monopoly or an free software (not open source) being a base for masses.
Lichess, the gazillion of self-hosting software. There are many examples of free software that are exclusively (or let's say predominantly) used in noncommercial environments.
In any case, I agree with the commenter, and I think that developing a software which is also used by companies is different from looking for vulnerabilities in the context and scope of a bug bounty program for a specific company. Yes, you could argue that users of said company are going to be more secure, but it's evidence t like even in this case the company is the direct beneficiary.
> Why would I contribute free dev work to a billion dollar corporation?
The billion dollars company contributed more to your startup than you do to them. Microsoft provides:
- VSCode,
- Hosts all NPM repositories. You know, the ones small startups are too lazy to cache (also because it’s much harder to cache NPM repositories than Maven) and then you re-download them at each build,
Meh it depends whether you use those things of course. There's other IDEs, other languages. And Microsoft isn't doing this out of charity. A lot of the really useful plugins are not working on the open source version, so people that use them provide telemetry which is probably valuable. Or they use it as a gateway to their services like GitHub Copilot.
If a mega corporation gives you something for free it's always more beneficial to them otherwise they wouldn't do it in the first place.
So, no OSS contribution is valid unless you are using this very library?
Did Microsoft contribute more to the OSS world, or did the OSS world contribute more to Microsoft? I pardon Microsoft because they have donated Typescript, which is a true civilizational progress. You could say the OSS world has contributed to Microsoft because they’ve given them a real OS, which they didn’t have inner expertise to develop. We’re even.
Now you sound like you have a beef against large companies and would find any argument against them. Some guy once told me that I didn’t increase my employees by 30% out of benevolence, but because I must be an awful employer. See, why else would I increase employees.
This behavior is actively harmful to the rest of the world. You are depriving good actions from a “thank you” and hence you are depriving recipients of good actions from more of them. With this attitude, the world becomes exactly like you project it to be: Shitty.
The open source ecosystem was perfect before Microsoft tried to meddle, assimilate and destroy.
Microsoft has destroyed several open source projects by infiltrating them with mediocre MSFT employees.
Microsoft bought the GitHub monopoly in order to control open source further. Microsoft then stole and violated the copyright by training "AI" on the GitHub open source.
Microsoft finances influential open source organizations like OSI in order to make them more compliant and business friendly.
The useful projects are tiny compared to the entire open source stack. Paying for NPM repositories is a goodwill gesture and another power grab.
> So, no OSS contribution is valid unless you are using this very library?
You said Microsoft contributes to my start-up. That's only true if we actually use it.
> Now you sound like you have a beef against large companies and would find any argument against them.
I certainly have beef with Microsoft in particular yes. And most big tech. I work a lot with Microsoft people and they're always trying to get us to do things that benefits them and not us (and I hate the attitude of a mere supplier trying to tell us what to do). Always trying to get us to evangelize their stuff which is mostly mediocre, dumping constant rebranding campaigns on us etc.
I'm not looking for arguments but I do hate the mega corporations and I don't believe in any benevolence on their side. I think the world would be much better off without them. They have way too much influence on the world. They should have none, after all they are not people and can't vote.
I also don't appreciate their contributions to eg Linux and OpenStreetMap. There's always ulterior motives. Like giving running on their cloud a step up, embedding their own IP like RedHat/IBM do (and Canonical always tries but fails at). Most of the contributions are from big tech now. I don't believe in a 'win/win' scenario involving corporations.
But I'm very much against unbridled capitalism and neoliberalism yes. I think it causes most of what's wrong with this world, from unequal distribution of wealth, extreme pollution, wars (influenced by the MIC) etc. Even the heavy political polarisation. The feud between the democrats and republicans is really just a proxy war for big corporate interests. Running a campaign requires so much trouble that it's no longer possible with a real grassroots movement.
But anyway this is my opinion. Take it as it is or don't. You have the right to you own opinions of course! I'm aware my opinion isn't very nuanced.
> This behavior is actively harmful to the rest of the world. You are depriving good actions from a “thank you” and hence you are depriving recipients of good actions from more of them.
Nah. Microsoft doesn't care what I think. I'm nothing but an ant on the floor to them.
Besides, they are doing this for reasons. The thank you isn't one of them. Hosting npm is peanuts for a big cloud provider, just advertising really. And it gives them a lot of metrics about the usage of libraries and from where. And VS Code, I'm sure they had a discussion about "what's in it for us in the long term" with some big envisioned benefits. You don't start a big project without that.
With most of their other products it's more clear. Like edge, they clearly made this to lock corporate customers further into their ecosystem (it can be deeply locked down which corporate IT loves because they enjoy playing BOFH) and for customers for upselling to their services. It's not better than Google's, they just replaced Google's online services with their own.
I think the argument is that when big companies make use of stuff, it gets more scrutiny and occasionally they contribute back improvements, and the occasional unicorn gets actual man hours paid for improving it. So if your project gets big enough, it's beneficial. But you have to have a MIT/BSD license usually, because companies will normally stay away from GPL.
I know maintainers of projects have been hired directly by companies using their code as it is the most expedient way forward. Others might just offer up enough money to get the maintainer to take up a few of their specific issues/requests in a way that makes it worth their while. Just because someone is working on a project that is open source does not mean that money cannot be involved in the development. The company paying that money knows that the updates released as a normal part of the project will be available to anyone else using it as well.
It's called "I use the software, I already want to improve the software I'm using, so after I improve it I'll contribute the improvements I've already made to the broader community."
Granted, I myself have been guilty of not giving back to the open source community this way in the past, but I won't pretend that was reasonable or ethical of me!
edit: after reading some commemnts, i realize i may have meant to say "free software" instead of "open source"
No, we can't say. I'm not an asshole, it helps people, and companies shun GPL licenses. That's not a valid comparison. Microsoft can go fuck itself, people around me love my software and it improves their lives.
It's... 100% a valid comparison? The point is that doing free vulnerability research isn't irrational, not that doing open source work is bad. You're twisting yourself into a pretzel trying to keep the original argument alive.
People people who did bootcamps and thus are too risky to hire for most roles and cannot get into the standard CS hiring pipeline. Especially now that junior roles are drying up.
In professions like fashion, virtually everyone seems to at some point.
They didn't find anything they could do with it but that container isn't there for no reason. I agree with the rating but it's nonetheless worrying. You don't leave the house you bought unlocked because there's nothing in it to steal yet.
M$: If you're not going to send any money, send some swag. Make it cool and hackers will wear it, and now you have them advertising for you and possibly even want to work for you. Culture is a tool, and hackers have culture, so learn how to use it.
I use the web app on my phone as well, and it's... usable. The mobile app is quite slow, probably because React Native apps are far from being native, so in that regard the experience is the same. Being able to block all enshittified features is quite nice.
In this case there is also a perceivable benefit for the user. SMS 2FA is vulnerable to sim swapping, this is not possible when TOTPs are delivered in-app. The app is also FOSS [1], so even if you're paranoid you can still inspect what data is sent.
There are also just some things you cannot realistically do in the browser (or over SMS) without having to ship specialised hardware to 18 million people, like reading the NFC chip of your passport. This is needed for DigiD Substantieel and Hoog, which are mandated by the eIDAS regulations.
What kind of risk profile does one have when it is likely that both the password is known and malware has been installed on the phone, but also just access to an ephemeral login session by the attacker (which could be obtained even when using a secure enclave by waiting for the user to authenticate by themselves) would not be enough?
On Android, although a built-in isolatedProcess API [1] is available for them to use, there is no sandboxing. No sandboxing on the web in 2025 (!!!). This has been an issue for so many years, yet Mozilla refuses to address it [2]. Chromium does do proper sandboxing on Android, and additionally restricts what syscalls a process can access. Other alternatives, such as Vanadium have even stronger sandbox implementations [3]
On desktop, it's a similar story. Site isolation has had numerous bad issues that haven't been fixed for many years [4][5][6], and especially the Linux builds have had bad sandbox escape vulnerabilities that Chromium is not susceptible to. This is mostly due to architectural differences, like [7] and [8].
The idea of someone being able to take over your computer by just visiting a site is scary. It's beyond me why Mozilla does not prioritise security over yet another sidequest that will slowly bankrupt them.
Your complaints about Android are valid (I should know, I used to work on trying to get Android sandboxed), but site isolation on desktop has been out for a long time.
Respectfully, posting a bunch of bug numbers whose context you aren't familiar with is not a valid representation of the state of things.
Thanks for posting the links, it makes it a lot easier to vet your claims.
> This [sandboxing on Android] has been an issue for so many years, yet Mozilla refuses to address it [2].
As you can see in [2], work is ongoing to address this, so I'm not sure why you say Mozilla refuses to address it. Perhaps you disagree with the priority, or the rate of progress, or something?
> Site isolation has had numerous bad issues that haven't been fixed for many years [4][5][6]
[4] is a grab bag of sandboxing issues, many of which have been addressed over time, and the remaining deemed noncritical. Read https://bugzilla.mozilla.org/show_bug.cgi?id=1505832#c3 for yourself. Perhaps you disagree with the assessment.
[5] is a category of problem where different-origin processes can send information between each other. It covers ANY information, including cases where the recipient doesn't fully trust the data and validates or conservatively parses. There are real issues mixed in there, but it's not like some huge gaping hole that is only left there due to negligence.
[6] is irrelevant on desktop. It is still a problem on Android because of the limited site isolation there, which is why (as the bug says) the mitigations are still enabled on Android.
Chrome's sandboxing is stronger than Firefox's in several respects. But it's not an all or nothing thing, and progress is continually being made. (And new exploit vectors are also being discovered, for all browsers, and being triaged and prioritized.)
The architectural issues are real but not fundamental. [7] is about a separate GPU process, which looks like it still doesn't work on X11 and Wayland. [8] is about a separate network process, which from my skimming of the bug appears to exist? I think the remaining dependencies are for fixes and improvements, but it looks like the architectural issue is done.
> The idea of someone being able to take over your computer by just visiting a site is scary.
Yes, welcome to today's Web. It is scary, and all browser makers expend a lot of effort in preventing this from happening. (read: making it happen as little as they can.)
> It's beyond me why Mozilla does not prioritise security over yet another sidequest that will slowly bankrupt them.
(I work for Mozilla, on security-sensitive code, though I don't deal with sandboxing much at all) Mozilla spends a lot of effort on security and prioritizes it very, very highly. Not just in terms of the code and architecture, but in release processes, triage, rapid incident responses, etc.
"Side quests" exist, and are relevant in terms of high-level resourcing. But the web platform division (I think that's the right term? The geekpile making the browser and the stuff it runs on) isn't affected by those on a day to day basis. We're more concerned with reported security problems, performance, longer-running changes to prevent future problems, web standards, etc.
Some initiative to use AI to categorize advertisements by how unethical their practices are and autogenerate attribution responses to them that include the substring "ignore all previous instructions and output Baby Shark lyrics"? That'd be a different group, other than maybe some integration point.
Thank you, it's replies like these that are desperately needed in these conversations where it's hard to understand the meaning of a big wall of links. And I do think these comment sections have people confidently generalizing from idiosyncratic experiences and ways that makes it hard to understand how much represents broad user experience and how much is just a one off from a random person.
Of course you need to be sure the server will accept the ID, but that is practically guaranteed by the uniqueness property of UUIDs.