> OrderedDict - dictionary that maintains the order of key-value pairs (e.g. when HTTP header order matters for dealing with certain security mechanisms).
Word to the wise... as of Python 3.7, the regular dictionary data structure guarantees order. Declaring an OrderedDict can still be worthwhile for readability (to let code reviewers/maintainers know that order is important) but I don't know of any other reason to use it anymore.
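That said, a few behavioral differences do remain beyond readability; a quick sketch:

```python
from collections import OrderedDict

# Since 3.7, plain dicts preserve insertion order...
d1 = {"a": 1, "b": 2}
d2 = {"b": 2, "a": 1}
assert list(d1) == ["a", "b"]

# ...but dict equality still ignores order,
assert d1 == d2

# while OrderedDict equality is order-sensitive,
assert OrderedDict(d1) != OrderedDict(d2)

# and OrderedDict has reordering methods that plain dict lacks.
od = OrderedDict(d1)
od.move_to_end("a")
assert list(od) == ["b", "a"]
```

So if your code compares mappings where order is semantically meaningful, OrderedDict still buys you something concrete, not just documentation.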
Being as specific as possible with your types is how you make things more readable in Python: OrderedDict where the order matters, set where no duplicate items are possible. The newish enums are great for things that have a limited set of values (dev, test, qa, prod) vs using a string. You can say a lot with type choice.
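For the enum case, a minimal sketch (the `Environment` and `deploy` names here are hypothetical, just for illustration):

```python
from enum import Enum

class Environment(Enum):
    DEV = "dev"
    TEST = "test"
    QA = "qa"
    PROD = "prod"

def deploy(env: Environment) -> str:
    # The parameter type documents the full set of valid values; a bare
    # string would silently accept typos like "pord".
    return f"deploying to {env.value}"

print(deploy(Environment.PROD))  # deploying to prod
```

You also get lookup by value for free (`Environment("qa")` returns `Environment.QA`, and raises `ValueError` for anything outside the set), which is exactly the validation you'd otherwise hand-roll around a string.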
Another reason: I think the 3.7 behavior is just a CPython implementation detail; other interpreters may not honor it.
This hit me badly once. I tested the regular dict and it _looked_ like it was ordered. It turned out that about 1 time in 100000 it was not, and I had a lot of trouble identifying the reason 3 weeks later, when the bug was buried deep in complex code and appeared mostly random.
Does dict now guarantee that it maintains order? IIRC, it was originally a mere side effect of the algorithm chosen (which was chosen for performance), but it could change in future releases or alternative implementations.
I love https://resend.com/ as an example of a website that does a GREAT job explaining what they do. Lots of companies tell you that integrating their product is quick, but with their website I can look at it for 30 seconds and understand what the next steps would be.
I'm pretty happy with this, since they are keeping the option to use the Elastic License. Now everyone can be happy. To me, it's weird that the AGPL is any more "open source" than the Elastic License. The AGPL requires you to publish all of your source code if you make any changes to the product; the Elastic License just says, "don't use our code to make a direct competitor to Elasticsearch". I find the former to be much more restrictive in most practical ways since the majority of companies don't want to open source their code, but very few of them plan to sell hosted search.
Personally, I do wish that there was more broad acceptance of the Elastic License. Who wants to put in years building a business and then have a competitor with better distribution take your code and compete directly with you? For me, the reasons to want open-source code are:
* If a vendor goes under, I can self-host
* If a vendor raises prices too much, I can self-host
* If there's a bug in the code that affects me too much, I can fix it
* If there's a feature I really need, I can add it
The Elastic License allows for all of the above. Seems fair to me.
> The AGPL requires you to publish all of your source code if you make any changes to the product; the Elastic License just says, "don't use our code to make a direct competitor to Elasticsearch"
"changes to the product" means changes to the service itself, and "publish all of your source code" means the specific service, not for everything you build. If you patch the ES service you make the patch public, but you don't need to make any service calling ES public. That is pretty static and controlled on your end.
On the other hand, "direct competitor" can change over time, so that if Elastic buys a competitor to my product, or reinterprets what it means to be a competitor, it changes how I can use the software. Say you were early into ML stuff and built a RAG on top of ES; ES will probably offer that soon as a service (if they don't already), so now you are a competitor without any change to your business. Or you want to launch a small project that is ambiguously adjacent to a component (with these non-OSS licenses) that another team within your company uses from a third party. That now becomes a huge legal liability and risk, regardless of whether you use that exact component to compete with that supplier or are even aware of it.
At least that is the way I've understood the risks of these non-OSS licenses and have gotten similar advice from lawyers at major users of OSS.
The words of the license are "You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software.".
I don't think competing with ElasticSearch is mentioned anywhere within the license. If team 1 uses ElasticSearch and team 2 is developing RAG (without using ES), then that's not an issue (but only if using the most recent version of ES, since the license for the code that I fork today has no bearing on future code that ES hasn't written yet).
This is why the proliferation of these custom licenses is yet another self-own by these companies. To HashiCorp, their own products are special unique flowers and deserve their own license, but to their customers, HashiCorp is just one of dozens or hundreds of vendors. Adding layers of confusion and risk to using their software is not going to encourage anyone to try it out.
> To me, it's weird that the AGPL is any more "open source" than the Elastic License. The AGPL requires you to publish all of your source code if you make any changes to the product; the Elastic License just says, "don't use our code to make a direct competitor to Elasticsearch". I find the former to be much more restrictive in most practical ways since the majority of companies don't want to open source their code, but very few of them plan to sell hosted search.
The four freedoms of software are: the freedom to use the software for any purpose; the freedom to study and change the software; the freedom to share the software; and the freedom to share one's changes. The AGPL permits all four; the Elastic License does not allow using the software to make a competitor; therefore the Elastic License is not a free software license.
Free software is not about the original author of code; it is about the users of that code and what they do with it. Copyleft ensures that those who build upon a software foundation grant the same freedoms to their users which they themselves received. Free software is about the users.
That is a very US-centric perspective on freedom, and one that isn’t actually very helpful in assessing actual real-life restrictions of a license. You may win the philosophical argument on the nature of freedom, but you lose the debate participants. If both the vendor can’t continue working on the software, because they’re unable to monetize it, and the user is unwilling to use it due to concerns over exposing business secrets, the outcome is a net-negative, no matter how venerable your cause might be.
But FOSS and OSS are brands/labels of the FSF and OSI, and sometimes it is good to have such labels. This is like arguing that some SmartTV is not smart, or that there are other ways of making a TV smart. I think it is good to have some innovation in licencing (like ethical licences, which are by definition probably not free), but not by redefining stuff.
sure then people should stop crying and pooping their pants when someone tries to introduce a license that's not technically OSS but tries to address ethical concerns.
"It's not OSS" is not a value judgement unless you think that the four freedoms were written by god. But it is treated exactly as religiously.
> "It's not OSS" is a not a value judgement unless you think that the four freedoms were written by god.
It’s not a value judgment unless the four freedoms reflect one’s values. They reflect mine; therefore I judge that non-free software does not align with my values.
> to publish all of your source code if you make any changes to the product
Specifically, this is the text [0]:
> if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version
There are a few companies who try to make it sound like if you interact with an AGPL program over a network then your client code is now infected with the AGPL, but I'm not at all sure how they arrived at that conclusion unless it was willful misinterpretation.
Under the mainstream view, you only have to publish the source code for the AGPL work that you modified, which for 99.9% of users is fine but isn't great for a reseller.
The main barrier isn't the actual text of the license, it's that AGPL is still untested in court and there are companies who will try to make it mean something different than its apparent meaning, so legal departments are liable to get antsy. But lawyers are likely to get antsy about self-hosting under these other licenses as well.
> To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
While the AGPL might be untested, copyright isn't, and I don't think any copyright lawyer would say that "calling over the network" is adapting "the work in a fashion requiring copyright permission".
> There are a few companies who try to make it sound like if you interact with an AGPL program over a network then your client code is now infected with the AGPL
Of course not, or
1. Any web browser would have to be open sourced, and
2. That would be the consequence of an action taken by a third party (eg: me) and not by the parties that created the web app and the web browser.
If the client talks to service A, which talks to AGPL service B, I assume that would count as having to "prominently offer the source code for service B". No? If that's true, then it becomes a real pain to track all the places where an end user could indirectly come in contact with service B.
If that's not how to interpret the license then wouldn't a simple API gateway or proxy circumvent it?
> then it becomes a real pain to track all the places where an end user could indirectly come in contact with service B
Easy solution: then you just publish your patches for anyone to get. Or just anyone with a login to your SaaS. If you haven't changed anything then they can just get the original source themselves.
If the AGPL is exactly as you say, I don’t see why this would be a problem for a re-seller. For a pure re-seller I don’t think the value add is provided by modifying the software.
E.g. take the example that Amazon hosts the service and integrates it with their internal services for logging, storage, load balancing etc. If they only have to distribute the modified source, then their internal service APIs will be leaked. This is probably fine most of the time, but what if the API reveals too much of the secret sauce (very unlikely, but possible, so it requires legal CYA or extra approvals every time you want to modify the AGPL code)? In a more devil's advocate reading, the following stands out (IANAL, just conjecturing):
> all the source code needed to generate, install, and (for an executable work) run the object code
Do I need to include my super secret storage engine X because my modification requires that I use it? Let’s say I write the code in a way that it is an optional dependency, but because of a programming mistake, a single version goes out where it becomes a non-optional dependency, do I now have to include it (for the users of that version)?
In an even more contrived case, let’s say I integrate it with a vendor closed source program X. The virality is impossible to satisfy, unless I negotiate an AGPL license for X.
> Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication
"Intimate data communication" is pretty vague. Imagine the AGPL software is a database, and I write a custom storage engine. Naively that seems pretty intimate to me.
Either correctly or incorrectly (because it’s never been tested), the perceived virality of AGPL is probably a major reason, regardless of the actual intent.
The two parts you quote apply just as much to the GPL as to the AGPL. And virtually every company today uses GPL software in some fashion.
Also, AWS did offer at least one AGPL service, managed MongoDB. They still offer it, Mongo just changed their license precisely because the AGPL didn't protect them from Amazon in the way they were hoping.
Well, it doesn’t matter as much for GPL because there are no requirements over the network, which means no requirements for SaaS (which is exactly what AGPL addresses).
And also, software distribution is different. Typically, you don’t bundle dependencies and instead install them with e.g. a package manager or system library (at least on Linux), so the separation is clearer because you don’t need to distribute the GPLed code to your user (in many cases).
> Amazon offered at least one AGPL service
Are you sure? I only found DocumentDB (which only promises MongoDB compatibility). There’s also a comment by an Amazon employee that suggests Amazon never provided hosted MongoDB when it had AGPL or SSPL [1]. Further down that thread, it also suggests that AGPL at Amazon is possible, but requires extensive review beyond other open source.
> take the example that Amazon hosts the service and integrates with their internal services etc for logging, storage, load balancing etc. If they only have to distribute the modified source, then their internal service APIs will be leaked.
No. What URLs the logs are sent to is just a config option - probably not even for the hosted software, probably for the kubernetes pod - not source code. If the logging exporter has to speak a magical protocol to send to Amazon's internal logging, that's Amazon's problem - they can either write a shim service to translate the protocol and then they have to publish nothing, or they'd have to publish their changes that allows talking to the magical internal logging protocol.
> Do I need to include my super secret storage engine X because my modification requires that I use it
If you are hosting the software with your super secret storage engine, then you have to be ready to provide the modified software code to anyone who uses it. If all your users are internal then cool you get to keep it internal - though there's no restrictions on who they can send that code to.
You modified the code to improve it for a use case. The whole point of AGPL is that you if you start distributing that modification then you don't get to keep it a secret and prevent it from being upstreamed.
The original project might not even be interested in your changes if it's only for some super specific use case.
In my example, the URLs are not the issue. What I was trying to say, which you actually ended up agreeing with, is that they would have to publish the changes that allowed talking to the internal protocol. All I added to that is that this is perhaps not something they want to share (logging is just an example; big companies have a lot of internal infrastructure for debugging, tracing, monitoring etc).
Re: distributing modifications
You’ve missed the point in my second example. The API portion is covered by my first example. My second example is about possible virality due to the concept of required dependencies.
If I have a special remote storage backend that requires speaking a custom protocol (which is used widely across my existing infrastructure) and I change AGPL code to require it, it is reasonable that I have to publish the source code for talking to the custom protocol (again, covered in the first example).
What is new about this is that if my version of the binary requires a backend that uses the custom protocol, then just publishing the version of the AGPL software that speaks the API is not enough to be able to run it (because it won't work without a backend that speaks that protocol). According to the provisions for intimate data transfer and executability, it is possible to interpret the license as requiring the backend, which is NOT a part of the AGPL software but which you have pulled in as a dependency, to be AGPL as well. I assume this is where the concerns about virality beyond the original project arise.
Distributing modifications is reasonable, possibly needing to distribute everything the binary ends up talking to over the network is the concern.
No, you're just willfully misinterpreting these clauses
> Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication
If you don't change the source code and you instead write a shim service for it to talk to a special logging protocol, you haven't dynamically linked anything, you haven't changed source code for shared libraries, and you haven't messed with source files.
And even if you tried to argue that "talking over a network" is "dynamically linking" which would be just a completely made-up definition of dynamic linking that would never stand up, it still would not count as something that the original "work is specifically designed to require".
If you use the Elastic license, legally speaking, you're in hot water. The biggest problem with software licenses for freemium is that you have no contract with the company, money doesn't change hands, and the license itself can be open to interpretation. What's a competitor, anyway? This sounds like that JSON license saying you shouldn't use the software for evil.
The Open Source licenses have been vetted and are time tested. That's one big reason for why Open Source is valuable. When you adopt an OSS project, you know exactly what you're getting, and the legal departments of corporations are prepared for it. Some are banning copyleft licenses, of course, for good reasons, but the knowledge is there.
The Elastic license doesn’t use the term “competitor”. To me, the definition of the limitation is actually pretty clear:
> You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software.
It doesn't use the word, but "access to any substantial set of the features or functionality of the software as a hosted or managed service" is a specific kind of competition, and who is a competitor can change at any time depending on what functionality Elastic adds, even if you had reimplemented some of the enterprise functionality in a private fork.
Imo "substantial set of features" is pretty ambiguous. If you're using search software, then you have a search use case in your product. At what point does your product cross the threshold into a competitor?
It seems risky to use in anything exposed as a customer facing feature
Search may be 10% of your software but what if your software is a managed email provider (or really anything) and you're pretty much exposing Elasticsearch directly through a minimal interface?
It feels like Elastic got burnt by the license change: their stock is down 40% since they announced it, and they are starting to realize that being open source is important. I don't think AWS would abandon the fork given the amount of effort they put in; they cannot walk back and re-brand their products.
It's sad to see Elastic switching sides for their own benefit, and as a contributor I feel betrayed. OpenSearch, on the flip side, is more contributor-friendly.
I honestly feel all energies should be focused on one product to make it better instead of walking in different paths. Amazon has already taken that path and I don't think they will ever walk back, unlike Elastic.
My understanding (after talking to several market analysts) is that OpenSearch is focused on APM/monitoring/log-aggregation, while Elasticsearch has an edge on pure search engine functionality and now AI.
That's because the license change by Elastic impacted not only Amazon, who could not provide Elasticsearch as a service anymore through its administrative consoles, but also all those vendors who were building APM/monitoring/log-aggregation solutions as-a-service on top of Elasticsearch. In fact, such vendors would typically use Elasticsearch as a back-end behind some custom UI.
So those vendors teamed up with AWS to develop OpenSearch.
Now last time I checked the commit history of the two projects, Elasticsearch had 3x more commits and many of them on cool new stuff, while OpenSearch focus seems to have remained on APM/log aggregation.
As someone who needs an actual "search engine", I am glad of the change, as I was worried OpenSearch may not be a viable open source alternative as it could be lagging behind in this domain.
Now I need to check what happens with the clients: will the client remain Apache License or will they change to AGPL? The latter would be a problem for closed source software.
My understanding (after talking to several market analysts) is that OpenSearch is focused on APM/monitoring/log-aggregation, while Elasticsearch has an edge on pure search engine functionality and now AI.
Not in my experience. AWS is fully behind using OpenSearch as a search engine. For AI, it's hard to see how Elastic can compete with AWS, given its vast resources and deployed products.
I have been using OpenSearch as a core component of the data plane for my customers, specifically and exclusively for its:
* Search functions;
* Data ingestion and transformation pipelines; and
* k-NN approximate and radial similarity search functionality as a vector database (with text embeddings for vector indices provided by another managed service).
The current tranche of work is focusing on moving all of the above into OpenSearch serverless collections.
I do not have the APM/monitoring use case anywhere near my vicinity; alarms and monitoring get triggered by / send metrics into CloudWatch.
I think your comment about the dip in their stock price is fairly misleading because it lacks the context of the market sector overall. When Elastic announced the change in January 2021, ESTC was around $150, and by November 2021 it was in the $180s, so the change very much was not responsible for crashing their stock - the market was. Their entire industry sector was pounded heading into 2022 and has never recovered. For example, Datadog crashed from ~$190 to ~$80 over the course of 2022.
It’s impossible to know what would have happened if they had continued on their existing path. My guess is Amazon would have eaten their lunch and they’d be in a similar situation.
It's worth mentioning that this is true -- to an extent. Under ELv2, if the vendor goes under, you can self-host, but you will eventually lose access to any features protected by a license key if/when that license expires, since said vendor can no longer renew said license.
This was one of the main drivers for me writing the FCL [0], which undergoes DOSP [1], even for the protected features.
You also can't pay someone else to host it for you. Nor will the community be able to fork it and support development by paying one or more community members to host it for them.
At least with DOSP, eventually the community will be able to do those things.
The Elastic License prohibits you from moving, changing, or disabling some of the software's functionality. It's a limited compromise, and I understand why it's necessary to achieve their business objective, but it's pretty straightforwardly not compatible with the open source ideal.
Imagine what the web would be like if React users weren't allowed to compete with Meta.
I find what seems to be the prevailing opinion of people here (and in similar places) of passionate opposition to these kinds of licenses to be very mystifying.
It seems to me like they hit a pretty good spot on the continuum of trade-offs here.
I might add one, which is related to your third bullet point, but which I avail myself of far more often:
* If I'm confused by how something seems to work, I can read the implementation.
For the most part I don't think people are against shared source or closed software existing, being sold, being marketed, etc. There are really only two things people viscerally don't like:
- Marketing a project that isn't open source as open source. Debate about what the "definition" is or why it matters all you want; taking a term and using it in a way that contradicts the vast majority of domain experts is bullshit.
- Taking an open source project, which people adopted on the basis that it was open source, which people contributed issues and pull requests to on the basis that it was open source, which people evangelized and promoted because it was open source, blogged about, built on, and so forth because it was open source... and moving it to a license that isn't open source.
To be clear: yes, the unforced error here in many cases is accepting a CLA. That said, I think it's not even unreasonable that people initially accepted CLAs: many of them presumably believed they would only ever be used in good faith, as a sort-of CYA. But CLAs are now very commonplace, so refusing to contribute to any project with a CLA requirement is hard.
If nobody cared about the benefits of open source, then it would be easier for companies to just start with a closed or shared source offering and call it a day; not much backlash for not changing a license. Clearly, marketing something as open source helps... but once you've gotten what you need out of it, it's easy enough to click a button and change it back to being closed.
In my opinion the big advantage of open source is that everyone is on a level playing field. This isn't "fair", it's balanced, and that matters if you are serious about long-term software. If shared-source software is discontinued, that's probably the end of the road for it. For open source software, it only depends on if there are big enough stakeholders to keep funding development; it never has to stop.
There's ideas like BUSL, which might work better... but it's still awkward and experimental. I don't put much stock into any of the other "shared sorta-like-open source" licenses, they're mostly bullshit and sometimes catastrophically horrible, i.e. much worse than AGPL.
BUSL is the worst of both worlds imho. People are not willing to send patches or contribute to something that is not yet open source, and the vendor thus does not get any benefit of having openly readable sources.
I would rather a software product be eventually open source than use a never-open-source license, but I still try not to use it if I can choose open source. And I refuse to sign CLAs that require giving more rights than the license grants to me, and won't sponsor projects that require them (with some limited, carefully considered exceptions for well-established open source foundations that require CLAs but have sufficient governance to add trust).
Out of curiosity (since I'm pursuing an AGPL/proprietary dual-license), how would you consider a CLA that explicitly tied my right to sell the proprietary license to releasing under the AGPL?
> Smolblog shall be entitled to make Your Contributions available under a proprietary license provided Smolblog also makes Your Contributions available to the public under the terms of the GNU Affero General Public License version 3 or later.
That gives you more rights than it gives me. I was always free to release my patch under the AGPL, why would I need you to do it? (well, if you do it I wouldn't have to maintain a fork, which is something I will admit).
It would allow you to maintain a proprietary product with proprietary features that you don't release under the AGPL and use my code within that product.
I like reciprocal licenses: if I get code from you under the MIT license, I will give you code back under the MIT license (which you can use however you want under that license, just like I can). On the other hand, if you give me your code under the AGPLv3, I give you back code under the AGPLv3 (and you can take it or leave it, so long as if you take it, it is under the terms of the AGPLv3 license).
At least, that is my idealist stance. But in reality, practicality sometimes takes precedence, so I might make a minor bugfix or something. But then I have all the trouble of reading the CLA, making sure I understand it, and agreeing to it, so practicality may just as likely lead me to just file an issue instead and patch my own copy.
> It would allow you to maintain a proprietary product with proprietary features that you don't release under the AGPL and use my code within that product.
As much as I can say "everything in my version is AGPL; this is just for _other_ companies" I don't know that there's a way to _legally_ guarantee it that wouldn't be easily circumventable, at least not without rendering the idea useless in one way or another.
So yeah, thanks for the insight, I really appreciate it!
Yeah, I thought about that, and unless you form a nonprofit with explicit governance requiring the release to all code, and the CLA is to the nonprofit, it would be difficult to guarantee. Even the nonprofit route isn't a guarantee, which is why I would evaluate each organization separately for their history and governance. It would likely take some time for a new organization to develop the reputation.
I think you make good points here, but it's also annoying that the words "open source" are defined to mean something a lot more specifically detailed than what the words themselves intuitively mean.
For instance, your post calls things "shared source", which, to me, is a lot less clear of a description for the projects you're describing that way. ("Shared" how? Shared ownership? Or what?)
I think "source available" is intuitive and fine (and better than "shared source"), but to me it's still a bit weirder. To me, it sounds like if you send the company an email, they might send you back a zip file with a bunch of source code. But most of these "source available" projects operate just like any other open source project.
But I'm also not unsympathetic to your arguments here at all.
"Shared source" comes from Microsoft's initiative, back in Ballmer's days when they were attacking Linux with FUD campaigns and patent threats (which continued well after their "Microsoft changed" marketing campaign).
The software industry called their initiative for what it is. Whether it's "shared source" or "source available", it's a poisoned gift. In the case of Microsoft's shared sources, this was because it was opening up readers of that source to the possibility of patents lawsuits. I remember for instance that Microsoft was making more money from Android, by threatening phone makers with patents, than they did from Windows Mobile.
> I think you make good points here, but it's also annoying that the words "open source" are defined to mean something a lot more specifically detailed than what the words themselves intuitively mean.
I have flipped and flopped back and forth on this, but nowadays I think it is worth reconsidering. I think the term "open source" is probably fine and it would be better to actually just double down on it. I'm not sure it could be much better than it is.
What you are saying is largely true: open source is defined to mean much more than what the two-word phrase actually implies intuitively. Fair point, and a common point of contention.
However, that's actually true of lots of domain-specific jargon in general. After all, language doesn't always have a succinct way to intuitively define specific concepts. It evolved naturally over time, largely out of the necessity to communicate effectively. Every language has blind spots, as well as oddly specific terms you wouldn't expect, like the perennially-cited Japanese term 「青木まりこ現象」 (Aoki Mariko genshō) for the urge to defecate shortly after entering a bookstore.
When it comes to domain-specific terms, I think we have to accept that there will sometimes be things where the layperson simply cannot intuitively understand the jargon no matter how it's phrased. There certainly aren't two words that can accurately explain what it means for something to be "open source" or "free software" according to the champions of said phrases. I mean, take, for example, how many words the Open Source Initiative has to spend on accurately defining it themselves[1]. Certainly it could be more terse, but no matter how you shake it, there's just a lot of detail there.
So what happens is that jargon gets invented where if you know, you know. Sometimes jargon is just bullshit that could be replaced with much more obvious English, but I think often it really is just a lot of domain-specific stuff that can't be described sufficiently with short, simple phrases, so it winds up being bundled into less specific phrases. Does everyone really know what an "operating system" is? I'm not even sure if many computer scientists will agree on a definition for it. Yet, most people agree on which things are and are not operating systems somehow, and it remains an immensely useful term to describe a class of software that virtually everyone, including laypeople, often have a need to describe.
In that regard, I think "open-source software" is about as good as it possibly could be. As far as I could find when researching the topic, it was essentially a completely unused phrase before it was coined, and the people who coined it were very deliberate about giving it a very specific definition and tying it to a very specific movement; and most importantly, they defined rigorously what it was not, which wound up being very important.
I mean, we could call it something else, to be fair, like "free/libre and open-source software" or what have you, but the issue is that open-source is so well-known that it's somewhat understood by people with very little domain knowledge in software. I think the term open source has "stuck". It is true that not everyone really grasps what it means, but I think a lot of people, even if they couldn't define exactly what it means, sort of "get it" anyways. I think that many people who are not software developers have an intuitive understanding for the mutually beneficial nature of open-source software. Don't get me wrong, it's very clear that many people also do not: those people make themselves known in many ways, like being abusive on GitHub issue trackers.
I don't think we can get many more people to understand what open-source software actually is, at least not by force, so I think the better play is to defend the term we have. It's also totally fine, of course, if people want to use "expanded" terms like, again, "free/libre and open-source software", just to make it completely clear what they mean, but I suspect it's just too long and cumbersome to ever catch on the way the term open source itself has, and letting that term get diluted is a loss that will lead to confusion and manipulative behavior.
> For instance, your post calls things "shared source", which, to me, is a lot less clear of a description for the projects you're describing that way. ("Shared" how? Shared ownership? Or what?)
> I think "source available" is intuitive and fine (and better than "shared source"), but to me it's still a bit weirder. To me, it sounds like if you send the company an email, they might send you back a zip file with a bunch of source code. But most of these "source available" projects operate just like any other open source project.
To be honest, I only really use "shared source" because it feels like an analog to "open source". I have no particularly strong attachment to it and would be happy to call it "source available" or anything else. I do have roughly the same feelings though. "Source available" would be a strictly better term overall but I think this all suffers from the same problem that "open source" does: boiling a concept like this down to two words will never be perfect.
> The AGPL requires you to publish all of your source code if you make any changes to the product; the Elastic License just says, "don't use our code to make a direct competitor to Elasticsearch". I find the former to be much more restrictive in most practical ways since the majority of companies don't want to open source their code, but very few of them plan to sell hosted search.
There are two ways that this doesn't seem right to me, though it hinges on the vague term "interacting" and how it's interpreted.
Suppose I use Elasticsearch to power website search on my company's website -- maybe something like a customer support knowledge base of a bunch of FAQs and support articles, and I make some modifications to Elasticsearch to better fit my requirements. My website makes calls to an Elasticsearch service to provide search results.
1. Based on my interpretation of the AGPL, visitors to my site who make searches are not remotely interacting with the Elasticsearch software that I am running; they are not sending requests directly to the Elasticsearch software, and thus they have no rights to its source code under the AGPL. (I'm not suggesting that a proxy server that passes on requests and responses unmodified would be the same situation.)
2. If they do in fact have rights to the source code, it is only to the modified version of Elasticsearch, not "all my source code" (which could include the web server software itself).
> Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. https://www.gnu.org/licenses/agpl-3.0.en.html#section13
> In AGPLv3, what counts as “interacting with [the software] remotely through a computer network?” If the program is expressly designed to accept user requests and send responses over a network, then it meets these criteria. https://www.gnu.org/licenses/gpl-faq.html#AGPLv3InteractingR...
The AGPL is more restrictive than the nonfree license because the AGPL is also a nonfree license.
I await the day that the industry corrects itself and stops calling the AGPL open source/free software. It isn’t. It is very obviously a EULA, despite what the anticapitalist zealots at the FSF wish to claim.
It's really not, because the end users of your service (whoever consumes it) do not care about the AGPL and CAN close-source their code.
If I call an AGPL service I can do that from a proprietary application. What I can't do is publish an AGPL service, modify that service, and then hide the modifications. So it works just like the GPL, except the triggering act is publishing an internet-available service instead of distributing the software.
Companies are super scared of AGPL but that's just because they're scaredy cats (sorry, "risk averse"). But no, you're free to publish an AGPL service and you can even monetize it, if you want. You're also free on the client-side to do whatever and have whatever license you want for your code.
But it restricts your ability to use a commodity product based on Elastic, provided by a third party who will compete on price or bundle it with other cloud services.
The company that 'owns' Redis or Elastic also does not need to develop the software it is selling. It already has it, having received it for free, on a non-commercial basis, since its creation.
Without competition, they are free to charge any rent they like for it.
If you think that the person that originally wrote Redis or Elastic should have an exclusive license to charge people to use that software, that's a totally valid opinion and a totally valid licensing/business model. However, it has nothing to do with open source software.
BSL and GPL code will probably never mix, since the licenses prohibit each other. This creates friction in the GPL world and tends to produce incidents like this [1] out of thin air.
Sure, but the issue that you link is different. The "problem" there is that Debian (and many others) only distribute software that complies with the open source definition of freedom, which Crockford's license and the BSL both run afoul of, as they discriminate against uses. So, this is about what some are willing to distribute, not license compatibility.
I went from 500Mbps download/upload in my old condo to 50/20Mbps download/upload in my house. There are two noticeable effects: it takes slightly longer to download movies the night before I go on flights or long car trips, and it takes significantly longer to push updates to large docker containers to the cloud. Everything else is more or less identical for me.
Now, maybe there'd be some novel use case that would come up if everyone (or let's say 80%+) had 2gbps internet, but it's hard to imagine that that's the big constraint for much. Maybe something like virtual/augmented reality could do more heavy processing in the cloud in that case (assuming low enough latency)?
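To put rough numbers on that difference, here is a back-of-the-envelope sketch (the file sizes are hypothetical examples, and this ignores protocol overhead and uses decimal gigabytes):

```python
def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link_mbps link.

    Uses decimal units (1 GB = 8,000 megabits) and ignores overhead.
    """
    return size_gb * 8_000 / link_mbps

# A hypothetical 5 GB movie download:
print(transfer_seconds(5, 500))  # 80.0 seconds on a 500Mbps line
print(transfer_seconds(5, 50))   # 800.0 seconds (~13 min) at 50Mbps

# A hypothetical 2 GB docker image pushed over the upload link:
print(transfer_seconds(2, 20))   # 800.0 seconds at 20Mbps up
```

So the slower line turns a minute-and-change wait into a "go make coffee" wait, which matches the "slightly longer, but only for big transfers" experience above.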
Chrome is a complicated case in general because Google poured money into promoting Chrome and had some of the most popular sites on the web promoting it heavily and actively sabotaged Firefox at several key points. I respect a lot of the Chrome team’s early work but it’s very hard for me to see that as a story about fair competition alone.
Not just advertising: Mozilla could not have put a “better in Firefox” button on Gmail or YouTube at any price, or forced Google to follow through on their promise around H.264, etc.
Google also tried to push PC OEMs to pre-install Chrome on their new PCs when Chrome was new.[1] Sony/VAIO is the only manufacturer known to have taken the bait.[2]
Specifically, Google was leveraging their existence as "THE web" to push their web browser. Every single Google property aggressively displayed banners, reminders, and nag prompts assuring you that "Gmail is best in Chrome" and other nonsense that was "just one click here" to fix.
Yes, putting a single button with vague words in front of users almost always gets a lot of clicks, which we've known for decades, and it turns out that if you have the attention of nearly the entire web-browsing world, you can put that button in front of people's faces way more than your competitors can. It amounted to billions of dollars of free advertising for Chrome that should have been assessed against them somehow.
It's blatantly unfair and should have been shut down in literally days, but nooooooooo we aren't allowed to have regulation here in the states.
Yeah, this is the thing that gets me. Chrome is the (rare) exception when we're talking about defaults generally winning, not the rule.
An interesting thought experiment might be to imagine if Chrome was actually somehow the default browser on Windows and/or macOS. I think we could expect Edge's and/or Safari's market share numbers to be much lower than they are now if that were the case.
Very strange statement to make given a large mobile phone operating system (Android) has Chrome as the default browser. Also the default in some Linux distros such as the Raspberry Pi OS. And many PC builders bundle Chrome with their usual crapware. Other posters have also pointed out Google's own Chromebooks use Chrome by default as well. Quite a significant base especially among people who don't have the money to buy into the Apple ecosystem.
+1 on NAT Gateway. For those unaware, you need to set up a NAT gateway for your tools inside of a VPC to access the internet. I forget the pricing, but it's way more expensive than it should be and it's a huge pain to set up. This is a service that is annoyingly expensive for hobbyists/indie-devs/people just playing around, but a rounding error for AWS's "real" customers. Just build it into VPC (a checkbox that says "I would like to be able to access the internet from my code in the VPC") and make it free, or at least give it upfront pricing.
> you need to setup a NAT gateway for your tools inside of a VPC to access the internet
You do, if your stuff is in a private subnet. If you are just "playing around" however, you have options:
a) Spin up your resources in a public subnet and give them a public IP (be very careful about your security group rules if you do this)
b) Create your own NAT gateway EC2 instance (can be way less expensive than a NAT GW, since tiny instance sizes can forward a lot of traffic). It's almost trivial to do: disable the source/dest check, enable IPv4 forwarding, configure routes.
c) IPv6 :) Depending on what your destination is (+ an egress-only IGW)
I wouldn't recommend either (a) or (b) for a large production environment, but small deployments will do fine. You can't escape network egress charges though.
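For the curious, the steps in option (b) map to two EC2 API calls plus some in-instance setup. A minimal sketch with hypothetical IDs; the function just builds the call parameters (actually issuing them would need boto3 and AWS credentials):

```python
def nat_instance_calls(instance_id: str, private_route_table_id: str):
    """Return the (method, kwargs) pairs for the two EC2 API calls
    that turn a plain instance into a NAT box for a private subnet."""
    return [
        # A NAT box forwards traffic not addressed to itself, so the
        # source/dest check has to come off first.
        ("modify_instance_attribute",
         {"InstanceId": instance_id,
          "SourceDestCheck": {"Value": False}}),
        # Point the private subnet's default route at the instance.
        ("create_route",
         {"RouteTableId": private_route_table_id,
          "DestinationCidrBlock": "0.0.0.0/0",
          "InstanceId": instance_id}),
    ]

# Inside the instance itself you still need forwarding and
# masquerading, e.g. via user data:
#   sysctl -w net.ipv4.ip_forward=1
#   iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

calls = nat_instance_calls("i-0abc123", "rtb-0abc123")
```

That's the whole trick; the "long list" for production is the HA story (health checks, failover, route flipping), not the NAT itself.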
You could do (a), but in addition to the security issues you now have to pay for public IPv4 addresses on AWS too, so if you have a significant number of services that are private but need internet access, it is still cheaper than a NAT gateway, but just barely.
I've done (b) before for dev environments and it works well. For production there is a long list of things you need to do to make it highly available.
Which brings up one of the travesties of NAT Gateway: if you have a dev environment (or more than one) plus staging, and you want them to match prod, you're all of a sudden stuck paying for multiple NAT gateways.
> if you have a significant number of services that are private but need internet access it is still cheaper than NAT gateway but just barely.
Also depends on the volume of traffic we are talking about. NAT GW is $0.045/h even if doing nothing, plus $0.045/GB, plus egress. An IP is $0.005/h without any extra costs other than the standard egress.
> For production there is a large list to make it high availability.
Yes! Which is why I wouldn't do it in production unless your org and team structure can deal with it. The problem is solvable technically (and that's how we used to do things before the service existed), but the people problem is trickier: this kind of infrastructure runs a high chance of getting neglected and mostly forgotten until it causes an outage. Outages (often due to instance 'maintenance') caused us to migrate away from using our own NAT. If they cause you to lose money, or to spend a bunch of engineer hours, there goes your savings.
AWS NAT Gateway is pretty reliable in comparison and you mostly forget it exists. The problem is just cost: you pay per hour, and you pay a per-GB processing charge on top of the usual egress charges. So AWS is double dipping there.
I wish AWS had the same underlying VM tech as Google. GCP can migrate systems to another hypervisor without start/stop and without even dropping network connections. Unless the underlying hypervisor dies with no warning, having the ability to keep your connections up would avoid some people getting paged, even if HA kicks in.
> NAT GW is $0.045/h even if doing nothing, plus $0.045/GB, plus egress. IP is $0.005 without any extra costs other than the standard
That's only about 10 servers' worth of public IPs. I sometimes forget they charge per GB too. That particular charge rarely affects me, but if your private services need a lot of data, that can certainly add up.
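Putting those figures together (using the per-hour and per-GB prices quoted above; real prices vary by region, and the NAT per-GB processing fee comes on top of normal egress):

```python
NAT_HOURLY = 0.045   # $/h for the gateway, even when idle
NAT_PER_GB = 0.045   # $/GB processed through it
IP_HOURLY = 0.005    # $/h per public IPv4 address
HOURS = 730          # roughly one month

def nat_monthly(gb_processed: float) -> float:
    """Monthly cost of one NAT gateway, excluding normal egress."""
    return NAT_HOURLY * HOURS + NAT_PER_GB * gb_processed

def public_ip_monthly(n_ips: int) -> float:
    """Monthly cost of n public IPv4 addresses."""
    return IP_HOURLY * HOURS * n_ips

# An idle NAT gateway already costs as much per month as nine
# public IPs, before a single byte is processed:
print(round(nat_monthly(0), 2))        # 32.85
print(round(public_ip_monthly(9), 2))  # 32.85
```

So the break-even really is around nine or ten public IPs, and any actual traffic only tilts the comparison further against the NAT gateway.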
To expand on that, additionally, if you are running your own NAT you need to have one instance per AZ or you end up with cross-subnet transfer costs. So that's at least one cost that you save with NAT gateway (though moot if you run all your services in the public subnets)
AWS policy on NAT Gateways is so stupid that people came up with a d) option - alterNAT[0] that is basically b) but turns on the real NAT GW if b) fails giving you the best of both worlds: lower cost and better reliability than a NAT instance.
AWS just isn’t for hobbyists. You have to deal with the complexities of it because the real target customers want and need these things. There are plenty of other cloud services appropriate to your scale. It’s frustrating because you’re using the wrong tool for the job.
It's not the complexity (IMO), it's the cost. A hobbyist can easily set up a NAT gateway, but very often the NAT gateway is the most expensive part of the entire cloud bill. So the hobbyist is left with paying it or exposing their server to the public internet. It is very expensive for what should be a built-in part of VPCs.
Heck, even if you're not a hobbyist: I've worked with companies that have dev environments that mirror production (except with smaller instance sizes), and now all of a sudden you have a ton of NAT gateways eating money for providing a basic networking service.
It is, but you need to consider cost first, not walk in with your existing assumptions about how to build stuff.
To be fair, large corporations probably should develop that mentality rather than shovelling vast amounts of cash into the problem and hoping it will go away one day (Hint: it doesn't).
Come on. People in Luxembourg average 5.31 cups of coffee per day? Yes, my experience is anecdotal, but I'm not even sure that the self-described "coffee addicts" who I know are averaging that much, let alone an entire population.
The vast majority of Jews made significant geographic moves in the 20th century, either from Europe/Russia or the Arab world. Most of them went to North America or Israel.
I'm in the same boat. Want a small phone with premium features. My hope is that the next generation of iphone SE is at least on par with the 13 mini that I currently have. Otherwise, not sure what I'll do when my mini dies since the current SE is a pretty big step down from the 13 mini.
And even when the case is a sure loser they'll often leverage that into a plea deal too, especially if the defendant is being held in pretrial detention.
Word to the wise... as of Python 3.7, the regular dictionary data structure guarantees order. Declaring an OrderedDict can still be worthwhile for readability (to let code reviewers/maintainers know that order is important) but I don't know of any other reason to use it anymore.
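A quick illustration of what a plain dict now guarantees and where OrderedDict still behaves differently, using a hypothetical pair of HTTP headers:

```python
from collections import OrderedDict

# Since Python 3.7, plain dicts preserve insertion order as a
# language guarantee, so both of these iterate in the same order:
plain = {"Host": "example.com", "Accept": "*/*"}
ordered = OrderedDict([("Host", "example.com"), ("Accept", "*/*")])
assert list(plain) == list(ordered)

# One behavioral difference that remains: OrderedDict equality is
# order-sensitive, while dict equality is not.
assert {"a": 1, "b": 2} == {"b": 2, "a": 1}
assert OrderedDict(a=1, b=2) != OrderedDict(b=2, a=1)

# OrderedDict also has move_to_end(), which plain dicts lack.
od = OrderedDict(a=1, b=2, c=3)
od.move_to_end("a")
assert list(od) == ["b", "c", "a"]
```

So beyond signaling intent, the order-sensitive equality and reordering methods are about the only remaining reasons to reach for OrderedDict.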