Just wanted to say that this doesn't impact OpenTF too much. It's an extra step we need to take before a stable release, but long-term it'll make us more decoupled, which is great.
As someone else commented, all providers and modules other than HashiCorp's are hosted on GitHub and the registry is just a "redirector". We'll do something similar, apart from some special handling for HashiCorp's providers.
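If you want to see the "redirector" behavior for yourself, the provider protocol is easy to poke at with curl. A minimal sketch -- the provider and version here are just examples:

$ # list the available versions for a provider
$ curl -s https://registry.terraform.io/v1/providers/cloudflare/cloudflare/versions
$ # resolve one specific build; for community providers the download_url
$ # in the response points at a GitHub release asset, not at the registry
$ curl -s https://registry.terraform.io/v1/providers/cloudflare/cloudflare/4.10.0/download/linux/amd64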
Also, I know you want us to finally publish the repo - we're working very hard to make this happen and it should be a matter of days now.
Disclaimer: I work at Spacelift, and am currently the temporary Technical Lead of the OpenTF Project, until it's committee-steered.
I sure hope that OpenTF doesn't just cause fragmentation in the ecosystem that ruins the user experience, especially for the many people who are happily using unpaid TF and self-hosted state stores.
That's the comment that made the issue clear -- specifically, the TOS for https://registry.terraform.io were amended to state:
> You may download providers, modules, policy libraries and/or other Services or Content from this website __solely for use with, or in support of, HashiCorp Terraform.__
I.e., it looks like the intent is "You can't use OpenTF with registry.terraform.io".
IMO, that feels a little petty. But, I guess if OpenTF is taking a position of "Use us instead of Terraform", then they shouldn't expect to get the usage of HashiCorp's infra.
> But, I guess if OpenTF is taking a position of "Use us instead of Terraform", then they shouldn't expect to get the usage of HashiCorp's infra.
It’s totally something they can do, but it seems short-sighted. They had to know that this wouldn’t actually stop the momentum around OpenTF, but just result in HashiCorp giving up the control they have over the canonical namespace.
As precedent, Docker allows Kubernetes and Podman to access its registry, for example.
If you look at it from the perspective of a proprietary software company (which is what HashiCorp is now), it's totally expected and understandable. Why should they spend money supporting extra load on their infrastructure from people that are not directly paying for that? It makes perfect sense.
Is it short-sighted? If you think they will lose market share from now on, yes. But HashiCorp probably thinks it won't, or it wouldn't have made the change in the first place. For them, it's all the way up from here.
> Why should they spend money supporting extra load on their infrastructure from people that are not directly paying for that?
The same logic applies to users: at this point, I have to assume that the only reason HashiCorp is providing anything to people who aren't actively giving them money is to try to get money from them later. This is also one of the reasons I'm abandoning them ASAP; now that the money squeeze has started, it's not like they're going to stop.
From my point of view, controlling the canonical namespace is a form of soft power in the ecosystem. Since as the post says, the actual files are hosted on GitHub, the cost of the extra load on their infrastructure is real but probably not material to a company of their size -- as I understand it, it's more like running a DNS service than a file host.
So it _seems_ (the word I used above as well) from my perspective that they’re giving up a bunch of soft power for little gain, but it's very possible that I'm either wrong about the value of the soft power, or wrong about the cost of running the infra.
I published a Terraform provider while working at a startup, back when the registry was in beta. They do some "value-adds" like code signing and such, but you're right: it can functionally be replaced by the GitHub releases page.
Well, in practice DockerHub isn't usable for many Kubernetes places anymore due to rate limiting.
Anybody should, as is sensible in any case, at minimum be mirroring into a local registry. It's probably even better to invest in the capability to build all the images you rely on yourself, so you can apply security patches etc.
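A minimal sketch of the mirroring approach, assuming a throwaway local registry on localhost:5000 (image and tag chosen arbitrarily):

$ # run a local registry and copy an upstream image into it
$ docker run -d -p 5000:5000 --name mirror registry:2
$ docker pull golang:1.21
$ docker tag golang:1.21 localhost:5000/library/golang:1.21
$ docker push localhost:5000/library/golang:1.21

Then point your manifests (or a containerd mirror config) at localhost:5000 instead of docker.io.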
I haven't tried every single one, but the AWS public ECR has a bunch of the "library" images from Docker Hub (e.g. public.ecr.aws/docker/library/golang:1.21 ) and as the "public" part implies, no creds required to access it
They do have a search interface, if one wanted to look for their favorite: https://gallery.ecr.aws/
We switched many images to use the ecr.aws registry because we were getting rate-limited on dockerhub. Our k8s cluster was on EKS and it worked out very well.
You probably need mirroring anyway; lots of stuff is missing from public ECR. And I haven't verified whether the images on public ECR are legit, or at least the same as their Docker Hub counterparts.
Companies want the benefits of open source (massive community contributions, lower development costs, better security, community exposure, marketshare) but don't like the downsides (from their perspective: forks, lack of control, freeloaders etc).
TOS changes like this are fine, it's their garden, but ultimately all they'll do is put even more power behind forks and alternatives: the people who were motivated to use OpenTF are now also going to offer an alternative registry, which means further loss of control and community contact, and shifts attention away from HashiCorp's offering. It's very difficult to pull off this sort of dual stance without each move you make hurting some aspect of your operation. You need to think this through very well from day one, because any change later on can easily be perceived as the beginning of an attempt to squeeze the installed base, and once that is the impression, you will lose users rapidly.
I think this damages all commercial OSS projects. Because after Docker and Terraform, any effort to bring in some OSS project as a dependency will be met with "Sure, it's open source now, but what about when...".
IMO it shouldn't be much different than "sure, it's open source now, but what if it stops being maintained?"
Because open-source code never stops being open-source, it just stops getting open-source updates. Closed-source in some situations can prevent you from using existing software; you also have other closed-source issues like bugs being unfixable.
Would you rather use a service that worst-case, you have to patch and maintain yourselves? Or a service that worst-case you have to switch to an entirely different service?
That's true, but for a lot of companies having an entity behind a piece of software will definitely be a major factor in their decision-making process, so these kinds of tricks ultimately benefit closed-source offerings. And that's quite annoying. The idea behind open source is very solid, but the assumption that any user of a large package is ultimately able to continue to maintain it is faulty, which means that for a lot of parties the benefits of open source only materialize if someone else is willing to take up the mantle; they are simply not able, staffed, or wealthy enough to do this themselves.
So every time someone does a rug-pull all FOSS projects suffer.
> Would you rather use a service that worst-case, you have to patch and maintain yourselves?
For most smaller non-tech companies, having to patch and maintain a complex piece of open source code yourself means, for all practical purposes, that it might as well be abandoned closed-source code.
Folks are wise to question whether they should use OSS that has a CLA assigning copyright, because it means they should have zero expectation that the license won't change.
I could fork any MIT, BSD, or Apache-2-licensed project without a CLA and start publishing new versions under HashiCorp's BSL tomorrow.
HashiCorp's CLA doesn't assign copyright. It just licenses it. Hence CLA---contributor license agreement. Contributors keep their rights to use and license their work otherwise.
Seeing legal FUD via throwaway account here doesn't make me happy for HN.
Projects without a CLA can't be relicensed unless every contributor to the project agrees to do so. So if a project without a CLA has many contributors to it, you do have some reasonable assurance that its license cannot change (legally), because it would be impractical to do so.
You can’t change the license of the original code. But you can say “all changes from now on are licensed under the BSL”, with no need to physically separate the BSL and non-BSL parts. The result is a codebase where users have to comply with both the original license and the BSL. For most permissive licenses, that’s basically the same thing as a pure BSL codebase, except that typically the original license notice needs to be preserved in addition to the BSL one.
This isn’t the case if the original license is the GPL, because the GPL requires derivative works to also be licensed under the GPL.
I suspect you're conflating two meanings of "relicense" here. That's understandable: we've badly overloaded the term.
For the kind of "relicensing" relevant to HashiCorp and similar, the question is "What will be the license terms for new work going forward?"
Unless authors are claiming to revoke the license under which they previously released code---neither Hashi, nor Elastic, nor Mongo, nor the Commons Clause companies I saw ever claimed to do this---those old license grants do not go away. Hashi Terraform releases from before the announcement are still available under MPLv2, and likely always will be. There is no backwards-looking, retroactive change.
When I fork, say, an Apache-2-licensed project, add my own work, and release under Hashi's BSL, the original, Apache-2-licensed code doesn't cease to be Apache-2-licensed. However, Apache 2 license terms don't apply to my new work unless I say they do. If I choose BSL instead, users of my new releases---old Apache-2 work plus my new Hashi-BSL-licensed work---have to comply with both Apache 2 and the BSL. They can toss my new work and just comply with Apache 2, but they can't have the whole package with my new work without my terms.
The situation's akin to using an Apache-licensed library or copying in Apache-licensed code snippets. All of this is possible because Apache 2, a "permissive" license, doesn't restrict how I can license new work.
The "relicensing" that requires getting every copyright owner's agreement is giving license grants under new or different terms for old releases. Say we're trying to "relicense" a project "from GPLv2 to GPLv3". If we get all copyright holders to sign off, the old releases essentially become "dual licensed"---another overloaded term that here means effectively "available under the user's choice of two or more licenses", specifically GPLv2 or GPLv3. Once copyright holders in existing work have agreed to make that work available under GPLv3, as well, future developers can license further work under GPLv3, too, effectively choosing to comply with the new GPLv3 grant for the old code, rather than the old GPLv2 grant.
We tend to see CLAs less often in permissive-licensed projects. There are various reasons for that, one of which is that permissive terms are by nature less complicated, more stable over time, and contend with fewer "license compatibility" issues. But we still see CLAs in some permissively licensed projects that never anticipate making grants under new terms for old releases, because the project stewards want to make sure people have the legal rights to license copyright in the contributions they offer, and they want documentation to back that up if there's a dispute. Typical CLA forms also address that problem.
So "uses CLA" is only a poor proxy for "stewarded by a company that may not choose to make its new work available under the old license forever". Developers can certainly choose to develop patches to these projects and refuse to give the steward a contributor license agreement. But the steward isn't obligated to accept or maintain those patches. They may very well refuse to do so without the flexibility the CLA provides.
When the public license for the project is copyleft, commercial-company project stewards are highly unlikely to give up licensing flexibility, agree to comply with the new contributor's copyleft license, and "lock the project open" just for some new patch, even if it's quality work. Depending on the copyleft license, that work may have to be licensed under the same copyleft terms, rendering it "compatible". But it simply won't be merged.
This happens, but in my experience, pretty rarely, and with little lasting effect. Outside developers usually aren't interested in spending all the time developing patches to other people's projects in the first place if their work won't be merged to mainline and kept up by the maintainers driving development.
Even for tech companies, the problem becomes that's yet one more thing to maintain. I'm perfectly capable of maintaining a fork if I need to, but that doesn't mean I have the time and energy to do so.
Are you capable of maintaining a fork no matter the language or level of complexity? Are you going to maintain your own fork of Chromium? WebKit? MySQL?
But if you don’t have the time or energy, you don’t have the capability.
I might as well say I have the capability of flying my own private plane if I had the money to buy the plane and the time to learn how to fly it.
Obviously yes, the more complex the more difficult. And if it's a technology you don't have experience with it's going to be extremely difficult to impossible. But that's not what I'm talking about.
I'll give a clearer example taken from my actual job:
At my job, we needed to use a NodeJS library to accomplish something. We found one, but it eventually fell out of maintenance. We discovered a bug that was blocking us from continuing to use it, so we had the choice: fork it or find an alternative. We chose to find an alternative because even though we had the technical capability and understanding to fork the library and fix the bug, we realized that would be one more piece of infrastructure that would take time and energy from our team which are a finite supply.
It's more like an airline pilot saying "I wish I had more time and energy to fly a plane for fun, and not just for work." The ability is there, just not the bandwidth.
Not really. If you need it fixed or extended and you can't do it yourself, you can always hire someone to do it for you. With open source, you have options.
I think it shows the huge benefit of OSS. If TF had been closed from the start, users would have no recourse at all. Instead, there is a viable path forward.
That viability hinges on people actually doing it and the evidence that this is happening at scale just doesn't exist. There are some forks of major packages out there that have been extended but that's because the company had entirely different ideas about the direction in which to take a package. But for the vast majority of FOSS out there the current maintainers are the ones that are capable of doing so and the entities using the package are not able, do not have the funds or the time or people to do so themselves (and more likely a combination of those).
The two specific cases cited in the post to which I responded were Docker and Terraform.
I think there is a great deal of evidence that people are using alternatives at scale.
Let's take a look at some other examples: OpenOffice, Node, Hudson.
Of course, there are plenty of cases where forks have not thrived. But that is not really the question. The question is, can you show me a single closed-source project that did something highly objectionable to a large portion of the userbase that was then rescued by the userbase?
Can you explain why I should choose IIS over Apache just because there is a possibility that somebody could fork Apache? If either IIS or Apache decided tomorrow that they would no longer support TLS, but only support eeeTLS, why would the Apache situation be worse just because it is OSS and some users might fork it to keep TLS capabilities? It seems obvious to me that in fact Apache is the wiser choice in that regard.
It doesn't matter if you agree, it only matters if there are some decision makers that agree, and I know for a fact that there are. So the damage has already happened.
Likewise. This is precisely how it happens, and no amount of wishful thinking about how companies are going to insource the maintenance of some major FOSS package is going to change that. It simply will not happen that way; companies want to focus on their business, not on the vast amount of tooling that underpins basic functionality. They'll just use another supplier.
The only ones are those that build stuff on top of Terraform.
And you know what? It will always be worse building stuff on top of a proprietary technology managed by a company that can decide to discontinue it at any time. Look at all those companies that had built services using the Twitter API.
The alternative to a commercial OSS project is a commercial project without the OSS component. If the quality is perceived to be similar and the OSS license is no longer a factor then it might as well be closed source.
The answer is "then a viable fork will sprout up and we can switch to that instead" which is what has happened for every notable widely used OSS project I can think of.
And it's not like you have to sit idly while that's happening. A small company might not be able to fork and maintain their own fork but they can pitch in with other companies by providing some development time, or money or infrastructure to help make an OSS fork successful as a community project.
This is the same series of events that we saw (and continue to see) play out after IBM attempted to cut off competitors' access to Red Hat. They had control of every user's attention, and they exerted huge control over the direction of development. They lost that control when they gutted CentOS, and now their competitors are banding together to make their own offerings. This might end up fragmenting their ecosystem in the long run.
I see a similar result of HashiCorp's attempts. OpenTF will make a suitable replacement, and adoption may sway to their fork rather than HashiCorp's. As they continue to meddle, the community (which wants free software) will continue to push back.
It's to be expected. I've yet to see a corporate take-over by a strategic player where there wasn't some major downside for those who helped establish the acquired party in the first place. It always feels like being stabbed in the back.
Getting a lot of Docker Hub vibes from this one. HashiCorp is of course within their rights. Can't be cheap to run the registry given the obscene size of some Terraform providers.
$ ls -lah terraform/providers/registry.terraform.io/hashicorp/aws/5.14.0/darwin_amd64/
total 368M
Anyone have an idea of the reasons terraform needs a 370 MiB binary just to call REST APIs?
> Anyone have an idea of the reasons terraform needs a 370 MiB binary just to call REST APIs?
That's because Terraform fell for the Go trap. When space and bandwidth are cheap, why not go for an environment that only ships fully self-contained binaries? Oh, and why not go for a language that attracts hipsters like fruit attracts flies, but is a nightmare to develop in?
Bloody ridiculous, it's a miracle Terraform got as far as it did.
(Yes, I'm working with Terraform every day and it's pretty decent, but I'd love to extend it for Atlassian Cloud stuff without having to add a sixth language to my already sizeable toolbelt. Why Atlassian doesn't offer Terraform integration on their own is beyond me, in any case.)
> Can't be cheap to run the registry given the obscene size of some terraform providers.
Some providers are also hosted externally. I guess if traffic becomes a problem they might also just switch to serving every provider that is built on GitHub from GitHub Releases (and hope that GitHub won't change its policy).
For their own managed providers they no longer provide binaries in GitHub releases, and serve them from their own servers instead. Which feels like a trap BTW.
I didn't realize that, and it's extra weird that they publish the SHA256SUMS file as a release artifact when it references 14 zip files and the manifest.json that aren't attached, so ... thanks?
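If you do want to verify a build, the zips that the SHA256SUMS file refers to live on releases.hashicorp.com; assuming the usual layout there, something like this should work (version picked arbitrarily):

$ V=5.14.0
$ curl -sLO https://releases.hashicorp.com/terraform-provider-aws/$V/terraform-provider-aws_${V}_SHA256SUMS
$ curl -sLO https://releases.hashicorp.com/terraform-provider-aws/$V/terraform-provider-aws_${V}_linux_amd64.zip
$ sha256sum -c --ignore-missing terraform-provider-aws_${V}_SHA256SUMS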
But, in their defense, installing an "unofficial" provider (or build!) into TF is some JFC so there's that. We'll just add that onto the damn near infinite pile of "I hope OpenTF fixes ..." things
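For what it's worth, the least painful route I know of today is a dev_overrides block in the CLI config -- a sketch, with the build path obviously made up:

$ cat > ~/.terraformrc <<'EOF'
provider_installation {
  dev_overrides {
    "hashicorp/aws" = "/home/me/go/bin"
  }
  direct {}
}
EOF

Terraform will then use the binary from that directory for the overridden provider and install everything else normally.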
The AWS Go SDK is the vast majority of that bulk. In general go binaries can get pretty big but AWS has hundreds of services with thousands of APIs and it’s all going to have to get included in the AWS provider.
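You can get a rough sense of where the bytes go with the stock Go toolchain, assuming an unstripped binary (official release builds may be stripped, in which case this prints nothing useful); the filename here just follows the plugin naming convention:

$ go tool nm -size -sort size terraform-provider-aws_v5.14.0_x5 | head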
Also AWS has so many services that the SDKs are mainly generated from json descriptions + nice wrappers on top. That leads to a different and less abstracted type of code than you'd write yourself - which leads to bigger compiled objects.
Ruby had this problem too and at some point split the SDK into multiple gems so you don't have to install everything.
The Azure SDKs are the same: auto-generated from some underlying description. Then, for backwards compatibility, every previous version is its own complete copy, all included in one single bundle.
If I am a cloud provider and maintain a Terraform binding, I now have a strong reason to target OpenTF over HashiTF as they inevitably begin to diverge. It's hard to see how HashiCorp survives competing against a consortium of companies for whom TF is a complement rather than the product, and who are incentivized to use it as a free value-add. Especially when those same companies are the primary targets for Terraform usage.
To me, this entire story was about HashiCorp updating their TOS without changing the "updated date" -- which alone is reason enough to drop all products from a company. That said, it does look like the updated date has since been modified to reflect the change. So maybe it's not so "silent" anymore, or maybe they changed it because of the backlash here.
I mean it's reasonable for HashiCorp to limit who uses their infrastructure since they foot the bill for it. Google did a similar thing for the Chrome Web Store to download extensions when Microsoft released Edge, pushing Microsoft to host its own distribution channel.
The registry is more similar to https://sum.golang.org/ than the Chrome Web Store. It pretty much just stores a checksum database, a list of links to github (which actually hosts the cross-compiled binaries), a channel [Official, Partner, Community], some ownership metadata, and some static markdown per provider/module version for documentation.
E.g. back-of-envelope for terraform providers this is:
Metadata: 4KB JSON [0] * ~15 OS/arch combinations * ~50 versions * ~3000 providers = ~10GB in total
Docs: ~700KB [1] * ~50 versions * ~3000 providers = ~100GB in total
In my mind the analogous behaviour would be if the golang checksum database added license terms that stated "you need to abide by a BSL to use data from this service". What that actually would mean is so nebulous that it feels threatening.
(NB: in airbyte's case the TF provider was generated from a ~150KB OpenAPI spec via https://speakeasyapi.dev, implying docs could be compressed even more)
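To make the sum.golang.org comparison concrete: that service is likewise just a queryable checksum log, e.g. (module and version chosen arbitrarily):

$ curl -s https://sum.golang.org/lookup/github.com/hashicorp/hcl/v2@v2.17.0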
Even though this will likely prevent OpenTF from connecting to registry.terraform.io to get plugins, the source code for most (all?) plugins is still open source and actually stored on GitHub (e.g. https://github.com/terraform-provider-openstack/terraform-pr...).
More work for OpenTF to get up and running, but also feels reasonable that HashiCorp wouldn't allow connecting to their service.
I think that's true, but it's probably hard to recreate large parts of the index. I don't think there's any mandatory manifest or something like that where you can reliably identify a repo as something that appears in their registry. Probably some missing metadata too.
So at the end of the day, it's either about the effort to inventory the current registry and find sources that are available, or doing this on an ongoing, as-needed basis (e.g. something like para would allow an index to be hosted in a GitHub repo managed with PRs).
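As a stopgap, the stock CLI can already snapshot whatever an initialized configuration uses, and the result can be served as a mirror. A sketch, with a hypothetical mirror URL:

$ # from a directory with an initialized configuration
$ terraform providers mirror ./registry-mirror
$ # serve ./registry-mirror over HTTPS, then point clients at it:
$ cat >> ~/.terraformrc <<'EOF'
provider_installation {
  network_mirror {
    url = "https://mirror.example.com/"
  }
}
EOF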
"Silently" implies they're trying to sneak something in without everyone knowing, but the discussion has been going on for a few weeks now, IIRC? Did they just finally make the change they said they were going to make, or is this something else?
What's the legal theory that allows HashiCorp to control the use of data after it's been downloaded? Is it like offering MP4 files on my website that anyone can download, but adding the condition that you may only view them in VLC? (Which I'm fairly sure would be unenforceable even if the MP4 files were proprietary content.)
The bigger thing, though, is that they don't necessarily need to be on completely solid legal ground. Just enough semi-plausible legal ground to out-spend you on the matter.
I think the freedom to contract is the main legal theory.
That is, as long as there's a proper contract that doesn't contain unconscionable or otherwise illegal terms, adults can enter into contracts with the consent of all parties, and the parties get to decide the terms of the agreement.
The leadership at Hashicorp doesn't seem to "get" open source development, or at least doesn't seem to value the community of their customers.
The change of license could, if you squinted, be attributed to them trying to protect their SaaS business. But now they're saying "this is ours and you can't play", even though that hurts TF in the long run by alienating the community. I get that they need to generate revenue, but I don't understand how alienating their customers makes money, even in the long term.
I've seen a lot about OpenTF and I applaud the (very large) effort to get a viable alternative in place in short order. Is anyone aware of corresponding projects for other tools in the HashiCorp suite?
I don't exactly understand how you could use it commercially even under the original license quoted in the thread:
>You may download or copy the Content (and other items displayed on the Services for download) for personal non-commercial use only, provided that you maintain all copyright and other notices contained in such Content. You shall not store any significant portion of any Content in any form.
Does personal non-commercial use allow use for work purposes somehow?
Yeah, I thought Terraform might not have been a direct target of the BSL license change (Vault/Consul etc. are bigger fish), but this makes it clear that's not the case.
I had HashiCorp in my mind as a cool and respectful company. These recent events have made me reconsider and throw them into the same bag as Oracle. I find this pretty sad.
It actually was a respectable company. But the guy who started it all (Mitchell Hashimoto) isn't there anymore (at least not as a decision maker) so the corporate drones are taking over.
It's sad too; mitchellh used to be fairly active in the HN community, but it seems he's no longer able to be, likely for professional reasons after the BSL fiasco.
Remember, he was CEO but didn't like that role. He stepped down from the CEO role as well as the board so he could continue to hack rather than lead. I agree, it's likely not how he wants things to be run, but it's not his decision anymore.
Nonsense? Now that the company is public we can see the issue. They are losing a ton of money. This isn't about quarterly projections, it's about keeping the company afloat.
The license change is only for future changes. The existing codebase cannot be relicensed as HC does not own full copyright on 100% of contributions AIUI.
IANAL, but I don't think anyone can change the license of something that has already been published. Doing so would defeat the purpose of having a license in the first place.
They absolutely were at one point, and this is why the BSL move was so upsetting: not because it necessarily stopped people from using it (unless, of course, they were offering competing services), but because it signaled that the phase of HashiCorp being a cool and respectful company is over.
It's almost a capitalist gut punch. Chasing that dollar, a company will squeeze out every last drop of its altruism.
It makes me think the government should provide incentives to keep corporate-backed open source software going, by figuring out how to offer a tax break on all of the "lost revenue"... We just need a good way of figuring out what the lost revenue is.