> GitLab, a software product used by 2/3 of all enterprises, ...
This claim is false. The linked survey shows that GitLab has a 2/3 market share in self-hosted Git providers. Not every enterprise self-hosts a git repository.
I tend to believe claims of the form "X is used by Y% of Z", but I don't give them much weight, because they are usually less impressive than they sound.
For example, I recall IBM marketing once claimed that OS/2 was used in some very high percentage (it was close to 100%, if I recall correctly) of Fortune 500 companies (or some similar group). Did that mean OS/2 was taking off in the enterprise, displacing Windows left and right?
Nope. It just meant that most Fortune 500 companies had at least one OS/2 system. Take advantage of its excellent DOS multitasking to migrate a few legacy DOS systems from dying old hardware to a single new machine under OS/2, and your Fortune 500 company counts as using OS/2 even if that is the only OS/2 system you have.
I brought this up on HN in the past[1], but it gets instantly downvoted (by GitLab employees?). My comment was a response to the CEO claiming how honest their marketing is[2].
They also declared GitLab CI the number one CI by excluding Jenkins on the strength of their made-up definition of "modern".
This whole thread is full of their employees arguing (quite aggressively in some cases) with people for calling them out or questioning their marketing claims. I wouldn't be surprised if they're downvoting en-masse too. Doesn't inspire a lot of confidence in their company and I'm surprised this type of behaviour is allowed on HN.
I get that. But most likely, that whole spin is the marketing dept's fault. As I said in another comment, they are not technical enough to understand CI/CD and devops terminology.
They are technical enough to evaluate Jenkins and declare it be not worthy enough to be counted as competition, but not technical enough to understand CI/CD.
I understand that too. But in my experience, GitLab CI Runner is the fastest for our stack. I've seen build times reduced from 3 minutes to 59 seconds. I can give exact numbers if you want to scrutinize my statement more.
Edit: I should add, our build artifacts are deployed to staging and NOT production.
I don't have a problem believing that Gitlab could be faster for an org's stack, at all.
I do have a problem with you implying Jenkins is not-modern by saying "depends on your definition of modern". Speed is only one factor. It's not ok to lie and hide behind "our marketing is clueless about our tech."
To be completely honest, Jenkins is garbage-tier slow enterprise software. GitLab CI Runner is a pleasure to work with. Jenkins was hosted on our production box, and it used to freeze up whenever someone pushed.
Marketing folks aren't technical enough to understand CI anyway. This is why marketing folks shouldn't step on us tech folks' toes - you'll just look like an idiot and end up embarrassing yourself.
How much do you really care if their marketing department is aggressive? Isn't it better that GitLab is open source and a reasonably good member of the OSS community? Honestly I don't even count those stats as false; Jenkins feels super old and the context weirdness ("Not every enterprise self-hosts a git repository") actually makes sense. If this is the trade to support OSS, it's way better than building a shadow surveillance state under the guise of profit via advertising revenue.
Telling lies != being aggressive. Where advertising makes claims that are objectively false, they should be called out. This isn't 'puffing' or hyperbolic opinion. This is saying X is Y. That these lies are in support of an agenda with which many here might agree does not matter.
Sure, but the link text is deliberately misleading. The stat "2/3 of all enterprises", on its own, is patently false. The fact that you can click the link to "fix" the lie is irrelevant.
In the jobs boxout on StackOverflow I'm pretty confident that McLaren Applied Technologies is looking for a lead C#/.NET developer.
On another page, the UK mobile network '3' is offering 12GB for the price of 4GB. I know that's true because I took them up on it (and even at 4GB it's still cheaper than any other network).
I could provide many other counter-examples but I don't need to because you used 'always' in your absolutist statement. Yes, a lot of advertising twists the truth. No, not everything does. And for things that we hold to a higher standard (like a great open source project like Gitlab) we can point out places where they don't meet that standard in the (not unrealistic) hope that they'll improve.
> I'm pretty confident that McLaren Applied Technologies is looking for a lead C#/.NET developer.
Are you certain? Maybe they have an internal promotion but company policy/legal compliance forces them to advertise the role publicly?
> On another page, the UK mobile network '3' is offering 12GB for the price of 4GB. I know that's true because I took them up on it (and even at 4GB it's still cheaper than any other network).
How long for? Is this one of those contracts where it is 3 for 1 for a year then double price next year...
NB I'm absolutely not accusing either of these specific companies of being dishonest.
Pretending everything is black and white, and painting everyone who makes an incomplete statement as a liar, is what led to the current political misery.
There's a very big difference between "2/3 of all enterprises" (press release) and "a 2/3 market share in self-hosted Git providers" (glutamate, above).
I've been a fan of GitLab's for quite a while (mostly due to your transparency; I just created an account a few weeks ago to try it out) but -- unless you are claiming that GitLab is being used by two-thirds of "all enterprises" -- this is very misleading, IMO.
> There's a very big difference between "2/3 of all enterprises" (press release) and "a 2/3 market share in self-hosted Git providers" (glutamate, above).
And even that is a totally pulled-out-of-the-ass claim. At most it could be "2/3 of those who responded to our survey".
We claim it based on the assumption that almost all very large companies (enterprise) host their own source code and they are switching to git. We have 2/3 market share with self hosted git as shown with two data points https://about.gitlab.com/is-it-any-good/#gitlab-has-23-marke... I'll consider adding the rationale to the page.
We're seeing organizations adopt SaaS but the largest companies tend to be the last ones to switch.
You quote 'a software product used by 2/3 of all enterprises'. I see this as a different and weaker claim. We and our competition can easily make that claim, since almost all Fortune 500 companies have at least one use of GitHub, GitLab, and Atlassian.
> We claim it based on the assumption that almost all very large companies (enterprise) host their own source code and they are switching to git.
Almost all × Almost all (even assuming that's not overstated for either of those claims, which given the evidence of your inappropriately narrow definition of “enterprise” and my experience in the parts of that space you don't seem to have considered, I doubt) isn't the same as all, or even almost all.
> We and our competition can easily make that claim since almost all Fortune 500 companies have at least one use of GitHub, GitLab, and Atlassian.
“Enterprise”, as a software market, is more than Fortune 500. Not only is that an overly narrow definition for private, for-profit enterprises, “enterprise” also included large public and non-profit (e.g.—but not exclusively—academic) institutions.
I dunno. I do R&D tax credit studies for hundreds of enterprise software firms and I always ask what version control system(s) they use and where/how it is hosted. I would guesstimate between a third and a half use GitLab in some capacity. And even then a lot of the time it's only for some teams or some projects. I do work with a disproportionate number of MS shops and a disproportionate number of medium to large companies (and few very large companies), but that's kind of the point. Those companies account for a lot of the enterprise market, yet you aren't including them in your claim. You've still got amazing market share and an impressive product, so there's no need to exaggerate. Just say 2/3rds of self-hosted git. That's a powerful enough marketing statement in and of itself.
Just to clarify, between a third and a half use GitLab in some capacity. But to answer your question, I see a lot of TFS these days. Easily 90% of the MS stack firms, and even some with more diverse stacks. And increasingly with the VS git for Windows plugin as the VCS. Those are probably around 50% of my clients.
SVN and Mercurial have a noticeable presence, say around 30% of my clients, but that number seems to be declining. I mostly see this with smaller firms with a single large, mature, legacy product that's in the cash cow stage. They're barely investing in new features, so why bother to upgrade their internal systems?
A really shocking number of firms don't have any VCS whatsoever (around 15%) or individual teams decide on their own solutions (around 10%) or individual teams may supplement the firm wide VCS for production with their own dev solutions (around 15%, which overlaps with all the other categories). A lot of those teams seem to use GitHub private repos or GitLab, though frequently I see institutional constraints prevent them from adopting a self-hosted solution (a company too dumb to set up VCS isn't likely to spring for a server--these are the same people that have dev and prod but no test servers). I frankly don't know how the firms without a VCS function. They tend to be my most challenging clients, so the answer is probably not well.
And then a really large number of firms have legacy homegrown solutions. Probably like 20%. These tend to be larger companies with an in-house tech department supporting a really unremarkable product that's been in use for 20+ years, like an eCommerce solution for a mid-sized retailer or an inventory system for a manufacturer. I think it's probably a much larger segment than shows up in a lot of stats because they tend to be... weird. They feel less like development teams and more like overgrown IT departments.
Notice that these numbers don't add up to 100. That's because there's lots of overlap, especially at larger firms that have grown by acquisition, where individual entities can operate nearly independently. Really, there's a huge selection bias here too. My clients tend to be either very well organized MS shops or nightmarishly anarchic hodgepodges. That's driven by client size and by my firm's market position and by my own sales abilities. I do better with MS shops because that's the stack I use and it makes it easier for me to speak their language, as it were. My firm targets mid-sized companies over very large companies or very small ones.
I do agree that the majority of Bitrise's population is not the enterprise at all. The only other representative data we could find was from BuddyBuild https://www.buddybuild.com/blog/source-code-hosting#selfhost... and it has the same problem. You can see that it also isn't enterprise because the vast majority of their respondents are cloud-hosted.
If there is a better data source we can use I would love to know. It seems hard to get data on self hosted organizations. Traditionally you looked at the number of paying customers, but that doesn't work with open source.
>I do agree that the majority of Bitrise's population is not the enterprise at all.
> We and our competition can easily make that claim since almost all Fortune 500 companies have at least one use of GitHub, GitLab, and Atlassian.
Not sure if you are trolling or missing the point people are trying to make. You are admitting that you made that conclusion based on the wrong data, and yet you continue to say your claim is correct.
These numbers are the most valid numbers we can find, and they match what we estimate based on other, even less exact data (like the version check built into GitLab).
If you want to publish a research paper, "Yeah, I know this isn't really a valid measurement, but it's the best I can do" isn't going to cut it. This isn't a research paper, but honesty is honesty. If those are the best numbers you have, I would simply not say "X% of enterprises are using it" -- because you don't actually know that. I'd say what you know instead, however you can describe it. Yep, it doesn't read as well marketing-wise, true.
> You can see that it also isn't enterprise because the vast majority of their respondents are cloud-hosted.
So? I work for an extremely conservative enterprise customer (with strict compliance concerns), and we're still aggressively moving to cloud-hosted solutions for most things.
Also, still exclusively using TFVC in TFS for source control.
I would assume, at the very least, that some significant portion of those enterprises who are switching to Git are Microsoft shops who are just switching to a version of TFS that includes Git.
I'd also guess that "Bitrise users" is a sample group that differs in significant ways from "Enterprise users".
As someone who has attempted to contribute to the GNOME project, this makes me super happy. The current process is archaic: attaching patch files to weird bug trackers and then sorta just waiting for someone to get back to you... A proper pull request system with code review would lower the barrier of entry for contributions to these projects significantly.
Then you also didn't work on the kernel, the community where git came from ;) There you have to attach the patch to a weird e-mail and hope somebody picks it up.
But it's true - a pull system is nice for drive-by contributions. In some projects I noticed that after being listed on GitHub (whether as a mirror or the primary home), the number of contributions improved, but many low-quality contributions came in too.
The kernel’s process would be far superior to what GNOME does currently. I’ve simply not sent GNOME patches because they won’t accept them over email, and require using a wonky web interface (bugzilla) with an account to do it instead.
I would much rather just be able to do a `git send-email`, as many other projects (not just the kernel!) accept.
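For context, the email-based submission being asked for here is just two commands (the list address below is a placeholder, and `git send-email` assumes SMTP settings in the `[sendemail]` section of your git config):

```shell
# Generate a patch file for the most recent commit
# (writes e.g. 0001-Fix-typo-in-docs.patch).
git format-patch -1 HEAD

# Mail the patch to the project's mailing list.
git send-email --to=project-devel@example.org 0001-*.patch
```

No account, fork, or web UI is involved; the barrier is configuring SMTP once.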
> a pull system is nice for drive-by contributions
Did you mean a patch system? With a patch, you just send upstream the patch, and you're done. Pull-based syncing is good for (and was designed for) frequent collaborators. As a one-time contributor doing pull requests, after you've cloned the repo and made your changes, you have to set up some public remote that others can pull from, push up your changes to that remote, send upstream a message that it's ready for them to pull, and make sure that remote maintains uptime at least until upstream has had an opportunity to pull, at which point you can do whatever you want with it. It really doesn't make a lot of sense to use this workflow for anything but frequent collaborators, where the setup costs are supposed to pay off later, after you've shared your nth change.
Or did you mean it's nice in the sense that it increases the volume of contributions, because there's a long tail of people who are familiar with and only know how to deal in pull requests?
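Concretely, git itself ships a helper for the handoff step described above; a sketch of the manual pull-request workflow, with placeholder URLs and branch names:

```shell
# One-time contributor's pull-based workflow (all URLs are placeholders).
git clone https://example.org/project.git && cd project
git checkout -b fix-typo
# ... edit files, git add, git commit ...

# Publish the branch somewhere upstream can reach.
git remote add mine https://example.org/~me/project.git
git push mine fix-typo

# Generate the "please pull" summary to mail upstream.
git request-pull origin/HEAD https://example.org/~me/project.git fix-typo
```

GitHub/GitLab collapse the "public remote" steps into the Fork button, which is why the per-contribution cost feels so different on those platforms.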
I don't know what you're referring to. Not quite as laborious as what? When I mentioned things like "setting up a remote", I was specifically thinking of GitHub and GitLab's web forms. It's exactly as involved as I outlined above.
I don't understand how this is a response to anything I wrote in my first comment, or the followup that you're directly responding to. Once again, pull requests work exactly as I outlined above.
No one is disagreeing that this is the technical process behind creating a pull request. It just turns out that a number of companies including Github and GitLab are willing to do all of these steps with a few button presses for free for anyone on behalf of any open source project.
I've long felt I was missing something about the GitHub process. It seems insanely complicated to me. Pushing the fork button is a very small piece of the process. Setting up remotes and things is very much NOT button-automated. And at the end of the process, you have this extra repo lying around in your GitHub account that you'll probably never use again. The effort and amount of artifacts generated by a simple change are way out of proportion.
And yes, I do appreciate the rare cases when I can just make the change in the web interface of the original project. But most one-time changes I make involve compiling (or at least running) something.
GitHub also lets new contributors edit in upstream repositories - and then coordinates a new fork, branch, and PR for the change when the user clicks "Save".
Yup, I can’t count how many times I kept bugfixes, improvements, or tweaks to myself because the submission process is too complicated and user hostile.
This is honestly a problem with any submission process involving patch files. I've submitted a few via mailing list and I can't express how much I dread seeing source control handled this way.
For sure, and it's arguably worse when they claim "patches welcome" and then the PR sits for years without so much as a "please wait." Insult to injury when the PR is just a typo fix, or cleaning up a blatant lie in the documentation.
Fedora requires an FPCA. In the comments below https://about.gitlab.com/2017/11/01/gitlab-switches-to-dco-l... we say the following about that: "While FPCA may not be a typical CLA with regard to rights and restrictions, this is not the only factor we looked into. We also were looking into whether there were terms in general, other than commonly used open source terms. Our analysis took into account that non-legal users do not always understand the nuances of legal language and can be deterred by any CLA, restrictive or not, if they do not understand the terms."
I understand your position. But it hurts me a bit to see all of the hard work we put into replacing the Apache style CLA with the FPCA and our messaging around it to make sure we explain it in transparent ways being reduced to "doesn't really matter, it's still just a CLA if you're not an expert" :(
I would like to personally apologize. No offense was ever intended. Looking into what the industry was doing was not a determining factor when we decided to move to the DCO. I personally can appreciate the amount of work it takes to make changes to licensing terms. It is a difficult task to try to create terms that will please your users. As I am sure you have experienced, you will never be able to please everyone, there will always be nay-sayers and opponents. However, if you have any interest in moving to DCO, I would be happy to talk to anyone interested in talking about our analysis and process.
Mmm, I'm sorry to hear that. I agree that between CLAs there can be vast differences in the terms. We did not intend to detail those differences in our blog post. BTW, have you considered changing it to a DCO?
I've casually suggested this to some people involved in Fedora. FWIW I think it would be a good idea, and I think at this point the retention of the FPCA by Fedora is mostly a matter of inertia.
But I share jwildeboer's sentiment, having been directly involved in the drafting of the FPCA to replace the old Fedora CLA. If one looks at the FPCA in historical context, it may be clearer that it is intended as an "un-CLA".
One could argue that the DCO is a type of CLA. I know some people who call the DCO a contributor agreement, at least, and if it is a contributor agreement, it is one that refers to licensing of what's being contributed.
The title says "Debian and GNOME announce plans to migrate communities to GitLab" but unless I'm missing something, there's nothing in the article about this supposed migration.
GitLab being open source and easy to run on your own hardware is a huge "selling" point. I don't think it's helping their income much, but it's growing the community of users and the money will probably come later. It's very hard for GitHub to compete with that. GitHub could introduce CI and other things that GitLab has, but it would be just one extra feature you have to pay for.
Being open source is not normally something that helps income; actually it's the other way around. Open source is the argument that wins over the customers who appreciate the lack of vendor lock-in.
I'd argue otherwise, but have no source to support my claims. Maybe someone can help. Here's my 2 cents:
By being open source and allowing people to run it on their own servers, they can potentially convert a massive number of people to their platform who otherwise wouldn't make the switch. The next time these people don't want to run GitLab on their own servers for whatever reason, they'll use the hosted service. And pay for it. Because they know the tools, the logic, the UI, etc.
The reason I'm arguing for this is that it's exactly what's happening with me. I'm a very happy GitHub user, but I might need to run something similar on my own servers. If I do, I'll host a GitLab. And then, for my other projects where I don't need to run my own instance, I'll probably just use their web platform. Because it takes me less time to navigate it now that I've run it.
So I can see myself paying for GitLab some day, because they've open sourced it and got me 'hooked' on their product as a result.
It's not so easy. Which lunch are you talking about? Self-hosted or hosted?
I don't think GL will ever have anything much in the hosted sector, for two main reasons:
1. GH's infrastructure is far ahead of GL, and
2. the social factor affects companies that have not only private repositories but also public ones - it wouldn't make sense to move only the private repositories to save money
When we think about the self-hosted market, then it's possible, but I'm not sure it would significantly affect GH, which has a different business model.
The way I see it going is that many, many large companies will have a bunch of teams on GitHub (with good reason), with both public and private repos. They will also have a bunch of teams on GitLab instances that have been wildcatted into existence, possibly before GitHub had decent inroads, but more probably so the code doesn't leave the firewall.
At some point, a Boss who may or may not have Pointy Hair will look at the bill for GitHub, and the bill for the GitLab infrastructure, and say "Why am I paying for both?" At that point, the teams with code that can't leave the firewall have the trump card, and private repos on GitHub have to move off.
It's not the money itself that makes the argument, it's paying twice for the same thing.
Moving a significant part of the infrastructure is no joke, and requires extremely pressuring reasons.
Licensing of course is one of those, but it's not necessarily the general case.
When it comes to GL vs. GH, right now, speed is also a factor - try to fork a large project on GH and on GL; the last time I did, I was worried that I had triggered a bug on GL, given how long it was taking. I speculate GL will have significant hurdles, since they work in the cloud (GH is on metal, and the reason a lead engineer gave me is speed), and they may have trouble scaling while containing expenses.
I also doubt that there are so many companies with large projects on GH and small, "wildcat" projects on GL - GH dwarfs GL for hosted services.
If and when mass migrations will be a concern, GH will surely have the tooling ready.
Pointy hairs are not really relevant, since they may take any decision (even the opposite one) on any whim.
It's not that speculative. It's predictable from the direction the company I work for is going in.
> Moving a significant part of the infrastructure is no joke, and requires extremely pressuring reasons. Licensing of course is one of those, but it's not necessarily the general case.
"Extremely pressuring reasons" can be as simple as "we don't like that clause in the contract with supplier X, so we now have to move to supplier Y. Yes, I do mean everyone." We've gone through precisely this in another part of our estate, and it triggered a 10 month migration which is still in flight. IP constraints combined with "we must only pay once for a given function" is easily enough.
> I also doubt that there are so many companies with large projects on GH and small, "wildcat" projects on GL - GH dwarfs GL for hosted services.
I can only talk about what I can see: an enterprise, where external GH and internally hosted GL both started very small, and have now consolidated into a single GH org and two (or three? not sure) internal GL services. I'd be hard pushed to guess where more projects currently live.
Relevant to this conversation: as a company, we looked at GH Enterprise, and decided against it. Or probably more accurately, didn't decide for it. I don't know the reasoning (but I suspect cost).
> Pointy hairs are not really relevant, since they may take any decision (even the opposite) for any whim.
Since they are the ones making the decisions, their reasoning is extremely relevant. There is a logic to how they work, it's just not the one engineers on the ground would pick. So, for instance, they probably won't care about speed of forking unless it actually gets in the way of delivery.
Interestingly, one place GL is winning internally is GitLab CI.
Money is not the only factor. IT informs me that we are getting great support from GitHub for our enterprise instance. As a user I have no clue what support means, but the people in IT are the ones getting that help.
So long as GitHub provides value we will stay there. Support is worth it when several thousand developers get an unplanned paid vacation every time a server goes down (of course git is distributed, so a few minutes of downtime will affect nobody, but much more than that and people notice).
Because github still has 99% of that market. Just look at how many repositories opened by Google, Facebook, Apple, Microsoft, NSA, GCHQ and 100s of others big players are on Github and how many on Gitlab.
That is true. But I reckon those big players just said "let's go with the flow and move some stuff to Github, good from the marketing point of view, etc". My point being that is just a trend. If Github is not careful, that trend can change anytime. Debian and GNOME are well known, big open source projects. This helps them a lot and pushes that trend a wee tiny bit in Gitlab's side.
People/companies go to GitHub to be social. It's the Facebook of computer code. That's why all the big players are there, so they can share/market themselves. It's also where a lot of programmers put their code so they can show it for interviews. That's why there are so many public repositories there.
Totally agree with this. This is the reason that I self-host with Gitea but mirror everything to GitHub. I essentially use GitHub as a forwarder to my Gitea instance.
However, I have seen a drastic decrease in interaction (PRs, issues, etc) with my self-hosted repos even though you can auth with your Github credentials.
Yes but how many of those are just mirrors and sources for non-contributors to find the code. See Linux Kernel as an example, the Kernel is mirrored on GitHub but none of the actual development takes place there.
I don't think it's about a mirror of the source code being there; I think it's more about a community of people and developers being able to report issues there. E.g. the Steam client is not open source, but it has a GitHub repo where we report distro/Xorg/Wayland/libs-related issues, and they often respond.
While I'm glad to see competition gearing up, I don't think GitHub and GitLab are in the same ballpark yet.
While I'm a big fan of their team and culture approach, we've yet to see better/respected GitLab.com SLA guarantees — it still feels like they're doing too much "testing in production".
"testing in production" indeed. One of my repos has been stuck in limbo for over 2 months now after I attempted to migrate it to a group namespace from my own profile.
Though I generally do like GitLab and favor Github alternatives over Github as I see it as a single point of failure for the FOSS community at this point.
> "testing in production" indeed. One of my repos has been stuck in limbo for over 2 months now
> after I attempted to migrate it to a group namespace from my own profile.
I tried various fixes via curl to get the repository unstuck (basically rewriting the URL the request is sent to, since the project settings page was pointing at the wrong repo), and it basically reports an empty repo on the target (which doesn't show up for me on my profile) and a 404 on the original (which does show up on my profile).
> While I'm a big fan of their team and culture approach, we've yet to see better/respected GitLab.com SLA guarantees — it still feels like they're doing too much "testing in production".
Agreed. However, if you need more reliability than gitlab.com it's not difficult to host GitLab yourself in a VM or on bare metal.
Many companies already have their own physical servers in a DC (either colocated or leasing). Hopefully the people managing these servers know what they're doing and can give the developers a good SLA. Although some additional work on file/DB syncing would be necessary to have a hot spare.
Most non-huge companies are unlikely to be able to provide _more_ reliability from a self-hosted installation than from a cloud-hosted one. I suspect this is true of GitLab too, but if it's not, it would speak negatively of GitLab's ability to provide a reliable service.
Certainly while github is occasionally down or misbehaving, there's no way any enterprise I've worked at could self-host with as much reliability as github.
We have a plan to address this and are kicking off our move to GCP shortly. Our top-level engineering department goal is to make gitlab.com ready for mission-critical workloads, and we are targeting industry-standard SLAs (e.g. three 9's). You can see our proposed architecture here if you're curious: https://about.gitlab.com/handbook/infrastructure/production-...
We're not done making GitLab faster yet, but it is getting better quickly. The last big thing we're working on is merge requests with hundreds of comments. That should come out in a few months.
I'm not sure what UI changes are in GitLab 10 but something that seems a little off for me is that when you're on a repo homepage, there can be quite a bit of scrolling before you get to the readme. For example:
Perhaps this is by design, to emphasise the code rather than the description of the code, but in my opinion the repo homepage is mostly for those new to the code to start becoming acquainted with it (other pages are more focused on the needs of regular contributors, as they should be), so it makes sense (to me at least) to make the readme more prominent.
I'm not going to suggest how to change it, as it's a decision that should be driven by the UI design team, but just flagging it as something to consider.
I don't actually understand this. Are they referring to the agreements for contributing to GitLab itself, or for contributing to things hosted on GitLab, which surely have nothing to do with the hosting at all?
GNOME and Debian wanted to replace some of their infrastructure with Gitlab but before they did that they asked Gitlab to get rid of their CLA. Before this announcement, people contributing to gitlab itself effectively gave full control of copyrights to gitlab but from now on contributors will keep full control of the copyrights for what they contribute.
GNOME and Debian are very serious about free software and are wary of using software with CLAs, because at any moment the company can switch future versions of the project to a non-free license, without input from the community and despite the software containing code originally from the community.
I wouldn't expect Gitlab to do something stupid like this today but getting rid of the CLA protects the free software community from Gitlab doing stupid things in the future. For example, if they end up being acquired by Oracle or some other FOSS-sabotaging company.
I'm confused now about what was in Gitlab's CLA. The CLAs I've signed never transferred rights to someone else, only asserted that I had the right to grant a license and was granting a license I wouldn't later revoke (since the point of a CLA is to protect a project from "oops I wasn't supposed to contribute that" or "oops, new boss here decided to revoke all our contributions").
I'm also confused why GNOME and Debian would have a problem with a CLA even if it did constitute a transfer of rights, since FSF actually does require you to sign over copyright to them in order to contribute, and GNOME and Debian seem happy to use FSF projects.
I could have been a bit clearer with what I wrote. I can imagine why someone would ask those questions...
> The CLAs I've signed never transferred rights to someone else
This is in part because copyright assignment/transfer is tricky, and in some jurisdictions it isn't even possible. Instead, CLAs will often grant the company a perpetual license to reproduce, redistribute, and sublicense the copyrighted work. Technically, the original contributor still owns the copyright to the work, but the company is still able to use the contribution as part of non-free software because the CLA gave them more powers than the original free software license did.
For example, Canonical's CLA[1] contains the following language:
> (b) To the maximum extent permitted by the relevant law,
> You grant to Us a perpetual, worldwide, non-exclusive,
> transferable, royalty-free, irrevocable license under the
> Copyright covering the Contribution, with the right to
> sublicense such rights through multiple tiers of
> sublicensees, to reproduce, modify, display, perform and
> distribute the Contribution as part of the Material; provided
> that this license is conditioned upon compliance with
> Section 2.3.
> Based on the grant of rights in Sections 2.1 and 2.2, if We
> include Your Contribution in a Material, We may license the
> Contribution under any license, including copyleft,
> permissive, commercial, or proprietary licenses. As a
> condition on the exercise of this right, We agree to also
> license the Contribution under the terms of the license or
> licenses which We are using for the Material on the
> Submission Date.
Notice how it explicitly mentions that Canonical is allowed to use the contribution as part of non-free, proprietary software. I can't find a copy of GitLab's old CLA now, but I'd guess it had a similar clause somewhere.
> (since the point of a CLA is to protect a project from "oops I wasn't supposed to contribute that" or "oops, new boss here decided to revoke all our contributions")
If the goal is to provide a paper trail ensuring that the contributor had the copyrights to their contribution, you can do that with a simpler certificate of origin document (like Gitlab is doing now) instead of the stronger terms usually found in CLAs.
> since FSF actually does require you to sign over copyright to them in order to contribute
The FSF's copyright assignment is a bit of a special case. It doesn't exist to protect the FSF's interests and is instead focused on making it easier for the FSF to litigate against GPL violators[2]. In particular, the FSF's copyright assignment contract[3] has a clause explicitly binding them to keep the software free and forbidding them from re-licensing it as proprietary software. The worst they could do is re-license it under a more permissive non-copyleft license, like MIT or BSD.
If FSF can re-license under a more permissive license, then FSF's assignment agreement implicitly allows the contribution to someday be used in proprietary software (since FSF could relicense to BSD or MIT, then someone else could take that code and incorporate it into a proprietary project).
So why doesn't anyone have a problem with the FSF's assignment?
The FSF's copyright assignment is also controversial (at the very least, there is the issue that it adds an extra layer of bureaucracy for contributors).
Honestly, I think it only gets a pass because it is the FSF, a nonprofit entity that is probably the most vocal proponent of copyleft licensing out there.
The DCO is not a CLA. It's more of a document that says that the submitter is indeed the copyright holder of the submitted work (or their employer, etc. is) and that the contribution is licensed under the same license as the original software.
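For the curious, this is the same sign-off mechanism the Linux kernel uses: rather than signing a separate agreement, the contributor appends a `Signed-off-by` trailer to each commit. A minimal sketch (the name, email, and file are illustrative):

```shell
# DCO workflow: `git commit -s` appends a Signed-off-by trailer,
# certifying the submitter has the right to contribute the change
# under the project's license.
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.name "Jane Dev"
git config user.email "jane@example.com"
echo "fix" > fix.txt
git add fix.txt
git commit -q -s -m "Fix a typo"
git log -1 --format=%B
# last line of the message: Signed-off-by: Jane Dev <jane@example.com>
```

Projects enforcing a DCO typically just reject patches whose commits lack this trailer.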
Yeah, I'm not sure what they mean by "migrate their communities".
I'm not exactly real clear on "... and open source projects" either. Debian used to use Alioth [0] pretty extensively. Maybe they're getting rid of that and moving everything that was on there to GitLab.
I get that this is a press release but it could be a bit more informative, IMO.
The Debian project has been investigating several alternatives to Alioth's version control functionality. I have not 100% kept up on the discussion, but I think gitlab is either selected for sure or is a leading candidate. https://wiki.debian.org/Alioth/GitNext
Even if they adopt gitlab for version control and pull requests, it's a very misleading headline that the "debian community" will be migrated to gitlab.
I think they will migrate the systems on which they host VCSes. Since they may need to make changes to the code to suit them, dropping the legal stuff makes using, modifying, and contributing back changes easier.
Their infrastructure tracker has a lot of interesting info about the move[0]. You can also read through the development mailing list, it has a lot of discussion about the GitLab migration.
At GitLab we're tracking the blocking issues they've brought up in this issue[2].
An active Gitea user here. GitLab is too heavy for my personal self-hosting site. At the moment I use GitHub for social coding and Gitea for personal repos; I've tried GitLab but am not jumping to it, and I don't see a need for that.
Personally, I avoid using GitLab for professional and private use, as it uses Omnibus packaging and comes with a whole bunch of software that is usually neatly packaged by any Linux distribution.
I was under the impression that Debian is in the process of moving to Pagure [1]. The reasons mentioned in the LWN article are not invalidated by Gitlab's announcement of dropping the CLA.
Based on the Debian Wiki, plans have since changed[0] (unless this page is outdated?). That LWN article is from June, the wiki article was last edited in September.
My understanding from the mailing list is that they're planning on using GitLab, at least that's what I heard last. I could always be wrong.
In that case, yes, that applies to GitLab too. The main problem is how GitHub and GitLab manage subprojects: GitLab forks are strictly hierarchical in how they manage pull requests and contributions (you can only submit patches from your repo to the single repo that you forked from).
But that may not be an issue for per-package repositories like Debian has on alioth. Debian does not have a monotree like the Linux kernel, every (source) package is a standalone project so the same objection does not apply here.
Note that "you can only submit patches from your repo to the single repo that you forked from" is no longer true for GitLab starting with 10.1 (released October 22, see https://about.gitlab.com/2017/10/22/gitlab-10-1-released/#me...), and hasn't been true for GitHub for a while (perhaps forever).
I'd really appreciate a Web-based way of reporting Debian bugs. It doesn't need to replace the current one; I suppose they could make some GitLab plugin for the e-mail method.
Just navigated my own running instance of gitlab-omnibus with Firefox's javascript.enabled set to false. It looks fine to me-- I can navigate, view the issues, and presumably post comments.
What is it that you are claiming is unusable without JavaScript?
Although unrelated: with the recent migration to Meson in the GNOME ecosystem, I really want to contribute to their code base in my spare time. Autotools was just hell to work on!