
I’ve always thought of version control systems and issue trackers as separate products. GitHub just happens to implement them both in one place.

As others are alluding to in the comments, I’m highly skeptical of coupling the issue tracker and other non-VCS features with git. A far better solution would be to keep them decoupled, but easily pluggable and extendable.

In fact, this is basically the status quo with any issue tracker that has git integration (the ability to tag issues in commits, etc.). If this were the only problem GitHub solved, it would be competing with Pivotal and the like.

But issue tracking is not the only problem GitHub solves. The reason GitHub grew so large is because it created a community within a single space. It gave devs a place to collaborate on and discover projects, within an environment highly integrated with VCS and issue tracking.

People don’t publish to GitHub for the issue tracker or repo hosting. Those are solved problems and many companies even use separate software for them. People publish to GitHub for the visibility, discoverability, and community.

There is no technical solution that can usurp the advantages of GitHub as a social network, just as creating Mastodon is not enough to kill Twitter.

The technology features of GitHub are easily substitutable. The community is not. Network effects are real, and they are the reason Microsoft felt confident paying $7.5bn for a platform that has no major technical differentiation from its competitors but does have a huge, defensible moat of mindshare: its main value-add is rooted in network effects.




I feel like the HN community fails repeatedly to really grok this concept. They focus on the negatives of having a single dominant player for a service (with worries about monopolistic practices and stagnation), but completely ignore WHY these single providers become dominant.

There are HUGE network-effect benefits to having a single dominant provider. Right now, if I am looking for a code library, I pretty much only use the ones I find on GitHub, even if the Google search shows me projects hosted in other places.

I want to be able to fork, clone, and contribute back without having to create accounts on other VCS sites. I don't want to learn another interface, or have to remember which site had which project. I want to be able to have a single list of 'starred' projects that I am following. I want to only have to learn one system and check in one place.

No matter what, that is just easier as a user. I know there are costs, but those really have to outweigh the benefits to make it worthwhile having multiple providers. I have yet to see a federated system that solves these problems, and the fact that no federated system is as popular as the centralized systems leads me to believe it might be an unsolvable problem.


It works both ways. If you're too lazy to learn a service I chose to host my code on, you don't deserve my code.

(Sorry to put it harshly, but it's felt like we're all getting a bit spoiled. Especially compared to the old days.)

For better or worse, you are the product if you use a centralized, free service. And the only thing keeping those dominant players dominant is the blind loyalty we seem to give freely.

It's all about that convenience! I get that. But morals have their place, and in the coming months the world will show whether people care at all about centralization.

It's probably important to stay nervous about centralization. If we let our guards down, we could find ourselves on the short end of an upsetting stick. History has shown time and again that when companies have no incentive to compete, they tend not to try. The reason Facebook can be so free with your data is that there is no way to compete with them. With github, at least there's a way.

Honestly, the biggest thing holding back GitHub competitors is that they won't just make their service look identical to GitHub. GitLab looks strange every time I run across a repo. Instead everyone wants to be different, and usually it's not better.


I use (for example) 50 libraries, and want to submit bug reports and pull requests to all of them over the course of 5 years...

and your attitude is "well... if you don't make 50 accounts, you don't deserve it"? Really? And that's also why we have package repositories... npm, NuGet, PECL, Composer. Should I also register on 500 websites just so I can build a website or two?

Also, GitHub is free for open-source projects, but it's paid for teams and enterprises. It's a win-win situation where they do social good and get paid for private service.

Alternatives & competition is good, but too much competition is not that great either in this case.


This is what I like about projects that still use mailing lists to collaborate. I already have an account that I can use for every mailing list in existence -- my email account.
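For what it's worth, git ships with that workflow built in. A rough sketch, assuming send-email is already configured with your SMTP details (the list address and patch filename are just placeholders):

    git format-patch -1 HEAD                             # turn the latest commit into 0001-....patch
    git send-email --to=dev@example.org 0001-*.patch     # mail the patch to the list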


> they do social good

Maybe, but maybe I disagree with how they treat their female employees, or maybe I don't like that they financially support some political thing or whatever.

Capitalism doesn't work without real competition. I shouldn't be obliged to do business with this one particular company if I want to develop Free Software. My point isn't that GitHub is evil; it's that each person should be free to decide that individually.


Correct, capitalism isn't inherently evil, and competition is healthy. That's why we have GitHub, Bitbucket, SourceForge, GitLab, Gitea, Phabricator, and probably a few more projects like this. However, imagine if we had thousands and the open source community was, give or take, evenly distributed between them. That would simply be a nightmare: discoverability would be pretty low, contributions would be even lower, and the community wouldn't be thriving as it is now. Path of least resistance and all that...


> imagine if we had thousands

First you spoke of 50, then 500 (websites), now thousands (yet you only managed to name 6)... this is starting to resemble a reductio ad absurdum.

You haven't really made a convincing argument for why we might have excessive variety, such as an unusually low barrier to entry compared with other open source tools.

Moreover, I'm pretty sure your arguments could also be used in favor of federation, rather than centralization.


Why would discoverability be low? Search engines exist.

Why would contributions be low? Presumably we'd have a common way to contribute from your own federated instance.

There are thousands of websites (maybe even more!) and it's possible to discover them and comment/upload/whatever. Would it be better if they all moved to Medium, Wordpress.com and Facebook?


> It works both ways. If you're too lazy to learn a service I chose to host my code on, you don't deserve my code.

I have infinite work that needs to be done, and a finite amount of time. I have found in my experience that only looking at GitHub works for maximizing my productivity; I can get almost everything there, and the returns for learning another system do not make up for the time it takes. This isn't about laziness; it is about choosing to put effort where I get the most value for it.

> For better or worse, you are the product if you use a centralized, free service

I pay for GitHub (7 bucks a month for a personal account, and my company pays $100k+ for GitHub Enterprise).

I think the thing we should be nervous about is not centralization, but vendor lock-in. As long as we can switch, we are OK. Making sure we use abstractions in our interactions with the GitHub API helps with this.
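To give a rough idea of what I mean by an abstraction (this is just a sketch with made-up names, not any real library):

    from abc import ABC, abstractmethod

    class CodeHost(ABC):
        """Minimal interface the rest of our tooling talks to, instead of calling the GitHub API directly."""

        @abstractmethod
        def open_issue(self, repo: str, title: str, body: str) -> int:
            """Create an issue and return its number."""

        @abstractmethod
        def open_pull_requests(self, repo: str) -> list[str]:
            """Return the titles of currently open pull requests."""

    class GitHubHost(CodeHost):
        """Would wrap the GitHub REST API (calls omitted in this sketch)."""

        def open_issue(self, repo: str, title: str, body: str) -> int:
            raise NotImplementedError  # would POST to /repos/<owner>/<repo>/issues

        def open_pull_requests(self, repo: str) -> list[str]:
            raise NotImplementedError  # would GET /repos/<owner>/<repo>/pulls

Swapping providers then means writing, say, a GitLabHost with the same two methods, rather than chasing GitHub-specific calls through the codebase.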

However, as long as they keep providing the best service, I will keep using them.


I would not say that GH offers the best service. There are lots of features it is lacking compared to Bitbucket and GitLab. I think it is about the interface and that we are just too used to it and too lazy to retrain our brains... again.


> It works both ways. If you're too lazy to learn a service I chose to host my code on, you don't deserve my code.

Not really, you just get fewer people using, contributing to, and testing your project. Software should stay where people know to look for it, not scattered around SourceForge, Google Code, etc.


> It works both ways. If you're too lazy to learn a service I chose to host my code on, you don't deserve my code.

Linus, is that you?

[1] in case you don’t get the reference.

But in all seriousness, this is a terrible attitude to have. One of the things I enjoy about being a developer is the community and collaboration, and this type of thinking is the antithesis of that.

I’d much rather have centralised tools and a distributed/diverse development community than decentralised tools and an isolationist community.

[1] https://github.com/torvalds/linux/pull/17#issuecomment-56546...


> I have yet to see a federated system that solves these problems, and the fact that no federated system is as popular as the centralized systems leads me to believe it might be an unsolvable problem.

A dominant siloed service is precisely what stops a federated network from emerging. This is why I'm very pleased that Microsoft has bought GitHub, because it's apparently jolted a lot of people into realising that GitHub is a single dominant provider.

I think GitHub became dominant the same way Twitter and Gmail did. Techy early adopters ignored the fact that it was proprietary, because it was a nice service among many options; but this growth and the network effect led to an effective monopoly:

1. Neat tech demo, siloed but harmlessly tiny

2. Techy early adopters start to rely on it because it's genuinely useful

3. Network effect brings in the masses

4. Scalability problems: interoperability is still desired, but less urgent than fail whales

5. Investors invest; money pays for scaling up and fixing fail whales

6. Investors get itchy at loss-making, want a return on their investment

Now you have the masses using your service, and any competitors are ghost towns. Investors want money and your most valuable asset is your user base. Interoperability would make it easier to lose your users, so lock-in becomes essential to your business strategy.

If enough people jump into the silo before federation works, federation will never be added. Avoid supporting proprietary networks.


I find that this network effect really makes me want to improve my projects, too. I've worked for hours on README files, making sure I had my licenses in check, and crafting the short repo description to be informative and useful for this reason [0]. If it's on GitHub, I feel like it needs to be presentation ready. Not everyone feels this way, but I do.

If I just need a git repository, I store it locally. Git is a DVCS for a reason -- most single person projects don't need a remote anyway. This makes GitHub the home for my more permanent projects, with local git for everything else.

[0]: https://github.com/Pryaxis/TShock


>If I just need a git repository, I store it locally. Git is a DVCS for a reason -- most single person projects don't need a remote anyway.

I use a remote as an easier-to-maintain-and-test backup than trying to maintain a mirrored local repository on another drive.
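On the git side that is basically two commands; something like the following, where the remote name and URL are just placeholders:

    git remote add backup git@gitlab.example.com:me/project.git
    git push --mirror backup    # mirrors all local refs (branches, tags) to the backup remote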


> I use a remote as an easier-to-maintain-and-test backup than trying to maintain a mirrored local repository on another drive.

You should really consider getting a good backup solution, like Tarsnap or Backblaze, following the "two is one and one is none" principle of backups. That is, if you haven't already.


I already use Backblaze as it's important to have off-site backups whenever possible. However, needing to wait several weeks to download my data is not always an option. I'm a digital hoarder, and by that I mean I have >30TB of data. My backup solution for most things is 3 local copies and 1 remote copy. Files I don't deem important enough to pay to store 3 times locally aren't backed up locally but are still backed up with Backblaze. Some things are easier to entrust to someone else (e.g. git repos) than trying to maintain local mirrors. Also, a Backblaze backup of a git repository doesn't do me much good if I want to quickly grab it (we're talking minutes to download from GitLab instead of hours/days from Backblaze), so it makes more sense to use GitLab as a remote so I can quickly grab it if I lose my local copy.

All of that aside, it's still easier to maintain-and-test a remote repository than to maintain-and-test local repositories.


I intentionally leave some of my projects in a super-raw state as a way to obfuscate their true intended purpose; utilizing a public [but obscured] repo as a quasi-private one. :)

I realize not everyone feels this way, but I do.


Why not just host them somewhere private repos are free of charge (e.g. GitLab, Bitbucket)?


It's like putting something on the blockchain-- I want it there so that I can point to it from the future as "my work" but without attracting too much attention in the meantime.

If the attention comes, no worry. If it doesn't, perfect.


There are a lot more eyes on those repositories than you think. Things like misplaced API tokens are vacuumed up nearly instantly. You'd be much, much better off hosting private stuff on GitLab or BitBucket.


I do have the really secret stuff on BitBucket. :)


Appreciate all the hard work! TShock is great :)


Thanks! It's a community effort, though, and I only deserve a small fraction of the credit. I'll pass along your message to my team.


I haven't found code yet that I would only keep locally in a git repo. Can you give an example of that type of code?


I don't know about other devs, but I have a decently large directory of Python scripts that are kinda 'single-use', such as scripts to reformat a JSON file that is screwed up in some oddball way. I use git for version control, and save them on the off chance they'll be useful someday for cut-and-paste, but there is no need to store something I may never use again remotely. In fact, the only reason they aren't deleted is that the space saved wouldn't be worth it when archive disks are so cheap. If my house burns down or the NAS is stolen, that code is the last thing that will be on my mind.


> Right now, if I am looking for a code library, I pretty much only use the ones I find on GitHub, even if the Google search shows me projects hosted in other places.

I don't know what ecosystem you're working with, but I typically find code libraries for my projects using the Python Package Index (PyPI). A lot of packages on PyPI are hosted on GitHub, but by no means all. pip can install packages from sources other than PyPI or GitHub. It's not that difficult.
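For example (the URL and local path below are only illustrative):

    pip install requests                                      # from PyPI
    pip install git+https://example.com/someone/somelib.git   # straight from a git remote
    pip install ./path/to/local-package                       # from a local checkout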

Regarding creating accounts, yes it's a speedbump to have to create multiple accounts but using a good password manager with autologin features can take away a lot of that pain.


>I feel like the HN community fails repeatedly to really grok this concept

No. Two weeks ago, if you took the temperature of the crowd, it seemed dead set on singing the praises of GitHub centralization, encouraging others to put all their eggs in that basket, and pretending open source doesn't exist unless it's on GitHub.

The popular conversations here have shifted. It probably indicates some shift in what the audience really thinks, but mostly it just shows what's popular to bikeshed this week.


> WHY these single providers become dominant.

In this case, providing a free "let's pretend we're doing open source" play area to millions of people who can't code their way out of a wet paper bag.


> can't code their way out of a wet paper bag

There was no lack of competent, original code on GitHub last time I checked... What are you referring to?


And this is exactly why it's wrong.


Exactly. But an OSS community is very different from Twitter or Instagram. People will migrate to another website if Microsoft screws this up. Even if they don't screw it up, a lot of people are already willing to move because they hold grudges or are against a company like Microsoft owning the space.

If a GitLab- or Gitea-backed community managed by a foundation surfaces, GitHub will be vulnerable.

Right now there isn't much of a point in moving to GitLab.com because they could be acquired as well.


> As others are alluding to in the comments, I’m highly skeptical of coupling the issue tracker and other non-VCS features with git.

What do you mean by coupling? Are you talking about what Gitlab Omnibus does by bundling an issue tracker and other web services along with its core git functionality?

If so, you're wrong-- GitLab's approach helps a busy maintainer get their work done without being hemmed into a service like GitHub. The issue tracker's defaults are sane, non-technical users can easily sign up using the other bundled services and communicate using it, and it gets upgraded as a side-effect of upgrading GitLab itself.

If I had instead used one of your claimed "far better solutions," I'd have spent hours researching issue trackers and installing/configuring one of them. That's time taken away from developing the software I'm housing in Gitlab.

Or, even worse, I'd have asked on the dev list, "What's a good, solid open source issue tracker?" And we'd have bike-shedded for 10x the time.


> I’ve always thought of version control systems and issue trackers as separate products.

Agreed, but coupling has extreme benefits. No matter how stringent you are on your commit message requirements, you'll never capture the scope of a code change as well as the originating issue.

They may be separate products, but the need for a cross-vendor common interface/implementation persists. The ability to take your ball and play elsewhere is required too. If a git-like approach were taken towards issue management and all of these platforms could implement/use it, nobody would ask to piggyback on git. Then again, you can apply this same discussion to any form of structured, persisted digital content... nobody wants to be locked in. It just so happens that the code part has an unencumbered implementation that others have embraced.


> I’m highly skeptical of coupling the issue tracker and other non-VCS features with git.

But isn't it nice that you can say "issue X has been solved in commit Y"?


You can do that without having the repo and the issue tracker in the same product. For example, at work we use Jira for issue management and GitHub for code, and with a Git Jira plugin, they integrate pretty seamlessly in both directions.


Yes it’s nice, and there are dozens of products that allow you to do that. My point is that this is not the main value-add of GitHub, and discussions of “replacing GitHub” that center around re-implementing these features are missing the point. The value-add of GitHub is its community and years of developer mindshare. There is no simple technical solution to replacing that.


You seem to focus on step 2, i.e. having community and mindshare, while completely ignoring why all the people got there in the first place and why it gained such momentum: step 1.


Step 1 for a category defining product will never be the same as step 1 for a future competitor to that product. What worked for GitHub as step 1 will never work as step 1 for any service that replaces GitHub, because GitHub’s very existence changes the environment that allowed their step 1 to enable step 2.

This is the luxury of first-mover advantage. GitHub only had to implement “good enough” features to attract a community. Any competitor that usurps GitHub will not only need to implement the right features, but also figure out how to move the community from GitHub to the new platform.


It feels like there might be two different uses of "coupling" here.

The linked article argues that GitHub locks in users by not storing everything in git, which is what the post you're replying to is skeptical of. (I am, too.)

Being able to say "issue X has been solved in commit Y" is coupling the issue tracker and source control at the user experience level. The back-end solutions are immaterial; they just need to be talking to one another, whether through plugins or through proprietary features like the ones GitHub already has (commits can be automatically linked to issues and vice versa, merging PRs can automatically close issues, etc.).
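GitHub's closing keywords are the concrete version of that: a commit message (or PR description) along these lines closes the referenced issue once it reaches the default branch (the issue number here is made up for illustration):

    Fix crash when the config file is missing

    Fixes #42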


That’s a matter of preference. A git repo will probably outlive fashionable-issue-tracker-of-the-day, so it might make sense to not embed this sort of information right inside git commits.


This also accurately describes what Microsoft purchased with LinkedIn. It still isn't the best platform in terms of what it covers, but there's immense value in the community that was purchased. Now that I see the pattern, I'm curious what other communities will get folded into the MS ecosystem in the future.


Skype was also a similar purchase. The pattern seems to be about dominant companies in the productivity/development space.

I wouldn't be surprised if companies like Atlassian, Slack, JetBrains, Docker or HashiCorp are next.


> GitHub just happens to implement them both in one place.

Anyone who has version control and bug tracking under one roof in one company has them "in one place". Plenty of organizations have them integrated with some sort of common "dashboard", and multi-way navigation between tickets, commits and reviews.


If you are only interested in the version control part and you are a Mac user, you can give EasyGit (https://easygit.me) a try. It stores your repos privately on iCloud and it doesn't use any 3rd party servers or analytics. In fact its sandbox doesn't allow any outgoing network connections, apart from talking to iCloud.

Disclaimer: This is my app and btw I'm running a WWDC promotion and you can download it for free till Friday: https://itunes.apple.com/us/app/easygit/id1228242832?mt=12


Exactly this. I can discover and contribute to many projects (creating issues, submitting pull requests, with discussion) without having to learn the hundreds of subtly different UXs each "distributed" system chooses to implement.

And I know most other developers can also contribute to my projects in a familiar way.



