
> Given the number of worktrees I have of some huge repos (nixpkgs, linux, etc) it would likely mark a significant reduction in CPU/disk usage given what Syncthing is having to do now to monitor/rescan as much as I'm asking it to (given it has to dumb-sync .git, syncs gitignored content, etc, etc).

Are you really hitting that much of a resource utilization issue with syncthing though? I use it on lots of small files and git repos and since it uses inotify there's not really much of a problem. I guess the worst case is switching to very different branches frequently, or committing very large (binary?) files where it may need to transfer them twice, but this hasn't been a problem in my own experience.

I'm not sure you could really do a whole lot better than syncthing by being clever, and it strikes me as a lot of effort to optimize for a specific workflow.

Edit: actually, I wonder if you could just exclude the working copies with a clever exclude list in syncthing, such that you'd ONLY grab .git, so you wouldn't even need the double transfer/storage. You'd risk losing uncommitted work, I suppose.
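
If you tried it, a rough .stignore sketch might look like the below. This assumes repos sit one level under the synced folder root (e.g. ~/code/nixpkgs/.git), and note that Syncthing's handling of ! negations inside otherwise-ignored parent directories has caveats and has changed across versions, so check the ignore docs before trusting this:

    // .stignore at the synced folder root -- hypothetical sketch
    // First match wins, so negations must precede the catch-all.
    !/*/.git
    *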



inotify has pretty paltry default limits, and it burns one watch per directory, not per byte. My ~/code is only 40-50GB, but there's no way inotify can watch all of it out of the box.

Thus, syncthing basically has to rescan constantly. It's not great.
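
For what it's worth, the watch limit is a sysctl you can raise, though whether any value is enough for a tree with that many directories is another question. A rough sketch on Linux (the value and file name are illustrative, not recommendations):

    # check the current limit; one watch is consumed per watched directory
    sysctl fs.inotify.max_user_watches

    # raise it persistently (example value)
    echo 'fs.inotify.max_user_watches=2097152' | sudo tee /etc/sysctl.d/90-inotify.conf
    sudo sysctl --system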

And yes, rebasing linux+nixpkgs on even an hourly basis is absolutely devastating. lol



