I don't understand why gitlab/github/bitbucket don't provide better tools for monorepos. The topic is pretty trendy, but there are basically no tools helping with access control, good CI, ...
What's missing in these is cross-referencing, which is not possible without a somewhat established BUILD system (caps "pun intended"), e.g. something like bazel's BUILD files, plus a source code indexer, etc, etc.
This becomes critical for doing reviews, since it allows you to "trace" things without running them, among many other uses: for example, large-scale refactorings that look for usages of functions, and other tasks like that.
Why can't github/gitlab/etc. do it? Well, because there can hardly be one all-encompassing BUILD system to generate this index correctly.
They could define a standard file format that has to be generated by the build system. github is in a pretty powerful position: they could ship even a shitty version of it and people would follow.
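To make the idea concrete, here is a minimal sketch of what such a standard file format could look like. Everything here is hypothetical: the JSON Lines layout, the field names, and the example symbols are all made up for illustration, not any real github/gitlab format.

```python
import json

def write_index(path, entries):
    # One JSON object per line: a symbol, where it's defined, and where it's used.
    # A build system (bazel or otherwise) would emit this as part of the build.
    with open(path, "w") as f:
        for e in entries:
            f.write(json.dumps(e) + "\n")

def usages_of(path, symbol):
    """Return the usage locations recorded for `symbol`, or [] if unknown."""
    with open(path) as f:
        for line in f:
            e = json.loads(line)
            if e["symbol"] == symbol:
                return e["usages"]
    return []

# Made-up example entry: a C function and its call sites.
entries = [
    {"symbol": "parse_config",
     "definition": "src/config.c:42",
     "usages": ["src/main.c:17", "src/reload.c:88"]},
]
write_index("xref.jsonl", entries)
print(usages_of("xref.jsonl", "parse_config"))
```

The point is that the host (github) only needs to agree on the format and render it; generating it correctly stays the build system's problem.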
I've been thinking about a tool like this for a long time. A way to attach to each commit not only the diff of the code, but also the list of places affected by the changes (usages of the modified functions, for example). Then during review we wouldn't have only a stupid diff. We would have a list of places to check to be sure the changes make sense in the context of the project.
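A toy sketch of that idea, under heavy assumptions: take a unified diff, pull out the names of functions whose bodies were touched (relying on the hunk header carrying the enclosing `def`), then list every call site in the rest of the codebase. The diff, file names, and functions below are all invented for illustration.

```python
import re

def changed_functions(diff_text):
    # Unified-diff hunk headers usually carry the enclosing declaration,
    # e.g. "@@ -10,6 +10,8 @@ def total_price(items):"
    names = set()
    for m in re.finditer(r"^@@.*@@\s*def\s+(\w+)", diff_text, re.MULTILINE):
        names.add(m.group(1))
    return names

def call_sites(source_files, names):
    """Map file -> line numbers where any changed function is called."""
    hits = {}
    for fname, text in source_files.items():
        for lineno, line in enumerate(text.splitlines(), 1):
            for name in names:
                if re.search(rf"\b{name}\s*\(", line) and not line.lstrip().startswith("def "):
                    hits.setdefault(fname, []).append(lineno)
    return hits

# Made-up diff touching total_price, and a made-up "repo" that calls it.
diff = """\
@@ -10,6 +10,8 @@ def total_price(items):
-    return sum(i.price for i in items)
+    return sum(i.price * i.qty for i in items)
"""
repo = {"checkout.py": "amount = total_price(cart)\nprint(amount)\n"}
print(call_sites(repo, changed_functions(diff)))  # -> {'checkout.py': [1]}
```

A real version would of course lean on a proper indexer rather than regexes, but the review output is the same shape: "these are the places to go check".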
Even if they could, it's one thing to index your own source files every night, and another to index a much bigger volume plus massive numbers of branches, clones, etc. (I'm talking about github). It's just not practical, as there is no clear way to say which branches (from git) must be indexed (obviously not all of them); there is no encompassing "standard" saying so.
That by itself is another BIG PLUS for a mono-repo (and "mono" rules): things are done one (opinionated) way, with trunk-based development, and that in turn gives you things you wouldn't be able to have otherwise.
Now, indexing source files is neither an easy nor a cheap task: it's basically a huge MapReduce running over several hours (just guessing), so there must be a good reason for doing it.
Hey. What is the error rate you face? Like, how often is it not possible to render a page because the navigation fails, chrome crashes, puppeteer raises a strange exception, ...
At Ahrefs we are running between 130 and 170 million sessions a day. The project started before the release of puppeteer, and I can tell you the release of puppeteer saved me a lot of time. It's much easier to keep the browser in a correct state with it than with what was previously available. It also interacts well with lighthouse. I wouldn't call chrome headless stable or bug-free (it doesn't even handle https correctly), but it's a good thing to have it available.
I don't think we have a fancy usage of puppeteer. But still, we get an error rate between 2% and 5% depending on the version used. It's a small ratio, but at this scale it's a lot of pages.
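Just to put those percentages in perspective, a back-of-the-envelope calculation using the numbers from the comments above (130-170M sessions a day at 2-5% errors):

```python
def failed_sessions(sessions_per_day, error_rate):
    # Plain arithmetic: how many sessions fail per day at a given error rate.
    return int(sessions_per_day * error_rate)

low = failed_sessions(130_000_000, 0.02)   # best case from the thread
high = failed_sessions(170_000_000, 0.05)  # worst case from the thread
print(low, high)  # 2600000 8500000
```

So "a small ratio" still means somewhere between roughly 2.6 and 8.5 million failed renders every day.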
Awesome stats. I've actually run across Ahrefs a few times while developing and reviewing other products. Really, well done on that; I hold Ahrefs in high esteem.
These games are always for vim. But I would love to have something like this for emacs, using all the transposition commands or what's available from paredit/smartparens.