It sounds like it's a separate, single instance. It definitely doesn't use the same infrastructure as gitlab.com itself (which is a good thing, since that's what it's monitoring), nor is it really built to scale. So it's no great surprise that HN-level traffic overpowered it.
I do hope they're using Prometheus federation to expose this instance to the fickle internet, and that they have one or more internal Prometheus instances that aren't directly queried by it. After all, those internal instances are what pages someone if something goes wrong in prod.
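For context, federation just means the public-facing Prometheus periodically pulls an aggregated subset of series from the internal servers over their /federate endpoint, so the internal servers never answer queries from random visitors. A rough sketch of what that endpoint returns, with a made-up hostname and match[] selectors (in a real setup this would be a scrape_config pointed at /federate rather than a script):

    import requests

    # Hypothetical internal Prometheus address; selectors are invented for illustration.
    INTERNAL_PROM = "http://prometheus.internal:9090"

    # /federate returns the current value of every series matching the match[]
    # selectors, in the Prometheus text exposition format.
    resp = requests.get(
        f"{INTERNAL_PROM}/federate",
        params={"match[]": ['{job="gitlab-rails"}', '{__name__=~"job:.*"}']},
        timeout=10,
    )
    resp.raise_for_status()

    # Each non-comment line is one sample, e.g.:
    #   job:http_requests:rate5m{env="prod"} 1027
    for line in resp.text.splitlines():
        if line and not line.startswith("#"):
            print(line)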
It is a separate instance from our internal one. We have a cron job that automatically copies the internal Grafana dashboards to the public one, so you still see exactly what we see.
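For anyone wanting to do something similar, that kind of sync fits in a few Grafana HTTP API calls. A minimal sketch, assuming two Grafana instances with API tokens (hosts, tokens, and the lack of error handling are placeholders, not GitLab's actual cron job):

    import requests

    # Hypothetical hosts and tokens; not GitLab's actual setup.
    INTERNAL = "https://grafana.internal"
    PUBLIC = "https://dashboards.example.com"
    SRC = {"Authorization": "Bearer <internal-api-token>"}
    DST = {"Authorization": "Bearer <public-api-token>"}

    # List every dashboard on the internal instance.
    for item in requests.get(f"{INTERNAL}/api/search",
                             params={"type": "dash-db"}, headers=SRC).json():
        # Pull the full dashboard JSON by uid.
        full = requests.get(f"{INTERNAL}/api/dashboards/uid/{item['uid']}",
                            headers=SRC).json()
        dash = full["dashboard"]
        dash["id"] = None  # let the public instance assign its own numeric id

        # Create or overwrite the same dashboard on the public instance.
        requests.post(f"{PUBLIC}/api/dashboards/db",
                      json={"dashboard": dash, "overwrite": True},
                      headers=DST).raise_for_status()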
We used to use Federation, but now we just have the public server scrape the same targets as our private one.
I ask because it's not a particularly good look that viewers from HN were able to hug the site to death with bad gateway errors, is it?