Related but slightly different. The accuracy is real, but it is only valid at a point in time. Consequently, two measurements can both be highly precise and highly accurate and still disagree, depending on when they were made.
In most scientific and engineering domains, a high-precision, high-accuracy measurement is assumed to be reproducible.
I think this is a charitable interpretation of the remark which deprives GP of learning something (sorry if this comes across as condescending, I'm genuinely trying to point out an imo relevant difference)
No, it's not accuracy vs precision at all. That statement is about a property of the measurement tool, which can have a systematic offset [0] (think of a manual clock where the manufacturer glued the hand on with a slight shift) vs simply be imprecise (think of a clock that only has a minute hand, but none for seconds).
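To make that distinction concrete, a minimal Python sketch with invented numbers (not from any real clock spec): the offset clock is precise but inaccurate, the minute-only clock is unbiased but imprecise.

    import random

    TRUE_TIME = 612.0  # the "real" time in seconds, arbitrary value for illustration

    def offset_clock(true_time):
        # hand glued on with a slight shift: readings are consistently ~30 s off,
        # but they scatter very little -> precise, inaccurate
        return true_time + 30.0 + random.gauss(0, 0.1)

    def coarse_clock(true_time):
        # minute hand only: no systematic offset, but readings are rounded to
        # whole minutes -> roughly accurate, imprecise
        return round(true_time / 60.0) * 60.0

    print([offset_clock(TRUE_TIME) for _ in range(3)])  # ~642 every time
    print([coarse_clock(TRUE_TIME) for _ in range(3)])  # 600.0 every time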
The thing pointed out by the original comment is about a change in the _measured_ system, which is something fundamentally different. No improvement in the measurement tool [1] can help here, as it's reality that changes. Even writing down the measurement time only helps so much, since typically you aren't interested in precisely the time of measurement and will make an implicit assumption that the real world is static.
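A minimal sketch of that contrast, again with made-up numbers: the instrument below is ideal (no offset, arbitrary resolution), yet two measurements disagree simply because the measured system drifted in between.

    def true_value(t_hours):
        # the measured system itself is not static: it drifts 0.5 units per hour (invented rate)
        return 20.0 + 0.5 * t_hours

    def ideal_instrument(t_hours):
        # no offset, unlimited resolution: it faithfully reports reality at time t
        return true_value(t_hours)

    print(ideal_instrument(0.0))  # 20.0
    print(ideal_instrument(3.0))  # 21.5 -- both readings are correct, they just describe different moments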
[0] The real reason for those is that it is _much_ simpler to build a precise relative measurement tool (i.e. it's easier to say "bigger than that other thing" than "this large"). One example is CO2 concentration measurements: they are often relative to outdoor CO2, which is, unfortunately, not stable.
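A minimal sketch of that failure mode, with invented numbers (real sensors and their calibration schemes differ): the tool measures the indoor/outdoor difference well, but reports an absolute value by adding a fixed assumed outdoor baseline, so the report moves whenever the real outdoor concentration moves even though the room did not change.

    ASSUMED_OUTDOOR_PPM = 400  # hypothetical baseline the sensor was calibrated against

    def reported_ppm(true_indoor_ppm, true_outdoor_ppm):
        # the relative part is what the tool does well...
        measured_difference = true_indoor_ppm - true_outdoor_ppm
        # ...the absolute report just adds back a fixed assumed baseline
        return ASSUMED_OUTDOOR_PPM + measured_difference

    print(reported_ppm(true_indoor_ppm=800, true_outdoor_ppm=400))  # 800
    print(reported_ppm(true_indoor_ppm=800, true_outdoor_ppm=430))  # 770 -- same room, different outdoor air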
[1] Assuming that the tool is only allowed to work at a single point in time. If you include e.g. a weather-modelling supercomputer in your definition of tools, that would work again.