Leap seconds have nothing to do with precision time. They're a non-periodic alteration to the definition of UTC. The niche applications that require second-accurate knowledge of Earth's rotational orientation don't need to rely on leap second changes to UTC; they could get that information out of band, and will need to anyway once they need sub-second accuracy. The 99.999% of other applications that don't need to know Earth's orientation relative to the sun to within a second -- but do have to calculate time differences between two UTC timestamps -- would be much better off not having to deal with leap seconds. Leap seconds were a terribly misguided idea.
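To make the "calculating time differences" pain concrete, here's a rough Python sketch. The table and function names are made up for illustration; real TAI-UTC offsets come from the IERS bulletins:

```python
from datetime import datetime, timezone

# Abbreviated TAI-UTC offset table (illustrative; only the two most
# recent leap seconds are shown -- the full table comes from the IERS).
LEAP_TABLE = [
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
]

def tai_offset(t: datetime) -> int:
    """TAI-UTC in seconds at UTC time t (for times covered by the table)."""
    offset = 35  # value in effect before the first table entry
    for effective, value in LEAP_TABLE:
        if t >= effective:
            offset = value
    return offset

def elapsed_seconds(t0: datetime, t1: datetime) -> float:
    """True elapsed seconds between two UTC instants, counting leap seconds."""
    naive = (t1 - t0).total_seconds()  # datetime pretends every day is 86400 s
    return naive + (tai_offset(t1) - tai_offset(t0))

t0 = datetime(2016, 12, 31, 12, 0, tzinfo=timezone.utc)
t1 = datetime(2017, 1, 1, 12, 0, tzinfo=timezone.utc)
print(elapsed_seconds(t0, t1))  # 86401.0 -- a leap second was inserted in between
```

Without the correction, the naive subtraction is silently one second short -- and every piece of software doing plain timestamp arithmetic across a leap second has this bug.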
I prefer dropping leap seconds, but I wouldn't call them misguided. UTC and leap seconds come from maritime celestial navigation, where tracking Earth's rotation actually matters. Civil time then just piggybacked on that, which at the time was probably a perfectly reasonable solution.
A detailed look at the negotiations that led to leap seconds shows that they were not for maritime celestial navigation. Several times during the process the celestial navigation folks set a limit on how far radio broadcast time signals could deviate from astronomical time, and every one of those limits was violated as the negotiations proceeded. By the time the draft recommendation was given to the 12th Plenary Assembly of the CCIR it allowed leaps of multiple seconds, and that draft was amended on the floor to leaps of only one second before the vote to approve it.
Precision isn't binary; it's a continuum. The question at hand is whether there are significant use cases where 1 second of slack in matching Earth's rotation is acceptable but 30 seconds is too much.
GPS isn't an example of that: the equator rotates at roughly 465 m/s, so 1 second is already a quarter mile off.
Pretty much all telescopes (under computer/time control) on this planet rely on UTC being quite close to UT1, since every second of difference leads to a 15 arcsecond error in the pointing of the telescope (the error between where you want to point on the sky and where you are actually pointing).
While <5 s is usually not a problem (except for some instruments with a very small FoV), at 30 s it really becomes a problem for instruments with modest fields of view.
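The 15 arcseconds per second falls straight out of the rotation rate; a back-of-envelope sketch (function name invented for illustration):

```python
# Earth rotates 360 deg per solar day, i.e. 360 * 3600 / 86400 = 15 arcsec
# per second of time, so a clock error maps directly to a pointing error.
def pointing_error_arcsec(clock_error_s: float) -> float:
    """Approximate hour-angle pointing error for a given UTC-UT1 difference."""
    return clock_error_s * 360 * 3600 / 86400  # 15 arcsec per second

print(pointing_error_arcsec(1))   # 15.0  -- tolerable for most instruments
print(pointing_error_arcsec(30))  # 450.0 -- 7.5 arcmin, a problem for small FoVs
```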
I would think this is an argument for telescopes using UT1, not for whether UTC should be adjusted with leap seconds. This is such a fringe application, and everyone involved is so aware of the issue, that I don't think it's relevant here at all.
UTC is defined to stay within 0.9 s of mean solar time at 0° longitude (= UT1).
If leap seconds are a problem for the intended application then UTC is the wrong thing to use. There are other scales defined differently that can be used in its place (such as TAI).
However, if exact frequency is not critical, one could also directly use UT1 instead of UTC (people who use leap second smearing, for instance, should have no issue with that).
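Getting at TAI or UT1 from a UTC timestamp is straightforward if you have the IERS tables. A minimal sketch using astropy (my choice of library; any tool that carries the IERS data would do, and the UT1 conversion needs those tables, which astropy fetches by default):

```python
from astropy.time import Time  # pip install astropy

t = Time("2017-01-01T00:00:00", scale="utc")
print(t.tai.iso)  # TAI: ahead of UTC by the accumulated leap seconds (37 s here)
print(t.ut1.iso)  # UT1: Earth-rotation time; requires IERS data (auto-downloaded)
```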
As long as everyone agrees on the time, GPS will work. It's a question of how much deviation from solar time we are willing to tolerate vs. how much effort we are willing to spend.
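Worth noting that GPS system time itself contains no leap seconds at all; receivers apply a GPS-UTC offset broadcast in the navigation message. A rough sketch of the conversion (function name invented; hard-coding the offset as below breaks the moment a new leap second is announced, which is exactly why receivers read it from the broadcast instead):

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_OFFSET = 18  # seconds; valid since 2017-01-01, broadcast by the satellites

def gps_to_utc(gps_seconds: float) -> datetime:
    """Seconds since the GPS epoch -> UTC, valid after the most recent leap second."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_UTC_OFFSET)
```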