It seems to me like most security engineers have a strange view of security. They seem to take an abstract view of security as enforcing certain invariants in a system, but have no problem with exploiting other systems or abusing human beings. Google will gladly exploit my system, coming up with ever-more-nefarious ways of making it disclose my private information and trying to limit what software I'm able to run. Yet they employ myriad security engineers and have the ability to dictate de-facto standards for the security of the web. How is it that their "Project Zero" can speak at a conference without getting booed out of the room?
> but have no problem with exploiting other systems or abusing human beings.
Security is about making sure that things happen via a certain path, without anything going wrong in ways that can be abused.
Web Environment Integrity effectively gives, say, a bank a way to validate that a request isn't coming from botnet VMs spun up in the cloud, but from an actual device with an actual human behind it. In fact, many websites already try to accomplish this with fingerprinting, which is why certain browsers and VPNs are already effectively blocked from much of the web.
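To make that concrete, here's a rough sketch of the attestation pattern the proposal is built around: a trusted attester signs a verdict about the client's environment, and the relying site (the bank here) checks the signature and the nonce it issued. The token layout, field names, verdict label, and choice of Ed25519 are my own illustrative assumptions, not the actual WEI wire format.

```python
# Conceptual sketch of environment attestation. The token format and field
# names are assumptions for illustration only; the real proposal does not
# specify this exact layout.
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Attester side (e.g. the OS/browser vendor) ----------------------------
attester_key = Ed25519PrivateKey.generate()
attester_public_key = attester_key.public_key()  # published to relying sites

def issue_verdict(nonce: str) -> tuple[bytes, bytes]:
    """Sign a claim that the requesting environment 'meets integrity'."""
    payload = json.dumps({
        "nonce": nonce,                  # binds the verdict to one request
        "verdict": "meets-integrity",    # hypothetical verdict label
        "issued_at": int(time.time()),
    }).encode()
    return payload, attester_key.sign(payload)

# --- Relying site side (e.g. the bank) --------------------------------------
def verify_verdict(payload: bytes, signature: bytes, expected_nonce: str) -> bool:
    """Accept the request only if the attester vouched for this exact nonce."""
    try:
        attester_public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    claims = json.loads(payload)
    return (
        claims.get("nonce") == expected_nonce
        and claims.get("verdict") == "meets-integrity"
        and time.time() - claims.get("issued_at", 0) < 300  # freshness window
    )

if __name__ == "__main__":
    nonce = "bank-issued-random-nonce"  # would be random per request
    payload, signature = issue_verdict(nonce)
    print(verify_verdict(payload, signature, nonce))  # True
```

The nonce check is what distinguishes this from fingerprinting: a botnet VM that can't get a fresh signature from a trusted attester has no valid token to present, no matter how convincingly it imitates a real browser.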
Yes, it has the effect of limiting your freedom and privacy, but there is a definite security gain here that will lower the likelihood of account compromise.
Security != privacy, and security != freedom; often there's a tradeoff. Security engineers are employed by their company to increase security, not freedom, and not privacy (unless mandated by regulation), hence the bias you typically see from them.