You might be confusing forward and reverse proxies. Transparent forward proxies can now bugger off; they will not be able to intercept HTTP/2.0 traffic.
Reverse proxies in front of web applications will need to terminate SSL before caching. Same as today.
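For anyone curious what that looks like, here is a toy Go sketch of a reverse proxy that terminates TLS itself and keeps a naive in-memory cache of GET responses in front of a plain-HTTP backend. The backend address, cert/key paths and 60-second lifetime are all made up, and a real cache would also honour status codes and Cache-Control, so treat it as an illustration only:

    package main

    import (
        "bytes"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync"
        "time"
    )

    type entry struct {
        header  http.Header
        body    []byte
        expires time.Time
    }

    var (
        mu    sync.Mutex
        cache = map[string]entry{}
    )

    // teeWriter copies the backend's response into a buffer while it streams
    // out to the client, so it can be stored afterwards.
    type teeWriter struct {
        http.ResponseWriter
        buf bytes.Buffer
    }

    func (t *teeWriter) Write(p []byte) (int, error) {
        t.buf.Write(p)
        return t.ResponseWriter.Write(p)
    }

    func main() {
        backend, err := url.Parse("http://127.0.0.1:8080") // the web application, plain HTTP (placeholder)
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodGet {
                proxy.ServeHTTP(w, r) // only GETs are cached in this toy version
                return
            }
            key := r.URL.String()
            mu.Lock()
            e, hit := cache[key]
            mu.Unlock()
            if hit && time.Now().Before(e.expires) {
                for k, v := range e.header {
                    w.Header()[k] = v
                }
                w.Write(e.body) // served from cache; the backend never sees the request
                return
            }
            tw := &teeWriter{ResponseWriter: w}
            proxy.ServeHTTP(tw, r)
            mu.Lock()
            cache[key] = entry{
                header:  w.Header().Clone(),
                body:    tw.buf.Bytes(),
                expires: time.Now().Add(60 * time.Second),
            }
            mu.Unlock()
        })

        // TLS (what the thread calls SSL) stops here; everything behind this
        // line is unencrypted and therefore cacheable.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", handler))
    }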
Proxies don't have to be transparent. Non-transparent forward proxies that I set up and choose to use are very handy because I get a direct performance improvement out of them.
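"Choosing to use one" is just a client setting. A rough Go sketch of the client side, where the proxy address is a placeholder for whatever you run yourself (a local Squid, say):

    package main

    import (
        "fmt"
        "net/http"
        "net/url"
    )

    func main() {
        // Hypothetical address of a forward proxy I set up and trust.
        proxyURL, err := url.Parse("http://proxy.example.lan:3128")
        if err != nil {
            panic(err)
        }
        client := &http.Client{
            Transport: &http.Transport{
                // Every request from this client goes via the chosen proxy,
                // which may answer repeat requests straight from its cache.
                Proxy: http.ProxyURL(proxyURL),
            },
        }
        resp, err := client.Get("http://example.com/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        // Caching proxies commonly add headers like Via (or X-Cache) that
        // show whether the response was a cache hit.
        fmt.Println(resp.Status, resp.Header.Get("Via"))
    }

That is the whole difference from the transparent case: the client knows the proxy is there and opted in, rather than having its traffic silently intercepted.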
As an aside, I hear that in the more distant parts of the world (Australia/New Zealand) transparent forward proxies are common amongst consumer ISPs, to help compensate for their high latency to the rest of the world.
What's wrong in principle with transparent forward proxying anyway? From almost any perspective other than security/anonymity, forcing a client to make a TCP connection to a publisher's computers every time the client wants to read one of the publisher's documents* is a terrible decision: stark, screaming madness. If transparent forward proxying breaks things with HTTP, then that's a problem with HTTP. Even from a security/anonymity point of view, an end-to-end connection is no panacea: if encrypted, it (hopefully) prevents third parties from seeing the content of the data sent, but it also makes damned sure that the publisher gets the client's IP address every time the client GETs a webpage, and as recent events have illustrated, publisher endpoints aren't super-trustworthy either.
* Unless the client itself has an older copy which it knows hasn't expired; but a single client is much less likely to happen to have one of those than a proxy which handles requests from many clients, and which probably has more storage space to devote to caching too.
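To make the "knows hasn't expired" bit concrete, here is a rough Go sketch of the freshness check any HTTP cache (client or shared proxy) performs before deciding whether to contact the origin at all. The URL, ETag and max-age values are invented for illustration:

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "strings"
        "time"
    )

    // cached is what a client (or a shared proxy) might remember about an
    // earlier response to a given URL.
    type cached struct {
        fetched time.Time
        maxAge  time.Duration
        etag    string
    }

    // fresh reports whether the stored copy can be served without opening
    // any connection to the publisher at all.
    func (c cached) fresh(now time.Time) bool {
        return now.Sub(c.fetched) < c.maxAge
    }

    // maxAgeFrom pulls max-age out of a Cache-Control value,
    // e.g. "public, max-age=300" -> 5 minutes. Zero means "treat as stale".
    func maxAgeFrom(cc string) time.Duration {
        for _, part := range strings.Split(cc, ",") {
            part = strings.TrimSpace(part)
            if strings.HasPrefix(part, "max-age=") {
                if n, err := strconv.Atoi(strings.TrimPrefix(part, "max-age=")); err == nil {
                    return time.Duration(n) * time.Second
                }
            }
        }
        return 0
    }

    func main() {
        // Pretend the document was fetched ten minutes ago with
        // "Cache-Control: public, max-age=300" and an ETag.
        c := cached{
            fetched: time.Now().Add(-10 * time.Minute),
            maxAge:  maxAgeFrom("public, max-age=300"),
            etag:    `"abc123"`,
        }

        if c.fresh(time.Now()) {
            fmt.Println("still fresh: serve the stored copy, no connection to the origin")
            return
        }

        // Stale: revalidate with a conditional GET. A 304 means only headers
        // cross the wire and the stored body is reused.
        req, err := http.NewRequest(http.MethodGet, "http://example.com/doc", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("If-None-Match", c.etag)
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        if resp.StatusCode == http.StatusNotModified {
            fmt.Println("304 Not Modified: reuse the stored copy")
        } else {
            fmt.Println("changed, fetch a fresh copy:", resp.Status)
        }
    }

A proxy shared by many clients just runs this same logic over a much bigger pool of stored responses, which is why it hits so much more often than any single client's private cache.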