Seems like an incredibly long-winded way of saying 'To go faster you either need to split each instruction into more pipeline stages or raise the voltage on the transistors. We've pipelined about as deeply as we can, and dynamic power scales roughly with voltage cubed (P ~ C*V^2*f, and f tracks V), so it's not a scalable plan.'
More important than mere power consumption, we don't have a way to remove the waste heat it generates. Dennard scaling (like Moore's law, but for power consumption per transistor) ended about 10 years ago, at exactly the same time clock speeds stopped improving. There are a few computers out there that run at around 10 GHz, but they all have impractical cooling systems.
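To put rough numbers on the parent's cube law (the 4 GHz and 10 GHz figures below are just illustrative, not measurements):

    # Dynamic CMOS power is roughly P = C * V^2 * f, and the voltage
    # needed scales roughly with frequency, so P grows like f^3.
    f1, f2 = 4e9, 10e9                     # illustrative clock speeds, Hz
    power_ratio = (f2 / f1) ** 3
    print(f"4 GHz -> 10 GHz costs ~{power_ratio:.0f}x the power")  # ~16x

A ~16x jump in heat from a 2.5x jump in clock speed is why those 10 GHz machines need exotic cooling.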
If there ever were a return to exponential scaling, we would very soon run into the Landauer limit.
For those who are curious, see the Wikipedia article on Landauer's principle: https://en.wikipedia.org/wiki/Landauer%27s_principle
I hadn't heard of this before, but it sounds like we are a long way off from reaching the limit; as the article states, modern computers use millions of times more energy than the minimum Landauer's principle allows.
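For anyone who wants to check the 'millions' figure, here's the back-of-envelope arithmetic (the ~1 fJ per bit operation for current CMOS is an assumed ballpark, not a measured number):

    import math

    k = 1.380649e-23                 # Boltzmann constant, J/K
    T = 300                          # room temperature, K
    landauer = k * T * math.log(2)   # minimum energy to erase one bit
    cmos = 1e-15                     # assumed ~1 fJ per bit op today

    print(f"Landauer limit at 300 K: {landauer:.2e} J")  # ~2.87e-21 J
    print(f"Headroom: ~{cmos / landauer:.1e}x")          # ~3.5e+05x

So roughly a factor of a few hundred thousand to a million, depending on what you assume for today's switching energy.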
But the Landauer limit is overly optimistic, because unless your computer runs at absolute zero you need to keep redundant copies of each bit for error correction. Transistors are implicitly error-correcting in the sense that each bit is represented by a current of a few thousand electrons.
Factoring in the redundancy requirement, we are likely off by only somewhere between 100x and 1000x. If there were ever a return to exponential technological improvement, we would run out of road after a few years.
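A quick sketch of what that runway looks like, using the 100x-1000x headroom above (the 1.6-year doubling time for energy efficiency, roughly Koomey's law, is an assumption about what a return to exponential improvement would mean):

    import math

    for headroom in (100, 1000):
        doublings = math.log2(headroom)
        years = doublings * 1.6      # assumed doubling time, years
        print(f"{headroom}x: {doublings:.1f} doublings, ~{years:.0f} years")
    # 100x  -> ~6.6 doublings, ~11 years
    # 1000x -> ~10.0 doublings, ~16 years

Either way, a corrected limit only 2-3 orders of magnitude away leaves very few doublings before the road ends.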