I think it's a little more complicated than running more cables. Most datacenters have a total capacity they can handle, based on how many connections they have to their local grid (or grids — datacenter hubs like Santa Clara have multiple power grids to give datacenters redundancy). You need to make sure your internal power distribution systems can actually handle the amount you want to push through, and you need to ensure that your backup power is actually enough to get you through major outages.
AWS, as an example, tends to build only 20MW to 30MW of capacity into each of their datacenters; anything above that, they say, isn't worth the hassle when they can just open a new datacenter. Power is definitely a limiting factor.
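To put that 20-30MW figure in perspective, here's a back-of-the-envelope sketch of how many racks a facility that size can feed. The PUE and per-rack wattage are my own illustrative assumptions, not AWS specifics:

```python
# Rough rack count for a hypothetical mid-range facility.
# All figures are illustrative assumptions, not real AWS numbers.

facility_mw = 25.0  # assumed total capacity, mid-point of the 20-30MW range
pue = 1.4           # assumed power usage effectiveness (cooling/overhead multiplier)
rack_kw = 20.0      # assumed per-rack IT load

it_load_kw = facility_mw * 1000 / pue  # power left for IT after overhead
racks = int(it_load_kw // rack_kw)
print(f"~{racks} racks at {rack_kw:.0f} kW each")
```

Under those assumptions you end up in the high hundreds of racks per building, which is why past a certain point it's simpler to stand up a new facility than to keep feeding more power into an existing one.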
Getting more power into a datacenter is a different problem than getting more (already available) power into a rack. I suppose I could have added "if your existing power distribution system can handle the extra power capacity". That includes service entrance, transfer switching, standby and backup power sources, and distribution to the rack level.
The point I'm trying to make is that, all things being equal, it's _much_ easier to handle unequal power load between individual racks than it is to deal with the cooling side of the equation. Adding more power to a single rack usually just means a few more whips from your distribution. Getting that one extra-hot rack in the aisle effectively cooled requires a lot more infrastructure than running some cables.