The paper includes details that show the motivation a bit better. Broadcast erasure codes multiply the number of blocks by a large factor, say 16. With traditional hashing, you have to generate all of these extra 'check blocks' as part of the publishing process just so you can hash them. With this technique, you only have to hash the actual file and can then derive all of the other hashes via the homomorphism.
edit: actually, the receiver should be able to use the homomorphism to generate hashes for the check blocks as well. so the list of authoritative hashes to download should only be for the original chunks. the goal of the technique is to be able to verify the check blocks before being able to reconstruct the original blocks.
Actually, it's even worse than that - with a rateless code like Fountain codes, you can generate an effectively unlimited number of encoded blocks, making pregeneration of hashes for them all completely impractical.
Read up on fountain codes. A good starting point is the previous post on the same blog [1], actually linked in the first sentence of the article.
The basic idea is that, instead of sending N blocks in the content file f, you have a block generator gen_block(f) which can create an infinite number of blocks as random combinations of input blocks (plus a little "magic"). A peer who downloads any 1.01N or so blocks can reconstruct the original file (with high probability); because of the "magic," it doesn't matter which 1.01N blocks they are!
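For the curious, here's a minimal sketch of what a gen_block-style encoder could look like (toy Python, not the article's or the paper's actual code; the block size, degree choice, and seed-based subset selection are all made up for illustration). Each encoded block is just the XOR of a seeded random subset of the input blocks; real fountain codes like LT/Raptor draw the subset size from a carefully tuned degree distribution so that roughly 1.01N received blocks are enough to decode.

    import os
    import random

    BLOCK_SIZE = 16 * 1024  # made-up block size

    def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> list[bytes]:
        # Pad and split the file into N fixed-size input blocks.
        padded = data + b"\x00" * (-len(data) % block_size)
        return [padded[i:i + block_size] for i in range(0, len(padded), block_size)]

    def gen_block(blocks: list[bytes], seed: int) -> tuple[list[int], bytes]:
        # One encoded block: the XOR of a seeded random subset of the input blocks.
        rng = random.Random(seed)
        degree = rng.randint(1, len(blocks))  # toy degree choice, not a real distribution
        indices = rng.sample(range(len(blocks)), degree)
        out = bytearray(len(blocks[0]))
        for i in indices:
            for j, byte in enumerate(blocks[i]):
                out[j] ^= byte
        return indices, bytes(out)

    # Generate as many encoded blocks as you like; with high probability, slightly
    # more than N of them suffice to decode (e.g. by Gaussian elimination over
    # GF(2)). Real degree distributions also make decoding fast.
    blocks = split_into_blocks(os.urandom(100 * 1024))
    encoded = [gen_block(blocks, seed) for seed in range(20)]

The indices returned here stand in for whatever metadata (typically just the seed) the real protocol sends so the receiver knows which combination it got.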
Fountain codes work beautifully in broadcast protocols, where the receiver has no way of communicating back to the sender (which is generally not the case for TCP/IP, outside of exotic scenarios such as multicast).
For Bittorrent-like protocols (peer-to-peer large file distribution over TCP/IP), the practical problem addressed by fountain codes is making sure that the entire file usually remains recoverable from data that exists within the network even if all the seeders (peers with a complete copy) leave; with traditional Bittorrent, a few blocks often end up missing in this situation.
The practical problem addressed by homomorphic hashing is how a receiver can rapidly identify peers that are attacking a fountain-code-based P2P network by deliberately sending corrupt downloads (and as a side effect it can of course detect accidental corruption as well).
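To make that concrete, here's a toy sketch of the kind of hash that has the needed property (illustrative Python with tiny made-up parameters; the real scheme uses far larger primes, and I'm not claiming these are the paper's exact details). A block is a vector of numbers mod q and its hash is the product of g_i^b_i mod p; because the hash is multiplicative, the hash of any linear combination of blocks can be computed from the per-block hashes alone, so a receiver can verify each encoded block against the publisher's hashes of the original blocks before it has decoded anything. Note this assumes the code forms linear combinations over Z_q rather than plain XOR.

    import random

    # Toy parameters: p is a safe prime and q = (p - 1) / 2, so Z_p* has a
    # subgroup of prime order q. Real deployments use far larger primes.
    p, q = 1019, 509
    m = 4  # numbers per block

    def random_generator():
        # Random element of the order-q subgroup (a nontrivial square mod p).
        while True:
            g = pow(random.randrange(2, p), (p - 1) // q, p)
            if g != 1:
                return g

    g = [random_generator() for _ in range(m)]  # public per-position generators

    def hash_block(block):
        # Hash of one block (a length-m vector of integers mod q).
        h = 1
        for gi, bi in zip(g, block):
            h = (h * pow(gi, bi, p)) % p
        return h

    def combine_hashes(hashes, coeffs):
        # Hash of sum_j coeffs[j] * block_j (mod q), computed from hashes alone.
        h = 1
        for hj, aj in zip(hashes, coeffs):
            h = (h * pow(hj, aj, p)) % p
        return h

    # Publisher hashes only the original blocks...
    blocks = [[random.randrange(q) for _ in range(m)] for _ in range(3)]
    hashes = [hash_block(b) for b in blocks]

    # ...a peer sends an encoded block c = sum_j a_j * b_j (mod q)...
    coeffs = [random.randrange(q) for _ in blocks]
    check = [sum(a * b[i] for a, b in zip(coeffs, blocks)) % q for i in range(m)]

    # ...and the receiver verifies it before decoding anything.
    assert hash_block(check) == combine_hashes(hashes, coeffs)

The key point is that combine_hashes never touches the block data itself, only the published hashes and the code's coefficients.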
I had written my comment after reading the performance section of the paper, where they point out that homomorphic hashing uses much less disk I/O than hashing the entire output of a fixed-rate code, while missing the point that the receiver takes advantage of the homomorphism as well; so I arrived at the actual capabilities somewhat circuitously.