I would say it depends a lot on your background. The whole thing is very detailed, but ideas can be lost in detail-oriented proofs.
Reading the first few sections, the ideas are definitely there - especially in the proofs, which carry plenty of motivation - and the kind of "raw index crunching" the text begins with gives way to more conceptual material. Doubters might read section 1.6 on the power method for finding the largest eigenvalue; it convinced me the ideas were worth reading.
It's so cool to see why this works (as an engineer I learned the power method with the handwaving explanation "it works in the limit," but I never knew why it works).
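The "why" is just that repeatedly applying A amplifies the component along the dominant eigenvector fastest: if v = c_1 u_1 + c_2 u_2 + ..., then A^k v = c_1 lambda_1^k u_1 + ..., and the lambda_1 term swamps the rest. A minimal sketch (my own, not from the text - the matrix and iteration count are arbitrary):

```python
import numpy as np

def power_method(A, iters=1000):
    """Power iteration: apply A repeatedly and renormalize.

    Converges to the dominant eigenpair, assuming |lambda_1| > |lambda_2|
    and the starting vector is not orthogonal to u_1.
    """
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v                 # amplify the u_1 component
        v /= np.linalg.norm(v)    # renormalize so the iterate stays finite
    lam = v @ A @ v               # Rayleigh quotient estimate of lambda_1
    return lam, v

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam1, u1 = power_method(A)        # lam1 ~ (7 + sqrt(5))/2, approx 4.618
```

The renormalization each step is what makes "it works in the limit" true in floating point too: without it the iterate would overflow or underflow long before converging.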
So what do we do if we want u_2, the eigenvector corresponding to lambda_2?
Math Overflow says we can just subtract the u_1 subspace from A [1] and repeat, but is that numerically stable? (i.e., will it work with floats?)
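For what it's worth, the subtraction trick (Hotelling deflation for a symmetric A: replace A with A - lambda_1 u_1 u_1^T, which zeroes out lambda_1 while leaving the other eigenpairs alone) does work in floats, but only to roughly the accuracy of the computed u_1; errors in u_1 leak back in. A common mitigation is to also project each iterate back onto the complement of u_1. A sketch under those assumptions (my own illustration, not the Math Overflow answer):

```python
import numpy as np

def power_method(A, iters=2000, orth=None):
    """Power iteration, optionally re-orthogonalizing against a known
    eigenvector each step to limit floating-point drift back toward u_1."""
    v = np.random.default_rng(1).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        if orth is not None:
            v -= (orth @ v) * orth   # project out the u_1 component
        v /= np.linalg.norm(v)
    return v @ A @ v, v              # (Rayleigh quotient, eigenvector)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam1, u1 = power_method(A)

# Hotelling deflation: kill the lambda_1 eigenvalue, then iterate again.
A_defl = A - lam1 * np.outer(u1, u1)
lam2, u2 = power_method(A_defl, orth=u1)
```

The re-orthogonalization step is the cheap insurance: even if rounding reintroduces a tiny u_1 component, it gets projected away before it can be amplified.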