Thanks for the link; that made a lot more sense. The way Wired described it made it sound like the L1 minimization was almost irrelevant. The closest thing to pseudocode I could get out of their article was something like: 1) apply an L1 minimization to do a first-cut noise reduction on some random noisy data; 2) apply an iterative algorithm to the result until convergence. That made it sound like all the magic was in the invention of some crazy algorithm for #2, which isn't really the case.
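To make that concrete, here's a rough sketch of what L1-based recovery can actually look like in practice. The point is that the "iterative algorithm" is just one way of solving the L1 minimization itself, not a separate magic step on top of it. Everything below (ISTA as the solver, the toy measurement matrix, the parameters) is my own illustration, not anything taken from the article:

```python
import numpy as np

def soft_threshold(x, thresh):
    # Proximal operator of the L1 norm: shrink each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def ista(A, y, lam=0.01, n_iter=2000):
    # Iterative soft-thresholding: one standard way to solve
    #   min_x  0.5 * ||A x - y||^2 + lam * ||x||_1
    # Step size must be <= 1 / (largest eigenvalue of A^T A).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)              # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy compressed-sensing setup: a sparse signal, far fewer measurements
# than unknowns, recovered by the L1-penalized solve above.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                          # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

So the iteration and the L1 minimization aren't two separate inventions; the loop is just how you get to the L1 solution.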