There's some confusion between "compressed sensing" and "L1-minimization" in the comments and the article.
L1 minimization here refers to representing the image as a linear combination of basis functions a1 f1 + a2 f2 + ... by minimizing the objective function fit(a1,a2,...) + sum_i |a_i|, where "fit" measures how closely the reconstruction matches the image (for instance, squared distance). The second term is the L1 norm of the coefficient vector; it has the special property of being non-differentiable at 0, which often drives individual components a_i to exactly 0 during minimization, resulting in a sparse solution vector.
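A minimal sketch of this in NumPy, using ISTA (iterative soft-thresholding) to minimize the squared-distance "fit" plus the L1 penalty; the matrix F (columns = basis functions), the weight lam, and the function names here are illustrative choices, not anything from the discussion above:

```python
import numpy as np

# Sketch: minimize ||F a - y||^2 / 2 + lam * sum_i |a_i| via ISTA.
# F's columns play the role of the basis functions f_i; y is the image
# (flattened). All names and constants are illustrative.

def soft_threshold(x, t):
    # Proximal step for t*|.|: shrinks entries toward 0 and sets small ones
    # exactly to 0 -- this is where the sparsity comes from.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(F, y, lam=0.05, n_iter=500):
    step = 1.0 / np.linalg.norm(F, 2) ** 2   # 1 / Lipschitz const. of the gradient
    a = np.zeros(F.shape[1])
    for _ in range(n_iter):
        grad = F.T @ (F @ a - y)             # gradient of the "fit" term
        a = soft_threshold(a - step * grad, step * lam)
    return a

rng = np.random.default_rng(0)
F = rng.standard_normal((100, 50))
a_true = np.zeros(50)
a_true[[3, 17, 42]] = [2.0, -1.5, 1.0]       # a sparse coefficient vector
y = F @ a_true
a_hat = ista(F, y)
# Most entries of a_hat come out exactly 0; the three true components survive.
print(np.count_nonzero(np.abs(a_hat) > 1e-3))
```

Note how the non-differentiability at 0 shows up operationally: the soft-threshold step zeroes out any coefficient whose magnitude falls below step*lam, rather than letting it hover near zero.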
Compressed sensing refers to the idea that when the image is sparse in the chosen basis {f_i}, it is sufficient to take a small number of random measurements of the image. From those measurements we can compute "fit" only approximately, which gives a different objective function to minimize; the result will be close to what we would get from the original (full) "fit" function.
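To make the "small number of random measurements" point concrete, here is a hedged sketch: a 200-dimensional signal that is sparse in the chosen basis (the identity basis, for simplicity) is recovered from only 60 random linear measurements by minimizing the approximate fit ||Phi a - y||^2 plus the L1 term, again via soft-thresholding. The sizes, the Gaussian measurement matrix, and the names are all illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_recover(Phi, y, lam=0.02, n_iter=3000):
    # Minimize ||Phi a - y||^2 / 2 + lam * sum_i |a_i| by iterative
    # soft-thresholding; Phi here encodes the random measurements.
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - step * (Phi.T @ (Phi @ a - y)), step * lam)
    return a

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                           # signal dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [1.5, -2.0, 1.0, -1.0, 2.5]
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                     # only m = 60 measurements of x
x_hat = l1_recover(Phi, y)
# x_hat is close to x even though the system is underdetermined (m << n).
print(np.max(np.abs(x_hat - x)))
```

The point of the example is that the objective only "sees" the 60 measurements, yet because the signal is sparse, the L1-minimizing solution lands close to what the full 200-dimensional fit would have produced.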