As a programmer who also happens to be a photographer, I can tell you that's flat-out wrong.
The M8 produces a very crappy imitation of what the out-of-focus blur, the bokeh, of a camera with a thin depth of field looks like. [1] The linked article is actually being "neutral" and fairly generous; the pictures, frankly, look like shit.
It is difficult to do DoF properly, even if you know the Z-distance of every object in the scene. It's not just a simple blur of whatever is deemed out of focus; things like the spherical aberration of the lens, which is what makes bokeh hard or soft, matter. [2]
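For concreteness, the "simple blur" part is just thin-lens geometry; here's a minimal sketch in Python of the per-depth circle-of-confusion calculation that synthetic-DoF pipelines start from (the lens numbers are illustrative, not from any particular camera). Everything past this radius, the kernel shape, aberrations, occlusion edges, is where it actually gets hard.

```python
# Minimal sketch: the "simple" baseline of depth-dependent blur.
# Thin-lens circle of confusion per object distance; numbers are illustrative.

def coc_diameter_mm(subject_dist_mm, focus_dist_mm, focal_len_mm, f_number):
    """Circle-of-confusion diameter on the sensor (thin-lens approximation)."""
    aperture_mm = focal_len_mm / f_number
    return (aperture_mm
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm
            * focal_len_mm / (focus_dist_mm - focal_len_mm))

# A 50mm f/1.8 lens focused at 2m: a background object at 10m lands on the
# sensor as a ~0.57mm blur disc, roughly 20x the usual ~0.03mm full-frame
# sharpness criterion.
print(coc_diameter_mm(10_000, 2_000, 50, 1.8))
```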
"So easily simulated" couldn't be further from the truth. A bunch of academic papers have been written about this, [3] which include ways to reproduce effects that are similar to the "thin DoF" effect in large sensor cameras, but few try to tackle _all_ of the issues and nuances present.
From [1]:
"Now, this simulation above may look too artificial and unintuitive. Real lenses do not behave like this, right? They actually do. Just look at this picture below."
So he produces an accurate convolution model of how bokeh looks in soft/hard configurations, and that's proof it's not easily simulated? If it's not a straight Gaussian blur (as the HTC is likely using), that's fine - it can still be simulated.
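For what it's worth, the distinction is easy to see in a few lines of NumPy/SciPy: blur a point source with a Gaussian versus a flat disc (a crude stand-in for a real defocus kernel, which a proper model would shape with aberration). Neither is hard to compute; they just look very different.

```python
# Gaussian blur vs. flat-disc convolution on a single bright point.
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def disc_kernel(radius):
    """Flat circular kernel, normalized to sum to 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

img = np.zeros((101, 101))
img[50, 50] = 1.0                      # a single specular highlight

gauss = gaussian_filter(img, sigma=4)  # smooth falloff, mushy highlights
disc  = convolve(img, disc_kernel(8))  # hard-edged disc, lens-like bokeh
```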
What plenoptic cameras do is use a microlens array to perform focus bracketing. One could use a macrolens array as well, given a rich enough 3D-interpolating image-processing pipeline.
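A sketch of the focus-bracketing idea in isolation, assuming you already have the bracketed slices: pick, per pixel, the slice where local contrast peaks (classic depth-from-focus). Real plenoptic pipelines refine this heavily, but it's the core of the trick.

```python
# Depth-from-focus over a focal stack: per pixel, choose the slice with the
# highest local Laplacian energy (i.e., where that pixel is sharpest).
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack):
    """stack: (n_slices, H, W) grayscale focal stack -> per-pixel slice index."""
    sharpness = np.stack([uniform_filter(laplace(s) ** 2, size=9) for s in stack])
    return np.argmax(sharpness, axis=0)   # index of the sharpest slice per pixel
```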
Take a brick-sized lump of plastic and apply, say, 20 tiny cameras (the four corners and the center each get a sub-array of medium_NIR_filtered-far-medium-near), plus four temporally staggered Kinect structured-light projectors on the sides, and you get all your cues double-checked by redundant information. Your algorithms get to play with direct low-resolution texture-focus info, stereo distance for a bunch of long-baseline pairs in two dimensions, redundant short-baseline pairs, active focus returns on laser dots as an anchor, stereo returns on laser dots, everything. One checkerboard calibration and such a system cross-calibrates itself, characterizing a position, focus, & lens-distortion model for every lens.
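Roughly what that single-checkerboard step looks like with OpenCV for one camera pair; the board size and everything else here are placeholders, and a 20-camera rig would also want synchronized capture plus a global bundle adjustment on top.

```python
# Per-camera intrinsics + distortion from checkerboard views, then pairwise
# extrinsics (rotation/translation between two cameras). Placeholders throughout.
import cv2
import numpy as np

BOARD = (9, 6)           # inner checkerboard corners (placeholder)
SQUARE_MM = 25.0         # physical square size (placeholder)

# 3D coordinates of the board corners in the board's own frame.
obj = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

def paired_corners(imgs_a, imgs_b):
    """Keep only synchronized frames where both cameras saw the full board."""
    pts_a, pts_b = [], []
    for a, b in zip(imgs_a, imgs_b):
        ok_a, ca = cv2.findChessboardCorners(a, BOARD)
        ok_b, cb = cv2.findChessboardCorners(b, BOARD)
        if ok_a and ok_b:
            pts_a.append(ca)
            pts_b.append(cb)
    return pts_a, pts_b

def calibrate_pair(imgs_a, imgs_b, image_size):
    """Intrinsics/distortion per camera, then the A->B rotation and translation."""
    pts_a, pts_b = paired_corners(imgs_a, imgs_b)
    objpts = [obj] * len(pts_a)
    _, K_a, d_a, _, _ = cv2.calibrateCamera(objpts, pts_a, image_size, None, None)
    _, K_b, d_b, _, _ = cv2.calibrateCamera(objpts, pts_b, image_size, None, None)
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        objpts, pts_a, pts_b, K_a, d_a, K_b, d_b, image_size)
    return K_a, d_a, K_b, d_b, R, T
```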
Add more lenses if you want to play with more bracketing, or some other kind of it.
The point is, attempting proper computer vision scene capture involves a crapload of sensors checking their results against each other, ideally diverse sensors and code. We simply haven't tried that, outside perhaps professional motion capture suites. The sky's the limit on such things.
To be fair, bokeh on the Illum is pretty poor, too. The M8 bokeh looks fake. The Illum bokeh looks noisy and pixelated (and the effective resolution looks far lower than 5MP to me). I wouldn't bother with either.
[1] http://www.trustedreviews.com/opinions/htc-one-m8-camera-vs-...
[2] http://jtra.cz/stuff/essays/bokeh/
[3] http://dl.acm.org/citation.cfm?id=2389316.2389318