Actually, the convergence problem is theoretically solvable with a light-field projection system. That is, instead of having two images that are simply "fed" into each eye but projected onto a flat screen that is not converged with the 3D disparity "information" contained in the images, you have a system where the screen emits a light field that your eyes can focus on at any depth. We already have consumer-level light field cameras and attachments. The displays are trickier, but possible, traditionally through the use of hexagonal lenticular lenslets carefully registered onto a 2D "flattened" projection of the light field capture. This is the "true hologram" dream the article mentions. The primary limitation is the precision and resolution with which you can register the lenslets with an image.
That's fine for a display but is obviously impractical for a large-scale theater projection system. Just an arbitrary possibility I thought of just now: suppose the projection screen had a retroreflective surface; that is, light projected at the screen gets returned at exactly the angle it arrived at. Combine this with a domed mirror and a backwards-pointing projector or set of projectors, and, with all the requisite optics math and geometry work, it may just be possible to project a light field at a screen that bounces back at the audience and appears as a tangible hologram to them.
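For intuition, here's a minimal sketch (Python, with made-up example vectors) of the geometric property the idea hinges on: a mirror reflects a ray about the surface normal, while an ideal retroreflector sends it straight back along its incoming direction.

```python
import numpy as np

def mirror_reflect(d, n):
    """Reflect incoming direction d about the unit surface normal n."""
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

def retro_reflect(d):
    """An ideal retroreflector returns light along its incoming path,
    regardless of how the surface is oriented."""
    return -d

# A ray arriving at 45 degrees onto a screen whose normal points +z:
incoming = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
normal = np.array([0.0, 0.0, 1.0])

print(mirror_reflect(incoming, normal))  # [0.707 0. 0.707] -> bounces onward
print(retro_reflect(incoming))           # [-0.707 -0. 0.707] -> straight back to the source
```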
I want to be clear that none of this is easy.
I think as soon as you try it, your first attempts are going to be very blurry, squint-inducing, and especially very dark. A light field display, in order to achieve the same level of perceived brightness as a traditional 2D display, needs to generate 2 to 10 times as much actual light (perhaps much more), with all the requisite power requirements that entails, since the light is distributed directionally instead of diffusely.
Going in this direction basically sets the level of resolution and precision in image reproduction we've achieved back a decade or five, since the "pixels", or resolution units, are spread over many more views than just one or two, or perhaps something more recognisable as a continuum of "infinite" views, or whatever number is visually indistinguishable from infinity.
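To make that concrete, a hedged back-of-envelope (the panel size and view counts here are assumptions, not figures from any real product): divide a 4K panel's pixels among N views and each viewpoint keeps only a fraction of them.

```python
# Back-of-envelope: a 4K panel's pixels divided among N views.
panel_pixels = 3840 * 2160  # ~8.3 megapixels

for views in (1, 2, 8, 45, 100):
    per_view = panel_pixels / views
    print(f"{views:>3} views -> {per_view / 1e6:.2f} MP per view")
# 1 view keeps all ~8.29 MP; at 45 views each viewpoint is down to
# ~0.18 MP, i.e. well below VGA -- roughly the "decade or five" setback.
```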
Why would it need to generate more light? Suppose you told your lightfield display to simply display all white - wouldn't its light distribution be the same as a white LCD, hence taking the same amount of power?
In a projector system you generate light and then block it, using either photographic film or an LCD, to produce the image. A light field display can be thought of as a higher-resolution 2D generalisation of a lenticular ("no glasses") display, or, for example, the display of the 3DS.
In these kinds of displays, you have a light generator (LED or fluorescent or reflection) with the same surface area as a normal print or display. But to produce the directional light for, say, the left eye, you must block the portions of the image that relate to the right eye from the light travelling to the left eye, and vice versa. This halves the amount of light for two directional images, and thirds it if you want three. So you end up with an image that is much darker than normal, and you must compensate by generating two or three times the amount of light.
This problem gets worse the more "views" you add on. So if you want ten omnidirectional views you have to generate enough light for all of them, since, even if you set everything to "white", as you suggest, most of that white is getting blocked from your view.
Or to put it another way, each viewpoint gets allocated a smaller share of the source 2D image plane's surface area.
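Here's the same argument as crude arithmetic, assuming a parallax-barrier-style display where each of N views receives only 1/N of the generated light (lens arrays behave differently, as the reply below points out):

```python
# If a barrier passes only 1/N of the light to each of N views, then
# matching a 2D display's perceived brightness per view means scaling
# the light generation by N (ignoring losses in the barrier itself).
target_nits = 300  # desired perceived brightness per view (assumed figure)

for n_views in (1, 2, 3, 10):
    required = target_nits * n_views
    print(f"{n_views:>2} views: must generate ~{required} nits worth of light")
```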
Why do you have to block the light? Sure, primitive stereo parallax barriers like the Nintendo 3DS's do, but nobody uses parallax barrier tech for any proper lightfield application. The system you mentioned, a lens array over a high-PPI screen, blocks nothing; light is only refracted. Ergo, the energy required to run the screen is the same as without the lens array, i.e. a normal screen.
I was recently at the Exploratorium in SF (Embarcadero). They have a mirror there that tricks your brain into thinking you are seeing an upside-down reflection of yourself hanging in the air about 5 ft in front of where the mirror actually is. I actually tried to touch the 'reflection' and my hand just met the air. My first thought was: why don't we use these to create 3D images? Would your domed mirror proposal work similarly?
Well, yep, but that one has the advantage of using the light field already reflecting off of you and simply redirecting it back with relatively simple, shall I say, macro-optics.
The kind of system I'm talking about needs a 2D image with enough "pixels" to fill a volume convincingly instead of just a 2D plane, and the optics would need to be far more complicated, precise, and created at a very small scale, which would appear as a textured surface like a Fresnel lens or one of those 3D lenticular stickers you sometimes see. Then once you've figured that out, you need to get the projection and the optics to line up precisely, unless you can figure out a way to build in some tolerance to the alignment of the projection.
Aside from that, the goal is essentially the same. To produce a field of light coming out of some "window" with the same directional qualities as the light coming out of a real window.
One solution for holographic lightfields is to use nanoantenna arrays. Multiple antennas in an array, in which each antenna's phase and amplitude is modulated, can produce these directional lightfields by interference, and by modulation it's possible to change the shape of the radiation pattern without touching the antennas. At nanoscale, such an array could create waves at optical wavelengths.
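The maths here is the same as a radio phased array. A minimal sketch (Python; element count and spacing are arbitrary assumptions) of a uniform linear array's far-field pattern, steered purely by per-element phase shifts:

```python
import numpy as np

def array_factor(theta_rad, n_elems, spacing_wl, steer_deg):
    """Normalised far-field array factor of a uniform linear array.
    Steering is done entirely with per-element phase shifts, so the
    beam direction changes without touching the antennas."""
    k_d = 2 * np.pi * spacing_wl  # phase advance per element spacing
    phases = -k_d * np.arange(n_elems) * np.sin(np.radians(steer_deg))
    af = sum(np.exp(1j * (k_d * i * np.sin(theta_rad) + phases[i]))
             for i in range(n_elems))
    return np.abs(af) / n_elems

angles = np.linspace(-90, 90, 19)  # every 10 degrees
pattern = array_factor(np.radians(angles), n_elems=16, spacing_wl=0.5, steer_deg=20)
print(np.round(pattern, 2))  # the peak (1.0) sits at +20 degrees;
                             # change steer_deg and the beam moves.
```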
It's also (mostly) solvable with active lenses and eye-tracking. What you can do is infer distance from vergence and then adjust the effective focal distance of the screen.
I don't know how cumbersome the lenses would have to be, or how insanely fast the system would need to operate, but it's at least possible in theory.
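A sketch of the inference step, under simplifying assumptions (symmetric fixation, a fixed interpupillary distance; the function names are mine, not any real eye-tracker API):

```python
import math

IPD_M = 0.063  # assumed typical interpupillary distance, in metres

def fixation_distance(vergence_deg):
    """Distance at which the two gaze rays converge, recovered from
    the vergence angle between them (symmetric-fixation model)."""
    half = math.radians(vergence_deg) / 2
    return (IPD_M / 2) / math.tan(half)

def lens_power_dioptres(distance_m):
    """Focal power the active lens should present so that accommodation
    matches the converged distance (1 dioptre = 1 / metres)."""
    return 1.0 / distance_m

for angle in (7.2, 3.6, 1.2, 0.36):
    d = fixation_distance(angle)
    print(f"vergence {angle:>4.2f} deg -> fixating at {d:5.2f} m "
          f"-> set lens to {lens_power_dioptres(d):.2f} D")
```

The hard engineering problems the comment alludes to (latency, tracker noise, lens response time) are all hidden inside that loop.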
"retrofelctive" - thank you for using this word! I basically froze up for a good while one time looking at a street sign at night, realizing how it's 'reflection' was different from typical mirror reflection. Now I know what the term for that is and how to build one!
The real problem with 3D is much like the problem with hearing 'whispers' behind you with surround sound... I go from being immersed in the movie to being rudely reminded that I am in a cinema watching a movie. Until the experience is something that doesn't announce the technology, it's only ever going to be a gimmick, and, in terms of my connection to the story/character/situation, I'd much rather have a lesser-quality simulation with greater emotional connection and persistent immersion than something 'technically' better.
3D strikes me as superfluous. Our brains already look at the images projected onto a 2-dimensional plane and infer the third dimension. It's absolutely a gimmick, and don't get me wrong, it can be fun, but it doesn't contribute anything meaningful to the viewing experience.
Call me when the brain interface is ready and we can actually travel around in the space, because that's another story.
I've watched a few 3D movies. The gimmicky parts are very gimmicky, but in the normal scenes, where we see the background actually behind the foreground and don't see a drop of water fly right into our eyes, the 3D looks a lot better than the 2D version.
2D seems the same as looking around with one eye closed. I don't notice the difference when I am doing it, but when I open my second eye, everything looks subtly better.
The Doctor Who 50th anniversary special was possibly the best use of 3D I've ever seen.
In one scene, there is a 3D oil painting. We see the characters marvelling at the effect they're seeing, but in the broadcast version we don't know what it is they're looking at until the camera pans around to show us.
In the 3D version, however, you can tell. You don't get the full effect until later, but even then it's far more pronounced than it is in the 2D version.
There are also a few other depth tricks they use, but they use them sparingly. In one scene, we see the image of The Doctor as he is broadcasting a message to another party; in the close-up of the feed (i.e. when it takes up the full screen), we see the image as we normally would, but in the four corners there is an overlay, like a HUD in a video game or presumably like a HUD in an F-16. The corners don't move, there's no animation, they're just a bit of stylistic flair, but you can tell that they're 'over' the image, which gives the shot a more pronounced depth effect and makes the transmission 'feel' cooler and more futuristic.
I'll be glad when '3D all the things' is gone, but there are a few neat tricks that I hope we can keep in the future.
A "brain interface" is also superfluous because your senses are already "brain interfaces". Any interface connected to your brain will "appear" to you as a new sense organ.
I'll take some new sense organs for sure, but a "brain interface" is not the panacea people make it out to be. For one, the thoughts in our head are a jumbled mess and are only clarified through expression, which involves sensing and acting with your body. Any new interface will become a new body part. You cannot speak before you speak and you cannot write before you write just as you can't run before you run.
Your comment about 3D being superfluous without contributing anything meaningful really hits home when I try to recall the 3D movies I've seen. Even the one that made the biggest impression on me, the stunning and very well made "How to Train Your Dragon" in 3D, I remember to this day as if it had only been in 2D.
Actually, depth perception is possible with only one viewport. Both my eyes work (for varying definitions of "work") but they don't work together very well (tests show I only tend to use one at a time; which one gets control depends on the location of the object(s) I'm trying to focus on and how tired I am), and I can perceive depth, just not quite as well.
The main clue the brain uses for depth perception on top of the binocular input (or instead of the binocular input, for those who don't have it) is the parallax effect gleaned from moving objects within the view passing behind each other. On top of that comes knowledge of the world (how big things should be relative to each other), though that is very easily fooled.
On top of that, the perception of parallax is exploited further by moving your head. Ever seen a cat size up a jump it isn't entirely confident about? You see them bob their head up and down a bit, giving them extra parallax clues, effectively extending the effect of multi-eye vision into a second degree of freedom. This effect can be used with a single eye by moving the head more.
Binocular vision can help 3D perception a lot, which is why most creatures use it, but it is far from the be-all and end-all of it.
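The cat's head-bob is effectively stereo with a baseline spread over time. A toy small-angle estimate (the numbers are invented for illustration):

```python
import math

def depth_from_parallax(baseline_m, shift_deg):
    """Small-angle motion parallax: a point whose direction shifts by
    shift_deg when the viewpoint translates sideways by baseline_m
    lies at roughly baseline / angular shift (in radians)."""
    return baseline_m / math.radians(shift_deg)

# A head bob of 4 cm makes the target appear to shift by 0.5 degrees:
print(f"{depth_from_parallax(0.04, 0.5):.1f} m")  # ~4.6 m to the ledge
```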
I'm always skeptical of claims that "X will never work" that are not backed by solid research or a mathematical proof.
The current 3D may be nauseating to a number of people, but I do note that travelling by car is nauseating to some too, and I've seen older people get similar effects when reading a computer screen that is scrolling too fast.
"They are doing something that 600 million years of evolution never prepared them for. This is a deep problem, which no amount of technical tweaking can fix. Nothing will fix it short of producing true "holographic" images."
I've no knowledge of the field, but I am reminded of Clarke's first law:
"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
Quotes like that are the perennial mottos of cranks and kooks, like the people claiming they invented a "perpetual motion machine" or "cold fusion engine".
It's based on a story-fied version of science (the young rebellious upstart, the established elderly opponent, etc.). Science seldom works that way and seldom involves "breakthrough changes". Most of it is incremental work.
Roger Ebert is the sort of person who believes any technology older than him is normal and anything newer is inherently wrong because it's not what he's used to. We didn't evolve to blend a series of 30 static images per second into seamless motion, but we cope with that so well that no-one actually thinks about it.
Eventually 3D will become consistently well done instead of a gimmick, and it will be just another thing to use when crafting a movie like colour, moving cameras and depth of field.
> Roger Ebert is the sort of person who believes any technology older than him is normal and anything newer is inherently wrong because it's not what he's used to.
I think you're being far too hard on him. He had seen the 3D fad come and go before. He had a set of standards and I think he was right to stand by them. As it is 3D adds almost nothing to film making at the moment: there are technological limitations and we don't have a good idea of how to use it. Right now it increases ticket prices, is usually 'shoveled-on' to a movie, and reduces the light hitting the viewer's eyes (a long-standing pet peeve of Ebert).
> We didn't evolve to blend a series of 30 static images per second into seamless motion, but we cope with that so well that no-one actually thinks about it.
There is a reason we don't see many fast pans in movies. 24fps forces some compromises. I have no doubt that 48 (or something higher) will eventually become the standard, but just like color/sound/3D we'll need to develop the techniques around it to use it properly.
As much as I respect and often agree with Ebert, he has an annoying tendency towards absolutes, just like the video games and art thing. I'm in full agreement with him on this at the current level of technology, but "never" is a long time, and I'm certainly not willing to bet on what's going to happen 50-75 years down the road.
> We didn't evolve to blend a series of 30 static images per second into seamless motion, but we cope with that so well that no-one actually thinks about it.
While this is mostly true, 30fps is not good enough to the point that no one notices. For example, even with the relatively little gaming I do, I notice when the fps falls to 30. In movies, The Hobbit was shot at 48fps, and viewers complained that it felt too lifelike (which is likely just a way of saying 'more lifelike than I am used to').
The same man who says games will never count as art. He's just scared the entertainment industry is changing.
I've watched literally dozens of 3D movies and have never once had any kind of side effect. I think it's a self-fulfilling prophecy for most people - they expect to feel something, so they do.
The brightness issue is exactly why 3D screens have higher powered projectors and more reflective screens. I saw a 3D film in the 1990s and that was dark; modern ones are not.
Strobing is a side effect of crappy framerates (24fps should not be acceptable for anything, ever), not 3D.
I'm not even going to bother with the focus 'issue' as so many people, me included, don't even experience it, but yes, it is resolvable.
As for immersion, meh, he can speak for himself; the most immersive experiences I've had were 3D, and the main immersion-breaker is other people moving around, making noises, eating, etc. What annoys me the most is badly done 3D movies though, as they look bad and ruin the overall perception. A movie made in native 3D will always look better than a postproduction kludge like Clash of the Titans or most things Disney did. I also hate it when 3D is used as an excuse for cheap effects like having things fly directly at the viewer or hover in front of them, as that ruins both the credibility of the quality of the effect and the seriousness of how it can be used.
When audio was added to movies, people said it ruined them; then colour; 3D is just the next step of that iteration.
I agree. 3D is an "obvious" step up, even if it's imperfect. And even if the vergence/focus issue is both worse and unsolvable by any reasonable technological means, it isn't the end of the world. People will get better at viewing it with experience, and people who have grown up with it will never have difficulty with it.
3D movie projection is exciting because it may allow film, as an art form, to be more like video games.
Those who say video games are an art form tend to offer two explanations: 1) they can convey thought-provoking stories, much like traditional movies; 2) they're spatial competitions, similar to the way there's an art in learning how to maneuver around a tennis court.
When 3D projection is incorporated into some movies (note: some), it can bring out this spatial element in interesting ways. A great example of this is Gravity (2013), with George Clooney and Sandra Bullock: http://www.youtube.com/watch?v=OiTiKOy59o4
The main character in that movie was essentially space. Although there wasn't much dialogue or traditional character development, it managed to engage audiences and critics until the last minute. The appeal of movies like Gravity is somewhat new to the film industry; it's similar to the appeal of watching a tense sports match or navigating your way through a puzzle game like Portal.
Because it was fake.
The moment the accurate physics were broken to kill off Clooney was the moment it lost my attention.
I was already very bored by the story (no, I don't need an action-packed movie to entertain me), and the thing I liked was that they were being realistic in space. But then they screwed that up. (Clooney got "pulled" away from whatshername when the tension should have already pulled him back towards her; there was no spin involved either.)
//end of off topic
I think that eventually movies/games will be like experiencing something as real as a dream, ie on the track that the Oculus Rift is going.
The 3D we have now is just okay I guess, but most of the time I choose to see 2D versions. At the end of the day, I just want to enjoy a good story... and on film preferably, as digital still looks too TV-ish for me.
Those interested in a positive take on 3-D might be interested in reading Thomas Elsaesser's paper, "The “Return” of 3-D: On Some of the Logics and Genealogies of the Image in the Twenty-First Century" [1]. One of his examples is the movie Coraline, which uses 3-D "not in order to emphasise depth, but to construct spaces that do not follow the rules of perspective and introduce slight anomalies into it." I haven't seen the movie myself, and my only experience with modern 3-D movies is going cross-eyed watching YouTube trailers, but dismissing 3-D outright seems premature to me.
That is exactly my problem with 3D, total gimmick. Not to mention how dim it is compared to normal 2D.
A director cannot get away from having something come flying at you. I recently went to see the second Hobbit (in 2D) and it is so painfully obvious when you can tell something is supposed to be "flying at you". What is this, Disney World?
I am, however, looking forward to VR people making "Ready Player One"-like movies in the future, where you are truly immersed.
Gravity was the first 3D movie I've seen, and I was quite pleased at how little they decided to gratuitously do that.
On the other hand, there are numerous animated films I've seen on DVD in the last few years (Despicable Me 2 being the latest example) that had entire sections that clearly existed for no other reason than to justify the 3D ticket price.
I enjoyed Gravity; the 3D effect occasionally added to the experience and didn't detract much. I'll wait for the next time 3D really adds to the experience before I do it again, and I'm guessing that will be a long time.
The 3D ads before Gravity started (including Hobbit 2), on the other hand, were often unwatchable.
I think Gravity is a special case; Cuaron (and/or his team) can make depth out of flat 2D. Children of Men had me speechless quite a few times (the motorcycle ambush...).
Oh it was. I've never seen a 3D movie before, I went specifically because so many critics I trust had said it may have been the first case of 3D done well and actually being useful to the storytelling.
But that's the same reason I don't expect to do it again any time soon. What were the previous movies where 3D was supposed to be a big part of the experience? Avatar? Polar Express? Even if those were perfect movies, they were years and years ago. The last time I even gave it a second thought was Hugo.
Gravity made a lot of sense, because it conveys just how alone the characters were in the volume of space. It's going to be a while before someone makes a movie 1) with a good script and 2) a good director that 3) really uses 3D well.
I have no problem with 3D (when it's used properly, but that's true of everything and shouldn't need to be said). I like how the article is supposedly using "science" to "prove" what is already disproven plainly and empirically. 3D does work.
The funny thing about this round of 3D is that it isn't a bold new venture for cinemas. It's a sign of stress. Cinema ticket sales have been in decline for over a decade now [1]. Just as TV stole away the everyday crowds from golden-age cinemas, ever-improving home theater quality and video gaming are steadily chipping away at what remains.
When TV started stealing business from cinemas, Hollywood's response was to use new technologies to give cinema patrons something TVs of the time couldn't. Hence, widescreen aspect ratios became widely adopted and, later on, the first wave of 3D, stereo, surround sound, etc. TV technology stagnated and an equilibrium formed that stood until home video came along and started disrupting things.
Today, the second wave of 3D is an attempt to tear people away from their hi-definition, audiophile-grade, surround-sound home theaters and drag them back into cinemas (at double the normal ticket price). It will work, for at least a little while, until 3D becomes ubiquitous even amongst relatively cheap home video displays. At that point, 3D may very well die another death, because Hollywood might not be willing to tolerate higher production costs (and the limitations of the technology) for a gimmick that doesn't bring in enough extra cash. What will likely determine the longevity of 3D is whether those costs come down faster or slower than the sales boost tapers off!
The next obvious step for viewer immersion is virtual reality. If VR headsets such as the Oculus Rift, or whatever Valve has been secretly working on, take off in the next few years and develop a large enough user base, there's a remote chance that we might see some movies developed for them. Cinemas might also introduce VR rooms, making Hollywood investment in VR films more likely. These might be entirely on-the-fly rendered machinima that allow users to walk around freely inside the film, or pre-rendered films that place the viewer on a rail with only the ability to move their head to look around. Gimmicky, yes. Highly unlikely to replace traditional film, yes. It could happen though, as one more way to boost sales.
Honestly, the number of people who have at home experiences even close to cinema quality or "audiophile" grade is incredibly few. Most people have average priced HD TVs, a cheap bluray player and maybe a soundbar.
The people who own the systems that they've meticulously picked out and spent thousands on will go to the cinema no matter what, because they deeply care about the media and are willing to pay to see it early on the big screen. It really comes down to people just not caring about seeing movies, and has little to do with the at home equipment.
There’s a particular conundrum that makes 3D difficult, and that’s the frame rate. Broadcast TV (which we perceive as smooth and lifelike) delivers 60 half-resolution fields per second; standard film is 24fps, 2.5 times fewer temporal samples. Walter Murch makes a pretty strong case in his book In the Blink of an Eye that your brain actively works to fill in the difference, effectively imagining the rest in the same way it does while listening to a storyteller. This is what makes 24fps such a compelling frame rate for fiction, and why 60fps feels “too real”: at higher rates of motion there’s no longer a need for imagination, and the “man behind the curtain” is revealed.
Unfortunately, in 3D, the temporal limitations of 24fps become apparent, perhaps again because the visuals start to become real enough that your brain no longer works as hard to synthesize reality. But now if you increase the frame rate, you end up with the first problem again, and maybe even worse—when watching The Hobbit in 48fps 3D, I was painfully aware of every camera movement, no longer feeling like a passive observer hovering in the air. It’s clear that if 3D really is the way things are from here on, many new techniques are needed, from the styles of acting and lighting designs to the way the camera moves and scenes are edited.
I’d guess one compromise would be splitting the difference, 3D projected at 36fps—something tells me that won’t come to pass though, and so maybe indeed 3D never will work…
1) High frame rate - really, enough of this "24fps looks better" nonsense. We can make adaptive frame rates if need be. This kills the nasty tearing you get when cameras pan, particularly noticeable over fancy landscape scenes.
2) Brighter projectors - don't know why this isn't the case already.
3) Actually shot in stereo. There's a very good chance that the last 3D movie you saw was depth-ified in post process. Shooting in 3D is expensive and requires more editing, calibration etc, so people don't like doing it.
>enough of this "24fps looks better" nonsense. We can make adaptive frame rates if need be.
I don't even understand the 24fps logic, as all it causes is tearing and motion blur. We need better framerates, but across the board, not just in action spots, as 48/60 give smoother motion overall. The Hobbit was 48fps; I bet most people either didn't notice, or thought it looked better. Certainly, there was no appreciable motion blur or tearing in a film that would have been full of it at 24.
>2) Brighter projectors - don't know why this isn't the case already.
Already done, along with more reflective screens, but yes, still needs improvement.
>3) Actually shot in stereo. There's a very good chance that the last 3D movie you saw was depth-ified in post process. Shooting in 3D is expensive and requires more editing, calibration etc, so people don't like doing it.
Yep. This is the main problem. Having movies shot in 2D and made 3D in postproduction is like shooting in black and white and having a six-year-old colour them in with crayons. Native 3D shooting is easier than it was, thanks to James Cameron et al., but still requires more investment in skills, equipment, time, calibration, etc., and better ongoing reviewing and monitoring during production. Some people just don't like to spend money where they should, but still want to reap the benefits.
> The Hobbit was 48fps; I bet most people either didn't notice, or thought it looked better.
I, and everyone in my family, thought it looked terrible at 48fps and in 3D—like a mid-80s BBC soap opera. I didn't see the 2D version, so maybe the film itself just sucked even in 2D.
> There's a very good chance that the last 3D movie you saw was depth-ified in post process. Shooting in 3D is expensive and requires more editing, calibration etc, so people don't like doing it.
This is just flat out wrong. "Depthifying" things in post produces better 3D, full stop, because a single 3D depth works poorly across the entire image.
Every. Single. Animated. Film. uses multiple 3D depths in the same shot, which is what "depthifying" allows you to do, and that's a huge reason why animated films have the best 3D currently. If you just shoot 3D in-camera, you're forced to choose a particular 3D depth and the results, in most shots, are sub-par.
BTW, my information comes from talking with actual 3D supervisors in Hollywood (where I lived) and the cml-3d list, which is where the people who actually do this shit for a living hang out and talk about the 3D releases as they come out, the techniques they used, and why. If you're curious about the craft, you could do worse than signing up for the mailing list and listening in on the conversations happening there.
>This is just flat out wrong. "Depthifying" things in post produces better 3D, full stop, because a single 3D depth works poorly across the entire image.
No, you're flat out wrong. Again, would you colour in a black and white film in postproduction and expect an accurate result?
As movies normally create depth by blurring objects at different distances to mimic the human eye, this interferes with actual focusable depth if it is postprocessed into 3D. It is possible to do it right, but it takes the best part of a year (see: the 3D-ifying of Titanic), not the two weeks that the Disney movies or Clash of the Titans took.
Point taken about animated films usually having better 3D, but that is exactly because it's easier and cheaper to produce natively.
Another Ebert declaration ('Case Closed.') where he betrays an incomplete understanding of the topic at hand. I personally liked 'As an editor, he must be intimately expert with how an image interacts with the audience's eyes.' contrasted against this part of the guy's letter: 'Somehow the glasses "gather in" the image'.
Not to mention, of course, that there is always scope for better tech to come along. Sensationalism, thy name is Ebert.
A few reasons 3D doesn't work for me:
1) For some reason, when I watch a 3D movie, for the first 20 minutes or so my left eye feels "numb". It's hard to describe, just unpleasant.
2) I wear glasses, and I can't wear contacts. Clunky 3D glasses don't work for me, and some theaters don't use polarizing filters, so there are no clip-ons.
3) This seems to be related a lot to how a film is edited, but there's an effect that makes everything on screen look like miniatures (as in tilt/shift photography) to me. It's not the same in every movie. Avatar = good, Hobbit = so-so, John Carter = very very bad.
Side notes: yeah, the picture is darker, and somehow the movie does indeed feel "smaller". Those effects don't bother me much, but they don't help.
Consequence: I don't buy a 3D TV, I don't buy a 3D projector, and if I can avoid it I don't go to watch a 3D movie. Unfortunately, I like going to the cinema, and sometimes cinemas only have 3D screenings, which is annoying because then it's a choice between not seeing the movie in a cinema or seeing it in 3D (both bad options).
If the goal of displays is to reproduce the real world (which may or may not be true, but I'd argue is true for some class of displays) then I would argue that 3D will eventually become standard, simply because the world is 3D. However I think the biggest current barrier for accurate representation of the world is contrast ratio, which we are orders of magnitude off of.
I propose a sort of Turing test for displays, where the goal is to have a display that is indistinguishable from a window into the real world. Let me know when that happens and I will be the first to buy one.
>3D will eventually become standard, simply because the world is 3D
I don't disagree with you at all, but if you're using the term "3D" in the same way as the article, the first "3D" in that sentence is referring to stereoscopic displays that are driven by our specific animal biology of having two eyes, while the second "3D" in your sentence refers to the external world, which would still have three dimensions even if we only had one eye, in which case stereoscopic displays would not be necessary.
If instead you meant holographic-style displays, then nevermind :)
The accommodation/vergence problem is well known, but it's primarily a big deal for VR headsets like the Rift, not for cinema screens. The a/v disparity goes as the inverse tangent of the eye separation over the viewing distance, so it quickly fades into irrelevance at the 10 ft range.
And such disparities can be solved via Virtual Retinal Displays (http://en.wikipedia.org/wiki/Virtual_retinal_display) which shine laser light directly onto your retina and can simulate any level of depth and focus accurately.
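To put rough numbers on the claim (distances assumed for illustration), the usual way to measure the accommodation/vergence conflict is in dioptres, i.e. inverse metres: accommodation is pinned at the screen while vergence follows the virtual object.

```python
def conflict_dioptres(screen_m, virtual_m):
    """Accommodation stays at the screen; vergence follows the virtual
    object. The conflict is the difference in dioptres (1/m)."""
    return abs(1.0 / screen_m - 1.0 / virtual_m)

# VR headset: focal plane ~1.5 m, object rendered at 0.5 m:
print(conflict_dioptres(1.5, 0.5))   # ~1.33 D -> real strain
# Cinema: screen ~15 m away, object rendered at 5 m:
print(conflict_dioptres(15.0, 5.0))  # ~0.13 D -> an order of magnitude smaller
```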
Honestly, any article that says something like "it never will" is setting itself up for failure. Writing a commentary on current technologies is fine, but implying some knowledge of the future and what is possible is just ignorant and obviously motivated by feelings outside of presenting a true and unbiased article.
The technology isn't perfect, but to say that it will never "work" is just ludicrous.
Another factor is simulator sickness. Even if you could get the perfect 3D environment going (say via the Oculus Rift), your inner ear will start to complain that what it's feeling doesn't match what your eyes are seeing, and your stomach will think it's been poisoned and induce vomiting.
The only way '3D' will ever work is if the movie is jacked straight into your brain, à la 'Strange Days'.
Why Film doesn't work and never will.
The notion that we are asked to pay a premium to witness an inferior and inherently brain-confusing image is outrageous. The case is closed.
Isn't it also a trick to make the brain think that it's seeing continuous images instead of discrete single images?
Maybe because movies are more impressionist/symbolic devices rather than simulations. I personally don't care about 4K HD, or 100Hz, or 3D. Every time I read about these, I remember the first minute of Alien. No 3D, no green screen, analog... yet I feel immersed in a ship.
If you're a fan of Ebert's writing or movie reviews I highly recommend his autobiography "Life Itself". One of the better books I've read in the last few years.
I think it is useful to compare with how stereophony works, and it might indicate that 3D is indeed a dead-end, a superfluous gimmick (as noted in other comments).
The simple (too simple) view is that screens have to reproduce reality, that the world is 3D and thus that 3D will eventually win. But this has been proven false, at least for audio (which I know better).
Some people think they hear left or right by doing some triangulation between the two ears. Nothing could be more wrong: with only two ears we would not perceive height, and people deaf in one ear certainly do not "hear in 1D".
In fact, we localise sound because of:
- The shape of our ears (see how complex the ears of some animals are).
- Tiny movements of the head.
- Past experience (learning) of the shape of reverberation and reflections in common rooms.
A full "real" simulation of sound localisation, which has been experimented and works, requires:
- Sounds recorded in an anechoic chamber (these are very small and expensive, you won't get a philharmonic orchestra into one, and playing music in an echo-less room is extremely painful).
- Perfect microphones, a thing that does not exist.
- Synthetic room reflections computed on the fly according to where the listener sits (the shape and texture of the room and where the two ears are in it).
- A polar reflection model of the listener's ear shapes.
- A helmet detecting tiny head movements and adjusting all of the computation above accordingly.
- Perfect earphones inside the ears of the listener.
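For flavour, a bare-bones sketch of the playback end, using only interaural time and level differences for a single source direction (a real system would instead convolve with measured, per-listener HRIRs and update them with head tracking; the constants below are rough assumptions):

```python
import numpy as np

FS = 44100  # sample rate in Hz

def spatialise(mono, azimuth_deg):
    """Crude binaural panning: an interaural time difference (sample
    delay) plus an interaural level difference for the far ear."""
    az = np.radians(azimuth_deg)
    itd_s = 0.0007 * abs(np.sin(az))               # up to ~0.7 ms delay (rough figure)
    delay = int(round(itd_s * FS))
    near = mono
    far = np.pad(mono, (delay, 0))[:len(mono)]     # far ear hears it later...
    far = far * 10 ** (-6 * abs(np.sin(az)) / 20)  # ...and up to ~6 dB quieter (assumed)
    left, right = (near, far) if azimuth_deg < 0 else (far, near)
    return np.stack([left, right], axis=1)

# One second of a dry 440 Hz tone, placed 45 degrees to the right:
t = np.arange(FS) / FS
stereo = spatialise(np.sin(2 * np.pi * 440 * t), azimuth_deg=45)
print(stereo.shape)  # (44100, 2)
```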
So this all works in theory and has been tested experimentally, but nobody seriously thinks we need all of it to enjoy a properly spatialised concerto. We can approximate a soundscape well enough with the very crude left-right localisation provided by stereophony, and this is quite enough to enjoy good music.
It is certainly different for the visual field, but I would bet it will resemble this in broad strokes: music, movies, books, painting, all of these create illusions, autonomous worlds that do not need to match reality perfectly. It needs to be realistic enough and based on accepted conventions: when we see the image of a plane taking off, we accept that our hero is likely inside, and that it is related to the story, i.e. not a random plane taking off as we would see from our window.
But it doesn't need to be "pixel-perfect", as exemplified by the many great black and white movies.
I was going to post the same idea. Since I've tried the Oculus Rift, I'm really impatient to see the first version of a Lord of the Rings or Star Wars in full 3D with an Oculus Rift on my head. Imagine being right in the middle of a space battle, being able to move your head all around, or sitting on a horse while charging Sauron's armies.