kguttag's comments

Maybe I didn't make the point well in the article. The feature is limited to special cases because they restrict the IBIS pixel shift to in-camera processing with only a JPEG output.

The camera should take pictures with the IBIS fractional pixel shift and save the RAW files. They should give the option of how many "cycles" (going through all the shift orientations more than once) of pictures to take. With that level of information, smart software would be able to figure out and deal with at least small hand motions and considerable motion in the subject.

Smartphones are already using computational photography, combining multiple photos for things like panoramas and taking pictures in the dark. For a dedicated camera like the R5 Mk II, I would want the camera to save RAW images that can be put together later by "smart" software on a computer with greater processing and memory resources.
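
As a sketch of what such "smart" software could do (illustrative Python on my part, not Canon's pipeline; the function name and motion threshold are made up for the example): with four RAW frames captured at known half-pixel offsets, the samples interleave into a double-resolution grid, falling back to a single frame wherever the frames disagree, i.e., where something moved:

    # Minimal sketch of a pixel-shift merge done later on a computer.
    # Illustrative only, not Canon's in-camera algorithm. Assumes four
    # monochrome RAW frames captured at known half-pixel offsets:
    # (0,0), (0,1/2), (1/2,0), (1/2,1/2).
    import numpy as np

    def merge_pixel_shift(frames, motion_threshold=0.05):
        ref = frames[0]
        h, w = ref.shape
        out = np.empty((2 * h, 2 * w), dtype=ref.dtype)

        # Each shifted frame fills one phase of the doubled sample grid.
        out[0::2, 0::2] = frames[0]  # (0, 0)
        out[0::2, 1::2] = frames[1]  # (0, 1/2): shifted right
        out[1::2, 0::2] = frames[2]  # (1/2, 0): shifted down
        out[1::2, 1::2] = frames[3]  # (1/2, 1/2)

        # Crude motion mask: where any frame deviates from the reference
        # by more than the threshold, something likely moved between shots.
        moved = np.zeros((h, w), dtype=bool)
        for f in frames[1:]:
            moved |= np.abs(f - ref) > motion_threshold

        # Fall back to the single reference frame in the moved regions.
        fallback = np.repeat(np.repeat(ref, 2, axis=0), 2, axis=1)
        mask2x = np.repeat(np.repeat(moved, 2, axis=0), 2, axis=1)
        out[mask2x] = fallback[mask2x]
        return out

Multiple cycles through the shift pattern would additionally let the merge average the agreeing samples, reducing noise on top of the resolution gain.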


Well, they put a full-size HDMI port on the R5 mk. II if that is all you are worried about.


That's not all. But it's long overdue.


Good catch on "combined with custom catadioptric lenses," which by definition means a combination of mirrors and refractive optics. But I still don't think they are pancake lenses; they seem more akin to the catadioptric optics that Limbak, which was recently bought by Apple, was famous for designing. Before being bought by Apple, Limbak was best known for the catadioptric design used by Lynx.


Is there a ray diagram of the Lynx optics somewhere? I never fully understood how they work (does the light reflect off the sides of them?). With Apple showing the lens element shapes, is there room for reflecting off the side?


You are correct that MR today implies that the virtual image is locked to the real world with some form of SLAM.

I was a bit sloppy in that regard; I was more worried about what was going on. But when combined with the word "passthrough" to see the real world, I tend to slip back to the term "AR passthrough," which is what this type of thing was called for years. MR, as I remember it, is a more recent term used to distinguish different types of AR. Then things flipped, and XR (= AR/VR/MR) was used to mean what AR used to mean.

My guess is that late in the program, the importance of passthrough was elevated, perhaps in response to Apple rumors, but they were stuck with the hardware that was in the pipeline. The passthrough available is a big improvement over the Quest 2 for seeing one's gross-level surroundings, but it is not up to regular use in full Mixed Reality.

There are probably decades of "optimizations" left to do.


If you are talking about many "game playing" applications, you may be correct.

But it would be ridiculous to think that people are going to go out in public in a VR headset with cameras (known as passthrough AR). On top of a number of human-factors issues, it would dangerously block a person's view of the real world.


Just crank the FoV slider up to maximum.


As you wrote, I don't follow VR.

That said, I'm very skeptical that a VR headset will work in place of a computer monitor for long-term use in business computing.


I don't think the issue is color. Their original product was trying to differentiate on Vergence Accommodation Conflict (VAC) and did a poor job of it. They were also going after consumers with a product that was always going to be far too expensive for them, and with image quality that was going to be too poor. They also made what I think is a bad set of trade-offs in terms of ergonomics and human factors.

The problem with medicine is that it is a tiny market. You can't justify the kind of money they were trying to raise for that market. Under Abovitz, they were always a swing-for-the-fences company.

All the above said, nobody I know of is making money selling AR headsets today. They are either subsidized by VC or other investment money or by big companies funding R&D efforts. This market is not like other product areas where you can make money with an MVP and grow the product.


I vaguely remember Rony and friends reversing the concept of the endoscope product they had developed earlier: instead of capturing light and projecting it onto a light sensor as in the endoscope, the moving fiber-optic strand would project light into the side of a lens, which would channel the light and reflect it into the eye. Everything they did on that original idea had fiber-optic strands for RGB. Having one color would have been an easier place to start. Nonetheless, it seems they had other, more difficult, unrelated problems, such as Vergence Accommodation Conflict.


I think we need to see Kura actually demonstrate that their technology works.

You might want to see this video (cued to his discussion of Kura at CES 2022): https://youtu.be/S0heZVN5NCs?t=1328


>I think we need to see Kura actually demonstrate that their technology works.

Indeed.


While the market is "hot" in terms of awareness, it is still a very small market facing major technical challenges. I think AR, while useful in enterprise markets (measured in the hundreds of thousands of units a year), is a long way from being ready for the mass consumer market.

I am seeing a lot of progress in some areas and will be publishing an article later this week on them. In particular, both Dispelix and Digilens have made considerable progress on the "glowing eyes" issue (Dispelix all but eliminates it). Avegant has a very nice small LCOS light engine that pairs nicely with the Dispelix waveguide.

I think the biggest problem for AR is that expectations are very high and the physics is very tough. Many of the physical optical features are within a few wavelengths of light, where diffraction ruins everything.
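
To put rough numbers on that (a back-of-the-envelope sketch; the refractive index and grating pitch below are assumptions, not any vendor's design):

    # Back-of-the-envelope numbers, not any vendor's actual design.
    # A diffractive waveguide couples light in with a grating; the
    # first-order angle inside the glass follows
    # sin(theta) = wavelength / (n * pitch). With the pitch near the
    # wavelength, the angle swings strongly with color.
    import math

    n = 1.8           # assumed refractive index of the waveguide glass
    pitch_nm = 400.0  # assumed grating pitch, i.e., wavelength-scale

    for name, wavelength_nm in [("blue", 450.0), ("green", 530.0), ("red", 630.0)]:
        s = wavelength_nm / (n * pitch_nm)
        print(f"{name:5s}: first-order angle in glass = {math.degrees(math.asin(s)):.1f} deg")

That swing of more than 20 degrees between blue (about 39) and red (about 61) is one reason diffractive waveguides have such a hard time with color uniformity.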


It seems to me like a regular VR headset + dual cameras on the front might be able to get much better results than actual AR?


Even if it did, people won’t wear them in public. Google Glass looked half normal and people were still getting accosted in restaurants.


People freak out at the idea of some random dude's cameras constantly watching them.

This is a bit funny because they'd be watched by multiple cameras in any restaurant or supermarket anyway. OTOH, the viewpoint of overhead monitoring cameras is very distinctive, and their resolution is usually barely enough to see a face. Glass's camera gave more "normal," higher-resolution footage.


There is no irony here.

People freak out because it's worn by a _person_ who, specifically, is watching _them_.

You'd probably react similarly if somebody, during a party, for no reason kept pointing a recording microphone at you, even though there were voice assistants like Alexa in the room.


People are discussing whether an AR device could be worn in public. But I'm wondering: should it be? I mean, I'm the last guy to question novel technology; I was desperate to get the first smartphones. But why would you want an always-on screen in public?


That doesn't matter for industrial applications though.


Yes, but that problem was much more “Google” than “glass”.


Perhaps so. I'm just skeptical that the viewed would accept such VR passthrough-style goggles even if the viewer finds them innocuous.


I was working on enterprise Google Glass apps in 2014. The problem was lack of applications. You could do very few things with it beyond showing some text and pictures.


I think this is a "grass is greener" type argument. There are also massive problems with pass-through AR (VR with cameras).


I'm not entirely sure it is. Passthrough has serious engineering hurdles to get over first: latency, power consumption, weight, etc.

But the waveguide method has limitations rooted in physics and math that won't change until 30%+70% stops equalling 100%.
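
For what it's worth, here is that zero-sum constraint in its simplest idealized form (a sketch, not any specific waveguide's numbers):

    # The zero-sum trade in its simplest idealized form: a lossless
    # partial-mirror combiner that sends a fraction r of the display light
    # to the eye passes at most (1 - r) of the real world through.
    for r in (0.1, 0.3, 0.5):
        print(f"display light to eye: {r:.0%}  ->  real-world transmission: {1 - r:.0%}")

Real waveguides fall well short of this ideal on the display side, which is part of why the light engines need so many nits to begin with.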

I know which one I'd put more hope in.


Did you get to try the Avegant prototype? It sounds quite promising.


Yes, I took the picture of the Avegant prototype used in the article. The Avegant and Dispelix waveguide combination looks pretty good. It is only a prototype without any tracking/SLAM; it is a display-only demonstration by a component company.

I was impressed by the size of the Avegant engine and by the transparency and lack of forward projection of the Dispelix prototype. They are claiming they will get 2,000 nits to the eye out of the design, which should be good enough for outdoor use IF they add some form of clip-on sunglasses (2,000 nits is not enough outdoors in full sunlight without some help).
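
Rough arithmetic behind the sunlight caveat (illustrative numbers on my part, not measurements of the prototype): legibility depends on the virtual image's contrast against the scene behind it, and darkening the background helps more than adding nits:

    # Illustrative numbers, not measurements of the Avegant/Dispelix unit.
    display_nits = 2000.0       # claimed luminance delivered to the eye
    sunlit_scene_nits = 8000.0  # assumed bright sunlit background

    def see_through_contrast(display, background, visor_transmission=1.0):
        # AR contrast ratio: (display + attenuated background) / attenuated background
        bg = background * visor_transmission
        return (display + bg) / bg

    print(f"bare:          {see_through_contrast(display_nits, sunlit_scene_nits):.2f}:1")
    print(f"25% T clip-on: {see_through_contrast(display_nits, sunlit_scene_nits, 0.25):.2f}:1")

A rule of thumb often quoted is that an overlay wants roughly 2:1 contrast against the background to be comfortably readable; the clip-on gets there (2.00:1), while the bare 2,000 nits does not (1.25:1).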

The current Avegant engine has an optical component in it that was depolarizing the light from the LCOS display and losing contrast, so they wanted me to wait to take through-the-lens pictures. The image from the current prototype looked sharp but did lack contrast.


They seem to have done some good things. Sadly, in AR as in many high-tech markets, it is possible to have a great technical achievement but miss the market requirements. The dimming feature caused them to lose at least 70% of the real-world light right off the top; nothing else they could do would make up for that mistake. It looks like they were focused on novel features over utility.
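
To make the "off the top" arithmetic concrete (assumed illustrative values, not ML2 measurements): see-through transmission multiplies through every element in the optical stack, so the dimming layer caps everything downstream:

    # Assumed illustrative values, not ML2 measurements; the 30% for the
    # dimming layer is from the comment above. Transmissions multiply.
    dimming_layer = 0.30  # "lose at least 70% ... off the top"
    waveguide = 0.80      # assumed combiner see-through transparency
    other_optics = 0.90   # assumed covers and coatings

    total = dimming_layer * waveguide * other_optics
    print(f"real-world light reaching the eye: {total:.0%}")  # about 22%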

I think a large part of this is that they were building the ML2 for the high-end consumer video game market when one day they were told it was too expensive and would now be an enterprise product. I hate the display quality of the HL2, but at least they improved the ergonomic and interface issues over the HL1, whereas the ML2 repeats the mistakes of the ML1.

