If you could somehow project that as a HUD in a car, you'd have a great solution to a big problem in the navigation space. Many people have trouble reading maps because they have difficulty translating the pictographic display on a separate screen into the roads and signage they see in front of them. If they no longer have to do this translation, following directions becomes significantly easier for a large class of people who struggle with current navigation systems.
Concepts like this have already been shown, so the prior art will be very sticky for anyone who tries it.
It looks like Pioneer already shipped something much cruder but similar in concept, and I swear I saw a Microsoft concept video with something more refined.
I may be getting too cynical, but almost immediately after thinking "very cool", I wondered how many people would sue the company putting this on the market for leading them from the pavement onto the street.
For navigation, I think this may come across as having better precision than it really has.
I've only seen the video but I would guess it's using either Apple or Google's nav system. Also I know we live in a supposed culture of lawsuits but I doubt you're going to have any luck suing here. If you walk onto the road without looking you're an idiot. Especially when you can view the street through the device you're looking at.
Don't get me wrong, this is pretty cool, but one thing that annoys me a bit is the way the author frames the video. With the current version of ARKit and most AR libraries these days, there's no occlusion of the virtual geometry, meaning you can actually see the blue line and arrows overlaying the real world at all times (so some far-away virtual arrow will sit on top of a real wall right in front of you). That's a bit less sexy, and it's clearly hidden here. It's also possible that some directions look confusing given the lack of occlusion and the number of virtual objects on screen.
I'm looking forward to when AR libraries use their point cloud technology to also do object reconstruction and build virtual occluders from real-world objects!
We're planning to bring occlusion to ARKit with Forge. The accuracy can't match a depth camera, but there are still plenty of use cases. Here's an example on a Pixel:
Project Tango can kind of do this already, though the quality of the point cloud tends to break the illusion. The other solution, at this large scale, is to have pre-baked maps: we know where all the big occlusions are in a city.
I agree it'll be amazing when device depth sensors can build up the occlusion mask on the fly, but that still requires a hardware upgrade (iPhone 8?, Android Tango, etc.). IMO, the beauty of ARKit is getting reasonable AR tracking on the current crop of devices (with a single camera lens!)
Also, I think there are other ways to build up reasonable occlusion nodes manually. For example, it's probable that the Google Maps iOS team is currently adding an AR directions view that uses the Street View point cloud to build up occlusion areas around most streets. Likewise, some areas of the OSM dataset include building footprints with height attributes, which could be used as well (rough sketch below). Nowhere near perfect, but I think it would help in the situation you described above.
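Rough sketch of the footprint idea (my own illustration, not anything shipping; the function and parameter names are made up), using SceneKit's depth-only material trick: the shape writes to the depth buffer but not the colour buffer, so it hides virtual geometry behind it while the camera feed still shows through. It assumes the footprint has already been converted to local metre offsets from the session origin:

    import UIKit
    import SceneKit

    // Build an invisible occluder from a building footprint (local metre
    // offsets) plus a height attribute, e.g. from OSM. The material writes
    // depth only, so virtual content behind the "building" is hidden while
    // the real camera image still shows through.
    func occluderNode(footprint: [CGPoint], height: CGFloat) -> SCNNode {
        guard let first = footprint.first else { return SCNNode() }

        let outline = UIBezierPath()
        outline.move(to: first)
        footprint.dropFirst().forEach { outline.addLine(to: $0) }
        outline.close()

        let shape = SCNShape(path: outline, extrusionDepth: height)

        let occluder = SCNMaterial()
        occluder.colorBufferWriteMask = []       // depth-only: occludes, renders nothing
        shape.materials = [occluder]

        let node = SCNNode(geometry: shape)
        node.eulerAngles.x = -Float.pi / 2       // SCNShape extrudes along z; point it up
        node.position.y = Float(height) / 2      // extrusion is centred, so lift the base
        node.renderingOrder = -1                 // draw before the virtual route geometry
        return node
    }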
Well, the current iPhone 7 Plus with its dual cameras can generate a realtime (albeit low-res) depth map in the iOS 11 beta, so you could probably get something working, if somewhat crudely.
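For the curious, streaming that depth map with the iOS 11 AVFoundation API looks roughly like this (a minimal sketch; a real app would also pick a device format that supports depth delivery and add canAddInput/canAddOutput checks, and ARKit doesn't consume these frames directly, so you'd have to build your own occlusion mask from them):

    import AVFoundation

    final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
        let session = AVCaptureSession()
        private let depthOutput = AVCaptureDepthDataOutput()

        func start() throws {
            guard let camera = AVCaptureDevice.default(.builtInDualCamera,
                                                       for: .video,
                                                       position: .back) else { return }
            session.beginConfiguration()
            session.sessionPreset = .photo       // photo preset supports depth on the dual camera
            session.addInput(try AVCaptureDeviceInput(device: camera))

            depthOutput.isFilteringEnabled = true            // smooth holes in the map
            depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
            session.addOutput(depthOutput)
            session.commitConfiguration()
            session.startRunning()
        }

        func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                             didOutput depthData: AVDepthData,
                             timestamp: CMTime,
                             connection: AVCaptureConnection) {
            // depthData.depthDataMap is a CVPixelBuffer of depth/disparity values,
            // much lower resolution than the colour frame.
            let map = depthData.depthDataMap
            _ = map // feed into an occlusion mask, point cloud, etc.
        }
    }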
I haven't upgraded anything to an iOS 11 beta so I haven't been able to try this myself - but how effective is ARKit if the camera loses its view for a little bit?
Here is a video upload from a July 4th ARKit experience I made. With small movements and brief occlusion, ARKit does well. You can see in the video that after more movement and occlusion it loses its bearings.
But what's crazy is that when you return to the original view, ARKit fixes itself.
This must be their endgame. I doubt they would have released ARKit in the first place otherwise. If any company in the world can finally pull off consumer AR it's Apple, so here's hoping.
They'll have to go to the back of the line.
Google is already using Google Glass in some manufacturing environments.
Microsoft has already released HoloLens, although for now it's mainly for devs.
Magic Leap is hopefully coming next year and, according to the rumours, should be quite a big leap.
It's quite unrelated tech-wise. This seems to be a simple inverse polar projection that maps locations around you based on your compass (roughly the approach sketched below). It doesn't see anything; it's just a different skin for maps overlaid on the camera stream.
I've had it on my Nokia phone and it worked horribly. It was really hard to understand what was where in a jumbled mess of pins, and there was no visual difference between a pin representing an object 10 meters from you and one that is 1 mm away from you.
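For reference, that style of overlay boils down to something like this (my own sketch of the general technique, not any particular app's code): compute the bearing from your GPS fix to the POI, subtract the compass heading, and map the angle to a horizontal screen position. Distance never enters the projection, which is exactly why every pin looks the same:

    import CoreLocation
    import UIKit

    // Bearing from a to b in radians, clockwise from north.
    func bearing(from a: CLLocationCoordinate2D, to b: CLLocationCoordinate2D) -> Double {
        let lat1 = a.latitude * .pi / 180, lat2 = b.latitude * .pi / 180
        let dLon = (b.longitude - a.longitude) * .pi / 180
        let y = sin(dLon) * cos(lat2)
        let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
        return atan2(y, x)
    }

    // Horizontal screen x for a POI, or nil if it's outside the camera's field of view.
    // headingRadians is the compass heading, fovRadians the horizontal FOV.
    func screenX(user: CLLocationCoordinate2D,
                 poi: CLLocationCoordinate2D,
                 headingRadians: Double,
                 fovRadians: Double,
                 screenWidth: CGFloat) -> CGFloat? {
        var angle = bearing(from: user, to: poi) - headingRadians
        angle = atan2(sin(angle), cos(angle))    // wrap to -pi...pi
        guard abs(angle) < fovRadians / 2 else { return nil }
        return screenWidth * CGFloat(0.5 + angle / fovRadians)
    }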
I swear I had an app installed on my iPhone 3G (I guess 2008/2009) that did something similar. Was it Sam Altman's Loopt that had a 3D view that used geomarkers and displayed them as you rotated the phone with the camera view leaking through it? I believe it was L*something.
I remember that (haven't used it in years though); when it launched it was a big deal. I think the big deal here is that any reasonably skilled iOS dev could write this in a few hours, whereas Yelp certainly spent more time and money doing it. So the real point is that the tools have finally reached a point where doing useful AR work isn't difficult.
That said, when using worldAlignment = .gravityAndHeading, any location inaccuracy when the ARSCNView starts up will throw off the AR illusion, sometimes considerably. I hope apps will be able to correct mid-ARSession when better location data is detected.
There is no iBeacon (not sure where you're quoting that from?); it's just spatial data (lat/lons) projected into the SCNScene coordinate space (which is relative to the location and heading of the device when the ARSession starts).
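To illustrate what that projection amounts to (a sketch based on my reading, with made-up function names): with worldAlignment = .gravityAndHeading, +x is roughly east and -z roughly north, with the origin at the device's pose when the session started, so a lat/lon becomes a scene position by converting the GPS delta into metres. Any error in the start location or heading shifts every node, which is the inaccuracy mentioned above:

    import ARKit
    import CoreLocation
    import SceneKit

    // Align the scene with gravity and compass heading.
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading

    // Convert a target lat/lon into scene coordinates, relative to the
    // device's location when the ARSession started.
    func scenePosition(of target: CLLocationCoordinate2D,
                       sessionOrigin origin: CLLocationCoordinate2D) -> SCNVector3 {
        let metresPerDegreeLat = 111_320.0
        let metresPerDegreeLon = 111_320.0 * cos(origin.latitude * .pi / 180)

        let north = (target.latitude - origin.latitude) * metresPerDegreeLat
        let east  = (target.longitude - origin.longitude) * metresPerDegreeLon

        // +x is east, -z is north; keep y at 0 (the device's starting height).
        return SCNVector3(x: Float(east), y: 0, z: Float(-north))
    }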
OK, yes, an iBeacon is one possible way CoreLocation determines a device's location. In this case there was none, only GPS (and I guess WiFi is used as well).
I think the way GPS apps have resolved this is by using your bearing, comparing it to the lanes on the road, and snapping the route to match where you're actually moving. This might need some finer adjustment based on where your camera is pointing as well as your bearing.
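Something like this toy sketch (my own illustration, with coordinates treated as local planar metres and a guessed heading weight): snap the raw GPS fix onto the route segment whose direction best agrees with the current course, and draw the overlay from the snapped point:

    import Foundation

    struct Point { var x: Double; var y: Double }        // x east, y north, in metres
    struct Segment { var a: Point; var b: Point }

    // course is in radians, clockwise from north.
    func snap(_ p: Point, course: Double, to route: [Segment]) -> Point {
        var best = p
        var bestScore = Double.infinity

        for seg in route {
            let dx = seg.b.x - seg.a.x, dy = seg.b.y - seg.a.y
            let lengthSquared = dx * dx + dy * dy
            guard lengthSquared > 0 else { continue }

            // Closest point on the segment to the GPS fix.
            var t = ((p.x - seg.a.x) * dx + (p.y - seg.a.y) * dy) / lengthSquared
            t = min(max(t, 0), 1)
            let candidate = Point(x: seg.a.x + t * dx, y: seg.a.y + t * dy)

            // Penalise segments whose direction disagrees with the current course.
            let segmentCourse = atan2(dx, dy)            // clockwise from north
            var headingError = abs(course - segmentCourse)
            headingError = min(headingError, 2 * .pi - headingError)

            let distance = hypot(candidate.x - p.x, candidate.y - p.y)
            let score = distance + 10 * headingError     // weight is a guess
            if score < bestScore {
                bestScore = score
                best = candidate
            }
        }
        return best
    }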
https://twitter.com/AndrewProjDent/status/888380207962443777