ARKit + CoreLocation [video] (twitter.com/andrewprojdent)
101 points by gfredtech on July 22, 2017 | 41 comments



The really interesting stuff is in Part 2 of his demo:

https://twitter.com/AndrewProjDent/status/888380207962443777


If you could somehow project that as a HUD in a car, you'd have a great solution to a big problem in the navigation space. Many people have trouble reading maps because they have difficulty translating the pictographic display on some other screen into the roads and signage they see in front of them. If they no longer have to do this translation, following directions becomes significantly easier for a large class of people who struggle with current navigation systems.


There have already been concepts like this shown, so the prior art will be very sticky for anyone who tries it.

It looks like Pioneer already shipped something much cruder but along similar lines, and I swear I saw a Microsoft concept video with something more refined.

https://www.youtube.com/watch?v=RFZ4lHenItE


It could have some negative side effects, as in: "But the arrow pointed straight off of the edge of the cliff!"


If they drive off a cliff when using the system, would they not be more eligible for a Darwin Award?


I may be getting too cynical, but almost immediately after thinking "very cool", I wondered how many people would sue the company putting this on the market for leading them from the pavement onto the street.

For navigation, I think this may come across as having better precision than it really has.


I've only seen the video, but I would guess it's using either Apple's or Google's nav system. Also, I know we live in a supposed culture of lawsuits, but I doubt you're going to have any luck suing here. If you walk onto the road without looking, you're an idiot, especially when you can view the street through the device you're looking at.


Don't get me wrong, this is pretty cool, but one thing that annoys me a bit is the way the author frames the video. With the current version of ARKit and most AR libraries these days, there's no occlusion of the virtual geometry, meaning you can actually see the blue line and arrows overlay the real world at all times (so some far-away virtual arrow will sit on top of a real wall right in front of you). That's a bit less sexy, and it's clearly hidden here. It's also possible that some directions look confusing with the lack of occlusion and too many virtual objects on the screen.

I'm looking forward to when AR libraries use their point-cloud technology to also do object reconstruction and build virtual occluders from real-world objects!
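For concreteness, a minimal SceneKit sketch of the usual occluder trick, assuming you already have geometry for a real-world surface from somewhere (depth sensor, city model, manual placement); the function name is made up:

    import SceneKit

    // Depth-only "occluder": writes to the depth buffer but not to color, so
    // virtual content behind it is hidden while the camera feed shows through.
    func makeOccluderNode(from geometry: SCNGeometry) -> SCNNode {
        let material = SCNMaterial()
        material.colorBufferWriteMask = []   // invisible, but still occludes
        material.isDoubleSided = true
        geometry.materials = [material]

        let node = SCNNode(geometry: geometry)
        node.renderingOrder = -1             // render before the visible nodes
        return node
    }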


We're planning to bring occlusion to ARKit with Forge. The accuracy can't match a depth camera, but there are still plenty of use cases. Here's an example on a Pixel:

https://www.youtube.com/watch?v=K9CpT-sy7HE


This is awesome! Is the SDK available?


Project Tango can kind of do this already, although the quality of the point cloud tends to break the illusion. The other solution, at this large scale, is to have pre-baked maps: we know where all the big occluders are in a city.


I agree it'll be amazing when device depth sensors can build up the occlusion mask on the fly, but that still requires a hardware upgrade (iPhone 8?, Android Tango, etc.). IMO, the beauty of ARKit is getting reasonable AR tracking on the current crop of devices (with a single camera lens!).

Also, I think there are other ways to build up reasonable occlusion nodes manually. For example, it's probable that the Google Maps iOS team is currently adding an AR directions view that uses the Street View point cloud to build up occlusion areas around most streets. Likewise, some areas of the OSM dataset include building footprints with height attributes, which could be used as well. Nowhere near perfect, but I think it would help in the situation you described above.
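A rough, hypothetical sketch of the footprint idea (not the commenter's code): assuming the footprint is already projected into the scene's local metric frame and carries a height attribute, the same depth-only material can be applied to an extruded prism.

    import UIKit
    import SceneKit

    // Extrude a building footprint (points in metres, scene-local x/y) into a
    // prism and make it a depth-only occluder.
    func buildingOccluder(footprint: [CGPoint], heightMetres: CGFloat) -> SCNNode {
        guard let first = footprint.first else { return SCNNode() }
        let path = UIBezierPath()
        path.move(to: first)
        footprint.dropFirst().forEach { path.addLine(to: $0) }
        path.close()

        let prism = SCNShape(path: path, extrusionDepth: heightMetres)
        let material = SCNMaterial()
        material.colorBufferWriteMask = []   // same trick: depth only, no color
        prism.materials = [material]

        let node = SCNNode(geometry: prism)
        node.eulerAngles.x = -.pi / 2        // SCNShape extrudes along z; stand it upright
        return node
    }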


Well, the current iPhone 7 Plus, with its dual cameras, can generate a realtime (albeit low-res) depth map in the iOS 11 beta, so you could probably get something working, if somewhat crudely.
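For reference, a hedged sketch of the iOS 11 API involved; ARKit itself doesn't surface this stream, so fusing the two would be up to the app, and the class name here is made up.

    import AVFoundation

    // Streams low-res depth maps from the dual camera (iOS 11+, e.g. iPhone 7 Plus).
    final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
        let session = AVCaptureSession()
        private let depthOutput = AVCaptureDepthDataOutput()

        func start() throws {
            guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                                       for: .video,
                                                       position: .back) else { return }
            session.beginConfiguration()
            session.sessionPreset = .photo          // depth is only available on some presets
            session.addInput(try AVCaptureDeviceInput(device: device))
            session.addOutput(depthOutput)
            depthOutput.isFilteringEnabled = true   // fills holes in the sparse disparity map
            depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
            session.commitConfiguration()
            session.startRunning()
        }

        func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                             didOutput depthData: AVDepthData,
                             timestamp: CMTime,
                             connection: AVCaptureConnection) {
            // depthData.depthDataMap is a low-resolution CVPixelBuffer that could,
            // in principle, feed a crude occlusion mask.
        }
    }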


HoloLens' occlusion is pretty solid.


I haven't upgraded anything to an iOS 11 beta, so I haven't been able to try this myself, but how well does ARKit cope if the camera loses its view for a little bit?


https://twitter.com/AndrewMendez19/status/888765856225923072

Here is a video upload of a July 4th ARKit experience I made. If the movements are small, ARKit handles occlusion well; you can see in the video that after larger movement and occlusion it loses its bearings.

But what's crazy is that when you return to the original view, ARKit fixes itself.
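Illustrative only: ARKit reports those transitions through ARSessionObserver, so an app can at least tell when tracking degrades and when it recovers (the class here is made up).

    import ARKit

    final class TrackingWatcher: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
            switch camera.trackingState {
            case .normal:
                print("tracking normal")
            case .limited(let reason):
                // e.g. .excessiveMotion, or .insufficientFeatures while the lens is covered
                print("tracking limited: \(reason)")
            case .notAvailable:
                print("tracking not available")
            }
        }
    }

    // Usage: sceneView.session.delegate = watcher
    // (ARSession.delegate is weak, so keep a strong reference to the watcher.)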


I can see Apple dropping some Glass reinvention swiftly.


This must be their endgame. I doubt they would have released ARKit in the first place otherwise. If any company in the world can finally pull off consumer AR it's Apple, so here's hoping.


They'd have to go to the back of the line. Google is already using Google Glass in some manufacturing environments. Microsoft has already released HoloLens, although for now mainly for devs. Magic Leap is hopefully coming next year and, according to the rumours, should be quite a big leap.


If there's one lesson to be drawn from Apple's history, it's that first to market is not a good metric for blockbuster success.


Nokia had something similar years ago: https://youtu.be/55Qdem9pJxY


It's quite unrelated tech-wise. This seems to be a simple inverse polar projection that maps locations around you based on your compass. It doesn't see anything; it's just a different skin for map data overlaid on the camera stream.
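To make the contrast concrete, a toy sketch of that compass-skin approach: no scene understanding, just the bearing from the user to a POI compared against the device heading and mapped to a horizontal screen position (all names here are made up).

    import Foundation
    import CoreLocation

    // Returns a horizontal screen position for a POI, or nil if it falls
    // outside the camera's field of view.
    func screenX(for poi: CLLocationCoordinate2D,
                 from user: CLLocationCoordinate2D,
                 headingDegrees: Double,
                 horizontalFOVDegrees: Double,
                 screenWidth: Double) -> Double? {
        // Bearing from user to POI, degrees clockwise from true north.
        let lat1 = user.latitude * .pi / 180, lat2 = poi.latitude * .pi / 180
        let dLon = (poi.longitude - user.longitude) * .pi / 180
        let y = sin(dLon) * cos(lat2)
        let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
        let bearing = atan2(y, x) * 180 / .pi

        // Angle between where the camera points and where the POI is, in -180..180.
        let delta = (bearing - headingDegrees + 540)
            .truncatingRemainder(dividingBy: 360) - 180
        guard abs(delta) <= horizontalFOVDegrees / 2 else { return nil }

        return screenWidth / 2 + (delta / horizontalFOVDegrees) * screenWidth
    }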


I had it on my Nokia phone and it worked horribly. It was really hard to understand what was where in the jumbled mess of pins, and there was no visual difference between a pin representing an object 10 meters from you and one 1 mm away from you.

The second part of the OP tweet is where the real magic happens: https://twitter.com/AndrewProjDent/status/888380207962443777


I swear I had an app installed on my iPhone 3G (I guess 2008/2009) that did something similar. Was it Sam Altman's Loopt that had a 3D view that used geomarkers and displayed them as you rotated the phone with the camera view leaking through it? I believe it was L*something.


I believe you're thinking about Layar. https://www.youtube.com/watch?v=b64_16K2e08


That was it. Thanks.


Not sure about Loopt, but Yelp did it in 2009.


Eventually we are going to fall back to a Google Glass-style device once people realize how stupid it is to have to walk around holding a phone up.


Hasn't Yelp's app implemented this feature (called Monocle) for half a decade now? Could anyone explain the difference?


I remember that (haven't used it in years though); it was a big deal when it launched. I think the big deal here is that any reasonably skilled iOS dev could write this in a few hours, whereas Yelp certainly spent more time and money doing it. So the real news is that the tools have finally reached a point where doing useful AR work isn't difficult.


Is this a trick, or is geolocation on iPhones better than the 5m typically quoted?


I generally saw 1-3m accuracy when I was trying out ARKit with some GIS data, YMMV:

https://twitter.com/bFlood/status/888485889248157697

That said, when using worldAlignment = .gravityAndHeading, any location inaccuracy when the ARSCNView starts up will throw off the AR illusion, sometimes considerably. I hope apps will be able to correct mid-ARSession when better location data comes in.
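For context, a minimal sketch of the setting being referred to; `sceneView` here is just an assumed ARSCNView created elsewhere.

    import ARKit

    func startGeoAlignedSession(on sceneView: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        // With .gravityAndHeading, +x points east and -z points true north at
        // session start, so any compass/GPS error at that moment is baked in.
        configuration.worldAlignment = .gravityAndHeading
        sceneView.session.run(configuration)
    }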


Super interesting! What do you mean by better location data being detected? Is that something specific to iOS when you ping for location?


>relative to a nearby iBeacon

Where is the iBeacon in the parking lot?


There is no iBeacon (not sure where you're quoting that from?); it's just spatial data (lat/lons) projected into the SCNScene coordinate space (which is relative to the location and heading of the device when the ARSession starts).
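A hedged sketch of that projection: a flat-earth approximation that only holds for nearby points, assuming worldAlignment = .gravityAndHeading so +x is roughly east and -z roughly north.

    import Foundation
    import CoreLocation
    import SceneKit

    // Convert a nearby lat/lon into scene coordinates relative to the location
    // the device had when the ARSession started.
    func sceneCoordinate(of target: CLLocationCoordinate2D,
                         origin: CLLocation,
                         altitude: Float = 0) -> SCNVector3 {
        let metresPerDegreeLat = 111_320.0
        let metresPerDegreeLon = 111_320.0 * cos(origin.coordinate.latitude * .pi / 180)

        let north = (target.latitude - origin.coordinate.latitude) * metresPerDegreeLat
        let east  = (target.longitude - origin.coordinate.longitude) * metresPerDegreeLon

        return SCNVector3(Float(east), altitude, Float(-north))
    }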


>Core Location provides services for determining a device’s geographic location, altitude, orientation, or position relative to a nearby iBeacon.

https://developer.apple.com/documentation/corelocation


As the quote states, it can (determine the geographic location, altitude, orientation) OR (determine position relative to a nearby iBeacon)


OK, yes, an iBeacon is one possible way CoreLocation determines a device's location. In this case there was none, only GPS (and I guess Wi-Fi is used as well).


Most of those buildings are pretty far away from the camera, so it doesn't seem like it would need to be that accurate.



I think the way GPS apps have resolved this is by using your bearing, comparing it to the lanes on the road, and moving the route to match where you are actually moving. This might need some finer adjustment based on where your camera is pointing and your bearing.
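A toy version of the snapping step (working in local metric coordinates; a real implementation would also weigh the device's bearing against each segment's direction):

    import CoreGraphics

    // Project a raw GPS fix onto the nearest point of one route segment.
    func snap(_ p: CGPoint, toSegmentFrom a: CGPoint, to b: CGPoint) -> CGPoint {
        let ab = CGVector(dx: b.x - a.x, dy: b.y - a.y)
        let ap = CGVector(dx: p.x - a.x, dy: p.y - a.y)
        let lengthSquared = ab.dx * ab.dx + ab.dy * ab.dy
        guard lengthSquared > 0 else { return a }
        // Clamp so the snapped point stays on the segment.
        let t = max(0, min(1, (ap.dx * ab.dx + ap.dy * ab.dy) / lengthSquared))
        return CGPoint(x: a.x + t * ab.dx, y: a.y + t * ab.dy)
    }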



