> Join a Spatial meeting from HoloLens, MagicLeap, VR, PC or Phone
I think cross-"platform" support is a really cool viral feature.
At first it's just going to be one guy leading the meeting with real full body tracking
(think meetings with architects or mech engineers where you might want to discuss and move around 3d models)
Anyone else working remotely can facechat in with their phone (w/ accelerometer & gyro, or ARKit/ARCore) or Skype in on desktop.
But every time they want to discuss something that's outside a non-VR/AR user's fixed field of view, the dude leading the meeting will have to rotate the model for them. They will feel left out and eventually will want to buy their own AR/VR setup.
(or more realistically they will preshare their cad files first ... but the above still sounds like a plausible future)
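To make the phone case concrete: even without ARKit/ARCore, mapping the gyro's orientation onto a virtual camera gives a basic "magic window" into the meeting. A rough sketch of the idea (the angle conventions and names here are illustrative, not anything from Spatial):

```python
# A rough sketch of the phone "magic window": map gyro-derived yaw /
# pitch / roll onto a virtual camera so turning the phone pans the view.
# Angle conventions are illustrative, not Spatial's actual API.
import numpy as np

def camera_rotation(yaw, pitch, roll):
    """World-to-camera rotation from device angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])   # roll about y
    return (Rz @ Rx @ Ry).T  # transpose: camera-to-world -> world-to-camera

# Each frame: feed the latest sensor readings into the renderer's view matrix.
view = camera_rotation(yaw=0.4, pitch=-0.1, roll=0.0)
```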
Looks like a serious attempt at an ambitious concept. I love how the video showcases what seems to be an actual working prototype, with its capabilities and imperfections – as opposed to the usual trend of AR 'demo' videos that are actually just mockups, and completely unrealistic ones at that (as seen with Google Glass, Pokemon Go, and Magic Leap, just from memory.)
I'm curious about the hardware. Is there a base station or two hiding in a corner somewhere? If there is, why isn't head tracking accurate enough to prevent "sway" of virtual objects with respect to real-world ones, which seems to be visible in the video? But if not, how does it capture the position and pose of the user's hands? In any case, what kinds of sensors are being used?
Unfortunately, when watching the video, what really stands out is that the 3D "ghosts" of the other participants have juddery, unsmooth motion. Surely it couldn't hurt to add a bit of interpolation? It would increase latency, but not by much, given that a user's view of a different user's body pose is not especially latency sensitive.
Edit: On second look, it seems like the HMD is just a Magic Leap One, though that doesn't answer the question of whether there's other hardware in the room.
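To make the interpolation suggestion concrete, this is roughly what I mean: buffer each remote user's timestamped poses and render a fixed ~100 ms in the past, blending the two samples that bracket the render time. This is generic snapshot interpolation, not a claim about how Spatial works:

```python
# A minimal sketch of the interpolation suggested above: buffer timestamped
# pose samples per remote user and render ~100 ms in the past, blending the
# two samples that bracket the render time.
import numpy as np

DELAY = 0.1  # seconds of added latency traded for smoothness

def nlerp(q0, q1, t):
    """Normalized lerp between quaternions; fine for small angular steps."""
    if np.dot(q0, q1) < 0:
        q1 = -q1  # take the short way around
    q = (1 - t) * q0 + t * q1
    return q / np.linalg.norm(q)

def sample_pose(buffer, now):
    """buffer: [(timestamp, position(3,), quaternion(4,)), ...], oldest first."""
    t = now - DELAY
    for (t0, p0, q0), (t1, p1, q1) in zip(buffer, buffer[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (1 - a) * p0 + a * p1, nlerp(q0, q1, a)
    return buffer[-1][1], buffer[-1][2]  # render time past newest: hold last
```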
There's no other hardware required. It's just HoloLens and Magic Leap (plus other devices like iPhone or laptop).
HoloLens and Magic Leap both have hand tracking, so the device knows if you are putting your hand up (and has some sense of its position and orientation), plus head pose.
"Sway" on device is actually more minimal than what you're seeing in the video.
I love that people are investing time and money to make this happen. From my point of view, remote collaboration might make more sense in VR than in AR, though. AR tech is much harder than VR, and with today's AR headsets this kind of thing is not really usable. However, with today's VR tech, it's fun!
What is the rationale for doing this in AR? Is the physical location important for remote collaboration?
We are implementing a similar concept using VR. To us (http://stirlinglabs.com), the choice between AR & VR is simply whether the object of the discussion fits into the room or not. For our customers creating massive structures such as ships, hospitals or airports, a tiny scale model sitting on a desk in AR isn't particularly useful or interesting. They are more interested in swapping their existing room for a space in their new structure.
In lots of situations, there are 3 or 4 people in one location and one or two remote people joining. With current VR tech, in order to make this happen you'd have to have 4 beefy laptops with Windows MR headsets in the meeting room. People wouldn't be able to see their local collaborators' faces, drink a glass of water, or sit in a chair. With AR, the local people can all see each other and conduct a meeting, while also seeing AR people at the table with them.
Additionally, we have all of these devices (laptops, phones) that we'll continue to use for the foreseeable future. VR forces you into a single mode where using your phone is impossible. AR lets you do more of a hybrid approach.
Long term I agree that VR will have more benefits (especially if you had hybrid AR/VR glasses), but in the near term it has too many drawbacks to be useful for collaborative productivity sessions like this.
You can just have Oculus Go's, no need for beefy laptops ;-) Everyone would just see an avatar of the other people, remote or local.
If you absolutely need to drink water during a meeting, you can quickly lift your headset with one hand and take a sip, while still hearing in your headphones what's going on.
You can do VR with a hybrid approach too, with the same implications as calling into an AR meeting from a PC or phone.
The only upsides I see with AR currently are:
- Easier to speak to people in the same room (but not necessary if half the attendees are remote anyway; in fact, that would level the playing field)
- Work on your computer during the meeting (which you shouldn't do anyways =P)
Ah, yes. Good point, I forgot about standalone headsets. Although one drawback is that everyone would have to remain seated if it's 3DoF like the Go.
Standalone inside-out headsets could work (Quest), but people would still bump into each other since they wouldn't have a good sense of where the other headsets are.
These drawbacks would be OK for some meetings, but dealbreakers for others.
In many of the meetings where you could remain seated, there's a requirement to screen share from a laptop (as you mention) that likely won't go away in the near term.
- Not sure what kind of meetings you attend where it's essential that people stand up, but I would estimate that for most meetings you can remain seated (citation needed, of course)
- With the Rift you can already access your desktop. It's only a matter of time until they launch that feature for the Go.
Good points. There are definitely a lot of cases where VR could work.
I think it will come down to how willing people are to not see the outside world, plus how tied people are (at least in my little bubble) to Mac laptops and Google Docs / web apps designed for laptops. I could imagine a mobile platform like the Quest or Go having a web browser and a keyboard, but you'd need one of those Logitech keyboards to even be able to see it to take notes.
Again, to your point, there are lots of meetings where you are just talking and where a simple mobile-VR web browser could suffice. For example, I'd sometimes rather watch a movie with someone remote using a Go than a HoloLens. There are also lots of cases where everyone is remote and you want to control the environment they come into (like a training session or school :) where it would probably make more sense too.
Re: meeting types, the most valuable use cases for XR meetings right now are likely not 1:1 FaceTime-style meetings, because it's difficult to replicate that personal connection in XR right now (video is better in some ways). One argument is that a more valuable use case would be brainstorming meetings (https://www.google.com/search?safe=off&rlz=1C5CHFA_enUS696US...) where people are often pretty active.
Yeah, I can see your point when people need to use Docs or other software that currently only exists as flat-screen apps. Thinking about the near future, I'm pretty sure we'll see VR ports of these, along with new input devices and interaction methods that will work great in VR.
For brainstorming, just spawn a huge whiteboard in VR. It can be huge. With AR you're limited by the size of the room you're sitting in :-)
It'll be interesting to see how this feels for collaboration compared to current video chat. It clearly has the advantage of allowing people to 'interact' with things in the space, but you lose the feedback aspect of people's facial expressions.
With video, especially in a group, you can see if people are following what you say: nodding or shaking heads, confused looks, or disengagement (or falling asleep). In voice-only discussions, by comparison, sometimes people won't ask a question when they're confused, presumably thinking they're the only one and not wanting to waste others' time and/or embarrass themselves.
Maybe facial expressions can be emulated -- but there's a very big uncanny valley to get over to make that usable.
I think the initial use case of this will be with everyone’s videos in rectangles floating in space. People are already comfortable with video. It’s much more natural to glance at a shared workspace and then up at them, than it is in a current call where their video gets covered by another window on your screen.
We've been experimenting with using VR for remote meetings and it really does work in making physical presence more relevant. I think Spatial is really onto something but they can simplify the product even more. My main "immediate" use cases are actually
a) meetings while taking notes or dealing with Trello, and
b) pair programming.
So, all I want is to stream my desktop and some aspects of my physical self (face and hand movements are enough). There is no real need to bring aspects of individual desktop apps into the immersive space (as they show with notes or 3D design tools); just stream my desktop and let everyone see it. For pair programming, this would need to work seamlessly for hours at a time.
P.S.: Incidentally, AR vs. VR is less of an issue here; however, if it were real MR (i.e. I could see my real laptop screen exactly where it is) and didn't have to stream my own laptop, that would be great.
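To sketch what "just stream my desktop" could look like: grab frames and push them to a relay the meeting clients subscribe to. Assuming a hypothetical websocket endpoint (a real system would use WebRTC/H.264 rather than JPEG frames over a socket):

```python
# A minimal sketch of the "just stream my desktop" idea, assuming a
# hypothetical websocket relay at WS_URI that meeting clients subscribe to.
# Uses the mss and websockets packages; a production system would use a
# proper video codec (WebRTC/H.264) instead of per-frame JPEGs.
import asyncio, io
import mss
import websockets
from PIL import Image

WS_URI = "ws://example.com/desktop-feed"  # hypothetical relay endpoint

async def stream_desktop(fps=10):
    async with websockets.connect(WS_URI) as ws:
        with mss.mss() as sct:
            monitor = sct.monitors[1]  # primary display
            while True:
                shot = sct.grab(monitor)
                img = Image.frombytes("RGB", shot.size, shot.rgb)
                buf = io.BytesIO()
                img.save(buf, format="JPEG", quality=70)
                await ws.send(buf.getvalue())
                await asyncio.sleep(1 / fps)

# asyncio.run(stream_desktop())
```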
The video is real. It’s a recording of the actual software being used live.
Things that are different IRL vs. the video:
1: The experience IRL is way cooler. We use it internally for meetings. It's amazing to take a photo on your phone and see it show up in the air in front of you. It's awesome to see someone rez in and start walking around. It's fun to put your hands up, start talking, and watch 3D models and images show up. The playfulness and interactivity are much more exciting in person than in the video.
2: The field of view on existing AR hardware is limited. Everyone working in AR hears this to the point of exhaustion, but anyone trying AR for the first time can’t help but notice it. So, the experience is different than in the video because you can only see AR through a smallish view. This will, of course, improve soon.
3: The video UI isn't rendered additively and has some added zing that is less performant in real life (e.g. higher antialiasing).
A powerful use of low cost AR/VR, which I don't see a lot of people pursuing, would be to 3D render building plans while displaying feeds from smartphone cameras as "viewports" inside the rendered building. This could be used to very quickly get information from people onsite to experts and decision makers, with more easily displayed and digested contextual information. This could also be used in the building of large machines as well.
A lot of the challenge in this sort of technical communication is conveying the Point of View of the person onsite. We have the technology now, so why not just render it?
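The rendering side of this is not much more than placing a textured quad at the onsite phone's reported pose. A rough sketch, assuming pose comes from ARKit/ARCore and the camera's field of view is known (all names here are illustrative):

```python
# A sketch of the "viewport" idea: given the onsite phone's pose (e.g.,
# from ARKit/ARCore) and its camera's field of view, place the live video
# on a quad one meter in front of the camera inside the building model.
import numpy as np

def viewport_quad(cam_pos, cam_rot, h_fov_deg=60.0, aspect=16/9, dist=1.0):
    """Return the 4 world-space corners of the video quad.
    cam_pos: (3,) camera position; cam_rot: (3,3) camera-to-world rotation."""
    half_w = dist * np.tan(np.radians(h_fov_deg) / 2)
    half_h = half_w / aspect
    # Corners in camera space (camera looks down -Z, OpenGL-style)
    corners_cam = np.array([
        [-half_w, -half_h, -dist],
        [ half_w, -half_h, -dist],
        [ half_w,  half_h, -dist],
        [-half_w,  half_h, -dist],
    ])
    return corners_cam @ cam_rot.T + cam_pos  # rotate into world, then offset
```

Texture that quad with the phone's video feed and anyone viewing the model sees exactly what the onsite person sees, anchored where they're standing.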
There are companies that have been doing this for a while. I think you are slightly overestimating the capabilities of the technology, but largely overestimating the demand for this stuff over the way things are traditionally designed/built/inspected. It's not a killer use case that is forcing architecture/design/manufacturing firms to adopt the tech or die... it's currently still in the gimmick stage, but it's getting better, slowly but surely.
> There are companies that have been doing this for a while.
Can you provide links? I'd like to see what they're up to.
> I think you are slightly overestimating the capabilities of the technology,
No. I know how janky smartphone GPS is from personal experience, especially inside a structure. Big companies with deep pockets are working on Vernor Vinge's "localizers," however.
> but largely overestimating the demand for this stuff over the way things are traditionally designed/built/inspected. It's not a killer use case that is forcing architecture/design/manufacturing firms to adopt the tech or die... it's currently still in the gimmick stage, but it's getting better, slowly but surely
As you are implying, the demand is closely related to the jankiness event horizon. It's just like smartphones and tablets: they existed many years before the iPhone and iPad, but before those, only propeller-heads wanted them. There's a point at which the technology matures enough that it doesn't get in the way and is actually nice to use. At that point, it will explode.
Check this out if you're in the Boston area; I've been to a few AR-focused meetups and they're pretty good at surfacing what is out there and state of the art: https://www.meetup.com/BostonAR/events/255921064/
(and/or email the organizer to get slides/company names if you're not in Boston)
One of the apps mentioned there, Pair, is run by Andrew Kemendo who is an active participant on HN, started his first AR company in the space in 2011 and is very knowledgeable, check his stuff out: https://news.ycombinator.com/user?id=AndrewKemendo
In the engineering space, PTC/Microsoft (Vuforia/HoloLens) and UpSkill (founded 2010) jump out in my mind as leaders. PTC has been doing CAD modeling and full PLM/ALM tooling forever and started looking at AR applications in the 2013/2014 timeframe to build on their modeling, maintenance, and workflow-tracking expertise.
A common thread is that many of these companies were founded in 2010/2011 and have rebranded around the 2015 timeframe to focus on different business problems and market their solution as "AR"
And yeah, I agree: the jankiness of the hardware is still a huge sticking point. Nobody has nailed the comfortable glasses form factor with enough FOV/battery/processing power to really make AR take off.
I'm having a bit of trouble grokking how this experience works for each participant.
I get the AR piece: people who are physically in a location see their remote peers magically floating in the room with them while they interact with virtual objects. That's cool.
But what do the remote people see? They don't have the benefit of AR, so I'd imagine they don't experience the real-world environment. Are they sitting at a desk with a headset on, or simply viewing a 3D world like they're playing The Sims?
From what I can tell, both sides are using AR goggles. They just don't show the goggles on avatars of remote peers, which aren't real video but just posed 3D models customized to (somewhat) match the corresponding user's real appearance.
Right, so I guess my question is: the remote users don't have the benefit of being in the same room as the rest of the group, and the AR people/objects likely wouldn't "fit" into their physical world. Like in the initial demo video (2 guys sitting, 1 woman standing near a Kanban board): what does she see from her perspective?
It feels to me like the "host" participants can use AR but anyone remote would need to be using VR as the physical environment mapping wouldn't make any sense for them.
It's not explained clearly, but I believe the same AR elements are mapped into different environments. One user's wall may not be in the same place as another's, but they would both have the same content.
Correct. Walls are mapped to each other, so if you are looking at your wall in location A and putting a shared screen there, it is also on my wall at location B.
There are some gnarly issues around this, of course, but Spatial has a pretty advanced approach that works in most cases.
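One plausible mechanism for that mapping (a guess, not Spatial's actual code): store shared content relative to a wall anchor, then re-express it against the corresponding wall in each room.

```python
# A guess at the wall-mapping mechanism: store shared content relative to
# a wall anchor, then re-express it against the corresponding wall in each
# room. T_* are 4x4 homogeneous transforms (pose of X in Y's frame).
import numpy as np

def remap_to_room_b(T_content_A, T_wallA, T_wallB):
    """Content posed in room A's world frame, re-posed into room B's.
    T_wallA / T_wallB: each room's wall anchor in its own world frame."""
    content_rel_wall = np.linalg.inv(T_wallA) @ T_content_A  # wall-relative
    return T_wallB @ content_rel_wall  # same offset against room B's wall
```

So a screen pinned 1.5 m up wall A lands 1.5 m up wall B, even though the two walls sit at different world coordinates in each room.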
It's quite an interesting and ambitious concept, but I really hope they fix the avatar designs. I talked to some people around the room and the general agreement is that it's stuck in a sort of "uncanny valley" where users look like eerie ghosts with only half of a torso and no legs.