To me this looks amazing, and although LEAP seems to be pushing for you to get rid of your mouse/keyboard, personally I think this is probably best as an addition to them. Imagine if you had one of these built into the keyboard.
You're typing an email and need to add a location, so you switch over to Google Maps, hands off the keyboard as you manipulate the map around to get a decent view, 'tap' the address bar to copy it, swipe left to switch back to the email program, tap again to paste, and boom, carry on typing.
You wouldn't need to be using it all the time for it to be extremely useful.
We keyboard jockeys sometimes forget how much faster something like this would make things for users who don't know many shortcut keys!
As much as I would love toying with this, it's really all about implementation.
There are a lot of issues to consider: what if I mean to swipe one thing but the system recognizes another? How is that handled? Does it account for the position of my head and the perspective I'm seeing?
I agree with the above comment that this would ideally be an addition to our growing arsenal of HIDs: keyboard, mouse, touch mouse, joystick, Wacom tablets, and others. Most of these aren't intended to replace one another; they're complementary to the other HIDs.
Imagine a surgeon who can't touch a surface with their hands due to hygiene concerns using it to manipulate various images/scans of the patient on the table in front of them.
Well yes. If multiple solutions gain traction, it just proves that there is a market for this kind of thing. It's not about being first, it's about executing well. If Leap is more accurate at tracking movement, it could prove more suitable than the Kinect.
The problem is one of digital vs analogue, and ballistic vs servo. On a keyboard, you either press or do not press a key, and assuming that your finger is aimed somewhere in the right box, you need no visual feedback to see where it is and which thing you're touching. You also get the tactile feedback of "yes I hit the key", "I hit the key but it was at the edge, better recalibrate", etc.
So while there is a basically continuous range of positions you can put your fingers into (and with fifty or so degrees of freedom), the differences between many similar positions are subtle and require feedback for you to see which of them you're in, looping in your visual system and slowing down the interaction considerably. Which is what you want for certain sorts of continuous-ish interactions, and not at all what you want for certain sorts of digital-ish interactions.
All of which is to say, this sounds suuuuuper cool, but it's not going to replace the keyboard.
Agreed. It reminds me of when the iPhone introduced people to the idea of an on-screen touch keyboard. Many people said they didn't like it and would keep using their Blackberries. How's that working out for RIM these days?
The point is that if the technology works (which is the important question, IMO), then users will adapt and embrace it.
The touch screen keyboard is still inferior to a hardware keyboard for typing. There are other reasons that companies produce more touch screen only devices and people buy them. Style, cost, simplicity, larger screen etc.
Language is a powerful way to interact with systems. With the mouse, we got first a single primitive - point and click. That vocabulary expanded to drag, but still that's about it. With touch interfaces, we got swipe and pinch thrown in. The kinect expanded that with body gestures. Following that trend, it looks like leap vastly expands our visual interaction vocabulary, or at least has the potential to do that. Considering all the things human hands have made in history, it would indeed be a shame if our computers thought of us as one fingered two dimensional entities!
Not to be cynical, but this reminds me a little of Alan Kay's comment:
"""
By the way, Sketchpad was the first system where it was discovered that the light pen was a very bad input device. The blood runs out of your hand in about 20 seconds, and leaves it numb. And in spite of that it’s been re-invented at least 90 times in the last 25 years.
"""
I think there will be some things it's great at and some it's terrible at. Here's one for free: how about using this to create the Rosetta Stone of sign language? Or a portable sign-to-speech translator? (These are hard problems for other reasons, but this thing brings you a lot closer.)
Think of it like this: Alan Kay isn't wrong, but you could say the same thing about any input. The mouse is a very bad input device. It takes forever to use it for on-screen typing. The keyboard is a very bad input device. It can't tell how hard you're hitting each key when you do musical typing in GarageBand. The microphone is a very bad input device. Voice control is way slower than just clicking the menu item you want ...
If this thing is real, then within a couple of years there will be a dozen reasons to refuse to buy a computer without one.
(I like the idea of a pressure-sensitive keyboard. Could be useful for music. Probably hard to engineer in a way that would be useful and which wouldn't compromise button input. Blue-switch cherry keyboards are annoying to type on. Still, interesting idea.)
You're papering over the problems with relativism.
The keyboard is rapid to use and flexible. Although it carries the possibility of physical health problems with RSI, with light pen interaction you have the certainty of pain and fatigue. Sign speakers don't need sign-to-speech; there are keyboards already. There have been multiple attempts at text to speech as well, and the result is more fiddly, less flexible, and less rapid than what you can get with a keyboard.
There could be an argument that keyboards are complicated and have a learning curve. But computers are the super-tool of our age. Why would you not learn to use a tool that combines great power with flexibility?
I think we still have discussions about alternative user interfaces that hark back to the way humans interact with each other because most of the population are not yet expert keyboard users. This will change. Once the developed world is flush with expert keyboard users, user interfaces will go back to putting greater emphasis on them.
I don't think it's one or the other. Has the mouse replaced the keyboard? It's an additional way to interact.
Hell, I could have used it a little earlier today - I was bleaching my hair and realized my machine wasn't playing any music. Currently my options for remedying this are (1) poking at the media keys on my keyboard (and getting bleach all over them), or (2) picking up my Wacom stylus, going to the Dock to bring up iTunes, and hitting its play button, thus smearing bleach EVERYWHERE. With one of these sitting on my desk and wired into some global hotkeys, I'd have had the additional option of waving my hand in a particular way.
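The glue code for that would be tiny, too. A rough sketch of what I mean, where read_palm_x() is a made-up stand-in for whatever the actual SDK ends up exposing (nobody outside the company knows yet) and toggle_playback() is a stub for your media key or global hotkey of choice:

    import time

    SWIPE_DISTANCE_MM = 120   # how far the palm must travel sideways...
    SWIPE_WINDOW_S = 0.4      # ...and how quickly, to count as a swipe

    def read_palm_x():
        # Hypothetical: current palm x-position in mm, or None if no hand is seen.
        raise NotImplementedError

    def toggle_playback():
        # Stub: wire this to a media key, AppleScript, global hotkey, whatever.
        print("play/pause")

    def watch_for_swipe():
        start_x = start_t = None
        while True:
            x, now = read_palm_x(), time.time()
            if x is None:
                start_x = start_t = None             # hand left the field of view
            elif start_x is None or now - start_t > SWIPE_WINDOW_S:
                start_x, start_t = x, now            # (re)start the timing window
            elif abs(x - start_x) > SWIPE_DISTANCE_MM:
                toggle_playback()                    # fast sideways travel = swipe
                start_x = start_t = None
            time.sleep(0.01)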
I'm also thinking I totally want to incorporate one of these into the media PC I'm working on based on a Raspberry Pi, a pico projector, and half of a rubber cat[1]. Slouch on the couch, wave my hands in precise gestures to control it instead of having to bring up a mouse somehow.
Using it for long, sustained periods? Nah. Using it now and then? Oh yeah.
[1] the whole thing is disguised as Nyan Cat, with the projector poking out of its snarling mouth and the plaque it's mounted on painted like a Pop-tart.
One change since that time is that there is way more flexibility in the placement of displays.
This also isn't a light pen. You do not need to hold it close to the display.
Finally, there is way more casual computer interaction now. A light pen could be fine for short interactions (especially if you do not have to search for the pen first).
Painters are able to operate a brush for full working days in the studio without the elbow support that a desk supplies here. It seems if the benefit is great enough people can work up the supporting muscles through repetition.
In fact, if a painter (or user of Leap) were to continually rest their elbow on some support, they could compress the ulnar nerve and cause cubital tunnel syndrome. If you've ever hit your "funny bone," that's the ulnar nerve.
Heck, consider how light a painter's brush is in comparison to a fencer's epee (which only weighs about 1 lb at the most, but still, try holding one for an hour without having any experience!)
These devices should be used with the monitor turned into a drafting-table like angled work surface instead of mounted vertically in the usual way. If you can work touch/resting into the screen, you have a much more humane setup.
I'm sad the word 'robot' hasn't appeared in this thread yet. Let's correct that.
Visual SLAM is great for medium distances, but the point clouds aren't really that dense and are slow to update. Also, the LIDAR to make the point clouds is stupid expensive.
Add one of these guys onto your robot and you've got a really cool set of 'whiskers.' Short range, highly sensitive, super fast update. I'd love to put several of these on a robot and use them to give it a sensitive field surrounding its body.
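To make the 'whiskers' idea concrete, here's the kind of dead-simple check I mean (a sketch only; the (x, y, z) points are assumed to come from the device already transformed into the robot's body frame, which is exactly the part we don't know yet):

    import math

    SAFETY_RADIUS_M = 0.15  # size of the "whisker" bubble around the robot body

    def nearest_obstacle(points):
        # points: iterable of (x, y, z) tuples in the robot's body frame, in metres.
        # Returns (distance, bearing) of the closest point, or (None, None) if empty.
        best = None
        for x, y, z in points:
            d = math.sqrt(x * x + y * y + z * z)
            if best is None or d < best[0]:
                best = (d, math.atan2(y, x))
        return best if best is not None else (None, None)

    def whisker_reflex(points):
        # Returns the bearing to steer away from when something enters the bubble,
        # or None when the bubble is clear; the caller can also scale velocity down.
        d, bearing = nearest_obstacle(points)
        if d is not None and d < SAFETY_RADIUS_M:
            return bearing
        return None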
Depending on how open the software and hardware are this will be a great addition to the robotics community.
I work on adaptive mobile robots as part of my research, and I'd be very interested to see how the LEAP compares to the Kinect in this area. I submitted a developer kit request, so maybe I'll get to find out.
Also, from the Ars Technica post on LEAP:
"The company says the breakthrough in resolution comes not from the hardware, which consists of relatively standard parts, but from what CTO David Holz calls 'a number of major algorithmic and mathematical problems that had not been solved or were considered unsolvable.'"
I'm conflicted by that statement. As a current academic, I hope they publish these supposed breakthroughs, as hiding them behind trade secrets makes me sad. As an entrepreneurial-minded person, however, I understand the desire for competitive advantage.
Likewise, I submitted a dev request. I'm already working with Kinect/Primesense tech to get point clouds and do gesture recognition work, but this could be a cheaper and potentially better option. At this point however, who knows? There's just not enough info on the capabilities, features, and API.
Even putting aside concerns about openness, the mere fact that they are talking this early about an app store is evidence that they're not focused on delivering value. Apple did not start talking about an app store for iDevices until they had millions of satisfied customers. I mean, why not use the Mac App Store? Maybe they have a good answer to that question, but if so, they should tell us what it is.
As long as the platform remains open to hackers, perhaps an app store is a good thing for something like this.
There's nothing more infuriating than a cool piece of tech with half-hearted vendor-supplied software that requires lots of kludging to use outside its thinly defined parameters.
An app store done right could encourage the creation and easy distribution of novel uses of the kit to the non-hacker community.
Actually, the app store itself is not "open", in the sense that Google Play is proprietary. Unless by open you mean anyone can submit an app. But the parent said "how open the software is".
This is very cool technology. The company was formerly "Ocuspec". One breakthrough is using very inexpensive hardware (<$5 at RadioShack) to get that sub-mm resolution. At the other end of the spectrum, they can cover a football field (and more in the future). What they are showing now is just the beginning. Kudos for rolling out with an SDK. I can only imagine what sorts of applications developers will dream up.
Waiting for the load to drop so I can try to get a preorder!
Question: How can you render the "other side" of the hand at the 51+ second mark? If this is indeed possible, that's quite a remarkable technology you have.
I suspect it's an illusion and that does not represent their true 3d point cloud. It's probably a skinned 3d mesh of a hand which is then moving in sync with the detected hand.
Their webpage suggests they can handle multiple devices connected to the one PC, so I was wondering whether they had another sensor out of shot above the scene pointing downwards.
Actually - scrap that. A better hunch is that it's not structured light at all, but actually an electric field sensor. See this Quora answer (disclaimer, by someone who "knows shit all about this"): http://www.quora.com/Leap-Motion/What-is-the-technology-behi...
What is the API of the dev kit like? Does it give the programmer events translated into hand kinematics (like, 'right index finger pointing forward') or is it just a cloud of points?
How will you license the technology for others to reproduce? Will you be aggressive with licensing, trying to profit, or will you be permissive and partner with other manufacturers to make this truly ubiquitous?
Please give us some technical details to satisfy our curiosity. Perhaps I missed it on the website, but I didn't notice any mention of how it functions (IR? Sound waves?), what sort of range of distances it works in, etc.
Very, very inconvenient. Writing in the air is completely different and considerably more tricky than writing on paper. Physically more demanding too. It's one of those things that sound nice in theory but aren't really practical.
Yup. ASL uses a pretty large amount of space and it's not clear to me that this thing can resolve every gesture. For instance, what happens when one hand occludes another from the Leap's perspective? Can it know that my thumb is up if the rest of my hand is hiding it?
But we can imagine a kind of ASL shorthand, like the original Palm Pilot's 'Graffiti'. Or, maybe if you buy two or three, you can set them up to combine perspectives, giving them more insight into the full 3d space.
That could be useful to deaf people and maybe could also serve as a new form of shorthand for people who are writing long texts on devices without keyboards.
Writing in air is flashier for press videos, but we understand this bit well. We can track your pen as you write on normal paper too. We'll post a video of this sometime shortly.
You're right. I just tried it (not with the Leap hardware): the inability to "lift up" off the paper without leaving the sensor zone is weird. There's no haptic feedback to tell you when you're on the "paper" and when you're not.
You still need to lift the marker to separate the words, and to touch it back rather precisely so the next word doesn't end up an inch below the first one.
I disagree with you on the details a bit, but I think viewing this as good at some things and bad at others is far more constructive than arguing that this should replace a mouse, keyboard, etc.
However, I would make the good/bad list a bit more general:
* good for manipulating UI elements that represent 3D
* bad for manipulating UI elements that represent 2D
Maps and camera interactions (CAD) are perfect examples of things that represent 3D elements. Short games are another area that can represent 3D - longer games might also work well, but the user is likely to get tired of waving his/her arms around.
Much of what we do on computers today is strictly 2D. Coding, word processing, most web browsing, email, etc. Pencil tools/drawing tools are similarly usually just a 2D activity, so using a 3D-capable tool and reducing your movements to 2D doesn't really make sense.
I'm not sure how they're handling variance in user perspective, but assuming they've got that figured out, if you were to couple this with a stereo vision setup and some form of haptic feedback a lot of companies doing 3D design (both CAD and 3D artwork) would eat this up. It won't replace a keyboard and mouse, but it would provide a much more "immersive" way of interacting with the media.
One of the things I've dreamed of using gestures for is controlling behavior (scrolling, navigation, etc.) in a second monitor without changing focus. I hope somebody implements that.
I’d love to get one of these and play with it. Will the SDK and spec for talking to it be freely available after the initial batch of preorders and free dev kits?
As a counterexample, Emotiv gave a TED talk a while ago showing off a headset that lets you control your computer with your mind. When you visit their website, you discover that you can only develop with a $500 "developer edition" headset that comes with a single, nontransferable license to use the SDK (additional licenses are $99). The consumer model of the headset only runs approved applications.
Dev Kits ship for free to 20,000 developers in 1-3 months.
Pre-orders are for consumers at $70 and ship this winter.
The idea is to give all the hackers maximum access to create awesome apps and then deliver a healthy shiny ecosystem to the consumer. Also, we'd like to see a larger shift towards people creating things, so encouraging early adopters to get aboard the coding train is a positive trend.
It's a huge new interaction space, and we're looking for innovators to explore it!
That’s awesome. Are there any differences between the consumer model and the dev kit, other than the price (and will people who bought the consumer model be able to develop with it later)?
Other people have asked about openness… will there be any kind of control over what programs can use it (i.e. do they need to be approved by you guys to work?)
Well, that's a big turnoff for a dev. Why not, instead of maximizing short-term profit, concentrate on the long term and make this system open, with certain limitations to protect your business?
Contrast with Android where every device is a developer device - one tickbox in the settings is all that is required. No accounts, registration, handing over money, authorizing devices etc (yes I'm looking at you Apple).
The first thing I thought after watching the video was how much money will they make with pre-orders on this site and how much would they have made with a kickstarter campaign?
I really can't imagine using this for a longer period of time. Maybe as an extension of the keyboard and mouse/trackpad that you would use to scroll through pages when researching something, or stuff like that. You still need a keyboard to type as far as I can see.
That being said, I really like the idea and would love to know the tech behind it.
I can't see why there is so much negativity as to end use applications. I can see lots of potential for this - slideshow presentations, laptops (goodbye annoying trackpad), as well as a stylus-and-tablet replacement for designers, etc. etc.
I think the key, however, will be in the recognition of subtler gestures. If you can show me someone using two hands to type, then moving them not far from the keyboard to activate simple gestures for navigating a document, I'd be really sold that this is for everybody.
There's no perceptible latency in its response to gestures -- I'm very impressed assuming it's not a rigged demo. (In videos I've seen of the Kinect the software responds to a gesture only after a noticeable fraction of a second.)
- If this works as advertised, this company will never ship the product. They will be bought within months for a huge sum of money, even if they do not want to be bought. The reason for that is:
- Litigation, litigation! They will need deep pockets to defend themselves against patent claims.
Forget the desktop. With the sensor being this small, I can imagine hanging this from your neck and have gesture sensing anywhere, SixthSense-style. ( http://www.pranavmistry.com/projects/sixthsense/ )
I think I'll need to demo this unit before I purchase it. I remember getting burned in the early nineties by the Power Glove's cool commercial: http://www.youtube.com/watch?v=93iDhnBcMGo
Interfaces like this look cool in movies but your hands aren't 'designed' to be above your heart for an extended period of time. Now if you had something like a drafting table with a touch screen I'd be first in line.
Mount it above your monitor pointing down instead of setting it below pointing up. Basically, point it at your keyboard. Type, type, type, lift your wrist a couple inches, pinch, zoom, drag.
If you're an interface designer (I am), you should pre-order this thing. This will be a standard form of interaction in a couple of years and you should jump on it early and start figuring out the kinks. Too cool.
It really won't. Have you any idea how tiring waving your arms around like that would be all day?
There are so many use cases where it doesn't even work, which would require a complete rethink of how anything is presented on the internet. For example, how about HN comments where text is pasted as code and it scrolls horizontally? Target that and scroll it with this system. As soon as you have a single use case where a mouse and keyboard are more effective, you blow your value.
Hm, maybe. But that was my point with saying interface designers should hop on this: they design the interaction to be simple enough that it doesn't wear you out (at least, that's how I'd look at one facet of designing for this).
Moreover, it's not enough to just copy our current understanding of UI over to a form of input like this. Of course that's the natural inclination, but in reality, interfaces will change and adapt to things like this (meaning scrolling may not exist and a whole new form of pagination may be invented). The "how" is up to the designers.
More than anything, you really articulated my point by saying that "it really won't." You're right: as things stand in terms of interaction, this would become tiring. You just have to think of a way to make it not.
I'm familiar with the usual "arm-waving sucks" arguments against gesture-based inputs, but I was just wondering -- is there any reason this couldn't just replace the touchpad on laptops, maybe being integrated into the forward edge for a larger field of view?
They do claim sub-mm accuracy; maybe applications in the small are realistic.
So instead of arm-waving, think of rotating your hand just above the touchpad to rotate an object in 3D space, but briefly. And the touchpad would still work like a regular touchpad, but maybe you don't even need to touch it.
Sub-mm accuracy seems to imply that really subtle gestures could work.
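Concretely, the kind of mapping I'm imagining is trivial; a sketch, where the per-frame roll angle of the hand is whatever the (still unannounced) API would give you, and the deadband exists only to eat sensor jitter:

    def rotation_delta(prev_roll, curr_roll, gain=1.0, deadband_rad=0.005):
        # Turn the change in hand roll (radians) between frames into a rotation
        # to apply to the on-screen object. Sub-mm tracking is what would let
        # the deadband stay tiny, so even subtle wrist movements register.
        delta = curr_roll - prev_roll
        return 0.0 if abs(delta) < deadband_rad else gain * delta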
Not being able to un-touch your pointing device on your laptop would really suck. It'd have to be mixed with a capacitive plate that knows when you touch it.
Agree that it can't provide the precision we need for arbitrary complex actions.
Disagree about the physiological concern. Provided your elbows are resting on the table, it would be easy to get used to. Humans would adapt and it would be healthier than our current much talked about static postures.
I'd love something that was a combination of the two, a touch surface and something like this looking 'down' toward that surface at what my hands were doing. I could make typing like motions on the touch surface for typing. But more importantly mostly my hands would be resting on something rather than hanging out in front of me.
As a concept this is pretty awesome, and since it removes the need to touch your screen, I guess you can move the sensor to a more relaxed place. Still, I can see it getting tiring pretty quickly. Just like a standing desk, though, I guess you just get used to it over time.
Aside from end-use issues, the tech behind this is very nice.
Yeah, my guess is some kind of LIDAR type device and perhaps a mix of a couple of other things. A researcher friend of mine was musing over possible uses of LIDAR a couple of years ago and this was one of his end uses if my memory is correct.
I could of course be wrong, so if someone knows differently, please correct me.
I thought LIDAR was prohibitively expensive technology for use in consumer applications, but I could be wrong. I remember reading about it when Radiohead did their "House of Cards" music video with LIDAR tech.
Not to nitpick, but there is a stereo aspect to it. The IR projector is offset laterally from the IR camera, and this is critical to evaluating the depth, because the disparity of the IR dots is more extreme the greater the distance between the emitter and the camera.
It is actually exactly stereo, except one of the cameras is replaced with an IR projector; you basically do the standard stereo math as though the projector is a camera 'seeing' the image that is being projected.
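Concretely, the per-dot math is the usual triangulation; a toy version (the focal length, baseline, and example numbers below are made up for illustration, not the Kinect's actual calibration):

    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        # Standard stereo / structured-light triangulation: Z = f * B / d.
        # A dot that shifts less (smaller disparity) is farther from the sensor.
        if disparity_px <= 0:
            return float("inf")   # no measurable shift: beyond usable range
        return focal_px * baseline_m / disparity_px

    # e.g. with f = 580 px and B = 0.075 m, a 20 px disparity puts the
    # surface at roughly 2.2 m.
    print(depth_from_disparity(580, 0.075, 20))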
You must be thinking of something else. The Kinect has two cameras: one for the laser IR particle-filter type stuff, and then an RGB camera. It isn't stereo vision with two regular 2D cameras doing displacement math, however. It is looking at the projected grid of dots and seeing where the dots moved in order to create a depth map.
We asked one simple question: ‘What feel[’]s natural?’
Two or three hundred thousand[s] lines of code later
The Leap is a small iPod[ ]sized USB peripheral
Do you support [w]indows?
When do dev-kits ship[ ]
I've been searching all over for something like this as a potential mouse replacement to help with my finger tendinitis. I just pre-ordered. Also an interface designer, so I'm excited to see where this goes.
While waving your hand in front of a monitor looks pretty cool, it's definitely not going to work well for most work or extended use. Also, I think the reason touch-based devices have caught on so well is that we like the tactile feedback of dragging our finger across a defined plane.
I'd be interested to see what you could do if you projected an image onto a work surface to make that more interactive. Seems like it would be easier to draw or manipulate 2D things on a plane rather than trying to wave your hand in 3D space. (Image Manipulation, Graphic Editing, Maps, etc)
So could this be a Flutter.io competitor? Flutter.io is purely software driven. I feel Flutter will release an SDK/API as well at some point.
So I think intuition suggests that whoever can execute these best will likely succeed:
- Rich feature set to capture gestures.
- Simple API. Should be easy to integrate with 3rd party apps.
- Performance.
Leap already has an edge in that they are releasing an SDK; Flutter should follow quickly (hopefully). Flutter makes it easy to get the tech to people since it is pure software, but can they achieve rich gesture capture?
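For what it's worth, when I say "simple API" I'm picturing something on the order of this (purely my own sketch of what I'd want to see, not anything Leap or Flutter have announced):

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Gesture:
        kind: str                               # e.g. "swipe", "pinch", "point"
        hand: str                               # "left" or "right"
        position: Tuple[float, float, float]    # mm
        velocity: Tuple[float, float, float]    # mm/s

    _handlers: Dict[str, List[Callable[[Gesture], None]]] = {}

    def on(kind: str, handler: Callable[[Gesture], None]) -> None:
        # Registering a callback should be the whole integration story.
        _handlers.setdefault(kind, []).append(handler)

    def dispatch(gesture: Gesture) -> None:
        # Called by the driver/runtime once per recognized gesture.
        for handler in _handlers.get(gesture.kind, []):
            handler(gesture)

    # A third-party app hooking in would then be one line:
    on("swipe", lambda g: print("next slide" if g.velocity[0] > 0 else "previous slide"))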
I want to play Modern Warfare 3 (or more likely BLOPS2) with that thing.
I've already started thinking about some gestures that could be used for this, but I'm wondering, how hard it's going to be on the hand(s)? I mean with the mouse and keyboard (supposing PC gaming) the hands are resting on the table 90% of the time, with this the hand(s) will be up in the air.
...unless someone puts a nice glass table on top of that thing so that my hands could rest... could this work?
We’re distributing thousands of kits to qualified developers, because, well, we want to see what kinds of incredible things you can all do with our technology. So wow us. Actually, register to get the SDK and a free Leap device first, and then wow us.
Do you support windows?
Yes! We also support native touch emulation for Windows 8.
How about Linux?
Linux support is on the agenda.
When do dev-kits ship
Depending on which batch you're in, anywhere from 1-3 months.
I'm calling fake or very optimistic to say the least.
The promo video doesn't show a physical device, and the price point seems ridiculously low, especially for a resolution of 0.01mm. And also there is this: http://bit.ly/KOqDi2 (the physical hand and the point cloud don't match). It's like someone is moving their hand(s) fast to mimic the movement of the visualization.
I've been running through all the vids of it that are around.
I'm still undecided. Perspective on each supposedly "fake" screenshot could explain the mismatch. In your example, you actually don't see how far his fingers are apart. Also, it might be attributed to that particular finger going to the borders of the 3D interaction space of Leap.
I dropped out of state U. after my 3rd year (math major), but that was years ago. At my current start-up, I have recently been forced to learn much more than I was expecting to about probabilistic graphical models and curve similarity measures (gladly though; always been interested in pattern recognition).
Anyone with a vision for this, consider dropping me a line. I might be able to help.
I'd love to integrate it with my KType application ( http://ktype.net ) or build a new software that converts hand gestures / ASL to speech. I think it's totally doable and would be a lot of fun technically too. What do you think?
Well, off the top of my head I don't see a straightforward way to integrate with KType (nice work, btw!), simply because there would be so many possibilities for mapping hand gestures to the KType actions/commands.
On the other hand, converting sign language to text/speech seems like it should be quite straightforward. Not knowing anything about sign language, I'm assuming signs map (more or less) one-to-one with words. The input from LEAP appears to be extremely high resolution, so if the sign gestures are properly normalized (and judging from the demo video, it looks like the LEAP SDK itself already does a good degree of input normalization), you should be able to just train your classifier (neural network, SVM, etc.) right out of the box.
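Something like this is what I have in mind (a scikit-learn sketch; the feature vectors here are random placeholders, since the real work, turning LEAP's per-frame output into normalized fixed-length features per sign, is exactly the part nobody outside the company can write yet):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Placeholder data: 200 recorded signs, each already normalized into a
    # 60-dimensional feature vector (e.g. flattened joint positions over a
    # resampled time window), labelled with one of 5 signs.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 60))
    y = rng.integers(0, 5, size=200)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))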
Of course, things are never as easy as they look so in all likelihood there are plenty of complications I'm completely overlooking at first glance. But I agree with you 100% that it sounds totally doable.
So sign language isn't really one-to-one, but that is something that I really want to work with, so I look forward to getting a unit. I think this system has a lot of potential as a learning tool! Awesome job guys!
Google should be all over this. Microsoft has Kinect, there are rumors Apple has been working on it, too, and I know Google hired the product lead from Kinect a while ago (George something), but I haven't seen much come out of that. This could be very useful for Google TV and who knows what else in the future. Better to integrate it in the Google X lab.
> also the demo did not show a way to type information
I don't think people in the gesture interface market are looking for ways to replace the essential function of the keyboard. For all intents and purposes, it's probably the best way to input textual data into a machine.
Keyboards are also the only way (until now) to input textual data. Not implementing that makes this, at best, a companion device to an existing setup and not a replacement.
[edit] Voice-to-text transcription can be an alternative.
Looks like it'd be a nice augment to the usual keyboard & mouse/trackpad setup (& to a touchscreen, too).
For some tasks (e.g. changing to a different browser tab five across from the current one), I can imagine that pointing at it would be the quickest and easiest way to switch to it.
I can imagine it'd get a bit tiring if you were relying on it too exclusively.
They should mount this inside a keyboard, I don't want an extra dongle randomly lying on my table.
On a completely different note, though, I wonder what the range on these things is. Could have excellent applications to robotics, I hope they don't completely close the vision outputs behind their own gesture APIs or something...
It'll be interesting to see how this pans out - but I can't see it being widely adopted in the home any time soon. People who want gesture based stuff will use Kinect.
However, I can see this being huge in the commercial market. I can easily imagine using something like this in a shop, or for presentations at work.
Googling for "kinect price" gives me numbers of about $150; this is priced at $70. Even if we assume this is a discounted price to lure in pre-orders, that still leaves room for it to cost less.
I would not think of getting a Kinect but my thought upon seeing the video was "I want to get one of these and use it to control the RasPi I'm sticking in some fake taxidermy along with a pico projector for a micro media PC."
I think you're spot on. Every mall in the developed world could have easy to use 3D maps and offer/advert displays - that's a huge market right there, whether or not the desktop ends up being one.
Motions like the ones they use in the fighter jet game or the shooting game (and some others) may not come naturally to everyone. In fact, the one for continuous shooting specifically seems counterintuitive.
But the idea of bringing such gesture based interaction to just about any device is really great.
Motions like using a mouse or a keyboard are not one bit more intuitive for flying planes or shooting guns. It's just an issue of familiarisation with the interface, as it was with the mouse, as it was with the keyboard.
Besides that, it really doesn't matter, as these were only two examples of a large variety of possible applications. If it doesn't fit the need, don't use it. There are other interfaces. There is no need for one interface to rule them all, but rather for interfaces that really hit the spot for particular applications.
Regarding Leap: it looks really promising, though I would prefer it if it were "hidden". Anyhow, can't wait to get my hands on it. Or over it.
This is gonna be a pain in the arm for photoshoppers. But I'm interested to see this technology grow in popularity. I can't believe we actually have something like this in our time. 20 years ago, you could only see stuff like this in science fiction novels and movies.
Surprised by the negative reactions here. If Kinect has shown us anything, it is that this thing will be used for a host of things not shown in their demo videos. Robots, interactive art, etc.
ChrisFornof: What is the normal workflow after filling the dev kit application form? I got redirected to the homepage with no confirmation step or email. I wouldn't want to double-apply.
Well I just pre-ordered one and applied for the dev kit. I don't even have a specific plan for this yet, but I think the 3-d object scanning ability alone makes this worth it.
Did anyone get an email confirmation after applying for the dev kit? After I hit the button, it reloaded to the home page, and I haven't seen any emails.
Just wondering if I should fill it out a second time, since I never saw a confirmation screen. I understand it will take some time to ship, just want to know the application actually made it into the database! :)
Another interesting application not mentioned yet is 3d scanning... and their site mentions 3d modeling work as the initial motivation for the product.
Dev kits go to anyone internationally. Pre-orders are domestic only (for the moment).
And we've got nothing against our northern neighbors, nor is it some grand favoritism conspiracy. Pure rollout logistics. Lots of people are getting confused on this point, we'll announce more shortly.
Hmm I've just pre-ordered two from Japan (and applied to the dev program too).
I believe the form included international locations; can you confirm?
Pretty shitty. It comes across to me like they're trying to give a head start to developers in the US; not the best way to welcome the international developer community.
I'm happy that the team took their passion and executed on something so effectively.
But not all ideas are good. It violates Fitts's law by placing all user interaction within a vertical band next to your computer. This is very uncomfortable and decreases the usefulness for most applications (because missing the interaction band is likely).
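For reference, the back-of-envelope version (the Shannon formulation of Fitts's law; the constants a and b are fitted per device, so the numbers below are purely illustrative): movement time grows with the log of distance over target width, which is exactly why a narrow band far from where your hands rest hurts.

    import math

    def fitts_movement_time(a, b, distance, width):
        # Shannon formulation of Fitts's law: MT = a + b * log2(D / W + 1)
        return a + b * math.log2(distance / width + 1)

    # With illustrative constants a = 0.1 s, b = 0.15 s/bit: doubling the reach
    # to the interaction band adds time to every single action.
    print(fitts_movement_time(0.1, 0.15, distance=300, width=40))  # ~0.56 s
    print(fitts_movement_time(0.1, 0.15, distance=600, width=40))  # ~0.70 s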
If these were wireless gloves that I could use more easily (with my arms in any location), then I'd love this.
Maybe it could have a better use in a Minority Report style setting, recreating a large gesture based interface without requiring gigantic multi-touch screens.
Sitting at a desk, with a keyboard and mouse in front of you, I can imagine it will have some pretty attractive uses, but it might not be as comfortable long-term as gesticulating at a mounted whiteboard.
Why would "a more useful critique" be looking at pro-LEAP applications? Is it because this what you want to hear? LEAP is a bad idea, not bad due to software, bad due to the constraints of human beings.
Waving your hands in front of your computer is goofy too. :)
Gloves would give significantly greater control, flexibility, and functionality. Three examples:
1) Control: Since the computer is aware of my digits, significantly more complex movements are possible. With two hands you'd have a ton of customization.
2) Flexibility: I can do all actions sitting comfortably from my chair. My arms don't have to leave the arm rests. Or, I can be across the room swiping through my media or photos.
3) Functionality: Fingers, or motions like raising a hand, could act like key commands. Want to submit a password? Turn your hand like a key. Want to refresh a page? Drum my fingers. Want to clear the screen? Slide my hand across the table. None of these would be possible with LEAP because each would be obscured by or outside of the field of view.
Similar tech seems to already exist in Microsoft's Kinect. It can also cover entire rooms, but would do so more usefully. And the technology is already there to identify human hands, skeletons, and faces.
I want amazing technology just like you, but we need to be willing to not trumpet bad ideas just because it could be cool given enough effort/marketing.
So the obvious weakness here is that this will be less precise than a control system that uses direct touch such as a keyboard, mouse or touch pen. It looks fun and dramatic though.