
Thank you for sharing this creative idea. "so that the fingers can touch (or almost touch) the screen": I think this is a big advantage of this approach, since you can only achieve it with a back-facing camera. On the flip side, with a back-facing camera you either have to place the camera between yourself and the screen, which might be awkward, or you have to position it behind you in a way that isn't prone to occlusions (e.g. your head or chair might occlude your hands from the camera's point of view). The latter might also make calibration more difficult or hurt precision, since you might have to mount the camera with some elevation, resulting in a less favorable camera angle.


Thank you for the feedback.


Thank you for the feedback. I can confirm that there seems to be an issue with iOS/Chrome and have created a GitHub issue for it (https://github.com/handtracking-io/yoha/issues/5).

Note that if you were trying iOS/Safari rather than iOS/Chrome, there is nothing that can be done due to a limitation documented in the "Discussion" section here: https://developer.apple.com/documentation/webkitjs/canvasren... I will document this.


Thank you for the feedback. I would like to fix this, but I neither own a Chromebook nor can I seemingly reproduce the issue on a platform like BrowserStack (I didn't find a Chromebook among the devices available there). If you would like to help debug the issue, you can open a GitHub issue here: https://github.com/handtracking-io/yoha/issues


Thank you for the question. Since a similar question was already asked, let me refer you to this comment: https://news.ycombinator.com/item?id=28830943


Thank you for the pointer that I had overlooked.


Thank you for the feedback.


Thank you for the feedback. You are right, the home page should probably be enriched with more information, and maybe I can make the information you were looking for stand out better. As a side note: there is a link to GitHub in the footer, and the language ("TypeScript API") is also mentioned in the body of the page. But I see that both can easily go unnoticed.


Thank you for your feedback.


One can build this pretty easily for a website that you are hosting, using the existing API (https://github.com/handtracking-io/yoha/tree/master/docs); a rough sketch is below.

However, you likely want this functionality on any website that you visit, for which you would probably need to build a browser extension. I haven't tried incorporating YoHa into a browser extension, but if somebody were to try, I'd be happy to help.
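
To give a rough idea of the hosted-website case: a minimal sketch that maps a detected fist pose to page scrolling. The result shape below (coordinates, poses.fistProb) is my paraphrase of what the detection callback provides; please check the linked docs for the actual engine setup and field names.

    // Sketch: scroll the page while a fist is held, following the
    // hand's vertical movement. The callback result shape is assumed.
    interface HandResult {
      coordinates: number[][];      // normalized [x, y] landmarks
      poses: { fistProb: number };  // probability of a fist pose
    }

    let lastY: number | null = null;

    function onFrame(res: HandResult): void {
      if (res.poses.fistProb < 0.8 || res.coordinates.length === 0) {
        lastY = null; // hand open or not visible: stop scrolling
        return;
      }
      const y = res.coordinates[0][1]; // vertical position of a landmark
      if (lastY !== null) {
        // Translate the hand's vertical delta into pixels.
        window.scrollBy(0, (y - lastY) * window.innerHeight);
      }
      lastY = y;
    }

    // Register onFrame as the engine's per-frame callback as shown
    // in the docs linked above.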


That's nice, but I'd also want it for general desktop stuff, so I guess it would have to be sitting on my machine.

For example, a swipe left/right hand gesture to switch the desktop workspace. That would be the dream :)


Thank you for this inspiring question. Interpreting sign language requires multi-hand support, which YoHa currently lacks. Apart from that, you likely also need to account for the temporal dimension (signs are movements rather than static poses), which YoHa also does not do right now. If those things were implemented, I'm confident it would produce meaningful results.
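
To make the temporal point concrete, here is a minimal sketch of buffering per-frame landmarks into a fixed-length window that a sequence classifier could consume. None of this exists in YoHa today; the window size and frame layout are assumptions.

    // Hypothetical sliding window of landmark frames for sign
    // recognition. Each frame is a flattened [x0, y0, x1, y1, ...]
    // landmark vector.
    const WINDOW_SIZE = 30; // roughly one second at 30 fps
    const frames: number[][] = [];

    function pushFrame(landmarks: number[]): number[][] | null {
      frames.push(landmarks);
      if (frames.length > WINDOW_SIZE) {
        frames.shift(); // drop the oldest frame
      }
      // Only hand a sequence to the classifier once the window is full.
      return frames.length === WINDOW_SIZE ? frames.slice() : null;
    }

    // Placeholder: a real system would run a sequence model (e.g. an
    // RNN or transformer) over the window here.
    function classifySequence(sequence: number[][]): string {
      return 'unknown';
    }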


It's worth noting that mouth movements are extremely important in ASL (and other sign languages), so this probably isn't as useful as it might seem at first.


Thank you for pointing this out; I had overlooked it. I presume that, on top of that, arm movements, facial expressions, and maybe also general body posture could be relevant. Please correct me if I'm wrong, as I'm not too familiar with sign language.


Signs also tend to be expressed by the hands' position/movement in relation to _other_ body parts.

Edit: OTOH fingerspelling (https://en.m.wikipedia.org/wiki/Fingerspelling) might be a more feasible use case!
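
As a toy illustration of why fingerspelling is more tractable (mostly static, single-hand poses), one could classify a single frame of landmarks against recorded templates. Everything below is hypothetical and not part of YoHa.

    // Toy nearest-neighbor classifier over flattened landmark vectors.
    // Templates would come from recorded examples of each letter.
    const templates: Array<{ letter: string; landmarks: number[] }> = [];

    function distance(a: number[], b: number[]): number {
      let sum = 0;
      for (let i = 0; i < a.length; i++) {
        sum += (a[i] - b[i]) ** 2;
      }
      return Math.sqrt(sum);
    }

    function classifyLetter(landmarks: number[]): string | null {
      let best: string | null = null;
      let bestDistance = Infinity;
      for (const t of templates) {
        const d = distance(landmarks, t.landmarks);
        if (d < bestDistance) {
          bestDistance = d;
          best = t.letter;
        }
      }
      return best;
    }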


