
opencv - Finger/Hand Gesture Recognition using Kinect

Let me explain my need before I describe the problem. I am trying to build a hand-controlled application: navigation using the palm and clicks using a grab/fist gesture.

Currently, I am working with OpenNI, which looks promising and ships with a few samples that turned out to be useful in my case, since it has a built-in hand tracker among them. That serves my purpose for the time being.

What I want to ask is:

1) What would be the best approach to build a fist/grab detector?

I trained and used AdaBoost fist classifiers on extracted RGB data, which worked reasonably well, but it produces too many false detections to move forward with.
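
For reference, that RGB approach looks roughly like the minimal sketch below (the file name fist_cascade.xml and the detection parameters are placeholders, not the ones actually used; the cascade itself would come from opencv_traincascade, which uses AdaBoost):

    import cv2

    # Hypothetical cascade trained with opencv_traincascade;
    # replace "fist_cascade.xml" with the actual trained classifier file.
    fist_cascade = cv2.CascadeClassifier("fist_cascade.xml")

    cap = cv2.VideoCapture(0)  # RGB stream (webcam or Kinect RGB feed)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # detectMultiScale returns bounding boxes; scaleFactor/minNeighbors
        # trade off recall against the false detections mentioned above.
        fists = fist_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in fists:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("fists", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()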

So here are two more questions:

2) Is there any other good library that can achieve this using depth data?

3) Can we train our own hand gestures, especially ones involving fingers? Some papers refer to HMMs for this; if so, how do we proceed with a library like OpenNI?

Yes, I tried the middleware libraries in OpenNI, such as the grab detector, but they won't serve my purpose: they are neither open source nor a match for my needs.

Apart from what I asked, anything else you think could help me will be accepted as a good suggestion.


1 Answer


You don't need to train a fist classifier, since that will complicate things. Don't use color either, since it is unreliable (it mixes with the background and changes unpredictably with lighting and viewpoint).

  1. Assuming that your hand is the closest object, you can simply segment it out with a depth threshold. You can set the threshold manually, use the closest region of the depth histogram, or run connected components on the depth map to break it into meaningful parts first (and then select your object based not only on its depth but also on its dimensions, motion, user input, etc.). The original answer included example images of this output: the depth image, its connected components, and the hand mask improved with GrabCut.
  2. Apply convexity defects from the OpenCV library to find the fingers (a rough sketch of steps 1 and 2 follows after this list);

  3. Track the fingers rather than rediscovering them in 3D in every frame. This will increase stability. I successfully implemented such finger detection about 3 years ago.
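
A minimal OpenCV (Python) sketch of steps 1 and 2, assuming you already have a 16-bit Kinect depth frame as a NumPy array; the function name, the 120 mm depth band, and the defect-depth cutoff are illustrative choices, not part of the original answer:

    import cv2
    import numpy as np

    def find_fingertips(depth_mm, hand_band_mm=120):
        """Segment the closest object by depth and return fingertip candidates
        found via contour convexity defects. depth_mm is a 16-bit depth map in
        millimetres (one Kinect depth frame); zero values are invalid pixels."""
        valid = depth_mm > 0
        if not np.any(valid):
            return None, []

        # Step 1: depth threshold - keep everything within hand_band_mm
        # of the closest valid pixel, then clean up the mask.
        nearest = depth_mm[valid].min()
        mask = np.logical_and(valid, depth_mm < nearest + hand_band_mm)
        mask = (mask.astype(np.uint8)) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        # Keep only the largest connected region (assumed to be the hand).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return mask, []
        hand = max(contours, key=cv2.contourArea)

        # Step 2: convexity defects - deep valleys between fingers; the hull
        # points bounding each deep defect are fingertip candidates.
        hull_idx = cv2.convexHull(hand, returnPoints=False)
        fingertips = []
        if hull_idx is not None and len(hull_idx) > 3:
            defects = cv2.convexityDefects(hand, hull_idx)
            if defects is not None:
                for start, end, far, depth in defects[:, 0]:
                    # depth is the defect distance in 1/256 pixel units; tune the cutoff.
                    if depth > 10000:
                        fingertips.append(tuple(hand[start][0]))
                        fingertips.append(tuple(hand[end][0]))
        return mask, fingertips

For step 3, one simple option is to associate the fingertip candidates of consecutive frames by nearest-neighbour matching (or a small per-fingertip Kalman filter) instead of re-detecting them from scratch each frame.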

