Development of position recognition

I finally made something that actually works. I have been working on the code for a couple of weeks now, and I am happy to see that the Kinect is properly recognising the movements. Using the Kinect's skeleton tracking, I started annotating all the possible correlations between the different body points.

IMG_8115.JPG

These are my annotations in my notebook, covering the first two standard ballet positions. I have to compare the screen positions (X and Y) of the skeleton points of the two arms, each consisting of hand, elbow and shoulder. This way, I can tell whether one arm is in a certain portion of the screen relative to the other arm.
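As a rough sketch of this kind of comparison (not the actual project code: the `Joint2D` class, the `dist2D` helper and the coordinates below are my own illustration), checking whether two joints sit close together on screen comes down to a simple distance test on their X and Y values:

```java
// Hypothetical sketch: each tracked joint is a point in screen space.
class Joint2D {
    final float x, y;
    Joint2D(float x, float y) { this.x = x; this.y = y; }
}

public class ArmComparison {
    // Euclidean distance between two joints on screen (helper name is mine).
    static float dist2D(Joint2D a, Joint2D b) {
        float dx = a.x - b.x, dy = a.y - b.y;
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        // Made-up screen coordinates for a left shoulder and left hand.
        Joint2D shoulderL = new Joint2D(200, 150);
        Joint2D handL     = new Joint2D(210, 160);

        // If hand and shoulder are close on screen, the arm is held
        // low and folded in rather than extended out to the side.
        System.out.println(dist2D(shoulderL, handL) < 20);  // prints true
    }
}
```

The same idea extends to the elbow, and to comparing one arm's joints against the other arm's, which is how a pose can be narrowed down from screen coordinates alone.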

IMG_7744.JPG

In this picture, you can see part of the code that I wrote in Processing. Elements like SKEL_RIGHT_ELBOW_2D or SKEL_RIGHT_HAND_2D are the PVectors that I use to record the X and Y of the skeleton's points.

if (shoulderHandL < 20 && shoulderHandR < 20 && shoulderHandTR < 20 && shoulderHandTL < 20) {

    //position 1

}

This is the code that I wrote to recognise the first position. I check whether the distance between the left shoulder and the left hand is less than a certain threshold, and then I do the same for the other parts of the body. If all the checks pass, I know that it is the first position.
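The check above can be sketched as a small standalone function. The four-variable condition and the 20-unit threshold come from the snippet; the function name `isFirstPosition` and the sample distance values are my own assumptions for illustration:

```java
public class FirstPosition {
    // Hypothetical wrapper around the post's condition: all four
    // shoulder-hand distances must fall under the same 20-unit threshold.
    static boolean isFirstPosition(float shoulderHandL, float shoulderHandR,
                                   float shoulderHandTL, float shoulderHandTR) {
        return shoulderHandL < 20 && shoulderHandR < 20
            && shoulderHandTL < 20 && shoulderHandTR < 20;
    }

    public static void main(String[] args) {
        // Made-up distance values (screen units) for a pose near first position.
        if (isFirstPosition(11.2f, 13.0f, 9.5f, 15.1f)) {
            System.out.println("GOOD!");  // mirrors the on-screen feedback
        }
    }
}
```

A single distance outside the threshold is enough to reject the pose, which keeps the classifier strict but also sensitive to the chosen threshold.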

IMG_7743.JPG

IMG_7741.JPG

These pictures show what the computer displays. The big blue shadow is the person, and the white lines are their skeleton. When the computer recognises the position, I make it print the word “GOOD!” on the screen.
