Use the program Wekinator to teach your machine gesture recognition using your laptop and a webcam!
In this article, part 5 of a larger machine learning series, you are going to learn about gesture recognition using machine learning through the Wekinator platform. We will use a laptop's webcam to send input to Wekinator according to the gesture being shown. After training Wekinator, we will display images in the Processing output window according to Wekinator's output.
Catch up on the rest of the Wekinator Machine Learning series here:
How to Run the Program
First of all, you will need to download the input sketch from the examples page of Wekinator. Download the source code for the simple 10x10 color grid, unzip it, and run the code in Processing. This program reads your laptop's webcam, reduces each frame to a 10x10 color grid, and sends those 100 values to Wekinator as input, so what you do in front of the camera determines what Wekinator sees.
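To make that data flow concrete, here is a minimal sketch of the input side, assuming the standard Wekinator conventions (OSC messages sent to /wek/inputs on port 6448). Treat it as a simplified illustration rather than the downloaded example itself: the official sketch works over each grid cell's color, while this one samples a single pixel per cell for brevity.

import processing.video.*;
import oscP5.*;
import netP5.*;

Capture cam;
OscP5 oscP5;
NetAddress dest;

void setup() {
  size(320, 240);
  cam = new Capture(this, 320, 240);
  cam.start();
  oscP5 = new OscP5(this, 9000);             // local listening port (unused here)
  dest = new NetAddress("127.0.0.1", 6448);  // Wekinator listens on 6448 by default
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);
    cam.loadPixels();
    // Sample a 10x10 grid of pixels and send 100 brightness values to Wekinator
    OscMessage msg = new OscMessage("/wek/inputs");
    for (int gy = 0; gy < 10; gy++) {
      for (int gx = 0; gx < 10; gx++) {
        int x = gx * cam.width / 10 + cam.width / 20;   // cell center, horizontally
        int y = gy * cam.height / 10 + cam.height / 20; // cell center, vertically
        msg.add(brightness(cam.pixels[y * cam.width + x]));
      }
    }
    oscP5.send(msg, dest);
  }
}

Either way, Wekinator just sees 100 floating-point inputs per frame, which is why the project settings below specify 100 inputs.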
You will need another sketch for the output from Wekinator. The code for that sketch is given at the end of this post; paste it into Processing and save the file. This sketch shows an image in the Processing output window depending on the gesture you are making in front of the webcam.
The images you want to show in the Processing output window should be placed inside the sketch directory, and their file names must match the ones loaded in the code (nohand.png, righthand.png, and lefthand.png).
After placing the images in the sketch directory, run the code. Both Processing windows should look like the ones shown below.
Now open Wekinator and configure the settings as shown in the figure below. Set the inputs to 100 (one per grid cell) and the outputs to 1. Set the output type to "All Classifiers" with 3 classes, one for each gesture.
Click on "Next" and a new window will come up, as shown below.
Sit in front of the camera with both of your hands down, leave the class set to 1, and record for about half a second.
Now sit in front of the camera with your right hand raised and set the class to 2, as highlighted in the figure below. Then record for about half a second.
Now sit in front of the webcam with your left hand raised and set the class to 3, as shown in the figure below. Then record for about half a second.
After that, click on "Train" and then click on "Run". The Processing output window will now show an image matching your gesture (right hand raised, left hand raised, or no hands raised).
Processing Code (Output From Wekinator)
Run the code below in Processing to see the output window with the image display described above.
// These libraries handle sending and receiving OSC values to and from Wekinator
import oscP5.*;
import netP5.*;

// OSC instances: one to listen for messages, one holding Wekinator's address
OscP5 oscP5;
NetAddress dest;

// Holds the class number received from Wekinator (1, 2, or 3)
public int output;

// Variables to store the three gesture images
PImage img, img1, img2;

void setup()
{
  size(350, 300);

  // Load the images from the sketch directory
  img = loadImage("nohand.png");
  img1 = loadImage("righthand.png");
  img2 = loadImage("lefthand.png");

  // Start OSC communication: listen for Wekinator's output on port 12000;
  // Wekinator itself listens for messages on port 6448
  oscP5 = new OscP5(this, 12000);
  dest = new NetAddress("127.0.0.1", 6448);
}

// Receive OSC messages from Wekinator
void oscEvent(OscMessage theOscMessage) {
  if (theOscMessage.checkAddrPattern("/wek/outputs") == true) {
    // Read the first (and only) output value from Wekinator
    float value = theOscMessage.get(0).floatValue();
    // Convert the output to an int so it can be matched below
    output = int(value);
  }
}

void draw()
{
  // Black background for the output window
  background(0);

  // Show the image that matches Wekinator's output class
  if (output == 1)
  {
    image(img, 0, 0); // No hands raised
  }
  else if (output == 2)
  {
    image(img1, 0, 0); // Right hand raised
  }
  else if (output == 3)
  {
    image(img2, 0, 0); // Left hand raised
  }
}
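If you want to check the output sketch before Wekinator is trained, you can send it a fake classification yourself. The snippet below is a hypothetical test helper, not part of the tutorial: it fires a single OSC message at the port and address pattern the output sketch listens on.

// Hypothetical test sender: pretends to be Wekinator for one message
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress outputSketch;

void setup() {
  osc = new OscP5(this, 9001);                       // any free local port
  outputSketch = new NetAddress("127.0.0.1", 12000); // the output sketch listens on 12000
  OscMessage m = new OscMessage("/wek/outputs");
  m.add(2.0f); // class 2 = right hand raised, so the sketch should show righthand.png
  osc.send(m, outputSketch);
}

Run it alongside the output sketch and you should see the right-hand image appear, confirming the OSC wiring works before you bring Wekinator into the loop.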
Try testing this interaction in various settings and tune it for your application. Have fun!