Code Guide
This code guide explains, from start to finish, the process that creates the final demonstrable project. We have kept technically complex descriptions to a minimum and used layman's terms where possible. If you are interested in looking at the source code for the project, it has been thoroughly commented here for your convenience: https://github.com/UQMozart/UQMozART-Interactive-Installation
Kinect Depth Thresholding
- For every frame that the sketch is running (e.g. 30 frames/second), the Kinect will send an array consisting of 512 (width) x 424 (height) depth values, each of which is between 0 and 4500 (corresponding to the distance in millimetres from the Kinect itself).
- Processing will then take this information, along with a depth threshold that has been set just in front of the spandex, and will check each depth value against that threshold.
- Prior to this, Processing will have created a blank image of black pixels with the same dimensions as the Kinect depth frame (512 x 424).
- For each value in the depth array, if the value is in front of the threshold, Processing will colour the corresponding pixel red; if not, the pixel will be left black. This gives us a new image each frame that highlights only where the canvas is being pushed.
- We will then apply a blurring algorithm to this image to remove any stray readings the Kinect may have accidentally picked up, so that we end up with a much cleaner (albeit blurry) image. This whole step is sketched below.
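As a rough illustration, the thresholding step might look like the following Processing sketch. The function names, the threshold value, and the fake depth frame are all hypothetical stand-ins (the real sketch reads the array from the Kinect library), and we assume a push moves the fabric closer to the Kinect:

```java
int W = 512, H = 424;
int threshold = 1600;   // hypothetical: depth (mm) set just in front of the spandex

void setup() {
  size(512, 424);
}

void draw() {
  int[] depth = fakeDepthFrame();            // stand-in for the Kinect's array
  PImage mask = thresholdFrame(depth, threshold);
  image(mask, 0, 0);
}

// Turn one raw depth frame into a red-on-black mask.
PImage thresholdFrame(int[] depth, int threshold) {
  PImage mask = createImage(W, H, RGB);
  mask.loadPixels();
  for (int i = 0; i < depth.length; i++) {
    // A reading of 0 means "no data"; anything closer than the threshold
    // is treated as the canvas being pushed.
    if (depth[i] > 0 && depth[i] < threshold) {
      mask.pixels[i] = color(255, 0, 0);     // red: in front of the threshold
    } else {
      mask.pixels[i] = color(0);             // black: everything else
    }
  }
  mask.updatePixels();
  mask.filter(BLUR, 2);                      // clean up stray readings
  return mask;
}

// Hypothetical stand-in so the sketch runs without a Kinect attached.
int[] fakeDepthFrame() {
  int[] d = new int[W * H];
  for (int i = 0; i < d.length; i++) d[i] = 2000;   // flat spandex at 2 m
  int cx = W/2 + (int)(100 * sin(frameCount * 0.05)); // a moving "push"
  for (int y = H/2 - 20; y < H/2 + 20; y++)
    for (int x = cx - 20; x < cx + 20; x++)
      d[y*W + x] = 1400;
  return d;
}
```

Keeping the mask as a plain image means Processing's built-in blur filter can double as the clean-up step before blob detection.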
Blob Detection
- Next, this image will be passed on to the Blob detection library, which will determine where blobs (contiguous regions of same-coloured pixels) are located in the image; this is why we applied a different colour to the pixels in front of the threshold.
- Once the Blob detection library has finished its work, it will return an array of Blobs, each with its own accessible properties (height, width, X, Y, etc.). However, this array is replaced every frame, and the blobs in it are ordered from top left to bottom right, meaning the same blob may not be at the same array index across frames, which is problematic.
- In order to track these blobs across frames, we also store the array of blobs from the previous frame, and then compare it to the current frame to find which blobs already existed. In this comparison, there are 3 cases:
- There are more blobs in the current frame than there were in the previous frame (so new blobs have appeared)
- There are fewer blobs in the current frame than there were in the previous frame (so some blobs have died)
- There are the same number of blobs in the current frame as there were in the previous frame
- The code that compares frames works by checking each blob in the new frame against every blob in the previous frame and measuring the distance between them. It will find the closest blob within a specified range, assume that it is the same blob, and continue using its unique ID (see the sketch after this list).
- If a new blob has occurred, a new Blob object will be created with a new unique ID.
- If a blob is not present in the new frame, it will be marked for deletion but will survive for the next 5 frames before actually being deleted. If a similar enough blob reappears before then, it will be marked as alive again, and will continue using its unique ID.
- After analysing the blobs, we will have a resulting array of ‘Tracked’ blobs, which each have their own unique ID (a value incremented at every new blob), and will retain the properties of the blob it was identified as, including X, Y, width, height, etc.
- For each of these ‘Tracked’ blobs, a number of functions are available to assist in calculating things like the speed, depth, and number of frames the blob has been alive for, all of which will be used in creating the sound and visuals.
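To make this concrete, here is a simplified sketch of the matching logic. The class name, field names, matching range, and grace period below are hypothetical stand-ins; the full version lives in the repository:

```java
// Simplified blob tracking: names and constants here are hypothetical.
int nextId = 0;                 // incremented for every new blob
float MATCH_RANGE = 40;         // max distance (pixels) to count as the same blob
int DEATH_FRAMES = 5;           // frames a missing blob survives before deletion

class TrackedBlob {
  int id = nextId++;            // unique ID, kept for the blob's whole lifetime
  float x, y;
  int framesAlive = 0;
  int framesMissing = 0;
  TrackedBlob(float x, float y) { this.x = x; this.y = y; }
}

// Match this frame's detected blob centres against last frame's tracked blobs.
void updateTracking(ArrayList<TrackedBlob> tracked, ArrayList<PVector> detections) {
  boolean[] claimed = new boolean[tracked.size()];
  ArrayList<TrackedBlob> newcomers = new ArrayList<TrackedBlob>();
  for (PVector d : detections) {
    int best = -1;
    float bestDist = MATCH_RANGE;
    for (int i = 0; i < tracked.size(); i++) {
      if (claimed[i]) continue;                 // each old blob matches only once
      float candidate = dist(d.x, d.y, tracked.get(i).x, tracked.get(i).y);
      if (candidate < bestDist) { bestDist = candidate; best = i; }
    }
    if (best >= 0) {
      TrackedBlob t = tracked.get(best);        // closest within range: same blob
      t.x = d.x;  t.y = d.y;
      t.framesAlive++;
      t.framesMissing = 0;                      // reappeared, so alive again
      claimed[best] = true;
    } else {
      newcomers.add(new TrackedBlob(d.x, d.y)); // no match: a brand new blob
    }
  }
  // Unmatched blobs are only removed once they have been gone for DEATH_FRAMES.
  for (int i = tracked.size() - 1; i >= 0; i--) {
    if (!claimed[i] && ++tracked.get(i).framesMissing > DEATH_FRAMES) {
      tracked.remove(i);
    }
  }
  tracked.addAll(newcomers);
}
```

A greedy nearest-neighbour match like this is cheap enough to run every frame, and the five-frame grace period stops a blob's ID from churning when the Kinect momentarily drops a reading.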
Sending MIDI
- The next step is to start sending the MIDI data to Ableton. This is done via 6 virtual MIDI ports created by the program loopMIDI, which needs to be running at the same time.
- The list of ‘Tracked’ blobs will be iterated over and separated into three new lists, each representing a section of the canvas: left, middle, and right. This is done by checking the X position of the blob and determining whether it falls within the left, middle, or right third of the canvas.
- Once the three separate lists have been populated with ‘Tracked’ blobs, three arrays are also created, each representing the two tracks corresponding to that section, which will be used to determine whether MIDI is already being sent to a channel when iterating over the blobs.
- For each blob in a section, it will first be checked whether the unique ID of that blob was present in the MIDI being sent to the tracks in that section in the previous frame. This is done so that if two blobs are sending MIDI each to a track, and one blob is removed, the second blob will continue sending MIDI to the correct track.
- If the blob ID is found to be in the previous frame, it will send the new set of MIDI data to the same track, and will mark that track as being filled.
- If the blob ID is not present in the previous frame, it will check whether either of the tracks in its section is free; if so, it will send its MIDI data to that track and then mark the track as filled (this bookkeeping is sketched after the list).
When sending MIDI data to Ableton, a number of calculations need to be performed:
- The Y position will be grabbed for the blob and mapped to a value between 0 and 127 (after subtracting the empty space above the frame that is picked up by the Kinect). This information will then be sent in a Note On MIDI message, with the Y value now acting as the pitch.
- The depth of the blob will then be calculated, using the X and Y positions to pinpoint the corresponding index in the depth array. This depth value will then be mapped to a new value between 0 and 127 using the threshold of the spandex as a reference, and then sent as a MIDI CC message.
- The alive time, X position and speed are all similarly mapped to values between 0 and 127, with a few upper limits in place to keep the values within a sensible range. These values are also all sent as MIDI CC messages (the mapping is sketched after the list).
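To show how the track bookkeeping can work, here is a hypothetical sketch for one section, reusing the TrackedBlob class from the earlier sketch; sendMidiForBlob is a stand-in that the next sketch fills out:

```java
// Hypothetical per-section track bookkeeping. Each section has two tracks;
// prevTrackIds remembers which blob ID owned each track last frame (-1 = free).
int[] prevTrackIds = { -1, -1 };

void assignTracks(ArrayList<TrackedBlob> sectionBlobs) {
  int[] newTrackIds = { -1, -1 };
  boolean[] filled = new boolean[2];

  // Pass 1: a blob that already owned a track keeps it, so the surviving
  // blob is not shuffled onto a different channel when another blob dies.
  for (TrackedBlob b : sectionBlobs) {
    for (int t = 0; t < 2; t++) {
      if (!filled[t] && prevTrackIds[t] == b.id) {
        newTrackIds[t] = b.id;
        filled[t] = true;
        sendMidiForBlob(b, t);
      }
    }
  }

  // Pass 2: any remaining blob takes whichever track is still free.
  for (TrackedBlob b : sectionBlobs) {
    if (newTrackIds[0] == b.id || newTrackIds[1] == b.id) continue;
    for (int t = 0; t < 2; t++) {
      if (!filled[t]) {
        newTrackIds[t] = b.id;
        filled[t] = true;
        sendMidiForBlob(b, t);
        break;
      }
    }
  }
  prevTrackIds = newTrackIds;
}

void sendMidiForBlob(TrackedBlob b, int track) {
  // Placeholder: the mapping sketch below shows what this would send.
}
```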
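The value mapping itself might look like the following sketch, again reusing TrackedBlob. We assume The MidiBus library for output here, which may differ from the project's actual MIDI library, and the port name, margin, depth constants, channel, and controller numbers are all placeholders:

```java
import themidibus.*;        // assumption: The MidiBus library for MIDI output

MidiBus midiOut;
int TOP_MARGIN = 40;        // hypothetical empty space above the frame (pixels)
int SPANDEX_DEPTH = 1600;   // hypothetical resting depth of the spandex (mm)
int MAX_PUSH = 1200;        // hypothetical depth of the deepest expected push (mm)

void setup() {
  // "loopMIDI Port" stands in for one of the six virtual port names.
  midiOut = new MidiBus(this, -1, "loopMIDI Port");
}

void sendMidiForBlob(TrackedBlob b, int channel) {
  // Y position -> pitch, ignoring the dead space the Kinect sees above the frame.
  int pitch = (int) constrain(map(b.y, TOP_MARGIN, 424, 0, 127), 0, 127);
  midiOut.sendNoteOn(channel, pitch, 100);

  // Depth -> CC 1, measured from the spandex threshold inwards; note the
  // inverted input range, since a deeper push means a smaller depth value.
  int depthCC = (int) constrain(map(depthAt(b.x, b.y), SPANDEX_DEPTH, MAX_PUSH, 0, 127), 0, 127);
  midiOut.sendControllerChange(channel, 1, depthCC);

  // Alive time -> CC 2, capped so long-lived blobs stay on the scale.
  int aliveCC = (int) constrain(map(b.framesAlive, 0, 300, 0, 127), 0, 127);
  midiOut.sendControllerChange(channel, 2, aliveCC);
}

int depthAt(float x, float y) {
  // Stand-in: the real sketch indexes the Kinect depth array at (x, y).
  return 1500;
}
```

Processing's built-in map() and constrain() do all the scaling; every outgoing value ends up in the 0-127 range that MIDI expects.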
Sending TUIO
- In order to create the visuals corresponding to the user's touches on the spandex, a number of TUIO messages must be sent via an OSC port to the visuals sketch.
- At the start of the sketch, we set up new OSC ports 3333 and 3334. OSC port 3333 is the default port used by MSAFluid, though this can be altered. Port 3334 is set up to receive messages back from the visuals, but this is not required.
- The OSC port is given the local loopback address '127.0.0.1', since messages do not need to be sent between computers.
- Two other OSC ports are set up for the purpose of sending additional data to the visuals outside the original MSAFluid ports: 12000 for sending and 32000 for receiving.
- At each frame, a new OSC bundle will be created to which we will add individual messages.
- The first message sends the address "/tuio/2Dcur" to identify that TUIO cursors are to follow, and then sends an "alive" string followed by the IDs of all the blobs present in the current frame. TUIO compares these alive IDs across frames to know when blobs disappear, so it can delete them itself.
- Before the OSC message can be sent, a number of calculations occur on the X and Y positions, speed, depth and alive time values so that they are in a meaningful form to be forwarded.
- For each blob, a separate "/tuio/2Dcur" "set" message will be added to the bundle, containing all the data associated with that blob.
- At the end of the OSC bundle is a message specifying the number of the current frame, or the "fseq", which is used for identification of frames in case they are lost in transmission.
- All of these individual OSC messages are then added to the OSC bundle and sent to port 3333, where they are picked up by MSAFluid (the bundle construction is sketched after this list).
- Some other averaged information is sent to the other OSC port, which is also picked up by MSAFluid, and contains things like the average speed, average depth, and average alive time of all blobs.
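Put together, the bundle construction might look like this sketch, assuming the oscP5 library (a common choice in Processing, though the project's actual OSC library is an assumption) and the TUIO 1.1 "2Dcur" profile; velocity and acceleration are zeroed for brevity:

```java
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress fluidAddress;
int frameSeq = 0;

void setup() {
  oscP5 = new OscP5(this, 3334);                    // listen for replies on 3334
  fluidAddress = new NetAddress("127.0.0.1", 3333); // MSAFluid listens on 3333
}

// Send one TUIO frame for the currently tracked blobs.
void sendTuioFrame(ArrayList<TrackedBlob> blobs) {
  OscBundle bundle = new OscBundle();

  // "alive": the IDs of every blob present this frame. TUIO compares this
  // list across frames to work out which cursors have disappeared.
  OscMessage alive = new OscMessage("/tuio/2Dcur");
  alive.add("alive");
  for (TrackedBlob b : blobs) alive.add(b.id);
  bundle.add(alive);

  // One "set" message per blob: ID, normalised position, velocity, accel.
  for (TrackedBlob b : blobs) {
    OscMessage set = new OscMessage("/tuio/2Dcur");
    set.add("set");
    set.add(b.id);
    set.add(b.x / 512f);   // X normalised to 0..1
    set.add(b.y / 424f);   // Y normalised to 0..1
    set.add(0f);           // X velocity (zeroed in this sketch)
    set.add(0f);           // Y velocity
    set.add(0f);           // acceleration
    bundle.add(set);
  }

  // "fseq": the frame sequence number, so lost frames can be identified.
  OscMessage fseq = new OscMessage("/tuio/2Dcur");
  fseq.add("fseq");
  fseq.add(frameSeq++);
  bundle.add(fseq);

  oscP5.send(bundle, fluidAddress);
}
```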
Receiving TUIO and Constructing the Visuals
- The visuals are created using a modified version of the MSAFluid demo sketch, which comes included with the MSAFluid download. As such, most of the code required to obtain the incoming TUIO messages is contained within the library itself.
- However, to explain briefly: the TUIO messages are received and converted into TUIO cursors, which generate the visuals based on the X and Y position as well as the velocity of each blob (a minimal receiving sketch follows this section).
- Additional data values sent from the master project are also used to manipulate the visuals created from the blobs, such as the speed and alive time.
- Additional information can be found in the commented source code files in the GitHub repository under the MasterVisuals folder.
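For reference, the receiving side might look like the following minimal sketch. It assumes the standard TUIO client library for Processing and omits the MSAFluid calls themselves, since those belong to the demo sketch; depending on the library version, empty callbacks for TUIO objects and blobs may also be required:

```java
import TUIO.*;   // assumption: the standard TUIO client library for Processing

TuioProcessing tuioClient;

void setup() {
  size(640, 480);
  tuioClient = new TuioProcessing(this);   // listens on the default port 3333
}

// Called when a new cursor (one of our blobs) appears.
void addTuioCursor(TuioCursor tcur) {
  // The cursor carries the blob's session ID and normalised position.
}

// Called whenever an existing cursor moves.
void updateTuioCursor(TuioCursor tcur) {
  float x  = tcur.getScreenX(width);    // position scaled to the window
  float y  = tcur.getScreenY(height);
  float vx = tcur.getXSpeed();          // velocity in normalised units/second
  float vy = tcur.getYSpeed();
  // Here the demo adds a force to the fluid simulation at (x, y)
  // proportional to (vx, vy); the exact MSAFluid call is omitted.
}

// Called when a cursor disappears (its ID left the "alive" list).
void removeTuioCursor(TuioCursor tcur) {
}

void draw() {
  background(0);   // the real sketch renders the fluid here
}
```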