December 13th 2017 IM Showcase

Physical Set-up

First, I constructed a 122 cm × 122 cm flat and painted a white circle on it (to serve as a projection surface) with an inside diameter of 61 cm and an outside diameter of 122 cm. I did this at the workshop in the NYUAD Arts Center.

After that I was able to set up a Kinect and projector from the Grid, both facing downward toward the playing space. The playing space is limited to the flat, hereafter referred to as the stage. The stage was spiked on the floor so it could be repositioned if people moved or bumped it out of place.

  

Processing

In Processing (the code can be found below) I used the Kinect tracker with OpenCV to track objects in the playing space with a bounding box. The point of the object was established as the center of the bounding box. I adjusted the depth threshold to ensure it only tracked objects (or people's arms, as the case may be) above the mid-torso level of a human.
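The depth-threshold idea can be sketched as follows. This is not the actual Processing code; it is a minimal plain-Java illustration in which the depth units (millimeters) and the cutoff value are assumptions. Because the Kinect faces downward, anything nearer than the cutoff is above mid-torso height:

```java
// Sketch of depth thresholding: keep only pixels whose depth reading is
// nearer than a cutoff, i.e. a raised arm or held object. The millimeter
// units and threshold value are assumptions for illustration.
public class DepthThreshold {

    // Returns a binary mask: true where the reading is valid (non-zero)
    // and closer to the camera than the threshold.
    static boolean[] mask(int[] depthMm, int thresholdMm) {
        boolean[] out = new boolean[depthMm.length];
        for (int i = 0; i < depthMm.length; i++) {
            out[i] = depthMm[i] > 0 && depthMm[i] < thresholdMm;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] depths = {2400, 1500, 0, 1900}; // 0 = no reading
        boolean[] m = mask(depths, 2000);
        System.out.println(m[0] + " " + m[1] + " " + m[2] + " " + m[3]);
    }
}
```

In the real sketch this mask would feed OpenCV's contour/bounding-box detection rather than being printed.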

I created 8 sections, stored in an array. These sections were then used, also via arrays, to establish the audio and visual triggers and boundaries.

The sections were bounded by 8 radial lines (one every π/4 radians) and by an outer and an inner circle. The radii of these circles could be adjusted with key presses, allowing for easier set-up within the physical playing space. The center of the circle, as well as the aforementioned Kinect distance threshold, could also be adjusted with key presses.
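The mapping from a tracked point to one of the 8 sections can be sketched like this. This is a plain-Java illustration, not the actual Processing code; the center coordinates and radii in the example are placeholder values (in the real sketch they were adjustable via key presses):

```java
// Sketch of mapping a tracked point to one of 8 annular pie sections.
public class SectionMapper {
    static final int NUM_SECTIONS = 8;

    // Returns the section index 0..7, or -1 if the point falls inside
    // the inner circle or outside the outer circle.
    static int sectionFor(float x, float y,
                          float centerX, float centerY,
                          float innerR, float outerR) {
        float dx = x - centerX;
        float dy = y - centerY;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        if (dist < innerR || dist > outerR) return -1;
        double angle = Math.atan2(dy, dx);    // -PI..PI
        if (angle < 0) angle += 2 * Math.PI;  // normalise to 0..2PI
        return (int) (angle / (2 * Math.PI / NUM_SECTIONS));
    }

    public static void main(String[] args) {
        // A point to the right of center, between the two radii -> section 0.
        System.out.println(sectionFor(300, 200, 200, 200, 50, 150));
    }
}
```

The `-1` case is what keeps both passers-by outside the outer circle and the person standing in the center from triggering anything.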

The large circle was set small enough to avoid noise from passers-by, and the small circle was set small enough to cover only the individual in the middle of the stage. I used createGraphics in the KinectTracker to overlay the smaller circle so that the Kinect would not track the individual standing there.

MidiBus was used to send notes on a C major scale to sforzando. During testing I discovered that C major is not the best set of notes to use because, in an installation setting, the notes often clash due to visitors' inexperience with the system. I therefore changed it, first to a diminished 7th, then to C, Eb, G, Ab, Bb, C, D, and Eb (60, 63, 67, 68, 70, 72, 74, 75).
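The section-to-note mapping is a simple array lookup. A minimal sketch, in plain Java rather than the actual Processing/MidiBus code; the comment about how the note would be sent reflects the description above, not a literal call from the sketch:

```java
// Sketch of mapping each of the 8 sections to a MIDI note from the
// final note set described above.
public class SectionNotes {
    // C4, Eb4, G4, Ab4, Bb4, C5, D5, Eb5 as MIDI note numbers.
    static final int[] NOTES = {60, 63, 67, 68, 70, 72, 74, 75};

    static int noteForSection(int section) {
        return NOTES[section];
    }

    public static void main(String[] args) {
        // In the real sketch this value would be handed to MidiBus
        // as a note-on when an object enters the section.
        System.out.println(noteForSection(1));
    }
}
```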

During testing we also discovered that the notes needed to be faded in and out, rather than simply triggered on and off. We therefore sent the sforzando output through ____, by way of Soundflower, and used the reverb setting to set the decay time to 60 seconds.

Isadora

In Isadora I coded (found at the link below) a colored circle with 8 sections using the GLSL Shader actor, corresponding to the sections coded in Processing. The 8 sections could be individually triggered to appear through 8 OSC receivers, each on a different channel from Processing. The sections had to be matched between Processing, Isadora, and the projector so that they all aligned.

The final step in Isadora was projection-mapping the shader onto the stage. Because the stage is based on a 122 cm × 122 cm flat, I created a white square that overlaid the circle to allow for quicker and more accurate mapping.

Sound Test
Video Test
Performance and Installation:

Code: December 13th

https://github.com/edl330/SBM/tree/master/IMShowcase

Code: December 15th

https://github.com/edl330/SBM/tree/master/IMShowcase/IMShowcaseUpdate

 

Disclaimer: ‘I’ refers to the combined efforts of Ethan David and Prof. Aaron Sherwood

IM Showcase – Current Progress

Woodshop Sketches, Meeting, and Plan

I met with Daniel from the Woodshop.

Using the CNC machine. We will probably cut the larger platform to 120 cm diameter rather than 180 cm because of the plywood sheet size; the larger option would require the structure to be cut from two plywood sections and then fastened together.

Supporting structure to be constructed from soft wood (cut using conventional tools).

Platforms to be constructed from plywood (cut using CNC).

Platforms and Support Structure connected with screws.

Next step: clarify sketches and make changes. Could be assembled in “a few hours.”

EthanDavid_Sketches_IMShowcase

 

Code

Circle and sections coded.

Kinect coded (on top of circle).

Code can be found here: https://github.com/edl330/SBM/tree/master/IMShowcase

Kinect can track individuals using Average Point Tracking.

Sound is triggered from one section by routing it to Pianoteq 6.

Introduction to Computer Vision

The intention was to adapt my TPO Farfalle patch for an IR camera and projector in Isadora.

The first order of business was to download Andres Colubri’s Syphon library for Processing 3.

I used circle tracking in Isadora to receive the feed from the IR camera. This meant feeding the Syphon Receiver actor (for which Processing 3 has to remain open and playing) into the Zoomer actor. Since the IR camera could see more than the projector’s playing space, the Zoomer actor had to shrink the image down to fit within the boundaries. This was particularly tricky because tracking a person depends on their height, and the wall made calibrating the space especially difficult. The Zoomer actor was then fed into Horizontal Flip, Difference, and Contrast Adjust actors, which were fed into a Video Mixer actor along with a second input from the Horizontal Flip actor, and finally into the Eyes actor.

From the Eyes actor I used the horizontal and vertical positions, as well as the object velocity, all of which went through a Smoother actor. The GLSL Shader used the horizontal and vertical positions, while the velocity defined the saturation level, as it did in the original patch. Screenshots of the patch can be seen below.

Here is a link to a video of what the patch looks like: https://youtu.be/KtPY9qWXnFU. Note that the effect is slightly ahead of my feet because I am bending over; it is tracking the top of my head.

I also attempted to incorporate two objects into the patch using Eyes++. I did this by adding two Blob actors and using Calculator and Limit-Scale Value actors to average the position of the two objects, as well as their velocity. While I was able to get the correct readings and it worked in theory, the patch does not lend itself to two objects and I did not like the practical result. Screenshots of this patch can be seen below.

 

 

Post Class Update:

Rather than using the circle-tracking technique, I have updated the Isadora patch to use Background Subtraction. This has allowed for more stable tracking of the object/person in both the single and double versions, taking their extra movements into consideration, and has made the double-object version work even when one or both of the objects/people are stationary.

Here is a screen shot of the Background Subtraction (single object version):

Here is a screen shot of the Background Subtraction technique in the entire patch (double object version):

Here is the Isadora Patch:

SBM/TPO Background Subtraction.izz

Here is a video demonstration:

WILL COME WHEN I HAVE BEEN BACK IN THE LAB TO DO THIS

Arduino Game

Intention

Using a body switch, the game functions as a reaction test. The game utilises two LEDs of different colors (green and blue in this example) which light up randomly. Each LED has a body switch: one end is held by a person and the other end by a third person, the player (the player therefore holds a wire for each LED). When an LED lights up, the player must react by touching the person that LED corresponds to. If the player is correct, the light turns off and, after a brief delay, one of the LEDs turns on at random. This repeats until the player gets one wrong. When the player touches the wrong person, the LEDs flash four times and then stay on until the game is reset for another turn. The aim of the game is to turn off as many LEDs as you can before getting one wrong.

Code

void setup: Sets the LEDs as outputs and allows them to be randomly triggered.

void loop: Sets the analog read of the body switch to touchAmount. The if statement covers 4 possibilities: if LED 1 is on and person 1 is touched, the next pin is activated; if LED 1 is on and person 2 is touched, the LEDs flash and it is game over; if LED 2 is on and person 2 is touched, the next pin is activated; if LED 2 is on and person 1 is touched, the LEDs flash and it is game over. The game-over checks had to come first to stop players touching both people at the same time and still progressing through the game.
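The four-branch decision logic, with the game-over checks evaluated first, can be sketched as follows. This is a plain-Java stand-in for the Arduino code, not the code itself; the threshold value and the numeric return codes are assumptions for illustration:

```java
// Sketch of the reaction-game decision logic described above.
public class ReactionGame {
    static final int THRESHOLD = 100; // assumed analog-read cutoff

    // activeLed: 1 or 2. touch1/touch2: analog readings for each body switch.
    // Returns -1 for game over, 1 for "light a new random LED", 0 for no input.
    static int step(int activeLed, int touch1, int touch2) {
        boolean t1 = touch1 > THRESHOLD;
        boolean t2 = touch2 > THRESHOLD;
        // Game-over checks come first, so touching both people at once
        // cannot sneak past as a correct answer.
        if (activeLed == 1 && t2) return -1;
        if (activeLed == 2 && t1) return -1;
        if (activeLed == 1 && t1) return 1;
        if (activeLed == 2 && t2) return 1;
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(step(1, 200, 0));   // correct touch
        System.out.println(step(1, 200, 200)); // both touched: game over
    }
}
```

Ordering the branches this way is the point: with the correct-touch checks first, a player holding both wires against both people would always register as correct.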

void lightRandomPin(int pinNum): If the right person is touched, this randomly lights the next LED.

void flashLED(): This is the Game Over option.

Wiring

Open Studios Installation (Ethan + Grace)

Welcome to the I.M. Lab Cat

Two images designed in Adobe Illustrator, plus a sound effect. Isadora is used with an IR range finder and a serial Xbee. A signal from the receiver triggers the meow sound effect and the ‘Welcome to the I.M. Lab’ image. A gate and trigger delay avoid toggling the image and sound effect continuously when multiple people walk through.

Neopixels

A strip of Neopixels lines the bottom of the TVs the cats are on. These are controlled by a RedBoard. When a signal is received from the Xbee connected to the RedBoard, two pulses of bright light travel from the center of the strip outwards, leaving behind a dimmer trail of color. Multiple pulses can fire at a time. These pulses are controlled by a matrix that keeps track of each pulse’s location, whether it is active, and its color. There is also a pulsing center section of light which quickly randomises its color.
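The pulse bookkeeping can be sketched as a matrix with one row per pulse, holding position, active flag, and color. A plain-Java illustration, not the RedBoard code; the strip length and maximum pulse count are assumptions, and for brevity each pulse travels in one direction only (the real strip mirrors each pulse outward both ways from the center):

```java
// Sketch of the pulse matrix: each row is [position, active (0/1), color].
public class PulseStrip {
    static final int STRIP_LEN = 60; // assumed LED count

    // Fire a new pulse from the center of the strip, reusing the first
    // inactive row. Returns the row index used, or -1 if all rows are busy.
    static int fire(int[][] pulses, int color) {
        for (int i = 0; i < pulses.length; i++) {
            if (pulses[i][1] == 0) {
                pulses[i][0] = STRIP_LEN / 2;
                pulses[i][1] = 1;
                pulses[i][2] = color;
                return i;
            }
        }
        return -1;
    }

    // Advance every active pulse one LED outward; deactivate at the end.
    static void update(int[][] pulses) {
        for (int[] p : pulses) {
            if (p[1] == 1) {
                p[0]++;
                if (p[0] >= STRIP_LEN) p[1] = 0;
            }
        }
    }

    public static void main(String[] args) {
        int[][] pulses = new int[8][3];
        fire(pulses, 0xFF0000);
        update(pulses);
        System.out.println(pulses[0][0]); // one step out from center
    }
}
```

Reusing inactive rows is what lets multiple pulses be in flight at once without allocating anything per trigger, which matters on a microcontroller.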

User Testing

Individuals noticed the cat first but failed to notice the Neopixels. After discussion, we concluded that this was because the lights were not at eye level, so the cat drew all the attention, helped further by its sound effect. We moved the lights up so they are visible alongside the cat.

We also deemed the original cat sound effect annoying; it sounded like a sickly cat. It was changed to a kitten’s meow to be more pleasant for the audience.

User Testing Documentation

 

 

Creative Switch

I programmed a switch which, when pressed, makes the LEDs blink from red to green to blue to yellow, holding on each light for 6 seconds. While using millis()-based timing would have made it possible to stop the sequence midway through, I am not familiar enough with Arduino coding yet, although I think I have a rough idea of how it works after reading the article.
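The millis() idea mentioned above can be sketched like this: instead of blocking in delay() for 6 seconds, each pass through the loop compares elapsed time against the step length and advances the color, so the sequence could be interrupted between any two checks. A plain-Java stand-in for the Arduino idiom; the color list and 6-second hold come from the description above:

```java
// Sketch of non-blocking millis()-style timing for the color sequence.
public class BlinkSequence {
    static final String[] COLORS = {"Red", "Green", "Blue", "Yellow"};
    static final long STEP_MS = 6000; // hold each color 6 seconds

    long lastChange = 0;
    int index = 0;

    // Called repeatedly (like Arduino's loop(), with nowMs = millis()).
    // Returns the color that should currently be lit.
    String current(long nowMs) {
        if (nowMs - lastChange >= STEP_MS) {
            lastChange = nowMs;
            index = (index + 1) % COLORS.length; // advance, wrapping around
        }
        return COLORS[index];
    }

    public static void main(String[] args) {
        BlinkSequence seq = new BlinkSequence();
        System.out.println(seq.current(0));    // Red
        System.out.println(seq.current(6000)); // Green
        System.out.println(seq.current(7000)); // still Green
    }
}
```

Because nothing blocks, a button check could run between any two calls and reset `index`, which is exactly the mid-sequence stop the delay() version cannot do.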

My initial issue with the code (which I mentioned via e-mail) turned out to be a wiring issue. I was accidentally putting twice as much power into the system as needed, which also caused a kickback into the computer whenever the button was pressed, crashing it. This was easily solved once the mistake was realised (and I now have a better idea of how to wire in future). I was also wiring the button through pin 13, which I switched to pin 12 so it didn’t control the LED on the board. Initially the button could only turn off the lights, but the re-wiring meant that pressing the button initiated the sequence. Surprisingly, my code ended up being reasonably solid and, having written it myself from scratch, I am more confident moving forward with Arduino’s language and will therefore be able to tackle more complicated things.

To make the switch “creative,” I wanted to use everyday objects from around my room in a practical way. I ended up building a ‘Coaster Switch’: one wire is attached to a Coke can and another to a coaster made of aluminium. When the can is placed on the coaster, the light sequence is activated.

Sensation in Psychology Reading and Workshop Response

What struck me most about the reading, both before and after class, is the discussion of the different thresholds. Absolute Threshold, for example, came into effect when Grace and I were looking at each other for the 5 minutes. I became very aware of the slight movement in her features. My absolute threshold was a lot lower the longer we looked at each other, and movements I would not normally notice in everyday life, such as a small twitch in the corner of the mouth, suddenly became much larger. Difference Threshold stood out to me when I was being guided blindly around campus and the Arts Center. Trying to read the change in Grace’s movement and correspond that to my own body was a challenge in noticing the change in pressure. How much pressure did Grace need to exert in order for me to notice her desire for me to change speed, direction, or orientation? Signal Detection came into effect in the same exercise. Due to my lack of sight, the touch of the ground, normally a background sense I pay little attention to, became a primary signal in my exploration of the surrounding area, as well as keeping me from stepping too far into water, stones, or shrubbery. Finally, Sensory Adaptation was notable while exploring 036. My senses adapted to the low light and I began not to notice it as I found new ways to explore. When I stepped into the beam of the grid lights, however, I detected a change and became aware of the light again before getting used to the brightness and allowing my other senses to come to the fore. This back and forth of spending time in the light and out of it allowed different senses to come to the forefront as I got used to a lack of change in the others.

So why is this important? I think it’s applicable for me, especially regarding the use of light, in the design of durational sets in a theatrical, performance, or installation setting. For example, imagine the entire back wall is an LED panel that grows brighter over a long period of time. As the piece progresses, when does each audience member notice the change from the beginning? How can you tie that change thematically to the other elements of the piece? If you block out the audience’s other senses, or overload their senses with different elements, how does this affect their perception of the change? And if it stopped changing halfway through, could you trick the audience into believing it is still changing slowly, because it had been changing for so long before that moment?

 

Isadora for Farfalle – Oct. 2

Continued from last week.

  • The mouse watcher now has a smoother attached, ready to be attached to Farfalle’s motion tracking software.
  • I have adjusted and simplified the speed/saturation effect (right side of the screen shot).
  • I have made 5 looks/effects that change over time using trigger delays. I have some continued problems with this and, while it works sometimes, it needs to be simplified as it has a habit of malfunctioning.
  • The 5 looks/effects are Regular, Motion Blur, Shimmer, Dots, and Reflector. All are attached to Difference.
  • I have removed the key watchers from the Colorizers. There are now 5 specific color combinations associated with each look/effect.
  • I primarily use triggers and toggles attached to the bypasses to work the system.

Shaders

Mouse controls the X axis with a Stage Mouse Watcher.

The Y axis remains fixed. I did this because I discovered that putting the Stage Mouse Watcher through a Calculator stops the shader from receiving the input.

The shader is put through a Difference actor, which is activated when the mouse moves. I then colorised the shader with four different colors, activated at will by different Keyboard Watchers (and toggle switches attached to the bypass) to create different color variations.

The intention was to give an interactive neon lighting effect, inspired a little by Tron.

I changed the speed of the lines in the shader code, as well as the size of the lines and the distance between them. I also had to add the mouse to the code. The issue that came up was that (0,0) is defined as the middle of the Stage in the code. I counteracted this a little by changing the line to vec2 uv = fragCoord.xy / iResolution.xy - .225, but it meant that the mouse only lines up in the middle and drifts slightly off at the edges of the screen. I was unable to find a solution to this because, as mentioned earlier, putting the mouse through a Calculator negated its input.

 

Particle Patches – September 18

Musical Particles

My first patch, Musical Particles, uses sound watchers to dictate the number of particles being generated, as well as their size and color. This was done by limiting the range of three Sound Level Watchers and connecting the outputs to a Color Maker HSBA, 6 Comparators, and the Mid Size input of 3D Particles. I also used a mouse watcher to control where the particles come from; this would allow a person to trace a dancer across a screen. The Stage Mouse Watcher was connected to a series of Calculators, Maps, and Limit-Scale Values. Three Random number generators decide the respective x, y, and z velocities. These Random number generators are also connected to a Comparator and a Sound Level Watcher to ensure no particles are produced below a certain sound level.

Rainbow Snake

My second patch, Rainbow Snake, uses the Stage Mouse Watcher connected to two Maps and, in the case of the vertical position, a Calculator (to counter the vertical mirroring you get when using the Map). A Pulse Generator is used at its highest frequency to generate particles, giving the illusion of a continuous line. The number of particles allowed on screen is also increased to its highest setting. A Wave Generator is then connected to a Color Maker HSBA in order to cycle through the colors of the rainbow as the line is traced out. In honor of the game Snake, a Picture Player with the game’s logo is used as a Texture Map.