Research into Final project – Interactive Geodesic Dome


I worked on prototyping the cells that will make up my final project. I made two triangles out of rolled-up paper, stapled two layers of fabric onto them, then attached a tilt switch to the center of each. I then added a NeoPixel strip that is activated by one of the tilt switches. You can see the results below:

From the back:

I was able to run the wires through the tubes to keep things neat.

Generally this setup can distinguish between the two cells, unless the frame is bumped. Giving the fabric of each cell a bit of slack seems to help prevent one cell's switch triggering the other's. I had the tilt switches pointed up, and set the code to activate the lights when a switch read 0.

The code is below:

#include <Adafruit_NeoPixel.h>
#ifdef __AVR__
#include <avr/power.h>
#endif

// Which pin on the Arduino is connected to the NeoPixels?
// On a Trinket or Gemma we suggest changing this to 1
#define PIN 6

// How many NeoPixels are attached to the Arduino?
#define NUMPIXELS 40

// When we set up the NeoPixel library, we tell it how many pixels, and which pin to use to send signals.
// Note that for older NeoPixel strips you might need to change the third parameter -- see the strandtest
// example for more information on possible values.
Adafruit_NeoPixel pixels = Adafruit_NeoPixel(NUMPIXELS, PIN, NEO_GRB + NEO_KHZ800);

int delayval = 100; // doubles as the brightness value, fading to 0 once triggered

bool on = false;

void setup() {
  pinMode(5, INPUT); // tilt switch for the first cell
  pinMode(4, INPUT); // tilt switch for the second cell

  // This is for Trinket 5V 16MHz; you can remove these lines if you are not using a Trinket
#if defined (__AVR_ATtiny85__)
  if (F_CPU == 16000000) clock_prescale_set(clock_div_1);
#endif
  // End of Trinket special code

  pixels.begin(); // This initializes the NeoPixel library.
}

void loop() {
  int one = digitalRead(5);

  // The tilt switch reads 0 when tipped; latch the lights on.
  if (one == 0 && on == false) {
    on = true;
  }

  if (on == true) {
    // Light the second half of the strip (pixels 14-27) for this cell.
    for (int i = 0; i < 14; i++) {
      // pixels.Color takes RGB values, from 0,0,0 up to 255,255,255
      pixels.setPixelColor(i + 14, pixels.Color(delayval, delayval, delayval));
    }
    pixels.show(); // This sends the updated pixel colors to the hardware.

    delayval--; // fade the cell out a little on each pass
    if (delayval == -1) {
      on = false;
      delayval = 100;
    }
  }
}
I have also been researching the feasibility of building a geodesic dome. Given the time constraints and the amount of material required, I am opting instead to make a smaller frame that can hug a wall or corner of a room to form a makeshift tent. This would be made with a large wooden frame filled in by smaller triangles of rolled-up paper to form the individual cells. Instead of NeoPixels, I plan to install a projector above the structure, with white triangles or squares projection-mapped onto the triangles of the structure, as I expect wiring up that many NeoPixels and getting the light to diffuse properly would be very difficult.

Aesthetically, I want to mimic the look of a butterfly's or moth's nest that is wispy and light, or cocoon-like, so that it could almost seem like a fantasy fairy dwelling with a sense of magic to it.

Introduction to Computer Vision

The intention was to adapt my TPO Farfalle patch for an IR camera and projector in Isadora.

The first order of business was to download Andres Colubri's Syphon library for Processing 3.

I used circle tracking in Isadora to receive the feed from the IR camera. This meant feeding the Syphon Receiver actor (Processing 3 has to remain open and playing for it to work) into a Zoomer actor. Since the IR camera could see more than the projector's playing space, the Zoomer actor had to shrink the image down to fit within the boundaries. This was particularly tricky because tracking a person depends on their height, and the wall made setting the placement especially difficult. The Zoomer actor was then fed into Horizontal Flip, Difference and Contrast Adjust actors, which were then fed into a Video Mixer actor, along with a second input from the Horizontal Flip actor, and finally into the Eyes actor.

From the Eyes actor I used the horizontal and vertical positions, as well as the object velocity, all of which went through a Smoother actor. The GLSL Shader used the horizontal and vertical positions, while the velocity defined the saturation level, as in the original patch. Screenshots of the patch can be seen below.

Here is a link to a video of what the patch looks like: Note that it is slightly ahead of my feet because I am bending over; it is tracking the top of my head.

I also attempted to incorporate two objects into the patch using Eyes++. I did this by adding two Blob actors and using the Calculator and Limit-Scale Value actors to average the position between the two objects, as well as to average their velocity. While I was able to get the correct readings and it worked in theory, the patch does not lend itself to using two objects and I did not like the practical result. Screenshots of this patch can be seen below.
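The averaging done with the Calculator and Limit-Scale Value actors amounts to taking the midpoint of the two blobs' positions and the mean of their velocities; a minimal sketch of that arithmetic (the struct and function names are my own, not Isadora's):

```cpp
// Stand-in for one tracked object coming out of an Eyes++ Blob actor.
struct Blob {
    float x, y;  // normalized position
    float vel;   // reported object velocity
};

// Combine two tracked blobs into the single position/velocity that the
// original one-object patch expects: midpoint of positions, mean velocity.
Blob averageBlobs(const Blob& a, const Blob& b) {
    return { (a.x + b.x) / 2.0f, (a.y + b.y) / 2.0f, (a.vel + b.vel) / 2.0f };
}
```

The combined point sits between the two people, which is exactly why the practical result felt off: the projection follows a spot where nobody is standing.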



Post Class Update:

Rather than using the Circle Tracking technique, I have updated the Isadora patch to use Background Subtraction. This has allowed more stable tracking of the object/person in both the single- and double-object versions, even with their extra movements, and has made the double-object version work when one or both of the objects/people are stationary.

Here is a screen shot of the Background Subtraction (single object version):

Here is a screen shot of the Background Subtraction technique in the entire patch (double object version):

Here is the Isadora Patch:

SBM/TPO Background Subtraction.izz

Here is a video demonstration:


Computer Vision assignment – Grace

The goal of this assignment was to take the patches developed for the Farfalle workshop and adapt them to an IR camera and projector set up in the lab, using computer vision.

The camera feed was collected using Processing and sent to Isadora through Syphon.

In Isadora, I made a patch to work out computer vision with background subtraction. The first step was to use a Zoomer actor so the projection space, plus a little extra, was captured. This then went to a Freeze actor to grab a frame of the image while the stage was empty. That frame went into an Effect Mixer, set to difference, along with the unfrozen but zoomed feed. This all went into Eyes, which could track the location of one person (or blob) moving around the stage. I copied this into a couple of my Farfalle patches, then mapped the location values so the projection responded to someone walking inside it. I did not convert all my patches, as there were too many. One problem was that standing under the projection I would cast a shadow, which made it harder to see the effect of the patches. This was especially noticeable as the projected surface was smaller than in the Farfalle workshop. Some patches felt better to move around under, especially those with some constant movement that involved the whole stage rather than focusing on the location of the person onstage.
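In code terms, the Freeze + Effect Mixer (difference) chain is a per-pixel absolute difference against a stored empty-stage frame, followed by a threshold; a rough grayscale sketch of the idea (the function name and threshold value are illustrative, not part of the Isadora patch):

```cpp
#include <cstdlib>
#include <vector>

// Background subtraction as the actor chain does it: hold one frame of
// the empty stage, then flag any pixel that differs from it by more than
// a threshold. Grayscale 0-255 values, row-major order.
std::vector<bool> subtractBackground(const std::vector<int>& background,
                                     const std::vector<int>& frame,
                                     int threshold) {
    std::vector<bool> mask(frame.size());
    for (size_t i = 0; i < frame.size(); ++i) {
        mask[i] = std::abs(frame[i] - background[i]) > threshold;
    }
    return mask;
}
```

Eyes then effectively looks for clusters of flagged pixels in this mask, which is why a grabbed empty-stage frame makes the tracking more stable than frame-to-frame difference: a stationary person still differs from the background.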

I used the Processing code provided in class unchanged for sending the IR camera feed via Syphon to Isadora.

The Isadora patch I made can be found here:


A video:

I was able to detect two objects using Eyes++ as well, though the blob numbers it assigned kept changing and jumping between blobs, making it hard to hold onto one blob to grab a location from. I fixed this by decreasing the maximum number of objects to 2. I do wonder if there's a way to have this actor default to blob 1 when no other blobs are detected.
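One way to work around jumping blob numbers (outside of Isadora) would be to track by position rather than by ID: each frame, keep whichever detection sits closest to the last known position. A hypothetical sketch, assuming at least one detection per frame:

```cpp
#include <cmath>
#include <vector>

struct Point { float x, y; };

// Given this frame's detections and the previously tracked position,
// return the detection nearest to it, so the track survives the
// detector renumbering its blobs between frames.
Point pickNearest(const std::vector<Point>& detections, Point last) {
    Point best = detections.front();
    float bestDist = 1e9f;
    for (const Point& p : detections) {
        float d = std::hypot(p.x - last.x, p.y - last.y);
        if (d < bestDist) { bestDist = d; best = p; }
    }
    return best;
}
```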

Here are the two chairs on the stage:

Here they are on Isadora as two separate blobs:

I didn’t do anything with this however, as I didn’t have any patches capable of using two objects (without freaking out).

Touch Sensor Game – Grace

The game I’ve made is a mix of a matching game and Snap.

The setup:

5 pieces of cutlery are arranged on each side of a box. Each piece is connected to another piece on the opposite side by a wire running under the box. A weight-triggered switch (made with wire and a folded piece of paper) is placed in the center of the box with a teacup (or other object) on top. Two pieces of wire are attached to metal rings at one end; at the other end, one is attached to the 5V pin on an Arduino and the other to the A0 pin (with a pulldown resistor). Two LEDs are also powered through the Arduino and are visible when sitting at the box: a dim red one and a bright blue one. The red lights up when the switch at A0 is activated; the blue lights up when the central weight-triggered switch is de-activated. OPTIONAL: place a plate in the center of each row of cutlery for prizes or sweets. Lacy tablecloths are also optional.

To play:

Requires 2 players. Each player puts on a ring and sits on opposite sides of the box. Together, they touch the cutlery on their own sides to find two pieces that are connected by wire, watching for the red LED at the Arduino to tell if it is a match. Once a match is found, it is a race to be the first to grab the teacup from the center. Once the teacup is removed, the bright blue LED will light up, signaling the end of the round; the player holding the teacup when the blue lights up wins a point and can optionally eat a sweet from their plate. To begin a new round, place the teacup back on the paper sensor.

Players should not repeat a previous match. The Arduino cannot enforce this, so it is up to the players themselves to remember, or to mark the matched cutlery somehow.
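In code terms, each round's LED logic boils down to two independent readouts; here is a minimal, non-Arduino sketch of that logic (the struct, function name, and analog threshold are my own placeholders, not the actual sketch):

```cpp
struct LedState { bool red, blue; };

// The red LED mirrors the cutlery circuit: it lights while the two
// ringed players complete a circuit through a matched pair (analog
// reading at A0 above a noise threshold). The blue LED mirrors the
// paper switch: it lights once the teacup's weight is removed.
LedState updateLeds(int cutleryReading, bool cupOnSwitch) {
    const int kMatchThreshold = 200; // placeholder for the real noise floor
    return { cutleryReading > kMatchThreshold, !cupOnSwitch };
}
```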

Here is a picture of the setup in progress:

From inside the box:


Finished setup:



Pictures of the paper switch. One of the wires goes to pin 5, the other to 5V:


The Arduino and breadboard:

The Arduino code:

Video of playing the game:


I had originally wanted each piece of cutlery to be its own switch, so that matches between cutlery were handled in the Arduino and the combinations could be changed to give the game more longevity, but this proved too complicated to achieve in time. I also struggled with ensuring there was enough conductivity across all the cutlery pieces, and had to be very careful wrapping everything with wire and making a paper switch that was activated by weight (I originally tried using copper tape on the bottom of the teacup to bridge the gap between two wires, but this made for an inconsistent switch). I still have issues with conductivity and getting stable readings from the cutlery switches, especially with establishing contact through the rings.


Arduino Game


Using a body switch, the game functions as a reaction test. The game uses two LEDs of different colors (green and blue in this example) which light up randomly. Each LED has a body switch in which one end is held by a person and the other end is held by a third person, the player (the player therefore holds a wire for each LED). When an LED lights up, the player must react by touching the person that LED corresponds to. If the player is correct, the light turns off and, after a brief delay, one of the LEDs turns on at random. This repeats until the player gets one wrong. When the player touches the wrong person, the LEDs flash four times and then stay on until the game is reset for another turn. The aim of the game is to turn off as many LEDs as you can before getting one wrong.


void setup: Sets the LEDs as outputs and allows them to be randomly triggered.

void loop: Sets the analog reading of the body switch as touchAmount. The if statement covers 4 possibilities: if LED 1 is on and person 1 is touched, the next pin is activated; if LED 1 is on and person 2 is touched, the LEDs flash and it is game over; if LED 2 is on and person 2 is touched, the next pin is activated; if LED 2 is on and person 1 is touched, the LEDs flash and it is game over. The game-over checks had to come first, to stop players touching both people at the same time and still progressing through the game.

void lightRandomPin(int pinNum): If the right person is touched, this randomly lights the next LED.

void flashLED(): This is the Game Over option.
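The ordering described in void loop can be sketched as a pure decision function; the wrong-person checks come first so that touching both people at once still ends the game. The names and enum below are illustrative, not taken from the actual sketch:

```cpp
enum Outcome { CONTINUE_GAME, NEXT_LED, GAME_OVER };

// Judge one frame of the reaction game: which LED is lit, and which
// body switches register a touch. Game-over cases are checked before
// success cases, so touching both people at once still counts as a loss.
Outcome judge(int litLed, bool touched1, bool touched2) {
    if (litLed == 1 && touched2) return GAME_OVER;
    if (litLed == 2 && touched1) return GAME_OVER;
    if (litLed == 1 && touched1) return NEXT_LED;
    if (litLed == 2 && touched2) return NEXT_LED;
    return CONTINUE_GAME; // no touch yet; keep waiting
}
```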


Open Studios Installation (Ethan + Grace)

Welcome to the I.M. Lab Cat

Two images were designed in Adobe Illustrator, along with a sound effect, run through Isadora with an IR range finder and a serial Xbee. A signal from the receiver triggers the meow sound effect and the 'Welcome to the I.M. Lab' image. A gate and a trigger delay avoid toggling the image and sound effect continuously when multiple people walk through.


A strip of NeoPixels lines the bottom of the TVs the cats are on, controlled by a RedBoard. When a signal is received from the Xbee connected to the RedBoard, two pulses of bright light travel from the center of the strip outwards, leaving behind a dimmer trail of color. Multiple pulses can fire at a time. These pulses are controlled by a matrix that keeps track of each pulse's location, whether it is active or not, and what color it is. There is also a pulsing center section of light which quickly randomises its color.
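The bookkeeping for those pulses can be modeled as a small array of records, each holding a pulse's position, active flag, and color, with every update moving active pulses one pixel further from the center. A rough sketch under those assumptions (all names and sizes are mine, not the actual RedBoard code):

```cpp
const int kNumPulses = 4;   // how many pulses may be in flight at once
const int kStripHalf = 20;  // pixels from the strip's center to one end

struct Pulse {
    bool active;
    int pos;    // distance from the strip's center, in pixels
    int color;  // packed RGB color for this pulse
};

// Advance every active pulse one pixel outward; deactivate a pulse
// once it runs off the end of the strip.
void stepPulses(Pulse pulses[kNumPulses]) {
    for (int i = 0; i < kNumPulses; ++i) {
        if (!pulses[i].active) continue;
        pulses[i].pos++;
        if (pulses[i].pos >= kStripHalf) pulses[i].active = false;
    }
}
```

A new Xbee signal would just claim the first inactive slot in the array, which is what lets multiple pulses fire at once.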

User Testing

Individuals noticed the cat first but failed to notice the NeoPixels. After discussion, we concluded that this was because the lights were not at eye level, so the cat drew all the attention, helped further by its sound effect. We moved the lights up so they are visible alongside the cat.

We also found the original cat sound effect annoying; it sounded like a sickly cat. The sound effect was changed to a kitten's meow to be more pleasant for the audience.

User Testing Documentation