Final performance

So it went well!

There were a few hiccups – namely that we did have to rely on someone else to do the narration, since our microphone was picking up its own output and giving too much feedback. It was unfortunate that we didn't have our updated script, but that didn't matter too much since our draft was pretty close to the final.

We ended up having close to no time to rehearse. We were planning on rehearsing the day of, but time wasn't on our side: my class went way over time, and Nahil had to set up for her other presentation. But anyways – I made sure the tech was fine, and our final performance honestly turned out better than expected.

A couple of things could have been a bit better. For one, we should have taken one more sheet of paper off the IR light so that there would be some light on the puppets. The lines looked great in the video James Hosken filmed of our performance, though it seems the lights on the wizard had dimmed and weren't as strong as we wanted them to be.

A rehearsal beforehand certainly would have relieved the stress, but we also had a lot of parts that would work sometimes and not others, so we needed to continuously make changes – like the mapping, even though the podium stayed in the same spot.

All in all, it worked out great. And our one-minute performance didn't feel as long as I expected it to as our wizard and dragon rode off into the sunset.

Output – concerns and successes

We were still waiting on the dragon to be done. Oh, the anxiety. It was taking longer than expected, and for some reason the 3D printer kept misprinting what we wanted. Thanks to the kindness of several students who supervised the printing when neither of us was in the lab, the dragon was eventually done. But putting it together was a whole other struggle.
While Nahil was trying to put the dragon together, I was working on fixing the mini wizard and making it as mobile as possible. I was also experimenting with where the third light should go. We knew we wanted it to be on the hat. We could tuck it into the hat itself, but I was concerned that the connection would be loose – especially since I had already snapped one battery holder (oops).

The hot glue gun was our best friend in the whole process.

One thing we struggled with, though, was that the batteries were dying on us unusually quickly. They would initially be really bright on the IR camera, but when we went to test them again, they wouldn't be as bright or, for some reason, wouldn't work at all. Sometimes it was a loose connection, but other times the battery had simply died.

We finished our puppets in time to come up with a proper story and choreography for our show. But one concern remained: would we have enough time for a rehearsal?

After presenting in class, one of the things we added to our performance was an external IR light to cast some light onto our puppets and give them a ghostly effect. We also settled on our performance space: the podium, which we would project onto the front of.

We also decided to add a webcam so that the magic spell the wizard cast would switch the scene from the webcam view to the IR view. This would be triggered by a key on the keyboard via a Keyboard Watcher actor in Isadora.
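The real trigger was the Keyboard Watcher actor in Isadora; purely as a sketch of the same logic outside Isadora, here's roughly what the key-triggered switch would look like in Python with OpenCV (the device indices and the key binding are assumptions, not our actual setup):

```python
import cv2

# Sketch of the scene switch we built in Isadora with a Keyboard Watcher:
# pressing a key flips from the webcam view to the IR camera view.
# Device indices 0 and 1 are assumptions and will vary by machine.
webcam = cv2.VideoCapture(0)
ir_cam = cv2.VideoCapture(1)

show_ir = False  # start on the webcam view

while True:
    source = ir_cam if show_ir else webcam
    ok, frame = source.read()
    if not ok:
        break
    cv2.imshow("performance", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("m"):   # the wizard's "magic spell" key (hypothetical binding)
        show_ir = True    # switch to the IR view
    elif key == ord("q"):
        break

webcam.release()
ir_cam.release()
cv2.destroyAllWindows()
```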

The dragon puppet turned out great. We glued some joints together since we didn't need every single part to move. As long as the head and the wings moved, it could look alive.

The wizard still couldn't move properly. The kebab sticks helped, but they weren't ideal. We tried to make the most of it, though, and I was able to figure out some positions that made the choreography easy.

We still hadn't had a proper rehearsal with all our tech in place, and at the last minute we decided to add narration, which meant we also had to add a microphone.

Input – trials, errors, successes

Let’s face it – IR lights are just magical. The idea that you can only see the light on camera – totally magical (yes I know, science).

We initially had problems getting our IR camera to be detected on my computer, but once that was solved (thanks Aaron!), the lights were working and being detected. The downside was that we didn't have as wide a reach as we thought we would. The camera lens was not as wide as we had hoped, and we couldn't get very far away from the camera before the lights stopped showing up at all. So we had to massively scale down our performance idea: instead of a human and a puppet, I thought we should go with two mini puppets and a mini puppet performance.

We began building the patch in Isadora, using a blob decoder to single out each light. We decided to use three lights per puppet rather than six, which had been our initial thought when we were planning bigger puppets. Scaling down also meant, on the plus side, scaling down our quantity of materials. I decided to make a puppet out of found material: I procured odd-shaped pieces of wood that, once I found a way to put them together, became a wizard. It surprisingly took SO LONG to make a TINY puppet, because I had to choose the right type of string, figure out where to put the lights, and work out how many parts would move and how. It ended up being a combination of a string puppet and a rod puppet, since once we soldered the wires and the lights and attached them to the hands, the mini wizard lost its ease of arm movement. Using kebab sticks gave us more force to move the arms, but it wasn't ideal. If we were to go back and do this again, I think I would have either chosen a different spot for the lights, or made the arms exclusively out of wire instead of wire on wood (the wood was from a mini popsicle stick).
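The blob decoder in Isadora did the light tracking for us; just to note down the idea, here's a minimal sketch of the same thing in Python with OpenCV – threshold the IR frame so only the bright LEDs survive, then take the centroid of each connected blob (the camera index and threshold value are guesses, not our settings):

```python
import cv2

# Minimal sketch of what the blob decoder does for us in Isadora:
# keep only the bright IR LEDs, then find the centre of each blob.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

    # Each IR light shows up as one connected component (blob).
    n, _, stats, centroids = cv2.connectedComponentsWithStats(bright)
    points = [tuple(map(int, centroids[i])) for i in range(1, n)]  # skip background

    for (x, y) in points:
        cv2.circle(frame, (x, y), 5, (0, 255, 0), -1)
    cv2.imshow("blobs", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```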

Nahil decided to 3D print a dragon. The problem with this was that it would take hours to print, which was concerning since I wasn't sure when we would have time for a rehearsal. She seemed determined not to make the puppet out of wood or cardboard, so she went ahead, found a design for a dragon, and 3D printed it.

Meanwhile I was super frustrated that the lines weren't working out like we wanted them to. I couldn't get a very accurate read on the individual lines, and they didn't always draw. I abandoned the work in frustration one night and returned to it the next morning, where I increased the threshold, added more lines actors, and also added a motion blur actor, which produced a really nice effect. By the time we did our first presentation in class, we were quite happy with the effect the lines were creating.
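Our version of this lived entirely in the Isadora patch; as a sketch of the idea only, here's roughly what the "lines between lights plus motion blur" effect looks like in Python with OpenCV (the frame size, fade factor, and example light positions are made up):

```python
import cv2
import numpy as np

# Sketch of the lines + motion blur effect, assuming we already have the
# blob centroids. The blur is faked by blending each new frame of lines into
# an accumulator, so older lines fade out slowly.
HEIGHT, WIDTH = 480, 640
accumulator = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)

def draw_lines(points, fade=0.85):
    """Draw a line between every pair of points and fade previous frames."""
    global accumulator
    canvas = np.zeros_like(accumulator)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            cv2.line(canvas, points[i], points[j], (255, 255, 255), 2)
    # addWeighted keeps a ghost of older lines, similar to a motion blur actor
    accumulator = cv2.addWeighted(accumulator, fade, canvas, 1.0, 0)
    return accumulator

# Example: the six lights of the two puppets at made-up positions.
frame = draw_lines([(100, 120), (140, 200), (180, 150),
                    (400, 130), (430, 210), (470, 160)])
cv2.imshow("lines", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
```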

I made user actors in Isadora to clean up the patch a bit, but it only got crazier when we added lines connecting all six lights to each other. Here's what our Isadora patch ended up looking like, even with some user actors thrown in for tidiness:

Brainstorming – Connectedness

Nahil and I first started by having a conversation about our own ideas and sharing videos that inspired us. I was keen on doing a live video performance, but also on trying out something I had seen in a video. Coincidentally, the two videos we shared with each other were similar in that they both involved lines that were made or affected by human movement.

https://www.youtube.com/watch?v=JA6kl4RlA0I&app=desktop (reference point- at 0:20 or 0:40)

https://www.youtube.com/watch?v=g-a9WJA1aJY (reference point- at 0:30)

We then talked about what our options would be to detect human form and movement. The kinect was one, and colour detection was the other. Before getting ahead of ourselves, we decided to settle on a theme. Our main vague themes were –

  • human-robot relationship
  • gravity-anti gravity
  • connectedness

We landed on 'connectedness' as a theme. I still wanted to involve live video performance in some way, but by the turn of our discussion that didn't look like it was going to happen. I gave up on pushing for it, especially since we seemed to be settling on a solid idea of what we wanted in our performance. Choosing connectedness as a theme also meant that we would have two people/objects in our performance. Inspired by the videos we had shared, the lines would be drawn between the two to show a relationship between them, rather than them being seen as isolated figures.

We both knew that we didn't want to be the performers in the piece. Since I was working on Rita Akroush's Capstone, puppets were on my mind, so I suggested we have one human performer and one puppet performer. Nahil suggested that blob detection and chromakeying would be the way to differentiate between the figures, including the figure operating the puppet. I tried working with the kinect, but I really wasn't happy with the lag in detecting human form. It also wasn't detecting the form of the puppet very well – it sometimes did (hooray), but most often wouldn't. I reluctantly abandoned the kinect and decided that Nahil was right: we had to use a PS3 IR camera.

T-Rex meets Kinect

Marika and I first thought about what story to tell. We decided to take a scene from Rita Akroush’s ’65 Joules’ and create our own narrative from it through a series of five tableaus (notice the storyboard behind Marika in the video). This is what we came up with:

From experimenting with the kinect in class, we were able to trigger a shape to change colour and a sound to play when we moved our hands to the center. This pose felt very superhero-like, so we thought to build on that feeling. From Akroush's play, we took a moment when there is an imaginary T-Rex. We decided to create a narrative in which a T-Rex attacks someone driving in their car and gobbles them up. Then a superhero emerges who kicks and battles the T-Rex.

The sounds were a combination of sounds we made in Logic Pro and sounds we found online (the Cookie Monster is the sound of the T-Rex eating). We first had trouble finding the range of values for the coordinates, as they would fluctuate between extremes. For most of the coordinates we ended up not specifying a range, but instead saying > or < a specific value. For example, the superhero victory sound plays when the left hand has a y coordinate > 70.
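In the patch this was just a comparator feeding each sound's trigger; as a sketch of the logic, here's the kind of single-bound check we used (the left-hand cue and the value 70 are from our patch; the second cue and its values are made up for illustration, and the coordinate scale depends on how the kinect data is normalized):

```python
# Single-bound checks instead of value ranges: a cue fires when a joint
# coordinate crosses one > or < threshold.
def pick_cue(left_hand_y, right_hand_x):
    if left_hand_y > 70:
        return "superhero_victory"   # arms raised high
    if right_hand_x < 20:
        return "trex_roar"           # hypothetical second cue for illustration
    return None

print(pick_cue(left_hand_y=82, right_hand_x=55))  # -> "superhero_victory"
```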

The other challenge we encountered was that because we were triggering each sound by changing its speed from 0 to 1, reactivating a sound would just pick up from where it left off as opposed to starting again. So next time around, we would have to come up with a logic patch that tells Isadora to restart the sound at 0 whenever the speed changes back to 0. We used two different tracks for the sound of the kicking so that there would be some variation, and we connected this to a counter. Ideally we would have had more sounds, but we had a similar problem of not being able to specify which information should activate the sound. Because information is always coming in, the counter would just keep moving and re-triggering the sounds, which meant that nothing would actually play because the numbers were always changing.
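Here's a sketch of the logic patch we'd build next time – written in Python rather than Isadora actors – where a sound only re-triggers on the rising edge of its speed going from 0 to 1, and is rewound to position 0 first:

```python
# Only (re)trigger a sound on the rising edge of speed going from 0 to 1,
# and rewind it first so it starts from the beginning.
class SoundTrigger:
    def __init__(self, play_from_start):
        self.play_from_start = play_from_start  # callback that rewinds and plays
        self.was_playing = False

    def update(self, speed):
        is_playing = speed > 0
        if is_playing and not self.was_playing:   # rising edge only
            self.play_from_start()
        self.was_playing = is_playing

# Usage: repeated speed=1 updates no longer re-trigger the sound.
trigger = SoundTrigger(lambda: print("restart sound at 0"))
for speed in [0, 1, 1, 1, 0, 1]:
    trigger.update(speed)   # prints twice: on the two 0 -> 1 transitions
```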

Kinect version of video

Project(ion)

I was very ambitious in trying to projection map onto an unconventional object. I settled on a rose with a long stem, and spent time mapping around the leaves.

I was hoping that the object I would pick would inspire what the visuals and performance would be, but that didn’t turn out to be the case.

I then tried mapping onto a blocked bookshelf and doing something with sound, before realizing that I should have the visuals decided before even thinking about sound. I recalled the Blue Man Group performance we saw in Dubai and thought to do something with miniature drums. That's what I would have gone for, if not for the fact that I couldn't get two tilt sensors to work in Isadora at the same time. I got them both to read data – which was a minute moment of happiness, because I hadn't gotten two sensors working on my own before – but this joy faded when I couldn't get Isadora to read them both. So I tried working with a distance sensor and a tilt sensor instead, and was able to get the serial communication happening.
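Isadora read the serial data directly in the actual setup; just as a note to myself, here's a rough sketch of reading the two values on the computer side with pyserial, assuming the Arduino prints one comma-separated "tilt,distance" line per reading (the port name, baud rate, and line format are assumptions about my setup):

```python
import serial  # pyserial

# Read "tilt,distance" lines coming from the Arduino over serial.
port = serial.Serial("/dev/tty.usbmodem14101", 9600, timeout=1)

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    try:
        tilt, distance = (int(v) for v in line.split(","))
    except ValueError:
        continue  # skip malformed lines
    print(f"tilt={tilt} distance={distance}")
```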

In hindsight, I would have approached this project starting from an idea rather than jumping from mapping to sound to visuals to sensors. What I should do is create a plan for myself: what I want to achieve, how I would do it, and what the first steps would be. It's just a little disappointing, because since we're nearing the end of the semester, I feel like my weekly projects should be shown with some confidence and success rather than as tried-and-failed experiments.

Anyways, here’s what I ended up with. I used found materials as a projection surface. The form is inspired by Semiconductor’s Catch the Light.

projection week (film file here)

Music is in the air

I share a lot of interests with Imogen Heap. When I began learning how to VJ, I had a MIDI controller that was limited to 8 touch pads and 4 knobs. Turning the knobs was slightly annoying because they were small, and the grooves that were meant to give your fingertips a tight grip felt rough. My movements were constrained to a controller no longer than the span of two of my hands. I wanted to move (and since I'm working with VJing to music, I definitely wanted to dance). So I thought I would build my own kind of glove that would let me manipulate visuals with my movements. I quickly realized that I did not know how to do that and it would take forever, so I started looking at what technology was already available that I could use. Because surely there had to be something.

I came across several gloves, including those by Imogen Heap. Everything was well out of my price range, so I discovered and happily settled on using a Leap Motion and a Numark Orbit MIDI controller that I could operate wirelessly.

Imogen Heap's gloves are absolutely incredible because of how precisely her movements can be translated into an effect or function. I really like how she can just run her hands through the air and she's actually scanning through a sound wave. What's also really cool is how she wants to incorporate changes in the sound based on her proximity to the audience. If I'm not mistaken, she uses a kinect, which can be seen in the video, to track her movements across the stage. That said, her gloves do turn on unexpectedly during her talk, and she has to make specific gestures to stop them from recording or adding effects while she speaks.

Watching her perform her song made me realize that aside from creating the gloves, there's also a whole choreography to create. And figuring out which gestures would feel intuitive for enabling an effect must have taken a considerable amount of time to play-test.

Shaders scare me

I don’t think I know how to use shaders….

So it was an interesting experience trying to pull shaders from shadertoy.com into Isadora, and running into compilation errors on some of them.

When I did find one that worked, though, the next challenge was figuring out what to do with it. I realized that I had kind of been using computer vision already, since I used the webcam as an input for my Teletubby project and my midterm.

I felt limited because I could only figure out ways to use horizontal or vertical movement as an input (though the vertical was more finicky). I tried playing with the CI Distortion actors so that someone's movement could have an effect beyond just moving a visual left, right, up, or down, but for some reason this did not yield the expected effect. I mainly ended up playing with motion blur and changing the scale of the visual.

In both, the central visual element responds to the individual's movement, as if you're painting the screen. This is emphasized through the motion blur on the blue shape in the first video, and through the slit scan in the second video, which is painted by the velocity of the movement.

Stray away from confusion: response to the spring break readings

I really enjoyed reading Golan Levin's article, "Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers", because it was a great overview of the history and use of computer vision. I'm excited to learn more about it in class and play with the kinect in developing work. Levin refers to many examples that can be split into 'obvious interaction' and 'subtle interaction'. Obvious meaning you can see that your figure is mirrored and your motion (or simply your presence as a solid figure) creates a response. I take a subtle example to be Rafael Lozano-Hemmer's installation Standards and Double Standards (2004). He takes "full-body input in a less direct, more metaphorical context" by having hung belts rotate according to your presence. This really intrigues me because there's a process of discovery that comes with it. I'd want to aim for wonder or curiosity, not confusion. It's like tigoe suggests in their blog posts about setting the stage, then shutting up and listening: in pre-scripting your interaction, "you're telling the participant what to think, and by extension, how to act. Is that what you wanted?"

The last article especially was a little discouraging, to be honest. Ultimately, interactive projects involve the use of a glove, motion, touch, sound, etc. It's a broad but limited list. How can I create work if it's already been done before? And how can I reinvent work that's already been done before? This is where Romy Achituv's Text Rain is a great example: mirroring figures and showing them in a different way has been done before, but the piece comes up with an alternative visual – falling letters you can play with. This has me thinking about my final project in relation to performance, especially since I've seen several videos of dancers using computer vision to create visuals that respond to their movements. But can I create something in a performance space that has a subtle interaction?

I think the two most important lessons to take from the articles are that simplicity is key, and that you should be sure of what you want. In being sure of what you want, however, you need to figure out how to keep the audience from feeling restricted while still making sure they understand the language and the rules of the interaction. It's like in theater, where we are taught that you must teach the audience the language of the play by giving them clues. If it's okay for the audience to laugh and respond to what you're saying, you need to give them a clue that this is okay. If your audience is meant to understand that a specific visual means something, you need to somehow establish at the very beginning that this is the case.

I think I should be treating interactive media as a theater class, and see where I go from there.

 

Midterm review

I’m not sure where to begin. So I’ll talk about how I feel and we’ll go from there.

I feel really disappointed with my midterm project. I think I really struggled to figure out an interaction that could be interesting. I didn't want to use a sensor that measured distance because I had already used one before. I spent some time trying to think of ways to create the illusion that a participant could blow at the clouds and they would float away. There were two problems with this: the first was that I didn't know how to make that happen, and distorting the image wasn't producing the effect I wanted; the second was that I wasn't sure how to make a participant aware that they could blow at the clouds.

So then I reverted and thought to go simple and have distance affect the video in some way. I ended up getting so wrapped up in making the Isadora patch that it didn't occur to me till later that I hadn't actually used arduino…. problematic. But it was working great and I was able to achieve what I wanted: an image appears pixellated until someone moves, and if you stay still, it returns to being pixellated.
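The actual project was an Isadora patch, but the behaviour boils down to something like this sketch in Python with OpenCV – measure motion by frame differencing and let it drive the pixelation (the thresholds and block sizes here are guesses, not the values I used):

```python
import cv2

# The image stays pixellated until someone moves, then sharpens; when the
# scene is still again it slowly pixellates once more.
cap = cv2.VideoCapture(0)
prev_gray = None
block = 40  # starting pixelation (bigger = blockier)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    motion = 0.0
    if prev_gray is not None:
        diff = cv2.absdiff(gray, prev_gray)
        motion = float(diff.mean())  # rough "how much moved" number
    prev_gray = gray

    # More motion -> smaller blocks -> clearer image; stillness -> blocky again.
    block = max(1, min(40, int(block - motion + 1)))
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (max(1, w // block), max(1, h // block)))
    pixellated = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

    cv2.imshow("midterm", pixellated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```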

As Pierre pointed out, in terms of creating a meditative space, this seemed counter-intuitive. Firstly, I realized that ‘meditative’ didn’t have to mean stopping and staring. And secondly, I became interested in reigniting the body and encouraging people to be active with their bodies.

Since I realized I wasn't making use of the ultrasonic sensor, I thought that ideally I would present the project without making my laptop visible, since I was using the webcam and the built-in microphone as inputs. I then tried to use a sound detector to read sound in the room, and though I finally figured out a simple code for doing this, it occurred to me that it was best suited for picking up close-range vibrations. I borrowed an external webcam, which didn't work until I figured out what was wrong on the morning of, but it proved ineffective for the space. Most of my time was spent figuring out how to present the project: ideally it should be on a screen, but I couldn't find a spot for the projector where a participant wouldn't block the light. So it had to be at an angle, which made it challenging to place the webcam, and because of the windows in the room the webcam was being funky and not picking up differences in brightness well when a body moved. So most of my last 24 to 48 hours were spent trying to figure out the ideal set-up and trying to find ways to use a sensor.

In terms of approaching a future project, I would try writing down steps and spending more time figuring out the interaction, then seeing what's possible, versus just trying to do something simple and manageable.