Phygical Expression

The Concept

The idea behind our final project is inspired by the game ‘Just Dance’, which Omar and I both played when we were younger and still play today. The game is also well known and nearly universal: almost everyone in our generation has played it, or at least knows about it.

This is a reference image from an older version of Just Dance, in which players are supposed to make the same dance moves as the ones displayed in front of them on the screen.

Since we currently live in a time of restrictions on how many people can be in the same room at once and on how much they can interact, the installation we made requires just one person in the frame at a time.

A snapshot is taken of the first person's opening pose. This image is then displayed on the screen for the people who come after, who have to reenact/imitate it. Once they do it correctly, the screen turns green for a second and a ‘success’ sound plays. They are then given a few seconds to pose in a way that expresses their thoughts, their ideas, or a move they find personal. When the timer reaches 0, the system takes a screenshot of their pose and displays this image for the next person. The next person who steps into the frame does the same: stop, match the previous person's pose, record their own individualized pose, and so on.
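To make the timing concrete, here is a minimal sketch of the countdown-and-snapshot step, assuming a p5.js webcam capture in a `video` variable; the names and the 5-second window are illustrative, not our exact code.

```javascript
// Hypothetical sketch of the pose countdown described above (assumed names).
let countdown = 5;      // seconds given to strike a personal pose (assumed)
let savedPose;          // snapshot shown to the next participant

function startPosePhase() {              // called right after a successful match
  countdown = 5;
  const timer = setInterval(() => {
    countdown--;
    if (countdown <= 0) {
      clearInterval(timer);
      savedPose = video.get();           // freeze the personal pose as an image
    }
  }, 1000);                              // tick once per second
}
```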

The Final Interactive Installation


Here are some of the poses we collected, put together in a collage.

The Implementation

When working on the implementation, we found that using PoseNet and ml5.neuralNetwork() would be the best way to record poses and check for them later. To implement it, we looked into machine learning and watched videos online, which was challenging but also really exciting. It took us a while to understand the key concepts, but once we did it was fairly easy to apply them to our project and plan out how each part would be coded. After setting things up, we took turns holding a still pose on camera while the code collected training data. The data was put into an array and saved as a .json file, which we later normalized (to rescale the large x and y values) and used to train the model. At that point, the code runs normally and users can step in.
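For readers unfamiliar with this workflow, here is a minimal sketch of how PoseNet keypoints can be collected as training data with ml5. The variable names, the `collecting` flag, and the label are assumptions for illustration, not the project's exact code.

```javascript
// Collecting PoseNet keypoints as ml5 training data (hedged sketch, assumed names).
let video, poseNet, pose, brain;
let collecting = false;          // set to true while someone holds the pose
const label = 'startPose';       // hypothetical label; changed for each pose recorded

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', gotPoses);
  brain = ml5.neuralNetwork({
    inputs: 34,                  // 17 keypoints x (x, y)
    outputs: 2,                  // number of pose labels collected (assumed)
    task: 'classification',
  });
}

function gotPoses(poses) {
  if (poses.length > 0) {
    pose = poses[0].pose;
    if (collecting) {
      const inputs = [];
      for (const kp of pose.keypoints) {
        inputs.push(kp.position.x, kp.position.y);  // 17 keypoints -> 34 numbers
      }
      brain.addData(inputs, [label]);               // one training example
    }
  }
}
```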

The Code

Below are the most important pieces of code needed to explain how some of the more complex functions are actually implemented.

The first function (keyPressed) is basically what we use to record the data, train the model, and save it. The second is our setup function, which initializes the canvas, loads the prerecorded pose, and loads the sound effects.
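For reference, a hedged sketch of what such a keyPressed() might look like with ml5's neuralNetwork API; the specific key bindings and file name are assumptions, not necessarily the ones we used.

```javascript
// Hypothetical key bindings: 's' saves the collected data, 't' normalizes and
// trains, 'm' saves the trained model.
function keyPressed() {
  if (key == 's') {
    brain.saveData('poseData');                 // writes poseData.json
  } else if (key == 't') {
    brain.normalizeData();                      // rescale the large x & y values
    brain.train({ epochs: 50 }, finishedTraining);
  } else if (key == 'm') {
    brain.save();                               // downloads the model files
  }
}

function finishedTraining() {
  console.log('model trained');
}
```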

These two functions do most of the work. The first (classifyPose) gathers the data points of the current pose and sends them to the model so it can be compared with the recorded pose and checked for correctness. The second (gotResult) receives the result and checks whether the confidence, meaning the accuracy of the pose, is at least about 98%, leaving space for a slight margin of error. If that is the case, several boolean variables are flipped and used elsewhere, the success sound plays, and the rest of the sequence follows.
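A simplified sketch of that flow follows; exact callback signatures can vary between ml5 versions, and the `poseMatched` and `successSound` names are assumptions.

```javascript
function classifyPose() {
  if (pose) {
    const inputs = [];
    for (const kp of pose.keypoints) {
      inputs.push(kp.position.x, kp.position.y);   // same 34-value format as training
    }
    brain.classify(inputs, gotResult);
  }
}

function gotResult(error, results) {
  // results[0] is the most confident label; one could also check its label
  // against the target pose.
  if (!error && results[0].confidence > 0.98) {    // leave a slight margin of error
    poseMatched = true;                            // boolean read elsewhere in draw()
    successSound.play();                           // the 'success' sound effect
  }
}
```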

Pay attention to what you touch! – Robert and Suzan

CONCEPT:

The idea of this project is to set up an object that says ‘move me’, inviting people to touch it. Moving the object triggers 3 interactive visuals that change every 20 seconds. While the audience is distracted by the visuals, trying to see what exactly each one does, we are secretly testing whether they touch their face in the process, ultimately transferring the germs they touched onto their face. At the end of the visuals, images of them touching their face are shown, exposing them and bringing awareness to how they may be exposing themselves to unwanted germs.

PROCESS:

As we progressed, we went back and forth on how many shaders we wanted to present, and of what types and styles. We finally concluded that we would go for 3 shaders, each signifying something and trying to achieve a particular effect.

  1. A simple one. We hope a simple one may make the audience feel underwhelmed and want to overcompensate with movement.
  2. Shocking – one to make the audience feel observed, overwhelmed, and nervous.
  3. Strange, difficult to understand, a bit queasy.

All these are in hopes that the audience may feel a bit awkward and fall into some nervous tics such as touching their face or covering their mouths.

1. Simple
2. Shocking
3. Strange

The second and third were difficult to accomplish. The second was remade so that the eyes track someone instead of opening and closing randomly, and the third took a long time because of its very long and complicated code. Eventually, we got all three to work in one sketch, separated by constant intervals.
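As an illustration of the constant-interval idea (not our exact code), shaders can be cycled in p5.js by comparing elapsed time against a fixed interval. The 20-second value follows the concept description above; the variable names, the WEBGL full-screen quad, and the trigger function are assumptions.

```javascript
const INTERVAL = 20000;          // 20 seconds per shader, as in the concept
let shaders = [];                // filled in preload() with loadShader() calls
let startTime = 0;               // reset when the pyramid is moved

function triggerStart() {        // e.g. called when the Arduino reports movement
  startTime = millis();
}

function draw() {
  if (shaders.length === 0) return;
  const elapsed = millis() - startTime;
  const index = floor(elapsed / INTERVAL) % shaders.length;
  shader(shaders[index]);        // activate whichever shader is currently due
  rect(0, 0, width, height);     // full-screen quad on a WEBGL canvas
}
```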



The object was created out of paper, in the shape of a pyramid. This shape was chosen because it is easy to grab, move, and hold, and because it is among the sturdiest shapes to build out of paper.

We also decided to add an interval shader between runs, so that the laptop is still displaying something intriguing when it hasn’t been triggered yet.

screensaver shader

RESULT:

After testing with a few people, we came to some interesting conclusions. Although most people did end up touching their face, some did not, and these tended to be the people I didn't know at all. I realised that although I expected the people who felt awkward to fall into these tics, those who felt too awkward tended to be very stiff, not move at all, and ‘pass the test’; they had more difficulty interacting with the shader. The more comfortable the person was around me (usually my closer acquaintances), the more they moved, enjoyed the shaders, and touched their faces.

Because of privacy concerns, I will only show two of the screenshots taken during user testing.

Reflection:

Despite the test not always succeeding, and the social anxiety I got from asking people to try my project, it was very rewarding to see people understand their error, and to see the shock on their faces when they realised what they had done. Overall, we are really happy with our idea and proud to have been able to implement it.

Final Project Demonstration

My room, my haven

We have all developed personal relationships with our rooms during the COVID-19 pandemic. During quarantine periods, we spent most, if not all, of our time in our rooms, thinking, moving and feeling different things in our personal space. Personally, my room became a safe haven for me, because I knew that no matter what happened, whether I was fearful of the pandemic, or experiencing social anxiety, I could always go back to my room and keep myself company. As I spent more time in my room, my mind started to divide it up into parts – holons. Each section of my room evoked different emotions within me. For my final project, I wanted to capture this feeling, as I believe that we can all relate to this, especially because of the pandemic.

What: ‘My room, my haven’ is a project that highlights how each part of my room makes me feel. It combines visuals and sounds together to make one interactive piece. I divided my room into four main sections: bedroom section, kitchen, cosmetics section and workspace. Each section has an accompanying visual and sound that is triggered once I move within the section. The sounds become louder if I start to move more in each section.

How: I used frame differencing, shaders and TidalCycles to implement this project. I browsed Shadertoy and chose four shaders that I thought represented each section of my room nicely, and made the appropriate adjustments to them in Atom. After this, I combined the four shaders into one shader file and used ‘if else’ statements to place each shader on the appropriate part of the screen, with an extra ‘else’ statement at the end to make sure that the unused parts of the screen simply show the webcam. Next, I used frame differencing in the sketch.js file to detect pixel movement in the specific locations where the shaders were placed. If the movement in a section is greater than a threshold chosen by the user, a boolean is set to true and sent as a uniform value to the fragment shader file; if this boolean is true, the corresponding shader turns on. I also used setTimeout() to set a one-second timer, so the shader stays on for only one second unless there is more movement in the section. Finally, I used OSC to send values from sketch.js to the TidalCycles file containing the sounds, so that I could control the sound based on my movement.
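The following is a condensed sketch of that logic for a single section, with assumed names, coordinates, and threshold; the real sketch.js handles four sections and also sends OSC, which is omitted here.

```javascript
let prevFrame;            // copy of the previous webcam frame
let bedActive = false;    // boolean sent to the fragment shader as a uniform

// Sum pixel differences inside one rectangular section of the webcam image.
function sectionMovement(x, y, w, h) {
  let diff = 0;
  for (let j = y; j < y + h; j += 4) {             // sample every 4th pixel for speed
    for (let i = x; i < x + w; i += 4) {
      const idx = 4 * (j * video.width + i);
      diff += abs(video.pixels[idx] - prevFrame.pixels[idx]);  // red channel only
    }
  }
  return diff;
}

function draw() {
  video.loadPixels();
  if (prevFrame && sectionMovement(0, 0, 160, 240) > 50000) {  // threshold assumed
    bedActive = true;
    setTimeout(() => { bedActive = false; }, 1000);  // shader stays on for 1 second
  }
  theShader.setUniform('u_bedActive', bedActive);    // uniform read by the frag shader
  shader(theShader);
  rect(0, 0, width, height);                         // full-screen quad (WEBGL canvas)
  prevFrame = video.get();                           // remember this frame for next time
  prevFrame.loadPixels();
}
```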

Evaluation: I really enjoyed working on my final project! I learnt a lot by coding and experimenting with different shaders and sounds with TidalCycles. I am very satisfied with my final project because it fulfils my original vision of what I wanted it to be. It was very difficult for me to port the first shader from Shadertoy, because it required version 300 of GLSL, but after fixing all the errors I got the hang of it. I also found it difficult to use frame differencing to trigger each section of the screen, but after receiving help from my Professor, I was able to complete this. Overall, I believe my final project was successful, and I look forward to sharing my work with the rest of my class!

Here’s a recording of me ‘performing’ my final project in my room:

https://youtu.be/gssK75pVI5Y

Final Progress

This week Suzan and I worked on putting everything together for our final project. It was super interesting for us both to work on this project given how far apart we are from each other. Our idea seemed simple and interesting enough, and since we both had a clear picture of what we wanted to achieve and of the technicalities involved, it should not have taken us much time to complete; but that rarely matters when coding. Suzan and I would each work on the small parts we had assigned ourselves and then compare notes on what worked, what didn't, how we would fix the issues, and what to change.

This required a lot of random, unprompted Zoom calls to pair program, debug, and get things working. I worked on the user's eye tracking (which we used to have the shader follow the user as they moved) and on handOverFace detection, while Suzan worked hard on getting the shaders to run in sync and to interact with the Arduino. The most challenging aspects were the parts we expected to be the simplest to implement: for reasons unknown to us, the screenshots we took were not being displayed the way we expected, nor were the shaders responding adequately to the user's movement. After multiple back-and-forth calls and pushes to GitHub, we got the project working mostly the way we wanted it to.
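For context, one simple way to flag a hand near the face with PoseNet keypoints is to compare the wrist positions against the nose. This is only an illustrative sketch with an assumed radius and confidence cutoff, not necessarily the detection we shipped.

```javascript
// Returns true if either wrist keypoint is within `radius` pixels of the nose.
function handOverFace(pose, radius = 80) {          // radius chosen by eye (assumed)
  const nose = pose.keypoints.find(k => k.part === 'nose');
  const wrists = pose.keypoints.filter(
    k => k.part === 'leftWrist' || k.part === 'rightWrist'
  );
  return wrists.some(w =>
    w.score > 0.3 &&                                // ignore low-confidence keypoints
    dist(w.position.x, w.position.y, nose.position.x, nose.position.y) < radius
  );
}
```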

Despite the challenges we had it was very rewarding in the end to see our goal be achieved and our project come to life. The collaboration was the most fun and interesting part because I felt I was able to learn a lot from working with Suzan and appreciate her impact on the project from the ideation and brainstorming process all the way to the problem solving and implementation. We look forward to seeing how actual users interact and respond to our project and learning as much as we can about human behavior and interaction and most importantly how much the pandemic has altered our behaviors when it comes to touching our faces and so on.

When in doubt, move your head around… A final SBM Project

This final project has been an interesting process. Through conversations with friends I gained new perspectives on the meaning of Zoom in our life and what we can make of this Zoom University experience. Through much experimentation with different shaders, I now feel more comfortable with P5.js and navigating too many subfolders in the terminal.

The Question(s)

My final project revolves around the question of how we connect (or don't connect) through Zoom, which has now become a central element of our lives, with classes and social gatherings taking place virtually. Do we feel a connection to the people we see on our screen? Why? Why not? How do we connect? We see each other, though only our faces, constantly. Our body is somewhat reduced to whatever we show/see in our little rectangle. So does the body play a central role in our connection? If so, which parts?

The process • Part I • Part II

For me, part of this final was an artistic way of exploring present realities, trying to make sense of them, and trying to somewhat visually transmit where I see the body in Zoom meetings inspired by the thoughts shared by the friends I interviewed as well as my own thoughts.

I asked several friends to share their views and experiences with “Zoom connections”, specifically asking what role they attribute to the(ir) body in those connections. There were thoughts on keeping the camera on to be “physically present” though “mentally absent”, or on keeping the camera on to encourage themselves to stay engaged. There were thoughts on feeling watched or watching others, since in a Zoom meeting you never know who is looking at you or at somebody else. There were thoughts on people being very still, almost like a picture of themselves, and thoughts on people constantly moving around, perhaps changing space. These thoughts inspired me to create this little video, which is all about being watched/watching as bodies are still or in movement. I initially wanted to overlay it with voice recordings I have from the different interviews but decided to stay with the somewhat discomforting, not-completely-silent silence coming from the different recordings I put together – a silence I still feel weird about when nobody speaks in a Zoom meeting but everyone stares at their screen.

Part I – Many Bodies. Many Eyes.

Besides this more research-based outcome, I also wanted to create a simple, fun, more interactive outcome, which resulted in an idea for a real-time Zoom intervention. If you have some synchronous classes left, give it a try 🙂 For this, I edited the delay shader we looked at in class to have many more layers. I capture the browser window in OBS, start a virtual camera from OBS, and use that camera as my Zoom camera. As long as I don't move, everything is fine. But once I move, it first looks like unstable Internet, and if I move faster, like many versions of myself. Unfortunately, my laptop reached its limits with this experiment and the video output in Zoom was much slower and not as clean as the one in the browser or in OBS, but maybe that makes the end-of-semester mood even more realistic.
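The shader itself isn't reproduced here, but the layered-delay idea can be sketched in plain p5.js by keeping a buffer of past webcam frames and drawing several of them with increasing transparency. The layer count and spacing are assumptions, and `video` is an assumed p5 capture.

```javascript
const LAYERS = 8;         // number of delayed copies (assumed)
const SPACING = 10;       // frames between copies (assumed)
let frames = [];          // buffer of recent webcam frames

function draw() {
  frames.push(video.get());                        // store the newest frame
  if (frames.length > LAYERS * SPACING) frames.shift();
  background(0);
  for (let i = LAYERS - 1; i >= 0; i--) {          // oldest first, live frame on top
    const f = frames[frames.length - 1 - i * SPACING];
    if (f) {
      tint(255, 255 / (i + 1));                    // older copies fade out
      image(f, 0, 0, width, height);
    }
  }
  noTint();
}
```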

Part II (Demo) – Weirdly moving my head here – this is where the project title comes from. Also, don't focus too much on what I say 🙂
Here is a “behind the scenes” so that you can employ this effect in your next Zoom meeting yourself.

Week 11: Final Progress

This week, I focused on how I would implement my final project. I initially planned to use the GridEye to detect my location in my room. My Professor then suggested that I use BodyPix and blob detection; however, this made my computer run very slowly. After more brainstorming, I decided to use frame differencing instead. I will split my room into four sections, and each section will have its own shader and corresponding sound that are triggered when I move within it. This week, I focused on choosing the appropriate shaders and making sure they work in Atom. I got several errors, but after spending hours on Google and asking my Professor for help, I was able to fix them and successfully run the shaders in Atom. Moving forward, I will focus on creating the sounds using TidalCycles, then using frame differencing to tie everything together. I look forward to seeing how everything turns out in the end!

Week 11: Final Progress

Edit: During class, Aaron and I talked about the idea of bringing the shader effect live into a Zoom meeting, which would place the piece in a context others don't expect: surprising, maybe prompting reflection, but also allowing me to amplify my own mental state through the visual effect.

• • •

This week, I asked more friends for their thoughts on ‘connecting through Zoom’ and gathered some interesting ideas on the role of our body in making virtual connections and the role of the mental/physical space we find ourselves in while trying to connect virtually.

One person talked about how they considered the interpersonal relation and the content of their conversation more relevant to connecting than their physicality, which is an important perspective since many facilitators prefer cameras on as a first step to “connecting”.

Another person talked about the difficulties of entering different mental spaces while remaining in the same physical space, which sometimes also made it difficult to connect virtually. This strays a bit from the role of the physical body in connections, though it relates to our mind.

My piece will be interactive in the process but less so in the final outcome as it is more a way of artistic research and presentation. Those who watch the final video will not be able to interact with the shader but are invited to reflect on their own experience prompted by the different ideas voiced in the piece.

I am currently editing the shader I want to use and mainly exploring how my body can represent different ideas voiced. I will record and then bring everything together using FinalCut.

Here is a sketch of what I envision the final video to look like:

Very rough sketch of my visual vision.

Week 11: Final Progress w/Omar

For this week, Omar and I met up to work on the implementation of our project. We put a lot of thought into brainstorming which visuals and shaders we'll use in order to get the best output. As we outlined our idea, we kept getting more ideas for slightly different applications of shaders, since we wanted the project to be even more fun and interactive.

The idea we decided to go ahead with and implement is inspired by the game ‘Just Dance’, which we both played when we were younger and still play today. The game is also well known and nearly universal: almost everyone in our generation has played it or knows about it.

This is an image for reference from the 2017 version of Just Dance.

Since we currently live in a time of restrictions on how many people can be in the same room at once and on how much they can interact, we are working on an installation that requires just one person in the frame at a time.

To start off, an image that we initialized is presented to the very first person, which they reenact/imitate. Once they do it correctly and follow its silhouette, they are given a few seconds to stand in a position that expresses their thoughts, their ideas, or a move they find personal. When they do so and stand still, the system takes a screenshot of the individual's pose and saves this image for the next person. The next person who steps into the frame follows the same steps: stop, pose like the image, record their own individualized pose, and so on.

When working on the implementation, we found that using PoseNet and ml5.neuralNetwork() would be the best way to record certain poses and check for them later. Looking into machine learning was challenging but also really exciting. It took us a while to understand the key concepts, but once we did it was fairly easy to apply them to our project and plan out how each part of the project would be coded. After setting things up, we took turns doing a T-pose on camera as the code collected training data. The data was put into an array and saved as a .json file, which we later normalized (to rescale the large x and y values) and used to train the model. Although we have a clear path laid out for how we're going to move forward, we ran into some problems here. In the p5 editor, the Training Performance window is blank, so we cannot track the training of our model. We also get an error and the model never downloads, an error we spent hours trying to troubleshoot and that doesn't seem to make much sense.
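For anyone hitting the same wall, the Training Performance window is tied to the network's debug option. Below is a hedged sketch of the setup we were aiming for; the file name, epoch count, and output count are assumptions.

```javascript
let brain;

function setup() {
  createCanvas(640, 480);
  const options = {
    inputs: 34,                 // 17 PoseNet keypoints x (x, y)
    outputs: 2,                 // number of pose labels (assumed)
    task: 'classification',
    debug: true,                // this flag opens the Training Performance graph
  };
  brain = ml5.neuralNetwork(options);
  brain.loadData('poseData.json', dataLoaded);   // the .json saved during collection
}

function dataLoaded() {
  brain.normalizeData();                         // rescale the large x & y values
  brain.train({ epochs: 50 }, finishedTraining);
}

function finishedTraining() {
  brain.save();                                  // downloads model.json + weights
}
```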

Screenshot of the error displayed in the console below the p5 sketch.
Screenshot of the error displayed in the console in the developers tools.

Week 11: Final Progress

For this week, Robert and I dived deeper into how we will proceed with this final.

After thinking and discussing together, we reached some conclusions and made some assumptions. We decided to use around 3 shaders of different styles to provide a timeframe and a purpose for the fake event. For this to work, we had to amplify the (perhaps social) anxiety and queasiness of the audience member. The idea is to have 3 shaders and to frame the goal of the event as identifying which shader they like the most, while the shaders actually target their feelings of anxiety in different ways. We also hope that having someone standing and watching them during this interactive installation will add to their nervousness and anxiousness, and that these feelings will feed their nervous habits, one of which is touching their face.

This week Robert’s goal was to get the posenet to detect face movements, and capture the screen, while mine was to find shaders to work with.

I split the three shaders into categories:

  1. One that is a bit calm, so that the audience may feel underwhelmed and want to overcompensate
  2. Shocking – maybe queasy and disturbing
  3. Strange, difficult to understand.

For number 1, I decided on something we covered in class. The sine split was a great example: you can clearly see something happening, but having your body watched to see how you react to it is anxiety-inducing.

For number 2, I knew the moment I saw the example. This DNA example, which also uses PoseNet, stirs a lot of unease and queasiness in me that I hope it does in most audience members too. It also resembles germs, DNA, and bacteria, foreshadowing our key message about the spread of germs.

Lastly, the more complicated one, true to its title. For this one I chose a shader on Shadertoy that I found really interesting. It seems very complex and uses sound as well. The issue is that it isn't the easiest to figure out how to translate, and it still hasn't been fully integrated, but I hope to achieve it soon. Here is the link to the Shadertoy:

https://www.shadertoy.com/view/3lXBDf

We would be happy to hear some thoughts and opinions on our choice of strategy and visuals! Thanks!

Week 10: Progress

I started preparing for my final project by planning which parts of my room I want to include in it. My room doesn’t have much in it: it consists of my bed area, my ‘kitchen’, my work area and my cosmetics section. Since these sections are the four main areas in my room, I have decided to use four different shaders and four different sounds that will be triggered depending on what part of the room I am in. I want the shaders to be the mask kind, where the shader appears inside my body. Each shader will represent the way I feel standing in each part of my room, so the shader will be on my body while the rest of the room remains visible. For my bed area, I want to use a very relaxing shader and very calming sounds to show how relaxed I feel. For my kitchen, I want the shader to be more vibrant, and I want energetic sounds. For my work area, I want the shader to be duller, and the music to be tense, to demonstrate how I often feel exhausted in this area, especially during the pandemic. For my cosmetics section, I want the shader to be more creative. I am still unsure what method I will use to achieve my idea, but I am leaning towards using the GridEye to detect where I am in the room. I look forward to working more on my final project and implementing it in my room.