Phygical Expression

The Concept

The idea behind our final project is inspired by the game ‘Just Dance’, which Omar and I both played when we were younger and still play today. It is well known and nearly universal: almost everyone in our generation has played it or at least knows about it.

This is an image, for reference, of an older version of Just Dance, where players are supposed to make the same dance moves as the ones displayed in front of them on the screen.

Since we currently live in a time of restrictions on how many people can be in the same room at once and on how much they can interact, the installation we made requires just one person in the frame at a time.

The first person to enter the frame has a snapshot taken of their first pose; this image is then displayed on the screen for the people who come after, who reenact/imitate it. Once they do it correctly, the screen turns green for a second and a ‘success’ sound plays. They are then given a few seconds to pose in a way that expresses their thoughts, ideas, or a move they find personal. When the timer reaches 0, the system takes a screenshot of their pose and displays this image for the next person. The next person to enter the frame does the same: stop, pose like the previous person, record their own individualized pose, and so on.
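The loop described above can be sketched as a tiny state machine (a minimal sketch with hypothetical names; the actual project wires these transitions into p5.js callbacks):

```javascript
// Sketch of the installation's pose-chain flow (hypothetical helper,
// not the actual project code). Each visitor first matches the previous
// pose, then holds a new pose until the countdown reaches zero.
const STATES = { MATCHING: "matching", POSING: "posing", CAPTURED: "captured" };

function nextState(state, event) {
  // "poseMatched" fires when the classifier confirms the imitation,
  // "timerZero" fires when the posing countdown ends,
  // "newVisitor" resets the loop for the next person in frame.
  if (state === STATES.MATCHING && event === "poseMatched") return STATES.POSING;
  if (state === STATES.POSING && event === "timerZero") return STATES.CAPTURED;
  if (state === STATES.CAPTURED && event === "newVisitor") return STATES.MATCHING;
  return state; // ignore events that don't apply in the current state
}
```

The success sound and green flash would be triggered on the MATCHING → POSING transition, and the screenshot on POSING → CAPTURED.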

The Final Interactive Installation


Here are some of the poses we collected, put together in a collage.

The Implementation

When working on the implementation, we found that using PoseNet and ml5.neuralNetwork() would be the best way to record poses and check for them later. To implement it, we looked into machine learning and watched videos online, which was challenging but also really exciting. It took us a while to understand the key concepts, but once we did, it was fairly easy to apply them to our project and plan out how each part would be coded. After setting things up, we took turns holding a standstill pose on camera while the code collected training data. The data was put into an array and saved as a .json file, which we later normalized (scaling down the large x and y values) and used to train the model. At this point, the code runs normally and users can go in.
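The normalization step can be sketched like this (a minimal stand-in with a hypothetical helper name; ml5.neuralNetwork can also do this for you via normalizeData()):

```javascript
// Scale raw PoseNet keypoint coordinates into the 0-1 range so the
// neural network trains on comparable values regardless of camera size.
// (Hypothetical helper illustrating the normalization described above.)
function normalizeKeypoints(keypoints, width, height) {
  const out = [];
  for (const kp of keypoints) {
    out.push(kp.x / width);  // x in [0, 1]
    out.push(kp.y / height); // y in [0, 1]
  }
  return out;
}
```

Each flattened array of normalized values becomes one training sample, labeled with the pose it belongs to.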

The Code

Below are the most important pieces of code needed to explain how some of the more complex functions are actually implemented.

The first function (keyPressed) is what we use to record the data, train the model, and save it. The second is our setup function, which initializes the canvas, loads the prerecorded pose, and loads the sound effects.

These two functions do most of the work. The first (classifyPose) saves the different data points of a pose so the system can compare them with other poses and check whether they are correct. The function after it (gotResult) actually receives the result and checks whether the confidence (the accuracy of the pose match) is at least 98%, leaving a slight margin of error. If that's the case, most of the boolean variables flip and are used elsewhere, the success sound plays, and the rest of the sequence follows.
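The confidence check inside gotResult boils down to something like this (a sketch only; the helper name and result shape follow ml5's classification output, which is sorted by confidence):

```javascript
// Returns true when the classifier's top result matches the target pose
// with at least 98% confidence, leaving a small margin of error.
// (Hypothetical helper sketching the check described above.)
const CONFIDENCE_THRESHOLD = 0.98;

function poseMatches(results, targetLabel) {
  if (!results || results.length === 0) return false;
  const top = results[0]; // ml5 returns results sorted by confidence
  return top.label === targetLabel && top.confidence >= CONFIDENCE_THRESHOLD;
}
```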

Pay attention to what you touch! – Robert and Suzan


The idea of this project is to set up an object that says ‘move me,’ inviting people to touch it. Moving the object triggers three interactive visuals that change every 20 seconds. While the audience is distracted by the visuals, trying to see what exactly each one does, we are secretly testing whether they touch their face in the process, ultimately transferring the germs they picked up onto their face. At the end of the visuals, images of them touching their face are shown, exposing them and raising awareness of how they may be exposing themselves to unwanted germs.


As we progressed, we went back and forth on how many shaders, and which types and styles, we wanted to present. We finally concluded that we would go for three shaders, each signifying something and trying to achieve something.

  1. Simple. We hope a simple one may make the audience feel underwhelmed and want to overcompensate with movement.
  2. Shocking. One to make the audience feel observed, overwhelmed, and nervous.
  3. Strange. Difficult to understand, a bit queasy.

All these are in hopes that the audience may feel a bit awkward and fall into some nervous tics such as touching their face or covering their mouths.

1. Simple
2. Shocking
3. Strange

The second and third were difficult to accomplish. The second was remade to make the eyes track someone instead of opening and closing randomly, and the third took long because of its very long and complicated code. But eventually, we got all three working in one sketch, separated by constant intervals.
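Switching shaders at constant intervals amounts to mapping elapsed time to an index (a minimal sketch with a hypothetical helper; the real code swaps shader programs in draw()):

```javascript
// Given elapsed milliseconds and an interval length, pick which of the
// three shaders (0 = simple, 1 = shocking, 2 = strange) is active.
// (Hypothetical helper illustrating the constant-interval switching.)
function activeShaderIndex(elapsedMs, intervalMs, shaderCount = 3) {
  return Math.floor(elapsedMs / intervalMs) % shaderCount;
}
```

In p5.js, elapsedMs would come from millis() measured since the object was first moved.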

The object was created out of paper, in the shape of a pyramid. This shape was chosen because it was the easiest to move and hold, with an easy grab, while remaining the sturdiest option to build out of paper.

We also decided to add an interval shader between runs, so that the laptop is still displaying something intriguing when it hasn’t been triggered yet.

screensaver shader


After testing with a few people, we came to some interesting conclusions. Although most people did end up touching their face, some did not. These tended to be the people I didn't know at all. I realised that although I expected awkwardness to bring out these tics, people who felt too awkward tended to be very stiff, not move at all, and ‘pass the test.’ They found it more difficult to interact with the shader. The more comfortable the person was around me (usually my closer acquaintances), the more they moved, enjoyed the shaders, and touched their faces.

Because of privacy concerns, I will only show two of the screenshots taken during user testing.


Despite it not always succeeding, and the social anxiety I got from asking people to try my project, it was very rewarding to see people understand their error and to see the shock on their faces when they realised what they had done. Overall, we are really happy with our idea and are proud to have been able to implement it.

Final Project Demonstration

My room, my haven

We have all developed personal relationships with our rooms during the COVID-19 pandemic. During quarantine periods, we spent most, if not all, of our time in our rooms, thinking, moving and feeling different things in our personal space. Personally, my room became a safe haven for me, because I knew that no matter what happened, whether I was fearful of the pandemic, or experiencing social anxiety, I could always go back to my room and keep myself company. As I spent more time in my room, my mind started to divide it up into parts – holons. Each section of my room evoked different emotions within me. For my final project, I wanted to capture this feeling, as I believe that we can all relate to this, especially because of the pandemic.

What: ‘My room, my haven’ is a project that highlights how each part of my room makes me feel. It combines visuals and sounds together to make one interactive piece. I divided my room into four main sections: bedroom section, kitchen, cosmetics section and workspace. Each section has an accompanying visual and sound that is triggered once I move within the section. The sounds become louder if I start to move more in each section.

How: I used frame differencing, shaders and TidalCycles to implement this project. I browsed Shadertoy and chose four shaders that I thought represented each section of my room nicely, and made appropriate adjustments to them in Atom. After this, I combined the four shaders into one shader file and used ‘if else’ statements to place each shader on an appropriate part of the screen. I also included an extra ‘else’ statement at the end to make sure that the unused parts of the screen would just show the webcam. Next, I used frame differencing in the sketch.js file to detect pixel movement in the specific locations where the shaders were placed. If the movement in a section is greater than a threshold chosen by the user, a boolean is set to true and sent as a uniform value to the fragment shader file, where it turns the corresponding shader on. I also used setTimeout() to set a one-second timer, so a shader stays on for only one second unless there is more movement in its section. Finally, I used OSC to send values from sketch.js to the TidalCycles file containing the sound, so that I could control the sound based on my movement.
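The per-section frame-differencing trigger can be sketched as follows (a hypothetical helper; the real sketch reads webcam pixels from p5's video.pixels and sends the result to the shader with setUniform):

```javascript
// Sum absolute brightness differences between two frames inside one
// rectangular section of the screen; if the total exceeds the user's
// threshold, that section's shader should switch on.
// Frames are flat grayscale arrays, one value per pixel, row-major.
// (Sketch of the frame-differencing check described above.)
function sectionTriggered(prev, curr, frameWidth, region, threshold) {
  let diff = 0;
  for (let y = region.y; y < region.y + region.h; y++) {
    for (let x = region.x; x < region.x + region.w; x++) {
      const i = y * frameWidth + x;
      diff += Math.abs(curr[i] - prev[i]);
    }
  }
  return diff > threshold; // this boolean becomes the shader uniform
}
```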

Evaluation: I really enjoyed working on my final project! I learnt a lot by coding and experimenting with different shaders and sounds with TidalCycles. I am very satisfied with my final project because it fulfils my original vision of what I wanted it to be. It was very difficult for me to port the first shader from Shadertoy because it required GLSL version 300, but after fixing all those errors, I got the hang of it. The other part I found difficult was using frame differencing to trigger each section of the screen, but after receiving help from my Professor, I was able to complete it. Overall, I believe my final project was successful, and I look forward to sharing my work with the rest of my class!

Here’s a recording of me ‘performing’ my final project in my room:

Final Progress

This week Suzan and I worked on getting everything together for our final project. It was super interesting for us both to work on this project because of how far each of us is from the other. Our idea seemed simple and interesting enough to do, and should not have taken us much time to complete since we both had a clear idea of what we wanted to achieve and the technicalities needed to complete it, but that does not actually matter much when coding. Suzan and I would each work on the small parts we had assigned ourselves and then compare notes on what worked, what didn't, how we would fix the issues, and what to change.

This required a lot of random and unprompted Zoom calls to pair program, debug, and get things working. I worked on the user's eye tracking (which we used to have the shader follow the user as they moved) and on handOverFace detection, while Suzan worked hard on getting the shaders to all work in sync and interact with the Arduino. The most challenging aspects were the parts we expected to be the simplest and easiest to implement. The image screenshots we took, for reasons unknown to us, were not being displayed the way we had expected, nor were the shaders responding adequately to the user's movement. After multiple back-and-forth calls and pushes to GitHub, we were able to get the project to work mostly the way we wanted it to.

Despite the challenges we had, it was very rewarding in the end to see our goal achieved and our project come to life. The collaboration was the most fun and interesting part because I was able to learn a lot from working with Suzan and appreciate her impact on the project, from the ideation and brainstorming process all the way to the problem solving and implementation. We look forward to seeing how actual users interact and respond to our project, and to learning as much as we can about human behavior and interaction, most importantly how much the pandemic has altered our behaviors when it comes to touching our faces and so on.

When in doubt, move your head around… A final SBM Project

This final project has been an interesting process. Through conversations with friends I gained new perspectives on the meaning of Zoom in our life and what we can make of this Zoom University experience. Through much experimentation with different shaders, I now feel more comfortable with P5.js and navigating too many subfolders in the terminal.

The Question(s)

My final project revolves around the question of how we connect (or don't connect) through Zoom, which has now become a central element of our lives, with classes and social gatherings taking place virtually. Do we feel a connection to the people we see on our screen? Why? Why not? How do we connect? We see each other, though only our faces, constantly. Our body is somewhat reduced to whatever we show/see in our little rectangle. So does the body play a central role in our connection? If so, which parts?

The process • Part I • Part II

For me, part of this final was an artistic way of exploring present realities, trying to make sense of them, and trying to somewhat visually transmit where I see the body in Zoom meetings, inspired by the thoughts shared by the friends I interviewed as well as my own.

I asked several friends to share their views and experiences with “Zoom connections”, specifically asking what role they attribute to the(ir) body in those connections. There were thoughts on keeping the camera on to be “physically present” though “mentally absent”, or keeping the camera on to encourage themselves to stay engaged. There were thoughts on feeling watched or watching others, as in a Zoom meeting you never know who is looking at you or at somebody else. There were thoughts on people being very still, almost like a picture of themselves, and thoughts on people constantly moving around, perhaps changing space. These thoughts inspired me to create this little video, which is all about being watched/watching as bodies are still or in movement. I initially wanted to overlay it with voice recordings from the different interviews but decided to stay with the somewhat uncomforting, not completely silent silence coming from the different recordings I put together – a silence I still feel weird about when nobody speaks in a Zoom meeting but everyone stares at their screen.

Part I – Many Bodies. Many Eyes.

Besides this more research-based outcome, I also wanted to create a simple, fun, more interactive outcome, which resulted in an idea for a real-time Zoom intervention. If you have some synchronous classes left, give it a try 🙂 For this, I edited the delay shader we looked at in class to have many more layers. I capture the browser window in OBS, start a virtual camera from OBS, and use that camera as my Zoom camera. As long as I don't move, everything is fine. But once I move, it first looks like unstable Internet, and if I move faster, like many versions of myself. Unfortunately, my laptop reached its limits with this experimentation, and the video output in Zoom was much slower and not as clean as the one in the browser or OBS, but maybe that makes this end-of-semester mood even more realistic.
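Layering many delayed frames amounts to blending the current frame with progressively older, fainter ones. A minimal per-pixel sketch of that blend (a hypothetical helper; the actual effect runs per-fragment on the GPU inside the edited delay shader, sampling previously rendered textures):

```javascript
// Blend one pixel across a buffer of past frames, oldest faintest.
// frames[0] is the current frame, frames[1] the previous, and so on.
// (Hypothetical CPU-side illustration of the layered delay effect.)
function blendDelayedPixel(frames, index) {
  let sum = 0;
  let weightTotal = 0;
  for (let k = 0; k < frames.length; k++) {
    const w = 1 / (k + 1); // newer frames weigh more than older ones
    sum += frames[k][index] * w;
    weightTotal += w;
  }
  return sum / weightTotal; // weighted average stays in pixel range
}
```

While the camera image is still, every layer holds the same pixels and the blend is invisible; once it moves, the older layers trail behind as ghost copies.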

Part II (Demo) – Weirdly moving my head here – this is where the project title comes from. Also, don't focus too much on what I say 🙂
Here is a “behind the scenes” so that you can employ this effect in your next Zoom meeting yourself.

Week 11: Final Progress

This week, I focused on how I would implement my final project. I initially planned to use GridEye to detect my location in my room. Then Professor suggested that I use BodyPix and blob detection; however, this would make my computer run very slowly. After more brainstorming, I decided to use frame differencing instead to implement my idea. I would split my room into four sections, and each section would have its own shader and a corresponding sound triggered when I move within it. This week, I focused on choosing the appropriate shaders and making sure they work in Atom. I got several errors, but after spending hours on Google and asking Professor for help, I was able to fix them and successfully implement the shaders in Atom. Moving forward, I will focus on creating the sounds using TidalCycles, then using frame differencing to tie everything together. I look forward to seeing how everything turns out in the end!