Week 6: GridEye SparkFun

For this week’s exercise I experimented a bit more with voices. The recordings I used are older and their content is unrelated, but I would like to start a new project and actually ask friends to send voice recordings talking about a topic relevant to this time (idea saved for later 🙂 ).

With the GridEye SparkFun project, my main goal was rather to experiment with positioning sound in space. I started out by assigning different parameters of one Tidal line to squares in close proximity, but realized it would be more interesting to space them out over the area, so that any movement in the space triggers different parameters and the result becomes less predictable for an audience uninvolved in the setup.
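As an illustration of that spread-out mapping, here is a minimal sketch, assuming the browser sketch receives the 64 GridEye readings as a flat array and can forward values to Tidal; the block indices, parameter names, and the sendToTidal helper are placeholders for illustration, not the actual code:

```javascript
// Hypothetical mapping: parameters of a single Tidal line assigned to
// blocks that sit far apart on the 8x8 grid (indices 0..63).
const mapping = [
  { block: 0,  param: 'speed',  low: 0.5, high: 2.0 },
  { block: 31, param: 'cutoff', low: 200, high: 4000 },
  { block: 63, param: 'gain',   low: 0.6, high: 1.2 },
];

function onFrame(pixels, threshold) {
  // pixels: array of 64 temperature readings from the GridEye
  for (const { block, param, low, high } of mapping) {
    if (pixels[block] > threshold) {
      // the warmer the block, the closer the value gets to `high`
      const amount = Math.min((pixels[block] - threshold) / 5, 1);
      sendToTidal(param, low + amount * (high - low)); // placeholder send
    }
  }
}
```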

Experimenting on different days, with German fall weather being very moody, I also realized how important it is to adjust the mapping and threshold to the temperature in the space, something that would matter a lot when setting up an actual installation.
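One way to make the threshold less dependent on the day’s weather, sketched under the assumption that the ambient room temperature can be estimated from the coolest pixels of each frame (the function names and the offset value are mine, not from the actual sketch):

```javascript
// Estimate the ambient temperature from the coolest pixels of the frame
// and trigger on the difference, rather than on a fixed absolute value.
function adaptiveThreshold(pixels, offset = 2.5) {
  const sorted = [...pixels].sort((a, b) => a - b);
  const ambient = sorted.slice(0, 16).reduce((sum, t) => sum + t, 0) / 16;
  return ambient + offset; // a block counts as "warm" above this
}

function isTriggered(pixels, block) {
  return pixels[block] > adaptiveThreshold(pixels);
}
```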

Here are some stills from my setup. Instead of using my whole body, I decided to use my hands as miniature versions of human bodies moving around in space.

And a video of the browser window here.

Week 6: Grid Eye Thermal Camera

This has probably been one of the most interesting assignments for me so far, with the implementation of visuals and the way the colors flow and change as the sound emerges.

With thermal cameras now in common use all around us due to the coronavirus situation, the possibilities seem endless in terms of taking people’s temperatures and introducing interactive experiences.

Initially, I tried changing the colors so that the blocks would have different colors and shade transitions; however, the program would not run in the browser when I did that. I tried to understand a few lines of the code to see if I could figure it out, but I couldn’t.

I had the chance to explore a huge number of sounds for this assignment and gathered some that would fit together and sound good.

This is my experimental use of the thermal sensor/camera.

The video was a bit difficult to make, as I tried to trigger the different sound areas separately while physically moving. It was easier for me to just use my hand and stay away from the sensor for the individual sounds, and then use my whole body.

My main focuses here were the areas close to block 0 and block 56. The goal was to present a landscape where the area close to block 56 is that of a childcare service or kids playing with pebbles and toys, whereas the area closer to block 0 is a party for adults.

The right side of the screen was not my focus, and I noticed after finishing the recording that I had set up block 27 with a sound instead of block 7. Block 63 was a back-to-reality type of space, where a timer seems about to go off, maybe for the party and maybe for the kids.

I also moved in a way that triggered the sounds of the different areas combined, as if it were a bird’s-eye view of everything going on at the same time, showcasing all of the sounds together as well.

Week 6: Response (What is Somatics?)

The author starts by distinguishing how a person’s body is perceived: a human body from the outside, in third person, and a human soma from within, in first person. The idea of the soma is very interesting, as I had never heard about it before, and it reminded me of random shows that I watched as a child. A statement that helped me understand the idea was when he clarified that “the mode of viewpoint is different: it is immediate proprioception – a sensory mode that provides unique data.”

The way he breaks down the third-person and first-person views of the body makes them simple and easy to understand. The first-person view is factual and more grounded, applying specifically to the individual; the third-person view, however, requires following sets of principles and making multiple observations to ensure the accuracy of the results. For example, he talks about how psychological data is immediately factual and unified in the first person, whereas in the third person it needs to be analyzed and interpreted in order to reach a factual conclusion.

Steps to understand somatics:

  1. Understand that somas are not bodies
  2. Recognize that self-awareness is a distinction of the human soma

In regard to the first step, the author discusses how the sensory-motor system leads to a unique way of learning. He states that self-regulation is reached through the unity of sensing with acting and acting with sensing. According to him, such self-regulation is vital for human survival, as the soma’s internal process of self-regulation ensures the existence of the external body structure.

As for the second step, he talks about how self-sensing and self-moving are interlocked in a way that makes up the core of somatic self-organization and self-adaptation. This unified experience, observed from the first-person point of view, makes the soma distinct from the body.

Two of the prime somatic functions are awareness and consciousness, as seen in the previous paragraphs. Consciousness is a relative, voluntary function, based on a person’s interests and the skills they wish to develop, and it cannot perform beyond its self-imposed limits. Awareness, however, can be focused and is the only way for the soma to isolate perceptive events. It also works to isolate “new sensory-motor phenomena in order to learn to recognize and control them.”

He concludes the chapter by looking at the relation between somatic learning and sensory-motor amnesia. Somatic learning broadens our range of action and perception, and hence increases voluntary consciousness and adaptation to the environment. He sees this as either a response to amnesia or a normal day-to-day practice to avoid the effects of stress.

The optimal human state is reached with somatic freedom: not only does the soma, seen internally from the first-person perspective, progress without distortion, but from the third-person perspective the body also shows maximal efficiency and minimal entropy, meaning decline or collapse.

Week Sheish (6): Dancing to the heat

So this assignment was super fun and easy to do, and the setup was very simple. As usual, I tried my own Tidal scripts, but I really loved the vibe of the one Aaron made last week. The ‘mix’ in it was very funky, fresh, and sort of energizing, so I decided to use that Tidal script. I edited it and added a slightly more chaotic sound to it as well.

The goal of this exercise was to communicate with Tidal and use the GridEye to make sounds with our bodies, so obviously I danced :). While testing the setup, I had an idea: why not have the music get louder the faster I move, as well as when the temperature increases? I edited the sketch and added this change. The heat check was very simple: I kept an extra array with the previous temperatures, and whenever the function ran (i.e. whenever a new message arrived from the server, relayed from the Teensy) it would compare the values in each grid cell and, if the temperature had increased, multiply the average and temperature values by a very small constant each time. This new value would be sent to the Tidal script and change the gain values I referred to with cf0.
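Roughly what that heat check looks like as code; this is a sketch reconstructed from the description above, so the variable names, the scaling constant, and the send call are assumptions rather than the actual sketch:

```javascript
// Previous GridEye frame, used to detect that the space is heating up.
let prevTemps = new Array(64).fill(0);
let boost = 1.0;      // grows a little every time a cell gets warmer
const STEP = 1.002;   // the "very small constant"

// Called whenever the server relays a new frame from the Teensy.
function onMessage(temps) {
  for (let i = 0; i < temps.length; i++) {
    if (temps[i] > prevTemps[i]) {
      boost *= STEP;  // temperature increased in this cell
    }
  }
  prevTemps = [...temps]; // keep this frame for the next comparison

  const avg = temps.reduce((sum, t) => sum + t, 0) / temps.length;
  // The boosted average goes to the Tidal script, where it drives the
  // gain values referred to as cf0.
  sendToTidal('cf0', avg * boost); // placeholder for the actual send
}
```

Without clamping or resetting the boost, the value sent to Tidal can drift out of the expected gain range, which is essentially the distortion described below.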

The more I danced, the hotter I would get, making it a feedback loop. The “faster I move” part didn’t work as well as I had hoped, but the result, even though different, was still fun! By mistakenly setting the previous temps on the wrong line, I ended up making the value sent to Tidal go out of range, which led to the sample being distorted by the invalid gain at certain points in time. I really liked the oddity of the sound and decided to keep it.

Week 6: What is Somatics?

Thomas Hanna defines Somatics as the study of the ‘soma’, which is the human body perceived from a first-person perspective. The soma is different from the body not because the subject viewing it is different, but because the lens through which it is viewed varies. Hanna mentions how the first-person viewpoint, as opposed to the third-person viewpoint, is immediately factual and does not need to be proven correct through ‘universal laws’. Hanna uses the study of substances such as rocks or minerals as an example: when scientists study these, the science is completely backed up by the results they achieve and all the calculations they do. However, when studying human beings, there is the third-person view that the scientists see, as well as the first-person view that the person being studied has.

Understanding Somatics

Hanna discusses how, in order to fully understand somatics, we need to first recognise that somas are not bodies. Furthermore, somas are not only self-aware, but are actively engaged in the process of self-regulating and acting upon themselves. For example, human beings are never passive; they are always sensing and moving, even when being observed. This links to the idea of thinking-moving-feeling that we have discussed previously, where viewers of interactive art become participants because they begin to act upon what they see; even the people who choose not to participate in the art are still acting, and are not passive.

Consciousness and Awareness

One interesting topic that Hanna brings to light is the idea that consciousness is not a fixed lens, but rather a learned skill that springs into action once we encounter external stimuli. I was very intrigued by this idea, as I have always thought of consciousness as something that all human beings had in similar amounts (though before reading this I never thought of consciousness as something that could be counted).

Week 6: GridEye

This week’s assignment was very enjoyable for me, especially because I was able to fix the errors I had during the assignment by myself (I think I might be getting the hang of everything!). This task was very similar to frame differencing, except this time there were 64 boxes that I could trigger. In Atom, however, I was only able to trigger 7 sounds, because using more than that created a glitching sound effect that wasn’t very pleasing to the ear.

The most difficult part of this assignment for me was trying to trigger the individual sounds, because moving just slightly caused several boxes to be triggered at the same time. To tackle this, I used mostly my arms to trigger the sounds. However, I was not able to screen-record my movements because my laptop did not capture the video. The boxes I tried to trigger were: 0, 5, 15, 21, 32, 45 and 52. Hopefully, it is evident in the video which sounds are played when the specific boxes are triggered: https://youtu.be/Idyx4OtK5to
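For reference, this is the general shape of that box-to-sound mapping, assuming each of the seven boxes drives one Tidal channel; the channel numbers, threshold value, and sendTrigger helper are made up for illustration:

```javascript
// The seven boxes I tried to trigger, each mapped to one sound/channel.
// (The channel numbers d1..d7 are an assumption about the Tidal side.)
const boxToChannel = { 0: 1, 5: 2, 15: 3, 21: 4, 32: 5, 45: 6, 52: 7 };
const THRESHOLD = 27; // rough "warm body" reading in °C

function onGridEyeFrame(pixels) {
  for (const [box, channel] of Object.entries(boxToChannel)) {
    if (pixels[box] > THRESHOLD) {
      sendTrigger(channel); // placeholder: asks Tidal to play that sound
    }
  }
}
```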

For this week, I wanted to use TidalCycles to create ‘music’ or sounds that reminded people of childhood. So, I searched for sounds that could be associated with children, such as the alphabet, numbers and toy sounds. However, when I played the sounds together, it was actually more haunting than I intended. Due to this, I decided that I could experiment with the sounds in a different way. I adjusted each of the sounds so that when played together, it sounded like a bunch of malfunctioning toys.

I can imagine this being used in a children’s toy shop. Perhaps as a child moves through the room and approaches a specific toy, then specific sounds are triggered. I guess this could be useful during the pandemic as it would eliminate the need to touch a toy in order to trial it.

Week 6: GridEye Sensor

Final GridEye video

I start the video by showcasing each block and its respective sound. Funnily enough, this was the most frustrating part of the process. I took 11 (yes, I counted) videos and struggled to find and trigger each sound. I kept re-taping the sensor to my screen in different positions so that I could trigger them all. Sometimes I was too short to trigger the high ones; sometimes I couldn’t squat long enough to trigger the ones on the bottom. If I moved further back I wouldn’t trigger anything, and if I moved closer I would trigger too much at once.

My setup: I taped it to the screen of my Mac. Yes, it fell a few times. I think it’s okay though.

I didn’t know that Tidal only read up to 9 sounds (d10 and d11 wouldn’t work). So although I wanted to layer more sounds, I had to settle for 9. I picked a mashup of convenient positions, as well as positions all around the grid, so that even while recording I would often find a sound that I had forgotten about. This was fun to play around with, but by the 11th video, I had heard the sounds too much.

I expected the GridEye to be able to read from further distances, but when I was further away it didn’t trigger much. I also experimented with a candle and a match, and to my surprise, once there was a small distance, the sensor barely read them. The sensor also only read my forehead, and nothing else of my body, from where I was standing, which I found interesting.

Overall, I am still amazed by how powerful a tiny square can be, and I keep thinking about the many possibilities of what I could do with it.