Week 11: Final Progress w/Omar

For this week, Omar and I met up to work on the implementation of our project. We put a lot of thought into brainstorming which visuals and shaders we would use to get the most out of our output. As we went through the process of outlining our idea, we kept coming up with somewhat different applications of shaders, since we wanted the piece to be even more fun and interactive.

The idea we decided to go ahead with and implement is inspired by the game ‘Just Dance’, which we both played when we were younger and still play today. It is also well known and nearly universal: almost everyone in our generation has played it or at least knows about it.

This is an image for reference from Just Dance 2017.

Seeing that we currently live in a time of restrictions on how many people can be in the same room at once and on how much they can interact, we are working on an installation that requires only one person in the frame at a time.

To start off, an image that we initialize is presented to the very first person, who reenacts/imitates it. Once they do it correctly and follow its silhouette, they are given a few seconds to stand in a position that expresses their thoughts, their ideas, or a move they find personal. When they do that and stand still, the system takes a screenshot of the individual’s pose and saves the image for the next person. The next person to step into the frame follows the same steps: stop, pose like the image, record their own individualized pose, and so on.
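To make the flow concrete, here is a minimal sketch of that loop as a simple state machine in p5.js. This is only an outline of the idea, not our actual code: `matchesReference()` is a placeholder for the pose check we describe below, and the 5-second timer is an arbitrary choice.

```js
let state = 'IMITATE';
let referenceImg; // the pose saved from the previous visitor
let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
}

function draw() {
  image(video, 0, 0, width, height);
  if (state === 'IMITATE') {
    if (referenceImg) {
      // Overlay the previous visitor's pose semi-transparently
      tint(255, 127);
      image(referenceImg, 0, 0);
      noTint();
    }
    if (matchesReference()) {
      state = 'POSE';
      setTimeout(capturePose, 5000); // a few seconds to strike a personal pose
    }
  }
}

function matchesReference() {
  return false; // placeholder: will come from the trained pose classifier
}

function capturePose() {
  referenceImg = get(); // screenshot of the canvas becomes the next reference
  state = 'IMITATE';
}
```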

When working on the implementation, we found that using PoseNet together with ml5.neuralNetwork() would be the best way to record certain poses and check for them later. Looking into machine learning was challenging but also really exciting. It took us a while to understand the key concepts, but once we did, it was fairly easy to apply them to our project and plan out how each part would be coded. After setting things up, we took turns doing a T-pose on camera while the code collected training data. The keypoint data was pushed into an array and saved as a .json file, which we later normalized (to scale down the large x and y pixel values) and used to train the model.

Although we have a clear path laid out for how we’re going to move forward, we ran into some problems here. In the p5 editor, the Training Performance window is blank, so we cannot track the training of our model. The model also never downloads, and we get an error in the console that we spent hours trying to troubleshoot and that doesn’t seem to make much sense.
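For reference, the data-collection step looked roughly like the sketch below, assuming ml5.js (with PoseNet) is loaded alongside p5.js. The variable names, the 'tpose' label, and the key bindings are illustrative rather than our exact code:

```js
let video, poseNet, pose, brain;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  // PoseNet tracks 17 keypoints on the body
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', gotPoses);

  brain = ml5.neuralNetwork({
    inputs: 34,             // 17 keypoints x (x, y)
    outputs: 2,             // e.g. 'tpose' vs. 'other'
    task: 'classification',
    debug: true             // should open the Training Performance window
  });
}

function gotPoses(poses) {
  if (poses.length > 0) pose = poses[0].pose;
}

function draw() {
  image(video, 0, 0);
}

function keyPressed() {
  if (key === 't' && pose) {
    // Flatten the keypoints into one array of raw pixel x/y values
    const inputs = [];
    for (const kp of pose.keypoints) {
      inputs.push(kp.position.x, kp.position.y);
    }
    brain.addData(inputs, ['tpose']);
  } else if (key === 's') {
    brain.saveData('tpose-data'); // downloads the collected samples as .json
  }
}
```

Pressing ‘t’ records one training sample while holding the pose, and pressing ‘s’ downloads everything collected so far as a .json file.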
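The normalize-and-train step then loads that .json back in; this is roughly where our errors appear. The filename and epoch count here are placeholders:

```js
let brain;

function setup() {
  brain = ml5.neuralNetwork({
    inputs: 34,
    outputs: 2,
    task: 'classification',
    debug: true
  });
  brain.loadData('tpose-data.json', dataLoaded);
}

function dataLoaded() {
  // Scale the raw pixel coordinates before training
  brain.normalizeData();
  brain.train({ epochs: 50 }, finishedTraining);
}

function finishedTraining() {
  console.log('training complete');
  brain.save(); // should download model.json, model_meta.json, model.weights.bin
}
```

With debug: true, ml5 is supposed to show the Training Performance window that currently stays blank for us, and brain.save() is the call that should download the trained model but never does.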

Screenshot of the error displayed in the console below the p5 sketch.
Screenshot of the error displayed in the console in the developers tools.