Face & Palm Documentation


Face & Palm is an interactive installation that projects the user's face onto their hands. It simulates the act of looking down at a reflective surface, using projection mapping on the hands to immerse the user in a surreal on-hand mirroring effect.

Demo Video:


Technologies & Materials:

  • Processing
    • OpenCV
      • Face Detection
      • Mapping Transformation
  • 1 × Laptop
  • 2 × Logitech C920 Webcam
  • 1 × Portable LED light
  • 1 × Pocket Projector
  • 3 × Manfrotto Clip Arm
  • 1 × Wheeled Cabinet


The code is divided into three classes that keep the structure clear: FaceCam, HandCam, and Mixer.

As a result, the main body of the draw loop is quite concise.
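The loop can be sketched in plain Java rather than Processing. The class names FaceCam, HandCam, and Mixer come from the project; every method and field below is an assumption for illustration, not the actual code:

```java
// Minimal sketch of the three-class decomposition. The real classes wrap
// webcams and OpenCV; here each stage is stubbed so the frame flow is clear.
public class MainLoopSketch {

    static class FaceCam {
        int[] faceImg = new int[400 * 400]; // fixed 400 x 400 face crop
        void update() {
            // would grab a webcam frame and run face detection; stubbed here
        }
    }

    static class HandCam {
        boolean[] handMask = new boolean[640 * 480]; // binary hand mask
        void update() {
            // would grab a frame, warp it, and color-key the hands; stubbed
        }
    }

    static class Mixer {
        int frames = 0;
        void display(FaceCam face, HandCam hands) {
            // would mask faceImg with handMask and draw it; stubbed
            frames++;
        }
    }

    // The per-frame body, analogous to Processing's draw():
    static void drawFrame(FaceCam face, HandCam hands, Mixer mixer) {
        face.update();
        hands.update();
        mixer.display(face, hands);
    }

    public static void main(String[] args) {
        FaceCam face = new FaceCam();
        HandCam hands = new HandCam();
        Mixer mixer = new Mixer();
        for (int i = 0; i < 3; i++) drawFrame(face, hands, mixer);
        System.out.println(mixer.frames); // prints 3
    }
}
```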



The FaceCam class fetches frames from the face-facing webcam. It uses OpenCV's frontal-face cascade to extract the largest face in the frame and outputs a faceImg PImage of a fixed square size (I chose 400 × 400 based on performance).
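The "largest face" selection amounts to comparing the areas of the bounding boxes the cascade returns. A minimal sketch in plain Java (the helper below is hypothetical, not the project's actual code):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: given the bounding boxes returned by a face
// detector (each {x, y, w, h}), keep only the one with the largest
// area, as FaceCam does before cropping the 400x400 faceImg.
public class LargestFace {
    static int[] largest(List<int[]> faces) {
        int[] best = null;
        int bestArea = -1;
        for (int[] r : faces) {
            int area = r[2] * r[3]; // w * h
            if (area > bestArea) { bestArea = area; best = r; }
        }
        return best; // null if no face was detected this frame
    }

    public static void main(String[] args) {
        List<int[]> faces = Arrays.asList(
            new int[]{10, 10, 80, 80},   // small face
            new int[]{200, 50, 160, 160} // larger face -> picked
        );
        System.out.println(Arrays.toString(largest(faces)));
        // prints [200, 50, 160, 160]
    }
}
```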


A lot more is going on in the HandCam class.

  1. It warps the raw image to match the projection area. I used the “mapProjectionSpace” example from class to generate a JSON file for a specific installation setup. The main program only applies the warp and never modifies the transformation. The warped image should be a perfect rectangle matching the projection area, which ensures that the visuals line up (almost) perfectly with the hands.
  2. The HandCam uses color-difference keying to separate the hands from the background. When the program first starts, press “D” to capture the difference reference for the specific setup. After the comparison, OpenCV finds the contours of the hands. If a contour’s bounding box is larger than a certain threshold, it is recognized as a hand. (Other objects would also work, but the on-screen instructions tell the user to use their hands.)
  3. The HandCam finally stores every contour in an ArrayList of Hand objects, each containing the center coordinates and a binary image of the hand shape.
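Steps 2 and 3 boil down to per-pixel difference keying followed by a size check. A plain-Java sketch of that idea (the threshold value and helper names are assumptions, and the real project uses OpenCV's contour finder rather than this hand-rolled bounding box):

```java
// Hypothetical sketch of the difference keying: compare each pixel of
// the current frame against the reference captured with "D", and mark
// it as "hand" when the color distance exceeds a threshold.
public class DiffKey {
    // Manhattan distance between two 0xRRGGBB pixels.
    static int dist(int a, int b) {
        int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);
        int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);
        int db = (a & 0xFF) - (b & 0xFF);
        return Math.abs(dr) + Math.abs(dg) + Math.abs(db);
    }

    // Binary mask: true where the frame differs enough from the reference.
    static boolean[] mask(int[] reference, int[] frame, int threshold) {
        boolean[] m = new boolean[frame.length];
        for (int i = 0; i < frame.length; i++)
            m[i] = dist(reference[i], frame[i]) > threshold;
        return m;
    }

    // Simplified stand-in for the contour step: the bounding box {x, y, w, h}
    // of all masked pixels; boxes above a size threshold count as hands.
    static int[] boundingBox(boolean[] m, int w) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
        for (int i = 0; i < m.length; i++) {
            if (!m[i]) continue;
            int x = i % w, y = i / w;
            minX = Math.min(minX, x); minY = Math.min(minY, y);
            maxX = Math.max(maxX, x); maxY = Math.max(maxY, y);
        }
        return maxX < 0 ? null : new int[]{minX, minY, maxX - minX + 1, maxY - minY + 1};
    }
}
```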


The Mixer constantly reads from the HandCam and the FaceCam. It masks the face PImage with the hand PImage and displays the result at the original position of each hand. I also added a visual effect that splits the face.
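The masking itself is a per-pixel combination of the hand mask and the face image (in Processing this is what PImage.mask() does). A plain-Java equivalent of that step might look like this (hypothetical helper, not the project's code):

```java
// Hypothetical equivalent of the Mixer's masking step: keep face pixels
// where the hand mask is set, fully transparent everywhere else.
public class FaceMask {
    static int[] apply(int[] face, boolean[] handMask) {
        int[] out = new int[face.length];
        for (int i = 0; i < face.length; i++)
            out[i] = handMask[i] ? (0xFF000000 | face[i]) : 0x00000000;
        return out;
    }

    public static void main(String[] args) {
        int[] face = {0x112233, 0x445566};
        boolean[] mask = {true, false};
        int[] out = apply(face, mask);
        System.out.println(Integer.toHexString(out[0])); // prints ff112233
        System.out.println(out[1]);                      // prints 0
    }
}
```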

When no hands are detected, it plays an instruction animation to guide new users into the interaction.


It’s really exciting to finish a complete installation. Since I worked on my own in a limited time, the final result is a compact little piece. There weren’t many technical challenges, but a lot of fine-tuning during development. The physical installation is compact as well: a single wheeled cabinet covered in black fabric holds all the sensors, the projector, and the laptop. Overall I’m satisfied with the outcome, given that I went solo. Still, some aspects could definitely be improved with more time and effort:

  • Framerate (could be improved by porting to openFrameworks)
  • Projection-mapping accuracy (a Kinect could help, but the installation would need much more space due to the Kinect’s minimum sensing distance)
  • More accurate hand recognition (training a hand cascade to exclude other objects)
  • More visual effects and interaction possibilities (given more time and thought)