exploring the resemblance and contrast between our online and physical identities
As social media has become a mainstream means of social connection and a personal emotional outlet, what is real and what is not on these digital platforms has become a common topic of discussion. For instance, when we post on any social media platform, we are often prompted to write a caption to explain the post or to jot down our thoughts in the moment. But how much of what we write in these captions represents our real thoughts?
For some individuals, their digital identity may be markedly different from how they show up in real life. For others, the side of themselves they display on digital platforms more closely resembles their true inner selves than the side they show in their real lives. I find myself in the overlap of these two categories. To some extent, the conflict between digital and IRL (in-real-life) identities can cause confusion and self-doubt as we struggle to understand what truly represents us.
"Obscurity" collects viewers’ thoughts at the moment and presents real-time visualization feedback based on the inputs. The goal of this project is to present a similar type of connection space and emotional outlet, but in a physical space rather than a digital space, to observe potential differences between those presented on social media versus what people are actually thinking in real life. Users can interact with the visualization to explore different representations of themselves via MIDI controllers. Distortions as a result of the user’s manipulations will appear in the mask visualization as well as the captured video. At times, the mask will co-exist with the user, and at other times, only the mask or the user will be displayed, provoking introspections on how we present ourselves outwardly.
The project is developed using TouchDesigner, a webcam, and a MIDI controller. TouchDesigner is used to create the visualization and to link the MIDI controller to the webcam. Specifically, a plugin, MediaPipe, is used to recognize and track the user's body, so that the MIDI-driven manipulations are applied within that tracked region. The webcam captures the viewer in frame, focusing on the individual’s upper body, and the visualizations are generated from this footage. The MIDI controller serves as the medium connecting the viewer with the visualization, allowing them to interact with a representation of themselves through its various notes and knobs.
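To give a rough sense of how this kind of MIDI-to-visual mapping can be wired up, the sketch below shows a CHOP Execute DAT callback written in TouchDesigner's Python that listens to channels from a MIDI In CHOP and writes their values into parameters of the distortion network. This is a minimal illustration using default TouchDesigner operator names (`noise1`, `displace1`, `cross1`) and default MIDI channel naming; the project's actual network and mappings may differ.

```python
# CHOP Execute DAT attached to a MIDI In CHOP.
# Each knob (MIDI CC) and note is mapped to a parameter of the
# distortion network that warps the mask/video composite.
# Operator names and channel names here are illustrative defaults,
# not necessarily those used in the actual project file.

def onValueChange(channel, sampleIndex, val, prev):
    name = channel.name

    # Knob 1 (CC 1): amount of noise driving the displacement pattern.
    if name == 'ch1ctrl1':
        op('noise1').par.amp = val / 127.0 * 2.0

    # Knob 2 (CC 2): how strongly the displacement warps the video.
    elif name == 'ch1ctrl2':
        op('displace1').par.displaceweightx = val / 127.0
        op('displace1').par.displaceweighty = val / 127.0

    # A note press (e.g. note 60): crossfade between showing the mask
    # visualization and the live webcam image.
    elif name == 'ch1n60':
        op('cross1').par.cross = 1.0 if val > 0 else 0.0

    return
```

In this arrangement the knobs act as continuous controls over the degree of distortion, while the notes act as switches between the mask-only, user-only, and combined views described above.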