Exploring the resemblance and contrast between our online and physical identities.
Followed a tutorial on distorting the webcam visualization via a feedback loop when the key "1" is pressed.
After following through with last week's tutorial, I experimented with changing the visualization based on different keyboard inputs. Currently, the distortion plays on its own once the key is pressed; I couldn't figure out how to tie it to how long a key is held. I might need to find more tutorials on this, or switch to experimenting with MIDI inputs after securing a MIDI controller this week. → During the midterm critique, I received feedback to start with basic TouchDesigner tutorials to build a foundation and get comfortable with the different elements of the software.
For weeks 7-8, I mainly followed tutorials to learn the basics of different operators in TouchDesigner. After securing a MIDI keyboard from the media lab, I started looking at MIDI-specific tutorials and connecting the keyboard to TouchDesigner. For now, viewers can interact with a visualization of themselves via the MIDI knobs K1-K8, each affecting different parameters in the "Noise", "Edge", and "Composite" operators I added. The text "Use Knobs K1-K8 to Explore How You Want to Present Yourself!" is placed over the visualization as an instruction / prompt. Comments are included in the TouchDesigner workspace to keep track of which MIDI control affects which visual parameter. For the actual exhibition, I'm debating between placing these details alongside the MIDI controller for clarity, or leaving viewers to explore and discover the interactions themselves.
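The knob mapping above can be sketched in plain Python. This is a minimal illustration of the same idea, not the actual TouchDesigner network: a raw MIDI CC value (0-127) is rescaled into each parameter's range, then routed to a named operator. The operator and parameter names here are hypothetical stand-ins for the real ones.

```python
def scale_cc(value, lo, hi):
    """Map a raw MIDI CC value (0-127) into the range [lo, hi]."""
    return lo + (value / 127.0) * (hi - lo)

# Hypothetical knob-to-parameter routing, mirroring the comments kept in
# the TouchDesigner workspace (all names here are illustrative only).
KNOB_MAP = {
    1: ("noise1",     "amp",      0.0, 2.0),  # K1 -> Noise amplitude
    2: ("noise1",     "period",   0.1, 5.0),  # K2 -> Noise period
    3: ("edge1",      "strength", 0.0, 1.0),  # K3 -> Edge strength
    4: ("composite1", "opacity",  0.0, 1.0),  # K4 -> Composite opacity
}

def on_knob(knob, raw_value):
    """Return (operator, parameter, scaled value) for a knob turn."""
    op_name, par, lo, hi = KNOB_MAP[knob]
    return op_name, par, scale_cc(raw_value, lo, hi)
```

In TouchDesigner itself this logic would typically live in a CHOP Execute DAT callback, writing the scaled value onto the target operator's parameter.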
Over Spring break and in the first week of Spring Quarter, I rethought the direction of my project and tried to bring more intentionality to the visual effects. I thought of localizing the visual effects to only the facial area to create a mask effect. To do so, I experimented with MediaPipe, but so far this only works with the "Edge" operator. More work needs to be done to figure out whether other effects can be localized, and if not, what direction I should pivot to afterwards.
I was able to localize the other effects to the face and body area by using image segmentation in MediaPipe. I also added an on/off trigger to one of the MIDI pads to control whether the webcam image is mirrored.
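The compositing step behind this localization can be sketched with NumPy. This is an assumed reconstruction of the idea, not the project's actual code: given a per-pixel segmentation confidence map (such as the person mask MediaPipe's segmentation produces), the effect is applied only where the mask is confident, and the untouched webcam frame shows through everywhere else.

```python
import numpy as np

def composite_with_mask(frame, effect, mask, threshold=0.5):
    """Apply `effect` only where the segmentation mask is confident.

    frame, effect: HxWx3 float arrays; mask: HxW floats in [0, 1],
    e.g. a person-segmentation confidence map.
    """
    keep = (mask > threshold)[..., np.newaxis]  # HxWx1 boolean
    return np.where(keep, effect, frame)

# Tiny demo: in a 2x2 "frame", only the masked pixel gets the effect.
frame  = np.zeros((2, 2, 3))   # black webcam frame
effect = np.ones((2, 2, 3))    # white "effect" layer
mask   = np.array([[1.0, 0.0],
                   [0.0, 0.0]])
out = composite_with_mask(frame, effect, mask)
```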
I added interactions to all notes and knobs on the MIDI controller by using a Select CHOP to separate out the MIDI notes and have filters from the Composite TOP triggered based on which note is pressed. From feedback in class, I plan to experiment with more diverse visual effects aside from the default ones in the Composite TOP. To increase interactivity, I also plan to add sounds that are triggered when notes are pressed on the MIDI controller.
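The note-to-filter routing can be sketched as a simple dispatch table. This is a hypothetical analogue of splitting note channels with a Select CHOP and switching the Composite TOP's operation; the note numbers and effect names are illustrative assumptions, not the project's actual mapping.

```python
# Hypothetical note-to-effect routing (note numbers and names assumed).
NOTE_EFFECTS = {
    60: "add",         # C4 -> additive blend
    62: "subtract",    # D4 -> subtractive blend
    64: "difference",  # E4 -> difference blend
}

def on_note(note, velocity, default="over"):
    """Pick a composite operation for a pressed note.

    A velocity of 0 is treated as note-off and triggers nothing;
    unmapped notes fall back to a default blend.
    """
    if velocity == 0:
        return None
    return NOTE_EFFECTS.get(note, default)
```

Keeping the mapping in one table makes it easy to extend later, for instance by pairing each note with a sound sample as well as a visual filter.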
I added more dynamics to the visual effect by using a feedback loop, like the one I experimented with in the first few weeks. I also added a white-noise background sound whose volume viewers can control via a MIDI knob. As for the actual installation, I decided to use a monitor to display the visuals, an external webcam attached to the bottom of the monitor to capture the viewer in frame, and a waist-height pedestal to hold the MIDI controller for interactions.
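The core of a video feedback loop can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique rather than the project's actual network: each frame, the previous output is faded by a decay factor and blended back under the new webcam frame, so bright pixels leave trails that fade over time.

```python
import numpy as np

def feedback_step(prev, current, decay=0.9):
    """One step of a video feedback loop.

    The previous output is faded by `decay` and blended back under the
    new frame, producing a fading trail behind moving bright regions.
    """
    return np.maximum(current, prev * decay)

# Impulse demo: one bright pixel, then a black frame -> a fading trail.
frame = np.zeros((4, 4))
frame[1, 1] = 1.0
out = feedback_step(np.zeros((4, 4)), frame)      # pixel appears at 1.0
out = feedback_step(out, np.zeros((4, 4)))        # next frame is black
```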
I tested my project on a Lenovo laptop offered by the Visual Arts Facility; however, the laptop doesn't seem capable of running the TouchDesigner program, so I plan to bring my personal laptop for the exhibition and hide it in a box underneath the monitor so it doesn't interfere with or distract from the project.