Ideation

IMG_0319.png

The idea for the setup was to use steel wool as the interaction medium, supporting three different gestures: pushing, plucking, and stroking. Since using Teachable Machine's audio channel for classification seemed overly complex (it would have required multiple microphones to determine direction), I drafted a camera-based setup with the camera placed below the steel wool. To preserve the ductility of the steel wool, I also knew I would need a stretchable frame to carry the medium.

V1

Hardware

IMG_2785_small.png

The first hardware setup was built from a euro box, rubber bands, and foldback clips, with a phone camera streaming video via Iriun Webcam to the computer running Teachable Machine.
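
Before involving the model, the incoming stream can be sanity-checked with a few lines of OpenCV. This is a minimal sketch, assuming the Iriun virtual camera registers as device index 1 (the index varies per machine; index 0 is often the built-in webcam) and that opencv-python is installed:

```python
# Preview the Iriun Webcam stream to verify the view from below the steel wool.
import cv2

cap = cv2.VideoCapture(1)  # assumed index for the Iriun virtual camera
if not cap.isOpened():
    raise RuntimeError("Could not open the virtual webcam")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Steel wool from below", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```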

Software

The class setup of the Teachable Machine model covered the three envisioned gestures plus an idle class for when there was no input.
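
While training and testing happened in the Teachable Machine browser UI, the trained model can also run locally on the same video stream. The following is a rough sketch, assuming the standard Keras export from Teachable Machine (keras_model.h5 plus labels.txt) and the camera index from the previous snippet; the file names and class labels here are assumptions, not part of the original setup:

```python
# Classify webcam frames with an exported Teachable Machine image model.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
# In the standard export, labels.txt lines look like "0 pushing";
# keep only the class name after the index.
labels = [line.strip().split(" ", 1)[1] for line in open("labels.txt")]

cap = cv2.VideoCapture(1)  # same assumed camera index as above
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Teachable Machine image models expect 224x224 RGB scaled to [-1, 1]
    img = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = (img.astype(np.float32) / 127.5) - 1.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    print(labels[int(np.argmax(probs))], f"{probs.max():.2f}")
```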

Experiments V1 Camera.mov

As the video shows, the model performed unexpectedly well almost from the very beginning.

Setup.png

However, some things became clear while tinkering with the model's input: