
The setup idea was to use steel wool as the interaction medium, with three different gestures: pushing, plucking and stroking. Using Teachable Machine's audio classification seemed overly complex (it would have required multiple microphones to determine the direction of a gesture), so I drafted a camera-based setup with the camera placed below the steel wool. To preserve the ductility of the steel wool, I also knew I would need a stretchable frame to carry the medium.

The first hardware setup was built from a Euro box, rubber bands, foldback clips and a phone camera streaming video via Iriun Webcam to the computer running Teachable Machine.
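
Iriun Webcam registers the phone as an ordinary webcam on the computer, so the stream can be read like any other capture device. Below is a minimal sketch for checking the feed with OpenCV; the device index is an assumption (the built-in camera usually occupies index 0, so an external stream often lands on index 1) and may differ per machine.

```python
import cv2  # OpenCV, for reading the webcam stream

# Index 1 is an assumption: Iriun usually registers as an extra
# webcam next to the built-in camera (index 0). Adjust if needed.
IRIUN_DEVICE_INDEX = 1

cap = cv2.VideoCapture(IRIUN_DEVICE_INDEX)
if not cap.isOpened():
    raise RuntimeError("Could not open the Iriun webcam stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Steel wool feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```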
The Teachable Machine project was set up with four classes: the three envisioned gestures plus an idle state for when there was no input.
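
Teachable Machine itself runs in the browser, but a trained image project can also be exported and run locally. The sketch below is a hedged illustration, assuming the project is exported in Teachable Machine's Keras format (a `keras_model.h5` plus a `labels.txt` listing the four classes) and read from the same camera index as above; file names and index are assumptions from the standard export, not this project's actual files.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Files as produced by Teachable Machine's Keras export (assumed names).
model = load_model("keras_model.h5", compile=False)
# The exported labels.txt prefixes each class name with its index.
class_names = [line.strip() for line in open("labels.txt")]

cap = cv2.VideoCapture(1)  # same Iriun device index as above (assumption)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Teachable Machine image models expect 224x224 RGB, scaled to [-1, 1].
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224))
    batch = (img.astype(np.float32) / 127.5) - 1.0
    prediction = model.predict(batch[np.newaxis, ...], verbose=0)[0]
    label = class_names[int(np.argmax(prediction))]
    print(label, float(prediction.max()))  # e.g. "2 Stroking 0.97"
    cv2.imshow("Classification", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```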
As the video shows, the model ran unexpectedly well almost from the very beginning.

However, some things became clear while tinkering with the model's input: