My research about perception framed the development of a personal audio-visual instrument, which outputs acoustic sound, digital sound, and digital image. The instrument includes a zither, which is an acoustic multi-string instrument with a fretboard, and 3D software that operates based on amplitude and pitch detection applied to the zither input. Technically, the software would operate on any detected audio input. But the design specifications, mappings, parameterisations, and structural sections are made for a specific zither with aged strings, a personal tuning system, and personal zither playing techniques.
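The analysis stage described above can be illustrated with a minimal sketch: a frame of audio is reduced to an amplitude (RMS) value and a pitch estimate, which then drive visual parameters. The autocorrelation pitch method, the `map_to_visuals` function, and the brightness/hue mapping are all hypothetical stand-ins chosen for illustration; the actual AG software's detectors and mappings are specified elsewhere.

```python
import numpy as np

def analyse_frame(frame, sample_rate):
    """Return (rms_amplitude, pitch_hz) for one audio frame.

    Pitch is estimated with a simple autocorrelation method; this is an
    illustrative stand-in, not the detector used in the AG software.
    """
    rms = float(np.sqrt(np.mean(frame ** 2)))
    # Autocorrelation of the frame; keep non-negative lags only.
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    d = np.diff(corr)
    # Find the first rising slope after the zero-lag peak.
    rising = np.nonzero(d > 0)[0]
    if len(rising) == 0 or rms < 1e-4:
        return rms, 0.0  # silence or no periodicity detected
    # The strongest peak after that point approximates one period.
    peak_lag = rising[0] + int(np.argmax(corr[rising[0]:]))
    return rms, sample_rate / peak_lag

def map_to_visuals(rms, pitch_hz):
    """Hypothetical mapping: amplitude -> brightness, pitch class -> hue."""
    brightness = min(1.0, rms * 10.0)
    hue = 0.0 if pitch_hz <= 0 else float(np.log2(pitch_hz / 55.0) % 1.0)
    return {"brightness": brightness, "hue": hue}

# Usage: analyse one frame of a synthesised 220 Hz tone.
sr = 44100
t = np.arange(2048) / sr
frame = 0.3 * np.sin(2 * np.pi * 220.0 * t)
rms, pitch = analyse_frame(frame, sr)
visuals = map_to_visuals(rms, pitch)
```

In a real-time setting this analysis would run per audio buffer, with the resulting parameters smoothed before being sent to the 3D renderer, so that the image reacts to the zither without visible jitter.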
The work stresses a distinction between the notions of 'play' in music and in gaming. The audience does not interact with the instrument; there are no allusive icons or player paradigms, and the performer does not face the screen. The interaction with the instrument is not simple, and the image creates a reactive stage scene without distracting the audience from the music.
A compositional language emerged with the first version of the instrument, which includes the AG#1 software. Clarifying creative insights through research on perception and attention led to its further development. The subsequent software version, Arpeggio-Detuning, focused on sound organisation. Its creative strategies were then extended to the audio-visual software versions AG#2 and AG#3.
All software versions were developed in collaboration with John Klima, who kindly wrote the code according to my specifications.