Arpeggio-Detuning: exploring disparities between
human perception and digital analysis in music
The Arpeggio-Detuning was developed in 2013, after the first version of my audio-visual software, AG#1. It is rule-based audio software that implements new creative strategies to produce sonic complexity, based on amplitude and pitch analysis of the input from my custom zither, with its aged strings, personal tuning system, and personal playing techniques. The musical language discovered with the Arpeggio-Detuning was later extended to the audio-visual domain with AG#2.
John Klima implemented my design specifications using Marmalade, a cross-platform (iOS/Android) development system originally created for video games, together with an audio library called Maximilian. I parameterised the sound architecture and created the digital sounds.
The AUDIO RECORDINGS at the end of the text illustrate the resulting musical forms.
Whilst software operates on mathematical calculations, humans sample and process information according to attention, cognitive principles, and cross-sensorial context. The disparities are tangible in pitch analysis. For example, a sound may vary in pitch through its attack, sustain, and release, and nevertheless we group and hierarchise those pitch variations as we segregate the sound from the soundscape. In contrast, the software slices the spectrum according to a buffer size, which can lead overtones or resonant frequencies to be extracted as fundamentals. Conversely, an overtone can be perceptually prominent because of the musical structure, without being the fundamental according to the mathematical formulas.
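A minimal sketch can illustrate why buffer-based analysis may report an overtone as the fundamental. The buffer size, sample rate, frequencies, and amplitudes below are hypothetical, not taken from the software described here; the point is only that the loudest spectral bin need not be the perceived fundamental:

```python
import numpy as np

SR = 44100   # sample rate in Hz (hypothetical)
N = 2048     # analysis buffer size (hypothetical)

def naive_fft_pitch(buffer, sr=SR):
    """Return the frequency of the strongest FFT bin in one buffer.

    This mimics slicing the spectrum per buffer and picking the
    loudest bin: when a harmonic is stronger than the fundamental,
    the harmonic is reported as the pitch."""
    spectrum = np.abs(np.fft.rfft(buffer * np.hanning(len(buffer))))
    peak_bin = int(np.argmax(spectrum[1:])) + 1  # skip the DC bin
    return peak_bin * sr / len(buffer)

# A 220 Hz tone whose second harmonic (440 Hz) is louder than the fundamental,
# as can happen with aged strings and resonant instrument bodies.
t = np.arange(N) / SR
signal = 0.3 * np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 440 * t)
detected = naive_fft_pitch(signal)  # close to 440 Hz, an overtone, not 220 Hz
```

Real pitch trackers use more robust methods (autocorrelation, harmonic product spectrum), but the buffer-sized window and its frequency resolution impose the same kind of disparity the text describes.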
One can create complexity and unpredictability with purely rule-based software, particularly with an acoustic, audible input. Whereas the zither enables immediate control over the sonic outcome, software entails thresholds between the performer's control and the instrument's unpredictability, which can be manipulated so as to convey liveness and expression.
My interfaces in performance are the analysis of the zither input and an iPad touch screen. If the zither were plugged into a guitar tuner, the tuner would display a succession of different values for a single string or chord. The audio analysis process provides two streams of data, whose disparities are explored as creative material: one corresponds to the extracted fundamental pitch, the other to the nearest tone or half tone. The detected pitch is mapped to the closest tone or half tone, which is not itself played back; tones and half tones are further mapped to pre-recorded sounds. A single audio input detection causes the corresponding pre-recorded sound to play back twice. The result is not repetitive, because the second playback is detuned: the detuning value is equal to the difference between the detected frequency and the closest tone or half tone.
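The mapping described above can be sketched in a few lines. This is an illustrative reading, not the actual implementation: the A4 = 440 Hz reference and equal temperament are assumptions (the text mentions a personal tuning system), and the function names are hypothetical:

```python
import math

A4 = 440.0  # reference pitch; an assumption, not stated in the text

def nearest_semitone(freq_hz):
    # Snap a detected frequency to the closest equal-tempered tone/half tone.
    n = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (n / 12)

def playback_plan(detected_hz):
    """The two playbacks described in the text: the pre-recorded sound
    mapped to the nearest tone/half tone plays first at nominal pitch,
    then again detuned by the gap between the detected frequency and
    that tone/half tone."""
    semitone = nearest_semitone(detected_hz)
    detune_hz = detected_hz - semitone       # signed detuning value
    return semitone, semitone + detune_hz    # second playback is detuned

# e.g. a detected 452 Hz snaps to A4 (440 Hz); the second playback
# is shifted by the +12 Hz remainder
nominal, detuned = playback_plan(452.0)
```

Because the detuning is derived from the continuously varying zither input rather than from a fixed table, each detection yields a slightly different second playback, which is where the non-repetitive quality comes from.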