adriana sá
trans-disciplinary music

SCIENTIFIC PUBLICATIONS

EDITORIAL WORKS

LIVE INTERFACES journal (Univ. Lusófona/ CICANT) ISSN 2975-9943

The SECOND EDITION features three media-rich contributions focused on sound art performance, interactive installation, and automated video editing. They reveal distinct interpretations of liveness and creativity, contrasting sharply with one another in terms of motivations, methods and modes of discussion. Yet, ultimately we can also extract an overarching theme: interfaces that transcend personal control.

This common ground invites the reader-viewer-listener to reflect on a series of fundamental questions. What defines liveness? How are flow and expression realised? Where do creativity and authorship reside? And what roles do interfaces play in these processes? These are ongoing, open questions, given the wealth of existing approaches and interpretations. Such interrogations are Live Interfaces’ raison d’être.

The FIRST EDITION gathers theoretical and media-rich contributions, which interrogate the meanings of ‘liveness’ and ‘mediation’ in quite different ways. In consonance with the cross-disciplinary spirit of the journal, these investigations reveal fuzzy thresholds between material and immaterial, entropy and negentropy, self and other.

Media-rich Proceedings of the International Conference on Live Interfaces 2022, ed. Adriana Sá, Lusofona University, 2022.

ICLI 2022 offered the opportunity to investigate how different understandings of performance technology might convey an approach or an alienation from the physical body and the environment. It exposed a variety of motivations and approaches to ‘liveness’, ‘timing’ and ‘flow’.

The debate on environmental impact and sustainability has been proliferating, as have social “off-grid” movements and artistic practices that sonify and visualise physical phenomena not directly perceivable through the human senses. This coexists with the dominant paradigm of our current societies, where everything is supposed to be quantifiable, and the reality on a mobile phone screen seems truer than direct experience. Nevertheless, software is necessarily biased: it mediates our action through code, and code embeds theories informed by specific purposes and criteria. The problem is that those theories are too often taken for granted. They remain concealed in a black box formed of multiple layers of code; the more science and technology succeed, the more opaque and obscure they become, and the more distant we become from computation as creative material, and from a humanistic way of thinking.

Previously, ICLI took place in Newcastle, UK (2012), Lisbon, PT (2014), Sussex, UK (2016), Porto, PT (2018) and Trondheim, NO (2020). In 2022, it assumed a different format: there were invited contributions rather than submissions. The program proposed a set of research vectors, while questioning how to connect the physically and non-physically present participants in inventive, engaging ways. The experience generated material for further research, and motivated the emergence of a peer-reviewed, media-rich journal called Live Interfaces.

INTER-FACE: International Conference on Live Interfaces 2014 (ICLI 2014), ed. Adriana Sá, Miguel Carvalhais, Alex McLean, pub. Porto University, CECL & CESEM (NOVA University), MITPL (University of Sussex), 2015. ISBN 978-989-746-060-9

With preface interview-discussion “Live Interfaces: Seeds of Debate” by Adriana Sá, Joel Ryan, Edwin van der Heide, Atau Tanaka, Andrew McPherson, Thor Magnusson, Alex McLean, Miguel Carvalhais and Mick Gierson, pp. 14-28. PDF

INTER-FACE, the second International Conference on Live Interfaces, was dedicated to problematizing convergences and divergences between different understandings of performance technology. It sought to expose a variety of motivations and approaches, and to discuss how specific understandings of ‘liveness’, ‘immediacy’, ‘timing’ or ‘flow’ manifest in performance with digital media.

INTER-FACE gathered numerous paper presentations, performances, interactive installations, poster demonstrations and workshops. It took place in Lisbon, Portugal, at the Fine Arts Faculty of the University of Lisbon (FBAUL); the School of Music of the National Conservatorium (EMCN); ZDB; the National Museum for Contemporary Arts (MNAC); and the Institute of Art, Design and Enterprise (IADE).

The Conference was biennial, and these Proceedings were published a year after the conference itself. The authors had the opportunity to strengthen their work after presenting at the conference, benefiting from the feedback of the other participants and the editorial peer review.

The Conference included two round-tables, “Problematizing Foundations” and “Further Directions”. These moments were extremely useful to outline a common ground of discussion, and we wanted the proceedings to include a general dimension as well. This is the purpose of the preface interview, which developed as a collaborative online discussion after the conference itself.

See also the flyer with the program

DOCTORAL THESIS

A Perceptual Approach to Audio-Visual Instrument Design, Composition and Performance. Goldsmiths, University of London, 2016

The practical component and the bridging of art and science are shown on this WEBSITE

BOOK CHAPTERS AND ARTICLES

Transdisciplinarity, Composition, Expression: Reflections of a Spherical Way of Thinking.
In Leonardo, MIT Press, 2023.

Abstract

Bridging art music, philosophy and perception science, this article proposes a spherical way of thinking; the term evokes an inclination to sense connections between all things, implying a transdisciplinary approach to the world. The tensions between different viewpoints can drive us to think beyond any dichotomy, and uncertainty can inspire creativity. By exposing how this is reflected in her work, the author hopes to activate new questions and new ways of addressing them.

Using Video-Game Technologies in New Interfaces for Musical Expression.
In Teaching Sound: Aesthetics & Praxis, CILECT (International Association of Film and Television Schools), 2025 (in press).

Abstract

Among the artists and researchers who have explored music with 3D environments, many have made explicit references to gaming while exploring participatory interaction, video-game player paradigms, and allusive iconography. I have also been using 3D technologies since 2007, but in a quite different way. My 3D software operates based on the sound from an acoustic instrument, and the image is intended to work like a reactive stage scene.

Whatever motivates a practitioner, it is useful to think of how interaction designs and audio-visual relationships drive experience. I will begin this paper with a short description of how 3D engines render sound, and will then proceed with an overview of the diversity of approaches to music in 3D environments. Following this, I will analyse convergences and divergences between the notion of flow in music and gaming. Then, I will draw from perception science and film theory to analyse how audio-visual relationships and the dynamics of sound and image drive attention. Armed with these tools, it is possible to glean how any work exploring music with 3D environments conducts perception.

A Parametric Model for Audio-Visual Instrument Design, Composition and Performance.
Adriana Sá & Atau Tanaka. In Live Visuals: History, Theory, Practice. Routledge, 2022. PDF

Abstract

This chapter proposes a parametric model that is useful in audio-visual instrument design, composition and performance. We can draw a separation between those activities, but in practice that separation might not be so obvious: ultimately, the iterative creation process must always consider the final, global experience. Derived from a perceptual approach, the model is applicable to the broad diversity of aesthetic options and technical platforms. One can equally discard part of the parameters to analyse recorded audio pieces and films. On the one hand, the model enables the separate analysis of performer-instrument interaction, sound, image, audio-visual relationship and physical setup. On the other, it enables the analysis of how the combination conducts the audience’s experience.

The chapter begins by presenting each parameter independently, while illustrating their use with a range of artistic examples. It then explains how their combination facilitates the analysis of expression, of the relative strength of sound and image and of the audience’s feeling of presence. Finally, it demonstrates how the model can be used in creative practice, showing its usefulness as a compositional tool.

Sonic Expression, Mediation and Perception (in Portuguese: Expressão Sonora, Mediação e Percepção).
In The Sonic Experience: From Linearity to Circularity (in Portuguese: A Experiência Sonora: da Linearidade à Circularidade), Documenta, 2023, pp. 139-172.

The complementary practice and the bridging of art and science are shown on this WEBSITE
Abstract (translation from Portuguese)

When we enter the words interface, music or expression into a search engine, we are presented with many links for "NIME" - New Interfaces for Musical Expression. This term is well known among those dedicated to the invention of new musical systems and instruments. It emerged in 2001 to designate a conference that has taken place annually ever since, bringing together a wide range of creative perspectives. Research into new sonic interfaces is flourishing, but interestingly, the meaning of 'expression' is rarely discussed. This is an important topic, not only because the spectrum of sensibilities and creative motivations is vast, but also because interaction with digital technologies is governed by implicit theories, hidden beneath various layers of code.

In this chapter, we bring together creative practice and perceptual science to set out a notion of sonic expression and outline the ways in which this understanding may converge with or diverge from others. We begin by proposing a method that facilitates the analysis of any sonic performance, regardless of its aesthetic approach or the technology used — whether analogue, digital, or hybrid. This method consists of a parametric model that helps us understand how a performance shapes the audience’s experience.

A Method for the Analysis of Sound Art and Audio-Visual Performance.
In Audiovisual e Indústrias Criativas: presente e futuro vol. 1, McGraw Hill Education, 2021, pp. 575-589. ISBN: 9788448627355 / ISBN ebook: 9788448627355

The complementary practice and the bridging of art and science are shown on this WEBSITE
Abstract (translation from Portuguese)

The term NIME, acronym for New Interfaces for Musical Expression, applies to a great diversity of creative practices. Nevertheless, the meaning of expression is rarely discussed. In this article we formulate an understanding of expression where the reciprocal interaction between performer and instrument is important, as well as the relation between audition, vision and space. Articulating artistic practice and the science of perception, we describe three creative principles and a parametric visualisation model. The model includes parameters for interaction, sonic and visual dynamics, audio-visual relationship, physical setup and semantics. Those parameters are applicable to any technical platform and aesthetic approach. Our proposed visualisation method facilitates the analysis of how their inter-relationship drives the audience’s experience.
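For concreteness, the model's dimensions could be encoded as a simple data structure. This is a minimal sketch, not the authors' formulation: the field names follow the parameters listed in the abstract, while the 0-to-1 scale and the comparison method are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PerformanceParameters:
    """Hypothetical encoding of the parametric model's dimensions,
    each rated on an assumed 0..1 scale for a given performance."""
    interaction: float                # reciprocal performer-instrument interaction
    sonic_dynamics: float             # prominence of sonic activity
    visual_dynamics: float            # prominence of visual activity
    audio_visual_relationship: float  # perceived audio-visual binding
    physical_setup: float             # spatial configuration of the setup
    semantics: float                  # semantic charge of the material

    def dominant_modality(self) -> str:
        """Naive comparison of sonic vs visual salience."""
        return "sound" if self.sonic_dynamics >= self.visual_dynamics else "image"
```

A rating scheme like this would let one plot several performances side by side and compare, for instance, how strongly each skews the audience toward sound or image.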

The Variables of Spatial Presence: a Parametric Model.
Adriana Sá & Atau Tanaka. In xCoAx 2019 Proceedings.

Abstract

The term ‘spatial presence’ refers to the feeling of presence in a mediated space. This subjective experience has been discussed in media theory, sound art, film and performance. It depends on multiple variables, or parameters. This paper presents a parametric model that can be used to analyze those variables and their relationships. It exposes methods to assess interaction, characteristics of sound and image, audio-visual relationship and physical setup. It also exposes methods to assess how these variables intertwine in perceptual experience. The model draws from perception science, interaction design, music and audio-visual theory. It is applicable to the broad diversity of aesthetic options and technical platforms, facilitating the analysis of spatial presence in any performance. One can also discard part of the parameters so as to analyze installations, sound art and film.

Designing Musical Expression.
In xCoAx 2017 Proceedings.

The complementary practice and the bridging of art and science are shown on this WEBSITE
Abstract

The term New Interface for Musical Expression (NIME) has been applied to a great variety of instruments and systems since the first NIME conference in 2001. But what is musical expression, and how does an interface intended for idiosyncratic expression differ from ubiquitous interfaces? This paper formulates an understanding where the reciprocal interaction between performer and instrument is important. Drawing from research in perception science, interface design and music, the paper specifies methods that can be used to analyse interaction, attention dynamics and semantics. The methods are applicable to any technical platform and aesthetic approach, facilitating the discussion of creative strategies and the analysis of music experience. The paper uses these methods to describe a NIME that combines an acoustic string instrument and software that operates based on the acoustic sound. The software applies the difference between the detected pitch and the closest tone or half-tone to the processing of pre-recorded sounds. The proposed methods help to explain how this NIME enables versatile musical forms, and prevents undesired outcomes.
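The pitch-deviation idea mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation; the equal-tempered A4 = 440 Hz reference and the mapping to a normalised control value are assumptions.

```python
import math

A4 = 440.0  # reference tuning in Hz; an assumption, not taken from the paper

def semitone_deviation(freq_hz):
    """Return the nearest equal-tempered MIDI note and the deviation
    from it in cents, for a detected pitch in Hz."""
    midi = 69 + 12 * math.log2(freq_hz / A4)  # continuous MIDI pitch number
    nearest = round(midi)                     # closest tone / half-tone
    cents = (midi - nearest) * 100            # deviation, roughly -50..+50 cents
    return nearest, cents

def deviation_to_control(cents):
    """Map the -50..+50 cent deviation to a hypothetical 0..1 control
    value that could drive the processing of pre-recorded sounds."""
    return (cents + 50) / 100
```

An exactly tuned tone yields zero deviation, while a pitch bent slightly sharp or flat yields a small positive or negative value; how that value is mapped to sound processing is where the compositional choices lie.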

Exploring Disparities Between Acoustic and Digital Sound.
In Leonardo Transactions, vol. 48 No. 3, MIT Press, 2015, pp. 280-281. PDF

Abstract

Mapping digital sound to an acoustic input enables the performer and the software to ‘talk’ simultaneously. Whilst the performer has direct control over the acoustic outcome, the digital output can become a means of destabilization, as it is mediated through code. Musical expression substantiates as the performer addresses the unexpected resourcefully. This text describes the performative dynamics in terms of perceptual mechanics.

The Fungible Audio-Visual Mapping and its Experience.
Adriana Sá, Baptiste Caramiaux and Atau Tanaka. In Journal Of Science And Technology Of The Arts vol. 6 No. 1, 2014, pp. 85-96. p-ISSN: 1646-9798 | e-ISSN: 2183-0088

Abstract

This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed, perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is: how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships? We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We report a study that draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole.

Repurposing Video Game Software for Musical Expression: a perceptual approach.
In Proceedings of New Interfaces for Musical Expression 2014, pp. 331-334.

Abstract

This article exposes a perceptual approach to instrument design and composition, and introduces an instrument that combines acoustic sound, digital sound, and digital image. We explore disparities between human perception and digital analysis as creative material. Because the instrument repurposes software intended to create video games, we establish a distinction between the notions of “flow” in music and in gaming, questioning how it may substantiate in interaction design. We further describe how the projected image creates a reactive stage scene without diverting attention from the music.

A Study About Confounding Causation in Audio-Visual Mapping.
Adriana Sá, Baptiste Caramiaux, Atau Tanaka. In xCoAx 2014 Proceedings, pp. 274-288

Abstract

The text reports a study, which draws upon methods from experimental psychology to inform audio-visual instrument design. The study aims at gleaning how an audio-visual mapping can produce a sense of causation, and simultaneously confound the actual cause and effect relationships. We call this a fungible audio-visual mapping. The participants in this study are shown a few audio-visual mapping prototypes. We collect quantitative and qualitative data on their sense of causation and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components - sufficient complexity to be confusing. The sense of causation embraces the whole when the specific cause-effect concepts are inconclusive.

How an Audio-Visual Instrument Can Foster the Sonic Experience.
In Live Visuals, eds. L. Aceti, S. Gibson, S. M. Arisona, O. Sahin, Leonardo Almanac vol. 19 No. 3, MIT Press, January 2013, pp. 284-305. ISBN: 978-1-906897-22-2 | ISSN: 1071-4391

Abstract

The chapter formulates an understanding of how an audio-visual instrument can be composed in such a way that the experience is driven through sound organization – modulated, but not obfuscated, by a moving image. This is particularly challenging, as normally the audio-visual relationship is skewed in favor of the visual. The investigation is motivated by insights derived from artistic practice. It outlines psychophysical boundaries with the aid of existing cognition/attention research, and it describes three principles for the creation of audio-visual instruments. As an example, the article describes how they are explored in a specific audio-visual instrument, combining an acoustic zither and modified software from audio processing and video-game technologies. This instrument addresses the three principles while exploring the disparities between an acoustic and a digital output.