Sixteen Years of Sound & Music Computing
A Look Into the History and Trends of the Conference and Community

D.A. Mauro, F. Avanzini, A. Baratè, L.A. Ludovico, S. Ntalampiras, S. Dimitrov, S. Serafin

Papers

Sound and Music Computing Conference 2008 (ed. 5)

Dates: July 31 – August 3, 2008
Place: Berlin, Germany
Proceedings info: Proceedings of the 5th Sound and Music Computing Conference (SMC2008), ISBN 978-3-7983-2094-9


2008.1
A Dynamic Spatial Locator UGen for CLM
Lopez-lezcano, Fernando   Center for Computer Research in Music and Acoustics (CCRMA), Stanford University; Stanford, United States

Abstract
Dlocsig is a dynamic spatial locator unit generator written for the Common Lisp Music (CLM) sound synthesis and processing language. Dlocsig was first created in 1992 as a four-channel 2D dynamic locator, and since then it has evolved into a full 3D system for an arbitrary number of speakers that can render moving sound objects through amplitude panning (VBAP) or Ambisonics. This paper describes the motivations for the project, its evolution over time, and the details of its software implementation and user interface.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849501
Zenodo URL: https://zenodo.org/record/849501


2008.2
An Algorithm for Real-time Harmonic Microtuning
Sabat, Marc   Audiokommunikation, Technische Universität Berlin (TU Berlin); Berlin, Germany

Abstract
Subtle inflections of pitch, often performed intuitively by musicians, create a harmonically sensitive expressive intonation. As each new pitch is added to a simultaneously sounding structure, very small variations in its tuning have a substantial impact on overall harmonic comprehensibility. In this project, James Tenney’s multidimensional lattice model of intervals (‘harmonic space’) and a related measure of relative consonance (‘harmonic distance’) are used to evaluate and optimize the clarity of sound combinations. A set of tuneable intervals, expressed as whole-number frequency ratios, forms the basis for real-time harmonic microtuning. An algorithm, which references this set, allows a computer music instrument to adjust the intonation of input frequencies based on previously sounded frequencies and several user-specified parameters (initial reference pitch, tolerance range, pitch-class scaling, prime limit). Various applications of the algorithm are envisioned: to find relationships within components of a spectral analysis, to dynamically adjust a computer instrument to other musicians in real time, to research the tuneability of complex microtonal pitch structures. More generally, it furthers research into the processes underlying harmonic perception, and how these may lead to musical applications.
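The selection step the abstract describes can be sketched with Tenney's harmonic distance, HD(a/b) = log2(a·b) for a ratio in lowest terms. The ratio search bounds, tolerance default and retuning policy below are illustrative assumptions, not the paper's algorithm:

```python
from fractions import Fraction
from math import log2

def harmonic_distance(ratio: Fraction) -> float:
    """Tenney's harmonic distance for a ratio a/b in lowest terms: log2(a*b)."""
    return log2(ratio.numerator * ratio.denominator)

def retune(freq: float, ref: float, tolerance_cents: float = 30.0,
           max_term: int = 16) -> float:
    """Snap freq to the whole-number ratio of ref (within the tolerance range)
    that minimizes harmonic distance; leave freq unchanged if none qualifies."""
    target = freq / ref
    best, best_hd = None, float("inf")
    for a in range(1, max_term + 1):
        for b in range(1, max_term + 1):
            r = Fraction(a, b)  # reduced to lowest terms automatically
            cents_off = abs(1200 * log2(float(r) / target))
            if cents_off <= tolerance_cents:
                hd = harmonic_distance(r)
                if hd < best_hd:
                    best, best_hd = r, hd
    return ref * float(best) if best is not None else freq
```

For example, a fifth played 5 Hz flat above A440 would be pulled onto the just ratio 3/2 (660 Hz), since 3/2 has the lowest harmonic distance of any ratio within the tolerance range.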

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849505
Zenodo URL: https://zenodo.org/record/849505


2008.3
An A-Life Approach to Machine Learning of Musical Worldviews for Improvisation Systems
Gimenes, Marcelo   Interdisciplinary Centre for Computer Music Research (ICCMR), School of Computing, Communications and Electronics, University of Plymouth; Plymouth, United Kingdom
Miranda, Eduardo Reck   Interdisciplinary Centre for Computer Music Research (ICCMR), School of Computing, Communications and Electronics, University of Plymouth; Plymouth, United Kingdom

Abstract
In this paper we introduce Interactive Musical Environments (iMe), an interactive intelligent music system based on software agents that is capable of learning how to generate music autonomously and in real-time. iMe belongs to a new paradigm of interactive musical systems that we call “ontomemetical musical systems” for which a series of conditions are proposed.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849503
Zenodo URL: https://zenodo.org/record/849503


2008.4
Applications of Typomorphology in Acute; Scoring the Ideal and Its Mirror
Climent, Ricardo   Novars Research Centre, The University of Manchester; Manchester, United Kingdom

Abstract
Acute is a music score, composed by the author of this paper, for Percussion Quartet and Fixed Media (tape) using ‘Searched Objects’ as instruments. This paper examines how this piece recontextualises existing research in Typology and Morphology of Sound Objects to produce a unique music mixed-media score, for the exploration of the sonic possibilities when confronting the ‘ideal’ (sonic object to be found) with ‘the reconstruction of itself’ through time (when performers attempt to recreate the given sounds) using processes of Spectro-gestural mimesis.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849507
Zenodo URL: https://zenodo.org/record/849507


2008.5
Asymmetrical Envelope Shapes in Sound Spatialization
Canavese, Francesco   Tempo Reale; Firenze, Italy
Giomi, Francesco   Tempo Reale; Firenze, Italy
Meacci, Damiano   Tempo Reale; Firenze, Italy
Schwoon, Kilian   University of Arts Bremen (HFK); Bremen, Germany

Abstract
Amplitude-based sound spatialization without any further signal processing is still today a valid musical choice in certain contexts. This paper emphasizes the importance of the resulting envelope shapes on the single loudspeakers in common listening situations such as concert halls, where most listeners will find themselves in off-centre positions, as well as in other contexts such as sound installations. Various standard spatialization techniques are compared in this regard and a refinement is proposed, which results in asymmetrical envelope shapes. This method combines a strong sense of localization with a natural sense of continuity. Some examples of practical application carried out by Tempo Reale are also discussed.
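As a reference point for the standard techniques the paper compares against, a minimal equal-power pan between two adjacent loudspeakers looks like this (the paper's asymmetrical refinement is not reproduced here):

```python
import math

def equal_power_gains(x: float) -> tuple[float, float]:
    """Standard equal-power pan law between two adjacent speakers.
    x in [0, 1]: 0 = fully speaker A, 1 = fully speaker B.
    Squared gains always sum to 1, keeping perceived loudness constant."""
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)
```

The symmetry of this law (both speakers follow the same cosine/sine curve) is exactly what an off-centre listener perceives as an envelope artefact, which motivates the asymmetrical shapes the paper proposes.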

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849511
Zenodo URL: https://zenodo.org/record/849511


2008.6
Breeding Rhythms With Artificial Life
Martins, João M.   Interdisciplinary Centre for Computer Music Research (ICCMR), School of Computing, Communications and Electronics, University of Plymouth; Plymouth, United Kingdom
Miranda, Eduardo Reck   Interdisciplinary Centre for Computer Music Research (ICCMR), School of Computing, Communications and Electronics, University of Plymouth; Plymouth, United Kingdom

Abstract
We are interested in developing intelligent systems for music composition. In this paper we focus on our research into generative rhythms. We have adopted an Artificial Life (A-Life) approach to intelligent systems design in order to develop generative algorithms inspired by the notion of music as social phenomena that emerge from the overall behaviour of interacting autonomous software agents. Whereas most A-Life approaches to implementing computer music systems are chiefly based on algorithms inspired by biological evolution (for example, Genetic Algorithms [2]), this work is based on cultural development (for example, Imitation Games [12, 13]). We are developing a number of such “cultural” algorithms, one of which is introduced in this paper: the popularity algorithm. We are also developing a number of analysis methods to study the behaviour of the agents. In our experiments with the popularity algorithm we observed the emergence of coherent repertoires of rhythms across the agents in the society.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849513
Zenodo URL: https://zenodo.org/record/849513


2008.7
CompScheme: A Language for Composition and Stochastic Synthesis
Döbereiner, Luc   Institute of Sonology, Royal Conservatory of The Hague; The Hague, Netherlands

Abstract
In this paper, I present a programming language for algorithmic composition and stochastic sound synthesis called CompScheme. The primary value-generating mechanism in the program is streams, which allow the user to concisely describe networks of dynamic data. Secondly, I present CompScheme’s event model, which provides a framework for building abstract structural musical units, exemplified by showing CompScheme’s functionalities to control the SuperCollider server in real-time. Thirdly, I discuss CompScheme’s stochastic synthesis functionality, an attempt to generalize from I. Xenakis’s dynamic stochastic synthesis and G.M. Koenig’s SSP.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849515
Zenodo URL: https://zenodo.org/record/849515


2008.8
Concatenative Synthesis of Expressive Saxophone Performance
Kersten, Stefan   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain
Ramirez, Rafael   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain

Abstract
In this paper we present a systematic approach to applying expressive performance models to non-expressive score transcriptions and synthesizing the results by means of concatenative synthesis. Expressive performance models are built from score transcriptions and recorded performances by means of decision tree rule induction, and those models are used both to transform inexpressive input scores and to guide the concatenative synthesizer unit selection.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849517
Zenodo URL: https://zenodo.org/record/849517


2008.9
Designing and Synthesizing Delay-based Digital Audio Effects Using the CORDIS ANIMA Physical Modeling Formalism
Kontogeorgakopoulos, Alexandros   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France
Cadoz, Claude   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France

Abstract
Throughout this paper, several CORDIS-ANIMA physical models will be presented to offer an alternative synthesis of some classical delay-based digital audio effects: a delay model, two comb filter models, three flanger models and a sound spatialization model. Several of these realizations support a control scheme based on the “Physical Instrumental Interaction”. Additionally, they provide several sonic characteristics which do not appear in the original effects. In particular, for certain parameter values the flanger model may yield a new digital audio effect between flanging and filtering.
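For reference, the classical signal-processing form of the comb filter that the paper re-derives physically is a one-tap feedforward delay, y[n] = x[n] + g·x[n - D]. This sketch shows the conventional formulation only; the paper's contribution is building the equivalent from CORDIS-ANIMA mass-interaction networks:

```python
def feedforward_comb(x, delay, g):
    """Classical feedforward comb: y[n] = x[n] + g * x[n - delay].
    x: list of samples; delay: integer sample delay; g: tap gain."""
    y = []
    for n, sample in enumerate(x):
        delayed = x[n - delay] if n >= delay else 0.0
        y.append(sample + g * delayed)
    return y
```

Sweeping `delay` over time turns this comb into a flanger, which is the family of effects the paper's physical models target.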

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849519
Zenodo URL: https://zenodo.org/record/849519


2008.10
Distance Encoding in Ambisonics Using Three Angular Coordinates
Penha, Rui   INET-md (Instituto de Etnomusicologia - Centro de Estudos em Música e Dança), University of Aveiro; Aveiro, Portugal

Abstract
In this paper, the author describes a system for encoding distance in an Ambisonics soundfield. This system allows the application of distance-perception cues to be postponed to the decoding stage, where they can be adapted to the characteristics of a specific space and sound system. Additionally, this system can be used creatively, opening some new paths for the use of space as a compositional factor.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849521
Zenodo URL: https://zenodo.org/record/849521


2008.11
Echi tra le Volte, a Sound Design Project for Churches
Taroppi, Andrea   Conservatorio di musica “Giuseppe Verdi” di Como; Como, Italy

Abstract
This work is about the relation between music and architecture. In particular, we are interested in the concept of space as the place where music and architecture meet. The study of this topic offers the starting point for the development of Echi tra le Volte, a music installation for churches, in which sounds arise from the natural reverb of the place, excited by sinusoidal impulses whose pitches are generated by a genetic algorithm.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849523
Zenodo URL: https://zenodo.org/record/849523


2008.12
Ephemeron: Control Over Self-organised Music
Kollias, Phivos-Angelos   CICM, University Paris VIII; Paris, France

Abstract
The present paper discusses an alternative approach to electroacoustic composition based on principles of the interdisciplinary scientific field of Systemics. In this approach, the setting of the electronic device is prepared in such a way that it is able to organise its own function, according to the conditions of the sonic environment. We discuss the approaches of Xenakis and of Di Scipio in relation to Systemics, demonstrating the applications in their compositional models. In my critique of Di Scipio’s approach, I argue that the composer gives away a major part of his control over the work and that the notion of macro-structural form is therefore abandoned. Based on my work Ephemeron, I show that it is possible to conduct emerging situations by applying the systemic principle of ‘equifinality’. Moreover, I argue that it is possible to acquire control over these situations and their properties over time so as to develop formal structure.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849525
Zenodo URL: https://zenodo.org/record/849525


2008.13
Expressive Performance in the Human Tenor Voice
Marinescu, Maria Cristina   Thomas J. Watson Research Center, IBM; New York, United States
Ramirez, Rafael   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain

Abstract
This paper presents preliminary results on expressive performance in the human tenor voice. This work investigates how professional opera singers manipulate sound properties such as timing, amplitude, and pitch in order to produce expressive performances. We also consider the contribution of features of prosody in the artistic delivery of an operatic aria. Our approach is based on applying machine learning to extract patterns of expressive singing from performances by Josep Carreras. This is a step towards recognizing performers by their singing style, capturing some of the aspects which make two performances of the same piece sound different, and understanding whether there exists a correlation between the occurrences correctly covered by a pattern and specific emotional attributes.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849527
Zenodo URL: https://zenodo.org/record/849527


2008.14
Extending Voice-driven Synthesis to Audio Mosaicing
Janer, Jordi   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain
de Boer, Maarten   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain

Abstract
This paper presents a system for controlling audio mosaicing with a voice signal, which can be interpreted as a further step in voice-driven sound synthesis. Compared to voice-driven instrumental synthesis, it increases the variety in the synthesized timbre. Also, it provides a more direct interface for audio mosaicing applications, where the performer’s voice controls rhythmic, tonal and timbre properties of the output sound. In a first step, the voice signal is segmented into syllables, extracting a set of acoustic features for each segment. In the concatenative synthesis process, the voice acoustic features (target) are used to retrieve the most similar segment from the corpus of audio sources. We implemented a system working in pseudo-realtime, which analyzes voice input and sends control messages to the concatenative synthesis module. Additionally, this work raises questions to be further explored about mapping the input voice timbre space onto the audio sources timbre space.
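The retrieval step the abstract describes (target features select the most similar corpus segment) reduces to a nearest-neighbour lookup. This is a minimal sketch; the actual feature set and distance measure are not specified in the abstract:

```python
import math

def nearest_segment(target, corpus):
    """Return the index of the corpus segment whose feature vector is
    closest (Euclidean distance) to the target voice-segment features.
    target: feature vector; corpus: list of feature vectors."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(corpus)), key=lambda i: dist(target, corpus[i]))
```

In a real unit-selection synthesizer the distance would typically also weight concatenation cost between consecutive segments, not just target similarity.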

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849529
Zenodo URL: https://zenodo.org/record/849529


2008.15
Granular Sound Spatialization Using Dictionary-based Methods
McLeran, Aaron   Media Arts and Technology Program, University of California, Santa Barbara (UCSB); Santa Barbara, United States
Roads, Curtis   Media Arts and Technology Program, University of California, Santa Barbara (UCSB); Santa Barbara, United States
Sturm, Bob Luis   Department of Electrical and Computer Engineering, University of California, Santa Barbara (UCSB); Santa Barbara, United States
Shynk, John J.   Department of Electrical and Computer Engineering, University of California, Santa Barbara (UCSB); Santa Barbara, United States

Abstract
We present methods for spatializing sound using representations created by dictionary-based methods (DBMs). DBMs have been explored primarily in applications for signal processing and communications, but they can also be viewed as the analytical counterpart to granular synthesis. A DBM determines how to synthesize a given sound from any collection of grains, called atoms, specified in a dictionary. Such a granular representation can then be used to perform spatialization of sound in complex ways. To facilitate experimentation with this technique, we have created an application for providing real-time synthesis, visualization, and control using representations found via DBMs. After providing a brief overview of DBMs, we present algorithms for spatializing granular representations, as well as our application program Scatter, and discuss future work.
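A common dictionary-based method is greedy matching pursuit: repeatedly pick the dictionary atom most correlated with the residual and subtract its contribution. This toy sketch (assuming unit-norm atoms and a plain NumPy dictionary matrix) illustrates the decomposition whose per-atom terms can then be spatialized individually; it is not the Scatter implementation:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iterations=10):
    """Greedy matching pursuit decomposition of a signal.
    dictionary: (n_atoms, length) array of unit-norm atoms.
    Returns a list of (atom index, coefficient) pairs and the residual."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_iterations):
        corr = dictionary @ residual          # correlation with every atom
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        c = float(corr[k])
        residual -= c * dictionary[k]         # remove its contribution
        decomposition.append((k, c))
    return decomposition, residual
```

Each `(atom, coefficient)` pair behaves like a grain with a known time-frequency location, which is what makes per-grain spatial assignment possible.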

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849531
Zenodo URL: https://zenodo.org/record/849531


2008.16
Melody Characterization by a Genetic Fuzzy System
Ponce de León, Pedro José   Department of Informatic Languages and Systems, Universidad de Alicante; Alicante, Spain
Rizo, David   Department of Informatic Languages and Systems, Universidad de Alicante; Alicante, Spain
Ramirez, Rafael   Department of Informatic Languages and Systems, Universidad de Alicante; Alicante, Spain
Iñesta, José Manuel   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain

Abstract
We present preliminary work on automatic human-readable melody characterization. In order to obtain such a characterization, we (1) extract a set of statistical descriptors from the tracks in a dataset of MIDI files, (2) apply a rule induction algorithm to obtain a set of (crisp) classification rules for melody track identification, and (3) automatically transform the crisp rules into fuzzy rules by applying a genetic algorithm to generate the membership functions for the rule attributes. Some results are presented and discussed.
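Step (3) above turns a crisp threshold into a graded one via membership functions; triangular shapes are a common choice for GA-tuned fuzzy systems. The function below is a generic illustration of such a membership function, not the paper's evolved parameters:

```python
def triangular_mf(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at peak b,
    falling back to 0 at c. A crisp rule attribute like 'pitch > b'
    becomes a graded degree of membership in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

A genetic algorithm would then search over the breakpoints (a, b, c) of each attribute's membership functions to maximize classification accuracy on the MIDI track dataset.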

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849533
Zenodo URL: https://zenodo.org/record/849533


2008.17
Memory Space
Emmerson, Simon   Music, Technology and Innovation Research Centre (MTI), De Montfort University Leicester; Leicester, United Kingdom

Abstract
Memory may be mapped metaphorically onto space, as in the mediaeval and renaissance Memory Theatre (see Frances Yates, Art of Memory, 1966/1992 [Reference 4]). But we now have the power to project this literally in sound in sound installation systems such as the Klangdom. In Resonances (8 channel acousmatic work, commissioned by the IMEB (Bourges) in 2007), I explore my memories of the modernist repertoire (1820-1940) using small timbral ‘instants’, extended, layered and spatialised. Some juxta- and superpositions are logical, others unlikely – but that is the joy of memory and creativity. But memories also fade and die ... This paper examines this work, and how the memory and spatial relationships are articulated through the material. It also presents plans for a more elaborate work to be realised in 2008-2009. In this the fixed nature of the previous work will give way to an ‘evolving’ acousmatic piece which changes at each performance as new spatial layers are added, others fade. The paper will be illustrated with music examples.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849535
Zenodo URL: https://zenodo.org/record/849535


2008.18
Mode-dependent Differences in Chord Classification Under an Original Computational Method of Tonal Structure Analysis
Majchrzak, Miroslaw   Institute of Art, Polish Academy of Sciences; Warsaw, Poland

Abstract
Based on an original computational analytic method (Majchrzak 2005, 2007), the present work aims at: 1) showing the differences between the major key and the minor (harmonic) key in the classification of chords, as an aspect of importance for interpreting a piece’s tonal structure diagram; 2) drawing attention to the subordination of the minor key versus the major key in the chord classification, using the same algorithm. The relations between chords appearing in the major and minor (harmonic) keys are shown by comparing: 1) third-based chords; 2) degrees in the C major and A minor keys on which the same diatonic chords appear.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849537
Zenodo URL: https://zenodo.org/record/849537


2008.19
Modeling Affective Content of Music: A Knowledge Base Approach
Oliveira, António Pedro   Centre for Informatics and Systems, University of Coimbra; Coimbra, Portugal
Cardoso, Amílcar   Centre for Informatics and Systems, University of Coimbra; Coimbra, Portugal

Abstract
The work described in this paper is part of a project that aims to implement and assess a computer system that can control the affective content of the music output, in such a way that it may express an intended emotion. In this system, music selection and transformation are done with the help of a knowledge base with weighted mappings between continuous affective dimensions (valence and arousal) and music features (e.g., rhythm and melody) grounded on results from works of Music Psychology. The system starts by making a segmentation of MIDI music to obtain pieces that may express only one kind of emotion. Then, feature extraction algorithms are applied to label these pieces with music metadata (e.g., rhythm and melody). The mappings of the knowledge base are used to label music with affective metadata. This paper focuses on the refinement of the knowledge base (subsets of features and their weights) according to the prediction results of listeners’ affective answers.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849539
Zenodo URL: https://zenodo.org/record/849539


2008.20
Modeling Moods in Violin Performances
Pérez Carrillo, Alfonso   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain
Ramirez, Rafael   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain
Kersten, Stefan   Music Technology Group (MTG), Pompeu Fabra University (UPF); Barcelona, Spain

Abstract
In this paper we present a method to model and compare expressivity for different moods in violin performances. Models are based on analysis of audio and bowing control gestures of real performances, and they predict expressive scores from non-expressive ones. Audio and control data are captured by means of a violin pickup and a 3D motion tracking system and aligned with the performed score. We make use of machine learning techniques in order to extract expressivity rules from score–performance deviations. The induced rules form a generative model that can transform an inexpressive score into an expressive one. The paper is structured as follows: first, the procedure of performance data acquisition is introduced, followed by the automatic performance–score alignment method. Then the process of model induction is described, and we conclude with an evaluation based on listening tests using a sample-based concatenative synthesizer.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849541
Zenodo URL: https://zenodo.org/record/849541


2008.21
NOVARS Research Centre, University of Manchester, UK. Studio Report
Climent, Ricardo   Novars Research Centre, The University of Manchester; Manchester, United Kingdom
Berezan, David   Novars Research Centre, The University of Manchester; Manchester, United Kingdom
Davison, Andrew   Novars Research Centre, The University of Manchester; Manchester, United Kingdom

Abstract
NOVARS is a new Research Centre started in March 2007 with specialisms in the areas of Electroacoustic Composition, Performance and Sound-Art. The Centre is capitalising on the success of Music at the University of Manchester by expanding its existing research programme in Electroacoustic Composition through a new £2.2 million investment in a cutting-edge new studio infrastructure. This studio report covers key aspects of the architectural and acoustic design of the studios, their functionality and existing technology.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849543
Zenodo URL: https://zenodo.org/record/849543


2008.22
Retrospat: A Perception-based System for Semi-automatic Diffusion of Acousmatic Music
Mouba, Joan   Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université Bordeaux-I; Bordeaux, France
Marchand, Sylvain   Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université Bordeaux-I; Bordeaux, France
Mansencal, Boris   Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université Bordeaux-I; Bordeaux, France
Rivet, Jean-Michel   Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université Bordeaux-I; Bordeaux, France

Abstract
We present the RetroSpat system for the semi-automatic diffusion of acousmatic music. This system is intended to be a spatializer with perceptive feedback. More precisely, RetroSpat can guess the positions of physical sound sources (e.g. loudspeakers) from binaural inputs, and can then output multichannel signals to the loudspeakers while controlling the spatial location of virtual sound sources. Together with a realistic binaural spatialization technique taking into account both the azimuth and the distance, we propose a precise localization method which estimates the azimuth from the interaural cues and the distance from the brightness. This localization can be used by the system to adapt to the room acoustics and to the loudspeaker configuration. We propose a simplified sinusoidal model for the interaural cues, the model parameters being derived from the CIPIC HRTF database. We extend the binaural spatialization to a multi-source and multi-loudspeaker spatialization system based on a static adaptation matrix. The methods are currently implemented in real-time free software. Musical experiments are conducted at the SCRIME, Bordeaux.
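A "simplified sinusoidal model for the interaural cues" can be illustrated by assuming the interaural level difference varies as ILD(az) ≈ ILD_max · sin(az), and inverting it to recover azimuth. Both the model shape and the `ild_max_db` constant here are illustrative assumptions, not the CIPIC-fitted parameters the paper derives:

```python
import math

def azimuth_from_ild(ild_db: float, ild_max_db: float = 20.0) -> float:
    """Invert an assumed sinusoidal ILD model ILD(az) = ild_max * sin(az).
    ild_db: measured interaural level difference in dB (positive = right ear
    louder). Returns the estimated azimuth in degrees, clamped to [-90, 90]."""
    s = max(-1.0, min(1.0, ild_db / ild_max_db))
    return math.degrees(math.asin(s))
```

Under this model a 10 dB level difference (with a 20 dB maximum) maps to a 30° azimuth, since asin(0.5) = 30°.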

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849545
Zenodo URL: https://zenodo.org/record/849545


2008.23
Sound as Multiplicity: Spaces and Representations in Computer Music Composition
Fuentes, Arturo   CICM, University Paris VIII; Paris, France

Abstract
This text deals with the subject of sonic spaces within the field of computer music composition. Highlighted by the notion of multiplicity, sound will be analysed as a multi-representational space. This central idea will lead us to consider some proposals of the hermeneutical criticism of representation, where we observe the emergence of sonic spaces from an action-perception perspective: our musical significations appear at the very moment we execute a “local action” in the composition process. Multiplicity is produced by singularities, just as singularity is conceived as a multiple entity: depending on our operatory procedure in music composition we shall consider a sound as One or as Multiple. In music composition, human-computer interaction moves towards this problematic.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849547
Zenodo URL: https://zenodo.org/record/849547


2008.24
Sound in Cyberspace: Exploring Material Technoculture
Polymeropoulou, Marilou   Department of Music Studies, National and Kapodistrian University of Athens; Athens, Greece

Abstract
Cyberspace is nowadays a social network of people who produce, reproduce and consume technology culture, or, as it is better expressed, technoculture. In this vast environment, sound is represented as transmittable digital information. However, what is the function of sound and why does it matter? In the following pages, I shall present sound as the materiality of technoculture in cyberspace, or, the cultural meanings of sound beyond natural space.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849549
Zenodo URL: https://zenodo.org/record/849549


2008.25
Sound Spatialisation, Free Improvisation and Ambiguity
Mooney, James   Culture Lab, Newcastle University; Newcastle upon Tyne, United Kingdom
Belly, Paul   International Center for Music Studies, Newcastle University; Newcastle upon Tyne, United Kingdom
Parkinson, Adam   Culture Lab, Newcastle University; Newcastle upon Tyne, United Kingdom

Abstract
This paper documents emergent practice-led research that brings together live sound spatialisation and free improvisation with digital tools in a performance context. An experimental performance is described in which two musicians – a turntablist and a laptop performer – improvised, with the results being spatialised via multiple loudspeakers by a third performer using the Resound spatialisation system. This paper focuses on the spatial element of the performance and its implications, its technical realisation and some aesthetic observations centring on the notion of ‘ambiguity’ in free improvisation. An analysis raises numerous research questions, which feed into a discussion of subsequent, current and future work.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849551
Zenodo URL: https://zenodo.org/record/849551


2008.26
Space as an Evolution Strategy. Sketch of a Generative Ecosystemic Structure of Sound
Scamarcio, Massimo   School of Electronic Music, Conservatorio di Musica San Pietro a Majella; Napoli, Italy

Abstract
This paper discusses a generative, systemic approach to sound processing, touching on topics like genetics, evolutionary programming, eco-systemic interaction of sound in space, and feedback, and putting them in the context of the author’s Syntáxis (Acoustic Generative Sound Processing System, part 1): a sound installation for stereophonic speaker system and microphone. The main implications of the overall structure of the installation are analysed, focusing on its dynamics and its relationships with space. The paper also illustrates the main structure of the algorithm regulating the installation’s behaviour, along with brief references to the software platform used to develop it (Max/MSP 5).

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849553
Zenodo URL: https://zenodo.org/record/849553


2008.27
Space Resonating Through Sound
Iazzetta, Fernando   Department of Music (CMU), University of São Paulo (USP); São Paulo, Brazil
Campesato, Lílian   Department of Music (CMU), University of São Paulo (USP); São Paulo, Brazil

Abstract
In this paper we will analyze how the conception of space in music is expanded by the repertoire of sound art, moving from the idea of space as a delimited area with physical and acoustical characteristics, to the notion of site in which representational aspects of a place become expressive elements of a work of art.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849555
Zenodo URL: https://zenodo.org/record/849555


2008.28
Spatial Orchestration
Lyon, Eric   Sonic Arts Research Centre (SARC), Queen's University Belfast; Belfast, United Kingdom

Abstract
The emergence of multiple sites for the performance of multi-channel spatial music motivates a consideration of strategies for creating spatial music, and for making necessary adjustments to existing spatial works for performances in spaces with significantly different acoustic properties and speaker placement. Spatial orchestration is proposed as a conceptual framework for addressing these issues.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849559
Zenodo URL: https://zenodo.org/record/849559


2008.29
Speaker-Herd: A Multi-channel Loudspeaker Project for Miscellaneous Spaces, Loudspeaker Architectures and Compositional Approaches
Bierlein, Frank   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Farchmin, Elmar   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Fütterer, Lukas   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Kerschkewicz, Anja   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Loscher, David   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Modler, Paul   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Möhrmann, Thorsten   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Myatt, Tony   Department of Music, University of York; York, United Kingdom
Rafinski, Adam   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Räpple, René   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Rosinski, Stefan   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Schwarz, Lorenz   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Unger, Amos   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Zielke, Markus   Department of Media Art / Sound, Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany

Abstract
Strong interest in spatial sound has existed at all times and on various levels of aesthetic music and sound production and perception. In recent years the availability of high-quality loudspeakers and digital multi-channel audio systems has paved the way to incorporate spatial acoustics into musical composition. In this paper we describe a project which is aimed at providing flexible possibilities to experiment with miscellaneous loudspeaker architectures and multi-channel distribution systems. The system allows the use of up to 96 audio channels in real time, which can be fed to loudspeakers set up according to varying spatial designs. As examples, a number of realized architectures and compositions will be described.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849561
Zenodo URL: https://zenodo.org/record/849561


2008.30
Textural Composition and Its Space
Hagan, Kerry   University of Limerick; Limerick, Ireland

Abstract
The aesthetic implications of real-time stochastic sound mass composition call for a new approach to musical material and spatialization. One possibility is found in the fertile ground of musical texture. Texture exists between notions of the singular sound object and the plurality of a sound mass or lo-fi soundscape. Textural composition is the practice of elevating and exploring the intermediary position between the single and the plural while denying other musical attributes. The consequences of this aesthetic principle require a similarly intermediary spatialization conducive to textural listening. This spatialization exists between point-source diffusion and mimetic spatialization. Ultimately, the ramifications of textural composition affect both the space in the sound and the sound in space. This paper introduces the intermediary aesthetics of textural composition focusing on its spaces. It then describes an implementation in the author’s work, real-time tape music III.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849563
Zenodo URL: https://zenodo.org/record/849563


2008.31
Towards a Decodification of the Graphical Scores of Anestis Logothetis (1921-1994). The Graphical Space of Odysee (1963)
Baveli, Maria-Dimitra   Department of Music, University of Athens; Athens, Greece / Hochschule für Musik Franz Liszt Weimar; Weimar, Germany
Georgaki, Anastasia   Department of Music, University of Athens; Athens, Greece

Abstract
In this presentation we examine, via de-codification of graphic scores, the work of the avant-garde composer and pre-media artist Anestis Logothetis, who is considered one of the most prominent figures in graphic musical notation. In the primary stage of our research, we have studied these graphical scores in order to make a first taxonomy of his graphical language and to present the main syntax of his graphic notation, aiming at a future sonic representation of his scores. We also present an example of graphical space through his ballet Odysee (1963).

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849565
Zenodo URL: https://zenodo.org/record/849565


2008.32
Traditional and Digital Music Instruments: A Relationship Based on an Interactive Model
Ferreira-Lopes, Paulo   Research Centre for Science and Technology of the Arts (CITAR), Catholic University of Portugal; Porto, Portugal / Institute for Musics and Acoustics (IMA), Center for Art and Media Karlsruhe (ZKM); Karlsruhe, Germany

Abstract
In the present work, some aspects of the influence of digital music instruments on composition methods are observed. Some consequences of the relationship between traditional and digital music instruments result in a triangular interactive process. As an analytical approach to understanding this relationship and the association process between traditional and digital music instruments, a typology of interaction for musical performance based on this instrumental configuration is proposed. The deduced classes are based upon the observation and systematization of my work as a composer. The proposed model aims to contribute towards a unifying terminology and a systematization of some of the major questions that arise from the coexistence of two different paradigms (traditional and digital music instruments) in the universe of live electronic music.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849567
Zenodo URL: https://zenodo.org/record/849567


2008.33
Vocabulary of Space Perception in Electroacoustic Musics Composed or Spatialised in Pentaphony
Merlier, Bertrand   Department of Music, University Lumière Lyon 2; Lyon, France

Abstract
This paper begins with a brief introduction of the GETEME (Groupe d’Étude sur l’Espace dans les Musiques Électroacoustiques - Working Group about Space in Electroacoustic Musics), followed by an overview of its past, present and future activities. A first major achievement was the completion and publication of the “vocabulary of space electroacoustic musics…”, coupled with the realization of a taxonomy of space. Beyond this collection and clarification of words in general use, it appears necessary to begin to connect words and sound. The goal of our present research is to clarify or elaborate a vocabulary (a set of specialized words) for describing space perception in electroacoustic (multiphonic) musics. The issue is delicate as it deals with psychoacoustics… as well as creators’ or listeners’ imagination. In order to conduct this study, it was necessary to develop a battery of tests and listening procedures for collecting words describing listening space, and then to count and sort those words. The sound descriptions quickly overlap: the same words coincide with the same listening situations. A consensus seemed to emerge, revealing 5 types of spatiality and 2 types of mobility, as well as a variety of adjectives to describe or characterize spatiality or mobility.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849569
Zenodo URL: https://zenodo.org/record/849569


2008.34
Zsa.Descriptors: A Library for Real-time Descriptors Analysis
Malt, Mikhail   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Jourdan, Emmanuel   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
In the past few years, several strategies to characterize sound signals have been suggested. The main objective of these strategies was to describe the sound [1]. However, it was only with the creation of MPEG-7, a new standard format for indexing and transferring audio data, that the desire to define semantic content descriptors for audio data came about [2, p. 52]. The widely known document written by Geoffroy Peeters [1] is an example where, even if the announced goal is not to carry out a systematic taxonomy of all the functions intended to describe sound, it does in fact systematize the presentation of various descriptors.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849571
Zenodo URL: https://zenodo.org/record/849571
