Sixteen Years of Sound & Music Computing
A Look Into the History and Trends of the Conference and Community

D.A. Mauro, F. Avanzini, A. Baratè, L.A. Ludovico, S. Ntalampiras, S. Dimitrov, S. Serafin

Papers

Sound and Music Computing Conference 2007 (ed. 4)

Dates: from July 11 to July 13, 2007
Place: Lefkada, Greece
Proceedings info: Proceedings of the 4th Sound and Music Computing Conference (SMC07), ISBN 978-960-6608-75-9


2007.1
A Component-based Framework for the Development of Virtual Musical Instruments Based on Physical Modeling
Tzevelekos, Panagiotis   Department of Informatics and Telecommunications, National and Kapodistrian University of Athens; Athens, Greece
Perperis, Thanassis   Department of Informatics and Telecommunications, National and Kapodistrian University of Athens; Athens, Greece
Kyritsi, Varvara   Department of Informatics and Telecommunications, National and Kapodistrian University of Athens; Athens, Greece
Kouroupetroglou, Georgios   Department of Informatics and Telecommunications, National and Kapodistrian University of Athens; Athens, Greece

Abstract
We propose a framework for the design and development of component-based woodwind virtual instruments. Each functional part of the instrument is represented with an independent component, and can be created with different approaches, by unfamiliar constructors. Using the aforementioned framework, Virtual Zournas is implemented. The user can experiment with the instrument, changing its physical properties. Instrument control is performed via MIDI files or external MIDI devices.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849377
Zenodo URL: https://zenodo.org/record/849377


2007.2
Advanced Sound Manipulation in Interactive Multimedia Environments
Deliyiannis, Yannis   Department of Audiovisual Arts, Ionian University; Corfu, Greece
Floros, Andreas   Department of Audiovisual Arts, Ionian University; Corfu, Greece
Tsakostas, Christos   Holistiks Engineering Systems; Athens, Greece

Abstract
Multimedia standards, frameworks and models are already well established for generalized presentations. However, the situation is much less advanced for systems that require the combination of advanced sound-oriented features and capabilities similar to those used to interact with highly demanding visual content. In this respect, current commercial presentation applications have been found lacking, often revealing multifaceted presentation limitations, including lack of sound control and delivery. They rarely offer cross-platform compatibility, provide limited programmability, are restrictive on data interaction, and only support static WWW-based delivery. To overcome the above-stated deficiencies, a number of innovations are proposed, including the presentation of a combined multimedia framework, supported by a model that describes content connectivity and stream synchronization, enabling interaction for all audiovisual data types included in the system.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849381
Zenodo URL: https://zenodo.org/record/849381


2007.3
A Dynamic Interface for the Audio-visual Reconstruction of Soundscape, Based on the Mapping of its Properties
Stratoudakis, Constantinos   Department of Music Studies Arts, Ionian University; Corfu, Greece
Papadimitriou, Kimon   Department of Rural and Surveying Engineering, Aristotle University of Thessaloniki; Thessaloniki, Greece

Abstract
The application of hi-end technologies in multimedia platforms provides the dynamic setting of the parameters for the reproduction of audiovisual stimulus from natural environments (virtualization through real-time interaction). Additionally, the integration of cartographic products that describe quantitative and qualitative spatial properties expands multimedia’s capabilities for representations with geographical reference. The proposed interface combines data that are used for the mapping of a sonic environment, as well as sonic elements derived from field recordings and photographic material, in order to reconstruct in-vitro the soundscape of a protected area around Lake Antinioti, at northern Corfu, Greece.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849425
Zenodo URL: https://zenodo.org/record/849425


2007.4
A Generative Grammar Approach to Diatonic Harmonic Structure
Rohrmeier, Martin   Centre for Music and Science, Faculty of Music, University of Cambridge; Cambridge, United Kingdom

Abstract
This paper aims to give a hierarchical, generative account of diatonic harmony progressions and proposes a generative phrase-structure grammar. The formalism accounts for structural properties at the key, functional, scale and surface levels. Being related to linguistic approaches in generative syntax and to the hierarchical account of tonality in the generative theory of tonal music (GTTM) [1], cadence-based harmony contexts and their elaborations are formalised. This approach covers cases of modulation, tonicisation and some aspects of large-scale harmonic form, and may be applied to large sets of diatonic compositions. Potential applications may arise in computational harmonic and corpus analysis, as well as in the music-psychological investigation of tonal cognition.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849397
Zenodo URL: https://zenodo.org/record/849397


2007.5
A Grammatical Approach to Automatic Improvisation
Keller, Robert M.   Harvey Mudd College; Claremont, United States
Morrison, David R.   Harvey Mudd College; Claremont, United States

Abstract
We describe an approach to the automatic generation of convincing jazz melodies using probabilistic grammars. Uses of this approach include a software tool for assisting a soloist in the creation of a jazz solo over chord progressions. The method also shows promise as a means of automatically improvising complete solos in real-time. Our approach has been implemented and demonstrated in a free software tool.
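
The abstract does not reproduce the grammar itself; as a rough illustration of the general technique (toy rules invented here, not the authors' grammar), a probabilistic grammar can be sampled by repeatedly rewriting non-terminals according to weighted productions:

import random

# Toy probabilistic grammar over melodic-note categories (hypothetical rules):
# C = chord tone, A = approach tone, R = rest.
RULES = {
    "Phrase": [(0.6, ["Seg", "Phrase"]), (0.4, ["Seg"])],
    "Seg":    [(0.5, ["C", "C"]), (0.3, ["A", "C"]), (0.2, ["R"])],
}

def expand(symbol):
    """Recursively rewrite a non-terminal by sampling one of its productions."""
    if symbol not in RULES:                      # terminal symbol: emit it
        return [symbol]
    weights, bodies = zip(*RULES[symbol])
    body = random.choices(bodies, weights=weights)[0]
    return [t for s in body for t in expand(s)]

print(expand("Phrase"))                          # e.g. ['C', 'C', 'A', 'C', 'R']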

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849479
Zenodo URL: https://zenodo.org/record/849479


2007.6
Algorithmic Composition - "Gestalt Revolution" - a New Approach to a Unified View on Structuring Diverse Levels of Musical Composition
Schmitt, Jürgen   Studio for Experimental Electronic Music (eem), Hochschule für Musik Würzburg; Würzburg, Germany

Abstract
An original method of revolving distinct structures, preserving their internal "gestalt", is mapped to: harmonic gestures, "quantised" to 12-tet or free microtonality; rhythmic design, based on milliseconds or adapted to traditional mensural notation; overtone structures (e.g. resonance banks), based on frequencies or proportions; and distributions and relations of musical formal elements. By "reverse engineering", starting from traditionally composed passages, the author (composer/pianist, synthesist) set out to systematize his research project and tried to apply methods from one field of the compositional process to any other. The method aims at a unified approach to generating musical material, controlling its mapping and application, synthesizing overtone spectra and the like, and building form blocks.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849485
Zenodo URL: https://zenodo.org/record/849485


2007.7
A Neural Network Approach for Synthesising Timbres From Adjectives
Gounaropoulos, Alex   Computing Laboratory, University of Kent; Canterbury, United Kingdom
Johnson, Colin G.   Computing Laboratory, University of Kent; Canterbury, United Kingdom

Abstract
This paper describes a computer sound synthesis system, based on artificial neural networks, that constructs a mapping between adjectives and adverbs that describe timbres, and sounds having those timbres. This is used in two ways: firstly, to recognize the timbral characteristics of sounds supplied to the system, and, secondly, to make changes to sounds based on descriptions of timbral change.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849463
Zenodo URL: https://zenodo.org/record/849463


2007.8
A Neurocognitive Analysis of Rhythmic Memory Process as a Social Space Phenomenon
Bonda, Eva   Cabinet Médical; Paris, France

Abstract
Functional neuroimaging has recently offered the means to map the workings of the human auditory system during music processing. I propose a cognitive theoretical account of rhythmic memory brain processing on the basis of current functional neuroimaging data on music perception in the light of my own work on the brain coding of body perception. This framework allows unifying experimental evidence showing activity in brain areas involved in the appreciation of the emotional cues conveyed by music, and data showing activity in the same areas but during the perception of emotional cues conveyed by body actions. This account postulates that the human brain may code the rhythmic processing space as a social space with communicative value.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849405
Zenodo URL: https://zenodo.org/record/849405


2007.9
An Innovative Method for the Study of African Musical Scales: Cognitive and Technical Aspects
Arom, Simha   Centre National de la Recherche Scientifique; Paris, France
Fernando, Nathalie   Université de Montréal; Montreal, Canada
Marandola, Fabrice   McGill University; Montreal, Canada

Abstract
Our aim is to demonstrate how music computing can increase efficiency in a field of humanities such as ethnomusicology. In particular, we will discuss issues concerning musical scales in Central Africa. We will describe a recent methodology developed by a multidisciplinary team composed of researchers from IRCAM and the CNRS. This methodology allows us to understand the implicit behaviour of musicians and the cognitive cultural categories of the functioning of scale systems.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849401
Zenodo URL: https://zenodo.org/record/849401


2007.10
A Platform for Real-Time Multimodal Processing
Camurri, Antonio   InfoMus Lab, Università di Genova; Genova, Italy
Coletta, Paolo   InfoMus Lab, Università di Genova; Genova, Italy
Demurtas, Mirko   InfoMus Lab, Università di Genova; Genova, Italy
Peri, Massimiliano   InfoMus Lab, Università di Genova; Genova, Italy
Ricci, Andrea   InfoMus Lab, Università di Genova; Genova, Italy
Sagoleo, Roberto   InfoMus Lab, Università di Genova; Genova, Italy
Simonetti, Marzia   InfoMus Lab, Università di Genova; Genova, Italy
Varni, Giovanna   InfoMus Lab, Università di Genova; Genova, Italy
Volpe, Gualtiero   InfoMus Lab, Università di Genova; Genova, Italy

Abstract
We present an overview of the architecture and the main technical features of EyesWeb XMI (for eXtended Multimodal Interaction), a hardware and software platform for real-time multimodal processing of multiple data streams. This platform originates from the previous EyesWeb platform, but XMI is not simply an updated version; it is the result of three years of work concerning a new conceptual model, design, and implementation. The new platform includes novel approaches integrating new kinds of interfaces (e.g. tangible acoustic interfaces) and a new set of tools for supporting research in multimodal architectures and interfaces. We discuss two case studies from two research projects at our Lab: the “Premio Paganini” Concert-Experiment and the Orchestra Explorer experiment on a new active listening paradigm. Real-time processing of the multimodal data streams is discussed. The experimental setups are used to clarify the model and the technical issues involved in this new platform.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849487
Zenodo URL: https://zenodo.org/record/849487


2007.11
A Semi-automated Tagging Methodology for Orthodox Ecclesiastic Chant Acoustic Corpora
Chryssochoidis, Georgios   Department of Informatics and Telecommunications, University of Athens; Athens, Greece
Delviniotis, Dimitrios   Department of Informatics and Telecommunications, University of Athens; Athens, Greece
Kouroupetroglou, Georgios   Department of Informatics and Telecommunications, University of Athens; Athens, Greece

Abstract
This paper addresses the problem of using a semi-automated process for tagging lyrics, notation and musical interval metadata of an Orthodox Ecclesiastic Chant (OEC) voice corpus. The recordings are post-processed and segmented using the PRAAT software tool. Boundaries are placed, thus creating intervals in which all tagging metadata will be embedded. The metadata are encoded and then inserted as labels in the segmented file. The methodology and processes involved in this tagging process are described, along with its evaluation on the DAMASKINOS prototype Acoustic Corpus of Byzantine Ecclesiastic Chant Voices.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849407
Zenodo URL: https://zenodo.org/record/849407


2007.12
A System of Interactive Scores Based on Petri Nets
Allombert, Antoine   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France / Université Bordeaux-I; Bordeaux, France
Assayag, Gérard   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Desainte-Catherine, Myriam   Université Bordeaux-I; Bordeaux, France

Abstract
We propose a formalism for the composition and performance of musical pieces involving temporal structures and discrete interactive events. We use the Allen relations to constrain these structures and to partially define a temporal order on them. During the score composition stage, we use a constraint propagation model to maintain the temporal relations between the structures. For the performance stage, we must allow the composer to trigger the interactive events “whenever” he wants, and we also have to maintain the temporal relations in a real-time context. We use a model based on Petri nets for this stage. We also provide a solution to define global constraints, in addition to the local temporal constraints, inspired by the NTCC formalism.
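
As a minimal sketch of the Petri-net machinery referred to above (a generic place/transition net with a made-up marking, not the authors' model), a transition fires when all of its input places hold tokens, consuming one token from each and producing tokens in its output places:

# Marking maps each place to a token count.
marking = {"waiting_event": 1, "structure_ready": 1, "playing": 0}

# Hypothetical transition: an interactive event triggers a temporal structure.
transition = {"inputs": ["waiting_event", "structure_ready"], "outputs": ["playing"]}

def enabled(t, m):
    return all(m[p] > 0 for p in t["inputs"])

def fire(t, m):
    """Consume one token from each input place, add one to each output place."""
    assert enabled(t, m), "transition not enabled"
    for p in t["inputs"]:
        m[p] -= 1
    for p in t["outputs"]:
        m[p] += 1

fire(transition, marking)
print(marking)   # {'waiting_event': 0, 'structure_ready': 0, 'playing': 1}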

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849417
Zenodo URL: https://zenodo.org/record/849417


2007.13
Authenticity Issue in Performing Arts Using Live Electronics
Guercio, Mariella   Università degli Studi di Urbino "Carlo Bo"; Urbino, Italy
Barthélemy, Jérôme   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Bonardi, Alain   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
The CASPAR project is a European project devoted to the preservation of digitally encoded information. In the course of the project, the contemporary arts testbed aims at building a preservation framework for contemporary arts using electronic devices, and particularly for performing arts (music, dance, video installations...). The project addresses very specific issues such as digital rights or authenticity. In this paper, we address the issue of authenticity and give an overview of our approach. The approach to authenticity in CASPAR is based on provenance, that is to say, on addressing the question of who, when, and why. Implementing this approach in the field of contemporary artistic production including electronic devices is not a trivial task. In CASPAR we intend to study the production process, and extract from the elements and traces left by the production process key elements for future assessment of authenticity. We will present some "case studies" in order to explain our approach. Notably, in the production of the String Quartet by Florence Baschet, the electronic processes are evaluated for their robustness against changes of instrumentalist, changes of tempo, and changes in the hardware settings (particularly, removal of a specific sensor). We will show the interest of this evaluation for authenticity preservation issues, and give an insight into the tools we intend to develop, aiming at proving, beyond authentication of provenance, authentication of the results that can be assessed against the author's intentions.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849441
Zenodo URL: https://zenodo.org/record/849441


2007.14
Automatic Semantic Annotation of Music With Harmonic Structure
Weyde, Tillman   Music Informatics Research Group, Department of Computing, City University London; London, United Kingdom
Wissmann, Jens   Music Informatics Research Group, Department of Computing, City University London; London, United Kingdom

Abstract
This paper presents an annotation model for the harmonic structure of a piece of music, and a rule system that supports the automatic generation of harmonic annotations. Musical structure has so far received relatively little attention in the context of musical metadata and annotation, although it is highly relevant for musicians, musicologists and indirectly for music listeners. Activities in semantic annotation of music have so far mostly concentrated on features derived from audio data and file-level metadata. We have implemented a model and rule system for harmonic annotation as a starting point for semantic annotation of musical structure. Our model is for the musical style of Jazz, but the approach is not restricted to this style. The rule system describes a grammar that allows the fully automatic creation of a harmonic analysis as tree-structured annotations. We present a prototype ontology that defines the layers of harmonic analysis from chord symbols to the level of a complete piece. The annotation can be made on music in various formats, provided there is a way of addressing either chords or time points within the music. We argue that this approach, in connection with manual annotation, can support a number of application scenarios in music production, education, retrieval, and musicology.

Keywords
harmonic analysis, semantic description, automatic semantic annotation, grammar, ontology

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849395
Zenodo URL: https://zenodo.org/record/849395


2007.15
Autosimilar Melodies and Their Implementation in OpenMusic
Amiot, Emmanuel   CPGE, Lycée Privé Notre Dame de Bon Secours; Perpignan, France
Agon, Carlos   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Andreatta, Moreno   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
Autosimilar melodies put together the interplay of several melodies within a monody, the augmentations of a given melody, and the notion of autosimilarity which is well known in fractal objects. From a mathematical study of their properties, arising from experimentations by their inventor, the composer Tom Johnson, we have generalized the notion towards the dual aspects of the group of symmetries of a periodic melody, and the creation of a melody featuring a set of given symmetries. This is now a straightforward tool, both for composers and analysts, in the OpenMusic visual programming language.
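
The abstract leaves the notion informal; under one common formalization, a periodic melody M of period n is autosimilar of ratio a when taking every a-th note reproduces the melody, i.e. M(a*k mod n) = M(k) for all k. A quick check of that property (illustrative sketch only, with a made-up melody) might read:

def is_autosimilar(melody, a):
    """True if picking every a-th note of the periodic melody reproduces it."""
    n = len(melody)                      # the period of the melody
    return all(melody[(a * k) % n] == melody[k] for k in range(n))

# Hypothetical period-8 melody invariant under augmentation by ratio 3.
m = ["C", "E", "G", "E", "C", "G", "G", "G"]
print(is_autosimilar(m, 3))              # True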

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849483
Zenodo URL: https://zenodo.org/record/849483


2007.16
Beyond Movement an Animal, Beyond an Animal the Sound
Philippides, Alkiviades   Department of Music Studies Arts, Ionian University; Corfu, Greece

Abstract
‘Beyond movement an animal, beyond an animal the sound’ is a solo interactive dance performance about the route of an animal from birth to an unnatural death. The project experiments with the control and adjustment of musical parameters through movement. Two questions raised during this work are: in which ways are movements translated into sound? How can a single person realize dance and composition, while at the same time efficiently handling software programming? The dancer takes decisions on the real-time sound mixing and processing via a miniDV camera placed on the ceiling of the stage, together with the EyesWeb and SuperCollider software. The parameters extracted from video to drive the sound were the coordinates of the center of gravity of the dancer's silhouette, the total amount of movement of the silhouette, and the duration of movement or stasis of the silhouette. These drove the triggering of sound samples and their processing via filters, granulation, pitch shifting and spectral processing. The duration of the performance varies from ten to fourteen minutes, depending on the time the moving body needs to drive through the different phases of the choreography. Handling the entire production process, from sound design over software programming to choreography and performance, was a challenge that resulted in an aesthetically homogeneous and finely tuned result that takes full advantage of the flexibility afforded by “playing” the sound via the dancer’s movements. Thus “interaction” is here understood as the mutual adjustment of the sound by the dancer’s movement and of the dancer’s movement by the sound, both at the design stage and during the performance.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849429
Zenodo URL: https://zenodo.org/record/849429


2007.17
Cephalomorphic Interface for Emotion-based Music Synthesis
Maniatakos, Vassilios-Fivos A.   Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), Université Paris-Sud XI; Paris, France

Abstract
This article discusses adapting ’Pogany’, a tangible cephalomorphic interface designed and realized at the LIMSI-CNRS laboratory, for music synthesis purposes. We are interested in methods for building an affective, emotion-based system for gesture and posture identification, captured through a facial interface that senses variations of luminosity caused by distance or touch. After a brief discussion of related research, the article introduces issues in the development of a robust gesture learning and recognition tool based on HMMs. Results of the first gesture training and recognition system built are presented and evaluated. Future work is described explicitly, as well as further possible improvements concerning the interface, the recognition system and its mapping to a music synthesis tool.
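
The abstract mentions HMM-based recognition without giving details; in a typical setup, an observation sequence is scored against each gesture's trained HMM with the forward algorithm and assigned to the best-scoring model. A bare-bones version of that scoring step, with invented toy parameters, is:

import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-output HMM.
    pi: initial state probs, A: state transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()                  # rescale to avoid underflow
        logp += np.log(s)
        alpha /= s
    return logp

# Hypothetical 2-state model of one gesture over 3 quantized sensor symbols.
pi = np.array([0.7, 0.3])
A  = np.array([[0.8, 0.2], [0.3, 0.7]])
B  = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
print(forward_loglik([0, 1, 2, 2], pi, A, B))
# Recognition: evaluate the sequence under each gesture's HMM, keep the argmax.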

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849435
Zenodo URL: https://zenodo.org/record/849435


2007.18
Controlling Aural and Visual Particle Systems Through Human Movement
Guedes, Carlos   School of Music and Performing Arts (ESMAE), P.Porto (Instituto Politécnico do Porto); Porto, Portugal / Escola Superior de Artes Aplicadas, Instituto Politécnico de Castelo Branco; Castelo Branco, Portugal
Woolford, Kirk   Department of Music, Lancaster University; Lancaster, United Kingdom

Abstract
This paper describes the methods used to construct an interactive installation using human motion to animate both an aural and visual particle system in synch. It outlines the rotoscoping, meta-motion processing, and visual particle system software. The paper then goes into a detailed explanation of the audio software developed for the project.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849431
Zenodo URL: https://zenodo.org/record/849431


2007.19
Cordis Anima Physical Modeling and Simulation System Analysis
Kontogeorgakopoulos, Alexandros   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France
Cadoz, Claude   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France

Abstract
The CORDIS-ANIMA physical modeling system is one of the oldest techniques for digital sound synthesis via physical modeling. This formalism, which is based on the mass-interaction paradigm, has been designed and developed by ACROE in several stages since 1978. The aim of this article is to highlight some special and particular features of this approach by analysing it mathematically. Linear CORDIS-ANIMA (CA) models are studied and presented using several useful system representations, such as system-function input/output external descriptions, state-space internal descriptions, finite difference models, modal decomposition, electrical analogous circuits, CA networks and digital signal processing block diagrams.
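
For readers unfamiliar with the mass-interaction paradigm, a minimal finite-difference sketch of a single damped mass-spring oscillator (a generic explicit discretization with toy constants, not the exact CORDIS-ANIMA scheme) looks like this:

# Explicit finite-difference update of one mass attached to a spring and damper.
m, k, z = 0.001, 800.0, 0.02      # mass (kg), stiffness, damping (toy values)
fs = 44100.0                      # audio sample rate
dt = 1.0 / fs

x_prev, x = 0.0, 0.001            # initial displacement excites the oscillator
out = []
for n in range(200):
    force = -k * x - z * (x - x_prev) / dt            # spring + damper forces
    x_next = 2.0 * x - x_prev + (dt * dt / m) * force
    out.append(x_next)                                # one output sample
    x_prev, x = x, x_next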

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849459
Zenodo URL: https://zenodo.org/record/849459


2007.20
Digitally Augmented Everyday Objects in Music Composition
Kojš, Juraj   Department of Music, University of Virginia; Charlottesville, United States
Serafin, Stefania   Department of Medialogy, Aalborg University Copenhagen; Copenhagen, Denmark

Abstract
This paper discusses three instances of compositions, in which physical and cyber everyday objects interact. The physical objects such as plastic corrugated tubes, plastic superballs, glass marbles, cocktail shakers, electric blenders, and others provide unique musical data for the performance and input data for their physically modeled counterparts. Besides the compositional strategies, the article focuses on the software and hardware issues connected with tracking, parametrical mapping, interfacing, and physical modeling of everyday objects.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849481
Zenodo URL: https://zenodo.org/record/849481


2007.21
Discovery of Generalized Interval Patterns
Conklin, Darrell   Music Informatics Research Group, Department of Computing, City University London; London, United Kingdom
Bergeron, Mathieu   Music Informatics Research Group, Department of Computing, City University London; London, United Kingdom

Abstract
This paper presents a method for pattern discovery based on viewpoints and feature set patterns. The representation for pattern components accommodates in a fully general way the taxonomic relationships that may exist between interval classes. A heuristic probabilistic hill climbing algorithm is developed to rapidly direct the search towards interesting patterns. The method can be used for single piece analysis, for comparison of two pieces, and also for pattern analysis of a large corpus. The method is applied to the music of French singer-songwriter Georges Brassens.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849413
Zenodo URL: https://zenodo.org/record/849413


2007.22
Embodiment and Agency: Towards an Aesthetics of Interactive Performativity
Kim, Jin Hyun   Institute of Musicology, University of Cologne; Cologne, Germany
Seifert, Uwe   Institute of Musicology, University of Cologne; Cologne, Germany

Abstract
The aim of this paper is to take first steps in the direction of a scientifically oriented aesthetics of New Media Art, taking into account the transformation of musical aesthetics currently taking place, induced by new digital methods in artistic sound and music computing. Starting from the observation of relevant current issues in music composition and performance, such as gesture control in algorithmic sound synthesis, live coding, musical robotics, and live algorithms for music, certain important concepts concerning a theory of human-machine interaction, which are at present under development in our research project C10 "Artistic Interactivity in Hybrid Networks" as part of the collaborative research center SFB/FK 427 "Media and Cultural Communication", are introduced and related to artistic practices. The essential concept of this theory – "interactivity" – is used as a generic term for different kinds of human-machine interactions and is closely related to "agency", "situatedness", and "embodiment". "Agency" stresses a non-asymmetrical relationship between humans and machines. To make clear that some concepts of digital interaction are not conceived of as truly interactive, problems of disembodiment in computer and interactive music and new approaches to embodiment and situatedness in philosophy and cognitive science are discussed. This discussion shows that embodiment serves as a necessary condition for interactivity. Finally, perspectives towards an aesthetics of interactive performativity are discussed, taking into account interactivity, agency, and embodiment. "Performativity" – as developed in German media theories and aesthetics – is characterized as the capacity of a performative act to generate meaning and "reality". It is proposed as a theoretical approach to an aesthetics of New Media Art.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849443
Zenodo URL: https://zenodo.org/record/849443


2007.23
Experimental (δ, γ)-Pattern-Matching with Don't Cares
Iliopoulos, Costas S.   Algorithm Design Group, Department of Computer Science, King's College London; London, United Kingdom
Mohamed, Manal   Algorithm Design Group, Department of Computer Science, King's College London; London, United Kingdom
Mohanaraj, Velumailum   Algorithm Design Group, Department of Computer Science, King's College London; London, United Kingdom

Abstract
The $\delta$-matching problem calculates, for a given text $T_{1\ldots n}$ and a pattern $P_{1\ldots m}$ on an alphabet of integers, the list of all indices (...). The $\gamma$-matching problem computes, for a given T and P, the list of all indices (...). When a “don’t care” symbol occurs, the associated difference is counted as zero. In this paper, we give experimental results for the different matching algorithms that handle the presence of “don’t care” symbols. We highlight some practical issues and present an experimental analysis of the current most efficient algorithms that calculate (...), for a pattern P with occurrences of “don’t cares”. Moreover, we present our own version of the $\gamma$-matching algorithm.
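
The formulas elided above follow the standard definitions in the literature: a delta-match at position i requires |T[i+j-1] - P[j]| <= delta for every j, and a gamma-match bounds the sum of those differences by gamma, with "don't care" symbols contributing zero. A naive reference implementation under those standard definitions (not one of the optimized algorithms benchmarked in the paper) is:

DONT_CARE = None   # placeholder symbol whose difference always counts as zero

def diff(a, b):
    return 0 if a is DONT_CARE or b is DONT_CARE else abs(a - b)

def delta_gamma_matches(text, pattern, delta, gamma):
    """Indices i where every |text[i+j]-pattern[j]| <= delta and their sum <= gamma."""
    m = len(pattern)
    hits = []
    for i in range(len(text) - m + 1):
        diffs = [diff(text[i + j], pattern[j]) for j in range(m)]
        if max(diffs) <= delta and sum(diffs) <= gamma:
            hits.append(i)
    return hits

# Hypothetical example on pitch numbers; None is a "don't care" in the pattern.
print(delta_gamma_matches([60, 62, 64, 65, 67], [62, None, 66], delta=1, gamma=2))   # [1]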

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849411
Zenodo URL: https://zenodo.org/record/849411


2007.24
Expressive Text-to-speech Approaches
Kanellos, Ioannis   Department of Computer Science, Telecom Bretagne; Brest, France
Suciu, Ioana   Department of Computer Science, Telecom Bretagne; Brest, France / France Télécom; Lannion, France
Moudenc, Thierry   France Télécom; Lannion, France

Abstract
The core concern of this paper is the modelling and the tractability of expressiveness in natural voice synthesis. In the first part we briefly discuss the imponderable gap between natural and singing voice synthesis approaches. In the second part we outline a four-level model and a corpus-based methodology for modelling expressive forms—an essential step towards expressive voice synthesis. We then try to contrast them with recurrent concerns in singing voice synthesis. We finally undertake a first reflection on a possible transposition of the approach to the singing voice. We conclude with some programmatic considerations in research and development for singing voice synthesis, inspired by natural voice synthesis techniques.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849379
Zenodo URL: https://zenodo.org/record/849379


2007.25
Fifty Years of Digital Sound for Music
Risset, Jean-Claude   Laboratoire de Mécanique et d'Acoustique (LMA); Marseille, France

Abstract
Fifty years ago, Max Mathews opened new territories for music as he implemented the first digital synthesis and the first digital recording of sound. This presentation illustrates some specific problems and possibilities of computer music, focusing on the perception of musical sound and reflecting the points of view of the author.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849369
Zenodo URL: https://zenodo.org/record/849369


2007.26
Fractured Sounds, Fractured Meanings: A Glove-controlled Spectral Instrument
Litke, David   School of Music, The University of British Columbia (UBC); Vancouver, Canada

Abstract
The compositional practice of applying data collected from FFT analyses to pre-compositional organization has been well established by composers such as Gérard Grisey, Tristan Murail, and Kaija Saariaho. The instrument presented here builds upon fundamental ideas of the spectral composers, applying the concept of timbral disintegration and re-synthesis in a real-time environment. By allowing a performer to sample a source sound, de-construct its overtone spectrum, and manipulate its individual components in a live performance environment, the Spectral Instrument facilitates the audience's perception and comprehension of spectral concepts. In order to afford performers expressive, gestural control over spectral materials, the instrument uses a three-dimensional glove controller as its primary interface. In using the instrument to deconstruct and manipulate the overtones of the human voice, interesting parallels emerge between the semantic structures of language and timbral identities.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849433
Zenodo URL: https://zenodo.org/record/849433


2007.27
From Music Symbolic Information to Sound Synthesis: An XML-based Approach
Haus, Goffredo   Laboratorio di Informatica Musicale (LIM), Dipartimento di Informatica e Comunicazione (DICo), Università degli Studi di Milano; Milano, Italy
Ludovico, Luca Andrea   Laboratorio di Informatica Musicale (LIM), Dipartimento di Informatica e Comunicazione (DICo), Università degli Studi di Milano; Milano, Italy
Russo, Elisa   Laboratorio di Informatica Musicale (LIM), Dipartimento di Informatica e Comunicazione (DICo), Università degli Studi di Milano; Milano, Italy

Abstract
This paper deals with the automatic generation of computer-driven performances and related audio renderings of music pieces encoded in symbolic format. A particular XML-based format, namely MX, is used to represent the original music information. The first step illustrates how symbolic information can give rise to music performances. The format we have chosen to represent performance information is Csound. Then, an audio rendering is automatically produced. Finally, we show how the aforementioned computer-generated information can be linked to the original symbolic description, in order to provide an advanced framework for heterogeneous music contents in a single XML-based format.
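
To give a rough idea of the kind of performance data involved (a generic Csound score sketch with a hypothetical instrument number, not the MX-specific mapping of the paper), symbolic note data can be rendered as Csound "i" statements carrying onset, duration, amplitude and frequency:

# Each note: (onset in beats, duration in beats, MIDI pitch, velocity 0-127).
notes = [(0.0, 1.0, 60, 90), (1.0, 0.5, 64, 80), (1.5, 0.5, 67, 80)]

def midi_to_hz(p):
    return 440.0 * 2 ** ((p - 69) / 12.0)

def to_csound_score(notes, bpm=120):
    """Turn symbolic notes into Csound score lines for a hypothetical instr 1."""
    sec_per_beat = 60.0 / bpm
    lines = []
    for onset, dur, pitch, vel in notes:
        lines.append("i1 %.3f %.3f %.3f %.2f" %
                     (onset * sec_per_beat, dur * sec_per_beat,
                      vel / 127.0, midi_to_hz(pitch)))
    return "\n".join(lines)

print(to_csound_score(notes))
# First line: i1 0.000 0.500 0.709 261.63  (instrument 1 reads amplitude and frequency)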

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849455
Zenodo URL: https://zenodo.org/record/849455


2007.28
Generation and Representation of Data and Events for the Control of Sound Synthesis
Bresson, Jean   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Stroppa, Marco   State University of Music and the Performing Arts Stuttgart (HMDK); Stuttgart, Germany
Agon, Carlos   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
This article presents a system created in the computer-aided composition environment OpenMusic in order to handle complex compositional data related to sound synthesis processes. This system gathers two complementary strategies: the use of sound analysis data as basic material, and the specification of abstract rules in order to automatically process and extend this basic material.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849423
Zenodo URL: https://zenodo.org/record/849423


2007.29
Gestural Control of Sonic Swarms: Composing With Grouped Sound Objects
Davis, Tom   Sonic Arts Research Centre (SARC), Queen's University Belfast; Belfast, United Kingdom
Karamanlis, Orestis   Sonic Arts Research Centre (SARC), Queen's University Belfast; Belfast, United Kingdom

Abstract
This paper outlines an alternative controller designed to diffuse and manipulate a swarm of sounds in 3-dimensional space and discusses the compositional issues that emerge from its use. The system uses an algorithm from a nature-derived model describing the spatial behavior of a swarm. The movement of the swarm is mapped in the 3-dimensional space and a series of sound transformation functions for the sonic agents are implemented. The notion of causal relationships is explored regarding the spatial movement of the swarm and sound transformation of the agents by employing the physical controller as a performance, compositional and diffusion tool.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849427
Zenodo URL: https://zenodo.org/record/849427


2007.30
Haptic Feedback for Improved Positioning of the Hand for Empty Handed Gestural Control
Modler, Paul   Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Myatt, Tony   University of York; York, United Kingdom

Abstract
The fast development of information technology, which to a large extent deals intensively with human communication, requires a means to integrate gestures into man-machine communication. The available CPU power of current computers as well as low-cost video devices provide the facilities for tracking human motion by analysis of camera images [5]. In a video tracking environment, the benefit of freely mapping human gestures to sound and music parameters is often combined with the drawback of limited feedback to the user. In the case of hand gestures tracked by a video system, precise positioning of the hand in the gestural space is made difficult by the varying bio-motor features of different gestures. In this paper, two approaches are proposed for providing a performer with feedback on the hand position, whereby the gestural space of the hand is divided into radial and concentric sectors: tactile feedback through vibrating actuators and audio feedback through different sound generators.
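
Dividing the gestural plane into radial and concentric sectors amounts to binning the polar coordinates of the hand position; a minimal sketch (with made-up sector counts and units, not the paper's configuration) is:

import math

def hand_sector(x, y, n_radial=8, ring_width=0.1, n_rings=4):
    """Map a hand position (relative to the space centre) to (ring, wedge) indices."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)
    ring = min(int(r / ring_width), n_rings - 1)       # concentric sector
    wedge = int(theta / (2 * math.pi / n_radial))      # radial sector
    return ring, wedge

# The (ring, wedge) pair would then select a vibration pattern or a feedback sound.
print(hand_sector(0.15, 0.05))   # (1, 0)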

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849493
Zenodo URL: https://zenodo.org/record/849493


2007.31
Horizontal and Vertical Integration/Segregation in Auditory Streaming: A Voice Separation Algorithm for Symbolic Musical Data
Karydis, Ioannis   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Nanopoulos, Alexandros   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Papadopoulos, Apostolos   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Cambouropoulos, Emilios   Department of Music Studies, Aristotle University of Thessaloniki; Thessaloniki, Greece
Manolopoulos, Yannis   Department of Music Studies, Aristotle University of Thessaloniki; Thessaloniki, Greece

Abstract
Listeners are thought to be capable of perceiving multiple voices in music. Adopting a perceptual view of musical ‘voice’ that corresponds to the notion of auditory stream, a computational model is developed that splits a musical score (symbolic musical data) into different voices. A single ‘voice’ may consist of more than one synchronous notes that are perceived as belonging to the same auditory stream; in this sense, the proposed algorithm, may separate a given musical work into fewer voices than the maximum number of notes in the greatest chord (e.g. a piece consisting of four or more concurrent notes may be separated simply into melody and accompaniment). This is paramount, not only in the study of auditory streaming per se, but also for developing MIR systems that enable pattern recognition and extraction within musically pertinent ‘voices’ (e.g. melodic lines). The algorithm is tested qualitatively and quantitatively against a small dataset that acts as groundtruth.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849469
Zenodo URL: https://zenodo.org/record/849469


2007.32
Image Features Based on Two-dimensional FFT for Gesture Analysis and Recognition
Modler, Paul   Karlsruhe University of Arts and Design (HfG); Karlsruhe, Germany
Myatt, Tony   University of York; York, United Kingdom

Abstract
This paper describes the features and the feature extraction processing that were applied for recognising gestures with artificial neural networks. The features were applied in two cases: time series of luminance rate images of hand gestures and time series of pure grey-scale images of the facial mouth region. A focus will be on the presentation and discussion of the application of 2-dimensional Fourier-transformed images, both for luminance rate feature maps of hand gestures and for grey-scale images of the facial mouth region. Appearance-based features in this context are understood as features based on whole images, which perform well for highly articulated objects. The described approach is based on our assumption that highly articulated objects are of great interest for musical applications.
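
As a sketch of the kind of descriptor meant (a generic low-frequency 2-D FFT magnitude feature, not the authors' exact pipeline), the central block of the 2-D Fourier magnitude spectrum of a grey-scale frame can serve as an appearance-based feature vector:

import numpy as np

def fft2_features(frame, keep=8):
    """Return the magnitudes of the lowest keep x keep 2-D FFT coefficients
    of a grey-scale image, as a flat feature vector for a neural network."""
    spectrum = np.fft.fft2(frame)
    mags = np.abs(np.fft.fftshift(spectrum))        # centre the zero frequency
    cy, cx = mags.shape[0] // 2, mags.shape[1] // 2
    half = keep // 2
    block = mags[cy - half:cy + half, cx - half:cx + half]
    return block.ravel()

frame = np.random.rand(64, 64)        # stand-in for a luminance/grey-scale frame
print(fft2_features(frame).shape)     # (64,)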

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849491
Zenodo URL: https://zenodo.org/record/849491


2007.33
Image to Sound and Sound to Image Transform
Spyridis, Haralampos C.   Department of Music, University of Athens; Athens, Greece
Moustakas, Aggelos K.   University of Piraeus; Athens, Greece

Abstract
The objective of our paper is to present a piece of software we wrote, which creates the sound analog of an image, as well as the image analog of a sound. To the best of our knowledge, the proposed process is original.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849473
Zenodo URL: https://zenodo.org/record/849473


2007.34
Implementation of Algorithms to Classify Musical Texts According to Rhythms
Chen, Arbee L. P.   Department of Computer Science, National Chengchi University; Taiwan, Taiwan
Iliopoulos, Costas S.   Algorithm Design Group, Department of Computer Science, King's College London; London, United Kingdom
Michalakopoulos, Spiros   Algorithm Design Group, Department of Computer Science, King's College London; London, United Kingdom
Rahman, Mohammad Sohel   Algorithm Design Group, Department of Computer Science, King's College London; London, United Kingdom

Abstract
An interesting problem in musicology is to classify songs according to rhythms. A rhythm is represented by a sequence of “Quick” (Q) and “Slow” (S) symbols, which correspond to the (relative) duration of notes, such that S = 2Q. Recently, Christodoulakis et al. [16] presented an efficient algorithm that can be used to classify musical texts according to rhythms. In this paper, we implement the above algorithm along with the naive brute-force algorithm to solve the same problem. We then compare the theoretical time complexity bounds with the actual running times achieved in the experiments and compare the results of the two algorithms.
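
To make the representation concrete (a naive illustration of the stated convention S = 2Q and of the brute-force search, not the algorithm of Christodoulakis et al.), durations can be encoded as a Q/S string and a rhythm located by scanning:

def encode_qs(durations):
    """Encode note durations as 'Q'/'S' symbols, taking the shortest value as Q."""
    q = min(durations)
    out = []
    for d in durations:
        out.append("Q" if d == q else "S" if d == 2 * q else "?")
    return "".join(out)

def naive_occurrences(text, rhythm):
    """Brute-force search: all start positions of the rhythm in the Q/S text."""
    n, m = len(text), len(rhythm)
    return [i for i in range(n - m + 1) if text[i:i + m] == rhythm]

song = encode_qs([1, 1, 2, 1, 1, 2, 2])      # hypothetical duration sequence
print(song)                                   # QQSQQSS
print(naive_occurrences(song, "QQS"))         # [0, 3]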

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849409
Zenodo URL: https://zenodo.org/record/849409


2007.35
Injecting Periodicities: Sieves as Timbres
Exarchos, Dimitris   Department of Music, Goldsmiths, University of London; London, United Kingdom

Abstract
Although Xenakis’s article on sieves was published in 1990, the first extended reference to Sieve Theory is found in the final section of ‘Towards a Metamusic’ of 1967. There are certain differences between the two. The section of ‘Metamusic’ is titled ‘Sieve Theory’, whereas the 1990 article is simply titled ‘Sieves’. The latter is more practical, as it provides a tool for treating sieves (with the two computer programmes). These two writings mark two periods: during the first, sieves were of the periodic asymmetric type, with significantly large periods; in the more recent one, they were irregular, with no periodicity. Also, the option of a simplified formula only appears in the 1990 article: there is a progression from the decomposed formula to the simplified one. This progression reflects Xenakis’s aesthetic of sieves as timbres and is explored here in the light of the idea of injected periodicities.
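
For readers new to Sieve Theory: a sieve is a set of integers built from residue classes m@r = {k : k mod m = r} combined by union and intersection, read as pitches or time points above a reference. A small sketch of a decomposed sieve as a union of residue classes (an invented example, purely illustrative) is:

def residue_class(modulus, residue, upper):
    """Points of the residue class modulus@residue below upper."""
    return {k for k in range(upper) if k % modulus == residue}

def sieve_union(classes, upper):
    """Union of residue classes, e.g. a pitch or time-point sieve."""
    points = set()
    for modulus, residue in classes:
        points |= residue_class(modulus, residue, upper)
    return sorted(points)

# Hypothetical decomposed sieve 3@0 union 4@1, over two octaves of semitones.
print(sieve_union([(3, 0), (4, 1)], upper=24))
# [0, 1, 3, 5, 6, 9, 12, 13, 15, 17, 18, 21]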

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849389
Zenodo URL: https://zenodo.org/record/849389


2007.36
Is Knowledge Emerging in the Secrecy of Our Digital Collections?
Rousseaux, Francis   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Bonardi, Alain   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
Object-oriented computer science was created to simulate our activities of placing objects in identified and labeled class structures. As we know, its success was immediate. Recently, an innovative trend appeared. It is characterized by the mobilization of object-oriented computer science for the organization of our collections which are considered like heaps of objects waiting to be classified in ad-hoc classes that could be created at the same time. Undeniably, collecting is an older activity than classifying, in so far as it allows the useful experimentation of the concepts of extension (in the case of a spatiotemporal, even temporary and ephemeral arrangement) and of intension in the idea of an abstract order of similarities. We always put together a collection of something, which makes it impossible to typify the activity regardless of the objects, and which therefore disturbs the modeler's customary practices.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849445
Zenodo URL: https://zenodo.org/record/849445


2007.37
KLAMA: The Voice From Oral Tradition in Death Rituals to a Work for Choir and Live Electronics
Spiropoulos, Georgia   Composer, Independent; Paris, France
Meudic, Benoît   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
Klama, for mixed choir, live electronics & prerecorded sounds, has its origins in the ritual lament of Southern Peloponnese (Greece); a “polyphony” composed of improvised monodies (moirolóya), epodes, crying, screams and monologues, accompanied by ritual gestures. By its acoustic violence the lament can be considered an alteration of vocality which affects simultaneously tonality, timbre and language. Klama has been developed in three levels, a nexus where vocal writing interacts with electroacoustics and live electronics, the latter seen as a metaphore of the inherent vocal alterations on the lament. In this paper we will show : 1) how the compositional material derived from the voice in oral and byzantine church tradition is explored for the choir and electronic writing; 2) how the three levels of Klama, acoustic, electroacoustic & live electronics interact through the act of composition and by means of the technological tools (Open Music, Max/Msp, Audio Sculpt).

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849447
Zenodo URL: https://zenodo.org/record/849447


2007.38
Large Scale Musical Instrument Identification
Benetos, Emmanouil   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Kotti, Margarita   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Kotropoulos, Constantine   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece

Abstract
In this paper, automatic musical instrument identification using a variety of classifiers is addressed. Experiments are performed on a large set of recordings that stem from 20 instrument classes. Several features from general audio data classification applications as well as MPEG-7 descriptors are measured for 1000 recordings. Branch-and-bound feature selection is applied in order to select the most discriminating features for instrument classification. The first classifier is based on non-negative matrix factorization (NMF) techniques, where training is performed for each audio class individually. A novel NMF testing method is proposed, where each recording is projected onto several training matrices, which have been Gram-Schmidt orthogonalized. Several NMF variants are utilized besides the standard NMF method, such as the local NMF and the sparse NMF. In addition, 3-layered multilayer perceptrons, normalized Gaussian radial basis function networks, and support vector machines employing a polynomial kernel have also been tested as classifiers. The classification accuracy is high, ranging from 88.7% to 95.3%, outperforming the state-of-the-art techniques tested in the aforementioned experiment.
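
As a loose sketch of one way to realize the projection step described above (orthogonalize each class's learned basis and keep the class that reconstructs the test vector best), using random stand-in bases rather than real NMF output and not claiming to follow the authors' exact procedure:

import numpy as np

def orthonormal_basis(W):
    """Gram-Schmidt-style orthonormalisation of a class's basis (via QR)."""
    Q, _ = np.linalg.qr(W)
    return Q

def classify(x, class_bases):
    """Assign x to the class whose orthonormalised basis reconstructs it best
    (smallest residual after projection onto the basis subspace)."""
    best, best_err = None, np.inf
    for label, W in class_bases.items():
        Q = orthonormal_basis(W)
        residual = np.linalg.norm(x - Q @ (Q.T @ x))
        if residual < best_err:
            best, best_err = label, residual
    return best

# Hypothetical toy bases (columns would be NMF basis vectors learned per class).
rng = np.random.default_rng(0)
bases = {"violin": rng.random((20, 4)), "flute": rng.random((20, 4))}
x = bases["violin"] @ rng.random(4)          # a sample lying in the violin subspace
print(classify(x, bases))                     # violin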

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849461
Zenodo URL: https://zenodo.org/record/849461


2007.39
LISTEN LISBOA: Scripting Languages for Interactive Musical Installations
Le Prado, Cécile   Centre d'Etudes et de Recherche en Informatique et Communication, Conservatoire national des arts et métiers (CNAM); Paris, France
Natkin, Stéphane   Centre d'Etudes et de Recherche en Informatique et Communication, Conservatoire national des arts et métiers (CNAM); Paris, France

Abstract
This paper starts from an experimental interactive musical installation designed at IRCAM: “LISTEN LISBOA”. The installation relies on a perceptual paradox: spectators walk through a real space, see this space, and at the same time hear through headphones a virtual sound space mapped onto the real one. In the first part of this paper we present this installation and, more precisely, the interactive musical “scenario” designed by the composer. We derive from this example the general need for tools used by composers working on this kind of installation. We show that these needs are close to those of video game level designers and associated scene languages, with some extensions.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849383
Zenodo URL: https://zenodo.org/record/849383


2007.40
Mapping and Dimensionality of a Cloth-based Sound Instrument
Birnbaum, David   Input Devices and Music Interaction Laboratory (IDMIL), Schulich School of Music, Music Technology Area, McGill University; Montreal, Canada
Abtan, Freida   Concordia University; Montreal, Canada
Sha, Xin Wei   Concordia University; Montreal, Canada
Wanderley, Marcelo M.   Input Devices and Music Interaction Laboratory (IDMIL), Schulich School of Music, Music Technology Area, McGill University; Montreal, Canada

Abstract
This paper presents a methodology for data extraction and sound production derived from cloth that prompts “improvised play” rather than rigid interaction metaphors based on preexisting cognitive models. The research described in this paper is a part of a larger effort to uncover the possibilities of using sound to prompt emergent play behaviour with pliable materials. This particular account documents the interactivation of a stretched elastic cloth with an integrated sensor array called the “Blanket”.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849499
Zenodo URL: https://zenodo.org/record/849499


2007.41
Mapping Chaotic Dynamical Systems Into Timbre Evolution
Rizzuti, Constantino   Department of Linguistics, Università della Calabria; Arcavacata di Rende, Italy

Abstract
Many approaches have so far been used to transform time series produced by dynamical systems into music and sounds. These approaches can mainly be divided into two categories: high-level approaches, aimed at melodic pattern generation, and low-level approaches, in which dynamical systems are used to generate sound samples. In the present work we present a new approach that realizes a mapping at an intermediate level between the two previously mentioned, in which chaotic systems are used to control the parameters of a sound synthesis process.
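
One concrete instance of such an intermediate-level mapping (a generic illustration with invented parameter ranges, not the systems used in the paper) is to let a chaotic map drive synthesis parameters once per control period:

def logistic_orbit(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x), a standard chaotic system."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def to_synthesis_params(xs, f_lo=200.0, f_hi=2000.0):
    """Map the orbit in [0, 1] to (filter cutoff in Hz, grain density) pairs."""
    return [(f_lo + x * (f_hi - f_lo), 5 + int(x * 45)) for x in xs]

orbit = logistic_orbit(0.4, r=3.9, n=8)      # r = 3.9 puts the map in a chaotic regime
for cutoff, density in to_synthesis_params(orbit):
    print("cutoff %.1f Hz, density %d grains/s" % (cutoff, density))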

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849375
Zenodo URL: https://zenodo.org/record/849375


2007.42
Measured Characteristics of Development in Adolescent Singers
Barlow, Christopher   School of Computing and Communications, Solent University Southampton; Southampton, United Kingdom
Howard, David M.   Department of Electronics, University of York; York, United Kingdom

Abstract
Electrolaryngographic recordings were made of the spoken and sung voices of 256 trained and untrained singers aged 8-18. The authors examined measures of Larynx Closed Quotient (CQ) over a sung scale and a spoken passage and related them to the parameters of sex, development and vocal training. A positive correlation between CQ and development was found for both the singing and spoken voices of boys. Girls, however, showed a negative correlation between sung CQ and development, but exhibited no correlation between spoken CQ and vocal development. Trained singers of both sexes exhibited slightly lowered mean sung CQ, with a reduced range of values and lower standard deviation, possibly demonstrating greater control over the vocal mechanism. It is proposed that this demonstration of quantifiable vocal differences could form the basis of a biofeedback tool for pedagogy and vocal health monitoring.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849399
Zenodo URL: https://zenodo.org/record/849399


2007.43
Mixing Time Representations in a Programmable Score Editor
Agon, Carlos   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Bresson, Jean   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
This article presents a new type of document developed in the computer-aided composition environment OpenMusic. The sheet extends the compositional tools of this environment by putting forward the concepts of score and music notation. Two principal questions are dealt with: the integration of musical objects corresponding to heterogeneous time systems (e.g. traditional music notation representations and linear time representations), and the integration of programs and functional descriptions with the notation aspects within the score framework.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849415
Zenodo URL: https://zenodo.org/record/849415


2007.44
Modelling Dynamics of Key Induction in Harmony Progressions
Rohrmeier, Martin   Centre for Music and Science, Faculty of Music, University of Cambridge; Cambridge, United Kingdom

Abstract
This paper presents a statistical model of key induction from given short harmony progressions and its application in modelling dynamics of on-line key induction in harmonic progressions with a sliding window approach. Using a database of Bach’s chorales, the model induces keys and key profiles for given harmonic contexts and accounts for related music theoretical concepts of harmonic ambiguity and revision. Some common results from music analytical practice can be accounted for with the model. The sliding window key induction gives evidence of cases in which key is established or modulation is recognised even though neither dominant nor leading tone was involved. This may give rise to a more flexible, probabilistic interpretation of key which would encompass ambiguity and under-determination. In addition, a novel method of harmonically adequate segmentation is presented.
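
For orientation, a generic pitch-class-profile correlation approach in the spirit of Krumhansl and Schmuckler (not the statistical model developed in the paper) can already be run in a sliding window; the sketch below uses a hypothetical pitch-class stream as input:

import numpy as np

# Krumhansl-Kessler major-key profile (probe-tone ratings), transposed to all keys.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def key_estimate(pc_histogram):
    """Correlate a 12-bin pitch-class histogram with each transposed major profile."""
    scores = [np.corrcoef(np.roll(MAJOR, k), pc_histogram)[0, 1] for k in range(12)]
    return int(np.argmax(scores)), scores

def sliding_keys(pitch_classes, window=16, hop=4):
    """Estimate a key for each overlapping window of pitch classes."""
    keys = []
    for start in range(0, len(pitch_classes) - window + 1, hop):
        hist = np.bincount(pitch_classes[start:start + window], minlength=12)
        keys.append(key_estimate(hist)[0])
    return keys

# Hypothetical stream of pitch classes (0 = C); real input would come from chords.
stream = [0, 4, 7, 0, 2, 7, 11, 2, 0, 4, 7, 5, 9, 0, 4, 7] * 3
print(sliding_keys(stream))    # likely a constant run of 0 (C major) for this toy input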

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849393
Zenodo URL: https://zenodo.org/record/849393


2007.45
Multi-channel Formats in Electroacoustic Composition: Acoustic Space as a Carrier of Musical Structure
Stavropoulos, Nikos   Leeds Metropolitan University; Leeds, United Kingdom

Abstract
The purpose of this paper is to examine and compare multi-channel compositional practices within the scope of electroacoustic music and to discuss arising issues regarding the compositional methodology, rationale, performance and dissemination of multi-channel works, with examples drawn from the author’s compositional output. The paper describes principal theories of musical space and draws parallels between those and compositional practices, whilst discussing the articulation of acoustic space as a further expressive dimension in the musical language, with reference to specific multi-channel formats.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849449
Zenodo URL: https://zenodo.org/record/849449


2007.46
ORPHEUS: A Virtual Learning Environment of Ancient Greek Music
Politis, Dionysios   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Margounakis, Dimitrios   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Botsaris, George   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece
Papaleontiou, Leontios   Department of Informatics, Aristotle University of Thessaloniki; Thessaloniki, Greece

Abstract
The applications ORPHEUS and ARION were presented to large audiences during the International Fair of Thessaloniki, 8-17 September 2006, as well as at international conferences, in nation-wide radio broadcasts, and in newspaper and magazine articles. ORPHEUS has been supported by the “HERMES” project, while ARION is under the auspices of the SEEArchWeb project. ORPHEUS is an interactive presentation of Ancient Greek Musical Instruments. Its virtual environment allows experimentation with the use and the sounds of the modelled ancient instruments. The Ancient Greek Guitar (“Kithara”), the first instrument to be modelled, can be virtually strummed using the mouse or the keyboard, producing ancient Greek melodies. The application, which is accompanied by information about the history of Ancient Greek Music and a picture gallery of the Ancient Greek Instruments, is mainly educational in character: its main aim is to demonstrate the Ancient Greek Musical Instruments to the audience.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849467
Zenodo URL: https://zenodo.org/record/849467


2007.47
Pitch Spelling: Investigating Reductions of the Search Space
Honingh, Aline   Music Informatics Research Group, Department of Computing, City University London; London, United Kingdom

Abstract
Pitch spelling addresses the question of how to derive traditional score notation from pitch classes or MIDI numbers. In this paper, we argue that the diatonic notes in a piece of music are easier to spell correctly than the non-diatonic notes. We then investigate 1) whether the generally used method of calculating the proportion of correctly spelled notes to evaluate pitch spelling models can be replaced by a method that concentrates only on the non-diatonic pitches, and 2) whether an extra evaluation measure to distinguish the incorrectly spelled diatonic notes from the incorrectly spelled non-diatonic notes would be useful. To this end, we calculate the typical percentage of pitch classes that correspond to diatonic notes and check whether those pitch classes do indeed refer to diatonic notes in a piece of music. We also explore extensions of the diatonic set. Finally, a well-performing pitch spelling algorithm is examined to see what percentage of its incorrectly spelled notes are diatonic notes. It turns out that a substantial part of the incorrectly spelled notes consists of diatonic notes, which means that the standard evaluation measure of pitch spelling algorithms cannot be replaced by a measure that concentrates only on non-diatonic notes without losing important information. We propose instead that two evaluation measures could be added to the standard correctness rate to give a more complete view of a pitch spelling model.
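
For illustration, a short Python sketch of the split evaluation the paper argues for: an overall spelling accuracy plus separate rates for diatonic and non-diatonic notes. The passage, key and spellings are hypothetical:

    # Overall vs. diatonic/non-diatonic spelling accuracy on a toy passage assumed to be in C major.
    C_MAJOR_DIATONIC = {"C", "D", "E", "F", "G", "A", "B"}

    # (predicted spelling, ground-truth spelling) pairs -- invented for illustration
    results = [("C", "C"), ("D", "D"), ("F#", "Gb"), ("E", "E"), ("Bb", "A#"), ("G", "G")]

    def rate(pairs):
        return sum(p == t for p, t in pairs) / len(pairs) if pairs else float("nan")

    diatonic     = [(p, t) for p, t in results if t in C_MAJOR_DIATONIC]
    non_diatonic = [(p, t) for p, t in results if t not in C_MAJOR_DIATONIC]

    print(f"overall accuracy:      {rate(results):.2f}")       # 0.67
    print(f"diatonic accuracy:     {rate(diatonic):.2f}")      # 1.00
    print(f"non-diatonic accuracy: {rate(non_diatonic):.2f}")  # 0.00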

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849391
Zenodo URL: https://zenodo.org/record/849391


2007.48
Preparing for TreeTorika: Computer-assisted Analysis of Mao's Oratory
Lindborg, PerMagnus   School of Contemporary Music, LASALLE College of the Arts; Singapore, Singapore

Abstract
This paper examines computer-assisted analysis techniques for extracting musical features from a recording of a speech by Mao Zedong. The data were used to prepare compositional material such as global form, melody, harmony and rhythm for TreeTorika for chamber orchestra. The text focuses on large-scale segmentation, melody transcription, quantification and quantization. It touches upon orchestration techniques but does not go into other aspects of the work such as constraint-based rhythm development. Automatic transcription of the voice was discarded in favour of an aurally based method supported by tools in Amadeus and Max/MSP. The data were processed in OpenMusic to optimise the notation with regard to accuracy and readability for the musicians. The harmonic context was derived from AudioSculpt partial tracking and chord-sequence analyses. Finally, attention is given to artistic and political considerations when using recordings of such an intensely disputed public figure as Mao.
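
As a generic illustration of the quantization step mentioned above (snapping transcribed onset times to a rhythmic grid), a small Python sketch follows; it is not the author's OpenMusic processing, and the tempo and onsets are invented:

    # Snap onset times (seconds) to the nearest subdivision of the beat and return positions in beats.
    def quantize(onsets_sec, tempo_bpm=72, subdivisions_per_beat=4):
        beat = 60.0 / tempo_bpm
        grid = beat / subdivisions_per_beat
        return [round(t / grid) * grid / beat for t in onsets_sec]

    onsets = [0.02, 0.44, 0.81, 1.27, 1.69, 2.11]   # hypothetical onsets from an aural transcription
    print(quantize(onsets))                          # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5] at 72 bpm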

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849457
Zenodo URL: https://zenodo.org/record/849457


2007.49
Prolegomena to Sonic Toys
De Götzen, Amalia   CSC, Department of Information Engineering, Università di Padova; Padova, Italy
Serafin, Stefania   Department of Medialogy, Aalborg University Copenhagen; Copenhagen, Denmark

Abstract
A variety of electronic sonic toys is available on the market. However, such toys are usually played by pushing buttons which trigger different sampled sounds. In this paper we advocate the possibility of using continuous natural gestures connected with synthesized sounds for the creation of enactive sonic toys.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849421
Zenodo URL: https://zenodo.org/record/849421


2007.50
Real-time Spatial Mixing Using Binaural Processing
Tsakostas, Christos   Holistiks Engineering Systems; Athens, Greece
Floros, Andreas   Department of Audiovisual Arts, Ionian University; Corfu, Greece
Deliyiannis, Yannis   Department of Audiovisual Arts, Ionian University; Corfu, Greece

Abstract
In this work, a professional audio mastering / mixing software platform is presented which employs state-of-the-art binaural technology algorithms for efficient and accurate sound source positioning. The proposed mixing platform supports high-quality audio (typically 96kHz/24bit) for an arbitrary number of sound sources, while room acoustic analysis and simulation models are also incorporated. All binaural calculations and audio signal processing/mixing are performed in real-time, due to the employment of an optimized binaural 3D Audio Engine developed by the authors. Moreover, all user operations are performed through a user-friendly graphical interface allowing the efficient control of a large number of binaural mixing parameters. It is shown that the proposed mixing platform achieves subjectively high spatial impression, rendering it suitable for high-quality professional audio applications.
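
For illustration, a minimal Python/NumPy sketch of the core binaural operation (convolving a mono source with a left/right HRIR pair); the platform's actual 3D Audio Engine is far more elaborate, and the HRIRs below are placeholders rather than measured responses:

    import numpy as np

    def binauralize(mono, hrir_left, hrir_right):
        """Convolve a mono source with an HRIR pair and return a stereo (N, 2) array."""
        return np.stack([np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)], axis=1)

    fs = 96000                                # the 96 kHz rate mentioned in the abstract
    mono = np.random.randn(fs // 10)          # 100 ms of noise as a stand-in source
    hrir_l = np.zeros(256); hrir_l[0] = 1.0   # placeholder left-ear impulse response
    hrir_r = np.zeros(256); hrir_r[20] = 0.6  # delayed, attenuated right ear: source roughly to the left
    print(binauralize(mono, hrir_l, hrir_r).shape)   # (num_samples + 255, 2)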

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849465
Zenodo URL: https://zenodo.org/record/849465


2007.51
Reviewing the Transformation of Sound to Image in New Computer Music Software
Lemi, Esther   Department of Music, National and Kapodistrian University of Athens; Athens, Greece
Georgaki, Anastasia   Department of Music, National and Kapodistrian University of Athens; Athens, Greece

Abstract
In this essay we analyse the relationship between sound and image in computer music. We examine sound visualisation software and its evolution over the thirty-five years of its existence. Our judgement of this software is based on aesthetic criteria as they were handed down to us from the theories of abstract painting (the 20th-century avant-garde), Sergei Eisenstein's theory of montage, neurophysiology (synesthesia, muscular images) and the successful correspondence of the two media (pixel and music) in the works and theory of James Whitney.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849385
Zenodo URL: https://zenodo.org/record/849385


2007.52
Self-Space: Interactive, Self Organized, Robotics Mediated, Cross-Media Platform for Music and Multimedia Performance
Giannoukakis, Marinos   Department of Music, Ionian University; Corfu, Greece
Zannos, Ioannis   Department of Audiovisual Arts, Ionian University; Corfu, Greece

Abstract
In this paper, we present a platform for experimental music and multimedia performance that uses swarm theory, a subfield of complexity theory, to model complex behaviors in intelligent environments. The platform is a kinetic structure equipped with sensors and mobile speakers that functions as a dynamically changing space for music performance. Game development software and Internet technologies have been used for the creation of a 3D virtual environment for remote control and data exchange, thereby adding an augmented reality component. The paper describes the hardware and software architecture of the system and discusses implementation issues concerning the multilayered control architecture. This research combines approaches and techniques from the fields of kinetic architecture and intelligent environments on the one hand and of generative music algorithms on the other.
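
For illustration, a toy swarm update in Python/NumPy using the three classic boids rules (cohesion, alignment, separation); it only hints at the swarm-theory modelling the platform draws on and is not the installation's control code:

    import numpy as np

    def swarm_step(pos, vel, cohesion=0.01, alignment=0.05, separation=0.05, min_dist=0.5):
        """One update of agent positions and velocities."""
        centre, mean_vel = pos.mean(axis=0), vel.mean(axis=0)
        new_vel = vel.copy()
        for i in range(len(pos)):
            new_vel[i] += cohesion * (centre - pos[i])        # steer towards the flock centre
            new_vel[i] += alignment * (mean_vel - vel[i])     # match the average heading
            for j in range(len(pos)):                         # push away from close neighbours
                if i != j and np.linalg.norm(pos[i] - pos[j]) < min_dist:
                    new_vel[i] += separation * (pos[i] - pos[j])
        return pos + new_vel, new_vel

    rng = np.random.default_rng(0)
    pos, vel = rng.uniform(-1, 1, (8, 2)), rng.uniform(-0.1, 0.1, (8, 2))
    for _ in range(100):
        pos, vel = swarm_step(pos, vel)
    print(pos.round(2))   # positions like these could drive speaker placement or synthesis parameters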

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849489
Zenodo URL: https://zenodo.org/record/849489


2007.53
Singer Identification in Rembetiko Music
Holzapfel, André   Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH); Heraklion, Greece
Stylianou, Yannis   Multimedia Informatics Laboratory, Department of Computer Science, University of Crete; Heraklion, Greece

Abstract
In this paper, the problem of automatic singer identification is investigated using methods known from speaker identification. Ways of using world models are presented and the use of Cepstral Mean Subtraction (CMS) is evaluated. In order to minimize differences due to musical style, we use a novel data set consisting of samples of Greek Rembetiko music that are very similar in style. The data set also makes it possible to explore, for the first time, the influence of recording quality, as it includes many historical gramophone recordings. Experimental evaluations show the benefits of world models for frame selection and of CMS, resulting in an average classification accuracy of about 81% among 21 different singers.
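
For illustration, Cepstral Mean Subtraction itself is a one-line operation; the Python/NumPy sketch below uses invented features, with a constant offset standing in for channel colouration such as that of old gramophone recordings:

    import numpy as np

    def cepstral_mean_subtraction(cepstra):
        """cepstra: (n_frames, n_coeffs) array of cepstral features for one recording."""
        return cepstra - cepstra.mean(axis=0, keepdims=True)

    frames = np.random.randn(200, 13) + np.array([2.0] + [0.0] * 12)   # hypothetical MFCC-like features
    print(cepstral_mean_subtraction(frames).mean(axis=0).round(3))      # per-coefficient means are now ~0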

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849477
Zenodo URL: https://zenodo.org/record/849477


2007.54
Some Symbolic Possibilities Specific to Electroacoustic Music
Di Santo, Jean-Louis   Studio de Création et de Recherche en Informatique et Musique Electroacoustique; Bordeaux, France

Abstract
In the past, composers very often tried to represent parts of reality, or ideas, with music, but their expression was limited by the use of instruments. Today, computers offer the possibility of using and transforming sounds existing in the world (referential sounds). This is a real revolution: composers can now make several kinds of symbolic music, according to their wishes: narrative, argumentative or sound poetry. We use Charles Sanders Peirce's sign theory to analyse these possibilities and how they function.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849439
Zenodo URL: https://zenodo.org/record/849439


2007.55
Sonification of Gestures Using Specknets
Lympouridis, Vangelis   School of Arts, Culture and Environment, The University of Edinburgh; Edinburgh, United Kingdom
Parker, Martin   School of Arts, Culture and Environment, The University of Edinburgh; Edinburgh, United Kingdom
Young, Alex   School of Informatics, The University of Edinburgh; Edinburgh, United Kingdom
Arvind, DK   School of Informatics, The University of Edinburgh; Edinburgh, United Kingdom

Abstract
This paper introduces a novel approach to gesture recognition for interactive virtual instruments. The method is based on the tracking of body postures and movement which is achieved by a wireless network of Orient-2 specks strapped to parts of the body. This approach is in contrast to camera-based methods which require a degree of infrastructure support. This paper describes the rationale underlying the method of sonification from gestures, addressing issues such as disembodiment and virtuality. A working system is described together with the method for interpreting the gestures as sounds in the MaxMSP tool.
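
For illustration, one possible gesture-to-sound mapping in Python: orientation angles from a body-worn sensor are scaled to synthesis parameters. The mapping, ranges and values are invented and differ from the Orient-2/Max/MSP system described:

    def map_orientation(pitch_deg, roll_deg):
        """Map pitch (-90..90 deg) to frequency and roll (-180..180 deg) to amplitude."""
        freq = 220.0 * 2 ** ((pitch_deg + 90.0) / 90.0)   # -90 deg -> 220 Hz, +90 deg -> 880 Hz
        amp = (roll_deg + 180.0) / 360.0                  # 0.0 .. 1.0
        return freq, amp

    for pitch, roll in [(-90, -180), (0, 0), (45, 90)]:   # hypothetical sensor readings
        print(map_orientation(pitch, roll))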

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849497
Zenodo URL: https://zenodo.org/record/849497


2007.56
Sound Synthesis and Musical Composition by Physical Modelling of Self-sustained Oscillating Structures
Poyer, François   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France
Cadoz, Claude   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France

Abstract
In this paper we present the first results of a study carried out with GENESIS, a sound synthesis and musical creation environment, on models of self-sustained oscillating structures. Based on the mass-interaction CORDIS-ANIMA physical modelling formalism, GENESIS has the noteworthy property of allowing work both on sound itself and on musical composition within a single coherent environment. Taking as a starting point the analysis of real self-sustained instruments such as bowed strings or woodwinds, our aim is to develop generic tools for investigating a wide family of physical models: the self-sustained oscillating structures. While this family can create rich timbres, it can also play a new and fundamental role at the level of the temporal macrostructure of the music (the gesture and instrumental performance level, as well as the compositional one). We therefore also propose to use the relatively complex motion of a bowed macrostructure as a generator of musical events, in order to explore the compositional possibilities of this type of model.
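
For illustration, a toy mass-interaction simulation in Python, in the spirit of the mass-spring-damper building blocks of CORDIS-ANIMA: one mass tied to a fixed point, stepped with an explicit finite-difference scheme. GENESIS models are far richer, and self-sustained oscillation additionally requires a nonlinear (bow- or reed-like) interaction not shown here:

    def simulate(steps=200, k=0.1, z=0.002, m=1.0, x0=1.0):
        """Free response of a single mass attached to a fixed point by a spring and a damper."""
        x_prev, x = x0, x0                       # start displaced, at rest
        out = []
        for _ in range(steps):
            force = -k * x - z * (x - x_prev)    # spring + viscous damping
            x_next = 2 * x - x_prev + force / m  # explicit (Verlet-like) update, unit time step
            x_prev, x = x, x_next
            out.append(x)
        return out

    samples = simulate()
    print(round(min(samples), 3), round(max(samples), 3))   # a slowly decaying oscillation around zero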

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849373
Zenodo URL: https://zenodo.org/record/849373


2007.57
SRA: A Web-based Research Tool for Spectral and Roughness Analysis of Sound Signals
Vassilakis, Pantelis N.   School of Music, DePaul University; Chicago, United States

Abstract
SRA is a web-based tool that performs Spectral and Roughness Analysis on user-submitted sound files (.wav and .aif formats). Spectral analysis incorporates an improved Short-Time Fourier Transform (STFT) algorithm [1-2] and automates spectral peak-picking using Loris open-source C++ class library components. Users can set three spectral analysis/peak-picking parameters: analysis bandwidth, spectral-amplitude normalization, and spectral-amplitude threshold. These are described in detail within the tool, including suggestions on settings appropriate to the submitted files and research questions of interest. The spectral values obtained from the analysis enter a roughness calculation model [3-4], outputting roughness values at user-specified points within a file or roughness profiles at user-specified time intervals. The tool offers research background on spectral analysis, auditory roughness, and the algorithms used, including links to relevant publications. Spectral and roughness analysis of sound signals finds applications in music cognition, musical analysis, speech processing, and music teaching research, as well as in medicine and other areas. Presentation of the spectral analysis technique, the roughness estimation model, and the online tool is followed by a discussion of research studies employing the tool and an outline of possible future applications.
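
For illustration, a plain STFT-plus-peak-picking sketch in Python/NumPy of the analysis front end; the tool itself uses an improved STFT [1-2] and Loris components, and the roughness model [3-4] is not reproduced here:

    import numpy as np

    def spectral_peaks(signal, fs, frame_size=8192, threshold_db=-60.0):
        """Return (frequency, level-in-dB) pairs for local maxima of one frame's magnitude spectrum."""
        frame = signal[:frame_size] * np.hanning(frame_size)
        mag = np.abs(np.fft.rfft(frame))
        mag_db = 20 * np.log10(mag / (mag.max() + 1e-12) + 1e-12)
        freqs = np.fft.rfftfreq(frame_size, 1.0 / fs)
        return [(freqs[i], mag_db[i]) for i in range(1, len(mag) - 1)
                if mag_db[i] > threshold_db and mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]

    fs = 44100
    t = np.arange(fs) / fs
    pair = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 466 * t)  # a rough-sounding dyad
    print(spectral_peaks(pair, fs, threshold_db=-20.0))   # two peaks near 440 Hz and 466 Hz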

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849475
Zenodo URL: https://zenodo.org/record/849475


2007.58
Synthesising Singing
Sundberg, Johan   Department of Speech Music Hearing, KTH Royal Institute of Technology; Stockholm, Sweden

Abstract
This is a review of some work carried out over the last decades at the Speech Music Hearing Department, KTH, where the analysis-by-synthesis strategy was applied to singing. The origin of the work was a hardware synthesis machine combined with a control program, which was a modified version of a text-to-speech conversion system. Two applications are described, one concerning vocal loudness variation, the other concerning coloratura singing. In these applications the synthesis work paved the way for investigations of specific aspects of the singing voice. Also, limitations and advantages of singing synthesis are discussed.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849371
Zenodo URL: https://zenodo.org/record/849371


2007.59
Synthesis of a Macro Sound Structure Within a Self Organizing System
Bökesoy, Sinan   CICM, University Paris VIII; Paris, France

Abstract
This paper focuses on synthesizing macro-sound structures with certain ecological attributes, in order to obtain perceptually interesting and compositionally useful results. The system that delivers the sonic result is designed as a self-organizing system. Certain principles of cybernetics are critically assessed in terms of the interdependencies among system components, the system dynamics and the system/environment coupling. The system aims towards a self-evolution of an ecological kind, through an interactive exchange with its external conditions. The macro-organization of the sonic material is a result of interactions of events at the meso and micro levels, but also of this exchange with the environment. The goal is to formulate some new principles and to sketch them here, arriving at a network of concepts that suggests new ideas in sound synthesis.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849419
Zenodo URL: https://zenodo.org/record/849419


2007.60
The Breaking-continuity Paradox in Artificial Vocality Aesthetics
Bossis, Bruno   Laboratoire d'Informatique de Paris 6 (LIP6), Université Paris 1 Panthéon-Sorbonne; Paris, France / MIAC, University of Rennes 2; Rennes, France

Abstract
The first part of this paper shows the breaking, centrifugal force of the technosciences, drawing on examples from the field of vocal music. The questioning of the traditional nomenclature of voices and the intrinsic heterogeneity of the augmented voice are characteristic of this shattering of the usual aesthetic references. The second part explores a direction tending, a contrario, to merge various musical aspects into a single perception: acoustic parameters control one another, and aesthetic dimensions belonging to separate artistic fields meet. This paradoxical connection currently seems to lead to a renewed reading of the aesthetics of the contemporary arts in the light of these technological repercussions.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849453
Zenodo URL: https://zenodo.org/record/849453


2007.61
The Cube: An Audiovisual Interactive Installation
Didakis, Stavros   Sonic Arts Research Centre (SARC), Queen's University Belfast; Belfast, United Kingdom

Abstract
Cross-disciplinary areas, from sonic arts to wireless interfaces and visual graphics, have been combined into one system that supports high-level interaction from the user/participant. An embedded tangible controller is used to navigate between six different audiovisual landscapes, and the system presents aspects of live-gaming performance as the user is able to “unlock” stages and explore virtual worlds. Other interactive approaches, such as mobile technologies and website interaction accompanied by texts, visuals, and sounds, have also been researched and applied to create an immersive environment for communication between the audience and the installation.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849387
Zenodo URL: https://zenodo.org/record/849387


2007.62
The Program in Electronic Music Composition and Musical Production at the School of the Arts of the Polytechnic Institute of Castelo Branco
Guedes, Carlos   Escola Superior de Artes Aplicadas, Instituto Politécnico de Castelo Branco; Castelo Branco, Portugal
Dias, Rui   Escola Superior de Artes Aplicadas, Instituto Politécnico de Castelo Branco; Castelo Branco, Portugal

Abstract
This paper presents the program in electronic music composition and musical production at the School of the Arts of the Polytechnic Institute of Castelo Branco (http://www.esart.ipcb.pt). This study program offers a 1st-cycle degree and has been running since the academic year 2005/2006. At the conference, we will present the curriculum of the program, some recent work by the students, and the next phase in the program's development, which includes making contacts with people and institutions to further develop the program through ERASMUS exchanges of faculty and students, the hiring of new faculty members, and the eventual creation of a European partnership for a 2nd-cycle degree.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849495
Zenodo URL: https://zenodo.org/record/849495


2007.63
Time Domain Pitch Recognition
Chourdakis, Michael G.   Department of Music, University of Athens; Athens, Greece
Spyridis, Haralampos C.   Department of Music, University of Athens; Athens, Greece

Abstract
This paper argues for the power of time-domain methods over frequency-domain methods for signal processing and pitch recognition, and suggests new notation symbols for representing all possible frequencies in European, Byzantine, or electronic (MIDI) notation.
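
For illustration, a standard time-domain pitch estimator (autocorrelation peak picking) in Python/NumPy; it is a generic example of the family of methods the paper advocates, not the authors' algorithm:

    import numpy as np

    def autocorr_pitch(signal, fs, fmin=50.0, fmax=1000.0):
        """Estimate the fundamental frequency from the strongest autocorrelation peak in [fmin, fmax]."""
        signal = signal - signal.mean()
        ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
        lag_min, lag_max = int(fs / fmax), int(fs / fmin)
        best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        return fs / best_lag

    fs = 44100
    tone = np.sin(2 * np.pi * 220 * np.arange(2048) / fs)   # a 220 Hz test tone
    print(round(autocorr_pitch(tone, fs), 1))                # ~220.5 (limited by integer-lag resolution)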

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849471
Zenodo URL: https://zenodo.org/record/849471


2007.64
VEMUS - Feedback and Groupware Technologies for Music Instrument Learning
Fober, Dominique   GRAME - Générateur de Ressources et d’Activités Musicales Exploratoires; Lyon, France
Letz, Stéphane   GRAME - Générateur de Ressources et d’Activités Musicales Exploratoires; Lyon, France
Orlarey, Yann   GRAME - Générateur de Ressources et d’Activités Musicales Exploratoires; Lyon, France

Abstract
VEMUS is a European research project that aims at developing and validating an open music tuition framework for popular wind instruments such as the flute, the saxophone, the clarinet and the recorder. The system addresses students from beginner to intermediate level. It proposes an innovative approach at both the technological and pedagogical levels. This paper presents the project with a specific focus on the feedback technologies developed to extend instrumental and pedagogic practices.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849403
Zenodo URL: https://zenodo.org/record/849403


2007.65
Wirelessly Interfacing With the Yamaha Disklavier Mark IV
Teeter, Matthew   Department of Music, University of California, Irvine; Irvine, United States
Dobrian, Christopher   Department of Music, University of California, Irvine; Irvine, United States

Abstract
The music technology industry has only recently begun to realize the potential of wireless communication technology for the control and communication of data in music and multimedia applications. A new breed of musical devices is starting to integrate technology that allows the wireless transmission of MIDI (Musical Instrument Digital Interface) messages, real-time audio and video data, control data for performance synchronization, and commands for remote hardware control of these instruments. The Yamaha Disklavier Mark IV piano, which debuted in 2004, is the first instrument with built-in wireless capabilities [1]. It communicates via the 802.11b protocol (WiFi), which allows the piano to transmit and receive information to and from nearby wireless controllers. The piano ships with two such controllers: the handheld Pocket Remote Controller (PRC) and the larger Tablet Remote Controller (TRC). Both of these devices are proprietary, closed systems that accomplish the specific function of controlling the piano. In this project, we wished to create platform-independent software with the same functionality as these existing controllers, able to run on a conventional laptop with wireless capabilities. Although this solution has several advantages over the prepackaged ones, it is unsupported by Yamaha because it was developed entirely at the University of California, Irvine. We were able to interface with the Disklavier by sniffing wireless network traffic with Ethereal [2]. We then deciphered this raw information to determine the messaging protocol the Disklavier uses to communicate with the supplied controllers. Once we understood the inner workings of the piano, we created software using a variety of technologies, including Java, PostgreSQL, XML, and Flash. Our software can control a Mark IV piano from a Windows, Mac, or Linux laptop, thereby opening new possibilities in music creation and performance. Although we assume the primary users of our software will be universities, we hope it benefits the music technology industry as a whole.
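
For illustration, the general remote-control pattern only, sketched in Python: open a socket to the instrument on the WLAN and exchange messages. The real Disklavier protocol was reverse-engineered by the authors and is not documented here; the host, port and payload below are placeholders, not the actual values:

    import socket

    PIANO_HOST = "192.168.1.10"   # placeholder address of the instrument on the wireless network
    PIANO_PORT = 50000            # placeholder port, not the actual Disklavier port

    def send_command(payload: bytes) -> bytes:
        """Send one raw command and return whatever the instrument replies."""
        with socket.create_connection((PIANO_HOST, PIANO_PORT), timeout=2.0) as sock:
            sock.sendall(payload)
            return sock.recv(4096)

    # Example call (hypothetical message format): send_command(b"PLAY song01\n")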

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849437
Zenodo URL: https://zenodo.org/record/849437

