Sixteen Years of Sound & Music Computing
A Look Into the History and Trends of the Conference and Community

D.A. Mauro, F. Avanzini, A. Baratè, L.A. Ludovico, S. Ntalampiras, S. Dimitrov, S. Serafin

Papers

Sound and Music Computing Conference 2006 (ed. 3)

Dates: from May 18 to May 19, 2006
Place: Marseille, France
Proceedings info: not available


2006.1
A Computer-assisted Analysis of Rhythmic Periodicity Applied to Two Metric Versions of Luciano Berio's Sequenza VII
Alessandrini, Patricia   Department of Music Composition, Princeton University; Princeton, United States

Abstract
In a computer-assisted analysis, a comparison is made between two metered versions of Luciano Berio’s Sequenza VII: the “supplementary edition” drafted by the oboist Jacqueline Leclair (published in 2001 in combination with the original version), and the oboe part of Berio’s Chemins IV. Algorithmic processes, implemented in the IRCAM OpenMusic environment, search for patterns on two levels: on the level of rhythmic detail, by finding the smallest rhythmic unit able to divide all of the note durations of a given rhythmic sequence; and on a larger scale, in terms of periodicity, or repeated patterns suggesting beats and measures. At the end of the process, temporal grids are constructed at different hierarchical levels of periodicity and compared to the original rhythmic sequences in order to seek instances of correspondence between the grids and the sequences. As the original notation consists of a combination of rhythmic and spatial notation, and the section analyzed in this study consists entirely of spatial notation, this analysis of the metric versions of Sequenza VII investigates to what extent the rhythms and meters put into place are perceptible in terms of periodicity, or are rather made ambiguous by avoiding periodicity.
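The two pattern-searching levels the abstract describes lend themselves to a compact illustration. The sketch below is a hypothetical Python re-implementation, not the author's OpenMusic patches: it finds the smallest rhythmic unit as the greatest common divisor of the note durations, and scores candidate temporal grids by counting coinciding onsets. All values are invented for illustration.

```python
from functools import reduce
from math import gcd

def smallest_rhythmic_unit(durations):
    """GCD of all note durations: the smallest unit that divides
    every duration in the sequence (durations as integer ticks)."""
    return reduce(gcd, durations)

def grid_correspondence(onsets, period, phase=0):
    """Count how many onsets fall on a temporal grid with the given
    period and phase, a crude measure of perceptible periodicity."""
    return sum(1 for t in onsets if (t - phase) % period == 0)

durations = [4, 2, 2, 6, 4, 2]                       # hypothetical sequence
onsets = [sum(durations[:i]) for i in range(len(durations))]
unit = smallest_rhythmic_unit(durations)             # -> 2
scores = {p: grid_correspondence(onsets, p)
          for p in range(unit, max(onsets) + 1, unit)}
print(unit, max(scores, key=scores.get))             # -> 2 2
```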

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849317
Zenodo URL: https://zenodo.org/record/849317


2006.2
A Game Audio Technology Overview
Veneri, Olivier   Centre d'Etudes et de Recherche en Informatique, Conservatoire national des arts et métiers (CNAM); Paris, France / Research and Development Service, France Télécom; Lannion, France
Natkin, Stéphane   Centre d'Etudes et de Recherche en Informatique, Conservatoire national des arts et métiers (CNAM); Paris, France
Le Prado, Cécile   Centre d'Etudes et de Recherche en Informatique, Conservatoire national des arts et métiers (CNAM); Paris, France
Emerit, Marc   Research and Development Service, France Télécom; Lannion, France

Abstract
This paper aims to give an insight into video game audio production. We try to lay down a general framework for this kind of media production in order to discuss and suggest improvements to the current tool chain. The introductory part paints a big picture of game audio creation practice and the constraints that shape it. In the second part we describe the state of the art of software tools and sketch a formal description of game activity. This background leads us to make some assumptions about game audio, suggest future research fields, and argue for the relevance of linking it with the computer music research area.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849319
Zenodo URL: https://zenodo.org/record/849319


2006.3
A Java Framework for Data Sonification and 3D Graphic Rendering
Vicinanza, Domenico   Department of Mathematics and Computer Science, Università degli Studi di Salerno; Salerno, Italy

Abstract
Data audification is the representation of data by means of sound signals (typically waveforms or melodies). Although most data analysis techniques are exclusively visual in nature, data presentation and exploration systems could benefit greatly from the addition of sonification capabilities. In addition, sonic representations are particularly useful when dealing with complex, high-dimensional data, or in data monitoring or analysis tasks where the main goal is the recognition of patterns and recurrent structures. The main goal of this paper is to briefly present the audification process as a mapping between a discrete data set and a discrete set of notes (we shall deal with MIDI representation), and to look at two examples from well-separated fields, geophysics and linguistics (seismogram sonification and text sonification). Finally, the paper presents another example of mapping between data sets and other information, namely 3D graphic image generation (rendered with POVRay) driven by ASCII text.
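As a concrete illustration of the mapping the abstract describes, the sketch below scales a discrete data series linearly onto a discrete range of MIDI note numbers. The function name, data, and note range are hypothetical, not taken from the paper's Java framework.

```python
def sonify(data, low=48, high=84):
    """Map each data sample linearly to a MIDI note number in [low, high]."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1                      # avoid division by zero on flat data
    return [low + round((x - lo) / span * (high - low)) for x in data]

seismogram = [0.0, 0.3, -0.2, 0.9, 0.1, -0.7]  # hypothetical samples
print(sonify(seismogram))                      # -> [64, 70, 59, 84, 66, 48]
```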

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849321
Zenodo URL: https://zenodo.org/record/849321


2006.4
A model for graphical interaction applied to gestural control of sound
Couturier, Jean-Michel   Blue Yeti; St Georges de Didonne, France

Abstract
This paper is about the use of an interaction model to describe digital musical instruments which have a graphical interface. Interaction models have been developed in the Human-Computer Interaction (HCI) field to provide a framework guiding designers and developers in building interactive systems. But digital musical instruments are very specific interactive systems: the user is totally in charge of the action and has to control multiple continuous parameters simultaneously. First, this paper introduces the specificities of digital musical instruments and how graphical interfaces are used in some of these instruments. Then, an interaction model called Instrumental Interaction is presented; this model is then refined to take the musical context into account. Finally, some examples of digital musical instruments are introduced and described with this model.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: Missing
Zenodo URL: Missing


2006.5
An XML-based Format for Advanced Music Fruition
Baratè, Adriano   Laboratorio di Informatica Musicale (LIM), Dipartimento di Informatica e Comunicazione (DICo), Università degli Studi di Milano; Milano, Italy
Haus, Goffredo   Laboratorio di Informatica Musicale (LIM), Dipartimento di Informatica e Comunicazione (DICo), Università degli Studi di Milano; Milano, Italy
Ludovico, Luca Andrea   Laboratorio di Informatica Musicale (LIM), Dipartimento di Informatica e Comunicazione (DICo), Università degli Studi di Milano; Milano, Italy

Abstract
This paper describes an XML-based format that allows an advanced fruition of music contents. Thanks to this format, namely MX (IEEE PAR1599), and to the implementation of ad hoc interfaces, users can enjoy music from different points of view: the same piece can be described through different scores, video and audio performances, mutually synchronized. The purpose of this paper is to point out the basic concepts of our XML encoding and to present the process required to create rich multimedia descriptions of a music piece in MX format. Finally, a working application to play and view MX files is presented.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849323
Zenodo URL: https://zenodo.org/record/849323


2006.6
Automatic Composition and Notation in Network Music Environments
Hajdu, Georg   Hochschule für Musik und Theater Hamburg; Hamburg, Germany

Abstract
Using real-time notation in network music performance environments adds novel dimensions to man-machine interaction. After a 200-year history of algorithmic composition and a 30-year history of network music performance, a number of performance environments have recently been developed which allow performers to read music composed in real time off a computer monitor. In the pieces written for these environments, the musicians are expected to improvise on abstract graphical symbols and/or to sight-read the score in standard music notation. Quintet.net, a network performance environment conceived in 1999 and used for several projects involving Internet as well as local network concerts, has built-in notation capabilities, which make the environment ideal for this type of music. The search for an ideal notation format, for which several known formats were compared, was an important aspect of developing the Conductor component of Quintet.net, a component that reads and streams parts to the Client and Listener components. In real-time composition, these parts need to be generated automatically. Different scenarios can therefore be envisaged, which are either automatic or interactive, with the players shaping the outcome of a piece by their performance.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849325
Zenodo URL: https://zenodo.org/record/849325


2006.7
Composing Audio-visual Art: The Issue of Time and the Concept of Transduction
Courribet, Benoît   CICM, University Paris VIII; Paris, France

Abstract
In this article we study the relations between images and sounds in the field of cross-media composition. The article aims to give some elements of reflection on the issue of time. The notion of the temporal object is presented, as are the mechanisms driving the perception of these objects. The issue of time is then discussed through the study of relations between audio and visual media, and several examples of mapping strategies are presented. Finally, the concept of transduction is introduced.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849327
Zenodo URL: https://zenodo.org/record/849327


2006.8
Computer-aided Transformational Analysis With Tone Sieves
Noll, Thomas   Escola Superior de Música de Catalunya (ESMUC); Barcelona, Spain
Andreatta, Moreno   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Agon, Carlos   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Amiot, Emmanuel   CPGE, Lycée Privé Notre Dame de Bon Secours; Perpignan, France

Abstract
Xenakis’ tone sieves are among the first examples of theoretical tools whose implementational character has contributed to the development of computation in music and musicology. Following Xenakis’ original intuition, we distinguish between elementary sieves and compound ones and trace the definition of sieve transformations along the sieve construction. This makes sense if the sieve construction is considered part of the musical meaning. We explore this by analyzing Scriabin’s Study for piano Op. 65 No. 3 by means of intersections and unions of whole-tone and octatonic sieves. On the basis of this example we also demonstrate some aspects of the implementation of sieve theory in OpenMusic and thereby suggest further applications in computer-aided music analysis.
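For readers unfamiliar with sieve notation, the sketch below, an illustrative Python re-implementation rather than the authors' OpenMusic code, builds elementary sieves as residue classes over pitch space and derives the whole-tone and octatonic collections used in the Scriabin analysis as compound sieves.

```python
PITCHES = range(128)                              # MIDI pitch space

def elementary(m, r):
    """Elementary sieve m@r: all pitches p with p mod m == r."""
    return frozenset(p for p in PITCHES if p % m == r)

# Compound sieves are unions and intersections of elementary ones.
whole_tone = elementary(2, 0)                     # pitch classes {0,2,4,6,8,10}
octatonic = elementary(3, 0) | elementary(3, 1)   # pitch classes {0,1,3,4,6,7,9,10}

common = whole_tone & octatonic                   # tones shared by both collections
print(sorted({p % 12 for p in common}))           # -> [0, 4, 6, 10]
```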

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849331
Zenodo URL: https://zenodo.org/record/849331


2006.9
Computer Vision Method for Guitarist Fingering Retrieval
Burns, Anne-Marie   Schulich School of Music, McGill University; Montreal, Canada
Wanderley, Marcelo M.   Schulich School of Music, McGill University; Montreal, Canada

Abstract
This article presents a method to visually detect and recognize fingering gestures of the left hand of a guitarist. This method has been developed following preliminary manual and automated analysis of video recordings of a guitarist. These first analyses led to some important findings about the design methodology of such a system, namely the focus on the effective gesture, the consideration of the action of each individual finger, and a recognition system that does not rely on comparison against a knowledge base of previously learned fingering positions. Motivated by these results, studies on three important aspects of a complete fingering system were conducted: one on finger tracking, another on string and fret detection, and the last on movement segmentation. Finally, these concepts were integrated into a prototype, and a system for left-hand fingering detection was developed.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849329
Zenodo URL: https://zenodo.org/record/849329


2006.10
Concurrent Constraints Models for Specifying Interactive Scores
Allombert, Antoine   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Assayag, Gérard   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Desainte-Catherine, Myriam   Université Bordeaux-I; Bordeaux, France
Rueda, Camilo   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France / Pontificia Universidad Javeriana; Bogotá, Colombia

Abstract
We propose a formalism for the construction and performance of musical pieces composed of temporal structures involving discrete interactive events. The occurrence in time of these structures and events is partially defined according to constraints, such as Allen temporal relations. We represent the temporal structures using two constraint models. A constraint propagation model is used for the score composition stage, while a non-deterministic temporal concurrent constraint calculus (NTCC) is used for the performance phase. The models are tested with examples of temporal structures computed with the GECODE constraint system library and run with an NTCC interpreter.
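As a small illustration of the kind of constraint involved, the sketch below classifies the Allen relation holding between two intervals. It is a plain Python check, not the constraint propagation or NTCC machinery the paper actually uses; it assumes each interval satisfies start < end, and the "-inverse" naming for the six converse relations is our own shorthand.

```python
def allen(a, b):
    """Return the Allen relation from interval a to interval b,
    each a (start, end) pair with start < end."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:                   return "before"
    if e1 == s2:                  return "meets"
    if s1 < s2 < e1 < e2:         return "overlaps"
    if s1 == s2 and e1 < e2:      return "starts"
    if s2 < s1 and e1 < e2:       return "during"
    if s2 < s1 and e1 == e2:      return "finishes"
    if s1 == s2 and e1 == e2:     return "equals"
    return allen(b, a) + "-inverse"   # the six converse relations

print(allen((0, 4), (4, 9)))   # -> meets
print(allen((2, 6), (0, 8)))   # -> during
print(allen((5, 9), (0, 3)))   # -> before-inverse (i.e. after)
```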

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849333
Zenodo URL: https://zenodo.org/record/849333


2006.11
Exploring the Effect of Mapping Trajectories on Musical Performance
Van Nort, Doug   Sound Processing and Control Laboratory (SPCL), Schulich School of Music, Music Technology Area, McGill University; Montreal, Canada
Wanderley, Marcelo M.   Input Devices and Music Interaction Laboratory (IDMIL), Schulich School of Music, Music Technology Area, McGill University; Montreal, Canada

Abstract
The role of mapping as a determinant of expressivity is examined. Issues surrounding the mapping of real-time control parameters to sound synthesis parameters are discussed, including several representations of the problem. Finally, a study is presented which examines the effect of mapping on musical expressivity, on the ability to navigate sonic explorations, and on visual feedback.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849335
Zenodo URL: https://zenodo.org/record/849335


2006.12
Extracting Musically-relevant Rhythmic Information From Dance Movement by Applying Pitch Tracking Techniques to a Video Signal
Guedes, Carlos   School of Music and Performing Arts (ESMAE), P.Porto (Instituto Politécnico do Porto); Porto, Portugal / Escola Superior de Artes Aplicadas, Instituto Politécnico de Castelo Branco; Castelo Branco, Portugal

Abstract
In this paper, I discuss in detail the approach taken in the implementation of two external objects in Max/MSP [11] that can extract musically relevant rhythmic information from dance movement as captured by a video camera. These objects perform certain types of analysis on the digitized video stream and can enable dancers to generate musical rhythmic structures and/or to control the musical tempo of an electronically generated sequence in real time. One of the objects, m.bandit, implements an algorithm that computes a spectral representation of the frame-differencing video analysis signal and calculates its fundamental frequency in a fashion akin to pitch tracking. The fundamental frequency of the signal, as calculated by this object, can be treated as a beat candidate and sent to another object, m.clock, an adaptive clock that can adjust the tempo of a musical sequence being played, thereby enabling the dancer to control the musical tempo in real time.
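The pitch-tracking analogy can be made concrete in a few lines. The sketch below, an illustrative NumPy version and not the m.bandit external itself, sums the absolute frame differences of a grayscale video into a motion signal and reads the strongest non-DC spectral peak as a beat candidate; the synthetic test video is invented.

```python
import numpy as np

def beat_candidate_bpm(frames, fps=25.0):
    """frames: array (n_frames, h, w). Frame-difference the video,
    take the spectrum of the motion signal, return its peak in BPM."""
    motion = np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))
    motion = motion - motion.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(motion))
    freqs = np.fft.rfftfreq(len(motion), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum[1:]) + 1]  # skip the 0 Hz bin

# Hypothetical test: the scene changes during half of each 2 Hz cycle.
rng = np.random.default_rng(0)
fps, n = 25, 250
frames, img = np.zeros((n, 8, 8)), rng.random((8, 8))
for i in range(n):
    if np.sin(2 * np.pi * 2.0 * i / fps) > 0:         # "dancer" is moving
        img = rng.random((8, 8))
    frames[i] = img
print(beat_candidate_bpm(frames, fps))                # -> ~120 BPM
```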

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849337
Zenodo URL: https://zenodo.org/record/849337


2006.13
First Steps Towards a Digital Assistant for Performers and Stage Directors
Bonardi, Alain   Maison des Sciences de l'Homme Paris Nord; La Plaine St-Denis, France
Truck, Isis   Maison des Sciences de l'Homme Paris Nord; La Plaine St-Denis, France

Abstract
In this article, we present the first steps of our research work to design a Virtual Assistant for Performers and Stage Directors. Our aim is to be able to give automatic feedback on stage performances. We collect video and sound data from numerous performances of the same show, from which it should be possible to visualize the emotions and intents, or more precisely “intent graphs”. To achieve this, the collected data defining low-level descriptors are aggregated and converted into high-level characterizations. Then, depending on the retrieved data and their distribution along each axis, we partition the universes into classes. The last step is the building of the fuzzy rules, which are obtained from the classes and which permit the detection of emotional states.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849339
Zenodo URL: https://zenodo.org/record/849339


2006.14
Generating Chants Using Mnemonic Capabilities
Iyengar, Sudharsan   Winona State University; Winona, United States

Abstract
A chant is a simple repetitive song in which syllables may be assigned to a single tone. Additionally, chants may be rhythmic and include a simple melody. Chants can be considered speech or music that can convey emotion. A chant can also be monotonous, droning, and tedious. Fundamental to a chant is the notion of timing and note patterns. We present here a framework for the synthesis of chants of music notes: chants without syllables from spoken language. We introduced Mnemonic capabilities in [1] and utilize these to systematically generate chants. We illustrate our ideas with examples using the note set {C, D, E, G, A, φ (silence)}, a pentatonic scale (built from perfect fifths). First, we define and structure the use of timing and notes to develop the chant strings. Then we propose the use of mnemonics to develop different styles of musical chants. Finally, we suggest the adoption of intonations and syllables for the controlled generation of musical chants.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849341
Zenodo URL: https://zenodo.org/record/849341


2006.15
Imitative and Generative Orchestrations Using Pre-analysed Sounds Databases
Carpentier, Grégoire   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Tardieu, Damien   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Assayag, Gérard   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Rodet, Xavier   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Saint-James, Emmanuel   Laboratoire d'Informatique de Paris 6 (LIP6), Université Paris 1 Panthéon-Sorbonne; Paris, France

Abstract
In this paper we will introduce a tool aimed at assisting composers in orchestration tasks. Thanks to this tool, composers can specify a target sound and replicate it with a given, pre-determined orchestra. This target sound, defined as a set of audio and symbolic features, can be constructed either by analysing a pre-recorded sound, or through a compositional process. We will then describe an orchestration procedure that uses large pre-analysed instrumental sound databases to offer composers a set of sound combinations. This procedure relies on a set of features that describe different aspects of the sound. Our purpose is not to build an exhaustive sound description, but rather to design a general framework which can be easily extended by adding new features when needed.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849343
Zenodo URL: https://zenodo.org/record/849343


2006.16
Interaction and Spatialization: Three Recent Musical Works
Risset, Jean-Claude   Laboratoire de Mécanique et d'Acoustique (LMA); Marseille, France

Abstract
This presentation describes the use of interaction and spatialization in the realization of three recent musical works. In Echappées, the computer responds to the live Celtic harp with echoes or harmonizations produced in Max/MSP. Resonant Sound Spaces and Pentacle resort to tape, either alone for the former or in dialogue with the harpsichord played live for the latter. For these two pieces, real-time synthesis and processing were used to produce sound material for the tape, and an 8-track spatialization was elaborated using the Holophon software designed by Laurent Pottier at GMEM. The presentation will be illustrated with sound examples.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849345
Zenodo URL: https://zenodo.org/record/849345


2006.17
Kinetic Engine: Toward an Intelligent Improvising Instrument
Eigenfeldt, Arne   School for the Contemporary Arts, Simon Fraser University; Burnaby, Canada

Abstract
Kinetic Engine is an improvising instrument that generates complex rhythms. It is an interactive performance system that models a drum ensemble, comprising four agents (Players) that collaborate to create complex rhythms. Each agent performs a role and makes decisions according to its understanding of that role. Furthermore, the four Players are coordinated by a software conductor in a hierarchical structure.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849347
Zenodo URL: https://zenodo.org/record/849347


2006.18
Minimalism and Process Music: A PureData Realization of Pendulum Music
Coco, Remigio   Conservatorio Statale di Musica “O. Respighi” di Latina; Latina, Italy

Abstract
Minimalist music often made use of “processes” to express a musical idea. In our “computer-oriented” terms, we can define the musical processes of minimalist composers as predefined sequences of operations, or, in other words, as “compositional algorithms”. In this presentation, one important “process-based” work of the first period of minimalism, “Pendulum Music” by Steve Reich, has been re-created from scratch, using only a personal computer. The implementation of the “process”, as well as the simulation of the Larsen effect, has been made with Pure Data, an Open Source program widely available, and it is explained in detail hereafter. The main goal of this work was not to make a perfect reconstruction of the piece, but to recreate the compositional design and to focus on the musical aspects of the process itself. Therefore, no rigorous validation method has been designed for the simulation; instead, the audio results have been compared empirically with a recorded version.
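The process itself reduces to simple arithmetic, which may help readers picture it before opening the Pure Data patch: as each swing decays, the suspended microphone spends longer within the loudspeaker's feedback zone on every pass, so the Larsen bursts lengthen until the tone becomes continuous. The sketch below is an illustrative back-of-the-envelope Python model with invented dimensions, not the author's patch.

```python
import math

def burst_duration(amplitude, d, period):
    """Seconds per pass that a mic swinging as x(t) = A*sin(2*pi*t/T)
    spends within distance d of the speaker below its rest point."""
    return (period / math.pi) * math.asin(min(1.0, d / amplitude))

period = 2 * math.pi * math.sqrt(1.5 / 9.81)   # ~2.46 s for a 1.5 m cable
for amplitude in (0.5, 0.2, 0.1, 0.05):        # the swing decaying over time
    print(round(burst_duration(amplitude, 0.05, period), 2))
# -> 0.08, 0.2, 0.41, 1.23 (half a period: the feedback is continuous)
```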

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849349
Zenodo URL: https://zenodo.org/record/849349


2006.19
Music Structure Representation: A Formal Model
Tagliolato, Paolo   Laboratorio di Informatica Musicale (LIM), Dipartimento di Informatica e Comunicazione (DICo), Università degli Studi di Milano; Milano, Italy

Abstract
In the present work we introduce a general formal (object-oriented) model for the representation of musical structure information, taking into account some common features of the information conveyed by music analysis. The model is suitable for representing many different kinds of musical analytical entities. As an example of both musical and mathematical-computational relevance, we introduce David Lewin’s GIS (Generalized Interval System) theory. A GIS is equivalent to a particular kind of group action on a set: we show that other related structures can be treated in a similar manner. We conclude with a prototype implementation of these ideas in MX, an XML format for music information.
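Lewin's construction can be stated very compactly. The sketch below, a hypothetical Python rendering rather than the paper's MX encoding, shows the canonical GIS of twelve pitch classes with the interval group Z/12 acting by transposition, and checks the defining composition law int(s,t) + int(t,u) = int(s,u).

```python
MOD = 12  # the interval group Z/12 of the pitch-class GIS

def interval(s, t):
    """int(s, t): the interval leading from pitch class s to t."""
    return (t - s) % MOD

def transpose(s, i):
    """The group action: move pitch class s by interval i."""
    return (s + i) % MOD

s, t, u = 0, 4, 7                                  # C, E, G
assert (interval(s, t) + interval(t, u)) % MOD == interval(s, u)
assert transpose(s, interval(s, t)) == t           # intervals act simply transitively
print(interval(0, 7), transpose(5, 7))             # -> 7 0
```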

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849351
Zenodo URL: https://zenodo.org/record/849351


2006.20
OMAX-OFON
Assayag, Gérard   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Bloch, Georges   University of Strasbourg; Strasbourg, France
Chemillier, Marc   University of Caen; Caen, France

Abstract
We describe an architecture for an improvisation-oriented musician-machine interaction system. The virtual improvisation kernel is based on a statistical learning model. The working system involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, which are put to work and communicate together. The MIDI-based OMAX system is described first, followed by OFON, its extension to audio.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849353
Zenodo URL: https://zenodo.org/record/849353


2006.21
Real-time Control of Greek Chant Synthesis
Zannos, Ioannis   Department of Audiovisual Arts, Ionian University; Kerkyra, Greece
Georgaki, Anastasia   Department of Musicology, University of Athens; Athens, Greece
Delviniotis, Dimitrios   Department of Informatics and Telecommunications, University of Athens; Athens, Greece
Kouroupetroglou, Georgios   Department of Informatics and Telecommunications, University of Athens; Athens, Greece

Abstract
In this paper we report on an interdisciplinary project for modeling Greek chant with real-time vocal synthesis. Building on previous research, we employ a hybrid musical instrument, the Phonodeon (Georgaki et al. 2005), consisting of a MIDI accordion coupled to a real-time algorithmic interaction and vocal synthesis engine. The synthesis is based on data provided by the AOIDOS program developed in the Department of Computer Science of the University of Athens, investigating Greek liturgical chant in comparison with bel canto singing. The Phonodeon controls expressive vocal synthesis models based on formant synthesis and concatenated filtered samples. Its bellows serve as a hardware control device that is physically analogous to the human breathing mechanism [Georgaki, 1998a], while the buttons of the right hand can serve multiple functions. On the level of pitch structure, this paper focuses on a particular aspect of control, namely that of playing in the traditional non-tempered and flexible interval structure of the Greek modes (ήχοι: echoi) while using the 12-semitone piano-type keyboard of the left hand. This enables the musical exploration of the relationship between the spectral structure of the vocal timbre of Greek chant and the characteristic intervals occurring in the modal structure of the chant. To implement this, we developed techniques for superimposing the interval patterns of the modes on the keyboard of the Phonodeon. The work is the first comprehensive interactive model of antique, medieval and modern Near-Eastern tunings. The techniques developed can be combined with techniques for other control aspects, such as timbre and vocal expression control, recall and combination of phoneme or (expressive/ornamental/melodic pattern, inflection) sequences, data recording on/off, and others, which form part of the Phonodeon project. On the level of timbre and expression, we make use of data obtained by analysis of audio samples of chanting as control sources for synthesis by concatenation of control data, thereby providing an example of a real-time application of Diphone techniques (Rodet and Lefevre). This research can find applications in many computer music fields, such as algorithmically controlled improvisation, microtonal music, music theory and notation of (algorithmic/computerized) real-time performance, and computer modeling of experimental or non-Western musical styles.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849355
Zenodo URL: https://zenodo.org/record/849355


2006.22
Sound and Music Design for Games and Interactive Media
Natkin, Stéphane   Graduate School of Games and Interactive Media, Conservatoire national des arts et métiers (CNAM); Angoulême, France
Le Prado, Cécile   Centre d'Etudes et de Recherche en Informatique et Communication, Conservatoire national des arts et métiers (CNAM); Paris, France

Abstract
This paper presents the sound and music specialty of the master's programme at ENJMIN (Ecole Nationale des Jeux et Media Interactifs Numériques: Graduate School of Games and Interactive Media, http://www.enjmin.fr). The sound specialty is open to composers, musicians, sound engineers, and sound designers. The main goals of the school are to teach the use of sound and music in interactive media in general and video games in particular, and the ability to work in multidisciplinary teams. In the first section we present the goals of the master's programme as a whole. The second section is devoted to the description of the sound programme. The third section presents other activities of the school (continuing training, international partnerships, research). The studio presentation will develop these topics through demos of several student projects.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849357
Zenodo URL: https://zenodo.org/record/849357


2006.23
Sound Texture Modeling: A Survey
Strobl, Gerda   Institute of Electronic Music and Acoustics (IEM), University of Music and Performing Arts (KUG); Graz, Austria
Eckel, Gerhard   Institute of Electronic Music and Acoustics (IEM), University of Music and Performing Arts (KUG); Graz, Austria
Rocchesso, Davide   Department of Computer Science, Università di Verona; Verona, Italy

Abstract
Sound texture modeling is a widely used concept in computer music that has a well-analyzed counterpart in image processing. We report on the current state of different sound texture generation methods and try to outline common problems with the sound texture examples. Published results pursue different kinds of analysis/re-synthesis approaches, which can be divided into methods that try to transfer existing techniques from computer graphics and methods that take advantage of well-known techniques found in common computer music systems. Furthermore, we present the idea of a new texture generator framework, in which different analysis and synthesis tools can be combined and tested with the goal of producing high-quality sound examples.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849359
Zenodo URL: https://zenodo.org/record/849359


2006.24
Temporal Control Over Sound Synthesis Processes
Bresson, Jean   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France
Agon, Carlos   Music Representations Team, Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
This article addresses the problem of the representation of time in computer-assisted sound composition. We try to point out the specific temporal characteristics of sound synthesis processes, in order to propose solutions for a compositional approach using symbolic models and representations.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849361
Zenodo URL: https://zenodo.org/record/849361


2006.25
The Case Study of an Application of the System, "BodySuit" and "RoboticMusic" - Its Introduction and Aesthetics
Goto, Suguru   Institut de Recherche et Coordination Acoustique/Musique (IRCAM); Paris, France

Abstract
This paper introduces a system combining "BodySuit" and "RoboticMusic", as well as its possibilities and its uses in an artistic application. "BodySuit" refers to a data-suit-type gesture controller. "RoboticMusic" refers to percussion robots of a humanoid type. In this paper, I discuss their aesthetics and concept, as well as the idea of the "Extended Body".

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849363
Zenodo URL: https://zenodo.org/record/849363


2006.26
Trends In/Over Time: Rhythm in Speech and Musical Melody in 19th-century Art Song
VanHandel, Leigh   School of Music, Michigan State University; East Lansing, United States

Abstract
This paper presents the results of a quantitative study of the relationship between the rhythmic characteristics of spoken German and French and the rhythm of musical melody in 19th-century art song. The study used a modified version of the normalized Pairwise Variability Index, or nPVI, to measure the amount of variability between successive rhythmic events in the melodies of over 600 songs by 19 French and German composers. The study returned an unexpected result: songs written to texts in the two languages exhibited sharply diverging trends as a function of time through the 19th century. This trend is reflected both overall and for individual composers.
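For reference, the standard (unmodified) nPVI on which the study's measure is based is straightforward to compute. The sketch below is a generic Python version with invented durations, not the modified index used in the paper.

```python
def npvi(durations):
    """nPVI = 100/(m-1) * sum over adjacent pairs of
    |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    terms = [abs(a - b) / ((a + b) / 2)
             for a, b in zip(durations, durations[1:])]
    return 100 * sum(terms) / len(terms)

print(npvi([1, 1, 1, 1]))         # perfectly even rhythm -> 0.0
print(round(npvi([3, 1, 3, 1])))  # strongly contrastive (dotted) -> 100
```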

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849365
Zenodo URL: https://zenodo.org/record/849365


2006.27
Using evolving physical models for musical creation in the Genesis environment
Cadoz, Claude   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France
Tache, Olivier   ICA Laboratory, Association pour la Création et la Recherche sur les Outils d’Expression (ACROE), Grenoble Institute of Technology (Grenoble INP); Grenoble, France

Abstract
Physical modelling schemes are well known for their ability to generate plausible sounds, i.e. sounds that are perceived as being produced by physical objects. As a result, a large part of physical modelling research is devoted to the realistic synthesis of real-world sounds and has a strong link with instrumental acoustics. However, we have shown that physical modelling not only addresses sound synthesis, but also musical composition. In particular, mass-interaction physical modelling has been presented as enabling the musician to work both on sound production (i.e. microstructure) and on event organization (i.e. macrostructure). This article presents a method for building mass-interaction models whose physical structure evolves during the simulation. Structural evolution is implemented in a physically consistent manner, by using nonlinear interactions that set up temporary viscoelastic links between independent models. This yields new possibilities for music creation, particularly for the generation of complex sound sequences that exhibit an intimate articulation between the micro- and the macrostructure.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: Missing
Zenodo URL: Missing


2006.28
Vocabulaire de l'Espace et de la Spatialisation des Musiques Électroacoustiques: Présentation, Problématique et Taxinomie De l'Espace
Merlier, Bertrand   Department of Music, University Lumière Lyon 2; Lyon, France

Abstract
The imminent publication of the "Vocabulaire de l'espace et de la spatialisation des musiques électroacoustiques" (Vocabulary of Space and Spatialization in Electroacoustic Music) is first of all an occasion to take stock of the activities of GETEME (Groupe d'Étude sur l'Espace dans les Musiques Électroacoustiques). The presentation of a few essential terms and their definitions will give an idea of the content of the work. But the main objective of this communication lies in the analysis of the work carried out and the methods employed. The difficulties encountered in preparing this document (documentary sources, collection, sifting, formatting, definitions in use that are vague or divergent...) led us to produce a taxonomy of space and spatialization. This classification will be presented in detail. Through various examples, we will see how this tool makes it possible to clear up the confusion in certain definitions and how it helps avoid omissions. In conclusion, we will consider some future research: the "vocabulary" will certainly be an important source of debate, reflection, and formalization.

Keywords
not available

Paper topics
not available

Easychair keyphrases
not available

Paper type
unknown

DOI: 10.5281/zenodo.849367
Zenodo URL: https://zenodo.org/record/849367

