Dates: from May 18 to May 19, 2006
Place: Marseille, France
Proceedings info: not available
Abstract
In a computer-assisted analysis, a comparison is made between two metered versions of Luciano Berio’s Sequenza VII: the “supplementary edition” drafted by the oboist Jacqueline Leclair (published in 2001 in combination with the original version), and the oboe part of Berio’s Chemins IV. Algorithmic processes, implemented in the environment of the IRCAM program OpenMusic, search for patterns on two levels: at the level of rhythmic detail, by finding the smallest rhythmic unit able to divide all of the note durations of a given rhythmic sequence; and on a larger scale, in terms of periodicity, or repeated patterns suggesting beats and measures. At the end of the process, temporal grids are constructed at different hierarchical levels of periodicity and compared to the original rhythmic sequences in order to seek instances of correspondence between the grids and the sequences. As the original notation combines rhythmic and spatial notation, and the section analyzed in this study consists entirely of spatial notation, this analysis of the metered versions of Sequenza VII investigates to what extent the rhythms and meters put into place are perceptible in terms of periodicity, or are instead made ambiguous by the avoidance of periodicity.
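A minimal sketch of the “smallest rhythmic unit” step, assuming note durations expressed as exact fractions of a beat (Python rather than OpenMusic; the duration values below are made up for illustration):

    from fractions import Fraction
    from functools import reduce

    def smallest_rhythmic_unit(durations):
        """Greatest common divisor of a list of durations (Fractions): the
        largest unit that divides every duration exactly, i.e. the finest
        grid onto which the sequence can be quantized."""
        def gcd(a, b):
            while b:
                a, b = b, a % b
            return a
        return reduce(gcd, durations)

    # Hypothetical sequence, in fractions of a quarter note.
    seq = [Fraction(3, 4), Fraction(1, 2), Fraction(5, 4), Fraction(1, 4)]
    unit = smallest_rhythmic_unit(seq)   # -> Fraction(1, 4)
    grid = [d / unit for d in seq]       # durations as integer multiples of the unit
    print(unit, grid)

The larger-scale periodicity search then looks for repeated patterns within such integer grids.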
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849317
Zenodo URL: https://zenodo.org/record/849317
Abstract
This paper intends to give an insight into video game audio production. We try to lay down a general framework for this kind of media production in order to discuss and suggest improvements to the current tool chain. The introductory part gives a broad picture of game audio creation practice and the constraints that shape it. In the second part we describe the state of the art of software tools and sketch a formal description of game activity. This background leads us to make some assumptions about game audio, suggest future research directions, and argue for the relevance of linking them with the field of computer music research.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849319
Zenodo URL: https://zenodo.org/record/849319
Abstract
Data audification is the representation of data by means of sound signals (typically waveforms or melodies). Although most data analysis techniques are exclusively visual in nature, data presentation and exploration systems could benefit greatly from the addition of sonification capabilities. In addition, sonic representations are particularly useful when dealing with complex, high-dimensional data, or in data monitoring or analysis tasks where the main goal is the recognition of patterns and recurrent structures. The main goal of this paper is to briefly present the audification process as a mapping between a discrete data set and a discrete set of notes (we shall deal with MIDI representation), and to look at examples from two distinct fields, geophysics and linguistics (seismogram sonification and text sonification). Finally, the paper presents another example of mapping between data sets and other information, namely 3D graphic image generation (rendered with POVRay) driven by ASCII text.
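A minimal sketch of such a data-to-MIDI mapping, assuming a simple linear rescaling of the data range onto a pitch range (hypothetical function and values, not the authors' actual mapping):

    def audify(values, low=48, high=84):
        """Map a discrete data series onto MIDI note numbers by linearly
        rescaling the data range onto the pitch interval [low, high]."""
        vmin, vmax = min(values), max(values)
        span = (vmax - vmin) or 1  # guard against constant data
        return [round(low + (v - vmin) / span * (high - low)) for v in values]

    # Made-up seismogram samples (arbitrary units) turned into a note sequence.
    samples = [0.02, 0.10, -0.30, 0.75, 0.40, -0.05]
    print(audify(samples))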
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849321
Zenodo URL: https://zenodo.org/record/849321
Abstract
This paper is about the use of an interaction model to describe digital musical instruments that have a graphical interface. Interaction models have been developed in the Human-Computer Interaction (HCI) field to provide a framework that guides designers and developers in building interactive systems. But digital musical instruments are very specific interactive systems: the user is entirely in charge of the action and has to control multiple continuous parameters simultaneously. First, this paper introduces the specificities of digital musical instruments and how graphical interfaces are used in some of these instruments. Then, an interaction model called Instrumental Interaction is presented; this model is then refined to take the musical context into account. Finally, some examples of digital musical instruments are introduced and described with this model.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: not available
Zenodo URL: not available
Abstract
This paper describes an XML-based format that enables rich access to music content. Thanks to this format, namely MX (IEEE PAR1599), and to the implementation of ad hoc interfaces, users can enjoy music from different points of view: the same piece can be described through different scores, video and audio performances, all mutually synchronized. The purpose of this paper is to point out the basic concepts of our XML encoding and to present the process required to create rich multimedia descriptions of a music piece in MX format. Finally, a working application to play and view MX files is presented.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849323
Zenodo URL: https://zenodo.org/record/849323
Abstract
Using real-time notation in network music performance environments adds novel dimensions to man-machine interaction. After a 200-year history of algorithmic composition and a 30-year history of network music performance, a number of performance environments have recently been developed which allow performers to read music composed in real time off a computer monitor. In the pieces written for these environments, the musicians are expected to improvise to abstract graphical symbols and/or to sight-read the score in standard music notation. Quintet.net, a network performance environment conceived in 1999 and used for several projects involving Internet as well as local network concerts, has built-in notation capabilities, which makes the environment ideal for this type of music. The search for an ideal notation format, for which several known formats were compared, was an important aspect during the development of the Conductor component of Quintet.net, the component that reads and streams parts to the Client and Listener components. In real-time composition, these parts need to be generated automatically. Different scenarios can therefore be envisaged, which are either automatic or interactive, with the players shaping the outcome of a piece through their performance.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849325
Zenodo URL: https://zenodo.org/record/849325
Abstract
In this article we study the relations between images and sounds in the field of cross-media composition. The article aims to provide some elements of reflection on the issue of time. The notion of the temporal object is presented, together with the mechanisms driving the perception of such objects. The issue of time is then discussed through the study of relations between audio and visual media, and several examples of mapping strategies are presented. Finally, the concept of transduction is introduced.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849327
Zenodo URL: https://zenodo.org/record/849327
Abstract
Xenakis’ tone sieves are among the first examples of theoretical tools whose implementational character has contributed to the development of computation in music and musicology. Following Xenakis’ original intuition, we distinguish between elementary and compound sieves and trace the definition of sieve transformations along the sieve construction. This makes sense if the sieve construction is considered as part of the musical meaning. We explore this by analyzing Scriabin’s Study for piano Op. 65 No. 3 by means of intersections and unions of whole-tone and octatonic sieves. On the basis of this example we also demonstrate some aspects of the implementation of sieve theory in OpenMusic and thereby suggest further applications in computer-aided music analysis.
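For readers unfamiliar with the formalism: an elementary sieve is a residue class modulo some integer, and compound sieves are unions and intersections of such classes. A minimal illustration in Python (not the OpenMusic implementation discussed in the paper), using one standard construction of the whole-tone and octatonic collections:

    def sieve(modulus, residue, universe=range(12)):
        """Elementary sieve: all elements of the universe congruent to
        `residue` modulo `modulus`."""
        return {n for n in universe if n % modulus == residue}

    # Compound sieves are built with set union (|) and intersection (&).
    whole_tone = sieve(2, 0)               # {0, 2, 4, 6, 8, 10}
    octatonic = sieve(3, 0) | sieve(3, 1)  # {0, 1, 3, 4, 6, 7, 9, 10}
    shared = whole_tone & octatonic        # pitch classes common to both
    print(sorted(whole_tone), sorted(octatonic), sorted(shared))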
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849331
Zenodo URL: https://zenodo.org/record/849331
Abstract
This article presents a method to visually detect and recognize fingering gestures of the left hand of a guitarist. The method was developed following preliminary manual and automated analyses of video recordings of a guitarist. These first analyses led to some important findings about the design methodology of such a system, namely the focus on the effective gesture, the consideration of the action of each individual finger, and a recognition system that does not rely on comparison against a knowledge base of previously learned fingering positions. Motivated by these results, studies on three important aspects of a complete fingering system were conducted: one on finger tracking, another on string and fret detection, and the last on movement segmentation. Finally, these concepts were integrated into a prototype, and a system for left-hand fingering detection was developed.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849329
Zenodo URL: https://zenodo.org/record/849329
Abstract
We propose a formalism for the construction and performance of musical pieces composed of temporal structures involving discrete interactive events. The occurrence in time of these structures and events is partially defined by constraints, such as Allen temporal relations. We represent the temporal structures using two constraint models: a constraint propagation model is used for the score composition stage, while a non-deterministic temporal concurrent constraint calculus (NTCC) is used for the performance phase. The models are tested with examples of temporal structures computed with the GECODE constraint system library and run with an NTCC interpreter.
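As a reminder of what an Allen relation is, the sketch below tests a few of the thirteen relations between two time intervals (plain Python, not GECODE or an NTCC process; the example intervals are hypothetical):

    def allen_relation(a, b):
        """Return one of a few Allen relations holding between intervals
        a and b, each given as a (start, end) pair with start < end."""
        (a0, a1), (b0, b1) = a, b
        if a1 < b0:
            return "before"
        if a1 == b0:
            return "meets"
        if a0 == b0 and a1 == b1:
            return "equals"
        if a0 > b0 and a1 < b1:
            return "during"
        if a0 < b0 and b0 < a1 < b1:
            return "overlaps"
        return "other"

    # E.g. a chord constrained to sound during a pedal note.
    print(allen_relation((1.0, 2.0), (0.0, 4.0)))  # -> "during"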
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849333
Zenodo URL: https://zenodo.org/record/849333
Abstract
The role of mapping as a determinant of expressivity is examined. Issues surrounding the mapping of real-time control parameters to sound synthesis parameters are discussed, including several representations of the problem. Finally, a study is presented which examines the effect of mapping on musical expressivity, on the ability to navigate and explore the sonic space, and on visual feedback.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849335
Zenodo URL: https://zenodo.org/record/849335
Abstract
In this paper, I discuss in detail the approach taken in the implementation of two external objects in Max/MSP [11] that can extract musically relevant rhythmic information from dance movement as captured by a video camera. These objects perform certain types of analysis on the digitized video stream and enable dancers to generate musical rhythmic structures and/or to control the musical tempo of an electronically generated sequence in real time. One of the objects, m.bandit, implements an algorithm that computes a spectral representation of the frame-differencing video analysis signal and calculates its fundamental frequency in a fashion akin to pitch tracking. The fundamental frequency of the signal, as calculated by this object, can be treated as a beat candidate and sent to another object, m.clock, an adaptive clock that can adjust the tempo of a musical sequence being played, thereby giving the dancer control of the musical tempo in real time.
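A rough sketch of the idea behind m.bandit, reduced to NumPy (not the actual Max/MSP external): treat the frame-differencing signal as a waveform, take its spectrum, and read the dominant low-frequency peak as a beat candidate.

    import numpy as np

    def beat_candidate(motion_signal, frame_rate):
        """Estimate a tempo candidate (BPM) from a per-frame motion signal,
        e.g. the summed absolute pixel difference between consecutive frames."""
        sig = np.asarray(motion_signal, dtype=float)
        sig -= sig.mean()                           # remove DC offset
        spectrum = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / frame_rate)
        band = (freqs > 0.5) & (freqs < 4.0)        # roughly 30-240 BPM
        peak = freqs[band][np.argmax(spectrum[band])]
        return peak * 60.0

    # Synthetic test: a dancer moving at ~2 Hz, captured at 30 fps.
    t = np.arange(0, 10, 1 / 30.0)
    print(beat_candidate(1 + np.sin(2 * np.pi * 2.0 * t), 30.0))  # ~120 BPM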
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849337
Zenodo URL: https://zenodo.org/record/849337
Abstract
In this article, we present the first steps of our research work towards a Virtual Assistant for Performers and Stage Directors. Our aim is to provide automatic feedback on stage performances. We collect video and sound data from numerous performances of the same show, from which it should be possible to visualize the emotions and intents, or more precisely the “intent graphs”. To do this, the collected data defining low-level descriptors are aggregated and converted into high-level characterizations. Then, depending on the retrieved data and on their distribution along each axis, we partition the universes into classes. The last step is building the fuzzy rules, obtained from the classes, which permit the detection of emotional states.
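As an illustration of the last step, a single hypothetical fuzzy rule might look as follows (the descriptor names, ranges and class boundaries are invented for the example; in the system described above they come from the distribution of the collected data):

    def triangular(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def rule_tense(loudness, motion_speed):
        """IF loudness IS high AND motion IS fast THEN state IS 'tense'
        (AND taken as the minimum, Mamdani-style)."""
        return min(triangular(loudness, 0.5, 0.8, 1.0),
                   triangular(motion_speed, 0.4, 0.7, 1.0))

    print(rule_tense(0.85, 0.65))  # degree to which the rule fires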
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849339
Zenodo URL: https://zenodo.org/record/849339
Abstract
A chant is a simple, repetitive song in which syllables may be assigned to a single tone. Chants may also be rhythmic and include a simple melody. Chants can be considered speech or music that can convey emotion; a chant can equally be monotonous, droning, and tedious. Fundamental to a chant are the notions of timing and note patterns. We present here a framework for the synthesis of chants of music notes, that is, chants without syllables from spoken language. We introduced mnemonic capabilities in [1] and utilize these to systematically generate chants. We illustrate our ideas with examples using the note set {C, D, E, G, A, φ (silence)}, a pentatonic scale (perfect fifths). First, we define and structure the use of timing and notes to develop the chant strings. Then we propose the use of mnemonics to develop different styles of musical chants. Finally, we suggest the adoption of intonations and syllables for the controlled generation of musical chants.
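A minimal sketch of how a chant string can be built from the note set and a timing pattern (plain Python, with an invented phrase; the mnemonic machinery of the paper is not reproduced here):

    NOTES = ["C", "D", "E", "G", "A", "rest"]  # the pentatonic set plus silence

    def chant_string(pattern, repeats=4):
        """Build a chant as a repeated note/duration phrase; pattern is a
        list of (note_index, duration_in_beats) pairs, and the repetition
        gives the chant its droning, repetitive character."""
        phrase = [f"{NOTES[i]}:{dur}" for i, dur in pattern]
        return " | ".join(phrase * repeats)

    # Invented phrase: a reciting tone, one neighbour note, and a rest.
    print(chant_string([(0, 1), (0, 1), (1, 2), (0, 1), (5, 1)]))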
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849341
Zenodo URL: https://zenodo.org/record/849341
Abstract
In this paper we will introduce a tool aimed at assisting composers in orchestration tasks. Thanks to this tool, composers can specify a target sound and replicate it with a given, pre-determined orchestra. This target sound, defined as a set of audio and symbolic features, can be constructed either by analysing a pre-recorded sound, or through a compositional process. We will then describe an orchestration procedure that uses large pre-analysed instrumental sound databases to offer composers a set of sound combinations. This procedure relies on a set of features that describe different aspects of the sound. Our purpose is not to build an exhaustive sound description, but rather to design a general framework which can be easily extended by adding new features when needed.
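A toy sketch of the combination search over a tiny made-up database, with invented feature values (the actual system works on large pre-analysed databases and a richer, extensible feature set):

    import itertools

    def best_combination(target, database, size=2):
        """Return the names of the `size` database entries whose averaged
        feature vector is closest (Euclidean distance) to the target."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        def mix(vectors):
            return [sum(col) / len(col) for col in zip(*vectors)]
        return min(itertools.combinations(database, size),
                   key=lambda names: dist(mix([database[n] for n in names]), target))

    # Invented features: (spectral centroid in kHz, attack time in ms).
    db = {"flute": [1.2, 40.0], "oboe": [1.8, 25.0],
          "cello": [0.6, 60.0], "trumpet": [2.4, 15.0]}
    print(best_combination([1.5, 35.0], db, size=2))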
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849343
Zenodo URL: https://zenodo.org/record/849343
Abstract
This presentation describes the use of interaction and spatialization in the realization of three recent musical works. In Echappées, the computer responds to the live Celtic harp with echoes or harmonizations produced in Max/MSP. Resonant Sound Spaces and Pentacle make use of tape, alone in the former and in dialogue with a live harpsichord in the latter. For these two pieces, real-time synthesis and processing were used to produce sound material for the tape, and an 8-track spatialization was elaborated using the Holophon software designed by Laurent Pottier at GMEM. The presentation will be illustrated with sound examples.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849345
Zenodo URL: https://zenodo.org/record/849345
Abstract
Kinetic Engine is an improvising instrument that generates complex rhythms. It is an interactive performance system that models a drum ensemble and is composed of four agents (Players) that collaborate to create complex rhythms. Each agent performs a role, and its decisions are made according to its understanding of that role. Furthermore, the four Players are coordinated by a software conductor in a hierarchical structure.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849347
Zenodo URL: https://zenodo.org/record/849347
Abstract
Minimalist music often made use of “processes” to express a musical idea. In “computer-oriented” terms, we can define the musical processes of minimalist composers as predefined sequences of operations, or, in other words, as “compositional algorithms”. In this presentation, one important “process-based” work from the first period of minimalism, “Pendulum Music” by Steve Reich, has been re-created from scratch using only a personal computer. The implementation of the “process”, as well as the simulation of the Larsen (feedback) effect, was carried out with Pure Data, a widely available open-source program, and is explained in detail hereafter. The main goal of this work was not to make a perfect reconstruction of the piece, but to recreate the compositional design and to focus on the musical aspects of the process itself. Therefore, no rigorous validation method was designed for the simulation; instead, the audio results have been compared empirically with a recorded version.
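A crude numerical caricature of the feedback behaviour, assuming a microphone swinging over a loudspeaker so that the loop gain exceeds 1 only near the speaker (toy Python model, not the Pure Data patch described in the paper):

    import math

    def pendulum_feedback(seconds=20.0, sr=1000, period=3.0):
        """Amplitude envelope of a Larsen burst building up on every pass
        of the pendulum over the speaker and dying away in between."""
        amp, out = 1e-4, []
        for n in range(int(seconds * sr)):
            position = math.cos(2 * math.pi * (n / sr) / period)  # 0 = over speaker
            loop_gain = 1.05 if abs(position) < 0.1 else 0.98
            amp = min(1.0, max(1e-4, amp * loop_gain))
            out.append(amp)
        return out

    envelope = pendulum_feedback()
    print(max(envelope), envelope[-1])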
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849349
Zenodo URL: https://zenodo.org/record/849349
Abstract
In the present work we introduce a general formal (object-oriented) model for the representation of musical structure information, taking into account some common features of the information conveyed by music analysis. The model is suitable for representing many different kinds of musical analytical entities. As an example of both musical and mathematical-computational relevance, we introduce D. Lewin’s GIS (Generalized Interval System) theory. A GIS is equivalent to a particular kind of group action over a set; we show that other related structures can be treated in a similar manner. We conclude with a prototype implementation of these ideas in MX, an XML format for music information.
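For readers unfamiliar with the formalism, Lewin's definition can be stated compactly; the "particular kind of group action" referred to above is a simply transitive one:

    A GIS is a triple (S, \mathrm{IVLS}, \mathrm{int}), where S is a set, \mathrm{IVLS} is a group,
    and \mathrm{int}\colon S \times S \to \mathrm{IVLS} satisfies
        \mathrm{int}(r,s)\,\mathrm{int}(s,t) = \mathrm{int}(r,t) \quad \text{for all } r, s, t \in S,
    and for every s \in S and i \in \mathrm{IVLS} there is a unique t \in S with \mathrm{int}(s,t) = i.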
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849351
Zenodo URL: https://zenodo.org/record/849351
Abstract
We describe an architecture for an improvisation-oriented musician-machine interaction system. The virtual improvisation kernel is based on a statistical learning model. The working system involves a hybrid architecture using two popular composition/performance environments, Max and OpenMusic, which are made to work and communicate together. The MIDI-based OMAX system is described first, followed by the OFON system, its extension to audio.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849353
Zenodo URL: https://zenodo.org/record/849353
Abstract
In this paper we report on an interdisciplinary project for modeling Greek chant with real-time vocal synthesis. Building on previous research, we employ a hybrid musical instrument, Phonodeon (Georgaki et al. 2005), consisting of a MIDI accordion coupled to a real-time algorithmic interaction and vocal synthesis engine. The synthesis is based on data provided by the AOIDOS program developed in the Department of Computer Science of the University of Athens, investigating Greek liturgical chant compared to bel canto singing. Phonodeon controls expressive vocal synthesis models based on formant synthesis and concatenated filtered samples. Its bellows serve as a hardware control device that is physically analogous to the human breathing mechanism [georgaki, 1998a], while the buttons of the right hand can serve multiple functions. On the level of pitch structure, this paper focuses on a particular aspect of control, namely that of playing in the traditional non-tempered and flexible interval structure of Greek modes (ήχοι: echoi) while using the 12-semitone piano-type keyboard of the left hand. This enables the musical exploration of the relationship between the spectral structure of the vocal timbre of Greek chant and the characteristic intervals occurring in the modal structure of the chant. To implement this, we developed techniques for superimposing interval patterns of the modes on the keyboard of the Phonodeon. The work is the first comprehensive interactive model of ancient, medieval and modern Near Eastern tunings. The techniques developed can be combined with techniques for other control aspects, such as timbre and vocal expression control, recall and combination of phoneme or (expressive/ornamental/melodic pattern, inflection) sequences, data recording on/off, and others, which form part of the Phonodeon project. On the level of timbre and expression, we make use of data obtained by analysis of audio samples of chanting as control sources for synthesis by concatenation of control data, thereby providing an example of a real-time application of Diphone techniques (Rodet and Levevre). This research can find applications in many computer music fields, such as algorithmically controlled improvisation, microtonal music, music theory and notation of (algorithmic/computerized) real-time performance, and computer modeling of experimental or non-Western musical styles.
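A minimal sketch of superimposing a mode's interval pattern on a piano-type keyboard (Python; the step sizes below are purely illustrative and are not the echoi measurements provided by AOIDOS):

    def mode_keyboard(step_cents, tonic_hz=220.0, tonic_key=57):
        """Map successive keyboard keys onto a non-tempered mode given as a
        list of scale steps in cents; returns MIDI key -> frequency in Hz."""
        freqs, cents = {tonic_key: tonic_hz}, 0.0
        for i, step in enumerate(step_cents, start=1):
            cents += step
            freqs[tonic_key + i] = tonic_hz * 2 ** (cents / 1200.0)
        return freqs

    # Illustrative seven-step pattern summing to an octave (1200 cents).
    print(mode_keyboard([200, 150, 150, 200, 200, 150, 150]))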
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849355
Zenodo URL: https://zenodo.org/record/849355
Abstract
This paper presents the sound and music specialty of the master's programme at ENJMIN (Ecole Nationale des Jeux et Media Interactifs Numériques: Graduate School of Games and Interactive Media) (http://www.enjmin.fr). The sound specialty is open to composers, musicians, sound engineers, and sound designers. The main goals of the school are to teach the use of sound and music in interactive media in general and video games in particular, and the ability to work in multidisciplinary teams. The first section presents the goals of the master's programme as a whole. The second section is devoted to the description of the sound programme. The third section presents other activities of the school (continuing training, international partnerships, research). The studio presentation will develop these topics through demos of several student projects.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849357
Zenodo URL: https://zenodo.org/record/849357
Abstract
Sound texture modeling is a widely used concept in computer music that has a well-analyzed counterpart in image processing. We report on the current state of different sound texture generation methods and try to outline common problems with the resulting sound texture examples. Published results pursue different kinds of analysis/re-synthesis approaches, which can be divided into methods that try to transfer existing techniques from computer graphics and methods that take advantage of well-known techniques found in common computer music systems. Furthermore, we present the idea of a new texture generator framework in which different analysis and synthesis tools can be combined and tested, with the goal of producing high-quality sound examples.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849359
Zenodo URL: https://zenodo.org/record/849359
Abstract
This article addresses the problem of the representation of time in computer-assisted sound composition. We try to point out the specific temporal characteristics of sound synthesis processes, in order to propose solutions for a compositional approach using symbolic models and representations.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849361
Zenodo URL: https://zenodo.org/record/849361
Abstract
This paper introduces a system that combines "BodySuit" and "RoboticMusic", along with its possibilities and its uses in artistic applications. "BodySuit" refers to a data-suit-type gesture controller. "RoboticMusic" refers to percussion robots of a humanoid type. In this paper, I discuss their aesthetics and underlying concept, as well as the idea of the "Extended Body".
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849363
Zenodo URL: https://zenodo.org/record/849363
Abstract
This paper presents the results of a quantitative study of the relationship between the rhythmic characteristics of spoken German and French and the rhythm of musical melody in 19th-century art song. The study used a modified version of the normalized Pairwise Variability Index, or nPVI, to measure the amount of variability between successive rhythmic events in the melodies of over 600 songs by 19 French and German composers. The study returned an unexpected result: songs written to texts in the two languages exhibited sharply diverging trends as a function of time through the 19th century. This divergence is reflected both in the overall trends and in the trends of individual composers.
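For reference, the unmodified nPVI of a melody with successive durations d_1, ..., d_m is conventionally defined as follows (the modification used in the study is not detailed in the abstract):

    \mathrm{nPVI} \;=\; \frac{100}{m-1}\sum_{k=1}^{m-1}\left|\frac{d_k - d_{k+1}}{(d_k + d_{k+1})/2}\right|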
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849365
Zenodo URL: https://zenodo.org/record/849365
Abstract
Physical modelling schemes are well known for their ability to generate plausible sounds, i.e. sounds that are perceived as being produced by physical objects. As a result, a large part of physical modelling research is devoted to the realistic synthesis of real-world sounds and has a strong link with instrumental acoustics. However, we have shown that physical modelling addresses not only sound synthesis but also musical composition. In particular, mass-interaction physical modelling has been presented as enabling the musician to work both on sound production (i.e. the microstructure) and on the organization of events (i.e. the macrostructure). This article presents a method for building mass-interaction models whose physical structure evolves during the simulation. Structural evolution is implemented in a physically consistent manner, by using nonlinear interactions that set up temporary viscoelastic links between independent models. This yields new possibilities for music creation, particularly for the generation of complex sound sequences that exhibit an intimate articulation between the micro- and the macrostructure.
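A toy sketch of such a conditional link between two unit masses (semi-implicit Euler in Python, not the actual mass-interaction implementation): the viscoelastic force is applied only while the masses are closer than a threshold, so the link appears and disappears during the simulation.

    def simulate(steps=5000, dt=1e-3, k=50.0, z=0.5, threshold=0.2):
        """Two unit masses joined by a viscoelastic link (stiffness k,
        damping z) that is active only when they are within `threshold`."""
        x = [0.0, 1.0]   # positions
        v = [0.4, 0.0]   # mass 0 drifts toward mass 1
        for _ in range(steps):
            d = x[1] - x[0]
            f = k * d + z * (v[1] - v[0]) if abs(d) < threshold else 0.0
            a = [f, -f]  # equal and opposite forces on unit masses
            v = [v[i] + a[i] * dt for i in range(2)]
            x = [x[i] + v[i] * dt for i in range(2)]
        return x, v

    print(simulate())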
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: not available
Zenodo URL: not available
Abstract
The imminent publication of the "Vocabulaire de l’espace et de la spatialisation des musiques électroacoustiques" is first of all an opportunity to take stock of the activities of GETEME (Groupe d’Étude sur l’Espace dans les Musiques Électroacoustiques). The presentation of a few essential terms and their definitions will give an overview of the content of the volume. But the main objective of this communication lies in the analysis of the work carried out and the methods employed. The difficulties encountered during the preparation of this document (documentary sources, collection, examination, formatting, definitions in use that are vague or divergent...) led us to build a taxonomy of space and spatialization. This classification will be presented in detail. Through various examples, we will see how this tool makes it possible to clear up the confusion surrounding certain definitions and to avoid omissions. In conclusion, we will consider some future research work: the "vocabulaire" will certainly be an important source of debate, reflection, and formalization.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849367
Zenodo URL: https://zenodo.org/record/849367