Dates: July 31 to August 3, 2008
Place: Berlin, Germany
Proceedings info: Proceedings of the 5th Sound and Music Computing Conference (SMC2008), ISBN 978-3-7983-2094-9
Abstract
Dlocsig is a dynamic spatial locator unit generator written for the Common Lisp Music (CLM) sound synthesis and processing language. Dlocsig was first created in 1992 as a four-channel 2D dynamic locator and has since evolved into a full 3D system for an arbitrary number of speakers that can render moving sound objects through amplitude panning (VBAP) or Ambisonics. This paper describes the motivations for the project, its evolution over time, and the details of its software implementation and user interface.
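The pairwise amplitude-panning core that VBAP builds on can be sketched in a few lines of Python. This is an illustrative sketch only, not dlocsig's CLM implementation; the speaker angles and the unit-power normalization are assumptions:

```python
import math

def vbap2d_gains(source_az, spk1_az, spk2_az):
    """Gains for a loudspeaker pair via 2D vector base amplitude panning.
    Angles in degrees; returns (g1, g2) normalized to unit power."""
    def unit(a):
        r = math.radians(a)
        return (math.cos(r), math.sin(r))
    p = unit(source_az)
    l1, l2 = unit(spk1_az), unit(spk2_az)
    # Solve [l1 l2] * [g1, g2]^T = p (2x2 linear system, Cramer's rule)
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - l2[0] * p[1]) / det
    g2 = (l1[0] * p[1] - p[0] * l1[1]) / det
    norm = math.sqrt(g1 * g1 + g2 * g2)
    return g1 / norm, g2 / norm

# A source halfway between speakers at -45 and +45 degrees gets equal gains.
g1, g2 = vbap2d_gains(0.0, -45.0, 45.0)
```

VBAP extends this pairwise idea to 3D by solving the analogous 3x3 system for the loudspeaker triplet enclosing the source direction.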
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849501
Zenodo URL: https://zenodo.org/record/849501
Abstract
Subtle inflections of pitch, often performed intuitively by musicians, create a harmonically sensitive expressive intonation. As each new pitch is added to a simultaneously sounding structure, very small variations in its tuning have a substantial impact on overall harmonic comprehensibility. In this project, James Tenney’s multidimensional lattice model of intervals (‘harmonic space’) and a related measure of relative consonance (‘harmonic distance’) are used to evaluate and optimize the clarity of sound combinations. A set of tuneable intervals, expressed as whole-number frequency ratios, forms the basis for real-time harmonic microtuning. An algorithm, which references this set, allows a computer music instrument to adjust the intonation of input frequencies based on previously sounded frequencies and several user-specified parameters (initial reference pitch, tolerance range, pitch-class scaling, prime limit). Various applications of the algorithm are envisioned: to find relationships within components of a spectral analysis, to dynamically adjust a computer instrument to other musicians in real time, to research the tuneability of complex microtonal pitch structures. More generally, it furthers research into the processes underlying harmonic perception, and how these may lead to musical applications.
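As a concrete illustration of the ideas above (not the authors' implementation): Tenney's harmonic distance for a frequency ratio a/b in lowest terms is log2(a·b), and the retuning step can be sketched as a brute-force search over tuneable ratios. The ratio limit, the 30-cent default tolerance, and the function names are assumptions; the paper's algorithm also weights by pitch class and prime limit:

```python
import math
from fractions import Fraction

def harmonic_distance(ratio):
    """Tenney's harmonic distance HD(a/b) = log2(a * b) for a ratio in
    lowest terms; smaller values mean simpler, more consonant intervals."""
    f = Fraction(ratio).limit_denominator(10 ** 6)
    return math.log2(f.numerator * f.denominator)

def retune(freq, reference, tolerance_cents=30.0, limit=16):
    """Snap `freq` to the whole-number ratio of `reference` that lies
    within the tolerance range and minimizes harmonic distance."""
    best = None
    for num in range(1, limit + 1):
        for den in range(1, limit + 1):
            f = Fraction(num, den)  # reduces to lowest terms
            candidate = reference * f.numerator / f.denominator
            cents = 1200.0 * math.log2(candidate / freq)
            if abs(cents) <= tolerance_cents:
                hd = harmonic_distance(f)
                if best is None or hd < best[0]:
                    best = (hd, candidate)
    return best[1] if best else freq

# An equal-tempered fifth above A440 (~659.26 Hz) snaps to the just 3/2, 660 Hz.
snapped = retune(440.0 * 2 ** (7 / 12), 440.0)
```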
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849505
Zenodo URL: https://zenodo.org/record/849505
Abstract
In this paper we introduce Interactive Musical Environments (iMe), an interactive intelligent music system based on software agents that is capable of learning how to generate music autonomously and in real-time. iMe belongs to a new paradigm of interactive musical systems that we call “ontomemetical musical systems” for which a series of conditions are proposed.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849503
Zenodo URL: https://zenodo.org/record/849503
Abstract
Acute is a music score, composed by the author of this paper, for Percussion Quartet and Fixed Media (tape), using ‘Searched Objects’ as instruments. This paper examines how the piece recontextualises existing research in the Typology and Morphology of Sound Objects to produce a unique mixed-media music score, exploring the sonic possibilities that arise when confronting the ‘ideal’ (the sonic object to be found) with ‘the reconstruction of itself’ through time (when performers attempt to recreate the given sounds) using processes of spectro-gestural mimesis.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849507
Zenodo URL: https://zenodo.org/record/849507
Abstract
Amplitude-based sound spatialization without any further signal processing is still today a valid musical choice in certain contexts. This paper emphasizes the importance of the resulting envelope shapes on the single loudspeakers in common listening situations such as concert halls, where most listeners will find themselves in off-centre positions, as well as in other contexts such as sound installations. Various standard spatialization techniques are compared in this regard and a refinement is proposed, which results in asymmetrical envelope shapes. This method combines a strong sense of localization and a natural sense of continuity. Some examples of practical application carried out by Tempo Reale are also discussed.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849511
Zenodo URL: https://zenodo.org/record/849511
Abstract
We are interested in developing intelligent systems for music composition. In this paper we focus on our research into generative rhythms. We have adopted an Artificial Life (A-Life) approach to intelligent systems design in order to develop generative algorithms inspired by the notion of music as social phenomena that emerge from the overall behaviour of interacting autonomous software agents. Whereas most A-Life approaches to implementing computer music systems are chiefly based on algorithms inspired by biological evolution (for example, Genetic Algorithms [2]), this work is based on cultural development (for example, Imitation Games [12, 13]). We are developing a number of such “cultural” algorithms, one of which is introduced in this paper: the popularity algorithm. We are also developing a number of analysis methods to study the behaviour of the agents. In our experiments with the popularity algorithm we observed the emergence of coherent repertoires of rhythms across the agents in the society.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849513
Zenodo URL: https://zenodo.org/record/849513
Abstract
In this paper, I present a programming language for algorithmic composition and stochastic sound synthesis called CompScheme. The primary value-generating mechanism in the program is the stream, which allows the user to concisely describe networks of dynamic data. Secondly, I present CompScheme’s event model, which provides a framework for building abstract structural musical units, exemplified by showing CompScheme’s functionality for controlling the SuperCollider server in real time. Thirdly, I discuss CompScheme’s stochastic synthesis functionality, an attempt to generalize from I. Xenakis’s dynamic stochastic synthesis and G. M. Koenig’s SSP.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849515
Zenodo URL: https://zenodo.org/record/849515
Abstract
In this paper we present a systematic approach to applying expressive performance models to non-expressive score transcriptions and synthesizing the results by means of concatenative synthesis. Expressive performance models are built from score transcriptions and recorded performances by means of decision tree rule induction, and those models are used both to transform inexpressive input scores and to guide the concatenative synthesizer unit selection.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849517
Zenodo URL: https://zenodo.org/record/849517
Abstract
Throughout this paper, several CORDIS-ANIMA physical models will be presented to offer an alternative synthesis of some classical delay-based digital audio effects: a delay model, two comb-filter models, three flanger models and a sound spatialization model. Several of these realizations support a control scheme based on the “Physical Instrumental Interaction”. Additionally, they provide several sonic characteristics which do not appear in the original effects. In particular, the flanger model may, for certain parameter values, give a new digital audio effect between flanging and filtering.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849519
Zenodo URL: https://zenodo.org/record/849519
Abstract
In this paper, the author describes a system for encoding distance in an Ambisonics soundfield. This system allows the postponing of the application of cues for the perception of distance to the decoding stage, where they can be adapted to the characteristics of a specific space and sound system. Additionally, this system can be used creatively, opening some new paths for the use of space as a compositional factor.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849521
Zenodo URL: https://zenodo.org/record/849521
Abstract
This work is about the relation between music and architecture. In particular we are interested in the concept of space as the place where music and architecture meet. The study of this topic offers the starting point for the development of Echi tra le Volte, a music installation for churches, in which the sounds come from the natural reverb of the place, excited by sinusoidal impulses whose pitches are supplied by a genetic algorithm.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849523
Zenodo URL: https://zenodo.org/record/849523
Abstract
The present paper discusses an alternative approach to electroacoustic composition based on principles of the interdisciplinary scientific field of Systemics. In this approach, the setting of the electronic device is prepared in such a way that it is able to organise its own function according to the conditions of the sonic environment. We discuss the approaches of Xenakis and of Di Scipio in relation to Systemics, demonstrating the applications in their compositional models. In my critique of Di Scipio’s approach, I argue that the composer gives away a major part of his control over the work, and therefore the notion of macro-structural form is abandoned. Based on my work Ephemeron, I show that it is possible to conduct emergent situations by applying the systemic principle of ‘equifinality’. Moreover, I argue that it is possible to acquire control over these situations and their properties over time, so as to develop formal structure.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849525
Zenodo URL: https://zenodo.org/record/849525
Abstract
This paper presents preliminary results on expressive performance in the human tenor voice. This work investigates how professional opera singers manipulate sound properties such as timing, amplitude, and pitch in order to produce expressive performances. We also consider the contribution of features of prosody in the artistic delivery of an operatic aria. Our approach is based on applying machine learning to extract patterns of expressive singing from performances by Josep Carreras. This is a step towards recognizing performers by their singing style, capturing some of the aspects which make two performances of the same piece sound different, and understanding whether there exists a correlation between the occurrences correctly covered by a pattern and specific emotional attributes.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849527
Zenodo URL: https://zenodo.org/record/849527
Abstract
This paper presents a system for controlling audio mosaicing with a voice signal, which can be interpreted as a further step in voice-driven sound synthesis. Compared to voice-driven instrumental synthesis, it increases the variety of the synthesized timbre. It also provides a more direct interface for audio mosaicing applications, where the performer’s voice controls rhythmic, tonal and timbral properties of the output sound. In a first step, the voice signal is segmented into syllables and a set of acoustic features is extracted for each segment. In the concatenative synthesis process, the voice acoustic features (the target) are used to retrieve the most similar segment from the corpus of audio sources. We implemented a system working in pseudo-real-time, which analyzes the voice input and sends control messages to the concatenative synthesis module. Additionally, this work raises questions, to be further explored, about mapping the input voice timbre space onto the audio sources’ timbre space.
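The retrieval step described above amounts to a nearest-neighbour search in acoustic feature space. A toy sketch, with feature names and corpus entries invented purely for illustration:

```python
import math

def nearest_segment(target, corpus):
    """Return the id of the corpus segment whose feature vector is closest
    (Euclidean distance) to the target features of a voice syllable.
    `corpus` maps segment ids to feature vectors of equal length."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(corpus, key=lambda sid: dist(target, corpus[sid]))

# Hypothetical features: (pitch in MIDI semitones, loudness, brightness).
corpus = {
    "kick":  (36.0, 0.9, 0.2),
    "snare": (50.0, 0.8, 0.6),
    "hat":   (80.0, 0.4, 0.9),
}
chosen = nearest_segment((52.0, 0.7, 0.5), corpus)
```

Real systems weight the features and add concatenation costs, but the target-to-corpus matching has this shape.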
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849529
Zenodo URL: https://zenodo.org/record/849529
Abstract
We present methods for spatializing sound using representations created by dictionary-based methods (DBMs). DBMs have been explored primarily in applications for signal processing and communications, but they can also be viewed as the analytical counterpart to granular synthesis. A DBM determines how to synthesize a given sound from any collection of grains, called atoms, specified in a dictionary. Such a granular representation can then be used to perform spatialization of sound in complex ways. To facilitate experimentation with this technique, we have created an application for providing real-time synthesis, visualization, and control using representations found via DBMs. After providing a brief overview of DBMs, we present algorithms for spatializing granular representations, as well as our application program Scatter, and discuss future work.
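Matching pursuit, the canonical DBM, makes the "analytical counterpart to granular synthesis" view concrete: each picked atom is a grain with its own coefficient, ready to be routed to its own spatial position. A toy sketch over unit-norm atoms (illustrative only, not the Scatter implementation; practical dictionaries hold thousands of time-frequency atoms):

```python
import math

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the unit-norm atom with the
    largest inner product against the residual, record (index, coefficient),
    and subtract its contribution from the residual."""
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        best_i, best_c = max(
            ((i, sum(r * a for r, a in zip(residual, atom)))
             for i, atom in enumerate(dictionary)),
            key=lambda ic: abs(ic[1]))
        picks.append((best_i, best_c))
        atom = dictionary[best_i]
        residual = [r - best_c * a for r, a in zip(residual, atom)]
    return picks, residual

# Tiny dictionary: two orthonormal "grains" over 4 samples.
inv2 = 1 / math.sqrt(2)
dictionary = [(inv2, inv2, 0.0, 0.0), (0.0, 0.0, inv2, inv2)]
signal = (3.0, 3.0, 1.0, 1.0)
picks, residual = matching_pursuit(signal, dictionary, 2)
```

Spatialization then operates on `picks` rather than on the raw signal, e.g. panning each atom independently.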
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849531
Zenodo URL: https://zenodo.org/record/849531
Abstract
We present preliminary work on automatic human-readable melody characterization. In order to obtain such a characterization, we (1) extract a set of statistical descriptors from the tracks in a dataset of MIDI files, (2) apply a rule induction algorithm to obtain a set of (crisp) classification rules for melody track identification, and (3) automatically transform the crisp rules into fuzzy rules by applying a genetic algorithm to generate the membership functions for the rule attributes. Some results are presented and discussed.
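Step (3) hinges on replacing a crisp threshold with a graded membership function. A minimal sketch of the triangular case; the breakpoints (which the genetic algorithm would evolve per rule attribute) and the example rule are invented here:

```python
def triangular_membership(x, left, peak, right):
    """Degree in [0, 1] to which x belongs to a fuzzy set defined by a
    triangular membership function with breakpoints left <= peak <= right."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# A crisp rule "IF avg_pitch > 60 THEN melody track" becomes a graded test,
# here with made-up breakpoints: rises from 55, peaks at 70, falls off by 90.
degree_low = triangular_membership(56.0, 55.0, 70.0, 90.0)
degree_peak = triangular_membership(70.0, 55.0, 70.0, 90.0)
```

The genetic algorithm's job is then to find breakpoint triples that maximize classification accuracy over the MIDI track descriptors.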
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849533
Zenodo URL: https://zenodo.org/record/849533
Abstract
Memory may be mapped metaphorically onto space, as in the mediaeval and renaissance Memory Theatre (see Frances Yates, Art of Memory, 1966/1992 [Reference 4]). But we now have the power to project this literally in sound, in sound-installation systems such as the Klangdom. In Resonances (an 8-channel acousmatic work, commissioned by the IMEB (Bourges) in 2007), I explore my memories of the modernist repertoire (1820-1940) using small timbral ‘instants’, extended, layered and spatialised. Some juxta- and superpositions are logical, others unlikely – but that is the joy of memory and creativity. But memories also fade and die ... This paper examines the work, and how the memory and spatial relationships are articulated through the material. It also presents plans for a more elaborate work to be realised in 2008-2009. In this, the fixed nature of the previous work will give way to an ‘evolving’ acousmatic piece which changes at each performance as new spatial layers are added and others fade. The paper will be illustrated with music examples.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849535
Zenodo URL: https://zenodo.org/record/849535
Abstract
Based upon an original computational analytic method (Majchrzak 2005, 2007), the present work aims to: 1) show the differences between the major key and the minor (harmonic) key in the classification of chords, an aspect of importance for interpreting a piece’s tonal-structure diagram; 2) draw attention to the subordination of the minor key versus the major key in the chord classification, using the same algorithm. The relations between chords appearing in the major and minor (harmonic) keys are shown by comparing: 1) third-based chords; 2) the degrees of the C major and A minor keys on which the same diatonic chords appear.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849537
Zenodo URL: https://zenodo.org/record/849537
Abstract
The work described in this paper is part of a project that aims to implement and assess a computer system that can control the affective content of the music output, in such a way that it may express an intended emotion. In this system, music selection and transformation are done with the help of a knowledge base with weighted mappings between continuous affective dimensions (valence and arousal) and music features (e.g., rhythm and melody) grounded on results from works of Music Psychology. The system starts by making a segmentation of MIDI music to obtain pieces that may express only one kind of emotion. Then, feature extraction algorithms are applied to label these pieces with music metadata (e.g., rhythm and melody). The mappings of the knowledge base are used to label music with affective metadata. This paper focuses on the refinement of the knowledge base (subsets of features and their weights) according to the prediction results of listeners’ affective answers.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849539
Zenodo URL: https://zenodo.org/record/849539
Abstract
In this paper we present a method to model and compare expressivity for different Moods in violin performances. Models are based on analysis of audio and bowing control gestures of real performances, and they predict expressive scores from non-expressive ones. Audio and control data are captured by means of a violin pickup and a 3D motion-tracking system and aligned with the performed score. We make use of machine learning techniques in order to extract expressivity rules from score-performance deviations. The induced rules form a generative model that can transform an inexpressive score into an expressive one. The paper is structured as follows: first, the procedure of performance data acquisition is introduced, followed by the automatic performance-score alignment method. Then the process of model induction is described, and we conclude with an evaluation based on listening tests using a sample-based concatenative synthesizer.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849541
Zenodo URL: https://zenodo.org/record/849541
Abstract
NOVARS is a new Research Centre started in March 2007 with specialisms in the areas of Electroacoustic Composition, Performance and Sound-Art. The Centre is capitalising on the success of Music at the University of Manchester by expanding its existing research programme in Electroacoustic Composition with a new £2.2 million investment in a cutting-edge new studio infrastructure. This studio report covers key aspects of the architectural and acoustic design of the studios, their functionality and existing technology.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849543
Zenodo URL: https://zenodo.org/record/849543
Abstract
We present the RetroSpat system for the semi-automatic diffusion of acousmatic music. The system is intended to be a spatializer with perceptive feedback. More precisely, RetroSpat can guess the positions of physical sound sources (e.g. loudspeakers) from binaural inputs, and can then output multichannel signals to the loudspeakers while controlling the spatial location of virtual sound sources. Together with a realistic binaural spatialization technique taking into account both azimuth and distance, we propose a precise localization method which estimates the azimuth from the interaural cues and the distance from the brightness. This localization can be used by the system to adapt to the room acoustics and to the loudspeaker configuration. We propose a simplified sinusoidal model for the interaural cues, with the model parameters derived from the CIPIC HRTF database. We extend the binaural spatialization to a multi-source, multi-loudspeaker spatialization system based on a static adaptation matrix. The methods are currently implemented in real-time free software. Musical experiments are being conducted at the SCRIME, Bordeaux.
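For readers unfamiliar with interaural cues, a simpler textbook spherical-head model shows the shape of azimuth estimation from the interaural time difference. This is not RetroSpat's fitted sinusoidal model (whose parameters come from the CIPIC HRTF database); the sine law and the default head radius are assumptions:

```python
import math

def azimuth_from_itd(itd_seconds, head_radius=0.0875, c=343.0):
    """Estimate source azimuth (degrees) from the interaural time
    difference using the simple model ITD = (2r / c) * sin(azimuth)."""
    s = itd_seconds * c / (2.0 * head_radius)
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

# A source directly ahead gives zero ITD; the maximum ITD maps to 90 degrees.
front = azimuth_from_itd(0.0)
side = azimuth_from_itd(2 * 0.0875 / 343.0)
```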
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849545
Zenodo URL: https://zenodo.org/record/849545
Abstract
This text deals with the subject of sonic spaces within the field of computer music composition. In the light of the notion of multiplicity, sound will be analysed as a multi-representational space. This central idea leads us to consider some proposals of the hermeneutical criticism of representation, where we observe the emergence of sonic spaces from an action-perception perspective: our musical significations appear at the very moment we execute a “local action” in the composition process. Multiplicity is produced by singularities, just as singularity is conceived as a multiple entity: depending on our operatory procedure in music composition, we shall consider a sound as One or as Multiple. In music composition, human-computer interaction moves within this problematic.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849547
Zenodo URL: https://zenodo.org/record/849547
Abstract
Cyberspace is nowadays a social network of people who produce, reproduce and consume technology culture or, as it is better expressed, technoculture. In this vast environment, sound is represented as transmittable digital information. However, what is the function of sound and why does it matter? In the following pages, I shall present sound as the materiality of technoculture in cyberspace, or the cultural meanings of sound beyond natural space.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849549
Zenodo URL: https://zenodo.org/record/849549
Abstract
This paper documents emergent practice led research that brings together live sound spatialisation and free improvisation with digital tools in a performance context. An experimental performance is described in which two musicians – a turntablist and a laptop performer – improvised, with the results being spatialised via multiple loudspeakers by a third performer using the Resound spatialisation system. This paper focuses on the spatial element of the performance and its implications, its technical realisation and some aesthetic observations centring on the notion of ‘ambiguity’ in free improvisation. An analysis raises numerous research questions, which feed into a discussion of subsequent, current and future work.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849551
Zenodo URL: https://zenodo.org/record/849551
Abstract
This paper discusses a generative, systemic approach to sound processing, touching on topics such as genetics, evolutionary programming, the eco-systemic interaction of sound in space, and feedback, and putting them in the context of the author’s Syntáxis (Acoustic Generative Sound Processing System, part 1): a sound installation for stereophonic speaker system and microphone. The main implications of the overall structure of the installation are analysed, focusing on its dynamics and its relationships with space. The paper also illustrates the main structure of the algorithm regulating the installation’s behavior, along with brief references to the software platform used to develop it (Max/MSP 5).
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849553
Zenodo URL: https://zenodo.org/record/849553
Abstract
In this paper we will analyze how the conception of space in music is expanded by the repertoire of sound art, moving from the idea of space as a delimited area with physical and acoustical characteristics, to the notion of site in which representational aspects of a place become expressive elements of a work of art.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849555
Zenodo URL: https://zenodo.org/record/849555
Abstract
The emergence of multiple sites for the performance of multi-channel spatial music motivates a consideration of strategies for creating spatial music, and for making necessary adjustments to existing spatial works for performances in spaces with significantly different acoustic properties and speaker placement. Spatial orchestration is proposed as a conceptual framework for addressing these issues.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849559
Zenodo URL: https://zenodo.org/record/849559
Abstract
Strong interest in spatial sound has existed at all times and on various levels of aesthetic music and sound production and perception. In recent years the availability of high-quality loudspeakers and digital multi-channel audio systems has paved the way to incorporate spatial acoustics into musical composition. In this paper we describe a project which is aimed at providing flexible possibilities to experiment with miscellaneous loudspeaker architectures and multi-channel distribution systems. The system allows the use of up to 96 audio channels in real time, which can be fed to loudspeakers set up according to varying spatial designs. As examples, a number of realized architectures and compositions will be described.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849561
Zenodo URL: https://zenodo.org/record/849561
Abstract
The aesthetic implications of real-time stochastic sound mass composition call for a new approach to musical material and spatialization. One possibility is found in the fertile ground of musical texture. Texture exists between notions of the singular sound object and the plurality of a sound mass or lo-fi soundscape. Textural composition is the practice of elevating and exploring the intermediary position between the single and the plural while denying other musical attributes. The consequences of this aesthetic principle require a similarly intermediary spatialization conducive to textural listening. This spatialization exists between point-source diffusion and mimetic spatialization. Ultimately, the ramifications of textural composition affect both the space in the sound and the sound in space. This paper introduces the intermediary aesthetics of textural composition focusing on its spaces. It then describes an implementation in the author’s work, real-time tape music III.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849563
Zenodo URL: https://zenodo.org/record/849563
Abstract
In this presentation we examine, via the de-codification of graphic scores, the work of the avant-garde composer and pre-media artist Anestis Logothetis, who is considered one of the most prominent figures in graphic musical notation. In the primary stage of our research, we have studied these graphic scores in order to make a first taxonomy of his graphical language and to present the main syntax of his graphic notation, aiming at a future sonic representation of his scores. We also present an example of graphical space through his ballet Odysee (1963).
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849565
Zenodo URL: https://zenodo.org/record/849565
Abstract
In the present work, some aspects of the influence of digital music instruments on composition methods are observed. Some consequences of the relationship between traditional instruments and digital music instruments result in a triangular interactive process. As an analytical approach to understanding this relationship and the association process between traditional and digital music instruments, a typology of interaction for musical performance based on this instrumental configuration is proposed. The deduced classes are based upon the observation and systematization of my work as a composer. The proposed model aims to contribute towards a unifying terminology and systematization of some of the major questions that arise from the coexistence of two different paradigms (traditional and digital music instruments) in the universe of live electronic music.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849567
Zenodo URL: https://zenodo.org/record/849567
Abstract
This paper begins with a brief introduction to the GETEME (Groupe d’Étude sur l’Espace dans les Musiques Électroacoustiques - Working Group on Space in Electroacoustic Musics), followed by an overview of its past, present and future activities. A first major achievement was the completion and publication of the “vocabulary of space electroacoustic musics…”, coupled with the realization of a taxonomy of space. Beyond this collection and clarification of words in general use, it appeared necessary to begin to connect words and sound. The goal of our present research is to clarify or elaborate a vocabulary (a set of specialized words) for describing space perception in electroacoustic (multiphonic) musics. The issue is delicate, as it deals with psychoacoustics… as well as with creators’ or listeners’ imagination. In order to conduct this study, it was necessary to develop a battery of tests and listening procedures, to collect words describing the listening space, and then to count and sort the words. The sound descriptions quickly overlap; the same words coincide with the same listening situations. A consensus seemed to emerge, revealing five types of spatiality and two types of mobility, as well as a variety of adjectives to describe or characterize spatiality or mobility.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849569
Zenodo URL: https://zenodo.org/record/849569
Abstract
In the past few years, several strategies to characterize sound signals have been suggested. The main objective of these strategies was to describe the sound [1]. However, it was only with the creation of MPEG-7, a new standard format for indexing and transferring audio data, that the desire to define semantic content descriptors for audio data came about [2, p.52]. The widely known document written by Geoffroy Peeters [1] is an example where, even if the announced goal is not to carry out a systematic taxonomy of all the functions intended to describe sound, the presentation of the various descriptors is in fact systematized.
Keywords
not available
Paper topics
not available
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.849571
Zenodo URL: https://zenodo.org/record/849571