Dates: from July 12 to July 14, 2012
Place: Copenhagen, Denmark
Proceedings info: Proceedings of the 9th Sound and Music Computing Conference, Copenhagen, Denmark, 11-14, ISBN 978-3-8325-3180-5
Abstract
Popular music plays a central role in the lives of millions of people. It motivates beginners, engages experienced musicians, and plays both functional (e.g. churches) and non-functional (e.g. music festivals) roles in many contexts. Forming and maintaining a popular music ensemble can be challenging, particularly for part-time musicians who face other demands on their time. Where an ensemble has a functional role, performing music of consistent style and quality becomes imperative, yet the demands of everyday life mean that it is not always possible to have a full complement of musicians. Interactive music technology has the potential to substitute for absent musicians and give a consistent musical output. However, the technology to achieve this (for popular music) is not yet mature, nor in a form suitable for adoption and use by musicians who are not experienced with interactive music systems, or who are unprepared to work in experimental music or with experimental systems (a particular concern for functional ensembles). This paper proposes a framework of issues to be considered when developing interactive music technologies for popular music ensemble performance. It explores aspects that are complementary to technological concerns, focusing on adoption and practice to guide future technological developments.
Keywords
Evaluation, Interactive Music Systems, Popular Music
Paper topics
Interactive performance systems, Methodological issues in sound and music computing, Social interaction in sound and music computing
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850028
Zenodo URL: https://zenodo.org/record/850028
Abstract
We propose a hierarchical approach for the design of gesture-to-sound mappings, with the goal to take into account multilevel time structures in both gesture and sound processes. This allows for the integration of temporal mapping strategies, complementing mapping systems based on instantaneous relationships between gesture and sound synthesis parameters. Specifically, we propose the implementation of Hierarchical Hidden Markov Models to model gesture input, with a flexible structure that can be authored by the user. Moreover, some parameters can be adjusted through a learning phase. We show some examples of gesture segmentations based on this approach, considering several phases such as preparation, attack, sustain, release. Finally we describe an application, developed in Max/MSP, illustrating the use of accelerometer-based sensors to control phase vocoder synthesis techniques based on this approach.
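As a rough illustration of the kind of gesture segmentation described above (not the authors' Max/MSP implementation, and flat rather than hierarchical), the sketch below decodes preparation/attack/sustain/release phases from a 1-D acceleration feature with a left-to-right HMM and Viterbi decoding; all model parameters are assumptions chosen for the example.

```python
import numpy as np

# Hypothetical left-to-right HMM over gesture phases; Gaussian emissions on a
# 1-D feature (e.g. acceleration magnitude). Illustrative parameters only.
PHASES = ["preparation", "attack", "sustain", "release"]
means = np.array([0.2, 1.0, 0.5, 0.1])   # assumed per-phase feature means
stds  = np.array([0.1, 0.3, 0.2, 0.1])   # assumed per-phase feature std-devs

# Left-to-right transitions: stay in a phase or move on to the next one.
A = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.8, 0.2, 0.0],
              [0.0, 0.0, 0.9, 0.1],
              [0.0, 0.0, 0.0, 1.0]])
pi = np.array([1.0, 0.0, 0.0, 0.0])       # always start in "preparation"

def log_emission(x):
    """Log-likelihood of observation x under each phase's Gaussian."""
    return -0.5 * ((x - means) / stds) ** 2 - np.log(stds * np.sqrt(2 * np.pi))

def viterbi(obs):
    """Return the most likely phase sequence for a 1-D observation array."""
    T, N = len(obs), len(PHASES)
    delta = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    delta[0] = np.log(pi + 1e-12) + log_emission(obs[0])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A + 1e-12)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emission(obs[t])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [PHASES[s] for s in reversed(path)]

# Toy accelerometer-energy trace: quiet, burst, sustained motion, decay.
trace = np.concatenate([np.full(20, 0.2), np.full(5, 1.0),
                        np.full(30, 0.5), np.full(15, 0.1)])
trace += 0.05 * np.random.randn(len(trace))
print(viterbi(trace))
```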
Keywords
gesture, gesture recognition, hidden markov models, interaction, machine learning, mapping, music, music interfaces, sound
Paper topics
Interactive performance systems, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850030
Zenodo URL: https://zenodo.org/record/850030
Abstract
This paper presents a hybrid interface based on a touch-sensing keyboard which gives detailed expressive control over a physically-modeled guitar. Physical modeling allows realistic guitar synthesis incorporating many expressive dimensions commonly employed by guitarists, including pluck strength and location, plectrum type, hand damping and string bending. Often, when a physical model is used in performance, most control dimensions go unused because the interface fails to provide a way to control them intuitively. Techniques as foundational as strumming lack a natural analog on the MIDI keyboard, and few digital controllers provide the independent control of pitch, volume and timbre that even novice guitarists achieve. Our interface combines gestural aspects of keyboard and guitar playing. Most dimensions of guitar technique are controllable polyphonically, some of them continuously within each note. Mappings are evaluated in a user study of keyboardists and guitarists, and the results demonstrate the interface's playability by performers of both instruments.
Keywords
guitar, keyboard, mapping, performance interfaces, physical modeling, touch sensing
Paper topics
Digital audio effects, Interactive performance systems, Interfaces for sound and music, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850032
Zenodo URL: https://zenodo.org/record/850032
Abstract
The VirtualPhilharmony (VP) system conveys the sensation of conducting an orchestra to a user (a conductor). VP's performances are created through interaction between the conductor and the orchestra, exactly like a real performance. A ``Concertmaster function'' has already been implemented by incorporating the heuristics of conducting an orchestra. A precisely predictive scheduler and a dynamical template have been designed based on analyses of actual recordings. We focused on two further problems in emulating a more realistic orchestral performance: one was that each note in the template was controlled by a single-timeline scheduler; the other was that the interaction and communication between the conductor and the orchestra over repeated practices were not simulated. We implemented a ``Multi-timeline scheduler'' and a ``Rehearsal function'' to resolve these problems.
Keywords
conducting system, heuristics, interaction, multi-timeline scheduler, rehearsal
Paper topics
Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850034
Zenodo URL: https://zenodo.org/record/850034
Abstract
Software development benefits from systematic testing with respect to implementation, optimization, and maintenance. Automated testing makes it easy to execute a large number of tests efficiently on a regular basis, leading to faster development and more reliable software. Systematic testing is not widely adopted within the computer music community, where software patches tend to be continuously modified and optimized during a project. Consequently, bugs are often discovered during rehearsal or performance, resulting in literal “show stoppers”. This paper presents a testing environment for computer music systems, first developed for the Jamoma framework and Max. The testing environment works with Max 5 and 6, is independent of any 3rd-party objects, and can be used with non-Jamoma patches as well.
Keywords
Jamoma, software development, testing
Paper topics
Computer environments for sound/music processing
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850042
Zenodo URL: https://zenodo.org/record/850042
Abstract
Software to design multimedia scenarios is usually based either on a fixed timeline or on cue lists, but neither model expresses temporal relations among the objects of the scenario. In contrast, the formalism of interactive scores can describe multimedia scenarios with flexible and fixed temporal relations among the objects of the scenario, but it can express neither temporal relations for micro controls nor signal processing. We extend interactive scores with such relations and with sound processing. We show some applications and we describe how they can be implemented in Pure Data. Our implementation has low average relative jitter even under high CPU load.
Keywords
concurrent constraint programming, interactive multimedia scenarios, multimedia interaction, sound processing
Paper topics
Computer environments for sound/music processing, Interactive performance systems
Easychair keyphrases
temporal object [26], temporal relation [25], interactive score [24], sound processing [11], faust program [10], pure data [10], high cpu load [9], karplus strong [9], real time [9], multimedia scenario [8], average relative jitter [6], cpu load [6], desainte catherine [6], fixed temporal relation [6], high precision temporal relation [6], interactive object [6], soft real time [6], audio output [5], cue list [5], dataflow relation [5], fixed timeline [5], micro control [5], signal processing [5], time model [5], time scale [5], time unit [5], control message [4], non deterministic timed concurrent constraint [4], sampling rate [4], standalone program [4]
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850044
Zenodo URL: https://zenodo.org/record/850044
Abstract
Current models of musical mood are based on clean, noiseless data that does not correspond to real life listening experiences. We conducted an experience sampling study collecting in-situ data of listening experiences. We show that real life music listening experiences are far from the homogeneous experiences used in current models of musical mood.
Keywords
Listening Context, Listening Habits, Musical Mood
Paper topics
Content processing of music audio signals, Humanities in sound and music computing, Methodological issues in sound and music computing, Social interaction in sound and music computing
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850046
Zenodo URL: https://zenodo.org/record/850046
Abstract
This paper introduces a database of sound-based applications running on the Android mobile platform. The long-term objective is to provide an overview of the state of the art in mobile applications dealing with sound and music interaction. After presenting the method used to build up and maintain the database using a non-hierarchical structure based on tags, we present a classification according to various categories of applications, and we conduct a preliminary analysis of the distribution of these categories, reflecting the current state of the database.
Keywords
Android, Interaction, Music, Sound
Paper topics
Computer environments for sound/music processing, Interfaces for sound and music, Multimodality in sound and music computing, Sonic interaction design
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850048
Zenodo URL: https://zenodo.org/record/850048
Abstract
The process of composition can be seen as a sequence of manipulations on the material. In algorithmic composition, such sequences are prescribed through another set of sequences which yield the algorithm. In a realtime situation the sequences may be closely linked to the temporal sequence of the unfolding musical structure, but in general they form orthogonal temporal graphs on their own. We present a framework which can be used to model these temporal graphs. The framework is composed of layers, which---from low to high level---provide (1) database storage and software transactional memory with selectable temporal semantics, (2) the most prominent semantics being confluent persistence, in which the temporal traces are registered and can be combined, yielding a sort of structural feedback or recursion, and finally (3) an event and expression propagation system, which, when combined with confluent persistence, provides a hook to update dependent object graphs even when they were constructed in the future. This paper presents the implementation of this framework, and outlines how it can be combined with a realtime sound synthesis system.
Keywords
algorithmic composition, computer music language, confluent persistence, data structures
Paper topics
Computer environments for sound/music processing, Interactive performance systems
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850036
Zenodo URL: https://zenodo.org/record/850036
Abstract
With musical applications in mind, this paper reports on the level of noise observed in two commercial infrared marker-based motion capture systems: one high-end (Qualisys) and one affordable (OptiTrack). We have tested how various features (calibration volume, marker size, sampling frequency, etc.) influence the noise level of markers lying still, and of markers fixed to subjects standing still. The conclusion is that the motion observed in humans standing still is usually considerably higher than the noise level of the systems. Depending on the system and its calibration, however, the signal-to-noise ratio may in some cases be problematic.
Keywords
accuracy and precision, motion capture, noise, quantity of motion, spatial range, standstill, still markers
Paper topics
Methodological issues in sound and music computing, Music performance analysis and rendering
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850038
Zenodo URL: https://zenodo.org/record/850038
Abstract
A system for Do-It-Yourself (DIY) interface designs focused on sound and music computing has been developed. The system is based on the Create USB Interface (CUI), which is an open source microcontroller prototyping board together with the GROVE system of interchangeable transducers. Together, these provide a malleable and fluid prototyping process of ‘Sketching in Hardware’ for both music and non-music interaction design ideas. The most recent version of the board is the CUI32Stem, which is designed specifically to work hand-in-hand with the GROVE elements produced by Seeed Studio, Inc. GROVE includes a growing collection of open source sensors and actuators that utilize simple 4-wire cables to connect to the CUI32Stem. The CUI32Stem itself utilizes a high-performance Microchip® PIC32 microcontroller, allowing a wide range of programmable interactions. The development of this system and its use in sound and music interaction design is described. Typical use scenarios for the system may pair the CUI32Stem with a smartphone, a normal computer, and one or more GROVE elements via wired or wireless connections.
Keywords
802.11, 802.15.4, Arduino, BASIC, Bluetooth, Create USB Interface, CUI32, CUI32Stem, Microchip PIC32, Microcontrollers, Music Interaction Design, Open Sound Control, Sketching in Hardware, Sound and Music Computing education, StickOS, Wifi, Wireless, Zigbee, Zigflea
Paper topics
Interactive performance systems, Interfaces for sound and music, Methodological issues in sound and music computing, Multimodality in sound and music computing, Music and robotics, Sonic interaction design
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850040
Zenodo URL: https://zenodo.org/record/850040
Abstract
We have developed an audio information retrieval system called Audio Metaphor that utilizes large online collaborative databases for real-time soundscape composition. Audio Metaphor has been used in a contemporary dance piece, IN[A]MOMENT. The audience interacts with the system by sending it Tweets. At the heart of Audio Metaphor is a sub-query generation algorithm. This algorithm, which we name SLiCE (string list chopping experiments), accepts a natural language phrase that is parsed into a list of text feature strings, which is then chopped into sub-queries in search of a combination of queries with non-empty, mutually exclusive results. Employing SLiCE, Audio Metaphor processes natural language phrases to find a combination of audio file results that represents the input phrase. In parallel, the parsed input phrase is used to search for related Tweets, which are similarly put through SLiCE. Search results are then sent to a human composer to combine and process into a soundscape.
Keywords
Performance, Search algorithms, Social media, Soundscape composition
Paper topics
Interactive performance systems, Music information retrieval, Social interaction in sound and music computing
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850050
Zenodo URL: https://zenodo.org/record/850050
Abstract
In this paper, we present an automated process, part of the SyncGlobal project, for time-continuous prediction of loudness and brightness in soundtracks. A novel Annotation Tool is presented, which allows manual time-continuous annotations to be performed. We rate well-known audio features as representations of two perceptual attributes: loudness and brightness. A regression model is trained with the manual annotations and the acoustic features representing the perceptions. Four different regression methods are implemented and their success in tracking the two perceptions is studied. A coefficient of determination (R^2) of 0.91 is achieved for loudness and 0.35 for brightness using Support Vector Regression (SVR), yielding a better performance than Friberg et al. (2011).
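A minimal sketch of the regression step, assuming hypothetical frame-wise features and annotations rather than the SyncGlobal data: Support Vector Regression from scikit-learn trained on one half of the frames and scored with R^2 on the other half.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Hypothetical frame-wise features (e.g. RMS energy, spectral centroid) and a
# time-continuous manual annotation, both sampled at the same frame rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))                     # [rms, centroid] per frame
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=2000)

# Train on the first half of the frames, evaluate R^2 on the second half.
split = len(X) // 2
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.05))
model.fit(X[:split], y[:split])
print("R^2:", r2_score(y[split:], model.predict(X[split:])))
```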
Keywords
Annotation Tool, Automatic Annotation, Brightness, Loudness, Prediction, Regression, Soundtracks, Time continuous
Paper topics
Automatic separation, classification of sound and music, Content processing of music audio signals, Multimodality in sound and music computing, Music information retrieval, Perception and cognition of sound and music, recognition
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850052
Zenodo URL: https://zenodo.org/record/850052
Abstract
Considering the large population of guitarists and the relatively poor selection of guitar scores, there should be a certain demand for systems that automatically arrange scores for other instruments into guitar scores. This paper introduces a framework based on the hidden Markov model (HMM) that carries out ``arrangement'' and ``fingering determination'' in a unified way. The framework takes forms and picking patterns as its hidden states and a given piece of music as an observation sequence, and carries out fingering determination and arrangement as a decoding problem of the HMM. With manually set HMM parameters reflecting the preferences of beginner guitarists, the framework generates natural fingerings and arrangements suitable for beginners. Some examples of fingerings and arrangements generated by the framework are presented.
Keywords
automatic arrangement, guitar, hidden Markov model (HMM)
Paper topics
Automatic music generation/accompaniment systems
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850054
Zenodo URL: https://zenodo.org/record/850054
Abstract
Many audio synthesis techniques have been successful in reproducing the sounds of musical instruments. Several of these techniques require parameter calibration. However, this task can be difficult and time-consuming, especially when there is no intuitive correspondence between a parameter value and the change in the produced sound. Searching the parameter space for a given synthesis technique is, therefore, a task more naturally suited to an automatic optimization scheme. Genetic algorithms (GA) have been used rather extensively for this purpose, and in particular for calibrating Classic FM (ClassicFM) synthesis to mimic recorded harmonic sounds. In this work, we use GA to further explore its modified counterpart, Modified FM (ModFM), which has not been used as widely and whose ability to produce musical sounds has not been as fully explored. We completely automate the calibration of a ModFM synthesis model for the reconstruction of harmonic instrument tones using GA. In this algorithm, we refine parameters and operators such as the crossover probability and the mutation operator for a closer match. As an evaluation, we show that the GA system automatically generates harmonic musical instrument sounds closely matching the target recordings, a match comparable to the application of GA to ClassicFM synthesis.
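For illustration, a compact genetic-algorithm loop of the general kind described, but calibrating plain Classic FM (not ModFM) against a synthetic target spectrum; the population size, blend crossover, mutation scales and fitness measure are all assumptions for the sketch.

```python
import numpy as np

SR, F0, N = 16000, 220.0, 4096
t = np.arange(N) / SR

def fm_tone(ratio, index):
    """Simple Classic FM tone: carrier at F0, modulator at ratio*F0."""
    return np.sin(2*np.pi*F0*t + index*np.sin(2*np.pi*ratio*F0*t))

def spectrum(x):
    return np.abs(np.fft.rfft(x * np.hanning(N)))

# Hypothetical "recorded" target: here just another FM tone with hidden
# parameters, standing in for an analysed instrument tone.
target = spectrum(fm_tone(2.0, 3.5))

def fitness(params):
    ratio, index = params
    return -np.linalg.norm(spectrum(fm_tone(ratio, index)) - target)

rng = np.random.default_rng(1)
pop = np.column_stack([rng.uniform(0.5, 4.0, 40), rng.uniform(0.0, 8.0, 40)])
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]                # keep the 10 best
    parents = elite[rng.integers(0, 10, size=(40, 2))]   # random parent pairs
    alpha = rng.uniform(size=(40, 1))
    children = alpha*parents[:, 0] + (1-alpha)*parents[:, 1]        # blend crossover
    children += rng.normal(scale=[0.05, 0.1], size=children.shape)  # mutation
    pop = children
best = max(pop, key=fitness)
print("ratio=%.3f index=%.3f" % tuple(best))
```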
Keywords
artificial intelligence, automatic parameter estimation, genetic algorithm, modified FM sound synthesis, resynthesis
Paper topics
Automatic music generation/accompaniment systems, Models for sound analysis and synthesis, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850056
Zenodo URL: https://zenodo.org/record/850056
Abstract
In this paper, we propose a method to generate new melodic styles (melodics) in the automatic composition of polyphonic music. In the proposed method, a melodic style is represented as a grammar that consists of rewriting rules, and the rewriting rules are generated by a classifier system, which is a genetics-based machine learning system. In the previous studies of the grammatical approach, the problem of how to treat polyphony and that of generating new melodic styles automatically haven't been studied very intensively. Therefore, we have chosen to tackle those problems. We modeled the generative process of polyphonic music as asynchronous growth by applying rewriting rules in each voice separately. In addition, we developed a method to automatically generate grammar rules, which are the parameters of the polyphony model. The experimental results show that the proposed method can generate grammar rules and polyphonic music pieces that have characteristic melodic styles.
Keywords
Algorithmic Composition, Classifier System, Grammar, Melodics, Polyphony
Paper topics
Automatic music generation/accompaniment systems, Computational musicology
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850058
Zenodo URL: https://zenodo.org/record/850058
Abstract
Systems able to find a song based on a sung, hummed, or whistled melody are called Query-By-Humming (QBH) systems. Hummed or sung queries are not directly compared to original recordings. Instead, systems employ search keys that are more similar to a cappella singing than the original pieces. Successful, deployed systems use human computation to create search keys: hand-entered MIDI melodies or recordings of a cappella singing. There are a number of human computation-based approaches that may be used to build a database of QBH search keys, but it is not clear which is the best choice in terms of cost, computation time, and search performance. In this paper we compare search keys built through human computation using two populations: paid local singers and Amazon Mechanical Turk workers. We evaluate them on quality, cost, computation time, and search performance.
Keywords
human computation, music information retrieval, query by humming
Paper topics
Music information retrieval
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850060
Zenodo URL: https://zenodo.org/record/850060
Abstract
In this paper we propose a method of audio chord estimation. It does not rely on any machine learning technique, yet shows good recognition quality compared to other known algorithms. We calculate a beat-synchronized spectrogram with high time and frequency resolution. It is then processed with an analogue of the Prewitt filter used for edge detection in image processing, to suppress non-harmonic spectral components. The sequence of chroma vectors obtained from the spectrogram is smoothed using a self-similarity matrix before the actual chord recognition. The chord templates used for recognition are binary-like, but have the tonic and the 5th note accented. The method is evaluated on the 13 Beatles albums.
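A toy version of the final template-matching step, with assumed accent weights for the tonic and the 5th (the paper's exact template values are not given in the abstract):

```python
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chord_template(root, quality):
    """Binary-like template with the tonic and the 5th accented (weights assumed)."""
    third = 4 if quality == "maj" else 3
    tpl = np.zeros(12)
    tpl[root] = 1.0                    # tonic, accented
    tpl[(root + third) % 12] = 0.8     # third
    tpl[(root + 7) % 12] = 1.0         # perfect 5th, accented
    return tpl / np.linalg.norm(tpl)

TEMPLATES = {f"{NOTES[r]}{'' if q == 'maj' else 'm'}": chord_template(r, q)
             for r in range(12) for q in ("maj", "min")}

def estimate_chord(chroma):
    """Pick the template with the highest correlation to a 12-bin chroma vector."""
    chroma = chroma / (np.linalg.norm(chroma) + 1e-9)
    return max(TEMPLATES, key=lambda name: chroma @ TEMPLATES[name])

# Toy chroma vector dominated by C, E and G -> should come out as C major.
chroma = np.zeros(12)
chroma[[0, 4, 7]] = [1.0, 0.7, 0.9]
print(estimate_chord(chroma))
```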
Keywords
Audio content processing, Chord recognition, Music information retrieval
Paper topics
Content processing of music audio signals, Music information retrieval, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850062
Zenodo URL: https://zenodo.org/record/850062
Abstract
Tempo variations in classical music are an important means of artistic expression. Fluctuations in tempo can be large and sudden, making applications like automated score following a challenging task. Some of the fluctuations may be predicted from (tempo annotations in) the score, but prediction based only on the score is unlikely to capture the internal coherence of a performance. On the other hand, filtering approaches to tempo prediction (like the Kalman filter) are suited to track gradual changes in tempo, but do not anticipate sudden changes. To combine the advantages of both approaches, we propose a method that incorporates score based tempo predictions into a Kalman filter model of performance tempo. We show that the combined model performs better than the filter model alone.
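A minimal sketch of the filtering side, assuming a 1-D random-walk tempo model and a purely illustrative way of blending in score-based predictions; the paper's actual model (linear basis functions combined with the Kalman filter) is more elaborate.

```python
import numpy as np

def track_tempo(observed_ibi, score_pred=None, q=0.5, r=4.0):
    """1-D Kalman filter over tempo (BPM) derived from inter-beat intervals.

    observed_ibi : inter-beat intervals in seconds (noisy measurements)
    score_pred   : optional per-beat tempo predictions (e.g. from annotations),
                   blended into the prediction step; purely illustrative here.
    """
    tempo, var = 60.0 / observed_ibi[0], 10.0   # initial state and variance
    out = []
    for i, ibi in enumerate(observed_ibi):
        # Predict: random-walk tempo model, optionally pulled toward the score.
        pred = tempo if score_pred is None else 0.5 * tempo + 0.5 * score_pred[i]
        pred_var = var + q
        # Update with the measured tempo from the current inter-beat interval.
        z = 60.0 / ibi
        k = pred_var / (pred_var + r)           # Kalman gain
        tempo = pred + k * (z - pred)
        var = (1 - k) * pred_var
        out.append(tempo)
    return np.array(out)

# Toy performance: 120 BPM drifting to 100 BPM with timing noise.
true_bpm = np.linspace(120, 100, 64)
ibis = 60.0 / true_bpm + np.random.normal(0, 0.01, 64)
print(track_tempo(ibis)[-5:])
```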
Keywords
expressive performance, Kalman filter, linear basis function, tempo prediction, tempo tracking
Paper topics
Automatic music generation/accompaniment systems, Computational musicology, Models for sound analysis and synthesis, Music performance analysis and rendering
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850064
Zenodo URL: https://zenodo.org/record/850064
Abstract
This paper explores the potential of the SUM tool, intended initially for the sonification of images, as a tool for graphical computer-aided composition. As a user library with a graphical user interface within the computer-aided composition environment of PWGL, SUM has the potential to be used as a graphical approach towards computer-aided composition. Through the re-composition of the graphic score of Ligeti’s Artikulation, we demonstrate how SUM can be used in the generation of a graphic score. Supporting spatio-temporal timepaths, we explore alternative ways of reading this score. Furthermore, we investigate the claim of certain visual artworks to be ‘visual music’, by sonifying them as graphic scores in SUM.
Keywords
computer-aided composition, graphic score, visual music
Paper topics
3D sound/music, Auditory and multimodal illusions, Auditory display and data sonification, Humanities in sound and music computing, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850066
Zenodo URL: https://zenodo.org/record/850066
Abstract
Corpus based concatenative synthesis has been approached from different perspectives by many researchers. This generated a number of diverse solutions addressing the matter of target selection, corpus visualization and navigation. With this paper we introduce the concept of extended descriptor space, which permits the arbitrary redistribution of audio units in space, without affecting each unit's sonic content. This feature can be exploited in novel instruments and music applications to achieve spatial dispositions which could enhance control and expression. Making use of Virtual Reality technology, we developed vrGrains, an immersive installation in which real-time corpus navigation is based on the concept of extended descriptor space and on the related audio unit rearrangement capabilities. The user is free to explore a corpus represented by 3D units which physically surrounds her/him. Through natural interaction, the interface provides different interaction modalities which allow controllable and chaotic audio unit triggering and motion.
Keywords
corpus based synthesis, installation, tracking, virtual reality
Paper topics
Computer environments for sound/music processing, Multimodality in sound and music computing, Sonic interaction design, Sound and music for VR and games
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850068
Zenodo URL: https://zenodo.org/record/850068
Abstract
This paper describes the construction of computable audio descriptors capable of modeling relevant high-level perceptual qualities of textural sounds. These qualities, all metaphoric, bipolar, and continuous constructs, have been identified in previous research: high-low, ordered-chaotic, smooth-coarse, tonal-noisy, and homogeneous-heterogeneous, covering timbral, temporal and structural properties of sound. We detail the construction of the descriptors and demonstrate the effects of tuning with respect to individual accuracy or mutual orthogonality. The descriptors are evaluated on a corpus of 100 textural sounds against respective measures of human perception retrieved via an online survey. Potential future use of perceptual audio descriptors in music creation is illustrated by a prototypic sound browser application.
Keywords
audio analysis, audio descriptors, auditory perception, music information retrieval
Paper topics
Content processing of music audio signals, Models for sound analysis and synthesis, Music information retrieval, Perception and cognition of sound and music, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850070
Zenodo URL: https://zenodo.org/record/850070
Abstract
Query by example retrieval of environmental sound recordings is a research area with applications to sound design, music composition and automatic suggestion of metadata for the labeling of sound databases. Retrieval problems are usually composed of successive feature extraction (FE) and similarity measurement (SM) steps, in which a set of extracted features encoding important properties of the sound recordings are used to compute the distance between elements in the database. Previous research has pointed out that successful features in the domains of speech and music, like MFCCs, might fail at describing environmental sounds, which have intrinsic variability and noisy characteristics. We present a set of novel multiresolution features obtained by modeling the distribution of wavelet subband coefficients with generalized Gaussian densities (GGDs). We define the similarity measure in terms of the Kullback-Leibler divergence between GGDs. Experimental results on a database of 1020 environmental sound recordings show that our approach always outperforms a method based on traditional MFCC features and Euclidean distance, improving retrieval rates from 51% to 62%.
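For illustration, one commonly cited closed form of the KL divergence between two zero-mean generalized Gaussian densities, summed and symmetrized over subbands with assumed (scale, shape) parameters; this is a sketch of the general approach described above, not the paper's exact feature extraction or distance.

```python
import numpy as np
from scipy.special import gammaln

def kl_ggd(a1, b1, a2, b2):
    """KL divergence between two zero-mean generalized Gaussian densities with
    scales a1, a2 and shapes b1, b2, using the standard closed form (the exact
    normalization is assumed to follow the usual Do & Vetterli-style formula)."""
    return (np.log(b1 / (2 * a1)) - gammaln(1 / b1)
            - np.log(b2 / (2 * a2)) + gammaln(1 / b2)
            + (a1 / a2) ** b2 * np.exp(gammaln((b2 + 1) / b1) - gammaln(1 / b1))
            - 1 / b1)

def ggd_distance(params_a, params_b):
    """Symmetrized KL summed over matching subbands; params are (scale, shape) pairs."""
    return sum(kl_ggd(*pa, *pb) + kl_ggd(*pb, *pa)
               for pa, pb in zip(params_a, params_b))

# Example with assumed per-subband (scale, shape) parameters for two sounds.
sound_a = [(0.8, 1.2), (0.5, 0.9), (0.3, 1.5)]
sound_b = [(0.7, 1.1), (0.6, 1.0), (0.2, 1.3)]
print(ggd_distance(sound_a, sound_b))
```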
Keywords
Content-based retrieval, Environmental sounds, Features, Similarity Measure, Sound textures, Wavelets
Paper topics
Automatic separation, classification of sound and music, Models for sound analysis and synthesis, Music information retrieval, recognition
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850072
Zenodo URL: https://zenodo.org/record/850072
Abstract
We describe a multi-agent system which composes in real time, using negotiation as the active compositional technique. In one version of the system, the creative agents’ output is written to disk; during performance, a curatorial agent selects previously composed movements and assembles a complete musical composition. The resulting score is then displayed to musicians and performed live. A second version of the system is described, in which the real-time interaction is performed immediately by a mechanical musical instrument, and a human instrumentalist’s performance data is included in the system as one of the agents (a human agent).
Keywords
generative music, interactive performance, music and robotics, social interaction
Paper topics
Automatic music generation/accompaniment systems, Interactive performance systems, Music and robotics, Social interaction in sound and music computing
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850074
Zenodo URL: https://zenodo.org/record/850074
Abstract
The Agent Design Toolkit is a software suite that we have developed for designing the behaviour of musical agents; software elements that automate some aspect of musical composition or performance. It is intended to be accessible to musicians who have no expertise in computer programming or algorithms. However, the machine learning algorithms that we use require the musician to engage with technical aspects of the agent design, and our research goal is to find ways to enable this process through understandable and intuitive concepts and interfaces, at the same time as developing effective agent algorithms. Central to enabling musicians to use the software is to make available a set of clear instructional examples showing how the technical aspects of agent design can be used effectively to achieve particular musical results. In this paper, we present a pilot study of the Agent Design Toolkit in which we conducted two contrasting musical agent design experiments with the aim of establishing a set of such examples. From the results, we compiled a set of four clear examples of effective use of the learning parameters which will be used to teach new users about the software. Secondary results of the experiments were the discovery of a range of improvements which can be made to the software itself.
Keywords
design, human-computer interaction, musical agents
Paper topics
Automatic music generation/accompaniment systems, Interactive performance systems, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850078
Zenodo URL: https://zenodo.org/record/850078
Abstract
Specific requirements of certain works of music, especially in the field of contemporary experimental music of the 20th century, are sometimes hard to meet when it comes to performance. Special instruments or technologies are necessary and are often no longer available, broken, or insufficiently documented. This paper addresses this problem of performance practice in contemporary music by exploring the design of an electronic replacement of a mechanical instrument for the performance of the piece ``Mouvement - vor der Erstarrung'' by Helmut Lachenmann. The simulacra developed consist of a musical interface, software for sound synthesis and a loudspeaker system. A focus is put on the challenge of synthesising and projecting the sound as close as possible to the original instrument and fitting the musical requirements of the piece. The acoustic integration of the electronic instrument into an ensemble of acoustic instruments was achieved by using an omni-directional loudspeaker. For the sound synthesis, a hybrid approach of sampling and additive synthesis was chosen. The prototypes developed proved to be robust and reliable, and the simulacra were generally well accepted by the performing musicians, the surrounding musicians, the conductor and the audience.
Keywords
electronic instrument, instrumental design, musical interface, performance practice, radiation synthesis, simulacrum, sound synthesis
Paper topics
access and modelling of musical heritage, Interactive performance systems, Technologies for the preservation
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850080
Zenodo URL: https://zenodo.org/record/850080
Abstract
"Disembodied voices" is an interactive environment designed for an expressive, gesture-based musical performance. The motion sensor Kinect, placed in front of the performer, provides the computer with the 3D space coordinates of the two hands. The application is designed according to the metaphor of the choir director: the performer, through gestures, is able to run a score and to produce a real-time expressive interpretation. The software, developed by the authors, interprets the gestural data by activating a series of compositional algorithms that produce vocal sounds. These are pre-recorded samples processed in real time through the expressive interaction dependent on the conductor's gestures. Hence the name of the application: you follow the conductor's gestures, hear the voices but don't see any singer. The system also provides a display of motion data, a visualization of the part of the score performed at that time, and a representation of the musical result of the compositional algorithms activated.
Keywords
algorithmic composition, Kinect, musical performance
Paper topics
Interactive performance systems, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850082
Zenodo URL: https://zenodo.org/record/850082
Abstract
Two pilot experiments have been conducted to investigate the influence of auditory and underfoot tactile cues respectively on perception and action during walking. The former experiment shows that illusory tactile perception can be generated, biased by the intensity of auditory cues in the low-frequency region. The latter experiment suggests that non-visual foot-level augmentation may influence the gait cycle in normally able subjects. In the respective limits of significance, taken together both experiments suggest that the introduction of ecological elements of augmented reality at floor level may be exploited for the development of novel multimodal interfaces.
Keywords
auditory, cross-modal effect, ecological, experiment, footwear interface, gait, illusion, vibrotactile
Paper topics
Auditory and multimodal illusions, Auditory display and data sonification, Interfaces for sound and music, Multimodality in sound and music computing, Sonic interaction design, Sound and music for VR and games, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850084
Zenodo URL: https://zenodo.org/record/850084
Abstract
This paper presents a new electronic pipe organ based on positive audio feedback. Unlike the typical resonance of a tube of air, we use audio feedback introduced by an amplifier, a lowpass filter, a loudspeaker and a microphone in a closed pipe to generate resonant sounds without any physical airflow. The timbre of this sound can be manipulated by controlling the parameters of the filter and the amplifier. We introduce the design concept of this audio feedback-based wind instrument, and present a prototype that can be played by a MIDI keyboard.
Keywords
audio feedback, pipe instrument, resonance
Paper topics
Digital audio effects, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850086
Zenodo URL: https://zenodo.org/record/850086
Abstract
This paper presents the results of an experiment investigating the effect of spatial sonification of a moving target on the user's performance during the execution of basic tracking exercises. Our starting hypothesis is that a properly designed multimodal continuous feedback can be used to represent temporal and spatial information that can in turn improve performance and motor learning in simple target-following tasks. Sixteen subjects were asked to track the horizontal movement of a circular visual target by controlling an input device with their hand. Two different continuous task-related auditory feedback modalities were considered, both simulating the sound of a rolling ball, the only difference between them being the presence or absence of binaural spatialization of the target's position. Results demonstrate that spatial auditory feedback significantly decreases the average tracking error with respect to visual feedback alone, whereas monophonic feedback does not. Spatial information provided through sound in addition to visual feedback thus helps subjects improve their performance.
Keywords
3D audio, auditory feedback, sonification, target following
Paper topics
3D sound/music, Auditory display and data sonification, Multimodality in sound and music computing
Easychair keyphrases
not available
Paper type
unknown
DOI: 10.5281/zenodo.850088
Zenodo URL: https://zenodo.org/record/850088
Abstract
In this paper we present a new instrument model to be used for sound modification and interpolation. The approach comprises the analysis of sounds from an instrument sound database, a parameter estimation for the instrument model and a sound synthesis using this model. The sound analysis is carried out by segmenting each sound into a sinusoidal and a noise component. Both components are modeled using B-splines (basic splines) in an n-dimensional hyperplane according to the specific sound parameters, to capture the instrument's timbre over its complete pitch range, possible intensities and temporal evolution. Sound synthesis is then achieved by exploring these hyperplanes and altering the timbre of the sounds of the database. To conclude, a subjective evaluation is presented for comparison with state-of-the-art sound transformations. This work is based on a preliminary study published recently.
Keywords
analysis, interpolation, modeling, synthesis
Paper topics
Digital audio effects, Models for sound analysis and synthesis
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850090
Zenodo URL: https://zenodo.org/record/850090
Abstract
The enumeration of musical objects has received heightened attention in the last twenty-five years, and whilst such phenomena as tone rows, polyphonic mosaics, and scales have been explored, there has been no prior investigation of the enumeration of chord sequences. In part, analysts may have disregarded the situation as having a trivial solution, namely the number of chord types at each step raised to the power of the number of steps. However, there are more subtle and interesting situations where there are constraints, such as rotational and transpositional equivalence of sequences. Enumeration of such chord sequences is explored through the application of Burnside’s lemma for counting equivalence classes under a group action, and the computer generation of lists of representative chord sequences is outlined. Potential extensions to ‘McCartney’s Chord Sequence Problem’ for the enumeration of cyclic (looping) chord sequences are further discussed.
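A minimal sketch of the Burnside-style count for the simplest constrained case mentioned above (looping sequences considered equal under rotation), assuming a free choice among a fixed number of chord types at every step:

```python
from math import gcd

def count_cyclic_sequences(num_chords, length):
    """Count length-`length` chord loops over `num_chords` chord types, with two
    loops considered the same if one is a rotation of the other (Burnside's lemma)."""
    # Sum, over all rotations r, of the sequences fixed by that rotation:
    # a rotation by r fixes exactly num_chords ** gcd(r, length) sequences.
    total = sum(num_chords ** gcd(r, length) for r in range(length))
    return total // length

# 4 chord types, loops of length 8, versus the unconstrained count 4**8 = 65536.
print(count_cyclic_sequences(4, 8))
```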
Keywords
computer enumeration of musical objects, generation of chord sequences, mathematical music theory
Paper topics
Automatic music generation/accompaniment systems, Computational musicology, Music information retrieval
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850092
Zenodo URL: https://zenodo.org/record/850092
Abstract
Motiongrams are visual representations of human motion, generated from regular video recordings. This paper evaluates how different video features may influence the generated motiongram: inversion, colour, filtering, background, lighting, clothing, video size and compression. It is argued that the proposed motiongram implementation is capable of visualising the main motion features even with quite drastic changes in all of the above mentioned variables.
Keywords
Jamoma, jitter, max, motiongram, motion image, musical gestures, music-related motion, video analysis
Paper topics
Methodological issues in sound and music computing, Multimodality in sound and music computing, Music performance analysis and rendering
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850094
Zenodo URL: https://zenodo.org/record/850094
Abstract
Sound synthesis algorithms which radically depart from acoustical equations, and seek out numerical quirks at audio rate, can still have a part to play in the art-science investigations of computer music. This paper describes a host of ideas in alternative sound synthesis, from dilation equations and nonlinear dynamical equations, through probabilistic sieves, to oscillators based on geometrical formulae. We close with some new ideas in concatenative sound synthesis, using sparse approximation as the analysis method for matching, and driving synthesis through an EEG interface. (side note for reviewers: code and examples are made available at http://www.sussex.ac.uk/Users/nc81/evenmoreerrant.html to assist review, and illustrate disclosure that would accompany the paper)
Keywords
concatenative synthesis, non-linear equations, sound synthesis
Paper topics
Digital audio effects, Models for sound analysis and synthesis, Sonic interaction design, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850098
Zenodo URL: https://zenodo.org/record/850098
Abstract
In the course of realizing the sound installation Interstices, questions pertaining to the auditory perception of location and extent and to the spatial composition of the micro and macro structure of sound were explored in a poietic way. Physical modelling was re-interpreted as a framework for designing the spatial and timbral appearance of sounds upon a set of distributed speaker array clusters. This explorative process led to observations that helped formulate novel research questions within the context of psychoacoustics and auditory display.
Keywords
auditory display, installation art, physical modelling, spatialization
Paper topics
3D sound/music, Auditory display and data sonification, Computer environments for sound/music processing, Perception and cognition of sound and music
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850100
Zenodo URL: https://zenodo.org/record/850100
Abstract
Since the development of sound recording technologies, the palette of sound timbres available for music creation has been extended far beyond traditional musical instruments. The organization and categorization of timbre has been a common endeavor. The availability of large databases of sound clips provides an opportunity for obtaining data-driven timbre categorizations via content-based clustering. In this article we describe an experiment aimed at understanding what factors influence the ability of users to learn a given clustering of sound samples. We clustered a large database of short sound clips, and analyzed the success of participants in assigning sounds to the "correct" clusters after listening to a few examples of each. The results of the experiment suggest a number of relevant factors related both to the strategies followed by users and to the quality measures of the clustering solution, which can guide the design of creative applications based on audio clip clustering.
Keywords
data clustering, hci, sound clip databases
Paper topics
Automatic separation, classification of sound and music, Interfaces for sound and music, Perception and cognition of sound and music, recognition
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850102
Zenodo URL: https://zenodo.org/record/850102
Abstract
This paper discusses the use of the Shepard Tone (ST) as a digital sound source in musical composition. This tone is of musical interest in two respects. First, it underlines the difference between tone height and tone chroma, opening new possibilities in sound generation and musical perception. Second, being (in a paradoxical way) locally directional while globally stable and circumscribed, it allows us to look differently at an instrument's range as well as at phrasing in musical composition. Thus, this paper proposes a method of generating the ST relying upon an alternative spectral envelope which, as far as we know, has never been used before for the reproduction of the Shepard Scale Illusion (SSI). Using the proposed digital sound source, it was possible to successfully reproduce the SSI, even when applied to a melody. The melody was called "Perpetual Melody Auditory Illusion" because when it is heard it creates the auditory illusion that it never ends, as is the case with the SSI. Moreover, we composed a digital music piece titled “Perpetual Melody – contrasting moments”, using exclusively the digital sound source as sound generator and the melody as musical content.
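A sketch of a generic Shepard tone generator: octave-spaced partials weighted by a fixed spectral envelope over log-frequency. A Gaussian envelope is used here purely for illustration; the paper proposes an alternative, loudness-based envelope, and all parameter values below are assumptions.

```python
import numpy as np

SR = 44100

def shepard_tone(pitch_class, dur=1.0, f_min=20.0, f_max=10000.0, center=440.0):
    """Sum of octave-spaced partials whose amplitudes follow a fixed spectral
    envelope (a Gaussian over log-frequency in this sketch)."""
    t = np.arange(int(SR * dur)) / SR
    base = f_min * 2 ** (pitch_class / 12.0)
    out = np.zeros_like(t)
    f = base
    while f < f_max:
        # The envelope weight depends only on log-frequency, not on the pitch
        # class, which is what makes ascending scales sound endless.
        w = np.exp(-0.5 * ((np.log2(f) - np.log2(center)) / 1.5) ** 2)
        out += w * np.sin(2 * np.pi * f * t)
        f *= 2
    return out / np.max(np.abs(out))

# Twelve successive semitone steps wrap around to the same set of partials.
scale = np.concatenate([shepard_tone(pc, dur=0.3) for pc in range(12)])
```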
Keywords
40 Phon equal-loudness curve, A-Weighting curve, Digital music composition, Perpetual Melody Auditory Illusion, Shepard Scale Illusion, Shepard Tones, Spectral envelope
Paper topics
Auditory and multimodal illusions, Perception and cognition of sound and music
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850104
Zenodo URL: https://zenodo.org/record/850104
Abstract
In this paper we present a comparative study of gestural interaction with musical sound, to gain insight into the notion of musical affordance in interactive music systems. We conducted an interview-based user study trialing three accelerometer-based devices, an iPhone, a Wiimote, and an Axivity Wax prototype, with four kinds of musical sound, including percussion, stringed instruments, and voice recordings. The accelerometers of the devices were mapped to computer-based sound synthesis parameters. By using consistent mappings across different source sounds, and performing them with the three different devices, users experienced forms of physical, sonic, and cultural affordance, which combine to form what we term musical affordance.
Keywords
accelerometer, Affordances, Gesture, user study
Paper topics
Interactive performance systems, Interfaces for sound and music, Social interaction in sound and music computing
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850106
Zenodo URL: https://zenodo.org/record/850106
Abstract
Algorithmic Composition (AC) methods often depend on evaluation methods in order to define the probabilities that change operators have to be applied. However, the evaluation of music material involves the codification of aesthetic features, which is a very complex process if we want to outline automatic procedures that are able to compute the suitability of melodies. In this context, we offer in this paper a comprehensive investigation on numerous ideas to examine and evaluate melodies, some of them based on music theory. These ideas have been used in music analysis but have been usually neglected in many AC procedures. Those features are partitioned into ten categories. While there is still much research to do in this field, we intend to help computer-aided composers define more sophisticated and useful methods for evaluating music.
Keywords
Algorithmic Composition, Automatic Evaluation of Melodies, Evolutionary Music
Paper topics
Automatic music generation/accompaniment systems, Computational musicology, Methodological issues in sound and music computing, Music information retrieval
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850108
Zenodo URL: https://zenodo.org/record/850108
Abstract
Sound synthesis with mass-interaction physical modeling networks can be considered a general paradigm capable of being the central part of complete software environments for musical creation. GENESIS 3, built around the CORDIS-ANIMA formalism and developed by the ACROE/ICA Laboratory, is the first environment of this kind. Using it, the artist may face a problem inherent in every creative process: how to use a given tool in order to obtain an expected result. In our context, the question would be: “Considering a sound, which physical model could produce it?”. This paper aims at presenting the frame in which this inverse problem is set and at establishing its very own “ins and outs”. We also present two different algorithmic resolutions applied to quite simple cases and then discuss their relevance.
Keywords
ANIMA, CORDIS, GENESIS, Interaction, Inverse, Mass, Modeling, Physical, Problem, Sound, Synthesis
Paper topics
Automatic music generation/accompaniment systems, Computer environments for sound/music processing, Interfaces for sound and music, Models for sound analysis and synthesis, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850110
Zenodo URL: https://zenodo.org/record/850110
Abstract
In this paper, we describe LCSynth, a new sound synthesis language currently under development. LCSynth integrates objects and manipulations for microsounds in its language design. Such an integration of objects and manipulations for microsound into the design of a sound synthesis framework can facilitate creative exploration of microsound synthesis techniques such as granular synthesis and waveset synthesis, which has been considered relatively difficult in existing sound synthesis frameworks and computer music languages that depend solely on the traditional abstraction of unit generators.
Keywords
computer music, microsound, programming language
Paper topics
Computer environments for sound/music processing
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850112
Zenodo URL: https://zenodo.org/record/850112
Abstract
This paper examines whether latent structure may be discovered from commercially sold albums using features characterizing their song adjacencies. We build a large-scale dataset from the first 5 songs of 8,505 commercial albums. The dataset spans multiple artists, genres, and decades. We generate a training set (Train) consisting of 11,340 True song adjacencies and use it to train a mixture of multivariate Gaussians. We also generate two disjoint test sets (Test1 and Test2), each having 11,340 True song adjacencies and 45,360 Artificial song adjacencies. We perform feature subset selection and evaluate on Test1. We test our model on Test2 in a standard retrieval setting. The model achieves a precision of 22.58%, above the baseline precision of 20%. We compare this performance against a model trained and tested on a smaller dataset and against a model that uses full-song features. In the former case, precision is better than in the large-scale experiment (24.80%). In the latter case, the model achieves precision no better than baseline (20.13%). Noting the difficulty of the retrieval task, we speculate that using features which characterize song adjacency may improve Automatic Playlist Generation (APG) systems.
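A small sketch of the modeling-and-retrieval setup, with synthetic stand-ins for the adjacency features and the True/Artificial split; the numbers, feature dimensions and distributions below are assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical adjacency features (e.g. tempo difference, key distance, ...):
# true adjacencies cluster near zero, artificial ones are more spread out.
true_train = rng.normal(0.0, 0.5, size=(1000, 4))
true_test  = rng.normal(0.0, 0.5, size=(200, 4))
artificial = rng.normal(0.0, 1.5, size=(800, 4))

# Fit a mixture of multivariate Gaussians on the True adjacencies only.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(true_train)

# Rank all test adjacencies by likelihood under the "true adjacency" model and
# measure precision in the top fifth (the share of true adjacencies retrieved).
X = np.vstack([true_test, artificial])
labels = np.array([1] * len(true_test) + [0] * len(artificial))
order = np.argsort(gmm.score_samples(X))[::-1]
k = len(X) // 5
print("precision@%d: %.3f" % (k, labels[order[:k]].mean()))
```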
Keywords
Modeling, Playlists, Retrieval
Paper topics
Automatic separation, classification of sound and music, Music information retrieval, recognition
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850114
Zenodo URL: https://zenodo.org/record/850114
Abstract
This paper presents a temporary multimedia installation set up at the Civic Aquarium of Milan. Thanks to four web cameras located in front of the tropical fishpond, fish are tracked and their movements are used to control a number of music-related parameters in real time. In order to process multiple video streams, the open-source programming language Processing has been employed. The sonification itself is implemented by a Pure Data patch. The communication among the parts of the system is realized through Open Sound Control (OSC) messages. This paper describes the key concepts, the musical idea, the design phase and the implementation of this multimedia installation, also discussing the major critical aspects.
Keywords
aquarium, motion tracking, music, sonification, sonorization, webcam
Paper topics
Auditory display and data sonification, Automatic music generation/accompaniment systems, Interactive performance systems, Sonic interaction design
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850026
Zenodo URL: https://zenodo.org/record/850026
Abstract
Music composition is an intellectually demanding human activity that engages a wide range of cognitive faculties. Although several domain-general integrated cognitive architectures (ICAs) exist---ACT-R, Soar, Icarus, etc.---the use of integrated models for solving musical problems remains virtually unexplored. In designing MusiCOG, we wanted to bring forward ideas from our previous work, combine these with principles from the fields of music perception and cognition and ICA design, and bring these elements together in an initial attempt at an integrated model. Here we provide an introduction to MusiCOG, outline the operation of its various modules, and share some initial musical results.
Keywords
Architecture, Cognitive, Composition, Music
Paper topics
Automatic music generation/accompaniment systems, Music information retrieval, Perception and cognition of sound and music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850116
Zenodo URL: https://zenodo.org/record/850116
Abstract
We present new tools for the segmentation and analysis of musical scores in the OpenMusic computer-aided composition environment. A modular object-oriented framework enables the creation of segmentations on score objects and the implementation of automatic or semi-automatic analysis processes. The analyses can be performed and displayed thanks to customizable classes and callbacks. Concrete examples are given, in particular with the implementation of a semi-automatic harmonic analysis system and a framework for rhythmic transcription.
Keywords
Computer-Aided Music Analysis, OpenMusic, Segmentation
Paper topics
Computational musicology, Computer environments for sound/music processing, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850118
Zenodo URL: https://zenodo.org/record/850118
Abstract
This study investigates how non-musicians engaged in a solo-accompaniment music improvisation relationship. Seven user teams interacted with two electronic music instruments integrated in two pen tablets. One instrument was a melody instrument and the other a chord instrument. The study was done in order to understand how future shared electronic music instruments can be designed to encourage non-musicians to engage in social action through music improvisation. A combination of quantitative and qualitative analysis was used to find characteristics in co-expression found in a solo-accompaniment relationship. Results of interaction data and video analysis show that 1) teams related to each other through their experience with verbal conversation, 2) users searched for harmonic connections and 3) were able to establish rhythmical grounding. The paper concludes with some design guidelines for future solo-accompaniment shared improvisation interfaces: How realtime analysis of co-expression can be mapped to additional sound feedback that supports, strengthens and evolves co-expression in improvisation.
Keywords
collaborative interfaces, music improvisation, novice, shared electronic music instruments, social learning, user studies
Paper topics
Humanities in sound and music computing, Social interaction in sound and music computing, Sonic interaction design
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850120
Zenodo URL: https://zenodo.org/record/850120
Abstract
With the spread of mobile devices comes the possibility of using (relatively) cheap, wireless hardware embedded with plenty of sensors to perform real-time Digital Signal Processing on live artistic performances. The Android operating system represents a milestone for mobile devices due to its lightweight Java Virtual Machine and an API that makes it easier to develop applications that run on any (supported) device. With an appropriate DSP model implementation, it is possible to use sensor values as input for algorithms that modify streams of audio to generate rich output signals. Because of memory, CPU and battery limitations, it is interesting to study the performance of each device under real-time DSP conditions, and also to provide feedback about resource consumption as a basis for (user or automated) decision making regarding the devices' use. This work presents an object-oriented model for performing DSP on Android devices and focuses on measuring the time taken to perform common DSP tasks such as reading from the input, writing to the output, and carrying out the desired signal manipulation. We obtain statistics regarding one specific combination of device model and operating system version, but our approach can be used on any Android device to provide the user with important information that can aid aesthetic and algorithmic decisions.
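The measurement idea (timing the read, process and write stages of a block-based DSP loop against the real-time budget) can be sketched as below. This is a desktop Python/numpy stand-in for the Android/Java implementation the paper describes; the block size, the FFT-based manipulation and the reporting format are assumptions for illustration.

```python
# Hedged sketch: timing the read / process / write stages of a block DSP loop
# and comparing the total against the real-time budget. Desktop numpy stand-in,
# not the paper's Android implementation.
import time
import numpy as np

SR, BLOCK, N_BLOCKS = 44100, 1024, 500
signal = np.random.randn(N_BLOCKS * BLOCK).astype(np.float32)
out = np.empty_like(signal)

read_t = proc_t = write_t = 0.0
for b in range(N_BLOCKS):
    t0 = time.perf_counter()
    block = signal[b * BLOCK:(b + 1) * BLOCK].copy()    # "read from input"
    t1 = time.perf_counter()
    spec = np.fft.rfft(block)                           # example manipulation:
    spec[len(spec) // 4:] = 0                           # crude frequency-domain low-pass
    processed = np.fft.irfft(spec, n=BLOCK)
    t2 = time.perf_counter()
    out[b * BLOCK:(b + 1) * BLOCK] = processed          # "write to output"
    t3 = time.perf_counter()
    read_t += t1 - t0; proc_t += t2 - t1; write_t += t3 - t2

budget = N_BLOCKS * BLOCK / SR
print(f"read {read_t:.3f}s  process {proc_t:.3f}s  write {write_t:.3f}s "
      f"(real-time budget {budget:.3f}s)")
```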
Keywords
android devices, benchmarking, digital signal processing, real-time I/O
Paper topics
Computer environments for sound/music processing, Digital audio effects, Interactive performance systems, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850122
Zenodo URL: https://zenodo.org/record/850122
Abstract
Tonnetz are space-based musical representations that lay out individual pitches in a regular structure. They are primarily used for analysis with visualization tools or on paper and for performance with button-based tablet or tangible interfaces. This paper first investigates how properties of Tonnetz can be applied in the composition process, including how to represent pitch based on chords or scales and lay them out in a two-dimensional space. We then describe PaperTonnetz, a tool that lets musicians explore and compose music with Tonnetz representations by making gestures on interactive paper. Unlike screen-based interactive Tonnetz systems that treat the notes as playable buttons, PaperTonnetz allows composers to interact with gestures, creating replayable patterns that represent pitch sequences and/or chords. We describe the results of an initial test of the system in a public setting, and how we revised PaperTonnetz to better support three activities: discovering, improvising and assembling musical sequences in a Tonnetz. We conclude with a discussion of directions for future research with respect to creating novel paper-based interactive music representations to support musical composition.
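To make the "regular structure" concrete, the sketch below computes a standard Euler-style Tonnetz grid, with perfect fifths (7 semitones) along one axis and major thirds (4 semitones) along the other, i.e. pc(x, y) = root + 7x + 4y (mod 12). This is the conventional layout; the specific layouts explored in PaperTonnetz may differ.

```python
# Hedged sketch: a conventional Euler-style Tonnetz pitch-class grid
# (fifths on the x axis, major thirds on the y axis).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def tonnetz_grid(width=7, height=5, x_interval=7, y_interval=4, root=0):
    """pc(x, y) = root + x_interval*x + y_interval*y (mod 12)."""
    return [[(root + x * x_interval + y * y_interval) % 12
             for x in range(width)]
            for y in range(height)]

for row in tonnetz_grid():
    print(" ".join(f"{NOTE_NAMES[pc]:>2}" for pc in row))
```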
Keywords
computer aided composition, human computer interaction, interactive paper, pitch layouts, Tonnetz
Paper topics
Interactive performance systems, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850124
Zenodo URL: https://zenodo.org/record/850124
Abstract
This paper compares two multimodal net art projects, AVOL and AV Clash, by the author and André Carrilho (under the name Video Jack). Their objective is to create projects enabling integrated audiovisual expression that are flexible, intuitive, playful to use and engaging to experience. The projects are contextualized with related works. The methodology for the research is presented, with an emphasis on experience-focused Human-Computer Interaction (HCI) perspectives. The comparative evaluation of the projects focuses on the analysis of the answers to an online questionnaire. AVOL and AV Clash have adopted an Interactive Audiovisual Objects (IAVO) approach, which is a major contribution from these projects, consisting of the integration of sound, audio visualization and Graphical User Interfaces (GUI). Strengths and weaknesses detected in the projects are analysed. Generic conclusions are discussed, mainly regarding simplicity and harmony versus complexity and serendipity in audiovisual projects. Finally, paths for future development are presented.
Keywords
audiovisual, evaluation, graphical user interface, interaction design, multimodal, net art, sound visualization
Paper topics
Auditory and multimodal illusions, Interfaces for sound and music, Multimodality in sound and music computing
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850126
Zenodo URL: https://zenodo.org/record/850126
Abstract
Exploiting the persistence properties of signals leads to significant improvements in audio denoising. This contribution derives a novel denoising operator based on neighborhood smoothed, Wiener filter like shrinkage. Relations to the sparse denoising approach via thresholding are drawn. Further, a rationale for adapting the threshold level to a performance criterion is developed. Using a simple but efficient estimator of the noise level, the introduced operators with adaptive thresholds are demonstrated to act as attractive alternatives to the state of the art in audio denoising.
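The general shape of such a neighborhood-smoothed, Wiener-like shrinkage operator can be sketched as follows: smooth the spectrogram power over a small time-frequency neighborhood (exploiting persistence), form a Wiener-style gain against an estimated noise floor, and resynthesize. This is a generic illustration with scipy, not the paper's exact operator or its adaptive threshold selection.

```python
# Hedged sketch of neighborhood-smoothed, Wiener-like spectral shrinkage.
# Generic illustration only; does not reproduce the paper's derivation or
# its performance-criterion-based threshold adaptation.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter

def denoise(x, sr, nperseg=1024, noise_frames=10):
    _, _, X = stft(x, fs=sr, nperseg=nperseg)
    power = np.abs(X) ** 2
    # Noise power per frequency bin, estimated from frames assumed noise-only.
    noise_power = power[:, :noise_frames].mean(axis=1, keepdims=True)
    # Persistence: smooth the power estimate over a time-frequency neighborhood.
    smoothed = uniform_filter(power, size=(3, 5))
    # Wiener-like gain, clipped to [0, 1].
    gain = np.clip(1.0 - noise_power / np.maximum(smoothed, 1e-12), 0.0, 1.0)
    _, y = istft(gain * X, fs=sr, nperseg=nperseg)
    return y[:len(x)]

sr = 16000
t = np.arange(2 * sr) / sr
clean = np.sin(2 * np.pi * 440 * t) * (t > 0.5)      # silence, then a tone
noisy = clean + 0.05 * np.random.randn(len(t))
restored = denoise(noisy, sr)
```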
Keywords
Audio Restoration, Denoising, Digital Audio Effects, Signal Estimation, Sparsity
Paper topics
Digital audio effects, Models for sound analysis and synthesis, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850128
Zenodo URL: https://zenodo.org/record/850128
Abstract
Time-frequency representations play a central role in sound analysis and synthesis. While the most conventional methods are based on phase vocoders with uniform frequency bands, perception and the physical characteristics of sound signals suggest the need for nonuniform bands. In this paper we propose a flexible design for a phase vocoder with arbitrary frequency band divisions. The design is based on recently developed nonuniform frames, where frequency warping, i.e. a remapping of the frequency axis, is employed to design the sliding windows, which differ for each frequency channel. We show that tight frames can be obtained with this method, allowing perfect reconstruction with identical analysis and synthesis time-frequency atoms. The transform and its inverse have computationally efficient implementations.
Keywords
Frequency Warping, Gabor Frames, Phase Vocoder, Time-Frequency Representations, Wavelets
Paper topics
Models for sound analysis and synthesis, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850130
Zenodo URL: https://zenodo.org/record/850130
Abstract
In this paper we introduce a physical model of the Gyil, an African pentatonic idiophone whose wooden bars' sonic characteristics most closely resemble those of the Western marimba. We are most interested in the resonators of the instrument, which consist of graduated gourds suspended beneath each bar, similar to the way tuned tubes are used on Western mallet instruments. The prominent characteristic of the resonator that we are concerned with is the intentionally added buzz that results when the bar is struck, creating a naturally occurring type of distortion. Sympathetic distortion is inherent to African sound design, as these unamplified acoustic instruments must typically be heard above large crowds of people dancing and singing at ceremonies and festivals. The Gyil's distortion is the result of a specific preparation of the gourds, in which holes are drilled into the sides and covered with a membrane traditionally constructed from the silk of spider egg casings stretched across the opening. In analysing the sonic characteristics of the Gyil and developing a model, we find that the physical mechanisms through which the characteristic Gyil sound is produced are highly non-linear, and the development of this model has required the use of synthesis techniques novel to physical modelling. We present several variants of our physical model and conduct comparative listening tests with musicians who are recognised Gyil virtuosos.
Keywords
Gyil, Non-linear model, Physical modelling, Pitched Percussion
Paper topics
Digital audio effects, Models for sound analysis and synthesis, Sound/music signal processing algorithms, Technologies for the preservation, access and modelling of musical heritage
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850132
Zenodo URL: https://zenodo.org/record/850132
Abstract
Capturing pianist movements can be used for various applications such as music performance research, musician medicine, movement-augmented piano instruments, and piano pedagogical feedback systems. This paper contributes an unobtrusive method to capture pianist movements based on depth sensing. The method was realized using the Kinect depth camera and evaluated in comparison with 2D marker tracking.
Keywords
Kinect, motion capture, piano
Paper topics
Interactive performance systems, Multimodality in sound and music computing, Music performance analysis and rendering
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850134
Zenodo URL: https://zenodo.org/record/850134
Abstract
In this paper, a new method for recognizing phonemes in singing is proposed. In comparison with regular speech recognition, phoneme recognition in singing has not yet matured into a standardized method. Standard methods for speech recognition have already been evaluated on vocal recordings, but their performance is lower than on regular speech. In this paper, two alternative classification methods addressing this issue are proposed. One uses Mel-Frequency Cepstral Coefficient features, while the other uses Temporal Patterns. They are combined into a new classifier that performs better than either of the two separate classifiers. The classification is carried out on US English songs. The preliminary result is an average phoneme recall rate of 48.01% over all audio frames within a song.
Keywords
mfcc, pattern, recognition, singing, song, Speech, temporal
Paper topics
Automatic separation, classification and recognition of sound and music, Content processing of music audio signals, Music information retrieval
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850136
Zenodo URL: https://zenodo.org/record/850136
Abstract
It is common to learn to play an orchestral musical instrument through one-to-one lessons with an experienced tutor. For musicians who choose to study performance at undergraduate level and beyond, their tutor is an important part of their professional musical development. For many musicians, travel is part of their professional lives due to touring, auditioning and teaching, often overseas. This makes temporary separation of students from their tutor inevitable. A solution used by some conservatoires is teaching via video conferencing; however, the challenges of using video conferencing for interaction and collaborative work are well documented. The Remote Music Tuition prototype was designed to enhance music tuition via video conference by providing multiple views of the student. This paper describes the system, documents observations from initial tests of the prototype and makes recommendations for future developments and further testing.
Keywords
multiple views, music education, music tuition, video conference
Paper topics
Humanities in sound and music computing, Social interaction in sound and music computing
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850138
Zenodo URL: https://zenodo.org/record/850138
Abstract
Finger force, acceleration and position are fundamental in playing musical instruments. Measuring these parameters is a technical challenge, and precise position and acceleration measurement of single fingers is particularly demanding. We present a sensor setup for multimodal measurements of force, position and acceleration in piano playing. We capture outputs from the upper extremity that contribute to the total force output seen at the fingers. To precisely characterize finger positions and acceleration we use wearable sensors. A 6-axis (3 force and 3 torque axes) force sensor precisely captures contributions from the hand, wrist and arm. A finger-mounted acceleration sensor and a MIDI grand piano complete the measuring setup. The acceleration and position sensor is fixed to the dorsal aspect of the last finger phalanx. The 6-axis sensor is adjustable to fit individual hand positions and constitutes a basic setup that can easily be expanded to account for diverse measurement needs. An existing software tool was adapted to visualize the sensor data and to synchronize it with the MIDI output. With this basic setup we seek to estimate the isolated force output of finger effectors and to show the coherence of finger position, force and attack. To prove the setup, a few pilot measurements were carried out.
Keywords
finger, force, Piano, position, Sensor
Paper topics
Music performance analysis and rendering
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850142
Zenodo URL: https://zenodo.org/record/850142
Abstract
This paper deals with the task of extracting vocal melodies from accompanied singing recordings. The challenging aspect of this task consists in the tendency for instrumental sounds to interfere with the extraction of the desired vocal melodies, especially when the singing voice is not necessarily predominant among other sound sources. Existing methods in the literature are either rule-based or statistical. It is difficult for rule-based methods to adequately take advantage of human voice characteristics, whereas statistical approaches typically require large-scale data collection and labeling efforts. In this work, the extraction is based on a model of the input signals that integrates acoustic-phonetic knowledge and real-world data under a probabilistic framework. The resulting vocal pitch estimator is simple, determined by a small set of parameters and a small set of data. Tested on a publicly available dataset, the proposed method achieves a transcription accuracy of 76%.
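For context on what a frame-wise vocal pitch estimator does, the sketch below implements a simple autocorrelation-based f0 tracker with a crude voicing decision. It is a generic baseline for illustration only and is not the probabilistic, acoustic-phonetics-informed model proposed in the paper.

```python
# Hedged baseline sketch: frame-wise autocorrelation pitch estimation with a
# simple voicing threshold. Generic illustration, not the paper's method.
import numpy as np

def autocorr_pitch(frame, sr, fmin=80.0, fmax=800.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)
    lag_max = min(int(sr / fmin), len(ac) - 1)
    if lag_max <= lag_min or ac[0] <= 0:
        return 0.0
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    # Treat weakly periodic frames as unvoiced (return 0 Hz).
    return sr / lag if ac[lag] / ac[0] > 0.3 else 0.0

sr = 16000
t = np.arange(0, 0.04, 1 / sr)
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
print(f"estimated f0: {autocorr_pitch(frame, sr):.1f} Hz")  # close to 220 Hz
```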
Keywords
Acoustic phonetics, Formant synthesis, Melody extraction, Pitch estimation, Simulation, Singing voice, Voice examples
Paper topics
Automatic separation, classification and recognition of sound and music, Content processing of music audio signals, Models for sound analysis and synthesis, Sound/music signal processing algorithms
Easychair keyphrases
singing voice [28], vocal spectrum envelope [19], spectrum envelope [14], vocal melody extraction [14], vocal pitch [14], accompanied singing [13], melody extraction [10], formant frequency [9], pitch value [9], time point [8], accompanied singing signal [7], raw pitch accuracy [7], non professional [6], overall transcription accuracy [6], probability distribution [6], quarter tone [6], vocal melody [6], vocal pitch sequence [6], voicing detection [6], instrumental sound [5], time sample [5], voiced sound [5], accompanied singing data [4], constant q transform [4], publicly available dataset [4], random n vector [4], raw chroma accuracy [4], short time spectra [4], signal model [4], vocal pitch estimation [4]
Paper type
Full paper
DOI: 10.5281/zenodo.850144
Zenodo URL: https://zenodo.org/record/850144
Abstract
This paper presents a prototype allowing the control of a concatenative synthesis algorithm using a 2D sketching interface. The design of the system is underpinned by a preliminary discussion in which isomorphisms between auditory and visual phenomena are identified. We argue that certain qualities of sound and graphics are inherently cross-modal. Following this reasoning, a mapping strategy between low-level auditory and visual features was developed. The mapping enables the selection of audio units based on five feature data streams derived from the statistical analysis of the sketch.
Keywords
Audio Visual association, Concatenative Synthesis, Interaction Design, Isomorphism, Metaphor, Reduced modes, Sketching
Paper topics
Auditory and multimodal illusions, Auditory display and data sonification, Interfaces for sound and music, Multimodality in sound and music computing, Perception and cognition of sound and music, Sonic interaction design
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850146
Zenodo URL: https://zenodo.org/record/850146
Abstract
SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering and reproduction of spatial audio. A general overview of the specification is provided, and two use cases are discussed, exemplifying SpatDIF’s potential for file-based pieces as well as real-time streaming of spatial audio scenes.
Keywords
object-based audio, SpatDIF, spatial audio
Paper topics
3D sound/music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850148
Zenodo URL: https://zenodo.org/record/850148
Abstract
In this paper an integrated system for the creation of a combined audio and tactile display is described. In order to create the illusion of physically being among virtual sounding objects, we used vibration motors attached to a belt to give tactile stimuli, and sensed the user's position and orientation with a 3D tracker. Collisions with free-to-move virtual objects are conveyed through semi-realistic vibration at the correct collision point with respect to the position and orientation of the user. The tactile vibration is encoded on 8 vibrotactile motors by calculating motor gains in a manner similar to a panning law, refined to convey the perceptual illusion of the object's proximity and of collision with it. We combine the tactile stimulus with a spatialization system augmented with distance cues. As a case scenario, simpleLife, an immersive audio-tactile installation for one participant inspired by the concept of performance ecosystems and ecological approaches to musical interaction, is presented.
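The gain calculation described above can be illustrated with a generic stand-in: a constant-power (cosine) pan between the two belt motors nearest the collision direction, scaled by a simple proximity term. The motor count matches the abstract, but the specific panning curve and distance scaling here are assumptions, not the authors' exact law.

```python
# Hedged sketch: distribute a collision cue over 8 belt-mounted motors with a
# cosine panning law between the two nearest motors, scaled by proximity.
import math

N_MOTORS = 8

def motor_gains(azimuth_deg, distance, max_distance=5.0):
    """azimuth_deg: collision direction relative to the user (0..360).
    distance: distance to the virtual object, used as a proximity cue."""
    spacing = 360.0 / N_MOTORS
    pos = (azimuth_deg % 360.0) / spacing        # fractional motor index
    lower = int(pos) % N_MOTORS
    upper = (lower + 1) % N_MOTORS
    frac = pos - int(pos)
    gains = [0.0] * N_MOTORS
    # Constant-power pan between the two adjacent motors.
    gains[lower] = math.cos(frac * math.pi / 2)
    gains[upper] = math.sin(frac * math.pi / 2)
    # Closer objects vibrate more strongly.
    proximity = max(0.0, 1.0 - distance / max_distance)
    return [g * proximity for g in gains]

print(["%.2f" % g for g in motor_gains(azimuth_deg=100.0, distance=1.0)])
```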
Keywords
audio-tactile integration, sonic interaction, spatialization, virtual environment
Paper topics
3D sound/music, Auditory and multimodal illusions, Multimodality in sound and music computing, Sonic interaction design, Sound and music for VR and games
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850150
Zenodo URL: https://zenodo.org/record/850150
Abstract
We propose a statistical method for modeling and synthesizing sounds with both sinusoidal and attack transient components. In addition, the sinusoidal component can have pitch-changing characteristics. The method applies multivariate decomposition techniques (such as independent component analysis and principal component analysis) to learn the intrinsic structures that characterize the sound samples. Afterwards these structures are used to synthesize new sounds which can be drawn from the distribution of the real original sound samples. Here we apply the method to impact sounds and show that the method is able to generate new samples that have the characteristic attack transient of impact sounds.
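The decomposition-and-resynthesis idea can be sketched with the PCA half alone: learn a low-dimensional basis over short sound frames, fit a simple distribution in the latent space, and draw new frames from it. The toy corpus, component count and Gaussian latent model below are assumptions for illustration; the paper's full model (including transients and pitch-changing sinusoids) is richer.

```python
# Hedged sketch: PCA over short sound frames plus a Gaussian draw in latent
# space to generate new frames. Simplified stand-in for the paper's approach.
import numpy as np
from sklearn.decomposition import PCA

sr, frame_len = 16000, 512
rng = np.random.default_rng(0)
t = np.arange(frame_len) / sr

# Toy "corpus": decaying sinusoidal bursts with random frequency and decay.
frames = np.stack([
    np.sin(2 * np.pi * rng.uniform(200, 800) * t) * np.exp(-t * rng.uniform(10, 60))
    for _ in range(200)
])

pca = PCA(n_components=12)
latent = pca.fit_transform(frames)

# Fit a diagonal Gaussian to the latent coefficients and sample new frames.
mean, std = latent.mean(axis=0), latent.std(axis=0)
new_latent = rng.normal(mean, std, size=(5, latent.shape[1]))
new_frames = pca.inverse_transform(new_latent)
print(new_frames.shape)  # (5, 512): five newly synthesized frames
```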
Keywords
independent component analysis, natural sounds, sound synthesis, transients
Paper topics
Models for sound analysis and synthesis
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850154
Zenodo URL: https://zenodo.org/record/850154
Abstract
This paper introduces StringScore, a productive text-based music representation for composition that provides a visual arrangement of motivic elements in a compact and meaningful layout of characters. The time dimension is represented horizontally, taking the text character as the time unit and thus treating character strings as time-lines along which musical elements are sequenced. While being compact, StringScore provides a high degree of independent control over the fundamentals of traditional composition, such as musical form, harmony, melodic contour, texture and counterpoint. The proposed representation is illustrated with musical examples of applied composition. As an additional validation, StringScore has been successfully applied in the analysis and re-composition of the beginning of Beethoven's Fifth Symphony. Finally, the paper presents "StringScore in the Cloud", a Web-based implementation that probes the representation in the environment of the Computer Music Cloud.
Keywords
cloud computing, composition model, music composition, music representation, web composition
Paper topics
Automatic music generation/accompaniment systems, Computational musicology, Computer environments for sound/music processing
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850156
Zenodo URL: https://zenodo.org/record/850156
Abstract
In this paper we present a new general approach to the use of multi-touch screens as musical controllers. In our approach the surface acts as a large hierarchically structured state-space map through which a musician can navigate a path. We discuss our motivations for this approach, which include the possibility of representing large amounts of musical data such as an entire live set in a common visually mnemonic space rather like a map, and the potential for a rich dynamic and non-symbolic approach to live algorithm generation. We describe our initial implementation of the system and present some initial examples of its use in musical contexts.
Keywords
Android, hierarchical structure, iOS, multi-touch tablet, tablet controller
Paper topics
Interactive performance systems, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850158
Zenodo URL: https://zenodo.org/record/850158
Abstract
This text discusses the notions of physical presence, perception and 'gestural' actions as an important element of performance practice in electronic music. After discussing the meaning of the term 'gesture' in music and dance, a brief overview of current trends and methods in research is presented. The skills associated with the performance of electronic instruments are compared to those acquired with traditional instruments, in other physical performing arts such as dance, and in technologically mediated art forms that extend the concept of the stage. Challenges and approaches for composing and performing electronic music are addressed, and finally a tentative statement is made about embodiment as a quality and category to be applied to, and perceived in, electronic music performance.
Keywords
Electronic Music, Embodiment, Enaction, Instruments, Perception, Performance
Paper topics
Interfaces for sound and music, Perception and cognition of sound and music, Sound/music and the neurosciences
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850160
Zenodo URL: https://zenodo.org/record/850160
Abstract
Spectral flux is usually measured with the FFT, but here a constant-Q IIR filterbank implementation is proposed. This leads to a relatively efficient sliding feature extractor with the benefit of keeping the time resolution of the output as high as it is in the input signal. Several applications are considered, such as estimation of sensory dissonance, uses in sound synthesis, adaptive effects processing and visualisation in recurrence plots. A novel feature called second order flux is also introduced.
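The filterbank-based flux idea can be illustrated with a generic stand-in: an octave-style bank of IIR bandpass filters, per-band envelopes, and a per-sample sum of half-wave-rectified envelope increments. The Butterworth design, band centers and 20 Hz envelope smoother below are assumptions for illustration, not the constant-Q design used in the paper.

```python
# Hedged sketch: sliding spectral flux from an IIR bandpass filterbank.
# Generic illustration of the idea, not the paper's filterbank design.
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def band_flux(x, sr, centers=(125, 250, 500, 1000, 2000, 4000), q=1.5):
    envelopes = []
    for fc in centers:
        low, high = fc / 2 ** (1 / (2 * q)), fc * 2 ** (1 / (2 * q))
        sos = butter(2, [low, high], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(sos, x)
        # Crude envelope: rectify, then one-pole low-pass (~20 Hz cutoff).
        alpha = np.exp(-2 * np.pi * 20 / sr)
        env = lfilter([1 - alpha], [1, -alpha], np.abs(band))
        envelopes.append(env)
    env = np.stack(envelopes)
    # Flux per sample: summed half-wave-rectified envelope increments.
    diff = np.diff(env, axis=1, prepend=env[:, :1])
    return np.maximum(diff, 0.0).sum(axis=0)

sr = 16000
x = np.concatenate([np.zeros(2000), 0.5 * np.random.randn(2000), np.zeros(2000)])
flux = band_flux(x, sr)
print(int(np.argmax(flux)))  # peaks near the noise-burst onset (~sample 2000)
```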
Keywords
adaptive DAFX, filterbank, sensory dissonance, sliding feature extractor, Spectral Flux
Paper topics
Digital audio effects, Methodological issues in sound and music computing, Models for sound analysis and synthesis, Perception and cognition of sound and music, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850162
Zenodo URL: https://zenodo.org/record/850162
Abstract
The FireFader is a simple haptic force-feedback device that is optimized for introducing musicians to haptics. It has a single degree of freedom and is based upon a mass-produced linear potentiometer fader coupled to a DC motor, also known as a "motorized fader." Lights are connected in parallel with the motor to help visually communicate the strength of the force. Compatible with OS X, Linux, and Windows, the FireFader consists of only open-source hardware and software elements. Consequently, it is relatively easy for users to repurpose it into new projects involving varying kinds and numbers of motors and sensors. An open-source device driver for the FireFader allows it to be linked to a laptop via USB, so that the computer can perform the feedback control calculations. For example, the laptop can simulate the acoustics of a virtual musical instrument to calculate the motor force as a function of the fader position. The serial interface over USB introduces delay into the control signal, but it facilitates easy programming and less expensive control computation using floating point. Some new devices derived from the FireFader design are presented.
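A laptop-side control loop of the kind described above (force computed as a function of fader position) could look like the sketch below, here simulating a simple virtual spring. This is a hypothetical illustration using pyserial: the serial port name and the line-based message format are invented for the example and are not the FireFader's actual protocol or driver API.

```python
# Hedged sketch: laptop-side haptic loop reading a fader position over a
# serial link and replying with a virtual-spring force. Port name and message
# format are hypothetical, not the FireFader's actual protocol.
import serial  # pyserial

K_SPRING = 2.0          # spring constant of the virtual instrument
REST_POSITION = 0.5     # normalized rest position of the fader

with serial.Serial("/dev/ttyUSB0", 115200, timeout=0.01) as port:
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            position = float(line)          # assumed: device sends "0.0".."1.0"
        except ValueError:
            continue
        force = -K_SPRING * (position - REST_POSITION)   # Hooke's law
        port.write(f"{force:.4f}\n".encode())            # assumed reply format
```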
Keywords
Arduino, force feedback, haptic, interfaces, open-source hardware, open-source software, robotics
Paper topics
Interfaces for sound and music, Multimodality in sound and music computing, Music and robotics, Sonic interaction design
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850164
Zenodo URL: https://zenodo.org/record/850164
Abstract
This paper reports on aspects of the Fluxations paradigm for interactive music generation and an iPhone app implementation of it. The paradigm combines expressive interactivity with stochastic, algorithmically computer-generated sound. The emphasis is on pitch-oriented (harmonic) continuity and flux, as steered through sliders and sensors. The paradigm enables the user-performer to maximize exotic but audible musical variety by spontaneously manipulating its parameters.
Keywords
accelerometer, emergence, form, harmony, human-computer interaction, improvisation, iPhone, minimalism, pitch-class sets, texture, transposition
Paper topics
Automatic music generation/accompaniment systems, Computational musicology, Humanities in sound and music computing, Interactive performance systems, Interfaces for sound and music, Methodological issues in sound and music computing, Perception and cognition of sound and music, Sonic interaction design
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850166
Zenodo URL: https://zenodo.org/record/850166
Abstract
The ICST DSP library is a compact collection of C++ routines focused on rapid development of audio processing and analysis applications. Unlike other similar libraries, it offers a set of technical computing tools as well as speed-optimized, industrial-grade DSP algorithms, which allow one to prototype, test and develop real-time applications without switching development environments. The package has no dependencies on third-party libraries, supports multiple platforms and is released under the FreeBSD license.
Keywords
analysis, audio, C++ library, computational efficiency, DSP, open-source, synthesis, technical computing
Paper topics
Computer environments for sound/music processing, Content processing of music audio signals, Digital audio effects, Models for sound analysis and synthesis, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850168
Zenodo URL: https://zenodo.org/record/850168
Abstract
This paper presents The Quiet Walk, an interactive mobile artwork for sonic explorations of urban space. The goal of TQW is to find the "quietest place". An interface on the mobile device directs the user to avoid noisy areas of the city, giving directions to find quiet zones. Data collected by the system generates a geo-acoustic map of the city that facilitates the personal recollection of sonic memories. The system comprises three components: a smartphone running a custom application based on libpd and openFrameworks, a web server collecting the GPS and acoustic data, and a computer in an exhibition space displaying a visualization of the sound map. This open-ended platform opens up possibilities for mobile digital signal processing, not only for sound-art-related artworks but also as a platform for data-soundscape compositions and mobile, digital explorations in acoustic ecology studies.
Keywords
acoustic ecology, libpd, mobile dsp, sonic interactions, sound maps, soundwalks
Paper topics
Auditory display and data sonification, Interactive performance systems, Interfaces for sound and music, Perception and cognition of sound and music, Social interaction in sound and music computing, Sonic interaction design, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850170
Zenodo URL: https://zenodo.org/record/850170
Abstract
This work presents an extension to a measurement technique previously used to estimate the reflection and transmission functions of musical instrument bells, to be used within the context of a parametric waveguide model. In the original technique, several measurements are taken of a system, a 2-meter-long cylindrical tube with a speaker and co-located microphone at one end, having incrementally varying termination/boundary conditions. Each measured impulse response yields a sequence of multiple evenly spaced arrivals, from which estimates of waveguide element transfer functions, including the bell reflection and transmission, may be formed. The use of the technique for measuring the saxophone presents difficulties due to 1) the inability to separate the bore from the bell for an isolated measurement, 2) the length of the saxophone producing impulse response arrivals that overlap in time (and are not easily windowed), and 3) the presence of a junction when appending the saxophone to the measurement tube and the spectral "artifact" generated as a result. In this work we present a different post-processing technique to overcome these difficulties while keeping the hardware the same. The result is a measurement of the saxophone's round-trip reflection function, which is used to construct its transfer function---the inverse transform of which yields the instrument's impulse response.
Keywords
acoustic measurement, saxophone synthesis, waveguide synthesis, wind instruments modeling
Paper topics
Models for sound analysis and synthesis
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850140
Zenodo URL: https://zenodo.org/record/850140
Abstract
We introduce five regression models for the modeling of expressed emotion in music using data obtained in a two alternative forced choice listening experiment. The predictive performance of the proposed models is compared using learning curves, showing that all models converge to produce a similar classification error. The predictive ranking of the models is compared using Kendall's tau rank correlation coefficient which shows a difference despite similar classification error. The variation in predictions across subjects and the difference in ranking is investigated visually in the arousal-valence space and quantified using Kendall's tau.
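Comparing predictive rankings with Kendall's tau, as described above, can be done directly with scipy; the sketch below compares two invented sets of model predictions for the same excerpts. The example values are illustrative only and are not taken from the paper.

```python
# Hedged sketch: Kendall's tau rank correlation between the predicted
# rankings of two models. The example scores are invented for illustration.
from scipy.stats import kendalltau

# Predicted arousal scores for the same set of excerpts from two models.
model_a = [0.9, 0.1, 0.4, 0.7, 0.3]
model_b = [0.8, 0.2, 0.5, 0.6, 0.1]

tau, p_value = kendalltau(model_a, model_b)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```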
Keywords
Expressed emotions, Music Information Retrieval, Pairwise comparisons
Paper topics
Automatic separation, classification and recognition of sound and music, Music information retrieval, Perception and cognition of sound and music
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850172
Zenodo URL: https://zenodo.org/record/850172
Abstract
This study presents a test of sonification for communicating the distance between two stations on a train journey. We wanted to investigate whether it is possible to provide the traveller with information about the distance left to the next station by using non-speech sounds. The idea is to use a sonification that is independent of culture and language and that can be understood by international travellers.
Keywords
High-speed railroad car, Sonification, Sound design
Paper topics
Auditory display and data sonification, Sonic interaction design
Easychair keyphrases
sound representation [7], next station [6], non speech sound [6], train journey [6], isht interior sound design [4], listening duration [4], mean duration [4], mobile journey planner application [4], post experiment question [4], sonification helped participant [4], urban area [4]
Paper type
Full paper
DOI: 10.5281/zenodo.850174
Zenodo URL: https://zenodo.org/record/850174
Abstract
In this paper we present some technical aspects of the interactive masks of the opera Les Bacchantes by Georgia Spiropoulos (Ircam, 2010) for a single performer, tape and live electronics. Our purpose is to explore the mutability of a solo voice through different "virtual masks" in Max/MSP, as proposed by the composer in a rereading of Euripides' Bacchae.
Keywords
real-time vocal processing, tragic vocality, virtual score
Paper topics
Humanities in sound and music computing, Interactive performance systems, Music performance analysis and rendering
Easychair keyphrases
not available
Paper type
Full paper
DOI: 10.5281/zenodo.850176
Zenodo URL: https://zenodo.org/record/850176
Abstract
Digital waveguides have been used in signal processing for modelling room acoustics. The same technique can be used for model-based sonification, where the given data serves to construct a three-dimensional sound propagation model. The resulting sounds are intuitive to understand as they simulate real-world acoustics. As an example, we introduce a digital waveguide mesh based on complex data from computational physics. This approach allows this three-dimensional data to be explored sonically, unveiling spatial structures.
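As a much-simplified illustration of the underlying principle, the sketch below implements a 1D digital waveguide: two counter-propagating delay lines with lossy, inverting reflections at the ends. This is only the basic building block; the paper uses a full 3D mesh constructed from simulation data.

```python
# Hedged sketch: a minimal 1D digital waveguide (two delay lines with
# inverting, slightly lossy reflections). Simplified illustration only;
# not the 3D waveguide mesh described in the paper.
import numpy as np

def waveguide_1d(length=100, n_samples=4000, pickup=30, loss=0.995):
    right = np.zeros(length)      # right-travelling wave
    left = np.zeros(length)       # left-travelling wave
    right[length // 2] = 1.0      # impulse excitation in the middle
    out = np.empty(n_samples)
    for n in range(n_samples):
        out[n] = right[pickup] + left[pickup]
        r_end, l_end = right[-1], left[0]   # samples about to hit the boundaries
        right[1:] = right[:-1]              # propagate right-going wave
        left[:-1] = left[1:]                # propagate left-going wave
        right[0] = -loss * l_end            # reflection at the left end
        left[-1] = -loss * r_end            # reflection at the right end
    return out

# The output is a slowly decaying tone with a period of 2*length samples.
y = waveguide_1d()
print(y[:10])
```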
Keywords
Digital signal processing, Digital waveguide, Sonification
Paper topics
Auditory display and data sonification, Sound/music signal processing algorithms
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850178
Zenodo URL: https://zenodo.org/record/850178
Abstract
Approaches to network music performance are often focused on creating systems with minimal latency and maximal synchronicity. In this article we present Yig, the Father of Serpents, a new program for performing network music that is designed with these principles in mind but also offers an argument for a different approach. In Yig, users may have identical states yet the audio rendering may differ. In this paper, an introduction to the interface is followed by a brief description of the technical development of the software. Next, the instrument is classified and analyzed using existing frameworks, and some of the philosophy behind divergence in network music is explained. The article concludes with an enumeration of potential software improvements and suggestions for future work using divergence.
Keywords
Interfaces, Network Music, SuperCollider
Paper topics
Computer environments for sound/music processing, Interactive performance systems, Interfaces for sound and music
Easychair keyphrases
not available
Paper type
Position paper / Poster
DOI: 10.5281/zenodo.850180
Zenodo URL: https://zenodo.org/record/850180