DAFx-14 Program
17th International Conference on Digital Audio Effects
At DAFx-14 we are excited to offer the following tutorials:
Multipitch Analysis of Music Signals (Anssi Klapuri, Ovelin & Tampere University of Technology)

Pitch analysis is an essential part of making sense of music signals. Whereas skilled human musicians perform the task seemingly easily, computational extraction of the note pitches and expressive nuances from polyphonic music signals has turned out to be hard. This tutorial starts from the fundamentals of pitch estimation, explaining the basic challenges of the task (robustness to different sound sources, robustness to polyphony and additive noise, octave ambiguity, inharmonicity, missing data, time-frequency resolution) and the processing principles and sources of information that can be used to tackle those challenges. Among the processing principles, we will discuss why autocorrelation-type estimators (as used in speech processing) do not work for polyphonic data and how they can be amended; how phase information can be utilized; how timbral information must be either explicitly modeled or normalized away; and more. Example pictures and sounds will be presented in order to illustrate what kind of data we are dealing with and to develop intuition. Towards the end of the talk, I will describe some state-of-the-art systems by different researchers and, from my own experience, mention some of the practical challenges that I have encountered when developing real-time multipitch estimation on mobile devices in the last few years.
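To make the autocorrelation discussion concrete, here is a minimal Python sketch of a monophonic autocorrelation pitch estimator of the kind used in speech processing. It is illustrative only and not code from the tutorial; the sampling rate and search range are arbitrary example values. On polyphonic material, the lag peaks of concurrent notes overlap and interfere, which is exactly why such estimators need the amendments discussed in the talk.

```python
# Minimal sketch of a monophonic autocorrelation pitch estimator
# (illustrative only; not code from the tutorial).
import numpy as np

def autocorr_f0(frame, fs, fmin=60.0, fmax=1000.0):
    """Estimate a single F0 (Hz) from one frame via autocorrelation."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac /= ac[0] + 1e-12                       # normalize; lag 0 is full energy
    lag_min = int(fs / fmax)                  # shortest period searched
    lag_max = min(int(fs / fmin), len(ac) - 1)
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / best_lag                      # period -> frequency

# A single 440 Hz tone is recovered; mixing in a second note makes the
# picked lag ambiguous (the octave ambiguity mentioned above).
fs = 16000
t = np.arange(1024) / fs
print(autocorr_f0(np.sin(2 * np.pi * 440 * t), fs))   # ~440 Hz
```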
Anssi Klapuri received his Ph.D. degree from Tampere University of Technology (TUT), Tampere, Finland. He was a visiting post-doctoral researcher at Ecole Centrale de Lille, France, and Cambridge University, UK, in 2005 and 2006, respectively. He worked until 2009 as a professor (pro tem) at TUT. In 2009 he joined Queen Mary, University of London as a lecturer in Sound and Music Processing. In September 2011 he joined Ovelin Ltd to develop game-based musical instrument learning applications, while continuing part-time at TUT. His research interests include audio signal processing, auditory modeling, and machine learning.
Audio Structure Analysis of Music (Meinard Müller, AudioLabs, Universität Erlangen-Nürnberg)

One of the attributes distinguishing music from other sound sources is the hierarchical structure in which music is organized. Individual sound events corresponding to individual notes form larger structures such as motives, phrases, and chords, and these elements in turn form larger constructs that determine the overall layout of the composition. One important goal of audio structure analysis is to divide a given music recording into temporal segments that correspond to musical parts and to group these segments into musically meaningful categories. One challenge is that there are many different criteria for segmenting and structuring music. This results in conceptually different approaches, which may be loosely categorized into repetition-based, novelty-based, and homogeneity-based approaches. Furthermore, one has to account for different musical dimensions such as melody, harmony, rhythm, and timbre. In this tutorial, I will give an overview of current approaches for the computational analysis of the structure of music recordings, which has been a very active research topic within the area of music information retrieval. As one example, I present a novel audio thumbnailing procedure to determine the audio segment that best represents a given music recording. Furthermore, I show how the path and block structures of self-similarity matrices, the most important tool used in automated structure analysis, can be enhanced and transformed. Finally, I report on a recent novelty-based segmentation approach that combines homogeneity and repetition principles in a single representation referred to as the structure feature.
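As a rough illustration of the self-similarity matrices mentioned above, the following Python sketch computes an SSM from chroma features and a simple checkerboard-kernel novelty curve in the spirit of novelty-based segmentation. This is a hedged sketch, not the tutorial's code; the chroma input is assumed to be given, and the kernel size is an arbitrary example value.

```python
# Illustrative sketch: self-similarity matrix (SSM) from chroma features
# and a checkerboard-kernel novelty curve (Foote-style boundary detection).
import numpy as np

def ssm(chroma):
    """chroma: (12, N) matrix of chroma frames -> (N, N) cosine-similarity SSM."""
    c = chroma / (np.linalg.norm(chroma, axis=0, keepdims=True) + 1e-12)
    return c.T @ c

def novelty(S, L=16):
    """Correlate a 2L x 2L checkerboard kernel along the main diagonal."""
    sign = np.sign(np.arange(-L, L) + 0.5)
    kernel = np.outer(sign, sign)            # +1 within-segment, -1 across
    N = S.shape[0]
    nov = np.zeros(N)
    for n in range(L, N - L):
        nov[n] = np.sum(kernel * S[n - L:n + L, n - L:n + L])
    return nov                               # peaks suggest segment boundaries

chroma = np.abs(np.random.rand(12, 300))     # stand-in for real chroma features
S = ssm(chroma)
nov = novelty(S)
print(S.shape, nov.argmax())                 # boundary candidate frame index
```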
Meinard Müller studied mathematics (Diplom) and computer science (Ph.D.) at the University of Bonn, Germany. In 2002/2003, he conducted postdoctoral research in combinatorics at the Mathematical Department of Keio University, Japan. In 2007, he finished his Habilitation at Bonn University in the field of multimedia retrieval, writing a book titled "Information Retrieval for Music and Motion," which appeared as a Springer monograph. From 2007 to 2012, he was a member of Saarland University and the Max-Planck-Institut für Informatik, leading the research group "Multimedia Information Retrieval and Music Processing" within the Cluster of Excellence on Multimodal Computing and Interaction. Since September 2012, Meinard Müller has held a professorship for Semantic Audio Processing at the International Audio Laboratories Erlangen, a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Fraunhofer-Institut für Integrierte Schaltungen IIS. His recent research interests include content-based multimedia retrieval, audio signal processing, music processing, music information retrieval, and motion processing.
Perceptual Audio Coding (Jürgen Herre, Bernd Edler, Sascha Disch, AudioLabs Erlangen)

Perceptual audio coding has been a key ingredient of the multimedia revolution, enabling the availability of high-quality audio over channels with limited capacity, such as the Internet, broadcasting or wireless services. Today, mp3 and other perceptual audio coding technologies are ubiquitous in devices such as CD/DVD players, computers, portable music players and cellular phones. This tutorial covers the basics of perceptual audio coding, starting with what it means to operate according to psychoacoustic principles rather than Mean Square Error (MSE). The most relevant psychoacoustic effects will be briefly reviewed. Among the modules of a perceptual audio coder, the filterbank and the strategies for quantization and coding are examined in some detail. Furthermore, we discuss tools for joint stereo coding of two channels. Alongside, the most common coding artefacts that originate from violating perceptual transparency criteria will be demonstrated and explained. Beyond these concepts, modern perceptual audio coders feature tools that can significantly boost their performance at low bitrates, for example audio bandwidth extension, parametric stereo, or unified speech and audio coding. Some sound examples will be given to illustrate these advanced tools. Finally, an overview of today's state of the art in compression efficiency is given, as well as an outlook on some currently ongoing coding developments.
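To illustrate what "psychoacoustic principles rather than MSE" can mean in practice, here is a deliberately simplified Python sketch, not actual mp3/AAC code: spectral coefficients are quantized with a step size tied to a per-band masking threshold (assumed to be supplied by a psychoacoustic model), so that quantization noise is shaped to follow the threshold instead of being minimized uniformly.

```python
# Simplified sketch of psychoacoustically controlled quantization
# (not real mp3/AAC code; the masking threshold is assumed given).
import numpy as np

def quantize_band(coeffs, mask_threshold):
    """Quantize one band with a step tied to its masking threshold.
    A uniform quantizer with step d has noise power ~ d**2/12, so scaling
    the step with the threshold shapes the noise to follow it."""
    q = np.round(coeffs / mask_threshold)
    return q.astype(int)

def dequantize_band(q, mask_threshold):
    return q * mask_threshold

# A loud band tolerates a coarse step (few bits); a coefficient near the
# masking threshold quantizes to zero and costs essentially nothing.
band = np.array([1.20, -0.70, 0.33, 0.05])
thr = 0.1                                  # example threshold for this band
q = quantize_band(band, thr)
print(q, dequantize_band(q, thr))
```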
Jürgen Herre joined the Fraunhofer Institute for Integrated Circuits in 1989. He contributed to many perceptual coding algorithms for high-quality audio, including MPEG-1 Layer 3 ("MP3") and - during a postdoctoral term at Bell Laboratories - MPEG-2 Advanced Audio Coding (AAC). Having worked on more advanced multimedia technologies including MPEG-4, MPEG-7 and MPEG-D, Dr. Herre is currently the Chief Executive Scientist for the Audio/Multimedia activities at Fraunhofer IIS, Erlangen, Germany. Since September 2010, he has been a professor at the Friedrich-Alexander University of Erlangen-Nürnberg and the International Audio Laboratories Erlangen.
Bernd Edler obtained his Dipl.-Ing. degree from the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) in 1985 and his Ph.D. from the University of Hannover in 1994, where he continued his research in the field of audio coding with a focus on transforms, filter banks, and perception. Since October 2010 he has been a professor at the International Audio Laboratories Erlangen, which is a joint institution of FAU and Fraunhofer IIS.
Sascha Disch received his Dipl.-Ing. degree in electrical engineering from the Technical University Hamburg-Harburg (TUHH) in 1999 and joined the Fraunhofer Institute for Integrated Circuits (IIS) the same year. Since then he has worked in the research and development of perceptual audio coding and audio processing. From 2007 to 2010 he was a researcher at the Laboratory of Information Technology, Leibniz University Hannover (LUH), receiving his Doctoral Degree (Dr.-Ing.) in 2011. He contributed to the standardization of MPEG Surround and MPEG Unified Speech and Audio Coding (USAC). His research interests as a Senior Scientist at Fraunhofer include waveform and parametric audio coding, audio bandwidth extension, and digital audio effects.
Sound Field Synthesis with the SoundScape Renderer (Sascha Spors, Matthias Geier, Maximilian Schäfer)

Sound field synthesis with massive-multichannel loudspeaker arrays has been an active research field for the last few decades. Several rendering methods for multiple loudspeakers have been developed, including Wave Field Synthesis, Ambisonics, and Vector Base Amplitude Panning. Different loudspeaker installations exist at many institutions throughout Europe. While their operating software is often home-made and specific to the particular loudspeaker set-up, there is also a versatile open-source software tool for real-time spatial audio reproduction, the SoundScape Renderer (SSR). It can be adapted to various loudspeaker configurations and provides modules for the most common rendering methods; spatial sound over headphones via binaural synthesis is also supported. The tutorial gives an introduction to the most common sound field rendering methods, presents the SoundScape Renderer and some of its rendering methods, and offers a limited number of participants hands-on experience with the 128-loudspeaker array at the Chair of Multimedia Communications and Signal Processing (LMS).
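For intuition about one of the rendering methods named above, here is a minimal Python sketch of 2D Vector Base Amplitude Panning. The loudspeaker angles are made-up examples; a full renderer such as the SSR additionally handles loudspeaker-pair selection, 3D bases, and much more.

```python
# Minimal sketch of 2D Vector Base Amplitude Panning (VBAP):
# express the source direction as a linear combination of the unit
# vectors of an adjacent loudspeaker pair, then power-normalize.
import numpy as np

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """Gains for a source direction using one loudspeaker pair."""
    def unit(deg):
        a = np.deg2rad(deg)
        return np.array([np.cos(a), np.sin(a)])
    L = np.column_stack([unit(spk1_deg), unit(spk2_deg)])  # base vectors
    g = np.linalg.solve(L, unit(source_deg))               # p = L @ g
    return g / np.linalg.norm(g)                           # power normalize

# Source at 15 degrees between a standard +/-30 degree stereo pair:
# both gains positive, with more weight on the nearer (+30) speaker.
print(vbap_2d(15.0, -30.0, 30.0))
```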
Sascha Spors is a Professor and heads the group for Signal Processing and Virtual Acoustics at the Institute of Communications Engineering, Universität Rostock. From 2005 to 2012, he headed the audio technology group at the Quality and Usability Lab, Deutsche Telekom Laboratories, Technische Universität Berlin, as a senior research scientist. He obtained his Doctoral Degree (Dr.-Ing.) with distinction in January 2006 from the Faculty of Electrical, Electronic and Communication Engineering of the University Erlangen-Nuremberg, as a result of his work as a research scientist at the Chair of Multimedia Communications and Signal Processing. The topic of his doctoral thesis was the active compensation of listening room characteristics for sound reproduction systems. During his thesis work from 2001 to 2006, he conducted research on wave field analysis, wave field synthesis and massive multichannel adaptation problems. Sascha Spors is a member of the IEEE Signal Processing Society, the Deutsche Gesellschaft für Akustik (DEGA) and the Audio Engineering Society (AES). In 2011 he received the Lothar-Cremer-Preis of the DEGA. He is a member of the IEEE Audio and Acoustic Signal Processing Technical Committee and chair of the AES Technical Committee on Spatial Audio.
Matthias Geier is currently working as a research assistant at the Institute of Communications Engineering, University of Rostock. From 2007 to 2012, he was a research assistant at the Quality and Usability Lab of Deutsche Telekom Laboratories, TU Berlin. He studied Electrical Engineering/Sound Engineering at the University of Technology and the University of Music and Dramatic Arts in Graz, Austria, where he received his diploma degree (Diplom-Ingenieur) in 2006.
Maximilian Schäfer studies Electrical Engineering at the Faculty of Engineering of the Friedrich-Alexander-University Erlangen-Nuremberg (FAU). He is a performing musician and acts as a consultant in recording, arranging and management in the music business.
We are very happy to introduce our three keynote speakers for DAFx-14:
Improving Time-Frequency Upmix through Time-Domain Processing (Christof Faller, Illusonic)

Upmix has been broadly used in the professional (broadcast) and consumer (home cinema) domains to convert stereo signals to 5.1 surround. Our motivation to add time-domain methods (such as reverberators, early reflections, equalisers, exciters, and compressors) originally came from the desire for scalability: with the advent of 3D multi-channel surround, we wanted an upmix that would scale to almost any number of output channels. It quickly became clear, for instance, that ambience signals to be reproduced over many loudspeakers need to be generated very differently than in a 5.1 upmix. The initial efforts in adding reverberators were frustrating: while one could hear the potential (amazing envelopment), difficult items too often sounded bad. Ultimately, time-domain processing improved the quality of our upmix beyond scalability. Specifically, I will describe: early reflections for depth in three dimensions, reverberators for the generation of multi-channel ambience signals, equalisation of the center channel, and the use of exciters to enhance room signals.
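As a hedged illustration of the multi-channel ambience idea (a generic textbook construction, not Illusonic's algorithm; the side-signal ambience estimate, delay lengths and all-pass gain are arbitrary example choices): derive a crude ambience signal from the stereo side component and decorrelate it per output channel with Schroeder all-pass sections, so that many loudspeakers receive mutually decorrelated room signals.

```python
# Generic sketch of time-domain multi-channel ambience generation
# (illustrative only; not the keynote's actual method).
import numpy as np

def schroeder_allpass(x, delay, g=0.5):
    """Schroeder all-pass: y[n] = -g*x[n] + x[n-d] + g*y[n-d]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def ambience_channels(left, right, delays=(113, 167, 229, 283)):
    """One decorrelated ambience signal per loudspeaker."""
    side = 0.5 * (left - right)              # crude ambience estimate
    return [schroeder_allpass(side, d) for d in delays]

fs = 48000
noise = np.random.randn(2, fs // 10)         # 100 ms of stereo noise
outs = ambience_channels(noise[0], noise[1])
print(len(outs), outs[0].shape)              # 4 decorrelated channels
```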
Christof Faller received a Dipl.-Ing. degree in electrical engineering from ETH Zurich, Switzerland, in 2000, and a Ph.D. degree for his work on parametric multichannel audio coding from EPFL Lausanne, Switzerland, in 2004. From 2000 to 2004 he worked in the Speech and Acoustics Research Department at Bell Labs Lucent and its spin-off Agere Systems, where he worked on audio coding for satellite radio, MP3 Surround, and the MPEG Surround international standard. Dr. Faller is currently managing director at Illusonic, a company he founded in 2006, and teaches at the Swiss Federal Institute of Technology (EPFL) in Lausanne. He has won a number of awards for his contributions and inventions in spatial audio.
Audio Indexing for Music Analysis and Music Creativity (Geoffroy Peeters, IRCAM)

Since the end of the 1990s, audio signal analysis has been used more and more in connection with machine learning for the development of audio indexing. One of the specific types of audio content targeted by these indexing technologies is music, and the corresponding research field is named Music Information Retrieval (MIR). MIR attempts to develop tools for the automatic analysis of music (score, tempo, chord, key, instrumentation, genre, mood, and tag classification). In this talk, I will review the development of this research field, its connection with other research fields, and the motivations for its development: from the initial paradigm of search and navigation over large music collections (music search engines) to the more recent computational musicology and ethnomusicology and the use of MIR for music creativity.
Geoffroy Peeters is a senior researcher at IRCAM, where he leads audio and music indexing activities. He received his Ph.D. degree in 2001 and his Habilitation degree (accreditation to supervise research) in 2013 from the University Paris VI. He has developed new algorithms for timbre description, sound classification, audio identification, rhythm description, music structure discovery, audio summary, and music genre/mood recognition. He owns several patents in these fields and is co-author of the ISO MPEG-7 audio standard.
The Beatles and Erlangen? Bubenreuth near Erlangen - the Place where the World-Famous Instruments are Made (Christian Hoyer, Hoyer History)

Legendary bands, orchestras, stars and virtuosos like Yehudi Menuhin, the Bavarian Radio Orchestra, Peter Kraus, Elvis, the Beatles and the Rolling Stones - they all played Bubenreuth instruments. In the post-war years, displaced persons from the Sudeten region (Czechoslovakia) brought musical instrument manufacturing and related industries to the region around Erlangen. One in ten German musical instrument manufacturers is based in the region. Bubenreuth in particular was transformed from a farming village into a metropolis of German string instrument making. The community council of Bubenreuth - then a small village of fewer than 500 inhabitants - decided in 1949 that more than 2,000 people would be resettled there in the following years.
Whether beginners learning an instrument, musicians in philharmonic orchestras or rock stars - they all appreciate Franconian violins and guitars. Both small artisan workshops and semi-industrial manufacturers produce quality products for the home market, but mainly for export. The viola da gamba-shaped electric bass designed by Walter Höfner in 1956 and played by Sir Paul McCartney exemplifies how Bubenreuth's roots in the instrument making tradition of the 17th century extend to the manufacturing of electric guitars today. A museum was founded in 2009 in order to preserve the cultural heritage of Bubenreuth.
Christian Hoyer (born 1976) studied History, Political Science and East European Studies at the Universities of Marburg, Keele (UK), London and Erlangen. He received his PhD degree in history in 2007 ("Lord Salisbury and 19th Century English Foreign Policy"). He worked as a lecturer at the University of Erlangen and as a research assistant at the Centre for British Studies at the University of Bamberg. From 2007 to 2011 he worked as manager of "history communication" for the electric guitar manufacturer Warwick/Framus, a position that included the curatorship of the Framus Museum and the corporate archives in Markneukirchen (Saxony). One of his main tasks was to build up a company museum; following this achievement, the Framus Museum received the award for "History Communication" of the Association of German Business Archivists (Vereinigung deutscher Wirtschaftsarchivare, VDW) in 2009. Since 2012, Dr. Hoyer has been working with the Erlangen-based publishing house Palm & Enke and is head of the Bubenreutheum museum association.
Monday, 01.09.2014 (IIS)

12:00 - 18:00 | Registration @ IIS | IIS Foyer
12:00 - 14:00 | Lunch @ IIS | IIS Foyer
14:00 - 18:00 | Tutorials & Demos @ IIS (Chair: Jürgen Herre)
14:00 - 15:30 | Tutorial: Multipitch Analysis of Music Signals (Anssi Klapuri, Ovelin & Tampere University of Technology) | IIS Lecture Room
15:30 - 16:30 | Demos (Researchers from IIS & AudioLabs) | IIS Foyer
16:30 - 18:00 | Tutorial: Audio Structure Analysis of Music (Meinard Müller, AudioLabs, Universität Erlangen-Nürnberg) | IIS Lecture Room
18:00 - 21:00 | Welcome Reception @ IIS | IIS Foyer
Tuesday, 02.09.2014 (IIS)

08:30 - 12:00 | Registration @ IIS | IIS Foyer
09:00 - 10:00 | Welcome | IIS Lecture Room
10:00 - 11:00 | Keynote 1 (Chair: Bernd Edler): Improving Time-Frequency Upmix through Time-Domain Processing (Christof Faller, Illusonic) | IIS Lecture Room
11:00 - 11:30 | Coffee Break | IIS Foyer
11:30 - 13:00 | Tutorials & Demos @ IIS
11:30 - 13:00 | Tutorial: Perceptual Audio Coding (Jürgen Herre, Bernd Edler, Sascha Disch, AudioLabs Erlangen) | IIS Lecture Room
13:00 - 14:00 | Lunch @ IIS | IIS Foyer
14:00 - 14:30 | Poster Fast Forward 1 (Chair: Martin Holters) | IIS Lecture Room
14:30 - 16:00 | Poster Session 1: Sound Processing (Chair: Martin Holters) | IIS Foyer
- Finite Difference Schemes on Hexagonal Grids for Thin Linear Plates with Finite Volume Boundaries (Brian Hamilton and Alberto Torin)
- Prioritized Computation for Numerical Sound Propagation (John Drake, Maxim Likhachev and Alla Safonova)
- Sinusoidal Synthesis Method using a Force-based Algorithm (Ryoho Kobayashi)
- A Method of Morphing Spectral Envelopes of the Singing Voice for Use with Backing Vocals (Matthew Roddy and Jacqueline Walker)
- Short-Time Time-Reversal on Audio Signals (Hyung-Suk Kim and Julius Smith)
- A Statistical Approach to Automated Offline Dynamic Processing in the Audio Mastering Process (Marcel Hilsamer and Stephan Herzog)
- Revisiting Implicit Finite Difference Schemes for Three-Dimensional Room Acoustics Simulations on GPU (Brian Hamilton, Stefan Bilbao and Craig J. Webb)
- A Preliminary Model for the Synthesis of Source Spaciousness (Darragh Pigott and Jacqueline Walker)
- Low Frequency Group Delay Equalization of Vented Boxes using Digital Correction Filters (Stephan Herzog and Marcel Hilsamer)
- Exploring the Vectored Time Variant Comb Filter (Vesa Norilo)
- Time-Varying Filters for Musical Applications (Aaron Wishnick)
16:00 - 18:00 | Tutorials & Demos @ IIS (Chair: Jouni Paulus): Demos at IIS, Cinema, Sound Labs, etc. | IIS Cinema Foyer
19:00 - 22:00+ | Concert & Reception @ IIS (Music: Florian von Ameln, Fabian-Robert Stöter, etc.) | IIS Foyer
Wednesday, 03.09.2014 (FAU)

08:30 - 10:00 | Registration @ FAU | FAU H11 Foyer
09:00 - 10:30 | Oral Session 1: Filters and Effects (Chair: Sigurd Saue) | FAU H11 Lecture Hall
09:00 - 09:20 | Perceptual Linear Filters: Low-Order ARMA Approximation for Sound Synthesis (Rémi Mignot and Vesa Välimäki)
09:20 - 09:40 | Approximations for Online Computation of Redressed Frequency Warped Vocoders (Gianpaolo Evangelista)
09:40 - 10:00 | Hybrid Reverberation Processor with Perceptual Control (Thibaut Carpentier, Markus Noisternig and Olivier Warusfel)
10:00 - 10:20 | Examining the Oscillator Waveform Animation Effect (Joseph Timoney, Victor Lazzarini, Jari Kleimola and Vesa Välimäki)
10:30 - 11:00 | Coffee Break | FAU H11 Foyer
11:00 - 12:30 | Oral Session 2: Sound Synthesis (Chair: Jiri Schimmel) | FAU H11 Lecture Hall
11:00 - 11:20 | Multi-Player Microtiming Humanisation using a Multivariate Markov Model (Ryan Stables, Satoshi Endo and Alan Wing)
11:20 - 11:40 | Streaming Spectral Processing with Consumer-Level Graphics Processing Units (Victor Lazzarini, John ffitch, Joseph Timoney and Russell Bradford)
11:40 - 12:00 | A Two Level Montage Approach to Sound Texture Synthesis with Treatment of Unique Events (Sean O'Leary and Axel Röbel)
12:00 - 12:20 | Fast Signal Reconstruction from Magnitude Spectrogram of Continuous Wavelet Transform Based on Spectrogram Consistency (Tomohiko Nakamura and Hirokazu Kameoka)
12:30 - 14:00 | Lunch @ FAU | FAU Canteen
14:00 - 15:30 | Tutorials & Demos @ FAU (Chair: Rudi Rabenstein)
14:00 - 15:30 | Tutorial: Sound Field Synthesis with the SoundScape Renderer (Sascha Spors et al.)
15:30 - 16:00 | Coffee Break | FAU H11 Foyer
16:00 - 17:30 | Oral Session 3: Physical Modeling and Virtual Analog (Chair: Vesa Välimäki) | FAU H11 Lecture Hall
16:00 - 16:20 | Numerical Simulation of String/Barrier Collisions: The Fretboard (Stefan Bilbao and Alberto Torin)
16:20 - 16:40 | An Energy Conserving Finite Difference Scheme for the Simulation of Collisions in Snare Drums (Alberto Torin, Brian Hamilton and Stefan Bilbao)
16:40 - 17:00 | Physical Modeling of the MXR Phase 90 Guitar Effect Pedal (Felix Eichas, Marco Fink, Martin Holters and Udo Zölzer)
17:00 - 17:20 | A Physically-Informed, Circuit-Bendable, Digital Model of the Roland TR-808 Bass Drum Circuit (Kurt James Werner, Jonathan S. Abel and Julius O. Smith)
19:00 - 23:00 | Conference Banquet | Entla's Keller
Thursday, 04.09.2014 (FAU)

08:30 - 10:00 | Registration @ FAU | FAU H11 Foyer
09:00 - 10:00 | Keynote 2 (Chair: Meinard Müller): Audio Indexing for Music Analysis and Music Creativity (Geoffroy Peeters, IRCAM) | FAU H11 Lecture Hall
10:00 - 10:30 | Coffee Break | FAU H11 Foyer
10:30 - 12:00 | Oral Session 4: Music Analysis and Retrieval (Chair: Joe Timoney) | FAU H11 Lecture Hall
10:30 - 10:50 | The Modulation Scale Spectrum and its Application to Rhythm-Content Description (Ugo Marchand and Geoffroy Peeters)
10:50 - 11:10 | Quad-Based Audio Fingerprinting Robust to Time and Frequency Scaling (Reinhard Sonnleitner and Gerhard Widmer)
11:10 - 11:30 | Score-Informed Tracking and Contextual Analysis of Fundamental Frequency Contours in Trumpet and Saxophone Jazz Solos (Jakob Abeßer, Martin Pfleiderer, Klaus Frieler and Wolf-Georg Zaddach)
11:30 - 11:50 | Real-Time Transcription and Separation of Drum Recordings Based on NMF Decomposition (Christian Dittmar and Daniel Gärtner)
12:00 - 12:30 | Poster Fast Forward 2 (Chair: Stefan Bilbao) | FAU H11 Lecture Hall
12:30 - 14:00 | Lunch @ FAU | FAU Canteen
14:00 - 15:30 | Poster Session 2: Music Analysis and Effects (Chair: Stefan Bilbao) | FAU H11 Foyer
- A Pitch Salience Function Derived from Harmonic Frequency Deviations for Polyphonic Music Analysis (Alessio Degani, Riccardo Leonardi, Pierangelo Migliorati and Geoffroy Peeters)
- A Comparison of Extended Source-Filter Models for Musical Signal Reconstruction (Tian Cheng, Simon Dixon and Matthias Mauch)
- Onset Time Estimation for the Analysis of Percussive Sounds using Exponentially Damped Sinusoids (Bertrand Scherrer and Philippe Depalle)
- Automatic Tablature Transcription of Electric Guitar Recordings by Estimation of Score- and Instrument-Related Parameters (Christian Kehling, Jakob Abeßer, Christian Dittmar and Gerald Schuller)
- Improving Singing Language Identification through i-Vector Extraction (Anna Kruspe)
- Unison Source Separation (Fabian-Robert Stöter, Stefan Bayer and Bernd Edler)
- A Very Low Latency Pitch Tracker for Audio to MIDI Conversion (Olivier Derrien)
- TSM Toolbox: MATLAB Implementations of Time-Scale Modification Algorithms (Jonathan Driedger and Meinard Müller)
- FreeDSP: A Low-Budget Open-Source Audio-DSP Module (Sebastian Merchel and Ludwig Kormann)
- Declaratively Programmable Ultra Low-Latency Audio Effects Processing on FPGA (Math Verstraelen, Jan Kuper and Gerard J.M. Smit)
15:30 - 16:40 | Oral Session 5: Multipitch Analysis and Source Separation (Chair: Philippe Depalle) | FAU H11 Lecture Hall
15:30 - 15:50 | Polyphonic Pitch Detection by Iterative Analysis of the Autocorrelation Function (Sebastian Kraft and Udo Zölzer)
15:50 - 16:10 | Music-Content-Adaptive Robust Principal Component Analysis for a Semantically Consistent Separation of Foreground and Background in Music Audio Signals (Helene Papadopoulos and Daniel P.W. Ellis)
16:10 - 16:30 | Semi-Blind Audio Source Separation of Linearly Mixed Two-Channel Recordings via Guided Matching Pursuit (Dimitri Zantalis and Jeremy Wells)
16:45 - 17:30 | DAFx Board Meeting
19:00 - open end | Visit to Nuremberg
Friday, 05.09.2014 (FAU)

08:30 - 10:00 | Registration @ FAU | FAU H11 Foyer
09:00 - 10:00 | Keynote 3 (Chair: Sascha Disch): The Beatles and Erlangen? Bubenreuth near Erlangen - the Place where the World-Famous Instruments are Made (Christian Hoyer, Hoyer History) | FAU H11 Lecture Hall
10:00 - 10:10 | Announcement of Best Paper Awards | FAU H11 Lecture Hall
10:10 - 10:30 | Coffee Break | FAU H11 Foyer
10:30 - 12:00 | Oral Session 6: Perception and Spatial Audio (Chair: Gianpaolo Evangelista) | FAU H11 Lecture Hall
10:30 - 10:50 | Finite Volume Perspectives on Finite Difference Schemes and Boundary Formulations for Wave Simulation (Brian Hamilton)
10:50 - 11:10 | A Cross-Adaptive Dynamic Spectral Panning Technique (Pedro D. Pestana and Joshua D. Reiss)
11:10 - 11:30 | Low-Delay Error Concealment with Low Computational Overhead for Audio over IP Applications (Marco Fink and Udo Zölzer)
11:30 - 11:50 | Categorisation of Distortion Profiles in Relation to Audio Quality (Alex Wilson and Bruno Fazenda)
12:00 - 12:10 | Handover to DAFx-15 Organizers | FAU H11 Lecture Hall
12:10 - 12:20 | Parting Words | FAU H11 Lecture Hall
12:20 - 13:45 | Lunch @ FAU | FAU Canteen