This year's conference will continue to provide the leading coverage of audio development while setting the stage for the evolution of audio technology.
We're also launching a new day of workshops, which will take place on Monday 13th November.
The event will be hosted at CodeNode in central London.
|8:30||Registration and refreshments|
|9:30 - 12:30|
Fabian Renn-Giles, Lead Engineer,
Ed Davies, Projucer Developer, JUCE
Zsolt Garamvölgyi, Staff Algorithm Engineer,
SKoT McDonald, Lead Software Engineer & Head of Sound R&D, ROLI
|14:00 - 17:00|
Vlad Voina, Freelance Software Engineer
Paul Chana, Senior Software Engineer, ROLI
Julian Storer, Principal Software Engineer,
Tom Poole, Senior Software Engineer, JUCE
Don Turner, Senior Developer Advocate,
Phil Burk, Staff Software Engineer, Android Audio Framework, Google
|16:00 - 17:30|
Registration for main conference delegates
|18:00 - 21:00|
Welcome event in partnership with Apple
|8:00||Registration and refreshments|
Jean-Baptiste Thiebaut, Director of Developer Products, ROLI
Keynote: Does your code actually matter?
Countless conference talks have been given about how to write "better" code. This talk is not about how good or bad your code is - instead it's a discussion of how *significant* it is.
|10:15||Break for refreshments & networking|
Béla Balázs, Software Engineer, Apple
Doug Wyatt, Software Engineer, Apple
John Danty, Senior Product Manager, GarageBand
David Rowland, Lead Software Developer, Tracktion
C++, JUCE, UIs
Angus Hewlett, VP Engineering, Research & Design, ROLI
C++, Performance, DSP
To be announced
David Saracino, Software Engineer, Apple
Yvan Grabit, Technical Lead VST,
Michael Spork, Senior Developer, Steinberg Media Technologies
Plugins and DAWs
Andreas Gustafsson, Lead Developer, Waxing Wave
Tim Adnitt, Product Manager,
Carl Bussey, Software Developer, Native Instruments
Friedemann Schautz, Head of Development, Ableton
Glenn Kasten, Software Engineer, Google
Sanna Wager, PhD Candidate, Music Informatics, Indiana University Bloomington & Intern, Google
Machine Learning, Performance
Martin Schuppius, Independent Software Developer
Anastasia Kazakova, Product Marketing Manager CLion, JetBrains
C++, Dev tools
David Zicarelli, CEO, Cycling’74
Nikolas Borrel, Founder, Livetake
Statistics and Music
The amazing usefulness of band limited impulse trains, shown for oscillator banks, additive synthesis, and modeling old stuff.
Stefan Stenzel, CTO, Waldorf Music GmbH
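For readers curious about the technique behind this session's title: the closed-form band-limited impulse train of Stilson and Smith can be sketched in a few lines of C++. This is an illustrative sketch, not material from the talk itself; the function name and parameter choices are our own.

```cpp
#include <cmath>

// Closed-form band-limited impulse train (after Stilson & Smith, 1996):
// one unit-area impulse every P samples, containing M harmonics.
// M must be odd and at most P to stay below the Nyquist limit.
// blit(n) = sin(pi*M*n/P) / (P * sin(pi*n/P)), with the 0/0 limit M/P.
double blit(int n, int P, int M)
{
    const double pi = 3.14159265358979323846;
    const double x = static_cast<double>(n % P);   // position within one period
    const double denom = std::sin(pi * x / P);
    if (std::abs(denom) < 1e-12)                   // at the impulse itself,
        return static_cast<double>(M) / P;         // take the limit of the ratio
    return std::sin(pi * M * x / P) / (P * denom);
}
```

Summing or leaky-integrating such a train is the classic route to band-limited sawtooth and pulse oscillators; each impulse sums to exactly 1 over a period, which makes the integrator's DC behaviour easy to reason about.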
Dave Ramirez, Senior Software Engineer, Inspire
|15:25||Afternoon break for refreshments & networking|
Moderator: Heather D. Rafter, Attorney, RafterMarsh US
Panellists: Mike Warriner, General Counsel, Focusrite,
Iris Brandis, Legal Counsel, Ableton
Jemilla Olufeko, Legal Counsel, ROLI
Varun Nair, Engineering Manager,
Hans Fugal, Software Engineer, Audio 360, Facebook
Dev tools, Performance
Devendra Parakh, CEO, Waves Audio India
André Bergner, Development Team Leader, Native Instruments
Keynote: Music as Experience, Music as Product, Music as People
We are surrounded by music, and people intrinsically connect through it: they share songs, sing together, go to concerts, and gather around campfires. Our mission is to enable these musical experiences everywhere, with millions of people around the world creating and sharing songs with each other every day. Much like music creation itself, there is no secret, but there are many trials, with failures and successes. In this keynote, Jeannie will share learnings from these trials through Smule's growth, and how not to lose sight of the experiences and the people you're building products for in the first place.
|19:00||Social Gathering (until 21:30) including Awards|
Keynote: The human in the musical loop
Music we hear is most often made by humans, directly or indirectly, for consumption by humans. In a series of anecdotes, we consider the imagination and sensory constraints of the human mind when creating and apprehending music. From the architecting of large-scale forms and structures in human-computer improvisation to the limits of ensemble interaction in distributed immersive performance, experiments reveal the workings of the musician’s mind in motion. The art of crafting musical experiences has been described as the choreography of expectation. Evidence of this work is made visual in expectation violations that generate musical humour, time delays that heighten anticipation, and tension modulations that create narrative interest. Finally, together with the modulation of tension, we examine how and if repetition structure imbues coherence in computer composition.
|10:00||Morning break for refreshments & networking|
Timur Doumler, Senior Software Developer, JetBrains
Jan König, Co-Founder, Jovo
AI & Machine Learning, Home assistants
Ivan Cohen, Freelance Software Developer & Owner, Musical Entropy
Ben Supper, Technical Programme Manager, ROLI & Chair, MPE Working Group, MIDI Manufacturers Association,
Amos Gaynes, Product Design Engineer, Moog Music
Steinunn Arnardottir, Head of Software Development & Collaborations, Native Instruments
AI & Machine Learning
Michael Zbyszyński, Research Associate, Dept of Computing, Goldsmiths, University of London
AI & Machine Learning
Daniel Jones, Chief Science Officer, Chirp
Matthieu Brucher, Lead Developer, Audio Toolkit
Moderator: Matthias Krebs, Appmusik
Panellists: Jeannie Yang, Product Leader & Innovator, Smule
Matt Derbyshire, Head of Product, Ampify (part of Focusrite/Novation)
Ashley Elsdon, Founder, Palmsounds & Journalist, Create Digital Media
Julian Storer, Principal Software Engineer, Tom Poole, Senior Software Engineer,
Fabian Renn-Giles, Lead Engineer, Ed Davies, Projucer Developer,
Lukasz Kozakiewicz, Senior Software Engineer, Noah Dayan, Software Engineer Intern
Ian Hobson, Application Developer & Software Engineer, Ableton
Christof Mathies, Computer Scientist,
Nico Becherer, Senior Computer Scientist, Adobe Audio Team
Testing, Audio plugins, Performance, DSP
Raphael Dinge, Head of Software Architecture, Ohm Force
SKoT McDonald, Lead Software Engineer & Head of Sound R&D, ROLI
Brecht De Man, Researcher, Centre for Digital Music, QMUL
AI & Machine Learning, Plugins and DAWs, Audio Industry in general
Martin Finke, Freelance Software Developer
|15:25||Afternoon break for refreshments & networking|
Don Turner, Senior Developer Advocate, Google
Ryan Avery, Senior Engineer, Dolby Laboratories
Testing, Plugins and DAWs
Geert Bevin, Senior Software Engineer, Moog Music
Embedded, JUCE, UIs
Assessing the suitability of audio code for real-time, low-latency embedded processing with (or without) Xenomai
Giulio Moro, PhD Student, Centre for Digital Music, QMUL
Keynote: How can physical play give people a stronger sense of human connection?
This talk proposes collective play as a meditative practice to reconnect us with each other and our environment.
|19:30||Open Mic Night (until 21:30)|
The playhead is an essential component of most digital audio workstations, yet the majority of audio effects remain blissfully unaware of it. The Groovinator, an audio plugin available in VST and AU formats, presents several related methods of utilizing the playhead as a tool for modifying rhythmic aspects of an audio track in real-time. By using playhead position data in addition to attributes of the rhythms (i.e. the number of pulses and steps in the original and target rhythms), the Groovinator is able to automatically adjust a track’s rhythm in ways that would otherwise be painstakingly time-consuming. As such, it demonstrates the capacity for audio effects to take advantage of the playhead as a tool for enhancing real-time production and composition.
David Su, Research Assistant, MIT Media Lab
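As background to how an effect like this gets its timing information: the host exposes the playhead to plugins (in JUCE, via AudioPlayHead's ppqPosition inside processBlock), and a plugin must first relate that position to a step grid before it can remap rhythms. The helper below is a hypothetical sketch of that first step only; the struct and function names, and the 4/4 default, are ours, not the Groovinator's.

```cpp
#include <cmath>

// Relate a host playhead position (in quarter notes, as a plugin reads it
// from the DAW) to a grid of `steps` equal steps per bar.
struct StepPosition
{
    int step;      // index of the current step, 0 .. steps - 1
    double phase;  // fractional progress through that step, 0 .. 1
};

StepPosition stepFromPlayhead(double ppqPosition, int steps, double beatsPerBar = 4.0)
{
    const double bars = ppqPosition / beatsPerBar;     // position in bars
    const double barPhase = bars - std::floor(bars);   // 0 .. 1 within the bar
    const double stepPos = barPhase * steps;           // position on the grid
    const int step = static_cast<int>(stepPos) % steps;
    return { step, stepPos - step };
}
```

With the current step and its phase in hand, a plugin can decide when to stretch, delay, or retrigger material so that it lands on a target grid.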
The motivation for this work is to understand the factors that influence the sound of popular guitarists, in order to be able to replicate their sound by extracting relevant parameters from their recordings. Electric guitar synthesis requires parameters such as the electric guitar's pickup position and where the string is plucked. The positioning of pickups on popular electric guitar models contributes to their unique sound. Thus, estimating the location of the magnetic pickup of an electric guitar could help distinguish which pickup configuration is selected (for a known guitar) and which electric guitar model is played (for an unknown guitar). We propose an approach to simultaneously estimate the pickup and plucking locations of an electric guitar from audio recordings. In this poster, we discuss the results from our published papers, where the accuracy of our method is evaluated for various notes, plucking dynamics, chords and audio effects. We also include some tests on real-world signals such as the guitar tracks from The Beatles' "Day Tripper" and The Doors' "Love Me Two Times".
Zulfadhli Mohamad, PhD Researcher, QMUL
Myo Mapper is a free and open source cross-platform application, written using JUCE, that maps Myo data into OSC messages. It represents a “quick and easy” solution for experimenting without requiring any programming knowledge. It features (i) easy calibration to overcome yaw data drift, (ii) mapping functions to facilitate the Myo's use in musical applications, and (iii) data feature extraction. Myo Mapper is among the most downloaded apps in the Myo Market in the 'Connected Things' and 'Tools and Productivity' categories. It has facilitated the realisation of 7 musical and 3 dance performances worldwide, the development of interactive audiovisual systems for musical expression, sound design, and easy solutions for controlling robotic arms. Furthermore, it has been used in education to deliver workshops on Music Interaction Design at Berklee College of Music Valencia Campus (Spain), University of Southampton (UK), Conservatorio Giuseppe Martucci (Italy), and Conservatorio Santa Cecilia (Italy).
Balandino Di Donato, PhD Student, Integra Lab, Birmingham Conservatoire & Jamie Bullock, Lead Audio Developer, Noiiz
As a developer, have you ever wondered how to create a unified way for your users to use different tuning scales in your software? As a musician or music producer, have you ever wondered how to use the same microtonal scale across different instruments? IntaScale is a C++ module and accompanying REST API for audio developers to access a centralised store of tuning scales in their plugins and applications. The C++ module is based around a local database and will automatically synchronise with changes to the remote database. The API will allow registered users to share their own scales with other users, as well as allow an application to provide a subset of scales to its users. The data format is compatible with Scala.
Adam Wilson, Founder, Node Audio
This poster explores how a digital music instrument, the Seaboard, can be used for real-time control of animations. Animating digital characters can be a very unintuitive and difficult process. It often requires extensive knowledge and complex software. Creating animations in real-time, which would be interesting for interactive live performances, is even harder. We propose a setup that uses the Seaboard's input in the game engine Unity, look at different mappings that could be used to animate a character, and present the results of a small user-feedback round. We explain how the Seaboard is a suitable device for controlling animations in real-time.
Simon Ringeisen, Computer Science Student, ETH Zurich
Programming day in and day out can leave you with sore hands and wrists, which may become a serious problem if you don't change your habits and take ergonomics into consideration. As we all know making audio software can be great fun, but if you are not careful, hours of programming without breaks can cause you real damage. Further to this, the manual dexterity required for programming can be a limiting factor and a barrier to entry for people with disabilities. Anyone who has tried using voice recognition knows that it can be quite error-prone, and whilst these days it's very usable for writing e-mails and text messages, it is generally too cumbersome to code with due to the special syntax, indentation, and capitalisation of words required for most programming languages. These challenges can be overcome with certain hardware and software, and there are many coders out there who have switched to using their voice for some or all of their computer use, in order to recover from injuries and continue to work. Several voice coding systems have been developed in the past, but most of those were for Windows only. VoiceCode.io is a relatively new piece of software that provides an extremely flexible system and language designed for coding using the voice and works on Mac. Windows support is planned. VoiceCode.io does much more than just coding and allows customisable voice control of the entire operating system, speeding up many tasks. Oli will discuss his experiences with different hardware and software for anyone who is curious and wants to try and avoid injury. Although VoiceCode.io is not his project, Oli has developed an Xcode extension that allows the software to interface with the Xcode text editor which he will demonstrate.
Oli Larkin, Audio Software Developer, Oli Larkin Plugins & PhD Researcher, Creative Coding Lab, University of Huddersfield
CMake is a cross-platform build system generator: like JUCE’s Projucer, it can create Makefiles, Visual Studio solutions, and Xcode projects. CMake has many more features than the Projucer, and using it will allow you to simplify and automate building, testing, and packaging your JUCE applications and audio plugins. However, due to the way JUCE projects are structured, it is not easy to use CMake to build them, especially if you are not an advanced CMake user. This is why FRUT was created. FRUT, previously known as JUCE.cmake, is a collection of tools dedicated to building JUCE projects using CMake. It currently includes: Reprojucer.cmake, a CMake module that provides high-level functions to reproduce how a JUCE project is defined in Projucer; Jucer2Reprojucer, a console application that converts .jucer project files into ready-to-use CMake project files; and several CMakeLists.txt files generated from existing .jucer files as examples. These tools make it very easy to convert an existing JUCE project and start leveraging the features of CMake right away.
Alain Martin, Software Engineer, Ableton
The deadline for submitting a talk has now passed. Please come back next year to submit a proposal for a talk and/or a poster to be presented at ADC 2018. Presenting your work at the Audio Developer Conference is an excellent way to engage a wide range of C++ practitioners with your areas of interest and expertise, and to gather useful feedback from experts. JUCE invites all attendees – from C++ gurus to indie developers to students – to contribute to the Audio Developer Conference.
- 7th July: Notification of acceptance
- 14th July: Programme announcement
Format and proposal form
Talks range from audio research to professional practices and standards in audio development, and experimental projects are welcome too. We can accommodate talks that are 25 minutes long (half session) or 60 minutes long (full session), including Q&A.
Presenters are encouraged, but not required, to submit slides and source code for distribution to attendees and to agree to have their sessions recorded. Presenters must agree to grant a non-exclusive perpetual license to publish submitted and/or recorded materials, either electronically or in print, in any media related to JUCE.
Speakers will benefit from a discounted price to attend the conference, at a flat fee of £200. Speakers who have already purchased an early bird ticket will be refunded the price difference.
If you are intending to submit a proposal and would like to discuss a reduced fee (if you're travelling from abroad, are a student, an independent developer, etc.), please get in touch with us by sending an e-mail to email@example.com
- Aaron Leese, Stagecraft Software
- Adam Wilson, Node Audio
- Anastasia Kazakova, JetBrains
- André Bergner, Native Instruments
- Andrew McPherson, Queen Mary University of London
- Astrid Bin, Queen Mary University of London
- Ben Supper, ROLI
- Costas Calamvokis, Evenharmonic
- David Rowland, Tracktion
- Don Turner, Google
- Fabian Renn-Giles, ROLI
- Felipe Tonello, ROLI
- Geert Bevin, Moog Music
- Ian Hobson, Ableton
- Ivan Cohen, Musical Entropy
- Julian Storer, ROLI
- Kevin Nelson, rknLA
- Mariana Lopez, University of York
- Martin Robinson, Spitfire Audio
- Michael Zbyszyński, Goldsmiths University of London
- Mick Grierson, Goldsmiths University of London
- Oliver Larkin, Oli Larkin Plug-ins
- Ray Chemo, Native Instruments
- Rebecca Stewart, Queen Mary University of London
- Richard Meyer, JazzMan Ltd
- Richard Powell, Apple
- Skot McDonald, FXpansion
- Stefan Gränitz, Freelance Developer and Compiler Engineer
- Steinunn Arnardottir, Native Instruments
- Stéphane Letz, Grame
- Steve Baker, FXpansion
- Thibaut Carpentier, Ircam
- Thomas Poole, ROLI
- Zsolt Garamvölgyi, ROLI
CodeNode is the UK's largest venue dedicated to technology events, offering premium training, collaboration and large conference space in the heart of London.
A tech start-up at heart, Skills Matter has been organising sector-leading technology events for more than a decade. And, with 23,000 sq ft to play with, every event held at CodeNode can be tailor-made to suit your individual requirements.
A spacious and vibrant venue, CodeNode is perfect for hosting workshops, your next product launch or your largest technology conferences. Get in touch to find out more today.
- Travelodge City Road (London - City) (1-23 City Road, London EC1Y 1AG) 0.3 miles from CodeNode
- Holiday Inn Express (London - City) (275 Old St, London) 0.6 miles from CodeNode
- Travelodge Liverpool Street Station (1 Harrow Place, London E1 7DB, United Kingdom) 0.6 miles from CodeNode
- Premier Inn (Aldgate) (66 Alie Street, Aldgate, London E1 8PX) 0.9 miles from CodeNode
- Apex London Wall Hotel (7 - 9 Copthall Avenue, City of London) 0.3 miles from CodeNode
- Hotel Indigo (Tower Hill) (142 Minories, London EC3N 1LS) 1 mile from CodeNode
- South Place Hotel (3, South Place, City of London) 50 metres from CodeNode
- Andaz Liverpool Street (40 Liverpool Street, City of London) 0.3 miles from CodeNode