Interview: CHAIR
CHAIR (The Center for Haptic Audio Interaction Research) is the company of Max Neupert and Clemens Wegener, two experts in physical modelling synthesis who first captured the attention of many in the modular community with a prototype demonstration (a lecture concert) of their upcoming Analogue Waveguide module at Superbooth 2024.
The Analogue Waveguide is a stereo resonator module that uses a combination of analog BBD delay chips and digital control circuitry to implement a form of physical modelling – so-called waveguide synthesis – in the analog domain, an approach that had previously only been realized in software.
Intrigued by the module and its unique approach, we reached out to Max and Clemens to ask a few questions about their background in academic research, the concept behind CHAIR, the difference between designing hardware and software, as well as the strengths and weaknesses of different approaches to physical modelling, including waveguide and Karplus-Strong synthesis.
You have published a number of academic papers related to things like physical modelling – can you say a little bit about your background and what first got you interested in DSP and physical modelling synthesis?
Max Neupert: I studied industrial design and time-based media and completed a PhD in media arts. I started writing academic papers with Joachim Goßmann, whom I met at UC San Diego in 2013. We decided to publish a paper about an interface for concatenative synthesis, and he mentored me on how to do so. And, although I felt like a total impostor, I really enjoyed the process and the inspiring conferences. You meet interesting people and learn about fascinating subjects from your academic peers. We even gave a short presentation of the paper at CCRMA at Stanford. I was quite starstruck when I saw John Chowning [the inventor of FM synthesis] in the audience – and he gave us a thumbs-up after our talk!
We found the communities around conferences such as NIME, SMC, ICMC, and DAFx particularly interesting. Smaller, more tightly knit communities, such as the Linux Audio Conference or the Pure Data Convention – which I hosted in Weimar in 2011 – were also really rewarding. We are now often asked to review for conferences, which is a lot of unpaid work, but it helps us keep our academic “brain muscles” flexed. In February 2025, we were invited to an expert symposium on physical modelling synthesis at Princeton University by Jeff Snyder, a skilled player and inventor of some truly unique physical modelling instruments. It was an absolute honour to be able to present our work alongside the grandmasters of physical modelling, Perry Cook and Julius O. Smith III.
Clemens Wegener: After studying Musicology and Computer Science, I had a little bit of experience with academic writing, but it was Max who really pulled me into writing papers. It's all his fault! When we built our first physical modelling controller in 2018, I myself wouldn't have thought to write a paper about it. But I have a PhD position in audio DSP in Lyon now, so it was probably a good idea!
When I first started working on music software, I was more interested in using samples of acoustic instruments as my source material, which really came from my own musical practice. I still use acoustic material in my tracks; it's just an aesthetic preference of mine. At some point, I realized that physical modelling synthesizers could evoke that materialistic aesthetic of “sample music” as well – perhaps in a slightly different way, but that side of it got me very interested.
Max: I think that, because of our diverse backgrounds, our skill sets complement each other quite well. As friends, we understand and trust each other without needing to spend too much time on communication – which is essential when running a business together. Even though we no longer live in the same city, we can still work together effectively.

What led you to start your company CHAIR (The Center for Haptic Audio Interaction Research)?
Clemens: After having done a few music software projects, it was clear to me that I wanted to start my own company. I had already been lurking around my university's startup hub for quite a while when Max returned from his professorship in Korea, and he was adventurous enough to join the team. We initially worked together with business students – that was a very interesting time. An important long-term supporter of ours is Sebastian Stang, who taught me how to code in C++ and still advises us on software architecture. We were also very happy to receive some financial support from the government. It would have been very difficult without that support, and as part of that deal, we were actually obliged to start a company.
Max: When we started the company, we needed a brand name that would represent what we were doing. We knew we needed the “R” for “Research” in the name, otherwise, the “chair” in our logo wouldn’t make any sense. It should be a tea bag! But all jokes aside, we liked the term “research” because we knew what interested us, but we weren’t fully sure of how to achieve our goals. We anticipated that it would take us a long time researching – conceptualising, developing and experimenting – to get there.
We also just enjoyed being a part of the academic world, and with Clemens' PhD position in Lyon, we continue to be a part of academia. That is what is encapsulated in the term “research”. However, legally, we are just the “Neupert and Wegener GbR”. Another one of our important close collaborators is Phillip Schmalfuß, who does incredible DSP work in Pure Data. He created many of the patches for our Tickle instrument and wrote the DSP basis for our EXC!TE Snare Drum and EXC!TE Cymbal plugins.
What exactly is meant by the phrase “haptic audio interaction”?
Max: Technically, it is a misnomer – the correct term would be “tactile” instead of “haptic”. However, “CTAIR” is not nearly as pronounceable as “CHAIR”, so please forgive us the inaccuracy! But to return to your question: our main interest really lies in the acoustic excitation of resonating physical models. We believe that this approach has a lot of potential for playful interactions, what is known [in computer music academia] as “intimate musical control”, a term coined by Wessel and Wright to describe a certain relationship between performer and instrument. Such a relationship requires a low-latency, low-jitter response and fine-grained, multidimensional control over the sound. So we understand acoustic excitation – e.g. a sound picked up from a physical object serving as the input for a resonator – as a form of “haptic audio interaction”.
And, you know, I hesitate to say that I am “a drummer”, but I took drum lessons and played the drums when I was younger. I used to read a lot of drummer magazines, and I was really excited when Korg came out with the Wavedrum. I even went to Musikmesse Frankfurt a few times, back when it was still a cool event! So when I finally got to play the Wavedrum at the Korg booth, I was really intrigued, but ultimately disappointed, because there was just something that didn’t quite feel right about the way you interacted with the sound. Today, with my current theoretical knowledge, I can conclude that the issue with the Wavedrum was that the aforementioned “intimate musical control” was lacking. Some presets were simple drum triggers with varying velocity, while others allowed more real-time interaction, but it just never felt as intimate as playing a conga. It didn’t allow for the rich gestures of an instrument like the tabla.
I no longer have a Wavedrum, so I can’t say for sure, but I think the problem was a combination of deficient control paradigms and jitter and latency issues. In the analog domain, latency and jitter are a non-issue, but as soon as digital processes are involved, they become a big concern. Robert Jack from Andrew McPherson's Augmented Instruments Lab wrote a great paper called “Action-Sound Latency and the Perceived Quality of Digital Musical Instruments: Comparing Professional Percussionists and Amateur Musicians” which discusses this exact problem. It concludes that having low jitter is actually more essential than achieving low latency, because a professional musician is able to compensate for constant latency.
Our Tickle instrument, which we showcased at Superbooth 2018, was designed to play virtual resonators through acoustic excitation. And while it definitely outperformed the Wavedrum in terms of offering intimate musical control, it was not where we wanted it to be technically, since its capacitive sensing was suboptimal and it relied on an attached computer for its digital signal processing.
But if you've been following the work of Bela.io recently, you may have noticed that they have developed an affordable, low-latency, and low-jitter DSP platform, as well as a powerful multitouch interface. We're very fortunate to have shared goals with Bela and to be able to benefit from this synergy – so we're thrilled to be working with their talented team, and we're closely involved in developing future products. And, by the way: we first met at Superbooth!

What was the motivation behind your first Eurorack module, the Ilse oscillator?
Max: The Ilse is a complete synth voice. It is an adaptation of Syntherjack's Totoro. Clemens was teaching beginner electronics for media artists at the Bauhaus-Universität Weimar, and the course resulted in this circuit, which is really good at making juicy acid sounds. It's a fun kit, and we wish more people would explore it. We feel that it hasn't quite had the success it may deserve. But on the other hand, it is also not our main mission to make DIY kits, so we're at peace with that.
Clemens: It was conceived as a minimalist synthesizer voice using only off-the-shelf components and easy-to-understand entry-level circuits. While trying to be as economical as possible, I found a way to integrate the exponential converters into a simple op-amp oscillator core. So you could argue that it is a little bit innovative as well. It was well suited to an introductory electronics course and for students to build. We’ve also learned from the mistakes they’ve made and have gone through several iterations to optimize the soldering experience. It was not really meant to be a commercial product initially, but we realized that it is very good value for the money, and we ended up offering it in our webshop. The module's name is an homage to the cat in a shared house I lived in during my student days in Weimar. The house was known as “the villa”, a beautiful place to be, where everybody loved Ilse as she explored the garden or drank water directly from the tap.
Can you describe what “waveguide synthesis” is and where it comes from? People often seem to use it somewhat interchangeably with “Karplus-Strong synthesis” – what are the main differences between the two approaches?
Max: In general, waveguide synthesis describes a simulation of sound waves travelling through a medium. The term “digital waveguide synthesis” originates from Julius O. Smith III, who defines digital waveguides as “computational physical models” – in other words, a digital signal processing method for modelling acoustic musical instruments.
Clemens: In contrast to other approaches, the wave-travel aspect of waveguide synthesis is modelled by employing delay lines. But I'm not entirely sure if waveguide synthesis should be seen as the mere method of [physical modelling] using delay lines, or if it should be seen as a closed theoretical framework with further implications. Either way, we are using the term in a more relaxed way, referring to the use of delay lines as the wave’s “travel vehicle”.
Max: In their 1983 paper “Digital Synthesis of Plucked-String and Drum Timbres”, Kevin Karplus and Alex Strong published what has become known as the “Karplus-Strong algorithm” or “Karplus-Strong string synthesis”. While Karplus-Strong predates Smith's work by a few years and is often described as a predecessor to it, if you take the technical definition of a waveguide, Karplus-Strong synthesis does fall under that definition and can be seen as a specific implementation of a waveguide.

(diagram taken from Karplus/Strong, “Digital Synthesis of Plucked-String and Drum Timbres”, 1983)
Waveguide synthesis consists of a few simple building blocks: a delay, a modifier (typically a lowpass filter), and finally a feedback path back to the delay, multiplied by a factor slightly under 1, which controls the decay of the sound. This represents a virtual approximation of what is happening in a stringed instrument when the string is plucked. The input to the algorithm is generally a click or a small noise burst that represents the impulse energy entering the string when it is plucked. This energy then travels to the fixed ends of the string (the bridge and nut) where it reflects and travels backwards.
The Karplus-Strong approach then folds these two traveling-wave directions of the string into a single delay-line feedback loop. And since these core elements – a delay, a filter, and feedback – are not difficult to implement with analog circuits or with Eurorack modules, the Karplus-Strong “algorithm” can easily be patched on a standard modular synthesizer. There’s nothing in it that would only be possible in the digital domain. If you have ever been to Christian Günther's (CG Products) tent at Superbooth and checked out his CG Delay 1022, you will have heard the kinds of lush and organic sounds that are possible with this paradigm in the analog domain.
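For readers who want to try this loop for themselves, here is a minimal digital sketch of the folded delay-line structure described above – a generic Karplus-Strong pluck in Python, not CHAIR's circuit; the sample rate, decay factor, and two-point-average lowpass are illustrative choices:

```python
import numpy as np

def karplus_strong(freq=110.0, sr=48000, dur=2.0, decay=0.996):
    """Noise burst -> delay line -> lowpass -> feedback with gain < 1."""
    n = int(sr / freq)                    # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)    # noise burst = pluck energy entering the string
    out = np.empty(int(sr * dur))
    for i in range(out.size):
        out[i] = line[i % n]
        # the two-point average is the lowpass "modifier";
        # the decay factor slightly under 1 drains energy on each round trip
        line[i % n] = decay * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

y = karplus_strong(freq=220.0)            # a plucked-string tone at roughly 220 Hz
```

Raising the decay factor towards 1 lengthens the ring-out, exactly like turning up the feedback on an analog delay.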
That said, there is something to the more complex digital approach of waveguide synthesis that has been difficult to recreate in the analog domain up until now, and that is the way energy conservation works at two-dimensional scattering junctions. In its simplified form [in Karplus-Strong], the reflection at the end of a one-dimensional string can be implemented by a single VCA or an amplifier gain knob controlling the feedback. Two-dimensional scattering junctions, by contrast, are nodes at which the waves get split into different paths. This splitting of waves needs to obey the law of energy conservation, and you will have a difficult time doing that simply with knob settings on audio mixers or VCAs (although you could, theoretically).
So compared to one-dimensional recursive feedback, something else is happening at the edge of two-dimensional resonators like membranes (drumheads, cymbals, etc.) or three-dimensional ones (think tubular bells, etc.) – the waves reflect in different directions and can rejoin, building more complex interactions than those seen in a string and creating inharmonic overtones and complex spectra. We [in our Analogue Waveguide module] are modelling this behaviour of two-dimensional resonators with our rotational mixer, but we can also model the sympathetic resonance of two strings on a bridge through the reflection setting.
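To illustrate the energy-conservation constraint Max describes, here is the textbook digital-waveguide formulation of a lossless four-port scattering junction, as used in 2D waveguide meshes – a generic sketch assuming equal wave impedances on all ports, not the circuit of the Analogue Waveguide's rotational mixer:

```python
import numpy as np

def scatter_4port(p_in):
    """Lossless 4-port scattering junction with equal impedances:
    junction pressure p_J = 0.5 * sum(inputs); each outgoing wave
    is p_J - p_in_i, which preserves the total signal energy."""
    p_j = 0.5 * np.sum(p_in)
    return p_j - p_in          # waves scattered back into the four paths

p_in = np.random.randn(4)      # four incoming traveling waves
p_out = scatter_4port(p_in)
assert np.isclose(np.sum(p_in**2), np.sum(p_out**2))  # energy in == energy out
```

Note that no free gain knob appears anywhere in the junction – the scattering coefficients are fully determined by the impedances, which is why arbitrary mixer or VCA settings will generally leak or inject energy.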

(diagram taken from Julius O. Smith III, “A Basic Introduction to Digital Waveguide Synthesis”, 2006)
Since waveguide synthesis was originally invented as a digital technique, what made you want to realize it in the analog domain in the form of a BBD-based Eurorack module, the Analogue Waveguide?
Clemens: Why analog? Just curiosity alone would have been a good enough reason for me, but there was also a more concrete reason: I was listening to patches by Ron Berry, who created digital and analog models of wind instruments. And when comparing the sound examples of his analog clarinet model with his digital Reaktor reproduction, the analog version sounded much more organic and interesting to my ears, even though both use the exact same building blocks. We have also made digital models of our waveguide, and I would say there is a very noticeable twist to the analog version.
There were also very few modules specialized in analog physical modelling when we started – namely Christian Günther's CG Delay 1022 and Cwejman's RES-4. They are great-sounding modules, but on their own, their sound palette is somewhat limited; you would need to chain multiples of them with other specialized modules to get the kind of dense spectra you can get with our Analogue Waveguide. So we wanted to create something that offered a more convenient approach to analog physical modelling and would give usable and interesting results faster, and with less of an initial investment.
As wonderful as they can be, BBD chips are also notorious for requiring quite specific circuit design to sound good. What was your approach to designing the module’s circuit from prototype to finished product?
Clemens: We went through quite a few iterations, really starting with the simplest delay circuits and then learning from them. We made simulations of the circuitry in SPICE and, on a higher level, in Pure Data, to see whether certain techniques might help with noise or linearity. We learned a lot from these simulations, but equally as much from building actual breadboards and PCB prototypes. We also tested a variety of different BBD chips for their signal-to-noise ratios and saturation behaviour.
We had an all-analog version of the module at one point, including the driving oscillators and sine shapers for the rotation core. But we had temperature stability problems with the original analog control circuitry, so we finally decided to give the module a digital brain, which also saved us an additional PCB and lowered production costs. I want to emphasize that the audio signal path itself is still fully analog, and the digital control circuitry does not degrade the analog sound aesthetic in any way. In fact, I would argue that the contrary is the case: the analog circuits actually sound better now with the more stable digital control, and we were able to squeeze out a few additional bits of signal-to-noise ratio.
Do you see digital DSP design and analog circuit design as two entirely different artforms or are there also similarities and techniques that apply to both?
Clemens: Coming from a DSP background, there is actually a lot that carries over to the analog domain. Knowledge of DSP building blocks can be applied to analog circuits as well. But there are also a lot of details that are very specific to the analog world – suddenly, things like the material properties of the components and semiconductor physics play a role. All of that may be daunting at first sight, but luckily, you can draw on other people's experience and circuit designs. I would say that in both worlds, you need extensive experience to be able to come up with a good design. And melding the two together can sometimes get even more complicated, for example if you need to take the processing jitter and delays of a digital system into account while marrying it with an instantaneous analog circuit.
So on a meta level, they are very similar and have the potential for synergetic effects, but when it comes to the implementation details, they do differ a lot. I think that for a bigger company, all of that may justify employing three different roles: an analog engineer, a DSP programmer, and a systems engineer who is able to integrate the two. I guess that tells us they are very different fields. There are already a lot of digitally controlled analog synthesizers on the market that have probably been designed in this manner, with split roles for each field.
But integrating analog and digital processing blocks more densely may actually be a good field for future research – e.g. how about having a virtual analog DSP circuit within an analog circuit? It might be a weird idea, but it tells us where our current borders are. We still mostly think in either “digital” or “analog”, but there is still technology to be developed that can integrate both worlds more tightly. That could be one future direction in music technology.
Can you say anything about your upcoming plans? Will there be more batches of the Analogue Waveguide in 2026?
Max: Yes, there will be more batches of the Analogue Waveguide in 2026! We have been absolutely humbled by the ongoing demand and excitement from our customers. We will also be releasing another product in collaboration with Bela, the team behind the Eurorack modules Gliss and Trails, as well as the Gem – the next generation of their powerful embedded audio platform with an onboard IDE. And we will be returning to our original concept of an instrument that combines an acoustic excitation interface with physical modelling synthesis, taking things even further than the Tickle. We now have the knowledge and means to pursue this idea, but it probably will not be ready by 2027.
You can find out more about CHAIR and their instruments over at their website.