In this article in the 30th Anniversary issue of Electronic Musician magazine, three pioneering creators of Multidimensional Polyphonic Expression (MPE) instruments assess the state of the movement toward MPE within the framework of the prevailing MIDI protocol. Propelled in large part by their expressive electronic instruments, MPE is moving toward the musical mainstream.

But this is just the beginning of a long period of innovation in both multidimensionally expressive instruments and the software that supports them, says our founder Roland Lamb in conversation with Roger Linn, creator of the LinnStrument; Lippold Haken, creator of the Haken Continuum; and Gino Robair of Electronic Musician.

“I believe that expressive controllers are not going away, but, rather, are the next big thing in electronic instruments,” says Roger Linn. “In my view, sometime in the future—20, 30, or 50 years from now—people will look back at the period between around 1970 and 2020 as the period of transition between the expressive acoustic instruments that came before, and the expressive electronic instruments that came after.”

30th Anniversary Special: Multidimensional Polyphonic Expression

by Gino Robair

Over a span of three decades, MIDI has changed the course of the entertainment industry in a profound way by providing a communication protocol that could be applied not only to musical instruments but to virtually any device. Consequently, MIDI is being used in nearly every way imaginable, from coordinating sound, lighting, rigging, and pyrotechnics in concerts to controlling sculptural objects in galleries and animatronics at Walt Disney World. The specification’s adaptability remains surprisingly robust after 30-plus years.

Since the beginning, however, a contingent of developers and users has felt that the speed, resolution, and bandwidth of MIDI limit the level of expressive control over electronic instruments that acoustic instrumentalists enjoy.

To address this, a group of manufacturers has developed an addition to the MIDI spec formally called Multidimensional Polyphonic Expression (MPE), often referred to as Expressive MIDI, which is intended to provide complete per-note expressivity in hardware and software sound generators. The list of manufacturers involved in the development of the protocol includes Apple, Bitwig, Keith McMillen Instruments, and Moog Music, as well as companies building Polyphonic Multidimensional Controllers (PMCs) such as Eigenlabs, Haken Audio, Madrona Labs, Roger Linn Design, and ROLI.

Although a new protocol providing increased expressivity is a dream come true for many musicians, others may view the concept of MPE as esoteric or see the increase in the complexity of MIDI controllers as daunting.

Do musicians want – or even need – these added capabilities? Doesn’t the MIDI spec already provide more expressive capabilities than a musician could ever use?

To answer these questions, I spoke with three leading instrument designers: Lippold Haken, who has been developing the Continuum Fingerboard since the 1980s and introduced it to the market in 2000; Roland Lamb, designer of the ROLI Seaboard GRAND (see our review); and electronic instrument pioneer Roger Linn, whose LinnStrument controller was reviewed in our October issue.

EM: How do you get the public to buy into Expressive MIDI if they don’t know what they’re missing?

Lamb: Henry Ford said that if you asked people what they wanted, they would’ve said faster horses. I think people sometimes work within a given modality, such as the modality of a keyboard, and it can be hard to imagine what the other possibilities are without experiencing something new.

The new protocol is often referred to as Expressive MIDI because that gets to the heart of what it’s trying to do. And users can understand that. We all know what MIDI controllers are. There are great Expressive MIDI controllers like the LinnStrument, which I love; the Continuum, which helped to inspire the Seaboard; and the Eigenharp and SoundPlane, among others. But these new instruments have not yet become truly accessible—in terms of the ease of use of the software, the software integration, and the design—or affordable.

Haken: Up until now, we’ve done so little with continuous controllers. People think of the world as being divided between keyboard and non-keyboard input—a beginner’s view of the world—but we’ve just barely scratched the surface. That’s because keyboards are generally interchangeable, in terms of what you can do with them: they all just start and stop the same sound libraries or the same synths.

Continuous controllers are very different from each other and not interchangeable at all. People have this perception that a continuous controller is merely a keyboard with pitch bend. That’s an oversimplified interpretation. I think it’s much deeper than that, and we’re going to find a lot to explore in the next 100 years of synthesizer development.

Linn: The biggest problem is that there is not a widespread perception of anything being wrong with the MIDI keyboard. But once people see the limitation and see what is possible beyond it, then they can’t go back. And the main limitation is that they’re playing music with on/off switches. Yes, they sense velocity and pressure after the key is fully depressed, but they are still on/off switches.

By comparison, the pressure-sensing on controllers such as the LinnStrument, Continuum, and Seaboard starts when you lightly touch the note, and then it continues as you add pressure. That way, you can emulate a wind or bowed-string instrument, controlling the loudness of the note all the way from silence to full loudness. And pitch bends on a LinnStrument or Continuum are much more natural and intuitive than a pitchbend wheel: If you want to bend a note, you just place your finger on one note and slide it up to the other note at exactly the rate and movement you desire.

Using a wheel for vibrato doesn’t really sound natural. A modulation wheel adds mathematically perfect vibrato to a signal. By comparison, if you perform vibrato on a LinnStrument, Continuum, or a Seaboard, it sounds very natural because your finger continuously controls the speed, shape, and width of the vibrato, just as violinists have done for centuries but without that tiny neck that is so difficult to play in tune.

Once you’re able to add all of these performance nuances into your music, playing with on/off switches feels like going back in time.

EM: How will adding Expressive MIDI to the MIDI Spec change things?

Linn: Synthesizers need to have presets that work with polyphonic expressive controllers. Right now, the presets in popular synths are optimized for the on/off switches of MIDI keyboards. When you connect a LinnStrument over a single MIDI channel to most synths and use these standard presets, you’ll get the advantages of pitch slides and wiggling your finger to hear vibrato, but these presets won’t use your finger pressure or forward/backward movement, nor will they permit polyphonic pitch bends or polyphonic expression.

To get polyphonic pitch bends or polyphonic expression over MIDI, you need to send each touch over a separate MIDI channel, rotating through the MIDI channels. This gets around MIDI’s limitation of Pitch Bend, Control Change, and Channel Pressure messages applying to all notes on the channel.

MPE, or Expressive MIDI, is merely a formalization of this voice-per-channel method, where each note is sent on its own MIDI channel, rotating through defined channels with each new note you play.
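
To make the mechanics concrete, here is a minimal Python sketch of the channel rotation Linn describes. The send_midi() stub, the reservation of channel 1 for common messages, and the use of channels 2–16 for individual notes are illustrative assumptions drawn from his description, not details taken from the spec.

```python
# Minimal sketch of voice-per-channel rotation, as described above.
NOTE_ON, NOTE_OFF, PITCH_BEND, CHAN_PRESSURE = 0x90, 0x80, 0xE0, 0xD0

def send_midi(message):
    # Stand-in for a real MIDI output port: just print the raw bytes.
    print(" ".join(f"{b:02X}" for b in message))

class VoiceRotator:
    """Gives each new touch its own MIDI channel, round-robin."""

    def __init__(self, first=1, last=15):
        # Status-byte channel nibbles 1-15 (MIDI channels 2-16); nibble 0
        # (channel 1) is left free for messages common to all notes.
        self.channels = list(range(first, last + 1))
        self.next_slot = 0
        self.owner = {}  # note number -> channel nibble carrying that note

    def note_on(self, note, velocity):
        chan = self.channels[self.next_slot]
        self.next_slot = (self.next_slot + 1) % len(self.channels)
        self.owner[note] = chan
        send_midi([NOTE_ON | chan, note, velocity])

    def bend(self, note, value14):
        # Per-note pitch bend: only the owning channel hears it, so the
        # bend never leaks onto other sounding notes.
        chan = self.owner[note]
        send_midi([PITCH_BEND | chan, value14 & 0x7F, (value14 >> 7) & 0x7F])

    def pressure(self, note, value):
        send_midi([CHAN_PRESSURE | self.owner[note], value])

    def note_off(self, note):
        send_midi([NOTE_OFF | self.owner.pop(note), note, 64])

rotator = VoiceRotator()
rotator.note_on(60, 100)   # first touch lands on MIDI channel 2
rotator.note_on(64, 90)    # second touch lands on MIDI channel 3
rotator.bend(60, 10000)    # bends only the first note
rotator.note_off(60)
```

Because each touch owns its channel for the life of the note, a Pitch Bend or Channel Pressure message on that channel affects only that note, which is exactly the per-note behavior that a single shared channel cannot express.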

There are a number of synths that already implement voice-per-channel and a number of controllers that implement channel-per-note. When these are used together, you can already perform polyphonic expression. But to do it now, you have to define the range of per-note channels in both the controller and the synthesizer, define the main channel over which the messages common to all voices are sent, define the pitch-bend range in both instruments, define the controller numbers, and more. All of that is confusing to most players.

Once the synthesizers and controllers start adding MPE compatibility, you’ll merely select MPE Mode on the controller and instantly the sound generator will respond polyphonically to velocity, pressure, and pitch as well as y-axis timbre control in instruments that offer it.
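
On the receiving side, the mapping Linn outlines might look like the following sketch: velocity arrives with the note-on, Channel Pressure carries finger pressure, per-channel Pitch Bend carries left/right movement, and CC 74 (the controller number these instruments commonly use for the y-axis) carries timbre. The Voice container and handle_message() routine are hypothetical, for illustration only.

```python
# Sketch of a synth's receiving side under the voice-per-channel scheme.
from dataclasses import dataclass

@dataclass
class Voice:
    note: int
    velocity: int
    pressure: int = 0   # z-axis, from Channel Pressure
    bend: int = 8192    # x-axis; 8192 is the 14-bit "no bend" center
    timbre: int = 0     # y-axis, from CC 74

voices = {}  # channel nibble -> the Voice sounding on that channel

def handle_message(status, data1, data2=0):
    kind, chan = status & 0xF0, status & 0x0F
    if kind == 0x90 and data2 > 0:            # note on
        voices[chan] = Voice(note=data1, velocity=data2)
    elif kind in (0x80, 0x90):                # note off (or velocity-0 note on)
        voices.pop(chan, None)
    elif kind == 0xD0 and chan in voices:     # Channel Pressure -> z
        voices[chan].pressure = data1
    elif kind == 0xE0 and chan in voices:     # Pitch Bend -> x
        voices[chan].bend = data1 | (data2 << 7)
    elif kind == 0xB0 and data1 == 74 and chan in voices:
        voices[chan].timbre = data2           # CC 74 -> y
```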

Haken: MPE is important because it’s Step Number 1 in getting these things to talk to each other. The next step is to have different synthesis-algorithm designs for particular controllers. I’m not saying that it should all be exclusive. But if you want to get to the point where it’s worth your time to practice eight hours a day on a particular instrument, to really play something that would approach what an acoustic performance would be in terms of expressivity and really shaping the sound and such, the sounds will need to be very particular to that controller and the things it does, how it responds, and so forth.

I think the idea that the controller is totally separate from the synthesis method is a mistake: They really go together. So, while MPE is really important, I think it’s also important to keep in mind that, in the end, you really do want to customize the synthesis to be appropriate for the controller.

Lamb: The great thing with Expressive MIDI is that it is really just an implementation of MIDI. It uses the conventions of MIDI, so it’s not a huge step. However difficult it is, eventually we will need to layer new standards on top of MIDI, because MIDI itself will become a gating item for what instruments can do. I think the industry will come around to that in time, and the MMA [MIDI Manufacturers Association] is doing a great job of leading that charge and working with stakeholders across the industry.

EM: Is there room for improvement within your current controller design or Expressive MIDI itself?

Lamb: There is room for improvement absolutely everywhere. I believe in constant improvement of everything.

Think about the pianoforte, especially the pianos made around the 1930s, when instruments like the Steinway concert grand started to peak. If you have a piano from that period and it has been well maintained, you could arguably have one of the best pianos in the world. Obviously, Fazioli and others have continued to push things forward, but there was a point where pianos reached the apex of their modern sound and development. It took a few hundred years of innovation from the birth of that new idea to get to that point. The Seaboard is a new instrument, and we’re just at the very beginning of a long journey with it.

In terms of the Expressive MIDI specification, while there might be room for improvement, a protocol is a very different sort of thing than an instrument. An instrument you just want to make better and better. But a protocol is more like an agreement or a compromise. That is because there are a lot of different people involved—hardware manufacturers, synth makers, and others—who might use a protocol in different ways. It’s more like a multipurpose tool, in a way. But I think we’ve come up with a great next step.

Haken: There is a huge variety of things that can be improved. The most fundamental thing is how you interact with the sound. In the Continuum, you’re pressing something that has a nice mechanical feel to it, and I’m very proud of it. But there are so many different things you could do where, if you look at the details—for example, the difference between a $100 violin and a $30,000 violin—it comes down to tiny details that add up to huge improvements. And in many cases, those are improvements that only a very skilled violinist would even notice.

We are very much in an age where people expect their iPhone to be their musical instrument as well as their day planner, their telephone, and everything else. So to some extent we have this idea that there is this generic x, y, z controller that will do everything. But I think there is a huge variety of possibilities, such as how you interact with an instrument.

Look at the difference between, say, a theremin and an Ondes Martenot. In many ways, they’re quite similar in terms of sound generation. But the kind of music that has been made on those two instruments, which are from the 1920s, is very, very different. And it has to do with how you interact with each one; exactly how the instrument is set up and how a person works with it, and the skill set that people learn. It is a big deal.

The main way people purchase synthesizers nowadays is that they go to a store, try all the keyboards, and buy the one they sound best on. But that’s not how anyone buys an acoustic instrument. This whole idea of instant gratification is nice, but there is something very important about being incredibly proud of the skills you have worked on for years in order to do amazing things on your instrument.

But the initial thing for MPE is to introduce these simple things that add a new dimension to the sound. Then, over time, people will explore that more and more and get to the point where synthesizer manufacturers really invest in it: technologies that are not just imitative but genuinely new instruments. They may borrow ideas from existing instruments, but they will be much more complicated systems, ones a performer might not sound so good on at first, where it might take a few years to get really good, just as it would if you were learning a new acoustic instrument.

Linn: One problem is that human beings take a long time to learn the new physical gestures required to master a new instrument. This presents a chicken-and-egg problem. The musician wonders whether he should buy the new instrument and commit the time to becoming proficient on it, because he doesn’t know whether the instrument will still be around next year. But if nobody buys the instrument, it may not be around next year.

I believe that expressive controllers are not going away, but, rather, are the next big thing in electronic instruments. In my view, sometime in the future—20, 30, or 50 years from now—people will look back at the period between around 1970 and 2020 as the period of transition between the expressive acoustic instruments that came before, and the expressive electronic instruments that came after.

And during those 50 years of transition, most of the electronic instrument makers took the easy path for human interface elements, using inexpensive switches, knobs, and sliders, elements designed not for music but for data entry. “Let’s make a music keyboard out of 61 on/off switches. We’ll add velocity sensitivity to the switches, let them sense a little pressure after a key is fully depressed, and we’ll add a couple of knobs mounted sideways to bend pitch and add vibrato. That’s all people need.”

Over time, electronically generated music lost much of its dynamics, subtlety, performance nuance, and other elements of musical expression, largely because it is difficult to play expressive music with on/off switches. Eventually, electronically generated popular music stopped including instrumental solos because solos played with on/off switches aren’t very interesting.

Then, around 1999, you began to see polyphonic expression instruments using new sensor technologies, permitting levels of musical expression approaching those of acoustic instruments, but with the benefit of polyphony. The first of these was the Haken Continuum. People gradually started to appreciate this improvement in expressiveness, and forward-thinking musicians developed skills on the new instruments and became the instrumental stars of their era. By around 2020, electronically generated music started to embrace the musical expression, virtuosity, and instrumental solos of the previous era of acoustic instruments.

And the electronic musicians of 2020 will wonder in amazement how the electronic musicians of the previous 50 years were able to make music with such limited tools as on/off switches, knobs and sliders.