Gino Robair is a former editor of EM.

A Moment Beyond Experimental Music

In many ways, the concept of complete musical freedom that electronic instruments promise is still far from being realized, despite what we see in ads for new products or read in research journals. One could argue that it will never be realized, because musicians and composers are continually pushing beyond the boundaries of instrument design. (Meanwhile, some of us are still grappling with the unexplored potential of acoustic instruments that traditional forms of music — classical, jazz, rock, folk — don’t have a vocabulary for.)

Certainly, there is no shortage of unusual electronic instruments or interfaces. Some people think that the trick is simply to find a way to make a new controller popular enough that it will catch on with a large population so that a performance practice can be developed over time. That seems reasonable, in one sense: After several hundred years, there’s a wealth of great string and wind music because orchestral instruments were standardized. These days, manufacturers who want to sell a ton of products choose a piano-style keyboard as an interface, or perhaps an Akai MPC-style pad-array, because they fit popular forms of music making.

Yet there’s a continuing trend of musicians and developers who aren’t necessarily interested in standardization. They’re busy trying to unleash the music they hear in their heads, which might not be possible using common tools.

A recent article in the New York Times briefly touched on the subject, using a handful of Bay Area sound artists as a reference point. Take David Wessel, the director of UC Berkeley’s CNMAT (Center for New Music and Audio Technologies). His 32-pad instrument, SLABs, can fire samples and do real-time synthesis, among other things. But more importantly, it is highly responsive to performance gestures; each pad responds to finger movement along three axes of control (x, y, and pressure), with a discrete output channel dedicated to each, resulting in 96 channels of data.
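
As a rough illustration (this is not CNMAT’s actual software), here is one way you might lay out that 32-pad, three-parameter data stream so that every sensor value gets its own dedicated output channel; the PadFrame structure and channel numbering are hypothetical.

```python
# Hypothetical sketch of a SLABs-like data stream: 32 pads, each reporting
# x, y, and pressure on its own dedicated channel (32 x 3 = 96 channels).
# The PadFrame structure and channel numbering are illustrative, not CNMAT's code.

from dataclasses import dataclass

@dataclass
class PadFrame:
    """One sensor reading from a single pad."""
    pad: int         # pad index, 0-31
    x: float         # horizontal finger position, 0.0-1.0
    y: float         # vertical finger position, 0.0-1.0
    pressure: float  # normalized pressure, 0.0-1.0

def to_channels(frame: PadFrame) -> dict:
    """Map one pad reading onto its three dedicated output channels."""
    base = frame.pad * 3
    return {base: frame.x, base + 1: frame.y, base + 2: frame.pressure}

# Pad 10 lands on channels 30, 31, and 32 of the 96-channel stream.
print(to_channels(PadFrame(pad=10, x=0.25, y=0.8, pressure=0.6)))
```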

That all sounds geeky and trendy, but the proof is in the music. Hearing Wessel play the SLABs is a real treat. At some point during a recent concert of his, I forgot that he was playing electronics: I can only describe the sounds as organic, at times transcending the acoustic vs. electronic divide. You can get a small taste of it in the video link above.

A purely musical performance at that level is not easy to achieve, but I think it’s a goal that is often forgotten as people get tied up in the technology of an instrument or interface. And it’s important to note that Wessel’s instrument is the result of years of development and practice.

Beyond using technology for sound control and creation, however, there is another level of musical potential that computers offer, which is merely hinted at by high-tech gaming environments such as Rocksmith.

“We are trying to find ways of musical interaction that are not possible without technology, which might lead to new kinds of music, new kinds of improvisation.” That’s how Palle Dahlstadt of the University of Gothenburg, Sweden, described his research interests during a 3-day workshop with Bay Area musicians at CNMAT this week. Together with fellow researcher Per Anders Nilsson, he is managing a 3-year project funded by the Swedish Research Council that will also explore new ways of using synthesis with gestural controllers. Another facet of the project involves interactive environments for the public: “Encouraging non-professional musicians,” Dahlstadt explains, “by capturing sounds from them and putting them together in real time for music, but keeping the identity of the material so that they feel that they have contributed. Because if the software is too intelligent, you lose the sense of participation.”

[Photo: duo pantoMorf]

Dahlstadt and Nilsson also perform as “duo pantoMorf,” creating fully improvised music using a pair of Nord Modular G2 synths and M-Audio pad controllers. The duo exhibits the kind of virtuosity that comes from playing the same instrument configuration (in this case, the G2) for years.

“There are two kinds of interactions that we are trying out,” Dahlstadt told me about the workshop. “One is on a timbral level, meaning a micro-time level that would not be possible to realize acoustically. For example, modulating each other’s timbres, playing on each other’s timbres, which creates links of dependencies between musicians that force you to think in a different way. And the other kind of interaction is where the rule systems are, perhaps, too complex to keep in your mind as a musician. For example, the computer might analyze what you’re playing, then make a simplified graphic or sonic representation of it and present that to another musician right after you’ve played it, who is then asked to respond in a certain way. This steers the musicians in directions they wouldn’t normally go. Because you can implement any kind of mapping between musicians, you can create very complex interactions that would not be possible in other ways.”
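
As a toy example of that second idea, here is one way a computer could reduce a just-played phrase to a simplified representation and hand it to the next player as a prompt; the up/down/repeat contour reduction below is my own invention, not the workshop’s software.

```python
# A toy illustration of the second kind of interaction described above:
# the computer reduces what one musician just played to a simplified shape
# and shows it to the next player. The contour reduction is hypothetical.

def contour(pitches: list) -> str:
    """Reduce a played phrase to an up/down/repeat contour string."""
    symbols = []
    for prev, cur in zip(pitches, pitches[1:]):
        if cur > prev:
            symbols.append("/")   # rising
        elif cur < prev:
            symbols.append("\\")  # falling
        else:
            symbols.append("-")   # repeated
    return "".join(symbols)

# The next musician is shown only the resulting shape and asked to respond to it.
print(contour([60, 62, 64, 61, 61, 65]))
```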

This week’s workshop focused on a handful of such strategies to enhance and control musical interaction between players. One section, using trios and quartets, examined how one player can learn to control the playback of the sound of an adjacent player in order to create blended timbres as well as find new ways of approaching improvisation. To do this, the sound of each musician is constantly being sampled into a 2-second buffer, which is gated. One player can open the gate of another by playing their own instrument, so that you hear the real-time sound along with a portion of someone else’s sample. For example, each time I made a sound with my drums, it was sampled. But the sample was only heard when the bassist next to me played a note, at which point you’d hear both of our sounds: mine from the PA and hers from her bass.
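
Here is a minimal sketch of how such a gated cross-buffer might work, assuming block-based processing and a simple amplitude threshold as the gate; the 2-second buffer comes from the description above, but the threshold and routing details are guesses rather than the actual patch used at the workshop.

```python
# A minimal sketch of the gated cross-buffer idea, assuming block-based audio
# processing. The gate threshold and routing are assumptions, not the real patch.

import numpy as np

SR = 44100            # sample rate
BLOCK = 512           # samples per processing block
BUFFER_LEN = 2 * SR   # 2-second rolling buffer per player

class Player:
    def __init__(self):
        self.buffer = np.zeros(BUFFER_LEN)  # rolling record of this player's sound
        self.write_pos = 0

    def record(self, block):
        """Continuously sample this player's input into the circular buffer."""
        idx = (self.write_pos + np.arange(len(block))) % BUFFER_LEN
        self.buffer[idx] = block
        self.write_pos = (self.write_pos + len(block)) % BUFFER_LEN

    def read_slice(self, n):
        """Return the most recent n samples from the buffer."""
        idx = (self.write_pos - n + np.arange(n)) % BUFFER_LEN
        return self.buffer[idx]

def process_block(me, neighbor, my_input, gate_threshold=0.05):
    """My playing opens the gate on my neighbor's buffer."""
    me.record(my_input)  # my sound keeps feeding the buffer someone else controls
    gate_open = np.max(np.abs(my_input)) > gate_threshold
    neighbor_slice = neighbor.read_slice(len(my_input)) if gate_open else 0.0
    # The audience hears my live sound plus (when gated open) a slice of the neighbor.
    return my_input + neighbor_slice
```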

All this sounds a bit trivial until you realize that by triggering someone else’s sound, you are also feeding a buffer that someone else controls. If you just play without thinking, it sounds like each person has a 2-second delay on them — immediately boring. But if you try to work within the system more creatively, you can tease out unusual sonorities that surpass a mere blend of two instrumental timbres. For example, I could wave my hand past my microphone to trigger a slice of the sampled sound, but without adding my own sounds to the mix. When I waved my hand quickly or in a random pattern, I could get stutter-edit effects.

However, that was just the first approach, which Dahlstadt referred to as amplitude modulation, referencing a common synthesis device. Later, a vocoder was introduced into the system in such a way that the sound you make would determine which partials are enhanced in the gated sample. For example, when I played low sounds on a drum, the sample I was triggering would consist of only high partials. When I played a high-frequency sound, you would only hear the low frequencies in the sample I triggered. That particular mapping was arbitrarily chosen, but proved to be musically satisfying because there was less likelihood that the player and sample would mask one another.
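
A rough sketch of that complementary mapping, reduced to a single FFT pass: the brightness of the live input decides whether the triggered sample keeps only its high partials or only its low ones. The centroid measure and fixed 1 kHz crossover here are my own simplifications, not the vocoder patch used at the workshop.

```python
# A rough sketch of the complementary-spectrum mapping described above.
# The spectral-centroid test and the 1 kHz crossover are assumptions.

import numpy as np

def complementary_filter(live, sample, sr=44100):
    """Emphasize in `sample` the frequency region that `live` is NOT occupying."""
    live_spec = np.abs(np.fft.rfft(live))
    freqs = np.fft.rfftfreq(len(live), d=1.0 / sr)

    # Spectral centroid of the live input: roughly how "bright" the playing is.
    centroid = np.sum(freqs * live_spec) / (np.sum(live_spec) + 1e-12)

    sample_spec = np.fft.rfft(sample)
    sample_freqs = np.fft.rfftfreq(len(sample), d=1.0 / sr)

    if centroid < 1000.0:              # low-pitched playing...
        mask = sample_freqs >= 1000.0  # ...lets only the sample's highs through
    else:                              # bright playing...
        mask = sample_freqs < 1000.0   # ...lets only the sample's lows through

    return np.fft.irfft(sample_spec * mask, n=len(sample))
```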

Once we figured out the technology of each structure, we practiced using it by improvising short pieces. Remarkably, by the end of the third day it felt like I was in a band and we had learned a new set of tunes, which we performed on the final evening. It was no longer a simple demonstration of computer music techniques. Rather, it was a series of musical environments that forced us to make music in ways we wouldn’t normally try if we were playing on our own. By the end of the concert, it felt like we had moved beyond so-called experimental music, and the results were satisfying for the performers as well as the audience.

Cool Links
Daphne Oram’s Synthesizer (video)

Echoes from the Sun

A Man Lost In Musical Time

