Gino Robair is a former editor of EM.

Archive for March, 2011

A Moment Beyond Experimental Music

In many ways, the concept of complete musical freedom that electronic instruments promise is still far from being realized, despite what we see in ads for new products or read in research journals. One could argue that it will never be realized, because musicians and composers are continually pushing beyond the boundaries of instrument design. (Meanwhile, some of us are still grappling with the unexplored potential of acoustic instruments that traditional forms of music — classical, jazz, rock, folk — don’t have a vocabulary for.)

Certainly, there is no shortage of unusual electronic instruments or interfaces. Some people think that the trick is simply to find a way to make a new controller popular enough that it will catch on with a large population so that a performance practice can be developed over time. That seems reasonable, in one sense: After several hundred years, there’s a wealth of great string and wind music because orchestral instruments were standardized. These days, manufacturers who want to sell a ton of products choose a piano-style keyboard as an interface, or perhaps an Akai MPC-style pad-array, because they fit popular forms of music making.

Yet there’s a continuing trend of musicians and developers who aren’t necessarily interested in standardization. They’re busy trying to unleash the music they hear in their heads, which might not be possible using common tools.

A recent article in the New York Times briefly touched on the subject, using a handful of Bay Area sound artists as a reference point. Take David Wessel, the director of UC Berkeley’s CNMAT (Center for New Music and Audio Technologies). His 32-pad instrument, SLABs, can trigger samples and do real-time synthesis, among other things. More importantly, it is highly responsive to performance gestures: each pad senses finger movement along three dimensions (x, y, and pressure), with a discrete output channel dedicated to each, resulting in 96 channels of data.
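To put that channel count in perspective, here’s a minimal sketch, purely illustrative and reflecting nothing of CNMAT’s actual code, of how 32 three-axis pads map onto 96 discrete control channels:

```python
# Hypothetical layout of a SLABs-like controller's output:
# 32 pads x 3 axes (x, y, pressure) = 96 discrete control channels.
# This is NOT CNMAT's implementation, just an illustration of the scale.
from dataclasses import dataclass

NUM_PADS = 32
AXES = ("x", "y", "pressure")

@dataclass
class PadFrame:
    pad: int        # 0..31
    x: float        # 0.0..1.0, finger position
    y: float        # 0.0..1.0, finger position
    pressure: float # 0.0..1.0, normal force

def channel_index(pad: int, axis: str) -> int:
    """Map (pad, axis) to one of the 96 output channels."""
    return pad * len(AXES) + AXES.index(axis)

frame = PadFrame(pad=3, x=0.5, y=0.2, pressure=0.8)   # one finger, one pad
assert channel_index(NUM_PADS - 1, "pressure") == 95  # 96 channels in all
```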

That all sounds geeky and trendy, but the proof is in the music. Hearing Wessel play the SLABs is a real treat. At some point during a recent concert of his, I forgot that he was playing electronics: I can only describe the sounds as organic, at times transcending the acoustic vs. electronic divide. You can get a small taste of it in the video link above.

A purely musical performance at that level is not easy to achieve, but I think it’s a goal that is often forgotten as people get tied up in the technology of an instrument or interface. And it’s important to note that Wessel’s instrument is the result of years of development and practice.

Beyond using technology for sound control and creation, however, there is another level of musical potential that computers offer, which is merely hinted at by high-tech gaming environments such as Rocksmith.

“We are trying to find ways of musical interaction that are not possible without technology, which might lead to new kinds of music, new kinds of improvisation.” That’s how Palle Dahlstedt of the University of Gothenburg, Sweden, described his research interests during a 3-day workshop with Bay Area musicians at CNMAT this week. Together with fellow researcher Per Anders Nilsson, he is managing a 3-year project funded by the Swedish Research Council that will also explore new ways of using synthesis with gestural controllers. Another facet of the project involves interactive environments for the public: “Encouraging non-professional musicians,” Dahlstedt explains, “by capturing sounds from them and putting them together in real time for music, but keeping the identity of the material so that they feel that they have contributed. Because if the software is too intelligent, you lose the sense of participation.”

[Photo: duo pantoMorf]

Dahlstedt and Nilsson also perform as “duo pantoMorf,” creating fully improvised music using a pair of Nord Modular G2 synths and M-Audio pad controllers. The duo exhibits the kind of virtuosity that comes from playing the same instrument configuration (in this case, the G2) for years.

“There are two kinds of interactions that we are trying out,” Dahlstedt told me about the workshop. “One is on a timbral level, meaning a micro-time level that would not be possible to realize acoustically. For example, modulating each other’s timbres, playing on each other’s timbres, which creates links of dependencies between musicians that force you to think in a different way. The other kind of interaction involves rule systems that are, perhaps, too complex to keep in your mind as a musician. For example, the computer might analyze what you’re playing, then make a simplified graphic or sonic representation of it and present that to another musician right after you’ve played it, who is then asked to respond in a certain way. This steers the musicians in directions they wouldn’t normally go. Because you can implement any kind of mapping between musicians, you can create very complex interactions that would not be possible in other ways.”

This week’s workshop focused on a handful of such strategies to enhance and control musical interaction between players. One section, using trios and quartets, examined how one player can learn to control the playback of an adjacent player’s sound, both to create blended timbres and to find new ways of approaching improvisation. To do this, the sound of each musician is constantly being sampled into a 2-second buffer, which is gated. One player can open the gate of another by playing their own instrument, so that you hear the real-time sound along with a portion of someone else’s sample. For example, each time I made a sound with my drums, it was sampled. But the sample was only heard when the bassist next to me played a note, at which point you’d hear both of our sounds: mine from the PA and hers from her bass.
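For the curious, here’s a rough Python/numpy sketch of that routing as I understood it. The 2-second buffer length matches the description; the threshold-based gate and block-based processing are my own simplifications:

```python
import numpy as np

SR = 48_000
BUF_LEN = 2 * SR  # each player feeds a rolling 2-second buffer

class GatedBuffer:
    """Rolling sample buffer whose playback gate is opened by ANOTHER player."""
    def __init__(self):
        self.buf = np.zeros(BUF_LEN)
        self.write = 0
        self.read = 0

    def record(self, block: np.ndarray):
        # Continuously overwrite the oldest audio (circular write).
        for s in block:
            self.buf[self.write] = s
            self.write = (self.write + 1) % BUF_LEN

    def play(self, gate: np.ndarray) -> np.ndarray:
        # The gate signal comes from the *other* musician's playing.
        out = np.empty(len(gate))
        for i, g in enumerate(gate):
            out[i] = self.buf[self.read] * g
            self.read = (self.read + 1) % BUF_LEN
        return out

def envelope(block: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Crude gate: open (1.0) wherever the player is making sound."""
    return (np.abs(block) > threshold).astype(float)

# Per processing block: my playing opens her buffer, and vice versa.
#   my_mix  = my_block  + her_buffer.play(envelope(my_block))
#   her_mix = her_block + my_buffer.play(envelope(her_block))
```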

All this sounds a bit trivial until you realize that by triggering someone else’s sound, you are also feeding a buffer that someone else controls. If you just play without thinking, it sounds like each person has a 2-second delay on them — immediately boring. But if you try to work within the system more creatively, you can tease out unusual sonorities that surpass a mere blend of two instrumental timbres. For example, I could wave my hand past my microphone to trigger a slice of the sampled sound, but without adding my own sounds to the mix. When I waved my hand quickly or in a random pattern, I could get stutter-edit effects.

However, that was just the first approach, which Dahlstedt referred to as amplitude modulation, referencing a common synthesis technique. Later, a vocoder was introduced into the system in such a way that the sound you made determined which partials were enhanced in the gated sample. For example, when I played low sounds on a drum, the sample I was triggering would consist of only high partials. When I played a high-frequency sound, you would only hear the low frequencies in the sample I triggered. That particular mapping was arbitrarily chosen, but proved to be musically satisfying because there was less likelihood that the player and sample would mask one another.
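A crude stand-in for that inverse mapping might look like the following. The FFT size and the single low/high split are my assumptions; the workshop patch ran on a real vocoder with many bands:

```python
import numpy as np

SR, N = 48_000, 2048  # sample rate and FFT size (my choices, not the workshop's)

def inverse_spectral_gate(live: np.ndarray, sample: np.ndarray) -> np.ndarray:
    """Let the live sound select the COMPLEMENTARY partials of the sample:
    energy in the live input's low band passes only the sample's highs,
    and vice versa. A two-band caricature of the vocoder mapping above."""
    live_spec = np.abs(np.fft.rfft(live, N))
    samp_spec = np.fft.rfft(sample, N)

    split = len(live_spec) // 2          # arbitrary low/high boundary
    low_e = live_spec[:split].sum()
    high_e = live_spec[split:].sum()
    total = low_e + high_e + 1e-12

    weights = np.empty_like(live_spec)
    weights[:split] = high_e / total     # live highs open the sample's lows
    weights[split:] = low_e / total      # live lows open the sample's highs

    return np.fft.irfft(samp_spec * weights, N)
```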

Once we figured out the technology behind each structure, we practiced using it by improvising short pieces. Remarkably, by the end of the third day it felt like I was in a band and we had learned a new set of tunes, which we performed on the final evening. It was no longer a simple demonstration of computer-music techniques. Rather, it was a series of musical environments that forced us to make music in ways we wouldn’t normally try if we were playing on our own. By the end of the concert, it felt like we had moved beyond so-called experimental music, and the results were satisfying for the performers as well as the audience.

Cool Links
Daphne Oram’s Synthesizer (video)

Echoes from the Sun

A Man Lost In Musical Time


Bounce to Disc

[Photo: vinyl master]

If you’ve poked around the indie music scene at all in recent years, you’ve no doubt seen an increase in music being delivered on archaic media such as cassettes and vinyl records. Although one well-known band tried using cassette tapes to foil illegal file-sharing, the majority of artists release records and cassettes for commercial or sonic reasons, or some combination of the two. Listeners I’ve spoken with think these vintage formats sound great, particularly because they’re drawn to the audio artifacts each one presents: anyone who has compared a song played on CD or MP3 to the same piece on one of the older formats can attest to the differences in audio quality.

In an earlier blog, I described how a cassette can be thought of as a non-linear filtering device, offering a timbral quality that is difficult to achieve with digital plug-ins alone. A few weeks ago, I was pleasantly surprised to hear that records were also being used to “process” a mix: Arcade Fire’s mastering engineer created a master lacquer for each of the 16 songs on the band’s recent Grammy-winning album, The Suburbs, and then re-digitized them for release on CD and digital distribution. Mind you, the materials used to make a master lacquer (an aluminum disc coated with a hardened, nail-polish-like substance) are different from those of a mass-produced vinyl disc. Yet, as records, the two media have similar sound qualities as well as physical limitations.

For example, records don’t tolerate active panning in the lower frequencies, so mastering engineers will pan bass instruments to the center before committing the project to lacquer. By essentially making the low end mono, you mitigate certain types of tracking problems for the stylus. Each mastering house chooses its own crossover frequency for centering the low end, determined by its experience, its gear, and the projects it has done in the past.
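In the digital domain, the idea reduces to a simple crossover that sums the lows to mono. Here’s a minimal Python/scipy sketch; the 150 Hz crossover point is only a placeholder since, as noted, every house picks its own:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def mono_the_lows(left, right, sr=96_000, crossover_hz=150):
    """Fold everything below the crossover to center, vinyl-style.
    150 Hz is a placeholder; each mastering house picks its own point."""
    sos_lo = butter(4, crossover_hz, "lowpass", fs=sr, output="sos")
    sos_hi = butter(4, crossover_hz, "highpass", fs=sr, output="sos")

    low_mono = sosfiltfilt(sos_lo, (left + right) / 2)  # summed, centered lows
    out_l = low_mono + sosfiltfilt(sos_hi, left)        # stereo kept above
    out_r = low_mono + sosfiltfilt(sos_hi, right)
    return out_l, out_r
```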

But bass isn’t the only problematic frequency range. Exaggerated high frequencies, particularly from sibilants in the vocals or from over-compressing the mix, will likely cause distortion during playback. In addition, the quality of high-frequency reproduction gets progressively lower as the needle approaches the center of the disc, primarily because there is less surface area per rotation. As John Golden of Golden Mastering told me in “Mastering Vinyl,” “Most people don’t realize that the distance around the inside of a 12-inch record is about half the distance around the outside. As the distance around each revolution decreases, the high frequencies become harder for a playback stylus to read.” And, as it turns out, progressively boosting the highs in a mix to try to compensate doesn’t fix the problem, but only increases the distortion.
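Golden’s numbers are easy to check. At a fixed rotation speed, the stylus’ linear speed scales with radius, and so does the physical wavelength of any tone cut into the groove. The radii below are my approximations for a 12-inch LP at 33 1/3 rpm:

```python
import math

RPM = 33.333
OUTER_R_IN = 5.75   # first groove of a 12-inch LP, approx. (inches)
INNER_R_IN = 2.375  # typical minimum inner-groove radius, approx.

def groove_speed_ips(radius_in: float, rpm: float = RPM) -> float:
    """Linear stylus speed in inches per second at a given radius."""
    return 2 * math.pi * radius_in * rpm / 60

for r in (OUTER_R_IN, INNER_R_IN):
    v = groove_speed_ips(r)
    # Physical wavelength of a 15 kHz tone cut at this radius:
    wavelength_mil = v / 15_000 * 1000  # thousandths of an inch
    print(f"r={r:.2f} in: {v:.1f} in/s, 15 kHz wavelength = {wavelength_mil:.3f} mil")
# Outer groove: ~20 in/s; inner groove: ~8 in/s. The same 15 kHz tone
# occupies less than half the groove length per cycle at the inside.
```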

Fire in the Groove
The Arcade Fire production team was able to work around these issues by giving each song its own master lacquer, which, among other things, allowed the band’s mastering engineer to use as much of the record’s surface as possible in order to maximize playback levels.

I asked the man who mixed the project, Craig Silvey, if he had to approach the album differently considering the unusual way they planned to master it.

How did you decide to use vinyl as a step in the mastering process?
We had discussed when we were mixing the record that we wanted to have an analog stage in it. But for time reasons, and because it was a gargantuan project collecting all the mixes together and making final decisions on things, it made sense for us to do the mix digitally, in stems, so that we could recall it later.

We realized that we needed another analog stage, and they wanted something very physical to represent the album. I remembered that George Marino at Sterling Sound, who I use regularly for mastering, had mentioned that he’d done this before: you take the process of putting it to vinyl, basically, but each song gets its own lacquer and gets played once, back into the computer, to make the CD release or downloadable version. Each song gets maximum groove width, so you can get it nice and loud on the vinyl. The record is cut and played at 45 rpm, and the playback is, of course, on the lathe, so it’ll be a super-stable playback. It’s the best the vinyl could ever possibly sound.

After George suggested it to me, I mentioned it to the band as a possibility, but I’d never actually heard it, myself. And everyone was really skeptical. But when it came back, it was very noticeable, really: It really opened up the bottom end. It was well worth it.

Did you have to do anything to the mix itself to prepare it to be transferred to vinyl?
We made the mixes how we liked them. Because there are 16 songs, we decided to digitally master the whole album to get the EQ right first. The mastering took a period of a few days, where people were deciding that, oh, we should have an extra dB of 16 kHz or whatever.

We didn’t want to waste all these lacquers [during this step], so we mastered it digitally until we were all happy with it. And then in one broad stroke, George did the entire vinyl process for it. He says that you can get it pretty loud at the kind of groove width you get when you’re running 45. But you only have about 7 minutes that you can put on a 12-inch, at that maximum ability. There are some potential issues with stereo low-frequencies, but it didn’t ever really seem to be a problem.

The only problem was that two or three masters came back with a bit of static or a little crackle or pop on them. So those had to be redone.

How long did it take you to mix the record?
With a few breaks, three months. Technically, it’s a double-album, so it was like mixing two albums. The mixes were done at 24-bit, 96 kHz.

Was the entire band at the mix session?
They tag-teamed me, two at a time. They’d go out and take a rest, and another two would come in. Of course, there are lots of parts, and everybody has a different angle on it. So we were trying to satisfy everybody.

Do people create different masters for compressed file formats or music destined for online distribution?
I don’t think so, no. We used the same master that we used for the CD.

You sometimes do a different master for a single for radio play. When the pluggers are trying to plug their songs to the radio, they have to play them something that sounds really loud. So there are a lot of times when you do singles where you hit them hard at mastering, but then the album isn’t [mastered] that way.

The records I’m generally working on are by people who are trying to resist the dynamic wars. For the album and for the downloads, it’ll be mastered pretty calm. Generally the way I mix is calm, as well. We’re always trying to preserve the dynamics.

The Vinyl Frontier
After my talk with Silvey, I contacted George Marino by email about the final steps in the mastering process. Starting with the 24-bit, 96 kHz mixes from Silvey, Marino went through his “normal mastering processing” before creating a master digital file.

From that file he created a single lacquer master of the entire project, which was used to cut the commercial vinyl album that was released. Then he used the digital master to cut the individual master lacquers for each song. Each one was played back from the lathe and re-digitized at 16-bit, 44.1 kHz resolution for CD release and digital distribution.
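That last step, reducing 96 kHz/24-bit audio to CD resolution, generically involves sample-rate conversion plus dithering. Here’s a bare-bones sketch of that process using scipy’s polyphase resampler and simple TPDF dither; it says nothing about Marino’s actual chain:

```python
import numpy as np
from scipy.signal import resample_poly

def to_cd_format(x_96k: np.ndarray) -> np.ndarray:
    """96 kHz float -> 44.1 kHz, 16-bit integers, with TPDF dither.
    (A generic sketch; the actual mastering chain isn't described.)"""
    # 44100/96000 reduces to 147/320.
    x_441 = resample_poly(x_96k, up=147, down=320)

    # Triangular (TPDF) dither at roughly +/-1 LSB before truncating to 16 bits.
    lsb = 1.0 / 32768
    dither = (np.random.uniform(-0.5, 0.5, len(x_441)) +
              np.random.uniform(-0.5, 0.5, len(x_441))) * lsb
    return np.clip(np.round((x_441 + dither) * 32767), -32768, 32767).astype(np.int16)
```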

Cool Links
Fender Amps For Your Car?

Reef Noise As Guide for Floating Crustaceans

Audio From Global Seismic Activity

Hear Whales Sing Live
