In the Service of Music
New Developments in Music and Audio Technologies
by Theresa Leonard and Chris Chafe

The continuing evolution of the computer industry is not only changing our lives today; it is also having a huge impact on music, a cultural medium that speaks instantly across every language. Technological change is reflected in the history of music as far back as its earliest traces and in all corners of the globe. The most complex inventions of past ages were often in the service of music: the baroque pipe organ, the Telharmonium, and the Yamaha DX-7 are each milestones of this sort. In this essay we discuss what we feel are some new technical developments with important implications for the future of both the music and audio industries, and briefly explain their impact on the future of the professional recording engineer/producer, the artist, and the listener.

Internet Audio and Musical Collaboration
Internet capabilities for music and audio have come a long way in the past few years, tracking the enormous revolution in bandwidth and connectivity that has transformed the computer industry and our lives. This increase in bandwidth and connectivity will of course have huge implications not only for music composition, performance, and recording engineering, but ultimately for culture as well.

In terms of composition and performance, high-speed interactive flows permit long-distance collaboration. Recent experiments with high-definition audio flowing over research networks point to a not-too-distant future in which musicians will be meeting, playing, and rehearsing in connected studios. Significantly, their "presence" in the other player's space will be experienced via loudspeakers and perhaps videoconference screens. This is a major shift for classical players, who will need to learn the subtleties of recording and microphone craft, and for guitarists, who will need to minimize audio noise that might otherwise be ignored, or perhaps adopt new direct digital instruments (e.g., the Gibson Digital Guitar).

Recording engineers and producers will necessarily adapt to emerging distributed recording and performance paradigms. Just as engineers once optimized their approaches to capturing music on tape, vinyl, and compact disc, in the future they will be required to weigh data reduction, bit rate management, and latency in wide-area collaborative musical projects.
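
To make the latency issue concrete, here is a minimal back-of-the-envelope sketch (in Python) of a one-way latency budget for a two-studio session. The fibre distance, packet and buffer sizes, and the roughly 25 ms ensemble threshold are illustrative assumptions, not measurements from any real link.

```python
# A rough, illustrative latency budget for a two-city networked recording
# session. The figures (fibre distance, buffer sizes, the ~25 ms ensemble
# threshold) are assumptions for the sake of the sketch, not measured values.

SPEED_IN_FIBRE_KM_PER_S = 200_000      # light travels ~2/3 of c in glass
SAMPLE_RATE_HZ = 48_000                # uncompressed PCM stream

def one_way_latency_ms(fibre_km: float, packet_frames: int, jitter_frames: int) -> float:
    """Propagation delay plus packetization and receive-buffer delay, in ms."""
    propagation = fibre_km / SPEED_IN_FIBRE_KM_PER_S * 1000.0
    packetization = packet_frames / SAMPLE_RATE_HZ * 1000.0
    jitter_buffer = jitter_frames / SAMPLE_RATE_HZ * 1000.0
    return propagation + packetization + jitter_buffer

if __name__ == "__main__":
    # e.g. roughly 2,000 km of fibre between two distant studios
    latency = one_way_latency_ms(fibre_km=2000, packet_frames=128, jitter_frames=256)
    print(f"one-way latency: {latency:.1f} ms")
    # players tolerate roughly 25-30 ms before ensemble timing suffers,
    # about the delay of standing ten metres apart in the same hall
    print("playable together" if latency < 25 else "too late for tight ensemble playing")
```

The point of the exercise is that every design choice an engineer makes, from packet size to buffer depth, shows up directly in whether two musicians can actually play together.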

Finally, and perhaps most importantly, new technologies will have an enormous impact on listener behaviors, interactive opportunities, and resultant culture(s). Interactive performances and recordings provided over the Internet will give users choices as to how to compose, modify, and listen to music.

Production, Sonic Resolution, and Number of Channels
Where audio recording is concerned, the tools of production are now better-sounding and cheaper than ever before for musicians. Not long ago a professional-sounding production relied on the use of an expensive recording studio and the endorsement of a major record label. Today, the barriers to entry for low-budget artists seeking to record and release their own material are lower than ever; in fact, an entire production, from recording through sale, can be launched from a single computer. For artists now and in the future, the recording and presentation of new works will therefore be less influenced by market trends dictated and followed by large corporate entities. Ultimately, smaller labels and wholly independent productions will flourish, and hopefully audio quality will still play an important role.

Also with respect to content creators, there will be greater use of surround sound technology for multiple playback channels in both radio and recording productions, a trend emerging thanks to the increasing market penetration of home theatre systems. For decades artists have sought to create diffuse and distributed sound fields in recordings, placing speakers behind and above the listener to create effects and a sense of envelopment. These efforts were hampered by technical issues in the past, but online and disc-based delivery now allows composers and artists to place the listener in the centre of a virtual soundscape.
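
As a rough illustration of how a source can be positioned around a listener, here is a small Python sketch of amplitude panning over a four-speaker layout. The speaker angles and the equal-power law are conventional choices made only for the example, not a description of any particular production system.

```python
# A minimal sketch of placing a sound source around the listener with
# amplitude panning over four speakers (front-left, front-right, rear-left,
# rear-right). The layout and the equal-power law are common practice but
# are chosen here purely as an illustration.

import math

SPEAKER_ANGLES_DEG = {"FL": -45.0, "FR": 45.0, "RL": -135.0, "RR": 135.0}

def pan_gains(source_angle_deg: float, spread_deg: float = 90.0) -> dict:
    """Equal-power gain for each speaker, based on its angular distance to the source."""
    gains = {}
    for name, angle in SPEAKER_ANGLES_DEG.items():
        diff = abs((source_angle_deg - angle + 180.0) % 360.0 - 180.0)
        if diff >= spread_deg:
            gains[name] = 0.0
        else:
            gains[name] = math.cos(diff / spread_deg * math.pi / 2.0)
    return gains

# a source drifting from front-centre (0 degrees) around to directly behind (180 degrees)
for angle in (0, 90, 180):
    print(angle, {k: round(v, 2) for k, v in pan_gains(angle).items()})
```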

On another note, the disparity between professional and consumer audio standards has become something of a paradox. In professional circles, ultra-high-resolution digital equipment has advanced recording standards tremendously, while online delivery schemes rely on low-fidelity formats such as mp3 that degrade sound quality in order to create smaller files that can be transmitted faster and take up less storage space. We foresee a change for the better in the fidelity of media enjoyed by users as more consumers opt for high-speed Internet and as storage costs continue to decline. As transmission speeds increase and storage becomes less of an issue, the need for data compression will decrease and the opportunity to transmit additional channels (i.e., surround sound) will emerge. We are moving in a direction that will not only give the listener instant access to whatever music he or she desires, but that will more than likely displace physical delivery formats (cd, super audio cd, dvd, etc.).
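
Some quick arithmetic makes the trade-off plain. The sketch below compares the data rates of a typical mp3 stream, cd-quality stereo, and a high-resolution 5.1 surround mix; the formats are common examples chosen for illustration rather than an official list.

```python
# Back-of-the-envelope data rates for a few delivery formats, to show why
# lossy compression mattered on slow links and matters less as bandwidth grows.
# The formats listed are common examples, not an exhaustive or official set.

def pcm_kbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Raw (uncompressed) PCM bit rate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000.0

formats = {
    "mp3 stereo (typical)":          128.0,                      # lossy, fixed bit rate
    "cd-quality stereo (16/44.1)":   pcm_kbps(44_100, 16, 2),    # ~1,411 kbps
    "high-res 5.1 surround (24/96)": pcm_kbps(96_000, 24, 6),    # ~13,824 kbps
}

for name, kbps in formats.items():
    megabytes_per_minute = kbps * 60 / 8 / 1000
    print(f"{name:32s} {kbps:9.0f} kbps  ~{megabytes_per_minute:6.1f} MB per minute")
```

On a dial-up-era connection only the first of these was practical; on a fast fibre link, even the uncompressed surround mix fits comfortably, which is why we expect the pressure to compress to fade.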

Transmission and recording are things that computers are good at, and it is a fantastic boon for the arts that bandwidth and storage are increasing so quickly in capacity. Another characteristic of computers is that the "real world" can be represented in terms of models: pieces of software that capture the essence of something and can create the illusion of its presence. New standards (structured audio in MPEG-4, and MPEG-7) will deliver not the music or sound itself, but the information needed to "render" a musical construct from a high-level representation of its sound. Think of it as a set of coded commands that draw the frames on the TV screen you are watching, rather than transmitting the frames of the video one by one. Or, rather than simply drawing the visual and aural strokes, these commands might operate at a higher level: as software models, coded objects, they could actually become the characters or landscapes in the movie. Instead of drawing a tree, content programmers might "grow" a tree in a particular spot for a particular scene.
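
To illustrate the render-on-arrival idea, here is a toy Python sketch in which a few dozen bytes of "score" are expanded into an audio file on the receiving end. It is not the MPEG-4 Structured Audio language itself, just a minimal stand-in for the principle of transmitting a description rather than the sound.

```python
# A toy illustration of the "structured audio" idea: instead of sending the
# sound itself, send a compact description (here, a few note events) and let
# the receiver render the waveform. This is not the MPEG-4 Structured Audio
# language, just a minimal sketch of the render-on-arrival principle.

import math
import struct
import wave

SAMPLE_RATE = 44_100

# the "transmitted" representation: (start s, duration s, frequency Hz, amplitude)
score = [
    (0.0, 0.5, 261.63, 0.4),   # C4
    (0.5, 0.5, 329.63, 0.4),   # E4
    (1.0, 1.0, 392.00, 0.4),   # G4
]

def render(score, total_seconds=2.0):
    """Turn the note list into raw samples, a stand-in for the decoder's synthesizer."""
    samples = [0.0] * int(total_seconds * SAMPLE_RATE)
    for start, dur, freq, amp in score:
        first = int(start * SAMPLE_RATE)
        length = int(dur * SAMPLE_RATE)
        for i in range(length):
            t = i / SAMPLE_RATE
            envelope = 1.0 - (i / length)        # simple linear decay
            samples[first + i] += amp * envelope * math.sin(2 * math.pi * freq * t)
    return samples

def write_wav(path, samples):
    """Write mono 16-bit PCM so the rendered result can be auditioned."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))

write_wav("rendered.wav", render(score))   # a tiny score becomes ~170 KB of audio
```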

Instruments, Performance, and New Technologies
In the pre-bandwidth past, digital technologies had already begun to expand the accuracy and flexibility of musical creation. The cd, MIDI instruments, and the all-digital recording studio represent stages that arrived in quick succession. Digital is synonymous with precision, and also with "smart systems": inside many digital instruments, processing boxes, and software applications, music-making tasks draw on embedded "musical knowledge" to assist the musician. From rudimentary automatic drumming to sophisticated simulations of instrumental sound, the algorithms inside these computers are the result of research and invention. There is nothing all that new happening here - flute makers in China's Henan Province were devising unique bird-bone instruments 9,000 years ago, and they must have been similarly fascinated with the sound effects and music they could engineer.

New Instrument Designs
In part, the history of electronics and sound concerns the invention of new ways to perform music. Expressivity has been a quality of all enduring instruments, and remains the design goal of a new generation of luthiers. The difference these days is that the physical apparatus is disconnected from the sound-generating mechanism, and this leads to wide-open choices in terms of what features may be "afforded" to the performer. Affordances are dimensions for the player to manipulate, and can include classical ones such as pitch, loudness, and timbre. They can become much more, however, involving spatial direction, clusters of events massed together, and augmented automatic playing. As seen in karaoke, this element can also be assistive, allowing some amount of guidance and correction.
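
A small sketch may make the idea of affordances concrete: below, a hypothetical controller's sensor dimensions (position, pressure, tilt) are mapped onto pitch, loudness, and spatial direction. The sensor names, ranges, and mapping curves are invented for the illustration, not drawn from any existing instrument.

```python
# A minimal sketch of "affordances" in a software instrument: each physical
# sensor dimension is mapped onto a musical parameter. The sensor names and
# ranges are hypothetical, chosen only to illustrate the mapping idea.

from dataclasses import dataclass

@dataclass
class Gesture:
    position: float     # 0.0 .. 1.0, e.g. finger position along a ribbon sensor
    pressure: float     # 0.0 .. 1.0, e.g. force-sensing resistor reading
    tilt: float         # -1.0 .. 1.0, e.g. accelerometer left/right

@dataclass
class NoteParams:
    pitch_hz: float     # classical affordance: pitch
    gain: float         # classical affordance: loudness
    azimuth_deg: float  # newer affordance: spatial direction in a surround field

def map_gesture(g: Gesture) -> NoteParams:
    """Translate raw sensor dimensions into synthesis parameters."""
    pitch = 110.0 * (2.0 ** (g.position * 3.0))   # three octaves upward from A2
    gain = g.pressure ** 2                        # squared for a gentler onset
    azimuth = g.tilt * 90.0                       # pan across the front hemisphere
    return NoteParams(pitch_hz=pitch, gain=gain, azimuth_deg=azimuth)

print(map_gesture(Gesture(position=0.5, pressure=0.7, tilt=-0.25)))
```

Because the mapping is just software, the luthier can swap it out at will, which is exactly the open-endedness described above.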

Still, the true test of any new instrument is in its expressive power, and whether musicians are enabled to develop their craft to the point of virtuosity. "Craft" in music equates with "chops" - that quality which builds with practice (whether as a performer, composer, luthier, or sound engineer) and which "makes" the musician. Virtuosic instruments ("gesture amplifiers" like the violin) have evolved alongside the skill of their performers.

Now consider the disconnect that could occur when the instrument and the player are physically distanced from each other. A subtle sense of touch is crucial to performance on traditional instruments, and we are now learning that adding it back in can be an important enhancement to these new "disembodied" instruments. The design of haptic, or tactile, musical interface systems draws on multiple disciplines. Challenging and fun to work with, these designs for interaction between humans and computer instruments are still in the research phase. New physical sensors and microprocessors are already being used onstage by musicians and composers interested in the experimental possibilities of interactive music. They will play a tantalizing role in a remotely connected world, much as a tele-surgeon requires touch-based precision to operate long-distance surgical tools successfully.
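
As a hint of what such touch feedback involves, here is a minimal sketch of one step of a haptic loop, returning a spring-damper force for a virtual string. The constants are illustrative only, though real haptic controllers do run comparable loops at roughly a kilohertz.

```python
# A minimal sketch of one step in a haptic feedback loop: the device reports
# plectrum position and velocity, and we return the force of a virtual string
# modelled as a spring and damper. All constants are illustrative assumptions.

STIFFNESS_N_PER_M = 400.0    # how hard the virtual string pushes back
DAMPING_N_S_PER_M = 0.8      # dissipation, so the contact feels stable
STRING_REST_POSITION_M = 0.0

def feedback_force(position_m: float, velocity_m_s: float) -> float:
    """Spring-damper force to send back to the haptic device, in newtons."""
    displacement = position_m - STRING_REST_POSITION_M
    if displacement <= 0.0:
        return 0.0                       # not touching the virtual string
    return -(STIFFNESS_N_PER_M * displacement + DAMPING_N_S_PER_M * velocity_m_s)

# e.g. plectrum pressed 2 mm into the string, moving inward at 5 cm/s
print(f"{feedback_force(0.002, 0.05):.3f} N")
```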

Conclusion
Technology today has a direct influence on art because it shapes both the instruments and the methods of distribution. In this essay we have covered some new formats and ideas for musical collaboration and transmission. Since composition has always depended on the technology of its day, we can expect drastic changes in composition and performance to arrive soon. Yet there may be less variance between musical cultures as, once again, the medium becomes the message. To a great extent, art is determined by how it is presented. As we move away from physical delivery formats like the cd, we will most likely move to wireless, hand-held, interactive devices - devices which carry within them, literally, a global reach.

 

Theresa Leonard is the Director of Audio for the Music & Sound department at The Banff Centre. Her work spans many aspects of audio production, engineering, education, and administration. She holds a Master's degree in Music, specializing in Sound Recording, from McGill University. Trained as a classical pianist, she previously taught music in French and English schools in eastern Canada, and has produced and engineered recordings for companies such as Analekta, Arktos, Bravo!, Centaur, EMI, and Marquis Records. She was recently appointed Education Chair for the Audio Engineering Society (AES) and was also elected AES President for the 2004-05 term.

Chris Chafe is a composer, cellist, and music researcher with an interest in computer music composition and interactive performance. He has been a long-term denizen of the Center for Computer Research in Music and Acoustics, Stanford University, where he directs the Center and teaches. Recent research has seen him developing methods for computer sound synthesis based on physical models of musical instrument mechanics, and a current project, SoundWIRE, explores musical collaboration using high-speed Internet for high-quality sound.
