Gareth Loy
Computer Music Journal, Vol. 9, No. 4, Winter 1985
© Massachusetts Institute of Technology.


What of the Future?

A corollary of Moore's law states that cost goes up linearly for exponential growth in complexity. F. R. Moore has argued (Moore 1981) that while traditional musical instruments are becoming more and more costly to produce, electronic instruments are becoming both more interesting and cheaper. Four years ago he predicted that there would be a point at which these two functions of time would cross, and that there would soon be a time when interesting electronic instruments would be cheaper than traditional instruments. Were this to happen, he argued, it would mean the wide availability of these new instruments and their serious utilization in the music of our culture. Surveying today's popular music scene, it can easily be argued that this time has already arrived. The impact is also evident in schools of music: projects to explore these new low-cost technologies are underway in many computer music laboratories.
Those who have access to more powerful tools can often be quite caustic about MIDI, seeing only its (numerous) limitations.

Irritating as Wuorinen's comments may be to some, he has a point. Engineers and scientists who want to contribute to the technology of music must have a deep insight into the aesthetics of music; otherwise the systems they implement will be archaic and inflexible. Of course, it works the other way too: without deep insight into the means of production, composers and performers will miss the essential contribution that computer technology can make, and the resulting music will be anachronistic.
There are also complaints about the nature of the human interface provided by typical commercial synthesizers.

There are reasons to be happy about MIDI, in spite of what it is, when we consider that it is helping to stimulate research in performance, improvisation, and interactive composition. Many are enthusiastic about using MIDI to help move beyond the limitations of tape music. Most are willing to put up with temporary gestural and sonic limitations to achieve this. Joel Chadabe's work in interactive composition (Chadabe 1984; Chadabe and Meyers 1978), Buxton's work with human interfaces (Buxton 1980), Appleton's work with the Synclavier, and many other projects as well stem from the motive to reclaim performance gesture as a part of the process.
My enthusiasm for this subject is similarly motivated. Throughout my career in electronic and then computer music, little of substance could be done in live performance. Analog synthesizers offered real-time control, but over pathetically limited resources. Software sound synthesis offered tremendous resources, but none of it in real time. Compositions for tape and live instruments usually required the live instrumentalists to synchronize their performance to the tape (unless one chose to declare that synchronization was unimportant, a limited aesthetic option). This made the live performers slaves to the tape part. Now at last we are in a position to make the performer the independent variable and the synthetic part the dependent variable. A whole continuum of possibilities has opened up. Tape music is at one end, where the electronics ignores the performer. At the opposite end of the continuum are systems that drive synthesizers directly from sensors that extract performance parameters. Here the electronics is slave to the performer. In between are many interesting areas, such as automatic accompaniment (Dannenberg 1984; Vercoe 1984) and numerous other strategies where the electronics and the performer share control. A computer science discipline called control theory (Rouse 1981) is devoted to considering human/machine interaction modalities, static and dynamic systems, and related issues. Suddenly, this seems quite germane to computer music.
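The middle of this continuum, where the synthetic part follows the performer, can be sketched in miniature. The following fragment is purely illustrative (the function names and the naive tempo estimate are my own inventions, not the actual algorithms of Dannenberg or Vercoe): it derives a tempo from the performer's note onsets and then maps accompaniment events, notated in beats, onto the performer's clock, making the performer the independent variable.

```python
# Illustrative sketch only: a trivial form of tempo following, in which
# the performer's note-onset times determine when the accompaniment plays.
# A real score follower must also match notes against the score and cope
# with errors; none of that is attempted here.

def estimate_tempo(onset_times):
    """Estimate beats per minute from consecutive note-onset times (in seconds),
    assuming the performer is playing one note per beat."""
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval

def schedule_accompaniment(score_beats, onset_times):
    """Map accompaniment events (positions in beats) onto the performer's
    clock, using the tempo inferred from the performer's onsets."""
    seconds_per_beat = 60.0 / estimate_tempo(onset_times)
    start = onset_times[0]
    return [start + beat * seconds_per_beat for beat in score_beats]

# A performer playing one note every 0.5 s implies 120 beats per minute,
# so accompaniment events at beats 0, 1, 2 land at 0.0, 0.5, and 1.0 s.
onsets = [0.0, 0.5, 1.0, 1.5]
print(estimate_tempo(onsets))                      # 120.0
print(schedule_accompaniment([0, 1, 2], onsets))   # [0.0, 0.5, 1.0]
```

The point of the sketch is only the inversion of roles: the synthetic part reads its timing from the human performance rather than imposing timing on it, as a tape part does.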
Unfortunately, it appears that the needs of the research community will continue to be unaddressed in the marketplace. Most MIDI-based computer systems will continue to be directed at standard applications for nonprogrammers. Many will not even be directed at musicians but will be used for such things as correcting "wrong notes" and providing simple accompaniment for novices.

It is clear that we will continue to have to improvise systems that meet our needs from the best available technology. As a result, the field seems destined to be driven by technological imperatives for at least the near future. This has an interesting implication, which can be exposed with the following syllogism: computer music is to computer technology as the steamboat was to steam engine technology, meaning that computer music is still mostly an applied discipline, like the development of steamboats. We are aligned with the historical position of Fulton, rather than Watt. Furthermore, we are at the point in the development curve prior to where steamboats became capable of reliable navigation. We are still mostly engaged in improving the technology to yield the benefits we know are there. Just as reliable worldwide navigation had to wait for the steamboat, many of the experimental research fields awaiting us depend on the availability of appropriate tools. To take other examples, the field of microbiology was only possible after the advent of the electron microscope; astronomy was only possible after the perfection of the telescope.
(Another analogy to computer music can be borrowed from the history of astronomy: before the telescope, astronomy was called astrology. After the perfection of the musical equivalent of the telescope, will musicology become musiconomy?)
Fortunately, progress is being made along these lines in many places. One example is the computer music workstation development project at the Computer Audio Research Laboratory. We mean by this term a microcomputer system with the capability of running either out of real time for general-purpose signal processing and composition, or in real time for performance processing and direct synthesis. In the latter case, the system uses special hardware to do performance capture, analysis, and synthesis. The goal is to extend the range of musical research with such a tool to include performance. Our current prototype uses MIDI devices in combination with other sources and sinks of information (Fig. 10), and it is from this work that I have garnered most of the experience related in this article. This development effort is aimed at providing low-cost tools to the research community that attempt to meet the criteria of those whom I have quoted above (among others). This work would not have been possible prior to the advent of sophisticated, standardized, low-cost, open-architecture components.

Fig. 10. Performance laboratory at CARL. Oval components are MIDI devices; squares are processors. Components are: Yamaha DX7 synthesizer, Roland MKB-1000 weighted-action piano keyboard, Yamaha TX-816 synthesizer, Roland MPU-401 MIDI controller, Force 11 MC68000 VME-bus CPU, Sun Microsystems Inc. SUN workstation. The Performance Processor is an in-house real-time performance processing system under development. These facilities are in addition to our regular timesharing computer resources.

The relationship between electronic-instrument manufacturers and the computer music community has been rocky in the past. However, I see the advent of MIDI as the first sign that the commercial synthesizer industry is becoming relevant to the computer music community. By providing low-cost, standardized performance processors and synthesizers, our field is gaining tools that will have a broad impact on the kinds of subjects we can investigate and on the numbers of researchers who can participate. In a sense, real-time control research is now the province of anybody with a MIDI synthesizer and a desktop computer. Some (McConkey 1984) even see the fading away of "the distinction between synthesizers and computer music," meaning, presumably, the distinction between those who use commercial synthesizers and those who have access to the facilities of computer music research centers. Perhaps so, but there will always be a distinction between making music and doing musical research, even if the latter is in the form of musical compositions that use commercial synthesis systems. The research uses of computer music tools, both scientific and musical, will always lead commercial application. The development of MIDI signals the emergence of several important technologies from the laboratory into the field. If the intercommunication between the synthesizer industry and the computer music community can grow, as I see happening all around me, it presages better things to come.

