What is permitted? The nature of music
The materials in the music modules of the course are intended to supplement the texts we will be discussing in class. The selection and presentation are neither systematic nor intended to satisfy the standards of musicology. I have made every effort not to assume any previous musical training, and have chosen the musical excerpts with the thought of creating a series of linked impressions rather than a systematic account of compositional developments. The selection also presupposes that you do not want to spend more than an hour with each section, which is why I have been so philistine about taking pieces of music out of the context of the total work.
If you do not know how to read musical notes but are willing to devote a half hour or so to understanding music as a semiotic system, a very efficient way to get started would be at musictheory.net. See if you can get as far as generic and specific intervals under "Lessons." Those with some musical knowledge might spend a few minutes toying with the interval ear trainer under "Trainers."
1. Introduction
Our perception of musical tones depends on the intensity, frequency, and waveform of the original physical stimulus to our ears. As you might expect, our subjective sense of loudness is based mostly on the intensity of the objective source: larger variations in air pressure in the vicinity of our ears translate to more nerve impulses sent to the brain. (The familiar decibel system for measuring sound intensity levels reflects this: it expresses ratios of the quantity of energy passing through a given surface area compared with the same surface in conditions of effective silence.) The effect is not a linear one, however, since doubling the intensity of the incoming tone does not lead us to think the sound is twice as loud. That is why the decibel scale is conveniently chosen to be a logarithmic one. City traffic measuring 70 dB represents ten times more sound energy than a quiet conversation measuring 60 dB, but we only perceive this to be roughly a doubling or tripling of loudness. Or in a more musical context, ten people singing the same note will sound about twice as loud as one person singing that note at the same intensity, while a chorus of one hundred people will sound about four times as loud as the soloist. Even with this crudest aspect of musical perception, loudness, the creation of musical affect in our minds is already a complicated matter.
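For those who like to see the arithmetic spelled out, the singers example can be checked with a short Python sketch. It assumes the common rule of thumb (consistent with the traffic example above) that perceived loudness roughly doubles with every 10 dB increase in intensity:

```python
import math

def db_increase(n_sources):
    """Decibels added when n equal, independent sources replace one:
    10 * log10(n), since intensities simply add."""
    return 10 * math.log10(n_sources)

def loudness_ratio(db):
    """Rule of thumb: perceived loudness doubles per 10 dB."""
    return 2 ** (db / 10)

# Ten singers add 10 * log10(10) = 10 dB -> about twice as loud.
print(db_increase(10), loudness_ratio(db_increase(10)))    # 10.0 2.0
# A hundred singers add 20 dB -> about four times as loud.
print(db_increase(100), loudness_ratio(db_increase(100)))  # 20.0 4.0
```

Note the gap between the physics and the perception: a hundredfold increase in acoustic energy buys only a fourfold increase in apparent loudness.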
This is all the more true with timbre, our perception of the "texture" of a tone, i.e., the difference between a trumpet, an oboe, a violin, and the human voice when they are all producing the same note. In the first instance, the perceived differences in timbre can be attributed to the different shapes of the elaborate (but still periodic!) waveforms that make up a given note on each instrument. Yet this isn't the entire story. Timbre also has to do with the ear's ability to detect tiny irregularities at the beginnings and endings of these sustained waveforms. These so-called transients range from the 20 milliseconds it takes for the blown oboe reed to settle into a steady oscillation, to the 70–90 milliseconds it takes for a flute, or for a violin bow attacking a string, to do the same. Since the time from wave peak to wave peak for the notes above middle C ranges from about 2 to 4 milliseconds, it can thus take several dozen vibration periods for the tone to be established clearly. The ear takes considerable training to learn how to "separate" different notes of identical timbre, and it relies heavily on those little irregularities in order to do so successfully. Just for entertainment purposes, listen to how Béla Bartók uses timbre both to sustain and to subtly modify a simple but intriguing melodic line from a late work, the Concerto for Orchestra (1943). In the second movement ("Game of Pairs"), he links together in succession pairs of bassoons, oboes, clarinets, flutes, and muted trumpets. In each case the paired instruments move at fixed intervals with respect to each other, and because they share the same timbre, it is easy for our ears to let the combined chord fuse into a "single," slightly exotic voice.
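The claim about "several dozen vibration periods" is easy to verify. The sketch below uses standard equal-temperament frequencies for the octave above middle C (these particular values are my addition, not part of the course materials) together with the transient durations quoted above:

```python
def period_ms(freq_hz):
    """Length of one vibration cycle in milliseconds."""
    return 1000 / freq_hz

# Notes in the octave above middle C, from C4 (~261.6 Hz) to C5 (~523.3 Hz):
# each cycle lasts roughly 2 to 4 milliseconds, as stated in the text.
for name, f in [("C4", 261.6), ("A4", 440.0), ("C5", 523.3)]:
    print(name, round(period_ms(f), 2), "ms per cycle")

# A 70-90 ms violin or flute transient therefore spans, at A440:
print(70 / period_ms(440), 90 / period_ms(440))  # roughly 31 to 40 cycles
```

So a bowed A above middle C really does wobble through a few dozen cycles before the ear hears a settled tone.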
Most importantly, there is the problem of pitch: the location of a sound along the tonal scale, which depends on the frequency of the vibrations reaching the ear, fast ones producing a high pitch and slow ones a low one. Since the middle of the 20th century, the conventional "concert pitch" has been tuned so that the A directly above middle C on the piano has 440 vibrations per second (440 Hz). The nineteenth-century standard was more often 435 Hz, and in the early modern era it ranged as low as 415 Hz. Whether the ear is presented with a smooth sinusoidal wave or a very complex waveform, so long as the two repeat themselves at the same intervals, our ears will perceive them as sharing the same pitch. Pitch can vary continuously: just squeeze or stretch the wave. So now comes the crucial musical question: how does one pitch relate to another? It turns out that in one crucial respect there is a perfect match between an objective regularity about frequency and a subjective regularity about pitch, namely, our perception of the octave. The octave is the name we give both to a frequency ratio (double the frequency and you rise one octave) and to a psychological quantity (pitches separated by octaves sound like the "same" note). When you realize that the octave frequency ratio also matches the regularities observed in sound production by musical instruments (double the length of an organ pipe and the frequency of the note produced drops by half), the octave looks very much like an entity dictated by nature.
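The octave-doubling rule can be stated in a line of code. The sketch below generates the frequencies of the various A's on the piano from the modern A440 standard mentioned above (the octave-numbering labels follow the common convention in which that A is called A4):

```python
def octave_shift(freq_hz, octaves):
    """Shift a pitch by whole octaves: each octave up doubles the
    frequency, each octave down halves it."""
    return freq_hz * 2 ** octaves

a4 = 440.0  # modern concert pitch for the A above middle C
for n in range(-2, 3):
    print(f"A{4 + n}: {octave_shift(a4, n):.1f} Hz")
# A2 110.0, A3 220.0, A4 440.0, A5 880.0, A6 1760.0
```

All of these frequencies strike the ear as the "same" note, which is exactly the match between objective ratio and subjective perception the paragraph describes.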