Digital Art/Digital Media - Theory and Practice
RTF 344M, FA360, FA 381
Dr. Bruce Pennycook

Unit II: Music, Sound & MIDI

Reference Book: Mark Ballora, Essentials of Music Technology, Prentice-Hall, 2002.

Part I: Sound & Hearing

Physics in the Classroom - Sound Waves & Music

This site provides an excellent introduction to the properties of sound. We will study some key principles of sound and hearing as described in Lessons 1-5 of this site. You do not need to retain the mathematical formulae but you should be able to describe the following terms:

• sound as a mechanical wave, longitudinal wave
• pressure wave, standing waves
• speed and propagation of sound
• reflection, refraction, diffraction
• interference
• echo and reverberation
• frequency and pitch
• amplitude and loudness
• speed of sound in air (the medium matters!)
• the three primary parts of the ear (outer, middle, inner)
• timbre - fundamental frequency, harmonics
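The relationships among several of the terms above (frequency, speed of sound, wavelength) can be tried out numerically. A minimal Python sketch, assuming the commonly cited value of roughly 343 m/s for the speed of sound in air at about 20°C:

```python
# Wavelength, frequency, and the speed of sound are related by:
#   wavelength = speed / frequency
# The speed depends on the medium (the medium matters!).

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength in meters of a sound wave travelling in air."""
    return SPEED_OF_SOUND / frequency_hz

# A4, the orchestral tuning pitch (440 Hz), has a wavelength of about 0.78 m.
print(round(wavelength(440.0), 2))  # -> 0.78
```

Doubling the frequency halves the wavelength, which is why high-pitched sounds have short wavelengths and diffract less around obstacles than low-pitched ones.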

Part II: Digital Audio

There are many tutorial sites that describe the process of converting audio signals to digital format. We will come to these in a moment. First, let's consider the overall process as you are most likely to encounter it using and creating digital media.

From our examination of the basic properties of sound and hearing, we know that acoustical energy from a voice, for example, results in sound pressure waves travelling through air. If we have a microphone connected to a sound system or computer, the diaphragm of the microphone (much like our eardrum) will vibrate according to the changes in air pressure. These vibrations are converted to a tiny but very accurate electrical signal that is an analog of the changes in air pressure. If we were playing in a band and merely wanted to amplify our voice, these electrical signals would go to the mixer, get amplified, and be sent to the loudspeakers, which, in turn, move the air and generate changes in sound pressure.

The signal path for a typical audio setup for a singer or band would then be:

Items in blue are physical (voice, waves, ear)
Items in green are analog-electrical (mixer, amplifiers, mics, speakers)
Items in red are digital (DAC, ADC, audio editing software such as ProTools, cd, mp3, etc.)
Items in italics (microphone, speakers, ADC, and DAC) are "transducers" that convert signals from one form to another

voice -> sound pressure waves -> microphone -> mixer -> amplifier -> speakers -> sound pressure -> ear

In the case of digital recording, editing, and playback, we can capture the output of the mixer (for example) and change these analog signals into digital signals through the analog-to-digital converter (ADC) hardware and software in our computer. Once we have saved this digital audio signal to a file, we can edit, mix, and process the file, play it back from our computer, burn a CD, share it over the internet, etc. To play back digital audio, the computer sends the file to the digital-to-analog converter (DAC) in the computer, which you can then connect to headphones or your stereo.
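What the ADC actually does is measure the analog voltage many thousands of times per second (sampling) and store each measurement as an integer with a fixed number of bits (quantization). A minimal Python sketch of this idea, simulating the digitization of a pure 440 Hz tone at CD-quality settings (44,100 samples per second, 16 bits per sample):

```python
import math

SAMPLE_RATE = 44100                        # CD-quality: samples per second
BIT_DEPTH = 16                             # bits per sample
MAX_AMPLITUDE = 2 ** (BIT_DEPTH - 1) - 1   # 32767, the largest 16-bit sample value

def sample_sine(frequency_hz: float, duration_s: float) -> list[int]:
    """Simulate an ADC: sample a sine wave and quantize each sample to a 16-bit integer."""
    n_samples = int(SAMPLE_RATE * duration_s)
    return [
        int(MAX_AMPLITUDE * math.sin(2 * math.pi * frequency_hz * n / SAMPLE_RATE))
        for n in range(n_samples)
    ]

samples = sample_sine(440.0, 0.01)           # 10 ms of a 440 Hz tone
print(len(samples))                          # -> 441 samples
print(all(-32768 <= s <= 32767 for s in samples))  # -> True: each fits in 16 bits
```

Playback through the DAC is the reverse: the stream of integers is converted back into a smoothly varying voltage that drives the headphones or speakers.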

So we insert the digital part into the path above:

-> mixer -> ADC -> software (edit, process, etc.), save as files (mp3) -> DAC -> mixer ->
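The "save as files" step in the path above can be sketched in a few lines of Python using the standard-library wave module (writing an uncompressed WAV file rather than an mp3, since mp3 encoding requires external tools). This is an illustrative sketch, not production audio code; the function name write_tone is made up for the example:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # CD-quality samples per second

def write_tone(path: str, frequency_hz: float, duration_s: float) -> None:
    """Write a mono, 16-bit WAV file containing a pure sine tone."""
    n_samples = int(SAMPLE_RATE * duration_s)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)          # mono
        wav.setsampwidth(2)          # 16 bits = 2 bytes per sample
        wav.setframerate(SAMPLE_RATE)
        for n in range(n_samples):
            value = int(32767 * math.sin(2 * math.pi * frequency_hz * n / SAMPLE_RATE))
            wav.writeframes(struct.pack("<h", value))  # little-endian signed 16-bit

write_tone("tone.wav", 440.0, 0.5)   # half a second of A4
```

The resulting tone.wav can be opened in any of the audio editors listed below, which is exactly the hand-off from the ADC/software stage to the editing stage in the path above.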

Web Sites:

The following site has a very good outline of the digitizing process: basics of digital audio.
Another useful site is this table of file formats.
The following digital audio recorder/editor/players are widely used. They are all excellent.

Acoustica (Windows, \$10)
Acid-Wave (Windows, \$45)
Cool Edit (Windows, \$69)
Pro Tools LE (Windows, Mac OS 9, Mac OS X - free version)
Bias Peak (Mac OS X, \$399)