Source Code. Projects. Nerd Stuff. Art Stuff.

Sound In Media

[gn_media url="http://www.youtube.com/watch?v=PeTriGTENoc" width="600" height="400"]

Introduction

[gn_pullquote align="left"]The interaction between the physical properties of sound, and our perception of them, poses delicate and complex issues.[/gn_pullquote]

Sound can be viewed as a wave motion in air or another elastic medium; in this case, sound is a stimulus. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound; in this case, sound is a sensation. Both views are familiar to those interested in audio and music, and the type of problem at hand dictates which approach we take. If we are interested in the disturbance in air created by a loudspeaker, it is a problem in physics. If we are interested in how that disturbance sounds to a person near the loudspeaker, psychoacoustical methods must be used. The relationship between pitch and frequency illustrates the distinction. Frequency is an objective, measurable property of sound; it can be read directly from an oscilloscope. Pitch, on the other hand, is a subjective property of sound: a perception formed as the human ear and mind interpret the stimulus. Although we cannot equate frequency and pitch, they are analogous.

Sound in Media

Without a medium, sound cannot be propagated. Sound is readily conducted in gases, liquids, and solids; air, water, steel, and concrete are all elastic media. Outer space is an almost perfect vacuum; no sound can be conducted except in the tiny island of atmosphere within a spaceship or a spacesuit. If an air particle is displaced from its original position, elastic forces of the air tend to restore it to its original position. Because of the inertia of the particle, it overshoots the resting position, bringing into play elastic forces in the opposite direction, and so on.

Air particles move slightly back and forth to carry sound across a medium (see Fig. 1-3).

In reality, there are on the order of 10^20 molecules in a cubic inch of air. Molecules crowded together represent areas of compression, in which the air pressure is slightly greater than the prevailing atmospheric pressure. The sparse areas (see Fig. 1-5) represent rarefactions, in which the pressure is slightly less than atmospheric pressure. Any given molecule, because of elasticity, will return toward its original position after an initial displacement. It will move a certain distance to the right and then the same distance to the left of its undisplaced position as the sound wave progresses uniformly to the right. Sound exists because of the transfer of momentum from one particle to another.

The speed of sound is dramatically slower than the speed of light. In air, sound takes about 5 seconds to travel 1 mile. Propagation speed depends on the medium and on other factors. Compared to air, sound travels faster in denser, stiffer media such as liquids and solids, where the molecules are more strongly coupled and transfer sound energy more readily. Sound also travels faster in air as temperature increases, and slightly faster in more humid air. Many other factors play a role; in a rigid material such as steel, for example, sound travels faster but may also undergo some damping.
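These figures can be checked with a short sketch. It uses the common linear approximation for the speed of sound in dry air, v ≈ 331.4 + 0.6·T m/s (T in °C), which is an approximation, not an exact law:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air, in m/s (T in degrees Celsius)."""
    return 331.4 + 0.6 * temp_c

v = speed_of_sound(20.0)     # roughly 343 m/s at room temperature
mile_m = 1609.34             # meters in one mile
travel_time = mile_m / v     # close to the "5 seconds per mile" rule of thumb
print(f"{v:.1f} m/s, {travel_time:.1f} s per mile")
```

Note that warmer air gives a higher speed, matching the temperature dependence described above.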

Wavelength and Frequency

A sine wave is illustrated in Fig. 1-7. The wavelength λ is the distance a wave travels in the time it takes to complete one cycle. A wavelength can be measured between successive peaks or between any two corresponding points on the cycle. The frequency f specifies the number of cycles per second, measured in hertz (Hz). Frequency and wavelength are related by λ = v / f, where v is the speed of sound in the medium.

To help calculate and illustrate this relationship, visit this site.
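The relation λ = v / f can also be sketched directly; the speed value below assumes air at about 20 °C:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius (assumed)

def wavelength(frequency_hz, v=SPEED_OF_SOUND):
    """Wavelength in meters for a given frequency, from lambda = v / f."""
    return v / frequency_hz

print(wavelength(20))      # low end of the audible range: about 17 m
print(wavelength(20000))   # high end: about 17 mm
```

The million-to-one spread between these two wavelengths is one reason loudspeaker and room-acoustics design is so challenging across the audible band.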

Harmonics

Combining sine waves can be constructive or destructive (see Fig. 1-9)

The sine wave with the lowest frequency (f1) of Fig. 1-9A is called the fundamental, the sine wave at twice that frequency (f2) of Fig. 1-9B is called the second harmonic, and the sine wave at three times that frequency (f3) of Fig. 1-9D is the third harmonic. The fourth and fifth harmonics are four and five times the frequency of the fundamental, and so on. The fundamental vibrates at a single frequency; it is a pure sine wave with no overtones.
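The naming scheme is simple enough to express in a few lines; this is just an illustrative sketch of the integer-multiple rule:

```python
def harmonics(f1, n):
    """Frequencies of the first n harmonics of fundamental f1 (the
    first harmonic is the fundamental itself)."""
    return [f1 * k for k in range(1, n + 1)]

print(harmonics(100.0, 5))  # [100.0, 200.0, 300.0, 400.0, 500.0]
```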

Phase

In Fig. 1-9, all three components, f1, f2, and f3, start from zero together. This is called an in-phase condition. In some cases, the time relationships between harmonics or between harmonics and the fundamental are quite different from this (see Fig. 1-10).

What happens if the harmonics are out of phase with the fundamental? Figure 1-11 illustrates this case. The second harmonic f2 is now advanced 90°, and the third harmonic f3 is retarded 90°.
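A small numeric sketch shows the effect: shifting the phases of the harmonics changes the instantaneous sum (the waveform's shape) even though the frequency content is identical. The function and its arguments here are illustrative, not from the source:

```python
import math

def waveform(t, f1, phases):
    """Sum of harmonics 1, 2, 3, ... of fundamental f1 at time t,
    with one phase offset (in radians) per harmonic."""
    return sum(math.sin(2 * math.pi * k * f1 * t + p)
               for k, p in enumerate(phases, start=1))

f1 = 100.0
t = 0.001
in_phase = waveform(t, f1, [0, 0, 0])
# Second harmonic advanced 90 degrees, third retarded 90 degrees, as in Fig. 1-11:
shifted = waveform(t, f1, [0, math.pi / 2, -math.pi / 2])
print(in_phase, shifted)  # same components, different instantaneous sum
```

The ear is largely insensitive to such phase shifts between harmonics, which is why two waveforms of very different shape can sound alike.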

Octaves

Audio engineers and acousticians frequently use the integral-multiple concept of harmonics, closely allied as it is to the physical aspect of sound. Musicians often refer to the octave, a logarithmic concept that is firmly embedded in musical scales and terminology because of its relationship to the ear’s characteristics. Harmonics and octaves are compared in Fig. 1-12. Harmonics are linearly related: each harmonic is an integer multiple of the fundamental, so successive harmonics are spaced a constant number of hertz apart. An octave is defined as a 2:1 ratio of two frequencies; any time you halve or double a frequency, you have moved by an octave. The interval from 100 to 200 Hz is an octave, as is the interval from 200 to 400 Hz. The interval from 100 to 200 Hz is perceived as larger than the interval from 200 to 300 Hz, even though both span 100 Hz; the ear judges intervals as ratios rather than arithmetic differences.

Consider: going up an octave doubles the frequency and halves the wavelength, so twice as many cycles of the higher tone fit within the same physical distance.
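The ratio-based perception of intervals can be made concrete: the size of an interval in octaves is log2(f2 / f1). This is a standard identity, sketched here for the two intervals discussed above:

```python
import math

def interval_in_octaves(f_low, f_high):
    """Perceived interval size in octaves between two frequencies."""
    return math.log2(f_high / f_low)

print(interval_in_octaves(100, 200))  # exactly 1 octave
print(interval_in_octaves(200, 300))  # about 0.585 octave, despite the same 100 Hz span
```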

Spectrum

Here, in the context of sine waves and harmonics, we need to establish the concept of spectrum. The visible spectrum of light has its counterpart in sound: the audible spectrum, the range of frequencies that fall within the perceptual limits of the human ear, commonly taken as 20 Hz to 20 kHz. We cannot see far-ultraviolet light because the frequency of its electromagnetic energy is too high for the eye to perceive, nor far-infrared light because its frequency is too low. Likewise, there are sounds too low in frequency for the ear to hear (infrasound) and sounds too high (ultrasound).
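These three regions can be sketched as a simple classifier; the 20 Hz and 20 kHz boundaries are the nominal limits named above, not sharp physical cutoffs:

```python
def classify(frequency_hz):
    """Place a frequency relative to the nominal 20 Hz - 20 kHz audible band."""
    if frequency_hz < 20:
        return "infrasound"
    if frequency_hz > 20000:
        return "ultrasound"
    return "audible"

print(classify(5))       # infrasound
print(classify(440))     # audible
print(classify(40000))   # ultrasound
```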

For a more in-depth analysis of the sound spectrum, visit this site.