Chapter 3 Web Topics

3.1 Transfer Functions

Introduction

We are often interested in how a system affects a signal as it passes into and then out of the system. For example, what happens to the sounds produced by our larynx as they pass through our pharynx and nasal cavities and then emerge from our mouth and nostrils? What happens to a light signal as it passes through water? What happens to a bird’s song as it propagates through a leafy forest? In each case, the signal is passing through a black box and emerging, usually transformed, on the other side.

If the response of the black box to introduced signals is linear and time-invariant, we can compute a transfer function for it that allows us to predict the waveform and spectrogram of any signal after it has passed through the box.

The response of a black box is linear if it satisfies the principle of superposition: that is, suppose that inserting a simple signal x1 into the box produces an output signal y1, and inserting a signal x2 produces an output signal y2. The box is linear if inserting the sum of the two input signals, x1 + x2, produces the output signal y1 + y2. A system is nonlinear if the output contains products of inputs (e.g., x1 × x2) or higher powers of terms (e.g., x1² or y1³). Many natural systems are linear over at least some range of inputs; however, at very low or high values of x, the same system may become nonlinear.
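
To make the superposition test concrete, here is a minimal numerical sketch in Python (the two “boxes” are invented stand-ins: a moving-average filter for a linear system and a saturating amplifier for a nonlinear one):

    import numpy as np

    def linear_box(x):
        # a 5-point moving-average smoothing filter: linear and time-invariant
        return np.convolve(x, np.ones(5) / 5, mode="same")

    def nonlinear_box(x):
        # a saturating response, as in an overdriven amplifier
        return np.tanh(x)

    rng = np.random.default_rng(1)
    x1 = rng.standard_normal(1000)
    x2 = rng.standard_normal(1000)

    for box in (linear_box, nonlinear_box):
        holds = np.allclose(box(x1 + x2), box(x1) + box(x2))
        print(box.__name__, "superposition holds:", holds)
    # prints: linear_box superposition holds: True
    #         nonlinear_box superposition holds: False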

A black box is time-invariant if inserting x1 produces y1 whether we test it now or sometime later. Again, many natural systems are time-invariant for short periods, but, later on, inserting x1 into the system might produce a different output y3. As an example, sound propagation near the ground early in the morning will follow the same rules until the sun begins to heat the ground. As the ground warms up, it heats the air just above it, and the patterns of sound refraction change. Thus the transfer function for sound propagation will not be time-invariant when we compare early morning to mid-morning testing periods.

Transfer functions

Suppose we limit our attention to the range of inputs and time intervals over which the response of a black box of interest is linear and time-invariant. The transfer function that we can then compute has two parts: (1) the frequency response (which measures the change in amplitude of any given frequency in the signal as it passes through the box); and (2) the phase response (which measures the change in relative phase of each frequency as it passes through the box). These two components of the transfer function are usually summarized as graphs called Bode plots. In both graphs, the x-axis is frequency.

In the frequency response Bode plot, the vertical axis indicates the relative change in the amplitude of each frequency as it passes through the box. One could use a linear scale in which the vertical axis coordinate indicated the fractional change in the input magnitude of a frequency. On this scale, a value of 1 would mean no change, a value of 0.5 would be a halving of the input amplitude, and a value of 2.0 would mean that the box amplified that frequency to twice its input value. In practice, frequency response Bode plots use a logarithmic dB scale in which 0 means no change in amplitude, –6 dB means a halving of the input amplitude, and +6 dB means a doubling of the input amplitude.
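
The conversion from an amplitude ratio to decibels is 20 × log10(ratio); a minimal Python sketch of the values just quoted:

    import numpy as np

    def amplitude_ratio_to_db(ratio):
        return 20 * np.log10(ratio)

    amplitude_ratio_to_db(1.0)   # 0 dB: no change
    amplitude_ratio_to_db(0.5)   # about -6 dB: input amplitude halved
    amplitude_ratio_to_db(2.0)   # about +6 dB: input amplitude doubled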

Figure 1: Bode plot for frequency response (red line) of a sample black box. Dashed line indicates no change in amplitude of an input frequency at the output of the black box. Where the red line is above the dashed line, relevant frequencies are amplified during transit in the box; where the red line is below the dashed line, relevant frequencies have reduced (filtered) amplitudes.

In a phase response Bode plot, the vertical axis indicates the change in relative phase of each frequency component. The scaling is usually linear with 0 indicating no phase change, and plus and minus values indicating phase shifts measured either in degrees (360° for one full cycle) or radians (2π radians for one full cycle).

Figure 2: Bode plot for phase response (blue line) of a sample black box. Dashed line indicates no change in relative phase. Frequencies for which the blue line is above the dashed line are advanced during passage in box, and those for which the blue line is below the dashed line are retarded relative to the reference value.

The use of transfer functions in a linear system is shown in Figure 3:

Figure 3. Application of transfer functions to a real input signal with waveform shown in (A). The latter is first broken down into (B), its power spectrum (amplitude versus frequency), and (C), its phase spectrum (phase relative to one component, here marked with a dot, versus frequency). The amplitude of each frequency component in the power spectrum of the input signal is then increased or decreased according to the frequency response in (D) to produce the output signal power spectrum (F). Similarly, the phase of each component in the input phase spectrum (C) is advanced or retarded according to the phase response graph (E). The frequencies in the output power spectrum with their adjusted amplitudes (F) are then added together using their new phases (G) to produce the output waveform (H).

This procedure allows one to predict the power spectrum, phase spectrum, and waveform of any input signal as long as the black box remains linear and time-invariant and the transfer functions have been measured previously for that box.
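
In digital terms, the procedure of Figure 3 amounts to multiplying the input signal’s complex spectrum by the transfer function and resynthesizing. A minimal sketch follows; the low-pass gain curve and 1 ms delay standing in for a measured transfer function are invented for illustration:

    import numpy as np

    fs = 8000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)  # input waveform (A)

    X = np.fft.rfft(x)                    # amplitude and phase spectra (B, C)
    f = np.fft.rfftfreq(len(x), 1 / fs)

    # stand-in transfer function: a gentle low-pass frequency response (D)
    # and a 1 ms delay expressed as a phase response (E)
    gain = 1 / np.sqrt(1 + (f / 500) ** 2)
    phase_shift = -2 * np.pi * f * 0.001

    Y = X * gain * np.exp(1j * phase_shift)  # adjusted spectra (F, G)
    y = np.fft.irfft(Y, n=len(x))            # output waveform (H)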

Measurement techniques

How does one measure the transfer functions for a black box? In most cases involving animal communication, we are only interested in the frequency spectrum of the output signal and can ignore its phase spectrum and waveform. This simplifies our task considerably.

The basic approach is to broadcast a signal of known power spectrum through the black box and compare the power spectrum of the output signal to that of the input signal. In principle, any test sound could be used. In practice, most natural sounds lack certain frequency bands, which makes it impossible to know what would have happened to those missing frequencies had they been present. Frequency responses are therefore usually measured using signals that cover all frequencies of interest at approximately similar amplitudes. There are four basic ways this can be done:

  • Multiple frequency testing: Here, one introduces a single pure tone of a known frequency at a known amplitude, and measures the amplitude of this tone as it emerges from the black box. The frequency response is then constructed by repeating this process for many different frequencies and combining them to create the Bode plot.
    • Pros and cons: This method can be used in noisy environments because one can make each tone long enough to be detected at some point above the background. On the other hand, this method is the most tedious of the four listed here. In addition, long duration signals may create standing waves due to interference between outgoing sound and returning echoes.
  • FM signals: In this method, one plays back a frequency-modulated signal of constant amplitude that sweeps through the bandwidth of interest.
    • Pros and cons: A single FM pulse can provide a good initial sense of the frequency response of a black box very quickly. However, if the box has resonant properties with a high Q, the speed at which the signal sweeps through the frequencies may be too high to excite the natural resonance modes. Some frequency response measurement instruments actively slow down the sweep when the output indicates that a resonant mode is being measured. As a minimum rule, the sweep rate of FM test signals must be adjusted so that the duration of the FM signal is at least as long as the reciprocal of the bandwidth covered by the sweep.
  • Impulses: The Fourier decomposition of an instantaneous pulse consists of all frequencies present simultaneously at equal amplitude. Inserting a pulse into a black box and measuring the relative amplitudes of all emerging frequencies can thus produce the full frequency response very quickly.
    • Pros and cons: While the theoretical instantaneous impulse consists of all frequencies at equal amplitude, real impulses are finite in duration and this creates biases in the amplitudes of different frequencies that may need to be corrected later. The recording instrument must also have a fast enough response to record the output sound accurately, and, in noisy environments, it may be difficult to pick out a very brief impulse from background noise.
  • White noise: White noise, like impulses, theoretically consists of all frequencies at similar amplitudes. Exposing the black box to a segment of white noise should produce an output sound with a power spectrum that is a reasonably good replica of the frequency response of the system.
    • Pros and cons: In practice, one can only generate relatively even amplitudes of all frequencies within a given bandwidth. Also, white noise is more difficult to generate than “pink noise,” in which the relative amplitudes of the component frequencies decrease with increasing frequency. (A sketch of this noise-based measurement appears just after this list.)
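
To illustrate the white-noise approach, the sketch below probes a stand-in black box (an arbitrary band-pass filter, which would be the unknown system in a real measurement) with white noise, and recovers its frequency and phase responses using a standard cross-spectrum (“H1”) estimator:

    import numpy as np
    from scipy import signal

    fs = 48000
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10 * fs)     # 10 s of white-noise test signal

    # stand-in black box; in a real measurement this is the unknown system
    sos = signal.butter(2, [1000, 4000], btype="bandpass", fs=fs, output="sos")
    y = signal.sosfilt(sos, x)

    # H(f) estimated as the input-output cross-spectrum divided by the
    # input power spectrum, averaged over many segments
    f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)
    _, Pxx = signal.welch(x, fs=fs, nperseg=4096)
    H = Pxy / Pxx

    gain_db = 20 * np.log10(np.abs(H))   # frequency-response Bode plot values
    phase = np.unwrap(np.angle(H))       # phase-response Bode plot values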

More elaborate devices and techniques have been developed for measuring frequency responses, and a wide variety are discussed on the Internet. See also Wikipedia “Bode Plot.”

Potential problems

The major problems with measuring frequency responses in nature are noise and non-linearities. If noise is to be considered part of the black box response, then including it in the measurements is appropriate. However, in many cases we want to know how a specific process such as refraction, reflection, or scattering varies with habitat. In this case, we want to measure the frequency response of the system without noise. There are several ways to do this. One way is to measure the noise alone, without test signals, and subtract its average values from the frequency spectrum measured with test signals. There are also sophisticated instruments and statistical methods for removing noise from spectra after they have been recorded.
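
A minimal sketch of the averaging-and-subtracting approach, assuming one recording made while the test signal plays and a second recording of the background noise alone under the same conditions:

    import numpy as np
    from scipy import signal

    def noise_corrected_psd(with_signal, noise_only, fs, nperseg=4096):
        f, p_total = signal.welch(with_signal, fs=fs, nperseg=nperseg)
        _, p_noise = signal.welch(noise_only, fs=fs, nperseg=nperseg)
        # power spectra of uncorrelated sources add, so subtract in power
        # units (not dB) and clip small negatives left by estimation error
        return f, np.clip(p_total - p_noise, 0.0, None)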

Non-linearities are to be expected in nearly all natural systems. In many cases, exposing the system to test sound amplitudes above the range in which it responds linearly will generate harmonics of the test signal. Where one is using pure tones, this is easy to detect and correct. On the other hand, if natural signals occasionally reach levels at which the system responds non-linearly, then knowing that this results in the generation of harmonics of the input signal is useful. Other types of non-linear responses may be more difficult to quantify and characterize.

3.2 Dispersive Sound Propagation

Introduction

Most animal sounds and human speech consist of many different frequencies summed together. Each frequency has a given amplitude and phase relative to the others, and it is the particular combination of frequency, amplitude, and phase values that results in the waveform of the signal. In unbounded air and water, complex sounds propagate as a unit—all frequency components move at the same velocity and this preserves their initial alignment right to the ear of the receiver.

However, the component frequencies in a complex sound do not always propagate at the same velocity in all contexts. In certain situations, they travel at different speeds and thus get out of alignment. This changes both the spectrographic structure and the waveform of the signal. A medium in which different frequencies propagate at different velocities is said to be dispersive. The basic principles outlined here can be applied to light wave and sound wave propagation.

Group versus phase velocity

When a complex sound is generated in a dispersive medium, the entire ensemble of component frequencies initially radiates away from the source as a unit. However, because the component frequencies propagate at different individual speeds, known as phase (or wave) velocities, the alignment of the components changes as the signal propagates. If the medium were not dispersive, the part of the signal hosting the peak amplitude (the “envelope peak”) would propagate at the same speed as each component frequency. In dispersive media, the changes in component alignment due to different phase velocities cause the speed at which the signal peak moves, known as the group velocity, to differ from any component’s phase velocity. In some media and contexts, the group velocity of the signal is slower than any component phase velocity; in others, the group velocity exceeds the phase velocity of every frequency. Signal propagation in dispersive environments is thus highly dependent on context.
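
A numerical sketch of the distinction, using an invented dispersion rule in which phase velocity grows with the square root of frequency (as it does in solid bars; see the examples below):

    import numpy as np

    c0, f0 = 340.0, 1000.0      # reference phase velocity and frequency (invented)

    def k(f):
        # wavenumber k = 2*pi*f / c_phase, with c_phase = c0 * sqrt(f / f0)
        return 2 * np.pi * f / (c0 * np.sqrt(f / f0))

    f1, f2 = 990.0, 1010.0      # two nearby components of a complex signal
    v_phase = 2 * np.pi * f1 / k(f1)                   # speed of one component
    v_group = 2 * np.pi * (f2 - f1) / (k(f2) - k(f1))  # d(omega)/dk, approximated

    print(v_phase, v_group)     # for this rule the envelope moves at ~2x phase velocity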

Examples of dispersive propagation

Consider the following contexts, in all of which sound propagation is dispersive:

  • Solid bars and rods: Phase velocities in solid bars and rods increase with the square root of the component frequency and group velocities are greater than phase velocities.
  • Surface waves on Earth: Both Rayleigh and Love surface waves on the Earth’s surface are dispersive. In contrast to solid bars and rods, higher frequencies have phase velocities that are slower than those for low frequencies. Group velocities also tend to decrease with increasing frequency.
  • Surface waves on water: The water’s surface is kept flat in calm conditions by two forces: gravity and surface tension (a result of the attraction between water molecules). Low-frequency disturbances (<14 Hz) of the water’s surface are restored to a flat condition by gravity; propagation is dispersive, with higher frequencies (as long as they remain below 14 Hz) having slower phase velocities. High-frequency disturbances (>14 Hz) are restored by surface tension, and phase velocities increase with frequency (just the opposite of the gravity case). Group velocities are lower than phase velocities for low-frequency disturbances, and higher than phase velocities for high-frequency disturbances. Intermediate frequencies have the lowest phase velocities, with the minimum occurring at the crossover point near 14 Hz; for this single frequency, group and phase velocities are equal (see the dispersion sketch following this list). Dispersion is reduced in shallower water.
  • Bending waves in plants: Bending waves in plants move the plant tissue back and forth along a line perpendicular to the direction of propagation (which is along the stem or branch). While these waves are thus similar to transverse waves, they differ from them in that they cause a rhythmic bending back and forth of the entire stem. Phase velocities in plant stems are similar to those in other solid rods in that they increase with the square root of the component frequency. Group velocities are greater than component phase velocities.
  • Waveguides: Although waveguides can produce reduced spreading loss during propagation, and thus technically foster long-range communication, they are dispersive and can change the structure of propagated sound signals significantly. In an idealized waveguide, each propagating mode’s phase velocity decreases, and its group velocity increases, with increasing frequency, both approaching the free-field sound speed well above the mode’s cutoff frequency.
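
For the water-surface case above, the standard deep-water dispersion relation for capillary-gravity waves is ω² = gk + (σ/ρ)k³, where g is gravitational acceleration, σ is surface tension, and ρ is density. A short sketch recovers the phase-velocity minimum near 14 Hz numerically:

    import numpy as np

    g, sigma, rho = 9.81, 0.0728, 1000.0   # SI values for water at ~20° C

    def omega(k):
        return np.sqrt(g * k + (sigma / rho) * k ** 3)

    k = np.logspace(1, 4, 2000)            # wavenumbers (rad/m)
    v_phase = omega(k) / k
    v_group = np.gradient(omega(k), k)     # numerical d(omega)/dk

    i = np.argmin(v_phase)
    print(omega(k[i]) / (2 * np.pi))       # minimum phase velocity near 13-14 Hz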

Waveform changes with dispersion

How the waveform of a complex signal changes as it propagates dispersively depends on the signal’s initial composition and on the phase and group velocities in the propagating medium. Animated examples of dispersive propagation of simple waveforms can be found online.

Follow-up references

Fletcher, N. H. 1992. Acoustic Systems In Biology. New York: Oxford University Press.

Greenfield, M. D. 2002. Signalers and Receivers: Mechanisms and Evolution of Arthropod Communication. Oxford: Oxford University Press.

Markl, H. 1983. Vibrational communication. In Neuroethology and Behavioral Physiology (Huber, F. and H. Markl, eds.), pp. 332–353. Berlin: Springer.

3.3 Animal Communication and Anthropogenic Noise

Introduction

Human activities add anthropogenic noise to all environments, including, of course, those in which animals are attempting to communicate using sound. There are four basic ways that this noise may impact animals and their communication systems:

  • Distraction: Although the frequencies of anthropogenic noise may not overlap with an animal’s own sound communication system, loud and sudden noises may distract it from its necessary activities. Distracting sounds can interrupt foraging, reproduction, growth, territorial defense, predator vigilance, proper hygiene, sleep, provisioning, nursing, etc.
  • Masking: In this case, the frequency distributions of anthropogenic noise and an animal’s signals are sufficiently overlapping that some signals are masked. Receivers thus cannot detect or evaluate signals and normal communication exchanges are interrupted.
  • Stress: Loud and disturbing noises may induce hormonal and neural responses in animal receivers that are physiologically expensive.
  • Damage: The most typical damage is injury to sensitive auditory organs. At extreme amplitudes, ambient noise can destroy other tissues as well.

The effects wrought on animals by any of these perturbations can be short-term or long-term. An animal may be able to recover its hearing acuity after a short bout of damage-level noise, but not after continued exposure over extended periods of time.

Research approaches

Current research to assess the effects of anthropogenic noise on natural populations of animals takes several tacks:

  • Audiograms and ambient noise measurements: This approach involves measuring the frequency composition and amplitudes of ambient noise and comparing these to the range of frequencies that the focal animals can hear. Unless noise occurs at tissue-damaging levels, distraction, masking, or stress is likely only when noise frequencies overlap with auditory sensitivities.
  • Behavioral shifts: Here, one compares behaviors of animals in the presence and in the absence of anthropogenic noise. If the noise is sufficiently intermittent, one can compare behaviors when it is absent to when it is present. If the noise is persistent, one must find a control study site as similar to the noisy site as possible except for the presence of the noise. Behavior shifts that might be monitored when noise is present include any cessation of normal behaviors, shifts in animal signal frequencies or amplitudes to reduce masking, altered activity time budgets in the presence of noise, relocation to less noisy sites for signaling, etc.
  • Health and demographic shifts: Here again, one needs to compare a site with noise to one without in order to identify changes due to noise. Long-term health and demographic shifts might include higher mortality rates due to increased disease or predation, reduced reproductive success, lower recruitment of dispersing young, greater emigration of all age classes, etc.
  • Physical damage: Autopsies of dead animals known or suspected to have been exposed to high noise levels can usually reveal which organs or tissues, if any, are damaged, and can indicate whether this damage was likely caused directly or indirectly by ambient noise.

Current research examples

  • Cross-taxonomic reviews: General reviews of the impacts of anthropogenic noise on ecosystems and conservation strategies can be found in Barber et al. 2010 and Laiolo 2010.
  • Terrestrial invertebrates: Despite the vast numbers of terrestrial insects that communicate with far-field sounds, very little effort has yet been expended on the possible impact of anthropogenic noise on insect behavior. It is worth noting that most insects use signal frequencies higher than those of the more common sources of loud far-field anthropogenic noise on land (e.g., vehicular traffic). In fact, insects such as cicadas, crickets, and katydids are themselves the major source of ambient noise at these frequencies. Insects, scorpions, and spiders that communicate using substrate signals propagating in plants, webs, or the ground may be more susceptible to disturbance, since they use lower frequencies that are frequent components of anthropogenic substrate noise. However, this too has yet to receive much research attention.
  • Aquatic invertebrates: Because sound attenuation is so much lower in water than in air, aquatic anthropogenic noise can carry long distances. In addition, humans generate some extremely loud sounds in water. Examples include the use of air-guns for seismic mineral exploration, pile driving, long-range military and climate monitoring communication signals (ATOC, ACME), and sonar sounds. Cargo ships, Jet Skis, racing boats, and military vessels all produce loud noise in water. Again, the impact of any of this noise on aquatic invertebrates has been little studied. Lovell et al. (2005, 2006) measured audiograms in marine prawns and argued that they may well be affected by shipping and related human noises. The communication sounds of lobsters are inherently low volume and are likely to be masked by ambient sounds except very close to a signaling animal (Patek et al. 2009). A large number of other crustacean species on reefs are sensitive to sound and use this sensitivity both during larval settlement and as adults to avoid predators (Simpson et al. 2011). In both cases, anthropogenic noise may interfere with these normal uses of sound in the animals’ biology.
  • Fish: Although most fish can be highly sensitive to low-amplitude sounds, the hearing of the majority of species, which are hearing generalists, is limited to frequencies considerably below 1 kHz. Hearing specialists such as carp, goldfish, and catfish can hear frequencies up to several kHz, and with greater sensitivity. Whereas hearing generalists can probably hear racing boat noise only at short ranges, hearing specialists can detect it at distances of several hundred meters (Amoser et al. 2004). A similar pattern shows up for air-gun noise (Mann et al. 2007). There is growing concern about the effects of human noise on fish (Slabbekoorn et al. 2010). Aquatic anthropogenic noise has been shown to perturb the normal behavior of several fish species (Popper 2003; Popper et al. 2003; Purser and Radford 2011), induce hormonal responses indicative of stress (Smith et al. 2004; Wysocki et al. 2006), and, in the case of air-guns and loud shipping noise, produce short-term (Smith et al. 2004) and permanent damage to fish ears (McCauley et al. 2003; Wysocki and Ladich 2005). Hearing specialists appear particularly vulnerable to ear damage (Scholik and Yan 2002a,b).
  • Amphibians and reptiles: Frogs, toads, lizards, and turtles are all potentially vulnerable to both airborne and substrate-propagated anthropogenic noise. Some species of frogs increase calling rates, and consequently reduce both evening chorus durations and seasonal calling activity, in the presence of anthropogenic noise (Sun and Narins 2005; Kaiser and Hammers 2009; Kaiser et al. 2011). Other species increase the pitch of their calls or reduce calling altogether when exposed to nearby traffic noise (Lengagne 2008; Parris et al. 2009). Frogs and turtles may also be exposed to aquatic noise. A study of aquatic anthropogenic noise in a New York estuary frequented by marine turtles found significant amplitudes of noise within the known auditory sensitivities of the turtles (Samuel et al. 2005). Whether this noise has any effect on the turtles remains unstudied.
  • Birds: Lab studies clearly show that current levels of ambient noise can significantly mask the communication signals of birds (Lohr et al. 2003; Pohl et al. 2009). Species vary in their responses to this problem. Nightingales increase their song amplitude and decrease the number of notes per song in high ambient noise (Brumm and Todt 2002; Brumm 2004). House finches, song sparrows, blackbirds, and great tits living in noisy urban environments sing louder and faster, include fewer notes, and shift the minimum frequencies of their songs to higher values than birds outside urban areas (Slabbekoorn and Peet 2003; Fernandez-Juricic et al. 2005; Slabbekoorn and den Boer-Visser 2006; Wood and Yezerinac 2006; Bermudez-Cuamatzin et al. 2009; Mockford and Marshall 2009; Nemeth and Brumm 2009). In a study comparing relatively silent and noisy but otherwise matched sites, male ovenbirds showed significantly lower pairing success in the noisy sites (Habib et al. 2007). European robins close to urban areas decrease singing during noisy daytime hours and increase singing at night (Fuller et al. 2007). Anthropogenic noise also seems to affect bird dispersal and settlement patterns. Birds that normally sing at higher frequencies are more likely to settle in urban areas than those with lower frequencies (Hu and Cardoso 2009). Shifts in songbird dialect distributions may also arise from changing urbanization (Luther and Baptista 2010). European birds show significantly lower nesting densities in zones along highways and the effect increases with the level of vehicular traffic (Reijnen et al. 1996; Reijnen et al. 1997). The densities of nesting passerines in otherwise similar boreal forests were 50% higher when no gas compressors and other noisy facilities were nearby (Bayne et al. 2008). Reviews discussing other possible effects of anthropogenic noise on birds can be found in Katti and Warren (2004), Patricelli and Blickley (2006), and Warren et al. (2006).
  • Terrestrial mammals: There are currently few data characterizing the impact of anthropogenic noise on terrestrial mammals. Captive marmosets increase the amplitude and duration of their calls when exposed to continuous white noise (Brumm 2004). Ground squirrels near wind turbines increase the amplitudes of their alarm calls and shift energy to higher harmonics; squirrels near turbines are also more wary, presumably because their alarm calls can be detected over only shorter distances (Rabin and Greene 2002; Rabin et al. 2003; Rabin et al. 2006). Captive bats actively avoid foraging in high levels of ambient noise (Schaub et al. 2008), and wild bats tend to leave their roosts to forage later when an adjacent music festival is in session (Shirley et al. 2001). Anthropogenic noise propagating in the substrate may also interfere with elephant seismic communication (O’Connell-Rodwell et al. 2001).
  • Marine mammals: Perhaps because of the ubiquity and amplitude of anthropogenic noise in the oceans, much is now known about the potential and realized impacts of such sound on the behavior of marine mammals. Field measurements indicate that air-guns used in geological exploration, pile driving, and intensive boat traffic can produce sounds loud enough to mask marine mammal communication and echolocation sounds at considerable distances (Goold and Fish 1998; Erbe and Farmer 2000a,b; Southall et al. 2003; Boebel et al. 2005; David 2006; Madsen et al. 2006; Jensen et al. 2009; Bailey et al. 2010; Di Iorio and Clark 2010; Brandt et al. 2011; Gedamke et al. 2011). While some anthropogenic sounds appear to have little effect on the behavior of nearby marine mammals (Croll et al. 2001; Costa et al. 2003; Lemon et al. 2006), other studies show increases in call duration (Miller et al. 2000; Foote et al. 2004) and call rate (Buckstaff 2004) or a temporary cessation in acoustic communication when noisy vessels are nearby (Lesage et al. 1999). Manatees shift normal behavior patterns in the presence of high levels of noise (Miksis-Olds and Wagner 2011), and a variety of pinnipeds are known to avoid noisy areas (Gotz and Janik 2010). Captive animals exposed to high levels of anthropogenic sound show significant nervous and immune system effects (Romano et al. 2004). Beaked whales (Ziphiidae) have been shown to make the deepest and longest dives known among cetaceans (Tyack et al. 2006); examination of beached Ziphiids after military tests of intense sonar have suggested that these anthropogenic sounds may disrupt the slow ascent necessary to prevent gas-bubble formation and thus cause tissue damage and death (Cox et al. 2006). Whether this is true is still under investigation. General reviews on the impact of anthropogenic noise on marine mammals can be found in Richardson et al. (1995), Simmonds et al. (2004), Nowacek et al. (2007), and Weilgart (2007).

Mitigation

The research so far on animal responses to anthropogenic noise indicates that at least some frogs, birds, and mammals can modify their sound signals or signaling schedules to minimize masking. However, very loud or sudden sounds can cause stress and damage to animal receivers, either directly (as with fish ears) or indirectly (as with beaked whales). Efforts are currently underway to design quieter shipping, and large-scale monitoring schemes using underwater sound (e.g., ATOC, ACME) are the subject of considerable discussion. However, the popularity of air-guns for seismic exploration in marine environments makes abatement of this source of noise unlikely, and vehicular traffic in urban areas is similarly hard to reduce given current human activities. On the other hand, the use of all-terrain vehicles, snowmobiles, and Jet Skis in national parks and refuges could be restricted if sufficient research shows that such activities affect the survival and reproduction of protected species. Clearly, more research is needed to fill gaps in our knowledge about specific taxa and levels of effect.

Literature cited

Amoser, S., L. E. Wysocki, and F. Ladich. 2004. Noise emission during the first powerboat race in an Alpine lake and potential impact on fish communities. Journal of the Acoustical Society of America 116: 3789–3797.

Bailey, H., B. Senior, D. Simmons, J. Rusin, G. Picken, and P. M. Thompson. 2010. Assessing underwater noise levels during pile-driving at an offshore windfarm and its potential effects on marine mammals. Marine Pollution Bulletin 60: 888–897.

Barber, J. R., K. R. Crooks, and K. M. Fristrup. 2010. The costs of chronic noise exposure for terrestrial organisms. Trends in Ecology and Evolution 25: 180–189.

Bermudez-Cuamatzin, E., A. A. Rios-Chelen, D. Gil, and C. M. Garcia. 2009. Strategies of song adaptation to urban noise in the house finch: syllable pitch plasticity or differential syllable use? Behaviour 146: 1269–1286.

Boebel, O., P. Clarkson, R. Coates, R. Larter, P. E. O’Brien, J. Ploetz, C. Summerhayes, T. Tyack, D. W. H. Walton, and D. Wartzok. 2005. Risks posed to the Antarctic marine environment by acoustic instruments: a structured analysis. Antarctic Science 17: 533–540.

Brandt, M. J., A. Diederichs, K. Betke, and G. Nehls. 2011. Responses of harbour porpoises to pile driving at the Horns Rev II offshore wind farm in the Danish North Sea. Marine Ecology-Progress Series 421: 205–216.

Brumm, H. 2004. The impact of environmental noise on song amplitude in a territorial bird. Journal of Animal Ecology 73: 434–440.

Brumm, H. and D. Todt. 2002. Noise-dependent song amplitude regulation in a territorial songbird. Animal Behaviour 63: 891–897.

Buckstaff, K. C. 2004. Effects of watercraft noise on the acoustic behavior of bottlenose dolphins, Tursiops truncatus, in Sarasota Bay, Florida. Marine Mammal Science 20: 709–725.

Costa, D. P., D. E. Crocker, J. Gedamke, P. M. Webb, D. S. Houser, S. B. Blackwell, D. Waples, S. A. Hayes, and B. J. Le Boeuf. 2003. The effect of a low-frequency sound source (acoustic thermometry of the ocean climate) on the diving behavior of juvenile northern elephant seals, Mirounga angustirostris. Journal of the Acoustical Society of America 113: 1155–1165.

Cox, T. M., T. J. Ragen, A. J. Read, E. Vos, R. W. Baird, K. Balcomb, J. Barlow, J. Caldwell, T. Cranford, L. Crum, A. D’Amico, G. D. Spain, A. Fernandez, J. J. Finneran, R. Gentry, W. Gerth, F. Gulland, J. Hildebrand, D. Houser, T. Hullar, P. D. Jepson, D. Ketten, C. D. MacLeod, P. Miller, S. Moore, D. C. Mountain, D. Palka, P. Ponganis, S. Rommel, T. Rowles, B. Taylor, P. Tyack, D. Wartzok, R. Gisiner, J. Mead, and L. Benner. 2006. Understanding the impacts of anthropogenic sound on beaked whales. Journal of Cetacean Research and Management 7: 177–187.

Croll, D. A., C. W. Clark, J. Calambokidis, W. T. Ellison, and B. R. Tershy. 2001. Effect of anthropogenic low-frequency noise on the foraging ecology of Balaenoptera whales. Animal Conservation 4: 13–27.

David, J. A. 2006. Likely sensitivity of bottlenose dolphins to pile-driving noise. Water and Environment Journal 20: 48–54.

Di Iorio, L. and C. W. Clark. 2010. Exposure to seismic survey alters blue whale acoustic communication. Biology Letters 6: 51–54.

Erbe, C. and D. M. Farmer. 2000a. A software model to estimate zones of impact on marine mammals around anthropogenic noise. Journal of the Acoustical Society of America 108: 1327–1331.

Erbe, C. and D. M. Farmer. 2000b. Zones of impact around icebreakers affecting beluga whales in the Beaufort Sea. Journal of the Acoustical Society of America 108: 1332–1340.

Fernandez-Juricic, E., R. Poston, K. de Collibus, C. Martin, K. Jones, and R. Treminio. 2005. Microhabitat selection and singing behavior patterns of male house finches (Carpodacus mexicanus) in urban parks in a heavily urbanized landscape in the western U.S. Urban Habitats 3: 49–69.

Foote, A. D., R. W. Osborne, and A. R. Hoelzel. 2004. Whale-call response to masking boat noise. Nature 428: 910.

Fuller, R. A., P. H. Warren, and K. J. Gaston. 2007. Daytime noise predicts nocturnal singing in urban robins. Biology Letters 3: 368–370.

Gedamke, J., N. Gales, and S. Frydman. 2011. Assessing risk of baleen whale hearing loss from seismic surveys: The effect of uncertainty and individual variation. Journal of the Acoustical Society of America 129: 496–506.

Goold, J. C. and P. J. Fish. 1998. Broadband spectra of seismic survey air-gun emissions, with reference to dolphin auditory thresholds. Journal of the Acoustical Society of America 103: 2177–2184.

Gotz, T. and V. M. Janik. 2010. Aversiveness of sounds in phocid seals: psycho-physiological factors, learning processes and motivation. Journal of Experimental Biology 213: 1536–1548.

Hu, Y. and G. C. Cardoso. 2009. Are bird species that vocalize at higher frequencies preadapted to inhabit noisy urban areas? Behavioral Ecology 20: 1268–1273.

Jensen, F. H., L. Bejder, M. Wahlberg, N. A. Soto, M. Johnson, and P. T. Madsen. 2009. Vessel noise effects on delphinid communication. Marine Ecology-Progress Series 395: 161–175.

Kaiser, K. and J. L. Hammers. 2009. The effect of anthropogenic noise on male advertisement call rate in the neotropical treefrog, Dendropsophus triangulum. Behaviour 146: 1053–1069.

Kaiser, K., D. G. Scofield, M. Alloush, R. M. Jones, S. Marczak, K. Martineau, M. A. Oliva, and P. M. Narins. 2011. When sounds collide: the effect of anthropogenic noise on a breeding assemblage of frogs in Belize, Central America. Behaviour 148: 215–232.

Laiolo, P. 2010. The emerging significance of bioacoustics in animal species conservation. Biological Conservation 143: 1635–1645.

Lemon, M., T. P. Lynch, D. H. Cato, and R. G. Harcourt. 2006. Response of travelling bottlenose dolphins (Tursiops aduncus) to experimental approaches by a powerboat in Jervis Bay, New South Wales, Australia. Biological Conservation 127: 363–372.

Lengagne, T. 2008. Traffic noise affects communication behaviour in a breeding anuran, Hyla arborea. Biological Conservation 141: 2023–2031.

Lesage, V., C. Barrette, M. C. S. Kingsley, and B. Sjare. 1999. The effect of vessel noise on the vocal behavior of Belugas in the St. Lawrence River estuary, Canada. Marine Mammal Science 15: 65–84.

Lovell, J. M., M. M. Findlay, R. M. Moate, and H. Y. Yan. 2005. The hearing abilities of the prawn Palaemon serratus. Comparative Biochemistry and Physiology A-Molecular and Integrative Physiology 140: 89–100.

Lovell, J. M., R. M. Moate, L. Christiansen, and M. M. Findlay. 2006. The relationship between body size and evoked potentials from the statocysts of the prawn Palaemon serratus. Journal of Experimental Biology 209: 2480–2485.

Luther, D. and L. Baptista. 2010. Urban noise and the cultural evolution of bird songs. Proceedings of the Royal Society B-Biological Sciences 277: 469–473.

Madsen, P. T., M. Johnson, P. J. O. Miller, N. A. Soto, J. Lynch, and P. Tyack. 2006. Quantitative measures of air-gun pulses recorded on sperm whales (Physeter macrocephalus) using acoustic tags during controlled exposure experiments. Journal of the Acoustical Society of America 120: 2366–2379.

Mann, D. A., P. A. Cott, B. W. Hanna, and A. N. Popper. 2007. Hearing in eight species of northern Canadian freshwater fishes. Journal of Fish Biology 70: 109–120.

McCauley, R. D., J. Fewtrell, and A. N. Popper. 2003. High intensity anthropogenic sound damages fish ears. Journal of the Acoustical Society of America 113: 638–642.

Miksis-Olds, J. L. and T. Wagner. 2011. Behavioral response of manatees to variations in environmental sound levels. Marine Mammal Science 27: 130–148.

Miller, P. J. O., N. Biassoni, A. Samuels, and P. L. Tyack. 2000. Whale songs lengthen in response to sonar. Nature 405: 903.

Mockford, E. J. and R. C. Marshall. 2009. Effects of urban noise on song and response behaviour in great tits. Proceedings of the Royal Society B-Biological Sciences 276: 2979–2985.

Nemeth, E. and H. Brumm. 2009. Blackbirds sing higher-pitched songs in cities: adaptation to habitat acoustics or side-effect of urbanization? Animal Behaviour 78: 637–641.

O’Connell-Rodwell, C. E., L. A. Hart, and B. T. Arnason. 2001. Exploring the potential use of seismic waves as a communication channel by elephants and other large mammals. American Zoologist 41: 1157–1170.

Parris, K. M., M. Velik-Lord, and J. M. A. North. 2009. Frogs call at a higher pitch in traffic noise. Ecology and Society 14: Article 5.

Patek, S. N., L. E. Shipp, and E. R. Staaterman. 2009. The acoustics and acoustic behavior of the California spiny lobster (Panulirus interruptus). Journal of the Acoustical Society of America 125: 3434–3443.

Popper, A. N. 2003. Effects of anthropogenic sounds on fishes. Fisheries 28: 24–31.

Popper, A. N., J. Fewtrell, M. E. Smith, and R. D. McCauley. 2003. Anthropogenic sound: Effects on the behavior and physiology of fishes. Marine Technology Society Journal 37: 35–40.

Purser, J. and A. N. Radford. 2011. Acoustic noise induces attention shifts and reduces foraging performance in three-spined sticklebacks (Gasterosteus aculeatus). PLoS ONE 6: article e17478.

Rabin, L. A., R. G. Coss, and D. H. Owings. 2006. The effects of wind turbines on antipredator behavior in California ground squirrels (Spermophilus beecheyi). Biological Conservation 131: 410–420.

Rabin, L. A. and C. M. Greene. 2002. Changes to acoustic communication systems in human-altered environments. Journal of Comparative Psychology 116: 137–141.

Rabin, L. A., B. McGowan, S. L. Hooper, and D. H. Owings. 2003. Anthropogenic noise and its effect on animal communication: an interface between comparative psychology and conservation biology. International Journal of Comparative Psychology 16: 172–192.

Reijnen, R., R. Foppen, and H. Meeuwsen. 1996. The effects of traffic on the density of breeding birds in Dutch agricultural grasslands. Biological Conservation 75: 255–260.

Reijnen, R., R. Foppen, and G. Veenbaas. 1997. Disturbance by traffic of breeding birds: Evaluation of the effect and considerations in planning and managing road corridors. Biodiversity and Conservation 6: 567–581.

Romano, T. A., M. J. Keogh, C. Kelly, P. Feng, L. Berk, C. E. Schlundt, D. A. Carder, and J. J. Finneran. 2004. Anthropogenic sound and marine mammal health: measures of the nervous and immune systems before and after intense sound exposure. Canadian Journal of Fisheries and Aquatic Sciences 61: 1124–1134.

Samuel, Y., S. J. Morreale, C. W. Clark, C. H. Greene, and M. E. Richmond. 2005. Underwater, low-frequency noise in a coastal sea turtle habitat. Journal of the Acoustical Society of America 117: 1465–1472.

Schaub, A., J. Ostwald, and B. M. Siemers. 2008. Foraging bats avoid noise. Journal of Experimental Biology 211: 3174–3180.

Scholik, A. R. and H. Y. Yan. 2002a. Effects of boat engine noise on the auditory sensitivity of the fathead minnow, Pimephales promelas. Environmental Biology of Fishes 63: 203–209.

Scholik, A. R. and H. Y. Yan. 2002b. The effects of noise on the auditory sensitivity of the bluegill sunfish, Lepomis macrochirus. Comparative Biochemistry and Physiology A-Molecular and Integrative Physiology 133: 43–52.

Shirley, M. D. F., V. L. Armitage, T. L. Barden, M. Gough, P. W. W. Lurz, D. E. Oatway, A. B. South, and S. P. Rushton. 2001. Assessing the impact of a music festival on the emergence behaviour of a breeding colony of Daubenton’s bats (Myotis daubentonii). Journal of Zoology 254: 367–373.

Simpson, S. D., A. N. Radford, E. J. Tickle, M. G. Meekan, and A. G. Jeffs. 2011. Adaptive avoidance of reef noise. PLoS ONE 6: article e16625.

Slabbekoorn, H., N. Bouton, I. van Opzeeland, A. Coers, C. ten Cate, and A. N. Popper. 2010. A noisy spring: the impact of globally rising underwater sound levels on fish. Trends in Ecology and Evolution 25: 419–427.

Slabbekoorn, H. and A. den Boer-Visser. 2006. Cities change the songs of birds. Current Biology 16: 2326–2331.

Slabbekoorn, H. and M. Peet. 2003. Birds sing at a higher pitch in urban noise. Nature 424: 267.

Smith, M. E., A. S. Kane, and A. N. Popper. 2004. Noise-induced stress response and hearing loss in goldfish (Carassius auratus). Journal of Experimental Biology 207: 427–435.

Southall, B. L., R. J. Schusterman, and D. Kastak. 2003. Auditory masking in three pinnipeds: aerial critical ratios and direct critical bandwidth measurements. Journal of the Acoustical Society of America 114: 1660–1666.

Sun, J. W. C. and P. A. Narins. 2005. Anthropogenic sounds differentially affect amphibian call rate. Biological Conservation 121: 419–427.

Tyack, P. L., M. Johnson, N. A. Soto, A. Sturlese, and P. T. Madsen. 2006. Extreme diving of beaked whales. Journal of Experimental Biology 209: 4238–4253.

Wood, W. E. and S. M. Yezerinac. 2006. Song sparrow (Melospiza melodia) song varies with urban noise. Auk 123: 650–659.

Wysocki, L. E., J. P. Dittami, and F. Ladich. 2006. Ship noise and cortisol secretion in European freshwater fishes. Biological Conservation 128: 501–508.

Wysocki, L. E. and F. Ladich. 2005. Hearing in fishes under noise conditions. JARO 6: 28–36.

3.4 Levers and Ears

Linear kinetics

Linear kinetics apply when a force is applied steadily in a single direction for some period of time. The rules of linear kinetics are insufficient to explain processes such as sound, in which the relevant forces reverse direction rapidly. However, linear processes form the basis from which acoustic kinetics are derived. Some useful definitions for linear kinetics follow (a worked numerical example appears after the list):

  • Basic kinetics: Suppose a force, F, is applied to a static object for a given time period, t. If the force is sufficiently strong, it will begin to accelerate the object and will continue to do so until the fixed time period is completed. During this time, the object will move a given distance, d, from its starting point. The average velocity, v, achieved by the object is equal to the distance moved divided by the time that the force was applied: v = d/t. Velocity and distance traveled are thus proportional to each other for a fixed time interval.
  • Mechanical impedance: If the object is resistant to being accelerated by this force, it will not have moved very far during the time t. This resistance to being accelerated is called the mechanical impedance, z, of the object. It can be measured by dividing the force, F, by the velocity achieved, v: z = F/v. The further the object moves under force F in time t, the lower the impedance.
  • Work: The total work, W, done by moving the object is the product of the force F and the distance moved d: W = Fd.
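
A worked numerical example of these definitions (all values invented):

    F = 2.0       # N, applied force
    t = 3.0       # s, time the force acts
    d = 6.0       # m, distance moved in that time

    v = d / t     # 2.0 m/s average velocity
    z = F / v     # 1.0 N*s/m mechanical impedance
    W = F * d     # 12.0 J of work done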

Vibratory kinetics

Sounds and other vibratory processes experience a recurrent reversal of the direction of the relevant forces. The response of the system being forced may differ depending upon the frequency of these reversals. This requires some modifications of the definitions used in linear kinetics as follows:

  • Vibratory mechanical impedance: When a sinusoidally varying force, F, acts on an object and induces a sinusoidally varying velocity, u, of that object, the mechanical impedance is defined as z = F/u. The variable z is measured in N·s/m. If the phase of the induced velocity differs from that of the force, z is given as a complex quantity (i.e., it includes both real and imaginary components).
  • Characteristic acoustic impedance: Characteristic acoustic impedance is a property of a medium. When an acoustic plane wave travels in an effectively unbounded medium (e.g., in a large volume of air or water), the acoustic pressure, P (measured in pascals), at each point is proportional to the average particle velocity, u (measured in m/s). The characteristic impedance of an unbounded volume of medium is then defined as the proportionality constant Zc. Thus P = Zc × u, or equivalently Zc = P/u. Here, Zc is measured in units of Pa·s/m. Note that the characteristic acoustic impedance does not depend on frequency. In air at 20° C, Zc is around 420 Pa·s/m, whereas in water it is about 1.5 × 10⁶ Pa·s/m.
  • Acoustic impedance Za: In bounded contexts, such as at the opening of an animal’s ear canal, inside the canal, at an eardrum, or in the terrestrial vertebrate middle ear bones that conduct sounds into the inner ear, the ability of a localized region of medium to propagate an oscillating wave will differ from that in a large unbounded volume of the same medium. The response will depend on the area, S, of responding medium exposed to the oscillating sound pressures. During one cycle of the oscillating pressure, medium in the area S will be moved a distance d in the direction of the force for t seconds. The total volume moved will be S × d, and the average volume velocity will be U = S × d/t. Since d/t = u, the particle velocity, we can also write this as U = S × u. The acoustic impedance of this patch of medium is then Za = P/U, measured in units of Pa·s/m³. Because there is usually a phase difference between the pressure variations and the associated volume velocity, Za is written as a complex number.
  • Specific acoustic impedance Zs: This measure controls for the area of medium responding in the bounded case above. It thus provides a measure similar to the characteristic acoustic impedance of an unbounded volume of medium, but based on the response of a bounded volume. The specific acoustic impedance is Zs = S × Za and has the same units (Pa·s/m) as characteristic acoustic impedance. (A numerical sketch tying these measures together follows this list.)
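
A short sketch linking these measures, using the air value quoted above and an invented patch area and sound pressure:

    S = 5.0e-5          # m^2, area of a hypothetical responding patch
    P = 0.02            # Pa, sound pressure acting on the patch
    Zc_air = 420.0      # Pa*s/m, characteristic impedance of air (approx.)

    u = P / Zc_air      # particle velocity if the patch responds like free air
    U = S * u           # volume velocity, m^3/s
    Za = P / U          # acoustic impedance, Pa*s/m^3
    Zs = S * Za         # specific acoustic impedance, Pa*s/m (equals P/u here)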

Levers as impedance transformers

For both linear and vibratory systems, levers (or their fluid equivalents) are used to convert one ratio of force/velocity (e.g., one impedance) into another such ratio (another impedance). To see the general process, consider a solid beam resting on a balance point (fulcrum) at some location between the two ends of the beam. When one applies a force and moves one end of this lever a certain distance, one does work. By the conservation of energy principle, the same amount of work must simultaneously be done at the other end of the beam. Suppose the fulcrum is not at the midpoint of the beam so that one end of the beam passes through a larger arc than the other end when moved. For the work at the two ends to be the same, it follows that the force applied to the large-arc end must be less than that applied at the other end. By positioning the fulcrum off-center, a lever thus becomes an impedance transformer: large displacements at low force at one end can be turned into low displacements at high force at the other end, and vice versa.

Types of mechanical levers

Type 1 mechanical lever

This is a classical see-saw device. One balances the beam on a fulcrum at some location between the two ends of the lever. If the fulcrum is placed exactly in the middle of the beam, exerting a force and causing movement on one end of the beam (effort end) is replicated exactly (but in the opposite direction) on the other end of the beam (load end). There is no mechanical advantage to this geometry. If the fulcrum is moved toward the load end of the beam, a large movement at little force on the effort end results in a small movement but high force on the load end. Thus, someone on the effort end can use a type 1 lever to lift a heavy load more easily than if they try to lift it without a lever. The claw on the rear of a hammer works this way to remove nails. Scissors or pliers consist of a double-beam type 1 lever.

Type 2 mechanical lever

Here, the fulcrum is placed at one end of the beam. The effort is applied to the other end, and the load is placed at some point in the middle. A classic example is a wheelbarrow. A nutcracker is a two-beam type 2 lever system.

Type 3 mechanical lever

As with a Type 2 lever, the fulcrum is again placed at one end of the beam, but now the load and effort points are reversed, with the load on the opposite end of the beam from the fulcrum and the effort applied at some intermediate location. Many muscles that operate animal limbs work as type 3 levers. A pair of tweezers is a two-beam example.

Folded see-saw mechanical lever

This is a variation of a type 1 lever in which the beam is folded at the fulcrum so that the two resulting beam segments (called arms) maintain a fixed angle between them. When one arm is forced to rotate around the fulcrum in a particular direction, the other arm rotates in the same direction. While the two arms rotate at the same angular velocity (degrees/sec), the tip of the longer arm sweeps through a longer path than does the tip of the shorter arm. The mechanical advantages of the folded type 1 lever are exactly the same as for an unfolded one: if one arm is twice as long as the other, it will rotate through an arc twice as long as that of the shorter arm, and the shorter arm will exert twice the force of the longer arm.

Hydraulics

Hydraulics function in a manner analogous to solid levers. The difference is that fluids are used to exert the forces. In a typical hydraulic system, two surfaces of different area are connected by a relatively incompressible fluid inside a rigid tube or cavity. When pressure (force/unit area) is applied to one of these surfaces, the total force applied is equal to the product of the pressure and the area of the surface. This same force is applied by the fluid to the second surface. If the force generates movement, both surfaces will move in the same direction, a distance d, and thus both will do the same work, as expected for a lever. Note, however, that if the two surfaces have different areas, the smaller surface will experience a higher pressure (i.e., the same force divided by a smaller area) than will the larger surface. Thus, this type of device can act as a transformer for acoustic impedance.

A similar principle is used to create hydraulic brakes and jacks, but in these cases, the effort is applied to the smaller surface and the larger surface carries the load.

Two cylinders are linked by a tube at the bottom, the whole is filled with an incompressible fluid, and each cylinder is equipped with a piston (the two surfaces). When one pushes down on the small piston, the pressure throughout the fluid increases. The value of this pressure is the force exerted divided by the area of the small piston. Because pressure must be the same throughout a static fluid, the large piston now experiences this increased pressure from the fluid. The total force on it is the product of the fluid pressure and its surface area. Because that area is larger than the area of the small piston, the large piston experiences a greater force, and thus a mechanical advantage for lifting heavy weights. As the large piston moves upward under this force, it brings the total volume of fluid in the cylinders back toward the value it had before the small piston was depressed; this lowers the fluid pressure back to its starting point, and the large piston stops moving. Because each increment of movement by the large piston changes the fluid volume faster than a similar movement of the small piston, the distance traveled by the large piston is shorter than that moved by the small piston. In the end, the result is just like a type 1 mechanical lever: one end of the system moves a long distance at low force, and the other end moves a short distance at high force.
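
A worked sketch of this transformation with invented piston areas:

    small_area = 1.0e-4    # m^2, effort piston
    large_area = 2.5e-3    # m^2, load piston

    effort_force = 50.0                          # N applied to the small piston
    pressure = effort_force / small_area         # 5.0e5 Pa throughout the fluid
    load_force = pressure * large_area           # 1250 N on the large piston

    effort_stroke = 0.10                                    # m moved by small piston
    load_stroke = effort_stroke * small_area / large_area   # 0.004 m moved by large piston
    # work is conserved: 50 N x 0.10 m = 1250 N x 0.004 m = 5 J at both ends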

Catenary levers

When a cable is attached at both ends to some fixed points and allowed to sag, the points close to the attachment experience the greatest force, since the whole cable is pulling down on them, and the least mobility, since they are closest to the attachment point. Points in the middle experience the least force and have the greatest mobility. A similar effect occurs when a circular membrane is attached at its margins and is forced to bend inward or outward. The high force and low mobility at the margins of the cable or membrane are transformed into low force and high displacements at the center. A small force applied to the center of the cable or membrane results in major displacement at the center, but a smaller displacement at considerable force near the margins.

Auditory levers

Animal ears often face impedance mismatch problems. For example, all terrestrial vertebrate inner ears are filled with fluids that must be set in motion to stimulate the auditory sensory cells. The high acoustic impedance of the fluid-filled inner ear requires a high-pressure, low-velocity, low-displacement source to set the fluids into motion. The available stimuli are sound waves in air, which are low-pressure, high-velocity, and high-displacement. Without some form of acoustic impedance matching, most of the incident sound energy would be reflected away from the animal’s tympana. Terrestrial vertebrates use combinations of mechanical levers (types 1–3), hydraulic levers, and catenary levers to achieve effective impedance matches. Some examples follow.

Frogs and toads

A typical frog ear is diagrammed anatomically on the left and mechanically on the right of the figure.

Frog and toad middle ears contain three articulated cartilaginous or bony elements. The extrastapes (or extracolumella) acts approximately like a folded type 1 lever. One arm of the lever connects with the inside of the tympanum. An ascending process extends from one side and attaches to the skull. This provides a fulcrum for this element. The other arm of the lever attaches to the outside end of the stapes (or columella). The stapes is a long thin element that attaches on its inner end to the middle of the footplate element. The latter is hinged to the inner ear capsule. Together, the stapes and hinged footplate form a type 3 mechanical lever. The footplate presses in on the oval window and transfers its motions to the fluid-filled cavity on the other side of the membrane. The two lever systems are given different colors in the diagram on the right. White circles indicate articulations, the vertical dashed line shows the tympanum, and green triangles indicate fulcra. Given the orientation of the two sequential levers, the footplate and oval window membrane move outward when the tympanum moves inward. The tympanum and oval window thus move 180° out-of-phase in frogs and toads (Mason and Narins 2002; Werner 2003). The mechanical advantage of this ossicular system in male bullfrogs is about 5.7:1. Given the relative sizes of the tympanum and oval window in male bullfrogs, the hydraulic advantage is 50:1. An additional mechanical advantage may accrue from some flexibility and springiness in the extrastapes element (Mason and Narins 2002), and there is some evidence of a catenary advantage due to the bending of the tympanum in frogs (Moffat and Capranica 1978).

Birds and reptiles

Reptiles and birds also use a three-ossicle lever system, but the ossicles are arranged to form a single type 2 lever (Saunders et al. 2000). The extracolumella is a rod-shaped element anchored to the skull at the inner edge of the tympanum. The other end extends over and past the center of the inside surface of the tympanum. The second element, the columella, is a long thin bone that articulates with an intermediate point along the length of the extracolumella. On its other end, it attaches to the footplate which, again, sits over the oval window. It is held in place by soft tissues and by a ligament. The diagram shows a simplified lizard middle ear.

In birds, the extracolumella spreads three arms instead of one over the inner surface of the tympanum; two of these can be hinged to form fulcra on the periphery of the tympanum (Saunders et al. 2000). In addition, the columella in birds connects to the extracolumella at an acute angle (less than 30° relative to the plane of the tympanum). This change in angle significantly changes the directions of motion of the lever elements. In addition, the ligament holding the footplate over the oval window may act as a hinge on one side (as with frogs and toads), adding a second lever system (Gaudin 1968).

The mechanical advantage of the middle ear ossicles in living reptiles and birds is typically about 2:1 to 4:1. The hydraulic advantage, given the difference in tympanum and oval window sizes, is 13:1 in collared lizards and ranges from 11:1 to 40:1 in birds (Saunders et al. 2000).

Mammals

In mammals, three ossicles are again used to link the tympanum to the oval window. However, these have evolved independently from those acquired by amphibians, reptiles, and birds, and use a different leverage action:

The first ossicle (the malleus) has a long arm that is attached to the inside of the tympanum. The malleus is then linked to a second ossicle (the incus) at an angle so that together they form a folded type 1 lever. An inward movement of the tympanum causes the folded lever to rotate counter-clockwise, and the internal arm of the incus forces the third ossicle (the stapes) to press its footplate in on the oval window. In contrast to frogs, the tympanum and oval window move in-phase in mammalian middle ears. Over a wide range of terrestrial mammals, the ratio of effective tympanum area to oval window area is roughly constant, averaging 19:1. The mechanical advantage created by the middle ear bones is also relatively constant, averaging around 2.4:1 (Rosowski 1994; Hemilä et al. 1995).

Impedance and transformer ratios

The goal of an acoustic impedance transformer is to convert a propagating sound wave from one set of pressures and particle velocities to another set. Airborne sounds propagate at low pressures and high particle velocities and displacements; inner ear fluids propagate sounds at high pressures and low particle velocities and displacements. The transformer must thus be able to vibrate in concert with airborne waves on its input side and with water waves on its output side. The hydraulic lever system is a pressure converter: low pressures acting over the large tympanum are concentrated onto the smaller oval window, resulting in higher pressures. The ratio of output to input pressures from this mechanism is equal to the ratio of the effective surface area of the tympanum (A1) to that of the oval window (A2). The ossicle chains can usually be reduced to an equivalent type 1 lever where the length of the beam between the input end and the fulcrum is L1, and that of the output side is L2. Since the ossicular levers both increase the output pressure and decrease the output velocity, the ratio L1/L2 affects the acoustic impedance twice and is thus squared in computations. The resulting ratio between the output and input impedances of such an acoustic transformer is (Dallos 1973):

output impedance/input impedance = (A1/A2) × (L1/L2)²

Thus, if the average hydraulic ratio for mammal ears is 19:1 and the average lever ratio is 2.4:1, then, excluding any additional effects such as catenary leverage, the output impedance of the middle ear system at the oval window will be about 109 times that experienced by the tympanum. The combination of terms on the right-hand side of the equation is called the impedance transform ratio and varies from less than 100 in some birds and mammals to 500 or more in some lizards and frogs (Mason et al. 2003). These higher ratios do not necessarily imply improved hearing, and in some cases may be due to multiple functions of the ear structures (e.g., male call radiation from the tympanum in bullfrogs).
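
To make the arithmetic concrete, the short Python sketch below (ours, purely illustrative; the function and variable names are not from the sources cited) computes the impedance transform ratio from the average mammalian values:

```python
def impedance_transform_ratio(area_ratio, lever_ratio):
    """Ratio of output to input acoustic impedance for a middle ear
    modeled as a hydraulic lever (area_ratio = A1/A2) in series with
    an equivalent type 1 mechanical lever (lever_ratio = L1/L2). The
    lever raises pressure and lowers velocity, so its ratio acts on
    the impedance twice and is squared."""
    return area_ratio * lever_ratio ** 2

# Average mammalian values from the text: 19:1 hydraulic, 2.4:1 lever
print(impedance_transform_ratio(19, 2.4))  # 109.44, i.e., ~109 times
```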

Literature cited

Dallos, P. 1973. The Auditory Periphery: Biophysics and Physiology. New York: Academic Press.  

Gaudin, E. P. 1968. On the middle ear of birds. Acta Oto-Laryngologica 65: 316–326.

Hemilä, S., S. Nummela, and T. Reuter. 1995. What middle ear parameters tell about impedance matching and high frequency hearing. Hearing Research 85: 31–44.

Mason, M. J., C. C. Lin, and P. M. Narins. 2003. Sex differences in the middle ear of the bullfrog (Rana catesbeiana). Brain Behavior and Evolution 61: 91–101.

Mason, M. J. and P. M. Narins. 2002. Vibrometric studies of the middle ear of the bullfrog Rana catesbeiana I. The extrastapes. Journal of Experimental Biology 205: 3153–3165.

Moffat, A. J. M. and R. R. Capranica. 1978. Middle ear sensitivity in anurans and reptiles measured by light scattering spectroscopy. Journal of Comparative Physiology 127: 97–107.

Rosowski, J. J. 1994. Outer and middle ears. In Comparative Hearing: Mammals (Fay, R. R. and A. N. Popper, eds.), pp. 172–247. New York: Springer-Verlag.

Saunders, J. C., R. K. Duncan, D. E. Doan, and Y. L. Werner. 2000. The middle ear of reptiles and birds. In Comparative Hearing: Birds and Reptiles (Dooling, R. J., R. R. Fay, and A.N. Popper, eds.), pp. 13–69. New York: Springer-Verlag.

Werner, Y. L. 2003. Mechanical leverage in the middle ear of the American bullfrog, Rana catesbeiana. Hearing Research 175: 54–65.

3.5 Auditory Amplification

Basic principle

Although individual auditory mechanoreceptors can be extremely sensitive, there are distinct advantages to cohesive stimulation of adjacent cells with similar characteristic frequencies. When multiple cells are stimulated, multiple nerves will be activated and jointly send impulses to the animal’s brain. Small differences in characteristic frequencies of receptor cells can also be averaged out by pooling the responses of multiple cells. Both arthropods and vertebrates have hit on a similar way to accomplish this coordination: motile sensory cells.

Near-field sound propagation (including sounds transmitted inside an inner ear) involves a tidal, back-and-forth oscillation of the medium. The dendrites (in arthropods) or stereocilia (in vertebrates) of the sensory cells are then stimulated either directly by these fluid movements or indirectly through motions induced in an overlying membrane, otolith, or other structure. Once stimulated, some of the sensory cells respond by changing their shape (in mammals) or by physically moving their dendrites or stereocilia (in other vertebrates and arthropods) in concert with the sound oscillation. This active movement by the sensory cells increases the amplitude of motion in the overlying medium or structures, and this, in turn, generates even greater stimulation of the sensory cells. The resulting feedback loop amplifies very small sound levels and provides very high sensitivities. It also allows adjustable tuning of the resonant frequencies of the entire ensemble by varying how and when motions are induced. In both arthropods and vertebrates, motion in the feedback loop is most often found when the animal is exposed to very low-level sounds; at high levels, active sensory cell movement is minimal and the system responds largely according to its physically determined natural modes.

One consequence of this active feedback loop is that tiny amounts of random noise may be perceived as a very low-amplitude signal. The motile sensory cells then move in an attempt to amplify a faint signal even though no sound is really present. The result is spontaneous motion of the sensory cells. In vertebrates, these movements generate artifactual sounds in the inner ear that can be propagated back to the eardrum and detected as otoacoustic emissions. They can also stimulate the sensory cells and produce the sensation known as tinnitus, or “ringing of the ears.”

Another consequence of the feedback loop is that the auditory system is no longer linear. In fact, the acute sensitivity of many auditory systems to very low-level sounds results from changes in the resonant properties of the system due to active feedback. Anesthetized or recently dead animals do not show these nonlinearities: their ears act as simple mechanical systems with natural modes set by their physical properties.

Auditory amplification in fruit flies

The common fruit fly (Drosophila melanogaster) detects near-field sounds using a pair of antennae on its head (Göpfert and Robert 2002). Each antenna hosts a plume-like terminal segment called the arista that is anchored immovably to an elliptical segment called the funiculus. The funiculus articulates with a third segment, the pedicel, which is firmly attached to the fly’s head. The funiculus has a small hook on its proximal end that fits into an invagination in the pedicel wall. The tip of the hook then connects via a flexible hinge to the pedicel. The pedicel is a hollow cavity filled with several hundred sensory scolopidia forming the Johnston’s organ. The scolopidia are divided into two groups; the dendrites of each group attach via thin threads to one side of the funiculus hook.

Figure 1: Schematic of critical components in auditory functions of fruit fly antenna. Pedicel has been opened to show invagination for funicular hook and displacement of scolopidia. Note that only three scolopidia are shown on each side; in an actual fly, there are hundreds of scolopidia packed into the pedicel forming the Johnston’s organ. (After Göpfert 2002.)

Because the arista is located off the natural axis of motion of the funiculus, movements of the medium caused by near-field sound propagation generate a torque on the arista and an alternating rotation of the funiculus–arista assembly in concert with the near-field oscillations. These movements alternately stretch one group of scolopidia while compressing the other. Stimulated scolopidia then send nerve impulses back to the fly’s brain that indicate the presence and frequency of the incident sound.

Based on the physical mechanics alone, the fly antenna has a resonant frequency of about 800 Hz. However, living flies can change the resonant properties of the system by active movement of stimulated scolopidia in a feedback loop (Göpfert and Robert 2003): lower ambient sound intensities result in lower resonant frequencies. The graph below illustrates this.

Figure 2: Active change in resonant frequency of antenna in a living fruit fly (red line) as a function of ambient near-field sound strength (measured here as particle velocity). Dead or anesthetized flies show constant resonant frequency for antenna of about 800 Hz. (After Göpfert and Robert 2003.)

In addition to changes in the resonant frequency due to scolopidial movements, the amplitude of antennal oscillations actually increases at low stimulus levels:

Figure 3: Changes in resonant frequency (at the peak of each curve) and amplitude of arista movement (vertical axis) as function of stimulus frequency (horizontal axis) and stimulus strength (measured as particle velocities in m/sec and indicated next to each curve in left hand graph). Note the increase in amplitude of movement despite the decrease in amplitude of stimulus in living flies (left graph), but the absence of any changes in amplitude or resonant frequency for dead flies (right graph).

As with vertebrate otoacoustic emissions, fruit fly antennal scolopidia move spontaneously in silence, and these spontaneous movements can be exaggerated by giving the fly drugs that break the feedback loop by blocking output from the sensory cells. In a recently dead fly, such movements disappear except for those induced by thermal molecular motion:

Figure 4: Spontaneous movement of fly arista in absence of any sound stimulus. In a normal living fly in silent conditions, the frequency spectrum of the arista movement pattern shows a dominant component around 207 Hz. The dashed line shows a fitted curve for the spectrum of a normal fly in silence. The middle row shows a fly drugged with dimethyl sulfoxide, which blocks nervous output from the sensory cells. This disrupts the normal feedback loop, causing the scolopidia to increase dendritic motions and thus increase arista movement. Compare the spectrum to the fitted line for normal conditions (black dashed line). The major components in motion here are around 100 and 300 Hz. Finally, the bottom row shows arista movement for a recently dead fly. The only movement is due to higher-frequency molecular motion of the medium. (After Göpfert and Robert 2003.)

In Drosophila, the feedback loop is mechanical; there are no efferent nerves from the brain that regulate the strength of the feedback (Kamikouchi et al. 2010). Similar studies have shown that other insects, such as mosquitoes, employ sensory cell motion to create auditory amplification (Göpfert et al. 1999; Göpfert and Robert 2000, 2001; Robert and Göpfert 2002; Robert 2005).

Auditory amplification in vertebrates

Auditory amplification has now been demonstrated in each of the major terrestrial vertebrate groups. Fish may also exhibit it, but this possibility remains to be examined. Among the terrestrial vertebrates, mammals have two kinds of cochlear hair cells: a single row of inner hair cells that runs the entire length of the organ, and 3–5 additional parallel rows of outer hair cells, as shown below.

Figure 5: Diagrammatic cross-section through mammalian cochlea (inner ear). Sensory cells are suspended between three parallel cavities filled with fluid and sandwiched between a tectorial membrane and basilar membrane. Sounds propagated in fluid channels move the tectorial membrane relative to the basilar membrane and thus bend stereocilia on hair cells. While both inner and outer hair cells are stimulated by sounds, 95% of sound input to the central nervous system comes from the inner hair cells. Outer hair cells receive input from the central nervous system, and largely function by changing shape in oscillating fashion to amplify stimulation of adjacent inner hair cells at low sound intensities.

Most afferent innervation (from sense organ to central nervous system) in the mammalian ear involves synapses with the inner hair cells; the outer hair cells mostly receive efferent input (from central nervous system to sense organs). At any point along the length of the cochlea, outer hair cells are tuned to the same characteristic frequencies as adjacent inner hair cells. When both are stimulated, the outer hair cells change shape rhythmically to amplify stimulation of the nearby inner hair cells (Robles and Ruggero 2001). Unlike the case in Drosophila, where the feedback is solely mechanical, efferent nerves in mammals regulate the amount of movement and thus control the feedback level (Fettiplace 2006; Ashmore et al. 2010). This feedback loop mostly improves the reception of higher frequencies at low stimulus levels. As with other feedback systems, otoacoustic emissions are well known in mammals and are often used to study the functioning and health of mammalian hearing.

In amphibians, reptiles, and birds, auditory amplification is achieved by active waving of the hair cell stereocilia (Manley 2000; Manley et al. 2001; Fettiplace 2006; Strimbu et al. 2010). Stereocilia are linked to each other, making coordinated movement feasible. Most taxa have at least two types of hair cells, with one type being the main source of motile responses.

Lizards, like other taxa with active auditory amplification, produce spontaneous otoacoustic emissions that can be monitored at the eardrum in silent conditions. Lizards that have a single tectorial membrane over their hair cells produce spontaneous sounds with a few dominant frequencies. However, some lizards, such as geckoes, divide up their tectorial membrane into a pinnate shape rather like a fern leaf (Manley 2000, 2002). Whereas the otoacoustic emissions from lizards with a single continuous tectorial membrane show frequency spectra with 1–2 main peaks (see Figure 6A below), lizards with subdivided tectorial membranes show many different small peaks (see Figure 6B below), apparently reflecting different frequencies of spontaneous movement in different zones of the inner ear.

Figure 6: Frequency spectra of spontaneous otoacoustic emissions of (A) a dwarf tegu lizard (Callopistes maculatus), and (B) a gecko (Gekko gecko). (After Manley 2002.)

Literature Cited

Ashmore, J., P. Avan, W. E. Brownell, P. Dallos, K. Dierkes, R. Fettiplace, K. Grosh, C. M. Hackney, A. J. Hudspeth, F. Julicher, B. Lindner, P. Martin, J. Meaud, C. Petit, J. R. S. Sacchi, and B. Canlon. 2010. The remarkable cochlear amplifier. Hearing Research 266: 1–17.

Fettiplace, R. 2006. Active hair bundle movements in auditory hair cells. Journal of Physiology-London 576: 29–36.

Göpfert, M. C., H. Briegel, and D. Robert. 1999. Mosquito hearing: sound-induced antennal vibrations in male and female Aedes aegypti. Journal of Experimental Biology 202: 2727–2738.

Göpfert, M. C. and D. Robert. 2000. Nanometre-range acoustic sensitivity in male and female mosquitoes. Proceedings of the Royal Society of London Series B-Biological Sciences 267: 453–457.

Göpfert, M. C. and D. Robert. 2001. Active auditory mechanics in mosquitoes. Proceedings of the Royal Society of London Series B-Biological Sciences 268: 333–339.

Göpfert, M. C. and D. Robert. 2002. The mechanical basis of Drosophila audition. Journal of Experimental Biology 205: 1199–1208.

Göpfert, M. C. and D. Robert. 2003. Motion generation by Drosophila mechanosensory neurons. Proceedings of the National Academy of Sciences of the United States of America 100: 5514–5519.

Kamikouchi, A., J. T. Albert, and M. C. Göpfert. 2010. Mechanical feedback amplification in Drosophila hearing is independent of synaptic transmission. European Journal of Neuroscience 31: 697–703.

Manley, G. A. 2000. The hearing organ of lizards. In Comparative Hearing: Birds and Reptiles (Dooling, R. J., R. R. Fay, and A. N. Popper, eds.), pp. 139–196. New York: Springer-Verlag.

Manley, G. A. 2002. Evolution of structure and function of the hearing organ of lizards. Journal of Neurobiology 53: 202–211.

Manley, G. A., D. L. Kirk, C. Koppl, and G. K. Yates. 2001. In vivo evidence for a cochlear amplifier in the hair-cell bundle of lizards. Proceedings of the National Academy of Sciences of the United States of America 98: 2826–2831.

Robert, D. 2005. Directional hearing in insects. In Sound Source Localization (Popper, A. N. and R. R. Fay, eds.), pp. 6–35. New York: Springer Science and Business Media, Inc.

Robert, D. and M. C. Göpfert. 2002. Acoustic sensitivity of fly antennae. Journal of Insect Physiology 48: 189–196.

Robles, L. and M. A. Ruggero. 2001. Mechanics of the mammalian cochlea. Physiological Reviews 81: 1305–1352.

Strimbu, C. E., A. Kao, J. Tokuda, D. Ramunno-Johnson, and D. Bozovic. 2010. Dynamic state and evoked motility in coupled hair bundles of the bullfrog sacculus. Hearing Research 265: 38–45.

3.6 Animations of Vertebrate Ears

Introduction

The terrestrial vertebrate ear is a complicated device that converts sounds from a low acoustic impedance to a high acoustic impedance (the function of the middle ear), and then breaks the converted complex waveforms down into their component frequencies (the function of the inner ear). Static images of the ear do not do justice to either process. Luckily, a number of websites provide excellent animations of the middle ear, the inner ear, or both in operation. Below, we list some suggested sites. All focus on the mammalian (especially human) ear, but the processes are largely the same for reptiles, birds, and mammals. Frog ears differ a bit in geometry, but the principle is the same.

Suggested sites

3.7 Measuring Auditory Resolution

Introduction

The text lists multiple reasons why different animals may differ in the limits and resolutions of their hearing organs. Airborne, waterborne, and substrate-propagated signals impose different constraints on suitable receiver mechanisms, and body size imposes limits at all stages of the communication process. There is also the problem that animals usually want to extract more than one type of information from a received signal: improving ear resolution for one type of information invariably reduces resolution for another. The impact of ambient noise on sound communication depends critically on the range of frequencies that a receiver can hear and on the spectral distribution of energy in the noise. Whether one is interested in the consequences of anatomical differences, the physics of sound signal exchanges, the physiology of hearing, or the behavioral ecology of sound communication, knowledge of the limits and resolutions of auditory organs in particular species can be very important. How can one measure auditory performance, and which measures are most useful in comparing taxa (for example, other animals versus humans)?

Level of measurement

Measuring the limits and resolution of an animal’s acoustical abilities is a challenging task. A first step is deciding at which stage in the sound perception process one should make the measurement. Options include measurements at:

  • Sensory cell stimulation: In many animals, one can use neurobiological methods to record the slow depolarization of auditory sensory cells during and after stimulation. In vertebrates, the summed slow potential changes of many hair cells create a microphonic potential, which can be used as an index to test for the bandwidth limits and sensitivity of the ear. While receptor potentials can assess the tuning responses of individual cells accurately, it can take many such probings to establish the bandwidth limits of an organ. In addition, the resolution of individual sensory cells is only one part of the final resolution seen at the level of behavior.
  • Otoacoustic emissions: In insects such as fruit flies and mosquitoes, and in all terrestrial vertebrates, active amplification of sounds results in the generation of otoacoustic emissions that can be monitored to characterize auditory sensitivities. These emissions are currently used as diagnostics for testing human hearing (see the article Otoacoustic Emissions: http://emedicine.medscape.com/article/835943-overview), and have been used similarly in various animal studies to assess limits and resolutions.
  • Sensory nerve activity: Recordings of single fibers or entire nerve bundles can be used to monitor the responsiveness of an ear to different frequencies and amplitudes. These use standard neurobiological techniques and have the advantage that one is looking at the coded information being made available to the brain.
  • Brain activity: Recordings at the level of the brain have been widely used in both invertebrate and vertebrate acoustic studies. Where the sequential pathways are well understood, researchers can track successive analyses by the animal.
  • Behavior: The behavior of the receiver is the ultimate test of whether a given sound has been perceived or not. Unfortunately, a lack of response could be due either to an inability to perform a discrimination or to a lack of motivation to respond. Signal detection methods allow one to separate these two factors (see Web Topic 8.10). The best context is conditioned learning (psychoacoustics), in which captive subjects are rewarded for correct discriminations and are thus highly motivated to detect and classify signals if they can.

Not all levels are feasible for all taxa. For example, researchers have had very poor luck applying conditioned learning techniques to lizards, and there are ethical reasons to exclude certain salient species such as higher primates from destructive neurobiological techniques. The result is that the mix of levels that has been studied varies for different taxa. The fact that measures obtained at different levels may produce different results even in the same species needs to be kept in mind when making cross-taxon comparisons. The strongest results are those that are consistent across levels in the same species.

Basic measures

Below, we define the logic and goal of some commonly applied auditory measures. The list is not meant to be exhaustive, as researchers are continually devising new features to measure or new ways to measure well-studied features.

Audiograms

The audiogram (also called an audibility curve) is a graph with stimulus frequency on the horizontal axis (usually on a log scale) and, on the vertical axis, the minimal stimulus amplitude (usually in dB re some reference) required to evoke a response at a given frequency. Audiograms (red line below) are usually U-shaped, meaning that very low and very high frequencies require a high-amplitude stimulus to evoke any response:

The frequency that requires the lowest amplitude stimulus (the low point of the U-shaped curve) is often called the best frequency. The threshold amplitude required at the best frequency is a commonly invoked measure of the overall sensitivity of the auditory system. The lower and upper frequencies in the audiogram at which thresholds are some specified number of dB above that at the best frequency define the frequency range of the system, and the difference in frequency between them is the system’s bandwidth. Note that bandwidth is inversely related to the Q (quality) factor of a resonant system: the narrower the bandwidth relative to the best frequency, the higher the associated Q (see Figure 2.34 in Chapter 2). Q values may be reported as a measure of the tuning of an auditory system.
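
The Python sketch below shows how these quantities might be extracted from a tabulated audiogram. The data points and the 10 dB criterion are hypothetical placeholders, and a real analysis would interpolate between test frequencies:

```python
import numpy as np

# Hypothetical audiogram: test frequencies (Hz) and thresholds (dB)
freqs = np.array([250, 500, 1000, 2000, 4000, 8000])
thresholds = np.array([40, 25, 10, 15, 30, 55])

best = np.argmin(thresholds)
best_frequency = freqs[best]            # low point of the U-shaped curve
criterion = thresholds[best] + 10       # e.g., 10 dB above best threshold

# Frequency range: outermost test frequencies still under the criterion
audible = freqs[thresholds <= criterion]
f_low, f_high = audible[0], audible[-1]
bandwidth = f_high - f_low              # width of the usable range
q_factor = best_frequency / bandwidth   # narrower bandwidth -> higher Q

print(best_frequency, (f_low, f_high), bandwidth, q_factor)
```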

Critical ratios

A critical ratio is a measure of how much greater the amplitude of a single-frequency tone must be for it to elicit a response at a given level of ambient noise. In most birds and mammals, critical ratios increase with the frequency being considered (usually 2–3 dB more tone power is required for each doubling of frequency). Thus, higher-frequency signals must be received at higher amplitudes to be detected against noise. In both birds and mammals, there are exceptions to this rule. Parrots, for example, show a decrease in critical ratio with increasing frequency at least up to the higher frequencies used in their long-range contact calls (Dooling et al. 2000).
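
Operationally, a critical ratio is usually computed as the tone level at detection threshold minus the noise spectrum level (noise power per 1 Hz band), both in dB. Here is a minimal sketch under that definition; the anchor value and slope are illustrative magnitudes only, not measurements from any particular species:

```python
import math

def critical_ratio_db(f_hz, cr_at_1khz=25.0, slope_per_doubling=2.5):
    """Illustrative critical ratio that grows 2.5 dB per doubling of
    frequency, anchored at 25 dB at 1 kHz."""
    return cr_at_1khz + slope_per_doubling * math.log2(f_hz / 1000.0)

def masked_threshold_db(noise_spectrum_level_db, f_hz):
    """Predicted tone level (dB) needed for detection in flat noise."""
    return noise_spectrum_level_db + critical_ratio_db(f_hz)

# A 4 kHz tone needs 5 dB more than a 1 kHz tone in the same noise:
print(masked_threshold_db(20, 1000))  # 45.0 dB
print(masked_threshold_db(20, 4000))  # 50.0 dB
```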

Critical bands

Tonotopic inner ears are often modeled as a bank of filters: any filter in the sequence produces a response only when a component frequency in a sound stimulus falls within the critical bandwidth (usually abbreviated to critical band) of that filter. Critical bands can be measured by varying the bandwidth of the noise masking a pure tone, by examining critical ratios at different frequencies, or by varying the width of a “notch” in the noise used to mask a single pure tone. Critical bands are a measure of the frequency resolution of the ear, since an ear with many narrow filters will differentiate between more frequencies than will one made up of only a few wide filters. In general, critical bandwidths increase with the center frequency of the relevant filter. This is what one would expect given Weber’s Law (see Chapter 8 and Web Topic 8.6). In humans, each successive filter representing one critical bandwidth occupies about 1 mm of basilar membrane along the cochlea. Since frequencies are distributed logarithmically along the basilar membrane in tonotopic vertebrate ears, a constant 1 mm step generates larger bandwidths as one moves toward the high-frequency end of the cochlea. Again, parrots are an exception: their narrowest critical bandwidths occur at the intermediate frequencies used in their contact calls (Dooling et al. 2000).
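
The consequence of this logarithmic mapping can be sketched numerically. In the toy filter bank below, band edges are spaced evenly on a log-frequency axis, mimicking constant spacing along the basilar membrane; the frequency limits and filter count are arbitrary choices, not measured values:

```python
import numpy as np

# 30 filters whose edges are equally spaced on a log-frequency axis
edges = np.geomspace(100.0, 10000.0, num=31)
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric center of each band
bandwidths = np.diff(edges)                # absolute width of each band

# Absolute bandwidths grow toward the high-frequency end...
print(bandwidths[0], bandwidths[-1])       # ~16.6 Hz vs. ~1423 Hz
# ...but the relative bandwidth (width/center) is the same for every band
print(bandwidths[0] / centers[0], bandwidths[-1] / centers[-1])
```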

Frequency discrimination

The goal here is to measure the minimal difference that can be discriminated between the frequencies of two pure tones. It can be measured by decreasing the difference in frequency between successive stimuli until no differential response is obtained. Another method habituates the subject to a constant-frequency signal and looks for the minimal difference in frequency that causes a renewed (dishabituated) response. As a rule, the minimal differences in frequencies of pure tones that can be differentiated are 10–20 times smaller than the critical bands obtained by masking a single pure tone with noise. Frequency discrimination, like critical bands, tends to follow Weber’s Law, with threshold differences of 0.5–4% of the compared frequencies in birds and mammals.
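
Because Weber’s Law makes the just-noticeable difference proportional to frequency, the rule is easy to express in code; the 1% default below is an illustrative mid-range value:

```python
def frequency_jnd_hz(f_hz, weber_fraction=0.01):
    """Smallest discriminable frequency change under Weber's Law;
    fractions of 0.005-0.04 span typical bird and mammal values."""
    return weber_fraction * f_hz

print(frequency_jnd_hz(500))   # 5.0 Hz at 500 Hz
print(frequency_jnd_hz(4000))  # 40.0 Hz at 4 kHz
```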

Comodulation masking release

If the band of noise masking a pure tone is amplitude-modulated with the same pattern as another band of noise centered on one or more other regions of the audible spectrum, the critical ratio required to detect the tone can decrease by 10 dB or more. This is because the ear and brain can identify the noise by its shared modulation pattern and correct for its presence when detecting the tone. This phenomenon has been demonstrated in both mammals and birds.

Intensity discrimination

This simple test is similar to frequency discrimination: two pure tones of similar but slightly different amplitude are made increasingly similar until differential responses to them are lost. This test can be performed at all levels to identify the degree to which higher level processing augments or decreases intensity resolution.

Temporal integration

The ability to detect a sound depends on the ear receiving a minimal amount of energy. For sounds less than about a quarter of a second in duration, the same energy could be supplied at low amplitude over a long period or high amplitude over a short period. This principle sets the stage for plotting the minimum amplitude that is required to obtain a response to a sound of a given duration. The rate at which threshold amplitude decreases with increasing signal duration is a measure of the integration constant of the system. Birds and mammals show similar constants for this measure.
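
Under an idealized equal-energy rule, threshold intensity rises by about 3 dB for each halving of duration below the integration limit. The sketch below assumes perfect integration; real ears integrate imperfectly, which is exactly why the measured slope is informative:

```python
import math

def threshold_shift_db(duration_s, integration_limit_s=0.25):
    """Extra intensity (dB) needed at threshold for a brief sound,
    relative to a long one, under perfect energy integration."""
    d = min(duration_s, integration_limit_s)
    return -10.0 * math.log10(d / integration_limit_s)

print(threshold_shift_db(0.025))  # ~10 dB more for a 25 ms sound
print(threshold_shift_db(1.0))    # 0 dB: past the integration limit
```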

Gap detection

This measures the ability of a receiver to detect a break between two consecutive sounds. Where the consecutive sounds are both noise, birds and mammals show similar abilities; where they are pure tones, humans and birds perform similarly if the two sounds have similar frequencies, but birds do better than humans when the sounds have different frequencies.

Duration discrimination

The goal here is to identify the threshold difference—usually scaled as a relative percentage difference between the means of the stimuli—in the durations of two stimuli.

Excellent reviews of these and additional measures can be found in Dooling et al. (2000) and Dooling (2004) for birds, and Long (1994) for mammals.

Pattern measures

Animal signals usually involve patterned structure in the distribution of energy across the available frequencies and time segments in the signal. There is thus considerable research interest in assessing how well receivers of different species can classify patterns into a priori categories and discriminate between slightly different patterns in two or more signals. Because so much is known about human pattern processing of sounds, human capabilities are often used as a standard reference and animal abilities are then examined in comparison. Some pattern measures that are of recent research interest are detailed below.

Acoustic scene analysis

Humans routinely parse an acoustically complicated environment into discrete acoustic objects, each of which can be tracked by varying attentive focus (Deutsch 1999). This is known as “acoustic scene analysis.” A number of studies have now shown that birds perform similar acoustic classifications (Gentner and Hulse 2000), and can use these to track individual objects in noisy contexts (Hulse et al. 1997; Wisniewski and Hulse 1997; MacDougall-Shackleton et al. 1998; Hulse 2002; Appeltants et al. 2005).

Missing harmonics

As we note in the text, most animal sounds are likely to consist of harmonics; it is only with great effort and anatomical specialization that animals can produce single pure frequencies. Human speech is a classic example: each person’s speech consists of a harmonic series, with different vowels having different relative amplitudes of the same component harmonics. When humans are presented with two tones that could be harmonically related (e.g., with frequencies having a ratio of 3:2), they typically perceive a third tone equal to the presumed (but actually absent) fundamental. This perception of the “missing fundamental” can be used to create the illusion that a low frequency is present when in fact the sound contains only higher harmonics. Examples include the perception of bass notes from organs and from small home speaker systems. Laboratory experiments have shown that starlings also infer “missing fundamentals” when presented with suitable pairs of pure tone frequencies (Cynx and Shapiro 1986). This raises the interesting but unstudied question of whether this or similar auditory illusions might be exploited by animals to extend their perceived frequency ranges despite physical constraints on sound production.

Consonance and dissonance

When two sound frequencies both stimulate the same critical band in a human cochlea, the perception is of a “rough” and “unresolved” sound (Plomp and Levelt 1965). Such sounds are said to be dissonant. Maximal dissonance occurs when one frequency is just 25% of the critical band higher in frequency than the other. If the two frequencies are sufficiently different that they stimulate different critical bands, the mixture of two pure tones will sound smoother and is said to be consonant.

The sounds of animals (and of most musical instruments) are not pure tones: they are usually complex sounds containing many harmonically related frequencies. When two complex sounds with fundamental frequencies that excite different critical bands are played together, they are not necessarily perceived as consonant. In fact, the perceived consonance varies considerably with the ratio of the two fundamental frequencies (Rossing 1990; Deutsch 1999). The most consonant complex sounds have fundamentals with a frequency ratio of 2:1 (called octaves). The next most consonant combinations are sounds that form a perfect fifth (fundamental frequency ratio of 3:2) or a perfect fourth (ratio of 4:3). As observed by Pythagoras before 500 BC, fundamental frequency ratios that require higher integer values, e.g., major sixths (5:3), major thirds (5:4), and minor sixths (8:5), are perceived as increasingly dissonant despite the absence of overlap of their fundamentals in the same critical band. Two non-exclusive explanations have been proposed. One is that while the fundamentals of complex sounds may not fall in the same critical band, the logarithmic scaling of frequency along the cochlea can easily result in higher harmonics of the two sounds falling into the same critical band (Terhardt 1974a,b). The number of harmonics that overlap in this way generally increases as the fundamental frequency ratios require higher integers. The second explanation is that the combined waveform of two complex sounds is more regularly periodic when low integers relate their fundamental frequencies (Tramo et al. 2001; Tramo et al. 2005). This periodicity is known to be conveyed to the brain by the auditory nerves in humans and cats, and it is highly correlated with perceptions of consonance. The periodicity becomes less salient, because its repetition rate is lower, for mixtures of complex sounds whose fundamentals are related by higher-integer ratios such as sixths and thirds.
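
The first of these explanations can be sketched numerically. The function below is our illustration, not a published model: it approximates every critical band as a fixed 20% of frequency and counts harmonic pairs of two complex tones that land within one band of each other:

```python
def clashing_harmonic_pairs(f1, f2, n_harmonics=10, cb_fraction=0.2):
    """Count harmonic pairs of two complex tones lying within one
    critical band (approximated as cb_fraction * frequency) of each
    other; exact coincidences are consonant and are not counted."""
    pairs = 0
    for i in range(1, n_harmonics + 1):
        for j in range(1, n_harmonics + 1):
            h1, h2 = i * f1, j * f2
            if 0 < abs(h1 - h2) < cb_fraction * min(h1, h2):
                pairs += 1
    return pairs

# A perfect fifth yields fewer clashing pairs than a roughly
# minor-second interval between the same complex tones:
print(clashing_harmonic_pairs(440, 660))  # fifth (3:2)
print(clashing_harmonic_pairs(440, 466))  # ~minor second, many clashes
```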

Interestingly, the most prominent periodicity in mammalian auditory nerve activity is the “missing fundamental” implied by the fundamentals of the two complex sounds. If one complex sound has a fundamental of 440 Hz and the second has a fundamental of 660 Hz (making them, with a ratio of 3:2, a perfect fifth), the auditory nerve will exhibit a periodicity in its pooled nerve impulses equal to 220 Hz, even though this frequency is not present in either complex sound. This “inferred” component is the fundamental of a harmonic series in which the 440 Hz component of one sound is the second harmonic, the 660 Hz component is the third harmonic, and all other components in either complex sound are also higher harmonics. It has been suggested that consonance for complex sounds whose fundamentals excite different critical bands depends largely on whether a missing fundamental can be identified that is consistent with all components in the complex sounds.
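
The inferred component in this example is simply the greatest common divisor of the two fundamentals, as a short sketch makes explicit:

```python
from math import gcd

def implied_fundamental(f1_hz, f2_hz):
    """Missing fundamental implied by two harmonically related tones:
    the highest frequency of which both are integer harmonics."""
    return gcd(f1_hz, f2_hz)

print(implied_fundamental(440, 660))  # 220 Hz: 440 Hz is its 2nd
                                      # harmonic and 660 Hz its 3rd
```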

Humans attend to consonances both when several complex sounds are heard at the same time and when separate sounds are presented serially (e.g., melodies). Since birds and other mammals also have tonotopic inner ears divided into critical bands, they may well attend in similar ways to whether complex sounds, heard either simultaneously or serially, are perceived and discriminated as dissonant or consonant. One intriguing study by Hulse et al. (1995) suggests that this is probably the case for starlings. Birds such as motmots, penguins, and oilbirds produce independent sounds on each side of their syrinx that are very similar, but not identical in frequency. To a human ear, these sounds are very harsh and dissonant. Do these birds produce such sounds because their dissonance is jarring and thus demands a receiver’s attention (Owren et al. 2010)? Do animals favor dissonant sounds for aggressive signals (Morton 1975)? Similarly, humans have remarked for millennia on the musical nature of many passerine songs. Are there selective pressures for male songbirds to use more consonant sounds when attracting females? Hopefully, future studies will examine these possibilities.

Relative versus absolute pitch

Most humans can listen to a tune and then recognize it as the same melody even after it has been transposed into another key. Transposition involves raising or lowering the frequencies of all notes in the tune while retaining the ratios between them. In practice, a melody in the key of C consisting of the notes C-E-G can be transposed into the key of D as the sequence D-F#-A. As a rule, most humans cannot identify, nor are they much interested in, which frequencies are actually used to play back a melody. A minority of humans have absolute (or perfect) pitch, which allows them to identify a given note in a melody on an absolute scale. However, these individuals, like other humans, still recognize a tune as the same even if it is transposed to another key. The easiest transposition is to double or halve each frequency in the melody (i.e., move it to a higher or lower octave). Even human infants are capable of recognizing an octave transposition as the same melody. The emphasis on the frequency ratios of successive notes when learning or recognizing a melody, rather than on the absolute frequencies of each note, is called relative pitch.
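
Because transposition multiplies every note by a common factor, the ratios between successive notes (the basis of relative pitch) are unchanged; a minimal sketch with approximate equal-tempered frequencies:

```python
# Approximate equal-tempered frequencies (Hz) for the notes C-E-G
melody = [261.63, 329.63, 392.00]

def transpose(freqs, ratio):
    """Multiply every note by the same ratio: 2.0 shifts the melody
    up an octave; 2 ** (2 / 12) shifts it up two semitones (C -> D)."""
    return [f * ratio for f in freqs]

def successive_ratios(freqs):
    return [round(b / a, 4) for a, b in zip(freqs, freqs[1:])]

octave_up = transpose(melody, 2.0)
print(successive_ratios(melody) == successive_ratios(octave_up))  # True
```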

Studies on a variety of birds suggest that many birds may not discriminate between sound signals using relative pitch and melodic pattern, but instead memorize the frequencies of successive notes using absolute pitch. Starlings, pigeons, and zebra finches can be taught to discriminate rising or falling patterns of successive notes within a familiar frequency range, but cannot then recognize the same relative sequence transposed to a frequency range outside of that in which the training occurred (Hulse and Cynx 1985, 1986; Page et al. 1989; Cynx 1993, 1995). Octave shifts are particularly devastating to generalization in these birds. Field sparrows did not recognize their own species’ songs even when these were transposed by small amounts (Nelson 1988). On the other hand, pet shama thrushes and bullfinches have been reported to transpose human melodies easily into higher keys (Tretzel 1997; Guttinger et al. 2002). Wild chickadees engaged in song contests routinely transpose their “feebee” songs up and down while holding the ratio of frequencies for successive notes constant (Ratcliffe and Weisman 1985; Christie et al. 2004). Veeries and white-throated sparrows also vary the initial frequencies of their songs while holding the frequency ratios between successive notes constant; despite the variations, these birds have no difficulty recognizing conspecific vocalizations (Weary et al. 1991; Hurley et al. 1992). Finally, rhesus macaques can recognize melodic (consonant) sequences when transposed an octave, but cannot do so with random notes having no serial harmonic relationships (Wright et al. 2000). The take-home message from research to date is that species vary in their use of relative versus absolute pitch when recognizing and classifying different sound signals. While we humans take for granted our natural ability to recognize a transposed melody, it may be naïve to assume that other species, especially songbirds, share that skill.

Evolutionary roots of music (“biomusic”)

There has been considerable interest in recent years in whether the roots of human music can be found in the behaviors and sound signals of other animals (West and King 1990; Krause 1992; Gray et al. 2001; Huron 2001; West et al. 2004; Baptista and Keister 2005; Fitch 2005; McDermott and Hauser 2005a,b; Fitch 2006). As the examples above suggest, some of the most interesting pattern measures show at least partially similar processes in animals and humans. There are multiple levels at which comparisons can be made. Below, we list some of these levels and briefly comment on whether each currently shows suggestive links between taxa.

  • Ancestry: There are few behaviors in our primate relatives that appear to be phylogenetic precursors to human music. One possible exception is percussion (see below). Some of the authors listed above have argued that music in humans arose de novo without any clear antecedents in other primates.
  • Development: Much has been made of the fact that most songbirds, like infant humans, must learn their vocal signals by imitating the vocal signals of adults and only rarely by innovation. The claim that bird song sheds light on the evolution of human speech has fueled a highly successful research establishment working on the possible parallels. A similar claim has been made for music. In fact, song-learning in passerine birds differs from that in humans in that it is often limited to males, may occur only during a limited period early in life, and occurs in limited contexts (usually territorial defense and mate attraction), whereas humans engage in musical expression in a much wider variety of situations. Parrots may be an interesting contrast in that, like humans, both sexes must learn most of their vocal repertoire, learning is open-ended throughout life, and learned vocal signals are used in a much wider variety of contexts than just mate attraction or territorial defense.
    Whether learning should be invoked as a necessary condition for calling a signal music is problematic. Once one looks at a variety of animal taxa, one is faced with deciding how much of the acquisition must be learned versus innate or innovative, how open-ended the learning period must be, what fraction of the population must have the capability, etc. If the relevant learning must be exactly like that in humans, few other taxa meet the condition. If a more relaxed requirement is invoked, then it is not clear if the learning component helps much in explaining the evolution of human speech or music. While some learning is involved in passerine bird acquisition of songs and human acquisition of musical patterns, what does this tell us?
  • Physiology: As we have seen, sound producing organs can draw on a limited set of mechanisms, and all taxa are limited by their body size in the range of frequencies that they can produce efficiently. Humans share these constraints with other taxa. Similarly, auditory organs have access to a limited number of mechanisms, and are also constrained by body size and the sound-propagating medium. It is thus not surprising that there would be convergences in the ways frogs, lizards, crocodiles, birds, and mammals make and process sounds. How finely these parallels can be drawn will require further comparative research. The discussion above of the role of critical bands in producing consonance versus dissonance suggests that birds and humans may process complex sounds in similar ways. On the other hand, the emphasis on relative pitch in humans and absolute pitch in many songbirds when learning or discriminating between note sequences suggests that the role of melody patterns, rather than memorized sequences of notes, may be a major difference between these taxa.
  • Signal structure: Traditional human music usually consists of multiple notes produced serially in somewhat stereotyped patterns. This is a trait shared with the acoustic signals of many other animals, including birds, sac-winged bats, and humpback whales. The selective forces that have favored structured patterns in sound signals include improved receiver discrimination between signals and ambient noise, receiver discrimination between signals of different species and conspecific signals having different functions, provision of different kinds of information in different signals, and better tests of performance for potential mates when perfecting a difficult pattern. Whether any of these benefits accrue to human music remains unclear. Traditional human music is often melody- and harmony-based. While some bird songs appear to follow fixed frequency ratio rules like human melodies, many songs do not. On the other hand, some modern music is as dissonant as the calls of motmots and penguins, and we currently have no clear hypotheses about why any of these species do or do not limit consecutive notes to consonant alternatives. Percussion is a mechanism for sound signal generation that is common to humans and many animals. The drumming of membracid insects on their host plants and of woodpeckers against trees produces clearly patterned signals not unlike rhythmic percussion by humans. Percussion may be one area in which parallels between humans and animals in the structure of sound signals are marked and worthy of more quantitative comparisons.
  • Performance mode: While individual musicians are common in many human cultures, group performances of music are also widespread. Similarly, many birds and mammals vocalize individually without any inter-individual cooperation, whereas others regularly produce sounds in groups. In most lek and male chorusing species, each displaying individual is competing with conspecifics for attention by potential mates, and any apparent synchrony or anti-synchrony is likely an emergent consequence of individual display rules (Greenfield 2005). However, there are species in which coordinated chorusing by entire groups does occur. These include lions, coyotes, chimpanzees, hyaenas, and wolves among mammals, and greater anis, kookaburras, barbets, Australian magpies, and quail among birds. The level of coordination within individual group choruses is often low, however. Perhaps the most coordinated group performances are the duets of tropical birds. These include the joint displays of male Chiroxiphia manakins and vocal duets by a wide variety of mated pairs in various bird taxa (boubou shrikes, wrens, parrots, etc.). These duets are typically highly coordinated temporally, must be learned and perfected over time, and often consist of specific roles assigned to each partner. They thus show striking parallels to the duets of human musicians and vocalists.
  • Ecology: The environment in which signals propagate becomes very important for long-range signaling, but is often less critical for short-range signals. Many of the sounds suggested as examples of animal music (whale and passerine songs) are highly adapted for long-range propagation in the relevant medium. Most human music is performed at close range. These animal signals and human music are thus more likely to show parallels at other levels (e.g., physiology or function) than through common selective forces for propagation.
    While there is clearly competition among sympatric species of animals for an “acoustic niche” (Krause 1992), there is little evidence that sympatric species collaborate to produce a given “symphony” of joint sounds. Mutual avoidance of using the same frequency ranges at the same times of day often spreads out the calls of species in a given habitat across the possible times and frequency bands. One possible exception may be mixed-species flocks of birds or monkeys that forage as a unit and respond to each other’s alarm calls. Some, such as drongos, may even mimic another species’ calls when a predator is spotted. However, most of the vocalizations used by these species in this context would hardly be considered “musical.” While there is clear evidence of allomimesis (copying of another species’ signals) by a variety of bird species, there is little evidence to date that species other than humans indulge in alloesthetics (the sensory or psychological enjoyment of listening to other species’ sounds independent of any specific signal function). However, one never knows until one looks whether salient species such as parrots, chimps, elephants, or dolphins might have evolved this capability.
  • Function: While behavioral ecologists have become quite adept at identifying the specific functions of animal signals (see the list of options in Chapter 1 and subsequent chapters in text), the functions of music in humans remain unclear. Some authors have suggested that music arose to promote sexual advertisement much as the displays of lekking birds and mammals function to advertise male condition and quality. Others suggest that human music evolved to promote social cohesion with competing groups. Fitch (2006, 2010) suggests that some forms of music may have been the antecedents of language. Until some consensus is reached about the current and—even more challenging—original functions of music in humans, comparative contrasts with animals will be difficult. On the other hand, the extensive amount of information we have and continue to accumulate on the functions of sound signals in animals will provide a relatively exhaustive list of possibilities to be considered when discussing human music.

This short list is designed only to outline possible points of overlap between the patterns of human music and animal sounds. More details on specific levels may be found in the citations that began this section. Clearly, data suggest significant overlap for some levels, whereas other levels have been little studied. This field is in its early stages and many surprises may appear with time.

Literature Cited

Appeltants, D., T. Q. Gentner, S. H. Hulse, J. Balthazart, and G. F. Ball. 2005. The effect of auditory distractors on song discrimination in male canaries (Serinus canaria). Behavioural Processes 69: 331–341.

Baptista, L. F. and R. A. Keister. 2005. Why birdsong is sometimes like music. Perspectives in Biology and Medicine 48: 426–443.

Christie, P. J., D. J. Mennill, and L. M. Ratcliffe. 2004. Pitch shifts and song structure indicate male quality in the dawn chorus of black-capped chickadees. Behavioral Ecology and Sociobiology 55: 341–348.

Cynx, J. 1993. Auditory frequency generalization and a failure to find octave generalization in a songbird, the European starling (Sturnus vulgaris). Journal of Comparative Psychology 107: 140–146.

Cynx, J. 1995. Similarities in absolute and relative pitch perception in songbirds (starling and zebra finch) and a non-songbird (pigeon). Journal of Comparative Psychology 109: 261–267.

Cynx, J. and M. Shapiro. 1986. Perception of missing fundamental by a species of songbird (Sturnus vulgaris). Journal of Comparative Psychology 100: 356–360.

Deutsch, D., ed. 1999. The Psychology of Music. San Diego, CA: Academic Press.

Dooling, R. J. 2004. Audition: can birds hear everything they sing? In Nature’s Music: The Science of Birdsong (Marler, P. and H. Slabbekoorn, eds.), pp. 206–225. New York: Elsevier/Academic Press.

Dooling, R. J., B. Lohr, and M. L. Dent. 2000. Hearing in birds and reptiles. In Comparative Hearing: Birds and Reptiles (Dooling, R. J., R. R. Fay, and A. N. Popper, eds.), pp. 308–359. New York: Springer-Verlag.

Fitch, W. T. 2005. The evolution of music in comparative perspective. In Neurosciences and Music II: from Perception to Performance (Avanzini, G., L. Lopez, S. Koelsch, and M. Manjo, eds.), pp. 29–49. New York: New York Academy of Sciences.

Fitch, W. T. 2006. The biology and evolution of music: A comparative perspective. Cognition 100: 173–215.

Gentner, T. Q. and S. H. Hulse. 2000. Perceptual classification based on the component structure of song in European starlings. Journal of the Acoustical Society of America 107: 3369–3381.

Gray, P. M., B. Krause, J. Atema, R. Payne, C. Krumhansl, and L. Baptista. 2001. The music of nature and the nature of music. Science 291: 52–54.

Greenfield, M. D. 2005. Mechanisms and evolution of communal sexual displays in arthropods and anurans. Advances in the Study of Behavior 35: 1–62.

Guttinger, H. R., T. Turner, S. Dobmeyer, and J. Nicolai. 2002. Melody learning and transposition in the bullfinch (Pyrrhula pyrrhula). Journal für Ornithologie 143: 303–318.

Hulse, S. H. 2002. Auditory scene analysis in animal communication. Advances in the Study of Behavior 31: 163–200.

Hulse, S. H., D. J. Bernard, and R. F. Braaten. 1995. Auditory discrimination of chord-based spectral structures by European starlings (Sturnus vulgaris). Journal of Experimental Psychology-General 124: 409–423.

Hulse, S. H. and J. Cynx. 1985. Relative pitch perception is constrained by absolute pitch in songbirds (Mimus, Molothrus, and Sturnus). Journal of Comparative Psychology 99: 176–196.

Hulse, S. H. and J. Cynx. 1986. Interval and contour in serial pitch perception by a passerine bird, the European starling (Sturnus vulgaris). Journal of Comparative Psychology 100: 215–228.

Hulse, S. H., S. A. MacDougall-Shackleton, and A. B. Wisniewski. 1997. Auditory scene analysis by songbirds: Stream segregation of birdsong by European starlings (Sturnus vulgaris). Journal of Comparative Psychology 111: 3–13.

Hurley, T. A., L. Ratcliffe, and R. Weisman. 1992. Relative pitch recognition in white-throated sparrows. Animal Behaviour 40: 176–181.

Huron, D. 2001. Is music an evolutionary adaptation? In Biological Foundations of Music (Zatorre, R. J. and I. Peretz, eds.), pp. 43–61. New York: New York Academy of Sciences.

Krause, B. L. 1992. The habitat niche hypothesis–a hidden symphony of animal sounds. Literary Review 36: 40–45.

Long, G. R. 1994. Psychoacoustics. In Comparative Hearing: Mammals (Fay, R. R. and A. N. Popper, eds.), pp. 18–56. New York: Springer-Verlag.

MacDougall-Shackleton, S. A., S. H. Hulse, T. Q. Gentner, and W. White. 1998. Auditory scene analysis by European starlings (Sturnus vulgaris): Perceptual segregation of tone sequences. Journal of the Acoustical Society of America 103: 3581–3587.

McDermott, J. and M. Hauser. 2005a. The origins of music: Innateness, uniqueness, and evolution. Music Perception 23: 29–59.

McDermott, J. and M. D. Hauser. 2005b. Probing the evolutionary origins of music perception. In Neurosciences and Music II: from Perception to Performance (Avanzini, G., L. Lopez, S. Koelsch, and M. Manjo, eds.), pp. 6–16. New York: New York Academy of Sciences.

Nelson, D. A. 1988. Feature weighting in species song recognition by the field sparrow (Spizella pusilla). Behaviour 106: 158–182.

Page, S. C., S. H. Hulse, and J. Cynx. 1989. Relative pitch perception in the European starling (Sturnus vulgaris): further evidence for an elusive phenomenon. Journal of Experimental Psychology-Animal Behavior Processes 15: 137–146.

Plomp, R. and W. J. M. Levelt. 1965. Tonal consonance and critical bandwidth. Journal of the Acoustical Society of America 38: 548–560.

Ratcliffe, L. and R. Weisman. 1985. Frequency shift in the fee bee song of the black-capped chickadee. Condor 87: 555–556.

Rossing, T. D. 1990. The Science of Sound. Reading, MA: Addison-Wesley Publishing Company.

Terhardt, E. 1974a. Perception of periodic sound fluctuations (roughness). Acustica 30: 201–213.

Terhardt, E. 1974b. Pitch, consonance, and harmony. Journal of the Acoustical Society of America 55: 1061–1069.

Tramo, M. J., P. A. Cariani, B. Delgutte, and L. D. Braida. 2001. Neurobiological foundations for the theory of harmony in western tonal music. In Biological Foundations of Music (Zatorre, R. J. and I. Peretz, eds.), pp. 92–116. New York: New York Academy of Sciences.

Tramo, M. J., P. A. Cariani, C. K. Koh, N. Makris, and L. D. Braida. 2005. Neurophysiology and neuroanatomy of pitch perception: Auditory cortex. In Neurosciences and Music II: from Perception to Performance (Avanzini, G., L. Lopez, S. Koelsch, and M. Manjo, eds.), pp. 148–174. New York: New York Academy of Sciences.

Tretzel, E. 1997. Learning of nonspecific sounds and “musicality” of birds: imitation and variation of a music scale by Shamas Copsychus malabaricus. Journal für Ornithologie 138: 505–530.

Weary, D. M., R. G. Weisman, R. E. Lemon, T. Chin, and J. Mongrain. 1991. Use of the relative frequency of notes by veeries in song recognition and production. Auk 108: 977–981.

West, M. J. and A. P. King. 1990. Mozart’s starling. American Scientist 78: 106–114.

West, M. J., A. P. King, and M. H. Goldstein. 2004. Singing, socializing, and the music effect. In Nature’s Music: The Science of Birdsong (Marler, P. and H. Slabbekoorn, eds.), pp. 374–387. New York: Elsevier/Academic Press.

Wisniewski, A. B. and S. H. Hulse. 1997. Auditory scene analysis in European starlings (Sturnus vulgaris): Discrimination of song segments, their segregation from multiple and reversed conspecific songs, and evidence for conspecific song categorization. Journal of Comparative Psychology 111: 337–350.

Wright, A. A., J. J. Rivera, S. H. Hulse, M. Shyan, and J. J. Neiworth. 2000. Music perception and octave generalization in rhesus monkeys. Journal of Experimental Psychology-General 129: 291–307.
