Tuesday, December 31, 2024

The Call of the Koel

1. The loudmouth bellbird (but not the loudest):

Last year I read about the bellbird, one of the loudest birds: 125 decibels (dB) is what the authors said [1,2]. Naturally, I wanted to know more. It turned out that the bellbird wasn’t the world champ anyway [3], being stuck in the No. 3 spot. The bird on the topmost perch is the conure, at 155 dB.

Podos and Cohn-Haft [2] wonder how – and why – the female bellbird puts up with this ear-splitting noise. The authors state: “Presumably these risks are offset by the benefits females gain in assessing prospective mates.” On the other hand, the energetic costs of producing this sound must also be considerable. But that’s the way evolutionary arms races (as proposed by Richard Dawkins [4]) spiral upwards.

It reminds me of Zahavi’s handicap theory [5], which is controversial, and was inspired (among other things, probably) by the peacock’s beautiful, but rather unwieldy, tail. (Obviously, it hinders the peacock if it has to escape a predator, so Zahavi concluded it was an ‘honest’ signal of the fitness of the peacock).

It’s also described as LAM (known to parents when their kid, on a bicycle, yells: “Look At Me! No hands!”).

2. Weber-Fechner law:

Another way of looking at it is to recall the Weber-Fechner law [6]. Fechner applied this to human perception of a stimulus (anything you see, hear, taste, smell or feel by touch).

The law is expressed in terms of the intensity of the stimulus S and the Just-Noticeable Difference (JND) – the smallest amount by which a stimulus can be changed and still be detected by an individual – where K is a constant (the Weber fraction):

K = (JND)/S

Of course, the JND varies from one individual to the next. The law can also be read as: the smallest perceptible change in the stimulus (dS) is proportional to the stimulus S itself:

K = dS/S

The Weber-Fechner law thus says that our sensory response is logarithmic (integrating dS/S gives a logarithm). For example, the eye can perceive light from as little as a few photons per second to as many as 10^20 photons per cm² per second, but the maximum tolerable laser intensity depends upon exposure time [7].

Similarly, the ear can respond over a wide range of pressures.

And the same logarithmic response would apply to birds. But there’s a limit: when the light is so intense as to be blinding, or the sound shatters your eardrums – or maybe just causes temporary blindness or deafness (130 dB for people). I’m not sure about the poor female bellbird…
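To make the “logarithmic” point concrete, here is a minimal sketch of Fechner’s bookkeeping: if each just-noticeable step multiplies the stimulus by (1 + K), then the number of steps separating two stimuli grows with the logarithm of their ratio. The Weber fraction K = 0.1 is just an assumed illustrative value, not a measured one.

```python
import math

K = 0.1  # assumed Weber fraction (illustrative only)

def jnd_steps(s_ref, s):
    """Number of just-noticeable steps between stimulus s_ref and stimulus s,
    if each step multiplies the stimulus by (1 + K)."""
    return math.log(s / s_ref) / math.log(1 + K)

# Every extra factor of 10 in the stimulus adds the same number of steps:
for ratio in (10, 100, 1000, 10**6):
    print(f"stimulus ratio {ratio:>7}: about {jnd_steps(1, ratio):5.0f} JND steps")
```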

The logarithmic dependence is usually written as:

L (in dB) = 20 log(R)

where R is the ratio of two sound pressures (and, since the pressure from a point source falls off as 1/distance, it can equally be read as a ratio of distances).

For sound, the level depends upon the distance from the source [8]: treating it as a point source, the level decreases by 6 dB each time the distance is doubled (that follows from the above equation, since 20 log 2 ≈ 6). The issue of sound attenuation with distance is discussed in more detail later.

The loudest recorded noise: when Krakatoa blew its top it generated 180 dB as measured at a distance of 160 km [9]! So that would be 186 dB at 80 km, 192 dB at 40 km, and so on.
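The Krakatoa extrapolation is easy to reproduce. A minimal sketch, assuming a point source, pure spherical spreading (20 log of the distance ratio) and no atmospheric absorption – which, over such distances, is certainly an oversimplification:

```python
import math

def spl_at_distance(spl_ref_db, r_ref, r):
    """Sound pressure level at distance r, given a level measured at r_ref,
    assuming spherical spreading from a point source."""
    return spl_ref_db - 20 * math.log10(r / r_ref)

# Krakatoa: 180 dB measured 160 km away (figure quoted above)
for r_km in (160, 80, 40):
    print(f"{r_km:>3} km: {spl_at_distance(180, 160, r_km):.0f} dB")
```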

3. Decibels? What happened to Bel(l)?

As an aside, the unit has history [10]: it started as the bel, to honour Alexander Graham Bell. But one-tenth of the bel – the decibel – was connected to both telephony and audiology [11]:

https://www.interacoustics.com/abr-equipment/eclipse/support/the-variety-of-decibel-basics

“The dB (a 10th of a Bel) was derived from the attenuation of a signal transmitted along a mile of telephone cable. The dB was linked with audiology from the beginning because this mile of attenuation was considered the smallest amount of signal change that the average listener could detect.”

https://www.britannica.com/science/bel-measurement

But Britannica puts it slightly differently [12]: The unit decibel is used because a one-decibel difference in loudness between two sounds is the smallest difference detectable by human hearing.

 

4.       Back to the birds – and other loudmouths:

 

Even at 4 metres from the male, say Podos & Cohn-Haft [2], the peak level the female bellbird is subjected to would still be 113 dB. Maybe it helps if the exposure time to each chirp (more like a screech!) is reduced?

For the male bellbird, if it ups the ante by 6 dB, the radius at which it can be heard doubles, so its coverage area goes up to 4X the initial value. That is, three times the initial number of mates – as well as predators! – is added on (on average).
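The arithmetic behind “6 dB buys four times the area” can be sketched as below, again assuming a point source in a free field and a fixed detection threshold for the listener.

```python
def range_gain(extra_db):
    """Gain in audible radius and area for a source that gets louder by extra_db,
    assuming the level at the listener falls by 20*log10(distance)."""
    radius_gain = 10 ** (extra_db / 20)
    return radius_gain, radius_gain ** 2  # area scales as radius squared

radius_x, area_x = range_gain(6)
print(f"+6 dB: radius x{radius_x:.2f}, area x{area_x:.2f}")  # ~x2 and ~x4
```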

And to get to the insect world, the loudest (known) is the African cicada [13]:

“The African cicada, Brevisana brevis (Homoptera: Cicadidae) produces a calling song with a mean sound pressure level of 106.7 decibels at a distance of 50cm. “

At the top of the scale, the loudest animal, as you may guess, is the sperm whale [14], according to one website: it reaches 230 dB! Some others: the pistol shrimp (189 dB), the blue whale (188 dB), the Northern Pacific right whale (182 dB), the Atlantic spotted dolphin (163 dB at 1 metre distance), the bottlenose dolphin (163 dB), the North Atlantic right whale (150 dB). Anyway, this list has 15 animals… and yes, the bellbird is on it. Note that the common rooster averages 130 dB (an excellent alarm clock), while one fine specimen hit 136 dB. But one omission in this list is the conure (155 dB), although the screaming piha shows up at 116 dB.

Back to fish, briefly [15]:

https://www.popsci.com/environment/loudest-fish/?utm_term=pscene022924&utm_campaign=PopSci_Actives_Newsletter&utm_source=Sailthru&utm_medium=email

A study was published in PNAS (Feb. 2024) of a small fish found in Myanmar’s shallow but murky mountain streams, the one-inch-long Danionella cerebrum (in the minnow and carp family), which can produce sounds of over 140 decibels at a distance of 10-12 mm – louder than an airplane taking off as perceived by human ears at a distance of 100 metres.

5. Why the koel?

Anyway, I am not about to get into an expedition into the jungles of the Brazilian Amazon to suss out the bellbird. So, I decided to get a reality check with a local bird, the koel.

Nor do I have a calibrated sound level meter (SLM). (Podos & Cohn-Haft [2] used a Larson Davis Sound Advisor 831C). All I’ve got is an iPhone 13 on which I downloaded a NIOSH SLM app.

“The first time I got 77 dB from a koel.

I got a few readings on 15th May 2023: 78, 81, 83 dB.

On another day: 79, 81 and 87 dB.”

One minor hassle that occurred in my scientific endeavour was that my daughter observed me going from one park to the next, and told her friends that I had finally gone cuckoo. I did go around the bend a bit, trying to locate some elusive koel. And it was really difficult when I heard one koel to the right and one to the left, because they rather made a Buridan’s ass of me. I thought the birds were just mocking me! Frankly, I was ready to koel a mockingbird! (Just kidding, of course).

Localization accuracy (i.e. in terms of finding the direction to the sound source) – for humans – is 1° for sound sources in front of you, and 15° for sources to the sides [16]. No wonder these two sidewinder koels were untraceable… and even the others were tough to find.

 

Fair warning: Trying to answer a few questions, I’m afraid, I got into a bit of a rabbit hole…

So, a lot of the following sections (6 to 23) are about some of the basics of audiology. They have not been reviewed by any audiologist, since I don’t know any. References are, of course, given – but some errors may have crept in anyway, if I misunderstood something.

6. Here we go…

 

“Localizing sounds in the horizontal dimension (i.e., judging whether a sound is to our right or our left) involves detecting Interaural Time and Level Differences (ITD and ILD, respectively). That is, we judge a sound to be in the right hemifield because it reaches the right ear earlier and louder than the left ear (and vice versa). ITDs and ILDs are predominantly used to localize sounds with frequencies below and above 1500 Hz, respectively.” [17] (‘interaural’ means ‘between the two ears’).

https://quizlet.com/126553758/flashcards?funnelUUID=24603e0a-3388-4bda-926e-e7945de7b3aa

According to the quizlet [18]:

a)       ITD: difference in time between a sound arriving at one ear as compared with the other

b)      ILD: difference in sound pressure (and thus the intensity of the sound) arriving at one ear as compared with the other. ILD is zero at 0° and 180° (sources directly ahead or behind), and maximal at +90° (directly into the right ear) and -90° (directly into the left ear).


 

 

 

7. ITD: [19]



Fig.1 (from [21])

Fig.2

Fig.3   [19]  (from Ch.12: “Sound Localization and the Auditory Scene”)

“The ear can detect a time difference as slight as 30 microseconds and smaller differences through training (as low as 10 µs). The maximum time lag for sound generated at one side of the head is around 0.6 milliseconds (see diagram below).”

Also:

“Sounds heard with both ears may be called diotic, whereas those heard independently by each ear are called dichotic.” If the sounds can’t be heard at all, you risk being called idiotic [20].

From: https://www.sfu.ca/sonic-studio-webdav/handBook/Binaural_Hearing.html


Fig.4 (Figure from [19])

Fig.5 (from [21]).

8. ILD:

Fig.6 (Figure from [21])

The head masks sounds (except for those with empty heads!), and the resultant shadow effect reduces the intensity of sounds especially at higher frequencies. For wavelengths shorter than the diameter of the head, acoustic energy is reduced by reflection and absorption. The result is that the ipsilateral ear (closer to the sound source) experiences higher sound intensity than the contralateral ear (farther from the sound source). The lowest frequency at which the shadow effect occurs is approximately: f = c/hd, where c = 343 m/s is the speed of sound at 20 °C, and hd = 0.175 m is the average human head diameter [22].  This gives 1960 Hz. The magnitude of ILD is given by:

ILD = 0.18 [f sin(α)]^(1/2), where f is the frequency (in Hz) and α is the azimuth angle.

That is, the more lateral the incidence angle, the greater the ILD.

Under optimal conditions, the human ear can detect differences in ILD as low as 0.5 dB [22].
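Just to see what these two expressions from Risoud et al [22] give numerically, here is a small sketch; the head diameter (0.175 m) and the speed of sound (343 m/s) are the values quoted above.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at 20 deg C
HEAD_DIAMETER = 0.175    # m, average human head

def shadow_onset_frequency():
    """Lowest frequency whose wavelength is shorter than the head diameter."""
    return SPEED_OF_SOUND / HEAD_DIAMETER  # ~1960 Hz

def ild_db(frequency_hz, azimuth_deg):
    """Empirical ILD estimate, in dB: 0.18 * sqrt(f * sin(azimuth))."""
    return 0.18 * math.sqrt(frequency_hz * math.sin(math.radians(azimuth_deg)))

print(f"shadow effect from ~{shadow_onset_frequency():.0f} Hz")
print(f"ILD at 4 kHz, 90 deg azimuth: ~{ild_db(4000, 90):.1f} dB")
```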

The fact that ILD increases its dominance as frequency increases is shown in the following graph (from [19]):

Fig.7 (from [19])

 

Another equation for ILD [23] – also referred to as Interaural Intensity Difference (IID):

IID = 20 log[(D + d)/D]

where D is the distance from the sound source to the nearer ear, and d is the interaural path difference (0.75 ft, i.e. about 23 cm).
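A quick sketch of how this formula behaves with distance, taking d ≈ 0.23 m: it predicts a large IID for a nearby source and a vanishing one for a distant source, which is why intensity differences of this geometric kind are mainly a near-field cue.

```python
import math

D_PATH = 0.23  # m, interaural path difference assumed above

def iid_db(distance_to_near_ear_m):
    """IID = 20*log10((D + d)/D), from the formula quoted in the text."""
    return 20 * math.log10((distance_to_near_ear_m + D_PATH) / distance_to_near_ear_m)

for D in (0.1, 0.5, 1.0, 5.0):
    print(f"{D:>4} m to the near ear: IID ~ {iid_db(D):.1f} dB")
```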

To reiterate:

Fig.8 (Figure from [21])


 

ITD dominates at low frequencies (below 1,000 Hz); ILD dominates at high frequencies (above 1,500 Hz) (J. Blauert 1997 & Lois Loiselle) [24, 25].

Fig.9 (from [22])

 

“Fig.9 shows the frequency dependence: ITD ('arrival time') works best at low frequencies, and ILD ('loudness') and HRTF ('external ear') work best at high frequencies. Together, they do a nice job for all frequencies.” Clearly, in the overlap region (roughly 1,000 to 3,000 Hz), the inaccuracy increases and our brains have a tougher time integrating information from these two sets of angular localization data [26].

9. MAAs and MAMAs:

This gets back to the minimum audible angle (MAA) which is the Just Noticeable Difference (JND) mentioned earlier: 1° straight ahead and 15° at the sides. The following graph, of MAA vs azimuth angle and frequency, is from Parvaneh [27], but due to Mills [28]:

Fig.10 (from [27], [28])

Particularly at high azimuth angles, the MAA increases. Further, in the ITD-ILD overlap frequency range of 1,000-3,000 Hz, the MAA goes up, and to a greater extent as the azimuth angle increases.

The variation of MAA with azimuthal angle is nonlinear, with the best sensitivity at midline (1°) and worst laterally (around 10°) – in experiments done both on humans and barn owls (Smith & Price 2014) [29].

Incidentally, the MAA refers to static sound sources. For moving sound sources, please refer to the MAMA (minimum audible movement angle) (Xuan Zhong thesis) [30]. Most studies suggest that MAMAs are larger than MAAs (Middlebrooks, Ch.6, UC Irvine) [31], if there is no specific mechanism to detect the motion of a sound source. According to another dissertation (Wersenyi) [32], MAMAs can be determined as a function of the velocity of the sound source – but the brain cannot perceive velocity: subjects make discriminations based on distance. Both MAAs and MAMAs are optimal for frequencies either below 1,000 Hz or above 3,000-4,000 Hz. The two minima of MAA are (as in the Figure by Mills [28]):

i)                     between 250 Hz and 1,000 Hz

ii)                   between 3,000 – 6,000 Hz

The greatest detail on MAMAs is in US Army and Air Force monographs [33, 34] (Letowski, Elias): the MAA is the detection threshold for location, while the MAMA is the detection threshold for motion. The MAMA is usually larger than the MAA, by about a factor of 2 – assuming the same sound source and the same initial direction – and is independent of the direction of motion. MAMAs are U-shaped functions of velocity, with optimum resolution at:

a)       8-16 degrees/sec in the horizontal plane

b)      7-10 degrees/sec in the vertical plane

MAA directly in front of the listener is as small as 1°, and at an azimuth of 90° can be as large as 40° for certain sounds [30]. The corresponding MAMAs are 1-3° at 0° azimuth, and 7-10° at 90° azimuth [30] i.e. at the sides.

I remember reading this point being made by Feynman in one of his lectures or books, but I have been unable to trace the reference (I even tried an open-source AI for help!). The statement Feynman made is that a cricket or a bird will choose to emit sounds or alarm calls in this frequency range so that a predator would not be able to locate it. But the middle of this frequency range depends upon the radius of the predator’s head hr: the wavelength (332/f) should be close to hr.

Fig.11 (Figure from [27]).

10.     Rayleigh’s Duplex theory:

The explanation of the low frequency and high frequency regions for ITD and ILD respectively is due to Lord Rayleigh, assuming the ‘spherical head model’ (SHM), which is referred to as the duplex theory. Note that it is valid for pure tones, but the situation is more complex for broadband sounds [29] (Smith & Price 2014). ITDs can be used for localization of sounds even at higher frequencies – but their contribution at higher frequencies is very small.

------------------------------------------------------------------------------------

(powerpoint slide from University College London (UCL) lectures on Binaural Hearing) [35].

--------------------------------------------------------------------------------------

 

“In the case of complex sounds, the ITD of the envelope (slow modulation) of the high frequencies can be perceived. This is known as the interaural envelope time difference.” (Lorenzi et al) [36].

 The equation for ITD and the figure from which it is obtained [27]:


Fig.12 (from [27])

This is also called the Woodworth model (no relation to the poet). However, the Wikipedia entry [37] just ignores the first term for simplicity.
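For concreteness, here is a small sketch of the usual form of the Woodworth spherical-head formula, ITD = (a/c)(θ + sin θ), with a the head radius and θ the azimuth (0 = straight ahead), alongside the simplified 600·sin θ microsecond version used further below. The 8.75 cm head radius is half of the average head diameter quoted earlier.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m (half of the 17.5 cm average head diameter)

def itd_woodworth_us(azimuth_deg):
    """Woodworth spherical-head ITD in microseconds (arc plus straight-line path)."""
    th = math.radians(azimuth_deg)
    return 1e6 * (HEAD_RADIUS / SPEED_OF_SOUND) * (th + math.sin(th))

def itd_simplified_us(azimuth_deg, itd_max_us=600.0):
    """The simplified ITD = 600*sin(theta) microseconds version."""
    return itd_max_us * math.sin(math.radians(azimuth_deg))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg: Woodworth {itd_woodworth_us(az):4.0f} us, "
          f"simplified {itd_simplified_us(az):4.0f} us")
```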

The smallest detectable ILD is about 0.5 dB, independent of frequency [27]. The near-field ILD may be 15 dB, while the far-field ILD is 5-6 dB. 

“The Woodworth model is a frequency-independent, ray-tracing model of a rigid spherical head that is expected to agree with the high-frequency limit of an exact diffraction model. The predictions by the Woodworth model for antipodal ears and for incident plane waves are here compared with the predictions of the exact model as a function of frequency to quantify the discrepancy when the frequency is not high.” The Woodworth model gives a good estimate for frequencies above 1.5 kHz; below that the complete diffraction model is required [38] (by Neil Aaronson and William Hartmann).

11.       Head-related transfer function (HRTF):

Fig.9 (from [26]) also includes the third system (apart from timing and intensity) used by humans:  the head-related transfer function (HRTF) - whose accuracy increases as the frequency increases. This utilizes reflections from the shoulders, the complex shape of the head and the pinna of the ear to glean more information about the direction of the sound source [23]:

a)       pinna: the folds of the outer ear act as a comb filter, creating delayed replications of the incoming sound signal, greatly improving localization of sound sources, especially of vertical position

b)      the head creates a shadow that greatly increases IIDs (ILDs), especially at high frequencies, where they can go as high as 20 dB

c)       upper torso causes reflections

It is a monaural system: the brain learns to interpret the filter function created by the complex shape of the ear, which varies a lot from one individual to another. An ‘average’ HRTF can be defined, but better results are obtained with personalized filter functions.

 


Fig.13 (Figures from [39]).

https://www.sfu.ca/sonic-studio-webdav/handBook/Binaural_Hearing.html

12.       Localization in the vertical plane (determining elevation of sound source):

“Time delays of reflections from the ridges of the pinna. The first chart (left) shows the delays (in microseconds) caused by reflections from the inner pinna ridge which determine front-back directions in the horizontal plane. The other chart (right) shows delays from the outer pinna rim which are important in determining elevation in the vertical plane. The measurements were made on a 5x scale model and reduced to human pinnae size (after Batteau and Plante, from A.W. Mills, "Auditory Localization", in J.V. Tobias, ed., Foundations of Modem Auditory Theory, Academic Press, 1972, vol. 2, p.337, used by permission).” [40].

“The ability to localize a sound in a vertical plane is often attributed to the analysis of the spectral composition of the sound at each ear. In fact, the sound waves arriving at the ears have rebounded from structures such as the shoulders or pinnae, and these rebounds interfere with the direct sound as it enters the ear canal. This interference causes spectral modifications, reinforcements (spectral peaks) or deterioration (spectral gaps) in certain frequency zones which allow the localization of a sound source in the vertical plane.

Along the course of life, a multitude of transfer functions are learnt, which correspond to different directions for sound sources. These memorized filters are used to reweight the sound spectrum and are used to disambiguate the location of sounds in the cone of confusion (discussed below).

“This diagram (from J. Garas) shows the spectral modification of the original sound wave in function of the azimuth of the source (from top to bottom: -10°, 0°, 10°). It is apparent that the spectral gap moves from left to right.”

Note: the diagram from Garas is not shown here; please check Lorenzi’s website [36].

 The localization of the source in the vertical plane remains less precise than in the horizontal plane [36].

“By varying the spectral contrast of broadband sounds around the 6–9 kHz band, which falls within the human pinna’s most prominent elevation-related spectral notch, we here suggest that the auditory system performs a weighted spectral analysis across different frequency bands to estimate source elevation.” [41] (Bahram Zonooz et al).

More about HRTF after the next topic.

13. Estimating distance of sound source:

Monaural cues are important in estimating distance, which “is much easier for familiar sounds” (Risoud et al) [22]. But generally, close distances tend to be overestimated and long distances underestimated.

“The frequency spectrum of a sound source varies with distance due to absorption effects caused by the medium. High frequency attenuation is particularly important for distance judgments for larger distances (greater than approximately 15 m) but is largely uninformative for smaller distances.” [40].

“Higher frequencies are attenuated by a greater amount when the sound source is to the rear of the listener as opposed to the front of the listener. In the 5 kHz to 10 kHz frequency range, the HRTFs of individuals can differ by as much as 28 dB.  High frequency filtering is an important cue to sound source elevation perception and in resolving front-back ambiguities.” [42].

14. The Cone of Confusion:

However, I soon found that there is something called the ‘cone of confusion’ [43]:

https://www.reddit.com/r/Mcat/comments/18x4ufd/wtf_is_cone_of_confusion/?rdt=46232

“All of the points on the cone of confusion have the same interaural level difference (ILD) and interaural time difference (ITD).” (This is so confusing, it’s more like a zone-of-confusion!) Anyway, the diagram below (from Reddit [43]) shows that the ear cannot distinguish sounds coming from the front (D) from those from behind (C), or sounds coming from above (A) from those coming nearer the ground (B). But, most likely as an evolutionary adaptation, the default assumption the brain makes is that the source of the sound is behind you (since we don’t have a backup system of eyes in the back of our heads!).

“We speculate that our brains build a representation of the space based on the reliability of sensory stimuli in those spaces. This could explain the greater number of front-to-back errors, suggesting that, when stimuli are not visible and auditory information is useless, back space becomes more salient, because there hearing is the only sense available to detect stimuli. This pattern could be due to adaptive mechanisms.” [44].

Fig.14 (from [43])

 

Fig.15 (Figure from [22])

And, what’s more, it’s not really a cone; it’s approximately a cone:

Mathematically, the set of spatial locations with the same distance difference to the two ears are located on a hyperbolic surface that is most closely approximated by a cone in the sound field, also called ‘the cone of confusion’ [30] i.e. the points of the hyperbolic surface asymptotically approach the cone (see Figure below):


 






Fig.16 (from [30])

We’ll get back to the hyperbolic surface in a bit; but first, we need to find the half-angle of the cone of confusion. I just found this diagram indicating that the angle depends upon the value of the ITD [45,46]:


Fig.17 (from [45, 46])

Vassilakis [47] in his Fig.9.9 makes it clear that the angle of the cone of confusion decreases as the ITD increases.



Fig.18 (from [47])

This may follow from the Woodworth equation, neglecting the 1st term (as in [37], out of sheer laziness):

ITD = 600 sin(theta)

(600 microseconds is the maximum value of ITD for an average head size, as mentioned earlier).

If ITD = 300, theta = 30° and 150° → half-angle of cone of confusion αc = 60°

If ITD = 520, theta = 60° and 120° → half-angle of cone of confusion αc = 30°
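A quick sketch of those two numbers, inverting the simplified ITD = 600·sin(theta) relation and measuring the half-angle of the cone from the interaural axis:

```python
import math

ITD_MAX_US = 600.0  # microseconds, maximum ITD for an average head (quoted above)

def cone_half_angle_deg(itd_us):
    """Half-angle of the cone of confusion (about the line through the ears)."""
    theta = math.degrees(math.asin(itd_us / ITD_MAX_US))  # azimuth from straight ahead
    return 90.0 - theta  # theta and (180 - theta) share the same cone

for itd_us in (300, 520):
    print(f"ITD = {itd_us} us -> half-angle ~ {cone_half_angle_deg(itd_us):.0f} deg")
```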

15.      Mere hyperbole:

The hyperbolic surface that Xuan Zhong [30] discusses is reasonable, since any point on either branch of a hyperbola has a constant difference of distances to the two focal points (which lie along the major axis) – in this case the two ears, with the nose pointing along the minor axis:

Fig.19 (Hyperbola Image from [48])

https://www.mometrix.com/academy/hyperbolas/

 The equation for a hyperbola is [49]:

https://claregladwinresd.glk12.org/mod/book/tool/print/index.php?id=888

A hyperbola centered at (0, 0) whose transverse axis is along the x-axis has the following equation in standard form:

x²/a² – y²/b² = 1
Vertices: (a, 0) and (-a, 0)
Foci: (c, 0) and (-c, 0), where c² = a² + b²
Equation of asymptote lines: y = ±(b/a) x        

The slope b/a = tan (αc)

where αc is the half-angle of the cone of confusion and of the asymptotes.

So, in Zhong’s hyperbola the two ears are the foci (half the inter-ear distance is the parameter c), while 2a is the constant path-length difference, i.e. the speed of sound times the ITD; that fixes b and hence the slope of the asymptotes.
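Here is a small sketch of that construction, under the simpler straight-line-path assumption (no diffraction around the head): the ears sit at the foci, 2a is the path-length difference (speed of sound × ITD), and b follows from c² = a² + b². Because this ignores the arc path around the head, the asymptote angle comes out somewhat smaller than the 60° obtained from the 600·sin θ shortcut above.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
EAR_SEPARATION = 0.175   # m (assumed; the foci sit at +/- EAR_SEPARATION/2)

def hyperbola_from_itd(itd_us):
    """Return (a, b, asymptote angle in degrees) for the confusion hyperbola."""
    c = EAR_SEPARATION / 2
    a = SPEED_OF_SOUND * itd_us * 1e-6 / 2   # half the path-length difference
    if a >= c:
        raise ValueError("ITD too large for this ear separation")
    b = math.sqrt(c**2 - a**2)               # from c^2 = a^2 + b^2
    return a, b, math.degrees(math.atan(b / a))

a, b, angle = hyperbola_from_itd(300)
print(f"a = {100*a:.1f} cm, b = {100*b:.1f} cm, asymptote (cone half-angle) ~ {angle:.0f} deg")
```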

16.     Insects hear things differently:

Xuan Zhong [30] also has a figure explaining the difference between the ITD and ILD:

 

 

 



Fig.20 (From [30])

 

These two types of sound cues (ITD and ILD) are explained in more detail [50], for how insects hear:

“…there are two cues available for detecting the direction of sound waves. The first is diffraction and the second is the time of arrival. Both cues require comparisons between two detectors (ears) in different locations. No matter where an animal’s ears are located, they are almost always on opposite sides of the body. Diffraction refers to the bending of waves around an occluding object. Diffraction is heavily dependent on the size of the occluding object relative to the wavelength of sounds, and the small size of insect bodies complicates the problem of sound localization. Significant diffraction occurs when the distance between the ears is greater than one-tenth of the wavelength of the sound. In this case, the sound bends around the body, which produces changes in both the amplitude and the phase of the sound wave arriving at each ear. When the wavelengths of the sound are very small relative to the size of the head, less diffraction occurs, which means that the sounds do not bend around the body as readily and this creates a sound shadow: the ear that is farther from the source receives a less intense signal than the ear that is nearer to the source. The time of arrival cue simply results from the fact that the sound must travel farther to arrive at the more distant of the two ears, so it arrives at the distant ear later in time.”

For 1 kHz, the wavelength of sound is 332/1000 = 0.332 metres, and 10% of that is 3.3 cm. So insects will have problems, and they solved it in a different way from mammals [50].
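The same one-tenth-of-a-wavelength rule can be turned around to ask, for a given ear separation, above what frequency diffraction starts to provide usable directional cues. A rough sketch, using the 332 m/s figure from the text and purely illustrative ear separations:

```python
SPEED_OF_SOUND = 332.0  # m/s, as used in the text

def min_useful_frequency_hz(ear_separation_m):
    """Frequency at which the wavelength equals 10x the ear separation."""
    return SPEED_OF_SOUND / (10 * ear_separation_m)

for label, sep in (("human head, ~17.5 cm", 0.175), ("small insect, ~5 mm", 0.005)):
    print(f"{label}: diffraction cues usable above ~{min_useful_frequency_hz(sep):,.0f} Hz")
```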

Nah, I’m not discussing creepy-crawlies; you want to know about their unique hearing capability, go look it up [50]!

17.      Head-related Transfer Function (HRTF) vs Cone of Confusion:

However, the cone of confusion doesn’t seem to cause problems in the real world because the HRTF overrides it [27](Parvaneh), due to the filtering effects of the pinna:

“The pinna of individuals varies widely in size, shape, and general makeup. This leads to variations in the filtering of the sound source spectrum, particularly when the sound source is to the rear of the listener and when the sound is within the 5–10 kHz frequency range.” [42].




Fig.21 (Figure from [21]).

In addition, people can move their heads, sideways as well as up and down:

“In normal listening environments, humans are mobile rather than stationary. Head movements are a crucial and natural component of human sound source localization, reducing front-back confusion and increasing sound source localization accuracy.  Head movements lead to changes in the ITD and ILD cues and in the sound spectrum reaching the ears. We are capable of integrating these changes temporally in order to resolve ambiguous situations. Lateral head motions can also be used to distinguish frontal low frequency sound sources as being either above or below the horizon.”   [42] (Kapralos et al, and many references therein). In addition, optimal localization is obtained when the full sound spectrum (1 to 16 kHz) is available, and decreases when the bandwidth of the sound source decreases.

In closed spaces, another distance cue is obtained from the reverberation time (the time required for the sound wave to decrease by 60 dB) [23]. Elias quotes Kinsler et al (1982) who report the reverberation time in the Carnegie Hall is 1.8 secs from 125 to 500 Hz, and drops to 1.4 secs at 4,000 Hz, while the Boston Symphony Hall has 2.2 secs at 125 Hz, dropping to 1.5 secs at 4,000 Hz.

Fig.22 (from [21])

Fig.22 [21] shows the distinction between direct energy (from a nearby sound source) and reverberant energy (from a distant sound source).

So, if you want to locate a bird, you need to know the distance and the direction (both the azimuth and elevation angles) (from [22]):

Fig.23 (from [22])

18. Blind birding:

This raises an interesting question: could a visually-challenged person locate a bird using its bird calls? Sounds purely hypothetical? Well, I found an Audubon website for it! There you go:

https://www.audubon.org/news/birding-blind-open-your-ears-amazing-world-bird-sounds#:~:text=Yet%20you%20are%20often%20more,and%20songs%20offers%20even%20more.

Trevor Attenberg [51] talks about birding by ear: he recognizes bird calls and bird songs, plus he probably knows the habitats of each bird that he identifies. I would guess that the cone of confusion is not something that any ornithologist bothers about; there are too many real-world cues to use for them to be concerned about what a lab audiologist would like to measure. Nevertheless, the problem of MAAs being worse to the side (Mills’ figure) is probably a real-world effect. I doubt that a visually challenged person would be as effective at locating a bird in the bush as a sighted person.

Also [52]:

https://ornithology.com/hearing-impaired-birding/

The author complains that birds have perfect hearing because the hair cells in their cochleas get replaced when they die off, which, unfortunately, doesn’t happen in older humans. So, you’re stuck with hearing aids of varying quality and reliability – until researchers figure out how to emulate birds… as far as how their hair cells regenerate [53].

19. Hearing of males and females:

A study of five different species of songbirds – all finches - by Sarah Woolley et al [54] did not find significant differences between the ‘auditory sensitivities and courtship vocalizations’ of males and females. Songbirds are most sensitive in the 2-5 kHz range, and can hear up to 8 kHz. But differences in vocal acoustics can be very large even in species with similar hearing: frequency of peak power varies between 2 and 5 kHz, 3 species peaking at low frequencies and one at high frequencies, with bandwidth varying from 1.7 to 5.6 kHz for different frequencies. However, a lot more research is needed with a larger number of species.

What is the situation for humans? The basic hearing apparatus is the same for both sexes, but males are 5X more likely to suffer hearing damage than females, being more prone to the diabetes and heart disease that are correlated with hearing loss. Additionally, men are more likely to work at jobs that damage hearing (noise-induced hearing loss, NIHL).

In terms of age-related hearing loss (ARHL), men tend to lose their hearing at the higher frequencies earlier (the 1-4 kHz range) [55] (Koichiro Wasano et al). For women, hearing loss generally occurs at the lower frequencies (1-2 kHz), so they struggle to hear lower tones. This would suggest they would have less sensitivity to ITD.

While the anatomy of the ear is the same regardless of sex, research shows that the way men and women process sounds in the brain is different. Brain scans while listening show that men listen with just one hemisphere of their brain (mostly the left hemisphere), while women use both hemispheres (possibly why they are better at listening than men) [56] (‘living sounds’), [57] (Science Daily).

“Weight, smoking, and hormone exposure show varying links with risk of age-related hearing loss, per study of 2,349 males and females.” Administering estrogen seems to reduce the loss of hair cells in the cochlea and to improve hearing recovery after exposure to noise [58] (Y.H. Park).

20. The effect of noise on hearing:

The cocktail party effect in [59] (Letowski) and [60] (Smith and Price) is also referred to as the noisy restaurant problem [61] (Robert Dooling, Acoustics Today 2019). Dooling specifies 3 levels of SNR in human communication in a speech-in-noise situation:

i)                    SNR of –5 dB (i.e. the steady masking noise is 5 dB louder than the speech): there is a 50% probability of identifying it as speech

ii)                   SNR of 5 dB: enough for somewhat strained communication in a noisy restaurant

iii)                 SNR of 15 dB: required for unambiguous acoustic communication and the perception of speech

Dooling [61] also mentions the ‘critical ratio’: the threshold SNR at which an animal can just detect a tone masked by noise in a band of frequencies around the signal frequency. These critical ratios have been determined for many species, including humans and 16 species of birds. Dooling adds that for humans the signal tone should be 20 dB above the masking noise, but birds have worse hearing: they need 26 dB. This extra 6 dB required (extending over the entire sensitive frequency range) means that humans can hear songbirds in noise at double the distance at which the birds themselves can detect each other amidst, say, traffic or construction noise!
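The “double the distance” statement is just the inverse-square law again. A minimal sketch, assuming spherical spreading only (no excess attenuation from air, foliage or ground):

```python
# If one listener tolerates `extra_db` more attenuation than another (here the
# 26 - 20 = 6 dB difference in critical ratios), it can stand farther away by
# the factor 10**(extra_db/20) under pure spherical spreading.

def distance_ratio(extra_db):
    return 10 ** (extra_db / 20)

print(f"distance ratio: {distance_ratio(26 - 20):.2f}")  # ~2: human vs songbird
```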

Dooling also points out that mere detection is not enough. In order to discriminate between two different speakers (or birds), an additional 2-3 dB is needed. And to actually recognize a particular sound (like a word), a further top-up of 2-3 dB is required. The SNR differences between detection, discrimination and recognition are similar for birds and humans – but so far we are unable to check if the communication between birds is comfortable. There is a difference between just-noticeable-differences (JNDs) measured in a lab and just-meaningful-differences that may be measured in the field.

Combining the inverse-square law with attenuation of sound in the medium of air, one can determine the theoretical maximum distance at which two birds can communicate. Then one must add on the effect of noise, which reduces this to some lower value depending upon the amount of noise. If this distance is lower than the diameter of the bird’s territory, the ambient noise may have serious biological consequences, since birds vocalize to defend their territories, to maintain social relations and to find mates.

There is a difference between a permanent threshold shift (PTS) and a temporary threshold shift (TTS), but birds generally recover pretty fast from exposure to high levels of noise (the hair cells in their cochleas regenerate): canaries and zebra finches recover from even continuous exposure to 120 dB within a few weeks, while budgerigars have a 10 dB shift even after a few weeks. However, Beason questions whether continuous exposure to loud noise could prevent hair cell regeneration in the cochleas of birds – a major problem for people who make systems to warn birds away from airports [62].

The Lombard effect [63] (raising your voice in a noisy restaurant) applies to birds too: they can raise the level of their vocalizations by up to 10 dB.

Another study (of European blackbirds and great tits) shows that the vocalizing bird can move 9 m higher on its perch and get an SNR benefit equivalent to closing the inter-bird distance by 50% – and the listening bird can get an even greater benefit by moving up [61, 64] (FHWA document). In a coniferous forest, a bird on the ground would see sound attenuation of 20 dB/m, while a bird 10 metres up would experience just 5 dB/m attenuation of its vocalization [64].

Recent research has shown that birds hear birdsong differently from humans: they focus on fine details that humans are unable to resolve – but these details are more likely to be masked by traffic noise while humans pay more attention to sequences in communication that are less likely to be masked completely. The good news for birds is that traffic noise is mostly at lower frequencies than those of their vocalizations, so the effect is somewhat less [61], as seen in the Figure below:

Fig.24: (from [61])

However, there are two types of traffic noise. If we have a few vehicles passing by on a road, they may each be treated as a point source of sound; by the inverse square law, the noise drops 6 dB when the distance between the source and the receiver is doubled (as mentioned earlier). But on a highway with a steady stream of vehicles, the sound is better approximated as a line source: the sound intensity drops as the inverse of the distance, i.e. by 3 dB as the distance is doubled. That is, the noise propagates further [65] (Judith Rochat). So, a highway cutting through a forest disturbs birds much more than a relatively isolated country road does. There is another point: while it is true that the peak frequency of traffic noise is at about 1 kHz and that birds concentrate their vocalization mostly in the 4-8 kHz range, the level of traffic noise even at 10 kHz can be a pretty substantial 50 dB [65] which, as seen above, can be disruptive.
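The point-source versus line-source difference is easy to tabulate. A small sketch, ignoring ground effects, barriers and atmospheric absorption:

```python
import math

def point_source_drop_db(r, r_ref):
    """Spherical spreading: 6 dB per doubling of distance."""
    return 20 * math.log10(r / r_ref)

def line_source_drop_db(r, r_ref):
    """Cylindrical spreading (steady traffic stream): 3 dB per doubling."""
    return 10 * math.log10(r / r_ref)

for r in (50, 100, 200, 400):
    print(f"{r:>3} m: point source down {point_source_drop_db(r, 50):4.1f} dB, "
          f"line source down {line_source_drop_db(r, 50):4.1f} dB")
```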

Note that there is virtually no traffic noise in Dooling’s graph above at 10 kHz. But Dooling has an escape hatch: the phrase ‘traffic-noise shaped spectrum’. The apparent contradiction can be resolved by noting that neither the traffic density nor the specific material used to construct the road is mentioned. Rochat [65] mentions the latter – but not the former. The details about traffic density are given in the 2004 FHWA document [64], along with the effect distance (the distance over which the effects of traffic noise are significant):

i)                    10,000 cars/day: 40 to 1,500 metres

ii)                   60,000 cars/day: 70 to 2,800 metres

A main road would see traffic as low as 1,700 to 11,500 cars/day, while a busy highway has 30,000-40,000 cars/day and an interstate highway sees 34,000-50,000 cars/day.

a)       A study in the Netherlands by van der Zande [64] reported that some species avoid rural roads to a distance of 500-600 metres and busy highways to a distance of 1,600-1,800 metres.

b)      A study of passerine bird species in grasslands near a road with 5,000 cars per day showed a 12-56% reduction in species within 100 metres of the road.

c)       Raty [64] found a 2/3 reduction in the number of birds up to 250 metres, and some reduction up to 500 metres, in a forest next to a highway with 700-3,000 cars/day.

d)      A highway with 50,000 cars/day showed a mean noise of 50 dB at 500 metres.

However, one must note that some species do not seem to be bothered by traffic noise. An obvious example is the ubiquitous crow that seems to thrive in urban areas [64].

21. Attenuation of sound as a function of distance:

Daniel Yip et al [66] have discussed the attenuation of sound in both forest and roadside environments. 

First, the 6 dB number for a point source is due to [67], but it follows from the log dependence as mentioned in Sec.2.

Second, traffic as a line source – with 3 dB attenuation when the distance is doubled – was discussed above, following Rochat [65], but was not considered by Yip.

Third, there is Stokes’s law of sound attenuation [68]. The comparison between Stokes’s law and the inverse-square-law dependence was discussed on Physics Stack Exchange [69].

Wait a minute! I heard about Stokes’s law in connection with viscous force and the terminal velocity of an object falling in a viscous medium, but when did he analyse the attenuation of sound due to the viscosity of a medium? Anyway, for those interested in this matter, these two references [68, 69] should suffice. Since the viscosity of air isn’t very high, the distance at which sound gets attenuated to 1/e of its initial magnitude works out to be about 6 miles for sound at 1 kHz for a point source. So most people just stick with the 6 dB estimate as a good enough approximation.

But the fourth issue is what worried Yip et al [66]. A lot of estimates of bird counts were done on roadsides rather than in the interior of forests (these are country roads with low traffic density and noise). But sound gets attenuated a lot more in forests, by tree trunks and by leaves, as compared with the unobstructed road corridor. The effect is greater at higher sound frequencies, because at low frequencies the diffraction effects are not as strong. Yip et al concluded that bird counts may be half of what is usually obtained, due to these differences in attenuation between roads and forests.

A more detailed analysis was done by Margaret Price in her thesis [70] and her subsequent publication [71] of sound attenuation through trees in various woodlands, comparing models of multiple scattering by tree trunks and by foliage – as well as the effect of the ground - with measurements. The measured spectra show a low frequency peak in excess attenuation below 500 Hz, a mid-frequency dip and a gradual increase in attenuation above 1 kHz. 

 

22. The Evolutionary angle:

To answer the question about the evolutionary origins of hearing, one should look at the brain. This I will refrain from doing, except to mention that the two mechanisms, ITD and ILD, get fed into different areas of the brain, the MSO and the LSO, and thence to higher areas in the brain [21]:

a)       ITD: low frequency cues go to the medial superior olive (MSO)

b)      ILD: high frequency cues go to the lateral superior olive (LSO)

How do these neurons work? There are different MSO neurons tuned to different time delays, e.g. some for 50 microseconds, some for 60, 70, etc. For a given ITD, only the corresponding subset will fire [72] (Heeger). Similarly, some LSO neurons fire when the right cochlea experiences greater pressure than the left, and others favour the left cochlea. Depending on the firing rates of these LSO neurons, the difference in loudness at the two ears is estimated [72] (Heeger).

Roger Lederer:   https://ornithology.com/the-hearing-of-birds/

According to Lederer [53], most birds hear in the range 1,000-4,000 Hz, whereas human hearing ranges from 20 Hz to 20,000 Hz. But there are variations: horned lark (350-7,600 Hz), canary (1,100-10,000 Hz), house sparrow (675-11,000 Hz) and the long-eared owl (100-18,000 Hz). Birds are more sensitive to tone and rhythm than humans, so “they can more easily discern sounds in a noisy environment.”

“Nocturnal birds depend more on sound even though their night vision is excellent. Barn Owls have a flattish facial disk that funnels sounds toward the ears and fleshy ears not unlike humans’, but asymmetrical in shape and location – they don’t look exactly alike, and one is higher on the head than the other.” Apparently, the asymmetric placement helps barn owls pin down the elevation of a sound source.

Unlike humans, birds do not have the external ear (the pinna) that surrounds the opening to the ear canal, but in most birds the ear canal is “covered by feathers that protect the ear from air rushing over it and help to funnel sounds into the ears as the bird flies.” [53].

Birds and mammals are thought to have diverged about 300 million years ago.

Manley [73] argues:

“Although physics often constrains what evolution can do to optimize hearing, biological constraints arising from evolutionary contingencies also limit the nature and degree of the physical process involved in hearing optimization.” He adds that: “The functional differences between the ears of birds, reptiles and mammals, are, despite large differences in structure, quite small.” Because of the common ancestor, Manley further argues [73] that: “All organs of extant land vertebrates whether reptiles, birds or mammals, evolved fully independent of each other, yet all originated from the same, very simple, ancestral form in the earliest reptiles.”

Going further back, to 500 million years ago, the origin of hearing is traced to the lateral line - a surface organ used by fishes to detect fluid motion in water - particularly the sensory hair cells that are common to vertebrate inner ears (in the cochlea).

Summing up, Manley [73] disputes the idea of convergent evolution in this case: in mammals, the shared sensory architecture and physiology arose from a common ancestry. “The traits involved show a high degree of conservation across the group, with specific adaptations arising from the specific modification of these pre-existing cochlear structures and hearing processes.”

A more detailed analysis, based on genetic studies of various proteins involved in hearing, by Marcela Lipovsek [74], shows that these proteins were evolutionary hotspots. Detection of high frequencies in mammals (16-150 kHz), particularly in echo-locating mammals (like bats and dolphins), was driven by changes in so-called stereocilia proteins and in outer hair cells (OHCs). This paper mentions the categories: convergent evolution, parallel evolution, adaptive selection and positive selection.

23. More ears?

A blog I read [26] considers the possibility of more ears.

Although this sounds quirky, there is logic behind it. Xuan Zhong points out that in artificial systems for computerized audition, at least 4 sensors are needed for 3D localization of a sound source, such as a sniper.



Fig.25 (from [26])

 

 “There would be no point in having four ears if the two localization systems could not be combined. The source of the sound must lie on both cones, so we need to find the parts that the two cones have in common: their intersection. That intersection is, in this case, a nice parabola. It is shown as stopping at the end of the cones but should again extend into space along with the cones. Does this improve localization? Yes: the possible source of the sound is reduced from the surface of a cone to a line in 3D.” [26].

A 2009 paper by Schnupp and Carr [75], with the title “On hearing with more than one ear: lessons from evolution”, concludes:

“Of course, artificial directional hearing designs would not necessarily have to be binaural. Even insects rarely have more than two ears, and sometimes only one, which is perhaps unexpected, given that a separation (‘unmixing’) of sounds from different simultaneous sound sources can in theory easily be achieved using techniques such as independent component analysis, provided that the number of sound receivers (ears or microphones) is as large as the number of sound sources.

Perhaps bionic ears of the future will interface to elaborate cocktail-party hats that sport as many miniature microphones as there are guests at the party. The basic algorithm for independent component analysis requires that the relationship between sources and receivers be stable over time. To adapt this to mobile speakers and listeners, methods would have to be developed to track auditory streams when the sound sources and receivers move relative to each other, but that may well be a solvable problem. If so, many-eared, rather than merely binaural, devices might ultimately turn out to be optimal solutions for bionic hearing.”

Anyway, never mind Fig.25! It turns out that there is an insect with 6 pairs of ears located on the side of its abdomen: the bladder grasshopper [76] – and it may not be the only one with more than a pair of ears. Mother Nature wins again! However, why did this grasshopper need so many ears? Maybe each pair (alone) wasn’t up to it?

The ears of other insects also manifest at various different positions:

“There are ears on antennae (mosquitoes and fruit flies), forelegs (crickets and katydids), wings (lacewings), abdomen (cicadas, grasshoppers and locusts) and on what passes for a “neck” (parasitic flies). Among moths and butterflies, ears crop up practically anywhere, even on mouthparts. Praying mantises have a single, “cyclopean” ear in the middle of their chest.” [76].

__________________________________________________________________________________

24. Out of the rabbit hole:

Real problems I faced, while attempting to track koels:

i)                    I can’t even see the pesky koel, let alone determine the distance. Forget elevation; even the direction was tough to get.

ii)                   The koel doesn’t call continuously, and the NIOSH SLM app has a sampling time of about a second.

iii)                 I don’t know if the koel’s call is directional (louder in the forward direction).

iv)                 Orienting the phone the right way couldn’t be done either, since I only broadly knew the koel was above me. You can manage with two ears to figure out where the bird is left-right or ahead-behind, but the third dimension means you have to tilt your head almost horizontally!

v)                   The koel is not a point source, so the inverse square law applies only for a distance d (probably) > 10s, where s is the size of the bird (or its beak?).

vi)                 Not every bird of the same species will be equally loud. So, there may be overlap in loudness between different species.

Podos et al [2] mentioned software and a calibrated sound meter, which extrapolate the sound level to the value at a distance of 1 metre from the bird.

The mic of a cellphone is an electret or a MEMS type, both of which are omnidirectional. However, the mic is encased in a body with a small input aperture, so you have to orient it with respect to the source to get the maximum signal. I figured that the orientation shouldn’t matter that much because, at least as a first approximation, the received amplitude should follow Lambert’s cosine law, and even if the angle were off by 30°, the cosine of that is about 0.87 – i.e. an error of only a decibel or so.
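For what it’s worth, here is that little orientation-error estimate as a sketch, assuming the received amplitude simply follows the cosine of the off-axis angle (real phone-microphone directivity patterns will differ):

```python
import math

def level_error_db(off_axis_deg):
    """How many dB low the reading is if the mic is off-axis by this angle,
    assuming amplitude ~ cos(angle)."""
    return -20 * math.log10(math.cos(math.radians(off_axis_deg)))

for angle in (10, 30, 45, 60):
    print(f"{angle:>2} deg off-axis: about {level_error_db(angle):.1f} dB low")
```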

Shouldn’t three microphones in a triangle be enough to find the arbitrary location of a bird in 3D? But that’s not what Xuan Zhong [30] says!

Two microphones would be enough if the bird is straight ahead (in the direction passing through the midpoint of the two microphones). Getting 2 microphones – or more – means a group activity. You need to be able to brainwash more people (with cellphones) to participate in koel-tracking events. (Either that, or you get access to a sniper detection unit.)

Anyway, that was last year. This year, I’m ignoring the calls of the koels … no matter how many come-hither calls they emit. I will, of course, continue to hum the tune of the Hindi song: “Nazar lagi Raja tore bangale pe…”

I’m going to buy me a cuckoo clock! Just keep the cuckoo in place (nazar mein).

References:

 

1) https://www.popularmechanics.com/science/animals/a29565240/worlds-loudest-bird/
2) Jeffrey Podos and Mario Cohn-Haft, Curr. Biol. 29 (2019) R1068-R1069
3) https://a-z-animals.com/blog/10-birds-that-chirp-the-loudest/
4) Richard Dawkins, "The Blind Watchmaker"
5) https://en.wikipedia.org/wiki/Handicap_principle
6) https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner_law
7) Bill Otto, https://www.quora.com/How-many-watts-of-light-can-damage-the-human-eye-or-how-many-watts-of-light-can-blind-a-human
8) https://www.wkcgroup.com/tools-room/inverse-square-law-sound-calculator/#:~:text=According%20to%20the%20inverse%20square,valves%2C%20small%20pumps%20and%20motors. (see also https://www.acoustical.co.uk/distance-attenuation/how-sound-reduces-with-distance-from-a-point-source/)
9) https://en.wikipedia.org/wiki/1883_eruption_of_Krakatoa
10) https://en.wikipedia.org/wiki/Decibel
11) https://www.interacoustics.com/abr-equipment/eclipse/support/the-variety-of-decibel-basics
12) https://www.britannica.com/science/bel-measurement
13) J.M. Petti, https://entnemdept.ufl.edu/walker/ufbir/chapters/chapter_24.shtml
14) https://www.ifaw.org/international/journal/loudest-animals-on-earth#:~:text=1.,sounds%20up%20to%20230%20decibels.
15) https://www.popsci.com/environment/loudest-fish/?utm_term=pscene022924&utm_campaign=PopSci_Actives_Newsletter&utm_source=Sailthru&utm_medium=email
16) https://en.wikipedia.org/wiki/Sound_localization
17) J.C. Middlebrooks and D.M. Green, "Sound localization by human listeners", Ann. Rev. Psychol. 42 (1991) 135-59
18) https://quizlet.com/126553758/flashcards?funnelUUID=24603e0a-3388-4bda-926e-e7945de7b3aa
19) Ch.12: "Sound Localization and the Auditory Scene", https://courses.washington.edu/psy333/lecture_pdfs/chapter12_SoundLocalization.pdf
20) https://www.sfu.ca/sonic-studio-webdav/handBook/Binaural_Hearing.html
21) Jonathan Pillow, Lecture 18, Spring 2015 (Ch.10, Part II), "Auditory system and hearing"
22) M. Risoud et al, "Sound source localization", Eur. J. Otorhinolaryngology, Head and Neck Diseases 135 (2018) 259-64
23) Bart Elias, "Sound basics: a primer in psychoacoustics" (U.S. Air Force Research Lab, Aug. 1998)
24) J. Blauert, "Spatial hearing: the psychophysics of human sound localization" (MIT Press, 1997)
25) Lois H. Loiselle et al, "Using ILD or ITD cues for sound source localization", J. Speech, Language and Hearing Research 59 (2016) 810-18
26) https://planetfuraha.blogspot.com/2022/05/playing-it-by-ears-hearing-3.html
27) Parvaneh Parhizikari, "Binaural hearing: human ability of sound source localization" (Master's thesis, Blekinge Institute of Technology, Dec. 2008)
28) A.W. Mills, "On the minimum audible angle", J. Acoust. Soc. America 30 (1958) 237-46
29) Rosanna C.G. Smith and Stephen R. Price, "Modelling of human low frequency sound localization acuity…", PLoS ONE 9 (2014) e89033
30) Xuan Zhong, "Dynamic binaural sound source localization with interaural time difference cues" (Ph.D. thesis, Arizona State University, Apr. 2015)
31) John C. Middlebrooks, "Sound localization", Handbook of Clinical Neurology Vol. 129 (Ch. 6, UC Irvine, 2015)
32) Gyorgy Wersenyi, "HRTFs in human localization" (Ph.D. thesis, Brandenburg University of Technology, 2002)
33) Tomasz Letowski and Szymon Letowski, "Auditory spatial perception: Auditory localization" (Army Research Lab, May 2012) ARL-TR-6016
34) Bartholomew Elias, "The coordination of dynamic visual and auditory spatial percepts and responsive motor actions", Air Force Materiel Command, Wright-Patterson AFB, OH 45433-7901 (1995)
35) UCL lecture slides, "Binaural hearing", https://www.phon.ucl.ac.uk/courses/spsci/AUDL4007/Binaural%20aspects%202015.pdf
36) Antoine Lorenzi, "Localization", https://www.cochlea.eu/en/sound/psychoacoustics/localisation#:~:text=Human%20beings%20instinctively%20make%20small,%C2%B0and%20180%C2%B0azimuth.
37) https://en.wikipedia.org/wiki/Sound_localization
38) Neil Aaronson and William Hartmann, "Testing, correcting, and extending the Woodworth model for interaural time difference", J. Acoust. Soc. Am. 135 (2014) 817-23
39) https://www.sfu.ca/sonic-studio-webdav/handBook/Binaural_Hearing.html
40) A.W. Mills, "Auditory Localization", in J.V. Tobias, ed., Foundations of Modern Auditory Theory, Academic Press, 1972, vol. 2, p. 337
41) Bahram Zonooz et al, "Spectral weighting underlies perceived sound elevation", Sci. Rep. 9 (2018) 1642-54
42) Bill Kapralos et al, "Virtual audio systems", Presence: Teleoperators and Virtual Environments (Dec. 2008)
43) https://www.reddit.com/r/Mcat/comments/18x4ufd/wtf_is_cone_of_confusion/?rdt=46232
44) Elena Aggius-Vella et al, "Audio representation around the body", Front. in Psych. 8 (2017), doi: 10.3389/fpsyg.2017.01932
45) Elizabeth M. Wenzel, "The Design of Multidimensional Sound Interfaces" (Oxford, 1995)
46) Elizabeth M. Wenzel, "Localization using nonindividualized head-related transfer functions", https://pubmed.ncbi.nlm.nih.gov/8354753/
47) Vassilakis, "Module 7: Sound source auditory localization", https://www.acousticslab.org/RECA220/PMFiles/Module07.htm
48) https://www.mometrix.com/academy/hyperbolas/
49) https://claregladwinresd.glk12.org/mod/book/tool/print/index.php?id=888
50) H.C. Hughes and S.S. Wang, "Auditory Systems in Insects", in Encyclopedia of Neuroscience (2009) 771-8
51) Trevor Attenberg, https://www.audubon.org/news/birding-blind-open-your-ears-amazing-world-bird-sounds#:~:text=Yet%20you%20are%20often%20more,and%20songs%20offers%20even%20more.
52) https://ornithology.com/hearing-impaired-birding/
53) Roger Lederer, https://ornithology.com/the-hearing-of-birds/
54) Sarah Woolley et al, https://www.sciencedirect.com/science/article/pii/S0003347222002998
55) Koichiro Wasano et al, Lancet Regional Health 9 (2021) 100131
56) https://www.livingsounds.ca/do-women-and-men-hear-differently/#:~:text=Women%20May%20Have%20Better%20Hearing,2%2C000%20Hz)%20compared%20to%20men.
57) https://www.sciencedaily.com/releases/2024/03/240306150450.htm
58) Y.H. Park, Clinical and Experimental Otorhinolaryngology 14 (Feb. 2021)
59) Tomasz Letowski and Szymon Letowski, "Auditory spatial perception: auditory localization", Army Research Laboratory report ARL-TR-5016 (May 2012)
60) Rosanna Smith and Stephen Price, "Modelling of human low frequency sound localization acuity…", PLOS One 9 (Feb. 2014) e89033
61) Robert J. Dooling et al, "The impact of urban and traffic noise on birds", Acoustics Today 15 (3) (2019) 19-27
62) Robert C. Beason, "What birds can hear?" (Sep. 2004), USDA National Wildlife Center, Staff Publications 78, https://digitalcommons.unl.edu/icwdm_usdanwrc/78
63) https://en.wikipedia.org/w/index.php?title=Special:DownloadAsPdf&page=Lombard_effect&action=show-download-screen
64) Federal Highway Administration, "Synthesis of Noise Effect on Wildlife Populations", Publication No. FHWA-HEP-06-016 (Sep. 2004)
65) Judith Rochat, "Highway traffic noise", Acoustics Today 12 (4) (2016) 38-47
66) Daniel Yip et al, The Condor 119 (2017) 73-84, published by the American Ornithological Society
67) R.H. Wiley and D.G. Richards, "Adaptations for acoustic communications in birds", in "Acoustic Communications in Birds", eds. D.E. Kroodsma, E.H. Miller and H. Ouellet (Academic Press, MA, USA, 1982)
68) Stokes's law of sound attenuation, Wikipedia: https://en.wikipedia.org/wiki/Stokes%27s_law_of_sound_attenuation#:~:text=where%20%CE%B7%20is%20the%20dynamic,the%20Anglo%2DIrish%20physicist%20G.%20G.
69) https://physics.stackexchange.com/questions/79715/relationship-between-stokes-law-of-sound-attenuation-and-the-inverse-square-law#:~:text=I%20believe%20Stoke's%20law%20is,reduce%20the%20intensity%20of%20sound.
70) Margaret Price, "Sound propagation in woodland", Ph.D. thesis (June 1986), Open University, Milton Keynes, England
71) Margaret Price, Keith Attenborough and Nicholas Heap, J. Acoust. Soc. of America 84 (1988) 1836-44
72) David Heeger, https://www.cns.nyu.edu/~david/courses/perception/lecturenotes/localization/localization.html
73) Geoffrey Manley, "The Mammalian Ear: physics and principles of evolution"
74) Marcela Lipovsek and Ana Belen Elgoyhen, "The tuning of evolutionary hearing", Trends in Neurosciences 46 (2023) 110-123
75) Jan W.H. Schnupp and Catherine E. Carr, "On hearing with more than one ear: lessons from evolution", Nat. Neuroscience 12 (2009) 692-7
76) Stephanie Pain, "Awesome ears: The weird world of insect hearing", https://knowablemagazine.org/content/article/living-world/2018/how-do-insects-hear