
Measuring Sonar Beam Shape of Echolocating Bats



This page is still under development.

Spatial information obtained via the bat’s sonar system is influenced by the directional properties of the ears and the vocalizations. Because bat sonar emissions are directional, the size of the ensonified region around the bat has important implications for the inner workings of the sonar system. For instance, a narrow sonar emission beam extends the perceivable range for echo sources on the beam’s main axis by focusing the acoustic energy, while also reducing interference from off-axis echo sources. On the other hand, a narrow beam limits spatial coverage and, consequently, the detection of peripheral echo sources. Conversely, a broad sonar emission beam allows detection of objects over a larger region of space, but it is susceptible to problems caused by low signal-to-noise ratio and interference: a broad beam spreads the emitted acoustic energy, resulting in weaker echoes.

The goal of this project was to record a complete spatial profile of the sonar vocalizations of freely vocalizing Eptesicus fuscus (big brown bat). In previous studies on the sonar beam patterns of this and other bat species, researchers restrained the head of the animal and stimulated its brainstem to make the bat produce sonar vocalizations. With the head restrained, a single microphone could be moved systematically around the bat, and the change in emitted acoustic energy with direction relative to the bat could be measured. Brainstem stimulation was necessary because the restrained bats did not vocalize voluntarily.

Unlike these earlier studies, the methodology I followed permitted free movement of the head, which in turn encouraged voluntary sonar vocalizations. My approach relied on the assumption that the bat’s sonar beam does not change from one vocalization to another (the results obtained with this approach suggested that this assumption was not met by freely vocalizing bats). Stability of the sonar emission pattern was also assumed in the earlier reports. If this assumption held, it would be possible to combine spatial measurements taken from different vocalizations to construct the sonar beam shape.

A novel idea: capturing a bat's sonar beam shape from multiple vocalizations

What is needed:

  • A microphone array
  • A 3-D head tracking system
  • A freely vocalizing echolocating bat


Building a large-scale microphone array

Modern off-the-shelf data acquisition (DAQ) boards can only handle a handful of channels for real-time simultaneous acquisition of wide-band frequency modulated (FM) bat echolocation calls. A typical sonar signal of a big brown bat, Eptesicus fuscus, spans roughly 10 kHz to 100 kHz. Obtaining a sufficient number of microphone channels may therefore require several DAQ boards, which can be a costly investment.

Instead, I chose to develop an alternative approach that could use 22 channels from a single DAQ board. The idea was to capture a 5 kHz band-limited portion of each echolocation call. Because of the decreased bandwidth, it became possible to use many more analog input channels of a single DAQ board.
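
A rough back-of-the-envelope check of why the narrower band helps, assuming only the Nyquist criterion (the figures are illustrative; the real system samples with extra margin):

    # Minimum per-channel sampling rates implied by the Nyquist criterion.
    call_max_freq = 100_000        # Hz, upper edge of a big brown bat call
    kept_bandwidth = 5_000         # Hz, band preserved by the heterodyne front end

    full_band_rate = 2 * call_max_freq      # >= 200 kS/s per channel for the raw call
    narrow_band_rate = 2 * kept_bandwidth   # >= 10 kS/s per channel after mixing down

    print(full_band_rate / narrow_band_rate)  # ~20x more channels fit on one board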

A super-heterodyne microphone circuit

The solution was super-heterodyning, a technique commonly used for tuning radios and TVs. The principle involves bringing the desired narrow-band portion of the signal down to a fixed low frequency range, say 0 to 5 kHz. In the case of the sonar signals, down-shifting the frequency content does not affect the energy in the narrow-band signal, which is exactly what was needed to build the beam shape. In this way, a sampling rate of only 10 kHz per microphone was needed at minimum. For a maximum sampling rate of 1.1 MHz, 11 microphone channels could be supported with a single DAQ board.
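
A minimal digital sketch of the principle (the hardware did this with an analog mixer and filter on every channel; here a complex local oscillator and a synthetic call are used for simplicity, and the center frequency and filter settings are only illustrative):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, chirp

    fs = 1_000_000                      # simulation rate, 1 MHz
    t = np.arange(0, 0.003, 1 / fs)     # a 3 ms synthetic call
    # FM sweep standing in for a big brown bat call (100 kHz down to 20 kHz)
    call = chirp(t, f0=100_000, f1=20_000, t1=t[-1], method='linear')

    f_center = 55_000                   # band we want each microphone to report
    half_band = 2_500                   # +/- 2.5 kHz around f_center

    # 1) Mix with a local oscillator at f_center: shifts that band toward 0 Hz
    mixed = call * np.exp(-2j * np.pi * f_center * t)

    # 2) Low-pass filtering keeps only the 5 kHz band that was around f_center
    sos = butter(6, half_band, btype='low', fs=fs, output='sos')
    baseband = sosfiltfilt(sos, mixed.real) + 1j * sosfiltfilt(sos, mixed.imag)

    # The energy of `baseband` tracks the call's energy near 55 kHz, and the
    # signal can now be digitized at ~10 kHz instead of several hundred kHz.
    band_energy = np.sum(np.abs(baseband) ** 2)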

The super-heterodyne principle allows different frequency ranges within the echolocation band to be selected by tuning. The circuit I developed had a single, common tuning for all channels, so the experimenter could set all 22 microphone channels to output acoustic energy information in a 5 kHz band around a desired frequency. Frequency tuning was achieved with a programmable oscillator controlled from a computer over the I2C communication protocol.
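
For illustration, the tuning step could look roughly like this on a Python host using the smbus2 library; the device address, register, and frequency-word format below are placeholders rather than the actual oscillator used in the circuit:

    from smbus2 import SMBus

    OSC_ADDRESS = 0x17       # placeholder I2C address of the programmable oscillator
    FREQ_REGISTER = 0x02     # placeholder register holding the frequency word

    def tune_all_channels(center_freq_hz, if_offset_hz=2_500, bus_id=1):
        """Set the shared local oscillator so that every microphone channel
        reports the 5 kHz band around center_freq_hz."""
        lo_freq = int(center_freq_hz - if_offset_hz)       # LO just below the band
        word = list(lo_freq.to_bytes(3, byteorder='big'))  # assume a 24-bit word
        with SMBus(bus_id) as bus:
            bus.write_i2c_block_data(OSC_ADDRESS, FREQ_REGISTER, word)

    # e.g. monitor the 52.5-57.5 kHz portion of each call on all 22 channels:
    # tune_all_channels(55_000)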


How to track a big brown bat's head in 3-D?

In order to combine the spatial samples of the sonar beam obtained from individual sonar calls into a single high-resolution representation, all the samples obtained from the spatially fixed microphones at different vocalizations had to be brought into a common reference frame. Being able to move its head freely, the bat could vocalize in any direction; in fact, this was a requirement of the method for successful capture of the sonar beam shape. This implies that for every vocalization the microphone positions had to be represented in a head-centered reference frame, so accurate estimation of the head pose in 3D space was essential.

To achieve this, a high-speed infrared camera was positioned above the platform and oriented toward it. A lightweight (<3 g), portable, rigid structure (the headpost) carrying infrared reflective markers was attached to the bat's head. The headpost was made of Delrin and consisted of several pieces (see the photos below). To further reduce any discomfort that the weight of the marker headpost might cause the bat, a counterweight was tied to the structure. At the beginning of each experimental session, the base of the headpost with a microphone was placed on the head, and the pieces with the infrared reflective markers at their tips were attached to the headpost to complete the assembly (second figure below).

[Photos: the Delrin headpost pieces and the assembled headpost with reflective markers.]



The bat was trained to remain on the platform and track tethered mealworms moving in front of it. In all sessions the bats were alert and attentive to the task. The pixel locations of the five markers were localized automatically for each frame, offline, using custom-written MATLAB software. These pixel coordinates were later used to estimate the rotation and translation of the head in 3D space. The camera was calibrated with the MATLAB camera calibration toolbox before each session. Head pose was estimated using the POSIT algorithm developed by DeMenthon and Davis.
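
As an illustration of this step (the actual analysis used custom MATLAB code and POSIT; here OpenCV's solvePnP stands in, and the marker geometry is a made-up placeholder):

    import numpy as np
    import cv2

    # Placeholder 3-D positions of the five reflective markers in the headpost
    # frame (millimetres); the real geometry came from the machined Delrin parts.
    MODEL_POINTS = np.array([
        [  0.0,   0.0, 0.0],
        [ 10.0,   0.0, 5.0],
        [-10.0,   0.0, 5.0],
        [  0.0,  10.0, 5.0],
        [  0.0, -10.0, 5.0],
    ], dtype=np.float64)

    def estimate_head_pose(image_points, camera_matrix, dist_coeffs):
        """image_points: 5x2 pixel coordinates of the markers in one video frame;
        camera_matrix and dist_coeffs come from the camera calibration step."""
        ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                      camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("pose estimation failed for this frame")
        R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the head in the camera frame
        return R, tvec               # rotation and translation of the head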

Before each session, a pan-and-tilt unit (PTU) carrying an ultrasound speaker was placed at the position of the platform. Generic echolocation calls played through the speaker were recorded by the microphone array while the speaker scanned different azimuth and elevation positions in 5 degree steps. These recordings were later used to find the direction of each microphone in the array: the direction of maximum reception gain was taken as the direction of the microphone. Spatial interpolation (using spherical harmonics) was used to estimate this direction with a resolution well below the 2.5 degrees that the 5 degree sampling alone could provide. The 0 degree elevation and azimuth pose of the headpost carried by the speaker was used as a reference pose for estimating any pose of the headpost. The speaker was then pointed toward each microphone's direction, and wide-band, computer-generated echolocation calls were used to obtain the relative frequency response of each microphone with respect to the microphone mounted on the headpost. These relative frequency responses were later used to eliminate differences between the microphones.
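
A simplified sketch of the microphone-direction estimate (the actual analysis interpolated with spherical harmonics; here a bivariate spline on the azimuth/elevation grid stands in, and the scan range and gain surface are synthetic):

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    az = np.arange(-60, 61, 5)        # speaker azimuths scanned, degrees
    el = np.arange(-60, 61, 5)        # speaker elevations scanned, degrees

    # Synthetic scan: a smooth gain peak near (azimuth, elevation) = (12, -7) deg,
    # standing in for the energy this microphone received at each speaker pose.
    A, E = np.meshgrid(az, el, indexing='ij')
    gain = np.exp(-((A - 12) ** 2 + (E + 7) ** 2) / (2 * 20.0 ** 2))

    # Interpolate onto a 0.5-degree grid and take the direction of maximum gain.
    spline = RectBivariateSpline(az, el, gain)
    az_fine = np.arange(-60, 60.5, 0.5)
    el_fine = np.arange(-60, 60.5, 0.5)
    fine = spline(az_fine, el_fine)

    i, j = np.unravel_index(np.argmax(fine), fine.shape)
    mic_direction = (az_fine[i], el_fine[j])   # direction of maximum reception gain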

Once the head pose was obtained, the microphone positions were converted to the head-centered reference frame using the rotation and translation values. The sampled beam values, brought under a single reference frame, constitute the scaffold of the beam shape. Using spatial interpolation, a high-resolution beam shape was then computed.
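
A sketch of these two steps, assuming the head pose maps head coordinates to room coordinates as X_room = R * X_head + t, that x points forward from the head, y to the bat's left and z up, and that cubic scattered-data interpolation stands in for the spatial interpolation actually used:

    import numpy as np
    from scipy.interpolate import griddata

    def to_head_frame(mic_positions, R, t):
        """Room-frame microphone positions (Nx3) -> head-centered coordinates,
        given the head pose X_room = R @ X_head + t from the video step."""
        return (mic_positions - t.reshape(1, 3)) @ R

    def mic_directions(mic_head):
        """Azimuth/elevation (degrees) of each microphone as seen from the head."""
        x, y, z = mic_head.T
        az = np.degrees(np.arctan2(y, x))
        el = np.degrees(np.arctan2(z, np.hypot(x, y)))
        return np.column_stack([az, el])

    def interpolate_beam(samples, step=1.0):
        """samples: (az, el, level) triples pooled over many vocalizations;
        returns the beam on a regular grid (NaN outside the sampled region)."""
        samples = np.asarray(samples)
        az_grid, el_grid = np.meshgrid(np.arange(-90, 90 + step, step),
                                       np.arange(-90, 90 + step, step))
        beam = griddata(samples[:, :2], samples[:, 2],
                        (az_grid, el_grid), method='cubic')
        return az_grid, el_grid, beam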


Proof of concept with a robot head


To test the performance of the system, the first experiments were done using the PTU unit with the ultrasound loudspeaker. The speaker was oriented in random directions and synthesized sonar calls with random amplitudes were broadcast. The video images and sounds were acquired with the system and then analyzed to compute the beam shape. On the right, an example of the computed speaker beam at 55 kHz is depicted. The black and blue dots represent a subset of the sampled points in the zero-degree elevation and azimuth planes. The obtained beam shape agreed well with the beam shape obtained directly using the PTU unit and a single microphone.
This second control measurement involved orienting the speaker in different predetermined directions and recording the sounds with a microphone placed 50 cm in front of the platform. Note that this step required neither video-based pose estimation nor combining acoustic information from multiple microphones.


Results from echolocating bats

Results obtained from bats did not yield a consistent beam shape (see on the left). The spatial variation across samples was much higher than that obtained when recording the speaker beam shape. This, combined with the performance obtained for the speaker, suggests that the sonar beam shape of big brown bats may not be constant. Sources of variation could include changes in mouth shape within a single vocalization and from one vocalization to another. Because the bats remained within the boundaries of the platform while vocalizing, it is also possible that the platform acoustically interfered with the recorded vocalizations. A close look at the sounds recorded with wide-band microphones, however, did not provide conclusive evidence for this possibility.




