Results 1 - 20 of 28
1.
J Acoust Soc Am ; 154(1): 502-517, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37493330

ABSTRACT

Many odontocetes produce whistles that feature characteristic contour shapes in spectrogram representations of their calls. Automatically extracting the time × frequency tracks of whistle contours has numerous subsequent applications, including species classification, identification, and density estimation. Deep-learning-based methods, which train models using analyst-annotated whistles, offer a promising way to reliably extract whistle contours. However, the application of such methods can be limited by the significant amount of time and labor required for analyst annotation. To overcome this challenge, a technique that learns from automatically generated pseudo-labels has been developed. These annotations are less accurate than those generated by human analysts but more cost-effective to generate. It is shown that standard training methods do not learn effective models from these pseudo-labels. An improved loss function, designed to compensate for pseudo-label error, is introduced and significantly increases whistle extraction performance. The experiments show that the developed technique performs well when trained with pseudo-labels generated by two different algorithms. Models trained with the generated pseudo-labels can extract whistles with an F1-score (the harmonic mean of precision and recall) of 86.31% and 87.2% for the two sets of pseudo-labels considered. This performance is competitive with a model trained with 12 539 expert-annotated whistles (F1-score of 87.47%).
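The F1-score quoted here is, as the abstract notes, the harmonic mean of precision and recall. A minimal sketch of the computation (the input values below are illustrative, not taken from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values: the harmonic mean always sits at or below
# the arithmetic mean, penalizing imbalance between the two rates.
print(round(f1_score(0.88, 0.86), 4))
```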


Subject(s)
Deep Learning , Animals , Humans , Vocalization, Animal , Sound Spectrography , Algorithms , Whales
2.
J Anim Ecol ; 91(8): 1567-1581, 2022 08.
Article in English | MEDLINE | ID: mdl-35657634

ABSTRACT

The manual detection, analysis and classification of animal vocalizations in acoustic recordings is laborious and requires expert knowledge. Hence, there is a need for objective, generalizable methods that detect underlying patterns in these data, categorize sounds into distinct groups and quantify similarities between them. Among all computational methods that have been proposed to accomplish this, neighbourhood-based dimensionality reduction of spectrograms to produce a latent space representation of calls stands out for its conceptual simplicity and effectiveness. Using a dataset of manually annotated meerkat (Suricata suricatta) vocalizations, we demonstrate how this method can be used to obtain meaningful latent space representations that reflect the established taxonomy of call types. We analyse the strengths and weaknesses of the proposed approach, give recommendations for its usage and show application examples, such as the classification of ambiguous calls and the detection of mislabelled calls. All analyses are accompanied by example code to help researchers realize the potential of this method for the study of animal vocalizations.


Subject(s)
Herpestidae , Vocalization, Animal , Animals
3.
J Acoust Soc Am ; 151(1): 414, 2022 01.
Article in English | MEDLINE | ID: mdl-35105012

ABSTRACT

Automatic algorithms for the detection and classification of sound are essential to the analysis of acoustic datasets with long duration. Metrics are needed to assess the performance characteristics of these algorithms. Four metrics for performance evaluation are discussed here: receiver-operating-characteristic (ROC) curves, detection-error-trade-off (DET) curves, precision-recall (PR) curves, and cost curves. These metrics were applied to the generalized power law detector for blue whale D calls [Helble, Ierley, D'Spain, Roch, and Hildebrand (2012). J. Acoust. Soc. Am. 131(4), 2682-2699] and the click-clustering neural-net algorithm for Cuvier's beaked whale echolocation click detection [Frasier, Roch, Soldevilla, Wiggins, Garrison, and Hildebrand (2017). PLoS Comp. Biol. 13(12), e1005823] using data prepared for the 2015 Detection, Classification, Localization and Density Estimation Workshop. Detection class imbalance, particularly the situation of rare occurrence, is common for long-term passive acoustic monitoring datasets and is a factor in the performance of ROC and DET curves with regard to the impact of false positive detections. PR curves overcome this shortcoming when calculated for individual detections and do not rely on the reporting of true negatives. Cost curves provide additional insight on the effective operating range for the detector based on the a priori probability of occurrence. Use of more than a single metric is helpful in understanding the performance of a detection algorithm.
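The class-imbalance point can be made concrete with a toy example. The numbers below are hypothetical, not from the paper's data sets; they only illustrate why a low false-positive rate (the ROC axis) can coexist with low precision (the PR axis) when true events are rare:

```python
def confusion_at(scores, labels, thresh):
    """Count detector outcomes at a fixed score threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= thresh and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= thresh and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < thresh and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < thresh and not y)
    return tp, fp, fn, tn

# Hypothetical rare-event monitoring set: 10 true calls in 10 000 clips.
labels = [True] * 10 + [False] * 9990
scores = [0.9] * 8 + [0.4] * 2 + [0.6] * 100 + [0.1] * 9890
tp, fp, fn, tn = confusion_at(scores, labels, 0.5)
recall = tp / (tp + fn)     # 0.8
precision = tp / (tp + fp)  # 8/108, under 8%: PR curves expose the imbalance
fpr = fp / (fp + tn)        # about 0.01: an ROC curve looks deceptively good
```

Sweeping the threshold over all distinct score values traces out the full PR and ROC curves; the PR curve additionally never touches the true negatives, which is why it suits long recordings where negatives are effectively unbounded.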


Subject(s)
Echolocation , Vocalization, Animal , Acoustics , Animals , Benchmarking , Sound Spectrography , Whales
4.
J Acoust Soc Am ; 152(6): 3800, 2022 12.
Article in English | MEDLINE | ID: mdl-36586843

ABSTRACT

This work presents an open-source MATLAB software package for exploiting recent advances in extracting tonal signals from large acoustic data sets. A whistle extraction algorithm published by Li, Liu, Palmer, Fleishman, Gillespie, Nosal, Shiu, Klinck, Cholewiak, Helble, and Roch [(2020). Proceedings of the International Joint Conference on Neural Networks, July 19-24, Glasgow, Scotland, p. 10] is incorporated into silbido, an established software package for extraction of cetacean tonal calls. The precision and recall of the new system were over 96% and nearly 80%, respectively, when applied to a whistle extraction task on a challenging two-species subset of a conference-benchmark data set. A second data set was examined to assess whether the algorithm generalized to data collected across different recording devices and locations. These data included 487 h of weakly labeled, towed-array data collected in the Pacific Ocean on two National Oceanographic and Atmospheric Administration (NOAA) cruises. Labels for these data consisted of regions of toothed whale presence for at least 15 species that were based on visual and acoustic observations and not limited to whistles. Although the lack of whistle-level annotations prevented measurement of precision and recall, there was strong concurrence between automatic detections and the NOAA annotations, suggesting that the algorithm generalizes well to new data.


Subject(s)
Deep Learning , Animals , Vocalization, Animal , Sound Spectrography , Cetacea , Software
5.
J Acoust Soc Am ; 150(4): 3204, 2021 10.
Article in English | MEDLINE | ID: mdl-34717489

ABSTRACT

The use of machine learning (ML) in acoustics has received much attention in the last decade. ML is unique in that it can be applied to all areas of acoustics. ML has transformative potential, as it can extract new, statistically based information about events observed in acoustic data. Acoustic data provide scientific and engineering insight ranging from biology and communications to ocean and Earth science. This special issue included 61 papers, illustrating the very diverse applications of ML in acoustics.


Subject(s)
Acoustics , Machine Learning , Attention , Engineering
6.
J Acoust Soc Am ; 149(5): 3301, 2021 05.
Article in English | MEDLINE | ID: mdl-34241092

ABSTRACT

This work demonstrates the effectiveness of using human-in-the-loop processes to construct large training sets for machine learning tasks. A corpus of over 57 000 toothed whale echolocation clicks was developed by using a permissive energy-based echolocation detector followed by a machine-assisted quality control process that exploits contextual cues. Subsets of these data were used to train feed-forward neural networks that detected over 850 000 echolocation clicks that were validated using the same quality control process. It is shown that this network architecture performs well in a variety of contexts, as evaluated against a withheld data set collected nearly five years after the development data at a location over 600 km distant. The system was capable of finding echolocation bouts that were missed by human analysts, and the patterns of error in the classifier consist primarily of anthropogenic sources that were not included as counter-training examples. In the absence of such events, typical false positive rates are under ten events per hour even at low thresholds.


Subject(s)
Echolocation , Animals , Cetacea , Neural Networks, Computer , Vocalization, Animal
7.
J Acoust Soc Am ; 146(5): 3590, 2019 11.
Article in English | MEDLINE | ID: mdl-31795641

ABSTRACT

Acoustic data provide scientific and engineering insights in fields ranging from biology and communications to ocean and Earth science. We survey the recent advances and transformative potential of machine learning (ML), including deep learning, in the field of acoustics. ML is a broad family of techniques, which are often based in statistics, for automatically detecting and utilizing patterns in data. Relative to conventional acoustics and signal processing, ML is data-driven. Given sufficient training data, ML can discover complex relationships between features and desired labels or actions, or between features themselves. With large volumes of training data, ML can discover models describing complex acoustic phenomena such as human speech and reverberation. ML in acoustics is rapidly developing with compelling results and significant future promise. We first introduce ML, then highlight ML developments in four acoustics research areas: source localization in speech processing, source localization in ocean acoustics, bioacoustics, and environmental sounds in everyday scenes.

8.
PLoS Comput Biol ; 13(12): e1005823, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29216184

ABSTRACT

Delphinids produce large numbers of short duration, broadband echolocation clicks which may be useful for species classification in passive acoustic monitoring efforts. A challenge in echolocation click classification is to overcome the many sources of variability to recognize underlying patterns across many detections. An automated unsupervised network-based classification method was developed to simulate the approach a human analyst uses when categorizing click types: Clusters of similar clicks were identified by incorporating multiple click characteristics (spectral shape and inter-click interval distributions) to distinguish within-type from between-type variation, and identify distinct, persistent click types. Once click types were established, an algorithm for classifying novel detections using existing clusters was tested. The automated classification method was applied to a dataset of 52 million clicks detected across five monitoring sites over two years in the Gulf of Mexico (GOM). Seven distinct click types were identified, one of which is known to be associated with an acoustically identifiable delphinid (Risso's dolphin) and six of which are not yet identified. All types occurred at multiple monitoring locations, but the relative occurrence of types varied, particularly between continental shelf and slope locations. Automatically-identified click types from autonomous seafloor recorders without verifiable species identification were compared with clicks detected on sea-surface towed hydrophone arrays in the presence of visually identified delphinid species. These comparisons suggest potential species identities for the animals producing some echolocation click types. The network-based classification method presented here is effective for rapid, unsupervised delphinid click classification across large datasets in which the click types may not be known a priori.
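The authors' network-based clustering is more elaborate than anything reproducible here (it combines spectral shape and inter-click-interval distributions), but the underlying idea of linking similar detections and reading off connected groups can be sketched with toy features and a hypothetical similarity threshold:

```python
from itertools import combinations

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def cluster_by_similarity(features, threshold=0.95):
    """Link detections whose feature vectors are highly similar and
    return connected components as candidate click types (union-find)."""
    n = len(features)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if cosine(features[i], features[j]) >= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy "spectra": two obviously distinct shapes -> two clusters.
toy = [[1, 0, 0], [0.99, 0.05, 0], [0, 1, 0.1], [0, 0.98, 0.12]]
clusters = cluster_by_similarity(toy)
```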


Subject(s)
Computational Biology/methods , Dolphins/physiology , Echolocation/classification , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Vocalization, Animal/classification , Algorithms , Animals , Gulf of Mexico , Sound Spectrography
9.
J Acoust Soc Am ; 141(2): 737, 2017 02.
Article in English | MEDLINE | ID: mdl-28253689

ABSTRACT

Divergence in acoustic signals used by different populations of marine mammals can be caused by a variety of environmental, hereditary, or social factors, and can indicate isolation between those populations. Two types of genetically and morphologically distinct short-finned pilot whales, called the Naisa- and Shiho-types when first described off Japan, have been identified in the Pacific Ocean. Acoustic differentiation between these types would support their designation as sub-species or species, and improve the understanding of their distribution in areas where genetic samples are difficult to obtain. Calls from two regions representing the two types were analyzed using 24 recordings from Hawai'i (Naisa-type) and 12 recordings from the eastern Pacific Ocean (Shiho-type). Calls from the two types were significantly differentiated in median start frequency, frequency range, and duration, as well as in the cumulative distributions of these variables. Gaussian mixture models were used to classify calls from the two different regions with 74% accuracy, which was significantly greater than chance. The results of these analyses indicate that the two types are acoustically distinct, which supports the hypothesis that the two types may be separate sub-species.

10.
J Acoust Soc Am ; 137(1): 22-9, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25618035

ABSTRACT

A concern for applications of machine learning techniques to bioacoustics is whether or not classifiers learn the categories for which they were trained. Unfortunately, information such as characteristics of specific recording equipment or noise environments can also be learned. This question is examined in the context of identifying delphinid species by their echolocation clicks. To reduce the ambiguity between species classification performance and other confounding factors, species whose clicks can be readily distinguished were used in this study: Pacific white-sided and Risso's dolphins. A subset of data from autonomous acoustic recorders located at seven sites in the Southern California Bight collected between 2006 and 2012 was selected. Cepstral-based features were extracted for each echolocation click and Gaussian mixture models were used to classify groups of 100 clicks. One hundred Monte Carlo three-fold experiments were conducted to examine classification performance where fold composition was determined by acoustic encounter, recorder characteristics, or recording site. The error rate increased from 6.1% when grouped by acoustic encounter to 18.1%, 46.2%, and 33.2% for grouping by equipment, equipment category, and site, respectively. A noise compensation technique reduced error for these grouping schemes to 2.7%, 4.4%, 6.7%, and 11.4%, respectively, a reduction in error rate of 56%-86%.
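The key experimental-design point, keeping all clicks from one acoustic encounter on the same side of the train/test split, can be sketched as follows; the encounter labels and fold count are illustrative:

```python
import random

def folds_by_group(group_ids, n_folds=3, seed=0):
    """Assign whole groups (e.g. acoustic encounters) to folds so that
    no group is ever split between training and test data."""
    groups = sorted(set(group_ids))
    rng = random.Random(seed)
    rng.shuffle(groups)                       # randomize group-to-fold assignment
    fold_of_group = {g: i % n_folds for i, g in enumerate(groups)}
    return [fold_of_group[g] for g in group_ids]

# Hypothetical: nine click groups drawn from five encounters.
encounters = ["A", "A", "B", "B", "C", "D", "D", "E", "E"]
folds = folds_by_group(encounters)
```

Because each encounter maps to exactly one fold, matched recording conditions cannot leak between training and test sets and flatter the error rate.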


Subject(s)
Dolphins/physiology , Echolocation , Machine Learning , Pattern Recognition, Automated/methods , Sound Spectrography/methods , Subtraction Technique , Algorithms , Animals , Dolphins/classification , Echolocation/classification , Fourier Analysis , Monte Carlo Method , Normal Distribution , Pacific Ocean , Sound Spectrography/instrumentation , Species Specificity , Transducers
11.
J Acoust Soc Am ; 134(6): 4435, 2013 Dec.
Article in English | MEDLINE | ID: mdl-25669255

ABSTRACT

Dolphins and whales use tonal whistles for communication, and it is known that frequency modulation encodes contextual information. An automated mathematical algorithm could characterize the frequency modulation of tonal calls for use with clustering and classification. Most automatic cetacean whistle processing techniques are based on peak or edge detection or require analyst assistance in verifying detections. An alternative paradigm is introduced using techniques of image processing. Frequency information is extracted as ridges in whistle spectrograms. Spectral ridges are the fundamental structure of tonal vocalizations, and ridge detection is a well-established image processing technique, easily applied to vocalization spectrograms. This paradigm is implemented as freely available MATLAB scripts, coined IPRiT (image processing ridge tracker). Its fidelity in the reconstruction of synthesized whistles is compared to another published whistle detection software package, silbido. Both algorithms are also applied to real-world recordings of bottlenose dolphin (Tursiops truncatus) signature whistles and tested for the ability to identify whistles belonging to different individuals. IPRiT gave higher fidelity and fewer false detections than silbido with synthesized whistles, and reconstructed dolphin identity groups from signature whistles, whereas silbido could not. IPRiT appears to be superior to silbido for the extraction of the precise frequency variation of the whistle.
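Ridge detection itself is simple to illustrate: in its most basic form, a ridge point is a spectrogram bin that is a local maximum along the frequency axis. This toy sketch is not IPRiT, which applies proper image-processing ridge operators, but it conveys the idea:

```python
def ridge_points(spectrogram, min_level=0.5):
    """Mark time-frequency bins that are local maxima along the
    frequency axis and exceed a level threshold -- the simplest
    notion of a spectral ridge."""
    points = []
    for t, frame in enumerate(spectrogram):   # frame: magnitudes per frequency bin
        for f in range(1, len(frame) - 1):
            if frame[f] >= min_level and frame[f] > frame[f - 1] and frame[f] > frame[f + 1]:
                points.append((t, f))
    return points

# Toy spectrogram of a rising tone: the ridge climbs one bin per frame.
spec = [
    [0.1, 0.9, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.9, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.9, 0.1],
]
ridge = ridge_points(spec)
```

Linking adjacent ridge points across frames then yields the whistle's time × frequency contour.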


Subject(s)
Acoustics , Algorithms , Bottle-Nosed Dolphin/physiology , Signal Processing, Computer-Assisted , Vocalization, Animal , Animals , Bottle-Nosed Dolphin/psychology , Computer Simulation , Pattern Recognition, Automated , Software , Sound Spectrography , Species Specificity
12.
J Acoust Soc Am ; 134(5): 3513-21, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24180762

ABSTRACT

To study delphinid near-surface movements and behavior, two L-shaped hydrophone arrays and one vertical hydrophone line array were deployed at shallow depths (<125 m) from the floating instrument platform R/P FLIP, moored northwest of San Clemente Island in the Southern California Bight. A three-dimensional, propagation-model-based passive acoustic tracking method was developed and used to track a group of five offshore killer whales (Orcinus orca) using their emitted clicks. In addition, killer whale pulsed calls and high-frequency modulated (HFM) signals were localized using other standard techniques. Based on these tracks, sound source levels for the killer whales were estimated. The peak-to-peak source levels for echolocation clicks vary between 170-205 dB re 1 µPa @ 1 m, for HFM calls between 185-193 dB re 1 µPa @ 1 m, and for pulsed calls between 146-158 dB re 1 µPa @ 1 m.
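Underwater source levels such as those above are expressed in dB re 1 µPa. A small conversion helper (this is only the decibel convention, not the paper's tracking or propagation modeling):

```python
import math

def spl_db_re_1upa(pressure_pa: float) -> float:
    """Sound pressure level in dB re 1 micropascal, the reference
    pressure conventionally used for underwater sound."""
    return 20 * math.log10(pressure_pa / 1e-6)
```

For example, a pressure of 1 µPa maps to 0 dB, and a peak-to-peak pressure of roughly 3.16 kPa maps to about 190 dB re 1 µPa, within the click range quoted above.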


Subject(s)
Acoustics/instrumentation , Echolocation/classification , Environmental Monitoring/instrumentation , Oceanography/instrumentation , Transducers , Vocalization, Animal/classification , Whale, Killer/classification , Whale, Killer/physiology , Animals , Environmental Monitoring/methods , Equipment Design , Oceanography/methods , Oceans and Seas , Population Density , Signal Processing, Computer-Assisted , Sound Spectrography , Species Specificity , Swimming , Time Factors
13.
J Acoust Soc Am ; 134(3): 2293-301, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23967959

ABSTRACT

Beaked whale echolocation signals are mostly frequency-modulated (FM) upsweep pulses and appear to be species specific. Evolutionary processes of niche separation may have driven differentiation of beaked whale signals used for spatial orientation and foraging. FM pulses of eight species of beaked whales were identified, as well as five distinct pulse types of unknown species, but presumed to be from beaked whales. Current evidence suggests these five distinct but unidentified FM pulse types are also species-specific and are each produced by a separate species. There may be a relationship between adult body length and center frequency, with smaller whales producing higher-frequency signals. This could be due to anatomical and physiological constraints, or it could be an evolutionary adaptation allowing smaller whales to detect smaller prey at higher resolution using higher frequencies. The disadvantage of higher frequencies is a shorter detection range. Whales echolocating with the highest-frequency, or broadband and likely lower-source-level, signals also use a higher repetition rate, which might compensate for the shorter detection range. Habitat modeling with acoustic detections should give further insights into how niches and prey may have shaped species-specific FM pulse types.


Subject(s)
Echolocation , Vocalization, Animal , Whales/physiology , Acoustics , Adaptation, Physiological , Animals , Biological Evolution , Feeding Behavior , Predatory Behavior , Sound Spectrography , Species Specificity , Time Factors
14.
Biol Rev Camb Philos Soc ; 98(5): 1633-1647, 2023 10.
Article in English | MEDLINE | ID: mdl-37142263

ABSTRACT

Monitoring on the basis of sound recordings, or passive acoustic monitoring, can complement or serve as an alternative to real-time visual or aural monitoring of marine mammals and other animals by human observers. Passive acoustic data can support the estimation of common, individual-level ecological metrics, such as presence, detection-weighted occupancy, abundance and density, population viability and structure, and behaviour. Passive acoustic data also can support estimation of some community-level metrics, such as species richness and composition. The feasibility of estimation and the certainty of estimates are highly context dependent, and understanding the factors that affect the reliability of measurements is useful for those considering whether to use passive acoustic data. Here, we review basic concepts and methods of passive acoustic sampling in marine systems that often are applicable to marine mammal research and conservation. Our ultimate aim is to facilitate collaboration among ecologists, bioacousticians, and data analysts. Ecological applications of passive acoustics require one to make decisions about sampling design, which in turn requires consideration of sound propagation, sampling of signals, and data storage. One also must make decisions about signal detection and classification and evaluation of the performance of algorithms for these tasks. Investment in the research and development of systems that automate detection and classification, including machine learning, is increasing. Passive acoustic monitoring is more reliable for detection of species presence than for estimation of other species-level metrics. Use of passive acoustic monitoring to distinguish among individual animals remains difficult. However, information about detection probability, vocalisation or cue rate, and relations between vocalisations and the number and behaviour of animals increases the feasibility of estimating abundance or density.
Most sensor deployments are fixed in space or are sporadic, making temporal turnover in species composition more tractable to estimate than spatial turnover. Collaborations between acousticians and ecologists are most likely to be successful and rewarding when all partners critically examine and share a fundamental understanding of the target variables, sampling process, and analytical methods.


Subject(s)
Acoustics , Mammals , Animals , Humans , Reproducibility of Results , Population Density , Vocalization, Animal
15.
J Acoust Soc Am ; 131(4): 2682-99, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22501048

ABSTRACT

Conventional detection of humpback vocalizations is often based on frequency summation of band-limited spectrograms under the assumption that energy (square of the Fourier amplitude) is the appropriate metric. Power-law detectors allow for a higher power of the Fourier amplitude, appropriate when the signal occupies a limited but unknown subset of these frequencies. Shipping noise is non-stationary and colored, and is problematic for many marine mammal detection algorithms. Modifications to the standard power-law form are introduced to minimize the effects of this noise. These same modifications also allow for a fixed detection threshold, applicable to broadly varying ocean acoustic environments. The detection algorithm is general enough to detect all types of humpback vocalizations. Tests presented in this paper show this algorithm matches human detection performance with an acceptably small probability of false alarms (P(FA) < 6%) for even the noisiest environments. The detector outperforms energy detection techniques, providing a probability of detection P(D) = 95% for P(FA) < 5% for three acoustic deployments, compared to P(FA) > 40% for two energy-based techniques. The generalized power-law detector also can be used for basic parameter estimation and can be adapted for other types of transient sounds.
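The core of a power-law detector can be written down compactly. This sketch keeps only the test statistic, omitting the noise normalization and the modifications the paper introduces; the spectra and the exponent gamma = 2.5 are illustrative:

```python
def power_law_stat(mags, gamma=2.5):
    """Power-law test statistic: sum of |X(f)|**(2*gamma) over
    frequency bins. gamma = 1 reduces to an energy detector."""
    return sum(abs(x) ** (2 * gamma) for x in mags)

# Two spectra with identical total energy (9.0):
narrow = [3.0, 0.0, 0.0]                  # energy concentrated in one bin
broad = [3 ** 0.5, 3 ** 0.5, 3 ** 0.5]    # same energy spread over three bins
# An energy detector (gamma = 1) scores them identically; a higher
# power favors the narrowband, call-like spectrum over diffuse noise.
```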


Subject(s)
Algorithms , Humpback Whale/physiology , Vocalization, Animal/physiology , Acoustics/instrumentation , Animals , Equipment Design , Fourier Analysis , Noise, Transportation , Ships , Signal-To-Noise Ratio , Sound Spectrography
16.
J Acoust Soc Am ; 129(1): 467-75, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21303026

ABSTRACT

This study presents a system for classifying echolocation clicks of six species of odontocetes in the Southern California Bight: Visually confirmed bottlenose dolphins, short- and long-beaked common dolphins, Pacific white-sided dolphins, Risso's dolphins, and presumed Cuvier's beaked whales. Echolocation clicks are represented by cepstral feature vectors that are classified by Gaussian mixture models. A randomized cross-validation experiment is designed to provide conditions similar to those found in a field-deployed system. To prevent matched conditions from inappropriately lowering the error rate, echolocation clicks associated with a single sighting are never split across the training and test data. Sightings are randomly permuted before assignment to folds in the experiment. This allows different combinations of the training and test data to be used while keeping data from each sighting entirely in the training or test set. The system achieves a mean error rate of 22% across 100 randomized three-fold cross-validation experiments. Four of the six species had mean error rates lower than the overall mean, with the presumed Cuvier's beaked whale clicks showing the best performance (<2% error rate). Long-beaked common and bottlenose dolphins proved the most difficult to classify, with mean error rates of 53% and 68%, respectively.


Subject(s)
Dolphins/physiology , Echolocation/classification , Models, Statistical , Signal Processing, Computer-Assisted , Vocalization, Animal/classification , Whales/physiology , Acoustics/instrumentation , Animals , California , Oceans and Seas , Sound Spectrography
17.
J Acoust Soc Am ; 130(4): 2212-23, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21973376

ABSTRACT

Many odontocetes produce frequency modulated tonal calls known as whistles. The ability to automatically determine time × frequency tracks corresponding to these vocalizations has numerous applications including species description, identification, and density estimation. This work develops and compares two algorithms on a common corpus of nearly one hour of data collected in the Southern California Bight and at Palmyra Atoll. The corpus contains over 3000 whistles from bottlenose dolphins, long- and short-beaked common dolphins, spinner dolphins, and melon-headed whales that have been annotated by a human, and released to the Moby Sound archive. Both algorithms use a common signal processing front end to determine time × frequency peaks from a spectrogram. In the first method, a particle filter performs Bayesian filtering, estimating the contour from the noisy spectral peaks. The second method uses an adaptive polynomial prediction to connect peaks into a graph, merging graphs when they cross. Whistle contours are extracted from graphs using information from both sides of crossings. The particle filter was able to retrieve 71.5% (recall) of the human annotated tonals with 60.8% of the detections being valid (precision). The graph algorithm's recall rate was 80.0% with a precision of 76.9%.


Subject(s)
Dolphins/physiology , Signal Processing, Computer-Assisted , Vocalization, Animal , Algorithms , Animals , Bayes Theorem , Reproducibility of Results , Sound Spectrography , Time Factors
18.
J R Soc Interface ; 18(180): 20210297, 2021 07.
Article in English | MEDLINE | ID: mdl-34283944

ABSTRACT

Many animals rely on long-form communication, in the form of songs, for vital functions such as mate attraction and territorial defence. We explored the prospect of improving automatic recognition performance by using the temporal context inherent in song. The ability to accurately detect sequences of calls has implications for conservation and biological studies. We show that the performance of a convolutional neural network (CNN), designed to detect song notes (calls) in short-duration audio segments, can be improved by combining it with a recurrent network designed to process sequences of learned representations from the CNN on a longer time scale. The combined system of independently trained CNN and long short-term memory (LSTM) network models exploits the temporal patterns between song notes. We demonstrate the technique using recordings of fin whale (Balaenoptera physalus) songs, which comprise patterned sequences of characteristic notes. We evaluated several variants of the CNN + LSTM network. Relative to the baseline CNN model, the CNN + LSTM models reduced performance variance, offering a 9-17% increase in area under the precision-recall curve and a 9-18% increase in peak F1-scores. These results show that the inclusion of temporal information may offer a valuable pathway for improving the automatic recognition and transcription of wildlife recordings.
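A stdlib-only stand-in for the temporal-context idea (not the paper's CNN + LSTM): even a crude moving average over per-segment detector scores lets the patterned context of a song rescue a weak single-segment detection:

```python
def smooth_scores(scores, width=3):
    """Moving average over a window of per-segment scores:
    a simple stand-in for learned temporal context."""
    half = width // 2
    out = []
    for i in range(len(scores)):
        window = scores[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

# Hypothetical song: strong notes flanking one weakly scored note (0.3).
raw = [0.9, 0.8, 0.3, 0.9, 0.8]
smoothed = smooth_scores(raw)
# The weak note now clears a 0.5 threshold thanks to its neighbours.
```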


Subject(s)
Neural Networks, Computer , Animals , Time Factors
19.
J Acoust Soc Am ; 127(6): 3790-9, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20550277

ABSTRACT

Acoustic recordings from Palmyra Atoll, northern Line Islands, central Pacific, showed upsweep frequency modulated pulses reminiscent of those produced by beaked whales. These signals had higher frequencies, broader bandwidths, longer pulse durations and shorter inter-pulse intervals than previously described pulses of Blainville's, Cuvier's and Gervais' beaked whales [Zimmer et al. (2005). J. Acoust. Soc. Am. 117, 3919-3927; Johnson et al. (2006). J. Exp. Biol. 209, 5038-5050; Gillespie et al. (2009). J. Acoust. Soc. Am. 125, 3428-3433]. They were distinctly different temporally and spectrally from the unknown beaked whale at Cross Seamount, HI [McDonald et al. (2009). J. Acoust. Soc. Am. 125, 624-627]. Genetics on beaked whale specimens found at Palmyra Atoll suggest the presence of a poorly known beaked whale species, Mesoplodon sp., which might be the source of the FM pulses described in this paper. The Palmyra Atoll FM pulse peak frequency was at 44 kHz with a -10 dB bandwidth of 26 kHz. Mean pulse duration was 355 µs and inter-pulse interval was 225 ms, with a bimodal distribution. Buzz sequences were detected with inter-pulse intervals below 20 ms and unmodulated spectra, with about 20 dB lower amplitude than prior FM pulses. These clicks had a 39 kHz bandwidth (-10 dB), peak frequency at 37 kHz, click duration of 155 µs, and inter-click intervals between 4 and 10 ms.
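The buzz criterion quoted above (inter-pulse intervals below 20 ms, versus roughly 225 ms for regular pulses) is easy to operationalize; the click times below are hypothetical:

```python
def inter_pulse_intervals(times_s):
    """Differences between consecutive pulse times, in seconds."""
    return [b - a for a, b in zip(times_s, times_s[1:])]

def label_buzz(ipis_s, thresh_s=0.020):
    """Flag intervals below the threshold as buzz-like, mirroring the
    < 20 ms criterion quoted above."""
    return [ipi < thresh_s for ipi in ipis_s]

# Hypothetical click train: regular pulses (~225 ms apart), then a buzz.
times = [0.0, 0.225, 0.450, 0.675, 0.683, 0.691, 0.699]
ipis = inter_pulse_intervals(times)
flags = label_buzz(ipis)
```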


Subject(s)
Echolocation , Whales , Acoustics , Animals , Animals, Wild , Hawaii , Pacific Ocean , Sound Spectrography , Species Specificity , Time Factors
20.
J Acoust Soc Am ; 128(4): 2212-24, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968391

ABSTRACT

Spectral parameters were used to discriminate between echolocation clicks produced by three dolphin species at Palmyra Atoll: melon-headed whales (Peponocephala electra), bottlenose dolphins (Tursiops truncatus) and Gray's spinner dolphins (Stenella longirostris longirostris). Single species acoustic behavior during daytime observations was recorded with a towed hydrophone array sampling at 192 and 480 kHz. Additionally, an autonomous, bottom moored High-frequency Acoustic Recording Package (HARP) collected acoustic data with a sampling rate of 200 kHz. Melon-headed whale echolocation clicks had the lowest peak and center frequencies, spinner dolphins had the highest frequencies and bottlenose dolphins were nested in between these two species. Frequency differences were significant. Temporal parameters were not well suited for classification. Feature differences were enhanced by reducing variability within a set of single clicks by calculating mean spectra for groups of clicks. Median peak frequencies of averaged clicks (group size 50) of melon-headed whales ranged between 24.4 and 29.7 kHz, of bottlenose dolphins between 26.7 and 36.7 kHz, and of spinner dolphins between 33.8 and 36.0 kHz. Discriminant function analysis showed the ability to correctly discriminate between 93% of melon-headed whales, 75% of spinner dolphins and 54% of bottlenose dolphins.
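The variance-reduction step, averaging spectra over groups of clicks before extracting peak frequency, can be sketched with toy spectra (three frequency bins and group size 2, instead of the paper's full spectra and group size 50):

```python
def mean_spectra(spectra, group_size=50):
    """Average consecutive groups of click spectra to suppress
    single-click variability before extracting features."""
    means = []
    for i in range(0, len(spectra) - group_size + 1, group_size):
        group = spectra[i:i + group_size]
        means.append([sum(col) / group_size for col in zip(*group)])
    return means

def peak_bin(spectrum):
    """Index of the frequency bin with the highest magnitude."""
    return max(range(len(spectrum)), key=spectrum.__getitem__)

# Toy single-click spectra, each noisy but peaking at bin 1.
clicks = [[0.2, 1.0, 0.4], [0.4, 0.8, 0.6], [0.1, 0.9, 0.5], [0.3, 1.1, 0.3]]
averaged = mean_spectra(clicks, group_size=2)
```

The averaged spectra feed the peak/center-frequency features used by the discriminant analysis.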


Subject(s)
Bottle-Nosed Dolphin/physiology , Dolphins/physiology , Echolocation , Stenella/physiology , Vocalization, Animal , Animals , Discriminant Analysis , Signal Processing, Computer-Assisted , Sound Spectrography , Species Specificity , Time Factors