Results 1 - 5 of 5
1.
J Acoust Soc Am; 154(4): 2579-2593, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37874222

ABSTRACT

Passive acoustic monitoring is widely used for detection and localization of marine mammals. Typically, pressure sensors are used, although several studies have utilized acoustic vector sensors (AVSs), which measure acoustic pressure and particle velocity and can estimate azimuths to acoustic sources. AVSs can localize sources with a reduced number of sensors and do not require precise time synchronization between sensors. However, when multiple animals call concurrently, automated tracking of individual sources still poses a challenge, and manual methods are typically employed to link together sequences of measurements from a given source. This paper extends the method previously reported by Tenorio-Hallé, Thode, Lammers, Conrad, and Kim [J. Acoust. Soc. Am. 151(1), 126-137 (2022)] by employing and comparing two fully automated approaches for azimuthal tracking based on AVS data. One approach is based on random finite set statistics and the other on message passing algorithms, but both rely on the same underlying Bayesian statistical framework. The proposed methods are tested on several days of AVS data recorded off the coast of Maui, and the results show that both approaches successfully and efficiently track multiple singing humpback whales. The proposed methods thus make it possible to build a fully automated AVS tracking approach applicable to all species of baleen whales.


Subject(s)
Humpback Whale, Animals, Bayes Theorem, Acoustics, Algorithms, Cetacea
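
The two trackers compared in this abstract (random finite set statistics and message passing) are not reproduced here; as a minimal sketch of the Bayesian idea they share, the hypothetical Python below estimates an azimuth from AVS pressure and particle-velocity channels via the active intensity and smooths successive estimates with a single-source Kalman recursion. The function names and the noise variances q and r are illustrative assumptions, not the paper's values.

```python
# Minimal single-source sketch (assumed names and values), not the paper's
# multi-target trackers.
import numpy as np

def avs_azimuth(p, vx, vy):
    """Azimuth (radians) from pressure p and particle-velocity channels vx, vy,
    using the time-averaged active intensity of an acoustic vector sensor."""
    return np.arctan2(np.mean(p * vy), np.mean(p * vx))

def wrap(angle):
    """Wrap an angle to [-pi, pi)."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def track_azimuth(measurements, q=0.01, r=0.05):
    """Smooth a sequence of azimuth measurements (radians) with a
    constant-position Kalman recursion; q and r are assumed noise variances."""
    est, var = measurements[0], r
    track = [est]
    for z in measurements[1:]:
        var += q                              # predict: random-walk azimuth model
        k = var / (var + r)                   # Kalman gain
        est = wrap(est + k * wrap(z - est))   # update with angle wrap-around
        var = (1 - k) * var
        track.append(est)
    return np.array(track)
```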
2.
J Acoust Soc Am; 154(1): 502-517, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37493330

ABSTRACT

Many odontocetes produce whistles that feature characteristic contour shapes in spectrogram representations of their calls. Automatically extracting the time × frequency tracks of whistle contours has numerous downstream applications, including species classification, identification, and density estimation. Deep-learning-based methods, which train models using analyst-annotated whistles, offer a promising way to reliably extract whistle contours. However, the application of such methods can be limited by the significant amount of time and labor required for analyst annotation. To overcome this challenge, a technique that learns from automatically generated pseudo-labels has been developed. These annotations are less accurate than those produced by human analysts but far cheaper to generate. It is shown that standard training methods do not learn effective models from these pseudo-labels. An improved loss function, designed to compensate for pseudo-label error, is introduced and significantly increases whistle extraction performance. Experiments show that the developed technique performs well when trained with pseudo-labels generated by two different algorithms. Models trained with the generated pseudo-labels extract whistles with F1-scores (the harmonic mean of precision and recall) of 86.31% and 87.2% for the two sets of pseudo-labels considered. This performance is competitive with a model trained on 12 539 expert-annotated whistles (F1-score of 87.47%).


Subject(s)
Deep Learning, Animals, Humans, Vocalization, Animal, Sound Spectrography, Algorithms, Whales
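
The improved pseudo-label loss itself is not given in the abstract, so the sketch below only illustrates the general idea of compensating for pseudo-label error, using the well-known soft-bootstrapping form (blending the noisy pseudo-label with the model's own prediction). The function name, the trust parameter beta, and the tensor shapes are assumptions for illustration, not the paper's method.

```python
# Illustrative soft-bootstrapping loss (assumed names/values), not the
# paper's loss function.
import torch
import torch.nn.functional as F

def bootstrapped_bce(logits, pseudo_labels, beta=0.8):
    """Binary cross-entropy against a blend of the (possibly wrong)
    pseudo-label mask and the model's own prediction; beta is the assumed
    trust placed in the pseudo-labels."""
    probs = torch.sigmoid(logits).detach()                 # model's current belief
    targets = beta * pseudo_labels + (1.0 - beta) * probs  # softened target
    return F.binary_cross_entropy_with_logits(logits, targets)

# Usage on a batch of spectrogram patches with automatically generated masks
# (shapes are illustrative).
logits = torch.randn(4, 1, 64, 64)                  # network output
pseudo = (torch.rand(4, 1, 64, 64) > 0.9).float()   # noisy pseudo-label masks
loss = bootstrapped_bce(logits, pseudo)
```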
3.
J Acoust Soc Am; 150(5): 3399, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34852628

ABSTRACT

Acoustic line transect surveys are often used in combination with visual methods to estimate the abundance of marine mammal populations. These surveys typically use towed linear hydrophone arrays and estimate the time differences of arrival (TDOAs) of the signal of interest between pairs of hydrophones. The source TDOAs or bearings are then tracked through time, often manually, to estimate the animal's position. Estimating TDOAs from data and tracking them through time can be especially challenging in the presence of multiple acoustically active sources, missed detections, and clutter (false TDOAs). This study proposes a multi-target tracking method to automate TDOA tracking. The problem formulation is based on the Gaussian mixture probability hypothesis density (GM-PHD) filter and accounts for multiple sources, source appearance and disappearance, missed detections, and false alarms. It is shown that by using an extended measurement model that combines measurements from broadband echolocation clicks and narrowband whistles, more information can be extracted from the acoustic encounters. The method is demonstrated on false killer whale (Pseudorca crassidens) recordings from Hawaiian waters.


Subject(s)
Dolphins, Echolocation, Acoustics, Animals, Sound, Sound Spectrography, Vocalization, Animal
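
As a rough, hypothetical sketch of the Gaussian mixture PHD recursion named in this abstract, the Python below runs one predict/update cycle on a scalar state such as a TDOA. Birth components, pruning, merging, and track labelling, all required in practice, are omitted, and every parameter value is an assumption rather than one taken from the paper.

```python
# One predict/update cycle of a scalar GM-PHD filter (assumed parameters);
# birth, pruning, merging, and labelling are omitted.
import numpy as np

def gauss(z, m, s2):
    """Scalar Gaussian density N(z; m, s2)."""
    return np.exp(-0.5 * (z - m) ** 2 / s2) / np.sqrt(2.0 * np.pi * s2)

def gmphd_step(weights, means, variances, measurements,
               p_survive=0.99, p_detect=0.90, q=1e-4, r=1e-3, clutter=1.0):
    """weights/means/variances: lists describing the Gaussian mixture PHD of
    scalar states (e.g., TDOAs); measurements: the TDOAs detected this frame."""
    # Predict: random-walk motion model with survival probability p_survive.
    w = [p_survive * wi for wi in weights]
    m = list(means)
    P = [Pi + q for Pi in variances]

    # Missed-detection terms keep the predicted components, down-weighted.
    new_w = [(1.0 - p_detect) * wi for wi in w]
    new_m, new_P = list(m), list(P)

    # Add one set of updated components per measurement.
    for z in measurements:
        likes = [p_detect * wi * gauss(z, mi, Pi + r)
                 for wi, mi, Pi in zip(w, m, P)]
        norm = clutter + sum(likes)              # clutter intensity plus evidence
        for li, mi, Pi in zip(likes, m, P):
            k = Pi / (Pi + r)                    # Kalman gain
            new_w.append(li / norm)
            new_m.append(mi + k * (z - mi))
            new_P.append((1.0 - k) * Pi)
    return new_w, new_m, new_P
```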
4.
J Acoust Soc Am; 148(5): 3014, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33261403

ABSTRACT

The need for automated methods to detect and extract marine mammal vocalizations from acoustic data has increased in recent decades with the growing availability of long-term recording systems. Automated dolphin whistle extraction is a challenging problem because a time-varying number of overlapping whistles may be present in potentially noisy recordings. Typical methods rely on image processing techniques or single-target tracking, but these often fragment whistle contours and/or detect whistles only partially. This study casts the problem into a more general statistical multi-target tracking framework and uses the probability hypothesis density (PHD) filter as a practical approximation to the optimal Bayesian multi-target filter. In particular, a particle version, referred to as the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter, is adapted for frequency tracking, and specific models are developed for this application. Based on these models, two versions of the SMC-PHD filter are proposed, and their performance is investigated on an extensive real-world dataset of dolphin acoustic recordings. The proposed filters are shown to be efficient tools for automated whistle extraction, suitable for real-time implementation.


Subject(s)
Bottle-Nosed Dolphin, Acoustics, Animals, Bayes Theorem, Sound Spectrography, Vocalization, Animal
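
A minimal, hypothetical sketch of one cycle of a bootstrap SMC-PHD filter for frequencies detected in a single spectrogram frame is given below; it is not the paper's implementation. Resampling and state extraction are left out, and all parameters (survival and detection probabilities, noise scales, clutter intensity, birth model) are assumed values.

```python
# One cycle of a bootstrap SMC-PHD filter for whistle frequencies in one
# spectrogram frame (assumed parameters); resampling and state extraction
# are omitted.
import numpy as np

rng = np.random.default_rng(0)

def smc_phd_step(particles, weights, freq_peaks,
                 p_survive=0.95, p_detect=0.90, sigma_motion=20.0,
                 sigma_meas=30.0, clutter=1e-3, n_birth=50, f_max=48000.0):
    """particles: array of particle frequencies (Hz); weights: their PHD
    weights; freq_peaks: array of spectral-peak frequencies in this frame."""
    # Predict surviving particles with a random-walk frequency model.
    particles = particles + rng.normal(0.0, sigma_motion, size=particles.size)
    weights = p_survive * weights

    # Append birth particles spread uniformly over the analysed band.
    particles = np.concatenate([particles, rng.uniform(0.0, f_max, n_birth)])
    weights = np.concatenate([weights, np.full(n_birth, 0.1 / n_birth)])

    # Gaussian measurement likelihoods g(z | x) for every peak/particle pair.
    g = np.exp(-0.5 * ((freq_peaks[:, None] - particles[None, :])
                       / sigma_meas) ** 2) / (np.sqrt(2.0 * np.pi) * sigma_meas)

    # PHD update: missed-detection term plus one term per measurement.
    denom = clutter + p_detect * (g * weights).sum(axis=1)    # one per peak
    weights = weights * ((1.0 - p_detect)
                         + (p_detect * g / denom[:, None]).sum(axis=0))

    # The sum of the updated weights estimates the number of active whistles.
    return particles, weights, weights.sum()
```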
5.
J Acoust Soc Am; 140(3): 1981, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27914409

ABSTRACT

This work considers automated multi-target tracking of odontocete whistle contours. An adaptation of the Gaussian mixture probability hypothesis density (GM-PHD) filter is described and applied to acoustic recordings from six odontocete species. Spectral peaks are first identified in the raw data, and the GM-PHD filter is then used to track the whistles' frequency contours simultaneously. Overall, more than 9000 whistles are tracked with a precision of 85% and a recall of 71.8%. The proposed filter is shown to track whistles precisely, with a mean deviation of 104 Hz (about one frequency bin) from the annotated whistle path and 80% coverage. The filter is computationally efficient, suitable for real-time implementation, and widely applicable across odontocete species.
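
To make the reported figures concrete, the sketch below shows one simple way to compute precision, recall, F1-score, and mean frequency deviation by matching per-frame tracked frequencies to annotated ones within a tolerance. The matching rule and the tolerance value are assumptions chosen for illustration, not the paper's evaluation protocol.

```python
# Simple per-frame matching of tracked vs. annotated whistle frequencies
# (assumed tolerance and matching rule), not the paper's protocol.
import numpy as np

def evaluate_contours(tracked_hz, annotated_hz, tol_hz=125.0):
    """tracked_hz / annotated_hz: per-frame frequencies in Hz, with np.nan
    where no whistle is tracked or annotated in that frame."""
    both = ~np.isnan(tracked_hz) & ~np.isnan(annotated_hz)
    dev = np.abs(tracked_hz[both] - annotated_hz[both])
    true_pos = int((dev <= tol_hz).sum())        # matches within tolerance

    n_tracked = int((~np.isnan(tracked_hz)).sum())
    n_annotated = int((~np.isnan(annotated_hz)).sum())

    precision = true_pos / max(n_tracked, 1)
    recall = true_pos / max(n_annotated, 1)
    f1 = 2.0 * precision * recall / max(precision + recall, 1e-12)
    mean_dev = float(dev.mean()) if dev.size else float("nan")
    return precision, recall, f1, mean_dev
```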
