Results 1 - 5 of 5
1.
J Acoust Soc Am ; 150(2): 1264, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34470309

ABSTRACT

We present a new method for detecting North Atlantic Right Whale (NARW) upcalls using a Multimodel Deep Learning (MMDL) algorithm. An MMDL detector is a classifier that combines Convolutional Neural Networks (CNNs), Stacked Autoencoders (SAEs), and a fusion classifier that evaluates their outputs for a final decision. The MMDL detector aims for diversity among its constituent classifiers so that the overall architecture is learned to fit the data. Spectrograms and scalograms of signals from passive acoustic sensors are used to train the MMDL detector. Guided by previous applications, we trained the CNNs with spectrograms and the SAEs with scalograms. Outputs from the individual models were evaluated by the fusion classifier. The results obtained from the MMDL algorithm were compared with those obtained from conventional machine learning algorithms trained with handcrafted features, and showed the superiority of the MMDL algorithm in terms of the upcall detection rate, non-upcall detection rate, and false alarm rate. The autonomy of the MMDL detector has immediate application to the effective monitoring and protection of one of the most endangered whale species in the world, for which accurate call detection of a low-density species is critical, especially in environments with high acoustic masking.
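
As a rough illustration of the multimodel idea, the following PyTorch sketch pairs a small CNN over spectrograms with an SAE-style encoder over scalograms and fuses their class scores with a small fusion classifier. All layer sizes, input shapes, and the fusion rule are illustrative assumptions, not the configuration used in the paper.

    import torch
    import torch.nn as nn

    class SpectrogramCNN(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

        def forward(self, x):            # x: (batch, 1, freq, time)
            return self.head(self.features(x))

    class ScalogramSAE(nn.Module):
        """Stacked-autoencoder-style encoder used here as a classifier."""
        def __init__(self, in_dim=64 * 64, n_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, 256), nn.ReLU(),
                nn.Linear(256, 64), nn.ReLU())
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):            # x: (batch, in_dim), flattened scalogram
            return self.classifier(self.encoder(x))

    class FusionClassifier(nn.Module):
        """Maps the concatenated per-model scores to a final decision."""
        def __init__(self, n_models=2, n_classes=2):
            super().__init__()
            self.fuse = nn.Linear(n_models * n_classes, n_classes)

        def forward(self, scores):
            return self.fuse(scores)

    cnn, sae, fusion = SpectrogramCNN(), ScalogramSAE(), FusionClassifier()
    spec = torch.randn(8, 1, 64, 64)     # stand-in spectrogram batch
    scal = torch.randn(8, 64 * 64)       # stand-in flattened scalogram batch
    logits = fusion(torch.cat([cnn(spec), sae(scal)], dim=1))
    print(logits.shape)                  # torch.Size([8, 2]): upcall vs. non-upcall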


Subject(s)
Deep Learning, Whales, Acoustics, Algorithms, Animals
2.
J Acoust Soc Am ; 148(3): EL260, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33003883

ABSTRACT

A transfer learning approach is proposed to classify grouper species by the courtship-associated sounds they produce during spawning aggregations. Vessel sounds are also included in order to potentially identify human interaction with spawning fish. Grouper sounds recorded during spawning aggregations were first converted to time-frequency representations. Two types of time-frequency representations were used in this study: spectrograms and scalograms. These were converted to images and then fed to pretrained deep neural network models: VGG16, VGG19, GoogLeNet, and MobileNet. The experimental results revealed that transfer learning significantly outperformed an approach based on manually identified features for grouper sound classification. In addition, both time-frequency representations produced almost identical results in terms of classification accuracy.
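
As a rough sketch of this recipe with one of the listed backbones (VGG16, via PyTorch/torchvision): freeze the ImageNet-pretrained convolutional base and retrain only a new output layer on the spectrogram or scalogram images. The class count and the freezing strategy are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    n_classes = 4                                      # hypothetical species count
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

    for p in model.features.parameters():              # freeze convolutional base
        p.requires_grad = False

    model.classifier[6] = nn.Linear(4096, n_classes)   # new classification head

    x = torch.randn(2, 3, 224, 224)                    # spectrograms as RGB images
    print(model(x).shape)                              # torch.Size([2, 4])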


Sujet(s)
Serran , Animaux , Humains , Apprentissage , Apprentissage machine , , Son (physique)
3.
J Acoust Soc Am ; 146(4): 2155, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31671953

ABSTRACT

In this paper, a method is introduced for classifying the call types of red hind grouper, an important fishery resource in the Caribbean that produces sounds associated with reproductive behaviors during yearly spawning aggregations. Two distinct red hind call types are analyzed. An ensemble of stacked autoencoders (SAEs) is designed by randomly selecting the hyperparameters of the SAEs in the network; these hyperparameters include the number of hidden layers in each SAE and the number of nodes in each hidden layer. Spectrograms of red hind calls are used to train this randomly generated ensemble of SAEs one at a time. Once all individual SAEs are trained, the ensemble is used as a whole to classify call types: the outputs of the individual SAEs are combined by a fusion mechanism to produce a final decision on the call type of the input red hind sound. Experimental results show that this ensemble approach produces superior results compared with non-ensemble methods. The algorithm reliably classified red hind call types with over 90% accuracy and successfully detected some calls missed by human observers.
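
A minimal sketch of the randomized ensemble, assuming a simple averaging fusion (the paper's actual fusion mechanism and hyperparameter ranges are not given in the abstract):

    import random
    import torch
    import torch.nn as nn

    def random_sae(in_dim, n_classes):
        """Build one ensemble member with random depth and layer widths."""
        layers, dim = [], in_dim
        for _ in range(random.randint(1, 3)):      # random number of hidden layers
            width = random.choice([32, 64, 128])   # random nodes per layer
            layers += [nn.Linear(dim, width), nn.ReLU()]
            dim = width
        layers.append(nn.Linear(dim, n_classes))
        return nn.Sequential(*layers)

    ensemble = [random_sae(64 * 64, 2) for _ in range(5)]
    x = torch.randn(4, 64 * 64)                    # flattened spectrogram batch
    probs = torch.stack([m(x).softmax(dim=1) for m in ensemble]).mean(dim=0)
    print(probs.argmax(dim=1))                     # fused call-type decisions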

4.
J Acoust Soc Am ; 144(3): EL196, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30424627

ABSTRACT

In this paper, the effectiveness of deep learning for the automatic classification of grouper species by their vocalizations is investigated. In the proposed approach, wavelet denoising is used to reduce ambient ocean noise, and a deep neural network is then used to classify sounds generated by different species of groupers. Experimental results for four species of groupers show that the proposed approach achieves a classification accuracy of around 90% or above in all of the tested cases, significantly better than that obtained by a previously reported method for automatic classification of grouper calls.
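
The denoising step can be sketched with PyWavelets; soft thresholding with a universal threshold estimated from the finest detail level is one standard choice, and the wavelet family and decomposition level below are illustrative assumptions rather than the paper's settings.

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate (MAD)
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)

    noisy = np.random.randn(4096)        # stand-in for a hydrophone recording
    clean = wavelet_denoise(noisy)       # denoised signal fed to the classifier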


Subject(s)
Deep Learning/classification, Sound, Animal Vocalization/physiology, Animals, Fishes
5.
J Acoust Soc Am ; 143(2): 666, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29495690

ABSTRACT

Groupers, a family of marine fishes, produce distinct vocalizations associated with their reproductive behavior during spawning aggregations. These low-frequency sounds (50-350 Hz) consist of a series of pulses repeated at a variable rate. In this paper, an approach is presented for the automatic classification of grouper vocalizations from ambient sounds recorded in situ with fixed hydrophones, based on weighted features and a sparse classifier. Grouper sounds were labeled initially by humans for training and testing various feature extraction and classification methods. In the feature extraction phase, four types of features were extracted from the sounds produced by groupers. Once the sound features were extracted, three types of representative classifiers were applied to categorize the species that produced the sounds. Experimental results showed that the best combination, weighted mel-frequency cepstral coefficients as the feature extractor paired with the sparse classifier, achieved an overall identification accuracy of 82.7%. The proposed algorithm has been implemented on an autonomous platform (wave glider) for real-time detection and classification of grouper vocalizations.
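
A sparse-representation-style classifier of the kind described can be sketched as follows: per-clip feature vectors (e.g., weighted MFCCs) from the training set form a dictionary, and a test clip is assigned to the class whose columns best reconstruct its feature vector under an L1-sparse code. The sizes, the Lasso solver, and the residual-based decision rule are illustrative assumptions, not the paper's exact method.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    D = rng.normal(size=(13, 40))        # dictionary: 13-dim feature vectors, 40 clips
    labels = np.repeat([0, 1], 20)       # two species, 20 training clips each
    x = rng.normal(size=13)              # feature vector of an unlabeled clip

    # Sparse code of x over the whole dictionary (L1-regularized least squares).
    code = Lasso(alpha=0.05, fit_intercept=False).fit(D, x).coef_

    # Assign the class whose atoms give the smallest reconstruction residual.
    residuals = [np.linalg.norm(x - D[:, labels == c] @ code[labels == c])
                 for c in (0, 1)]
    print(int(np.argmin(residuals)))     # predicted species index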
