Results 1 - 20 of 23
1.
Neural Netw ; 175: 106320, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640696

ABSTRACT

The rhythm of bona fide speech is often difficult to replicate, which causes the fundamental frequency (F0) of synthetic speech to differ significantly from that of real speech. The F0 feature is therefore expected to contain discriminative information for the fake speech detection (FSD) task. In this paper, we propose a novel F0 subband for FSD. In addition, to model the F0 subband effectively and thereby improve FSD performance, the spatial reconstructed local attention Res2Net (SR-LA Res2Net) is proposed. Specifically, Res2Net is used as a backbone network to obtain multiscale information and is enhanced with a spatial reconstruction mechanism to avoid losing important information when the channel groups are repeatedly superimposed. In addition, local attention is designed to make the model focus on the local information of the F0 subband. Experimental results on the ASVspoof 2019 LA dataset show that the proposed method obtains an equal error rate (EER) of 0.47% and a minimum tandem detection cost function (min t-DCF) of 0.0159, achieving state-of-the-art performance among all single systems.


Subjects
Neural Networks, Computer; Humans; Speech; Attention/physiology; Algorithms
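The paper does not specify its F0 front end; below is a minimal, generic sketch of frame-level F0 estimation via autocorrelation (numpy only), of the kind that could feed an F0 subband feature. The function name and search range are illustrative assumptions.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate F0 of one speech frame via autocorrelation.
    Generic sketch, not the paper's exact front end."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag search range
    lag = lo + int(np.argmax(ac[lo:hi + 1]))  # strongest periodicity
    return sr / lag

sr = 16000
t = np.arange(2048) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 120.0 * t), sr)  # 120 Hz tone
```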
2.
J Acoust Soc Am ; 155(1): 436-451, 2024 01 01.
Article in English | MEDLINE | ID: mdl-38240664

ABSTRACT

In indoor environments, reverberation often distorts clean speech. Although deep learning-based speech dereverberation approaches have shown much better performance than traditional ones, the inferior speech quality of the dereverberated speech caused by magnitude distortion and limited phase recovery is still a serious problem for practical applications. This paper improves the performance of deep learning-based speech dereverberation from the perspectives of both network design and mapping target optimization. Specifically, on the one hand, a bifurcated-and-fusion network and its guidance loss functions were designed to help reduce the magnitude distortion while enhancing the phase recovery. On the other hand, the time boundary between the early and late reflections in the mapped speech was investigated, so as to make a balance between the reverberation tailing effect and the difficulty of magnitude/phase recovery. Mathematical derivations were provided to show the rationality of the specially designed loss functions. Geometric illustrations were given to explain the importance of preserving early reflections in reducing the difficulty of phase recovery. Ablation study results confirmed the validity of the proposed network topology and the importance of preserving 20 ms early reflections in the mapped speech. Objective and subjective test results showed that the proposed system outperformed other baselines in the speech dereverberation task.


Subjects
Deep Learning; Speech Perception; Speech; Speech Intelligibility
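The 20 ms early-reflection boundary discussed above can be sketched as a split of a room impulse response (RIR) at a fixed time after the direct-sound peak; the target for dereverberation then keeps the direct sound plus early reflections. This is an illustrative sketch, and the paper's exact boundary handling may differ.

```python
import numpy as np

def split_rir(rir, sr, early_ms=20.0):
    """Split an RIR into early (direct + first `early_ms` ms) and late parts,
    measured from the direct-sound peak."""
    direct = int(np.argmax(np.abs(rir)))
    cut = direct + int(early_ms * sr / 1000)
    early, late = rir.copy(), rir.copy()
    early[cut:] = 0.0   # keep only direct sound and early reflections
    late[:cut] = 0.0    # keep only the reverberation tail
    return early, late

sr = 16000
rir = np.exp(-np.arange(8000) / 800.0)   # toy exponentially decaying RIR
early, late = split_rir(rir, sr)
```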
3.
Trends Hear ; 27: 23312165231209913, 2023.
Article in English | MEDLINE | ID: mdl-37956661

ABSTRACT

Frequency-domain monaural speech enhancement has been extensively studied for over 60 years, and a great number of methods have been proposed and applied to many devices. In the last decade, monaural speech enhancement has made tremendous progress with the advent and development of deep learning, and performance using such methods has been greatly improved relative to traditional methods. This survey paper first provides a comprehensive overview of traditional and deep-learning methods for monaural speech enhancement in the frequency domain. The fundamental assumptions of each approach are then summarized and analyzed to clarify their limitations and advantages. A comprehensive evaluation of some typical methods was conducted using the WSJ + Deep Noise Suppression (DNS) challenge and Voice Bank + DEMAND datasets to give an intuitive and unified comparison. The benefits of monaural speech enhancement methods using objective metrics relevant for normal-hearing and hearing-impaired listeners were evaluated. The objective test results showed that compression of the input features was important for simulated normal-hearing listeners but not for simulated hearing-impaired listeners. Potential future research and development topics in monaural speech enhancement are suggested.


Subjects
Deep Learning; Hearing Loss; Speech Perception; Humans; Speech
4.
Neural Netw ; 168: 508-517, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37832318

ABSTRACT

Recent multi-domain processing methods have demonstrated promising performance for monaural speech enhancement tasks. However, few of them explain why they behave better than single-domain approaches. As an attempt to fill this gap, this paper presents a complementary single-channel speech enhancement network (CompNet) that demonstrates promising denoising capabilities and provides a unique perspective on the improvements introduced by multi-domain processing. Specifically, the noisy speech is initially enhanced through a time-domain network. However, although the waveform can be feasibly recovered, the distribution of the time-frequency bins may still partly differ from the target spectrum when the problem is reconsidered in the frequency domain. To solve this problem, we design a dedicated dual-path network as a post-processing module to independently filter the magnitude and refine the phase. This further drives the estimated spectrum to closely approximate the target spectrum in the time-frequency domain. We conduct extensive experiments with the WSJ0-SI84 and VoiceBank + Demand datasets. Objective test results show that the performance of the proposed system is highly competitive with existing systems.


Subjects
Algorithms; Speech; Noise; Signal-To-Noise Ratio
5.
Neural Netw ; 166: 566-578, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37586257

ABSTRACT

End-to-end neural diarization (EEND), which can directly output speaker diarization results and handle overlapping speech, has attracted increasing attention due to its promising performance. Although existing EEND-based methods often outperform clustering-based methods, they cannot generalize well to unseen test sets because fixed attractors are often used to estimate the speech activity of each speaker. An iterative adaptive attractor estimation (IAAE) network was proposed to refine diarization results, in which the self-attentive EEND (SA-EEND) was used to initialize the diarization results and frame-wise embeddings. The proposed IAAE network has two main parts: an attention-based pooling designed to obtain a rough estimate of the attractors based on the diarization results of the previous iteration, and an adaptive attractor calculated using transformer decoder blocks. A unified training framework was proposed to further improve diarization performance, making the embeddings more discriminable based on the well-separated attractors. We evaluated the proposed method on both simulated mixtures and the real CALLHOME dataset using the diarization error rate (DER). The proposed method provides relative reductions in DER of up to 44.8% on simulated 2-speaker mixtures and 23.6% on the CALLHOME dataset over the baseline SA-EEND at the second iteration step. We also demonstrated that, as the number of refinement steps increases, the DER on the CALLHOME dataset can be further reduced to 7.36%, achieving state-of-the-art diarization results compared with other methods.


Subjects
Speech; Cluster Analysis
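The rough attractor estimate described above can be sketched as a posterior-weighted pooling of frame embeddings: each speaker's attractor is the average of the frame-wise embeddings, weighted by that speaker's diarization posteriors from the previous iteration. This is an illustrative sketch of the pooling step only; the transformer-decoder refinement is omitted.

```python
import numpy as np

def attractor_pooling(emb, post):
    """Posterior-weighted pooling of frame embeddings.
    emb:  (T, D) frame-wise embeddings
    post: (T, S) per-speaker diarization posteriors
    Returns (S, D) rough attractor estimates."""
    w = post / (post.sum(axis=0, keepdims=True) + 1e-8)  # normalize per speaker
    return w.T @ emb

T, D, S = 100, 16, 2
rng = np.random.default_rng(0)
emb = rng.standard_normal((T, D))
post = rng.uniform(size=(T, S))
attractors = attractor_pooling(emb, post)
```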
6.
Trends Hear ; 27: 23312165231192290, 2023.
Article in English | MEDLINE | ID: mdl-37551089

ABSTRACT

Speech and music both play fundamental roles in daily life. Speech is important for communication while music is important for relaxation and social interaction. Both speech and music have a large dynamic range. This does not pose problems for listeners with normal hearing. However, for hearing-impaired listeners, elevated hearing thresholds may result in low-level portions of sound being inaudible. Hearing aids with frequency-dependent amplification and amplitude compression can partly compensate for this problem. However, the gain required for low-level portions of sound to compensate for the hearing loss can be larger than the maximum stable gain of a hearing aid, leading to acoustic feedback. Feedback control is used to avoid such instability, but this can lead to artifacts, especially when the gain is only just below the maximum stable gain. We previously proposed a deep-learning method called DeepMFC for controlling feedback and reducing artifacts and showed that when the sound source was speech DeepMFC performed much better than traditional approaches. However, its performance using music as the sound source was not assessed and the way in which it led to improved performance for speech was not determined. The present paper reveals how DeepMFC addresses feedback problems and evaluates DeepMFC using speech and music as sound sources with both objective and subjective measures. DeepMFC achieved good performance for both speech and music when it was trained with matched training materials. When combined with an adaptive feedback canceller it provided over 13 dB of additional stable gain for hearing-impaired listeners.


Subjects
Hearing Aids; Music; Speech Perception; Humans; Speech; Feedback; Acoustic Stimulation; Signal Processing, Computer-Assisted
7.
Hear Res ; 434: 108781, 2023 07.
Article in English | MEDLINE | ID: mdl-37156121

ABSTRACT

When presenting a stereo sound through bilateral stimulation by two bone conduction transducers (BTs), part of the sound at the left side leaks to the right side, and vice versa. The sound transmitted to the contralateral cochlea becomes cross-talk, which can affect space perception. The negative effects of the cross-talk can be mitigated by a cross-talk cancellation system (CCS). Here, a CCS is designed from individual bone conduction (BC) transfer functions using a fast deconvolution algorithm. The BC response functions (BCRFs) from the stimulation positions to the cochleae were obtained by measurements of BC evoked otoacoustic emissions (OAEs) of 10 participants. The BCRFs of the 10 participants showed that the interaural isolation was low. In 5 of the participants, a cross-talk cancellation experiment was carried out based on the individualized BCRFs. Simulations showed that the CCS gave a channel separation (CS) of more than 50 dB in the 1-3 kHz range with appropriately chosen parameter values. Moreover, a localization test showed that the BC localization accuracy improved using the CCS where a 2-4.5 kHz narrowband noise gave better localization performance than a broadband 0.4-10 kHz noise. The results indicate that using a CCS with bilateral BC stimulation can improve interaural separation and thereby improve spatial hearing by bilateral BC.


Subjects
Bone Conduction; Hearing; Humans; Bone Conduction/physiology; Acoustic Stimulation/methods; Hearing/physiology; Sound; Cochlea/physiology
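The channel separation (CS) figure quoted above can be computed per frequency bin as the ratio of the ipsilateral to the contralateral transfer-function magnitude, in dB. This is a simplified illustration of the metric; the paper's evaluation over the measured BCRFs may differ in detail.

```python
import numpy as np

def channel_separation_db(H_ipsi, H_contra):
    """Channel separation per frequency bin, in dB:
    20*log10(|H_ipsi| / |H_contra|)."""
    return 20.0 * np.log10(np.abs(H_ipsi) / (np.abs(H_contra) + 1e-12))

f = np.linspace(1e3, 3e3, 5)          # toy frequency grid, 1-3 kHz
H_ipsi = np.ones_like(f)              # unit ipsilateral path
H_contra = 1e-3 * np.ones_like(f)     # -60 dB residual cross-talk after CCS
cs = channel_separation_db(H_ipsi, H_contra)
```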
8.
J Acoust Soc Am ; 153(1): 248, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36732256

ABSTRACT

Individual head-related transfer functions (HRTFs) are usually measured with high spatial resolution or modeled with anthropometric parameters. This study proposed an HRTF individualization method using only spatially sparse measurements and a convolutional neural network (CNN). The HRTFs were represented as two-dimensional images, in which the horizontal and vertical ordinates indicated direction and frequency, respectively. The CNN was trained on a prior HRTF database, using the HRTF images measured at specific sparse directions as input and the corresponding high-spatial-resolution images as output. The HRTFs of a new subject can then be recovered by the trained CNN from sparsely measured HRTFs. Objective experiments showed that, when using 23 directions to recover individual HRTFs at 1250 directions, the spectral distortion (SD) is around 4.4 dB; when using 105 directions, the SD is reduced to around 3.8 dB. Subjective experiments showed that the individualized HRTFs recovered from 105 directions had a lower discrimination proportion than the baseline method and were perceptually indistinguishable in many directions. This method combines the spectral and spatial characteristics of HRTFs for individualization, which has potential for improving the virtual reality experience.
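The spectral distortion (SD) numbers quoted above are typically computed as the RMS of the dB-magnitude difference between the recovered and measured HRTFs over frequency bins; a minimal sketch follows (the paper may additionally average over directions and subjects).

```python
import numpy as np

def spectral_distortion_db(H_ref, H_est):
    """RMS of the dB-magnitude difference between two HRTF
    magnitude responses over frequency bins."""
    diff = 20.0 * np.log10(np.abs(H_est) / np.abs(H_ref))
    return float(np.sqrt(np.mean(diff ** 2)))

H_ref = np.ones(128)                     # toy reference magnitude response
H_est = np.full(128, 10 ** (4.4 / 20))   # uniform 4.4 dB error
sd = spectral_distortion_db(H_ref, H_est)
```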

9.
J Acoust Soc Am ; 152(6): 3616, 2022 12.
Article in English | MEDLINE | ID: mdl-36586835

ABSTRACT

For hearing aids, it is critical to reduce the acoustic coupling between the receiver and microphone to ensure that prescribed gains are below the maximum stable gain, thus preventing acoustic feedback. Methods for doing this include fixed and adaptive feedback cancellation, phase modulation, and gain reduction. However, the behavior of hearing aids in situations where the prescribed gain is only just below the maximum stable gain, called here "marginally stable gain," is not well understood. This paper analyzed marginally stable systems and identified three problems, including increased gain at frequencies with the smallest gain margin, short whistles caused by the limited rate of decay of the output when the input drops, and coloration effects. A deep learning framework, called deep marginal feedback cancellation (DeepMFC), was developed to suppress short whistles, and reduce coloration effects, as well as to limit excess amplification at certain frequencies. To implement DeepMFC, many receiver signals in closed-loop systems and corresponding open-loop systems were simulated, and the receiver signals of the closed-loop and open-loop systems were paired together to obtain parallel signals for training. DeepMFC achieved much better performance than existing feedback control methods using objective and subjective measures.


Subjects
Deep Learning; Hearing Aids; Feedback; Acoustics
10.
Front Neurosci ; 16: 1068682, 2022.
Article in English | MEDLINE | ID: mdl-36466173

ABSTRACT

All hearing aid fittings should be validated with appropriate outcome measurements, but well-designed objective verification methods for bone conduction (BC) hearing aids are lacking, compared with the real-ear measurement used for air conduction hearing aids. This study aims to develop a new objective verification method for BC hearing aids by placing a piezoelectric thin-film force transducer between the BC transducer and the stimulation position. The newly proposed method was compared with the ear canal method and the artificial mastoid method through audibility estimation. The audibility estimation used the responses from the transducers corresponding to the individual BC hearing thresholds and three different input levels of pink noise. Twenty hearing-impaired (HI) subjects without prior experience with hearing aids were recruited for this study. The measurement and analysis results showed that the force transducer and ear canal methods yielded largely consistent results, while the artificial mastoid method differed significantly from these two methods. The proposed force transducer method showed a lower noise level and was less affected by the sound-field signal than the other methods. This indicates that a piezoelectric thin-film force transducer is a promising in-situ objective measurement method for BC stimulation.

11.
Trends Hear ; 26: 23312165221130185, 2022.
Article in English | MEDLINE | ID: mdl-36200171

ABSTRACT

The position of a bone conduction (BC) transducer influences the perception of BC sound, but the relation between the stimulation position and BC sound perception is not entirely clear. In the current study, eleven participants with normal hearing were evaluated for their hearing thresholds and speech intelligibility for three stimulation positions (temple, mastoid, and condyle) and four types of ear canal occlusion produced by headphones. In addition, the sound quality for three types of music was rated with stimulation at the three positions. Stimulation at the condyle gave the best performance while the temple showed the worst performance for hearing thresholds, speech intelligibility, and sound quality. The in-ear headphones gave the highest occlusion effect while fully open headphones gave the least occlusion effect. BC stimulated speech intelligibility improved with greater occlusion, especially for the temple stimulation position. The results suggest that BC stimulation at the condyle is generally superior to the other positions tested in terms of sensitivity, clarity, and intelligibility, and that occlusion with ordinary headphones improves the BC signal.


Subjects
Ear Canal; Speech Perception; Acoustic Stimulation/methods; Auditory Threshold/physiology; Bone Conduction/physiology; Bone and Bones; Ear Canal/physiology; Humans
12.
Front Psychol ; 13: 841926, 2022.
Article in English | MEDLINE | ID: mdl-36106044

ABSTRACT

With the development of deep neural networks, automatic music composition has made great progress. Although emotional music can evoke different auditory perceptions in listeners, few studies have focused on generating emotional music. This paper presents EmotionBox, a music-element-driven emotional music generator based on music psychology that can compose music given a specific emotion, without requiring a music dataset labeled with emotions as previous methods do. In this work, the pitch histogram and note density are extracted as features representing mode and tempo, respectively, to control music emotion. Specific emotions are mapped from these features through Russell's psychological model. Subjective listening tests show that EmotionBox has competitive performance in generating different emotional music and significantly better performance in generating music with low-arousal emotions, especially peaceful emotion, compared with the emotion-label-based method.
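The mode-to-valence and density-to-arousal mapping described above can be sketched as a lookup over Russell's four quadrants. This is an illustrative simplification; the quadrant labels and thresholds here are assumptions, not the paper's exact mapping.

```python
def russell_quadrant(major_mode: bool, high_density: bool) -> str:
    """Map music-element features to a Russell-model quadrant:
    mode ~ valence, note density (tempo) ~ arousal."""
    if major_mode and high_density:
        return "happy"        # positive valence, high arousal
    if major_mode:
        return "peaceful"     # positive valence, low arousal
    if high_density:
        return "tense"        # negative valence, high arousal
    return "sad"              # negative valence, low arousal

label = russell_quadrant(major_mode=True, high_density=False)
```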

13.
J Acoust Soc Am ; 151(5): 3291, 2022 05.
Article in English | MEDLINE | ID: mdl-35649938

ABSTRACT

It is highly desirable that speech enhancement algorithms can achieve good performance while keeping low latency for many applications, such as digital hearing aids, mobile phones, acoustically transparent hearing devices, and public address systems. To improve the performance of traditional low-latency speech enhancement algorithms, a deep filter-bank equalizer (FBE) framework was proposed that integrated a deep learning-based subband noise reduction network with a deep learning-based shortened digital filter mapping network. In the first network, a deep learning model was trained with a controllable small frame shift to satisfy the low-latency demand, i.e., no greater than 4 ms, so as to obtain (complex) subband gains that could be regarded as an adaptive digital filter in each frame. In the second network, to reduce the latency, this adaptive digital filter was implicitly shortened by a deep learning-based framework and was then applied to noisy speech to reconstruct the enhanced speech without the overlap-add method. Experimental results on the WSJ0-SI84 corpus indicated that the proposed DeepFBE with only 4-ms latency achieved much better performance than traditional low-latency speech enhancement algorithms across several objective metrics. Listening test results further confirmed that our approach achieved higher speech quality than other methods.


Subjects
Hearing Aids; Speech; Algorithms; Auditory Perception; Noise/adverse effects; Noise/prevention & control
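A rough accounting of the algorithmic latency of such a frame-based filtering scheme sums one frame shift of buffering with the group delay of the (shortened) filter; the breakdown below is an illustrative assumption, not the paper's exact latency budget.

```python
def algorithmic_latency_ms(frame_shift: int, filter_len: int, sr: int) -> float:
    """Rough algorithmic latency of frame-based filtering:
    one frame shift of buffering plus the filter's group delay
    (filter_len // 2 samples for a linear-phase filter)."""
    return 1000.0 * (frame_shift + filter_len // 2) / sr

# e.g. a 2 ms frame shift with a shortened 64-tap filter at 16 kHz
lat = algorithmic_latency_ms(frame_shift=32, filter_len=64, sr=16000)
```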
14.
Trends Hear ; 26: 23312165221097196, 2022.
Article in English | MEDLINE | ID: mdl-35491731

ABSTRACT

Virtual sound localization tests were conducted to examine the effects of stimulation position (mastoid, condyle, supra-auricular, temple, and bone-anchored hearing aid implant position) and frequency band (low frequency, high frequency, and broadband) on bone-conduction (BC) horizontal localization. Non-individualized head-related transfer functions were used to reproduce virtual sound through bilateral BC transducers. Subjective experiments showed that stimulation at the mastoid gave the best performance while the temple gave the worst performance in localization. Stimulation at the mastoid and condyle did not differ significantly from that using air-conduction (AC) headphones in localization accuracy. However, binaural reproduction at all BC stimulation positions led to similar levels of front-back confusion (FBC), which were also comparable to that with AC headphones. Binaural BC reproduction with high-frequency stimulation led to significantly higher localization accuracy than with low-frequency stimulation. When transcranial attenuation (TA) was measured, the attenuation became larger at the condyle and mastoid, and increased at high frequencies. The experiments imply that larger TAs may improve localization accuracy but do not improve FBC. The present study indicates that the BC stimulation at the mastoid and condyle can effectively convey spatial information, especially with high-frequency stimulation.


Subjects
Hearing Aids; Sound Localization; Acoustic Stimulation; Bone Conduction/physiology; Hearing; Humans
15.
J Acoust Soc Am ; 151(3): 1434, 2022 03.
Article in English | MEDLINE | ID: mdl-35364914

ABSTRACT

Bone conduction devices are used in audiometric tests, hearing rehabilitation, and communication systems. The mechanical impedance of the stimulated skull location affects the performance of bone conduction devices. In the present study, the mechanical impedances of the mastoid and condyle were measured in 100 Chinese subjects aged 22 to 67 years. The results show that the mastoid and condyle impedances within the same subject differ significantly, and that the impedance differences between subjects at the same stimulation position lie mainly below the resonance frequency. The mechanical impedance of the mastoid is significantly influenced by age and not related to gender or body mass index (BMI). In contrast, the mechanical impedance of the condyle is significantly affected by BMI, followed by gender, and is not related to age. There are some differences in mastoid impedance between the Chinese and Western subjects. An analogy model predicts that the difference in mechanical impedance between the mastoid and condyle leads to a significant difference in the output force of bone conduction devices. The results can be used to develop improved condyle and mastoid stimulators for the Chinese population.


Subjects
Hearing Aids; Mastoid; Adult; Aged; Bone Conduction/physiology; Electric Impedance; Humans; Mastoid/physiology; Middle Aged; Skull/physiology; Young Adult
16.
J Acoust Soc Am ; 150(4): 2577, 2021 10.
Article in English | MEDLINE | ID: mdl-34717509

ABSTRACT

Packet loss concealment (PLC) aims to mitigate speech impairments caused by packet losses so as to improve speech perceptual quality. This paper proposes an end-to-end PLC algorithm with a time-frequency hybrid generative adversarial network, which incorporates a dilated residual convolution and the integration of a time-domain discriminator and frequency-domain discriminator into a convolutional encoder-decoder architecture. The dilated residual convolution is employed to aggregate the short-term and long-term context information of lost speech frames through two network receptive fields with different dilation rates, and the integrated time-frequency discriminators are proposed to learn multi-resolution time-frequency features from correctly received speech frames with both time-domain waveform and frequency-domain complex spectrums. Both causal and noncausal strategies are proposed for the packet-loss problem, which can effectively reduce the transitional distortion caused by lost speech frames with a significantly reduced number of training parameters and computational complexity. The experimental results show that the proposed method can achieve better performance in terms of three objective measurements, including the signal-to-noise ratio, perceptual evaluation of speech quality, and short-time objective intelligibility. The results of the subjective listening test further confirm a better performance in the speech perceptual quality.


Subjects
Algorithms; Speech; Auditory Perception; Signal-To-Noise Ratio
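The two receptive fields with different dilation rates mentioned above follow the standard formula for stacked 1-D dilated convolutions; the kernel size and dilation schedules below are illustrative assumptions, not the paper's configuration.

```python
def receptive_field(kernel: int, dilations) -> int:
    """Receptive field (in frames) of a stack of 1-D dilated
    convolutions with the given kernel size and dilation rates."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

rf_short = receptive_field(3, [1, 2, 4])         # short-term context branch
rf_long = receptive_field(3, [1, 2, 4, 8, 16])   # long-term context branch
```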
17.
J Acoust Soc Am ; 150(2): 816, 2021 08.
Article in English | MEDLINE | ID: mdl-34470328

ABSTRACT

Traditional stereophonic acoustic echo cancellation algorithms need to estimate acoustic echo paths from stereo loudspeakers to a microphone, which often suffers from the nonuniqueness problem caused by a high correlation between the two far-end signals of these stereo loudspeakers. Many decorrelation methods have already been proposed to mitigate this problem. However, these methods may reduce the audio quality and/or stereophonic spatial perception. This paper proposes to use a convolutional recurrent network (CRN) to suppress the stereophonic echo components by estimating a nonlinear gain, which is then multiplied by the complex spectrum of the microphone signal to obtain the estimated near-end speech without a decorrelation procedure. The CRN includes an encoder-decoder module and two-layer gated recurrent network module, which can take advantage of the feature extraction capability of the convolutional neural networks and temporal modeling capability of recurrent neural networks simultaneously. The magnitude spectra of the two far-end signals are used as input features directly without any decorrelation preprocessing and, thus, both the audio quality and stereophonic spatial perception can be maintained. The experimental results in both the simulated and real acoustic environments show that the proposed algorithm outperforms traditional algorithms such as the normalized least-mean square and Wiener algorithms, especially in situations of low signal-to-echo ratio and high reverberation time RT60.


Subjects
Deep Learning; Acoustics; Algorithms; Least-Squares Analysis; Neural Networks, Computer
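The core masking operation described above, multiplying an estimated gain by the complex spectrum of the microphone signal, can be sketched as follows. The CRN that predicts the gain is omitted; a fixed gain stands in for the network output.

```python
import numpy as np

def apply_gain(mic, gain, n_fft=64):
    """Apply a per-bin real-valued gain to the complex spectrum of the
    microphone signal and return the resynthesized time-domain frame."""
    Y = np.fft.rfft(mic, n=n_fft)          # complex spectrum of mic frame
    return np.fft.irfft(gain * Y, n=n_fft)  # masked spectrum back to time

mic = np.random.default_rng(1).standard_normal(64)
passthrough = apply_gain(mic, np.ones(33))   # unit gain: frame unchanged
muted = apply_gain(mic, np.zeros(33))        # zero gain: full suppression
```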
18.
Front Psychol ; 12: 656052, 2021.
Article in English | MEDLINE | ID: mdl-34149541

ABSTRACT

The ability to localize a sound source is very important in our daily life, specifically for analyzing auditory scenes in complex acoustic environments. The concept of minimum audible angle (MAA), defined as the smallest detectable difference between the incident directions of two sound sources, has been widely used in auditory perception research to measure localization ability. Measuring MAAs usually involves a reference sound source and either a large number of loudspeakers or a movable sound source in order to reproduce sound sources at many predefined incident directions. However, existing MAA test systems are often cumbersome because they require a large number of loudspeakers or a mechanical rail slide, and are thus expensive and inconvenient to use. This study investigates a novel MAA test method using virtual sound source synthesis that avoids the problems of traditional methods. We compare the perceptual localization acuity of sound sources in two experimental designs: using virtual presentation and using real sound sources. The virtual sound source is reproduced through a pair of loudspeakers weighted by vector-based amplitude panning (VBAP). Results show that the average measured MAA is 1.1° at 0° azimuth and 3.1° at 90° azimuth in the virtual acoustic system, while the average measured MAA is about 1.2° at 0° azimuth and 3.3° at 90° azimuth when using real sound sources. The measurements of the two methods show no significant difference. We conclude that the proposed MAA test system is a suitable alternative to more complicated and expensive setups.
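The VBAP weighting used above can be sketched for a single loudspeaker pair: solve the 2-D base-vector equation L g = p for the source unit vector p, then normalize the gains to unit power. Standard 2-D VBAP; the ±45° base angles are an assumption, not necessarily the experiment's setup.

```python
import numpy as np

def vbap_pair_gains(src_az, spk_az=(-45.0, 45.0)):
    """2-D VBAP gains for one loudspeaker pair: solve L g = p, where the
    columns of L are the loudspeaker unit vectors and p points at the
    source; then normalize to unit power."""
    def unit(az):
        a = np.deg2rad(az)
        return np.array([np.cos(a), np.sin(a)])
    L = np.column_stack([unit(a) for a in spk_az])  # 2x2 base matrix
    g = np.linalg.solve(L, unit(src_az))
    return g / np.linalg.norm(g)

g = vbap_pair_gains(0.0)  # phantom source midway between the loudspeakers
```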

19.
JASA Express Lett ; 1(1): 014802, 2021 Jan.
Article in English | MEDLINE | ID: mdl-36154095

ABSTRACT

Previous studies have shown the importance of introducing power compression on both feature and target when only the magnitude is considered in the dereverberation task. When both real and imaginary components are estimated without power compression, it has been shown that it is important to take magnitude constraint into account. In this paper, both power compression and phase estimation are considered to show their equal importance in the dereverberation task, where we propose to reconstruct the compressed real and imaginary components (cRI) for training. Both objective and subjective results reveal that better dereverberation can be achieved when using cRI.
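The compressed real and imaginary (cRI) target described above compresses the magnitude while keeping the phase; a minimal sketch follows. The compression exponent 0.3 is a common choice in the literature, assumed here rather than taken from the paper.

```python
import numpy as np

def compress_ri(S, power=0.3):
    """Compressed real/imaginary (cRI) components of a complex spectrum:
    raise the magnitude to `power`, keep the original phase."""
    mag, phase = np.abs(S), np.angle(S)
    mag_c = mag ** power
    return mag_c * np.cos(phase), mag_c * np.sin(phase)

S = np.array([3 + 4j, -1 + 0j])  # toy complex spectrum
cr, ci = compress_ri(S)
```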

20.
J Acoust Soc Am ; 147(4): 2625, 2020 04.
Article in English | MEDLINE | ID: mdl-32359271

ABSTRACT

State-of-the-art supervised binaural distance estimation methods often use binaural features that are related to both the distance and the azimuth, so distance estimation accuracy may degrade considerably with fluctuating azimuth. To incorporate the azimuth when estimating the distance, this paper proposes a supervised method to jointly estimate the azimuth and the distance of binaural signals based on deep neural networks (DNNs). In this method, subband binaural features, including statistical properties of several subband binaural cues and the standard deviation of the binaural spectral magnitude difference, are extracted together as cues to jointly estimate the azimuth and the distance within a multi-objective DNN framework. In particular, both the azimuth and the distance cues are used in the error back-propagation learning stage of the multi-objective DNN framework, which improves the generalization ability of azimuth and distance estimation. Experimental results demonstrate that the proposed method not only achieves high azimuth estimation accuracy but also effectively improves distance estimation accuracy compared with several state-of-the-art supervised binaural distance estimation methods.


Subjects
Sound Localization; Cues; Neural Networks, Computer