ABSTRACT
Artificial intelligence (AI) holds significant potential for enhancing the quality of gastrointestinal (GI) endoscopy, but the adoption of AI in clinical practice is hampered by the lack of rigorous standardisation and of development methodology ensuring generalisability. The aim of the Quality Assessment of pre-clinical AI studies in Diagnostic Endoscopy (QUAIDE) Explanation and Checklist was to develop recommendations for the standardised design and reporting of preclinical AI studies in GI endoscopy. The recommendations were developed through a formal consensus approach with an international multidisciplinary panel of 32 experts, comprising endoscopists and computer scientists. The Delphi methodology was employed to achieve consensus on statements, with a predetermined threshold of 80% agreement and a maximum of three rounds of voting. Consensus was reached on 18 key recommendations covering 4 key domains: data acquisition and annotation (6 statements), outcome reporting (3 statements), experimental setup and algorithm architecture (4 statements) and result presentation and interpretation (5 statements). QUAIDE provides recommendations on how to properly design studies (1. Methods, statements 1-14), present results (2. Results, statements 15-16) and integrate and interpret the obtained results (3. Discussion, statements 17-18). The QUAIDE framework offers practical guidance for authors, readers, editors and reviewers involved in AI preclinical studies in GI endoscopy, aiming to improve design and reporting and thereby promote research standardisation and accelerate the translation of AI innovations into clinical practice.
ABSTRACT
Deep learning, due to its unprecedented success in tasks such as image classification, has emerged as a new tool in image reconstruction with the potential to change the field. In this paper, we demonstrate a crucial phenomenon: deep learning typically yields unstable methods for image reconstruction. The instabilities usually occur in several forms: 1) certain tiny, almost undetectable perturbations, both in the image and in the sampling domain, may result in severe artefacts in the reconstruction; 2) a small structural change, for example a tumor, may not be captured in the reconstructed image; and 3) (a counterintuitive type of instability) more samples may yield poorer performance. We provide a stability test, with accompanying algorithms and easy-to-use software, that detects these instability phenomena. The test is aimed at researchers, who can use it to check their networks for instabilities, and at government agencies, such as the Food and Drug Administration (FDA), seeking to secure the safe use of deep learning methods.
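The worst-case perturbation search at the heart of such a stability test can be sketched as follows; the `reconstruct` map and the random-search strategy below are illustrative stand-ins, not the paper's actual networks or algorithms.

```python
import numpy as np

def reconstruct(y, thresh=0.1):
    # Toy stand-in for a learned reconstruction map: soft-thresholding
    # acts as a crude denoiser. A real test would wrap a trained network.
    return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

def worst_case_perturbation(f, x, eps=0.05, n_trials=200, seed=0):
    """Random search for a perturbation r with ||r|| <= eps that
    maximises the reconstruction error ||f(x + r) - f(x)||."""
    rng = np.random.default_rng(seed)
    base = f(x)
    best_r, best_err = np.zeros_like(x), 0.0
    for _ in range(n_trials):
        r = rng.standard_normal(x.shape)
        r *= eps / np.linalg.norm(r)          # project onto the eps-ball
        err = float(np.linalg.norm(f(x + r) - base))
        if err > best_err:
            best_err, best_r = err, r
    return best_r, best_err

x = np.array([0.5, -0.3, 0.0, 0.2])
r, err = worst_case_perturbation(reconstruct, x)
print(err)  # a large err for such a tiny ||r|| would signal instability
```

In practice the paper's test uses gradient-based optimisation against the trained network rather than random search, but the decision quantity is the same: the ratio of reconstruction error to perturbation size.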
ABSTRACT
This study examines how the Affordable Care Act (ACA) affected income-related inequality in health insurance coverage in the United States. Analyzing data from the American Community Survey (ACS) from 2010 through 2018, we apply difference-in-differences and triple-differences estimation to Recentered Influence Function (RIF) OLS regressions. We find that the ACA reduced inequality in health insurance coverage in the United States, and that most of this reduction was a result of the Medicaid expansion. Additional decomposition analysis shows little change in the inequality of coverage through an employer plan and a decrease in the inequality of coverage through direct purchase of health insurance. These results indicate that the insurance exchanges also contributed to declining inequality in health insurance coverage.
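The basic difference-in-differences logic can be illustrated with a plain OLS interaction regression on simulated data; this is a sketch only — the study itself runs RIF regressions on ACS microdata, and the 0.08 "effect" below is invented.

```python
import numpy as np

def did_estimate(y, treated, post):
    # OLS with an interaction term; the DiD effect is the coefficient
    # on treated * post.
    X = np.column_stack([np.ones_like(y), treated, post, treated * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]

rng = np.random.default_rng(1)
n = 4000
treated = rng.integers(0, 2, n).astype(float)   # e.g. expansion states
post = rng.integers(0, 2, n).astype(float)      # e.g. post-2014 years
true_effect = 0.08                              # hypothetical coverage gain
y = (0.7 + 0.05 * treated + 0.02 * post
     + true_effect * treated * post + 0.1 * rng.standard_normal(n))
print(round(did_estimate(y, treated, post), 3))  # close to 0.08
```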
Subjects
Insurance Coverage, Patient Protection and Affordable Care Act, Humans, Income, Health Insurance, Medicaid, United States
ABSTRACT
OBJECTIVE: Artificial intelligence (AI) may reduce the number of underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, which are missed due to their subtle appearance and low disease prevalence. To date, only disease-specific AI performances have been reported, generating uncertainty about AI's clinical value. DESIGN: We searched PubMed, Embase and Scopus until July 2020 for studies on the diagnostic performance of AI in the detection and characterisation of UGI lesions. Primary outcomes were the pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were the pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), designed summary receiver operating characteristic curves with their respective areas under the curve (AUCs) and performed metaregression and sensitivity analysis. RESULTS: Overall, 19 studies on the detection of oesophageal squamous cell neoplasia (ESCN), Barrett's oesophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included, with 218, 445 and 453 patients and 7976, 2340 and 13 562 images, respectively. AI sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, with AUCs of 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98) and 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with a high risk of selection bias. No significant publication bias was found. CONCLUSION: We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.
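Inverse-variance pooling on the logit scale gives the flavour of how such per-study sensitivities are combined; this is a simplified fixed-effect sketch with made-up counts — diagnostic meta-analyses typically use bivariate random-effects models.

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of study proportions
    (e.g. sensitivities) on the logit scale."""
    num = den = 0.0
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)                 # continuity correction
        logit = math.log(p / (1 - p))
        var = 1 / (e + 0.5) + 1 / (n - e + 0.5)   # logit-scale variance
        w = 1 / var
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1 / (1 + math.exp(-pooled_logit))      # back-transform

# Hypothetical per-study true-positive counts / diseased cases:
print(round(pooled_proportion([90, 45, 170], [100, 50, 200]), 3))
```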
ABSTRACT
This work considers the problem of segmenting heart sounds into their fundamental components. We unify statistical and data-driven solutions by introducing Markov-based Neural Networks (MNNs), a hybrid end-to-end framework that exploits Markov models as statistical inductive biases for an Artificial Neural Network (ANN) discriminator. We show that an MNN leveraging a simple one-dimensional Convolutional ANN significantly outperforms two recent purely data-driven solutions for this task on two publicly available datasets: PhysioNet 2016 (Sensitivity: 0.947 ±0.02; Positive Predictive Value: 0.937 ±0.025) and CirCor DigiScope 2022 (Sensitivity: 0.950 ±0.008; Positive Predictive Value: 0.943 ±0.012). We also propose a novel gradient-based unsupervised learning algorithm that effectively makes the MNN adaptive to unseen data sampled from unknown distributions. We perform a cross-dataset analysis and show that an MNN pre-trained on CirCor DigiScope 2022 can benefit from an average improvement of 3.90% in Positive Predictive Value on unseen observations from the PhysioNet 2016 dataset using this method.
Subjects
Heart Sounds, Humans, Neural Networks (Computer), Algorithms, Computer-Assisted Image Processing/methods
ABSTRACT
In this paper we study the heart sound segmentation problem using deep neural networks, and evaluate the impact of having electrocardiogram (ECG) signals available in addition to phonocardiogram (PCG) signals. To incorporate ECG, two different models are considered, both built upon a 1D U-net: an early fusion model that fuses ECG at an early processing stage, and a late fusion model that averages the probabilities obtained by two networks applied independently to the PCG and ECG data. Results show that, in contrast with traditional uses of ECG for PCG gating, early fusion of PCG and ECG information can provide more robust heart sound segmentation. As a proof of concept, we use the publicly available PhysioNet dataset. Validation results provide, on average, a sensitivity of 97.2%, 94.5%, and 95.6% and a Positive Predictive Value of 97.5%, 96.2%, and 96.1% for the early-fusion, late-fusion, and unimodal (PCG only) models, respectively, showing the advantage of combining both signals at early stages to segment heart sounds. Clinical relevance - Cardiac auscultation is the first line of screening for cardiovascular diseases. Its low cost and simplicity are especially suitable for screening large populations in underprivileged countries. The proposed analysis and algorithm show the potential of effectively including electrocardiogram information to improve heart sound segmentation performance, thus enhancing the capacity of extracting useful information from heart sound recordings.
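The late-fusion rule described above amounts to averaging the two networks' per-frame state posteriors before taking the argmax; a minimal sketch with stubbed probability arrays in place of real network outputs:

```python
import numpy as np

STATES = ["S1", "systole", "S2", "diastole"]

def late_fusion(p_pcg, p_ecg):
    """Average the per-frame posteriors of the PCG and ECG networks,
    then pick the most likely state per frame."""
    p = 0.5 * (p_pcg + p_ecg)
    return p.argmax(axis=1)

# Two frames of (stubbed) per-state probabilities from each modality:
p_pcg = np.array([[0.6, 0.2, 0.1, 0.1],
                  [0.1, 0.7, 0.1, 0.1]])
p_ecg = np.array([[0.5, 0.3, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1]])
print([STATES[i] for i in late_fusion(p_pcg, p_ecg)])  # ['S1', 'systole']
```

Early fusion, by contrast, concatenates PCG and ECG as input channels of a single network, so the fusion happens before any posterior exists.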
Subjects
Heart Sounds, Phonocardiography, Computer-Assisted Signal Processing, Electrocardiography, Heart
ABSTRACT
Gastric Intestinal Metaplasia (GIM) is one of the precancerous conditions in the gastric carcinogenesis cascade, and its optical diagnosis during endoscopic screening is challenging even for seasoned endoscopists. Several solutions leveraging pre-trained deep neural networks (DNNs) have recently been proposed to assist human diagnosis. In this paper, we present a comparative study of these architectures on a new dataset containing GIM and non-GIM narrow-band imaging still frames. We find that the surveyed DNNs perform remarkably well on average, but still exhibit sizeable inter-fold variability during cross-validation. An additional ad-hoc analysis suggests that these baseline architectures may not perform equally well at all scales when diagnosing GIM. Clinical relevance - Enhancing a clinician's ability to detect and localize intestinal metaplasia can be a crucial tool for gastric cancer management policies.
Subjects
Deep Learning, Precancerous Lesions, Humans, Gastroscopy/methods, Stomach/diagnostic imaging, Metaplasia, Precancerous Lesions/diagnosis
ABSTRACT
The use of contrast-enhanced computed tomography coronary angiography (CTCA) for the detection of coronary artery disease (CAD) exposes patients to the risks of iodine contrast agents and excessive radiation, increases scanning time and raises healthcare costs. Deep learning generative models have the potential to artificially create a pseudo-enhanced image from non-contrast computed tomography (CT) scans. In this work, two specific generative adversarial network (GAN) models - the Pix2Pix-GAN and the Cycle-GAN - were tested with paired non-contrast CT and CTCA scans from a private and a public dataset. Furthermore, an exploratory analysis of the trade-off between 2D and 3D inputs and architectures was performed. Judging only by the Structural Similarity Index Measure (SSIM) and the Peak Signal-to-Noise Ratio (PSNR), the Pix2Pix-GAN using 2D data reached the best results, with 0.492 SSIM and 16.375 dB PSNR. However, visual analysis of the output shows significant blur in the generated images, which is not the case for the Cycle-GAN models. This behaviour is captured by the Fréchet Inception Distance (FID), a fundamental performance metric that is usually not considered by related works in the literature. Clinical relevance - Contrast-enhanced computed tomography is the first-line imaging modality to detect CAD, resulting in unnecessary exposure to the risks of iodine contrast and radiation, particularly in young patients with no disease. This algorithm has the potential to be translated into clinical practice as a screening method for CAD in asymptomatic subjects or as a quick rule-out method for CAD in the acute setting or in centres with no CTCA service. This strategy could eventually reduce the need for CTCA, reducing its burden and associated costs.
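Of the three image-quality metrics quoted, PSNR is the simplest to state precisely; a minimal implementation for images scaled to [0, 1] (SSIM and FID require dedicated implementations):

```python
import numpy as np

def psnr(ref, gen, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a
    generated image, both with intensities in [0, data_range]."""
    mse = np.mean((ref - gen) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.full((8, 8), 0.5)   # synthetic reference "slice"
gen = ref + 0.05             # uniform error of 0.05
print(round(psnr(ref, gen), 2))  # 10*log10(1/0.0025) = 26.02 dB
```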
Subjects
Coronary Artery Disease, Iodine, Humans, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Algorithms, Coronary Artery Disease/diagnostic imaging, Health Care Costs
ABSTRACT
OBJECTIVE: Murmurs are abnormal heart sounds, identified by experts through cardiac auscultation. The murmur grade, a quantitative measure of the murmur intensity, is strongly correlated with the patient's clinical condition. This work aims to estimate each patient's murmur grade (i.e., absent, soft, loud) from multiple auscultation location phonocardiograms (PCGs) of a large population of pediatric patients from a low-resource rural area. METHODS: The Mel spectrogram representation of each PCG recording is given to an ensemble of 15 convolutional residual neural networks with channel-wise attention mechanisms to classify each PCG recording. The final murmur grade for each patient is derived based on the proposed decision rule and considering all estimated labels for available recordings. The proposed method is cross-validated on a dataset consisting of 3456 PCG recordings from 1007 patients using a stratified ten-fold cross-validation. Additionally, the method was tested on a hidden test set comprised of 1538 PCG recordings from 442 patients. RESULTS: The overall cross-validation performances for patient-level murmur gradings are 86.3% and 81.6% in terms of the unweighted average of sensitivities and F1-scores, respectively. The sensitivities (and F1-scores) for absent, soft, and loud murmurs are 90.7% (93.6%), 75.8% (66.8%), and 92.3% (84.2%), respectively. On the test set, the algorithm achieves an unweighted average of sensitivities of 80.4% and an F1-score of 75.8%. CONCLUSIONS: This study provides a potential approach for algorithmic pre-screening in low-resource settings with relatively high expert screening costs. SIGNIFICANCE: The proposed method represents a significant step beyond detection of murmurs, providing characterization of intensity, which may provide an enhanced classification of clinical outcomes.
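As an illustration of the patient-level decision step, one plausible rule (the paper's exact decision rule is more involved) is to take the most severe grade predicted across a patient's auscultation-location recordings:

```python
# Grades ordered by severity; index in this list encodes the ordering.
GRADES = ["absent", "soft", "loud"]

def patient_grade(recording_grades):
    """Most severe grade across a patient's per-recording predictions
    (a simple worst-case aggregation rule, for illustration only)."""
    return max(recording_grades, key=GRADES.index)

print(patient_grade(["absent", "soft", "absent", "soft"]))  # soft
print(patient_grade(["soft", "loud", "absent"]))            # loud
```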
Subjects
Heart Murmurs, Heart Sounds, Humans, Child, Phonocardiography/methods, Heart Murmurs/diagnosis, Heart Auscultation/methods, Algorithms, Auscultation
ABSTRACT
Cardiac auscultation is an accessible diagnostic screening tool that can help to identify patients with heart murmurs, who may need follow-up diagnostic screening and treatment for abnormal cardiac function. However, experts are needed to interpret the heart sounds, limiting the accessibility of cardiac auscultation in resource-constrained environments. Therefore, the George B. Moody PhysioNet Challenge 2022 invited teams to develop algorithmic approaches for detecting heart murmurs and abnormal cardiac function from phonocardiogram (PCG) recordings of heart sounds. For the Challenge, we sourced 5272 PCG recordings from 1452 primarily pediatric patients in rural Brazil, and we invited teams to implement diagnostic screening algorithms for detecting heart murmurs and abnormal cardiac function from the recordings. We required the participants to submit the complete training and inference code for their algorithms, improving the transparency, reproducibility, and utility of their work. We also devised an evaluation metric that considered the costs of screening, diagnosis, misdiagnosis, and treatment, allowing us to investigate the benefits of algorithmic diagnostic screening and facilitate the development of more clinically relevant algorithms. We received 779 algorithms from 87 teams during the Challenge, resulting in 53 working codebases for detecting heart murmurs and abnormal cardiac function from PCG recordings. These algorithms represent a diversity of approaches from both academia and industry, including methods that use more traditional machine learning techniques with engineered clinical and statistical features as well as methods that rely primarily on deep learning models to discover informative features. The use of heart sound recordings for identifying heart murmurs and abnormal cardiac function allowed us to explore the potential of algorithmic approaches for providing more accessible diagnostic screening in resource-constrained environments. 
The submission of working, open-source algorithms and the use of novel evaluation metrics supported the reproducibility, generalizability, and clinical relevance of the research from the Challenge.
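The cost-aware flavour of such an evaluation metric can be sketched by pricing each confusion-matrix outcome; the unit costs below are invented for illustration and differ from the official Challenge metric:

```python
# Hypothetical unit costs per screening outcome (illustrative only):
COSTS = {
    "tp": 50.0,    # true positive  -> diagnosis + treatment
    "fp": 10.0,    # false positive -> needless follow-up testing
    "fn": 500.0,   # false negative -> missed disease, late treatment
    "tn": 1.0,     # true negative  -> screening cost only
}

def mean_cost(tp, fp, fn, tn):
    """Average cost per screened patient under the cost table above."""
    n = tp + fp + fn + tn
    return (COSTS["tp"] * tp + COSTS["fp"] * fp
            + COSTS["fn"] * fn + COSTS["tn"] * tn) / n

print(mean_cost(tp=80, fp=30, fn=20, tn=870))  # cost per screened patient
```

Under such a metric, an algorithm is rewarded not for raw accuracy but for trading false positives against far more expensive false negatives.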
ABSTRACT
This study aimed to build convolutional neural network (CNN) models capable of classifying upper endoscopy images to determine the stage of infection in the development of gastric cancer. Two different problems were covered: a first one with a smaller number of categorical classes and a lower degree of detail, and a second one consisting of a larger number of classes, corresponding to each stage of precancerous conditions in Correa's cascade. Three public datasets were used to build the dataset that served as input for the classification tasks. The CNN models built for this study are capable of identifying the stage of precancerous conditions/lesions at the moment of an upper endoscopy. A model based on the DenseNet169 architecture achieved an average accuracy of 0.72 in discriminating among the different stages of infection. The trade-off between detail in the definition of lesion classes and classification performance is explored. Results from the application of Grad-CAMs to the trained models show that the proposed CNN architectures base their classification output on the extraction of physiologically relevant image features. Clinical relevance - This research could improve the accuracy of upper endoscopy exams, which have room for improvement, by assisting doctors when analysing the lesions seen in patients' images.
Subjects
Deep Learning, Precancerous Lesions, Stomach Neoplasms, Humans, Neural Networks (Computer), Stomach Neoplasms/diagnostic imaging
ABSTRACT
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). By 2035, a 20% increase is expected in both incidence and mortality due to demographic effects if no interventions are made. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, in improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis while performing UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential in compensating for the pitfalls of UGIE, by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold task of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, namely gastric precancerous conditions and neoplastic lesions. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools into UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures and of methods able to embed domain knowledge into image/video classifiers, as well as the availability of large, annotated datasets.
ABSTRACT
This work focuses on the detection of upper gastrointestinal (GI) landmarks, which are important anatomical areas of the upper GI tract that should be photodocumented during endoscopy to guarantee a complete examination. The aim of this work was to test new automatic algorithms, specifically those based on convolutional neural network (CNN) systems, able to detect upper GI landmarks and thereby help avoid blind spots during esophagogastroduodenoscopy. We tested pre-trained CNN architectures, such as ResNet-50 and VGG-16, in conjunction with different training approaches, including the use of class weights, batch normalization, dropout, and data augmentation. The ResNet-50 model trained with class weights was the best performing CNN, achieving an accuracy of 71.79% and a Matthews Correlation Coefficient (MCC) of 65.06%. The combination of supervised and unsupervised learning was also explored to increase classification performance. In particular, convolutional autoencoder architectures trained on unlabeled GI images were used to extract representative features. Such features were then concatenated with those extracted by the pre-trained ResNet-50 architecture. This approach achieved a classification accuracy of 72.45% and an MCC of 65.08%. Clinical relevance - Esophagogastroduodenoscopy (EGD) photodocumentation is essential to guarantee that all areas of the upper GI system are examined, avoiding blind spots. This work aims to support the monitoring of EGD photodocumentation by testing new CNN-based systems able to detect EGD landmarks.
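Among the training strategies listed, class weighting is easy to make concrete; inverse-frequency weights are one common recipe (the paper's exact weighting scheme is not specified here):

```python
import numpy as np

def inverse_freq_weights(labels, n_classes):
    """Inverse-frequency class weights: rare classes get larger weights,
    so the loss is not dominated by frequent landmark classes."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

# Toy label vector with an under-represented class 1:
labels = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2])
print(inverse_freq_weights(labels, 3))  # rare class 1 gets the largest weight
```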
Subjects
Algorithms, Neural Networks (Computer), Digestive System Endoscopy
ABSTRACT
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal, since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground truth variable (normal vs abnormal) is provided. This is mainly due to the lack of large publicly available datasets where a more detailed description of such abnormal waves (e.g., cardiac murmurs) exists. To pave the way to more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds have been manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is heard most intensely. Such a detailed description for a relatively large number of heart sounds may pave the way for new machine learning algorithms with real-world application in the detection and analysis of murmur waves for diagnostic purposes.
Subjects
Heart Murmurs, Heart Sounds, Algorithms, Auscultation, Child, Heart Auscultation/methods, Heart Murmurs/diagnosis, Humans
ABSTRACT
Cardiac auscultation is the key screening procedure to detect and identify cardiovascular diseases (CVDs). One of the many steps towards automatically detecting CVDs using auscultation concerns the detection and delimitation of the heart sound boundaries, a process known as segmentation. Whether or not to include a segmentation step in the signal classification pipeline is nowadays a topic of discussion. To the best of our knowledge, the outcome of a segmentation algorithm has been used almost exclusively to align the different signal segments according to the heartbeat. In this paper, the need for a heartbeat alignment step is tested and evaluated over different machine learning algorithms, including deep learning solutions. Of the different classifiers tested, the Gated Recurrent Unit (GRU) network and Convolutional Neural Network (CNN) algorithms are shown to be the most robust; namely, these algorithms can detect the presence of heart murmurs even without a heartbeat alignment step. In contrast, Support Vector Machine (SVM) and Random Forest (RF) algorithms require an explicit segmentation step to effectively detect heart sounds and murmurs; otherwise, the overall performance is expected to drop by approximately 5% in both cases.
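The heartbeat alignment step under discussion can be sketched as cutting the signal into windows anchored at detected S1 onsets; the onsets and window length below are illustrative, with a ramp signal standing in for a PCG:

```python
import numpy as np

def align_beats(signal, s1_onsets, win_len):
    """Cut the signal into beat-synchronous windows, each starting at a
    detected S1 onset (onsets given in samples). Incomplete trailing
    windows are dropped."""
    return [signal[s:s + win_len] for s in s1_onsets
            if s + win_len <= len(signal)]

sig = np.arange(20, dtype=float)                 # stand-in for a PCG
beats = align_beats(sig, s1_onsets=[2, 9, 16], win_len=5)
print(len(beats), beats[0].tolist())  # 2 [2.0, 3.0, 4.0, 5.0, 6.0]
```

With alignment, every training example presents the classifier with the same phase of the cardiac cycle; the paper's finding is that GRU/CNN models can cope without it, while SVM/RF cannot.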
Subjects
Heart Sounds, Algorithms, Heart Auscultation, Neural Networks (Computer), Support Vector Machine
ABSTRACT
In this paper, we consider the problem of classifying skin lesions into multiple classes using both dermoscopic and clinical images. Different convolutional neural network architectures are considered for this task and a novel ensemble scheme is proposed, which makes use of a progressive transfer learning strategy. The proposed approach is tested over a dataset of 4000 images containing both dermoscopic and clinical examples and it is shown to achieve an average specificity of 93.3% and an average sensitivity of 79.9% in discriminating skin lesions belonging to four different classes.
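A minimal ensemble baseline averages the member networks' class probabilities before the argmax; the paper's progressive transfer learning ensemble is more elaborate, and the arrays below are stubs standing in for real model outputs:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the per-model class-probability matrices (samples x
    classes), then pick the most likely class per sample."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Two stubbed models, two samples, four lesion classes:
p_a = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.2, 0.5, 0.2, 0.1]])
p_b = np.array([[0.6, 0.2, 0.1, 0.1],
                [0.1, 0.6, 0.2, 0.1]])
print(ensemble_predict([p_a, p_b]).tolist())  # [0, 1]
```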
Subjects
Skin Diseases, Skin Neoplasms, Dermoscopy, Humans, Neural Networks (Computer), Sensitivity and Specificity
ABSTRACT
This paper presents an adaptable dictionary-based feature extraction approach for spike sorting, offering high accuracy and low computational complexity for implantable applications. It extracts and learns identifiable features from evolving subspaces through matched unsupervised subspace filtering. To provide compatibility with the strict constraints of implantable devices, such as the chip area and power budget, the dictionary contains arrays of {-1, 0, 1} and the algorithm need only perform addition and subtraction operations. Three types of such dictionary were considered. To quantify and compare the performance of the resulting three feature extractors with existing systems, a neural signal simulator based on several different libraries was developed. For noise levels σN between 0.05 and 0.3 and groups of 3 to 6 clusters, all three feature extractors provide robust, high performance, with average classification errors of less than 8% over five iterations, each consisting of 100 generated data segments. To our knowledge, the proposed adaptive feature extractors are the first able to reliably classify 6 clusters for implantable applications. An ASIC implementation of the best performing dictionary-based feature extractor was synthesized in a 65-nm CMOS process. It occupies an area of 0.09 mm² and dissipates up to about 10.48 µW from a 1 V supply voltage when operating with 8-bit resolution at a 30 kHz operating frequency.
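The multiplier-free property is easy to see in code: projecting a spike waveform onto a ternary {-1, 0, 1} dictionary needs only additions and subtractions, matching the hardware constraint described above. The dictionary atoms below are illustrative, not the paper's learned atoms:

```python
import numpy as np

# Illustrative ternary dictionary: each row is one atom.
D = np.array([[ 1,  1, -1, -1],    # atom 1: low-frequency contrast
              [ 1, -1,  1, -1]])   # atom 2: high-frequency contrast

def ternary_features(spike):
    """Equivalent to D @ spike, but realisable with adders/subtractors
    only, since every dictionary entry is -1, 0, or 1."""
    feats = []
    for atom in D:
        acc = 0.0
        for a, s in zip(atom, spike):
            acc += s if a == 1 else (-s if a == -1 else 0.0)
        feats.append(acc)
    return feats

print(ternary_features([2.0, 1.0, -1.0, 0.0]))  # [4.0, 0.0]
```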
Subjects
Computer-Assisted Signal Processing, Unsupervised Machine Learning, Action Potentials/physiology, Algorithms, Biomedical Engineering/instrumentation, Implanted Electrodes, Neurological Models
ABSTRACT
This paper studies the use of deep convolutional neural networks to segment heart sounds into their main components. The proposed methods are based on the adoption of a deep convolutional neural network architecture, which is inspired by similar approaches used for image segmentation. Different temporal modeling schemes are applied to the output of the proposed neural network, which induce the output state sequence to be consistent with the natural sequence of states within a heart sound signal (S1, systole, S2, diastole). In particular, convolutional neural networks are used in conjunction with underlying hidden Markov models and hidden semi-Markov models to infer emission distributions. The proposed approaches are tested on heart sound signals from the publicly available PhysioNet dataset, and they are shown to outperform current state-of-the-art segmentation methods by achieving an average sensitivity of 93.9% and an average positive predictive value of 94% in detecting S1 and S2 sounds.
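The temporal constraint described above - forcing the decoded state sequence through the cyclic order S1 → systole → S2 → diastole - can be illustrated with a small Viterbi decoder over hypothetical per-frame emission probabilities; the paper itself applies HMM/HSMM machinery to the network's outputs:

```python
import numpy as np

STATES = ["S1", "systole", "S2", "diastole"]

# Cyclic left-to-right transitions: stay, or advance to the next state.
A = np.full((4, 4), 1e-12)          # (near-)zero for forbidden jumps
for i in range(4):
    A[i, i] = 0.5
    A[i, (i + 1) % 4] = 0.5

def viterbi(emissions, A, start=0):
    """Most likely state path given per-frame emission probabilities
    (frames x states) and transition matrix A, starting in `start`."""
    T, N = emissions.shape
    logA = np.log(A)
    delta = np.full(N, -np.inf)
    delta[start] = np.log(emissions[0, start])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(emissions[t])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical network posteriors for four frames:
em = np.array([[0.90, 0.05, 0.03, 0.02],
               [0.20, 0.60, 0.10, 0.10],
               [0.10, 0.10, 0.70, 0.10],
               [0.05, 0.05, 0.10, 0.80]])
print([STATES[s] for s in viterbi(em, A)])  # ['S1', 'systole', 'S2', 'diastole']
```

The HSMM variants in the paper additionally model state durations explicitly rather than through the geometric sojourns implied by the self-transitions above.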
Subjects
Heart Sounds/physiology, Neural Networks (Computer), Computer-Assisted Signal Processing, Algorithms, Factual Databases, Humans, Markov Chains, Phonocardiography/methods
ABSTRACT
Heart sounds are difficult to interpret due to events with very short temporal onsets between them (tens of milliseconds) and dominant frequencies that are outside the human audible spectrum. Computer-assisted decision systems may help, but they require robust signal processing algorithms. In this paper, we propose a new algorithm for heart sound segmentation using a hidden semi-Markov model (HSMM). The proposed algorithm infers more suitable sojourn time parameters than those currently suggested by the state of the art, through a maximum likelihood approach. We test our approach over three different datasets, including the publicly available PhysioNet and Pascal datasets. We also release a pediatric dataset composed of 29 heart sounds. In contrast with any other dataset available online, the annotations of the heart sounds in the released dataset contain information about the beginning and the ending of each heart sound event; annotations were made by two cardiopulmonologists. The proposed algorithm is compared with the current state of the art. The results show a significant increase in segmentation performance, regardless of the dataset or the methodology presented. For example, when using the PhysioNet dataset to train and to evaluate the HSMMs, our algorithm achieved an average F-score of [Formula: see text] compared to [Formula: see text] achieved by the algorithm described in [D.B. Springer, L. Tarassenko, and G.D. Clifford, "Logistic regression-HSMM-based heart sound segmentation," IEEE Transactions on Biomedical Engineering, vol. 63, no. 4, pp. 822-832, 2016]. In this sense, the proposed approach to adapting sojourn time parameters represents an effective solution for heart sound segmentation problems, even when the training data does not perfectly express the variability of the testing data.
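For Gaussian duration models, fitting sojourn-time parameters by maximum likelihood reduces to computing per-state duration means and population standard deviations from annotated event boundaries; the durations below are invented for illustration:

```python
import statistics

def sojourn_params(durations):
    """ML estimates of a Gaussian sojourn-time model for one state:
    sample mean and population (1/N) standard deviation."""
    mu = statistics.fmean(durations)
    sigma = statistics.pstdev(durations)   # ML uses 1/N, not 1/(N-1)
    return mu, sigma

# Hypothetical annotated S1 durations, in seconds:
s1_durations = [0.120, 0.115, 0.125, 0.118, 0.122]
mu, sigma = sojourn_params(s1_durations)
print(round(mu, 3), round(sigma, 4))
```

These per-state duration distributions are what parameterise the HSMM's sojourn model, replacing the fixed literature values with estimates adapted to the data at hand.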
Subjects
Heart Sounds/physiology, Phonocardiography/methods, Computer-Assisted Signal Processing, Adolescent, Algorithms, Child, Preschool Child, Heart Diseases/physiopathology, Humans, Infant, Likelihood Functions, Markov Chains, Middle Aged
ABSTRACT
This paper studies the use of non-invasive acoustic emission recordings for clinical device tracking. In particular, audio signals recorded at the proximal end of a needle are used to detect perforation events that occur when the needle tip crosses internal tissue layers. A comparative study is performed to assess the capacity of different features and envelopes to detect perforation events. The results obtained from the considered experimental setup show a statistically significant correlation between the extracted envelopes and the perforation events, thus paving the way for the future development of perforation detection algorithms.
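One of the simplest envelope families for such a study is a short-time RMS envelope; the sketch below uses a synthetic burst in place of real needle audio, and the window length is an illustrative choice:

```python
import numpy as np

def rms_envelope(x, win=4):
    """Short-time RMS envelope: sliding mean of the squared signal,
    followed by a square root."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x ** 2, kernel, mode="same"))

sig = np.zeros(16)
sig[8:12] = 1.0          # a short burst, standing in for a perforation
env = rms_envelope(sig)
print(env.argmax())      # the envelope peaks inside the burst
```

A perforation detector would then threshold this envelope (or a feature derived from it) to flag candidate events.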