Results 1 - 20 of 25
1.
Biomed Phys Eng Express ; 10(4)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38457844

ABSTRACT

Objective. Although emotion recognition has been studied for decades, more accurate classification methods that require less computation are still needed. At present, many studies extract EEG features from all channels to recognize emotional states; however, an efficient feature domain that improves classification performance while reducing the number of EEG channels has been lacking. Approach. In this study, a continuous wavelet transform (CWT)-based feature representation of multi-channel EEG data is proposed for automatic emotion recognition. In the proposed feature, time-frequency domain information is preserved through the CWT coefficients. For a particular EEG channel, each CWT coefficient is mapped into a strength-to-entropy component ratio to obtain a 2D representation. Finally, a 2D feature matrix, namely CEF2D, is created by concatenating these representations from different channels and is fed into a deep convolutional neural network architecture. Based on the CWT-domain energy-to-entropy ratio, effective channel and CWT scale selection schemes are also proposed to reduce computational complexity. Main results. Compared with previous studies, the results show that valence and arousal classification accuracy has improved in both the 3-class and 2-class cases. For the 2-class problem, the average accuracies obtained for the valence and arousal dimensions are 98.83% and 98.95%, respectively; for the 3-class problem, the accuracies are 98.25% and 98.68%, respectively. Significance. Our findings show that the entropy-based feature of EEG data in the CWT domain is effective for emotion recognition. Utilizing the proposed feature domain, an effective channel selection method can reduce computational complexity.
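An energy-to-entropy ratio of the kind the abstract describes for channel and scale selection can be sketched as follows. This is a hedged reconstruction, not the paper's exact mapping: it assumes energy is the sum of squared CWT coefficient magnitudes and entropy is the Shannon entropy of the normalized coefficient power.

```python
import numpy as np

def energy_to_entropy_ratio(coeffs):
    """Energy-to-entropy ratio of one channel's (or scale's) CWT coefficients.

    energy  = sum of squared coefficient magnitudes
    entropy = Shannon entropy (bits) of the normalized squared coefficients
    """
    power = np.abs(np.asarray(coeffs, dtype=float)) ** 2
    energy = power.sum()
    p = power / energy                      # normalized power distribution
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return energy / entropy
```

Channels or scales whose ratio falls below a chosen threshold could then be dropped before assembling the CEF2D matrix, which is how a selection scheme of this type reduces computation.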


Subjects
Algorithms; Electroencephalography; Emotions; Neural Networks, Computer; Wavelet Analysis; Humans; Electroencephalography/methods; Signal Processing, Computer-Assisted; Entropy; Arousal/physiology
2.
Comput Biol Med ; 165: 107378, 2023 10.
Article in English | MEDLINE | ID: mdl-37678139

ABSTRACT

Precise cell nucleus segmentation is critical in many biological analyses and disease diagnoses. However, the variability in nuclei structure, color, and modality of histopathology images makes automatic computer-aided nuclei segmentation very difficult. Traditional encoder-decoder based deep learning schemes mainly utilize spatial-domain information, which may limit their ability to recognize small nuclei regions after successive downsampling operations. In this paper, a boundary-aware wavelet-guided network (BAWGNet) is proposed by incorporating a boundary-aware unit along with a wavelet-domain-guided attention mechanism at each stage of the encoder-decoder output. Here, the high-frequency two-dimensional discrete wavelet transform (2D-DWT) coefficients are utilized in the attention mechanism to guide the spatial information obtained from the encoder-decoder output stages and thus leverage the nuclei segmentation task. On the other hand, the boundary-aware unit (BAU) captures the nuclei's boundary information, ensuring accurate prediction of nuclei pixels in the edge region. Furthermore, the preprocessing steps used in our methodology ensure the data's uniformity by converting it to similar color statistics. Extensive experiments conducted on three benchmark histopathology datasets (DSB, MoNuSeg, and TNBC) exhibit the outstanding segmentation performance of the proposed method (with dice scores of 90.82%, 85.74%, and 78.57%, respectively). An implementation of the proposed architecture is available at https://github.com/tamjidimtiaz/BAWGNet.
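The high-frequency subbands that a wavelet-guided attention gate of this kind would consume come from a one-level 2D-DWT. A minimal numpy sketch, assuming the Haar wavelet with averaging normalization (the paper's wavelet choice and normalization may differ):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH)).

    LL is the low-frequency approximation; LH/HL/HH are the
    high-frequency subbands that carry edge information.
    Expects even height and width."""
    img = np.asarray(img, dtype=float)
    a = (img[0::2] + img[1::2]) / 2.0   # row-wise low-pass
    d = (img[0::2] - img[1::2]) / 2.0   # row-wise high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)
```

On a smooth region the three high-frequency subbands are near zero, so an attention map built from them naturally emphasizes nuclei boundaries.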


Subjects
Benchmarking; Cell Nucleus; Upper Extremity; Wavelet Analysis; Image Processing, Computer-Assisted
3.
Article in English | MEDLINE | ID: mdl-37651481

ABSTRACT

Automated classification of cardiovascular diseases from electrocardiogram (ECG) signals using deep learning has gained significant interest due to its wide range of applications. However, existing deep learning approaches often overlook inter-channel shared information when using 1D ECG representations or lose time-sequence-dependent information when using 2D representations. Moreover, besides the spatial dimension, it is necessary to understand the context of the signals in a global feature space. We propose MD-CardioNet, an efficient deep learning architecture that captures temporal, spatial, and volumetric features from multi-lead ECG signals using multidimensional (1D, 2D, and 3D) convolutions to address these challenges. Sequential feature extractors capture time-dependent information, while a 2D convolution is applied to an image representation formed from the multi-channel ECG signal to extract inter-channel features. Additionally, a volumetric feature extraction network is designed to incorporate intra-channel, inter-channel, and inter-filter global-space information. To reduce computational complexity, we introduce a practical knowledge distillation framework that reduces the number of trainable parameters by up to eight times (from 4,304,910 to 94,842 parameters) while maintaining performance comparable to other existing approaches. The proposed architecture is evaluated on a large publicly available dataset containing ECG signals from over 10,000 patients, achieving an accuracy of 97.3% in classifying six heartbeat rhythms. Our results surpass the performance of several state-of-the-art approaches. This paper presents a novel deep learning approach for ECG classification that addresses the limitations of existing methods. The experimental results highlight the robustness and accuracy of MD-CardioNet in cardiovascular disease classification, offering valuable insights for future research in this field.
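The objective behind teacher-to-student compression of this kind is typically a temperature-softened KL divergence (the standard Hinton-style formulation; the paper's exact distillation framework is not reproduced here). A small numpy sketch:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on T-softened distributions, scaled by T^2
    so gradients keep a comparable magnitude as T varies."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))
```

In training, this term is usually mixed with the ordinary cross-entropy on the hard labels; the mixing weight and temperature are hyperparameters.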

4.
Comput Biol Med ; 160: 106945, 2023 06.
Article in English | MEDLINE | ID: mdl-37163966

ABSTRACT

BACKGROUND: Colorectal polyps are a common structural gastrointestinal (GI) anomaly that can, in certain cases, turn malignant. Colonoscopic image inspection is thereby an important step for isolating polyps and removing them if necessary. However, the process is around 30-60 min long, and inspecting each image for polyps can prove to be a tedious task. Hence, an automatic computerized process for efficient and accurate polyp isolation can be a useful tool. METHODS: In this study, a deep learning network is introduced for colorectal polyp segmentation. The network is based on an encoder-decoder architecture but uses both un-dilated and dilated filtering in order to extract both near and far local information and perceive image depth. Four-fold skip-connections exist between each spatial encoder-decoder pair due to the two types of filtering, and a 'Feature-to-Mask' pipeline processes the decoded dilated and un-dilated features for the final prediction. The proposed network implements a 'Stretch-Relax' based attention system, SR-Attention, to generate high-variance spatial features and thereby obtain useful attention masks for cognitive feature selection. From this 'Stretch-Relax' attention operation, the network is termed 'SR-AttNet'. RESULTS: Training and optimization are performed on four different datasets, and inference is done on five (Kvasir-SEG, CVC-ClinicDB, CVC-Colon, ETIS-Larib, EndoCV2020), all of which yield a higher Dice score compared to state-of-the-art and existing networks. The efficacy and interpretability of SR-Attention are also demonstrated based on quantitative variance. CONCLUSION: Consequently, the proposed SR-AttNet can be considered as an automated and general approach for polyp segmentation during colonoscopy.


Subjects
Colonic Polyps; Humans; Colonic Polyps/diagnostic imaging; Colonoscopy; Colon; Neural Networks, Computer; Image Processing, Computer-Assisted
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1024-1027, 2022 07.
Article in English | MEDLINE | ID: mdl-36086584

ABSTRACT

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia, and the electrocardiogram (ECG) is a powerful non-invasive tool for its clinical diagnosis. Automatic AF detection remains a very challenging task due to the high inter-patient variability of ECGs. In this paper, an automatic AF detection scheme is proposed based on a deep learning network that utilizes both the raw ECG signal and its discrete wavelet transform (DWT) version. In order to exploit the time-frequency characteristics of the ECG signal, a first-level DWT is applied, and both the high- and low-frequency components are then utilized in the 1D CNN network in parallel. If only the transformed data were used, the original variations in the data, which also carry useful information for identifying abnormalities, might not be explored. A multi-phase training scheme is proposed that facilitates parallel optimization for efficient gradient propagation. In the proposed network, features are directly extracted from the raw ECG and the DWT coefficients, followed by two fully connected layers that further process the features and detect arrhythmia in the recordings. The classification performance of the proposed method is tested on the PhysioNet-2017 dataset, and it offers superior performance in detecting AF among normal, alternating, and noisy cases in comparison to some state-of-the-art methods.
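A first-level DWT front end like the one feeding the two parallel CNN branches can be written in a few lines. Haar analysis filters are assumed here, since the abstract does not name the wavelet family:

```python
import numpy as np

def haar_dwt_level1(x):
    """Single-level Haar DWT of a 1-D signal: returns (cA, cD).

    cA is the low-frequency (approximation) band, cD the high-frequency
    (detail) band; each is half the input length (even length assumed)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    cA = (even + odd) / np.sqrt(2.0)    # low-pass branch
    cD = (even - odd) / np.sqrt(2.0)    # high-pass branch
    return cA, cD
```

The raw signal, cA, and cD would each enter its own 1D-CNN branch; the orthonormal scaling used here makes the transform exactly invertible.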


Subjects
Atrial Fibrillation; Deep Learning; Atrial Fibrillation/diagnosis; Diagnosis, Computer-Assisted/methods; Electrocardiography/methods; Humans; Wavelet Analysis
6.
IEEE J Transl Eng Health Med ; 10: 3300108, 2022.
Article in English | MEDLINE | ID: mdl-36032311

ABSTRACT

Background: The emergence of wireless capsule endoscopy (WCE) has presented a viable non-invasive means of identifying gastrointestinal diseases in clinical gastroenterology. However, to overcome its lengthy manual inspection time, computer-aided automatic detection systems are gaining wide popularity. Here, the major challenges are the low resolution and the lack of regional context in images extracted from WCE videos. Methods: To tackle these challenges, this paper proposes a convolutional neural network (CNN) based architecture, namely RAt-CapsNet, that reliably employs regional information and an attention mechanism to classify abnormalities from WCE video data. The proposed RAt-CapsNet consists of two major pipelines: a Compression Pipeline and a Regional Correlative Pipeline. In the compression pipeline, an encoder module is designed using a Volumetric Attention Mechanism that provides 3D enhancement of feature maps using spatial-domain condensation as well as channel-wise filtering to preserve relevant structural information in the images. On the other hand, the regional correlative pipeline consists of a Pyramid Feature Extractor, which operates on image-driven feature vectors to generalize and propagate the local pixel relationships of WCE abnormalities with respect to the normal healthy surroundings. The feature vectors generated by the two pipelines are then accumulated to form a classification standpoint. Results: Promising mean accuracies of 98.51% in binary-class and over 95.65% in multi-class classification are obtained through extensive experimentation on a highly unbalanced public dataset with over 47 thousand labelled images. Conclusion: This outcome supports the efficacy of the proposed methodology as a noteworthy WCE abnormality detection and diagnostic system.


Subjects
Capsule Endoscopy; Data Compression; Deep Learning; Animals; Gastrointestinal Tract; Neural Networks, Computer; Rats
7.
Comput Biol Med ; 149: 105806, 2022 10.
Article in English | MEDLINE | ID: mdl-35994932

ABSTRACT

In the Coronavirus disease 2019 (COVID-19) pandemic, automated diagnostic tools are urgently needed alongside traditional methods for fast and accurate diagnosis of a large number of patients. In this paper, a deep convolutional neural network (CNN) based scheme is proposed for automated, accurate diagnosis of COVID-19 from lung computed tomography (CT) scan images. First, for the automated segmentation of lung regions in a chest CT scan, a modified CNN architecture, namely SKICU-Net, is proposed by incorporating additional skip interconnections into the U-Net model that overcome the loss of information in dimension scaling. Next, agglomerative hierarchical clustering is deployed to eliminate the CT slices without significant information. Finally, for effective feature extraction and diagnosis of COVID-19 and pneumonia from the segmented lung slices, a modified DenseNet architecture, namely P-DenseCOVNet, is designed, where parallel convolutional paths are introduced on top of the conventional DenseNet model to achieve better performance by overcoming the loss of positional information. Outstanding performance is achieved, with an F1 score of 0.97 in the segmentation task along with an accuracy of 87.5% in diagnosing COVID-19, common pneumonia, and normal cases. Significant experimental results and comparison with other studies show that the proposed scheme provides very satisfactory performance and can serve as an effective diagnostic tool in the current pandemic.
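Slice pruning by agglomerative hierarchical clustering can be sketched with scikit-learn. Everything below the clustering call is an assumption: the per-slice feature vectors and the rule "keep the cluster with the larger mean feature norm" are illustrative stand-ins for whatever criterion the paper actually uses to mark a slice as significant.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def filter_slices(slice_features, n_clusters=2):
    """Cluster per-slice feature vectors (e.g. lung-area statistics) into
    n_clusters groups and return the indices of the cluster whose members
    have the larger mean feature norm (hypothetical 'significant' proxy)."""
    X = np.asarray(slice_features, dtype=float)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)
    norms = np.linalg.norm(X, axis=1)
    keep = max(range(n_clusters), key=lambda c: norms[labels == c].mean())
    return np.where(labels == keep)[0]
```

The retained slice indices would then be passed on to the classification network, so nearly empty apical/basal slices never reach it.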


Subjects
COVID-19; COVID-19/diagnostic imaging; COVID-19 Testing; Humans; Lung/diagnostic imaging; Neural Networks, Computer; Pandemics; Tomography, X-Ray Computed/methods
8.
Biocybern Biomed Eng ; 41(4): 1685-1701, 2021.
Article in English | MEDLINE | ID: mdl-34690398

ABSTRACT

With the onset of the COVID-19 pandemic, automated diagnosis has become one of the most trending research topics for faster mass screening. Deep learning-based approaches have been established as the most promising methods in this regard. However, the limited availability of labeled data is the main bottleneck for data-hungry deep learning methods. In this paper, a two-stage deep CNN based scheme is proposed to detect COVID-19 from chest X-ray images, achieving optimum performance with limited training images. In the first stage, an encoder-decoder based autoencoder network is proposed and trained on chest X-ray images in an unsupervised manner, so that the network learns to reconstruct the X-ray images. For the second stage, an encoder-merging network is proposed, consisting of different layers of the encoder model followed by a merging network. Here, the encoder model is initialized with the weights learned in the first stage, and the outputs from different layers of the encoder model are used effectively by being connected to the proposed merging network. An intelligent feature merging scheme is introduced in the proposed merging network. Finally, the encoder-merging network is trained for feature extraction from the X-ray images in a supervised manner, and the resulting features are used in the classification layers of the proposed architecture. Considering the final classification task, an EfficientNet-B4 network is utilized in both stages. End-to-end training is performed on datasets containing the classes COVID-19, Normal, Bacterial Pneumonia, and Viral Pneumonia. The proposed method offers very satisfactory performance compared to state-of-the-art methods, achieving accuracies of 90.13% on 4-class, 96.45% on 3-class, and 99.39% on 2-class classification.

9.
Health Inf Sci Syst ; 9(1): 28, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34257953

ABSTRACT

Lung ultrasound (LUS) images are considered effective for detecting Coronavirus Disease 2019 (COVID-19) as an alternative to the existing reverse transcription-polymerase chain reaction (RT-PCR) based detection scheme. However, the recent literature exhibits a shortage of works dealing with LUS image-based COVID-19 detection. In this paper, a spectral mask enhancement (SpecMEn) scheme is introduced, along with a histogram equalization pre-processing stage, to reduce the noise in LUS images prior to utilizing them for feature extraction. To detect COVID-19 cases, we propose utilizing the SpecMEn pre-processed LUS images in deep learning (DL) models (namely the SpecMEn-DL method), which offers a better representation of some characteristic features in LUS images and results in very satisfactory classification performance. The performance of the proposed SpecMEn-DL technique is appraised by implementing some state-of-the-art DL models and comparing the results with related studies. The use of the SpecMEn scheme in DL techniques is found to offer average increases in accuracy and F1 score of 11% and 11.75%, respectively, at the video level. Comprehensive analysis and visualization of the intermediate steps demonstrate very satisfactory detection performance, creating a flexible and safe alternative option to assist clinicians in obtaining an immediate evaluation of patients.
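The histogram equalization pre-processing stage is standard and easy to sketch; the SpecMEn mask itself is not reproduced here. A minimal global-equalization implementation for 8-bit grayscale frames, in plain numpy:

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image.

    Builds the cumulative histogram (CDF) and remaps intensities so the
    output spreads over the full 0-255 range. Assumes the image has at
    least two distinct gray levels."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]
```

In a pipeline like the one described, each LUS frame would be equalized this way before the spectral enhancement and DL feature extraction stages.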

10.
Comput Biol Med ; 132: 104296, 2021 05.
Article in English | MEDLINE | ID: mdl-33684688

ABSTRACT

The COVID-19 pandemic has become one of the biggest threats to the global healthcare system, creating an unprecedented condition worldwide. The necessity of rapid diagnosis calls for alternative methods to predict the condition of the patient, for which disease severity estimation on the basis of lung ultrasound (LUS) can be a safe, radiation-free, flexible, and favorable option. In this paper, a frame-based 4-score disease severity prediction architecture is proposed that integrates deep convolutional and recurrent neural networks to consider both spatial and temporal features of the LUS frames. The proposed convolutional neural network (CNN) architecture implements an autoencoder network and separable convolutional branches fused with a modified DenseNet-201 network to build a robust, noise-tolerant classification model. A five-fold cross-validation scheme is performed to affirm the efficacy of the proposed network. In-depth result analysis shows a promising improvement in classification performance, by an average of 7-12%, from introducing Long Short-Term Memory (LSTM) layers after the proposed CNN architecture, which is approximately 17% more than the traditional DenseNet architecture alone. From an extensive analysis, it is found that the proposed end-to-end scheme is very effective in detecting COVID-19 severity scores from LUS images.


Subjects
COVID-19; Humans; Lung/diagnostic imaging; Neural Networks, Computer; Pandemics; SARS-CoV-2
11.
IEEE Trans Artif Intell ; 2(3): 283-297, 2021 Jun.
Article in English | MEDLINE | ID: mdl-37981918

ABSTRACT

Automatic lung lesion segmentation of chest computed tomography (CT) scans is considered a pivotal stage toward accurate diagnosis and severity measurement of COVID-19. The traditional U-shaped encoder-decoder architecture and its variants suffer from diminished contextual information in pooling/upsampling operations, increased semantic gaps between encoded and decoded feature maps, and vanishing-gradient problems in their sequential gradient propagation, all of which result in suboptimal performance. Moreover, operating on 3-D CT volumes poses further limitations due to the exponential increase of computational complexity, making optimization difficult. In this article, an automated COVID-19 lesion segmentation scheme is proposed utilizing a highly efficient neural network architecture, namely CovSegNet, to overcome these limitations. Additionally, a two-phase training scheme is introduced in which a deeper 2-D network is employed for generating a region-of-interest (ROI)-enhanced CT volume, followed by a shallower 3-D network for further enhancement with more contextual information without increasing the computational burden. Along with the traditional vertical expansion of U-Net, we have introduced horizontal expansion with multistage encoder-decoder modules for achieving optimum performance. Additionally, multiscale feature maps are integrated into the scale transition process to overcome the loss of contextual information. Moreover, a multiscale fusion module is introduced with a pyramid fusion scheme to reduce the semantic gaps between subsequent encoder/decoder modules while facilitating parallel optimization for efficient gradient propagation. Outstanding performance is achieved on three publicly available datasets, largely outperforming other state-of-the-art approaches. The proposed scheme can be easily extended to achieve optimum segmentation performance in a wide variety of applications.
Impact Statement: With lower sensitivity (60-70%), long testing times, and a dire shortage of testing kits, the traditional RT-PCR based COVID-19 diagnostic scheme relies heavily on post-CT manual inspection for further investigation. Hence, automating the extraction of infected lesions from chest CT volumes would be major progress toward faster, accurate diagnosis of COVID-19. However, in challenging conditions with diffuse, blurred, and variably shaped lesion edges, conventional approaches fail to provide precise segmentation of COVID-19 lesions, which can lead to false estimation and loss of information. The proposed scheme, incorporating an efficient neural network architecture (CovSegNet), overcomes the limitations of traditional approaches and provides a significant performance improvement (8.4% on the averaged dice measurement scale) over two datasets. Therefore, this scheme can be an effective, economical tool for physicians, enabling faster infection analysis to help reduce the spread and massive death toll of this deadly virus through mass screening.

12.
Comput Biol Med ; 128: 104119, 2021 01.
Article in English | MEDLINE | ID: mdl-33254083

ABSTRACT

Colorectal cancer has become one of the major causes of death throughout the world. Early detection of polyps, an early symptom of colorectal cancer, can increase the survival rate to 90%. Segmentation of polyp regions from colonoscopy images can facilitate faster diagnosis. Due to the varying sizes, shapes, and textures of polyps, with subtle visible differences from the background, automated segmentation of polyps still poses a major challenge to traditional diagnostic methods. The conventional U-Net architecture and some of its variants have gained much popularity for automated segmentation, though they have several architectural limitations that result in sub-optimal performance. In this paper, an encoder-decoder based modified deep neural network architecture, named PolypSegNet, is proposed to overcome several limitations of traditional architectures and achieve very precise automated segmentation of polyp regions from colonoscopy images. For achieving a more generalized representation at each scale of both the encoder and decoder modules, several sequential depth dilated inception (DDI) blocks are integrated into each unit layer to aggregate features from different receptive areas utilizing depthwise dilated convolutions. Different scales of contextual information from all encoder unit layers pass through the proposed deep fusion skip module (DFSM) to generate skip interconnections with each decoder layer, rather than separately connecting corresponding levels of the encoder and decoder. For more efficient reconstruction in the decoder module, the multi-scale decoded feature maps generated at various levels of the decoder are jointly optimized in the proposed deep reconstruction module (DRM), instead of only considering the decoded feature map from the final decoder layer.
Extensive experimentation on four publicly available databases shows very satisfactory performance, with mean five-fold cross-validation dice scores of 91.52% on the CVC-ClinicDB database, 92.8% on CVC-ColonDB, 88.72% on Kvasir-SEG, and 84.79% on ETIS-Larib. The proposed network provides very accurately segmented polyp regions that will expedite the diagnosis of polyps even in challenging conditions.


Subjects
Colonoscopy; Image Processing, Computer-Assisted; Databases, Factual; Neural Networks, Computer
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5580-5583, 2020 07.
Article in English | MEDLINE | ID: mdl-33019242

ABSTRACT

Automatic detection of sleep apnea, a respiratory sleep disorder affecting millions of patients worldwide, is continuously being explored by researchers. The electroencephalogram (EEG) signal represents a promising tool due to its direct correlation with neural activity and its ease of extraction. Here, an innovative approach is proposed to automatically detect apnea by incorporating local variations of temporal features to identify global feature variations over a broader window. An EEG data frame is divided into smaller sub-frames to effectively extract local feature variation within one larger frame. A fully convolutional neural network (FCNN) is proposed that takes each sub-frame of a single frame individually to extract local features. Following that, a dense classifier consisting of a series of fully connected layers is trained to analyze all the local features extracted from the sub-frames and classify the entire frame as apnea/non-apnea. Finally, a unique post-processing technique is applied that significantly improves accuracy. Both the EEG frame length and the post-processing parameters are varied to find optimal detection conditions. Large-scale experimentation is executed on publicly available data of patients with varying apnea-hypopnea indices for performance evaluation of the suggested method.
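The frame-to-sub-frame step described above is simple to sketch. The sub-frame count is a free parameter here, and trailing samples that do not fill a whole sub-frame are dropped (one plausible convention; the paper may pad instead):

```python
import numpy as np

def split_subframes(frame, n_sub):
    """Divide one 1-D EEG frame into n_sub equal non-overlapping
    sub-frames (shape: n_sub x samples_per_subframe), truncating any
    leftover samples that do not fill a complete sub-frame."""
    frame = np.asarray(frame, dtype=float)
    usable = len(frame) - len(frame) % n_sub
    return frame[:usable].reshape(n_sub, -1)
```

Each row would then be passed through the local feature extractor, and the per-sub-frame outputs concatenated for the frame-level classifier.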


Subjects
Electroencephalography; Sleep Apnea Syndromes; Humans; Neural Networks, Computer; Reading Frames; Sleep; Sleep Apnea Syndromes/diagnosis
14.
Comput Biol Med ; 122: 103869, 2020 07.
Article in English | MEDLINE | ID: mdl-32658740

ABSTRACT

With the recent outbreak of COVID-19, fast diagnostic testing has become one of the major challenges due to the critical shortage of test kits. Pneumonia, a major effect of COVID-19, needs to be urgently diagnosed along with its underlying causes. In this paper, deep learning aided automated COVID-19 and other pneumonia detection schemes are proposed utilizing a small number of COVID-19 chest X-rays. A deep convolutional neural network (CNN) based architecture, named CovXNet, is proposed that utilizes depthwise convolution with varying dilation rates to efficiently extract diversified features from chest X-rays. Since chest X-ray images corresponding to COVID-19-caused pneumonia and other traditional pneumonias have significant similarities, a large number of chest X-rays corresponding to normal and (viral/bacterial) pneumonia patients are first used to train the proposed CovXNet. The learning of this initial training phase is transferred with some additional fine-tuning layers that are further trained with a smaller number of chest X-rays corresponding to COVID-19 and other pneumonia patients. In the proposed method, different forms of CovXNet are designed and trained with X-ray images of various resolutions, and a stacking algorithm is employed to further optimize their predictions. Finally, a gradient-based discriminative localization is integrated to identify the abnormal regions of X-ray images referring to different types of pneumonia. Extensive experimentation using two different datasets provides very satisfactory detection performance, with accuracies of 97.4% for COVID/Normal, 96.9% for COVID/Viral pneumonia, 94.7% for COVID/Bacterial pneumonia, and 90.2% for multiclass COVID/Normal/Viral/Bacterial pneumonia. Hence, the proposed schemes can serve as an efficient tool in the current state of the COVID-19 pandemic. All the architectures are made publicly available at: https://github.com/Perceptron21/CovXNet.


Subjects
Clinical Laboratory Techniques/methods; Coronavirus Infections/diagnostic imaging; Image Processing, Computer-Assisted/methods; Pneumonia, Viral/diagnostic imaging; Radiography, Thoracic/methods; Algorithms; Betacoronavirus; COVID-19; COVID-19 Testing; Coronavirus Infections/diagnosis; Databases, Factual; Deep Learning; Humans; Neural Networks, Computer; Pandemics; Reproducibility of Results; SARS-CoV-2
15.
IEEE J Transl Eng Health Med ; 8: 3300111, 2020.
Article in English | MEDLINE | ID: mdl-32190429

ABSTRACT

BACKGROUND: Computer-aided disease detection schemes for wireless capsule endoscopy (WCE) videos have received great attention from researchers for reducing physicians' burden in the time-consuming and risky manual review process. While single-disease classification schemes have been extensively addressed in the past, developing a unified scheme capable of detecting multiple gastrointestinal (GI) diseases is very challenging due to the highly irregular color patterns of diseased images. METHOD: In this paper, a computer-aided method is developed to detect multiple GI diseases from WCE videos utilizing a linear discriminant analysis (LDA) based region of interest (ROI) separation scheme followed by a probabilistic model fitting approach. Commonly, as pixel-labeled images are available only in small numbers, only image-level annotations are used for detecting diseases in WCE images, whereas pixel-level knowledge, although a major source for learning disease characteristics, is left unused. In view of learning the characteristic disease patterns from pixel-labeled images, a set of LDA models is trained and later used to extract the salient ROI from WCE images in both the training and testing stages. The intensity patterns of the ROI are then modeled by a suitable probability distribution, and the fitted parameters of the distribution are utilized as features in a supervised cascaded classification scheme. RESULTS: To validate the proposed multi-disease detection scheme, a set of pixel-labeled images of bleeding, ulcer, and tumor is used to build the LDA models, and then a large WCE dataset is used for training and testing. A high level of accuracy is achieved even with a small number of pixel-labeled images. CONCLUSION: The proposed scheme is therefore expected to help physicians review large numbers of WCE images to diagnose different GI diseases.
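An LDA-based pixel separation step of this kind can be sketched with scikit-learn. The feature choice below (per-pixel colour values, binary diseased/background labels) is an assumption for illustration; the paper's actual colour space and labelling protocol are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_roi_lda(pixel_features, labels):
    """Fit an LDA model on labelled pixel colour features
    (label 1 = diseased, 0 = background). The fitted model can then
    flag salient ROI pixels in unseen WCE frames."""
    model = LinearDiscriminantAnalysis()
    model.fit(np.asarray(pixel_features, dtype=float), np.asarray(labels))
    return model
```

At inference, `model.predict` over every pixel of a frame yields a binary ROI mask, from which the intensity patterns can be extracted for the probabilistic model-fitting stage.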

16.
Comput Biol Med ; 115: 103478, 2019 12.
Article in English | MEDLINE | ID: mdl-31698239

ABSTRACT

Wireless capsule endoscopy (WCE) is a video technology for inspecting abnormalities, like bleeding, in the gastrointestinal tract. In order to avoid a complex and long manual review process, automatic bleeding detection schemes have been developed that mainly utilize features extracted from WCE images. In feature-based bleeding detection schemes, either global features are used, which produce averaged characteristics that ignore the effect of smaller bleeding regions, or local features are utilized, which cause a large feature dimension. In this paper, pixels of interest (POI) in a given WCE image are determined using a linear separation scheme, local spatial features are then extracted from the POI, and finally, a suitable characteristic probability density function (PDF) is fitted over the resulting feature space. The proposed PDF model fitting based approach not only reduces the computational complexity but also offers a more consistent representation of a class. A detailed analysis is carried out to find the most suitable PDF, and fitting a Rayleigh PDF model to the local spatial features is found to be best suited for bleeding detection. For the purpose of classification, the fitted PDF parameters are used as features in a supervised support vector machine classifier. Pixels residing in the close vicinity of the POI are further classified with the help of an unsupervised clustering-based scheme to extract more precise bleeding regions. A large number of WCE images obtained from 30 publicly available WCE videos are used for performance evaluation of the proposed scheme, and the effects on classification performance of changes in PDF models, block statistics, color spaces, and classifiers are experimentally analyzed. The proposed scheme shows satisfactory performance in terms of sensitivity (97.55%), specificity (96.59%), and accuracy (96.77%), and the results obtained by the proposed method outperform those reported for some state-of-the-art methods.
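The Rayleigh model-fitting step can be sketched with scipy. Pinning the location parameter at zero is an assumption made here for a stable maximum-likelihood fit; the actual feature definition and fitting protocol are the paper's.

```python
import numpy as np
from scipy.stats import rayleigh

def rayleigh_scale(values):
    """Fit a Rayleigh PDF (location fixed at 0) to local spatial feature
    values; the fitted scale parameter serves as a compact per-image
    feature for the downstream SVM classifier."""
    _, scale = rayleigh.fit(np.asarray(values, dtype=float), floc=0)
    return scale
```

Because the fitted parameter summarizes the whole feature distribution, one image contributes only a handful of numbers to the classifier, which is where the claimed reduction in feature dimension comes from.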


Subjects
Capsule Endoscopy , Gastrointestinal Hemorrhage/diagnostic imaging , Image Processing, Computer-Assisted , Support Vector Machine , Video Recording , Wireless Technology , Humans
17.
Healthc Technol Lett ; 6(3): 82-86, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31341633

ABSTRACT

Sleep apnea is a potentially serious sleep disorder characterised by abnormal pauses in breathing. Electroencephalogram (EEG) signal analysis plays an important role in detecting sleep apnea events. In this work, a method is proposed based on inter-band energy ratio features obtained from multi-band EEG signals for subject-specific classification of sleep apnea and non-apnea events. A K-nearest neighbour classifier is used for classification. Unlike conventional methods, instead of classifying apnea patients against healthy persons, the objective here is to differentiate apnea and non-apnea events of an apnea patient, which makes the task very challenging. Extensive experimentation is carried out on EEG data of several subjects obtained from a publicly available database. Comprehensive experimental results reveal that the proposed method offers very satisfactory classification performance in terms of sensitivity, specificity and accuracy.
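A minimal sketch of inter-band energy ratio features feeding a K-nearest neighbour classifier follows. The band limits, sampling rate, and the synthetic "apnea" and "non-apnea" frames are assumptions for illustration, not the paper's data or parameters.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 100  # assumed EEG sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_energy_ratios(frame, fs=FS):
    """Energy in each classical EEG band (via the periodogram) divided by
    the total in-band energy, giving inter-band ratio features per frame."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    energies = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in BANDS.values()])
    return energies / energies.sum()

# Hypothetical frames: "apnea-like" frames dominated by slow delta
# activity, "non-apnea" frames by faster alpha activity.
rng = np.random.default_rng(1)
t = np.arange(FS * 5) / FS
apnea = [band_energy_ratios(np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(t.size))
         for _ in range(10)]
normal = [band_energy_ratios(np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size))
          for _ in range(10)]
knn = KNeighborsClassifier(n_neighbors=3).fit(np.vstack(apnea + normal), [1] * 10 + [0] * 10)
```

Normalizing by total energy makes the features insensitive to per-subject amplitude scaling, which fits the paper's subject-specific setting.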

18.
IEEE J Biomed Health Inform ; 23(3): 1066-1074, 2019 05.
Article in English | MEDLINE | ID: mdl-29994231

ABSTRACT

Sleep apnea, a serious sleep disorder affecting a large population, causes disruptions in breathing during sleep. In this paper, an automatic apnea detection scheme is proposed using a single-lead electroencephalography (EEG) signal to discriminate apnea patients from healthy subjects, as well as to deal with the more difficult task of classifying apnea and non-apnea events of an apnea patient. A unique multiband, subframe-based feature extraction scheme is developed to capture the feature variation pattern within a frame of EEG data, which is shown to exhibit significantly different characteristics in apnea and non-apnea frames. Such within-frame feature variation can be better represented by statistical measures and characteristic probability density functions. It is found that the use of Rician model parameters along with some statistical measures offers very robust feature quality in terms of standard criteria such as the Bhattacharyya distance and the geometric separability index. For classification, the proposed features are used in a K-nearest neighbor classifier. Extensive experimentation and analysis on three different publicly available databases show that the proposed method offers superior classification performance in terms of sensitivity, specificity, and accuracy.
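The within-frame feature idea (split a frame into subframes, compute a per-subframe statistic, then summarize the spread of those statistics with Rician fit parameters plus simple moments) might be sketched as below. The choice of the standard deviation as the subframe statistic, the subframe count, and the final feature layout are all assumptions.

```python
import numpy as np
from scipy.stats import rice

def subframe_features(frame, n_sub=25):
    """Split one EEG frame into subframes, compute a per-subframe
    statistic (here the standard deviation), then summarize the
    within-frame variation of that statistic with fitted Rician
    parameters plus simple statistical measures."""
    subs = np.array_split(np.asarray(frame, dtype=float), n_sub)
    stats = np.array([s.std() for s in subs])
    b, loc, scale = rice.fit(stats, floc=0)  # Rician shape and scale
    return np.array([b, scale, stats.mean(), stats.var()])
```

The resulting four-dimensional vector per frame would then be the input to a K-nearest neighbor classifier, as in the paper.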


Subjects
Electroencephalography/methods , Signal Processing, Computer-Assisted , Sleep Apnea Syndromes/diagnosis , Algorithms , Databases, Factual , Humans , Sensitivity and Specificity
19.
Med Biol Eng Comput ; 57(3): 689-702, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30349957

ABSTRACT

The task of heart rate estimation using photoplethysmographic (PPG) signals is challenging due to the presence of various motion artifacts in the recorded signals. In this paper, a fast heart rate estimation algorithm based on a modified SPEctral subtraction scheme utilizing Composite Motion Artifacts Reference generation (SPECMAR) is proposed using two-channel PPG and three-axis accelerometer signals. First, preliminary noise reduction is performed by filtering unwanted frequency components from the recorded signals. Next, a composite motion artifacts reference generation method is developed and employed in the proposed SPECMAR algorithm for motion artifact reduction. The heart rate is then computed from the noise- and motion-artifact-reduced PPG signal. Finally, a heart rate tracking algorithm is proposed that considers neighboring estimates. The performance of the SPECMAR algorithm has been tested on a publicly available PPG database. The average heart rate estimation error is 2.09 BPM on 23 recordings, with a Pearson correlation of 0.9907. Owing to its low computational complexity, the method is faster than competing methods. The low estimation error and smooth, fast heart rate tracking make SPECMAR an ideal choice for implementation in wearable devices. Graphical Abstract: Flow chart for heart rate estimation from photoplethysmographic (PPG) signals using the modified spectral subtraction scheme with composite motion artifacts reference generation (SPECMAR).
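The spectral-subtraction idea behind SPECMAR can be illustrated with a deliberately simplified sketch: build a composite reference spectrum from the accelerometer axes, subtract a scaled copy from the PPG magnitude spectrum, and read the heart rate off the strongest surviving peak. The scaling rule, sampling rate, and HR search range below are assumptions, not the published algorithm.

```python
import numpy as np

FS = 125  # assumed PPG sampling rate in Hz

def estimate_hr(ppg, accel, fs=FS):
    """Toy spectral-subtraction heart-rate estimate: subtract a scaled
    composite accelerometer magnitude spectrum from the PPG spectrum,
    then pick the strongest peak in a plausible HR range (40-180 BPM)."""
    n = len(ppg)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    ppg_mag = np.abs(np.fft.rfft(ppg - np.mean(ppg)))
    # Composite motion reference: sum of per-axis magnitude spectra.
    ref_mag = sum(np.abs(np.fft.rfft(a - np.mean(a))) for a in accel)
    # Scale the reference so its dominant peak cancels the motion peak.
    clean = np.clip(ppg_mag - ref_mag * ppg_mag.max() / (ref_mag.max() + 1e-12),
                    0.0, None)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
    return 60.0 * freqs[band][np.argmax(clean[band])]
```

In the synthetic case of a pulse tone plus a stronger motion tone that also appears on the accelerometer, the subtraction removes the motion peak and the pulse frequency survives.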


Subjects
Algorithms , Heart Rate/physiology , Photoplethysmography/methods , Signal Processing, Computer-Assisted , Accelerometry , Artifacts , Databases, Factual , Electrocardiography , Humans , Motion , Running
20.
J Healthc Eng ; 2018: 9423062, 2018.
Article in English | MEDLINE | ID: mdl-29682270

ABSTRACT

Wireless capsule endoscopy (WCE) is an effective video technology for diagnosing gastrointestinal (GI) diseases, such as bleeding. To avoid the conventional tedious and risky manual review of long-duration WCE videos, automatic bleeding detection schemes are gaining importance. In this paper, to investigate bleeding, WCE images are analyzed in the normalized RGB color space, since human perception of bleeding is associated with different shades of red. In the proposed method, an efficient region of interest (ROI) is first extracted from the WCE image frame based on the inter-plane intensity variation profile in normalized RGB space. Next, the variation in the normalized green plane over the extracted ROI is represented by a histogram, and features are extracted from the proposed normalized green plane histograms. For classification, the K-nearest neighbors classifier is employed. Moreover, bleeding zones in a bleeding image are extracted using morphological operations. For performance evaluation, 2300 WCE images obtained from 30 publicly available WCE videos are used in a tenfold cross-validation scheme, and the proposed method outperforms four existing reported methods with an accuracy of 97.86%, a sensitivity of 95.20%, and a specificity of 98.32%.
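The normalized-green-plane histogram feature can be sketched as follows; the whole image stands in for the paper's ROI, and the bin count, patch statistics, and classifier settings are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def green_hist_feature(img, n_bins=16):
    """Normalized-green histogram feature: divide each pixel's G value by
    R+G+B (bleeding shifts mass toward low normalized green), then use the
    histogram of those ratios over the region as the feature vector."""
    img = img.astype(float)
    g_norm = img[..., 1] / (img.sum(axis=-1) + 1e-12)
    hist, _ = np.histogram(g_norm, bins=n_bins, range=(0, 1), density=True)
    return hist

# Hypothetical patches: strongly red (bleeding-like) vs pinkish-grey tissue.
rng = np.random.default_rng(3)
def patch(mean_rgb):
    return np.clip(rng.normal(mean_rgb, 10, size=(32, 32, 3)), 0, 255)

bleed = [green_hist_feature(patch([150, 30, 30])) for _ in range(10)]
tissue = [green_hist_feature(patch([150, 110, 100])) for _ in range(10)]
knn = KNeighborsClassifier(n_neighbors=3).fit(bleed + tissue, [1] * 10 + [0] * 10)
```

Normalizing by R+G+B removes overall brightness, so the histogram captures chromatic (red-shift) information, which is the motivation the abstract gives for working in normalized RGB.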


Subjects
Capsule Endoscopy , Gastrointestinal Hemorrhage/diagnostic imaging , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated , Wireless Technology , Algorithms , Cluster Analysis , Color , False Positive Reactions , Humans , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity , Support Vector Machine , Video Recording