Results 1 - 13 of 13
1.
J Sleep Res ; 2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37402610

ABSTRACT

Obstructive sleep apnea (OSA) places a heavy health-related burden on patients and the healthcare system. Continuous positive airway pressure (CPAP) is effective in treating OSA, but adherence to it is often inadequate. A promising solution is to detect sleep apnea events in advance and adjust the pressure accordingly, which could improve long-term use of CPAP treatment. CPAP titration data may reflect a patient's response to therapy at home. Our study aimed to develop a machine-learning algorithm using retrospective electrocardiogram (ECG) data from CPAP titration to forecast sleep apnea events before they happen. We employed a support vector machine (SVM), k-nearest neighbour (KNN), decision tree (DT), and linear discriminant analysis (LDA) to detect sleep apnea events 30-90 s in advance. Preprocessed 30 s segments were time-frequency transformed into spectrograms using the continuous wavelet transform, followed by feature generation using the bag-of-features technique. Specific frequency bands of 0.5-50 Hz, 0.8-10 Hz, and 8-50 Hz were also extracted to identify the most informative band. Our results indicated that SVM outperformed KNN, LDA, and DT across frequency bands and lead times. The 8-50 Hz frequency band gave the best accuracy of 98.2% and an F1-score of 0.93. Segments 60 s before apnea events exhibited better performance than other pre-OSA segments. Our findings demonstrate the feasibility of detecting sleep apnea events in advance using only a single-lead ECG signal recorded during CPAP titration, making the proposed framework a novel and promising approach to managing obstructive sleep apnea at home.
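A minimal sketch of the pipeline this abstract describes (CWT spectrogram of a 30 s ECG segment, bag-of-features quantisation of spectrogram patches, SVM classification). The sampling rate, patch size, and codebook size are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.svm import SVC

FS = 100   # assumed ECG sampling rate (Hz)
K = 200    # assumed codebook size

def cwt_spectrogram(segment, fs=FS):
    """Continuous wavelet transform of a 30 s ECG segment (Morlet wavelet)."""
    scales = np.arange(1, 64)
    coeffs, _ = pywt.cwt(segment, scales, 'morl', sampling_period=1.0 / fs)
    return np.abs(coeffs)                       # scales x time magnitude "image"

def local_patches(spec, patch=8, step=8):
    """Cut the spectrogram into small patches used as local descriptors."""
    h, w = spec.shape
    return np.array([spec[i:i + patch, j:j + patch].ravel()
                     for i in range(0, h - patch + 1, step)
                     for j in range(0, w - patch + 1, step)])

def bof_histogram(spec, codebook):
    """Quantise patches against the codebook and return a normalised histogram."""
    words = codebook.predict(local_patches(spec))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

# segments: 30 s ECG arrays taken 60 s before annotated events (label 1) or from
# normal breathing (label 0)
def train(segments, labels):
    specs = [cwt_spectrogram(s) for s in segments]
    codebook = KMeans(n_clusters=K, n_init=10).fit(
        np.vstack([local_patches(s) for s in specs]))
    X = np.array([bof_histogram(s, codebook) for s in specs])
    return codebook, SVC(kernel='rbf').fit(X, labels)
```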

2.
Expert Syst Appl ; 216: 119430, 2023 Apr 15.
Article in English | MEDLINE | ID: mdl-36570382

ABSTRACT

The COVID-19 pandemic has been affecting the world since December 2019, and the number of infected people is still increasing rapidly. Chest X-ray images are clinical adjuncts that can be used in the diagnosis of COVID-19. Because of the rapid worldwide spread of COVID-19 and the limited number of expert radiologists, the proposed method performs automatic rather than manual diagnosis. In this paper, COVID-19 Positive/Negative (2275 Positive, 4626 Negative) and Normal/Pneumonia (2313 Normal, 2313 Pneumonia) cases are diagnosed using chest X-ray images, with 80% of the images used for training and 20% for validation. In the proposed method, six different classifiers are trained on chest X-ray images, and the five most successful are used in both phases. In Phase-1 and Phase-2, image features are extracted with the Bag of Features method (K = 2000 and K = 1500, respectively) for the Cosine K-Nearest Neighbor (KNN), Linear Discriminant, Logistic Regression, Bagged Trees Ensemble, and Medium Gaussian Support Vector Machine (SVM) classifiers, but not for the SqueezeNet deep learning classifier. In both phases, the five most successful classifiers are determined and images are classified with the Majority Voting (Mathematical Evaluation) method. The application of the proposed method is designed to let users diagnose COVID-19 Positive, Normal, and Pneumonia cases. The accuracy values obtained by the Majority Voting (Mathematical Evaluation) method for Phase-1 and Phase-2 are 99.86% and 99.28%, respectively, giving an overall system accuracy of 99.63%. For Phase-1 and Phase-2, the Specificity (%), Precision (%), Recall (%), F1 Score (%), Area Under Curve (AUC), and Matthews Correlation Coefficient (MCC) are 99.98, 99.83, 99.07, 99.51, 0.9974, and 0.9855, and 99.73, 99.69, 98.63, 99.23, 0.9928, and 0.9518, respectively. For the whole system, Specificity (%), Precision (%), Recall (%), F1 Score (%), AUC, and MCC are 99.88, 99.78, 98.90, 99.40, 0.9956, and 0.9720, respectively. Compared with studies in the literature, the proposed model performs better, since the best performance metrics for the dataset used were obtained in this study. In addition, because a biphasic majority voting technique is used, the proposed model is more reliable. Moreover, although there are tens of thousands of studies on this subject, the usability of most of these models is debatable since they lack graphical user interface applications. In artificial intelligence technologies, the usability of developed models matters as much as their performance, because such models are often used by people with little knowledge of artificial intelligence.
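A hedged sketch of the majority-voting idea in one phase: several classical classifiers are trained on bag-of-features vectors and their predictions are combined by a hard vote. The classifier settings are assumptions, and the BoF extraction step is not shown.

```python
from sklearn.ensemble import VotingClassifier, BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def build_voter():
    """Hard-voting ensemble over the five classical classifiers named above."""
    members = [
        ('cos_knn', KNeighborsClassifier(n_neighbors=5, metric='cosine')),
        ('lda', LinearDiscriminantAnalysis()),
        ('logreg', LogisticRegression(max_iter=1000)),
        ('bag_trees', BaggingClassifier(n_estimators=50)),   # bagged decision trees
        ('svm', SVC(kernel='rbf', gamma='scale')),
    ]
    return VotingClassifier(estimators=members, voting='hard')

# X_train: BoF histograms of chest X-rays (Phase-1: COVID-19 positive vs. negative);
# Phase-2 would repeat the same recipe for Normal vs. Pneumonia.
# voter = build_voter().fit(X_train, y_train)
# y_pred = voter.predict(X_test)
```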

3.
Sensors (Basel) ; 20(8)2020 Apr 13.
Article in English | MEDLINE | ID: mdl-32295036

ABSTRACT

While the number of casualties and the amount of property damage caused by fires in urban areas increase each year, studies on their automatic detection have not kept pace with the scale of such fire damage. Camera-based fire detection systems have numerous advantages over conventional sensor-based methods, but most research in this area has been limited to daytime use. Night-time fire detection in urban areas is more difficult than daytime detection owing to ambient lighting such as headlights, neon signs, and streetlights. Therefore, in this study, we propose an algorithm that can quickly detect a fire at night in urban areas by reflecting its night-time characteristics. In the pre-processing stage, ELASTIC-YOLOv3 (an improvement over the existing YOLOv3) detects fire candidate areas quickly and accurately, regardless of the size of the fire. To reflect the dynamic characteristics of a night-time flame, N frames are accumulated to create a temporal fire-tube, a histogram of the optical flow of the flame is extracted from the fire-tube and converted into a bag-of-features (BoF) histogram, and the BoF is then applied to a random forest classifier, which achieves fast classification and high performance on the tabular features to verify a fire candidate. A performance comparison against several state-of-the-art fire detection methods shows that the proposed method improves night-time fire detection over deep neural network (DNN)-based methods and reduces processing time without any loss in accuracy.
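An illustrative sketch of the verification stage only: accumulate N cropped frames of a fire candidate region, compute dense optical flow, summarise it as a magnitude-weighted orientation histogram, and feed that to a random forest. The parameter values and histogram binning are assumptions.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_BINS = 16

def flow_histogram(fire_tube):
    """fire_tube: list of N grayscale frames (uint8), all cropped to the same candidate box."""
    hist = np.zeros(N_BINS)
    for prev, curr in zip(fire_tube[:-1], fire_tube[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, _ = np.histogram(ang, bins=N_BINS, range=(0, 2 * np.pi), weights=mag)
        hist += h
    return hist / (hist.sum() + 1e-9)

# tubes: fire-tubes from candidate regions; labels: 1 = real flame, 0 = other light source
# clf = RandomForestClassifier(n_estimators=200).fit(
#     [flow_histogram(t) for t in tubes], labels)
```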

4.
J Med Syst ; 43(4): 87, 2019 Feb 28.
Article in English | MEDLINE | ID: mdl-30820678

ABSTRACT

Chest radiography is the most preferred non-invasive imaging technique for early diagnosis of tuberculosis (TB). However, a lack of radiological expertise in TB detection leads to indiscriminate chest radiograph (CXR) screening. A modest classification approach based on local image description to detect subtle characteristics of TB in CXRs is therefore highly desirable. In this work, an attempt has been made to classify normal and TB CXR images using the Bag of Features (BoF) approach with the Speeded-Up Robust Feature (SURF) descriptor. The images are obtained from a public database. Lung field segmentation is performed using the Distance Regularized Level Set (DRLS) formulation, and the segmentation results are validated against ground truth images using similarity, overlap, and area correlation measures. The BoF approach with SURF keypoint descriptors is implemented to categorize the images using a Multilayer Perceptron (MLP) classifier. The obtained results demonstrate that the DRLS method is able to delineate lung fields from CXR images, and the BoF with SURF keypoint descriptors is able to characterize local attributes of normal and TB images. The segmentation results are in high correlation with the ground truth. The MLP classifier provides high Recall, Specificity, Accuracy, F-score, and Area Under the Curve (AUC) values of 87.7%, 85.9%, 87.8%, 87.6%, and 94%, respectively, between normal and abnormal images. The proposed computer-aided diagnostic approach performs better than existing methods and can thus be of significant assistance to physicians at the point of care in resource-constrained regions.


Subjects
Algorithms; Radiographic Image Interpretation, Computer-Assisted/methods; Tuberculosis, Pulmonary/diagnosis; Humans; Radiography, Thoracic; Sensitivity and Specificity; Tuberculosis, Pulmonary/diagnostic imaging
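A sketch of the BoF + MLP classification step described in this record. ORB is used here as a freely available stand-in for SURF (SURF requires an opencv-contrib "non-free" build); the codebook size and MLP layout are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

K = 500
orb = cv2.ORB_create(nfeatures=500)

def descriptors(img_gray):
    """Local keypoint descriptors of one segmented lung-field image."""
    _, des = orb.detectAndCompute(img_gray, None)
    return des if des is not None else np.empty((0, 32), dtype=np.uint8)

def bof_vector(img_gray, codebook):
    des = descriptors(img_gray).astype(np.float32)
    if len(des) == 0:
        return np.zeros(K)
    hist = np.bincount(codebook.predict(des), minlength=K).astype(float)
    return hist / hist.sum()

# imgs: segmented lung-field images (grayscale); labels: 0 = normal, 1 = TB
def train(imgs, labels):
    all_des = np.vstack([descriptors(i).astype(np.float32) for i in imgs])
    codebook = KMeans(n_clusters=K, n_init=4).fit(all_des)
    X = np.array([bof_vector(i, codebook) for i in imgs])
    mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500).fit(X, labels)
    return codebook, mlp
```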
5.
Neuroimage ; 183: 212-226, 2018 12.
Article in English | MEDLINE | ID: mdl-30099077

ABSTRACT

This work presents an efficient framework, based on manifold approximation, for generating brain fingerprints from multi-modal data. The proposed framework represents images as bags of local features which are used to build a subject proximity graph. Compact fingerprints are obtained by projecting this graph in a low-dimensional manifold using spectral embedding. Experiments using the T1/T2-weighted MRI, diffusion MRI, and resting-state fMRI data of 945 Human Connectome Project subjects demonstrate the benefit of combining multiple modalities, with multi-modal fingerprints more discriminative than those generated from individual modalities. Results also highlight the link between fingerprint similarity and genetic proximity, monozygotic twins having more similar fingerprints than dizygotic or non-twin siblings. This link is also reflected in the differences of feature correspondences between twin/sibling pairs, occurring in major brain structures and across hemispheres. The robustness of the proposed framework to factors like image alignment and scan resolution, as well as the reproducibility of results on retest scans, suggest the potential of multi-modal brain fingerprinting for characterizing individuals in a large cohort analysis.


Subjects
Brain; Functional Neuroimaging/methods; Individuality; Magnetic Resonance Imaging/methods; Siblings; Twins; Adult; Brain/anatomy & histology; Brain/diagnostic imaging; Brain/physiology; Cohort Studies; Connectome/methods; Diffusion Magnetic Resonance Imaging/methods; Female; Humans; Male; Young Adult
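A minimal sketch of the fingerprinting idea in this record: a subject-by-subject proximity (affinity) matrix, built from matched bags of local features, is projected onto a low-dimensional manifold with spectral embedding, and each row of the embedding serves as a compact fingerprint. How the affinity matrix is computed from the local features is left abstract here.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

def fingerprints(affinity, n_dims=50):
    """affinity: (n_subjects, n_subjects) symmetric, non-negative proximity matrix."""
    embed = SpectralEmbedding(n_components=n_dims, affinity='precomputed')
    return embed.fit_transform(affinity)        # (n_subjects, n_dims) fingerprints

def fingerprint_similarity(fp_a, fp_b):
    """Cosine similarity between two subjects' fingerprints (e.g., a twin pair)."""
    return float(fp_a @ fp_b /
                 (np.linalg.norm(fp_a) * np.linalg.norm(fp_b) + 1e-12))
```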
6.
Multimed Tools Appl ; : 1-17, 2023 Apr 20.
Article in English | MEDLINE | ID: mdl-37362706

ABSTRACT

Lung cancer has one of the highest incidence rates in the world. The standard tests for its diagnosis are medical imaging exams, sputum cytology, and lung biopsy. Computed tomography (CT) of the chest plays an essential role in the early detection of nodules, since it can allow for more treatment options and increases patient survival. However, the analysis of these exams is a tiring and error-prone process, so computational methods can help the specialist. This work addresses the classification of pulmonary nodules as benign or malignant on CT images. Our approach uses the pre-trained VGG16, VGG19, Inception, ResNet50, and Xception networks to extract features from each 2D slice of the 3D nodules. We then use Principal Component Analysis to reduce the dimensionality of the feature vectors and make them all the same length, and Bag of Features (BoF) to combine the feature vectors of the different 2D slices into a single signature representing the 3D nodule. The classification step uses a Random Forest. We evaluated the proposed method with 1,405 segmented nodules from the LIDC-IDRI database and obtained an accuracy of 95.34%, F1-score of 91.73%, kappa of 0.88, sensitivity of 90.53%, specificity of 97.26%, and AUC of 0.99. The main conclusion was that combining features extracted from 2D slices with pre-trained architectures through BoF produced better results than training 2D and 3D CNNs on the nodules. In addition, the use of BoF makes the creation of the nodule signature independent of the number of slices.
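A hedged sketch of this nodule pipeline with one of the listed backbones: a pre-trained CNN extracts one feature vector per 2D slice, PCA shortens those vectors, a k-means codebook turns the variable number of slices into one fixed-length BoF signature, and a random forest classifies it. The codebook size, PCA dimension, and input size are assumptions.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

backbone = VGG16(weights='imagenet', include_top=False, pooling='avg')  # 512-d per slice

def slice_features(slices_rgb):
    """slices_rgb: (n_slices, 224, 224, 3) array of one nodule's 2D slices."""
    return backbone.predict(preprocess_input(slices_rgb.astype('float32')), verbose=0)

def train(nodule_slice_stacks, labels, pca_dim=64, k=100):
    feats = [slice_features(s) for s in nodule_slice_stacks]     # list of (n_i, 512)
    pca = PCA(n_components=pca_dim).fit(np.vstack(feats))
    reduced = [pca.transform(f) for f in feats]
    codebook = KMeans(n_clusters=k, n_init=4).fit(np.vstack(reduced))
    signatures = []
    for r in reduced:                                            # one histogram per 3D nodule
        hist = np.bincount(codebook.predict(r), minlength=k).astype(float)
        signatures.append(hist / hist.sum())
    clf = RandomForestClassifier(n_estimators=300).fit(signatures, labels)
    return pca, codebook, clf
```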

7.
J Clin Med ; 11(1)2021 Dec 30.
Article in English | MEDLINE | ID: mdl-35011934

ABSTRACT

BACKGROUND: Heart rate variability (HRV) and electrocardiogram (ECG)-derived respiration (EDR) have been used to detect sleep apnea (SA) for decades. The present study proposes an SA-detection algorithm using a machine-learning framework and bag-of-features (BoF) derived from an ECG spectrogram. METHODS: This study was verified using overnight ECG recordings from 83 subjects with an average apnea-hypopnea index (AHI) of 29.63 events/h, drawn from the PhysioNet Apnea-ECG and National Cheng Kung University Hospital Sleep Center databases. The study used signal preprocessing to filter noise and artifacts, ECG time-frequency transformation using the continuous wavelet transform (CWT), BoF feature generation, machine-learning classification using support vector machine (SVM), ensemble learning (EL), and k-nearest neighbor (KNN) classifiers, and cross-validation. The time length of the spectrogram was set to 10 and 60 s to examine the minimum window length required to achieve satisfactory accuracy. Specific frequency bands of 0.1-50, 8-50, 0.8-10, and 0-0.8 Hz were also extracted to generate the BoF and determine the frequency band best suited for SA detection. RESULTS: The five-fold cross-validation accuracy using the BoF derived from the ECG spectrogram with 10 and 60 s time windows was 90.5% and 91.4% for the 0.1-50 Hz and 8-50 Hz frequency bands, respectively. CONCLUSION: An SA-detection algorithm utilizing BoF and a machine-learning framework was successfully developed in this study, with satisfactory classification accuracy and high temporal resolution.
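A sketch of the evaluation loop implied by the METHODS: for each spectrogram window length and frequency band, build BoF vectors and report five-fold cross-validated accuracy. The `make_bof_features` callable stands for the CWT + bag-of-features step and is an assumption, not shown here.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

WINDOWS_S = [10, 60]
BANDS_HZ = [(0.1, 50), (8, 50), (0.8, 10), (0.0, 0.8)]

def evaluate(ecg_segments, labels, make_bof_features):
    """Five-fold cross-validated accuracy for every (window length, band) setting."""
    results = {}
    for win in WINDOWS_S:
        for band in BANDS_HZ:
            X = make_bof_features(ecg_segments, window_s=win, band_hz=band)
            results[(win, band)] = cross_val_score(SVC(kernel='rbf'), X, labels, cv=5).mean()
    return results   # e.g., {(60, (8, 50)): 0.914, ...} in the study's reported range
```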

8.
Biomed Signal Process Control ; 68: 102656, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33897803

ABSTRACT

The medical and scientific communities are currently trying to treat infected patients and develop vaccines to prevent future outbreaks. In healthcare, machine learning has proven to be an efficient technology for helping to combat COVID-19. Hospitals are now overwhelmed by the increase in COVID-19 cases, and given patients' confidentiality and rights, it is hard to assemble quality medical image datasets in a timely manner. For COVID-19 diagnosis, several traditional computer-aided detection systems based on classification techniques have been proposed, and the bag-of-features (BoF) model has shown promising potential in this domain. This work therefore developed an ensemble-based BoF classification system for COVID-19 detection, in which an ensemble is introduced at the classification step of the BoF. The proposed system was evaluated and compared against different classification systems for different numbers of visual words to evaluate their effect on classification efficiency. The results proved the superiority of the proposed ensemble-based BoF for the classification of normal and COVID-19 chest X-ray (CXR) images compared to other classifiers.
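A hedged sketch of "ensemble at the classification step of the BoF": the same BoF histograms are fed to several base classifiers whose probability outputs are averaged, and the experiment is repeated for different numbers of visual words. The ensemble members and the `build_bof` function (keypoint extraction plus k-means codebook) are assumptions.

```python
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def ensemble():
    """Soft-voting ensemble applied to the BoF histograms."""
    return VotingClassifier(
        estimators=[('svm', SVC(probability=True)),
                    ('knn', KNeighborsClassifier(n_neighbors=5)),
                    ('rf', RandomForestClassifier(n_estimators=200))],
        voting='soft')

def sweep_vocabulary(cxr_images, labels, build_bof, sizes=(100, 200, 400, 800)):
    """Cross-validated accuracy of the ensemble for each number of visual words."""
    return {k: cross_val_score(ensemble(), build_bof(cxr_images, k), labels, cv=5).mean()
            for k in sizes}
```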

9.
Article in English | MEDLINE | ID: mdl-28755437

ABSTRACT

In this paper, we propose a fully automated learning-based approach for detecting cells in time-lapse phase contrast images. The proposed system combines two machine learning approaches to achieve bottom-up image segmentation. We apply pixel-wise classification using random forest (RF) classifiers to determine the potential location of the cells. Each pixel is classified into four categories (cell, mitotic cell, halo effect, and background noise), with various image features extracted at different scales to train the RF classifier. The resulting probability map is partitioned using the k-means algorithm to form potential cell regions, which are expanded into the neighboring areas to recover missing or broken cell regions. To validate the cell regions, a second machine learning method based on bag-of-features and spatial pyramid encoding is proposed; its result can be a validated cell, a merged cell, or a non-cell. If a cell region is classified as a merged cell, it is split using the seeded watershed method. The proposed method is demonstrated on several phase contrast image datasets, i.e., U2OS, HeLa, and NIH 3T3. In comparison to state-of-the-art cell detection techniques, the proposed method shows improved performance, particularly in dealing with noise interference and drastic shape variations.


Subjects
Machine Learning; Microscopy, Phase-Contrast; Animals; Automation; Cell Line; HeLa Cells; Humans; Image Interpretation, Computer-Assisted/methods; Mice; NIH 3T3 Cells; Time-Lapse Imaging
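A sketch of the final step described in this record: a region judged to be a merged cell is split with a seeded watershed. Seeding from local maxima of the distance transform is a common choice and an assumption here, not necessarily the authors' exact seeding strategy.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_merged_cell(mask):
    """mask: boolean array of one merged-cell region; returns a label image of split cells."""
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=5, labels=mask.astype(int))
    seeds = np.zeros_like(mask, dtype=int)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one seed per local maximum
    return watershed(-distance, seeds, mask=mask)
```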
10.
Med Biol Eng Comput ; 56(4): 709-720, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28891000

ABSTRACT

Dengue fever detection and classification play a vital role due to recent outbreaks of different kinds of dengue fever, and recent advances in microarray technology can be employed for such classification. Several studies have established that the gene selection phase plays a significant role in classifier performance. The current study focused on detecting two different variants, namely dengue fever (DF) and dengue hemorrhagic fever (DHF). A modified bag-of-features method is proposed to select the most promising genes for the classification process. Afterward, a modified cuckoo search optimization algorithm is employed to train the artificial neural network (ANN-MCS) to classify unknown subjects into three different classes: DF, DHF, and a third class containing convalescent and normal cases. The proposed method has been compared with three well-known classifiers, namely a multilayer perceptron feed-forward network (MLP-FFN), an artificial neural network (ANN) trained with cuckoo search (ANN-CS), and an ANN trained with particle swarm optimization (ANN-PSO). Experiments were carried out with different numbers of clusters for the initial bag-of-features-based feature selection phase. After obtaining the reduced dataset, the hybrid ANN-MCS model was employed for the classification process. The results were compared in terms of confusion matrix-based performance metrics and indicated a highly statistically significant improvement of the proposed classifier over the traditional ANN-CS model.


Subjects
Computational Biology/methods; Dengue; Gene Expression Profiling/methods; Algorithms; Dengue/classification; Dengue/diagnosis; Dengue/genetics; Dengue/metabolism; Diagnosis, Computer-Assisted; Humans; Neural Networks, Computer
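A rough sketch of the feature-selection idea only: genes are clustered (the bag-of-features-like step) and one representative gene per cluster is kept, giving a reduced expression matrix for the downstream classifier. The modified cuckoo search used to train the network in the paper is not reproduced; a standard MLP stands in, and all settings are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def select_genes(expr, n_clusters=50):
    """expr: (n_samples, n_genes) expression matrix; returns indices of kept genes."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(expr.T)    # cluster genes, not samples
    kept = []
    for c in range(n_clusters):                                  # gene closest to each centroid
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(expr.T[members] - km.cluster_centers_[c], axis=1)
        kept.append(members[np.argmin(d)])
    return np.array(kept)

# genes = select_genes(expr)
# clf = MLPClassifier(max_iter=1000).fit(expr[:, genes], class_labels)
```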
11.
Microsc Res Tech ; 80(4): 419-429, 2017 Apr.
Article in English | MEDLINE | ID: mdl-27901295

ABSTRACT

The Local Polynomial Approximation (LPA) is a nonparametric filter that performs a pixel-wise polynomial fit over a certain neighborhood. This filter can be supported by the Intersection of Confidence Intervals (ICI) rule as an adaptation algorithm to identify the most suitable neighborhood in which the polynomial assumptions provide a superior fit to the observations. The resulting LPA-ICI is considered a near-optimal de-noising filter; however, the ICI rule has several parameters that affect its performance. The current study applied an optimization algorithm, namely Particle Swarm Optimization (PSO), to determine the optimal ICI parameter values for de-noising microscopic images. As the ICI parameters depend on image structure, a bag-of-features classifier is used to classify the images into different classes based on their structure. A table of optimal ICI parameters per image class is then generated using LPA-ICI-PSO for further direct use without optimization. Based on the image category, this table can be used to obtain suitable optimal ICI parameters without running PSO, which guarantees less computational time along with optimal de-noising compared to the classical LPA-ICI, as established by the performance metrics. The experimental results confirm the superiority of the proposed LPA-ICI-PSO over the classical LPA-ICI filter.


Subjects
Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Microscopy/methods; Algorithms
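A minimal particle swarm optimisation loop for tuning de-noising parameters, as implied by this record. The LPA-ICI filter itself is abstracted behind the objective function; the bounds, swarm size, and inertia/acceleration coefficients are illustrative assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimise objective(params) over box bounds = [(lo1, hi1), (lo2, hi2), ...]."""
    lo, hi = np.array(bounds).T
    pos = np.random.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iters):
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

# objective(params) could be, e.g., -PSNR(denoise(params, noisy_image), clean_image);
# the resulting optimal parameters are then tabulated per image class for direct reuse.
```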
12.
Med Image Anal ; 17(7): 732-45, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23706754

ABSTRACT

Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone.


Subjects
Gestures; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Photography/methods; Robotics/methods; Surgery, Computer-Assisted/methods; Video Recording/methods; Algorithms; Image Enhancement/methods; Imaging, Three-Dimensional/methods; Motion; Reproducibility of Results; Sensitivity and Specificity; Suture Techniques
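A sketch of the second method described in this record (bag of spatio-temporal words): local spatio-temporal descriptors from each gesture clip are quantised against a learned dictionary and the resulting histograms are classified. The descriptor extractor is assumed and not shown, and an SVM stands in for the clip classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def learn_dictionary(training_descriptors, n_words=400):
    """training_descriptors: (n, d) stack of spatio-temporal descriptors from all clips."""
    return KMeans(n_clusters=n_words, n_init=4).fit(training_descriptors)

def clip_histogram(clip_descriptors, dictionary):
    """BoF histogram for one gesture clip."""
    words = dictionary.predict(clip_descriptors)
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

# X = [clip_histogram(d, dictionary) for d in per_clip_descriptors]
# gesture_clf = SVC(kernel='rbf').fit(X, gesture_labels)
```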
13.
J Pathol Inform ; 2: S4, 2011.
Article in English | MEDLINE | ID: mdl-22811960

ABSTRACT

Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image-understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images to capture the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, improving on a baseline annotation method based on support vector machines by 64% and 24%, respectively.
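A sketch of the latent-topic step: non-negative matrix factorisation of the image-by-visual-word count matrix yields per-image topic mixtures, which a probabilistic annotation model then links to histopathological terms. The number of topics and the simple similarity-weighted annotation scoring below are assumptions, not the paper's exact model.

```python
import numpy as np
from sklearn.decomposition import NMF

def latent_topics(bof_counts, n_topics=32):
    """bof_counts: (n_images, n_visual_words) non-negative BoF count matrix."""
    nmf = NMF(n_components=n_topics, init='nndsvda', max_iter=500)
    image_topics = nmf.fit_transform(bof_counts)     # per-image topic mixtures
    return nmf, image_topics

def annotation_scores(image_topics, train_topics, train_labels):
    """Score each of the 10 annotations for a new image from its topic-space neighbours.

    image_topics: (n_topics,) mixture of the query image;
    train_topics: (n_train, n_topics); train_labels: (n_train, 10) binary matrix.
    """
    sims = train_topics @ image_topics / (np.linalg.norm(train_topics, axis=1)
                                          * np.linalg.norm(image_topics) + 1e-12)
    weights = sims / sims.sum()
    return weights @ train_labels                     # one score per annotation term
```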
