Results 1 - 20 of 28
1.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366083

ABSTRACT

Accurately estimating respiratory rate (RR) has become essential for monitoring patients and the elderly. Hence, we propose a novel method that uses exact Gaussian process regression (EGPR)-assisted hybrid feature extraction and feature fusion based on photoplethysmography and electrocardiogram signals to improve the reliability of RR and uncertainty estimates. First, we obtain power spectral features and use a multi-phase feature model to compensate for insufficient input data. Then, we combine four different feature sets and choose features with high weights using robust neighborhood component analysis. The proposed EGPR algorithm provides a confidence interval representing the uncertainty. Therefore, the proposed EGPR algorithm, including hybrid feature extraction and weighted feature fusion, is a strong model with improved reliability for accurate RR estimation. Furthermore, the proposed EGPR methodology is, to our knowledge, among the few currently available that provide highly stable variation and confidence intervals. The proposed EGPR-MF, at 0.993 breaths per minute (bpm), and EGPR-feature fusion, at 1.064 bpm, show the lowest mean absolute error compared to the other models.
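The abstract does not include implementation details; the following is a minimal sketch of exact Gaussian process regression with an RBF kernel, the core mechanism that yields both a point estimate and a predictive variance (and hence a confidence interval). The kernel choice, length scale, noise level, and 1-D inputs are illustrative assumptions, not the paper's configuration.

```python
import math

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) covariance between two scalar inputs.
    return math.exp(-(a - b) ** 2 / (2 * length ** 2))

def gp_predict(xs, ys, x_star, noise=1e-6):
    # Exact GP posterior mean and variance at x_star.
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, x_star) for x in xs]
    alpha = solve(K, ys)   # K^-1 y
    v = solve(K, k_star)   # K^-1 k*
    mean = sum(k * a for k, a in zip(k_star, alpha))
    var = rbf(x_star, x_star) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, max(var, 0.0)
```

A 95% confidence interval then follows as mean plus or minus 1.96 times the square root of the variance; far from the training data the variance grows, signaling low confidence.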


Subject(s)
Respiratory Rate , Signal Processing, Computer-Assisted , Humans , Aged , Uncertainty , Reproducibility of Results , Photoplethysmography/methods , Algorithms , Heart Rate
2.
Sensors (Basel) ; 22(13)2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35808433

ABSTRACT

One of the most promising research directions shared by the healthcare industry and the scientific community is the application of AI to real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allows rapid learning progress and improves medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain, notably investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale input images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Component Analysis (PCA), called LR-PCA, is presented. This process helps to select the significant principal components (PCs) for use in classification. The proposed CAD system was examined using two public benchmark datasets, INbreast and mini-MIAS, and achieved the highest accuracies of 98.60% and 98.80%, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
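The three-channel pseudo-coloring step can be illustrated with a small sketch. Plain global histogram equalization stands in for CLAHE here, and a linear stretch stands in for the paper's pixel-wise intensity adjustment; both substitutions are simplifications, not the published preprocessing.

```python
def hist_equalize(img, levels=256):
    # Global histogram equalization, a simplified stand-in for CLAHE.
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    return [[round(cdf[p] / n * (levels - 1)) for p in row] for row in img]

def intensity_stretch(img, levels=256):
    # Linear pixel-wise intensity adjustment to the full dynamic range.
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    span = hi - lo or 1
    return [[round((p - lo) / span * (levels - 1)) for p in row] for row in img]

def pseudo_color(img):
    # Channel 1: original grayscale; channel 2: equalized; channel 3: stretched.
    eq, st = hist_equalize(img), intensity_stretch(img)
    return [[(img[r][c], eq[r][c], st[r][c]) for c in range(len(img[0]))]
            for r in range(len(img))]
```

The resulting three-channel image matches the RGB input shape expected by standard pretrained CNN backbones.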


Subject(s)
Breast Neoplasms , Neural Networks, Computer , Breast Neoplasms/diagnostic imaging , Female , Humans , Logistic Models , Machine Learning , Mammography/methods
3.
Appl Intell (Dordr) ; 51(5): 2890-2907, 2021.
Article in English | MEDLINE | ID: mdl-34764573

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a harmful respiratory disease that emerged as a previously unknown illness in Wuhan, Hubei Province, China, at the end of 2019 and rapidly spread worldwide. The World Health Organization (WHO) declared the coronavirus outbreak a pandemic in the second week of March 2020. Simultaneous deep learning detection and classification of COVID-19 from full-resolution digital X-ray images is key to efficiently assisting patients by enabling physicians to reach fast, accurate diagnostic decisions. In this paper, a simultaneous deep learning computer-aided diagnosis (CAD) system based on the YOLO predictor is proposed that can detect and diagnose COVID-19, differentiating it from eight other respiratory diseases: atelectasis, infiltration, pneumothorax, masses, effusion, pneumonia, cardiomegaly, and nodules. The proposed CAD system was assessed via five-fold tests on the multi-class prediction problem using two different databases of chest X-ray images: COVID-19 and ChestX-ray8. The proposed CAD system was trained with an annotated set of 50,490 chest X-ray images. The regions of the X-ray images with lesions suspected of being due to COVID-19 were simultaneously detected and classified end-to-end via the proposed CAD predictor, achieving overall detection and classification accuracies of 96.31% and 97.40%, respectively. Most test images from patients with confirmed COVID-19 and other respiratory diseases were correctly predicted, achieving an average intersection over union (IoU) greater than 90%. Applying the deep learning regularizers of data balancing and augmentation improved COVID-19 diagnostic performance by 6.64% and 12.17% in terms of overall accuracy and F1-score, respectively. A diagnosis from an individual chest X-ray image takes the proposed CAD system 0.0093 s, i.e., about 108 frames/s (FPS), which is close to real time. The proposed deep learning CAD system can reliably differentiate COVID-19 from other respiratory diseases, and appears to be a tool that could practically assist health care systems, patients, and physicians.
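The reported IoU metric is computed per predicted bounding box against its ground-truth box; a minimal sketch of the standard computation for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a fixed threshold; the paper reports averages above 90%.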

4.
Adv Exp Med Biol ; 1213: 59-72, 2020.
Article in English | MEDLINE | ID: mdl-32030663

ABSTRACT

For computer-aided diagnosis (CAD), detection, segmentation, and classification from medical imagery are three key components needed to efficiently assist physicians in reaching an accurate diagnosis. In this chapter, a fully integrated CAD system based on deep learning is presented to diagnose breast lesions from digital X-ray mammograms, involving detection, segmentation, and classification. To automatically detect breast lesions from mammograms, a regional deep learning approach called You-Only-Look-Once (YOLO) is used. To segment breast lesions, the full resolution convolutional network (FrCN), a novel deep segmentation model, is implemented and used. Finally, three deep learning classifiers, a regular feedforward CNN, ResNet-50, and InceptionResNet-V2, are separately adopted to classify the detected and segmented breast lesions as either benign or malignant. To evaluate the integrated CAD system for detection, segmentation, and classification, the publicly available and annotated INbreast database is used over five-fold cross-validation tests. The YOLO-based detection achieved an accuracy of 97.27%, a Matthews correlation coefficient (MCC) of 93.93%, and an F1-score of 98.02%. Breast lesion segmentation via FrCN achieved an overall accuracy of 92.97%, an MCC of 85.93%, a Dice score (F1-score) of 92.69%, and a Jaccard similarity coefficient of 86.37%. The detected and segmented breast lesions were classified via CNN, ResNet-50, and InceptionResNet-V2, achieving average overall accuracies of 88.74%, 92.56%, and 95.32%, respectively. The evaluation results across all stages of detection, segmentation, and classification show that the integrated CAD system outperforms the latest conventional deep learning methodologies. We conclude that our CAD system could assist radiologists across all stages of detection, segmentation, and classification in the diagnosis of breast lesions.
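The evaluation metrics named above (accuracy, MCC, Dice/F1, Jaccard) all derive from the same confusion-matrix counts; a small sketch, with the counts in the test being hypothetical:

```python
import math

def seg_metrics(tp, fp, fn, tn):
    # Standard segmentation/classification metrics from confusion counts.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    dice = 2 * tp / (2 * tp + fp + fn)   # identical to the F1-score
    jaccard = tp / (tp + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": accuracy, "dice": dice, "jaccard": jaccard, "mcc": mcc}
```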


Subject(s)
Breast Neoplasms/diagnostic imaging , Deep Learning , Diagnosis, Computer-Assisted , Image Interpretation, Computer-Assisted , Mammography/methods , Humans
5.
J Xray Sci Technol ; 26(5): 727-746, 2018.
Article in English | MEDLINE | ID: mdl-30056442

ABSTRACT

BACKGROUND: Accurate measurement of bone mineral density (BMD) in dual-energy X-ray absorptiometry (DXA) is essential for proper diagnosis of osteoporosis. Calculation of BMD requires precise bone segmentation and subtraction of soft tissue absorption. Femur segmentation remains a challenge, as many existing methods fail to correctly distinguish femur from soft tissue. Reasons for this failure include low contrast and noise in DXA images, bone shape variability, and inconsistent X-ray beam penetration and attenuation, which cause shadowing effects and person-to-person variation. OBJECTIVE: To present a new method, the Pixel Label Decision Tree (PLDT), and test whether it achieves more accurate femur segmentation in DXA imaging. METHODS: PLDT mainly involves feature extraction and selection. Unlike photographic images, X-ray images include features both on the surface and inside an object. To reveal hidden patterns in DXA images, PLDT generates seven new feature maps from the existing high energy (HE) and low energy (LE) X-ray features and determines the best feature set for the model. The performance of PLDT in femur segmentation is compared with that of three widely used medical image segmentation algorithms: Global Threshold (GT), Region Growing Threshold (RGT), and artificial neural networks (ANN). RESULTS: PLDT achieved higher femur segmentation accuracy in DXA imaging (91.4%) than GT (68.4%), RGT (76.0%), or ANN (84.4%). CONCLUSIONS: The study demonstrated that PLDT outperformed the other conventional segmentation techniques on DXA images. Improved segmentation should enable more accurate computation of BMD, which in turn should improve the clinical diagnosis of osteoporosis.
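The paper's seven feature maps are not enumerated in the abstract; the sketch below shows three hypothetical per-pixel maps (difference, ratio, log-ratio) of the kind that can be derived from paired high-energy and low-energy DXA images:

```python
import math

def dxa_feature_maps(he, le, eps=1e-6):
    # Illustrative per-pixel feature maps from HE/LE image pairs.
    # These three are hypothetical examples, not the published set of seven.
    rows, cols = len(he), len(he[0])
    diff = [[he[r][c] - le[r][c] for c in range(cols)] for r in range(rows)]
    ratio = [[he[r][c] / (le[r][c] + eps) for c in range(cols)]
             for r in range(rows)]
    logr = [[math.log((he[r][c] + eps) / (le[r][c] + eps)) for c in range(cols)]
            for r in range(rows)]
    return diff, ratio, logr
```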


Subject(s)
Absorptiometry, Photon/methods , Decision Trees , Femur/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Machine Learning , Humans , Osteoporosis/diagnostic imaging
6.
J Xray Sci Technol ; 26(3): 395-412, 2018.
Article in English | MEDLINE | ID: mdl-29562584

ABSTRACT

BACKGROUND: In general, the image quality of the high and low energy images of dual-energy X-ray absorptiometry (DXA) suffers from noise due to the small X-ray dose used. Denoising DXA images is therefore a key step in improving the bone mineral density map, which is derived from a pair of high and low energy images, and could further improve the accuracy of diagnosing bone fractures and osteoporosis. OBJECTIVE: This study aims to develop and test a new technique to improve the quality, remove the noise, and preserve the edges and fine details of real DXA images. METHODS: A denoising technique for high and low energy DXA images using a non-local means (NLM) filter is presented. The source and detector noises of a DXA system were modeled for both high and low energy DXA images. The optimized parameters of the NLM filter were then derived using experimental data from CIRS-BFP phantoms. After that, the optimized NLM was tested and verified using DXA images of the phantoms and of real human spines and femurs. RESULTS: Quantitative evaluation showed average signal-to-noise ratio improvements of 24.22% and 34.43% for real high and low energy spine images, respectively, and of about 15.26% and 13.55% for the high and low energy femur images. Qualitative visual inspection of both phantom and real structures also showed markedly improved quality and reduced noise while preserving edges in both high and low energy images. The proposed NLM outperforms the conventional anisotropic diffusion filter (ADF) and median techniques for all phantom and real human DXA images. CONCLUSIONS: Our work suggests that NLM denoising could be a key preprocessing method for clinical DXA imaging.
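The abstract's NLM parameters are not given; a minimal 1-D sketch of the non-local means idea, where each sample is replaced by a similarity-weighted average over a search window, with the patch size, window, and smoothing parameter h chosen arbitrarily:

```python
import math

def nlm_1d(signal, patch=1, search=5, h=0.5):
    # Non-local means for a 1-D signal: each sample becomes a weighted
    # average of samples whose surrounding patches look similar.
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d = 0.0  # squared distance between the patches around i and j
            for k in range(-patch, patch + 1):
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

The 2-D image version is identical in spirit, with square patches and a square search window around each pixel.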


Subject(s)
Absorptiometry, Photon/methods , Algorithms , Image Processing, Computer-Assisted/methods , Absorptiometry, Photon/instrumentation , Femur/diagnostic imaging , Humans , Image Processing, Computer-Assisted/instrumentation , Phantoms, Imaging , Signal-To-Noise Ratio , Spine/diagnostic imaging
7.
Diagnostics (Basel) ; 14(12)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38928680

ABSTRACT

Rapid advancements in artificial intelligence (AI) and machine learning (ML) are currently transforming the field of diagnostics, enabling unprecedented accuracy and efficiency in disease detection, classification, and treatment planning. This Special Issue, entitled "Artificial Intelligence Advances for Medical Computer-Aided Diagnosis", presents a curated collection of cutting-edge research that explores the integration of AI and ML technologies into various diagnostic modalities. The contributions presented here highlight innovative algorithms, models, and applications that pave the way for improved diagnostic capabilities across a range of medical fields, including radiology, pathology, genomics, and personalized medicine. By showcasing both theoretical advancements and practical implementations, this Special Issue aims to provide a comprehensive overview of current trends and future directions in AI-driven diagnostics, fostering further research and collaboration in this dynamic and impactful area of healthcare. We have published a total of 12 contributions in this Special Issue, all collected between March 2023 and December 2023, comprising 1 editorial cover letter, 9 regular research articles, 1 review article, and 1 article categorized as "other".

8.
Diagnostics (Basel) ; 14(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38928696

ABSTRACT

Alzheimer's disease (AD) is a neurological disorder that significantly impairs cognitive function, leading to memory loss and eventually death. AD progresses through three stages: the early stage, mild cognitive impairment (MCI, the middle stage), and dementia. Early diagnosis of Alzheimer's disease is crucial and can improve survival among patients. Diagnosing AD through regular checkups and manual examination alone is challenging. Advances in computer-aided diagnosis (CAD) systems have led to the development of various artificial intelligence and deep learning-based methods for rapid AD detection. This survey explores the different modalities, feature extraction methods, datasets, machine learning techniques, and validation methods used in AD detection. We reviewed 116 relevant papers from repositories including Elsevier (45), IEEE (25), Springer (19), Wiley (6), PLOS One (5), MDPI (3), World Scientific (3), Frontiers (3), PeerJ (2), Hindawi (2), IOS Press (1), and other sources (2). The review is presented in tables for ease of reference, allowing readers to quickly grasp the key findings of each study. Additionally, this review addresses the challenges in the current literature and emphasizes the importance of interpretability and explainability in understanding deep learning model predictions. The primary goal is to assess existing techniques for AD identification and highlight obstacles to guide future research.

9.
Diagnostics (Basel) ; 14(11)2024 May 28.
Article in English | MEDLINE | ID: mdl-38893643

ABSTRACT

The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. We invited two experienced radiologists to conduct a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's Kappa (κ) of 0.708 (0.562-0.841), matching, and in four cases exceeding, that of the individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between the DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
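The reported Cohen's kappa corrects observed agreement for the agreement expected by chance; a minimal sketch of the computation for two raters over the same items:

```python
def cohens_kappa(a, b):
    # Cohen's kappa between two raters' label lists of equal length.
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (po - pe) / (1 - pe) if pe != 1 else 1.0
```

Values near 1 indicate near-perfect agreement; 0 indicates agreement no better than chance.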

10.
Heliyon ; 10(10): e30756, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38784532

ABSTRACT

Sentiment analysis has broad use in diverse real-world contexts, particularly in the online movie industry and other e-commerce platforms. The main objective of our work is to examine word-order information and analyze the content of texts by exploring the hidden meanings of words in online movie text reviews. This study presents an enhanced method of representing text together with a computationally feasible deep learning model, the PEW-MCAB model. The methodology categorizes sentiments by considering the full written text as a unified piece. The feature vector representation is processed using an enhanced text representation combining positional embeddings with pretrained GloVe embedding vectors (PEW). These features are learned by a multichannel convolutional neural network (MCNN), which is subsequently integrated into an Attention-based Bidirectional Long Short-Term Memory (AB) model. The experiments examine positive and negative sentiment in online movie text reviews. Four datasets were used to evaluate the model. When tested on the IMDB, MR (2002), MRC (2004), and MR (2005) datasets, the PEW-MCAB algorithm attained accuracy rates of 90.3%, 84.1%, 85.9%, and 87.1%, respectively. When implemented in practical settings, the proposed structure shows a great deal of promise for efficacy and competitiveness.
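The abstract does not define the PEW construction; the sketch below uses the standard sinusoidal positional embedding as a generic stand-in for the positional component that would be combined with the pretrained GloVe word vectors:

```python
import math

def positional_embedding(seq_len, dim):
    # Sinusoidal positional embeddings (Transformer-style); a generic
    # stand-in, since the paper's exact positional scheme is unspecified.
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe
```

Each token's positional row would then be summed with (or concatenated to) its GloVe vector before entering the convolutional channels.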

11.
Bioengineering (Basel) ; 11(5)2024 May 10.
Article in English | MEDLINE | ID: mdl-38790344

ABSTRACT

The analysis of body motion is a valuable tool in the assessment and diagnosis of gait impairments, particularly those related to neurological disorders. In this study, we propose a novel automated system that leverages artificial intelligence to efficiently analyze gait impairment from video-recorded images. The proposed methodology encompasses three key aspects. First, we generate a novel one-dimensional representation of each silhouette image, termed a silhouette sinogram, by computing the distance and angle between the centroid and each detected boundary point. This enables us to exploit relative variations in motion at different angles to detect gait patterns. Second, a one-dimensional convolutional neural network (1D CNN) model is developed and trained on consecutive silhouette sinogram signals to capture spatiotemporal information via assisted knowledge learning. This allows the network to capture a broader context and the temporal dependencies within the gait cycle, enabling a more accurate diagnosis of gait abnormalities. Training and evaluation are conducted on the publicly accessible INIT GAIT database. Finally, two evaluation schemes are employed: one operating on individual silhouette frames and the other at the subject level using a majority voting technique. The proposed method showed strong gait impairment recognition, with overall F1-scores of 100%, 90.62%, and 77.32% when evaluated on sinogram signals, and 100%, 100%, and 83.33% when evaluated at the subject level, for cases involving two, four, and six gait abnormalities, respectively. In conclusion, by comparing the observed locomotor function to the conventional gait pattern typically seen in healthy individuals, the proposed approach allows a quantitative and non-invasive evaluation of locomotion.
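The silhouette sinogram construction described above (distance and angle from the centroid to each boundary point) can be sketched directly; boundary extraction itself is assumed to be done elsewhere, and sorting by angle is one plausible way to linearize the signature:

```python
import math

def silhouette_sinogram(boundary):
    # Convert a silhouette's boundary points into a 1-D signature of
    # (angle, distance) pairs measured from the centroid, sorted by angle.
    n = len(boundary)
    cx = sum(x for x, _ in boundary) / n
    cy = sum(y for _, y in boundary) / n
    sig = [(math.atan2(y - cy, x - cx), math.hypot(x - cx, y - cy))
           for x, y in boundary]
    return sorted(sig)
```

Stacking these signatures over consecutive frames yields the 2-D input that a 1D CNN can scan for spatiotemporal gait patterns.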

12.
Diagnostics (Basel) ; 13(4)2023 Feb 12.
Article in English | MEDLINE | ID: mdl-36832175

ABSTRACT

We would like to express our gratitude to all authors who contributed to the Special Issue of "Artificial Intelligence Advances for Medical Computer-Aided Diagnosis" by providing their excellent and recent research findings for AI-based medical diagnosis [...].

13.
Diagnostics (Basel) ; 13(6)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36980428

ABSTRACT

Magnetic resonance imaging (MRI) is an efficient, non-invasive diagnostic imaging tool for a variety of disorders. In modern MRI systems, the scanning procedure is time-consuming, which leads to problems with patient comfort and causes motion artifacts. Accelerated or parallel MRI has the potential to minimize patient stress as well as reduce scanning time and medical costs. In this paper, a new deep learning MR image reconstruction framework is proposed to provide more accurate reconstructed MR images from under-sampled or aliased acquisitions. The reconstruction model is based on conditional generative adversarial networks (CGANs), where the generator is an encoder-decoder U-Net. A hybrid spatial and k-space loss function is also proposed to improve reconstructed image quality by minimizing the L1 distance in the spatial and frequency domains simultaneously. The framework is compared directly when CGAN and U-Net are each trained with the proposed hybrid loss function versus the conventional L1-norm, and is evaluated against the traditional SENSE reconstruction technique using the metrics of structural similarity (SSIM) and peak signal-to-noise ratio (PSNR). To fine-tune and evaluate the proposed methodology, the public multi-coil k-space OCMR dataset for cardiovascular MR imaging is used. The proposed framework achieves better image reconstruction quality than SENSE in terms of PSNR, by 6.84 and 9.57 when U-Net and CGAN are used, respectively, while the SSIM of the reconstructed MR images is comparable to that provided by the SENSE algorithm. Comparing the proposed hybrid loss function against the simple L1-norm, the reconstruction performance improves by 6.84 and 9.57 for U-Net and CGAN, respectively. In conclusion, the proposed framework using CGAN provides the best reconstruction performance compared with U-Net or the conventional SENSE reconstruction technique, and appears useful for practical cardiac image reconstruction since it provides better image quality in terms of SSIM and PSNR.
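The PSNR figures quoted above follow from the standard definition over the mean squared error; a minimal sketch:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    # Peak signal-to-noise ratio in dB between two same-sized images.
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_a, flat_b)) / len(flat_a)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
```

Higher is better; identical images give infinite PSNR, and a fixed offset of 16 gray levels on 8-bit images gives roughly 24 dB.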

14.
Data Brief ; 50: 109491, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37636132

ABSTRACT

The term quality of life (QoL) refers to a wide range of multifaceted concepts that often involve subjective assessments of both positive and negative aspects of life. QoL is difficult to quantify, as the term has varied meanings in different academic areas and may carry different connotations in different circumstances. The sectors most commonly associated with QoL, however, are health, education, environmental quality, personal security, civic engagement, and work-life balance. An emerging issue that falls under environmental quality is visual pollution (VP), which, as detailed in this study, refers to disruptive presences that limit visual ability on public roads, with an emphasis on excavation barriers, potholes, and dilapidated sidewalks. Quantifying VP has always been difficult due to its subjective nature and the lack of a consistent set of rules for its systematic assessment. This emphasizes the need for research and module development that will allow government agencies to automatically predict and detect VP. Our dataset was collected from different regions in the Kingdom of Saudi Arabia (KSA) via the Ministry of Municipal and Rural Affairs and Housing (MOMRAH) as part of a VP campaign to improve Saudi Arabia's urban landscape. It consists of 34,460 RGB images separated into three distinct classes: excavation barriers, potholes, and dilapidated sidewalks. To annotate all images for the detection (i.e., bounding box) and classification (i.e., classification label) tasks, a deep active learning (DAL) strategy is used, where an initial 1,200 VP images (i.e., 400 images per class) were manually annotated by four experts. Images with more than one object increase the number of training object ROIs, recorded as 8,417 for excavation barriers, 25,975 for potholes, and 7,412 for dilapidated sidewalks. The MOMRAH dataset is publicly published to enrich the research domain with a new VP image dataset.

15.
Diagnostics (Basel) ; 13(8)2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37189517

ABSTRACT

Identifying Human Epithelial Type 2 (HEp-2) mitotic cells is a crucial procedure in anti-nuclear antibody (ANA) testing, the standard protocol for detecting connective tissue diseases (CTD). Due to the low throughput and the labor-intensive, subjective nature of manual ANA screening, there is a need to develop a reliable HEp-2 computer-aided diagnosis (CAD) system. Automatic detection of mitotic cells in microscopic HEp-2 specimen images is an essential step in supporting the diagnosis process and enhancing the throughput of this test. This work proposes a deep active learning (DAL) approach to overcome the cell labeling challenge. Moreover, deep learning detectors are tailored to identify mitotic cells directly in entire microscopic HEp-2 specimen images, avoiding the segmentation step. The proposed framework is validated on the I3A Task-2 dataset over 5-fold cross-validation trials. Using the YOLO predictor, promising mitotic cell prediction results are achieved, with an average of 90.011% recall, 88.307% precision, and 81.531% mAP, while average scores of 86.986% recall, 85.282% precision, and 78.506% mAP are obtained using the Faster R-CNN predictor. Employing the DAL method over four labeling rounds effectively enhances the accuracy of the data annotation and hence improves the prediction performance. The proposed framework could be practically applied to support medical personnel in making rapid and accurate decisions about the presence of mitotic cells.

16.
J Adv Res ; 48: 191-211, 2023 06.
Article in English | MEDLINE | ID: mdl-36084812

ABSTRACT

INTRODUCTION: Pneumonia is a microbial infection that causes chronic inflammation of the human lung cells. Chest X-ray imaging is the most common screening approach for detecting pneumonia in its early stages. Because chest X-ray images are often blurry and poorly illuminated, a strong feature extraction approach is required for promising identification performance. OBJECTIVES: A new hybrid explainable deep learning framework is proposed for accurate pneumonia identification using chest X-ray images. METHODS: The proposed hybrid workflow fuses the capabilities of ensemble convolutional networks and the Transformer Encoder mechanism. The ensemble learning backbone extracts strong features from the raw input X-ray images in two different scenarios: ensemble A (DenseNet201, VGG16, and GoogleNet) and ensemble B (DenseNet201, InceptionResNetV2, and Xception). The Transformer Encoder is built on the self-attention mechanism with a multilayer perceptron (MLP) for accurate disease identification. Visual explainable saliency maps are derived to highlight the regions of the input X-ray images most critical to the prediction. End-to-end training of the proposed deep learning models is performed for the binary and multi-class classification scenarios. RESULTS: The proposed hybrid deep learning model recorded 99.21% overall accuracy and F1-score for the binary classification task, and 98.19% accuracy with a 97.29% F1-score for the multi-class task. For the ensemble binary identification scenario, ensemble A recorded 97.22% accuracy and a 97.14% F1-score, while ensemble B achieved 96.44% for both accuracy and F1-score. For the ensemble multi-class identification scenario, ensemble A recorded 97.2% accuracy and a 95.8% F1-score, while ensemble B recorded 96.4% accuracy and a 94.9% F1-score. CONCLUSION: The proposed hybrid deep learning framework provides promising, explainable identification performance compared with individual models, ensemble models, and recent AI models in the literature. The code is available here: https://github.com/chiagoziemchima/Pneumonia_Identificaton.
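The Transformer Encoder's core operation, scaled dot-product self-attention, can be sketched in a few lines; this is a single head with no learned projections, purely illustrative of the mechanism, not the paper's architecture:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

In a full encoder block this is followed by residual connections, layer normalization, and the MLP mentioned in the abstract.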


Subject(s)
Pneumonia , Humans , X-Rays , Pneumonia/diagnostic imaging , Inflammation , Thorax , Electric Power Supplies
17.
J King Saud Univ Comput Inf Sci ; 35(7): 101596, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37275558

ABSTRACT

COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illness, and complications may result in death. Using medical images to distinguish COVID-19 from essentially identical thoracic anomalies is challenging because manual reading is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep learning framework based on deep feature concatenation and a multi-head self-attention network. Feature concatenation involves fine-tuning the pre-trained backbone models DenseNet, VGG-16, and InceptionV3, each trained on the large-scale ImageNet dataset, while the multi-head self-attention network is adopted for a performance gain. End-to-end training and evaluation are conducted on the COVID-19_Radiography_Dataset for the binary and multi-class classification scenarios. The proposed model achieved overall accuracies of 96.33% and 98.67% and F1-scores of 92.68% and 98.67% for the multi-class and binary classification scenarios, respectively. In addition, this study highlights the difference in accuracy (98.0% vs. 96.33%) and F1-score (97.34% vs. 95.10%) between feature concatenation and the best-performing individual model. Furthermore, saliency maps from the employed attention mechanism, focusing on the abnormal regions, are presented using explainable artificial intelligence (XAI) technology. The proposed framework provided better COVID-19 prediction results, outperforming other recent deep learning models on the same dataset.

18.
Diagnostics (Basel); 13(6), 2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36980351

ABSTRACT

Chest X-ray (CXR) is considered the most widely used modality for detecting and monitoring various thoracic findings, including lung carcinoma and other pulmonary lesions. However, X-ray imaging shows particular limitations when detecting primary and secondary tumors and is prone to reading errors due to limited resolution and disagreement between radiologists. To address these issues, we developed a deep-learning-based automatic detection algorithm (DLAD) to automatically detect and localize suspicious lesions on CXRs. Five radiologists were invited to retrospectively evaluate 300 CXR images from a specialized oncology center, and the performance of the individual radiologists was subsequently compared with that of DLAD. The proposed DLAD achieved significantly higher sensitivity (0.910 (0.854-0.966)) than all assessed radiologists (RAD 1: 0.290 (0.201-0.379), p < 0.001; RAD 2: 0.450 (0.352-0.548), p < 0.001; RAD 3: 0.670 (0.578-0.762), p < 0.001; RAD 4: 0.810 (0.733-0.887), p = 0.025; RAD 5: 0.700 (0.610-0.790), p < 0.001). The DLAD specificity (0.775 (0.717-0.833)) was significantly lower than that of all assessed radiologists (RAD 1: 1.000 (0.984-1.000), p < 0.001; RAD 2: 0.970 (0.946-1.000), p < 0.001; RAD 3: 0.980 (0.961-1.000), p < 0.001; RAD 4: 0.975 (0.953-0.997), p < 0.001; RAD 5: 0.995 (0.985-1.000), p < 0.001). The study results demonstrate that the proposed DLAD could be utilized as a decision-support system to reduce radiologists' false negative rate.
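The metrics above follow the usual definitions, sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP), each with a 95% confidence interval. A small sketch using the normal-approximation (Wald) interval follows; the counts are hypothetical, chosen only so the example reproduces the reported DLAD point estimates, and may differ from the study's actual case counts:

```python
import math

def rate_with_ci(successes, total, z=1.96):
    """Proportion with a normal-approximation 95% CI, clipped to [0, 1]."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 91 true positives of 100 positive cases,
# 155 true negatives of 200 negative cases.
sens, sens_lo, sens_hi = rate_with_ci(91, 100)
spec, spec_lo, spec_hi = rate_with_ci(155, 200)
print(f"sensitivity {sens:.3f} ({sens_lo:.3f}-{sens_hi:.3f})")
print(f"specificity {spec:.3f} ({spec_lo:.3f}-{spec_hi:.3f})")
```

With these assumed counts the printed intervals match the DLAD figures quoted in the abstract (0.910 (0.854-0.966) and 0.775 (0.717-0.833)); studies often use an exact or Wilson interval instead, which would shift the bounds slightly.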

19.
Diagnostics (Basel); 13(6), 2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36980412

ABSTRACT

Melanoma, a highly dangerous kind of skin cancer, is distinguished by uncontrolled cell multiplication. Melanoma detection is of the utmost significance in clinical practice because of its atypical border structure and the numerous types of tissue it can involve. Despite the numerous approaches proposed in the literature, identifying melanoma in color images remains challenging. In this research, we present a comprehensive system for the efficient and precise classification of skin lesions. The framework includes preprocessing, segmentation, feature extraction, and classification modules. Preprocessing with DullRazor eliminates hair artifacts from skin images. Next, Fully Connected Neural Network (FCNN) semantic segmentation extracts precise and distinct Regions of Interest (ROIs). We then extract relevant skin-image features from the ROIs using an enhanced Sobel Directional Pattern (SDP); for skin-image analysis, SDP outperforms the ABCD rule. Finally, a stacked Restricted Boltzmann Machine (RBM) classifies the skin ROIs. The experiments were conducted on five datasets: Pedro Hispano Hospital (PH2), International Skin Imaging Collaboration (ISIC 2016), ISIC 2017, Dermnet, and DermIS, achieving accuracies of 99.8%, 96.5%, 95.5%, 87.9%, and 97.6%, respectively. The results show that a stack of Restricted Boltzmann Machines is superior for categorizing skin cancer types when combined with the proposed innovative SDP.
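The enhanced SDP descriptor itself is not specified in this abstract. As a rough illustration of the general idea behind Sobel-based directional patterns, a toy descriptor can quantize Sobel gradient orientation into a magnitude-weighted histogram over an ROI; this is a simplified stand-in, not the paper's method:

```python
import numpy as np

def sobel_direction_histogram(img, bins=8):
    """Toy directional descriptor: magnitude-weighted histogram of
    Sobel gradient orientations (a simplified stand-in for SDP)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient Sobel kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    # Naive valid-mode 3x3 convolution (clear, not fast).
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    # Quantize orientation into `bins` sectors, weight by magnitude.
    idx = np.minimum((ang / (2 * np.pi) * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    for b in range(bins):
        hist[b] = mag[idx == b].sum()
    total = hist.sum()
    return hist / total if total > 0 else hist

# A vertical edge produces gradients pointing along +x (orientation 0),
# so the first orientation bin should dominate.
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
descriptor = sobel_direction_histogram(edge)
```

A real pipeline would compute such descriptors per ROI and feed them, alongside the other extracted features, to the stacked-RBM classifier.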

20.
Comput Math Methods Med; 2022: 4593330, 2022.
Article in English | MEDLINE | ID: mdl-35069782

ABSTRACT

Drosophila melanogaster is an important genetic model organism used extensively in medical and biological studies. About 61% of known human genes have a recognizable match in the genetic code of Drosophila flies, and 50% of fly protein sequences have mammalian analogues. Recently, several investigations have been conducted in Drosophila to study the functions of specific genes expressed in the central nervous system, heart, liver, and kidney. The outcomes of this research in Drosophila also serve as a unique tool for studying human-related diseases. This article presents a novel automated system to classify the gender of Drosophila flies from microscopic images (ventral view). The proposed system takes an image as input and converts it to grayscale to extract texture features. Then, machine learning (ML) classifiers such as support vector machines (SVM), Naive Bayes (NB), and K-nearest neighbour (KNN) are used to classify each Drosophila fly as male or female. The proposed model is evaluated on a real microscopic image dataset, and the results show that the KNN classifier reaches 90% accuracy, higher than that of the SVM classifier.
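The final classification step, majority-vote K-nearest neighbours over texture-feature vectors, can be sketched from scratch in a few lines. The two-dimensional toy feature vectors and labels below are illustrative only, not the study's actual texture features:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Minimal K-nearest-neighbour classifier: Euclidean distance
    to all training vectors, then majority vote among the k nearest."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[counts.argmax()]

# Hypothetical texture-feature vectors for labelled flies.
train_X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
train_y = np.array(["female", "female", "female",
                    "male", "male", "male"])

pred_a = knn_predict(train_X, train_y, np.array([0.05, 0.05]))
pred_b = knn_predict(train_X, train_y, np.array([1.05, 1.00]))
```

In practice the feature vectors would be the grayscale texture descriptors mentioned above, and k would be tuned by cross-validation.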


Subject(s)
Drosophila melanogaster/anatomy & histology , Drosophila melanogaster/classification , Machine Learning , Sex Determination Analysis/methods , Animals , Bayes Theorem , Computational Biology , Female , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Male , Microscopy , Sex Determination Analysis/statistics & numerical data , Support Vector Machine