Results 1 - 20 of 28
1.
Diagnostics (Basel) ; 14(12)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38928680

ABSTRACT

Rapid advancements in artificial intelligence (AI) and machine learning (ML) are transforming the field of diagnostics, enabling unprecedented accuracy and efficiency in disease detection, classification, and treatment planning. This Special Issue, entitled "Artificial Intelligence Advances for Medical Computer-Aided Diagnosis", presents a curated collection of cutting-edge research that explores the integration of AI and ML technologies into various diagnostic modalities. The contributions presented here highlight innovative algorithms, models, and applications that pave the way for improved diagnostic capabilities across a range of medical fields, including radiology, pathology, genomics, and personalized medicine. By showcasing both theoretical advancements and practical implementations, this Special Issue aims to provide a comprehensive overview of current trends and future directions in AI-driven diagnostics, fostering further research and collaboration in this dynamic and impactful area of healthcare. We have published a total of 12 articles in this Special Issue, all collected between March 2023 and December 2023: 1 editorial cover letter, 9 regular research articles, 1 review article, and 1 article categorized as "other".

2.
Diagnostics (Basel) ; 14(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38928696

ABSTRACT

Alzheimer's disease (AD) is a neurological disorder that significantly impairs cognitive function, leading to memory loss and eventually death. AD progresses through three stages: the early stage, mild cognitive impairment (MCI, the middle stage), and dementia. Early diagnosis of Alzheimer's disease is crucial and can improve survival rates among patients. Diagnosing AD through traditional methods such as regular checkups and manual examinations is challenging. Advances in computer-aided diagnosis (CAD) systems have led to the development of various artificial intelligence and deep-learning-based methods for rapid AD detection. This survey explores the different modalities, feature extraction methods, datasets, machine learning techniques, and validation methods used in AD detection. We reviewed 116 relevant papers from repositories including Elsevier (45), IEEE (25), Springer (19), Wiley (6), PLOS One (5), MDPI (3), World Scientific (3), Frontiers (3), PeerJ (2), Hindawi (2), IOS Press (1), and other multiple sources (2). The review is presented in tables for ease of reference, allowing readers to quickly grasp the key findings of each study. Additionally, this review addresses the challenges in the current literature and emphasizes the importance of interpretability and explainability in understanding deep learning model predictions. The primary goal is to assess existing techniques for AD identification and highlight obstacles to guide future research.

3.
Diagnostics (Basel) ; 14(11)2024 May 28.
Article in English | MEDLINE | ID: mdl-38893643

ABSTRACT

The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, using the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. Two experienced radiologists conducted a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared against the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's kappa (κ) of 0.708 (0.562-0.841). This performance matched, and in four cases exceeded, that of the individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between the DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
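For readers unfamiliar with the agreement statistic quoted above, Cohen's kappa corrects raw rater agreement for the agreement expected by chance. A minimal NumPy sketch (illustrative only; the function name and interface are ours, not the study's):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.unique(np.concatenate([a, b]))
    p_observed = (a == b).mean()
    # chance agreement: product of each rater's marginal category frequencies
    p_expected = sum((a == c).mean() * (b == c).mean() for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)
```

Applied to two label vectors of BI-RADS classes, a value of 1 indicates perfect agreement and 0 indicates agreement no better than chance.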

4.
Bioengineering (Basel) ; 11(5)2024 May 10.
Article in English | MEDLINE | ID: mdl-38790344

ABSTRACT

The analysis of body motion is a valuable tool in the assessment and diagnosis of gait impairments, particularly those related to neurological disorders. In this study, we propose a novel automated system leveraging artificial intelligence to efficiently analyze gait impairment from video-recorded images. The proposed methodology encompasses three key aspects. First, we generate a novel one-dimensional representation of each silhouette image, termed a silhouette sinogram, by computing the distance and angle between the centroid and each detected boundary point. This enables us to effectively exploit relative variations in motion at different angles to detect gait patterns. Second, a one-dimensional convolutional neural network (1D CNN) model is developed and trained on consecutive silhouette sinogram signals to capture spatiotemporal information via assisted knowledge learning. This allows the network to capture a broader context and temporal dependencies within the gait cycle, enabling a more accurate diagnosis of gait abnormalities. Training and evaluation are conducted on the publicly accessible INIT GAIT database. Finally, two evaluation schemes are employed: one operating on individual silhouette frames and the other operating at the subject level via a majority voting technique. The proposed method achieved strong gait impairment recognition results, with overall F1-scores of 100%, 90.62%, and 77.32% when evaluated on sinogram signals, and 100%, 100%, and 83.33% when evaluated at the subject level, for cases involving two, four, and six gait abnormalities, respectively. In conclusion, by comparing the observed locomotor function to the conventional gait pattern typically seen in healthy individuals, the proposed approach allows for a quantitative and non-invasive evaluation of locomotion.
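The silhouette sinogram step can be sketched as follows. This is one illustrative NumPy reading of the description (centroid-to-boundary distances sampled by angle); the paper's exact boundary extraction and angular sampling may differ:

```python
import numpy as np

def silhouette_sinogram(mask, n_angles=360):
    """Reduce a binary silhouette to a 1-D signal of centroid-to-boundary
    distances indexed by angle (a sketch of the sinogram idea)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                   # silhouette centroid
    # boundary = foreground pixels with at least one background 4-neighbour
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    by, bx = np.nonzero(mask & ~interior)
    dist = np.hypot(by - cy, bx - cx)               # distance to each boundary point
    ang = np.arctan2(by - cy, bx - cx)              # angle to each boundary point
    # keep the farthest boundary point falling in each angular bin
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sinogram = np.zeros(n_angles)
    np.maximum.at(sinogram, bins, dist)
    return sinogram
```

For a circular silhouette the resulting signal is roughly constant at the radius; for a walking silhouette it varies with limb position, which is what the 1D CNN consumes frame by frame.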

5.
Heliyon ; 10(10): e30756, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38784532

ABSTRACT

Sentiment analysis has broad use in diverse real-world contexts, particularly in the online movie industry and other e-commerce platforms. The main objective of our work is to examine word order information and analyze the content of texts by exploring the hidden meanings of words in online movie text reviews. This study presents an enhanced method of representing text together with a computationally feasible deep learning model, namely the PEW-MCAB model. The methodology categorizes sentiment by considering the full written text as a unified piece. The feature vector representation uses an enhanced text representation called Positional Embedding and pretrained GloVe Embedding Vector (PEW). These features are learned by a multichannel convolutional neural network (MCNN), which is subsequently integrated into an Attention-based Bidirectional Long Short-Term Memory (AB) model. The experiments examine positive and negative sentiment in online movie text reviews. Four datasets were used to evaluate the model. When tested on the IMDB, MR (2002), MRC (2004), and MR (2005) datasets, the PEW-MCAB algorithm attained accuracy rates of 90.3%, 84.1%, 85.9%, and 87.1%, respectively. When implemented in practical settings, the proposed architecture shows a great deal of promise for efficacy and competitiveness.
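The PEW representation described above (pretrained GloVe vectors combined with positional embeddings) might look like the sketch below. We assume standard sinusoidal positional encodings and a hypothetical `glove` lookup table (vocabulary size x embedding dimension); the paper's positional scheme may differ:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional embedding (sin on even dims, cos on odd)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def pew_features(token_ids, glove):
    """Add positional encodings to pretrained word vectors for a token sequence."""
    emb = glove[token_ids]                       # (seq_len, d_model)
    return emb + positional_encoding(len(token_ids), emb.shape[1])
```

The resulting (sequence length, dimension) matrix is what a downstream multichannel CNN would consume.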

6.
Data Brief ; 50: 109491, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37636132

ABSTRACT

The term quality of life (QoL) refers to a wide range of multifaceted concepts that often involve subjective assessments of both positive and negative aspects of life. QoL is difficult to quantify, as the term has varied meanings in different academic areas and may carry different connotations in different circumstances. The sectors most commonly associated with QoL, however, are Health, Education, Environmental Quality, Personal Security, Civic Engagement, and Work-Life Balance. An emerging issue that falls under environmental quality is visual pollution (VP), which, as detailed in this study, refers to disruptive presences that limit visual ability on public roads, with an emphasis on excavation barriers, potholes, and dilapidated sidewalks. Quantifying VP has always been difficult due to its subjective nature and the lack of a consistent set of rules for its systematic assessment. This emphasizes the need for research and module development that will allow government agencies to automatically predict and detect VP. Our dataset was collected from different regions in the Kingdom of Saudi Arabia (KSA) via the Ministry of Municipal and Rural Affairs and Housing (MOMRAH) as part of a VP campaign to improve Saudi Arabia's urban landscape. It consists of 34,460 RGB images separated into three distinct classes: excavation barriers, potholes, and dilapidated sidewalks. To annotate all images for the detection (i.e., bounding box) and classification (i.e., classification label) tasks, a deep active learning (DAL) strategy is used, in which an initial 1,200 VP images (i.e., 400 images per class) are manually annotated by four experts. Images with more than one object increase the number of training object ROIs, which amount to 8,417 for excavation barriers, 25,975 for potholes, and 7,412 for dilapidated sidewalks. The MOMRAH dataset is publicly released to enrich the research domain with a new VP image dataset.

7.
J King Saud Univ Comput Inf Sci ; 35(7): 101596, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37275558

ABSTRACT

COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illnesses, and complications may result in death. Using medical images to detect COVID-19 from essentially identical thoracic anomalies is challenging because manual reading is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep-learning framework based on deep feature concatenation and a multi-head self-attention network. Feature concatenation involves fine-tuning the backbone models of DenseNet, VGG-16, and InceptionV3, each pre-trained on the large-scale ImageNet dataset, whereas the multi-head self-attention network is adopted for performance gain. End-to-end training and evaluation are conducted using the COVID-19_Radiography_Dataset for binary and multi-class classification scenarios. The proposed model achieved overall accuracies of 96.33% and 98.67% and F1-scores of 92.68% and 98.67% for the multi-class and binary classification scenarios, respectively. In addition, this study highlights the difference in accuracy (98.0% vs. 96.33%) and F1-score (97.34% vs. 95.10%) between feature concatenation and the best-performing individual model. Furthermore, a visual representation of the saliency maps of the employed attention mechanism, focusing on the abnormal regions, is presented using explainable artificial intelligence (XAI) technology. The proposed framework provided better COVID-19 prediction results, outperforming other recent deep learning models on the same dataset.
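The deep feature concatenation step can be sketched as follows, assuming each backbone yields a spatial feature map that is global-average-pooled before concatenation (a common recipe; the paper's exact pooling is not stated in the abstract):

```python
import numpy as np

def concat_features(feature_maps):
    """Global-average-pool each backbone's feature map, then concatenate
    into one feature vector per image. `feature_maps` is a list of
    (batch, height, width, channels) arrays, one per backbone."""
    pooled = [fm.mean(axis=(1, 2)) for fm in feature_maps]  # each (batch, channels)
    return np.concatenate(pooled, axis=1)
```

The concatenated vector (here with channel count equal to the sum of the backbones' channels) is what the attention head would operate on.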

8.
Diagnostics (Basel) ; 13(8)2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37189517

ABSTRACT

Identifying Human Epithelial Type 2 (HEp-2) mitotic cells is a crucial procedure in anti-nuclear antibody (ANA) testing, which is the standard protocol for detecting connective tissue diseases (CTDs). Due to the low throughput and labor subjectivity of the manual ANA screening test, there is a need to develop a reliable HEp-2 computer-aided diagnosis (CAD) system. The automatic detection of mitotic cells from microscopic HEp-2 specimen images is an essential step to support the diagnosis process and enhance the throughput of this test. This work proposes a deep active learning (DAL) approach to overcome the cell labeling challenge. Moreover, deep learning detectors are tailored to automatically identify the mitotic cells directly in entire microscopic HEp-2 specimen images, avoiding the segmentation step. The proposed framework is validated using the I3A Task-2 dataset over 5-fold cross-validation trials. Using the YOLO predictor, promising mitotic cell prediction results are achieved, with an average of 90.011% recall, 88.307% precision, and 81.531% mAP. In comparison, average scores of 86.986% recall, 85.282% precision, and 78.506% mAP are obtained using the Faster R-CNN predictor. Employing the DAL method over four labeling rounds effectively enhances the accuracy of the data annotation and hence improves the prediction performance. The proposed framework could be practically applied to support medical personnel in making rapid and accurate decisions about the presence of mitotic cells.

9.
Diagnostics (Basel) ; 13(6)2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36980351

ABSTRACT

Chest X-ray (CXR) is the most widely used modality for detecting and monitoring various thoracic findings, including lung carcinoma and other pulmonary lesions. However, X-ray imaging shows particular limitations when detecting primary and secondary tumors and is prone to reading errors due to limited resolution and disagreement between radiologists. To address these issues, we developed a deep-learning-based automatic detection algorithm (DLAD) to automatically detect and localize suspicious lesions on CXRs. Five radiologists were invited to retrospectively evaluate 300 CXR images from a specialized oncology center, and the performance of the individual radiologists was subsequently compared with that of the DLAD. The proposed DLAD achieved significantly higher sensitivity (0.910 (0.854-0.966)) than all of the assessed radiologists (RAD 1: 0.290 (0.201-0.379), p < 0.001; RAD 2: 0.450 (0.352-0.548), p < 0.001; RAD 3: 0.670 (0.578-0.762), p < 0.001; RAD 4: 0.810 (0.733-0.887), p = 0.025; RAD 5: 0.700 (0.610-0.790), p < 0.001). The DLAD specificity (0.775 (0.717-0.833)) was significantly lower than that of all assessed radiologists (RAD 1: 1.000 (0.984-1.000), p < 0.001; RAD 2: 0.970 (0.946-1.000), p < 0.001; RAD 3: 0.980 (0.961-1.000), p < 0.001; RAD 4: 0.975 (0.953-0.997), p < 0.001; RAD 5: 0.995 (0.985-1.000), p < 0.001). The study results demonstrate that the proposed DLAD could be utilized as a decision-support system to reduce radiologists' false negative rate.
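Intervals of the form "0.910 (0.854-0.966)" are consistent with a normal-approximation confidence interval for a proportion. A sketch (the study's exact CI method is not stated in the abstract, and the sample size of 100 used in the usage note below is purely illustrative):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% CI for a proportion such as sensitivity
    or specificity, clipped to [0, 1]."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)
```

For example, 91 detections out of 100 positives would give approximately (0.854, 0.966), matching the form of the sensitivity interval quoted above.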

10.
Diagnostics (Basel) ; 13(6)2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36980412

ABSTRACT

Melanoma, a highly dangerous form of skin cancer, is characterized by uncontrolled cell proliferation. Melanoma detection is of the utmost significance in clinical practice because of its atypical border structure and the numerous types of tissue it can involve. Identifying melanoma in color images remains challenging, despite the numerous approaches proposed in prior research. In this work, we present a comprehensive system for the efficient and precise classification of skin lesions. The framework includes preprocessing, segmentation, feature extraction, and classification modules. Preprocessing with DullRazor removes hair artifacts from skin images. Next, Fully Connected Neural Network (FCNN) semantic segmentation extracts precise and distinct Regions of Interest (ROIs). We then extract relevant skin image features from the ROIs using an enhanced Sobel Directional Pattern (SDP); for skin image analysis, the SDP outperforms ABCD features. Finally, a stacked Restricted Boltzmann Machine (RBM) classifies the skin ROIs, accurately categorizing skin melanoma. The experiments were conducted on five datasets: Pedro Hispano Hospital (PH2), International Skin Imaging Collaboration (ISIC 2016), ISIC 2017, Dermnet, and DermIS, achieving accuracies of 99.8%, 96.5%, 95.5%, 87.9%, and 97.6%, respectively. The results show that a stack of Restricted Boltzmann Machines is superior for categorizing skin cancer types when using the proposed SDP.
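The directional-pattern idea (Sobel gradients quantized by direction into a magnitude-weighted histogram) can be illustrated with a toy descriptor; the paper's enhanced SDP encoding is more elaborate than this sketch, and the function names here are ours:

```python
import numpy as np

def conv2_valid(img, kernel):
    """Plain 'valid' 2-D correlation, looping only over the small kernel."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * img[i:i + h - kh + 1, j:j + w - kw + 1]
    return out

def sobel_directional_pattern(gray, n_bins=8):
    """Histogram of quantized Sobel gradient directions, weighted by
    gradient magnitude and normalized to sum to 1."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = conv2_valid(gray, kx)        # horizontal gradient
    gy = conv2_valid(gray, kx.T)      # vertical gradient
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)
    bins = ((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist / max(hist.sum(), 1e-12)
```

A vertical step edge concentrates all the weight in the bin for a purely horizontal gradient, which is the kind of directional signature the classifier consumes.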

11.
Diagnostics (Basel) ; 13(6)2023 Mar 15.
Article in English | MEDLINE | ID: mdl-36980428

ABSTRACT

Magnetic resonance imaging (MRI) is an efficient, non-invasive diagnostic imaging tool for a variety of disorders. In modern MRI systems, the scanning procedure is time-consuming, which causes problems with patient comfort and introduces motion artifacts. Accelerated or parallel MRI has the potential to minimize patient stress as well as reduce scanning time and medical costs. In this paper, a new deep learning MR image reconstruction framework is proposed to provide more accurate reconstructed MR images when under-sampled or aliased images are generated. The reconstruction model is based on conditional generative adversarial networks (CGANs), where the generator network takes the form of an encoder-decoder U-Net. A hybrid spatial and k-space loss function is also proposed to improve reconstructed image quality by minimizing the L1-distance in the spatial and frequency domains simultaneously. The proposed framework is directly compared against CGAN and U-Net used individually, each with the proposed hybrid loss function and with the conventional L1-norm. Finally, the framework with the extended loss function is evaluated against the traditional SENSE reconstruction technique using structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) as evaluation metrics. To fine-tune and evaluate the proposed methodology, the public Multi-Coil k-Space OCMR dataset for cardiovascular MR imaging is used. The proposed framework achieves better image reconstruction quality than SENSE in terms of PSNR, by 6.84 and 9.57 when U-Net and CGAN are used, respectively, and it yields SSIM values for the reconstructed MR images comparable to those provided by the SENSE algorithm. Comparing the proposed hybrid loss function against the simple L1-norm, reconstruction performance improves by 6.84 and 9.57 for U-Net and CGAN, respectively. In conclusion, the proposed framework using CGAN provides the best reconstruction performance compared with U-Net or the conventional SENSE technique, and it appears useful for practical cardiac image reconstruction since it provides better image quality in terms of SSIM and PSNR.
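The hybrid spatial plus k-space loss can be sketched as follows; the mixing weight `alpha` is hypothetical, since the abstract does not give the paper's weighting:

```python
import numpy as np

def hybrid_l1_loss(recon, target, alpha=0.5):
    """L1 distance in the image domain plus L1 distance in k-space
    (2-D FFT), mixed by a weight alpha in [0, 1]."""
    spatial_l1 = np.mean(np.abs(recon - target))
    kspace_l1 = np.mean(np.abs(np.fft.fft2(recon) - np.fft.fft2(target)))
    return alpha * spatial_l1 + (1 - alpha) * kspace_l1
```

Penalizing both domains discourages reconstructions that look plausible spatially but distort the frequency content, and vice versa.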

12.
Diagnostics (Basel) ; 13(4)2023 Feb 12.
Article in English | MEDLINE | ID: mdl-36832175

ABSTRACT

We would like to express our gratitude to all authors who contributed to the Special Issue of "Artificial Intelligence Advances for Medical Computer-Aided Diagnosis" by providing their excellent and recent research findings for AI-based medical diagnosis [...].

13.
J Adv Res ; 48: 191-211, 2023 06.
Article in English | MEDLINE | ID: mdl-36084812

ABSTRACT

INTRODUCTION: Pneumonia is a microorganism infection that causes inflammation of the human lung cells. Chest X-ray imaging is the most well-known screening approach used for detecting pneumonia in the early stages. Because chest X-ray images are often blurry with low illumination, a strong feature extraction approach is required for promising identification performance. OBJECTIVES: A new hybrid explainable deep learning framework is proposed for accurate pneumonia identification using chest X-ray images. METHODS: The proposed hybrid workflow fuses the capabilities of ensemble convolutional networks and the Transformer Encoder mechanism. The ensemble learning backbone is used to extract strong features from the raw input X-ray images in two different scenarios: ensemble A (DenseNet201, VGG16, and GoogleNet) and ensemble B (DenseNet201, InceptionResNetV2, and Xception). The Transformer Encoder is built on the self-attention mechanism with a multilayer perceptron (MLP) for accurate disease identification. Visual explainable saliency maps are derived to emphasize the crucial predicted regions on the input X-ray images. The proposed deep learning models are trained end-to-end for binary and multi-class classification scenarios. RESULTS: The proposed hybrid deep learning model recorded 99.21% overall accuracy and F1-score for the binary classification task, and 98.19% accuracy with a 97.29% F1-score for the multi-class task. For the ensemble binary identification scenario, ensemble A recorded 97.22% accuracy and a 97.14% F1-score, while ensemble B achieved 96.44% for both accuracy and F1-score. For the ensemble multi-class identification scenario, ensemble A recorded 97.2% accuracy and a 95.8% F1-score, while ensemble B recorded 96.4% accuracy and a 94.9% F1-score.
CONCLUSION: The proposed hybrid deep learning framework provides promising explainable identification performance compared with individual models, ensemble models, and recent AI models in the literature. The code is available here: https://github.com/chiagoziemchima/Pneumonia_Identificaton.


Subject(s)
Pneumonia , Humans , X-Rays , Pneumonia/diagnostic imaging , Inflammation , Thorax , Electric Power Supplies
14.
Biomedicines ; 10(11)2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36428538

ABSTRACT

Breast cancer, which attacks the glandular epithelium of the breast, is the second most common kind of cancer in women after lung cancer, and it affects a significant number of people worldwide. Building on the advantages of the residual convolutional network and the Transformer Encoder with multilayer perceptron (MLP), this study proposes a novel hybrid deep learning computer-aided diagnosis (CAD) system for breast lesions. While the backbone residual deep learning network creates the deep features, the transformer classifies breast cancer based on the self-attention mechanism. The proposed CAD system can recognize breast cancer in two scenarios: Scenario A (binary classification) and Scenario B (multi-class classification). Data collection and preprocessing, patch image creation and splitting, and artificial intelligence-based breast lesion identification are all components of the execution framework, applied consistently across both scenarios. The effectiveness of the proposed AI model is compared against three separate deep learning models: a custom CNN, VGG16, and ResNet50. Two datasets, CBIS-DDSM and DDSM, are utilized to construct and test the proposed CAD system. Five-fold cross-validation of the test data is used to evaluate the performance results. The suggested hybrid CAD system achieves encouraging results, with overall accuracies of 100% and 95.80% for the binary and multi-class prediction tasks, respectively. The experimental results reveal that the proposed hybrid AI model can reliably distinguish benign from malignant breast tissue, which is important for radiologists to recommend further investigation of abnormal mammograms and provide the optimal treatment plan.

15.
Diagnostics (Basel) ; 12(11)2022 Nov 16.
Article in English | MEDLINE | ID: mdl-36428875

ABSTRACT

Blood cells carry important information that can be used to represent a person's current state of health. The timely and precise identification of different types of blood cells is essential for reducing the infection risks that people face on a daily basis. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework, built on transfer learning with a convolutional neural network, that rapidly and automatically identifies blood cells in an eight-class identification scenario: basophil, eosinophil, erythroblast, immature granulocytes, lymphocyte, monocyte, neutrophil, and platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth experiments on the proposed BCNet architecture and tested it with three optimizers: ADAM, RMSprop (RMSP), and stochastic gradient descent (SGD). Meanwhile, the performance of the proposed BCNet was directly compared on the same dataset with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Across the different optimizers, the BCNet framework demonstrated better classification performance with the ADAM and RMSP optimizers. The best evaluation performance was achieved using the RMSP optimizer, with 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved the prediction accuracy by 1.94%, 3.33%, and 1.65% using the ADAM, RMSP, and SGD optimizers, respectively. The proposed BCNet model outperformed the AI models DenseNet, ResNet, Inception, and MobileNet in terms of the testing time of a single blood cell image by 10.98, 4.26, 2.03, and 0.21 msec, respectively. In comparison with the most recent deep learning models, the BCNet model generates encouraging outcomes. Such a recognition rate, improving the detection performance of blood cells, is essential for the advancement of healthcare facilities.

16.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366083

ABSTRACT

Accurately estimating respiratory rate (RR) has become essential for patients and the elderly. Hence, we propose a novel method that uses exact Gaussian process regression (EGPR)-assisted hybrid feature extraction and feature fusion based on photoplethysmography and electrocardiogram signals to improve the reliability of accurate RR and uncertainty estimations. First, we obtain the power spectral features and use a multi-phase feature model to compensate for insufficient input data. Then, we combine four different feature sets and choose features with high weights using robust neighborhood component analysis. The proposed EGPR algorithm provides a confidence interval representing the uncertainty, so EGPR with hybrid feature extraction and weighted feature fusion is an excellent model with improved reliability for accurate RR estimation. Furthermore, the proposed EGPR methodology is likely the only one currently available that provides highly stable variation and confidence intervals. The proposed EGPR-MF (0.993 breaths per minute (bpm)) and EGPR with feature fusion (1.064 bpm) show the lowest mean absolute errors compared with the other models.
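The exact-GP machinery behind EGPR's uncertainty estimates can be illustrated with a minimal RBF-kernel regressor that returns a predictive mean and a 95% interval. This sketches exact Gaussian process regression in general, not the paper's feature-fusion pipeline; the kernel length scale and noise level are assumptions:

```python
import numpy as np

def gpr_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    """Exact GP regression with an RBF kernel on 1-D inputs.
    Returns (predictive mean, lower 95% bound, upper 95% bound)."""
    def kernel(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = kernel(X_test, X_train)
    mean = K_star @ np.linalg.solve(K, y_train)
    cov = kernel(X_test, X_test) - K_star @ np.linalg.solve(K, K_star.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, mean - 1.96 * std, mean + 1.96 * std
```

The interval width grows away from the training data, which is exactly the per-estimate uncertainty the abstract describes for RR predictions.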


Subject(s)
Respiratory Rate , Signal Processing, Computer-Assisted , Humans , Aged , Uncertainty , Reproducibility of Results , Photoplethysmography/methods , Algorithms , Heart Rate
17.
Sensors (Basel) ; 22(13)2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35808433

ABSTRACT

One of the most promising research directions in the healthcare industry and the scientific community is AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allows rapid learning progress and improves medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain, such as investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale input images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Components Analysis (PCA), called LR-PCA, is presented. This process helps to select the significant principal components (PCs) for use in classification. The proposed CAD system has been examined using two different public benchmark datasets, INbreast and mini-MIAS. It achieves the highest performance accuracies of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
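The three-channel pseudo-color construction can be sketched as follows. As stated assumptions, global histogram equalization stands in for CLAHE, and the pixel-wise adjustment is a simple linear contrast stretch:

```python
import numpy as np

def pseudo_color(gray):
    """Stack the original grayscale image (channel 1), a contrast-enhanced
    version (channel 2), and an intensity-stretched version (channel 3)
    into one three-channel input image."""
    g = gray.astype(float)
    # global histogram equalization (stand-in for CLAHE)
    hist, edges = np.histogram(g, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    equalized = np.interp(g, edges[:-1], cdf * 255.0)
    # pixel-wise intensity adjustment: linear stretch to the full range
    stretched = (g - g.min()) / max(np.ptp(g), 1e-8) * 255.0
    return np.stack([g, equalized, stretched], axis=-1)
```

The resulting (height, width, 3) array can be fed to a standard RGB-input backbone CNN without architectural changes, which is the point of the pseudo-coloring step.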


Subject(s)
Breast Neoplasms , Neural Networks, Computer , Breast Neoplasms/diagnostic imaging , Female , Humans , Logistic Models , Machine Learning , Mammography/methods
18.
Comput Intell Neurosci ; 2022: 7937667, 2022.
Article in English | MEDLINE | ID: mdl-35378816

ABSTRACT

Social media networking is a prominent part of everyday life. The impact of comments has been investigated in several studies. Twitter, Facebook, and Instagram are just a few of the social media networks used to broadcast news worldwide. In this paper, a comprehensive AI-based study is presented to automatically detect misogyny and sarcasm in Arabic text, in binary and multi-class scenarios. The key aim of the proposed AI approach is to distinguish various topics of misogyny and sarcasm in Arabic tweets on social media networks. A comprehensive study is carried out for detecting both misogyny and sarcasm by adopting seven state-of-the-art NLP classifiers: AraBERT, PAC, LRC, RFC, LSVC, DTC, and KNNC. To fine-tune, validate, and evaluate all of these techniques, two Arabic tweet datasets (the misogyny and Abu Farah datasets) are used. For the experimental study, two scenarios are proposed for each case study (misogyny or sarcasm): a binary and a multi-class problem. For misogyny detection, the best accuracy is achieved by the AraBERT classifier, with 91.0% in the binary classification scenario and 89.0% in the multi-class scenario. For sarcasm detection, the best accuracy is also achieved by AraBERT, with 88% in the binary classification scenario and 77.0% in the multi-class scenario. The proposed method appears to be effective for detecting misogyny and sarcasm on social media platforms, and the results suggest AraBERT is the superior state-of-the-art deep learning classifier.


Subject(s)
Artificial Intelligence , Social Media , Humans , Social Networking
19.
Comput Math Methods Med ; 2022: 4593330, 2022.
Article in English | MEDLINE | ID: mdl-35069782

ABSTRACT

Drosophila melanogaster is an important genetic model organism used extensively in medical and biological studies. About 61% of known human genes have a recognizable match in the genetic code of Drosophila flies, and 50% of fly protein sequences have mammalian analogues. Recently, several investigations have been conducted in Drosophila to study the functions of specific genes expressed in the central nervous system, heart, liver, and kidney. The outcomes of this research in Drosophila also serve as a unique tool for studying human-related diseases. This article presents a novel automated system to classify the gender of Drosophila flies from microscopic images (ventral view). The proposed system takes an image as input and converts it to a grayscale representation from which texture features are extracted. Then, machine learning (ML) classifiers such as support vector machines (SVM), Naive Bayes (NB), and K-nearest neighbour (KNN) are used to classify each Drosophila as male or female. The proposed model is evaluated on a real microscopic image dataset, and the results show that the KNN classifier reaches 90% accuracy, higher than that of the SVM classifier.
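The classification step described above (texture features in, male/female label out via KNN) can be sketched in a few lines. This is an illustrative sketch only: the feature names and values below are hypothetical, not the texture features actually extracted by the paper.

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples (Euclidean distance on the feature vectors)."""
    neighbours = sorted(train, key=lambda sample: dist(sample[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 2-D texture-feature vectors (e.g. contrast, energy) -- illustrative only
train = [
    ((0.80, 0.10), "male"), ((0.75, 0.15), "male"), ((0.70, 0.12), "male"),
    ((0.20, 0.60), "female"), ((0.25, 0.55), "female"), ((0.30, 0.65), "female"),
]
print(knn_predict(train, (0.78, 0.11)))  # -> male
print(knn_predict(train, (0.22, 0.58)))  # -> female
```

In practice the feature vectors would have many more dimensions and `k` would be tuned on a validation split, but the voting mechanism is the same.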


Subject(s)
Drosophila melanogaster/anatomy & histology , Drosophila melanogaster/classification , Machine Learning , Sex Determination Analysis/methods , Animals , Bayes Theorem , Computational Biology , Female , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Male , Microscopy , Sex Determination Analysis/statistics & numerical data , Support Vector Machine
20.
Diagnostics (Basel) ; 13(1)2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36611382

ABSTRACT

Early detection of breast cancer is essential to reduce the mortality rate among women. In this paper, a new AI-based computer-aided diagnosis (CAD) framework called ETECADx is proposed, fusing the benefits of ensemble transfer learning of convolutional neural networks with the self-attention mechanism of a vision transformer (ViT) encoder. Accurate and precise high-level deep features are generated by the backbone ensemble network, while the transformer encoder predicts breast cancer probabilities in two approaches: Approach A (binary classification) and Approach B (multi-classification). The proposed CAD system is built on the benchmark public multi-class INbreast dataset, while private real breast cancer images, collected and annotated by expert radiologists, are used to validate its prediction performance. Promising evaluation results are achieved on the INbreast mammograms, with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively. Compared with the individual backbone networks, the proposed ensemble learning model improves breast cancer prediction performance by 6.6% for the binary approach and 4.6% for the multi-class approach. The hybrid ETECADx shows further improvement when the ViT-based ensemble backbone network is used: 8.1% and 6.2% for binary and multi-class diagnosis, respectively. On the real breast images used for validation, the proposed CAD system achieves encouraging prediction accuracies of 97.16% for the binary approach and 89.40% for the multi-class approach. ETECADx predicts the breast lesions in a single mammogram in an average of 0.048 s. Such promising performance could help practical CAD frameworks provide a supporting second opinion when distinguishing various breast cancer malignancies.
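The ensemble idea underlying frameworks like ETECADx can be illustrated with the simplest fusion rule, soft voting: average the per-class probability vectors from several backbone models and take the argmax. This is a deliberate simplification of what the abstract describes (ETECADx fuses ensemble features through a ViT encoder rather than merely averaging outputs), and the class names and probabilities below are hypothetical.

```python
def soft_vote(prob_sets):
    """Fuse per-model class-probability vectors by averaging them,
    then pick the class with the highest mean probability."""
    n_models = len(prob_sets)
    n_classes = len(prob_sets[0])
    mean = [sum(p[c] for p in prob_sets) / n_models for c in range(n_classes)]
    return mean.index(max(mean)), mean

# Hypothetical softmax outputs of three backbone CNNs for one mammogram,
# over the classes (0: normal, 1: benign, 2: malignant).
probs = [
    [0.10, 0.30, 0.60],
    [0.20, 0.25, 0.55],
    [0.15, 0.40, 0.45],
]
label, mean = soft_vote(probs)
print(label)  # -> 2 (malignant)
```

Averaging tends to smooth out individual-model errors, which is one intuition for why the ensemble outperforms each backbone alone; a learned fusion stage (such as the paper's ViT encoder) can improve on this fixed rule.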
