1 - 20 of 24,842
1.
Sci Rep ; 14(1): 10714, 2024 05 10.
Article En | MEDLINE | ID: mdl-38730250

A prompt diagnosis of breast cancer in its earliest phases is necessary for effective treatment. While computer-aided diagnosis systems play a crucial role in automated mammography image processing, interpretation, grading, and early detection of breast cancer, existing approaches face limitations in achieving optimal accuracy. This study addresses these limitations by hybridizing an improved quantum-inspired binary Grey Wolf Optimizer (IQI-BGWO) with a Support Vector Machine (SVM) using the radial basis function kernel. This hybrid approach aims to enhance the accuracy of breast cancer classification by determining the optimal SVM parameters. The motivation for this hybridization lies in the need for improved classification performance compared to existing optimizers such as Particle Swarm Optimization and the Genetic Algorithm. The efficacy of the proposed IQI-BGWO-SVM approach is evaluated on the MIAS dataset with respect to several metrics, including accuracy, sensitivity, and specificity. Furthermore, the application of IQI-BGWO-SVM to feature selection is explored and the results compared. Experimental findings demonstrate that the suggested IQI-BGWO-SVM technique outperforms state-of-the-art classification methods on the MIAS dataset, with mean accuracy, sensitivity, and specificity of 99.25%, 98.96%, and 100%, respectively, under a tenfold cross-validation partition.
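To illustrate the kind of metaheuristic hyperparameter search this abstract describes, a minimal plain Grey Wolf Optimizer can be sketched (not the authors' quantum-inspired binary variant). The `surrogate` fitness function is an assumption standing in for the SVM's cross-validated error over (C, gamma):

```python
import numpy as np

def gwo_minimize(fitness, bounds, n_wolves=20, n_iter=100, seed=0):
    """Plain continuous Grey Wolf Optimizer (Mirjalili et al., 2014).

    `fitness` maps a parameter vector to a score to minimize; in a real
    pipeline it would be an SVM's cross-validated error at those
    (C, gamma) values.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    scores = np.apply_along_axis(fitness, 1, wolves)
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter          # linearly decreasing exploration factor
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves lead
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lo, hi)
            scores[i] = fitness(wolves[i])
    best = np.argmin(scores)
    return wolves[best], scores[best]

# Toy stand-in for an SVM validation-error surface with optimum at (1.0, 0.1)
surrogate = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.1) ** 2
best_params, best_err = gwo_minimize(surrogate, [(0.01, 10.0), (0.001, 1.0)])
```

In practice the fitness evaluation would wrap a cross-validated SVM fit, which is what makes such searches expensive and convergence speed important.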


Algorithms , Breast Neoplasms , Support Vector Machine , Humans , Breast Neoplasms/diagnosis , Female , Mammography/methods , Diagnosis, Computer-Assisted/methods
2.
Comput Biol Med ; 175: 108440, 2024 Jun.
Article En | MEDLINE | ID: mdl-38701589

The diagnosis of ankylosing spondylitis (AS) can be complex, necessitating a comprehensive assessment of medical history, clinical symptoms, and radiological evidence. This multidimensional approach can exacerbate the clinical burden and increase the likelihood of diagnostic inaccuracies, which may result in delayed or overlooked cases. Consequently, supplementary diagnostic techniques for AS have become a focal point in clinical research. This study introduces an enhanced optimization algorithm, SCJAYA, which incorporates salp swarm foraging behavior and cooperative predation strategies into the JAYA algorithm framework, an optimizer noted for robust capabilities that emulate the evolutionary dynamics of biological organisms. The salp swarm behavior is aimed at accelerating convergence and enhancing the solution quality of the classical JAYA algorithm, while the cooperative predation strategy mitigates the risk of convergence to local optima. SCJAYA has been evaluated across 30 benchmark functions from the CEC2014 suite against 9 conventional meta-heuristic algorithms as well as 9 state-of-the-art meta-heuristic counterparts. The comparative analyses indicate that SCJAYA surpasses these algorithms in terms of convergence speed and solution precision. Furthermore, we propose the bSCJAYA-FKNN classifier: an advanced model applying the binary version of SCJAYA for feature selection, with the aim of improving accuracy in diagnosing and prognosticating AS. The efficacy of the bSCJAYA-FKNN model was substantiated through validation on 11 UCI public datasets in addition to an AS-specific dataset. The model exhibited superior performance metrics, achieving an accuracy of 99.23%, specificity of 99.52%, Matthews correlation coefficient (MCC) of 0.9906, F-measure of 99.41%, and a computational time of 7.2800 s.
These results underscore not only its strong classification capability but also its substantial promise for the efficient diagnosis and prognosis of AS.
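The fuzzy k-nearest-neighbor (FKNN) classifier at the core of bSCJAYA-FKNN can be sketched in a minimal form; this is the classical Keller et al. (1985) vote with crisp training memberships, not the paper's full pipeline, and the toy points are illustrative assumptions:

```python
import math
from collections import defaultdict

def fknn_predict(train_X, train_y, x, k=3, m=2.0):
    """Fuzzy k-nearest-neighbour vote: neighbours contribute class
    membership weighted by inverse distance^(2/(m-1)). Returns the
    predicted label and the full membership dictionary."""
    dists = sorted(
        (math.dist(p, x), label) for p, label in zip(train_X, train_y)
    )[:k]
    weights = defaultdict(float)
    for d, label in dists:
        if d == 0:                      # exact match dominates the vote
            return label, {label: 1.0}
        weights[label] += d ** (-2.0 / (m - 1.0))
    total = sum(weights.values())
    memberships = {c: w / total for c, w in weights.items()}
    return max(memberships, key=memberships.get), memberships

# Two illustrative clusters standing in for selected feature vectors
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["neg", "neg", "neg", "pos", "pos", "pos"]
label, mu = fknn_predict(X, y, (5.2, 5.2), k=3)
```

Unlike a crisp k-NN, the membership values `mu` convey how confidently the sample belongs to each class, which is useful for prognosis-style outputs.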


Algorithms , Spondylitis, Ankylosing , Spondylitis, Ankylosing/diagnosis , Humans , Fuzzy Logic , Diagnosis, Computer-Assisted/methods
3.
Comput Biol Med ; 175: 108483, 2024 Jun.
Article En | MEDLINE | ID: mdl-38704900

The timely and accurate diagnosis of breast cancer is pivotal for effective treatment, but current automated mammography classification methods have their constraints. In this study, we introduce a hybrid model that marries the power of the Extreme Learning Machine (ELM) with FuNet transfer learning, harnessing the potential of the MIAS dataset. This approach leverages an Enhanced Quantum-Genetic Binary Grey Wolf Optimizer (Q-GBGWO) within the ELM framework, elevating its performance. Our contributions are twofold: first, we employ a feature fusion strategy that optimizes feature extraction and significantly enhances breast cancer classification accuracy; second, the Q-GBGWO optimizes the ELM parameters, demonstrating its efficacy within the ELM classifier. This innovation marks a considerable advancement beyond traditional methods. Comparative evaluations against various optimization techniques make the exceptional performance of our Q-GBGWO-ELM model evident. Its classification accuracy is 96.54% for the Normal, 97.24% for the Benign, and 98.01% for the Malignant class. The model likewise demonstrates high sensitivity (96.02%, 96.54%, and 97.75% for the Normal, Benign, and Malignant classes, respectively) and impressive specificity (96.69%, 97.38%, and 98.16%, respectively). These metrics reflect its ability to classify the three mammogram classes accurately. Our approach highlights the innovative integration of image data, deep feature extraction, and optimized ELM classification, marking a transformative step in advancing early breast cancer detection and enhancing patient outcomes.
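An Extreme Learning Machine of the kind optimized here is simple enough to sketch: a random hidden layer whose output weights are solved in closed form by least squares. This is a generic ELM on assumed toy data, not the Q-GBGWO-tuned model:

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, seed=0):
    """Extreme Learning Machine: hidden weights are random and fixed;
    only the output weights `beta` are learned, via least squares.
    Returns a predict function."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    hidden = lambda Z: np.tanh(Z @ W + b)
    beta, *_ = np.linalg.lstsq(hidden(X), T, rcond=None)
    return lambda Z: hidden(Z) @ beta

# Toy binary problem: two well-separated Gaussian blobs as stand-in features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
T = np.array([0] * 50 + [1] * 50)
predict = elm_fit(X, T, n_hidden=20)
acc = np.mean((predict(X) > 0.5) == T)
```

Because only `beta` is fitted, training is a single linear solve; what an external optimizer like Q-GBGWO would tune are choices such as the hidden-layer size and the random weight distribution.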


Breast Neoplasms , Machine Learning , Humans , Breast Neoplasms/diagnostic imaging , Female , Mammography/methods , Diagnosis, Computer-Assisted/methods
4.
Comput Biol Med ; 175: 108519, 2024 Jun.
Article En | MEDLINE | ID: mdl-38688128

Lung cancer has seriously threatened human health due to its high lethality and morbidity. Lung adenocarcinoma, in particular, is one of the most common subtypes of lung cancer. Pathological diagnosis is regarded as the gold standard for cancer diagnosis. However, traditional manual screening of lung cancer pathology images is time-consuming and error-prone. Computer-aided diagnostic systems have emerged to solve this problem. Current research methods are unable to fully exploit the beneficial features inherent within patches, and they are characterized by high model complexity and significant computational effort. In this study, a deep learning framework called Multi-Scale Network (MSNet) is proposed for the automatic detection of lung adenocarcinoma pathology images. MSNet is designed to efficiently harness the valuable features within data patches while simultaneously reducing model complexity, computational demands, and storage space requirements. The MSNet framework employs a dual data stream input method, combining Swin Transformer and MLP-Mixer models to address the global information between patches and the local information within each patch. Subsequently, MSNet uses a Multilayer Perceptron (MLP) module to fuse local and global features and perform classification to output the final detection results. In addition, a dataset of lung adenocarcinoma pathology images containing three categories is created for training and testing the MSNet framework. Experimental results show that the diagnostic accuracy of MSNet for lung adenocarcinoma pathology images is 96.55%. In summary, MSNet has high classification performance and shows effectiveness and potential in the classification of lung adenocarcinoma pathology images.


Adenocarcinoma of Lung , Lung Neoplasms , Neural Networks, Computer , Humans , Adenocarcinoma of Lung/diagnostic imaging , Adenocarcinoma of Lung/pathology , Adenocarcinoma of Lung/classification , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Lung Neoplasms/classification , Deep Learning , Image Interpretation, Computer-Assisted/methods , Diagnosis, Computer-Assisted/methods
5.
Int Ophthalmol ; 44(1): 191, 2024 Apr 23.
Article En | MEDLINE | ID: mdl-38653842

Optical Coherence Tomography (OCT) is widely recognized as the leading modality for assessing ocular retinal diseases, playing a crucial role in diagnosing retinopathy while remaining non-invasive. The increasing volume of OCT images underscores the growing importance of automating image analysis. Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME) are among the most common causes of visual impairment. Early detection and timely intervention for diabetes-related conditions are essential for preventing complications and reducing the risk of blindness. This study introduces a novel Computer-Aided Diagnosis (CAD) system based on a Convolutional Neural Network (CNN) model, aiming to identify and classify OCT retinal images into AMD, DME, and Normal classes. Leveraging CNN strengths in feature learning and classification, various CNN models, including pre-trained VGG16, VGG19, Inception_V3, a custom model trained from scratch, BCNN(VGG16)², BCNN(VGG19)², and BCNN(Inception_V3)², are developed for the classification of AMD, DME, and Normal OCT images. The proposed approach has been evaluated on two datasets: the public DUKE dataset and a private Tunisian dataset. The combination of the Inception_V3 model and the features extracted by the proposed custom CNN achieved the highest accuracy of 99.53% on the DUKE dataset. The results obtained on the public DUKE and Tunisian datasets demonstrate the proposed approach as a significant tool for efficient and automatic retinal OCT image classification.


Deep Learning , Macular Degeneration , Macular Edema , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Macular Degeneration/diagnosis , Macular Edema/diagnosis , Macular Edema/diagnostic imaging , Macular Edema/etiology , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/diagnostic imaging , Neural Networks, Computer , Retina/diagnostic imaging , Retina/pathology , Diagnosis, Computer-Assisted/methods , Aged , Female , Male
6.
Comput Biol Med ; 174: 108428, 2024 May.
Article En | MEDLINE | ID: mdl-38631117

Diabetic retinopathy (DR) is an ocular complication of diabetes, and its severity grade is an essential basis for early diagnosis. Manual diagnosis is a long and expensive process with a specific risk of misdiagnosis. Computer-aided diagnosis can provide more accurate and practical treatment recommendations. In this paper, we propose a multi-view joint learning DR diagnostic model called RT2Net, which integrates the global features of fundus images and the local detailed features of vascular images to reduce the limitations of learning from the fundus image alone. First, the original image is preprocessed using operations such as contrast-limited adaptive histogram equalization, and the vascular structure is segmented from the preprocessed DR image. Then, the vascular image and fundus image are fed into the two branch networks of RT2Net for feature extraction, and a feature fusion module adaptively fuses the feature vectors output by the branches. Finally, the optimized classification model identifies the five DR categories. Extensive experiments on the public EyePACS and APTOS 2019 datasets demonstrate the method's effectiveness. The accuracy of RT2Net on the two datasets reaches 88.2% and 85.4%, and the area under the receiver operating characteristic curve (AUC) is 0.98 and 0.96, respectively. The excellent classification ability of RT2Net for DR can significantly help patients detect and treat lesions early and provide doctors with a more reliable diagnostic basis, which has significant clinical value for diagnosing DR.


Diabetic Retinopathy , Diagnosis, Computer-Assisted , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/diagnosis , Humans , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning
7.
Med Image Anal ; 94: 103158, 2024 May.
Article En | MEDLINE | ID: mdl-38569379

Magnetic resonance (MR) images collected in 2D clinical protocols typically have large inter-slice spacing, resulting in high in-plane resolution and reduced through-plane resolution. Super-resolution techniques can enhance the through-plane resolution of MR images to facilitate downstream visualization and computer-aided diagnosis. However, most existing works train the super-resolution network at a fixed scaling factor, which is ill-suited to clinical scenarios where inter-slice spacing varies between MR scans. Inspired by recent progress in implicit neural representation, we propose a Spatial Attention-based Implicit Neural Representation (SA-INR) network for arbitrary reduction of MR inter-slice spacing. The SA-INR represents an MR image as a continuous implicit function of 3D coordinates, so it can reconstruct the MR image at arbitrary inter-slice spacing by continuously sampling coordinates in 3D space. In particular, a local-aware spatial attention operation is introduced to model nearby voxels and their affinity more accurately in a larger receptive field. Meanwhile, to improve computational efficiency, a gradient-guided gating mask is proposed to apply the local-aware spatial attention to selected areas only. We evaluate our method on the public HCP-1200 dataset and a clinical knee MR dataset to demonstrate its superiority over existing methods.
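The task itself, resampling a volume to an arbitrary inter-slice spacing, can be illustrated with the classical baseline that learned implicit representations such as SA-INR aim to outperform: linear interpolation along the slice axis. This is a generic sketch on a synthetic volume, not the paper's method:

```python
import numpy as np

def resample_slices(volume, spacing_in, spacing_out):
    """Resample a (z, y, x) volume to a new inter-slice spacing by linear
    interpolation along z -- the classical baseline for arbitrary-spacing
    through-plane super-resolution."""
    n_in = volume.shape[0]
    z_in = np.arange(n_in) * spacing_in
    z_out = np.arange(0, z_in[-1] + 1e-9, spacing_out)
    out = np.empty((len(z_out),) + volume.shape[1:], dtype=float)
    for idx, z in enumerate(z_out):
        j = min(int(z // spacing_in), n_in - 2)   # lower neighbouring slice
        t = (z - z_in[j]) / spacing_in            # fractional position
        out[idx] = (1 - t) * volume[j] + t * volume[j + 1]
    return out

# Synthetic volume: slice k carries the value k, acquired at 3 mm spacing
vol = np.arange(4, dtype=float).reshape(4, 1, 1)
dense = resample_slices(vol, spacing_in=3.0, spacing_out=1.0)
```

An implicit-representation model replaces the fixed linear kernel here with a learned continuous function of the 3D coordinates, which is what allows it to recover structure that interpolation blurs.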


Diagnosis, Computer-Assisted , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Knee Joint , Phantoms, Imaging , Image Processing, Computer-Assisted/methods
8.
Med Image Anal ; 94: 103149, 2024 May.
Article En | MEDLINE | ID: mdl-38574542

The variation in histologic staining between different medical centers is one of the most profound challenges in the field of computer-aided diagnosis. The appearance disparity of pathological whole slide images causes algorithms to become less reliable, which in turn impedes the widespread applicability of downstream tasks like cancer diagnosis. Furthermore, different stainings introduce biases into training which, in the case of domain shifts, negatively affect test performance. Therefore, in this paper we propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used multi-domain-capable methods. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. Then, we test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index (SSIM) and the ability to reduce the domain shift using the Fréchet inception distance. We show that our method proves to be multi-domain capable, provides a very high image quality among the compared methods, and can most reliably fool the domain classifier while keeping the tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand and the origin of the whole slide image can be disguised on the other, thus enhancing patient data privacy.
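For contrast with the GAN-based approach, the simplest classical stain-normalization baseline is a Reinhard-style channel-wise mean/std transfer. This sketch operates directly in RGB for brevity (the original Reinhard method works in a Lab colour space) and uses random arrays as stand-ins for slide tiles:

```python
import numpy as np

def reinhard_normalize(source, target):
    """Channel-wise mean/std colour transfer (after Reinhard et al., 2001),
    a simple classical alternative to GAN-based stain normalization.
    Both inputs are float arrays of shape (H, W, 3)."""
    src_mu = source.mean(axis=(0, 1))
    src_sd = source.std(axis=(0, 1)) + 1e-8   # avoid division by zero
    tgt_mu = target.mean(axis=(0, 1))
    tgt_sd = target.std(axis=(0, 1))
    # Standardize the source per channel, then rescale to target statistics
    return (source - src_mu) / src_sd * tgt_sd + tgt_mu

rng = np.random.default_rng(0)
src = rng.uniform(0.2, 0.6, size=(32, 32, 3))   # "pale" slide tile
tgt = rng.uniform(0.4, 1.0, size=(32, 32, 3))   # reference staining
norm = reinhard_normalize(src, tgt)
```

The appeal of a learned normalizer like MultiStain-CycleGAN is precisely that it preserves tissue structure while moving colour statistics, whereas this global transfer treats every pixel identically.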


Coloring Agents , Neoplasms , Humans , Coloring Agents/chemistry , Staining and Labeling , Algorithms , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods
9.
Med Image Anal ; 94: 103157, 2024 May.
Article En | MEDLINE | ID: mdl-38574544

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained using high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality heavily relies on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used for developing a system and the data it encounters after deployment, and the impact it has on the performance of the deep neural networks (DNNs) supporting endoscopic CAD systems, remains largely unexplored. As many such systems, e.g., for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available datasets (KVASIR-SEG and GIANA) and two in-house datasets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset of 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable decline in performance of 11.6% (±1.5) relative to the reference within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pre-training mitigates this drop to 7.7% (±2.03).
Additionally, these enhancements yield the highest performance on the manually collected test set including images with lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.


Diagnosis, Computer-Assisted , Neural Networks, Computer , Humans , Diagnosis, Computer-Assisted/methods , Endoscopy, Gastrointestinal , Image Processing, Computer-Assisted/methods
10.
Int Ophthalmol ; 44(1): 174, 2024 Apr 13.
Article En | MEDLINE | ID: mdl-38613630

PURPOSE: This study aims to address the challenge of identifying retinal damage in medical applications through a computer-aided diagnosis (CAD) approach, using data collected from four prominent eye hospitals in India for analysis and model development. METHODS: Data were collected from Silchar Medical College and Hospital (SMCH), Aravind Eye Hospital (Tamil Nadu), LV Prasad Eye Hospital (Hyderabad), and Medanta (Gurugram). A modified version of the ResNet-101 architecture, named ResNet-RS, was utilized for retinal damage identification. In this modified architecture, the last layer's softmax function was replaced with a support vector machine (SVM). The resulting model, termed ResNet-RS-SVM, was trained and evaluated on each hospital's dataset individually and collectively. RESULTS: The proposed ResNet-RS-SVM model achieved high accuracies across the datasets from the different hospitals: 99.17% for Aravind, 98.53% for LV Prasad, 98.33% for Medanta, and 100% for SMCH. When considering all hospitals collectively, the model attained an accuracy of 97.19%. CONCLUSION: The findings demonstrate the effectiveness of the ResNet-RS-SVM model in accurately identifying retinal damage in diverse datasets collected from multiple eye hospitals in India. This approach presents a promising advancement in computer-aided diagnosis for improving the detection and management of retinal diseases.
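Replacing a network's softmax layer with an SVM, as described here, amounts to training a margin-based classifier on the penultimate-layer features. A minimal linear-SVM head trained by hinge-loss subgradient descent can be sketched; the 2D "features" are an illustrative assumption, not the hospitals' data:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Linear SVM trained with sub-gradient descent on the regularized
    hinge loss, as a stand-in for swapping a softmax head for an SVM.
    Labels must be +/-1."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            # Sub-gradient: regularization always applies; the hinge term
            # only when the sample violates the margin.
            grad_w = lam * w - (y[i] * X[i] if margin < 1 else 0)
            w -= lr * grad_w
            if margin < 1:
                b += lr * y[i]
    return w, b

# Linearly separable toy "features" (e.g. penultimate-layer activations)
X = np.array([[2.0, 2.0], [3.0, 1.5], [2.5, 3.0],
              [-2.0, -1.0], [-1.5, -2.5], [-3.0, -2.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

The margin objective is what motivates the swap: unlike cross-entropy on softmax outputs, it directly maximizes the separation between the classes in feature space.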


Retinal Diseases , Support Vector Machine , Humans , India/epidemiology , Diagnosis, Computer-Assisted , Hospitals , Retinal Diseases/diagnosis
11.
Comput Biol Med ; 173: 108370, 2024 May.
Article En | MEDLINE | ID: mdl-38564854

The transformer architecture has achieved remarkable success in medical image analysis owing to its powerful capability for capturing long-range dependencies. However, because it lacks an intrinsic inductive bias for modeling visual structural information, the transformer generally requires a large-scale pre-training schedule, limiting its clinical application to expensive, small-scale medical data. To this end, we propose a slimmable transformer that explores intrinsic inductive bias via position information for medical image segmentation. Specifically, we empirically investigate how different position encoding strategies affect the prediction quality of the region of interest (ROI) and observe that ROIs are sensitive to the position encoding strategy. Motivated by this, we present a novel Hybrid Axial-Attention (HAA) that can be equipped with pixel-level spatial structure and relative position information as inductive bias. Moreover, we introduce a gating mechanism to achieve efficient feature selection and further improve representation quality over small-scale datasets. Experiments on the LGG and COVID-19 datasets prove the superiority of our method over the baseline and previous works. Internal workflow visualization with interpretability is conducted to further validate these results; the proposed slimmable transformer has the potential to be developed into a visual software tool for improving computer-aided lesion diagnosis and treatment planning.
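One of the absolute-position strategies such a comparison would include is the fixed sinusoidal encoding of Vaswani et al. (2017); a minimal sketch (generic, not the paper's HAA) makes the construction concrete:

```python
import numpy as np

def sinusoidal_positions(n_pos, dim):
    """Fixed sinusoidal position encoding: even channels carry sin,
    odd channels cos, at geometrically spaced frequencies.
    `dim` is assumed even."""
    pos = np.arange(n_pos)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    pe = np.zeros((n_pos, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positions(64, 32)
```

Because each frequency pair behaves like a rotation, relative offsets become linear functions of these encodings, which is one reason relative-position variants (as in axial attention) can be derived from the same construction.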


COVID-19 , Humans , COVID-19/diagnostic imaging , Diagnosis, Computer-Assisted , Software , Workflow , Image Processing, Computer-Assisted
12.
Biomed Phys Eng Express ; 10(3)2024 Apr 18.
Article En | MEDLINE | ID: mdl-38599202

Many developing nations, particularly in Africa, struggle with deadly cancer-related diseases. In women especially, the incidence of breast cancer is rising daily because of ignorance and delayed diagnosis. Cancer can be treated effectively only when it is identified and diagnosed correctly in its very early stages of development. The classification of cancer can be accelerated and automated with the aid of computer-aided diagnosis and medical image analysis techniques. This research applies transfer learning from the Residual Network 18 (ResNet18) and Residual Network 34 (ResNet34) architectures to detect breast cancer. The study examined how breast cancer can be identified in breast mammography pictures using transfer learning from ResNet18 and ResNet34, and developed a demo app for radiologists using the trained model with the best validation accuracy. 1,200 breast X-ray mammography images from the National Radiological Society's (NRS) archives were employed in the study. The dataset was categorised as implant cancer negative, implant cancer positive, cancer negative, and cancer positive in order to increase the consistency of X-ray mammography image classification and produce better features. For the binary classification of benign or malignant cancer cases, the study obtained average validation accuracies of 86.7% for ResNet34 and 92% for ResNet18. A prototype web application showcasing the ResNet18 performance has been created. The acquired results show how transfer learning can improve the accuracy of breast cancer detection, providing invaluable assistance to medical professionals, particularly in an African scenario.


Breast Neoplasms , Female , Humans , Mammography/methods , Breast/diagnostic imaging , Diagnosis, Computer-Assisted , Machine Learning
13.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 220-227, 2024 Apr 25.
Article Zh | MEDLINE | ID: mdl-38686401

In computer-aided medical diagnosis, obtaining labeled medical image data is expensive, while there is a high demand for model interpretability. However, most current deep learning models require a large amount of data and lack interpretability. To address these challenges, this paper proposes a novel data augmentation method for medical image segmentation. The uniqueness and advantage of this method lie in the use of gradient-weighted class activation mapping to extract data-efficient features, which are then fused with the original image. Subsequently, a new channel-weight feature extractor is constructed to learn the weights between different channels. This approach achieves non-destructive data augmentation, enhancing the model's performance, data efficiency, and interpretability. Applying the method to the Hyper-Kvasir dataset improved the intersection over union (IoU) and Dice of U-Net, and on the ISIC-Archive dataset it likewise improved the IoU and Dice of DeepLabV3+. Furthermore, even when the training data is reduced to 70%, the proposed method can still achieve 95% of the performance achieved with the entire dataset, indicating good data efficiency. Moreover, the data-efficient features used in the method carry built-in interpretable information, which enhances the interpretability of the model. The method has excellent universality: it is plug-and-play, applicable to various segmentation methods, and does not require modification of the network structure, so it is easy to integrate into existing medical image segmentation methods, enhancing the convenience of future research and applications.
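The two overlap metrics reported here, IoU and Dice, have standard definitions that are worth making explicit; a minimal sketch for binary masks (toy masks assumed for illustration):

```python
import numpy as np

def iou_and_dice(pred, truth):
    """Intersection-over-union and Dice coefficient for binary masks.
    Both equal 1.0 by convention when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + truth.sum()) if union else 1.0
    return iou, dice

pred = np.zeros((4, 4), dtype=int); pred[:2, :] = 1      # predicted: top half
truth = np.zeros((4, 4), dtype=int); truth[:, :2] = 1    # truth: left half
iou, dice = iou_and_dice(pred, truth)
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), so improvements in one always accompany improvements in the other, which is why papers commonly report both rising together.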


Algorithms , Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Diagnostic Imaging/methods , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer
14.
Sci Rep ; 14(1): 9715, 2024 04 27.
Article En | MEDLINE | ID: mdl-38678100

The tendency of skin diseases to manifest with unique yet similar appearances, the absence of enough competent dermatologists, and the urgency of timely and accurate diagnosis and classification make the need for machine-aided diagnosis evident. This study was conducted to broaden research on computer-aided skin disease diagnosis by exploring the capability of deep learning algorithms to classify two skin diseases noticeably close in appearance, psoriasis and lichen planus. The resemblance between these two skin diseases is striking, often resulting in their classification within the same category. Despite this, there is a dearth of research focusing specifically on them. A customized 50-layer ResNet-50 convolutional neural network architecture is used, and the results are validated through fivefold cross-validation, threefold cross-validation, and a random split. By utilizing advanced data augmentation and class balancing techniques, the diversity of the dataset was increased and the dataset imbalance minimized. ResNet-50 achieved an accuracy of 89.07%, sensitivity of 86.46%, and specificity of 86.02%. With these promising results, such algorithms make the potential of machine-aided diagnosis clear. Deep learning algorithms could assist physicians and dermatologists by classifying skin diseases with similar appearance in real time.
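The k-fold validation scheme used here is mechanical enough to sketch: shuffle the sample indices once, partition them into k disjoint folds, and rotate which fold is held out. A generic index generator (not the study's exact splitting code):

```python
import random

def kfold_indices(n_samples, k, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation.
    Each sample appears in exactly one test fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]        # round-robin partition
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(kfold_indices(10, 5))
```

For imbalanced medical datasets a stratified variant (preserving class proportions within each fold) is usually preferred, which pairs naturally with the class-balancing techniques the study mentions.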


Deep Learning , Lichen Planus , Psoriasis , Humans , Psoriasis/diagnosis , Lichen Planus/diagnosis , Lichen Planus/classification , Diagnosis, Computer-Assisted/methods , Algorithms , Neural Networks, Computer , Male , Female
15.
Respir Res ; 25(1): 177, 2024 Apr 24.
Article En | MEDLINE | ID: mdl-38658980

BACKGROUND: Computer Aided Lung Sound Analysis (CALSA) aims to overcome limitations associated with standard lung auscultation by removing the subjective component and allowing quantification of sound characteristics. In this proof-of-concept study, a novel automated approach was evaluated in real patient data by comparing lung sound characteristics to structural and functional imaging biomarkers. METHODS: Patients with cystic fibrosis (CF) aged >5 years were recruited in a prospective cross-sectional study. CT scans were analyzed by the CF-CT scoring method and Functional Respiratory Imaging (FRI). A digital stethoscope was used to record lung sounds at six chest locations. The following sound characteristics were determined: expiration-to-inspiration (E/I) signal power ratios within different frequency ranges, the number of crackles per respiratory phase, and wheeze parameters. Linear mixed-effects models were computed to relate CALSA parameters to imaging biomarkers at the lobar level. RESULTS: A total of 222 recordings from 25 CF patients were included. Significant associations were found between E/I ratios and structural abnormalities, of which the ratio between 200 and 400 Hz appeared to be most clinically relevant due to its relation with bronchiectasis, mucus plugging, bronchial wall thickening, and air trapping on CT. The number of crackles was also associated with multiple structural abnormalities as well as with regional airway resistance determined by FRI. Wheeze parameters were not considered in the statistical analysis, since wheezing was detected in only one recording. CONCLUSIONS: The present study is the first to investigate associations between auscultatory findings and imaging biomarkers, which are considered the gold standard for evaluating the respiratory system.
Despite the exploratory nature of this study, the results showed various meaningful associations that highlight the potential value of automated CALSA as a novel non-invasive outcome measure in future research and clinical practice.
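The E/I power ratio within a frequency band, the key CALSA characteristic above, can be sketched generically from the periodogram. The synthetic tones below are an assumption for illustration, not recorded lung sounds:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Signal power within [f_lo, f_hi] Hz, from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()

def e_i_ratio(expiration, inspiration, fs, f_lo=200, f_hi=400):
    """Expiration-to-inspiration power ratio in a frequency band, here
    defaulting to the 200-400 Hz band highlighted in the study."""
    return band_power(expiration, fs, f_lo, f_hi) / band_power(inspiration, fs, f_lo, f_hi)

fs = 4000
t = np.arange(fs) / fs                       # one second of audio
insp = np.sin(2 * np.pi * 300 * t)           # 300 Hz tone in-band
exp_ = 2 * np.sin(2 * np.pi * 300 * t)       # same tone, twice the amplitude
ratio = e_i_ratio(exp_, insp, fs)
```

Doubling the amplitude quadruples the power, so the ratio comes out as 4; in practice the segments would first be split by respiratory phase, which is the non-trivial part of an automated pipeline.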


Biomarkers , Cystic Fibrosis , Respiratory Sounds , Humans , Cross-Sectional Studies , Male , Female , Prospective Studies , Adult , Cystic Fibrosis/physiopathology , Cystic Fibrosis/diagnostic imaging , Young Adult , Adolescent , Auscultation/methods , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging , Lung/physiopathology , Child , Proof of Concept Study , Diagnosis, Computer-Assisted/methods , Middle Aged
16.
Sci Rep ; 14(1): 8071, 2024 04 05.
Article En | MEDLINE | ID: mdl-38580700

Over recent years, researchers and practitioners have seen massive, continuous improvements in the computational resources available to them. This has made the use of resource-hungry machine learning (ML) algorithms feasible and practical. Moreover, several advanced techniques are being used to boost the performance of such algorithms even further, including various transfer learning techniques, data augmentation, and feature concatenation. The usefulness of these advanced techniques typically depends on the size and nature of the dataset being used. In the case of fine-grained medical image sets, which have subcategories within the main categories, there is a need to find the combination of techniques that works best on these types of images. In this work, we utilize these advanced techniques to find the best combinations for building a state-of-the-art lumbar disc herniation computer-aided diagnosis system. We have evaluated the system extensively, and the results show that it achieves an accuracy of 98% when compared with human diagnosis.


Intervertebral Disc Displacement , Humans , Intervertebral Disc Displacement/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Algorithms , Machine Learning , Computers
17.
J Stroke Cerebrovasc Dis ; 33(6): 107714, 2024 Jun.
Article En | MEDLINE | ID: mdl-38636829

OBJECTIVES: We set out to develop a machine learning model capable of distinguishing patients presenting with ischemic stroke from a healthy cohort of subjects. The model relies on a 3-min resting electroencephalogram (EEG) recording from which features can be computed. MATERIALS AND METHODS: Using a large-scale, retrospective database of EEG recordings and matching clinical reports, we constructed a dataset of 1385 healthy subjects and 374 stroke patients. With subjects often producing more than one recording per session, the final dataset consisted of 2401 EEG recordings (63% healthy, 37% stroke). RESULTS: Using a rich set of features encompassing both the spectral and temporal domains, our model yielded an AUC of 0.95, with a sensitivity and specificity of 93% and 86%, respectively. Allowing multiple recordings per subject in the training set boosted sensitivity by 7%, attributable to a more balanced dataset. CONCLUSIONS: Our work demonstrates strong potential for the use of EEG in conjunction with machine learning methods to distinguish stroke patients from healthy subjects. Our approach provides a solution that is not only timely (3-minute recording time) but also highly accurate (AUC: 0.95).
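Spectral features of the kind the abstract alludes to are commonly band powers per EEG channel. The sketch below, with an illustrative sampling rate and a synthetic trace (none of it from the paper), shows one minimal way to compute them:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average periodogram power of `signal` within [f_lo, f_hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

fs = 250                    # sampling rate in Hz (illustrative)
t = np.arange(fs * 3) / fs  # 3 s of signal
# Synthetic trace: a 10 Hz alpha-band component plus mild noise.
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)

alpha = band_power(x, fs, 8, 13)   # alpha band
delta = band_power(x, fs, 1, 4)    # delta band
print(alpha > delta)  # True: the alpha band dominates this synthetic trace
```

A real pipeline would compute such band powers per channel (often via Welch's method for variance reduction) and stack them with temporal features before classification.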


Brain Waves , Databases, Factual , Electroencephalography , Ischemic Stroke , Machine Learning , Predictive Value of Tests , Humans , Retrospective Studies , Male , Female , Middle Aged , Aged , Ischemic Stroke/diagnosis , Ischemic Stroke/physiopathology , Case-Control Studies , Adult , Brain/physiopathology , Signal Processing, Computer-Assisted , Reproducibility of Results , Aged, 80 and over , Diagnosis, Differential , Diagnosis, Computer-Assisted , Time Factors
18.
Neural Netw ; 175: 106296, 2024 Jul.
Article En | MEDLINE | ID: mdl-38653077

Structural magnetic resonance imaging (sMRI) has shown great clinical value and has been widely used in deep learning (DL) based computer-aided brain disease diagnosis. Previous DL-based approaches focused on local shapes and textures in brain sMRI that may be significant only within a particular domain. The learned representations are likely to contain spurious information and generalize poorly to other diseases and datasets. To facilitate capturing meaningful and robust features, it is necessary to first comprehensively understand the intrinsic pattern of the brain that is not restricted to a single data/task domain. Considering that the brain is a complex connectome of interlinked neurons, the connectional properties of the brain have strong biological significance; they are shared across multiple domains and cover most pathological information. In this work, we propose a connectional style contextual representation learning model (CS-CRL) to capture the intrinsic pattern of the brain, used for multiple brain disease diagnosis. Specifically, it has a vision transformer (ViT) encoder and leverages mask reconstruction as the proxy task and Gram matrices to guide the representation of connectional information. It facilitates the capture of global context and the aggregation of features with biological plausibility. The results indicate that CS-CRL achieves superior accuracy in multiple brain disease diagnosis tasks across six datasets and three diseases, outperforming state-of-the-art models. Furthermore, we demonstrate that CS-CRL captures more brain-network-like properties, aggregates features more effectively, is easier to optimize, and is more robust to noise, which helps explain its superiority.
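The Gram-matrix guidance the abstract describes can be illustrated on a toy feature map: a Gram matrix summarizes pairwise channel correlations, the classic "style" statistic. Channel counts and normalization below are assumptions, not the paper's exact formulation.

```python
import numpy as np

# A toy feature map: `channels` activation maps of size height x width
# (dimensions are illustrative).
rng = np.random.default_rng(0)
channels, height, width = 8, 16, 16
feature_map = rng.normal(size=(channels, height, width))

# Flatten each channel to a vector, then take inner products between
# channels; normalizing by the number of spatial positions keeps the
# statistic independent of map size.
flat = feature_map.reshape(channels, -1)   # (8, 256)
gram = flat @ flat.T / flat.shape[1]       # (8, 8)

print(gram.shape)                 # (8, 8)
print(np.allclose(gram, gram.T))  # True: Gram matrices are symmetric
```

The resulting 8x8 matrix discards spatial layout and keeps only how strongly channels co-activate, which is why it serves as a proxy for connection-like structure.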


Brain , Deep Learning , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/physiology , Brain Diseases/diagnosis , Brain Diseases/physiopathology , Neural Networks, Computer , Diagnosis, Computer-Assisted/methods
19.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 413-420, 2024 Apr 25.
Article Zh | MEDLINE | ID: mdl-38686425

Pneumoconiosis ranks first among the newly reported occupational diseases in China each year, and imaging remains one of the main clinical diagnostic methods. However, manual film reading demands highly experienced physicians, staging pneumoconiosis from images is difficult, and the uneven distribution of medical resources makes misdiagnosis and missed diagnosis more likely in primary healthcare institutions. Computer-aided diagnosis systems can rapidly screen for pneumoconiosis, assist clinicians in identification and diagnosis, and improve diagnostic efficacy. As an important branch of deep learning, the convolutional neural network (CNN) excels at visual tasks such as image segmentation, image classification, and object detection because of its local connectivity and weight sharing, and has been widely applied to computer-aided diagnosis of pneumoconiosis in recent years. This review is organized into three parts according to the main applications of CNNs (VGG, U-Net, ResNet, DenseNet, CheXNet, Inception-V3, and ShuffleNet) in the imaging diagnosis of pneumoconiosis: CNNs for screening diagnosis, CNNs for staging diagnosis, and CNNs for segmentation of pneumoconiosis lesions. It aims to summarize the methods, advantages and disadvantages, and optimization ideas of CNNs applied to pneumoconiosis images, and to provide a reference for further development of computer-aided diagnosis of pneumoconiosis.
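The local connectivity and weight sharing that the review credits for CNNs' success amount to sliding one small set of kernel weights over every image position. A minimal sketch (the image and kernel are illustrative):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as CNNs use it):
    the same kernel weights are reused at every spatial position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel sees only a local kh x kw neighborhood.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
edge_kernel = np.array([[1.0, -1.0]])             # horizontal difference filter
out = conv2d_valid(image, edge_kernel)
print(out.shape)  # (5, 4)
```

One pair of weights detects the same horizontal edge pattern everywhere in the image, which is exactly the parameter economy that makes CNNs practical for radiograph analysis.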


Diagnosis, Computer-Assisted , Neural Networks, Computer , Pneumoconiosis , Humans , Pneumoconiosis/diagnosis , Pneumoconiosis/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Deep Learning , Occupational Diseases/diagnosis , China , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods
...