Results 1 - 20 of 107
1.
J Imaging Inform Med ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39349784

ABSTRACT

Our primary aim with this study was to build a patient-level classifier for stroke territory in DWI using AI to facilitate fast triage of stroke to a dedicated stroke center. A retrospective collection of DWI images of 271 and 122 consecutive acute ischemic stroke patients from two centers was carried out. Pretrained MobileNetV2 and EfficientNetB0 architectures were used to classify territorial subtypes as middle cerebral artery (MCA), posterior circulation, or watershed infarcts, along with normal slices. Various input combinations using edge maps, thresholding, and hard attention versions were explored. The effect of augmenting the three-channel inputs of pre-trained models on classification performance was analyzed. ROC analyses and confusion matrix-derived performance metrics of the models were reported. Of the 271 patients from center 1, 151 (55.7%) were male and 120 (44.3%) were female; 129 patients (47.6%) had MCA, 65 (24.0%) had posterior circulation, and 77 (28.4%) had watershed infarcts. Of the 122 patients from center 2, 78 (64%) were male and 44 (36%) were female; 52 patients (43%) had MCA, 51 (42%) had posterior circulation, and 19 (15%) had watershed infarcts. The Mobile-Crop model had the best performance, with 0.95 accuracy and a 0.91 mean F1 score for slice-wise classification and 0.88 accuracy on external test sets, along with a 0.92 mean AUC. In conclusion, modified pre-trained models may be augmented with transformed images to provide a more accurate classification of the territory affected by stroke in DWI.
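
As an illustration of the transfer-learning setup described in this abstract, the sketch below (not the authors' code; the class count, channel layout, and input size are assumptions) replaces the classifier head of a pretrained MobileNetV2 and stacks three transformed views of one DWI slice into the three input channels.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # MCA, posterior circulation, watershed, normal (assumed label set)

# Replace the ImageNet head of a pretrained MobileNetV2 with a 4-class classifier.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

def make_three_channel(dwi_slice, edge_map, cropped_slice):
    """Stack transformed views of one slice (each an HxW tensor in [0, 1])
    into the 3-channel input expected by the pretrained backbone."""
    return torch.stack([dwi_slice, edge_map, cropped_slice], dim=0)

x = make_three_channel(torch.rand(224, 224), torch.rand(224, 224), torch.rand(224, 224))
logits = model(x.unsqueeze(0))   # shape: (1, NUM_CLASSES)
```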

2.
Vet Sci ; 11(9)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39330779

ABSTRACT

Body condition score (BCS) is a common tool used to assess the welfare of dairy cows, based on scoring animals according to their external appearance. If the BCS of dairy cows deviates from the required value, it can lead to diseases caused by metabolic problems, increased medication costs, low productivity, and even the loss of animals. On farms, BCS is mostly determined by visual observation based on expert knowledge and experience. This study proposes an automatic classification system for BCS determination in dairy cows using the YOLOv8x deep learning architecture. First, an original dataset was prepared by dividing the BCS scale into five classes (Emaciated, Poor, Good, Fat, and Obese) for images of Holstein and Simmental cows collected from different farms. In the experimental analyses performed on this dataset, the BCS values of 102 out of 126 cow images in the test set were correctly classified using the proposed YOLOv8x architecture. Furthermore, an average accuracy of 0.81 was achieved across all BCS classes in Holstein and Simmental cows. In addition, the average area under the precision-recall curve was 0.87. In conclusion, the proposed BCS classification system may allow for the accurate observation of animals with rapid declines in body condition. It can also be used as a tool by production decision-makers in early lactation to reduce the negative energy balance.
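
A minimal sketch of how such a YOLOv8x classification model could be trained with the ultralytics API, assuming an image-folder dataset with one subfolder per BCS class; the dataset path and hyperparameters are placeholders, not the study's settings.

```python
from ultralytics import YOLO

# Pretrained YOLOv8x classification checkpoint.
model = YOLO("yolov8x-cls.pt")

# "bcs_dataset" is a placeholder directory with train/ and val/ splits, each
# containing Emaciated/, Poor/, Good/, Fat/, and Obese/ class folders.
model.train(data="bcs_dataset", epochs=100, imgsz=224)

metrics = model.val()  # reports top-1 accuracy on the validation split
```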

3.
J Biophotonics ; : e202400233, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39262127

ABSTRACT

The Gleason grading system is a dependable tool for quantifying prostate cancer. This paper introduces a fast multiphoton microscopic imaging method via deep learning for automatic Gleason grading. To address the trade-off between multiphoton microscopy (MPM) imaging speed and quality, a deep learning architecture (SwinIR) is used for image super-resolution. The quality of low-resolution images is improved, which increases the acquisition speed from 7.55 s per frame to 0.24 s per frame. A classification network (Swin Transformer) is introduced for automated Gleason grading. The classification accuracy and Macro-F1 achieved by training on high-resolution images are 90.9% and 90.9%, respectively. For training on super-resolution images, the classification accuracy and Macro-F1 are 89.9% and 89.9%, respectively. This shows that super-resolution images can provide performance comparable to high-resolution images. Our results suggest that combining MPM with image super-resolution and automatic classification holds the potential to become a real-time clinical diagnostic tool for prostate cancer.
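
For the classification stage, a Swin Transformer classifier can be instantiated with timm as sketched below; the class count and input size are assumptions, and the SwinIR super-resolution stage is omitted.

```python
import timm
import torch

# Standard Swin-B backbone with a new classification head; the 4-class head
# (e.g. benign plus Gleason grade groups) is an assumption for illustration.
model = timm.create_model(
    "swin_base_patch4_window7_224",
    pretrained=True,
    num_classes=4,
)

x = torch.randn(8, 3, 224, 224)   # a batch of (super-resolved) MPM patches
logits = model(x)                 # shape: (8, 4)
```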

4.
Sci Rep ; 14(1): 19285, 2024 08 20.
Article in English | MEDLINE | ID: mdl-39164445

ABSTRACT

Age-related macular degeneration (AMD) and diabetic macular edema (DME) are significant causes of blindness worldwide. The prevalence of these diseases is steadily increasing due to population aging. Therefore, early diagnosis and prevention are crucial for effective treatment. Classification of macular degeneration OCT images is a widely used method for assessing retinal lesions. However, there are two main challenges in OCT image classification: incomplete image feature extraction and lack of prominence of important positional features. To address these challenges, we proposed a deep learning neural network model called MSA-Net, which incorporates our proposed multi-scale architecture and spatial attention mechanism. Our multi-scale architecture is based on depthwise separable convolution, which ensures comprehensive feature extraction from multiple scales while minimizing the growth of model parameters. The spatial attention mechanism aims to highlight the important positional features in the images, emphasizing the representation of macular region features in OCT images. We tested MSA-Net on the NEH dataset and the UCSD dataset, performing three-class (CNV, DRUSEN, and NORMAL) and four-class (CNV, DRUSEN, DME, and NORMAL) classification tasks. On the NEH dataset, the accuracy, sensitivity, and specificity are 98.1%, 97.9%, and 98.0%, respectively. After fine-tuning on the UCSD dataset, the accuracy, sensitivity, and specificity are 96.7%, 96.7%, and 98.9%, respectively. Experimental results demonstrate the excellent classification performance and generalization ability of our model compared to previous models and recent well-known OCT classification models, establishing it as a highly competitive intelligent classification approach in the field of macular degeneration.
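
The two building blocks named above can be sketched in PyTorch as follows; the layer sizes are our own assumptions, not the authors' MSA-Net definition: a depthwise separable convolution and a CBAM-style spatial attention module.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SpatialAttention(nn.Module):
    """Reweight spatial positions using channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)        # channel-wise average
        max_map, _ = x.max(dim=1, keepdim=True)      # channel-wise maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                              # emphasize important positions
```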


Subject(s)
Deep Learning , Macular Degeneration , Neural Networks, Computer , Tomography, Optical Coherence , Humans , Macular Degeneration/diagnostic imaging , Macular Degeneration/classification , Macular Degeneration/pathology , Tomography, Optical Coherence/methods , Macular Edema/diagnostic imaging , Macular Edema/classification , Macular Edema/pathology , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/classification , Diabetic Retinopathy/pathology , Diabetic Retinopathy/diagnosis , Image Processing, Computer-Assisted/methods
5.
Front Med (Lausanne) ; 11: 1402768, 2024.
Article in English | MEDLINE | ID: mdl-38947236

ABSTRACT

As machine learning progresses, techniques such as neural networks, decision trees, and support vector machines are being increasingly applied in the medical domain, especially for tasks involving large datasets, such as cell detection, recognition, classification, and visualization. Within the domain of bone marrow cell morphology analysis, deep learning offers substantial benefits due to its robustness, capacity for automatic feature learning, and strong image characterization capabilities. Deep neural networks are a machine learning paradigm particularly well suited to image processing, and artificial intelligence serves as a potent tool in supporting the diagnostic process of clinical bone marrow cell morphology. Despite the potential of artificial intelligence to augment clinical diagnostics in this domain, manual analysis of bone marrow cell morphology remains the gold standard and an indispensable tool for identifying and diagnosing hematologic disorders and for assessing treatment efficacy. However, the traditional manual approach has limitations and shortcomings, necessitating the exploration of automated solutions for examining and analyzing bone marrow cytomorphology. This review provides a multidimensional account of six bone marrow cell morphology processes: automated detection, segmentation, identification, classification, enumeration, and diagnosis. Highlighting the attractiveness and potential of machine learning systems based on bone marrow cell morphology, the review synthesizes current research and recent advances in the application of machine learning in this field. The objective of this review is to offer recommendations to hematologists for selecting the most suitable machine learning algorithms to automate bone marrow cell morphology examinations, enabling swift and precise analysis of bone marrow cytopathic trends for early disease identification and diagnosis. Furthermore, the review delineates potential future research avenues for machine learning-based applications in bone marrow cell morphology analysis.

6.
J Bone Oncol ; 46: 100606, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38778836

ABSTRACT

Objective: This study aims to explore an optimized deep-learning model for automatically classifying spinal osteosarcoma and giant cell tumors, providing a reliable method for distinguishing between these challenging diagnoses in medical imaging. Methods: This research employs an optimized DenseNet model with a self-attention mechanism to enhance feature extraction and reduce misclassification when differentiating spinal osteosarcoma from giant cell tumors. The model utilizes multi-scale feature map extraction for improved classification accuracy. The paper also examines the practical use of Gradient-weighted Class Activation Mapping (Grad-CAM) for enhancing medical image classification, specifically focusing on its application in diagnosing spinal osteosarcoma and giant cell tumors. Implementing Grad-CAM visualization improved the performance of the deep learning model, resulting in an overall accuracy of 85.61%. Images for these conditions were visualized using Grad-CAM, with corresponding class activation maps indicating the tumor regions on which the model focuses during prediction. Results: The model achieves an overall accuracy of 80% or higher, with sensitivity exceeding 80% and specificity surpassing 80%. The average area under the curve (AUC) for spinal osteosarcoma and giant cell tumors is 0.814 and 0.882, respectively. The model significantly supports orthopedic physicians in developing treatment and care plans. Conclusion: The DenseNet-based automatic classification model accurately distinguishes spinal osteosarcoma from giant cell tumors. This study contributes to medical image analysis by providing a valuable tool for clinicians in accurate diagnostic classification. Future efforts will focus on expanding the dataset and refining the algorithm to enhance the model's applicability in diverse clinical settings.
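
The Grad-CAM visualization mentioned above follows a generic recipe that can be sketched as below; this is not the study's implementation, and the DenseNet variant and target layer are assumptions: capture the activations of the last dense block, backpropagate the target class score, and weight the feature maps by the pooled gradients.

```python
import torch
from torchvision import models

# DenseNet backbone; ImageNet weights are used purely as a placeholder.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).eval()
captured = {}

def save_features(_module, _inputs, output):
    output.retain_grad()            # keep the gradient of this activation map
    captured["feats"] = output      # feature maps of the last dense block

model.features.denseblock4.register_forward_hook(save_features)

def grad_cam(image, target_class):
    """image: (1, 3, H, W) tensor; returns a (h, w) heat map scaled to [0, 1]."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()                    # gradients of the class score
    feats = captured["feats"]
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = torch.relu((weights * feats.detach()).sum(dim=1))
    return cam[0] / (cam.max() + 1e-8)
```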

7.
Comput Methods Programs Biomed ; 251: 108201, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38703719

ABSTRACT

BACKGROUND AND OBJECTIVE: Surgical robotics is moving toward cognitive control architectures that provide a certain degree of autonomy in order to improve patient safety and surgical outcomes, while decreasing the cognitive load surgeons dedicate to low-level decisions. Cognition requires workspace perception, which is an essential step towards automatic decision-making and task-planning capabilities. Robust and accurate detection and tracking in minimally invasive surgery suffers from limited visibility, occlusions, anatomy deformations, and camera movements. METHOD: This paper develops a robust methodology to detect and track anatomical structures in real time for use in automatic control of robotic systems and augmented reality. The work focuses on experimental validation in a highly challenging surgery: fetoscopic repair of open spina bifida. The proposed method is based on two sequential steps: first, selection of relevant points (contour) using a convolutional neural network and, second, reconstruction of the anatomical shape by means of deformable geometric primitives. RESULTS: The methodology's performance was validated in different scenarios. Synthetic-scenario tests, designed for extreme validation conditions, demonstrate the safety margin offered by the methodology with respect to nominal conditions during surgery. Real-scenario experiments demonstrated the validity of the method in terms of accuracy, robustness, and computational efficiency. CONCLUSIONS: This paper presents robust anatomical structure detection in the presence of abrupt camera movements, severe occlusions, and deformations. Although the paper focuses on a case study, open spina bifida, the methodology is applicable to all anatomies whose contours can be approximated by geometric primitives. The methodology is designed to provide effective inputs to cognitive robotic control and augmented reality systems that require accurate tracking of sensitive anatomies.
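
The second step (reconstruction by deformable geometric primitives) can be illustrated with a toy ellipse fit to CNN-predicted contour points using OpenCV; the point coordinates below are placeholders, and temporal smoothing across frames is omitted.

```python
import cv2
import numpy as np

# Placeholder contour points, standing in for the CNN's per-frame predictions.
contour_points = np.array(
    [[120, 80], [150, 90], [170, 130], [160, 180], [120, 200], [95, 150]],
    dtype=np.float32,
)

# fitEllipse needs at least 5 points and returns ((cx, cy), (major, minor), angle).
(cx, cy), (major, minor), angle = cv2.fitEllipse(contour_points)
print(f"center=({cx:.1f}, {cy:.1f}), axes=({major:.1f}, {minor:.1f}), angle={angle:.1f}")
```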


Subject(s)
Robotic Surgical Procedures , Humans , Robotic Surgical Procedures/methods , Neural Networks, Computer , Algorithms , Spinal Dysraphism/surgery , Spinal Dysraphism/diagnostic imaging , Image Processing, Computer-Assisted/methods , Robotics , Augmented Reality
8.
Clin Oral Investig ; 28(4): 223, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38507031

ABSTRACT

OBJECTIVES: An evaluation of the effectiveness of a new computational system for automatic classification, developed based on a Siamese network combined with convolutional neural networks (CNNs), is presented. The system aims to identify endodontic technical errors on cone beam computed tomography (CBCT). The study also compares the performance of the automatic classification system with that of dentists. METHODS: Sagittal and coronal reconstructions of one thousand endodontically treated maxillary molars were evaluated for the quality of the endodontic treatment and the presence of periapical hypodensities by three board-certified dentists and an oral and maxillofacial radiologist. The proposed classification system was based on a Siamese network combined with EfficientNet B1 or EfficientNet B7 networks. Accuracy, sensitivity, precision, specificity, and F1-score values were calculated for the automated systems and the dentists, and chi-square tests were performed. RESULTS: Performance values were obtained for EfficientNet B1, EfficientNet B7, and the dentists. Regarding accuracy, sensitivity, and specificity, the best results were obtained with EfficientNet B1; concerning precision and F1-score, the best results were obtained with EfficientNet B7. The presence of periapical hypodensity lesions was associated with endodontic technical errors, whereas the absence of endodontic technical errors was associated with the absence of hypodensity. CONCLUSIONS: Quality evaluation of endodontic treatment performed by dentists and by the Siamese network combined with EfficientNet B7 or EfficientNet B1 was comparable, with a slight superiority for the Siamese network. CLINICAL RELEVANCE: CNNs have the potential to be used as a support and standardization tool in assessing endodontic treatment quality in clinical practice.
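
A hedged sketch of a Siamese arrangement with an EfficientNet-B1 backbone is given below; the head sizes, the two-view (e.g. sagittal and coronal) pairing, and the binary output are assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class SiameseEfficientNet(nn.Module):
    """Shared EfficientNet-B1 branch embedding two views of the same tooth,
    followed by a small head on the concatenated embeddings."""
    def __init__(self, num_classes=2, embed_dim=256):
        super().__init__()
        backbone = models.efficientnet_b1(
            weights=models.EfficientNet_B1_Weights.IMAGENET1K_V1)
        backbone.classifier = nn.Identity()      # keep the 1280-d pooled features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2 * 1280, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, view_a, view_b):
        za = self.backbone(view_a)               # shared weights for both views
        zb = self.backbone(view_b)
        return self.head(torch.cat([za, zb], dim=1))

model = SiameseEfficientNet()
logits = model(torch.rand(2, 3, 240, 240), torch.rand(2, 3, 240, 240))
```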


Subject(s)
Root Canal Therapy , Spiral Cone-Beam Computed Tomography , Humans , Cone-Beam Computed Tomography/methods , Dental Care , Molar
9.
Brain Spine ; 4: 102738, 2024.
Article in English | MEDLINE | ID: mdl-38510635

ABSTRACT

Introduction: Modic changes (MCs) are MRI alterations in the signal intensity of spinal vertebrae. This study introduces an end-to-end model to automatically detect and classify MCs in lumbar MRIs. The model's two-step process involves locating intervertebral regions and then categorizing MC types (MC0, MC1, MC2) using paired T1- and T2-weighted images. This approach offers a promising solution for efficient and standardized MC assessment. Research question: The aim is to investigate how different MRI normalization techniques affect MC classification and how the model can be used in a clinical setting. Material and methods: A combination of Faster R-CNN and a 3D convolutional neural network (CNN) is employed. The model first identifies intervertebral regions and then classifies MC types (MC0, MC1, MC2) using paired T1- and T2-weighted lumbar MRIs. Two datasets are used for model development and evaluation. Results: The detection model achieves high accuracy in identifying intervertebral areas, with Intersection over Union (IoU) values above 0.7, indicating strong localization alignment. Confidence scores above 0.9 demonstrate the model's accurate identification of intervertebral levels. In the classification task, standardization yields the best performance for MC type assessment, achieving mean sensitivities of 0.83 for MC0, 0.85 for MC1, and 0.78 for MC2, along with a balanced accuracy of 0.80 and an F1 score of 0.88. Discussion and conclusion: The end-to-end model shows promise in automating MC assessment, contributing to standardized diagnostics and treatment planning. Limitations include dataset size, class imbalance, and lack of external validation. Future research should focus on external validation, refining model generalization, and improving clinical applicability.
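
The standardization reported to perform best can be sketched as per-volume z-scoring of the paired T1/T2 inputs; the array shapes and the non-zero masking rule are assumptions for illustration.

```python
import numpy as np

def standardize_volume(vol: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization computed over non-zero voxels."""
    voxels = vol[vol > 0]
    return (vol - voxels.mean()) / (voxels.std() + 1e-8)

t1 = standardize_volume(np.random.rand(16, 256, 256))   # placeholder T1 volume
t2 = standardize_volume(np.random.rand(16, 256, 256))   # placeholder T2 volume
paired_input = np.stack([t1, t2], axis=0)                # 2-channel input for a 3D CNN
```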

10.
Cancer Imaging ; 24(1): 20, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38279133

ABSTRACT

BACKGROUND & AIMS: The present study utilized extracted computed tomography radiomics features to classify the gross tumor volume and normal liver tissue in hepatocellular carcinoma by mainstream machine learning methods, aiming to establish an automatic classification model. METHODS: We recruited 104 pathologically confirmed hepatocellular carcinoma patients for this study. GTV and normal liver tissue samples were manually segmented into regions of interest and randomly divided into five-fold cross-validation groups. Dimensionality reduction using LASSO regression. Radiomics models were constructed via logistic regression, support vector machine (SVM), random forest, Xgboost, and Adaboost algorithms. The diagnostic efficacy, discrimination, and calibration of algorithms were verified using area under the receiver operating characteristic curve (AUC) analyses and calibration plot comparison. RESULTS: Seven screened radiomics features excelled at distinguishing the gross tumor area. The Xgboost machine learning algorithm had the best discrimination and comprehensive diagnostic performance with an AUC of 0.9975 [95% confidence interval (CI): 0.9973-0.9978] and mean MCC of 0.9369. SVM had the second best discrimination and diagnostic performance with an AUC of 0.9846 (95% CI: 0.9835- 0.9857), mean Matthews correlation coefficient (MCC)of 0.9105, and a better calibration. All other algorithms showed an excellent ability to distinguish between gross tumor area and normal liver tissue (mean AUC 0.9825, 0.9861,0.9727,0.9644 for Adaboost, random forest, logistic regression, naivem Bayes algorithm respectively). CONCLUSION: CT radiomics based on machine learning algorithms can accurately classify GTV and normal liver tissue, while the Xgboost and SVM algorithms served as the best complementary algorithms.


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Humans , Carcinoma, Hepatocellular/diagnostic imaging , Bayes Theorem , Radiomics , Tumor Burden , Liver Neoplasms/diagnostic imaging , Machine Learning , Retrospective Studies
11.
Int J Comput Assist Radiol Surg ; 19(2): 355-365, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37921964

ABSTRACT

PURPOSE: Heart failure (HF) is a serious and complex syndrome with a high mortality rate, and its correct classification is helpful in clinical diagnosis. In our previous work, we proposed a self-supervised learning framework for HF classification (SSLHF) on cine cardiac magnetic resonance images (Cine-CMR). However, that method lacks the integration of three-dimensional spatial information and temporal information. This study therefore aims to propose an automatic 4D HF classification algorithm. METHODS: To construct a 4D classification model, we proposed an extended framework called 4D-SSLHF. It mainly consists of self-supervised image restoration and HF classification. The image restoration proxy task utilizes three image transformation methods to enhance the exploration of spatial and temporal information in the Cine-CMR. In the classification task, we proposed a Siamese Conv-LSTM network, combining a Siamese network and bi-directional Conv-LSTM to integrate the features of the four dimensions simultaneously. RESULTS: Experimental results on 184 patients from Shanghai Chest Hospital achieved an AUC of 0.8794 and an accuracy (ACC) of 0.8402 in five-fold cross-validation. Compared with our previous work, the improvements in AUC and ACC were 2.89% and 1.94%, respectively. CONCLUSIONS: In this study, we proposed a novel self-supervised learning framework named 4D-SSLHF for HF classification based on Cine-CMR. The proposed 4D-SSLHF effectively mines the 3D spatial and temporal information in Cine-CMR images and accurately classifies different categories of HF. The good classification results show our method's potential to assist physicians in choosing personalized treatment.


Subject(s)
Heart Failure , Magnetic Resonance Imaging, Cine , Humans , Magnetic Resonance Imaging, Cine/methods , China , Heart , Heart Failure/diagnostic imaging , Algorithms
12.
Heliyon ; 9(9): e19507, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37809718

ABSTRACT

The study investigates the suitability of time-series Sentinel-2 NDVI-derived maps for subfield detection in a sunflower crop cultivated in an organic farming system. The aim was to understand the spatio-temporal behaviour of subfield areas identified by the k-means algorithm from NDVI maps obtained from satellite images, together with the variability of ground yield data, in order to increase the efficiency of delimiting management zones in an organic farming system. Experiments were conducted on a surface of 29 ha. NDVI time series derived from Sentinel-2 images and the k-means algorithm were used to rapidly delineate the sunflower subfield areas. Crop achene yields in the whole field ranged from 1.3 to 3.77 t ha-1, with significant within-field spatial variability. The cluster analysis of hand-sampled data showed three subfields with mean achene yields of 3.54 t ha-1 (cluster 1), 2.98 t ha-1 (cluster 2), and 2.07 t ha-1 (cluster 3). In the cluster analysis of NDVI data, the k-means algorithm delineated the subfield spatial and temporal crop yield variability early in the season. The best period for identifying subfield areas extends from the inflorescence development stage to the fruit development stage. Analyzing the NDVI subfield areas and yield data, cluster 1 covers 42.4% of the total surface and accounts for 50% of the total achene yield; cluster 2 covers 35% of both surface and yield; and cluster 3 covers 22.2% of the total surface with 15% of the achene yield. The k-means algorithm applied to Sentinel-2 NDVI images delineates the sunflower subfield areas, and Sentinel-2 images combined with k-means can improve the efficient assessment of subfield areas in sunflower crops. Identifying subfield areas can lead to site-specific long-term agronomic actions for improving the sustainable intensification of agriculture in the organic farming system.
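
The zoning step can be illustrated with a toy k-means clustering of per-pixel NDVI time series; the array shapes and values below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans

n_dates, height, width = 12, 300, 250                 # one NDVI map per acquisition date
ndvi_stack = np.random.rand(n_dates, height, width)   # placeholder NDVI values in [0, 1]

# Each pixel becomes one sample whose features are its NDVI values over time.
pixels = ndvi_stack.reshape(n_dates, -1).T            # (n_pixels, n_dates)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
zone_map = labels.reshape(height, width)              # candidate management zones
```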

13.
J Imaging ; 9(9)2023 Sep 18.
Article in English | MEDLINE | ID: mdl-37754951

ABSTRACT

Early diagnosis and initiation of treatment for fresh osteoporotic lumbar vertebral fractures (OLVF) are crucial. Magnetic resonance imaging (MRI) is generally performed to differentiate between fresh and old OLVF. However, MRI can be intolerable for patients with severe back pain, and it is difficult to perform in an emergency. MRI should therefore only be performed in appropriately selected patients with a high suspicion of fresh fractures. As radiography is the first-choice imaging examination for the diagnosis of OLVF, improving screening accuracy with radiographs will optimize the decision of whether an MRI is necessary. This study aimed to develop a method to automatically classify lumbar vertebrae (LV) conditions, namely normal, old OLVF, or fresh OLVF, using deep learning methods with radiography. A total of 3481 LV images for training, validation, and testing and 662 LV images for external validation were collected. Visual evaluation by two radiologists determined the ground truth of the LV diagnoses. Three convolutional neural networks were ensembled. The accuracy, sensitivity, and specificity were 0.89, 0.83, and 0.92 in the test set and 0.84, 0.76, and 0.89 in the external validation, respectively. The results suggest that the proposed method can contribute to the accurate automatic classification of LV conditions on radiography.
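
The ensembling of three CNNs can be sketched as averaging softmax probabilities; the member architectures and the (untrained) weights below are placeholders, not those used in the study.

```python
import torch
from torchvision import models

def three_class(net):
    """Swap the final layer for a 3-class head: normal / old OLVF / fresh OLVF."""
    net.fc = torch.nn.Linear(net.fc.in_features, 3)
    return net.eval()

# Placeholder ensemble members; in practice each would be trained separately.
members = [three_class(models.resnet18()),
           three_class(models.resnet34()),
           three_class(models.resnet50())]

def ensemble_predict(x):
    """x: (N, 3, H, W) batch of cropped LV radiograph patches."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in members])
    return probs.mean(dim=0).argmax(dim=1)   # average softmax, then pick the class
```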

14.
Micron ; 173: 103520, 2023 10.
Article in English | MEDLINE | ID: mdl-37556898

ABSTRACT

Integration of whole slide imaging (WSI) and deep learning technology has led to significant improvements in the screening and diagnosis of cervical cancer. WSI enables the examination of all cells on a slide simultaneously, and deep learning algorithms can accurately label them as cancerous or non-cancerous. Although many studies have investigated the application of deep learning for diagnosing various diseases, there is a lack of research focusing on the evolution, limitations, and gaps of intelligent algorithms in conjunction with WSI for cervical cancer. This paper provides a comprehensive overview of the state-of-the-art deep learning algorithms used for the timely and precise analysis of cervical WSI images. A total of 115 relevant papers were reviewed, and 37 were selected after screening with specific inclusion and exclusion criteria. Methodological aspects including deep learning techniques, data sources, architectures, and classification techniques employed by the selected studies were analyzed. The review presents the most popular techniques and current trends in deep learning-based cervical classification systems, and categorizes the evolution of the domain based on deep learning techniques, offering an in-depth analysis of various models developed over time. The paper advocates for supervised transfer learning when utilizing deep learning models such as ResNet, VGG19, and EfficientNet, and builds a solid foundation for applying relevant techniques in different fields. Although some progress has been made in developing novel models for the diagnosis of cervical cancer, substantial work remains to be done in creating standardized benchmark databases of WSI images for the research community. This paper serves as a comprehensive guide for understanding the fundamental concepts, benefits, and challenges related to various deep learning models on WSI, including their application to cervical classification systems. Additionally, it provides valuable insights into future research directions in this area.


Subject(s)
Deep Learning , Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/diagnosis , Algorithms , Image Interpretation, Computer-Assisted/methods
15.
J Pers Med ; 13(7)2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37511674

ABSTRACT

Determining histological subtypes, such as invasive ductal and invasive lobular carcinomas (IDCs and ILCs), and immunohistochemical markers, such as estrogen receptor (ER), progesterone receptor (PR), and HER2 protein status, is important in planning breast cancer treatment. MRI-based radiomic analysis is emerging as a non-invasive substitute for biopsy to determine these signatures. We explore the effectiveness of radiomics-based and CNN (convolutional neural network)-based classification models to this end. T1-weighted dynamic contrast-enhanced, contrast-subtracted T1, and T2-weighted MR images of 429 breast cancer tumors from 323 patients are used. Various combinations of input data and classification schemes are applied for ER+ vs. ER-, PR+ vs. PR-, HER2+ vs. HER2-, and IDC vs. ILC classification tasks. The best results were obtained for the ER+ vs. ER- and IDC vs. ILC classification tasks, with their respective AUCs reaching 0.78 and 0.73 on test data. The results with multi-contrast input data were generally better than those with a single contrast alone. The radiomics- and CNN-based approaches generally exhibited comparable results. ER and IDC/ILC classification results were promising, while PR and HER2 classification needs further investigation with a larger dataset. The better results obtained with multi-contrast data might indicate that multi-parametric quantitative MRI could be used to achieve more reliable classifiers.

16.
Front Surg ; 10: 1172313, 2023.
Article in English | MEDLINE | ID: mdl-37425349

ABSTRACT

Introduction: A novel classification scheme for endplate lesions, based on T2-weighted images from magnetic resonance imaging (MRI) scans, has recently been introduced and validated. The scheme categorizes intervertebral spaces as "normal," "wavy/irregular," "notched," or "Schmorl's node." These lesions have been associated with spinal pathologies, including disc degeneration and low back pain. An automatic tool for detecting these lesions would facilitate clinical practice by reducing the workload and the diagnosis time. The present work exploits a deep learning application based on convolutional neural networks to automatically classify the type of lesion. Methods: T2-weighted MRI scans of the sagittal lumbosacral spine of consecutive patients were retrospectively collected. The middle slice of each scan was manually processed to identify the intervertebral spaces from L1-L2 to L5-S1, and the corresponding lesion type was labeled. A total of 1,559 gradable discs were obtained, with the following distribution of lesion types: "normal" (567 discs), "wavy/irregular" (485), "notched" (362), and "Schmorl's node" (145). The dataset was divided randomly into a training set and a validation set while preserving the original distribution of lesion types in each set. A pretrained network for image classification was utilized, and fine-tuning was performed using the training set. The retrained network was then applied to the validation set to evaluate the overall accuracy and the accuracy for each specific lesion type. Results: The overall accuracy was 88%. The accuracy for each lesion type was as follows: 91% (normal), 82% (wavy/irregular), 93% (notched), and 83% (Schmorl's node). Discussion: The results indicate that the deep learning approach achieved high accuracy for both overall classification and individual lesion types. In clinical applications, this implementation could be employed as part of an automatic detection tool for pathological conditions characterized by the presence of endplate lesions, such as spinal osteochondrosis.

17.
Biomed Eng Lett ; 13(3): 273-291, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37519874

ABSTRACT

This study conducted a systematic review to determine the feasibility of automatic Cyclic Alternating Pattern (CAP) analysis. Specifically, the review followed the 2020 Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines to address the formulated research question: is automatic CAP analysis viable for clinical application? From the 1,280 articles identified, the review included 35 studies that proposed various methods for examining CAP, including the classification of A phases, their subtypes, or the CAP cycles. Three main trends were observed over time regarding A-phase classification, starting with mathematical models or features classified with a tuned threshold, followed by conventional machine learning models and, recently, deep learning models. Regarding CAP cycle detection, most studies employed a finite state machine to implement the CAP scoring rules, which depends on an initial A-phase classifier, stressing the importance of developing suitable A-phase detection models. The assessment of A-phase subtypes has proven challenging due to the variety of approaches used in the state of the art for their detection, ranging from multiclass models to creating a model for each subtype. The review provided a positive answer to the main research question, concluding that automatic CAP analysis can be reliably performed. The main recommended research agenda involves validating the proposed methodologies on larger datasets, including more subjects with sleep-related disorders, and providing the source code for independent confirmation.
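
The rule-checking step that such a state machine implements can be illustrated with a toy run-based sketch under strongly simplified assumptions (real CAP scoring has additional conditions, e.g. on sequence length and termination): an A-B pair is accepted as a cycle when both the A phase and the following B interval last 2-60 s.

```python
def cap_cycles(a_labels):
    """a_labels: iterable of 0/1 per second (1 = A phase). Returns (start, end) tuples."""
    runs, start = [], None
    for t, lab in enumerate(list(a_labels) + [0]):   # trailing 0 closes an open run
        if lab and start is None:
            start = t
        elif not lab and start is not None:
            runs.append((start, t - start))          # (onset, duration) of an A phase
            start = None

    cycles = []
    for (onset, dur), (next_onset, _) in zip(runs, runs[1:]):
        b_dur = next_onset - (onset + dur)           # quiescent (B) interval length
        if 2 <= dur <= 60 and 2 <= b_dur <= 60:
            cycles.append((onset, next_onset))       # one simplified A-B cycle
    return cycles

print(cap_cycles([0] * 5 + [1] * 10 + [0] * 20 + [1] * 8 + [0] * 30))  # -> [(5, 35)]
```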

18.
Sensors (Basel) ; 23(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37177575

ABSTRACT

Cardiovascular disease is one of the main causes of death worldwide, and arrhythmias are an important group of cardiovascular diseases. Standard 12-lead electrocardiogram (ECG) signals are an important tool for diagnosing arrhythmias. Although 12-lead ECG signals provide more comprehensive arrhythmia information than single-lead signals, it is difficult to effectively fuse information between different leads. In addition, most current research on the automatic diagnosis of cardiac arrhythmias is based on modeling and analysis of single-mode features extracted from one-dimensional ECG sequences, ignoring the frequency-domain features of ECG signals. Therefore, developing an automatic arrhythmia detection algorithm based on the 12-lead ECG with high accuracy and strong generalization ability is still challenging. In this paper, a multimodal feature fusion model based on the attention mechanism is developed. This model utilizes a dual-channel deep neural network to extract features of different dimensions from one-dimensional ECG signals and two-dimensional ECG time-frequency maps, and uses an attention mechanism to effectively fuse the important features of the 12 leads, thereby obtaining richer arrhythmia information and ultimately achieving accurate classification of nine types of arrhythmia signals. This study used ECG signals from a mixed dataset to train, validate, and evaluate the model, reaching an average F1 score of 0.85 and an average accuracy of 0.97. Experimental results show that the algorithm has stable and reliable performance, so it is expected to have good practical application potential.
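
The two-dimensional branch of such a model starts from per-lead time-frequency maps, which can be sketched with a short-time Fourier transform; the sampling rate and STFT parameters below are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import stft

fs = 500                                  # assumed sampling rate in Hz
ecg_lead = np.random.randn(10 * fs)       # placeholder 10-second single-lead signal

# Short-time Fourier transform of one lead; the 12 per-lead maps would feed
# the 2-D channel of the dual-channel network.
f, t, Z = stft(ecg_lead, fs=fs, nperseg=256, noverlap=128)
tf_map = np.log1p(np.abs(Z))              # log-magnitude time-frequency image
print(tf_map.shape)                       # (frequency bins, time frames)
```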


Subject(s)
Algorithms , Arrhythmias, Cardiac , Humans , Arrhythmias, Cardiac/diagnosis , Neural Networks, Computer , Heart Rate , Electrocardiography/methods
19.
Front Physiol ; 14: 1176299, 2023.
Article in English | MEDLINE | ID: mdl-37187960

ABSTRACT

Introduction: Low back pain (LBP) is a prevalent and complex condition that poses significant medical, social, and economic burdens worldwide. The accurate and timely assessment and diagnosis of LBP, particularly non-specific LBP (NSLBP), are crucial to developing effective interventions and treatments for LBP patients. In this study, we aimed to investigate the potential of combining B-mode ultrasound image features with shear wave elastography (SWE) features to improve the classification of NSLBP patients. Methods: We recruited 52 subjects with NSLBP from the University of Hong Kong-Shenzhen Hospital and collected B-mode ultrasound images and SWE data from multiple sites. The Visual Analogue Scale (VAS) was used as the ground truth to classify NSLBP patients. We extracted and selected features from the data and employed a support vector machine (SVM) model to classify NSLBP patients. The performance of the SVM model was evaluated using five-fold cross-validation, and the accuracy, precision, and sensitivity were calculated. Results: We obtained an optimal feature set of 48 features, among which the SWE elasticity feature had the most significant contribution to the classification task. The SVM model achieved an accuracy, precision, and sensitivity of 0.85, 0.89, and 0.86, respectively, which were higher than previously reported values for MRI-based classification. Discussion: Our results showed that combining B-mode ultrasound image features with SWE features and employing an SVM model can improve the automatic classification of NSLBP patients. Our findings also suggest that the SWE elasticity feature is a crucial factor in classifying NSLBP patients, and that the proposed method can identify the important muscle sites and positions for the NSLBP classification task.
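
The classification step can be sketched as an SVM on the combined B-mode and SWE feature set with five-fold cross-validated accuracy, precision, and sensitivity (recall); the feature values and labels below are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(52, 48)              # 52 subjects x 48 selected features (placeholder)
y = np.array([0, 1] * 26)               # placeholder VAS-derived binary labels

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_validate(pipe, X, y, cv=5,
                        scoring=["accuracy", "precision", "recall"])
for key in ("test_accuracy", "test_precision", "test_recall"):
    print(key, scores[key].mean())
```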

20.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-37088980

ABSTRACT

Immunofluorescence patterns of anti-nuclear antibodies (ANAs) on human epithelial cell (HEp-2) substrates are important biomarkers for the diagnosis of autoimmune diseases. There are growing clinical requirements for automatic readout and classification of ANA immunofluorescence patterns in HEp-2 images following the taxonomy recommended by the International Consensus on Antinuclear Antibody Patterns (ICAP). In this study, a comprehensive collection of HEp-2 specimen images covering a broad range of ANA patterns was established and manually annotated by experienced laboratory experts. Using a supervised learning methodology, an automatic immunofluorescence pattern classification framework for HEp-2 specimen images was developed. The framework consists of a module for HEp-2 cell detection and cell-level feature extraction, followed by an image-level classifier capable of recognizing all 14 classes of ANA immunofluorescence patterns recommended by ICAP. Performance analysis indicated an accuracy of 92.05% on the validation dataset and 87% on an independent test dataset, surpassing the performance of human examiners on the same test dataset. The proposed framework is expected to contribute to automatic ANA pattern recognition in clinical laboratories to facilitate efficient and precise diagnosis of autoimmune diseases.


Subject(s)
Antibodies, Antinuclear , Autoimmune Diseases , Humans , Fluorescent Antibody Technique , Antibodies, Antinuclear/analysis , Autoimmune Diseases/diagnosis , Epithelial Cells , Supervised Machine Learning