Results 1 - 20 of 31,629
1.
J Med Primatol ; 53(4): e12722, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38949157

ABSTRACT

BACKGROUND: Tuberculosis (TB) kills approximately 1.6 million people yearly despite the fact that anti-TB drugs are generally curative. TB case detection and monitoring of therapy therefore need a comprehensive approach. Automated radiological analysis, combined with clinical, microbiological, and immunological data through machine learning (ML), can help achieve this. METHODS: Six rhesus macaques were experimentally inoculated with pathogenic Mycobacterium tuberculosis in the lung. Data, including computed tomography (CT), were collected at 0, 2, 4, 8, 12, 16, and 20 weeks. RESULTS: Our ML-based CT analysis (TB-Net) efficiently and accurately tracked disease progression, performing better than a standard deep learning model (OpenAI's CLIP ViT). TB-Net-based results were more consistent than, and were independently confirmed by, blinded manual disease scoring by two radiologists, and exhibited strong correlations with blood biomarkers, TB-lesion volumes, and disease signs during disease pathogenesis. CONCLUSION: The proposed approach is valuable for early disease detection, monitoring the efficacy of therapy, and clinical decision making.
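The reported correlations between TB-Net lesion scores and blood biomarkers suggest a simple rank-correlation analysis. Below is a minimal sketch of such an analysis, assuming hypothetical per-timepoint scores and biomarker values; it is not the authors' code.

```python
# A minimal sketch (not the authors' code) of correlating ML-derived CT lesion
# scores with a blood biomarker across imaging time points. Values are
# hypothetical placeholders.
import numpy as np
from scipy.stats import spearmanr

tbnet_scores = np.array([0.1, 0.4, 0.9, 1.6, 2.2, 2.0, 1.7])      # hypothetical scores per week
biomarker = np.array([3.0, 8.5, 21.0, 40.2, 55.1, 48.9, 41.0])    # hypothetical biomarker values

rho, p_value = spearmanr(tbnet_scores, biomarker)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")
```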


Subject(s)
Biomarkers , Deep Learning , Macaca mulatta , Mycobacterium tuberculosis , Computed Tomography , Animals , Biomarkers/blood , Computed Tomography/veterinary , Tuberculosis/veterinary , Tuberculosis/diagnostic imaging , Animal Disease Models , Pulmonary Tuberculosis/diagnostic imaging , Male , Female , Lung/diagnostic imaging , Lung/pathology , Lung/microbiology , Monkey Diseases/diagnostic imaging , Monkey Diseases/microbiology
2.
Med Phys ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38949577

ABSTRACT

BACKGROUND: Lung cancer is among the most common types of cancer. Detection of lung cancer at an early stage can reduce mortality rates. Pulmonary nodules may represent early cancer and can be identified through computed tomography (CT) scans. Malignant risk can be estimated from attributes such as size, shape, location, and density. PURPOSE: Deep learning algorithms have achieved remarkable advancements in this domain compared to traditional machine learning methods. Nevertheless, many existing anchor-based deep learning algorithms are sensitive to predefined anchor-box configurations, necessitating manual adjustments to obtain optimal outcomes. Conversely, current anchor-free deep learning-based nodule detection methods normally adopt fixed-shape nodule models such as cubes or spheres. METHODS: To address these technical challenges, we propose a multiscale 3D anchor-free deep learning network (M3N) for pulmonary nodule detection, leveraging adjustable nodule modeling (ANM). Within this framework, ANM allows target objects to be represented anisotropically, with a novel point selection strategy (PSS) devised to accelerate the learning of the anisotropic representation. We further incorporate a composite loss function that combines the conventional L2 loss and a cosine similarity loss, enabling M3N to learn nodules' intensity distribution in three dimensions. RESULTS: Experimental results show that M3N achieves a competition performance metric (CPM) of 90.6% across seven predefined false-positive rates per scan on the LUNA16 dataset. This performance appears to exceed that of other state-of-the-art deep learning-based networks reported in their respective publications. Individual test results also demonstrate that M3N excels at providing more accurate, adaptive bounding boxes around the contours of target nodules. CONCLUSIONS: The newly developed nodule detection system reduces reliance on prior knowledge, such as the general size of objects in the dataset, and should therefore enhance overall robustness and versatility. Distinct from traditional nodule modeling techniques, the ANM approach aligns more closely with the morphological characteristics of nodules. Time consumption and detection results demonstrate promising efficiency and accuracy, which should be validated in clinical settings.
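The composite loss described in METHODS can be sketched as follows; this is an assumed illustration, not the paper's implementation, and the weighting factor alpha is a hypothetical hyperparameter.

```python
# A minimal sketch (assumed) of a composite loss combining an L2 term with a
# cosine-similarity term, as described for M3N.
import torch
import torch.nn.functional as F

def composite_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # L2 (mean squared error) term penalizes voxel-wise intensity differences.
    l2 = F.mse_loss(pred, target)
    # Cosine term pushes the predicted 3D intensity distribution toward the
    # target's direction in feature space (1 - similarity, so 0 is perfect).
    cos = 1.0 - F.cosine_similarity(pred.flatten(1), target.flatten(1), dim=1).mean()
    return l2 + alpha * cos

loss = composite_loss(torch.rand(2, 1, 16, 16, 16), torch.rand(2, 1, 16, 16, 16))
```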

3.
Meta Radiol ; 2(3), 2024 Sep.
Article in English | MEDLINE | ID: mdl-38947177

ABSTRACT

The fairness of artificial intelligence and machine learning models, often compromised by imbalanced datasets, has long been a concern. While many efforts aim to minimize model bias, this study suggests that traditional fairness evaluation methods may themselves be biased, highlighting the need for a proper evaluation scheme with multiple evaluation metrics, since results vary under different criteria. Moreover, the limited data size of minority groups introduces significant data uncertainty, which can undermine judgments of fairness. This paper introduces an innovative evaluation approach that estimates the data uncertainty in minority groups through bootstrapping from majority groups for a more objective statistical assessment. Extensive experiments reveal that traditional evaluation methods may have drawn inaccurate conclusions about model fairness. The proposed method delivers an unbiased fairness assessment by adeptly addressing the inherent complications of model evaluation on imbalanced datasets. The results show that such comprehensive evaluation can provide more confidence when adopting those models.
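The core idea, estimating how much a metric fluctuates at the minority group's sample size by bootstrapping from the majority group, can be sketched as follows; names, sizes, and data are hypothetical.

```python
# A minimal sketch (assumed) of bootstrap-based uncertainty estimation for a
# fairness metric at a small subgroup size. All data are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical predictions and labels for a large majority group.
y_true_maj = rng.integers(0, 2, size=5000)
y_score_maj = np.clip(y_true_maj * 0.6 + rng.normal(0.2, 0.3, size=5000), 0, 1)

minority_n = 150   # size of the minority group
n_boot = 1000

# Repeatedly draw majority-group samples of the minority group's size to see
# how much the metric fluctuates at that sample size.
aucs = []
for _ in range(n_boot):
    idx = rng.choice(len(y_true_maj), size=minority_n, replace=True)
    if len(np.unique(y_true_maj[idx])) < 2:
        continue  # AUC is undefined without both classes present
    aucs.append(roc_auc_score(y_true_maj[idx], y_score_maj[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC 95% bootstrap interval at n={minority_n}: [{lo:.3f}, {hi:.3f}]")
```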

4.
Neurosurg Rev ; 47(1): 300, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951288

ABSTRACT

The diagnosis of Moyamoya disease (MMD) relies heavily on imaging, which could benefit from standardized machine learning tools. This study aims to evaluate the diagnostic efficacy of deep learning (DL) algorithms for MMD by analyzing sensitivity, specificity, and the area under the curve (AUC) compared to expert consensus. We conducted a systematic search of PubMed, Embase, and Web of Science for articles published from inception to February 2024. Eligible studies were required to report diagnostic accuracy metrics such as sensitivity, specificity, and AUC, excluding those not in English or using traditional machine learning methods. Seven studies were included, comprising a sample of 4,416 patients, of whom 1,358 had MMD. The pooled sensitivity for common and random effects models was 0.89 (95% CI: 0.85 to 0.92) and 0.92 (95% CI: 0.85 to 0.96), respectively. The pooled specificity was 0.89 (95% CI: 0.86 to 0.91) in the common effects model and 0.91 (95% CI: 0.75 to 0.97) in the random effects model. Two studies reported the AUC alongside their confidence intervals. A meta-analysis synthesizing these findings aggregated a mean AUC of 0.94 (95% CI: 0.92 to 0.96) for common effects and 0.89 (95% CI: 0.76 to 1.02) for random effects models. Deep learning models significantly enhance the diagnosis of MMD by efficiently extracting and identifying complex image patterns with high sensitivity and specificity. Trial registration: CRD42024524998 https://www.crd.york.ac.uk/prospero/displayrecord.php?RecordID=524998.
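For readers unfamiliar with common-effects pooling, below is a minimal sketch of inverse-variance pooling of sensitivities on the logit scale; the study counts are hypothetical placeholders, not the seven included studies.

```python
# A minimal sketch (assumed, not the authors' code) of common-effect
# (inverse-variance) pooling of sensitivities on the logit scale.
import numpy as np

# (true positives, false negatives) per hypothetical study
studies = [(90, 10), (170, 20), (45, 8), (300, 30)]

logits, weights = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    logit = np.log(sens / (1 - sens))
    var = 1 / tp + 1 / fn           # approximate variance of the logit proportion
    logits.append(logit)
    weights.append(1 / var)          # inverse-variance weight

pooled_logit = np.average(logits, weights=weights)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
se = np.sqrt(1 / np.sum(weights))
lo = 1 / (1 + np.exp(-(pooled_logit - 1.96 * se)))
hi = 1 / (1 + np.exp(-(pooled_logit + 1.96 * se)))
print(f"Pooled sensitivity {pooled_sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```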


Subject(s)
Deep Learning , Moyamoya Disease , Moyamoya Disease/diagnosis , Humans , Algorithms , Sensitivity and Specificity
5.
Front Artif Intell ; 7: 1321884, 2024.
Article in English | MEDLINE | ID: mdl-38952409

ABSTRACT

Background: Carotid plaques are major risk factors for stroke. Carotid ultrasound can help to assess the risk and incidence rate of stroke. However, large-scale carotid artery screening is time-consuming and laborious, and the diagnostic results inevitably involve the subjectivity of the diagnostician to a certain extent. Deep learning offers a way to address these challenges. We therefore attempted to develop an automated deep learning algorithm to provide a more consistent and objective diagnostic method and to identify the presence and stability of carotid plaques. Methods: A total of 3,860 ultrasound images from 1,339 participants who underwent carotid plaque assessment between January 2021 and March 2023 at the Shanghai Eighth People's Hospital were split at a 4:1 ratio for training and internal testing. The external test included 1,564 ultrasound images from 674 participants who underwent carotid plaque assessment between January 2022 and May 2023 at Xinhua Hospital affiliated with Dalian University. Deep learning algorithms, based on the fusion of a bilinear convolutional neural network with a residual neural network (BCNN-ResNet), were used to detect carotid plaques and assess plaque stability. We chose AUC as the main evaluation index, with accuracy, sensitivity, and specificity as auxiliary evaluation indices. Results: Modeling for detecting carotid plaques involved training and internal testing on 1,291 ultrasound images, with 617 images showing plaques and 674 without plaques. The external test comprised 470 ultrasound images, including 321 images with plaques and 149 without. Modeling for assessing plaque stability involved training and internal testing on 764 ultrasound images, consisting of 494 images with unstable plaques and 270 with stable plaques. The external test was composed of 279 ultrasound images, including 197 images with unstable plaques and 82 with stable plaques. For the task of identifying the presence of carotid plaques, our model achieved an AUC of 0.989 (95% CI: 0.840, 0.998) with a sensitivity of 93.2% and a specificity of 99.21% on the internal test. On the external test, the AUC was 0.951 (95% CI: 0.939, 0.962) with a sensitivity of 95.3% and a specificity of 82.24%. For the task of identifying the stability of carotid plaques, our model achieved an AUC of 0.896 (95% CI: 0.865, 0.922) on the internal test with a sensitivity of 81.63% and a specificity of 87.27%. On the external test, the AUC was 0.854 (95% CI: 0.830, 0.889) with a sensitivity of 68.52% and a specificity of 89.49%. Conclusion: Deep learning using BCNN-ResNet algorithms based on routine ultrasound images could be useful for detecting carotid plaques and assessing plaque instability.
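A bilinear CNN fuses two convolutional feature streams by outer-product pooling. The following is a minimal sketch of that mechanism, assuming ResNet-18 streams as stand-ins for the study's exact BCNN-ResNet architecture.

```python
# A minimal sketch (assumed) of bilinear pooling over two CNN feature streams,
# the core of a BCNN. Backbones and sizes are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models

class BilinearResNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two ResNet-18 streams truncated before pooling; untrained weights here.
        self.stream_a = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
        self.stream_b = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
        self.fc = nn.Linear(512 * 512, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fa = self.stream_a(x).flatten(2)                 # (B, 512, H*W)
        fb = self.stream_b(x).flatten(2)                 # (B, 512, H*W)
        # Bilinear (outer-product) pooling averaged over spatial locations.
        bilinear = torch.bmm(fa, fb.transpose(1, 2)) / fa.shape[-1]  # (B, 512, 512)
        bilinear = bilinear.flatten(1)
        # Signed square root and L2 normalization, as is common for BCNNs.
        bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-10)
        bilinear = nn.functional.normalize(bilinear)
        return self.fc(bilinear)

logits = BilinearResNet()(torch.rand(2, 3, 224, 224))
```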

6.
Front Comput Neurosci ; 18: 1415967, 2024.
Article in English | MEDLINE | ID: mdl-38952709

ABSTRACT

The electroencephalogram (EEG) plays a pivotal role in the detection and analysis of epileptic seizures, which affect over 70 million people worldwide. Nonetheless, visual interpretation of EEG signals for epilepsy detection is laborious and time-consuming. To tackle this open challenge, we introduce a straightforward yet efficient hybrid deep learning approach, named ResBiLSTM, for detecting epileptic seizures from EEG signals. First, a one-dimensional residual neural network (ResNet) is tailored to adeptly extract the local spatial features of EEG signals. Subsequently, the acquired features are input into a bidirectional long short-term memory (BiLSTM) layer to model temporal dependencies. These output features are then processed through two fully connected layers to produce the final seizure detection. The performance of ResBiLSTM was assessed on the epileptic seizure datasets provided by the University of Bonn and Temple University Hospital (TUH). The ResBiLSTM model achieves seizure detection accuracy rates of 98.88-100% in binary and ternary classifications on the Bonn dataset. Experimental outcomes for seizure recognition across seven epilepsy seizure types on the TUH seizure corpus (TUSZ) dataset indicate that the ResBiLSTM model attains a classification accuracy of 95.03% and a weighted F1 score of 95.03% with 10-fold cross-validation. These findings illustrate that ResBiLSTM outperforms several recent state-of-the-art deep learning approaches.
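The described pipeline (a 1-D residual block, a BiLSTM, then two fully connected layers) can be sketched as below; layer sizes are hypothetical, as the paper's exact configuration is not given here.

```python
# A minimal sketch (assumed) of the ResBiLSTM-style hybrid for EEG windows.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm1d(channels), nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                      # residual connection

class ResBiLSTM(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3)
        self.res = ResBlock1D(32)
        self.bilstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):                              # x: (batch, 1, samples)
        feats = self.res(self.stem(x))                 # local spatial features
        seq, _ = self.bilstm(feats.transpose(1, 2))    # temporal dependencies
        return self.fc(seq[:, -1])                     # classify from last step

logits = ResBiLSTM()(torch.rand(4, 1, 1024))
```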

7.
Network ; : 1-25, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38953316

ABSTRACT

Groundnut is a noteworthy oilseed crop. Leaf diseases are among the most important causes of poor plant growth and low yield in groundnut, directly diminishing both yield and quality. Therefore, an Optimized Wasserstein Deep Convolutional Generative Adversarial Network-based Groundnut Leaf Disease Identification system (GLDI-WDCGAN-AOA) is proposed in this paper. The pre-processed leaf images are fed to Hesitant Fuzzy Linguistic Bi-objective Clustering (HFL-BOC) for segmentation. Using the Wasserstein Deep Convolutional Generative Adversarial Network (WDCGAN), the input leaf images are classified into healthy leaf, early leaf spot, late leaf spot, nutrition deficiency, and rust. Finally, the weight parameters of the WDCGAN are optimized by the Aquila Optimization Algorithm (AOA) to achieve high accuracy. The proposed GLDI-WDCGAN-AOA approach provides 23.51%, 22.01%, and 18.65% higher accuracy and 24.78%, 23.24%, and 28.98% lower error rates compared with existing methods: real-time automated identification and categorization of groundnut leaf disease using hybrid machine learning (GLDI-DNN), online identification of peanut leaf diseases using data balancing and deep transfer learning (GLDI-LWCNN), and a deep learning-driven, progressive-scaling method for the precise categorization of groundnut leaf infections (GLDI-CNN), respectively.

8.
Trop Anim Health Prod ; 56(6): 192, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954103

ABSTRACT

Accurate breed identification in dairy cattle is essential for optimizing herd management and improving genetic standards. A smart method for correctly identifying phenotypically similar breeds can empower farmers to enhance herd productivity. A convolutional neural network (CNN)-based model was developed for the identification of Sahiwal and Red Sindhi cows. To increase the classification accuracy, cow pixels were first segmented from the background using a CNN model. From this segmented image, a masked image was produced by retaining the cow pixels from the original image while eliminating the background. To further improve accuracy, models were trained on four different images of each cow: front view, side view, grayscale front view, and grayscale side view. The masked images of these views were fed to a multi-input CNN model that predicts the class of the input images. The segmentation model achieved intersection-over-union (IoU) and F1-score values of 81.75% and 85.26%, respectively, with an inference time of 296 ms. For the classification task, multiple variants of the MobileNet and EfficientNet models were used as backbones with pre-trained weights. The MobileNet model achieved 80.0% accuracy for both breeds, while MobileNetV2 and MobileNetV3 reached 82.0% accuracy. CNN models with EfficientNet backbones outperformed the MobileNet models, with accuracy ranging from 84.0% to 86.0%. The F1-scores for these models were above 83.0%, indicating effective breed classification with few false positives and negatives. The present study thus demonstrates that deep learning models can be used effectively to identify phenotypically similar cattle breeds. By enabling accurate identification of zebu breeds, this study will reduce farmers' dependence on experts.
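A minimal sketch of a multi-input CNN along these lines is shown below, assuming a shared MobileNetV3 backbone whose per-view features are concatenated; the study's exact architecture may differ.

```python
# A minimal sketch (assumed) of a multi-input CNN: four views of a cow pass
# through a shared backbone and the features are concatenated for classification.
import torch
import torch.nn as nn
import torchvision.models as models

class MultiViewBreedNet(nn.Module):
    def __init__(self, n_breeds: int = 2, n_views: int = 4):
        super().__init__()
        backbone = models.mobilenet_v3_small(weights=None)
        backbone.classifier = nn.Identity()        # keep the 576-dim features
        self.backbone = backbone
        self.head = nn.Linear(576 * n_views, n_breeds)

    def forward(self, views):                      # list of four (B, 3, H, W) tensors
        feats = [self.backbone(v) for v in views]  # weights shared across views
        return self.head(torch.cat(feats, dim=1))

views = [torch.rand(2, 3, 224, 224) for _ in range(4)]
logits = MultiViewBreedNet()(views)
```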


Subject(s)
Deep Learning , Phenotype , Animals , Cattle , Breeding , Female , Dairying/methods
9.
Article in English | MEDLINE | ID: mdl-38954826

ABSTRACT

We have recently developed a charge inversion ion/ion reaction to selectively derivatize phosphatidylserine lipids via gas-phase Schiff base formation. This tandem mass spectrometry (MS/MS) workflow enables the separation and detection of isobaric lipids in imaging mass spectrometry, but the images acquired using this workflow are limited to relatively poor spatial resolution due to the current time and limit-of-detection requirements of these ion/ion reaction imaging mass spectrometry experiments. This trade-off between chemical specificity and spatial resolution can be overcome with computational image fusion, which combines complementary information from multiple images. Herein, we demonstrate a proof-of-concept workflow that fuses a low-spatial-resolution (i.e., 125 µm) ion/ion reaction product ion image with higher-spatial-resolution (i.e., 25 µm) ion images from a full-scan experiment performed on the same tissue section, resulting in a predicted ion/ion reaction product ion image with a 5-fold improvement in spatial resolution. Linear regression, random forest regression, and two-dimensional convolutional neural network (2D CNN) predictive models were tested for this workflow. Linear regression and 2D CNN models proved optimal for the predicted ion/ion images of PS 40:6 and SHexCer d38:1, respectively.
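The regression-based fusion step can be sketched as follows: a model is fitted at low resolution and then applied on the high-resolution grid. All arrays are hypothetical placeholders.

```python
# A minimal sketch (assumed) of regression-based image fusion: predict the
# low-resolution product-ion signal from co-registered full-scan ion images,
# then apply the fitted model at full resolution.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_lowres_pixels, n_fullscan_ions = 400, 12
# X: full-scan ion intensities downsampled to the low-res grid (hypothetical).
X_low = rng.random((n_lowres_pixels, n_fullscan_ions))
# y: measured ion/ion reaction product-ion intensities at low resolution.
y_low = X_low @ rng.random(n_fullscan_ions) + rng.normal(0, 0.05, n_lowres_pixels)

model = LinearRegression().fit(X_low, y_low)

# Predict the product-ion image on the high-resolution (25 um) grid.
X_high = rng.random((n_lowres_pixels * 25, n_fullscan_ions))
predicted_image = model.predict(X_high)
```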

10.
Cell Genom ; : 100603, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38955188

ABSTRACT

The uncovering of protein-RNA interactions enables a deeper understanding of RNA processing. Recent multiplexed crosslinking and immunoprecipitation (CLIP) technologies, such as antibody-barcoded eCLIP (ABC), dramatically increase the throughput of mapping RNA-binding protein (RBP) binding sites. However, multiplex CLIP datasets are multivariate, and each RBP suffers from a non-uniform signal-to-noise ratio. To address this, we developed Mudskipper, a versatile computational suite comprising two components: a Dirichlet multinomial mixture model that accounts for the multivariate nature of ABC datasets, and a softmasking approach that identifies and removes non-specific protein-RNA interactions in RBPs with a low signal-to-noise ratio. Mudskipper demonstrates superior precision and recall over existing tools on multiplex datasets and supports the analysis of repetitive elements and small non-coding RNAs. Our findings unravel splicing outcomes and variant-associated disruptions, enabling higher-throughput investigations into diseases and regulation mediated by RBPs.

11.
Phys Med Biol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955333

ABSTRACT

OBJECTIVE: Sparse-view dual-energy spectral computed tomography (DECT) imaging is a challenging inverse problem. Due to the incompleteness of the collected data, streak artifacts can degrade the reconstructed spectral images. The subsequent material decomposition task in DECT can further amplify artifacts and noise. APPROACH: To address this problem, we propose a novel one-step inverse generation network (OIGN) for sparse-view dual-energy CT imaging, which can image spectral images and materials simultaneously. The OIGN consists of five sub-networks organized into four modules: a pre-reconstruction module, a pre-decomposition module, a residual filtering module, and a residual decomposition module. A residual feedback mechanism is introduced to synchronize the optimization of spectral CT images and materials. MAIN RESULTS: Numerical simulation experiments show that the OIGN performs better on both reconstruction and material decomposition than other state-of-the-art spectral CT imaging algorithms. The OIGN also demonstrates high imaging efficiency, completing two high-quality imaging tasks in just 50 seconds. Additionally, anti-noise testing was conducted to evaluate the robustness of the OIGN. SIGNIFICANCE: These findings have great potential for high-quality multi-task spectral CT imaging in clinical diagnosis.

12.
J Biophotonics ; : e202400200, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955356

ABSTRACT

Ovarian cancer is among the most common gynecological cancers and the eighth leading cause of cancer-related death among women worldwide. Surgery is among the most important options for cancer treatment. During surgery, a biopsy is generally required to screen for lesions; however, traditional pathological examinations are time-consuming and laborious and require extensive experience and knowledge from pathologists. Therefore, this study proposes a simple, fast, and label-free method for ovarian cancer diagnosis that combines second harmonic generation (SHG) imaging and deep learning. Unstained fresh human ovarian tissues were subjected to SHG imaging and accurately characterized using the Pyramid Vision Transformer V2 (PVTv2) model. The results showed that the SHG-imaged collagen fibers could be used to quantify ovarian cancer. In addition, the PVTv2 model could accurately classify the 3,240 SHG images obtained from our imaging collection into benign, normal, and malignant images, with a final accuracy of 98.4%. These results demonstrate the great potential of SHG imaging techniques combined with deep learning models for diagnosing diseased ovarian tissue.

13.
Acad Radiol ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955592

ABSTRACT

RATIONALE AND OBJECTIVE: Stroke-associated pneumonia (SAP) often appears as a complication following intracerebral hemorrhage (ICH), leading to poor prognosis and increased mortality. Previous studies have typically developed prediction models based on clinical data alone, without considering that ICH patients often undergo CT scans immediately upon admission. As a result, these models are subjective, lack real-time applicability, and have accuracy too low to meet clinical needs. There is therefore an urgent need for a quick and reliable model to predict SAP in a timely manner. METHODS: In this retrospective study, we developed an image-based model (DeepSAP) using brain CT scans from 244 ICH patients to classify the presence and severity of SAP. First, DeepSAP employs MRI-template-based image registration to eliminate structural differences between samples, achieving statistical quantification and spatial standardization of the cerebral hemorrhage. The processed images and filtered clinical data were then input simultaneously into a deep-learning neural network for training and analysis. The model was evaluated on a test set for diagnostic performance, including accuracy, specificity, and sensitivity. RESULTS: Brain CT scans from 244 ICH patients (mean age, 60.24 years; 66 female) were divided into a training set (n = 170) and a test set (n = 74). The cohort included 143 SAP patients, accounting for 58.6% of the total, with 66 cases classified as moderate or above, representing 27% of the total. Experimental results showed an AUC of 0.93, an accuracy of 0.84, a sensitivity of 0.79, and a precision of 0.95 for classifying the presence of SAP. In comparison, the model relying solely on clinical data showed an AUC of only 0.76, while the radiomics method had an AUC of 0.74. Additionally, DeepSAP achieved an optimal AUC of 0.84 for the SAP grading task. CONCLUSION: DeepSAP's accuracy in predicting SAP stems from its spatial normalization and statistical quantification of the ICH region. DeepSAP is expected to be an effective tool for predicting and grading SAP in the clinic.

14.
Acad Radiol ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955591

ABSTRACT

RATIONALE AND OBJECTIVES: To compare a conventional T1 volumetric interpolated breath-hold examination (VIBE) with SPectral Attenuated Inversion Recovery (SPAIR) fat saturation and a deep learning (DL)-reconstructed accelerated VIBE sequence with SPAIR fat saturation achieving a 50 % reduction in breath-hold duration (hereafter, VIBE-SPAIRDL) in terms of image quality and diagnostic confidence. MATERIALS AND METHODS: This prospective study enrolled consecutive patients referred for upper abdominal MRI from November 2023 to December 2023 at a single tertiary center. Patients underwent upper abdominal MRI with acquisition of non-contrast and gadobutrol-enhanced conventional VIBE-SPAIR (fourfold acceleration, acquisition time 16 s) and VIBE-SPAIRDL (sixfold acceleration, acquisition time 8 s) on a 1.5 T scanner. Image analysis was performed by four readers, evaluating homogeneity of fat suppression, perceived signal-to-noise ratio (SNR), edge sharpness, artifact level, lesion detectability and diagnostic confidence. A statistical power analysis for patient sample size estimation was performed. Image quality parameters were compared by a repeated measures analysis of variance, and interreader agreement was assessed using Fleiss' κ. RESULTS: Among 450 consecutive patients, 45 patients were evaluated (mean age, 60 years ± 15 [SD]; 27 men, 18 women). VIBE-SPAIRDL acquisition demonstrated superior SNR (P < 0.001), edge sharpness (P < 0.001), and reduced artifacts (P < 0.001) with substantial to almost perfect interreader agreement for non-contrast (κ: 0.70-0.91) and gadobutrol-enhanced MRI (κ: 0.68-0.87). No evidence of a difference was found between conventional VIBE-SPAIR and VIBE-SPAIRDL regarding homogeneity of fat suppression, lesion detectability, or diagnostic confidence (all P > 0.05). CONCLUSION: Deep learning reconstruction of VIBE-SPAIR facilitated a reduction of breath-hold duration by half, while reducing artifacts and improving image quality. SUMMARY: Deep learning reconstruction of prospectively accelerated T1 volumetric interpolated breath-hold examination for upper abdominal MRI enabled a 50 % reduction in breath-hold time with superior image quality. KEY RESULTS: 1) In a prospective analysis of 45 patients referred for upper abdominal MRI, accelerated deep learning (DL)-reconstructed VIBE images with spectral fat saturation (SPAIR) showed better overall image quality, with better perceived signal-to-noise ratio and less artifacts (all P < 0.001), despite a 50 % reduction in acquisition time compared to conventional VIBE. 2) No evidence of a difference was found between conventional VIBE-SPAIR and accelerated VIBE-SPAIRDL regarding lesion detectability or diagnostic confidence.

15.
Eur Radiol ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955845

ABSTRACT

OBJECTIVES: Risk calculators (RCs) improve patient selection for prostate biopsy with clinical/demographic information, and recently with prostate MRI using the prostate imaging reporting and data system (PI-RADS). Fully automated deep learning (DL) analyzes MRI data independently, and has been shown to be on par with clinical radiologists, but has yet to be incorporated into RCs. The goal of this study is to re-assess the diagnostic quality of RCs, the impact of replacing PI-RADS with DL predictions, and the potential performance gains of adding DL alongside PI-RADS. MATERIALS AND METHODS: One thousand six hundred twenty-seven consecutive examinations from 2014 to 2021 were included in this retrospective single-center study, including 517 exams withheld for RC testing. Board-certified radiologists assessed PI-RADS during clinical routine; systematic and MRI/ultrasound-fusion biopsies then provided the histopathological ground truth for significant prostate cancer (sPC). nnU-Net-based DL ensembles were trained on biparametric MRI to predict the presence of sPC lesions (UNet-probability) and a PI-RADS-analogous five-point scale (UNet-Likert). Previously published RCs were validated as-is; with PI-RADS substituted by UNet-Likert (UNet-Likert-substituted RC); and with both UNet-probability and PI-RADS (UNet-probability-extended RC). Together with a newly fitted RC using clinical data, PI-RADS, and UNet-probability, the existing RCs were compared by receiver operating characteristic, calibration, and decision-curve analysis. RESULTS: Diagnostic performance remained stable for the UNet-Likert-substituted RCs. DL contained diagnostic information complementary to PI-RADS. The newly fitted RC spared 49% [252/517] of biopsies while maintaining the negative predictive value (94%), compared to the PI-RADS ≥ 4 cut-off, which spared 37% [190/517] (p < 0.001). CONCLUSIONS: Incorporating DL as an independent diagnostic marker in RCs can improve patient stratification before biopsy, as DL features carry information complementary to the clinical PI-RADS assessment. CLINICAL RELEVANCE STATEMENT: For patients with positive prostate screening results, a comprehensive diagnostic workup including prostate MRI, DL analysis, and individual classification using nomograms can identify patients with minimal prostate cancer risk, as they benefit less from the more invasive biopsy procedure. KEY POINTS: Current MRI-based nomograms result in many negative prostate biopsies. The addition of DL to nomograms with clinical data and PI-RADS improves patient stratification before biopsy. Fully automatic DL can be substituted for PI-RADS without sacrificing the quality of nomogram predictions. Prostate nomograms show cancer detection ability comparable to previous validation studies while being suitable for the addition of DL analysis.
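A newly fitted RC of the kind described can be sketched as a logistic regression over clinical variables, PI-RADS, and the DL probability; the feature names and data below are hypothetical placeholders.

```python
# A minimal sketch (assumed) of refitting a risk calculator as logistic
# regression with a DL-derived probability as an extra feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

age = rng.normal(65, 7, n)                          # hypothetical clinical variables
psa_density = rng.lognormal(-2, 0.5, n)
pirads = rng.integers(1, 6, n)
unet_probability = rng.random(n)                    # DL prediction per patient
sPC = (rng.random(n) < 0.2 + 0.5 * unet_probability).astype(int)  # synthetic outcome

X = np.column_stack([age, psa_density, pirads, unet_probability])
rc = LogisticRegression(max_iter=1000).fit(X, sPC)

risk = rc.predict_proba(X)[:, 1]
print(f"Apparent AUC: {roc_auc_score(sPC, risk):.2f}")
# In practice the RC would be fitted on training exams and validated on the
# withheld test set, with calibration and decision-curve analysis.
```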

16.
Eur Spine J ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955868

ABSTRACT

OBJECTIVE: This study aimed to develop and validate a predictive model for osteoporotic vertebral fracture (OVF) risk by integrating demographic data, bone mineral density (BMD), CT imaging, and deep learning radiomics features from CT images. METHODS: A total of 169 osteoporosis-diagnosed patients from three hospitals, comprising OVF (n = 77) and non-OVF (n = 92) groups, were randomly split into training (n = 135) and test (n = 34) cohorts. Demographic data, BMD, and CT imaging details were collected. Deep transfer learning (DTL) features from ResNet-50 and radiomics features were fused, with the best model chosen via logistic regression. Cox proportional hazards models identified clinical factors. Three models were constructed: clinical, radiomics-DTL, and fusion (clinical-radiomics-DTL). Performance was assessed using the AUC, C-index, Kaplan-Meier curves, and calibration curves. The best model was depicted as a nomogram, and clinical utility was evaluated using decision curve analysis (DCA). RESULTS: BMD, the CT values of the paravertebral muscles (PVM), and the paravertebral muscles' cross-sectional area (CSA) differed significantly between the OVF and non-OVF groups (P < 0.05). No significant differences were found between the training and test cohorts. Multivariate Cox models identified BMD, the CT values of the PVM, and CSAPS reduction as independent OVF risk factors (P < 0.05). The fusion model exhibited the highest predictive performance (C-index: 0.839 in training, 0.795 in test). DCA confirmed the nomogram's utility in OVF risk prediction. CONCLUSION: This study presents a robust predictive model for OVF risk, integrating BMD, CT data, and radiomics-DTL features, offering high sensitivity and specificity. The model's visualizations can inform OVF prevention and treatment strategies.
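A minimal sketch of the Cox proportional hazards step is shown below, using the lifelines package and synthetic stand-ins for the study's predictors (BMD, PVM CT value, muscle CSA); it is not the authors' analysis.

```python
# A minimal sketch (assumed) of a Cox proportional hazards model for fracture
# risk. Requires the lifelines package; all data are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 169
df = pd.DataFrame({
    "bmd": rng.normal(0.8, 0.1, n),          # bone mineral density
    "pvm_ct_value": rng.normal(40, 10, n),   # CT value of paravertebral muscles (HU)
    "csa": rng.normal(12, 2, n),             # muscle cross-sectional area (cm^2)
    "time": rng.exponential(24, n),          # follow-up time (months)
    "ovf": rng.integers(0, 2, n),            # fracture event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="ovf")
cph.print_summary()   # hazard ratios and p-values per predictor
```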

17.
J Imaging Inform Med ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955964

ABSTRACT

This study aimed to investigate the performance of a fine-tuned large language model (LLM) in identifying patients with pretreatment lung cancer from picture archiving and communication system (PACS) reports and to compare it with that of radiologists. Patients whose radiological reports contained the term lung cancer (3,111 for training, 124 for validation, and 288 for testing) were included in this retrospective study. Based on the clinical indication and diagnosis sections of the radiological report (used as input data), they were classified into four groups (used as reference data): group 0 (no lung cancer), group 1 (pretreatment lung cancer present), group 2 (after treatment for lung cancer), and group 3 (planning radiation therapy). Using the training and validation datasets, fine-tuning of the pretrained LLM was conducted ten times. Due to group imbalance, group 2 data were undersampled during training. The performance of the best-performing model on the validation dataset was then assessed on the independent test dataset. For testing purposes, two radiologists (readers 1 and 2) also classified the radiological reports. The overall accuracy of the fine-tuned LLM, reader 1, and reader 2 was 0.983, 0.969, and 0.969, respectively. The sensitivity for differentiating groups 0/1/2/3 by the LLM, reader 1, and reader 2 was 1.000/0.948/0.991/1.000, 0.750/0.879/0.996/1.000, and 1.000/0.931/0.978/1.000, respectively. The time required for classification by the LLM, reader 1, and reader 2 was 46 s, 2,539 s, and 1,538 s, respectively. The fine-tuned LLM effectively identified patients with pretreatment lung cancer from PACS, with performance comparable to that of radiologists in a shorter time.
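The group-2 undersampling step can be sketched as follows, with hypothetical report texts and label counts standing in for the actual dataset.

```python
# A minimal sketch (assumed) of undersampling an over-represented class before
# fine-tuning a report classifier, as the study did for group 2.
import pandas as pd

reports = pd.DataFrame({
    "text": [f"report {i}" for i in range(3111)],
    "group": [0] * 600 + [1] * 500 + [2] * 1700 + [3] * 311,  # hypothetical counts
})

# Downsample group 2 to the size of the next-largest group.
target_n = reports[reports.group != 2].group.value_counts().max()
group2 = reports[reports.group == 2].sample(n=target_n, random_state=0)
balanced = pd.concat([reports[reports.group != 2], group2]).sample(frac=1, random_state=0)

print(balanced.group.value_counts())  # training set with group 2 undersampled
```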

18.
BMC Med Inform Decis Mak ; 24(1): 187, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951831

ABSTRACT

BACKGROUND: Accurate measurement of hemoglobin concentration is essential for various medical scenarios, including preoperative evaluations and determining blood loss. Traditional invasive methods are inconvenient and not suitable for rapid, point-of-care testing. Moreover, current models, due to their complex parameters, are not well-suited for mobile medical settings, which limits the ability to conduct frequent and rapid testing. This study aims to introduce a novel, compact, and efficient system that leverages deep learning and smartphone technology to accurately estimate hemoglobin levels, thereby facilitating rapid and accessible medical assessments. METHODS: The study employed a smartphone application to capture images of the eye, which were subsequently analyzed by a deep neural network trained on data from invasive blood tests. Specifically, the EGE-Unet model was used for eyelid segmentation, while the DHA(C3AE) model was employed for hemoglobin level prediction. The performance of the EGE-Unet was evaluated using statistical metrics including mean intersection over union (MIOU), F1 score, accuracy, specificity, and sensitivity. The DHA(C3AE) model's performance was assessed using the mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and R^2. RESULTS: The EGE-Unet model demonstrated robust performance in eyelid segmentation, achieving an MIOU of 0.78, an F1 score of 0.87, an accuracy of 0.97, a specificity of 0.98, and a sensitivity of 0.86. The DHA(C3AE) model for hemoglobin level prediction yielded promising outcomes with an MAE of 1.34, an MSE of 2.85, an RMSE of 1.69, and an R^2 of 0.34. The overall size of the model is a modest 1.08 M, with a computational complexity of 0.12 GFLOPs. CONCLUSIONS: This system presents a groundbreaking approach that eliminates the need for supplementary devices, providing a cost-effective, swift, and accurate method for healthcare professionals to enhance treatment planning and improve patient care in perioperative environments. The proposed system has the potential to enable frequent and rapid testing of hemoglobin levels, which can be particularly beneficial in mobile medical settings. TRIAL REGISTRATION: The clinical trial was registered on the Chinese Clinical Trial Registry (No. ChiCTR2100044138) on 20/02/2021.
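For reference, the reported regression metrics can be computed as below; the hemoglobin values are made-up placeholders.

```python
# A minimal sketch (assumed) showing how the reported regression metrics
# (MAE, MSE, RMSE, R^2) are computed for hemoglobin predictions.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

hb_true = np.array([11.2, 13.5, 9.8, 14.1, 12.3])   # g/dL from invasive tests
hb_pred = np.array([12.0, 13.1, 11.0, 13.6, 12.9])  # model estimates

mae = mean_absolute_error(hb_true, hb_pred)
mse = mean_squared_error(hb_true, hb_pred)
rmse = np.sqrt(mse)
r2 = r2_score(hb_true, hb_pred)
print(f"MAE={mae:.2f}, MSE={mse:.2f}, RMSE={rmse:.2f}, R^2={r2:.2f}")
```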


Subject(s)
Deep Learning , Hemoglobins , Smartphone , Humans , Hemoglobins/analysis , Middle Aged , Male , Mobile Applications , Female
19.
Chin Med ; 19(1): 90, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951913

ABSTRACT

BACKGROUND: Given the high cost of endoscopy in gastric cancer (GC) screening, there is an urgent need to explore cost-effective methods for the large-scale prediction of precancerous lesions of gastric cancer (PLGC). We aim to construct a hierarchical artificial intelligence-based multimodal non-invasive method for pre-endoscopic risk screening, to provide tailored recommendations for endoscopy. METHODS: From December 2022 to December 2023, a large-scale screening study was conducted in Fujian, China. Based on traditional Chinese medicine theory, we simultaneously collected tongue images and inquiry information from 1,034 participants, considering the potential of these data for PLGC screening. We then introduced inquiry information for the first time, forming a multimodal artificial intelligence model that integrates tongue images and inquiry information for pre-endoscopic screening. Moreover, we validated this approach in an independent external validation cohort comprising 143 participants from the China-Japan Friendship Hospital. RESULTS: A multimodal artificial intelligence-assisted pre-endoscopic screening model based on tongue images and inquiry information (AITonguequiry) was constructed, adopting a hierarchical prediction strategy to achieve tailored endoscopic recommendations. Validation analysis revealed that the area under the curve (AUC) values of AITonguequiry were 0.74 for overall PLGC (95% confidence interval (CI) 0.71-0.76, p < 0.05) and 0.82 for high-risk PLGC (95% CI 0.82-0.83, p < 0.05), significantly and robustly better than those of either tongue images or inquiry information alone. In addition, AITonguequiry outperformed existing PLGC screening methodologies, with the AUC value improving by 45% for PLGC screening (0.74 vs. 0.51, p < 0.05) and by 52% for high-risk PLGC screening (0.82 vs. 0.54, p < 0.05). In the independent external validation, the AUC values were 0.69 for PLGC and 0.76 for high-risk PLGC. CONCLUSION: Our AITonguequiry artificial intelligence model, which for the first time incorporates inquiry information together with tongue images, enables more precise and finer-grained pre-endoscopic screening of PLGC. This enhances patient screening efficiency and alleviates patient burden.

20.
Front Robot AI ; 11: 1356345, 2024.
Article in English | MEDLINE | ID: mdl-38957217

ABSTRACT

In this study, we address the critical need for enhanced situational awareness and victim detection capabilities in Search and Rescue (SAR) operations amidst disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited manoeuvrability and the challenge of distinguishing victims from debris. Recognising these gaps, our research introduces a novel technological framework that integrates advanced gesture recognition with deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios. At the core of our methodology is the development and implementation of the Meerkat Optimization Algorithm-Stacked Convolutional Neural Network-Bi-Long Short Term Memory-Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand gesture detection, with accuracy, precision, recall, and F1-score all approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. Furthermore, we leverage the capabilities of the YOLOv8 deep learning model, trained on specialised datasets to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives. Our comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system demonstrated exceptional proficiency in navigating through obstructions and rapidly locating victims, even in environments with visual impairments such as smoke, clutter, and poor lighting. Our study not only highlights critical gaps in current SAR response capabilities but also offers a solution through a synergistic blend of gesture-based control, deep learning, and purpose-built robotics. The key findings underscore the potential of our integrated technological framework to significantly enhance UGV performance in disaster scenarios, thereby optimising life-saving outcomes when time is of the essence. This research paves the way for future advancements in SAR technology, with the promise of more efficient and reliable rescue operations in the face of disaster.
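Victim detection with YOLOv8 via the ultralytics package can be sketched as follows; the checkpoint, image path, and confidence threshold are placeholders, and the study used models trained on specialised SAR datasets.

```python
# A minimal sketch (assumed) of person/victim detection with a YOLOv8 model.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # pretrained checkpoint as a stand-in
results = model("disaster_scene.jpg", conf=0.25)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        if cls_name == "person":            # candidate victim detection
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            print(f"person at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), "
                  f"conf={float(box.conf):.2f}")
```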
