Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38489169

ABSTRACT

BACKGROUND: To date, most studies have focused on the diagnosis of thyroid nodules using artificial intelligence (AI); little research has addressed AI performance in detecting thyroid nodules. OBJECTIVE: To explore the value of a real-time AI computer-aided diagnosis system in the detection of thyroid nodules and to analyze the factors influencing detection accuracy. METHODS: From June 1, 2022 to December 31, 2023, 224 consecutive patients with 587 thyroid nodules were prospectively enrolled. Using the detection results of two experienced radiologists (each with more than 15 years of experience in thyroid diagnosis) as the reference standard, the thyroid nodule detection ability of radiologists with different experience levels (a junior radiologist with 1 year and a senior radiologist with 5 years of experience in thyroid diagnosis) was compared with that of the real-time AI. Logistic regression analysis was used to identify the factors influencing real-time AI detection of thyroid nodules. RESULTS: The detection rate of thyroid nodules by the real-time AI was significantly higher than that of the junior radiologist (P = 0.013) but lower than that of the senior radiologist (P = 0.001). Multivariate logistic regression analysis showed that nodule size, superior-pole location, lateral location (near the carotid artery), proximity to vessels, echogenicity (isoechoic, hyperechoic, or mixed), morphology (not very regular or irregular), margin (unclear), and ACR TI-RADS categories 4 and 5 were significant independent influencing factors (all P < 0.05). Combining the real-time AI with the radiologists increased the detection rates of the junior and senior radiologists to 97.4% (P < 0.001) and 99.1% (P = 0.015), respectively. CONCLUSIONS: Real-time AI performs well in thyroid nodule detection and can serve as a useful auxiliary tool in radiologists' clinical work.
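As a rough illustration of the multivariate logistic regression described in the METHODS, the sketch below fits per-nodule predictors against a binary detection outcome. The file name, column names, and the use of statsmodels are assumptions for illustration; the paper does not specify its implementation.

```python
# Minimal sketch of a multivariate logistic regression for factors
# influencing AI detection, assuming a hypothetical per-nodule table.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nodules.csv")  # hypothetical file: one row per nodule

# Dummy-code the categorical predictors named in the abstract;
# nodule size stays continuous.
X = pd.get_dummies(
    df[["size_mm", "location", "near_vessel", "echogenicity",
        "morphology", "margin", "tirads_category"]],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)
y = df["detected_by_ai"]  # 1 = detected by the real-time AI, 0 = missed

model = sm.Logit(y, X).fit()
print(model.summary())       # coefficients and P values per factor
print(np.exp(model.params))  # odds ratios
```

Exponentiating the fitted coefficients gives odds ratios, which is the usual way such independent influencing factors are reported.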

2.
EClinicalMedicine; 67: 102391, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38274117

ABSTRACT

Background: Clinical appearance and high-frequency ultrasound (HFUS) are indispensable for diagnosing skin diseases, providing external and internal information respectively. However, combining them is complex and challenging for primary care physicians and dermatologists. We therefore developed a deep multimodal fusion network (DMFN) model that combines analysis of clinical close-up and HFUS images for binary and multiclass classification of skin diseases. Methods: Between Jan 10, 2017, and Dec 31, 2020, the DMFN model was trained and validated using 1269 close-up and 11,852 HFUS images from 1351 skin lesions. A monomodal convolutional neural network (CNN) model was trained and validated with the same close-up images for comparison. Subsequently, we conducted a prospective, multicenter study in China. Both models were tested prospectively on 422 cases from 4 hospitals and compared with human raters (general practitioners, general dermatologists, and dermatologists specialized in HFUS). Performance in binary classification (benign vs. malignant) and multiclass classification (specific diagnoses across 17 types of skin diseases) was evaluated by the area under the receiver operating characteristic curve (AUC). This study is registered with www.chictr.org.cn (ChiCTR2300074765). Findings: In the binary classification, the DMFN model (AUC, 0.876) outperformed the monomodal CNN model (AUC, 0.697; P = 0.0063), the general practitioners (AUC, 0.651; P = 0.0025), and the general dermatologists (AUC, 0.838; P = 0.0038). By integrating close-up and HFUS images, the DMFN model attained performance nearly identical to that of dermatologists (AUC, 0.876 vs. AUC, 0.891; P = 0.0080). In the multiclass classification, the DMFN model (AUC, 0.707) showed superior predictive performance compared with general dermatologists (AUC, 0.514; P = 0.0043) and dermatologists specialized in HFUS (AUC, 0.640; P = 0.0083). Compared with dermatologists specialized in HFUS, the DMFN model showed better or comparable performance in diagnosing 9 of the 17 skin diseases. Interpretation: The DMFN model combining analysis of clinical close-up and HFUS images achieved satisfactory binary and multiclass classification performance relative to the dermatologists. It may be a valuable tool for general dermatologists and primary care providers. Funding: This work was supported in part by the National Natural Science Foundation of China and the Clinical Research Project of Shanghai Skin Disease Hospital.
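To make the fusion idea concrete, below is a minimal PyTorch sketch of a two-branch network in the spirit of the DMFN: one CNN encodes the clinical close-up, another encodes the HFUS image, and their feature vectors are concatenated for classification. The ResNet-18 backbones, feature dimensions, and classifier head are illustrative assumptions; the paper's actual architecture is not reproduced here.

```python
# Sketch of a two-branch image-fusion classifier (late fusion by
# feature concatenation); all architectural choices are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class FusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.closeup_branch = models.resnet18(weights=None)
        self.hfus_branch = models.resnet18(weights=None)
        feat = self.closeup_branch.fc.in_features  # 512 for ResNet-18
        self.closeup_branch.fc = nn.Identity()     # keep pooled features
        self.hfus_branch.fc = nn.Identity()
        self.head = nn.Sequential(
            nn.Linear(feat * 2, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, closeup, hfus):
        # Encode each modality, concatenate, then classify.
        fused = torch.cat([self.closeup_branch(closeup),
                           self.hfus_branch(hfus)], dim=1)
        return self.head(fused)

net = FusionNet(num_classes=2)  # 2 for benign vs. malignant; 17 for multiclass
logits = net(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
```

Late fusion by concatenation is only one design choice; attention-based or intermediate-layer fusion are common alternatives for combining modalities with very different appearance statistics.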

3.
EClinicalMedicine; 60: 102027, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37333662

ABSTRACT

Background: Identifying patients with clinically significant prostate cancer (csPCa) before biopsy helps reduce unnecessary biopsies and improve patient prognosis, but the diagnostic performance of traditional transrectal ultrasound (TRUS) for csPCa is limited. This study aimed to develop a high-performance convolutional neural network (CNN) model (P-Net) based on a TRUS video of the entire prostate and to investigate its efficacy in identifying csPCa. Methods: Between January 2021 and December 2022, this study prospectively evaluated 832 patients from four centres who underwent prostate biopsy and/or radical prostatectomy. All patients had a standardised TRUS video of the whole prostate. A two-dimensional CNN (2D P-Net) and a three-dimensional CNN (3D P-Net) were constructed using the training cohort (559 patients) and tested on the internal validation cohort (140 patients) as well as the external validation cohort (133 patients). The performance of 2D P-Net and 3D P-Net in predicting csPCa was assessed in terms of the area under the receiver operating characteristic curve (AUC), biopsy rate, and unnecessary biopsy rate, and compared with the TRUS 5-point Likert score system and the multiparametric magnetic resonance imaging (mp-MRI) Prostate Imaging Reporting and Data System (PI-RADS) v2.1. Decision curve analyses (DCAs) were used to determine the net benefit associated with their use. The study is registered at https://www.chictr.org.cn with the unique identifier ChiCTR2200064545. Findings: The diagnostic performance of 3D P-Net (AUC: 0.85-0.89) was superior to that of the TRUS 5-point Likert score system (AUC: 0.71-0.78; P = 0.003-0.040) and similar to that of the mp-MRI PI-RADS v2.1 score system interpreted by experienced radiologists (AUC: 0.83-0.86; P = 0.460-0.732) and 2D P-Net (AUC: 0.79-0.86; P = 0.066-0.678) in the internal and external validation cohorts. The biopsy rate decreased from 40.3% (TRUS 5-point Likert score system) and 47.6% (mp-MRI PI-RADS v2.1 score system) to 35.5% (2D P-Net) and 34.0% (3D P-Net). The unnecessary biopsy rate decreased from 38.1% (TRUS 5-point Likert score system) and 35.2% (mp-MRI PI-RADS v2.1 score system) to 32.0% (2D P-Net) and 25.8% (3D P-Net). 3D P-Net yielded the highest net benefit in the DCAs. Interpretation: 3D P-Net, based on a grayscale TRUS video of the prostate, achieved satisfactory performance in identifying csPCa and could potentially reduce unnecessary biopsies. Further studies on how AI models can best be integrated into routine practice, and randomised controlled trials demonstrating their value in real clinical applications, are warranted. Funding: The National Natural Science Foundation of China (Grants 82202174 and 82202153), the Science and Technology Commission of Shanghai Municipality (Grants 18441905500 and 19DZ2251100), the Shanghai Municipal Health Commission (Grants 2019LJ21 and SHSLCZDZK03502), the Shanghai Science and Technology Innovation Action Plan (21Y11911200), the Fundamental Research Funds for the Central Universities (ZD-11-202151), and the Scientific Research and Development Fund of Zhongshan Hospital of Fudan University (Grant 2022ZSQD07).
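Two methodological ingredients of this study, a 3D CNN applied to a whole TRUS video and the net-benefit calculation behind decision curve analysis, can be sketched as follows. The r3d_18 backbone, clip shape, and threshold grid are assumptions for illustration and are not the paper's P-Net implementation.

```python
# Sketch of a 3D CNN video classifier plus the net-benefit computation
# used in decision curve analysis; all names and shapes are hypothetical.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# 3D CNN: classify a whole TRUS sweep as csPCa vs. non-csPCa.
model = r3d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
clip = torch.randn(1, 3, 16, 112, 112)  # (batch, channels, frames, H, W);
                                        # grayscale frames copied to 3 channels
logits = model(clip)

def net_benefit(y_true, y_prob, pt):
    """Net benefit at threshold probability pt:
    NB = TP/n - FP/n * pt / (1 - pt)."""
    pred = y_prob >= pt
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

# A decision curve is net_benefit evaluated over a grid of thresholds:
# curve = [net_benefit(y, p, t) for t in np.linspace(0.05, 0.6, 12)]
```

Plotting net benefit against the threshold probability for each model (and for the "biopsy all" and "biopsy none" strategies) gives the decision curves used to compare 2D P-Net, 3D P-Net, and the two scoring systems.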
