Results 1 - 5 of 5
1.
EClinicalMedicine ; 67: 102391, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38274117

ABSTRACT

Background: Clinical appearance and high-frequency ultrasound (HFUS) are indispensable for diagnosing skin diseases by providing external and internal information. However, their complex combination brings challenges for primary care physicians and dermatologists. Thus, we developed a deep multimodal fusion network (DMFN) model combining analysis of clinical close-up and HFUS images for binary and multiclass classification in skin diseases. Methods: Between Jan 10, 2017, and Dec 31, 2020, the DMFN model was trained and validated using 1269 close-ups and 11,852 HFUS images from 1351 skin lesions. A monomodal convolutional neural network (CNN) model was trained and validated with the same close-up images for comparison. Subsequently, we did a prospective and multicenter study in China. Both CNN models were tested prospectively on 422 cases from 4 hospitals and compared with the results from human raters (general practitioners, general dermatologists, and dermatologists specialized in HFUS). The performance of binary classification (benign vs. malignant) and multiclass classification (the specific diagnoses of 17 types of skin diseases), measured by the area under the receiver operating characteristic curve (AUC), was evaluated. This study is registered with www.chictr.org.cn (ChiCTR2300074765). Findings: The performance of the DMFN model (AUC, 0.876) was superior to that of the monomodal CNN model (AUC, 0.697) in the binary classification (P = 0.0063), and was also better than that of general practitioners (AUC, 0.651; P = 0.0025) and general dermatologists (AUC, 0.838; P = 0.0038). By integrating close-up and HFUS images, the DMFN model attained performance nearly identical to that of dermatologists (AUC, 0.876 vs. AUC, 0.891; P = 0.0080).
For the multiclass classification, the DMFN model (AUC, 0.707) exhibited superior prediction performance compared with general dermatologists (AUC, 0.514; P = 0.0043) and dermatologists specialized in HFUS (AUC, 0.640; P = 0.0083). Compared to dermatologists specialized in HFUS, the DMFN model showed better or comparable performance in diagnosing 9 of the 17 skin diseases. Interpretation: The DMFN model, combining analysis of clinical close-up and HFUS images, exhibited satisfactory performance in both binary and multiclass classification compared with dermatologists. It may be a valuable tool for general dermatologists and primary care providers. Funding: This work was supported in part by the National Natural Science Foundation of China and the Clinical Research Project of Shanghai Skin Disease Hospital.
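The AUC comparisons above can be reproduced without any ML framework: the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney U formulation). A minimal sketch, with entirely hypothetical labels and model scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs where the positive is
    scored higher, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for 4 malignant (1) and 4 benign (0) lesions
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # → 0.875
```

Comparing two such AUCs statistically (the P values quoted above) additionally requires a method such as DeLong's test, which this sketch does not cover.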

2.
EClinicalMedicine ; 60: 102027, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37333662

ABSTRACT

Background: Identifying patients with clinically significant prostate cancer (csPCa) before biopsy helps reduce unnecessary biopsies and improve patient prognosis. The diagnostic performance of traditional transrectal ultrasound (TRUS) for csPCa is relatively limited. This study aimed to develop a high-performance convolutional neural network (CNN) model (P-Net) based on a TRUS video of the entire prostate and investigate its efficacy in identifying csPCa. Methods: Between January 2021 and December 2022, this study prospectively evaluated 832 patients from four centres who underwent prostate biopsy and/or radical prostatectomy. All patients had a standardised TRUS video of the whole prostate. A two-dimensional CNN (2D P-Net) and three-dimensional CNN (3D P-Net) were constructed using the training cohort (559 patients) and tested on the internal validation cohort (140 patients) as well as on the external validation cohort (133 patients). The performance of 2D P-Net and 3D P-Net in predicting csPCa was assessed in terms of the area under the receiver operating characteristic curve (AUC), biopsy rate, and unnecessary biopsy rate, and compared with the TRUS 5-point Likert score system as well as the multiparametric magnetic resonance imaging (mp-MRI) Prostate Imaging Reporting and Data System (PI-RADS) v2.1. Decision curve analyses (DCAs) were used to determine the net benefits associated with their use. The study is registered at https://www.chictr.org.cn with the unique identifier ChiCTR2200064545. Findings: The diagnostic performance of 3D P-Net (AUC: 0.85-0.89) was superior to the TRUS 5-point Likert score system (AUC: 0.71-0.78, P = 0.003-0.040), and similar to the mp-MRI PI-RADS v2.1 score system interpreted by experienced radiologists (AUC: 0.83-0.86, P = 0.460-0.732) and 2D P-Net (AUC: 0.79-0.86, P = 0.066-0.678) in the internal and external validation cohorts.
The biopsy rate decreased from 40.3% (TRUS 5-point Likert score system) and 47.6% (mp-MRI PI-RADS v2.1 score system) to 35.5% (2D P-Net) and 34.0% (3D P-Net). The unnecessary biopsy rate decreased from 38.1% (TRUS 5-point Likert score system) and 35.2% (mp-MRI PI-RADS v2.1 score system) to 32.0% (2D P-Net) and 25.8% (3D P-Net). 3D P-Net yielded the highest net benefit according to the DCAs. Interpretation: 3D P-Net, based on grayscale TRUS video of the prostate, achieved satisfactory performance in identifying csPCa and potentially reducing unnecessary biopsies. More studies are warranted to determine how AI models can best be integrated into routine practice, along with randomized controlled trials to demonstrate the value of these models in real clinical applications. Funding: The National Natural Science Foundation of China (Grants 82202174 and 82202153), the Science and Technology Commission of Shanghai Municipality (Grants 18441905500 and 19DZ2251100), the Shanghai Municipal Health Commission (Grants 2019LJ21 and SHSLCZDZK03502), the Shanghai Science and Technology Innovation Action Plan (21Y11911200), the Fundamental Research Funds for the Central Universities (ZD-11-202151), and the Scientific Research and Development Fund of Zhongshan Hospital of Fudan University (Grant 2022ZSQD07).
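The decision curve analyses cited above compare models by net benefit across a range of threshold probabilities. The standard net-benefit formula (Vickers-Elkin) weighs true positives against false positives by the odds of the threshold; a minimal sketch, with hypothetical confusion counts not taken from the study:

```python
def net_benefit(tp, fp, n, pt):
    """Net benefit at threshold probability pt for a cohort of size n:
    true-positive rate minus the false-positive rate weighted by the
    odds pt / (1 - pt) implied by the decision threshold."""
    return tp / n - (fp / n) * (pt / (1 - pt))

# Hypothetical counts at a 20% threshold probability for two models
print(net_benefit(tp=80, fp=60, n=400, pt=0.20))  # → 0.1625
print(net_benefit(tp=75, fp=30, n=400, pt=0.20))  # → 0.16875
```

In this hypothetical example the second model detects slightly fewer cancers but triggers far fewer unnecessary biopsies, so it has the higher net benefit at that threshold; a full decision curve repeats this computation over a grid of thresholds.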

3.
J Vis Exp ; (194)2023 04 21.
Article in English | MEDLINE | ID: mdl-37154577

ABSTRACT

In recent years, the incidence of thyroid cancer has been increasing. Thyroid nodule detection is critical for both the detection and treatment of thyroid cancer. Convolutional neural networks (CNNs) have achieved good results in thyroid ultrasound image analysis tasks. However, due to the limited effective receptive field of convolutional layers, CNNs fail to capture long-range contextual dependencies, which are important for identifying thyroid nodules in ultrasound images. Transformer networks are effective in capturing long-range contextual information. Inspired by this, we propose a novel thyroid nodule detection method that combines the Swin Transformer backbone and Faster R-CNN. Specifically, an ultrasound image is first projected into a 1D sequence of embeddings, which are then fed into a hierarchical Swin Transformer. The Swin Transformer backbone extracts features at five different scales by utilizing shifted windows for the computation of self-attention. Subsequently, a feature pyramid network (FPN) is used to fuse the features from different scales. Finally, a detection head is used to predict bounding boxes and the corresponding confidence scores. Data collected from 2,680 patients were used to conduct the experiments, and the results showed that this method achieved the best mAP score of 44.8%, outperforming CNN-based baselines. In addition, the method achieved higher sensitivity (90.5%) than the competing methods. This indicates that context modeling in this model is effective for thyroid nodule detection.
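The mAP score reported above rests on intersection-over-union (IoU) matching: a predicted bounding box counts as a correct detection only if its overlap with a ground-truth box exceeds a threshold. A minimal sketch of the IoU computation, with hypothetical box coordinates in (x1, y1, x2, y2) format:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted nodule box vs. a ground-truth annotation (hypothetical coordinates)
pred = (10, 10, 50, 50)
gt = (20, 20, 60, 60)
print(iou(pred, gt))  # ≈ 0.391: below a 0.5 threshold this would not match
```

mAP then averages precision over recall levels (and, in COCO-style evaluation, over a range of IoU thresholds), with IoU deciding which predictions pair with which ground-truth boxes.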


Subjects
Thyroid Neoplasms , Thyroid Nodule , Humans , Thyroid Nodule/diagnostic imaging , Thyroid Neoplasms/diagnostic imaging , Ultrasonography , Electric Power Supplies , Image Processing, Computer-Assisted
4.
Behav Brain Res ; 448: 114456, 2023 06 25.
Article in English | MEDLINE | ID: mdl-37116662

ABSTRACT

Chronic social defeat has been found to be stressful and to affect many aspects of the brain and behaviors in males. However, relatively little is known about its effects on females. In the present study, we examined the effects of repeated social defeat on social approach and anxiety-like behaviors as well as on neuronal activation in the brain of sexually naïve female Mongolian gerbils (Meriones unguiculatus). Our data indicate that repeated social defeats over 20 days reduced social approach and social investigation, but increased risk assessment or vigilance toward an unfamiliar conspecific. Such social defeat experience also increased anxiety-like behavior and reduced locomotor activity. Using ΔFosB-immunoreactive (ΔFosB-ir) staining as a marker of neuronal activation in the brain, we found that social defeat experience significantly elevated the density of ΔFosB-ir neurons in several brain regions, including the prelimbic (PL) and infralimbic (IL) subnuclei of the prefrontal cortex (PFC), the CA1 subfield (CA1) of the hippocampus, the central subnuclei of the amygdala (CeA), and the paraventricular nucleus (PVN), dorsomedial nucleus (DMH), and ventrolateral subdivision of the ventromedial nucleus (VMHvl) of the hypothalamus. As these brain regions have been implicated in social behaviors and stress responses, our data suggest that these specific patterns of neuronal activation in the brain may relate to the altered social and anxiety-like behaviors following chronic social defeat in female Mongolian gerbils.


Subjects
Brain , Social Defeat , Male , Animals , Female , Gerbillinae , Brain/metabolism , Social Behavior , Neurons/metabolism , Stress, Psychological , Proto-Oncogene Proteins c-fos/metabolism
5.
Front Endocrinol (Lausanne) ; 13: 1018321, 2022.
Article in English | MEDLINE | ID: mdl-36237194

ABSTRACT

Background: The dynamic artificial intelligence (AI) ultrasound intelligent auxiliary diagnosis system (dynamic AI) is a joint application of AI technology and medical imaging data that can perform real-time, synchronous dynamic analysis of nodules. The aim of this study was to investigate the value of dynamic AI in differentiating benign and malignant thyroid nodules and its significance in guiding treatment strategies. Methods: The data of 607 patients with 1007 thyroid nodules who underwent surgical treatment were retrospectively reviewed and analyzed. Dynamic AI was used to differentiate benign and malignant nodules. The diagnostic efficacy of dynamic AI was evaluated by comparing the results of dynamic AI examination, preoperative fine needle aspiration cytology (FNAC), and postoperative pathology across nodules of different sizes and properties in patients of different sexes and ages. Results: The sensitivity, specificity, and accuracy of dynamic AI in the diagnosis of thyroid nodules were 92.21%, 83.20%, and 89.97%, respectively, which were highly consistent with the postoperative pathological results (kappa = 0.737, p < 0.001). There was no statistically significant difference in accuracy across patients of different ages and sexes or nodules of different sizes, demonstrating good stability. The accuracy of dynamic AI for malignant nodules (92.21%) was significantly higher than that for benign nodules (83.20%) (p < 0.001). The specificity and positive predictive value were significantly higher, and the misdiagnosis rate significantly lower, for dynamic AI than for preoperative ultrasound ACR TI-RADS (p < 0.001). The accuracy of dynamic AI for nodules with diameter ≤ 0.50 cm was significantly higher than that of preoperative ultrasound (p = 0.044). Compared with FNAC, the sensitivity (96.58%) and accuracy (94.06%) of dynamic AI were similar.
Conclusions: Dynamic AI examination has high diagnostic value for differentiating benign and malignant thyroid nodules and can effectively assist surgeons in formulating individualized diagnosis and treatment strategies for patients.
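The agreement statistics quoted above (sensitivity, specificity, accuracy, and Cohen's kappa against the pathological reference standard) all follow from a 2x2 confusion matrix. A minimal sketch with hypothetical counts, not the study's data:

```python
def diagnostics(tp, fn, fp, tn):
    """Sensitivity, specificity, accuracy, and Cohen's kappa
    from a 2x2 confusion matrix (malignant = positive class)."""
    n = tp + fn + fp + tn
    sens = tp / (tp + fn)          # fraction of malignant nodules detected
    spec = tn / (tn + fp)          # fraction of benign nodules correctly cleared
    acc = (tp + tn) / n
    # Chance agreement expected from the marginal totals
    p_e = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    kappa = (acc - p_e) / (1 - p_e)
    return sens, spec, acc, kappa

# Hypothetical counts: 90 TP, 10 FN, 15 FP, 85 TN
sens, spec, acc, kappa = diagnostics(90, 10, 15, 85)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} "
      f"accuracy={acc:.2%} kappa={kappa:.3f}")
```

Kappa corrects raw accuracy for the agreement expected by chance alone, which is why a kappa of 0.737 (as reported above) indicates substantial agreement even though accuracy alone would look higher.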


Subjects
Thyroid Nodule , Artificial Intelligence , Biopsy, Fine-Needle , Humans , Retrospective Studies , Thyroid Nodule/diagnostic imaging , Thyroid Nodule/surgery , Ultrasonography/methods