ABSTRACT
OBJECTIVES: To develop and validate a fully automated AI system to extract standard planes, assess early gestational weeks, and compare the performance of the developed system to sonographers. METHODS: In this three-center retrospective study, 214 consecutive pregnant women who underwent transvaginal ultrasound examinations between January and December 2018 were selected. Their ultrasound videos were automatically split into 38,941 frames using a dedicated program. First, an optimal deep-learning classifier was selected to extract the standard planes containing key anatomical structures from the ultrasound frames. Second, an optimal segmentation model was selected to outline the gestational sacs. Third, a novel biometry method was used to measure the gestational sacs, select the largest one in the same video, and assess the gestational week automatically. Finally, an independent test set was used to compare the performance of the system with that of sonographers. The outcomes were analyzed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and mean Dice similarity coefficient (mDice). RESULTS: The standard planes were extracted with an AUC of 0.975, a sensitivity of 0.961, and a specificity of 0.979. The gestational sac contours were segmented with an mDice of 0.974 (error less than 2 pixels). The comparison showed that the tool's relative error in assessing gestational weeks was 12.44% and 6.92% lower than that of the intermediate and senior sonographers, respectively, and the tool was faster (0.17 min vs. 16.6 and 12.63 min). CONCLUSIONS: This proposed end-to-end tool allows automatic assessment of gestational weeks in early pregnancy and may reduce manual analysis time and measurement errors. CLINICAL RELEVANCE STATEMENT: The fully automated tool achieved high accuracy, showing its potential to optimize the increasingly scarce resources of sonographers. Explainable predictions can strengthen sonographers' confidence in assessing gestational weeks and provide a reliable basis for managing early pregnancy cases. KEY POINTS: • The end-to-end pipeline enabled automatic identification of the standard plane containing the gestational sac in an ultrasound video, segmentation of the sac contour, automatic multi-angle measurements, and selection of the sac with the largest mean internal diameter to calculate the early gestational week. • This fully automated tool, combining deep learning and intelligent biometry, may assist the sonographer in assessing the early gestational week, increasing accuracy and reducing analysis time, thereby reducing observer dependence.
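The frame-selection step described above (keep only standard-plane frames, then pick the sac with the largest mean internal diameter) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `FrameResult` fields, the 0.5 probability threshold, and the averaging of two orthogonal diameters are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FrameResult:
    """Per-frame output of the (hypothetical) classifier and segmentation stages."""
    frame_index: int
    standard_plane_prob: float   # classifier probability that the frame is a standard plane
    long_diameter_mm: float      # longest internal diameter of the segmented sac
    short_diameter_mm: float     # diameter orthogonal to the longest one

def select_reference_sac(frames: List[FrameResult],
                         prob_threshold: float = 0.5) -> Optional[FrameResult]:
    """Keep standard-plane frames, then return the frame whose gestational sac
    has the largest mean internal diameter (assumed selection rule)."""
    standard = [f for f in frames if f.standard_plane_prob >= prob_threshold]
    if not standard:
        return None
    return max(standard,
               key=lambda f: (f.long_diameter_mm + f.short_diameter_mm) / 2.0)
```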
Subjects
Deep Learning, Pregnancy, Female, Humans, Gestational Age, Ultrasonography, Prenatal, Retrospective Studies, Biometry
ABSTRACT
Human telomeres are maintained by the shelterin protein complex, in which TRF1 and TRF2 bind directly to duplex telomeric DNA. How these proteins find telomeric sequences among a genome of billions of base pairs, and how they find protein partners to form the shelterin complex, remains uncertain. Using single-molecule fluorescence imaging of quantum dot-labeled TRF1 and TRF2, we study how these proteins locate TTAGGG repeats on DNA tightropes. By virtue of its basic domain, TRF2 performs an extensive 1D search on nontelomeric DNA, whereas TRF1's 1D search is limited. Unlike the stable and static associations observed for other proteins at specific binding sites, TRF proteins possess reduced binding stability marked by transient binding (~9-17 s) and slow 1D diffusion on specific telomeric regions. These slow diffusion constants yield activation energy barriers to sliding ~2.8-3.6 k_BT greater than those for nontelomeric DNA. We propose that the TRF proteins use 1D sliding to find protein partners and assemble the shelterin complex, which in turn stabilizes the interaction with specific telomeric DNA. This 'tag-team proofreading' represents a more general mechanism to ensure that a specific set of proteins interact with each other on long, repetitive, specific DNA sequences without requiring external energy sources.
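The 2.8-3.6 k_BT figure above comes from comparing diffusion constants on telomeric versus nontelomeric DNA. A minimal Arrhenius-type estimate of such a barrier difference, assuming a common pre-exponential factor for both cases (an assumption; the original analysis may use a more detailed model of sliding friction), is:

```latex
% Arrhenius-type estimate of the extra sliding barrier on telomeric DNA,
% assuming a common pre-exponential factor D_0 (not necessarily the paper's exact model)
\[
D = D_0 \, e^{-E_a / k_B T}
\quad\Longrightarrow\quad
\Delta E = E_{\mathrm{telo}} - E_{\mathrm{nontelo}}
         = k_B T \,\ln\!\left(\frac{D_{\mathrm{nontelo}}}{D_{\mathrm{telo}}}\right)
\]
```

Under this assumed relation, the reported 2.8-3.6 k_BT excess would correspond to roughly a 16- to 37-fold reduction in the 1D diffusion constant on telomeric repeats (since e^2.8 ≈ 16 and e^3.6 ≈ 37).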
Subjects
DNA/metabolism, Telomere/metabolism, Telomeric Repeat Binding Protein 1/metabolism, Telomeric Repeat Binding Protein 2/metabolism, DNA/chemistry, Diffusion, Protein Binding, Protein Structure, Tertiary, Repetitive Sequences, Nucleic Acid, Telomere/chemistry, Telomeric Repeat Binding Protein 2/chemistry
ABSTRACT
Background: Early gestational age (GA) assessment using ultrasound is a routine and frequent examination performed in hospitals, whereby clinicians manually measure the size of the gestational sac on ultrasound and calculate GA. However, the error is often substantial and the process is laborious. To overcome these challenges, we propose a new system to assess early GA using a new end-to-end computer vision system and a new biometric measurement method based on ultrasound video. Methods: In this retrospective study, a new system was developed. B-mode ultrasound videos were first decomposed into two-dimensional (2D) images, and the contours of the gestational sac were extracted and drawn. The maximum length and short diameter of the gestational sac were then automatically measured, and GA was calculated using the Hellman formula. Finally, through human-machine comparison, the clinicians' assessment errors were analyzed with SPSS 26. Results: In this study, 29,829 2D images from 191 B-mode ultrasound videos were evaluated using the new system. Clinicians usually require 15-20 min to complete an assessment of GA, whereas with the new system assessments can be completed in approximately 30 s. Moreover, the human-machine comparison showed that the system helped intermediate-level clinicians improve their relative diagnostic error by 13.45%, with an absolute error of 7 days. In addition, the new system was used to identify other lesions in the uterus and measure their size as a "sanity check". Conclusions: The proposed new system is a practical, reproducible, and reliable approach for assessing early GA.
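The Methods above convert the automatically measured sac diameters into GA with the Hellman formula. A minimal sketch of that conversion is shown below; averaging the two measured diameters is an assumption made for illustration, and the regression coefficients are the values commonly quoted for Hellman's 1969 relation, not taken from this paper, so they should be verified against the original reference.

```python
def mean_sac_diameter_cm(long_axis_cm: float, short_axis_cm: float) -> float:
    """Mean of the automatically measured long and short internal diameters.
    (Averaging the two diameters is an assumption for this illustration.)"""
    return (long_axis_cm + short_axis_cm) / 2.0

def hellman_ga_weeks(msd_cm: float) -> float:
    """Gestational age in weeks from the mean sac diameter, using the coefficients
    commonly quoted for Hellman's 1969 regression:
    GA [weeks] = (MSD [cm] + 2.543) / 0.702 -- verify before any clinical use."""
    return (msd_cm + 2.543) / 0.702

# Example: a sac measuring 2.1 cm x 1.7 cm gives roughly 6.3 weeks.
print(round(hellman_ga_weeks(mean_sac_diameter_cm(2.1, 1.7)), 1))
```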
ABSTRACT
PURPOSE: To compare the efficacy of a deep-learning model in classifying the etiology of pneumonia on pediatric chest X-rays (CXRs) with that of human readers. METHODS: We built a clinical pediatric CXR dataset containing 4035 patients and used it to develop a deep-learning model based on ResNet-50 for differentiating viral from bacterial pneumonia. The dataset was split into training (80%) and validation (20%). Model performance was assessed with the receiver operating characteristic curve and the area under the curve (AUC) on a first test set of 400 CXRs collected from different studies. For a second test set composed of 100 independent examinations obtained from daily clinical practice at our institution, the kappa coefficient was used to measure pairwise interrater agreement among the reference standard, all reviewers, and the model. Gradient-weighted class activation mapping was used to visualize the image regions contributing most to the model's predictions. RESULTS: On the first test set, the best-performing classifier achieved an AUC of 0.919 (p < .001), with a sensitivity of 79.0% and a specificity of 88.9%. On the second test set, the classifier achieved performance similar to that of human experts, with a sensitivity of 74.3%, a specificity of 90.8%, and positive and negative likelihood ratios of 8.1 and 0.3, respectively. Contingency tables and kappa values further showed that the expert reviewers and the model reached substantial agreement in differentiating the etiology of pediatric pneumonia. CONCLUSIONS: This study demonstrated that the model performed similarly to human reviewers and recognized the regions of pathology on CXRs.
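The likelihood ratios reported above follow directly from the sensitivity and specificity; a minimal sketch of the calculation (the function name is ours):

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Second test set: sensitivity 74.3%, specificity 90.8%
lr_pos, lr_neg = likelihood_ratios(0.743, 0.908)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.1f}")  # ~8.1 and ~0.3, matching the abstract
```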
Subjects
Deep Learning, Pneumonia, Area Under Curve, Child, Humans, Pneumonia/diagnostic imaging, Pneumonia/etiology, Radiography, X-Rays
ABSTRACT
PURPOSE: To evaluate the efficacy of a deep-learning model for segmenting the lung and thorax regions in pediatric chest X-rays (CXRs), and to assess whether the diagnosis of bacterial versus viral pneumonia could be improved after lung segmentation. MATERIALS AND METHODS: A clinical pediatric CXR dataset including 1351 patients was used to develop a deep-learning model for pulmonary-thoracic segmentation. Model performance was evaluated by Jaccard's similarity coefficient (JSC) and Dice's coefficient (DC). Two adult CXR sets were used to assess the model's generalizability. Based on the pulmonary-thoracic ratio, Pearson's correlation coefficient and a Bland-Altman plot were generated to quantify the correlation and agreement between manual and automatic segmentations. Receiver operating characteristic curves and areas under the curve (AUCs) were used to compare pneumonia classification performance on lung-extracted images with that on the original images. RESULTS: The model achieved JSCs of 0.910 and 0.950 and DCs of 0.948 and 0.974 for lung and thorax segmentation, respectively. Manual and automatic pulmonary-thoracic ratios were strongly correlated (Pearson's r = 0.96, P < .0001). In the Bland-Altman plot, the mean difference was 0.0025 with a 95% confidence interval of (-0.0451, 0.0501). On the two adult CXR sets, the JSCs were 0.903 and 0.888, respectively, while the DCs were 0.948 and 0.937, respectively. After lung segmentation, the AUC of a classifier for identifying bacterial or viral pneumonia increased from 0.815 to 0.879. CONCLUSION: We built a pediatric CXR dataset and developed a deep-learning model for accurate pulmonary-thoracic segmentation. Lung segmentation can notably improve the diagnosis of bacterial or viral pneumonia.
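The JSC and DC values above compare predicted masks against manual reference masks. A minimal sketch of how the two overlap metrics are computed from binary masks (numpy arrays are an assumed representation; this is not the authors' code):

```python
import numpy as np

def jaccard_and_dice(pred: np.ndarray, target: np.ndarray) -> tuple[float, float]:
    """Jaccard similarity coefficient (JSC) and Dice coefficient (DC)
    between two binary segmentation masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    jsc = intersection / union if union else 1.0
    dc = 2 * intersection / total if total else 1.0
    return float(jsc), float(dc)

# Toy example: two overlapping 5x5 square masks on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
print(jaccard_and_dice(a, b))  # JSC = 16/34 ≈ 0.47, DC = 32/50 = 0.64
```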