Results 1 - 5 of 5
1.
Otolaryngol Head Neck Surg; 169(4): 1083-1085, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36934457

ABSTRACT

Head and neck surgeons often have difficulty in relocating sites of positive margins due to the complex 3-dimensional (3D) anatomy of the head and neck. We introduce a new technique where resection specimens are 3D scanned with a smartphone, annotated in computer-assisted design software, and immediately visualized on augmented reality (AR) glasses. The 3D virtual specimen can be accurately superimposed onto surgical sites for orientation and sizing applications. During an operative workshop, a surgeon using AR glasses projected virtual, annotated specimen models back into the resection bed onto a cadaver within approximately 10 minutes. Colored annotations can correspond with pathologic annotations and guide the orientation of the virtual 3D specimen. The model was also overlaid onto a flap harvest site to aid in reconstructive planning. We present a new technique allowing interactive, sterile inspection of tissue specimens in AR that could facilitate communication among surgeons and pathologists and assist with reconstructive surgery.


Subjects
Augmented Reality; Surgery, Computer-Assisted; Humans; Software; Surgery, Computer-Assisted/methods; Image Processing, Computer-Assisted; Head; Imaging, Three-Dimensional
2.
Cell Syst; 13(7): 547-560.e3, 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35705097

ABSTRACT

Organoids recapitulate complex 3D organ structures and represent a unique opportunity to probe the principles of self-organization. While we can alter an organoid's morphology by manipulating the culture conditions, the morphology of an organoid often resembles that of its original organ, suggesting that organoid morphologies are governed by a set of tissue-specific constraints. Here, we establish a framework to identify constraints on an organoid's morphological features by quantifying them from microscopy images of organoids exposed to a range of perturbations. We apply this framework to Madin-Darby canine kidney cysts and show that they obey a number of constraints taking the form of scaling relationships or caps on certain parameters. For example, we found that the number, but not size, of cells increases with increasing cyst size. We also find that these constraints vary with cyst age and can be altered by varying the culture conditions. We observed similar sets of constraints in intestinal organoids. This quantitative framework for identifying constraints on organoid morphologies may inform future efforts to engineer organoids.
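The scaling relationships described above (e.g., cell number growing with cyst size) can be probed with a log-log regression. A minimal sketch in Python, using hypothetical measurements of cyst diameter and cell count (the function name and data are illustrative, not from the paper):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression in
    log-log space. Returns (a, b); b is the scaling exponent
    relating the two morphological features."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical measurements: cyst diameter (um) vs. cell count.
diameters = [20, 40, 80, 160]
cells = [50, 200, 800, 3200]  # count quadruples as diameter doubles
a, b = fit_power_law(diameters, cells)
print(round(b, 2))  # fitted scaling exponent
```

A fitted exponent near 2 would indicate that cell number scales with surface area rather than volume, one way a constraint of this kind could manifest.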


Subjects
Cysts; Organoids; Animals; Dogs; Phenotype
3.
EBioMedicine; 62: 103121, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33232868

ABSTRACT

BACKGROUND: To develop a deep learning model to classify primary bone tumors from preoperative radiographs and compare its performance with radiologists. METHODS: A total of 1356 patients (2899 images) with histologically confirmed primary bone tumors and preoperative radiographs were identified from five institutions' pathology databases. Radiologists manually cropped the radiographs to label the lesions. Binary discriminatory capacity (benign versus not-benign and malignant versus not-malignant) and three-way classification (benign versus intermediate versus malignant) performance of the model were evaluated. The generalizability of the model was investigated on data from an external test set. Final model performance was compared with interpretations from five radiologists of varying levels of experience using permutation tests. FINDINGS: For benign vs. not benign, the model achieved an area under the curve (AUC) of 0.894 and 0.877 on cross-validation and external testing, respectively. For malignant vs. not malignant, the model achieved an AUC of 0.907 and 0.916 on cross-validation and external testing, respectively. For three-way classification, the model achieved 72.1% accuracy vs. 74.6% and 72.1% for the two subspecialists on cross-validation (p = 0.03 and p = 0.52, respectively). On external testing, the model achieved 73.4% accuracy vs. 69.3%, 73.4%, 73.1%, 67.9%, and 63.4% for the two subspecialists and three junior radiologists (p = 0.14, p = 0.89, p = 0.93, p = 0.02, and p < 0.01 for radiologists 1-5, respectively). INTERPRETATION: Deep learning can classify primary bone tumors from conventional radiographs in a multi-institutional dataset with accuracy similar to subspecialists and better than junior radiologists. FUNDING: The project described was supported by the RSNA Research & Education Foundation, through grant number RSCH2004 to Harrison X. Bai.
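The AUC figures reported above can be computed directly from model scores via the Mann-Whitney U statistic, without plotting an ROC curve. A minimal pure-Python sketch, with hypothetical labels and scores (the data are illustrative, not from the study):

```python
def auc_from_scores(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly
    chosen negative one (ties count as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for a "malignant vs. not malignant" classifier.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
auc = auc_from_scores(labels, scores)
print(auc)
```

This rank-based formulation is equivalent to the area under the empirical ROC curve, which is why the Mann-Whitney test and AUC comparisons often appear together in studies like this one.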


Subjects
Bone Neoplasms/diagnosis; Deep Learning; Image Processing, Computer-Assisted/methods; Radiography; Adolescent; Adult; Child; Female; Humans; Image Processing, Computer-Assisted/standards; Male; Middle Aged; Neoplasm Grading; ROC Curve; Radiography/methods; Reproducibility of Results; Young Adult
4.
J Magn Reson Imaging; 52(5): 1542-1549, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32222054

ABSTRACT

Pretreatment determination of renal cell carcinoma aggressiveness may help guide clinical decision-making. PURPOSE: To evaluate the efficacy of a residual convolutional neural network using routine MRI in differentiating low-grade (grade I-II) from high-grade (grade III-IV) stage I and II renal cell carcinoma. STUDY TYPE: Retrospective. POPULATION: In all, 376 patients with 430 renal cell carcinoma lesions from 2008-2019 in a multicenter cohort were included. The 353 Fuhrman-graded renal cell carcinomas were divided into training, validation, and test sets with a 7:2:1 split. The 77 WHO/ISUP-graded renal cell carcinomas were used as a separate WHO/ISUP test set. FIELD STRENGTH/SEQUENCE: 1.5T and 3.0T/T2-weighted and T1 contrast-enhanced sequences. ASSESSMENT: The accuracy, sensitivity, and specificity of the final model were assessed. The receiver operating characteristic (ROC) curve and precision-recall curve were plotted to measure the performance of the binary classifier. A confusion matrix was drawn to show the true positives, true negatives, false positives, and false negatives of the model. STATISTICAL TESTS: The Mann-Whitney U-test for continuous data and the chi-square test or Fisher's exact test for categorical data were used to compare clinicopathologic characteristics between the low- and high-grade groups. The adjusted Wald method was used to calculate the 95% confidence intervals (CI) of accuracy, sensitivity, and specificity. RESULTS: The final deep-learning model achieved a test accuracy of 0.88 (95% CI: 0.73-0.96), sensitivity of 0.89 (95% CI: 0.74-0.96), and specificity of 0.88 (95% CI: 0.73-0.96) on the Fuhrman test set, and a test accuracy of 0.83 (95% CI: 0.73-0.90), sensitivity of 0.92 (95% CI: 0.84-0.97), and specificity of 0.78 (95% CI: 0.68-0.86) on the WHO/ISUP test set. DATA CONCLUSION: Deep learning can noninvasively predict the histological grade of stage I and II renal cell carcinoma from conventional MRI in a multi-institutional dataset with high accuracy. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY STAGE: 2.
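The adjusted Wald (Agresti-Coull) interval named in the statistics section adds z²/2 pseudo-successes and z² pseudo-trials before applying the usual Wald formula. A minimal sketch, with hypothetical counts chosen to mirror the reported 0.88 accuracy (not the study's actual sample sizes):

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted Wald (Agresti-Coull) confidence interval for a
    proportion: inflate the sample by z^2 trials and z^2/2
    successes, then apply the standard Wald formula."""
    n_adj = n + z * z
    p_adj = (successes + z * z / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# Hypothetical: 88 correct out of 100 test cases (accuracy 0.88).
lo, hi = adjusted_wald_ci(88, 100)
print(round(lo, 3), round(hi, 3))
```

The adjustment pulls the interval slightly toward 0.5, which gives better coverage than the plain Wald interval for proportions near 0 or 1, a common situation with high-accuracy classifiers.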


Subjects
Carcinoma, Renal Cell; Deep Learning; Kidney Neoplasms; Carcinoma, Renal Cell/diagnostic imaging; Cell Differentiation; Humans; Kidney Neoplasms/diagnostic imaging; Magnetic Resonance Imaging; Retrospective Studies
5.
Clin Cancer Res; 26(8): 1944-1952, 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-31937619

ABSTRACT

PURPOSE: With the increasing incidence of renal masses, it is important to differentiate benign renal masses from malignant tumors before treatment. We aimed to develop a deep learning model that distinguishes benign renal tumors from renal cell carcinoma (RCC) by applying a residual convolutional neural network (ResNet) to routine MR imaging. EXPERIMENTAL DESIGN: Preoperative MR images (T2-weighted and T1-postcontrast sequences) of 1,162 renal lesions definitively diagnosed on pathology or imaging in a multicenter cohort were divided into training, validation, and test sets (70:20:10 split). An ensemble model based on ResNet was built combining clinical variables and T1C and T2WI MR images using a bagging classifier to predict renal tumor pathology. Final model performance was compared with expert interpretation and the most optimized radiomics model. RESULTS: Among the 1,162 renal lesions, 655 were malignant and 507 were benign. Compared with a baseline zero rule algorithm, the ensemble deep learning model had a statistically significant higher test accuracy (0.70 vs. 0.56, P = 0.004). Compared with all experts averaged, the ensemble deep learning model had higher test accuracy (0.70 vs. 0.60, P = 0.053), sensitivity (0.92 vs. 0.80, P = 0.017), and specificity (0.41 vs. 0.35, P = 0.450). Compared with the radiomics model, the ensemble deep learning model had higher test accuracy (0.70 vs. 0.62, P = 0.081), sensitivity (0.92 vs. 0.79, P = 0.012), and specificity (0.41 vs. 0.39, P = 0.770). CONCLUSIONS: Deep learning can noninvasively distinguish benign renal tumors from RCC using conventional MR imaging in a multi-institutional dataset with good accuracy, sensitivity, and specificity comparable with experts and radiomics.
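At inference time, an ensemble like the one described combines its members' probability outputs. A minimal sketch of that soft-voting step, with stand-in constant submodels for the T1C-image, T2WI-image, and clinical-variable branches (the real bagging classifier also resamples the training data, which is omitted here, and these names are illustrative):

```python
def ensemble_predict(models, x):
    """Average the probability outputs of the ensemble members
    (soft voting) and threshold at 0.5 for the class label."""
    p = sum(m(x) for m in models) / len(models)
    return (1 if p >= 0.5 else 0), p

# Stand-in submodels returning a fixed malignancy probability;
# a trained model would compute these from the lesion input x.
t1c_model  = lambda x: 0.8
t2_model   = lambda x: 0.6
clin_model = lambda x: 0.4

label, prob = ensemble_predict([t1c_model, t2_model, clin_model], None)
print(label, prob)
```

Averaging over branches trained on different views of the data is what lets the ensemble outperform any single branch when their errors are only weakly correlated.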


Subjects
Algorithms; Carcinoma, Renal Cell/diagnosis; Deep Learning; Kidney Neoplasms/diagnosis; Magnetic Resonance Imaging/methods; Adolescent; Adult; Aged; Aged, 80 and over; Carcinoma, Renal Cell/classification; Child; Child, Preschool; Diagnosis, Differential; Female; Humans; Kidney Neoplasms/classification; Male; Middle Aged; Neural Networks, Computer; Predictive Value of Tests; Retrospective Studies; Young Adult