Results 1 - 14 of 14
1.
Am J Pathol ; 193(3): 332-340, 2023 03.
Article in English | MEDLINE | ID: mdl-36563748

ABSTRACT

Colorectal cancer (CRC) is one of the most common types of cancer among men and women. The grading of dysplasia and the detection of adenocarcinoma are important clinical tasks in the diagnosis of CRC and shape patients' follow-up plans. This study evaluated the feasibility of deep learning models for the classification of colorectal lesions into four classes: benign, low-grade dysplasia, high-grade dysplasia, and adenocarcinoma. To this end, a deep neural network was developed on a training set of 655 whole-slide images of digitized colorectal resection slides from a tertiary medical institution, and the network was evaluated on an internal test set of 234 slides, as well as on an external test set of 606 adenocarcinoma slides from The Cancer Genome Atlas database. The model achieved an overall accuracy, sensitivity, and specificity of 95.5%, 91.0%, and 97.1%, respectively, on the internal test set, and an accuracy and sensitivity of 98.5% for the adenocarcinoma detection task on the external test set. These results suggest that such deep learning models can potentially assist pathologists in grading colorectal dysplasia, detecting adenocarcinoma, prescreening, and prioritizing the review of suspicious cases to improve the turnaround time for patients at high risk of CRC. Furthermore, the high sensitivity on the external test set suggests the model's generalizability in detecting colorectal adenocarcinoma on whole-slide images across different institutions.
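As a rough illustration of the whole-slide classification approach described above, the sketch below aggregates patch-level CNN predictions into a single slide-level label. The backbone, class names, and the 5% severity threshold are illustrative assumptions rather than the paper's exact pipeline.

```python
# Minimal sketch (assumptions: ResNet-18 backbone, 224x224 patches, a simple
# severity-aware aggregation rule); not the published model or its weights.
import torch
import torchvision.models as models

CLASSES = ["benign", "low_grade_dysplasia", "high_grade_dysplasia", "adenocarcinoma"]

model = models.resnet18(weights=None, num_classes=len(CLASSES))
model.eval()

def classify_slide(patches: torch.Tensor) -> str:
    """patches: (N, 3, 224, 224) tensor of tissue patches from one whole-slide image."""
    with torch.no_grad():
        probs = torch.softmax(model(patches), dim=1)         # (N, 4) patch probabilities
    patch_labels = probs.argmax(dim=1)                       # predicted class per patch
    counts = torch.bincount(patch_labels, minlength=len(CLASSES))
    # Report the most severe class covering a non-trivial fraction of patches
    # (illustrative heuristic); default to benign otherwise.
    for cls_idx in reversed(range(len(CLASSES))):
        if counts[cls_idx].item() / len(patch_labels) >= 0.05:
            return CLASSES[cls_idx]
    return CLASSES[0]
```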


Subjects
Adenocarcinoma; Colorectal Neoplasms; Deep Learning; Male; Humans; Female; Neural Networks, Computer; Adenocarcinoma/diagnosis; Adenocarcinoma/pathology; Pathologists; Hyperplasia; Colorectal Neoplasms/diagnosis
2.
Transl Oncol ; 24: 101494, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35905641

ABSTRACT

Lung cancer is a leading cause of death in both men and women globally. The recent development of tumor molecular profiling has opened opportunities for targeted therapies for lung adenocarcinoma (LUAD) patients. However, the lack of access to molecular profiling, or the cost and turnaround time associated with it, could hinder oncologists' willingness to order frequent molecular tests, limiting potential benefits from precision medicine. In this study, we developed a weakly supervised deep learning model for predicting somatic mutations of LUAD patients based on formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs), using LUAD subtype-related histological features and recent advances in computer vision. Our study was performed on a total of 747 hematoxylin and eosin (H&E)-stained FFPE LUAD WSIs and the genetic mutation data of 232 patients who were treated at Dartmouth-Hitchcock Medical Center (DHMC). We developed convolutional neural network-based models to analyze whole slides and predict five major genetic mutations: BRAF, EGFR, KRAS, STK11, and TP53. We additionally used 111 cases from the LUAD dataset of the CPTAC-3 study for external validation. Our model achieved an AUROC of 0.799 (95% CI: 0.686-0.904) and 0.686 (95% CI: 0.620-0.752) for predicting EGFR genetic mutations on the DHMC and CPTAC-3 test sets, respectively. Predicting TP53 genetic mutations also showed promising outcomes. Our results demonstrated that H&E-stained FFPE LUAD whole slides could be used to predict oncogene mutations, such as EGFR, indicating that somatic mutations can present subtle morphological characteristics in histology slides that deep learning-based feature extractors can learn.
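Since the study reports AUROC with 95% confidence intervals for each gene, the sketch below shows one common way such figures can be computed for a binary mutation predictor (e.g., EGFR mutated vs. wild-type) using a percentile bootstrap. The arrays and bootstrap settings are placeholders, not the study's data or its exact statistical procedure.

```python
# Hedged sketch: slide-/patient-level AUROC with a bootstrap 95% CI.
# y_true and y_score are hypothetical inputs.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true: np.ndarray, y_score: np.ndarray, n_boot: int = 2000, seed: int = 0):
    rng = np.random.default_rng(seed)
    point_estimate = roc_auc_score(y_true, y_score)
    boot_aucs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:          # skip resamples with a single class
            continue
        boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lower, upper = np.percentile(boot_aucs, [2.5, 97.5])
    return point_estimate, (lower, upper)

# Example with made-up labels/scores:
# auc, ci = auroc_with_ci(np.array([0, 1, 0, 1, 1]), np.array([0.2, 0.7, 0.4, 0.9, 0.6]))
```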

3.
JAMA Netw Open ; 4(11): e2135271, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34792588

ABSTRACT

Importance: Colorectal polyps are common, and their histopathologic classification is used in the planning of follow-up surveillance. Substantial variation has been observed in pathologists' classification of colorectal polyps, and improved assessment by pathologists may be associated with reduced subsequent underuse and overuse of colonoscopy. Objective: To compare standard microscopic assessment with an artificial intelligence (AI)-augmented digital system that annotates regions of interest within digitized polyp tissue and predicts polyp type using a deep learning model to assist pathologists in colorectal polyp classification. Design, Setting, and Participants: In this diagnostic study conducted at a tertiary academic medical center and a community hospital in New Hampshire, 100 slides with colorectal polyp samples were read by 15 pathologists using a microscope and an AI-augmented digital system, with a washout period of at least 12 weeks between use of each modality. The study was conducted from February 10 to July 10, 2020. Main Outcomes and Measures: Accuracy and time of evaluation were used to compare pathologists' performance when a microscope was used with their performance when the AI-augmented digital system was used. Outcomes were compared using paired t tests and mixed-effects models. Results: In assessments of 100 slides with colorectal polyp specimens, use of the AI-augmented digital system significantly improved pathologists' classification accuracy compared with microscopic assessment from 73.9% (95% CI, 71.7%-76.2%) to 80.8% (95% CI, 78.8%-82.8%) (P < .001). The overall difference in the evaluation time per slide between the digital system (mean, 21.7 seconds; 95% CI, 20.8-22.7 seconds) and microscopic examination (mean, 13.0 seconds; 95% CI, 12.4-13.5 seconds) was -8.8 seconds (95% CI, -9.8 to -7.7 seconds), but this difference decreased as pathologists became more familiar and experienced with the digital system; the difference between the time of evaluation on the last set of 20 slides for all pathologists when using the microscope and the digital system was 4.8 seconds (95% CI, 3.0-6.5 seconds). Conclusions and Relevance: In this diagnostic study, an AI-augmented digital system significantly improved the accuracy of pathologic interpretation of colorectal polyps compared with microscopic assessment. If applied broadly to clinical practice, this tool may be associated with decreases in subsequent overuse and underuse of colonoscopy and thus with improved patient outcomes and reduced health care costs.
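The outcome comparison above relies on paired t tests across pathologists (alongside mixed-effects models); a minimal sketch of the paired comparison, with placeholder accuracy values, might look like this:

```python
# Hedged sketch: paired t test comparing each pathologist's accuracy with the
# microscope vs. with the AI-augmented digital system. Values are placeholders.
import numpy as np
from scipy import stats

acc_microscope = np.array([0.72, 0.75, 0.70, 0.74, 0.73])   # hypothetical per-pathologist accuracies
acc_ai_digital = np.array([0.80, 0.82, 0.79, 0.81, 0.80])   # same pathologists, second modality

t_stat, p_value = stats.ttest_rel(acc_ai_digital, acc_microscope)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```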


Subjects
Artificial Intelligence; Colonic Polyps/classification; Colonic Polyps/diagnostic imaging; Colonic Polyps/diagnosis; Colorectal Neoplasms/classification; Colorectal Neoplasms/diagnosis; Microscopy; Colonic Polyps/pathology; Data Accuracy; Diagnostic Tests, Routine/methods; Humans; Image Interpretation, Computer-Assisted/methods; New Hampshire
4.
Sci Rep ; 11(1): 7080, 2021 03 29.
Article in English | MEDLINE | ID: mdl-33782535

ABSTRACT

Renal cell carcinoma (RCC) is the most common renal cancer in adults. The histopathologic classification of RCC is essential for the diagnosis, prognosis, and management of patients. Recognition and classification of the complex histologic patterns of RCC on biopsy and surgical resection slides under a microscope remains a heavily specialized, error-prone, and time-consuming task for pathologists. In this study, we developed a deep neural network model that can accurately classify digitized surgical resection slides and biopsy slides into five related classes: clear cell RCC, papillary RCC, chromophobe RCC, renal oncocytoma, and normal. In addition to the whole-slide classification pipeline, we visualized the identified indicative regions and features on slides by reprocessing patch-level classification results to ensure the explainability of our diagnostic model. We evaluated our model on independent test sets of 78 surgical resection whole slides and 79 biopsy slides from our tertiary medical institution, and 917 surgical resection slides from The Cancer Genome Atlas (TCGA) database. The average area under the curve (AUC) of our classifier on the internal resection slides, internal biopsy slides, and external TCGA slides was 0.98 (95% confidence interval (CI): 0.97-1.00), 0.98 (95% CI: 0.96-1.00), and 0.97 (95% CI: 0.96-0.98), respectively. Our results suggest the high generalizability of our approach across different data sources and specimen types. More importantly, our model has the potential to assist pathologists by (1) automatically pre-screening slides to reduce false-negative cases, (2) highlighting regions of importance on digitized slides to accelerate diagnosis, and (3) providing objective and accurate diagnoses as a second opinion.
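To illustrate the kind of region highlighting described above, the following sketch maps patch-level probabilities back onto the slide grid as a heatmap. The grid layout, colormap, and class index are assumptions for illustration only.

```python
# Hedged sketch: render patch-level class probabilities as a slide heatmap so a
# reviewer can see which regions drove the prediction. Inputs are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

def probability_heatmap(patch_probs: np.ndarray, grid_shape: tuple, class_idx: int,
                        out_path: str = "heatmap.png") -> None:
    """patch_probs: (N, C) probabilities for N patches laid out row-major on the slide;
    grid_shape: (rows, cols) with rows * cols == N."""
    heat = patch_probs[:, class_idx].reshape(grid_shape)
    plt.imshow(heat, cmap="inferno", vmin=0.0, vmax=1.0)
    plt.colorbar(label="patch probability")
    plt.title("Indicative regions (illustrative)")
    plt.savefig(out_path, dpi=150)
    plt.close()

# e.g. probability_heatmap(probs, grid_shape=(40, 60), class_idx=0)
```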


Subjects
Carcinoma, Renal Cell/pathology; Kidney Neoplasms/pathology; Neural Networks, Computer; Biopsy; Carcinoma, Renal Cell/surgery; Humans; Kidney Neoplasms/surgery
5.
J Biomed Inform ; 111: 103581, 2020 11.
Article in English | MEDLINE | ID: mdl-33010425

ABSTRACT

OBJECTIVE: Currently, a major limitation for natural language processing (NLP) analyses in clinical applications is that concepts are not effectively referenced in various forms across different texts. This paper introduces Multi-Ontology Refined Embeddings (MORE), a novel hybrid framework that incorporates domain knowledge from multiple ontologies into a distributional semantic model learned from a corpus of clinical text. MATERIALS AND METHODS: We use the RadCore and MIMIC-III free-text datasets for the corpus-based component of MORE. For the ontology-based part, we use the Medical Subject Headings (MeSH) ontology and three state-of-the-art ontology-based similarity measures. In our approach, we propose a new learning objective, modified from the sigmoid cross-entropy objective function. RESULTS AND DISCUSSION: We used two established datasets of semantic similarities among biomedical concept pairs to evaluate the quality of the generated word embeddings. On the first dataset, with 29 concept pairs and similarity scores established by physicians and medical coders, MORE's similarity scores have the highest combined correlation (0.633), which is 5.0% higher than that of the baseline model and 12.4% higher than that of the best ontology-based similarity measure. On the second dataset, with 449 concept pairs and similarity ratings averaged over four medical residents, MORE's similarity scores have a correlation of 0.481, which outperforms the skip-gram model by 8.1% and the best ontology measure by 6.9%. Furthermore, MORE outperforms three pre-trained transformer-based word embedding models (i.e., BERT, ClinicalBERT, and BioBERT) on both datasets. CONCLUSION: MORE incorporates knowledge from several biomedical ontologies into an existing corpus-based distributional semantics model, improving both the accuracy of the learned word embeddings and the extensibility of the model to a broader range of biomedical concepts. MORE allows for more accurate clustering of concepts across a wide range of applications, such as analyzing patient health records to identify subjects with similar pathologies, or integrating heterogeneous clinical data to improve interoperability between hospitals.
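The evaluation above comes down to correlating similarities between concept embeddings with expert-assigned similarity ratings; a minimal sketch of that step (with hypothetical vectors and ratings, not the MORE model itself) is shown below.

```python
# Hedged sketch: correlate embedding-based cosine similarities with expert ratings.
import numpy as np
from scipy.stats import pearsonr

def cosine_similarity(vec_a: np.ndarray, vec_b: np.ndarray) -> float:
    return float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))

def correlation_with_experts(concept_pairs, expert_scores):
    """concept_pairs: list of (vector_a, vector_b); expert_scores: human similarity ratings."""
    model_scores = [cosine_similarity(a, b) for a, b in concept_pairs]
    r, p = pearsonr(model_scores, expert_scores)
    return r, p
```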


Subjects
Biological Ontologies; Natural Language Processing; Semantics; Cluster Analysis; Humans; Medical Subject Headings
6.
Neuroimage Clin ; 27: 102276, 2020.
Article in English | MEDLINE | ID: mdl-32512401

ABSTRACT

In this paper, we demonstrate the feasibility and performance of deep residual neural networks for volumetric segmentation of irreversibly damaged brain tissue lesions on T1-weighted MRI scans of chronic stroke patients. A total of 239 T1-weighted MRI scans of chronic ischemic stroke patients from a public dataset were retrospectively analyzed by 3D deep convolutional segmentation models with residual learning, using a novel zoom-in&out strategy. The Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and Hausdorff distance (HD) of the identified lesions were measured using manual tracing of lesions as the reference standard. Bootstrapping was employed for all metrics to estimate 95% confidence intervals. The models were assessed on a test set of 31 scans. The average DSC was 0.64 (0.51-0.76), with a median of 0.78. ASSD and HD were 3.6 mm (1.7-6.2 mm) and 20.4 mm (10.0-33.3 mm), respectively. The latest deep learning architectures and techniques were applied to 3D segmentation of MRI scans and demonstrated effectiveness for volumetric segmentation of chronic ischemic stroke lesions.
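The primary metric above is the Dice similarity coefficient between the predicted lesion mask and the manual tracing; a minimal sketch of that computation on binary masks follows (the mask arrays are placeholders).

```python
# Hedged sketch: Dice similarity coefficient for two binary 3D lesion masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """pred, truth: binary masks of identical shape (1 = lesion voxel)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))
```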


Subjects
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Neural Networks, Computer; Stroke/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Retrospective Studies; Stroke/diagnosis; Tomography, X-Ray Computed
7.
JAMA Netw Open ; 3(4): e203398, 2020 04 01.
Article in English | MEDLINE | ID: mdl-32324237

ABSTRACT

Importance: Histologic classification of colorectal polyps plays a critical role in screening for colorectal cancer and care of affected patients. An accurate and automated algorithm for the classification of colorectal polyps on digitized histopathologic slides could benefit practitioners and patients. Objective: To evaluate the performance and generalizability of a deep neural network for colorectal polyp classification on histopathologic slide images using a multi-institutional data set. Design, Setting, and Participants: This prognostic study used histopathologic slides collected from January 1, 2016, to June 30, 2016, from Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, with 326 slides used for training, 157 slides for an internal data set, and 25 for a validation set. For the external data set, 238 slides from 179 distinct patients were obtained from 24 institutions across 13 US states. Data analysis was performed from April 9 to November 23, 2019. Main Outcomes and Measures: Accuracy, sensitivity, and specificity of the model in classifying 4 major colorectal polyp types: tubular adenoma, tubulovillous or villous adenoma, hyperplastic polyp, and sessile serrated adenoma. Performance was compared with that of local pathologists at the point of care, identified from the corresponding pathology laboratories. Results: For the internal evaluation on the 157 slides with ground truth labels from 5 pathologists, the deep neural network had a mean accuracy of 93.5% (95% CI, 89.6%-97.4%) compared with local pathologists' accuracy of 91.4% (95% CI, 87.0%-95.8%). On the external test set of 238 slides with ground truth labels from 5 pathologists, the deep neural network achieved an accuracy of 87.0% (95% CI, 82.7%-91.3%), which was comparable with local pathologists' accuracy of 86.6% (95% CI, 82.3%-90.9%). Conclusions and Relevance: The findings suggest that this model may assist pathologists by improving the diagnostic efficiency, reproducibility, and accuracy of colorectal cancer screenings.
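Because the study reports accuracy, sensitivity, and specificity across 4 polyp types, the sketch below derives per-class sensitivity and specificity from a multi-class confusion matrix in a one-vs-rest fashion; the label names and inputs are placeholders.

```python
# Hedged sketch: per-class sensitivity/specificity from a multi-class confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

POLYP_TYPES = ["tubular", "tubulovillous_villous", "hyperplastic", "sessile_serrated"]

def per_class_sens_spec(y_true, y_pred):
    """y_true, y_pred: sequences of labels drawn from POLYP_TYPES."""
    cm = confusion_matrix(y_true, y_pred, labels=POLYP_TYPES)
    results = {}
    for i, cls in enumerate(POLYP_TYPES):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        results[cls] = {
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        }
    return results
```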


Subjects
Colonic Polyps/diagnosis; Colonic Polyps/pathology; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Algorithms; Deep Learning; Histocytochemistry; Humans; Sensitivity and Specificity
8.
JAMA Netw Open ; 2(11): e1914645, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31693124

ABSTRACT

Importance: Deep learning-based methods, such as the sliding window approach for cropped-image classification and heuristic aggregation for whole-slide inference, for analyzing histological patterns in high-resolution microscopy images have shown promising results. These approaches, however, require a laborious annotation process and are fragmented. Objective: To evaluate a novel deep learning method that uses tissue-level annotations for high-resolution histological image analysis for Barrett esophagus (BE) and esophageal adenocarcinoma detection. Design, Setting, and Participants: This diagnostic study collected deidentified high-resolution histological images (N = 379) for training a new model composed of a convolutional neural network and a grid-based attention network. Histological images of patients who underwent endoscopic esophagus and gastroesophageal junction mucosal biopsy between January 1, 2016, and December 31, 2018, at Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire) were collected. Main Outcomes and Measures: The model was evaluated on an independent testing set of 123 histological images with 4 classes: normal, BE-no-dysplasia, BE-with-dysplasia, and adenocarcinoma. Performance of this model was measured and compared with that of the current state-of-the-art sliding window approach using the following standard machine learning metrics: accuracy, recall, precision, and F1 score. Results: Of the independent testing set of 123 histological images, 30 (24.4%) were in the BE-no-dysplasia class, 14 (11.4%) in the BE-with-dysplasia class, 21 (17.1%) in the adenocarcinoma class, and 58 (47.2%) in the normal class. Classification accuracies of the proposed model were 0.85 (95% CI, 0.81-0.90) for the BE-no-dysplasia class, 0.89 (95% CI, 0.84-0.92) for the BE-with-dysplasia class, and 0.88 (95% CI, 0.84-0.92) for the adenocarcinoma class. The proposed model achieved a mean accuracy of 0.83 (95% CI, 0.80-0.86) and marginally outperformed the sliding window approach on the same testing set. The F1 scores of the attention-based model were at least 8% higher for each class compared with the sliding window approach: 0.68 (95% CI, 0.61-0.75) vs 0.61 (95% CI, 0.53-0.68) for the normal class, 0.72 (95% CI, 0.63-0.80) vs 0.58 (95% CI, 0.45-0.69) for the BE-no-dysplasia class, 0.30 (95% CI, 0.11-0.48) vs 0.22 (95% CI, 0.11-0.33) for the BE-with-dysplasia class, and 0.67 (95% CI, 0.54-0.77) vs 0.58 (95% CI, 0.44-0.70) for the adenocarcinoma class. However, this outperformance was not statistically significant. Conclusions and Relevance: Results of this study suggest that the proposed attention-based deep neural network framework for BE and esophageal adenocarcinoma detection is important because it is based solely on tissue-level annotations, unlike existing methods that are based on regions of interest. This new model is expected to open avenues for applying deep learning to digital pathology.
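The grid-based attention network named above broadly follows the idea of attention pooling: patch (grid-cell) embeddings are weighted and summed into a single image-level representation before classification. The sketch below is a generic attention-pooling module with illustrative dimensions, not the paper's architecture.

```python
# Hedged sketch: attention-based pooling over patch embeddings (dimensions assumed).
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, embed_dim: int = 512, hidden_dim: int = 128, num_classes: int = 4):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (num_patches, embed_dim) for one high-resolution image
        weights = torch.softmax(self.attention(patch_embeddings), dim=0)  # (num_patches, 1)
        image_vector = (weights * patch_embeddings).sum(dim=0)            # (embed_dim,)
        return self.classifier(image_vector)                              # class logits
```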


Subjects
Adenocarcinoma/pathology; Barrett Esophagus/pathology; Deep Learning; Esophageal Neoplasms/pathology; Neural Networks, Computer; Biopsy; Computer Simulation; Datasets as Topic; Humans; Microscopy
9.
Sci Rep ; 9(1): 3358, 2019 03 04.
Article in English | MEDLINE | ID: mdl-30833650

ABSTRACT

Classification of histologic patterns in lung adenocarcinoma is critical for determining tumor grade and treatment for patients. However, this task is often challenging due to the heterogeneous nature of lung adenocarcinoma and the subjective criteria for evaluation. In this study, we propose a deep learning model that automatically classifies the histologic patterns of lung adenocarcinoma on surgical resection slides. Our model uses a convolutional neural network to identify regions of neoplastic cells, then aggregates those classifications to infer the predominant and minor histologic patterns for any given whole-slide image. We evaluated our model on an independent set of 143 whole-slide images. It achieved a kappa score of 0.525 and an agreement of 66.6% with three pathologists for classifying the predominant patterns, slightly higher than the inter-pathologist kappa score of 0.485 and agreement of 62.7% on this test set. All evaluation metrics for our model and the three pathologists were within 95% confidence intervals of agreement. If confirmed in clinical practice, our model could assist pathologists in improving the classification of lung adenocarcinoma patterns by automatically pre-screening and highlighting cancerous regions prior to review. Our approach can be generalized to any whole-slide image classification task, and our code is publicly available at https://github.com/BMIRDS/deepslide.
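A minimal sketch of the aggregation step described above, turning patch-level pattern predictions into predominant and minor patterns for a slide, is shown below; the pattern names and the 5% inclusion threshold are illustrative assumptions.

```python
# Hedged sketch: infer predominant and minor histologic patterns from patch labels.
from collections import Counter

PATTERNS = ["lepidic", "acinar", "papillary", "micropapillary", "solid"]

def predominant_and_minor(patch_predictions, min_fraction: float = 0.05):
    """patch_predictions: list of pattern names, one per neoplastic patch on a slide."""
    counts = Counter(patch_predictions)
    total = sum(counts.values())
    ranked = [p for p, c in counts.most_common() if c / total >= min_fraction]
    predominant = ranked[0] if ranked else None
    minor = ranked[1:]
    return predominant, minor

# Example: predominant_and_minor(["acinar", "acinar", "solid", "lepidic", "acinar"])
```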


Subjects
Adenocarcinoma of Lung/classification; Adenocarcinoma of Lung/pathology; Lung Neoplasms/pathology; Neural Networks, Computer; Adenocarcinoma of Lung/surgery; Automation; Deep Learning; Histological Techniques/methods; Humans; Lung Neoplasms/classification; Pathologists
10.
Neuropsychopharmacology ; 44(3): 487-494, 2019 02.
Article in English | MEDLINE | ID: mdl-30356094

ABSTRACT

Social media may provide new insight into our understanding of substance use and addiction. In this study, we developed a deep-learning method to automatically classify individuals' risk for alcohol, tobacco, and drug use based on the content from their Instagram profiles. In total, 2287 active Instagram users participated in the study. Deep convolutional neural networks for images and long short-term memory (LSTM) for text were used to extract predictive features from these data for risk assessment. The evaluation of our approach on a held-out test set of 228 individuals showed that among the substances we evaluated, our method could estimate the risk of alcohol abuse with statistical significance. These results are the first to suggest that deep-learning approaches applied to social media data can be used to identify potential substance use risk behavior, such as alcohol use. Utilization of automated estimation techniques can provide new insights for the next generation of population-level risk assessment and intervention delivery.
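For the text branch described above (an LSTM over Instagram text content), a generic sketch is given below; the vocabulary size, dimensions, and two-class risk output are illustrative assumptions, not the study's architecture.

```python
# Hedged sketch: LSTM-based classifier over integer-encoded post text.
import torch
import torch.nn as nn

class TextRiskClassifier(nn.Module):
    def __init__(self, vocab_size: int = 20000, embed_dim: int = 128,
                 hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer-encoded captions/comments
        embedded = self.embedding(token_ids)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)          # hidden: (1, batch, hidden_dim)
        return self.classifier(hidden[-1])            # (batch, num_classes) logits
```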


Subjects
Deep Learning; Risk Assessment/methods; Risk-Taking; Social Media; Substance-Related Disorders/epidemiology; Adult; Alcoholism/epidemiology; Humans; Risk Assessment/standards; Sensitivity and Specificity; Social Media/statistics & numerical data
11.
Comput Biol Med ; 98: 8-15, 2018 07 01.
Article in English | MEDLINE | ID: mdl-29758455

ABSTRACT

Osteoporotic vertebral fractures (OVFs) are prevalent in older adults and are associated with substantial personal suffering and socio-economic burden. Early diagnosis and treatment of OVFs are critical to prevent further fractures and morbidity. However, OVFs are often under-diagnosed and under-reported in computed tomography (CT) exams, as they can be asymptomatic at an early stage. In this paper, we present and evaluate an automatic system that can detect incidental OVFs in chest, abdomen, and pelvis CT examinations at the level of practicing radiologists. Our OVF detection system leverages a deep convolutional neural network (CNN) to extract radiological features from each slice in a CT scan. These extracted features are processed through a feature aggregation module to make the final diagnosis for the full CT scan. In this work, we explored different methods for this feature aggregation, including the use of a long short-term memory (LSTM) network. We trained and evaluated our system on 1432 CT scans, comprising 10,546 two-dimensional (2D) images in the sagittal view. Our system achieved an accuracy of 89.2% and an F1 score of 90.8% on a held-out test set of 129 CT scans, whose reference standards were established through standard semiquantitative and quantitative methods. The results of our system matched the performance of practicing radiologists on this test set in real-world clinical circumstances. We expect the proposed system will assist and improve OVF diagnosis in clinical settings by pre-screening routine CT examinations and flagging suspicious cases prior to review by radiologists.
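As one plausible instantiation of the feature aggregation module discussed above, the sketch below runs an LSTM over the per-slice CNN feature vectors to produce a scan-level prediction; the feature and hidden dimensions are assumptions for illustration.

```python
# Hedged sketch: aggregate per-slice feature vectors into a scan-level prediction.
import torch
import torch.nn as nn

class ScanAggregator(nn.Module):
    def __init__(self, feature_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)     # fracture vs. no fracture

    def forward(self, slice_features: torch.Tensor) -> torch.Tensor:
        # slice_features: (batch, num_slices, feature_dim), slices ordered along the spine
        _, (hidden, _) = self.lstm(slice_features)
        return self.classifier(hidden[-1])             # (batch, 2) scan-level logits
```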


Subjects
Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Osteoporotic Fractures/diagnostic imaging; Spinal Fractures/diagnostic imaging; Tomography, X-Ray Computed/methods; Humans; Osteoporosis/diagnostic imaging
12.
Ann Nucl Med ; 20(4): 317-20, 2006 May.
Article in English | MEDLINE | ID: mdl-16856576

ABSTRACT

We report a case of early gastric cancer and early colon cancer detected by positron emission tomography (PET) cancer screening. A 64-year-old male patient with an unremarkable past history except for hypertension and cerebrovascular disease underwent 18F-FDG PET for cancer screening. Images revealed increased uptake in the gastric antrum and sigmoid colon. Both areas appeared suspicious for neoplasm on subsequent fluoroscopy and endoscopy, and biopsies were positive for neoplasia at both sites. The gastric lesion was treated by distal gastrectomy and D2 lymphadenectomy and the colon cancer by endoscopic mucosal resection (EMR). Both surgical specimens were positive for cancer.


Subjects
Colonic Neoplasms/diagnostic imaging; Fluorodeoxyglucose F18; Positron-Emission Tomography/methods; Stomach Neoplasms/diagnostic imaging; Humans; Incidental Findings; Male; Middle Aged; Radiopharmaceuticals
13.
AJR Am J Roentgenol ; 184(5): 1572-7, 2005 May.
Article in English | MEDLINE | ID: mdl-15855117

ABSTRACT

OBJECTIVE: We evaluated the feasibility of creating 3D and multiphase fusion images of cholangiocarcinoma. The 3D rendering of the biliary tree provides valuable information for planning surgery, including the location of the obstruction and its relationship to the surrounding vessels. CONCLUSION: Our data emphasize that 3D and multiphase fusion images may be an accurate and routinely applicable tool for the diagnosis and therapeutic management of patients with biliary system abnormalities.


Subjects
Bile Duct Neoplasms/diagnostic imaging; Bile Ducts, Intrahepatic; Cholangiocarcinoma/diagnostic imaging; Cholangiography/methods; Imaging, Three-Dimensional; Tomography, X-Ray Computed/methods; Humans; Image Processing, Computer-Assisted
14.
Chest ; 122(4): 1485-7, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12377885

ABSTRACT

We present the case of a 55-year-old man with advanced esophageal cancer who was successfully treated using a self-expandable metallic stent (S-EMS) for 6 months and subsequently was treated for an esophagobronchial fistula as a complication of the initial S-EMS using a silicone airway stent for an additional 4 months. This is the first report in the literature concerning penetration into the airway of an S-EMS implanted in the esophagus. The present case suggests that airway stenting using a silicone stent as treatment for an esophagobronchial fistula may represent a useful modality.


Subjects
Bronchial Fistula/therapy; Carcinoma, Squamous Cell/complications; Catheterization/instrumentation; Esophageal Fistula/therapy; Esophageal Neoplasms/complications; Esophageal Stenosis/etiology; Esophageal Stenosis/therapy; Prostheses and Implants; Stents; Airway Obstruction/etiology; Airway Obstruction/therapy; Bronchial Fistula/diagnosis; Bronchial Fistula/etiology; Carcinoma, Squamous Cell/diagnosis; Carcinoma, Squamous Cell/therapy; Catheterization/methods; Coated Materials, Biocompatible; Esophageal Fistula/diagnosis; Esophageal Fistula/etiology; Esophageal Neoplasms/diagnosis; Esophageal Neoplasms/therapy; Esophageal Stenosis/diagnosis; Esophagoscopy/methods; Follow-Up Studies; Humans; Male; Middle Aged; Neoplasm Staging; Silicones; Treatment Outcome