Results 1 - 4 of 4

1.
J Med Imaging (Bellingham); 10(4): 044503, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37547812

ABSTRACT

Purpose: Deep learning (DL) models have received much attention lately for their ability to achieve expert-level performance on the accurate automated analysis of chest X-rays (CXRs). Recently available public CXR datasets include high-resolution images, but state-of-the-art models are trained on reduced-size images due to limitations on graphics processing unit memory and training time. As computing hardware continues to advance, it has become feasible to train deep convolutional neural networks on high-resolution images without sacrificing detail by downscaling. This study examines the effect of increased resolution on CXR classification performance.
Approach: We used the publicly available MIMIC-CXR-JPG dataset, comprising 377,110 high-resolution CXR images, for this study. We downscaled the images from native resolution to 2048×2048, 1024×1024, 512×512, and 256×256 pixels, and then used the DenseNet121 and EfficientNet-B4 DL models to evaluate clinical task performance at these four downscaled image resolutions.
Results: We find that while some clinical findings are more reliably labeled using high resolutions, many other findings are actually labeled better using downscaled inputs. We qualitatively verify that tasks requiring a large receptive field are better suited to downscaled, lower-resolution input images by inspecting the effective receptive fields and class activation maps of trained models. Finally, we show that a stacked ensemble across resolutions outperforms each individual learner at all input resolutions while providing interpretable scale weights, indicating that diverse information is extracted across resolutions.
Conclusions: This study suggests that instead of focusing solely on the finest image resolutions, multi-scale features should be emphasized for information extraction from high-resolution CXRs.
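
To make the multi-resolution setup concrete, the sketch below (PyTorch, using the torchvision DenseNet121 backbone named in the abstract) trains one learner per downscaled resolution and combines their logits through learnable scale weights, in the spirit of the stacked ensemble mentioned in the results. The resolution list follows the abstract; the 14-label head, random weight initialization, and everything around data handling are placeholder assumptions, not the authors' implementation.

# Minimal sketch, not the authors' implementation: one DenseNet121 learner per
# downscaled resolution, combined through learnable scale weights.
import torch
import torch.nn as nn
from torchvision import models, transforms

RESOLUTIONS = [256, 512, 1024, 2048]   # the four downscaled sizes from the study
NUM_FINDINGS = 14                      # assumed number of CXR finding labels

def make_learner(num_labels=NUM_FINDINGS):
    """DenseNet121 backbone with a multi-label classification head."""
    net = models.densenet121(weights=None)
    net.classifier = nn.Linear(net.classifier.in_features, num_labels)
    return net

class ResolutionStack(nn.Module):
    """Stacked ensemble across resolutions: each learner sees the image resized
    to its own resolution, and a softmax over learnable scale weights decides
    how much each resolution contributes to the final multi-label logits."""
    def __init__(self, resolutions=RESOLUTIONS):
        super().__init__()
        self.resize = {r: transforms.Resize((r, r), antialias=True) for r in resolutions}
        self.learners = nn.ModuleDict({str(r): make_learner() for r in resolutions})
        self.scale_logits = nn.Parameter(torch.zeros(len(resolutions)))

    def forward(self, image):            # image: (B, 3, H, W) at native resolution
        weights = torch.softmax(self.scale_logits, dim=0)
        logits = 0.0
        for w, (r, learner) in zip(weights, self.learners.items()):
            logits = logits + w * learner(self.resize[int(r)](image))
        return logits                    # train with nn.BCEWithLogitsLoss

model = ResolutionStack()

After training, softmax(model.scale_logits) gives the interpretable per-resolution weights referred to in the abstract.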

2.
Artif Intell Med; 101: 101726, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31813492

ABSTRACT

We introduce a deep learning architecture, hierarchical self-attention networks (HiSANs), designed for classifying pathology reports and show how its unique architecture leads to a new state-of-the-art in accuracy, faster training, and clear interpretability. We evaluate performance on a corpus of 374,899 pathology reports obtained from the National Cancer Institute's (NCI) Surveillance, Epidemiology, and End Results (SEER) program. Each pathology report is associated with five clinical classification tasks: site, laterality, behavior, histology, and grade. We compare the performance of the HiSAN against other machine learning and deep learning approaches commonly used on medical text data: Naive Bayes, logistic regression, convolutional neural networks, and hierarchical attention networks (the previous state-of-the-art). We show that HiSANs are superior to other machine learning and deep learning text classifiers in both accuracy and macro F-score across all five classification tasks. Compared to the previous state-of-the-art, hierarchical attention networks, HiSANs not only are an order of magnitude faster to train, but also achieve about 1% better relative accuracy and 5% better relative macro F-score.


Subjects
Neoplasms/pathology, Deep Learning, Humans, Natural Language Processing, Neoplasms/classification, Neural Networks, Computer
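
For readers who want a concrete picture of the hierarchical self-attention idea, the sketch below is a generic hierarchical self-attention text classifier in the spirit of the HiSAN described in record 2, not the authors' reference implementation: token self-attention within fixed-length chunks, attention pooling to chunk vectors, self-attention over chunks, and one output head per SEER task (site, laterality, behavior, histology, grade). All dimensions, chunk length, and tokenization are placeholder assumptions.

# Generic hierarchical self-attention classifier (illustrative only), in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Learned attention pooling over a sequence of vectors."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                        # x: (B, T, D)
        w = torch.softmax(self.score(x), dim=1)  # (B, T, 1)
        return (w * x).sum(dim=1)                # (B, D)

class HierSelfAttnClassifier(nn.Module):
    def __init__(self, vocab_size, task_sizes, dim=128, num_heads=4, chunk_len=64):
        """task_sizes maps task name -> number of classes, e.g. the five SEER
        tasks: site, laterality, behavior, histology, grade."""
        super().__init__()
        self.chunk_len = chunk_len
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.word_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.word_pool = AttentionPool(dim)
        self.chunk_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.doc_pool = AttentionPool(dim)
        self.heads = nn.ModuleDict({t: nn.Linear(dim, n) for t, n in task_sizes.items()})

    def forward(self, tokens):                   # tokens: (B, T) integer ids
        B, T = tokens.shape
        pad = (-T) % self.chunk_len
        tokens = F.pad(tokens, (0, pad))         # pad id 0 = padding_idx
        x = self.embed(tokens)                   # (B, T+pad, D)
        x = x.view(-1, self.chunk_len, x.size(-1))
        x, _ = self.word_attn(x, x, x)           # self-attention within each chunk
        chunks = self.word_pool(x).view(B, -1, x.size(-1))
        chunks, _ = self.chunk_attn(chunks, chunks, chunks)  # attention across chunks
        doc = self.doc_pool(chunks)              # document representation
        return {t: head(doc) for t, head in self.heads.items()}

Each head returns per-task logits, so the five tasks are trained jointly from one document representation, which is the multi-task framing the abstract describes.
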
3.
Article in English | MEDLINE | ID: mdl-36081613

ABSTRACT

Automated text information extraction from cancer pathology reports is an active area of research to support national cancer surveillance. A well-known challenge is how to develop information extraction tools with robust performance across cancer registries. In this study, we investigated whether transfer learning (TL) with a convolutional neural network (CNN) can facilitate cross-registry knowledge sharing. Specifically, we performed a series of experiments to determine whether a CNN trained with single-registry data is capable of transferring knowledge to another registry, or whether developing a cross-registry knowledge database produces a more effective and generalizable model. Using data from two cancer registries, with primary tumor site and topography as the information extraction task of interest, our study showed that TL yields 6.90% and 17.22% improvements in classification macro F-score over the baseline single-registry models. Detailed analysis illustrated that the observed improvement is evident in the low-prevalence classes.
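
As a minimal sketch of the cross-registry transfer-learning setup (illustrative only; the paper's exact CNN architecture, vocabulary, and training procedure are not reproduced here), a word-level text CNN trained on registry A can be cloned for registry B with only the classification head re-initialized before fine-tuning. The TextCNN layout and all hyperparameters below are assumptions.

# Illustrative transfer-learning sketch in PyTorch, not the paper's exact model.
import copy
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Simple word-level CNN classifier for pathology report text."""
    def __init__(self, vocab_size, num_classes, dim=300, widths=(3, 4, 5), filters=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.convs = nn.ModuleList(nn.Conv1d(dim, filters, w) for w in widths)
        self.head = nn.Linear(filters * len(widths), num_classes)

    def forward(self, tokens):                     # tokens: (B, T) integer ids
        x = self.embed(tokens).transpose(1, 2)     # (B, D, T)
        feats = [conv(x).relu().amax(dim=2) for conv in self.convs]  # max over time
        return self.head(torch.cat(feats, dim=1))  # site/topography logits

def transfer_to_registry_b(model_a, num_classes_b):
    """Clone a model trained on registry A and re-initialize only its output
    head, so the learned text features carry over and the whole network can
    then be fine-tuned on registry B's labels."""
    model_b = copy.deepcopy(model_a)
    model_b.head = nn.Linear(model_b.head.in_features, num_classes_b)
    return model_b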

4.
Biotechnol Biofuels; 8: 212, 2015.
Article in English | MEDLINE | ID: mdl-26709354

ABSTRACT

BACKGROUND: Substrate accessibility to catalysts has been a dominant theme in theories of biomass deconstruction. However, current methods of quantifying accessibility do not elucidate the mechanisms by which changes in microstructure following pretreatment increase accessibility.
RESULTS: We introduce methods for characterizing surface accessibility based on the fine-scale microstructure of the plant cell wall as revealed by 3D electron tomography. These methods comprise a general framework enabling analysis of image-based cell wall architecture using a flexible model of accessibility. We analyze corn stover cell walls, both native and after dilute acid pretreatment with and without a steam explosion process, as well as after AFEX pretreatment.
CONCLUSION: Image-based measures provide useful information about the extent to which pretreatments increase biomass surface accessibility to a wide range of catalyst sizes. We find a strong dependence on probe size when measuring surface accessibility, with a substantial decrease in accessibility for probes with radii above 5-10 nm compared to smaller probes.
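
To make the probe-size dependence concrete, here is one simple way to compute a probe-size-dependent accessibility measure on a segmented binary tomogram (solid cell wall vs. pore space). This is only an illustration of the general idea, not the paper's framework: the pore space is morphologically opened by a spherical probe, and only solid voxels bordering the surviving pore space count as accessible surface, so the accessible fraction naturally falls as the probe radius grows.

# Hedged illustration using NumPy/SciPy; the segmentation, voxel size, and the
# specific accessibility metric are assumptions, not the authors' definitions.
import numpy as np
from scipy import ndimage

def spherical_probe(radius_vox):
    """Binary ball structuring element with the given radius in voxels."""
    r = int(radius_vox)
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (x * x + y * y + z * z) <= r * r

def accessible_surface_fraction(solid, probe_radius_vox):
    """Fraction of the solid surface reachable by a spherical probe.
    `solid` is a boolean 3D array (True = cell wall material, False = pore)."""
    ball = spherical_probe(probe_radius_vox)
    pore = ~solid
    # Pore voxels where the probe centre fits without overlapping the solid...
    centres = ndimage.binary_erosion(pore, structure=ball)
    # ...grown back by the probe radius: the pore space the probe can sweep.
    reachable_pore = ndimage.binary_dilation(centres, structure=ball)
    surface = solid & ndimage.binary_dilation(pore)              # all surface voxels
    accessible = solid & ndimage.binary_dilation(reachable_pore)
    return float(accessible.sum()) / max(float(surface.sum()), 1.0)

# Example: accessibility drops as the probe radius grows (radii here are in
# voxels; converting 5-10 nm to voxels requires the tomogram's voxel size).
# fractions = {r: accessible_surface_fraction(solid_volume, r) for r in (1, 3, 6, 12)}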
