Results 1 - 2 of 2
1.
J Med Imaging (Bellingham) ; 10(4): 044503, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37547812

ABSTRACT

Purpose: Deep learning (DL) models have recently received much attention for their ability to achieve expert-level performance on the accurate automated analysis of chest X-rays (CXRs). Recently available public CXR datasets include high-resolution images, but state-of-the-art models are trained on reduced-size images because of limits on graphics processing unit memory and training time. As computing hardware continues to advance, it has become feasible to train deep convolutional neural networks on high-resolution images without sacrificing detail through downscaling. This study examines the effect of increased resolution on CXR classification performance.

Approach: We used the publicly available MIMIC-CXR-JPG dataset, comprising 377,110 high-resolution CXR images. We downscaled images from native resolution to 2048×2048, 1024×1024, 512×512, and 256×256 pixels, and then used the DenseNet121 and EfficientNet-B4 DL models to evaluate clinical task performance at each of these four resolutions.

Results: We find that while some clinical findings are labeled more reliably at high resolutions, many other findings are actually labeled better from downscaled inputs. By inspecting the effective receptive fields and class activation maps of trained models, we qualitatively verify that tasks requiring a large receptive field are better suited to downscaled, low-resolution input images. Finally, we show that a stacked ensemble across resolutions outperforms each individual learner at all input resolutions while providing interpretable scale weights, indicating that diverse information is extracted across resolutions.

Conclusions: This study suggests that instead of focusing solely on the finest image resolutions, multi-scale features should be emphasized for information extraction from high-resolution CXRs.
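The resolution-stacking idea in the abstract can be sketched in a few lines. The snippet below is a minimal, synthetic illustration, not the paper's actual pipeline: the per-resolution "base learner" probabilities are simulated stand-ins (the paper trains DenseNet121 and EfficientNet-B4), and a simple logistic-regression stacker learns one weight per resolution, which plays the role of the interpretable scale weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-resolution base-learner outputs: each column
# holds one model's predicted probability for a finding at one input
# resolution (256, 512, 1024, 2048 -- all values here are illustrative).
n = 500
y = rng.integers(0, 2, size=n)
resolutions = [256, 512, 1024, 2048]
base_probs = np.column_stack([
    np.clip(y + rng.normal(0, 0.30 + 0.05 * i, size=n), 0, 1)
    for i, _ in enumerate(resolutions)
])

def fit_stacker(X, y, lr=0.5, steps=2000):
    """Logistic-regression stacker over per-resolution probabilities."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        g = p - y                                 # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

w, b = fit_stacker(base_probs, y)
ensemble = 1.0 / (1.0 + np.exp(-(base_probs @ w + b)))

# The learned weights indicate how much each resolution contributes.
for r, wi in zip(resolutions, w):
    print(f"{r}x{r}: weight {wi:+.2f}")
```

Because the stacker sees all resolutions at once, it can exploit whichever scale is most reliable per finding, which is the intuition behind the ensemble outperforming every single-resolution learner.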

2.
J Am Soc Mass Spectrom ; 34(2): 227-235, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36625762

ABSTRACT

Prostate cancer is one of the most common cancers globally and the second most common cancer among men in the US. Here we present a study that correlates hematoxylin and eosin (H&E)-stained biopsy data with MALDI mass-spectrometry imaging data of the corresponding tissue to determine cancerous regions, their unique chemical signatures, and how the predicted regions vary from the original pathological annotations. We extract features from high-resolution optical micrographs of whole-slide H&E-stained data through deep learning and spatially register them with mass spectrometry imaging (MSI) data to correlate the chemical signature with the tissue anatomy. We then use the learned correlation to predict prostate cancer from observed H&E images using trained, coregistered MSI data. This multimodal approach predicts cancerous regions with ∼80% accuracy, indicating a correlation between optical H&E features and the chemical information found in MSI. We show that such paired multimodal data can be used to train feature-extraction networks on H&E data alone, bypassing the need to acquire expensive MSI data and eliminating the need for manual annotation, saving valuable time. Two chemical biomarkers were also found to be predictive of the ground-truth cancerous regions. This study shows promise for improving patient treatment trajectories by predicting prostate cancer directly from readily available H&E-stained biopsy images, aided by coregistered MSI data.
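The registration-then-prediction idea in the abstract can be sketched on synthetic data. The snippet below is a hedged illustration only: the coordinates, features, and labels are fabricated stand-ins (the study uses deep H&E features and real MSI-derived annotations), but the structure is the same — assign each H&E patch the label of its spatially nearest MSI pixel, then train an H&E-only classifier on those transferred labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coregistered data: MSI pixels on a coarse 5-unit grid, each
# carrying a cancer/no-cancer label, plus H&E patches at finer, irregular
# positions with per-patch feature vectors (all shapes are illustrative).
msi_coords = np.stack(np.meshgrid(np.arange(20), np.arange(20)), -1).reshape(-1, 2) * 5.0
msi_label = (msi_coords[:, 0] > 50).astype(int)   # toy "cancerous region"

he_coords = rng.uniform(0, 100, size=(1000, 2))
# Toy H&E features: one correlated with the spatial pattern, one pure noise.
he_feats = np.column_stack([
    he_coords[:, 0] / 100 + rng.normal(0, 0.2, 1000),
    rng.normal(0, 1, 1000),
])

# Spatial registration: label each H&E patch with its nearest MSI pixel.
d2 = ((he_coords[:, None, :] - msi_coords[None, :, :]) ** 2).sum(-1)
labels = msi_label[d2.argmin(axis=1)]

def fit_logreg(X, y, lr=0.5, steps=3000):
    """Plain gradient-descent logistic regression on H&E features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

w, b = fit_logreg(he_feats, labels)
pred = (1.0 / (1.0 + np.exp(-(he_feats @ w + b))) > 0.5).astype(int)
acc = (pred == labels).mean()
print(f"H&E-only accuracy against MSI-derived labels: {acc:.2f}")
```

Once such a classifier is trained, new slides need only the cheap H&E modality, which is the point the abstract makes about bypassing MSI acquisition and manual annotation.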


Subject(s)
Deep Learning , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization/methods