Results 1 - 5 of 5
1.
Analyst ; 147(7): 1425-1439, 2022 Mar 28.
Article in English | MEDLINE | ID: mdl-35253812

ABSTRACT

Raman spectroscopy is a non-destructive analysis technique that provides detailed information about the chemical structure of tumors. Raman spectra of 52 giant cell tumors of bone (GCTB) and 21 adjacent normal tissues from formalin-fixed paraffin-embedded (FFPE) and frozen specimens were obtained using a confocal Raman spectrometer and analyzed with machine learning and deep learning algorithms. We discovered characteristic Raman shifts in the GCTB specimens, which were assigned to phenylalanine and tyrosine. Based on the spectroscopic data, classification algorithms including support vector machine, k-nearest neighbors and long short-term memory (LSTM) were successfully applied to discriminate GCTB from adjacent normal tissue in both the FFPE and frozen specimens, with accuracies ranging from 82.8% to 94.5%. Importantly, our LSTM algorithm showed the best performance on the frozen specimens, with a sensitivity of 93.9%, a specificity of 95.1%, and an AUC of 0.97. These results suggest that confocal Raman spectroscopy combined with the LSTM network can non-destructively evaluate a tumor margin through its inherent biochemical specificity, which may allow intraoperative assessment of the adequacy of tumor clearance.
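As a rough illustration of the classification step described above, the sketch below (Python/PyTorch) treats each Raman spectrum as a one-dimensional sequence of intensities and uses a small LSTM to output tumor-versus-normal logits. The layer sizes, spectrum length, and data handling are illustrative assumptions, not the authors' configuration.

# Minimal sketch: an LSTM that reads a Raman spectrum as a 1D sequence and
# predicts GCTB vs. adjacent normal tissue. All shapes and hyperparameters
# here are assumptions for illustration.
import torch
import torch.nn as nn

class RamanLSTM(nn.Module):
    def __init__(self, hidden_size=64, num_layers=2):
        super().__init__()
        # Each time step is a single intensity value along the Raman-shift axis.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 2)  # GCTB vs. normal

    def forward(self, spectra):
        # spectra: (batch, n_wavenumbers) -> (batch, n_wavenumbers, 1)
        x = spectra.unsqueeze(-1)
        _, (h_n, _) = self.lstm(x)
        return self.classifier(h_n[-1])  # logits from the last hidden state

# Toy usage with random data standing in for preprocessed spectra.
model = RamanLSTM()
fake_spectra = torch.randn(8, 1024)   # 8 spectra, 1024 Raman-shift bins (assumed)
logits = model(fake_spectra)
print(logits.shape)                   # torch.Size([8, 2])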


Subject(s)
Deep Learning, Giant Cell Tumors, Algorithms, Humans, Raman Spectroscopy/methods, Support Vector Machine
2.
Med Image Anal ; 98: 103324, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39213939

ABSTRACT

Although the Segment Anything Model (SAM) achieves impressive results on general-purpose semantic segmentation, with strong generalization on everyday images, its performance on medical image segmentation is less precise and less stable, especially in tumor segmentation tasks that involve small objects, irregular shapes, and low contrast. Moreover, the original SAM architecture is designed for 2D natural images and therefore cannot effectively extract 3D spatial information from volumetric medical data. In this paper, we propose a novel adaptation method for transferring SAM from 2D to 3D for promptable medical image segmentation. Through a holistically designed scheme of architecture modifications, we transfer SAM to support volumetric inputs while reusing the majority of its pre-trained parameters. The fine-tuning process is conducted in a parameter-efficient manner: most of the pre-trained parameters remain frozen, and only a few lightweight spatial adapters are introduced and tuned. Despite the domain gap between natural and medical data and the disparity in spatial arrangement between 2D and 3D, the transformer trained on natural images can effectively capture the spatial patterns present in volumetric medical images with only lightweight adaptations. We conduct experiments on four open-source tumor segmentation datasets and, with a single click prompt, our model outperforms state-of-the-art medical image segmentation models and interactive segmentation models. We also compare our adaptation method with existing popular adapters and observe significant performance improvements on most datasets. Our code and models are available at: https://github.com/med-air/3DSAM-adapter.
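The parameter-efficient idea described above can be sketched as follows: the pre-trained blocks are frozen and only small bottleneck adapters are trained. The module names, dimensions, and placeholder encoder blocks below are assumptions made for illustration; the authors' actual implementation is in the linked repository.

# Hedged sketch of parameter-efficient adaptation with lightweight adapters.
import torch
import torch.nn as nn

class SpatialAdapter(nn.Module):
    """Small bottleneck adapter with a residual connection."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

def attach_adapters(encoder_blocks, dim):
    """Freeze pre-trained blocks and pair each one with a trainable adapter."""
    for block in encoder_blocks:
        for p in block.parameters():
            p.requires_grad = False        # keep pre-trained weights frozen
    return nn.ModuleList(SpatialAdapter(dim) for _ in encoder_blocks)

# Toy usage: two placeholder "transformer blocks" standing in for a pre-trained encoder.
blocks = nn.ModuleList([nn.Linear(256, 256) for _ in range(2)])
adapters = attach_adapters(blocks, dim=256)
tokens = torch.randn(1, 196, 256)          # (batch, tokens, channels), assumed
for block, adapter in zip(blocks, adapters):
    tokens = adapter(block(tokens))
trainable = sum(p.numel() for p in adapters.parameters())
print(f"trainable adapter parameters: {trainable}")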


Subject(s)
Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, Algorithms, Neoplasms/diagnostic imaging
3.
IEEE Trans Med Imaging ; 43(8): 2778-2789, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38635381

ABSTRACT

Aneurysmal subarachnoid hemorrhage is a medical emergency of the brain with high mortality and poor prognosis. Estimating the causal effect of treatment strategies on patient outcomes is crucial for treatment decision-making in aneurysmal subarachnoid hemorrhage. However, most existing studies on treatment decision support for this disease cannot simultaneously compare the potential outcomes of different treatments for a patient. Furthermore, these studies fail to harmoniously integrate imaging data with non-imaging clinical data, both of which are useful in clinical scenarios. In this paper, we estimate the causal effect of various treatments on patients with aneurysmal subarachnoid hemorrhage by integrating plain CT with non-imaging clinical data, which are represented as structured tabular data. Specifically, we first propose a novel scheme that uses a multi-modality confounder-distillation architecture to predict the treatment outcome and the treatment assignment simultaneously. With these distilled confounder features, we design an imaging and non-imaging interaction representation learning strategy that uses the complementary information extracted from the different modalities to balance the feature distributions of the treatment groups. We have conducted extensive experiments on a clinical dataset of 656 subarachnoid hemorrhage cases collected from the Hospital Authority Data Collaboration Laboratory in Hong Kong. Our method shows consistent improvements on the evaluation metrics of treatment effect estimation, achieving state-of-the-art results over strong competitors. Code is released at https://github.com/med-air/TOP-aSAH.
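A minimal sketch of the two-headed prediction described above: a shared representation, standing in for the distilled confounder features, feeds one head that predicts the outcome and another that predicts the treatment assignment. The feature dimensions and the simple concatenation-based fusion of imaging and tabular inputs are illustrative assumptions only, not the authors' architecture.

# Illustrative two-head model: shared fused features -> outcome and treatment heads.
import torch
import torch.nn as nn

class ConfounderTwoHead(nn.Module):
    def __init__(self, img_dim=128, tab_dim=32, hidden=64, n_treatments=2):
        super().__init__()
        # Fuse imaging features with non-imaging tabular features.
        self.fuse = nn.Sequential(nn.Linear(img_dim + tab_dim, hidden), nn.ReLU())
        self.outcome_head = nn.Linear(hidden, 1)                # predicted outcome
        self.treatment_head = nn.Linear(hidden, n_treatments)   # treatment assignment

    def forward(self, img_feat, tab_feat):
        z = self.fuse(torch.cat([img_feat, tab_feat], dim=-1))
        return self.outcome_head(z), self.treatment_head(z)

# Toy usage with random features standing in for CT and clinical variables.
model = ConfounderTwoHead()
outcome, treatment_logits = model(torch.randn(4, 128), torch.randn(4, 32))
print(outcome.shape, treatment_logits.shape)   # torch.Size([4, 1]) torch.Size([4, 2])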


Subject(s)
Subarachnoid Hemorrhage, Humans, Subarachnoid Hemorrhage/diagnostic imaging, X-Ray Computed Tomography/methods, Factual Databases, Algorithms, Clinical Decision Support Systems
4.
IEEE Trans Med Imaging ; 40(5): 1363-1376, 2021 05.
Article in English | MEDLINE | ID: mdl-33507867

ABSTRACT

To better understand early brain development in health and disorder, it is critical to accurately segment infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Deep learning-based methods have achieved state-of-the-art performance; however, one of their major limitations is the multi-site issue: models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols or scanners. To promote methodological development in the community, the iSeg-2019 challenge (http://iseg2019.web.unc.edu) provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods. Training/validation subjects are from UNC (MAP), and testing subjects are from UNC/UMN (BCP), Stanford University, and Emory University. By the time of writing, 30 automatic segmentation methods had participated in iSeg-2019. In this article, we review the 8 top-ranked methods by detailing their pipelines/implementations, presenting experimental results, and evaluating performance across sites in terms of the whole brain, regions of interest, and gyral landmark curves. We further point out their limitations and possible directions for addressing the multi-site issue. We find that multi-site consistency is still an open issue. We hope that the multi-site dataset in iSeg-2019 and this review article will attract more researchers to address this challenging and critical issue in practice.
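The cross-site comparison described above can be illustrated with a small per-site Dice computation. The label convention (1 = CSF, 2 = GM, 3 = WM) follows the iSeg setup, but the grouping code itself is an assumption made for illustration, not the challenge's evaluation tooling.

# Per-class Dice scores grouped by acquisition site (illustrative only).
import numpy as np

def dice(pred, gt, label):
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

def per_site_dice(cases):
    """cases: list of (site_name, predicted_volume, ground_truth_volume)."""
    results = {}
    for site, pred, gt in cases:
        scores = {name: dice(pred, gt, lab)
                  for lab, name in [(1, "CSF"), (2, "GM"), (3, "WM")]}
        results.setdefault(site, []).append(scores)
    return results

# Toy usage with random label volumes standing in for segmentations.
rng = np.random.default_rng(0)
vol = lambda: rng.integers(0, 4, size=(32, 32, 32))
print(per_site_dice([("UNC/UMN-BCP", vol(), vol()), ("Stanford", vol(), vol())]))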


Subject(s)
Algorithms, Magnetic Resonance Imaging, Brain/diagnostic imaging, Brain Mapping, Gray Matter, Humans, Infant
5.
IEEE Trans Med Imaging ; 39(4): 898-909, 2020 04.
Article in English | MEDLINE | ID: mdl-31449009

ABSTRACT

The segmentation of brain tissue in MRI is valuable for extracting brain structure to aid diagnosis, treatment, and the tracking of progression in different neurologic diseases. Medical image data are volumetric, and some neural network models for medical image segmentation address this with 3D convolutional architectures. However, this volumetric spatial information has not been fully exploited to enhance the representational ability of deep networks, and these networks have not fully addressed the practical issues facing the analysis of multimodal MRI data. In this paper, we propose a spatially-weighted 3D network (SW-3D-UNet) for brain tissue segmentation of single-modality MRI and extend it to multimodality MRI data. We validate our model on the MRBrainS13 and MALC12 datasets. At the time of writing, the model ranked first on the leaderboard of the MRBrainS13 Challenge.
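One plausible reading of the spatial weighting named above is a 3D convolution block whose features are modulated by a learned voxel-wise weight map. The sketch below shows that generic idea only; it is not the authors' SW-3D-UNet architecture, and all shapes are assumptions.

# Generic spatially-weighted 3D convolution block (illustrative interpretation).
import torch
import torch.nn as nn

class SpatiallyWeightedConv3d(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution producing a voxel-wise weight map in (0, 1).
        self.weight = nn.Sequential(nn.Conv3d(out_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        feat = self.conv(x)
        return feat * self.weight(feat)    # emphasize informative voxels

# Toy usage: one T1-like modality, a 32^3 patch (shapes are assumptions).
block = SpatiallyWeightedConv3d(in_ch=1, out_ch=16)
out = block(torch.randn(1, 1, 32, 32, 32))
print(out.shape)   # torch.Size([1, 16, 32, 32, 32])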


Subject(s)
Brain/diagnostic imaging, Deep Learning, Three-Dimensional Imaging/methods, Magnetic Resonance Imaging/methods, Algorithms, Female, Humans, Male