Results 1 - 7 of 7

1.
Front Comput Neurosci ; 18: 1365727, 2024.
Article in English | MEDLINE | ID: mdl-38784680

ABSTRACT

Automatic segmentation of vestibular schwannoma (VS) from routine clinical MRI has the potential to improve clinical workflow, facilitate treatment decisions, and assist patient management. Previous work demonstrated reliable automatic segmentation performance on datasets of standardized MRI images acquired for stereotactic surgery planning. However, diagnostic clinical datasets are generally more diverse and pose a larger challenge to automatic segmentation algorithms, especially when post-operative images are included. In this work, we show for the first time that automatic segmentation of VS on routine MRI datasets is also possible with high accuracy. We acquired and publicly release a curated multi-center routine clinical (MC-RC) dataset of 160 patients with a single sporadic VS. For each patient, up to three longitudinal MRI exams with contrast-enhanced T1-weighted (ceT1w) (n = 124) and T2-weighted (T2w) (n = 363) images were included, and the VS was manually annotated. Segmentations were produced and verified in an iterative process: (1) initial segmentations by a specialized company; (2) review by one of three trained radiologists; and (3) validation by an expert team. Inter- and intra-observer reliability experiments were performed on a subset of the dataset. A state-of-the-art deep learning framework was used to train segmentation models for VS. Model performance was evaluated on an MC-RC hold-out testing set, another public VS dataset, and a partially public dataset. The generalizability and robustness of the VS deep learning segmentation models increased significantly when trained on the MC-RC dataset. Dice similarity coefficients (DSC) achieved by our model are comparable to those achieved by trained radiologists in the inter-observer experiment. On the MC-RC testing set, median DSCs were 86.2(9.5) for ceT1w, 89.4(7.0) for T2w, and 86.4(8.6) for combined ceT1w+T2w input images.
On another public dataset acquired for Gamma Knife stereotactic radiosurgery, our model achieved median DSCs of 95.3(2.9), 92.8(3.8), and 95.5(3.3), respectively. In contrast, models trained on the Gamma Knife dataset did not generalize well, as illustrated by significant underperformance on the MC-RC routine MRI dataset, highlighting the importance of data variability in the development of robust VS segmentation models. The MC-RC dataset and all trained deep learning models have been made available online.
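The evaluation metric reported above, the Dice similarity coefficient (DSC), has a simple closed form: DSC = 2|A ∩ B| / (|A| + |B|). As a point of reference, a minimal stdlib Python sketch over flattened binary masks (the toy masks below are illustrative, not drawn from the MC-RC dataset):

```python
def dice_similarity(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks.

    pred, truth: flat sequences of 0/1 voxel labels of equal length.
    DSC = 2*|A intersect B| / (|A| + |B|); returns 1.0 when both masks are empty.
    """
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    size_sum = sum(pred) + sum(truth)
    if size_sum == 0:
        return 1.0
    return 2.0 * intersection / size_sum

# Toy flattened 8-voxel "volume" (illustrative, not real MRI data)
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice_similarity(pred, truth))  # → 0.75
```

Note that the abstract reports DSCs on a 0-100 scale; the function above returns the equivalent value in [0, 1].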

2.
Light Sci Appl ; 12(1): 228, 2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37704619

ABSTRACT

Limited throughput is a key challenge in in vivo deep-tissue imaging using nonlinear optical microscopy. Point-scanning multiphoton microscopy, the current gold standard, is slow, especially compared to the widefield imaging modalities used for optically cleared or thin specimens. We recently introduced "De-scattering with Excitation Patterning" (DEEP) as a widefield alternative to point-scanning geometries. Using patterned multiphoton excitation, DEEP encodes spatial information inside tissue before scattering. However, to de-scatter at typical depths, hundreds of such patterned excitations were needed. In this work, we present DEEP2, a deep learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. Consequently, we improve DEEP's throughput by almost an order of magnitude. We demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to four scattering lengths deep in live mice.

3.
Res Sq ; 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37333305

ABSTRACT

Today, the gold standard for in vivo imaging through scattering tissue is point-scanning two-photon microscopy (PSTPM). Especially in neuroscience, PSTPM is widely used for deep-tissue imaging in the brain. However, due to sequential scanning, PSTPM is slow. Temporal focusing microscopy (TFM), on the other hand, focuses femtosecond pulsed laser light temporally while keeping wide-field illumination, and is consequently much faster. However, due to the use of a camera detector, TFM suffers from scattering of emission photons. As a result, TFM produces images of poor quality that obscure fluorescent signals from small structures such as dendritic spines. In this work, we present a de-scattering deep neural network (DeScatterNet) to improve the quality of TFM images. Using a 3D convolutional neural network (CNN), we build a map from the TFM to the PSTPM modality, enabling fast TFM imaging while maintaining high image quality through scattering media. We demonstrate this approach for in vivo imaging of dendritic spines on pyramidal neurons in the mouse visual cortex. We quantitatively show that our trained network rapidly outputs images that recover biologically relevant features previously buried in the scattered fluorescence of the TFM images. In vivo imaging that combines TFM with the proposed neural network is one to two orders of magnitude faster than PSTPM but retains the high quality necessary to analyze small fluorescent structures. The proposed approach could also be beneficial for improving the performance of many speed-demanding deep-tissue imaging applications, such as in vivo voltage imaging.
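DeScatterNet itself is a trained 3D CNN; its elementary building block is the 3D convolution over image volumes. A pure-Python sketch of a single valid-mode 3D convolution, applied here with a fixed 3×3×3 averaging kernel as a stand-in for learned weights (illustrative only; the real network stacks many such learned layers):

```python
def conv3d_valid(volume, kernel):
    """Single-channel 3D convolution (valid padding, stride 1).

    volume: nested list [D][H][W]; kernel: nested list [kd][kh][kw].
    This is the elementary operation a 3D CNN such as DeScatterNet
    stacks and learns; here the kernel is fixed, not trained.
    """
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - kd + 1):
        plane = []
        for y in range(H - kh + 1):
            row = []
            for x in range(W - kw + 1):
                acc = 0.0
                for dz in range(kd):
                    for dy in range(kh):
                        for dx in range(kw):
                            acc += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# 3x3x3 box (averaging) kernel acting as a crude smoother
box = [[[1 / 27] * 3 for _ in range(3)] for _ in range(3)]
# Synthetic 4x4x4 volume with intensity z + y + x
vol = [[[float(z + y + x) for x in range(4)] for y in range(4)] for z in range(4)]
smoothed = conv3d_valid(vol, box)  # output shape 2x2x2
```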

4.
Front Oncol ; 13: 1131013, 2023.
Article in English | MEDLINE | ID: mdl-37182138

ABSTRACT

Extra-axial brain tumors are extra-cerebral tumors and are usually benign. The choice of treatment for extra-axial tumors often depends on the growth of the tumor, and imaging plays a significant role in monitoring growth and clinical decision-making. This motivates the investigation of imaging biomarkers for these tumors that may be incorporated into clinical workflows to inform treatment decisions. The PubMed, Web of Science, Embase, and Medline databases were searched from 1 January 2000 to 7 March 2022 to systematically identify relevant publications in this area. All studies that used an imaging tool and found an association with a growth-related factor, including molecular markers, grade, survival, growth/progression, recurrence, and treatment outcomes, were included in this review. We included 44 studies: 22 studies (50%) of patients with meningioma; 17 studies (38.6%) of patients with pituitary tumors; three studies (6.8%) of patients with vestibular schwannomas; and two studies (4.5%) of patients with solitary fibrous tumors. The included studies were analyzed narratively according to tumor type and imaging tool. The risk of bias and concerns regarding applicability were assessed using QUADAS-2. Most studies (41/44) used statistics-based analysis methods, and a small number (3/44) used machine learning. Our review highlights an opportunity for future work to focus on machine learning-based deep feature identification as biomarkers, combining various feature classes such as size, shape, and intensity. Systematic Review Registration: PROSPERO, CRD42022306922.

5.
Med Biol Eng Comput ; 60(2): 337-348, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34859369

ABSTRACT

Segmentation of intracerebral hemorrhage (ICH) helps improve the quality of diagnosis, plan appropriate treatment, and clinically monitor variations relative to healthy patients. The clinical utilization of various ICH progression scoring systems is limited by their modest predictive value. This paper proposes a single-pipeline multi-task model for end-to-end hemorrhage segmentation and risk estimation. We introduce a 3D spatial attention unit and integrate it into the state-of-the-art segmentation architecture UNet to enhance accuracy by bootstrapping the global spatial representation. We further extract geometric features from the segmented hemorrhage volume and fuse them with clinical features such as the CT angiography (CTA) spot sign, Glasgow Coma Scale (GCS), and age to predict ICH stability. Several state-of-the-art machine learning techniques, such as multilayer perceptron (MLP), support vector machine (SVM), gradient boosting, and random forests, are trained for stability estimation and their performances compared. To align clinical intuition with model learning, we determine Shapley values (SHAP) and explain the most significant features for the ICH risk scoring system. A total of 79 patients are included, of which 20 are found in critical condition. Our proposed single-pipeline model achieves a segmentation accuracy of 86.3%, a stability prediction accuracy of 78.3%, and a precision of 82.9%; the mean squared error of the expansion-rate regression is 0.46. The SHAP analysis reveals that the CTA spot sign, age, solidity, location, and the length of the first axis of the ICH volume are the most critical characteristics for defining the stability of the stroke lesion. We also show that integrating significant geometric features with clinical features can improve ICH progression scoring by predicting long-term outcomes.
Graphical abstract: Overview of our proposed method, comprising spatial attention and feature extraction mechanisms. The architecture is trained on the input CT images, and the first-stage output is the predicted segmentation of the hemorrhagic region. This output is fed into a geometric feature extractor and fused with clinical features to estimate ICH stability using a multilayer perceptron (MLP).
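The fusion step described above concatenates geometric and clinical features before a classifier such as the MLP. A minimal stdlib sketch of that forward pass; the feature ordering and all weights below are hypothetical placeholders, not values from the trained model:

```python
import math

def mlp_stability(features, w1, b1, w2, b2):
    """One-hidden-layer MLP forward pass: fused features -> P(stable).

    features: fused geometric + clinical feature vector.
    w1: list of hidden-unit weight vectors, b1: hidden biases,
    w2: output weights, b2: output bias. All weights here are
    illustrative placeholders, not trained values from the paper.
    """
    # Hidden layer with ReLU activation
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    # Single sigmoid output as a stability probability
    logit = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical fused vector: [solidity, first-axis length (cm),
#                             CTA spot sign (0/1), GCS, age (decades)]
x = [0.85, 3.2, 1.0, 9.0, 6.4]
w1 = [[0.5, -0.2, 1.0, -0.1, 0.05], [-0.3, 0.4, 0.2, 0.05, -0.1]]
b1 = [0.1, -0.2]
w2 = [0.8, -0.6]
b2 = 0.05
p = mlp_stability(x, w1, b1, w2, b2)
```

A trained model would learn w1, b1, w2, b2 from the cohort; the fixed values only illustrate the data flow from fused features to a probability.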


Subject(s)
Cerebral Hemorrhage, Computed Tomography Angiography, Attention, Cerebral Hemorrhage/diagnostic imaging, Glasgow Coma Scale, Humans, Risk Factors
6.
Comput Med Imaging Graph ; 91: 101906, 2021 07.
Article in English | MEDLINE | ID: mdl-34175548

ABSTRACT

The accurate prognosis of glioblastoma multiforme (GBM) plays an essential role in planning related surgeries and treatments. Conventional survival prediction models rely on radiomic features from magnetic resonance imaging (MRI). In this paper, we propose a radiogenomic overall survival (OS) prediction approach that incorporates gene expression data alongside radiomic features such as shape and geometry as well as clinical information. We exploit the TCGA (The Cancer Genome Atlas) dataset and synthesize missing MRI modalities using a fully convolutional network (FCN) within a conditional generative adversarial network (cGAN). The same FCN architecture also performs tumor segmentation from the available and synthesized MRI modalities. The proposed FCN architecture comprises octave convolution (OctConv) and a novel decoder with skip connections in a spatial and channel squeeze-and-excitation (skip-scSE) block. OctConv processes low- and high-frequency features separately and improves model efficiency by reducing channel-wise redundancy. Skip-scSE applies spatial and channel-wise excitation to emphasize essential features and uses skip connections to reduce sparsity in the learned parameters of deeper layers. The proposed approaches are evaluated in comparative experiments against state-of-the-art models in synthesis, segmentation, and OS prediction. We observe that adding the missing MRI modality improves segmentation, that the expression levels of gene markers contribute strongly to GBM prognosis prediction, and that fused radiogenomic features boost OS estimation.
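The channel half of the scSE block (cSE) follows the standard squeeze-and-excitation recipe: global average pooling per channel, a small fully connected bottleneck, and sigmoid gates that rescale each channel. A stdlib sketch with illustrative, untrained weights (the spatial half and the skip connection of skip-scSE are omitted for brevity):

```python
import math

def channel_se(feature_maps, w_reduce, w_expand):
    """Channel squeeze-and-excitation (the cSE half of scSE).

    feature_maps: list of C channels, each a flat list of spatial values.
    w_reduce: C' x C bottleneck weights, w_expand: C x C' expansion
    weights (illustrative, untrained). Returns channels rescaled by
    their per-channel gates.
    """
    # Squeeze: global average pool per channel
    pooled = [sum(ch) / len(ch) for ch in feature_maps]
    # Excite: bottleneck FC -> ReLU -> FC -> sigmoid gates
    reduced = [max(0.0, sum(w * p for w, p in zip(row, pooled)))
               for row in w_reduce]
    gates = [1.0 / (1.0 + math.exp(-sum(w * r for w, r in zip(row, reduced))))
             for row in w_expand]
    # Scale each channel by its gate
    return [[g * v for v in ch] for g, ch in zip(gates, feature_maps)]

# Two toy channels, bottleneck width 1 (all weights are placeholders)
fmaps = [[1.0, 2.0, 3.0, 2.0], [0.5, 0.5, 0.5, 0.5]]
w_reduce = [[0.6, 0.4]]
w_expand = [[1.2], [-0.8]]
out = channel_se(fmaps, w_reduce, w_expand)
```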


Subject(s)
Glioblastoma, Glioblastoma/diagnostic imaging, Glioblastoma/genetics, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Prognosis
7.
Med Biol Eng Comput ; 58(8): 1767-1777, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32488372

ABSTRACT

Glioblastoma multiforme (GBM) is a very aggressive and infiltrative brain tumor with a high mortality rate. Radiomic models with handcrafted features exist to estimate glioblastoma prognosis. In this work, we evaluate to what extent combining genomic with radiomic features impacts the prognosis of overall survival (OS) in patients with GBM. We apply a hypercolumn-based convolutional network to segment tumor regions from magnetic resonance images (MRI), extract radiomic features (geometric, shape, histogram), and fuse them with gene expression profiling data to predict the survival rate for each patient. Several state-of-the-art regression models, such as linear regression, support vector machine, and neural network, are used for the prognosis analysis. The Cancer Genome Atlas (TCGA) dataset of MRI and gene expression profiling is used to assess model performance on radiomic, genomic, and radiogenomic features. The results demonstrate that genomic data are correlated with GBM OS prediction and that the radiogenomic model outperforms both radiomic and genomic models. We further highlight the most significant genes, such as IL1B, KLHL4, ATP1A2, IQGAP2, and TMSL8, which contribute strongly to the prognosis analysis. Graphical abstract: Overview of our proposed fully automated "radiogenomic" approach for survival prediction. It fuses geometric, intensity, volumetric, genomic, and clinical information to predict OS.
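Among the regression baselines above, the simplest is linear regression on fused radiogenomic features. A stdlib sketch of that pipeline: concatenate the radiomic and genomic vectors per patient, then fit least squares by gradient descent. The cohort below is synthetic and illustrative only; the actual study uses TCGA MRI and expression data:

```python
def fuse(radiomic, genomic):
    """Radiogenomic fusion by simple concatenation (one option among many)."""
    return radiomic + genomic

def fit_linear(X, y, lr=0.01, epochs=50000):
    """Least-squares linear regression fit via batch gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(d):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

# Synthetic toy cohort: radiomic = [tumor volume], genomic = [marker
# expression]; survival (months) decreases with volume (illustration only).
X = [fuse([v], [g]) for v, g in [(1.0, 0.2), (2.0, 0.5), (3.0, 0.1), (4.0, 0.8)]]
y = [14.0, 11.0, 8.0, 6.0]
w, b = fit_linear(X, y)
pred = sum(wj * xj for wj, xj in zip(w, X[0])) + b  # OS estimate for patient 0
```

Gradient descent stands in here for the closed-form OLS solution to stay dependency-free; with real radiogenomic feature matrices one would use a library solver instead.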


Subject(s)
Brain Neoplasms/mortality, Glioblastoma/mortality, Brain Neoplasms/genetics, Brain Neoplasms/pathology, Gene Expression Profiling/methods, Glioblastoma/genetics, Glioblastoma/pathology, Humans, Magnetic Resonance Imaging/methods, Prognosis, Survival Rate, Transcriptome/genetics