Results 1 - 16 of 16
1.
bioRxiv; 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38328148

ABSTRACT

White matter signals in resting state blood oxygen level dependent functional magnetic resonance imaging (BOLD-fMRI) have been largely discounted, yet there is growing evidence that these signals are indicative of brain activity. Understanding how these white matter signals capture function can provide insight into brain physiology. Moreover, functional signals could potentially be used as early markers for neurological changes, such as in Alzheimer's disease. To investigate white matter brain networks, we leveraged the OASIS-3 dataset to extract white matter signals from resting state BOLD-fMRI data from 711 subjects. The imaging was longitudinal, with a total of 2,026 images. Hierarchical clustering was performed to investigate clusters of voxel-level correlations on the time-series data. The stability of clusters was measured with the average Dice coefficient on two different cross-fold validations: the first validated stability between scans, and the second validated stability between subject populations. Functional clusters at hierarchical levels 4, 9, 13, 18, and 24 had local maxima in stability, suggesting better-clustered white matter. In comparison with regions defined by the JHU-DTI-SS Type-I atlas, clusters at lower hierarchical levels identified well-defined anatomical lobes. At higher hierarchical levels, functional clusters mapped motor and memory functional regions, identifying 50.00%, 20.00%, 27.27%, and 35.14% of the frontal, occipital, parietal, and temporal lobe regions, respectively.
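
As an illustration of the stability measure described above, here is a minimal Python sketch that scores the agreement between two fold-wise clusterings with the average Dice coefficient; the boolean voxel masks and greedy best-match pairing are assumptions for the example, not the study's exact matching procedure.

```python
# Minimal sketch: cluster-stability scoring with the average Dice coefficient,
# assuming each fold's clustering is a list of boolean voxel masks (hypothetical inputs).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def average_dice(clusters_fold1, clusters_fold2) -> float:
    """Match each fold-1 cluster to its best-overlapping fold-2 cluster and average."""
    scores = [max(dice(c1, c2) for c2 in clusters_fold2) for c1 in clusters_fold1]
    return float(np.mean(scores))

# Toy example: two folds, two clusters each, on a 4x4x4 voxel grid.
rng = np.random.default_rng(0)
fold1 = [rng.random((4, 4, 4)) > 0.5 for _ in range(2)]
fold2 = [m.copy() for m in fold1]  # identical clustering -> stability of 1.0
print(average_dice(fold1, fold2))
```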

2.
Med Phys; 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530135

ABSTRACT

BACKGROUND: The kernel used in CT image reconstruction is an important factor that determines the texture of the CT image. Consistency of reconstruction kernel choice is important for quantitative CT-based assessment, as kernel differences can lead to substantial shifts in measurements unrelated to underlying anatomical structures. PURPOSE: In this study, we investigate kernel harmonization in a multi-vendor low-dose CT lung cancer screening cohort and evaluate our approach's validity in quantitative CT-based assessments. METHODS: Using the National Lung Screening Trial, we identified CT scan pairs from the same sessions, with one scan reconstructed from a soft tissue kernel and one from a hard kernel. In total, 1,000 pairs of five different paired kernel types (200 each) were identified. We adopted the pix2pix architecture to train models for kernel conversion. Each model was trained on 100 pairs and evaluated on 100 withheld pairs; a total of 10 models were implemented. We evaluated the efficacy of kernel conversion based on image similarity metrics, including root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), as well as the capability of the models to reduce measurement shifts in quantitative emphysema and body composition measurements. Additionally, we studied the reproducibility of standard radiomic features for all kernel pairs before and after harmonization. RESULTS: Our approach effectively converts CT images from one kernel to another in all paired kernel types, as indicated by a reduction in RMSE (p < 0.05) and increases in PSNR (p < 0.05) and SSIM (p < 0.05) for both directions of conversion for all pair types. In addition, there is an increase in agreement for percent emphysema, skeletal muscle area, and subcutaneous adipose tissue (SAT) area for both directions of conversion. Furthermore, radiomic features were reproducible when compared with the ground truth features. CONCLUSIONS: Kernel conversion using deep learning reduces measurement variation in percent emphysema, muscle area, and SAT area.
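
The image-similarity metrics reported above can be computed with scikit-image; the sketch below assumes `converted` and `target` are 2D slices from the harmonized and reference-kernel reconstructions, and the synthetic arrays stand in for real CT data.

```python
# Sketch of the image-similarity metrics used to evaluate kernel conversion
# (RMSE, PSNR, SSIM), assuming `converted` and `target` are 2D CT slices.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def similarity_metrics(converted: np.ndarray, target: np.ndarray) -> dict:
    data_range = float(target.max() - target.min())  # dynamic range for PSNR/SSIM
    rmse = float(np.sqrt(np.mean((converted - target) ** 2)))
    psnr = peak_signal_noise_ratio(target, converted, data_range=data_range)
    ssim = structural_similarity(target, converted, data_range=data_range)
    return {"rmse": rmse, "psnr": psnr, "ssim": ssim}

# Toy example with synthetic slices.
rng = np.random.default_rng(0)
target = rng.normal(0, 100, (128, 128))
converted = target + rng.normal(0, 5, (128, 128))  # mild residual kernel noise
print(similarity_metrics(converted, target))
```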

3.
Med Image Anal; 94: 103124, 2024 May.
Article in English | MEDLINE | ID: mdl-38428271

ABSTRACT

Analyzing high resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high resolution images by classifying bags of objects (i.e. sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20× magnification) of WSIs, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) A novel cross-scale MIL (CS-MIL) algorithm that integrates the multi-scale information and the inter-scale relationships is proposed; (2) A toy dataset with scale-specific morphological features is created and released to examine and visualize differential cross-scale attention; (3) Superior performance on both in-house and public datasets is demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.
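
For intuition, the following is a minimal PyTorch sketch of cross-scale attention pooling for MIL: one attention branch scores patches within each scale and a second attention weights the per-scale bag embeddings. Dimensions and module names are illustrative assumptions; this is not the released CS-MIL code.

```python
# Minimal cross-scale attention MIL sketch, assuming pre-extracted patch features
# at several magnifications per bag (whole slide image).
import torch
import torch.nn as nn

class CrossScaleAttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_scales=3, n_classes=2):
        super().__init__()
        # One attention branch scores patches within each scale.
        self.patch_attn = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
             for _ in range(n_scales)]
        )
        # A second attention combines the per-scale bag embeddings.
        self.scale_attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bags):
        # bags: list of length n_scales, each a tensor [n_patches_s, feat_dim]
        scale_embeds = []
        for feats, attn in zip(bags, self.patch_attn):
            w = torch.softmax(attn(feats), dim=0)          # [n_patches_s, 1]
            scale_embeds.append((w * feats).sum(dim=0))    # [feat_dim]
        scale_embeds = torch.stack(scale_embeds)           # [n_scales, feat_dim]
        s = torch.softmax(self.scale_attn(scale_embeds), dim=0)
        bag_embed = (s * scale_embeds).sum(dim=0)          # [feat_dim]
        return self.classifier(bag_embed), s.squeeze(-1)   # logits and scale attention

# Toy forward pass: three scales with different patch counts.
model = CrossScaleAttentionMIL()
bags = [torch.randn(n, 512) for n in (40, 30, 20)]
logits, scale_weights = model(bags)
print(logits.shape, scale_weights)
```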


Subjects
Algorithms; Diagnostic Imaging; Humans
4.
medRxiv; 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-37662348

ABSTRACT

Background: As large analyses merge data across sites, a deeper understanding of variance in statistical assessment across the sources of data becomes critical for valid analyses. Diffusion tensor imaging (DTI) exhibits spatially varying and correlated noise, so care must be taken with distributional assumptions. Purpose: We characterize the role of physiology, subject compliance, and the interaction of the subject with the scanner in understanding DTI variability, as modeled in the spatial variance of derived metrics in homogeneous regions. Methods: We analyze DTI data from 1035 subjects in the Baltimore Longitudinal Study of Aging (BLSA), with ages ranging from 22.4 to 103 years. For each subject, up to 12 longitudinal sessions were conducted. We assess the variance of DTI scalars within regions of interest (ROIs) defined by four segmentation methods and investigate the relationships between the variance and covariates, including baseline age, time from baseline (referred to as "interval"), motion, sex, and whether the scan is the first or second scan in the session. Results: Covariate effects are heterogeneous and bilaterally symmetric across ROIs. Inter-session interval is positively related (p ≪ 0.001) to FA variance in the cuneus and occipital gyrus, but negatively related (p ≪ 0.001) in the caudate nucleus. Males show significantly (p ≪ 0.001) higher FA variance in the right putamen, thalamus, body of the corpus callosum, and cingulate gyrus. In 62 out of 176 ROIs defined by the Eve type-1 atlas, an increase in motion is associated (p < 0.05) with a decrease in FA variance. Head motion increases during the rescan of DTI (Δμ = 0.045 millimeters per volume). Conclusions: The effects of each covariate on DTI variance and their relationships across ROIs are complex. Ultimately, we encourage researchers to include estimates of variance when sharing data and to consider models of heteroscedasticity in analysis. This work provides a foundation for study planning to account for regional variations in metric variance.
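
A hedged sketch of the kind of covariate model described above: ordinary least squares regression of within-ROI FA variance on baseline age, interval, motion, sex, and a rescan flag using statsmodels. The data frame and column names are hypothetical, and the synthetic values stand in for real BLSA measurements.

```python
# Illustrative regression of within-ROI FA variance on session-level covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "fa_variance": rng.gamma(2.0, 0.002, n),      # within-ROI FA variance per session
    "baseline_age": rng.uniform(22.4, 103.0, n),
    "interval": rng.uniform(0, 10, n),            # years since baseline
    "motion": rng.gamma(2.0, 0.02, n),            # mm per volume
    "sex": rng.choice(["M", "F"], n),
    "rescan": rng.choice([0, 1], n),              # second scan in the session
})

model = smf.ols("fa_variance ~ baseline_age + interval + motion + C(sex) + rescan", data=df).fit()
print(model.summary().tables[1])   # covariate effects on variance for this ROI
```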

5.
ArXiv; 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38344221

ABSTRACT

Connectivity matrices derived from diffusion MRI (dMRI) provide an interpretable and generalizable way of understanding the human brain connectome. However, dMRI suffers from inter-site and between-scanner variation, which impedes analysis across datasets and limits the robustness and reproducibility of results. To evaluate different harmonization approaches on connectivity matrices, we compared graph measures derived from these matrices before and after applying three harmonization techniques: mean shift, ComBat, and CycleGAN. The sample comprises 168 age- and sex-matched normal subjects from two studies: the Vanderbilt Memory and Aging Project (VMAP) and the Biomarkers of Cognitive Decline Among Normal Individuals (BIOCARD). First, we plotted the graph measures and used the coefficient of variation (CoV) and the Mann-Whitney U test to evaluate each method's effectiveness in removing site effects on the matrices and the derived graph measures. ComBat effectively eliminated site effects for global efficiency and modularity and outperformed the other two methods. However, all methods exhibited poor performance when harmonizing average betweenness centrality. Second, we tested whether our harmonization methods preserved correlations between age and graph measures. All methods except CycleGAN in one direction improved the correlations between age and global efficiency and between age and modularity from non-significant to significant, with p-values less than 0.05.
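
The site-effect evaluation can be illustrated with a small sketch: coefficient of variation per site and a Mann-Whitney U test on a graph measure before and after a simple mean-shift harmonization. Values and variable names are synthetic stand-ins, not VMAP/BIOCARD data.

```python
# Sketch of site-effect checks: CoV per site and Mann-Whitney U before/after mean shift.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
site_a = rng.normal(0.42, 0.03, 80)      # global efficiency, site A
site_b = rng.normal(0.47, 0.03, 88)      # global efficiency, site B (shifted)

def cov(x):                               # coefficient of variation
    return np.std(x) / np.mean(x)

print("before:", mannwhitneyu(site_a, site_b).pvalue, cov(site_a), cov(site_b))

# Mean-shift harmonization: align each site to the pooled mean.
pooled_mean = np.mean(np.concatenate([site_a, site_b]))
site_a_h = site_a - site_a.mean() + pooled_mean
site_b_h = site_b - site_b.mean() + pooled_mean
print("after:", mannwhitneyu(site_a_h, site_b_h).pvalue)
```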

6.
IEEE J Biomed Health Inform; 27(9): 4444-4453, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37310834

ABSTRACT

Medical image segmentation, or computing voxel-wise semantic masks, is a fundamental yet challenging task in the medical imaging domain. To increase the ability of encoder-decoder neural networks to perform this task across large clinical cohorts, contrastive learning provides an opportunity to stabilize model initialization and enhance downstream task performance without ground-truth voxel-wise labels. However, multiple target objects with different semantic meanings and contrast levels may exist in a single image, which poses a problem for adapting traditional contrastive learning methods from the prevalent "image-level classification" setting to "pixel-level segmentation". In this article, we propose a simple semantic-aware contrastive learning approach leveraging attention masks and image-wise labels to advance multi-object semantic segmentation. Briefly, we embed different semantic objects into different clusters rather than using traditional image-level embeddings. We evaluate our proposed method on a multi-organ medical image segmentation task with both in-house data and the MICCAI Challenge 2015 BTCV dataset. Compared with current state-of-the-art training strategies, our proposed pipeline yields substantial Dice score improvements of 5.53% and 6.09% on the two medical image segmentation cohorts, respectively (p-value < 0.01). The performance of the proposed method is further assessed on an external medical image cohort via the MICCAI Challenge FLARE 2021 dataset, achieving a substantial improvement from Dice 0.922 to 0.933 (p-value < 0.01).
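
The cluster-wise embedding idea can be illustrated with a label-aware contrastive loss in which embeddings sharing a semantic label attract one another and others are pushed apart; this is a generic supervised-contrastive-style sketch under assumed inputs, not the paper's exact objective.

```python
# Generic label-aware contrastive loss sketch over per-object embeddings.
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(embeddings, labels, tau=0.1):
    """embeddings: [N, D]; labels: [N] integer semantic labels per object embedding."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / tau                                    # pairwise cosine similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))                # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(eye, 0.0)                # diagonal is never a positive
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts  # mean log-prob over positives
    return loss.mean()

# Toy example: 8 embeddings from 3 semantic classes.
emb = torch.randn(8, 64, requires_grad=True)
lab = torch.tensor([0, 0, 1, 1, 1, 2, 2, 2])
print(semantic_contrastive_loss(emb, lab))
```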


Assuntos
Diagnóstico por Imagem , Aprendizado de Máquina , Humanos , Processamento de Imagem Assistida por Computador , Redes Neurais de Computação , Semântica , Diagnóstico por Imagem/métodos , Conjuntos de Dados como Assunto
7.
Article in English | MEDLINE | ID: mdl-37324550

ABSTRACT

The Tangram algorithm is a benchmark method for aligning single-cell (sc/snRNA-seq) data to various forms of spatial data collected from the same region. With this alignment, the annotation of the single-cell data can be projected to the spatial data. However, the cell composition (cell-type ratio) of the single-cell data and the spatial data might differ because of heterogeneous cell distribution. Whether the Tangram algorithm can be adapted when the two datasets have different cell-type ratios has not been discussed in previous works. In our practical application, which maps the cell-type classification results of single-cell data to multiplex immunofluorescence (MxIF) spatial data, cell-type ratios were different even though the data were sampled from adjacent areas. In this work, both simulation and empirical validation were conducted to quantitatively explore the impact of mismatched cell-type ratios on Tangram mapping in different situations. Results show that the cell-type ratio difference has a negative influence on classification accuracy.

8.
Med Image Comput Comput Assist Interv; 14225: 497-507, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38529367

ABSTRACT

Multi-class cell segmentation in high-resolution giga-pixel whole slide images (WSIs) is critical for various clinical applications. Training such an AI model typically requires labor-intensive, pixel-wise manual annotation from experienced domain experts (e.g., pathologists). Moreover, such annotation is error-prone when differentiating fine-grained cell types (e.g., podocytes and mesangial cells) with the naked eye. In this study, we assess the feasibility of democratizing pathological AI deployment by using only lay annotators (annotators without medical domain knowledge). The contribution of this paper is threefold: (1) we propose a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) the proposed method integrates giga-pixel-level molecular-morphology cross-modality registration, molecular-informed annotation, and a molecular-oriented segmentation model, achieving significantly superior performance with three lay annotators compared with two experienced pathologists; (3) a deep corrective learning (learning with imperfect labels) method is proposed to further improve segmentation performance using partially annotated noisy data. In our experiments, the learning method achieved F1 = 0.8496 using molecular-informed annotations from lay annotators, which is better than conventional morphology-based annotations (F1 = 0.7015) from experienced pathologists. Our method democratizes the development of a pathological segmentation deep model to the lay annotator level, which consequently scales up the learning process similarly to a non-medical computer vision task. The official implementation and cell annotations are publicly available at https://github.com/hrlblab/MolecularEL.

9.
Med Image Learn Ltd Noisy Data (2023); 14307: 82-92, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38523773

ABSTRACT

Many anomaly detection approaches, especially deep learning methods, have been developed recently to identify abnormal image morphology by employing only normal images during training. Unfortunately, many prior anomaly detection methods were optimized for a specific "known" abnormality (e.g., brain tumor, bone fracture, cell type). Moreover, even though only normal images were used in the training process, abnormal images were often employed during the validation process (e.g., epoch selection, hyper-parameter tuning), which might unintentionally leak the supposedly "unknown" abnormality. In this study, we investigated these issues in universal anomaly detection for medical images by (1) comparing various anomaly detection methods across four medical datasets, (2) investigating the inevitable but often neglected issue of how to unbiasedly select the optimal anomaly detection model during the validation phase using only normal images, and (3) proposing a simple decision-level ensemble method to leverage the advantages of different kinds of anomaly detection without knowing the abnormality. The results of our experiments indicate that none of the evaluated methods consistently achieved the best performance across all datasets. Our proposed method enhanced the robustness of performance in general (average AUC 0.956).
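
A minimal sketch of a decision-level ensemble in the spirit described above: each detector's anomaly scores are rank-normalized and averaged before computing AUC. The detector outputs below are synthetic stand-ins, not the paper's models.

```python
# Decision-level ensemble sketch: rank-normalize and average anomaly scores, then score AUC.
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

def ensemble_scores(score_lists):
    """Average rank-normalized anomaly scores from multiple detectors."""
    normalized = [rankdata(s) / len(s) for s in score_lists]
    return np.mean(normalized, axis=0)

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(90), np.ones(10)])       # 10% abnormal test images
detector_a = rng.normal(0, 1, 100) + 1.5 * y          # stronger detector
detector_b = rng.normal(0, 1, 100) + 0.5 * y          # weaker detector
combined = ensemble_scores([detector_a, detector_b])
print("AUC A:", roc_auc_score(y, detector_a),
      "AUC B:", roc_auc_score(y, detector_b),
      "AUC ensemble:", roc_auc_score(y, combined))
```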

10.
Article in English | MEDLINE | ID: mdl-37786583

ABSTRACT

Multiplex immunofluorescence (MxIF) is an emerging imaging technology whose downstream molecular analytics rely heavily on the effectiveness of cell segmentation. In practice, multiple membrane markers (e.g., NaKATPase, PanCK, and β-catenin) are employed to stain membranes of different cell types so as to achieve a more comprehensive cell segmentation, since no single marker fits all cell types. However, prevalent watershed-based image processing may have limited capability for modeling complicated relationships between markers. For example, some markers can be misleading due to questionable stain quality. In this paper, we propose a deep learning based membrane segmentation method to aggregate complementary information that is uniquely provided by large-scale MxIF markers. We aim to segment tubular membrane structure in MxIF data using global (membrane-marker z-stack projection image) and local (separate individual markers) information to maximize topology preservation with deep learning. Specifically, we investigate the feasibility of four SOTA 2D deep networks and four volumetric-based loss functions. We conducted a comprehensive ablation study to assess the sensitivity of the proposed method to various combinations of input channels. Beyond using the adjusted Rand index (ARI) as an evaluation metric, we propose a novel volumetric metric specific to skeletal structure, inspired by clDice and denoted clDiceSKEL. In total, 80 membrane MxIF images were manually traced for 5-fold cross-validation. Our model outperforms the baseline with a 20.2% increase in clDiceSKEL and a 41.3% increase in ARI, which is significant (p < 0.05) by the Wilcoxon signed-rank test. Our work explores a promising direction for advancing MxIF cell segmentation with deep learning membrane segmentation. Tools are available at https://github.com/MASILab/MxIF_Membrane_Segmentation.
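
For reference, the clDice topology measure that inspired clDiceSKEL can be sketched as below, comparing skeletons of the predicted and ground-truth membrane masks; clDiceSKEL itself is the paper's variant and is not reproduced here.

```python
# clDice-style topology metric sketch: topology precision/sensitivity via skeletons.
import numpy as np
from skimage.morphology import skeletonize

def cl_dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """pred, gt: 2D boolean membrane masks."""
    skel_p, skel_g = skeletonize(pred), skeletonize(gt)
    tprec = (skel_p & gt).sum() / max(skel_p.sum(), 1)    # topology precision
    tsens = (skel_g & pred).sum() / max(skel_g.sum(), 1)  # topology sensitivity
    return 2 * tprec * tsens / max(tprec + tsens, 1e-8)

# Toy example: a thin ring structure and a slightly eroded prediction.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
gt = (r > 18) & (r < 24)
pred = (r > 19) & (r < 23)
print(cl_dice(pred, gt))
```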

11.
Article in English | MEDLINE | ID: mdl-37465840

ABSTRACT

Crohn's disease (CD) is a debilitating inflammatory bowel disease with no known cure. Computational analysis of hematoxylin and eosin (H&E) stained colon biopsy whole slide images (WSIs) from CD patients provides the opportunity to discover unknown and complex relationships between tissue cellular features and disease severity. While prior work has used cell nuclei-derived features for predicting slide-level traits, this has not been performed on CD H&E WSIs to classify normal tissue from CD patients versus active CD, nor to assess slide-label-predictive performance using separate and combined information from pseudo-segmentation labels of six nucleus types: neutrophils, eosinophils, epithelial cells, lymphocytes, plasma cells, and connective cells. We used 413 WSIs of CD patient biopsies and calculated normalized histograms of nucleus density for the six cell classes for each WSI. We used a support vector machine to classify truncated singular value decomposition representations of the normalized histograms as normal or active CD, with four-fold cross-validation performed in rounds in which nucleus types were first compared individually, the best type was selected, and further types were added each round. We found that neutrophils were the most predictive individual nucleus type, with an AUC of 0.92 ± 0.0003 on the withheld test set. Adding further nucleus types improved cross-validation performance for the first two rounds and withheld test set performance for the first three rounds, though performance metrics did not increase substantially beyond those achieved with neutrophils alone.
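
A hedged sketch of the slide-level pipeline: truncated SVD of per-WSI nucleus-density histograms followed by an SVM with four-fold cross-validation. The feature matrix and labels below are synthetic placeholders, not the study's data.

```python
# Sketch: truncated SVD of nucleus-density histograms + SVM with 4-fold CV.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.random((413, 60))                  # e.g., concatenated density histograms per WSI
y = rng.integers(0, 2, 413)                # 0 = normal tissue from CD patient, 1 = active CD

clf = make_pipeline(TruncatedSVD(n_components=10, random_state=0),
                    StandardScaler(),
                    SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(aucs.mean(), aucs.std())
```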

12.
Article in English | MEDLINE | ID: mdl-36303574

ABSTRACT

Deep learning promises the extraction of valuable information from traumatic brain injury (TBI) datasets, but depends on efficient navigation when using large-scale mixed computed tomography (CT) datasets from clinical systems. To ensure a cleaner signal while training deep learning models, it is sensible to remove computed tomography angiography (CTA) scans and scans with streaking artifacts. On massive datasets of heterogeneously sized scans, time-consuming manual quality assurance (QA) by visual inspection is still often necessary, even when CTA annotation is expected (artifact annotation is not). We propose an automatic QA approach for retrieving CT scans without artifacts by representing 3D scans as 2D axial slice montages and using a multi-headed convolutional neural network to detect CT vs CTA and artifact vs no artifact. We sampled 848 scans from a mixed CT dataset of TBI patients and performed 4-fold stratified cross-validation on 698 montages, followed by an ablation experiment; 150 stratified montages were withheld for external validation. Aggregate AUC for our main model was 0.978 for CT detection and 0.675 for artifact detection during cross-validation, and 0.965 for CT detection and 0.698 for artifact detection on the external validation set, while the ablated model showed 0.946 for CT detection and 0.735 for artifact detection during cross-validation, and 0.937 for CT detection and 0.708 for artifact detection on the external validation set. While our approach is successful for CT detection, artifact detection performance is likely depressed by the heterogeneity of the streaking artifacts present and the suboptimal number of artifact scans in our training data.
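
The montage representation can be sketched as below: evenly spaced axial slices of a 3D volume are tiled into a single 2D image for the QA network. Grid size, orientation, and the stand-in volume are assumptions for the example.

```python
# Sketch: tile evenly spaced axial slices of a 3D CT volume into one 2D montage.
import numpy as np

def axial_montage(volume: np.ndarray, rows: int = 4, cols: int = 4) -> np.ndarray:
    """Tile rows*cols evenly spaced axial slices (last axis) into one 2D image."""
    n_slices = volume.shape[2]
    idx = np.linspace(0, n_slices - 1, rows * cols).astype(int)
    slices = [volume[:, :, i] for i in idx]
    grid = [np.hstack(slices[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid)

vol = np.random.default_rng(0).normal(0, 1, (256, 256, 120))  # stand-in CT volume
montage = axial_montage(vol)
print(montage.shape)   # (1024, 1024) for a 4x4 grid of 256x256 slices
```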

13.
Article in English | MEDLINE | ID: mdl-36303575

ABSTRACT

7T MRI provides unprecedented resolution for examining human brain anatomy in vivo; for example, it enables measurement of the thickness of laminar subdivisions in the right fusiform area. Existing laminar thickness measurement at 7T is labor-intensive and error-prone, since visual inspection of the image is typically performed along one of the three orthogonal planes (axial, coronal, or sagittal view). To overcome this, we propose a new analytics tool that allows flexible quantification of cortical thickness on a 2D plane orthogonal to the cortical surface (beyond axial, coronal, and sagittal views), based on a 3D computational surface reconstruction. The proposed method further distinguishes high-quality 2D planes from low-quality ones by automatically inspecting the angles between the surface normals and the slice direction. In our approach, we acquired a pair of 3T and 7T scans from the same subject. We extracted the brain surfaces from the 3T scan using MaCRUISE and projected them into the 7T scan's space. After computing the angles between the surface normals and the axial direction vector, we found that 18.58% of surface points were angled at more than 80° to the axial direction vector and had 2D axial planes with visually distinguishable cortical layers, whereas 15.12% of surface points, with normal vectors angled at 30° or less to the axial direction, had poor 2D axial slices for visual inspection of the cortical layers. This effort promises to dramatically extend the area of cortex that can be quantified with ultra-high-resolution in-plane imaging methods.
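
A small sketch of the angle test described above, computing the angle between each surface normal and the axial (z) direction and the fractions above 80° and at or below 30°. The normals are synthetic, and folding angles to the [0°, 90°] range is an assumption about the convention used.

```python
# Sketch: angles between surface normals and the axial direction, with the two thresholds.
import numpy as np

def angles_to_axial(normals: np.ndarray) -> np.ndarray:
    """normals: [N, 3] unit surface normals; returns angles in degrees to the z-axis."""
    z = np.array([0.0, 0.0, 1.0])
    cosines = np.clip(normals @ z, -1.0, 1.0)
    return np.degrees(np.arccos(np.abs(cosines)))   # folded to [0, 90] degrees

rng = np.random.default_rng(0)
n = rng.normal(size=(10000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)       # random unit normals
theta = angles_to_axial(n)
print("fraction > 80 deg:", np.mean(theta > 80.0))
print("fraction <= 30 deg:", np.mean(theta <= 30.0))
```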

14.
Article in English | MEDLINE | ID: mdl-36331283

ABSTRACT

Multi-instance learning (MIL) is widely used in the computer-aided interpretation of pathological whole slide images (WSIs) to overcome the lack of pixel-wise or patch-wise annotations. Often, this approach directly applies "natural image driven" MIL algorithms that overlook the multi-scale (i.e., pyramidal) nature of WSIs. Off-the-shelf MIL algorithms are typically deployed on a single scale of WSIs (e.g., 20× magnification), while human pathologists usually aggregate global and local patterns in a multi-scale manner (e.g., by zooming in and out between different magnifications). In this study, we propose a novel cross-scale attention mechanism to explicitly aggregate inter-scale interactions into a single MIL network for Crohn's Disease (CD), a form of inflammatory bowel disease. The contribution of this paper is two-fold: (1) a cross-scale attention mechanism is proposed to aggregate features from different resolutions with multi-scale interaction; and (2) differential multi-scale attention visualizations are generated to localize explainable lesion patterns. By training on ~250,000 H&E-stained ascending colon (AC) patches from 20 CD patient samples and 30 healthy control samples at different scales, our approach achieved a superior area under the curve (AUC) score of 0.8924 compared with baseline models. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.

15.
Magn Reson Imaging; 85: 44-56, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34666161

ABSTRACT

Reproducible identification of white matter pathways across subjects is essential for the study of structural connectivity of the human brain. One of the key challenges is anatomical differences between subjects, together with human rater subjectivity in labeling. Labeling white matter regions of interest presents many challenges due to the need to integrate both local and global information. Clearly communicating the manual processes used to capture this information is cumbersome, yet essential to lay a solid foundation for comprehensive atlases. Segmentation protocols must be designed so that the interpretation of the requested tasks, as well as the location of structural landmarks, is anatomically accurate, intuitive, and reproducible. In this work, we quantified the reproducibility of a first iteration of an open, public multi-bundle segmentation protocol. This allowed us to establish a baseline for its reproducibility and to identify limitations for future iterations. The protocol was tested and evaluated on both a typical 3T research acquisition, the Baltimore Longitudinal Study of Aging (BLSA), and the high-quality-acquisition Human Connectome Project (HCP) dataset. The results show that a rudimentary protocol can produce acceptable intra-rater and inter-rater reproducibility. However, this work highlights the difficulty of generalizing reproducible results and the importance of reaching consensus on the anatomical description of white matter pathways. The protocol has been made openly available to improve generalizability and reliability through collaboration. The goal is to improve upon the first iteration and to initiate a discussion on the anatomical validity (or lack thereof) of some bundle definitions and the importance of reproducibility in tractography segmentation.


Assuntos
Conectoma , Substância Branca , Encéfalo/diagnóstico por imagem , Imagem de Tensor de Difusão/métodos , Humanos , Processamento de Imagem Assistida por Computador/métodos , Estudos Longitudinais , Reprodutibilidade dos Testes , Substância Branca/diagnóstico por imagem
16.
Med Phys; 48(10): 6060-6068, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34287944

ABSTRACT

PURPOSE: Artificial intelligence diagnosis and triage of large vessel occlusion may quicken the clinical response for a subset of time-sensitive acute ischemic stroke patients, improving outcomes. Differences in architectural elements within data-driven convolutional neural network (CNN) models impact performance. Foreknowledge of effective model architectural elements for domain-specific problems can narrow the search for candidate models and inform strategic model design and adaptation to optimize performance on available data. Here, we study CNN architectures with a range of learnable parameters that span the inclusion of architectural elements such as parallel processing branches and residual connections with varying methods of recombining residual information. METHODS: We compare five CNNs: ResNet-50, DenseNet-121, EfficientNet-B0, PhiNet, and an Inception module-based network, on a computed tomography angiography large vessel occlusion detection task. The models were trained and preliminarily evaluated with 10-fold cross-validation on preprocessed scans (n = 240). An ablation study was performed on PhiNet due to its superior cross-validated test performance across accuracy, precision, recall, specificity, and F1 score. The final evaluation of all models was performed on a withheld external validation set (n = 60), and these predictions were subsequently calibrated with sigmoid curves. RESULTS: Uncalibrated results on the withheld external validation set show that DenseNet-121 had the best average performance on accuracy, precision, recall, specificity, and F1 score. After calibration, DenseNet-121 maintained superior performance on all metrics except recall. CONCLUSIONS: The number of learnable parameters in our five models and the best-ablated PhiNet related directly to cross-validated test performance: the smaller the model, the better. However, this pattern did not hold for generalization on the withheld external validation set. DenseNet-121 generalized best; we posit this was due to its heavy use of residual connections via concatenation, which allows feature maps from earlier layers to be used deeper in the network while aiding gradient flow and regularization.
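
The sigmoid calibration step can be sketched as a Platt-style logistic fit that maps raw model scores on the withheld set to calibrated probabilities; the scores and labels below are synthetic stand-ins, not the study's predictions.

```python
# Platt-style sigmoid calibration sketch: logistic fit on raw scores vs. labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, 60)                        # withheld-set labels (stand-in)
raw_scores = y_val * 1.2 + rng.normal(0, 1.0, 60)     # uncalibrated model outputs (stand-in)

platt = LogisticRegression()
platt.fit(raw_scores.reshape(-1, 1), y_val)           # fits the sigmoid over a*s + b
calibrated = platt.predict_proba(raw_scores.reshape(-1, 1))[:, 1]
print(calibrated[:5])
```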


Assuntos
Isquemia Encefálica , Acidente Vascular Cerebral , Inteligência Artificial , Angiografia por Tomografia Computadorizada , Humanos , Redes Neurais de Computação , Acidente Vascular Cerebral/diagnóstico por imagem