Results 1 - 20 of 62
1.
J Med Imaging (Bellingham) ; 11(2): 024008, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571764

ABSTRACT

Purpose: Two-dimensional single-slice abdominal computed tomography (CT) provides a detailed high-resolution tissue map, allowing quantitative characterization of relationships between health conditions and aging. However, longitudinal analysis of body composition changes using these scans is difficult due to positional variation between slices acquired in different years, which leads to different organs/tissues being captured. Approach: To address this issue, we propose C-SliceGen, which takes an arbitrary axial slice in the abdominal region as a condition and generates a pre-defined vertebral level slice by estimating structural changes in the latent space. Results: Our experiments on 2608 volumetric CT scans from two in-house datasets and 50 subjects from the 2015 Multi-Atlas Abdomen Labeling Challenge Beyond the Cranial Vault (BTCV) dataset demonstrate that our model can generate high-quality images that are realistic and similar to the target slices. We further evaluate our method's capability to harmonize longitudinal positional variation on 1033 subjects from the Baltimore Longitudinal Study of Aging dataset, which contains longitudinal single abdominal slices, and confirm that our method can harmonize the slice positional variance in terms of visceral fat area. Conclusion: This approach provides a promising direction for mapping slices from different vertebral levels to a target slice and reducing positional variance in single-slice longitudinal analysis. The source code is available at: https://github.com/MASILab/C-SliceGen.

2.
Med Image Anal ; 94: 103124, 2024 May.
Article in English | MEDLINE | ID: mdl-38428271

ABSTRACT

Analyzing high resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high resolution images by classifying bags of objects (i.e. sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20× magnification) of WSIs, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) A novel cross-scale MIL (CS-MIL) algorithm that integrates the multi-scale information and the inter-scale relationships is proposed; (2) A toy dataset with scale-specific morphological features is created and released to examine and visualize differential cross-scale attention; (3) Superior performance on both in-house and public datasets is demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.
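For readers unfamiliar with attention-based MIL, a minimal PyTorch sketch of attention pooling within each scale followed by cross-scale attention weighting is shown below; it assumes per-patch feature vectors have already been extracted at each magnification, and all names and dimensions are illustrative rather than the official CS-MIL implementation.

```python
import torch
import torch.nn as nn

class CrossScaleAttentionMIL(nn.Module):
    """Toy cross-scale MIL head: attention-pool patches within each scale,
    then attention-weight the per-scale embeddings before classification."""
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.patch_attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                        nn.Linear(hidden, 1))
        self.scale_attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                        nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bags):
        # bags: list with one tensor per scale, each (n_patches_at_scale, feat_dim)
        scale_embs = []
        for feats in bags:
            w = torch.softmax(self.patch_attn(feats), dim=0)   # (n_patches, 1)
            scale_embs.append((w * feats).sum(dim=0))          # (feat_dim,)
        scale_embs = torch.stack(scale_embs)                   # (n_scales, feat_dim)
        s = torch.softmax(self.scale_attn(scale_embs), dim=0)  # cross-scale attention
        slide_emb = (s * scale_embs).sum(dim=0)
        return self.classifier(slide_emb), s.squeeze(-1)

# Example: three magnifications with different patch counts for one slide.
model = CrossScaleAttentionMIL()
bags = [torch.randn(200, 512), torch.randn(50, 512), torch.randn(12, 512)]
logits, scale_weights = model(bags)
print(logits.shape, scale_weights)
```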


Subjects
Algorithms, Diagnostic Imaging, Humans
3.
bioRxiv ; 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37873404

ABSTRACT

Crohn's disease (CD) is a complex chronic inflammatory disorder that may affect any part of the gastrointestinal tract, with extra-intestinal manifestations and associated immune dysregulation. To characterize heterogeneity in CD, we profiled single-cell transcriptomics of 170 samples from 65 CD patients and 18 non-inflammatory bowel disease (non-IBD) controls in both the terminal ileum (TI) and ascending colon (AC). Analysis of 202,359 cells identified a novel epithelial cell type in both TI and AC, featuring high expression of LCN2, NOS2, and DUOX2, and thus named LND. LND cells, confirmed by high-resolution in-situ RNA imaging, were rarely found in non-IBD controls but expanded significantly in active CD. Compared to other epithelial cells, genes defining LND cells were enriched in antimicrobial response and immunoregulation. Moreover, multiplexed protein imaging demonstrated that LND cell abundance was associated with immune infiltration. Cross-talk between LND and immune cells was explored by ligand-receptor interactions and further evidenced by their spatial colocalization. LND cells showed significant enrichment of expression specificity of IBD/CD susceptibility genes, revealing their role in the immunopathogenesis of CD. Investigation of the lineage relationships of epithelial cells detected two LND cell subpopulations with different origins and developmental potential: early and late LND. The ratio of late to early LND cells was related to anti-TNF response. These findings emphasize the pathogenic role of this specialized LND cell type in both Crohn's ileitis and Crohn's colitis and identify novel biomarkers associated with disease activity and treatment response.

4.
Article in English | MEDLINE | ID: mdl-37786583

ABSTRACT

Multiplex immunofluorescence (MxIF) is an emerging imaging technology whose downstream molecular analytics rely heavily upon the effectiveness of cell segmentation. In practice, multiple membrane markers (e.g., NaKATPase, PanCK, and β-catenin) are employed to stain membranes for different cell types, so as to achieve more comprehensive cell segmentation, since no single marker fits all cell types. However, prevalent watershed-based image processing might yield inferior capability for modeling complicated relationships between markers. For example, some markers can be misleading due to questionable stain quality. In this paper, we propose a deep learning-based membrane segmentation method to aggregate complementary information that is uniquely provided by large-scale MxIF markers. We aim to segment tubular membrane structure in MxIF data using global (membrane markers z-stack projection image) and local (separate individual markers) information to maximize topology preservation with deep learning. Specifically, we investigate the feasibility of four SOTA 2D deep networks and four volumetric-based loss functions. We conducted a comprehensive ablation study to assess the sensitivity of the proposed method with various combinations of input channels. Beyond using the adjusted Rand index (ARI) as an evaluation metric, we propose a novel volumetric metric specific to skeletal structure, inspired by clDice and denoted clDiceSKEL. In total, 80 membrane MxIF images were manually traced for 5-fold cross-validation. Our model outperforms the baseline with a 20.2% increase in clDiceSKEL and a 41.3% increase in ARI, which is significant (p<0.05) using the Wilcoxon signed rank test. Our work explores a promising direction for advancing MxIF imaging cell segmentation with deep learning membrane segmentation. Tools are available at https://github.com/MASILab/MxIF_Membrane_Segmentation.
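For context, clDice-style metrics score topology preservation by intersecting each mask with the skeleton of the other. The following is a minimal sketch of the standard clDice on 2D binary masks using scikit-image skeletonization; the clDiceSKEL variant defined in the paper differs in its details.

```python
import numpy as np
from skimage.morphology import skeletonize

def cl_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Standard clDice for 2D binary masks (pred/gt as boolean arrays)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    skel_pred, skel_gt = skeletonize(pred), skeletonize(gt)
    # Topology precision: fraction of predicted skeleton lying inside the GT mask.
    tprec = (skel_pred & gt).sum() / (skel_pred.sum() + eps)
    # Topology sensitivity: fraction of GT skeleton lying inside the predicted mask.
    tsens = (skel_gt & pred).sum() / (skel_gt.sum() + eps)
    return 2 * tprec * tsens / (tprec + tsens + eps)

# Toy example: two vertical membrane segments; the prediction breaks one of them.
gt = np.zeros((64, 64), bool)
gt[20:44, 20:24] = True
gt[20:44, 40:44] = True
pred = gt.copy()
pred[30:34, 20:24] = False   # simulate a broken membrane segment
print(f"clDice = {cl_dice(pred, gt):.3f}")
```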

5.
Med Image Anal ; 90: 102939, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37725868

ABSTRACT

Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address such challenges, and inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, consisting of multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model performs whole brain segmentation with the complete set of 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively. Code, pre-trained models, and a use case pipeline are available at: https://github.com/MASILab/UNesT.

6.
J Med Imaging (Bellingham) ; 10(4): 044001, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37448597

ABSTRACT

Purpose: Thigh muscle group segmentation is important for assessing muscle anatomy, metabolic disease, and aging. Many efforts have been put into quantifying muscle tissues with magnetic resonance (MR) imaging, including manual annotation of individual muscles. However, leveraging publicly available annotations in MR images to achieve muscle group segmentation on single-slice computed tomography (CT) thigh images is challenging. Approach: We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from three-dimensional MR to single CT slices. First, we transform the image appearance from MR to CT with CycleGAN and simultaneously feed the synthesized CT images to a segmenter. Single CT slices are divided into hard and easy cohorts based on the entropy of pseudo-labels predicted by the segmenter. After refining easy-cohort pseudo-labels based on an anatomical assumption, self-training with the easy and hard splits is applied to fine-tune the segmenter. Results: On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888 (0.041) across all muscle groups, including the gracilis, hamstrings, quadriceps femoris, and sartorius muscles. Conclusions: To the best of our knowledge, this is the first pipeline to achieve domain adaptation from MR to CT for thigh images. The proposed pipeline effectively and robustly extracts muscle groups on two-dimensional single-slice CT thigh images. The container is available for public use in the GitHub repository at: https://github.com/MASILab/DA_CT_muscle_seg.
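A minimal sketch of the entropy-based cohort split described above: per-slice mean entropy of the segmenter's softmax output, thresholded (here at a quantile) into easy and hard cohorts. The threshold and class count are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def slice_entropy(prob_map: np.ndarray, eps: float = 1e-8) -> float:
    """Mean voxel-wise entropy of a softmax probability map of shape (C, H, W)."""
    ent = -(prob_map * np.log(prob_map + eps)).sum(axis=0)  # (H, W)
    return float(ent.mean())

def split_cohorts(prob_maps, quantile=0.5):
    """Split slices into easy/hard cohorts by pseudo-label entropy."""
    scores = np.array([slice_entropy(p) for p in prob_maps])
    thr = np.quantile(scores, quantile)
    easy = [i for i, s in enumerate(scores) if s <= thr]
    hard = [i for i, s in enumerate(scores) if s > thr]
    return easy, hard

# Toy example: 5 slices, 6 classes (5 muscle groups + background), 64x64 predictions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 6, 64, 64))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
easy, hard = split_cohorts(list(probs))
print("easy:", easy, "hard:", hard)
```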

7.
Article in English | MEDLINE | ID: mdl-37465093

ABSTRACT

Metabolic health is increasingly implicated as a risk factor across conditions from cardiology to neurology, and efficient assessment of body composition is critical to quantitatively characterizing these relationships. 2D low-dose single-slice computed tomography (CT) provides a high-resolution, quantitative tissue map, albeit with a limited field of view. Although numerous potential analyses have been proposed for quantifying image context, there has been no comprehensive study of low-dose single-slice CT longitudinal variability with automated segmentation. We studied a total of 1816 slices from 1469 subjects of the Baltimore Longitudinal Study of Aging (BLSA) abdominal dataset using supervised deep learning-based segmentation and unsupervised clustering methods. 300 of the 1469 subjects, who had a two-year gap between their first two scans, were selected to evaluate longitudinal variability with measurements including the intraclass correlation coefficient (ICC) and coefficient of variation (CV) in terms of tissue/organ size and mean intensity. We showed that our segmentation methods are stable in longitudinal settings, with Dice ranging from 0.821 to 0.962 for thirteen target abdominal tissue structures. We observed high variability in most organs (ICC<0.5) and low variability in the areas of muscle, abdominal wall, fat, and body mask (average ICC≥0.8). We found that the variability in organs is highly related to the cross-sectional position of the 2D slice. Our efforts pave the way for quantitative exploration and quality control to reduce uncertainties in longitudinal analysis.
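For reference, a minimal NumPy sketch of the two reproducibility measures named above, using the two-way random-effects ICC(2,1) form and a simple within-subject coefficient of variation on a simulated 300-subject, two-scan design; all numbers are invented for illustration.

```python
import numpy as np

def icc_2_1(Y: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    Y has shape (n_subjects, k_repeats), e.g. k = 2 for the paired scans."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)              # between-subject mean square
    ms_c = ss_cols / (k - 1)              # between-repeat mean square
    ms_e = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def within_subject_cv(Y: np.ndarray) -> float:
    """Mean per-subject coefficient of variation (SD / mean across repeats)."""
    return float((Y.std(axis=1, ddof=1) / Y.mean(axis=1)).mean())

# Simulated example: organ area (cm^2) for 300 subjects scanned twice, two years
# apart, with repositioning noise added to a subject-level true value.
rng = np.random.default_rng(0)
true_area = rng.normal(150.0, 30.0, size=(300, 1))
repeats = true_area + rng.normal(0.0, 20.0, size=(300, 2))
print(f"ICC = {icc_2_1(repeats):.2f}, CV = {within_subject_cv(repeats):.1%}")
```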

8.
Article in English | MEDLINE | ID: mdl-37465097

ABSTRACT

With the confounding effects of demographics across large-scale imaging surveys, substantial variation is demonstrated in the volumetric structure of the orbit and in eye anthropometry. Such variability increases the difficulty of localizing the anatomical features of the eye organs for population analysis. To accommodate this variability with stable registration transfer, we propose an unbiased eye atlas template followed by a hierarchical coarse-to-fine approach to provide generalized eye organ context across populations. We retrieved volumetric scans from 1842 healthy patients for generating an eye atlas template with minimal bias. Briefly, we select 20 subject scans and use an iterative approach to generate an initial unbiased template. We then perform metric-based registration of the remaining samples to the unbiased template and generate coarse registered outputs. The coarse registered outputs are further leveraged to train a deep probabilistic network, which aims to refine the organ deformation in an unsupervised setting. Computed tomography (CT) scans of 100 de-identified subjects are used to generate and evaluate the unbiased atlas template with the hierarchical pipeline. The refined registration shows stable transfer of the eye organs, which were well localized in the high-resolution (0.5 mm³) atlas space and demonstrated a significant improvement of 2.37% Dice in inverse label transfer performance. The subject-wise qualitative representations with surface rendering successfully demonstrate the transfer details of the organ context and show the applicability of generalizing the morphological variation across patients.

9.
Article in English | MEDLINE | ID: mdl-37465840

ABSTRACT

Crohn's disease (CD) is a debilitating inflammatory bowel disease with no known cure. Computational analysis of hematoxylin and eosin (H&E)-stained colon biopsy whole slide images (WSIs) from CD patients provides the opportunity to discover unknown and complex relationships between tissue cellular features and disease severity. While there have been works using cell nuclei-derived features for predicting slide-level traits, this has not been performed on CD H&E WSIs for classifying normal tissue from CD patients vs. active CD, nor for assessing slide-label predictive performance while using both separate and combined information from pseudo-segmentation labels of nuclei from neutrophils, eosinophils, epithelial cells, lymphocytes, plasma cells, and connective cells. We used 413 WSIs of CD patient biopsies and calculated normalized histograms of nucleus density for the six cell classes for each WSI. We used a support vector machine to classify the truncated singular value decomposition representations of the normalized histograms as normal or active CD, with four-fold cross-validation in rounds in which nucleus types were first compared individually, the best was selected, and further types were added in each subsequent round. We found that neutrophils were the most predictive individual nucleus type, with an AUC of 0.92 ± 0.0003 on the withheld test set. Adding information improved cross-validation performance for the first two rounds, and performance on the withheld test set for the first three rounds, though performance metrics did not increase substantially beyond those obtained with neutrophils alone.
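A minimal scikit-learn sketch of the slide-level classification pipeline described above (normalized nucleus-density histograms, truncated SVD, SVM with four-fold cross-validation); the data here are random stand-ins, and the bin count and SVD rank are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in features: per-WSI normalized histograms of nucleus density for one
# cell class (e.g. neutrophils), 32 bins each; labels 0 = normal, 1 = active CD.
n_slides, n_bins = 413, 32
X = rng.dirichlet(np.ones(n_bins), size=n_slides)   # each row sums to 1
y = rng.integers(0, 2, size=n_slides)

# Truncated SVD representation of the histograms, then an SVM classifier,
# scored with four-fold cross-validated AUC.
clf = make_pipeline(TruncatedSVD(n_components=8, random_state=0),
                    StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(clf, X, y, cv=4, scoring="roc_auc")
print(f"4-fold AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```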

10.
Article in English | MEDLINE | ID: mdl-37324550

ABSTRACT

The Tangram algorithm is a benchmark method for aligning single-cell (sc/snRNA-seq) data to various forms of spatial data collected from the same region. With this alignment, the annotation of the single-cell data can be projected onto the spatial data. However, the cell composition (cell-type ratio) of the single-cell data and the spatial data might differ because of heterogeneous cell distribution. Whether the Tangram algorithm can be adapted when the two datasets have different cell-type ratios has not been discussed in previous works. In our practical application, which maps the cell-type classification results of single-cell data to multiplex immunofluorescence (MxIF) spatial data, cell-type ratios were different even though the two were sampled from adjacent areas. In this work, both simulation and empirical validation were conducted to quantitatively explore the impact of mismatched cell-type ratios on Tangram mapping in different situations. Results show that the cell-type ratio difference has a negative influence on classification accuracy.

11.
IEEE J Biomed Health Inform ; 27(9): 4444-4453, 2023 09.
Article in English | MEDLINE | ID: mdl-37310834

ABSTRACT

Medical image segmentation, or computing voxel-wise semantic masks, is a fundamental yet challenging task in the medical imaging domain. To increase the ability of encoder-decoder neural networks to perform this task across large clinical cohorts, contrastive learning provides an opportunity to stabilize model initialization and enhance downstream task performance without ground-truth voxel-wise labels. However, multiple target objects with different semantic meanings and contrast levels may exist in a single image, which poses a problem for adapting traditional contrastive learning methods from the prevalent "image-level classification" to "pixel-level segmentation". In this article, we propose a simple semantic-aware contrastive learning approach leveraging attention masks and image-wise labels to advance multi-object semantic segmentation. Briefly, we embed different semantic objects to different clusters rather than using the traditional image-level embeddings. We evaluate our proposed method on a multi-organ medical image segmentation task with both in-house data and the MICCAI Challenge 2015 BTCV dataset. Compared with current state-of-the-art training strategies, our proposed pipeline yields substantial improvements of 5.53% and 6.09% in Dice score for the two medical image segmentation cohorts, respectively (p-value 0.01). The performance of the proposed method is further assessed on an external medical image cohort via the MICCAI Challenge FLARE 2021 dataset, and achieves a substantial improvement from Dice 0.922 to 0.933 (p-value 0.01).
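A minimal PyTorch sketch of the core idea of pulling object-level embeddings with the same semantic label into the same cluster, written as a supervised contrastive loss; this is a generic formulation under assumed shapes, not the authors' exact attention-mask-based implementation.

```python
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(emb: torch.Tensor, labels: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over object-level embeddings: embeddings that
    share a semantic (organ) label are pulled together, the rest pushed apart."""
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t() / temperature                       # (N, N) similarities
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = (labels[:, None] == labels[None, :]).float().masked_fill(eye, 0.0)
    logits = sim.masked_fill(eye, -1e9)                     # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return loss.mean()

# Toy example: 16 object embeddings (e.g. attention-pooled organ features)
# drawn from 4 semantic classes.
torch.manual_seed(0)
emb = torch.randn(16, 128, requires_grad=True)
labels = torch.randint(0, 4, (16,))
loss = semantic_contrastive_loss(emb, labels)
loss.backward()
print(float(loss))
```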


Subjects
Diagnostic Imaging; Machine Learning; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Semantics; Diagnostic Imaging/methods; Datasets as Topic
12.
Article in English | MEDLINE | ID: mdl-37123016

ABSTRACT

7T magnetic resonance imaging (MRI) has the potential to drive our understanding of human brain function through new contrast and enhanced resolution. Whole brain segmentation is a key neuroimaging technique that allows for region-by-region analysis of the brain. Segmentation is also an important preliminary step that provides spatial and volumetric information for running other neuroimaging pipelines. Spatially localized atlas network tiles (SLANT) is a popular 3D convolutional neural network (CNN) tool that breaks the whole brain segmentation task into localized sub-tasks. Each sub-task involves a specific spatial location handled by an independent 3D convolutional network to provide high-resolution whole brain segmentation results. SLANT has been widely used to generate whole brain segmentations from structural scans acquired on 3T MRI. However, the use of SLANT for whole brain segmentation from structural 7T MRI scans has not been successful due to the inhomogeneous image contrast usually seen across the brain in 7T MRI. For instance, we demonstrate that the mean percent difference of SLANT label volumes between a 3T scan-rescan pair is approximately 1.73%, whereas its 3T-7T counterpart has higher differences of around 15.13%. Our approach to address this problem is to register the whole brain segmentation performed on 3T MRI to 7T MRI and use this information to finetune SLANT for structural 7T MRI. With the finetuned SLANT pipeline, we observe a lower mean relative difference in label volumes of ~8.43% on structural 7T MRI data. The Dice similarity coefficient between the SLANT segmentation on the 3T MRI scan and the finetuned SLANT segmentation on the 7T MRI increased from 0.79 to 0.83 (p<0.01). These results suggest that finetuning SLANT is a viable solution for improving whole brain segmentation on high-resolution 7T structural imaging.
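A minimal sketch of the scan-rescan volume comparison used above, computing the mean absolute percent difference of label volumes between two segmentations of the same subject; the symmetric denominator and the example label volumes are illustrative assumptions.

```python
import numpy as np

def mean_percent_volume_difference(vol_a: dict, vol_b: dict) -> float:
    """Mean absolute percent difference of label volumes between two segmentations
    of the same subject (e.g. 3T scan vs. rescan, or 3T vs. 7T)."""
    labels = sorted(set(vol_a) & set(vol_b))
    diffs = [abs(vol_a[k] - vol_b[k]) / ((vol_a[k] + vol_b[k]) / 2) for k in labels]
    return 100.0 * float(np.mean(diffs))

# Toy example with three of the 133 SLANT labels (volumes in mm^3, invented).
scan = {"hippocampus_L": 3400.0, "thalamus_L": 7600.0, "putamen_L": 4900.0}
rescan = {"hippocampus_L": 3450.0, "thalamus_L": 7500.0, "putamen_L": 5000.0}
print(f"mean percent difference: {mean_percent_volume_difference(scan, rescan):.2f}%")
```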

13.
Comput Biol Med ; 152: 106414, 2023 01.
Article in English | MEDLINE | ID: mdl-36525831

ABSTRACT

BACKGROUND: Anterior temporal lobe resection is an effective treatment for temporal lobe epilepsy. The post-surgical structural changes could influence follow-up treatment. Capturing post-surgical changes necessitates a well-established cortical shape correspondence between pre- and post-surgical surfaces. Yet, most cortical surface registration methods are designed for normal neuroanatomy. Surgical changes can introduce wide-ranging artifacts in correspondence, for which conventional surface registration methods may not work as intended. METHODS: In this paper, we propose a novel particle method for one-to-one dense shape correspondence between pre- and post-surgical surfaces with temporal lobe resection. The proposed method can handle partial structural abnormality involving non-rigid changes. Unlike existing particle methods using implicit particle adjacency, we consider explicit particle adjacency to establish a smooth correspondence. Moreover, we propose hierarchical optimization of particles rather than full optimization of all particles at once, to avoid trapping in locally optimal particle updates. RESULTS: We evaluate the proposed method on 25 pairs of T1 MRI with simulated pre- and post-resection of the anterior temporal lobe and 25 pairs from patients with actual resection. On simulated data, we show improved accuracy over several cortical regions compared to existing surface registration methods, with an ROI boundary Hausdorff distance of 4.29 mm and an average Dice similarity coefficient of 0.841. In 25 patients with actual resection of the anterior temporal lobe, our method shows improved shape correspondence in qualitative and quantitative evaluation, with an average parcellation-off ratio of 0.061, and in cortical thickness changes. We also show better smoothness of the correspondence without self-intersection, compared with point-wise matching methods, which show various degrees of self-intersection. CONCLUSION: The proposed method establishes a promising one-to-one dense shape correspondence for temporal lobe resection. The resulting correspondence is smooth without self-intersection. The proposed hierarchical optimization strategy could accelerate optimization and improve its accuracy. According to the results on the paired surfaces with temporal lobe resection, the proposed method outperforms the compared methods and is more reliable for capturing cortical thickness changes.


Subjects
Epilepsy, Temporal Lobe; Temporal Lobe; Humans; Temporal Lobe/diagnostic imaging; Temporal Lobe/surgery; Epilepsy, Temporal Lobe/diagnostic imaging; Epilepsy, Temporal Lobe/surgery; Magnetic Resonance Imaging/methods; Treatment Outcome
14.
Med Image Learn Ltd Noisy Data (2023) ; 14307: 82-92, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38523773

ABSTRACT

Many anomaly detection approaches, especially deep learning methods, have been developed recently to identify abnormal image morphology by employing only normal images during training. Unfortunately, many prior anomaly detection methods were optimized for a specific "known" abnormality (e.g., brain tumor, bone fracture, cell types). Moreover, even though only the normal images were used in the training process, the abnormal images were often employed during the validation process (e.g., epoch selection, hyper-parameter tuning), which might unintentionally leak the supposedly "unknown" abnormality. In this study, we investigated these two essential aspects of universal anomaly detection in medical images by (1) comparing various anomaly detection methods across four medical datasets, (2) investigating the inevitable but often neglected issue of how to unbiasedly select the optimal anomaly detection model during the validation phase using only normal images, and (3) proposing a simple decision-level ensemble method to leverage the advantages of different kinds of anomaly detection without knowing the abnormality. The results of our experiments indicate that none of the evaluated methods consistently achieved the best performance across all datasets. Our proposed method enhanced the robustness of performance in general (average AUC 0.956).
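A minimal sketch of one simple decision-level ensemble: rank-normalize each detector's anomaly scores and average them before computing AUC. The paper's ensemble details may differ, and the detector scores below are simulated.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_anomaly_scores(score_lists):
    """Decision-level ensemble: rank-normalize each detector's scores to [0, 1]
    and average them, so no detector's score scale dominates."""
    normed = []
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        ranks = s.argsort().argsort()          # ranks 0 .. n-1
        normed.append(ranks / (len(s) - 1))
    return np.mean(normed, axis=0)

# Toy example: three detectors scoring 100 normal and 20 abnormal images.
rng = np.random.default_rng(0)
y = np.r_[np.zeros(100), np.ones(20)]
detectors = [np.r_[rng.normal(0, 1, 100), rng.normal(d, 1, 20)] for d in (1.5, 0.8, 0.3)]
fused = ensemble_anomaly_scores(detectors)
for i, s in enumerate(detectors):
    print(f"detector {i}: AUC = {roc_auc_score(y, s):.3f}")
print(f"ensemble  : AUC = {roc_auc_score(y, fused):.3f}")
```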

15.
Article in English | MEDLINE | ID: mdl-36331283

ABSTRACT

Multi-instance learning (MIL) is widely used in the computer-aided interpretation of pathological Whole Slide Images (WSIs) to solve the lack of pixel-wise or patch-wise annotations. Often, this approach directly applies "natural image driven" MIL algorithms which overlook the multi-scale (i.e. pyramidal) nature of WSIs. Off-the-shelf MIL algorithms are typically deployed on a single-scale of WSIs (e.g., 20× magnification), while human pathologists usually aggregate the global and local patterns in a multi-scale manner (e.g., by zooming in and out between different magnifications). In this study, we propose a novel cross-scale attention mechanism to explicitly aggregate inter-scale interactions into a single MIL network for Crohn's Disease (CD), which is a form of inflammatory bowel disease. The contribution of this paper is two-fold: (1) a cross-scale attention mechanism is proposed to aggregate features from different resolutions with multi-scale interaction; and (2) differential multi-scale attention visualizations are generated to localize explainable lesion patterns. By training ~250,000 H&E-stained Ascending Colon (AC) patches from 20 CD patient and 30 healthy control samples at different scales, our approach achieved a superior Area under the Curve (AUC) score of 0.8924 compared with baseline models. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.

16.
Article in English | MEDLINE | ID: mdl-36303577

ABSTRACT

The Human BioMolecular Atlas Program (HuBMAP) provides an opportunity to contextualize findings from the cellular to the organ-system level. Constructing an atlas target is the primary endpoint for generalizing anatomical information across scales and populations. An initial target of HuBMAP is the kidney, and arterial phase contrast-enhanced computed tomography (CT) provides distinctive appearance and anatomical context for the internal substructure of the kidney, such as the renal cortex, medulla, and pelvicalyceal system. With the confounding effects of demographics and morphological characteristics of the kidney across large-scale imaging surveys, substantial variation is demonstrated in the internal substructure morphometry and the intensity contrast due to variance in imaging protocols. Such variability increases the difficulty of localizing the anatomical features of the kidney substructure in a well-defined spatial reference for clinical analysis. In order to stabilize the localization of kidney substructures in the context of this variability, we propose a high-resolution CT kidney substructure atlas template. Briefly, we introduce a deep learning preprocessing technique to extract the volumetric interest of the abdominal regions and further perform a deep supervised registration pipeline to stably adapt the anatomical context of the kidney internal substructure. To generate and evaluate the atlas template, arterial phase CT scans of 500 control subjects are de-identified and registered to the atlas template with a complete end-to-end pipeline. With stable registration to the abdominal wall and kidney organs, the internal substructure of both left and right kidneys is substantially localized in the high-resolution atlas space. The atlas average template successfully demonstrates the contextual details of the internal structure and is applicable for generalizing the morphological variation of the internal substructure across patients.

17.
Article in English | MEDLINE | ID: mdl-36303572

ABSTRACT

Muscle, bone, and fat segmentation of CT thigh slices is essential for body composition research. Voxel-wise image segmentation enables quantification of tissue properties including area, intensity, and texture. Deep learning approaches have had substantial success in medical image segmentation, but they typically require large amounts of data. Due to the high cost of manual annotation, training deep learning models with limited human-labelled data is desirable but challenging. Inspired by transfer learning, we propose a two-stage deep learning pipeline to address this issue in thigh segmentation. We study 2836 slices from the Baltimore Longitudinal Study of Aging (BLSA) and 121 slices from the Genetic and Epigenetic Signatures of Translational Aging Laboratory Testing (GESTALT) study. First, we generate pseudo-labels based on approximate hand-crafted approaches using CT intensity and anatomical morphology. Then, these pseudo-labels are fed into deep neural networks to train models from scratch. Finally, the first-stage model is loaded as initialization and fine-tuned with a more limited set of expert human labels. We evaluate the performance of this framework on 56 thigh CT scans and obtain average Dice scores of 0.979, 0.969, 0.953, 0.980, and 0.800 for five tissues: muscle, cortical bone, internal bone, subcutaneous fat, and intermuscular fat, respectively. We evaluate generalizability by manually reviewing an external set of 3504 BLSA single thighs from 1752 thigh slices. The results are consistent and passed human review with 150 failed thigh images, demonstrating that the proposed method has strong generalizability.
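A minimal sketch of first-stage pseudo-label generation by Hounsfield-unit thresholding of a 2D CT slice; the HU windows and the toy phantom are illustrative assumptions and omit the anatomical morphology rules used in the paper.

```python
import numpy as np

# Approximate Hounsfield-unit windows (illustrative only; exact ranges and the
# anatomical morphology rules used in the paper will differ).
HU_RANGES = {
    "fat":    (-190, -30),
    "muscle": (-29, 150),
    "bone":   (200, 3000),
}

def pseudo_label_thigh(ct_slice: np.ndarray) -> np.ndarray:
    """Assign coarse tissue pseudo-labels to a 2D CT slice by HU thresholding.
    0 = background/other, 1 = fat, 2 = muscle, 3 = bone."""
    label = np.zeros_like(ct_slice, dtype=np.uint8)
    for idx, tissue in enumerate(["fat", "muscle", "bone"], start=1):
        lo, hi = HU_RANGES[tissue]
        label[(ct_slice >= lo) & (ct_slice <= hi)] = idx
    return label

# Toy phantom: air background, fat ring, muscle interior, and a bone core.
ct = np.full((128, 128), -1000.0)
yy, xx = np.ogrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
ct[r < 50] = -100.0   # subcutaneous fat ring
ct[r < 40] = 60.0     # muscle
ct[r < 8] = 400.0     # bone core
print(np.unique(pseudo_label_thigh(ct), return_counts=True))
```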

18.
Article in English | MEDLINE | ID: mdl-36303576

ABSTRACT

Abdominal computed tomography (CT) imaging enables assessment of body habitus and organ health. Quantification of these health factors necessitates semantic segmentation of key structures. Deep learning efforts have shown remarkable success in automating segmentation of abdominal CT, but these methods largely rely on 3D volumes. Current approaches are not applicable when single-slice imaging is used to minimize radiation dose. For 2D abdominal organ segmentation, the lack of 3D context and the variety of acquired image levels are major challenges. Deep learning approaches for 2D abdominal organ segmentation benefit from adding more manually annotated images, but annotation is resource-intensive to acquire given the large quantity required and the expertise needed. Herein, we designed a gradient-based active learning annotation framework that meta-parameterizes and optimizes exemplars to dynamically select the 'hard cases', achieving better results with fewer annotated slices and reducing annotation effort. With the Baltimore Longitudinal Study of Aging (BLSA) cohort, we evaluated performance starting from 286 subjects and iteratively adding 50 more subjects up to a total of 586 subjects. We compared the amount of additional data required to achieve the same Dice score using our proposed method versus random selection. To reach 0.97 of the maximum Dice, random selection needed 4.4 times more data than our active learning framework. The proposed framework maximizes the efficacy of manual effort and accelerates learning.
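As a simplified stand-in for the gradient-based exemplar optimization described above, the sketch below selects 'hard cases' by margin-based uncertainty of the current segmenter's predictions; it illustrates the selection step only, not the authors' meta-parameterized method.

```python
import numpy as np

def margin_uncertainty(prob_maps: np.ndarray) -> np.ndarray:
    """Mean (1 - margin) per slice, where margin is the gap between the two most
    probable classes at each voxel; prob_maps has shape (n_slices, C, H, W)."""
    top2 = np.sort(prob_maps, axis=1)[:, -2:, :, :]   # two largest probs per voxel
    margin = top2[:, 1] - top2[:, 0]                  # (n_slices, H, W)
    return (1.0 - margin).mean(axis=(1, 2))

def select_hard_cases(prob_maps: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` most uncertain slices to annotate next."""
    return np.argsort(margin_uncertainty(prob_maps))[::-1][:budget]

# Toy example: 50 candidate slices, 12 organ classes, annotation budget of 5.
rng = np.random.default_rng(1)
logits = rng.normal(size=(50, 12, 64, 64))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print("slices to annotate next:", select_hard_cases(probs, budget=5))
```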

19.
Article in English | MEDLINE | ID: mdl-36304178

ABSTRACT

Multi-modal learning (e.g., integrating pathological images with genomic features) tends to improve the accuracy of cancer diagnosis and prognosis compared to learning with a single modality. However, missing data is a common problem in clinical practice; not every patient has all modalities available. Most previous works directly discarded samples with missing modalities, which might lose the information in these data and increase the likelihood of overfitting. In this work, we generalize multi-modal learning for cancer diagnosis with the capacity to deal with missing data, using histological images and genomic data. Our integrated model can utilize all available data from patients with both complete and partial modalities. Experiments on the public TCGA-GBM and TCGA-LGG datasets show that data with missing modalities can contribute to multi-modal learning, which improves model performance in grade classification of glioma cancer.

20.
J Digit Imaging ; 35(6): 1576-1589, 2022 12.
Article in English | MEDLINE | ID: mdl-35922700

ABSTRACT

A robust medical image computing infrastructure must host massive multimodal archives, perform extensive analysis pipelines, and execute scalable job management. An emerging data format standard, the Brain Imaging Data Structure (BIDS), introduces complexities for interfacing with XNAT archives. Moreover, workflow integration is combinatorially problematic when matching large amounts of processing to large datasets. Historically, workflow engines have focused on refining the workflows themselves rather than on actual job generation. However, such an approach is incompatible with a data-centric architecture that hosts heterogeneous medical image computing. Distributed Automation for XNAT (DAX) provides large-scale image storage and analysis pipelines with an optimized job management tool. Herein, we describe developments for DAX that allow for integration of the XNAT and BIDS standards. We also improve DAX's efficiency for diverse containerized workflows in a high-performance computing (HPC) environment. Briefly, we integrate YAML configuration processor scripts to abstract workflow data inputs, data outputs, commands, and job attributes. Finally, we propose an online database-driven mechanism for DAX to efficiently identify the most recently updated sessions, thereby improving job-building efficiency on large projects. We refer to the proposed overall DAX development in this work as DAX-1 (DAX version 1). To validate the effectiveness of the new features, we verified (1) the efficiency of converting XNAT data to BIDS format and the correctness of the conversion using a collection of BIDS-standard containerized neuroimaging workflows, (2) how the YAML-based processors simplify configuration setup via a sequence of application pipelines, and (3) the productivity of DAX-1 in generating actual HPC processing jobs compared with the earlier DAX baseline method. The empirical results show that (1) DAX-1 converts XNAT data to BIDS at a speed similar to accessing XNAT data alone; (2) YAML integrates into DAX-1 with a shallow learning curve for users; and (3) DAX-1 reduces job/assessor generation latency by finding recently modified sessions. Herein, we present approaches for efficiently integrating XNAT and modern image formats with a scalable workflow engine for large-scale dataset access and processing.
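A hypothetical example of the YAML processor abstraction: a processor definition with inputs, outputs, command, and job attributes, loaded with PyYAML. Field names and values are invented for illustration and do not reflect the actual DAX-1 processor schema.

```python
import yaml  # PyYAML

# Hypothetical processor definition; field names are illustrative only.
PROCESSOR_YAML = """
name: fmriprep_v23
inputs:
  scans:
    - name: t1w
      types: [T1w]
outputs:
  - path: fmriprep/
    type: DIR
command: "fmriprep {bids_dir} {out_dir} participant --skip-bids-validation"
attrs:
  memory: 16G
  walltime: "24:00:00"
"""

# Abstracting the workflow this way lets a job builder fill in per-session paths
# and submit the resulting command with the requested job attributes.
spec = yaml.safe_load(PROCESSOR_YAML)
job_cmd = spec["command"].format(bids_dir="/scratch/proj/bids",
                                 out_dir="/scratch/proj/derivatives")
print(spec["name"], "->", job_cmd)
print("job attributes:", spec["attrs"])
```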


Subjects
Neuroimaging, Software, Humans, Brain, Neuroimaging/methods, Workflow