Results 1 - 20 of 70
1.
J Med Internet Res ; 26: e51706, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39116439

ABSTRACT

BACKGROUND: Temporal bone computed tomography (CT) helps diagnose chronic otitis media (COM). However, its interpretation requires training and expertise. Artificial intelligence (AI) can help clinicians evaluate COM through CT scans, but existing models lack transparency and may not fully leverage multidimensional diagnostic information. OBJECTIVE: We aimed to develop an explainable AI system based on 3D convolutional neural networks (CNNs) for automatic CT-based evaluation of COM. METHODS: Temporal bone CT scans were retrospectively obtained from patients operated for COM between December 2015 and July 2021 at 2 independent institutes. A region of interest encompassing the middle ear was automatically segmented, and 3D CNNs were subsequently trained to identify pathological ears and cholesteatoma. An ablation study was performed to refine model architecture. Benchmark tests were conducted against a baseline 2D model and 7 clinical experts. Model performance was measured through cross-validation and external validation. Heat maps, generated using Gradient-Weighted Class Activation Mapping, were used to highlight critical decision-making regions. Finally, the AI system was assessed with a prospective cohort to aid clinicians in preoperative COM assessment. RESULTS: Internal and external data sets contained 1661 and 108 patients (3153 and 211 eligible ears), respectively. The 3D model exhibited decent performance with mean areas under the receiver operating characteristic curves of 0.96 (SD 0.01) and 0.93 (SD 0.01), and mean accuracies of 0.878 (SD 0.017) and 0.843 (SD 0.015), respectively, for detecting pathological ears on the 2 data sets. Similar outcomes were observed for cholesteatoma identification (mean area under the receiver operating characteristic curve 0.85, SD 0.03 and 0.83, SD 0.05; mean accuracies 0.783, SD 0.04 and 0.813, SD 0.033, respectively). 
The proposed 3D model achieved a commendable balance between performance and network size relative to alternative models. It significantly outperformed the 2D approach in detecting COM (P≤.05) and exhibited a substantial gain in identifying cholesteatoma (P<.001). The model also demonstrated superior diagnostic capabilities over resident fellows and the attending otologist (P<.05), rivaling all senior clinicians in both tasks. The generated heat maps properly highlighted the middle ear and mastoid regions, aligning with human knowledge in interpreting temporal bone CT. The resulting AI system achieved an accuracy of 81.8% in generating preoperative diagnoses for 121 patients and contributed to clinical decision-making in 90.1% of cases. CONCLUSIONS: We present a 3D CNN model trained to detect pathological changes and identify cholesteatoma via temporal bone CT scans. In both tasks, this model significantly outperforms the baseline 2D approach, achieving levels comparable with or surpassing those of human experts. The model also exhibits decent generalizability and enhanced comprehensibility. This AI system facilitates automatic COM assessment and shows promising viability in real-world clinical settings. These findings underscore AI's potential as a valuable aid for clinicians in COM evaluation. TRIAL REGISTRATION: Chinese Clinical Trial Registry ChiCTR2000036300; https://www.chictr.org.cn/showprojEN.html?proj=58685.
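The heat-map step this abstract describes is Gradient-Weighted Class Activation Mapping. As a rough illustration of how a 3D Grad-CAM volume is assembled from the last convolutional layer's activations and gradients, here is a minimal NumPy sketch; the arrays are random toy stand-ins, not the authors' model.

```python
import numpy as np

def grad_cam_3d(feature_maps, gradients):
    """Combine 3D feature maps with class-score gradients (Grad-CAM).

    feature_maps: (K, D, H, W) activations from the last conv layer.
    gradients:    (K, D, H, W) gradients of the class score w.r.t. those maps.
    Returns a (D, H, W) heat map normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k).
    weights = gradients.mean(axis=(1, 2, 3))           # (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.tensordot(weights, feature_maps, axes=1)  # (D, H, W)
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 4 channels over an 8x8x8 feature volume.
rng = np.random.default_rng(0)
fmap = rng.random((4, 8, 8, 8))
grad = rng.random((4, 8, 8, 8))
heat = grad_cam_3d(fmap, grad)
```

In practice the resulting low-resolution heat map is upsampled to the CT volume and overlaid, which is how the middle-ear and mastoid highlighting reported above would be visualized.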


Subjects
Artificial Intelligence; Otitis Media; Temporal Bone; Tomography, X-Ray Computed; Humans; Otitis Media/diagnostic imaging; Temporal Bone/diagnostic imaging; Tomography, X-Ray Computed/methods; Chronic Disease; Retrospective Studies; Female; Male; Middle Aged; Imaging, Three-Dimensional/methods; Adult; Neural Networks, Computer
2.
J Digit Imaging ; 35(4): 1023-1033, 2022 08.
Article in English | MEDLINE | ID: mdl-35266088

ABSTRACT

The field of artificial intelligence (AI) in medical imaging is undergoing explosive growth, and Radiology is a prime target for innovation. The American College of Radiology Data Science Institute has identified more than 240 specific use cases where AI could be used to improve clinical practice. In this context, thousands of potential methods are developed by research labs and industry innovators. Deploying AI tools within a clinical enterprise, even on limited retrospective evaluation, is complicated by security and privacy concerns. Thus, innovation must be weighed against the substantive resources required for local clinical evaluation. To reduce barriers to AI validation while maintaining rigorous security and privacy standards, we developed the AI Imaging Incubator. The AI Imaging Incubator serves as a DICOM storage destination within a clinical enterprise where images can be directed for novel research evaluation under Institutional Review Board approval. AI Imaging Incubator is controlled by a secure HIPAA-compliant front end and provides access to a menu of AI procedures captured within network-isolated containers. Results are served via a secure website that supports research and clinical data formats. Deployment of new AI approaches within this system is streamlined through a standardized application programming interface. This manuscript presents case studies of the AI Imaging Incubator applied to randomizing lung biopsies on chest CT, liver fat assessment on abdomen CT, and brain volumetry on head MRI.
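The "menu of AI procedures" behind a standardized interface can be pictured as a simple procedure registry. The sketch below is purely illustrative (the names and return values are hypothetical, not the Incubator's actual API): each containerized procedure registers under a name, and incoming studies are routed through one entry point.

```python
# Hypothetical registry sketch: procedure names and payloads are illustrative.
PROCEDURES = {}

def register(name):
    """Decorator that adds a procedure to the menu under a given name."""
    def deco(fn):
        PROCEDURES[name] = fn
        return fn
    return deco

@register("lung_biopsy_randomizer")
def lung_biopsy(study_uid):
    # A real procedure would run inside a network-isolated container.
    return {"study": study_uid, "result": "lesion targets proposed"}

def run_procedure(name, study_uid):
    """Single standardized entry point for routing a study to a procedure."""
    if name not in PROCEDURES:
        raise KeyError(f"unknown procedure: {name}")
    return PROCEDURES[name](study_uid)

out = run_procedure("lung_biopsy_randomizer", "1.2.840.0000")
```

The design choice this mirrors is that new AI approaches plug in by registering against a fixed interface, so deployment does not require changing the routing layer.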


Subjects
Artificial Intelligence; Radiology; Hospitals; Humans; Radiology/methods; Retrospective Studies; Workflow
3.
J Digit Imaging ; 35(6): 1576-1589, 2022 12.
Article in English | MEDLINE | ID: mdl-35922700

ABSTRACT

A robust medical image computing infrastructure must host massive multimodal archives, perform extensive analysis pipelines, and execute scalable job management. An emerging data format standard, the Brain Imaging Data Structure (BIDS), introduces complexities for interfacing with XNAT archives. Moreover, workflow integration is combinatorially problematic when matching large amounts of processing to large datasets. Historically, workflow engines have focused on refining the workflows themselves rather than on actual job generation. However, such an approach is incompatible with a data-centric architecture that hosts heterogeneous medical image computing. The Distributed automation for XNAT toolkit (DAX) provides large-scale image storage and analysis pipelines with an optimized job management tool. Herein, we describe developments for DAX that allow for integration of the XNAT and BIDS standards. We also improve DAX's efficiency for diverse containerized workflows in a high-performance computing (HPC) environment. Briefly, we integrate YAML configuration processor scripts to abstract workflow data inputs, data outputs, commands, and job attributes. Finally, we propose an online database-driven mechanism for DAX to efficiently identify the most recently updated sessions, thereby improving job-building efficiency on large projects. We refer to the overall DAX development proposed in this work as DAX-1 (DAX version 1). To validate the effectiveness of the new features, we verified (1) the efficiency of converting XNAT data to BIDS format and the correctness of the conversion using a collection of BIDS-standard containerized neuroimaging workflows, (2) how the YAML-based processor simplified configuration setup across a sequence of application pipelines, and (3) the productivity of DAX-1 in generating actual HPC processing jobs compared with the earlier DAX baseline method.
The empirical results show that (1) DAX-1 converting XNAT data to BIDS is similar in speed to accessing XNAT data alone; (2) YAML configurations integrate into DAX-1 with a shallow learning curve for users; and (3) DAX-1 reduced job/assessor generation latency by identifying recently modified sessions. Herein, we present approaches for efficiently integrating XNAT and modern image formats with a scalable workflow engine for large-scale dataset access and processing.
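The idea of a processor spec that abstracts inputs, outputs, commands, and job attributes can be sketched with a plain dictionary standing in for the YAML file. All field names and paths below are illustrative assumptions, not DAX-1's actual schema.

```python
# Hypothetical processor spec mirroring the fields described (inputs,
# outputs, command, job attributes) -- field names are illustrative only.
processor = {
    "inputs":  {"t1": "/archive/{session}/T1w.nii.gz"},
    "outputs": {"seg": "/out/{session}/seg.nii.gz"},
    "command": "run_seg --in {t1} --out {seg}",
    "attrs":   {"memory": "8G", "walltime": "02:00:00"},
}

def build_job(spec, session):
    """Resolve a processor spec into a concrete job command and attributes."""
    paths = {k: v.format(session=session) for k, v in spec["inputs"].items()}
    paths.update({k: v.format(session=session) for k, v in spec["outputs"].items()})
    return spec["command"].format(**paths), spec["attrs"]

cmd, attrs = build_job(processor, "sess-001")
# cmd resolves to a fully concrete command for one session.
```

The point of the abstraction is that one spec plus a session identifier yields one schedulable job, so job generation scales with the database query for recently modified sessions rather than with hand-written pipeline code.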


Subjects
Neuroimaging; Software; Humans; Brain; Neuroimaging/methods; Workflow
4.
Neurocomputing (Amst) ; 397: 48-59, 2020 Jul 15.
Article in English | MEDLINE | ID: mdl-32863584

ABSTRACT

With the rapid development of image acquisition and storage, multiple images per class are commonly available for computer vision tasks (e.g., face recognition, object detection, medical imaging, etc.). Recently, the recurrent neural network (RNN) has been widely integrated with convolutional neural networks (CNNs) to perform image classification on ordered (sequential) data. In this paper, by permuting multiple images into multiple dummy orders, we generalize the ordered "RNN+CNN" design (longitudinal) to a novel unordered fashion, called the Multi-path x-D Recurrent Neural Network (MxDRNN), for image classification. To the best of our knowledge, few (if any) existing studies have deployed the RNN framework on unordered intra-class images to improve classification performance. Specifically, multiple learning paths are introduced in the MxDRNN to extract discriminative features by permuting input dummy orders. Eight datasets from five different fields (MNIST, 3D-MNIST, CIFAR, VGGFace2, and lung screening computed tomography) are included to evaluate the performance of our method. The proposed MxDRNN improves the baseline performance by a large margin across the different application fields (e.g., accuracy from 46.40% to 76.54% on the VGGFace2 test pose set, AUC from 0.7418 to 0.8162 on the NLST lung dataset). Additionally, empirical experiments show the MxDRNN is more robust to category-irrelevant attributes (e.g., expression and pose in face images), which may introduce difficulties for image classification and algorithm generalizability. The code is publicly available.
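The core trick, permuting an unordered set of intra-class images into several "dummy" sequence orders, one per learning path, can be sketched in a few lines. This is a toy stand-in (per-image features as a small array), not the published MxDRNN implementation.

```python
import numpy as np

def dummy_order_paths(features, n_paths, seed=0):
    """Create multiple permuted ('dummy') orderings of unordered intra-class
    images, one ordering per learning path.

    features: (n_images, feat_dim) per-image features (toy stand-in).
    Returns (n_paths, n_images, feat_dim): each path's RNN input sequence.
    """
    rng = np.random.default_rng(seed)
    paths = []
    for _ in range(n_paths):
        order = rng.permutation(len(features))  # a dummy sequence order
        paths.append(features[order])           # same images, new order
    return np.stack(paths)

feats = np.arange(12, dtype=float).reshape(4, 3)  # 4 images, 3-d features
seqs = dummy_order_paths(feats, n_paths=5)
# Each path sees the same image set, just in a different order.
```

Feeding each permuted sequence through its own RNN path and aggregating the paths is what makes the design insensitive to any single arbitrary ordering.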

5.
Neuroimage ; 194: 105-119, 2019 07 01.
Article in English | MEDLINE | ID: mdl-30910724

ABSTRACT

Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, which provides a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D based methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels); however, their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks were used in the SLANT method, in which each network learned contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min.
The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
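The spatial-tiling idea, one network per fixed spatial location with overlapping tiles fused back into a whole-brain label map, can be illustrated as follows. This is a minimal sketch (tiny volume, majority-vote fusion), assuming details the abstract does not specify; it is not the SLANT code.

```python
import numpy as np

def tile_origins(vol_shape, tile, stride):
    """Corner coordinates of overlapping 3D tiles covering a volume; each
    tile would be owned by one independent network (the SLANT idea)."""
    per_axis = []
    for length, t, s in zip(vol_shape, tile, stride):
        starts = list(range(0, length - t + 1, s))
        if starts[-1] != length - t:      # ensure the far edge is covered
            starts.append(length - t)
        per_axis.append(starts)
    return [(x, y, z) for x in per_axis[0] for y in per_axis[1] for z in per_axis[2]]

def fuse_majority(label_tiles, vol_shape, tile, origins, n_labels):
    """Fuse overlapping tile-wise label maps by per-voxel majority vote."""
    votes = np.zeros((n_labels,) + tuple(vol_shape), dtype=np.int32)
    for (x, y, z), lab in zip(origins, label_tiles):
        gx, gy, gz = np.meshgrid(np.arange(x, x + tile[0]),
                                 np.arange(y, y + tile[1]),
                                 np.arange(z, z + tile[2]), indexing="ij")
        np.add.at(votes, (lab, gx, gy, gz), 1)   # one vote per tile prediction
    return votes.argmax(axis=0)

origins = tile_origins((4, 4, 4), (2, 2, 2), (2, 2, 2))
tiles = [np.ones((2, 2, 2), dtype=int) for _ in origins]  # every tile votes label 1
fused = fuse_majority(tiles, (4, 4, 4), (2, 2, 2), origins, n_labels=2)
```

Because each network only ever sees one tile location, it can specialize in that region's anatomy, which is the stated answer to the single-network challenge.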


Subjects
Brain/anatomy & histology; Deep Learning; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Atlases as Topic; Humans; Magnetic Resonance Imaging/methods; Neuroimaging/methods
6.
J Digit Imaging ; 31(3): 304-314, 2018 06.
Article in English | MEDLINE | ID: mdl-29725960

ABSTRACT

High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX, which isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments.
The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
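Containerizing a spider boils down to launching it through a container runtime with only explicit bind mounts, so host libraries never leak in. A minimal command-builder sketch follows; the image name, spider name, and paths are illustrative assumptions, not the actual VUIIS CCI deployment.

```python
# Hypothetical sketch of invoking one encapsulated "spider" inside a
# container; all names and paths here are illustrative.
def spider_command(image, spider, inputs_dir, outputs_dir):
    """Build a container run command isolating a spider from host software."""
    return (
        f"singularity run --cleanenv "                      # drop host env vars
        f"--bind {inputs_dir}:/input:ro "                   # read-only inputs
        f"--bind {outputs_dir}:/output "                    # writable outputs
        f"{image} {spider} --input /input --output /output"
    )

cmd = spider_command("dax_spiders.simg", "fmri_preproc", "/data/in", "/data/out")
```

The same command string works on an HPC cluster or a local workstation, which is the portability property the paragraph above describes.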


Subjects
Diagnostic Imaging/methods; Image Processing, Computer-Assisted/instrumentation; Image Processing, Computer-Assisted/methods; Radiology Information Systems/instrumentation; Humans; Information Storage and Retrieval
7.
Article in English | MEDLINE | ID: mdl-39220623

ABSTRACT

Whole brain segmentation with magnetic resonance imaging (MRI) enables the non-invasive measurement of brain regions, including total intracranial volume (TICV) and posterior fossa volume (PFV). Enhancing existing whole brain segmentation methodology to incorporate intracranial measurements offers a heightened level of comprehensiveness in the analysis of brain structures. Despite its potential, the task of generalizing deep learning techniques for intracranial measurements faces data availability constraints due to the limited number of manually annotated atlases encompassing whole brain and TICV/PFV labels. In this paper, we enhance the hierarchical transformer UNesT for whole brain segmentation to segment the whole brain into 133 classes and estimate TICV/PFV simultaneously. To address the problem of data scarcity, the model is first pretrained on 4859 T1-weighted (T1w) 3D volumes sourced from 8 different sites. These volumes are processed through a multi-atlas segmentation pipeline for label generation, while TICV/PFV labels are unavailable. Subsequently, the model is finetuned with 45 T1w 3D volumes from the Open Access Series of Imaging Studies (OASIS), where both the 133 whole brain classes and TICV/PFV labels are available. We evaluate our method with Dice similarity coefficients (DSC). We show that our model is able to conduct precise TICV/PFV estimation while maintaining performance on the 132 brain regions at a comparable level. Code and trained model are available at: https://github.com/MASILab/UNesT/wholebrainSeg.
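The Dice similarity coefficient (DSC) used for evaluation here has a compact definition: twice the overlap of two label masks divided by their total size. A minimal sketch:

```python
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice similarity coefficient (DSC) for one label between two label maps."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                      # both empty: define as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[0, 1], [1, 1]])
truth = np.array([[0, 1], [1, 0]])
score = dice(pred, truth, label=1)      # 2*2 / (3+2) = 0.8
```

For a 133-class segmentation, this is computed once per label and the per-region scores are averaged or reported individually.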

8.
Article in English | MEDLINE | ID: mdl-39281711

ABSTRACT

Diffusion magnetic resonance imaging (dMRI) offers the ability to assess subvoxel brain microstructure through the extraction of biomarkers like fractional anisotropy, as well as to unveil brain connectivity by reconstructing white matter fiber trajectories. However, accurate analysis becomes challenging at the interface between cerebrospinal fluid and white matter, where the MRI signal originates from both the cerebrospinal fluid and the white matter partial volume. The presence of free-water partial volume effects introduces a substantial bias in estimating diffusion properties, thereby limiting the clinical utility of DWI. Moreover, current mathematical models often lack applicability to the single-shell acquisitions commonly encountered in clinical settings. Without appropriate regularization, direct model fitting becomes impractical. We propose a novel voxel-based deep learning method for mapping and correcting free-water partial volume contamination in DWI to address these limitations. This approach leverages data-driven techniques to reliably infer plausible free-water volumes across different diffusion MRI acquisition schemes, including single-shell acquisitions. Our evaluation demonstrates that the introduced methodology produces more consistent and plausible results than previous approaches. By effectively mitigating the impact of free-water partial volume effects, our approach enhances the accuracy and reliability of DWI analysis for single-shell dMRI, thereby expanding its applications in assessing brain microstructure and connectivity.
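The underlying free-water signal model is commonly written as a bi-tensor mixture. The sketch below fits it by brute-force grid search for a single voxel at one b-value; it is a toy illustration of the model and of why single-shell fitting is ill-posed (one measurement, two unknowns), not the authors' deep learning method.

```python
import numpy as np

D_WATER = 3.0e-3   # free-water diffusivity at body temperature, mm^2/s

def fit_free_water(signal, s0, b):
    """Grid-search the single-shell bi-tensor free-water model for one voxel:
    S/S0 = (1 - f) * exp(-b * Dt) + f * exp(-b * D_WATER).

    With a single b-value, one measurement constrains two unknowns (f, Dt),
    so many pairs fit equally well -- the degeneracy that motivates the
    regularization / data-driven approach described above.
    """
    fs = np.linspace(0.0, 1.0, 101)
    dts = np.linspace(0.1e-3, 2.5e-3, 97)
    F, DT = np.meshgrid(fs, dts, indexing="ij")
    pred = (1 - F) * np.exp(-b * DT) + F * np.exp(-b * D_WATER)
    err = (pred - signal / s0) ** 2
    i, j = np.unravel_index(err.argmin(), err.shape)
    return fs[i], dts[j]

# Synthetic voxel: 30% free water, tissue diffusivity 0.8e-3, b = 1000 s/mm^2.
b, s0, f_true, dt_true = 1000.0, 1.0, 0.3, 0.8e-3
sig = s0 * ((1 - f_true) * np.exp(-b * dt_true) + f_true * np.exp(-b * D_WATER))
f_hat, dt_hat = fit_free_water(sig, s0, b)
```

Any (f, Dt) pair returned reproduces the measured signal, but it need not be the true pair; that ambiguity is exactly what a learned prior resolves.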

9.
J Med Imaging (Bellingham) ; 11(2): 024008, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571764

ABSTRACT

Purpose: Two-dimensional single-slice abdominal computed tomography (CT) provides a detailed tissue map with high resolution, allowing quantitative characterization of relationships between health conditions and aging. However, longitudinal analysis of body composition changes using these scans is difficult due to positional variation between slices acquired in different years, which leads to different organs/tissues being captured. Approach: To address this issue, we propose C-SliceGen, which takes an arbitrary axial slice in the abdominal region as a condition and generates a pre-defined vertebral level slice by estimating structural changes in the latent space. Results: Our experiments on 2608 volumetric CT scans from two in-house datasets and 50 subjects from the 2015 Multi-Atlas Abdomen Labeling Challenge Beyond the Cranial Vault (BTCV) dataset demonstrate that our model can generate high-quality images that are realistic and similar to the target slices. We further evaluate our method's capability to harmonize longitudinal positional variation on 1033 subjects from the Baltimore Longitudinal Study of Aging (BLSA) dataset, which contains longitudinal single abdominal slices, and confirm that our method can harmonize the slice positional variance in terms of visceral fat area. Conclusion: This approach provides a promising direction for mapping slices from different vertebral levels to a target slice and reducing positional variance for single-slice longitudinal analysis. The source code is available at: https://github.com/MASILab/C-SliceGen.

10.
J Med Imaging (Bellingham) ; 11(4): 044008, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39185475

ABSTRACT

Purpose: In brain diffusion magnetic resonance imaging (dMRI), the volumetric and bundle analyses of whole-brain tissue microstructure and connectivity can be severely impeded by an incomplete field of view (FOV). We aim to develop a method for imputing the missing slices directly from existing dMRI scans with an incomplete FOV. We hypothesize that the imputed image with a complete FOV can improve whole-brain tractography for corrupted data with an incomplete FOV. Therefore, our approach provides a desirable alternative to discarding the valuable brain dMRI data, enabling subsequent tractography analyses that would otherwise be challenging or unattainable with corrupted data. Approach: We propose a framework based on a deep generative model that estimates the absent brain regions in dMRI scans with an incomplete FOV. The model is capable of learning both the diffusion characteristics in diffusion-weighted images (DWIs) and the anatomical features evident in the corresponding structural images for efficiently imputing missing slices of DWIs in the incomplete part of the FOV. Results: For evaluating the imputed slices, on the Wisconsin Registry for Alzheimer's Prevention (WRAP) dataset, the proposed framework achieved PSNR(b=0) = 22.397, SSIM(b=0) = 0.905, PSNR(b=1300) = 22.479, and SSIM(b=1300) = 0.893; on the National Alzheimer's Coordinating Center (NACC) dataset, it achieved PSNR(b=0) = 21.304, SSIM(b=0) = 0.892, PSNR(b=1300) = 21.599, and SSIM(b=1300) = 0.877. The proposed framework improved the tractography accuracy, as demonstrated by an increased average Dice score for 72 tracts (p < 0.001) on both the WRAP and NACC datasets. Conclusions: Results suggest that the proposed framework achieved sufficient imputation performance in brain dMRI data with an incomplete FOV for improving whole-brain tractography, thereby repairing the corrupted data.
Our approach achieved more accurate whole-brain tractography results with an extended and complete FOV and reduced the uncertainty when analyzing bundles associated with Alzheimer's disease.
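The PSNR values reported above follow the standard definition: peak signal power over mean squared error, in decibels. A minimal sketch with a toy image pair:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference slice and an
    imputed slice (higher is better)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return np.inf                   # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
deg = ref.copy()
deg[0, 0] = 0.8            # a single corrupted pixel
val = psnr(ref, deg)       # mse = 0.64/64 = 0.01 -> 10*log10(1/0.01) = 20 dB
```

For dMRI the metric is computed per b-value, which is why the paper reports separate PSNR/SSIM figures at b=0 and b=1300.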

11.
Article in English | MEDLINE | ID: mdl-39268356

ABSTRACT

The reconstruction kernel in computed tomography (CT) image generation determines the texture of the image. Consistency in reconstruction kernels is important because the underlying CT texture can impact measurements during quantitative image analysis. Harmonization (i.e., kernel conversion) minimizes differences in measurements due to inconsistent reconstruction kernels. Existing methods investigate the harmonization of CT scans within a single manufacturer or across multiple manufacturers. However, these methods require paired scans of hard and soft reconstruction kernels that are spatially and anatomically aligned. Additionally, a large number of models need to be trained across different kernel pairs within manufacturers. In this study, we adopt an unpaired image translation approach to investigate harmonization between and across reconstruction kernels from different manufacturers by constructing a multipath cycle generative adversarial network (GAN). We use hard and soft reconstruction kernels from the Siemens and GE vendors in the National Lung Screening Trial dataset, taking 50 scans from each reconstruction kernel to train the multipath cycle GAN. To evaluate the effect of harmonization, we harmonize 50 scans each from the Siemens hard kernel, GE soft kernel, and GE hard kernel to a reference Siemens soft kernel (B30f) and evaluate percent emphysema. We fit a linear model that considers age, smoking status, sex, and vendor, and perform an analysis of variance (ANOVA) on the emphysema scores. Our approach minimizes differences in emphysema measurement and highlights the impact of age, sex, smoking status, and vendor on emphysema quantification.
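The percent-emphysema endpoint is conventionally the fraction of lung voxels below a low-attenuation threshold, commonly -950 HU; hard kernels add noise that pushes voxels across this threshold, which is why kernel harmonization changes the score. A minimal sketch with a toy volume:

```python
import numpy as np

EMPHYSEMA_HU = -950   # common low-attenuation threshold for emphysema

def percent_emphysema(hu_volume, lung_mask):
    """Percent of lung voxels below -950 HU (a standard emphysema index)."""
    lung = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung < EMPHYSEMA_HU) / lung.size

vol = np.full((4, 4), -800.0)       # toy HU slice, mostly normal lung
vol[0, :2] = -970.0                 # 2 emphysematous voxels
mask = np.ones_like(vol, dtype=bool)
pe = percent_emphysema(vol, mask)   # 2 of 16 voxels -> 12.5
```

Comparing this score before and after kernel conversion is the quantitative check the ANOVA above is built on.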

12.
Med Image Anal ; 94: 103124, 2024 May.
Article in English | MEDLINE | ID: mdl-38428271

ABSTRACT

Analyzing high resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high resolution images by classifying bags of objects (i.e. sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20× magnification) of WSIs, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) A novel cross-scale MIL (CS-MIL) algorithm that integrates the multi-scale information and the inter-scale relationships is proposed; (2) A toy dataset with scale-specific morphological features is created and released to examine and visualize differential cross-scale attention; (3) Superior performance on both in-house and public datasets is demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL.
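The aggregation step of attention-based MIL across scales can be pictured as softmax-normalized pooling over one bag that mixes patches from several magnifications. This is a bare-bones sketch of that idea, not the CS-MIL architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_scale_pool(patch_feats, attn_logits):
    """Attention-weighted pooling of patch features from several
    magnifications into one bag-level embedding. Attention is normalized
    over the whole multi-scale bag, so informative scales dominate.

    patch_feats: (n_patches, feat_dim) features from patches of all scales.
    attn_logits: (n_patches,) unnormalized attention score per patch.
    """
    weights = softmax(attn_logits)
    return weights @ patch_feats        # (feat_dim,) bag embedding

feats = np.array([[1.0, 0.0],   # e.g., a 20x patch
                  [0.0, 1.0],   # e.g., a 10x patch
                  [1.0, 1.0]])  # e.g., a 5x patch
bag = cross_scale_pool(feats, np.zeros(3))   # uniform attention -> row mean
```

A bag classifier then operates on the pooled embedding; inspecting the learned weights per scale is what enables the differential cross-scale attention visualizations mentioned above.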


Subjects
Algorithms; Diagnostic Imaging; Humans
13.
Article in English | MEDLINE | ID: mdl-39268202

ABSTRACT

Understanding the way cells communicate, co-locate, and interrelate is essential to understanding human physiology. Hematoxylin and eosin (H&E) staining is ubiquitously available both for clinical studies and research. The Colon Nucleus Identification and Classification (CoNIC) Challenge has recently innovated on robust artificial intelligence labeling of six cell types on H&E stains of the colon. However, this is a very small fraction of the number of potential cell classification types. Specifically, the CoNIC Challenge is unable to classify epithelial subtypes (progenitor, endocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), or connective subtypes (fibroblasts, stromal). In this paper, we propose to use inter-modality learning to label previously un-labelable cell types on virtual H&E. We leveraged multiplexed immunofluorescence (MxIF) histology imaging to identify 14 subclasses of cell types. We performed style transfer to synthesize virtual H&E from MxIF and transferred the higher density labels from MxIF to these virtual H&E images. We then evaluated the efficacy of learning in this approach. We identified helper T and progenitor nuclei with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02) respectively on virtual H&E. This approach represents a promising step towards automating annotation in digital pathology.
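The positive predictive values reported above (e.g., 0.34 for helper T nuclei) follow the standard definition, TP / (TP + FP). A minimal sketch with toy nucleus calls:

```python
import numpy as np

def positive_predictive_value(pred_positive, truly_positive):
    """PPV = TP / (TP + FP): among nuclei the model calls a given cell type,
    the fraction that truly are that type."""
    called = np.count_nonzero(pred_positive)
    if called == 0:
        return 0.0
    tp = np.count_nonzero(pred_positive & truly_positive)
    return tp / called

# Toy nuclei: the model calls 3 of 4 nuclei "helper T"; 2 calls are correct.
pred = np.array([True, True, True, False])
truth = np.array([True, False, True, True])
ppv = positive_predictive_value(pred, truth)   # 2 / 3
```

Reporting prevalence alongside PPV, as the abstract does, matters because a low-prevalence class makes even a modest PPV substantially better than chance.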

14.
Proc Mach Learn Res ; 227: 1406-1422, 2024.
Article in English | MEDLINE | ID: mdl-38993526

ABSTRACT

Multiplex immunofluorescence (MxIF) is an advanced molecular imaging technique that can simultaneously provide biologists with multiple (i.e., more than 20) molecular markers on a single histological tissue section. Unfortunately, due to imaging restrictions, the more routinely used hematoxylin and eosin (H&E) stain is typically unavailable with MxIF on the same tissue section. As biological H&E staining is not feasible, previous efforts have been made to obtain H&E whole slide images (WSIs) from MxIF via deep learning empowered virtual staining. However, the tiling effect is a long-standing problem in high-resolution WSI-wise synthesis, and MxIF-to-H&E synthesis is no exception. Limited by computational resources, cross-stain image synthesis is typically performed at the patch level. Thus, discontinuous intensities may be visible along patch boundaries when the individual patches are assembled back into a WSI. In this work, we propose a deep learning based unpaired high-resolution image synthesis method to obtain virtual H&E WSIs from MxIF WSIs (each with 27 markers/stains) with reduced tiling effects. Briefly, we first extend the CycleGAN framework by adding simultaneous nuclei and mucin segmentation supervision as spatial constraints. Then, we introduce a random walk sliding window shifting strategy during the optimized inference stage to alleviate the tiling effects. The validation results show that our spatially constrained synthesis method achieves a 56% performance gain for the downstream cell segmentation task. The proposed inference method reduces the tiling effects while using 50% fewer computation resources without compromising performance. The proposed random sliding window inference method is a plug-and-play module, which can be generalized to other high-resolution WSI image synthesis applications. The source code and our proposed model are available at https://github.com/MASILab/RandomWalkSlidingWindow.git.
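The intuition behind shifted sliding-window inference is that randomly offsetting the patch grid between passes moves the seams, so averaging passes blurs patch-boundary discontinuities. A simplified 2D sketch of one shifted grid (a toy take on the random-walk idea, not the released implementation):

```python
import numpy as np

def shifted_grid(image_size, patch, max_shift, seed=0):
    """Patch origins for one inference pass with a random global shift, so
    patch seams land in different places across passes."""
    rng = np.random.default_rng(seed)
    dy, dx = rng.integers(0, max_shift + 1, size=2)   # random grid offset
    origins = []
    for y in range(dy, image_size[0] - patch + 1, patch):
        for x in range(dx, image_size[1] - patch + 1, patch):
            origins.append((y, x))
    return origins

# Average predictions over several shifted passes to suppress tiling seams.
passes = [shifted_grid((64, 64), patch=16, max_shift=8, seed=s) for s in range(3)]
```

Each pass costs one grid of patch inferences, so a small number of shifted passes trades modest compute for seam suppression.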

15.
Nat Commun ; 15(1): 7204, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39169060

ABSTRACT

Crohn's disease (CD) is a complex chronic inflammatory disorder with both gastrointestinal and extra-intestinal manifestations associated with immune dysregulation. Analyzing 202,359 cells from 170 specimens across 83 patients, we identify a distinct epithelial cell type in both the terminal ileum and ascending colon (hereafter 'LND') with high expression of LCN2, NOS2, and DUOX2 and of genes related to antimicrobial response and immunoregulation. LND cells, confirmed by in-situ RNA and protein imaging, are rare in non-IBD controls but expand in active CD; they actively interact with immune cells and specifically express IBD/CD susceptibility genes, suggesting a possible function in CD immunopathogenesis. Furthermore, we discover early and late LND subpopulations with different origins and developmental potential. A higher ratio of late-to-early LND cells correlates with better response to anti-TNF treatment. Our findings thus suggest a potential pathogenic role for LND cells in both Crohn's ileitis and colitis.


Subjects
Colon; Crohn Disease; Dual Oxidases; Epithelial Cells; Ileum; Lipocalin-2; Crohn Disease/pathology; Crohn Disease/genetics; Crohn Disease/immunology; Humans; Epithelial Cells/metabolism; Epithelial Cells/pathology; Colon/pathology; Ileum/pathology; Lipocalin-2/metabolism; Lipocalin-2/genetics; Dual Oxidases/genetics; Dual Oxidases/metabolism; Male; Nitric Oxide Synthase Type II/metabolism; Nitric Oxide Synthase Type II/genetics; Female; Adult; Tumor Necrosis Factor-alpha/metabolism; Intestinal Mucosa/pathology; Intestinal Mucosa/metabolism; Middle Aged
16.
IEEE J Biomed Health Inform ; 27(9): 4444-4453, 2023 09.
Article in English | MEDLINE | ID: mdl-37310834

ABSTRACT

Medical image segmentation, or computing voxel-wise semantic masks, is a fundamental yet challenging task in the medical imaging domain. To increase the ability of encoder-decoder neural networks to perform this task across large clinical cohorts, contrastive learning provides an opportunity to stabilize model initialization and enhance downstream task performance without ground-truth voxel-wise labels. However, multiple target objects with different semantic meanings and contrast levels may exist in a single image, which poses a problem for adapting traditional contrastive learning methods from prevalent "image-level classification" to "pixel-level segmentation". In this article, we propose a simple semantic-aware contrastive learning approach leveraging attention masks and image-wise labels to advance multi-object semantic segmentation. Briefly, we embed different semantic objects into different clusters rather than using traditional image-level embeddings. We evaluate our proposed method on a multi-organ medical image segmentation task with both in-house data and the MICCAI Challenge 2015 BTCV datasets. Compared with current state-of-the-art training strategies, our proposed pipeline yields a substantial improvement of 5.53% and 6.09% in Dice score on the two medical image segmentation cohorts, respectively (p-value < 0.01). The performance of the proposed method is further assessed on an external medical image cohort via the MICCAI Challenge FLARE 2021 dataset, achieving a substantial improvement from Dice 0.922 to 0.933 (p-value < 0.01).
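The key move, embedding each semantic object rather than the whole image, amounts to attention-masked pooling of the feature map, one embedding per object, which can then be pulled toward its class cluster. A minimal sketch of that pooling step (toy arrays, not the published model):

```python
import numpy as np

def object_embeddings(feat_map, attn_masks):
    """Pool one embedding per semantic object from a feature map using
    attention masks, so each object (not the whole image) is embedded --
    the step that lets objects be assigned to per-class clusters.

    feat_map:   (C, H, W) backbone features (toy stand-in).
    attn_masks: (n_obj, H, W) soft attention masks, one per object.
    Returns (n_obj, C) L2-normalized object embeddings.
    """
    embs = []
    for mask in attn_masks:
        weights = mask / max(mask.sum(), 1e-8)          # spatial weighting
        pooled = (feat_map * weights).sum(axis=(1, 2))  # (C,)
        embs.append(pooled / max(np.linalg.norm(pooled), 1e-8))
    return np.stack(embs)

rng = np.random.default_rng(0)
features = rng.random((8, 4, 4))
masks = np.zeros((2, 4, 4))
masks[0, :2] = 1.0   # object 1: top half of the image
masks[1, 2:] = 1.0   # object 2: bottom half
embeddings = object_embeddings(features, masks)
```

A contrastive loss then attracts each object embedding to its own semantic cluster and repels it from the others, in contrast to one embedding per image.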


Subjects
Diagnostic Imaging , Machine Learning , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Semantics , Diagnostic Imaging/methods , Datasets as Topic
17.
Article in English | MEDLINE | ID: mdl-37465093

ABSTRACT

Metabolic health is increasingly implicated as a risk factor across conditions from cardiology to neurology, and efficient assessment of body composition is critical to quantitatively characterizing these relationships. 2D low-dose single-slice computed tomography (CT) provides a high-resolution, quantitative tissue map, albeit with a limited field of view. Although numerous potential analyses have been proposed for quantifying image context, there has been no comprehensive study of the longitudinal variability of low-dose single-slice CT with automated segmentation. We studied a total of 1816 slices from 1469 subjects of the Baltimore Longitudinal Study of Aging (BLSA) abdominal dataset using supervised deep learning-based segmentation and an unsupervised clustering method. 300 of the 1469 subjects, with a two-year gap between their first two scans, were selected to evaluate longitudinal variability, with measurements including the intraclass correlation coefficient (ICC) and coefficient of variation (CV) in terms of tissue/organ size and mean intensity. We showed that our segmentation methods are stable in longitudinal settings, with Dice scores ranging from 0.821 to 0.962 for thirteen target abdominal tissue structures. We observed high variability in most organs, with ICC<0.5, and low variability in the areas of muscle, abdominal wall, fat, and body mask, with average ICC≥0.8. We found that the variability in organs is highly related to the cross-sectional position of the 2D slice. Our efforts pave the way for quantitative exploration and quality control to reduce uncertainties in longitudinal analysis.
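The ICC and CV measurements named above can be sketched as follows. This is a generic one-way random-effects ICC(1,1), which may differ from the exact ICC variant used in the study, and the toy "tissue area" data are invented for illustration:

```python
import numpy as np

def icc_1_1(scan1, scan2):
    """One-way random-effects ICC(1,1) for two repeated measurements per
    subject: (MSB - MSW) / (MSB + (k-1) * MSW)."""
    data = np.column_stack([scan1, scan2])
    n, k = data.shape
    subj_means = data.mean(axis=1)
    msb = k * np.sum((subj_means - data.mean()) ** 2) / (n - 1)      # between-subject
    msw = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

def cv(values):
    """Coefficient of variation: standard deviation over mean."""
    return np.std(values) / np.mean(values)

rng = np.random.default_rng(1)
true_size = rng.normal(100, 15, 300)            # simulated tissue areas
stable = (true_size + rng.normal(0, 2, 300),    # small rescan noise
          true_size + rng.normal(0, 2, 300))
noisy = (true_size + rng.normal(0, 30, 300),    # large rescan noise
         true_size + rng.normal(0, 30, 300))
print(icc_1_1(*stable), icc_1_1(*noisy))  # high ICC vs. low ICC
```

With small rescan noise the between-subject variance dominates and ICC approaches 1 (a "stable" tissue); with large rescan noise the within-subject variance dominates and ICC falls toward 0, mirroring the high-variability organs reported above.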

18.
Article in English | MEDLINE | ID: mdl-37465097

ABSTRACT

With the confounding effects of demographics across large-scale imaging surveys, substantial variation is demonstrated in the volumetric structure of the orbit and eye anthropometry. Such variability makes it more difficult to localize the anatomical features of the eye organs for population analysis. To accommodate the variability of eye organs with stable registration transfer, we propose an unbiased eye atlas template followed by a hierarchical coarse-to-fine approach to provide generalized eye organ context across populations. We retrieved volumetric scans from 1842 healthy patients to generate an eye atlas template with minimal bias. Briefly, we select 20 subject scans and use an iterative approach to generate an initial unbiased template. We then perform metric-based registration of the remaining samples to the unbiased template and generate coarse registered outputs. The coarse registered outputs are further leveraged to train a deep probabilistic network, which aims to refine the organ deformation in an unsupervised setting. Computed tomography (CT) scans of 100 de-identified subjects are used to generate and evaluate the unbiased atlas template with the hierarchical pipeline. The refined registration shows stable transfer of the eye organs, which were well localized in the high-resolution (0.5 mm³) atlas space and demonstrated a significant improvement of 2.37% Dice for inverse label transfer performance. The subject-wise qualitative representations with surface rendering successfully demonstrate the transfer details of the organ context and show the applicability of generalizing the morphological variation across patients.
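The iterative unbiased-template construction can be illustrated with a deliberately simplified 1D toy in which "registration" is just a circular shift chosen by cross-correlation; this is only a sketch of the register-then-average loop, not the paper's metric-based volumetric registration, and all names here are invented:

```python
import numpy as np

def align(signal, template):
    """Toy registration: find the circular shift of `signal` that maximizes
    cross-correlation with `template`, and apply it."""
    scores = [np.dot(np.roll(signal, -s), template) for s in range(len(signal))]
    return np.roll(signal, -int(np.argmax(scores)))

def unbiased_template(signals, iters=5):
    """Iteratively register every subject to the current mean and update the
    mean, so the template is not biased toward any single subject."""
    template = signals[0].copy()
    for _ in range(iters):
        template = np.mean([align(s, template) for s in signals], axis=0)
    return template

rng = np.random.default_rng(2)
base = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)  # shared "anatomy"
subjects = [np.roll(base, rng.integers(-8, 9)) for _ in range(20)]
template = unbiased_template(subjects)
print(template.max() > 0.9)  # aligned average keeps a sharp peak
```

Averaging without alignment smears the peak across the random shifts; registering each subject to the evolving mean first recovers a sharp average, which is the essence of building an atlas with minimal bias before any finer refinement.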

19.
Comput Biol Med ; 152: 106414, 2023 01.
Article in English | MEDLINE | ID: mdl-36525831

ABSTRACT

BACKGROUND: Anterior temporal lobe resection is an effective treatment for temporal lobe epilepsy. The post-surgical structural changes can influence follow-up treatment. Capturing post-surgical changes necessitates a well-established cortical shape correspondence between pre- and post-surgical surfaces. Yet most cortical surface registration methods are designed for normal neuroanatomy. Surgical changes can introduce wide-ranging artifacts in correspondence, for which conventional surface registration methods may not work as intended. METHODS: In this paper, we propose a novel particle method for one-to-one dense shape correspondence between pre- and post-surgical surfaces with temporal lobe resection. The proposed method can handle partial structural abnormality involving non-rigid changes. Unlike existing particle methods that use implicit particle adjacency, we consider explicit particle adjacency to establish a smooth correspondence. Moreover, we propose hierarchical optimization of particles, rather than full optimization of all particles at once, to avoid being trapped in locally optimal particle updates. RESULTS: We evaluate the proposed method on 25 pairs of T1-MRI with pre- and post-simulated resection of the anterior temporal lobe and 25 pairs from patients with actual resection. On simulated data, we show improved accuracy over several cortical regions compared to existing surface registration methods, with an ROI boundary Hausdorff distance of 4.29 mm and an average Dice similarity coefficient of 0.841. In 25 patients with actual resection of the anterior temporal lobe, our method shows improved shape correspondence in qualitative and quantitative evaluation of the parcellation-off ratio (average value 0.061) and cortical thickness changes. We also show better smoothness of the correspondence, without self-intersection, compared with point-wise matching methods, which show various degrees of self-intersection.
CONCLUSION: The proposed method establishes a promising one-to-one dense shape correspondence for temporal lobe resection. The resulting correspondence is smooth and free of self-intersection. The proposed hierarchical optimization strategy can accelerate optimization and improve its accuracy. According to the results on the paired surfaces with temporal lobe resection, the proposed method outperforms the compared methods and is more reliable in capturing cortical thickness changes.
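The ROI boundary Hausdorff distance used for evaluation above is a standard point-set metric; a minimal implementation (not the authors' code, with invented toy boundaries) looks like this:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (e.g. ROI
    boundary vertices): the largest nearest-neighbor gap in either direction."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
shifted = square + [0.5, 0.0]   # same boundary, shifted by 0.5
print(hausdorff(square, shifted))  # -> 0.5
```

Because it takes a maximum over worst-case gaps, the Hausdorff distance is sensitive to even a single badly matched boundary point, which makes it a strict complement to region-overlap scores such as Dice.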


Subjects
Epilepsy, Temporal Lobe , Temporal Lobe , Humans , Temporal Lobe/diagnostic imaging , Temporal Lobe/surgery , Epilepsy, Temporal Lobe/diagnostic imaging , Epilepsy, Temporal Lobe/surgery , Magnetic Resonance Imaging/methods , Treatment Outcome
20.
Article in English | MEDLINE | ID: mdl-37324550

ABSTRACT

The Tangram algorithm is a benchmark method for aligning single-cell (sc/snRNA-seq) data to various forms of spatial data collected from the same region. With this alignment, annotations of the single-cell data can be projected onto the spatial data. However, the cell composition (cell-type ratio) of the single-cell data and the spatial data might differ because of heterogeneous cell distribution. Whether the Tangram algorithm can be adapted when the two datasets have different cell-type ratios has not been discussed in previous works. In our practical application, which maps the cell-type classification results of single-cell data to multiplex immunofluorescence (MxIF) spatial data, the cell-type ratios were different even though the samples were taken from adjacent areas. In this work, both simulation and empirical validation were conducted to quantitatively explore the impact of a mismatched cell-type ratio on Tangram mapping in different situations. Results show that the cell-type ratio difference has a negative influence on classification accuracy.
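The core effect reported above can be illustrated with a toy quota-based assignment, a crude stand-in for composition-aware mapping rather than the actual Tangram algorithm; the function, centroids, and counts are invented for the example:

```python
import numpy as np

def quota_assign(scores, ref_counts):
    """Assign each spatial cell a type under per-type quotas taken from the
    reference (single-cell) ratio, most confident (cell, type) pairs first."""
    n, k = scores.shape
    labels = -np.ones(n, int)
    remaining = ref_counts.copy()
    flat = np.argsort(scores, axis=None)[::-1]          # best pairs first
    for i, t in zip(*np.unravel_index(flat, scores.shape)):
        if labels[i] == -1 and remaining[t] > 0:
            labels[i], remaining[t] = t, remaining[t] - 1
    return labels

rng = np.random.default_rng(3)
centroids = np.array([[0.0, 0.0], [4.0, 4.0]])     # two cell-type profiles
true = np.array([0] * 80 + [1] * 20)               # spatial ratio is 80:20
cells = centroids[true] + rng.normal(0, 1, (100, 2))
scores = -np.linalg.norm(cells[:, None] - centroids[None], axis=-1)
matched = quota_assign(scores, np.array([80, 20]))   # reference ratio matches
mismatch = quota_assign(scores, np.array([50, 50]))  # reference ratio differs
print((matched == true).mean(), (mismatch == true).mean())
```

When the reference quotas match the true spatial composition, nearly every cell is labeled correctly; forcing a 50:50 reference ratio onto an 80:20 region caps the achievable accuracy at 70%, since 30 majority-type cells must be mislabeled, which mirrors the negative effect of ratio mismatch described in the abstract.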
