Results 1 - 15 of 15
1.
J Med Imaging (Bellingham) ; 11(1): 014005, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38188934

ABSTRACT

Purpose: Diffusion-weighted magnetic resonance imaging (DW-MRI) is a critical imaging method for capturing and modeling tissue microarchitecture at a millimeter scale. A common practice to model the measured DW-MRI signal is via the fiber orientation distribution function (fODF). This function is the essential first step for downstream tractography and connectivity analyses. With recent advances in data sharing, large-scale multisite DW-MRI datasets are being made available for multisite studies. However, measurement variabilities (e.g., inter- and intrasite variability, hardware performance, and sequence design) are inevitable during the acquisition of DW-MRI. Most existing model-based methods [e.g., constrained spherical deconvolution (CSD)] and learning-based methods (e.g., deep learning) do not explicitly consider such variabilities in fODF modeling, which consequently leads to inferior performance on multisite and/or longitudinal diffusion studies. Approach: In this paper, we propose a data-driven deep CSD method to explicitly constrain the scan-rescan variabilities for a more reproducible and robust estimation of brain microstructure from repeated DW-MRI scans. Specifically, the proposed method introduces a three-dimensional volumetric scanner-invariant regularization scheme during the fODF estimation. We study the Human Connectome Project (HCP) young adults test-retest group as well as the MASiVar dataset (with inter- and intrasite scan/rescan data). The Baltimore Longitudinal Study of Aging dataset is employed for external validation. Results: In the experiments, the proposed data-driven framework outperforms the existing benchmarks in repeated fODF estimation. By introducing the contrastive loss with scan/rescan data, the proposed method achieves higher consistency while maintaining higher angular correlation coefficients with the CSD modeling. The proposed method is also assessed on the downstream connectivity analysis and shows increased performance in distinguishing subjects with different biomarkers. Conclusion: We propose a deep CSD method that explicitly reduces scan-rescan variabilities, so as to model a more reproducible and robust brain microstructure from repeated DW-MRI scans. The plug-and-play design of the proposed approach is potentially applicable to a wider range of data harmonization problems in neuroimaging.
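The scanner-invariant regularization above pairs fODF estimates from repeated scans of the same subject. Below is a minimal sketch of how such a scan/rescan consistency term might be combined with a reconstruction loss, assuming PyTorch; the function name, weighting, and cosine-based consistency term are illustrative, not the paper's exact contrastive formulation.

```python
import torch
import torch.nn.functional as F

def scan_rescan_loss(fodf_scan, fodf_rescan, fodf_target, weight=0.1):
    # Reconstruction term: match the CSD-style target fODF coefficients.
    recon = F.mse_loss(fodf_scan, fodf_target)
    # Consistency term: penalize disagreement between scan/rescan estimates.
    consistency = 1.0 - F.cosine_similarity(fodf_scan, fodf_rescan, dim=-1).mean()
    return recon + weight * consistency
```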

2.
Article in English | MEDLINE | ID: mdl-37324550

ABSTRACT

The Tangram algorithm is a benchmark method for aligning single-cell (sc/snRNA-seq) data to various forms of spatial data collected from the same region. With this alignment, the annotation of the single-cell data can be projected onto the spatial data. However, the cell composition (cell-type ratio) of the single-cell data and the spatial data might differ because of heterogeneous cell distributions. Whether the Tangram algorithm can be adapted when the two datasets have different cell-type ratios has not been discussed in previous works. In our practical application, which maps the cell-type classification results of single-cell data to multiplex immunofluorescence (MxIF) spatial data, the cell-type ratios were different even though the samples were taken from adjacent areas. In this work, both simulation and empirical validation were conducted to quantitatively explore the impact of mismatched cell-type ratios on Tangram mapping in different situations. The results show that the cell-type ratio difference has a negative influence on classification accuracy.
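One way to simulate the mismatched cell-type ratios studied here is to subsample one dataset to a prescribed ratio before mapping. A hedged sketch assuming NumPy; `subsample_to_ratio` and its interface are hypothetical, not taken from the paper.

```python
import numpy as np

def subsample_to_ratio(labels, target_ratio, seed=0):
    # labels: array of cell-type labels; target_ratio: {cell_type: fraction}.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    # Largest total size achievable without oversampling any class.
    n_total = min(int((labels == t).sum() / r) for t, r in target_ratio.items())
    keep = [rng.choice(np.flatnonzero(labels == t),
                       size=int(n_total * r), replace=False)
            for t, r in target_ratio.items()]
    return np.concatenate(keep)  # indices of the retained cells
```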

3.
Prog Biomed Eng (Bristol) ; 5(2)2023 Apr 11.
Article in English | MEDLINE | ID: mdl-37360402

ABSTRACT

The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary, data produced during routine practice. For instance, personalized diagnosis and treatment planning for a single cancer patient rely on various images (e.g., radiology, pathology, and camera images) and non-image data (e.g., clinical and genomic data). However, such decision-making procedures can be subjective and qualitative, and have large inter-subject variabilities. With the recent advances in multimodal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews recent studies addressing this question. Briefly, the review covers (a) an overview of current multimodal learning workflows, (b) a summary of multimodal fusion methods, (c) a discussion of performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.
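Among the fusion methods such a review summarizes, decision-level (late) fusion is one of the simplest. A toy PyTorch sketch of late fusion over an image feature vector and a genomic feature vector; all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    # Each modality gets its own encoder/classifier; predictions are
    # combined at the decision level by averaging the logits.
    def __init__(self, img_dim, gen_dim, n_classes):
        super().__init__()
        self.img_head = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU(),
                                      nn.Linear(64, n_classes))
        self.gen_head = nn.Sequential(nn.Linear(gen_dim, 64), nn.ReLU(),
                                      nn.Linear(64, n_classes))

    def forward(self, img_feat, gen_feat):
        return (self.img_head(img_feat) + self.gen_head(gen_feat)) / 2
```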

4.
Article in English | MEDLINE | ID: mdl-37228707

ABSTRACT

Diffusion-weighted magnetic resonance imaging (DW-MRI) captures tissue microarchitecture at millimeter scale. With recent advances in data sharing, large-scale multi-site DW-MRI datasets are being made available for multi-site studies. However, DW-MRI suffers from measurement variability (e.g., inter- and intra-site variability, hardware performance, and sequence design), which consequently yields inferior performance on multi-site and/or longitudinal diffusion studies. In this study, we propose a novel, deep learning-based method to harmonize DW-MRI signals for a more reproducible and robust estimation of microstructure. Our method introduces a data-driven scanner-invariant regularization scheme for a more robust fiber orientation distribution function (FODF) estimation. We study the Human Connectome Project (HCP) young adults test-retest group as well as the MASiVar dataset (with inter- and intra-site scan/rescan data). Eighth-order spherical harmonic coefficients are employed as the data representation. The results show that the proposed harmonization approach maintains higher angular correlation coefficients (ACC) with the ground truth signals (0.954 versus 0.942), while achieving higher consistency of FODF signals for intra-scanner data (0.891 versus 0.826), as compared with the baseline supervised deep learning scheme. Furthermore, the proposed data-driven framework is flexible and potentially applicable to a wider range of data harmonization problems in neuroimaging.
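The reported ACC compares spherical-harmonic representations of FODF signals. A minimal sketch of one common definition, assuming NumPy and real SH coefficient vectors with the l = 0 term removed; the paper's exact convention may differ.

```python
import numpy as np

def angular_correlation(sh_u, sh_v, eps=1e-8):
    # sh_u, sh_v: real spherical-harmonic coefficient vectors with the
    # l = 0 term dropped, as is common for this metric.
    num = np.sum(sh_u * sh_v)
    den = np.sqrt(np.sum(sh_u ** 2) * np.sum(sh_v ** 2)) + eps
    return num / den
```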

5.
IEEE Trans Biomed Eng ; 70(9): 2636-2644, 2023 09.
Article in English | MEDLINE | ID: mdl-37030838

ABSTRACT

Comprehensive semantic segmentation of renal pathological images is challenging due to the heterogeneous scales of the objects. For example, on a whole slide image (WSI), the cross-sectional areas of glomeruli can be 64 times larger than those of the peritubular capillaries, making it impractical to segment both objects in the same patch at the same scale. To handle this scaling issue, prior studies have typically trained multiple segmentation networks to match the optimal pixel resolution of heterogeneous tissue types. This multi-network solution is resource-intensive and fails to model the spatial relationships between tissue types. In this article, we propose the Omni-Seg network, a scale-aware dynamic neural network that achieves multi-object (six tissue types) and multi-scale (5× to 40×) pathological image segmentation via a single neural network. The contribution of this article is three-fold: (1) a novel scale-aware controller is proposed to generalize the dynamic neural network from single-scale to multi-scale; (2) semi-supervised consistency regularization of pseudo-labels is introduced to model the inter-scale correlation of unannotated tissue types in a single end-to-end learning paradigm; and (3) superior scale-aware generalization is evidenced by directly applying a model trained on human kidney images to mouse kidney images, without retraining. By learning from 150,000 human pathological image patches of six tissue types at three different resolutions, our approach achieved superior segmentation performance according to both human visual assessment and image-omics evaluation (i.e., spatial transcriptomics).
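For intuition on the scale-aware controller idea, the sketch below generates the weights of a small dynamic segmentation head from a one-hot (tissue class, scale) code, assuming PyTorch. It is a simplified illustration; Omni-Seg's actual controller and head differ in architecture and dimensions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAwareController(nn.Module):
    # Maps a one-hot (tissue-class, scale) code to the parameters of a
    # per-sample dynamic 1x1 convolution head over backbone features.
    def __init__(self, n_classes=6, n_scales=4, feat_ch=32):
        super().__init__()
        self.feat_ch = feat_ch
        self.embed = nn.Linear(n_classes + n_scales, 64)
        self.to_params = nn.Linear(64, feat_ch + 1)  # kernel weights + bias

    def forward(self, features, task_code):
        # features: (B, feat_ch, H, W); task_code: (B, n_classes + n_scales)
        p = self.to_params(torch.relu(self.embed(task_code)))
        w = p[:, :self.feat_ch].reshape(-1, 1, self.feat_ch, 1, 1)
        b = p[:, self.feat_ch:]
        out = [F.conv2d(f.unsqueeze(0), w[i], b[i])
               for i, f in enumerate(features)]
        return torch.cat(out, 0)  # (B, 1, H, W) segmentation logits
```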


Subject(s)
Kidney; Neural Networks, Computer; Humans; Animals; Mice; Kidney/diagnostic imaging; Image Processing, Computer-Assisted
6.
Article in English | MEDLINE | ID: mdl-38545337

ABSTRACT

Deep neural networks (DNNs) are currently deployed on physical computational units (e.g., CPUs and GPUs). Such a design can lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and drones. Recent advances in optical computational units (e.g., metamaterials) have shed light on energy-free and light-speed neural networks. However, the digital design of a metamaterial neural network (MNN) is fundamentally constrained by physical limitations such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNNs (e.g., light-speed computation) are not fully explored via standard 3×3 convolution kernels. In this paper, we propose a novel large-kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parameterization and network compression, while explicitly considering the optical limitations. The new digital-learning scheme maximizes the learning capacity of the MNN while modeling the physical restrictions of meta-optics. With the proposed LMNN, the computational cost of the convolutional front end can be offloaded into fabricated optical hardware. Experimental results on two publicly available datasets demonstrate that the optimized hybrid design improves classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free and light-speed AI.
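Model re-parameterization of the kind mentioned above typically folds parallel convolution branches into one large kernel after training, so the merged kernel can be realized as a single optical element. A minimal PyTorch sketch under that assumption; the shapes and two-branch setup are illustrative.

```python
import torch
import torch.nn.functional as F

def merge_kernels(k_large, k_small):
    # Zero-pad the small kernel to the large kernel's size and add them;
    # because convolution is linear, the merged kernel reproduces the sum
    # of the two parallel branches at inference time.
    pad = (k_large.shape[-1] - k_small.shape[-1]) // 2
    return k_large + F.pad(k_small, [pad] * 4)

k5 = torch.randn(8, 3, 5, 5)      # large-kernel branch
k3 = torch.randn(8, 3, 3, 3)      # small parallel branch
k_merged = merge_kernels(k5, k3)  # a single equivalent 5x5 kernel
```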

7.
Int J Remote Sens ; 44(6): 1922-1938, 2023.
Article in English | MEDLINE | ID: mdl-38524866

ABSTRACT

Archaeology has long faced fundamental issues of sampling and scalar representation. Traditionally, local-to-regional-scale views of settlement patterns have been produced through systematic pedestrian surveys. Recently, systematic manual survey of satellite and aerial imagery has enabled continuous distributional views of archaeological phenomena at interregional scales. However, such "brute force" manual imagery surveys are both time- and labor-intensive, as well as prone to inter-observer differences in sensitivity and specificity. The development of self-supervised learning methods (e.g., contrastive learning) offers a scalable scheme for locating archaeological features using unlabeled satellite and historical aerial images. However, archaeological features generally occupy only a very small proportion of the landscape, and modern contrastive self-supervised learning approaches typically yield inferior performance on such highly imbalanced datasets. In this work, we propose a framework to address this long-tail problem. As opposed to existing contrastive learning approaches that typically treat the labeled and unlabeled data separately, our proposed method reforms the learning paradigm under a semi-supervised setting to fully utilize the precious annotated data (<7% in our setting). Specifically, the highly unbalanced nature of the data is employed as prior knowledge to form pseudo negative pairs by ranking the similarities between unannotated image patches and annotated anchor images. In this study, we used 95,358 unlabeled images and 5,830 labeled images to address the problem of detecting ancient buildings in a long-tailed satellite image dataset. From the results, our semi-supervised contrastive learning model achieved a promising testing balanced accuracy of 79.0%, a 3.8% improvement over state-of-the-art approaches.
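The pseudo negative pairing described above can be sketched as a similarity ranking against an annotated anchor. A hedged NumPy illustration; the embedding inputs and the exact selection rule are assumptions, not the paper's implementation.

```python
import numpy as np

def pseudo_negatives(anchor_emb, unlabeled_embs, k):
    # Cosine similarity between one annotated anchor and all unlabeled
    # patch embeddings; the k least-similar patches become pseudo negatives.
    a = anchor_emb / np.linalg.norm(anchor_emb)
    u = unlabeled_embs / np.linalg.norm(unlabeled_embs, axis=1, keepdims=True)
    return np.argsort(u @ a)[:k]  # indices of the k least-similar patches
```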

8.
Med Image Comput Comput Assist Interv ; 14225: 497-507, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38529367

ABSTRACT

Multi-class cell segmentation in high-resolution gigapixel whole slide images (WSIs) is critical for various clinical applications. Training such AI models typically requires labor-intensive pixel-wise manual annotation from experienced domain experts (e.g., pathologists). Moreover, such annotation is error-prone when differentiating fine-grained cell types (e.g., podocytes and mesangial cells) with the naked eye. In this study, we assess the feasibility of democratizing pathological AI deployment by using only lay annotators (annotators without medical domain knowledge). The contribution of this paper is threefold: (1) we propose a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) the proposed method integrates gigapixel-level molecular-morphology cross-modality registration, molecular-informed annotation, and a molecular-oriented segmentation model, achieving significantly superior performance with three lay annotators as compared with two experienced pathologists; and (3) a deep corrective learning (learning with imperfect labels) method is proposed to further improve segmentation performance using partially annotated noisy data. In the experiments, our learning method achieved F1 = 0.8496 using molecular-informed annotations from lay annotators, better than the conventional morphology-based annotations (F1 = 0.7015) from experienced pathologists. Our method democratizes the development of pathological segmentation deep models to the lay annotator level, which consequently scales up the learning process similarly to a non-medical computer vision task. The official implementation and cell annotations are publicly available at https://github.com/hrlblab/MolecularEL.
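Learning from partial labels of the kind produced by lay annotators is often implemented by masking the loss to annotated pixels only. A minimal PyTorch sketch of that idea; it is illustrative, not the paper's corrective-learning method.

```python
import torch
import torch.nn.functional as F

def partial_label_loss(logits, labels, labeled_mask):
    # logits: (B, C, H, W); labels: (B, H, W) class indices;
    # labeled_mask: (B, H, W) float in {0, 1}, 1 where a label exists.
    loss = F.cross_entropy(logits, labels, reduction="none")
    return (loss * labeled_mask).sum() / labeled_mask.sum().clamp(min=1)
```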

9.
Med Image Learn Ltd Noisy Data (2023) ; 14307: 82-92, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38523773

ABSTRACT

Many anomaly detection approaches, especially deep learning methods, have recently been developed to identify abnormal image morphology by employing only normal images during training. Unfortunately, many prior anomaly detection methods were optimized for a specific "known" abnormality (e.g., brain tumor, bone fracture, cell type). Moreover, even though only normal images were used in the training process, abnormal images were often employed during validation (e.g., epoch selection, hyper-parameter tuning), which might unintentionally leak the supposedly "unknown" abnormality. In this study, we investigated these two essential aspects of universal anomaly detection in medical images by (1) comparing various anomaly detection methods across four medical datasets, (2) investigating the inevitable but often neglected issue of how to unbiasedly select the optimal anomaly detection model during validation using only normal images, and (3) proposing a simple decision-level ensemble method to leverage the advantages of different kinds of anomaly detection without knowing the abnormality. The results of our experiments indicate that none of the evaluated methods consistently achieved the best performance across all datasets. Our proposed method enhanced the robustness of performance in general (average AUC 0.956).
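A decision-level ensemble like the one proposed can be as simple as score normalization followed by averaging. A hedged NumPy sketch under that assumption; the paper's exact combination rule may differ.

```python
import numpy as np

def ensemble_scores(score_lists):
    # z-normalize each detector's anomaly scores so they share a scale,
    # then average them at the decision level.
    z = [(s - s.mean()) / (s.std() + 1e-8) for s in map(np.asarray, score_lists)]
    return np.mean(z, axis=0)
```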

10.
Article in English | MEDLINE | ID: mdl-38606193

ABSTRACT

Deep-learning techniques have been used widely to alleviate the labor-intensive and time-consuming manual annotation required for pixel-level tissue characterization. Our previous study introduced an efficient single dynamic network, Omni-Seg, that achieved multi-class multi-scale pathological segmentation with less computational complexity. However, Omni-Seg still follows a patch-wise segmentation paradigm, and its pipeline is time-consuming when providing segmentation for whole slide images (WSIs). In this paper, we propose an enhanced version of the Omni-Seg pipeline that reduces repetitive computing processes and utilizes a GPU to accelerate the model's prediction, for both better model performance and faster speed. The innovative contribution of our proposed method is two-fold: (1) a Docker container is released for end-to-end slide-wise multi-tissue segmentation of WSIs; and (2) the pipeline is deployed on a GPU to accelerate prediction, achieving better segmentation quality in less time. The proposed accelerated implementation reduced the average processing time (at the testing stage) of a standard needle biopsy WSI from 2.3 hours to 22 minutes, using 35 WSIs from the Kidney Tissue Atlas (KPMP) dataset. The source code and the Docker container have been made publicly available at https://github.com/ddrrnn123/Omni-Seg.
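The reported speed-up comes largely from replacing patch-by-patch CPU inference with batched GPU inference. A minimal PyTorch sketch of that pattern; the function name and interface are illustrative.

```python
import torch

def predict_wsi_patches(model, patches, batch_size=32, device="cuda"):
    # patches: (N, C, H, W) tensor of tiles extracted from one WSI.
    model = model.to(device).eval()
    outs = []
    with torch.no_grad():
        for i in range(0, len(patches), batch_size):
            outs.append(model(patches[i:i + batch_size].to(device)).cpu())
    return torch.cat(outs)  # per-patch predictions, in tile order
```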

11.
Article in English | MEDLINE | ID: mdl-36304178

ABSTRACT

Multi-modal learning (e.g., integrating pathological images with genomic features) tends to improve the accuracy of cancer diagnosis and prognosis compared to learning with a single modality. However, missing data is a common problem in clinical practice; not every patient has all modalities available. Most previous works directly discarded samples with missing modalities, which loses the information in these data and increases the likelihood of overfitting. In this work, we generalize multi-modal learning in cancer diagnosis with the capacity to deal with missing data, using histological images and genomic data. Our integrated model can utilize all available data from patients with both complete and partial modalities. Experiments on the public TCGA-GBM and TCGA-LGG datasets show that data with missing modalities can contribute to multi-modal learning, improving model performance in grade classification of glioma.
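One common way to use samples with partial modalities is to fuse only the embeddings that are present. A toy PyTorch sketch of such missing-aware fusion; the architecture is illustrative, not the paper's integrated model.

```python
import torch
import torch.nn as nn

class MissingAwareFusion(nn.Module):
    # Average only the modality embeddings that are actually present,
    # so patients with partial modalities still contribute to training.
    def __init__(self, dims, out_dim):
        super().__init__()
        self.encoders = nn.ModuleList(nn.Linear(d, out_dim) for d in dims)

    def forward(self, inputs, present):
        # inputs: list of (B, d_i) tensors (zero-filled when missing);
        # present: (B, n_modalities) float mask of available modalities.
        embs = torch.stack([enc(x) for enc, x in zip(self.encoders, inputs)], 1)
        mask = present.unsqueeze(-1)
        return (embs * mask).sum(1) / mask.sum(1).clamp(min=1)
```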

12.
J Med Imaging (Bellingham) ; 9(5): 052408, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35747553

ABSTRACT

Purpose: The quantitative detection, segmentation, and characterization of glomeruli from high-resolution whole slide images (WSIs) play essential roles in computer-assisted diagnosis and scientific research in digital renal pathology. Historically, such comprehensive quantification has required extensive programming skills to handle heterogeneous and customized computational tools. To bridge this gap for non-technical users, we develop the Glo-In-One toolkit to achieve holistic glomerular detection, segmentation, and characterization via a single line of command. Additionally, we release a large-scale collection of 30,000 unlabeled glomerular images to further facilitate the algorithmic development of self-supervised deep learning. Approach: The inputs of the Glo-In-One toolkit are WSIs, while the outputs are (1) WSI-level multi-class circle glomerular detection results (which can be directly manipulated with ImageScope), (2) glomerular image patches with segmentation masks, and (3) different lesion types. The current version provides fine-grained global glomerulosclerosis (GGS) characterization, covering solidified GGS (associated with hypertension-related injury), disappearing GGS (a further end result of solidified GGS becoming contiguous with fibrotic interstitium), and obsolescent GGS (nonspecific GGS increasing with aging). To boost the performance of the Glo-In-One toolkit, we introduce self-supervised deep learning to glomerular quantification via large-scale web image mining. Results: The fine-grained GGS classification model achieved decent performance compared with baseline supervised methods while using only 10% of the annotated data. Glomerular detection achieved an average precision of 0.627 with circle representations, while glomerular segmentation achieved a patch-wise Dice similarity coefficient of 0.955. Conclusion: We develop and release the open-source Glo-In-One toolkit, software for holistic glomerular detection, segmentation, and lesion characterization. The toolkit is friendly to non-technical users via a single line of command. The toolbox and the 30,000 web-mined glomerular images have been made publicly available at https://github.com/hrlblab/Glo-In-One.
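The patch-wise Dice similarity coefficient reported above is a standard overlap metric for segmentation masks. A minimal NumPy sketch of its usual definition for binary masks.

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    # Dice = 2 * |pred AND target| / (|pred| + |target|) for binary masks.
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```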

13.
J Med Imaging (Bellingham) ; 9(1): 014005, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35237706

ABSTRACT

Purpose: Recent studies have demonstrated the diagnostic and prognostic value of global glomerulosclerosis (GGS) in IgA nephropathy, aging, and end-stage renal disease. However, fine-grained quantitative analysis of multiple GGS subtypes (e.g., obsolescent, solidified, and disappearing glomerulosclerosis) is typically a resource-intensive manual process. Very few automatic methods, if any, have been developed to bridge this gap. We present a holistic pipeline to quantify GGS (with both detection and classification) from a whole slide image in a fully automatic manner, and we conduct fine-grained classification of GGS subtypes. Our study releases an open-source quantitative analytical tool for fine-grained GGS characterization while tackling the technical challenges of unbalanced classification and of integrating detection and classification. Approach: We present a deep learning-based framework that performs fine-grained detection and classification of GGS with a hierarchical two-stage design. Moreover, we incorporate state-of-the-art transfer learning techniques to achieve a more generalizable model that tackles the imbalanced distribution of our dataset, building a highly efficient WSI-to-results GGS characterization pipeline. We also investigated the largest fine-grained GGS cohort to date, with 11,462 glomeruli and 10,619 non-glomeruli, including 7,841 globally sclerotic glomeruli of three distinct categories. With these data, we apply deep learning techniques to achieve (1) fine-grained GGS characterization, (2) GGS versus non-GGS classification, and (3) improved glomerular detection. Results: For fine-grained GGS characterization, when pretrained on the larger dataset, our model achieves a macro-F1 score of 0.778, compared to 0.746 when using the regular ImageNet-pretrained weights. On the external dataset, our best model achieves an area under the curve (AUC) of 0.994 when differentiating GGS from normal glomeruli. Using our dataset, we are able to build algorithms that allow fine-grained classification of glomerular lesions and are robust to distribution shifts. Conclusion: Our study shows that the proposed methods consistently improve detection and fine-grained classification performance in both cross-validation and external validation. Our code and pretrained models have been released for public use at https://github.com/luyuzhe111/glomeruli.
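The macro-F1 reported above averages per-class F1 equally, so rare GGS subtypes count as much as common ones. A small runnable example with toy labels, assuming scikit-learn.

```python
from sklearn.metrics import f1_score

# Toy subtype labels: macro-F1 computes F1 per class and averages
# the per-class scores with equal weight.
y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 1, 1, 2, 2, 0]
print(f1_score(y_true, y_pred, average="macro"))
```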

14.
Article in English | MEDLINE | ID: mdl-37077404

ABSTRACT

With the rapid development of self-supervised learning (e.g., contrastive learning), the importance of having large-scale images (even without annotations) for training more generalizable AI models has been widely recognized in medical image analysis. However, collecting large-scale task-specific unannotated data can be challenging for individual labs. Existing online resources, such as digital books, publications, and search engines, provide a new way to obtain large-scale images. However, published images in healthcare (e.g., radiology and pathology) include a considerable number of compound figures with subplots. To extract and separate compound figures into usable individual images for downstream learning, we propose a simple compound figure separation (SimCFS) framework that does not require the traditionally needed detection bounding box annotations, with a new loss function and hard-case simulation. Our technical contribution is four-fold: (1) we introduce a simulation-based training framework that minimizes the need for resource-intensive bounding box annotations; (2) we propose a new side loss optimized for compound figure separation; (3) we propose an intra-class image augmentation method to simulate hard cases; and (4) to the best of our knowledge, this is the first study to evaluate the efficacy of leveraging self-supervised learning with compound figure separation. From the results, the proposed SimCFS achieved state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation Database. A self-supervised learning model pretrained on large-scale mined figures improved the accuracy of downstream image classification tasks with a contrastive learning algorithm. The source code of SimCFS is publicly available at https://github.com/hrlblab/ImageSeperation.
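The simulation-based training mentioned in contribution (1) builds synthetic compound figures with known subplot boxes from individual images. A simplified NumPy sketch of that idea; the layout and interface are assumptions, not SimCFS's actual augmentation.

```python
import numpy as np

def simulate_compound(figs, background=255):
    # figs: list of (H, W, 3) uint8 images; returns a synthetic compound
    # figure plus ground-truth subplot boxes (x0, y0, x1, y1).
    h = max(f.shape[0] for f in figs)
    tiles, boxes, x = [], [], 0
    for f in figs:
        tile = np.full((h, f.shape[1], 3), background, dtype=f.dtype)
        tile[:f.shape[0]] = f
        tiles.append(tile)
        boxes.append((x, 0, x + f.shape[1], f.shape[0]))
        x += f.shape[1]
    return np.concatenate(tiles, axis=1), boxes
```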

15.
Article in English | MEDLINE | ID: mdl-37229309

ABSTRACT

There has been a long pursuit of precise and reproducible glomerular quantification in the field of renal pathology, in both research and clinical practice. Currently, 3D identification and reconstruction of large-scale glomeruli are labor-intensive and time-consuming tasks when performed by manual analysis of whole slide images (WSIs) in a 2D serial-sectioning representation, and the accuracy of serial-section analysis is limited in that 2D context. Moreover, there are no approaches that present 3D glomerular visualizations for human examination (volume calculation, 3D phenotype analysis, etc.). In this paper, we introduce an end-to-end, holistic, deep-learning-based method that achieves automatic detection, segmentation, and multi-object tracking (MOT) of individual glomeruli, with large-scale glomerular-registered assessment in a 3D context on WSIs. High-resolution WSIs are the inputs, while the outputs are the 3D glomerular reconstructions and volume estimates. The pipeline achieves an IDF1 of 81.8 and a MOTA of 69.1 for MOT performance, while the proposed volume estimation achieves a Spearman correlation coefficient of 0.84 with manual annotation. The end-to-end MAP3D+ pipeline provides an approach for extensive 3D glomerular reconstruction and volume quantification from 2D serial-section WSIs.
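The volume-estimation agreement above is a Spearman rank correlation between automated and manual volumes. A small runnable example with toy values, assuming SciPy.

```python
from scipy.stats import spearmanr

# Toy volumes: Spearman's rho measures rank agreement between automated
# and manual glomerular volume estimates.
auto_vol = [1.2, 0.8, 1.5, 1.1, 0.9]
manual_vol = [1.3, 0.7, 1.6, 1.0, 1.0]
rho, p_value = spearmanr(auto_vol, manual_vol)
print(rho)
```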
