Results 1 - 20 of 113
1.
Article in English | MEDLINE | ID: mdl-39060373

ABSTRACT

PURPOSE: Generating polar maps (PMs) from [68Ga]Ga-DOTA-FAPI-04 PET images is challenging and inaccurate with existing automatic methods, which rely on the myocardial anatomical integrity of the PET images. This study aims to enhance the accuracy of PMs generated from [68Ga]Ga-DOTA-FAPI-04 PET images and to explore the potential value of PMs in detecting reactive fibrosis after myocardial infarction and assessing its relationship with cardiac function. METHODS: We proposed a deep-learning-based method that fuses multi-modality images to compensate for the cardiac structural information lost in [68Ga]Ga-DOTA-FAPI-04 PET images and accurately generates PMs. We collected 133 pairs of [68Ga]Ga-DOTA-FAPI-04 PET/MR images from 87 ST-segment elevation myocardial infarction patients for training and evaluation. Twenty-six patients were selected for longitudinal analysis to further examine the clinical value of PM-related imaging parameters. RESULTS: Quantitative comparison demonstrated that our method was comparable to the manual method and surpassed the commercially available software PMOD in the accuracy of PMs generated from [68Ga]Ga-DOTA-FAPI-04 PET images. Clinical analysis confirmed the effectiveness of the [68Ga]Ga-DOTA-FAPI-04 PET PM in detecting reactive myocardial fibrosis. Significant correlations were found between the difference of baseline PM FAPI% and PM LGE% and the change in cardiac function parameters (all p < 0.001), including LVESV% (r = 0.697), LVEDV% (r = 0.621), and LVEF% (r = -0.607). CONCLUSION: The [68Ga]Ga-DOTA-FAPI-04 PET PMs generated by our method are comparable to manually generated PMs and sufficient for clinical use. They have potential value in detecting reactive fibrosis after myocardial infarction and were associated with cardiac function, suggesting the possibility of enhancing clinical diagnostic practice. TRIAL REGISTRATION: ClinicalTrials.gov (NCT04723953). Registered 26 January 2021.
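The reported associations are plain Pearson correlations between a per-patient imaging difference and the change in a cardiac function parameter. A minimal sketch of that computation, with made-up toy values rather than the study data:

```python
def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy example: difference of baseline PM FAPI% and PM LGE% per patient,
# against the change in LVESV% (illustrative values only).
fapi_minus_lge = [5.0, 12.0, 8.0, 20.0, 15.0]
delta_lvesv = [2.1, 6.0, 3.5, 9.8, 7.2]
r = pearson_r(fapi_minus_lge, delta_lvesv)
```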

2.
Eur J Nucl Med Mol Imaging ; 50(13): 3996-4009, 2023 11.
Article in English | MEDLINE | ID: mdl-37596343

ABSTRACT

PURPOSE: Prognostic prediction is crucial to guide individual treatment for locoregionally advanced nasopharyngeal carcinoma (LA-NPC) patients. Recently, multi-task deep learning has been explored for joint prognostic prediction and tumor segmentation in various cancers, with promising performance. This study aims to evaluate the clinical value of multi-task deep learning for prognostic prediction in LA-NPC patients. METHODS: A total of 886 LA-NPC patients from two medical centers were enrolled, including clinical data, [18F]FDG PET/CT images, and follow-up of progression-free survival (PFS). We adopted a deep multi-task survival model (DeepMTS) to jointly perform prognostic prediction (DeepMTS-Score) and tumor segmentation from FDG-PET/CT images. The DeepMTS-derived segmentation masks were leveraged to extract handcrafted radiomics features, which were also used for prognostic prediction (AutoRadio-Score). Finally, we developed a multi-task deep learning-based radiomic (MTDLR) nomogram by integrating the DeepMTS-Score, the AutoRadio-Score, and clinical data. Harrell's concordance index (C-index) and time-independent receiver operating characteristic (ROC) analysis were used to evaluate the discriminative ability of the proposed MTDLR nomogram. For patient stratification, the PFS rates of high- and low-risk patients were calculated using the Kaplan-Meier method and compared with the observed PFS probability. RESULTS: Our MTDLR nomogram achieved C-indices of 0.818 (95% confidence interval (CI): 0.785-0.851), 0.752 (95% CI: 0.638-0.865), and 0.717 (95% CI: 0.641-0.793) and areas under the curve (AUCs) of 0.859 (95% CI: 0.822-0.895), 0.769 (95% CI: 0.642-0.896), and 0.730 (95% CI: 0.634-0.826) in the training, internal validation, and external validation cohorts, respectively, a statistically significant improvement over conventional radiomic nomograms. Our nomogram also divided patients into significantly different high- and low-risk groups.
CONCLUSION: Our study demonstrated that the MTDLR nomogram can perform reliable and accurate prognostic prediction in LA-NPC patients and enables better patient stratification, which could facilitate personalized treatment planning.


Subject(s)
Deep Learning; Nasopharyngeal Neoplasms; Humans; Prognosis; Nomograms; Nasopharyngeal Carcinoma/diagnostic imaging; Positron Emission Tomography Computed Tomography/methods; Fluorodeoxyglucose F18; Nasopharyngeal Neoplasms/diagnostic imaging; Retrospective Studies
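Harrell's C-index, the main discrimination metric above, counts concordant pairs among comparable pairs under right censoring. A self-contained sketch (toy data, not the study cohort):

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    times: observed times; events: 1 if the event occurred, 0 if censored;
    risk_scores: higher score = higher predicted risk (shorter survival).
    """
    concordant, permissible = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if i had an event before j's observed time
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / permissible

# Toy cohort: risk scores perfectly ordered by survival time -> C-index 1.0
times = [2.0, 4.0, 6.0, 8.0]
events = [1, 1, 0, 1]
scores = [0.9, 0.7, 0.5, 0.3]
c = harrell_c_index(times, events, scores)
```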
3.
Neuroimage ; 259: 119444, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35792292

ABSTRACT

Deformable image registration is fundamental for many medical image analyses. A key obstacle for accurate image registration lies in image appearance variations such as the variations in texture, intensities, and noise. These variations are readily apparent in medical images, especially in brain images where registration is frequently used. Recently, deep learning-based registration methods (DLRs), using deep neural networks, have shown computational efficiency that is several orders of magnitude faster than traditional optimization-based registration methods (ORs). DLRs rely on a globally optimized network that is trained with a set of training samples to achieve faster registration. DLRs tend, however, to disregard the target-pair-specific optimization inherent in ORs and thus have degraded adaptability to variations in testing samples. This limitation is severe for registering medical images with large appearance variations, especially since few existing DLRs explicitly take into account appearance variations. In this study, we propose an Appearance Adjustment Network (AAN) to enhance the adaptability of DLRs to appearance variations. Our AAN, when integrated into a DLR, provides appearance transformations to reduce the appearance variations during registration. In addition, we propose an anatomy-constrained loss function through which our AAN generates anatomy-preserving transformations. Our AAN has been purposely designed to be readily inserted into a wide range of DLRs and can be trained cooperatively in an unsupervised and end-to-end manner. We evaluated our AAN with three state-of-the-art DLRs - Voxelmorph (VM), Diffeomorphic Voxelmorph (DifVM), and Laplacian Pyramid Image Registration Network (LapIRN) - on three well-established public datasets of 3D brain magnetic resonance imaging (MRI) - IBSR18, Mindboggle101, and LPBA40. 
The results show that our AAN consistently improved existing DLRs and outperformed state-of-the-art ORs in registration accuracy, while adding only a fractional computational load to existing DLRs.


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer
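Unsupervised DLRs of the VoxelMorph family minimize an image-dissimilarity term plus a smoothness penalty on the displacement field. A minimal sketch of such a loss (illustrative only, not the AAN's anatomy-constrained loss):

```python
import numpy as np

def registration_loss(warped, fixed, disp, lam=0.01):
    """Unsupervised registration loss of the kind DLRs such as VoxelMorph
    minimize: image dissimilarity plus displacement-field smoothness.
    warped, fixed: 2D arrays; disp: (2, H, W) displacement field."""
    similarity = np.mean((warped - fixed) ** 2)  # MSE dissimilarity
    # Diffusion regularizer: squared spatial gradients of each component
    smooth = 0.0
    for comp in disp:
        gy, gx = np.gradient(comp)
        smooth += np.mean(gy ** 2 + gx ** 2)
    return similarity + lam * smooth

fixed = np.zeros((8, 8))
warped = np.ones((8, 8)) * 0.1
disp = np.zeros((2, 8, 8))  # identity transform -> zero smoothness term
loss = registration_loss(warped, fixed, disp)
```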
4.
J Biomed Inform ; 106: 103430, 2020 06.
Article in English | MEDLINE | ID: mdl-32371232

ABSTRACT

Laparoscopic liver surgery is challenging to perform because the surgeon's ability to localize subsurface anatomy is compromised by the limited visibility of minimally invasive access. While image guidance has the potential to address this barrier, intraoperative factors, such as insufflation and variable degrees of organ mobilization from supporting ligaments, may generate substantial deformation. Navigation ability in terms of searching and tagging within liver views has not been characterized, and current object detection methods do not account for how these features could be applied to liver images. In this research, we propose spatial pyramid based searching and tagging of the liver's intraoperative views using a convolutional neural network (SPST-CNN). By exploiting a hybrid combination of an image pyramid at the input and a spatial pyramid pooling layer at deeper stages of SPST-CNN, we reveal the gains of full-image representations for searching and tagging variably scaled live liver views. SPST-CNN provides pinpoint searching and tagging of intraoperative liver views to obtain up-to-date information about the location and shape of the area of interest. Downsampling the input with an image pyramid enables the SPST-CNN framework to accept input images at a diversity of resolutions, achieving scale invariance. We compared the proposed approach to four recent state-of-the-art approaches, and our method achieved a better mAP of up to 85.9%.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Liver/diagnostic imaging; Liver/surgery
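The point of a spatial pyramid pooling layer is that a feature map of any size pools to a fixed-length vector, so variably sized inputs can feed one classifier. A minimal single-channel sketch (grid levels are an assumption, not the SPST-CNN configuration):

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Max-pool a 2D feature map into a fixed-length vector regardless of
    its input size, in the manner of spatial pyramid pooling."""
    h, w = feat.shape
    out = []
    for n in levels:
        # split the map into an n x n grid and take the max of each cell
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feat[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                out.append(cell.max())
    return np.array(out)

# Two differently sized maps yield vectors of identical length (1+4+16 = 21)
v1 = spatial_pyramid_pool(np.random.rand(13, 17))
v2 = spatial_pyramid_pool(np.random.rand(32, 32))
```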
5.
BMC Genomics ; 19(Suppl 6): 565, 2018 Aug 13.
Article in English | MEDLINE | ID: mdl-30367576

ABSTRACT

BACKGROUND: With the development of DNA sequencing technology, large amounts of sequencing data have been produced, providing unprecedented opportunities for advanced association studies between somatic mutations and cancer types/subtypes, which further contribute to more accurate somatic mutation based cancer typing (SMCT). In existing SMCT methods, however, the absence of high-level feature extraction is a major obstacle to improving the classification performance. RESULTS: We propose DeepCNA, an advanced convolutional neural network (CNN) based classifier that utilizes copy number aberrations (CNAs) and Hi-C data, to address this issue. DeepCNA first pre-processes the CNA data by clipping, zero padding, and reshaping. The processed data is then fed into a CNN classifier, which extracts high-level features for accurate classification. Experimental results on the COSMIC CNA dataset indicate that a 2D CNN with both cell lines of Hi-C data leads to the best performance. We further compare DeepCNA with three widely adopted classifiers and demonstrate that DeepCNA achieves at least a 78% performance improvement. CONCLUSIONS: This paper demonstrates the advantages and potential of the proposed DeepCNA model for processing copy number aberration data, and proposes that its usage may be extended to other complex genotype-phenotype association studies.


Subject(s)
Chromatin/chemistry; DNA Copy Number Variations; Neoplasms/classification; Neoplasms/genetics; Neural Networks, Computer; Cell Line; Humans
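The clip / zero-pad / reshape pre-processing step can be sketched in a few lines. The clipping bound, padded length, and output side are assumptions for illustration, not DeepCNA's actual parameters:

```python
import numpy as np

def preprocess_cna(cna, clip=5.0, length=1024, side=32):
    """Illustration of a clip / zero-pad / reshape pipeline like the one
    the DeepCNA description mentions (parameter values are assumptions)."""
    x = np.clip(np.asarray(cna, dtype=float), -clip, clip)  # clip outliers
    if x.size < length:                                     # zero padding
        x = np.concatenate([x, np.zeros(length - x.size)])
    else:
        x = x[:length]
    return x.reshape(side, side)                            # to a 2D "image"

img = preprocess_cna([0.5, -7.0, 9.0, 1.2])
```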
6.
BMC Bioinformatics ; 17(Suppl 17): 476, 2016 Dec 23.
Article in English | MEDLINE | ID: mdl-28155641

ABSTRACT

BACKGROUND: With the development of DNA sequencing technology, large amounts of sequencing data have become available in recent years, providing unprecedented opportunities for advanced association studies between somatic point mutations and cancer types/subtypes, which may contribute to more accurate somatic point mutation based cancer classification (SMCC). However, in existing SMCC methods, issues such as high data sparsity, small sample size, and the use of simple linear classifiers are major obstacles to improving the classification performance. RESULTS: To address these obstacles, we propose DeepGene, an advanced deep neural network (DNN) based classifier that consists of three steps: first, clustered gene filtering (CGF) concentrates the gene data by mutation occurrence frequency, filtering out the majority of irrelevant genes; second, indexed sparsity reduction (ISR) converts the gene data into the indexes of its non-zero elements, significantly suppressing the impact of data sparsity; finally, the data after CGF and ISR is fed into a DNN classifier, which extracts high-level features for accurate classification. Experimental results on our curated TCGA-DeepGene dataset, a reformulated subset of the TCGA dataset containing 12 selected types of cancer, show that CGF, ISR, and the DNN all contribute to improving the overall classification performance. We further compare DeepGene with three widely adopted classifiers and demonstrate that DeepGene achieves at least a 24% improvement in testing accuracy. CONCLUSIONS: Based on deep learning and somatic point mutation data, we devise DeepGene, an advanced cancer type classifier that addresses the obstacles in existing SMCC studies.
Experiments indicate that DeepGene outperforms the three widely adopted classifiers, mainly owing to its deep learning module, which is able to extract high-level features between combinatorial somatic point mutations and cancer types.


Subject(s)
Computational Biology/methods; Neoplasms/classification; Neural Networks, Computer; Point Mutation; Genes, Neoplasm; Humans; Neoplasms/genetics; Sequence Analysis, DNA/methods
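The indexed sparsity reduction idea (replace a long, mostly-zero mutation vector by the positions of its non-zero entries) can be sketched directly. The fixed output length and 0-as-padding convention are assumptions for illustration, not DeepGene's exact encoding:

```python
def indexed_sparsity_reduction(gene_vector, max_indexes=10):
    """Convert a sparse binary mutation vector into the (1-based) indexes
    of its non-zero elements, truncated/padded to a fixed length, in the
    spirit of DeepGene's ISR step (details here are assumptions)."""
    idx = [i + 1 for i, v in enumerate(gene_vector) if v != 0]
    idx = idx[:max_indexes]
    return idx + [0] * (max_indexes - len(idx))  # 0 marks padding

sparse = [0, 0, 1, 0, 1, 0, 0, 0, 0, 1]
dense = indexed_sparsity_reduction(sparse)
```

The dense index form stays the same length no matter how long the original gene vector is, which is what suppresses the sparsity.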
7.
Neurocomputing (Amst) ; 177: 75-88, 2016 Feb 12.
Article in English | MEDLINE | ID: mdl-27688597

ABSTRACT

Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment, but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) model to identify discriminative characteristics between different medical images, using a pruned dictionary based on latent semantic topic description; we refer to this as PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how strongly the word is connected to this latent topic. The latent topics are learnt from the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method that measures overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful, with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets, and it showed improved retrieval accuracy and efficiency.
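An iterative ranking over a topic-word significance matrix can be sketched as a power-iteration-style update in which word scores propagate through topics and back. This is an illustrative variant under assumed update rules, not the paper's exact PD-LST procedure:

```python
def rank_words(topic_word, iters=50):
    """Toy iterative ranking over a topic-word significance matrix: word
    scores are repeatedly propagated through topics and renormalized,
    in the spirit of an overall-word significance computation."""
    n_topics = len(topic_word)
    n_words = len(topic_word[0])
    w = [1.0 / n_words] * n_words
    for _ in range(iters):
        # topic scores from current word scores
        t = [sum(topic_word[k][j] * w[j] for j in range(n_words))
             for k in range(n_topics)]
        # new word scores from topic scores
        w = [sum(topic_word[k][j] * t[k] for k in range(n_topics))
             for j in range(n_words)]
        s = sum(w)
        w = [v / s for v in w]
    return w

# Word 0 is strongly tied to both topics, so it should rank highest
topic_word = [[0.9, 0.1, 0.2],
              [0.8, 0.3, 0.1]]
scores = rank_words(topic_word)
```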

8.
Appl Opt ; 53(30): 7059-71, 2014 Oct 20.
Article in English | MEDLINE | ID: mdl-25402795

ABSTRACT

Accurate estimation of noise in hyperspectral (HS) images plays an important role in better visualization and image processing. Conventional algorithms often hypothesize the noise to be either purely additive or of a mixed type comprising a signal-dependent (SD) component and a signal-independent (SI) component. This can result in application-driven algorithm design with limited use across noise types. Moreover, as highly textured HS images have abundant edges and textures, existing algorithms may fail to produce accurate noise estimates. To address these challenges, we propose a noise estimation algorithm that can adaptively estimate both purely additive noise and mixed noise in HS images of various complexities. First, homogeneous areas are automatically detected using a new region-growing-based approach, in which the similarity of two pixels is calculated by a robust spectral metric. Then, the mixed noise variance of each homogeneous region is estimated by multiple linear regression. Finally, the intensities of the SD and SI noise are obtained with a modified scatter plot approach. We quantitatively evaluated our algorithm on synthetic HS data. Compared with benchmark and state-of-the-art algorithms, the proposed algorithm is more accurate and robust when facing images of different complexities. Experimental results on real Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images further demonstrated the superiority of our algorithm.
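For mixed SD + SI noise, the local variance of a homogeneous region grows roughly linearly with its local mean, so a least-squares line through (mean, variance) points separates the two components: slope for the signal-dependent part, intercept for the signal-independent variance. A toy sketch of that regression (not the paper's full region-growing pipeline):

```python
def fit_sd_si_noise(region_means, region_vars):
    """Least-squares line through (mean, variance) points of homogeneous
    regions: under a linear mixed-noise model the slope reflects the
    signal-dependent component and the intercept the signal-independent
    variance (illustrative model, not the paper's estimator)."""
    n = len(region_means)
    mx = sum(region_means) / n
    my = sum(region_vars) / n
    sxx = sum((x - mx) ** 2 for x in region_means)
    sxy = sum((x - mx) * (y - my) for x, y in zip(region_means, region_vars))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

means = [10.0, 20.0, 30.0, 40.0]
variances = [0.5 * m + 2.0 for m in means]  # synthetic: slope 0.5, SI var 2.0
sd_slope, si_var = fit_sd_si_noise(means, variances)
```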

9.
J Healthc Inform Res ; 8(3): 478-505, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39131102

ABSTRACT

Understanding and addressing the dynamics of infectious diseases, such as coronavirus disease 2019, are essential for effectively managing the current situation and developing intervention strategies. Epidemiologists commonly use mathematical models, known as epidemiological equations (EE), to simulate disease spread. However, accurately estimating the parameters of these models can be challenging due to factors like variations in social distancing policies and intervention strategies. In this study, we propose a novel method called deep dynamic epidemiological modeling (DDE) to address these challenges. The DDE method combines the strengths of EE with the capabilities of deep neural networks to improve the accuracy of fitting real-world data. In DDE, we apply neural ordinary differential equations to solve variant-specific equations, ensuring a more precise fit for disease progression in different geographic regions. In the experiment, we tested the performance of the DDE method and other state-of-the-art methods using real-world data from five diverse geographic entities: the USA, Colombia, South Africa, Wuhan in China, and Piedmont in Italy. Compared to the state-of-the-art method, DDE significantly improved accuracy, with an average fitting Pearson coefficient exceeding 0.97 across the five geographic entities. In summary, the DDE method enhances the accuracy of parameter fitting in epidemiological models and provides a foundation for constructing simpler models adaptable to different geographic areas.
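The epidemiological equations that such methods fit are typically compartmental ODEs like the SIR model. A minimal forward-Euler integration (the SIR model stands in for the paper's variant-specific equations; parameter values are illustrative):

```python
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the classic SIR epidemiological
    equations, the kind of EE model whose parameters DDE-style methods fit.
    beta: transmission rate; gamma: recovery rate."""
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt  # S -> I flow over one step
        new_rec = gamma * i * dt         # I -> R flow over one step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

s, i, r = simulate_sir(beta=0.3, gamma=0.1, s0=999.0, i0=1.0, r0=0.0, days=60)
```

A fitting procedure would then adjust beta and gamma (or let a neural ODE model them) so that the simulated curves match observed case counts.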

10.
IEEE Trans Med Imaging ; PP, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38949934

ABSTRACT

Deep learning approaches for multi-label chest X-ray (CXR) image classification usually require large-scale datasets. However, acquiring such datasets with full annotations is costly, time-consuming, and prone to noisy labels. We therefore introduce a weakly supervised learning problem called Single Positive Multi-label Learning (SPML) into CXR image classification (abbreviated as SPML-CXR), in which only one positive label is annotated per image. A simple solution to the SPML-CXR problem is to assume that all unannotated pathological labels are negative; however, this may introduce false negative labels and degrade model performance. To address this, we present a Multi-level Pseudo-label Consistency (MPC) framework for SPML-CXR. First, inspired by pseudo-labeling and consistency regularization in semi-supervised learning, we construct a weak-to-strong consistency framework, where the model prediction on a weakly augmented image is treated as the pseudo label supervising the model prediction on a strongly augmented version of the same image, and define an Image-level Perturbation-based Consistency (IPC) regularization to recover potentially mislabeled positive labels. In addition, we incorporate Random Elastic Deformation (RED) as an extra strong augmentation to enhance the perturbation. Second, to expand the perturbation space, we add a feature-level perturbation stream to the consistency framework and introduce a Feature-level Perturbation-based Consistency (FPC) regularization as a supplement. Third, we design a Transformer-based encoder module to explore the sample relationships within each mini-batch via a Batch-level Transformer-based Correlation (BTC) regularization. Extensive experiments on the CheXpert and MIMIC-CXR datasets show the effectiveness of our MPC framework for solving the SPML-CXR problem.
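The weak-to-strong consistency idea can be sketched per label: binarize the weak-view prediction into a pseudo label and score the strong-view prediction against it with binary cross-entropy. The thresholding and loss choice here are assumptions for illustration, not the paper's exact MPC formulation:

```python
import math

def ipc_consistency_loss(weak_probs, strong_probs, threshold=0.5):
    """Sketch of an image-level perturbation-based consistency term: the
    prediction on the weakly augmented view is binarized into pseudo labels
    that supervise the strongly augmented view via binary cross-entropy."""
    eps = 1e-7
    loss = 0.0
    for pw, ps in zip(weak_probs, strong_probs):
        pseudo = 1.0 if pw >= threshold else 0.0     # hard pseudo label
        ps = min(max(ps, eps), 1 - eps)              # clamp for log safety
        loss += -(pseudo * math.log(ps) + (1 - pseudo) * math.log(1 - ps))
    return loss / len(weak_probs)

weak = [0.9, 0.2, 0.7]    # per-pathology probabilities on the weak view
strong = [0.8, 0.1, 0.6]  # probabilities on the strong view
loss = ipc_consistency_loss(weak, strong)
```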

11.
Article in English | MEDLINE | ID: mdl-39150812

ABSTRACT

Motion artifacts compromise the quality of magnetic resonance imaging (MRI) and pose challenges to diagnostic outcomes and image-guided therapies. In recent years, supervised deep learning approaches have emerged as successful solutions for motion artifact reduction (MAR). One disadvantage of these methods is their dependency on paired sets of motion artifact-corrupted (MA-corrupted) and motion artifact-free (MA-free) MR images for training. Obtaining such image pairs is difficult, which limits the application of supervised training. In this paper, we propose a novel UNsupervised Abnormality Extraction Network (UNAEN) to alleviate this problem. Our network works with unpaired MA-corrupted and MA-free images. It converts MA-corrupted images to MA-reduced images using a proposed artifact extractor, which explicitly extracts the residual artifact maps from the MA-corrupted MR images, and a reconstructor that restores the original input from the MA-reduced images. The performance of UNAEN was assessed through experiments on various publicly available MRI datasets and comparisons with state-of-the-art methods. The quantitative evaluation demonstrates the superiority of UNAEN over alternative MAR methods, with visibly fewer residual artifacts. Our results substantiate the potential of UNAEN as a promising solution applicable in real-world clinical environments, with the capability to enhance diagnostic accuracy and facilitate image-guided therapies. Our code is publicly available at https://github.com/YuSheng-Zhou/UNAEN.

12.
IEEE Trans Biomed Eng ; PP, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39302789

ABSTRACT

Magnetically Controlled Capsule Endoscopy (MCCE) has a limited shooting range, capturing numerous fragmented images and lacking the ability to precisely locate and examine the region of interest (ROI) as traditional endoscopy can. To address this, image stitching around the ROI can be employed to aid in the diagnosis of gastrointestinal (GI) tract conditions. However, MCCE images possess unique characteristics, such as weak texture, close-up shooting, and large-angle rotation, that challenge current image-matching methods. In this context, we propose S2P-Matching, a method for self-supervised patch-based matching in MCCE image stitching. The method augments the raw data by simulating the capsule endoscopic camera's behavior around the GI tract's ROI. An improved contrastive learning encoder is then used to extract local features, represented as deep feature descriptors. This encoder comprises two branches that extract features at distinct scales, which are combined along the channel dimension without manual labeling. The data-driven descriptors are then input into a Transformer model to obtain patch-level matches by learning the globally consented matching priors in the pseudo-ground-truth match pairs. Finally, the patch-level matching is refined and filtered to the pixel level. The experimental results on real-world MCCE images demonstrate that S2P-Matching provides enhanced accuracy in addressing challenging issues in the GI tract environment with image parallax. The performance improvement reaches up to 203 and 55.8% in terms of NCM (Number of Correct Matches) and SR (Success Rate), respectively. This approach is expected to facilitate the wide adoption of MCCE-based gastrointestinal screening.

13.
BMC Bioinformatics ; 14: 173, 2013 Jun 02.
Article in English | MEDLINE | ID: mdl-23725412

ABSTRACT

BACKGROUND: Segmenting cell nuclei in microscopic images has become one of the most important routines in modern biological applications. Given the vast amount of data, automatic localization, i.e. detection and segmentation, of cell nuclei is highly desirable compared to time-consuming manual processing. However, automated segmentation is challenging due to large intensity inhomogeneities in the cell nuclei and the background. RESULTS: We present a new method for automated progressive localization of cell nuclei using data-adaptive models that can better handle the inhomogeneity problem. We perform localization in a three-stage approach: first, identify all interest regions with contrast-enhanced salient region detection; then, process the clusters to identify true cell nuclei with probability estimation via feature-distance profiles of reference regions; and finally, refine the contours of detected regions with a regional contrast-based graphical model. The proposed region-based progressive localization (RPL) method is evaluated on three different datasets, the first two containing grayscale images and the third comprising color images with cytoplasm in addition to cell nuclei. We demonstrate performance improvements over the state-of-the-art. For example, compared to the second-best approach, on the first dataset our method achieves reductions of 2.8 in Hausdorff distance and 3.7 in false negatives; on the second dataset, which has larger intensity inhomogeneity, our method achieves a 5% increase in Dice coefficient and Rand index; on the third dataset, our method achieves a 4% increase in object-level accuracy. CONCLUSIONS: To tackle the intensity inhomogeneities in cell nuclei and background, a region-based progressive localization method is proposed for cell nuclei localization in fluorescence microscopy images.
The RPL method is demonstrated to be highly effective on three different public datasets, with average improvements of 3.5% and 7% in region- and contour-based segmentation performance over the state-of-the-art.


Subject(s)
Cell Nucleus/ultrastructure; Image Processing, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Models, Theoretical
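The Dice coefficient reported above is a standard overlap measure between a predicted and a ground-truth mask: 2|A∩B| / (|A|+|B|). A minimal sketch on flat binary masks (toy data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2*|A∩B| / (|A| + |B|), a common region-based segmentation measure."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred = [1, 1, 0, 0, 1, 0]   # predicted nucleus pixels
truth = [1, 0, 0, 0, 1, 1]  # ground-truth nucleus pixels
d = dice_coefficient(pred, truth)
```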
14.
J Opt Soc Am A Opt Image Sci Vis ; 30(8): 1464-75, 2013 Aug 01.
Article in English | MEDLINE | ID: mdl-24323203

ABSTRACT

Fluorescent molecular tomographic image reconstruction usually involves repeatedly solving large-scale matrix equations, which is computationally expensive. In this paper, a method is proposed to reduce the scale of the matrix system. The Jacobian matrix is simplified by deleting the columns or rows whose values are smaller than a threshold. Furthermore, the measurement data are divided into two groups that are used in turn for the iterations of image reconstruction. The simplified system is then solved in the wavelet domain to further accelerate the solution of the inverse problem. Simulation results demonstrate that the proposed method can significantly speed up the reconstruction process.
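The column/row deletion step can be sketched with NumPy. Using the largest absolute entry of each column/row as the deletion criterion is an assumption (the abstract only says entries "smaller than a threshold"); the kept index lists allow mapping the pruned solution back to the full system:

```python
import numpy as np

def prune_jacobian(J, threshold):
    """Drop columns and then rows of a Jacobian whose largest absolute
    entry falls below a threshold, shrinking the matrix system as the
    abstract describes (criterion is an illustrative assumption)."""
    col_keep = np.where(np.abs(J).max(axis=0) >= threshold)[0]
    J = J[:, col_keep]
    row_keep = np.where(np.abs(J).max(axis=1) >= threshold)[0]
    return J[row_keep], row_keep, col_keep

J = np.array([[0.9,   0.001, 0.5],
              [0.8,   0.002, 0.4],
              [0.001, 0.001, 0.002]])
Jp, rows, cols = prune_jacobian(J, threshold=0.01)
```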

15.
Article in English | MEDLINE | ID: mdl-38083369

ABSTRACT

[18F]-Fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) has become the imaging modality of choice for diagnosing many cancers. Co-learning complementary PET-CT imaging features is a fundamental requirement for automatic tumor segmentation and for developing computer-aided cancer diagnosis systems. In this study, we propose a hyper-connected transformer (HCT) network that integrates a transformer network (TN) with hyper-connected fusion for multi-modality PET-CT images. The TN was leveraged for its ability to provide global dependencies in image feature learning, achieved by using image patch embeddings with a self-attention mechanism to capture image-wide contextual information. We extended the single-modality definition of the TN with multiple TN-based branches to separately extract image features. We also introduced a hyper-connected fusion to fuse the contextual and complementary image features across multiple transformers in an iterative manner. Our results with two clinical datasets show that HCT achieved better segmentation accuracy than existing methods. Clinical Relevance: We anticipate that our approach can be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.


Subject(s)
Neoplasms; Positron Emission Tomography Computed Tomography; Humans; Positron Emission Tomography Computed Tomography/methods; Neoplasms/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Fluorodeoxyglucose F18; Diagnosis, Computer-Assisted
16.
Comput Biol Med ; 154: 106576, 2023 03.
Article in English | MEDLINE | ID: mdl-36736097

ABSTRACT

The spatial architecture of the tumour microenvironment and the phenotypic heterogeneity of tumour cells have been shown to be associated with cancer prognosis and clinical outcomes, including survival. Recent advances in highly multiplexed imaging, including imaging mass cytometry (IMC), capture spatially resolved, high-dimensional maps that quantify dozens of disease-relevant biomarkers at single-cell resolution and contain the potential to inform patient-specific prognosis. Existing automated methods for predicting survival, on the other hand, typically do not leverage the spatial phenotype information captured at the single-cell level. Furthermore, there is no end-to-end method designed to leverage the rich information in whole IMC images and all marker channels, and to aggregate this information with clinical data in a complementary manner to predict survival with enhanced accuracy. To that end, we present a deep multimodal graph-based network (DMGN) with two modules: (1) a multimodal graph-based module that adaptively considers the relationships between spatial phenotype information in all image regions and all clinical variables, and (2) a clinical embedding module that automatically generates embeddings specialised for each clinical variable to enhance multimodal aggregation. We demonstrate that our modules are consistently effective at improving survival prediction performance on two public breast cancer datasets, and that our new approach outperforms state-of-the-art methods in survival prediction.


Subject(s)
Neoplasms; Tumor Microenvironment; Humans; Phenotype; Upper Extremity; Neoplasms/diagnostic imaging
17.
IEEE Trans Biomed Eng ; 70(9): 2592-2603, 2023 09.
Article in English | MEDLINE | ID: mdl-37030751

ABSTRACT

In this article, we propose a novel wavelet convolution unit for image-oriented neural networks that integrates wavelet analysis with a vanilla convolution operator to extract deep abstract features more efficiently. On one hand, to acquire non-local receptive fields and avoid information loss, we define a new convolution operation by composing a traditional convolution function with the approximation and detail representations obtained from single-scale wavelet decomposition of the source images. On the other hand, multi-scale wavelet decomposition is introduced to obtain more comprehensive multi-scale feature information. We then fuse all these cross-scale features to mitigate the inaccurate localization of singular points. Given the novel wavelet convolution unit, we further design a network based on it for fine-grained Alzheimer's disease classification (i.e., Alzheimer's disease, normal controls, early mild cognitive impairment, late mild cognitive impairment). Until now, only a few methods have studied one or several of the fine-grained classifications, and even fewer can achieve both fine-grained and multi-class classification. We adopt the novel network and diffusion tensor images to achieve fine-grained classification, obtaining state-of-the-art accuracy for all eight kinds of fine-grained classification: up to 97.30%, 95.78%, 95.00%, 94.00%, 97.89%, 95.71%, 95.07%, and 93.79%. To build a reference standard for Alzheimer's disease classification, we implemented all twelve coarse-grained and fine-grained classifications. The results show that the proposed method achieves solidly high accuracy on all of them, and its classification ability greatly exceeds that of existing Alzheimer's disease classification methods.


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Humans, Alzheimer Disease/diagnostic imaging, Neural Networks (Computer), Brain, Databases (Factual)
18.
IEEE Trans Med Imaging ; 42(10): 2842-2852, 2023 10.
Article in English | MEDLINE | ID: mdl-37043322

ABSTRACT

Dynamic PET imaging provides richer physiological information than conventional static PET imaging. However, this dynamic information comes at the cost of a long scanning protocol, which limits the clinical application of dynamic PET imaging. We developed a modified Logan reference plot model that shortens the acquisition procedure by omitting the early-time information required by the conventional reference Logan model. The proposed model is theoretically accurate, but a straightforward implementation suffers from a sampling problem and yields noisy parametric images. We therefore designed a self-supervised convolutional neural network to improve the noise performance of parametric imaging, using the dynamic images of only a single subject for training. The proposed method was validated on simulated and real dynamic [Formula: see text]-fallypride PET data. Results showed that it accurately estimated the distribution volume ratio (DVR) in dynamic PET with a shortened scanning protocol, e.g., 20 minutes, with estimates comparable to those obtained from a standard 120-minute dynamic PET acquisition. Further comparisons showed that our method outperformed the shortened Logan model implemented with Gaussian filtering, regularization, BM4D, and the 4D deep image prior in terms of the bias-variance trade-off. Since the proposed method uses data acquired over a short period after equilibrium, it has the potential to add clinical value by providing both the DVR and the Standardized Uptake Value (SUV) simultaneously, thus promoting clinical applications of dynamic PET studies of neuroreceptor function.
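For context, the conventional reference Logan graphical analysis that the proposed model modifies can be sketched as follows: after an equilibration time t*, the DVR is the slope of a linear fit of the integrated tissue activity (normalized by the instantaneous tissue activity) against the similarly normalized integrated reference-region activity. The function name, the synthetic time-activity curves, and the omission of the k2' term are illustrative assumptions; the paper's modified model and neural denoising are not reproduced here.

```python
import numpy as np

def logan_dvr(t, ct, cref, t_star):
    """Estimate DVR with the reference Logan plot (k2' term omitted).

    t:    frame mid-times; ct: target-tissue TAC; cref: reference-region TAC.
    DVR is the slope of  y = int(ct)/ct  versus  x = int(cref)/ct,
    fitted over frames acquired after the equilibration time t_star.
    """
    # Running trapezoidal integrals of both time-activity curves.
    int_ct = np.concatenate(([0.0], np.cumsum((ct[1:] + ct[:-1]) / 2 * np.diff(t))))
    int_cr = np.concatenate(([0.0], np.cumsum((cref[1:] + cref[:-1]) / 2 * np.diff(t))))
    late = t >= t_star                   # keep only the linear, late portion
    y = int_ct[late] / ct[late]
    x = int_cr[late] / ct[late]
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# Synthetic check: if the tissue TAC is exactly twice the reference TAC,
# the Logan slope recovers DVR = 2.
t = np.linspace(0.0, 120.0, 61)          # minutes
cref = np.exp(-0.02 * t) + 0.1
ct = 2.0 * cref
dvr = logan_dvr(t, ct, cref, t_star=20.0)
```

The shortened protocol in the paper corresponds to using only such late-time frames, which is why the early-time integral must be handled by the modified model rather than measured directly.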


Subject(s)
Neural Networks (Computer), Positron-Emission Tomography, Positron-Emission Tomography/methods
19.
Front Public Health ; 11: 1143947, 2023.
Article in English | MEDLINE | ID: mdl-37033028

ABSTRACT

Virtual Reality (VR) has emerged as a new, safe, and efficient tool for the rehabilitation of many childhood and adulthood illnesses. VR-based therapies have the potential to improve both motor and functional skills across a wide range of age groups through cortical reorganization and the activation of various neuronal connections. Recently, the potential of serious VR-based games that combine perceptual learning and dichoptic stimulation has been explored for the rehabilitation of ophthalmological and neurological disorders. In ophthalmology, several clinical studies have demonstrated that VR training can enhance stereopsis, contrast sensitivity, and visual acuity. VR technology offers the significant advantage of training each eye individually without requiring occlusion or penalization. In neurological disorders, the majority of patients undergo recurrent episodes (relapses) of neurological impairment; in a substantial fraction of cases (60-80%), however, the illness progresses over time and becomes chronic, resulting in cumulative motor disability and cognitive deficits. Current research on memory restoration has been spurred by theories of brain plasticity and by findings on the nervous system's capacity to rebuild synapses through interaction with enriched environments. VR training can therefore play an important role in improving cognitive function and reducing motor disability. Although several reviews have examined the use of Artificial Intelligence in healthcare, VR has not yet been thoroughly examined in this regard. In this systematic review, we examine the key ideas of VR-based training for prevention and control measures in ocular diseases such as myopia, amblyopia, presbyopia, and Age-related Macular Degeneration (AMD), and in neurological disorders such as Alzheimer's disease, Multiple Sclerosis (MS), epilepsy, and Autism Spectrum Disorder.
This review highlights the fundamentals of VR technologies with regard to their clinical research in healthcare. These findings should raise community awareness of VR training and help researchers learn new techniques for preventing and treating different diseases. We further discuss the current challenges of using VR devices, as well as the future prospects of human training.


Subject(s)
Autism Spectrum Disorder, Persons with Disabilities, Motor Disorders, Nervous System Diseases, Virtual Reality, Humans, Child, Artificial Intelligence
20.
IEEE J Biomed Health Inform ; 26(9): 4497-4507, 2022 09.
Article in English | MEDLINE | ID: mdl-35696469

ABSTRACT

Nasopharyngeal Carcinoma (NPC) is a malignant epithelial cancer arising from the nasopharynx. Survival prediction is a major concern for NPC patients, as it provides early prognostic information for treatment planning. Recently, deep survival models based on deep learning have demonstrated the potential to outperform traditional radiomics-based survival prediction models. Deep survival models usually take as input either image patches covering the whole target region (e.g., the nasopharynx for NPC) or only the segmented tumor regions. However, models using the whole target region also include irrelevant background information, while models using segmented tumor regions disregard potentially prognostic information outside the primary tumors (e.g., local lymph node metastasis and adjacent tissue invasion). In this study, we propose a 3D end-to-end Deep Multi-Task Survival model (DeepMTS) for joint survival prediction and tumor segmentation in advanced NPC from pretreatment PET/CT. Our novelty is the introduction of a hard-sharing segmentation backbone that guides the extraction of local features related to the primary tumors, reducing interference from irrelevant background information. In addition, we introduce a cascaded survival network to capture the prognostic information outside the primary tumors and further leverage the global tumor information (e.g., tumor size, shape, and location) derived from the segmentation backbone. Our experiments on two clinical datasets demonstrate that DeepMTS consistently outperforms traditional radiomics-based survival prediction models and existing deep survival models.
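A multi-task model of this kind is typically trained with a weighted sum of a segmentation loss and a survival loss. The sketch below combines a soft Dice loss with the negative Cox partial log-likelihood commonly used by deep survival models; the function names, the weighting factor `lam`, and the specific loss choices are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation mask (values in [0, 1])."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cox_nll(risk, time, event):
    """Negative Cox partial log-likelihood.

    risk:  predicted risk score per patient (higher = worse prognosis)
    time:  follow-up time; event: 1 = event observed, 0 = censored.
    """
    order = np.argsort(-time)                  # sort by descending time
    risk, event = risk[order], event[order]
    # Running log-sum-exp gives log of the risk-set denominator for each
    # patient: all patients with follow-up time >= their event time.
    log_cum = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_cum) * event) / max(event.sum(), 1)

def multitask_loss(seg_pred, seg_true, risk, time, event, lam=0.5):
    """Joint objective: the segmentation task guides feature learning,
    while the survival head is trained with the Cox loss, weighted by lam."""
    return dice_loss(seg_pred, seg_true) + lam * cox_nll(risk, time, event)
```

A risk score that is concordant with outcomes (higher risk for patients with earlier events) yields a lower Cox loss than an anti-concordant one, which is what gradient descent on this objective exploits.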


Subject(s)
Deep Learning, Nasopharyngeal Neoplasms, Humans, Nasopharyngeal Carcinoma/diagnostic imaging, Nasopharyngeal Carcinoma/pathology, Nasopharyngeal Neoplasms/diagnostic imaging, Nasopharyngeal Neoplasms/pathology, Positron Emission Tomography Computed Tomography/methods, Prognosis