Results 1 - 20 of 226
1.
Photoacoustics ; 38: 100618, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38957484

ABSTRACT

Photoacoustic tomography (PAT), a novel medical imaging technology, provides structural, functional, and metabolic information of biological tissue in vivo. Sparse-sampling PAT (SS-PAT) generates images from a smaller number of detectors, but its image reconstruction is inherently ill-posed. Model-based methods are the state of the art for SS-PAT image reconstruction, yet they require the design of complex handcrafted priors. Owing to their ability to derive robust priors from labeled datasets, deep-learning-based methods have achieved great success in solving inverse problems, but their interpretability is poor. Herein, we propose a novel SS-PAT image reconstruction method based on deep algorithm unrolling (DAU), which integrates the advantages of model-based and deep-learning-based methods. We first provide a thorough analysis of DAU for PAT reconstruction. Then, to incorporate a structural prior constraint, we propose a nested DAU framework based on plug-and-play Alternating Direction Method of Multipliers (PnP-ADMM) to deal with the sparse-sampling problem. Experimental results on numerical simulation, in vivo animal imaging, and multispectral unmixing demonstrate that the proposed DAU image reconstruction framework outperforms state-of-the-art model-based and deep-learning-based methods.
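For readers unfamiliar with the PnP-ADMM scheme this abstract builds on, here is a minimal Python/NumPy sketch of the idea on a toy 1-D sparse-sampling problem: a closed-form data-fidelity update alternates with a plug-in denoiser standing in for the learned prior. The box-filter denoiser, diagonal mask, and parameters are illustrative assumptions, not the authors' network:

```python
import numpy as np

def box_denoiser(v, width=3):
    """Toy plug-in 'prior': a moving-average smoother standing in for a
    trained denoising network (purely illustrative)."""
    kernel = np.ones(width) / width
    return np.convolve(v, kernel, mode="same")

def pnp_admm(y, mask, rho=1.0, n_iter=50, denoiser=box_denoiser):
    """Recover a 1-D signal from sparsely sampled measurements y = mask * x.
    The x-update is closed-form because the forward operator is a diagonal
    sampling mask; the z-update plugs the denoiser in where ADMM would use
    the proximal map of a handcrafted prior."""
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        x = (mask * y + rho * (z - u)) / (mask + rho)   # data fidelity
        z = denoiser(x + u)                             # plug-and-play prior
        u = u + x - z                                   # dual update
    return x
```

In the paper's setting the diagonal mask would be replaced by the PAT forward operator and the denoiser by a trained network; unrolling fixes the iteration count and learns the internal parameters end to end.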

2.
IEEE Trans Med Imaging ; PP, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38640054

ABSTRACT

This paper presents a novel method that leverages physics-informed neural networks for magnetic resonance electrical property tomography (MREPT). MREPT is a noninvasive technique that retrieves the spatial distribution of electrical properties (EPs) of scanned tissues from the transmit radiofrequency (RF) field measured in magnetic resonance imaging (MRI) systems. EP values in MREPT are reconstructed by solving a partial differential equation, derived from Maxwell's equations, that lacks a direct solution. Most conventional MREPT methods suffer from artifacts caused by the violation of the simplifying assumptions of the problem and from numerical errors introduced by numerical differentiation. Existing deep-learning-based (DL-based) MREPT methods are either data-driven, requiring massive datasets for training, or model-driven but validated only in trivial cases. Hence, we propose a model-driven method that learns the mapping from a measured RF field, its spatial gradient, and its Laplacian to EPs using fully connected networks (FCNNs). The spatial gradient of the EPs can be computed through automatic differentiation of the FCNNs and the chain rule. The FCNNs are optimized using the residual of the central physical equation of convection-reaction MREPT as the loss function (L). To alleviate the ill-conditioning of the problem, we added multiple constraints to L, including a similarity constraint between permittivity and conductivity and the ℓ1 norm of the spatial gradients of permittivity and conductivity. We demonstrate the proposed method on a three-dimensional realistic head model, a digital phantom simulation, and a practical phantom experiment on a 9.4 T animal MRI system.
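The physics-residual loss at the heart of such a method can be sketched without a neural network. The toy below (an assumption for illustration, not the authors' code) evaluates the residual of a simplified Helmholtz relation with finite differences instead of automatic differentiation; the full convection-reaction equation has additional gradient terms:

```python
import numpy as np

def laplacian_2d(f, h=1.0):
    """Five-point central-difference Laplacian, evaluated on the interior."""
    return (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
            - 4.0 * f[1:-1, 1:-1]) / h ** 2

def physics_residual_loss(H, gamma, omega=1.0, mu0=1.0, h=1.0):
    """Mean squared residual of the simplified Helmholtz relation
    lap(H) = 1j * omega * mu0 * gamma * H. A physics-informed network would
    minimize such a residual over the EP map gamma."""
    res = laplacian_2d(H, h) - 1j * omega * mu0 * gamma[1:-1, 1:-1] * H[1:-1, 1:-1]
    return float(np.mean(np.abs(res) ** 2))
```

A candidate EP map consistent with the field gives a near-zero loss; an inconsistent one does not, which is the signal the optimizer follows.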

3.
Photoacoustics ; 37: 100601, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38516295

ABSTRACT

Photoacoustic tomography (PAT) is a promising imaging technique that can visualize the distribution of chromophores within biological tissue. However, the accuracy of PAT imaging is compromised by light fluence (LF), which hinders the quantification of light absorbers. Currently, model-based iterative methods are used for LF correction, but they require extensive computational resources because the LF must be repeatedly estimated with differential light-transport models. To improve the efficiency of LF correction, we propose using a Fourier neural operator (FNO), a neural network specially designed for solving partial differential equations, to learn the forward projection of light transport in PAT. Trained on paired finite-element-based LF simulation data, our FNO model replaces the traditional computationally heavy LF estimator during iterative correction, considerably accelerating the correction procedure. Simulation and experimental results demonstrate that our method achieves LF correction quality comparable to traditional iterative methods while reducing the correction time more than 30-fold.
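The distinguishing ingredient of an FNO is its spectral convolution: transform to the Fourier domain, mix only the lowest frequency modes with learned complex weights, and transform back. A hypothetical 1-D sketch (the weights and layer shape are illustrative, not the trained model):

```python
import numpy as np

def spectral_conv_1d(x, weights, n_modes):
    """Core of a Fourier layer: FFT, scale the lowest n_modes frequencies
    by learned complex weights, zero the rest, inverse FFT."""
    x_hat = np.fft.rfft(x)
    out_hat = np.zeros_like(x_hat)
    out_hat[:n_modes] = weights[:n_modes] * x_hat[:n_modes]
    return np.fft.irfft(out_hat, n=x.size)

def fno_layer(x, spectral_w, pointwise_w, n_modes):
    """FNO layer: spectral convolution plus a pointwise linear path,
    followed by a ReLU nonlinearity."""
    return np.maximum(0.0, spectral_conv_1d(x, spectral_w, n_modes) + pointwise_w * x)
```

Because the mixing happens on a truncated spectrum, the operator is resolution-independent, which is what makes FNOs attractive surrogates for PDE solvers such as light-transport models.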

4.
Med Image Anal ; 94: 103148, 2024 May.
Article in English | MEDLINE | ID: mdl-38554550

ABSTRACT

Deep learning methods show great potential for the efficient and precise estimation of quantitative parameter maps from multiple magnetic resonance (MR) images. Current deep-learning-based MR parameter mapping (MPM) methods are mostly trained and tested on data with specific acquisition settings. In practice, however, scan protocols vary across centers, scanners, and studies. Thus, deep learning methods applicable to MPM with varying acquisition settings are highly desirable but still rarely investigated. In this work, we develop a model-based deep network termed MMPM-Net for robust MPM under varying acquisition settings. A deep-learning-based denoiser is introduced to construct the regularization term in the nonlinear inversion problem of MPM. The alternating direction method of multipliers is used to solve the optimization problem and is then unrolled to construct MMPM-Net. Variations in acquisition parameters are handled by the data-fidelity component of MMPM-Net. Extensive experiments are performed on R2 mapping and R1 mapping datasets with substantial variations in acquisition settings, and the results demonstrate that the proposed MMPM-Net outperforms other state-of-the-art MR parameter mapping methods both qualitatively and quantitatively.


Subject(s)
Algorithms ; Image Processing, Computer-Assisted ; Methacrylates ; Humans ; Image Processing, Computer-Assisted/methods ; Brain ; Magnetic Resonance Imaging/methods
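As a point of reference for the R2 mapping task above, the classical non-learned baseline is a log-linear least-squares fit of the mono-exponential decay model; a minimal sketch (function name and test values are illustrative):

```python
import numpy as np

def fit_r2_loglinear(te, signal):
    """Classical baseline for R2 mapping: fit S = S0 * exp(-R2 * TE) by
    linear least squares on log(S); returns (S0, R2)."""
    A = np.stack([np.ones_like(te), -te], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
    return float(np.exp(coef[0])), float(coef[1])
```

A model-based network like the one described keeps exactly this signal model inside its data-fidelity term, which is why it can absorb changes in the echo times without retraining.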
5.
Diagnostics (Basel) ; 14(4)2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38396486

ABSTRACT

Objective: To comprehensively capture intra-tumor heterogeneity in head and neck cancer (HNC) and make maximal use of the valid information collected in clinical practice, we propose a novel multi-modal image-text fusion strategy aimed at improving prognosis prediction. Methods: We developed a tailored diagnostic algorithm for HNC based on a deep learning model that integrates image and clinical text information. For image fusion, we used a cross-attention mechanism to fuse information between PET and CT; for text-image fusion, we used the Q-Former architecture. We also improved the traditional prognostic model by introducing time as a variable in model construction, and finally obtained the corresponding prognostic results. Results: We assessed the efficacy of our methodology on a compiled multicenter dataset, achieving commendable outcomes in multicenter validation. Notably, our results for metastasis-free survival (MFS), recurrence-free survival (RFS), overall survival (OS), and progression-free survival (PFS) were 0.796, 0.626, 0.641, and 0.691, respectively. These results are notably superior to those obtained using CT or PET alone, and exceed those derived without the clinical text information. Conclusions: Our model not only validates the effectiveness of multi-modal fusion in aiding diagnosis, but also provides insights for optimizing survival analysis. The study underscores the potential of our approach to enhance prognosis prediction and contribute to the advancement of personalized medicine in HNC.
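The cross-attention fusion mentioned for PET and CT reduces to scaled dot-product attention where one modality supplies the queries and the other the keys and values. A small illustrative sketch (shapes and the PET/CT role assignment are assumptions for exposition):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: tokens of one modality (queries,
    e.g. PET features) attend to tokens of another (keys/values, e.g. CT
    features); each output row is a convex combination of value rows."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    return softmax(scores, axis=-1) @ values
```

Because each output row is a convex combination of value rows, the fused features stay in the range of the attended modality while being re-weighted by the other.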

6.
Comput Med Imaging Graph ; 111: 102316, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38039866

ABSTRACT

Cylindrical organs, e.g., blood vessels, airways, and intestines, are ubiquitous structures in biomedical optical imaging analysis. Image segmentation of these structures serves as a vital step in tissue physiology analysis. Traditional model-driven segmentation methods seek to fit the structure by constructing a corresponding topological geometry based on domain knowledge. Classification-based deep learning methods neglect the geometric features of the cylindrical structure and therefore cannot ensure the continuity of the segmentation surface. In this paper, by treating the cylindrical structures as a 3D graph, we introduce a novel contour-based graph neural network for 3D cylindrical structure segmentation in biomedical optical imaging. Our proposed method, which we named CylinGCN, adopts a novel learnable framework that extracts semantic features and complex topological relationships in the 3D volumetric data to achieve continuous and effective 3D segmentation. Our CylinGCN consists of a multiscale 3D semantic feature extractor for extracting inter-frame multiscale semantic features, and a residual graph convolutional network (GCN) contour generator that combines the semantic features and cylindrical topological priors to generate segmentation contours. We tested the CylinGCN framework on two types of optical tomographic imaging data, small animal whole body photoacoustic tomography (PAT) and endoscopic airway optical coherence tomography (OCT), and the results show that CylinGCN achieves state-of-the-art performance. Code will be released at https://github.com/lzc-smu/CylinGCN.git.


Subject(s)
Neural Networks, Computer ; Tomography, X-Ray Computed ; Tomography, X-Ray Computed/methods ; Tomography, Optical Coherence/methods ; Image Processing, Computer-Assisted/methods
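The graph-convolutional building block of a contour generator like the one above is a single normalized neighborhood-mixing step. A generic sketch, assuming the standard symmetric normalization (not the authors' exact residual architecture):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, symmetrically normalize
    the adjacency, mix neighbor features, apply a linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, a_norm @ feats @ weight)
```

For contour vertices the graph is typically a ring (each point connected to its neighbors along the contour), so each layer smooths and refines vertex positions using adjacent context.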
7.
IEEE Trans Med Imaging ; 43(5): 1702-1714, 2024 May.
Article in English | MEDLINE | ID: mdl-38147426

ABSTRACT

Photoacoustic tomography (PAT) and magnetic resonance imaging (MRI) are two advanced imaging techniques widely used in pre-clinical research. PAT has high optical contrast and large imaging depth but poor soft-tissue contrast, whereas MRI provides excellent soft-tissue information but poor temporal resolution. Despite recent advances in medical image fusion with pre-aligned multimodal data, PAT-MRI image fusion remains challenging due to misaligned images and spatial distortion. To address these issues, we propose an unsupervised multi-stage deep learning framework called PAMRFuse for fusing misaligned PAT and MRI images. PAMRFuse comprises a multimodal-to-unimodal registration network that accurately aligns the input PAT-MRI image pairs and a self-attentive fusion network that selects information-rich features for fusion. We employ an end-to-end mutually reinforcing mode in our registration network, which enables joint optimization of cross-modality image generation and registration. To the best of our knowledge, this is the first attempt at information fusion for misaligned PAT and MRI. Qualitative and quantitative experimental results show the excellent performance of our method in fusing PAT-MRI images of small animals captured with commercial imaging systems.


Subject(s)
Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Multimodal Imaging ; Photoacoustic Techniques ; Magnetic Resonance Imaging/methods ; Animals ; Multimodal Imaging/methods ; Image Processing, Computer-Assisted/methods ; Photoacoustic Techniques/methods ; Unsupervised Machine Learning ; Algorithms ; Mice ; Deep Learning
8.
Quant Imaging Med Surg ; 13(12): 8336-8349, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38106319

ABSTRACT

Background: Rhabdomyolysis (RM)-induced acute kidney injury (AKI) is a common renal disease with a low survival rate and poor prognosis. In this study, we investigated the feasibility of chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) for assessing the progression of RM-induced AKI in a mouse model. Methods: AKI was induced in C57BL/6J mice via intramuscular injection of 7.5 mL/kg glycerol (n=30). Subsequently, serum creatinine (SCr) and blood urea nitrogen (BUN) measurements, as well as hematoxylin-eosin (HE) and Masson staining, were performed. Longitudinal CEST-MRI was conducted on days 1, 3, 7, 15, and 30 after AKI induction using a 7.0-T MRI system. CEST-MRI quantification parameters, including the magnetization transfer ratio (MTR), MTR asymmetry analysis (MTRasym), apparent amide proton transfer (APT*), and apparent relayed nuclear Overhauser effect (rNOE*), were used to investigate the feasibility of detecting RM-induced renal damage. Results: Significant increases in SCr and BUN confirmed established AKI. HE staining revealed various degrees of tubular damage, and Masson staining indicated an increased degree of fibrosis in the injured kidneys. Among the CEST parameters, the cortical MTR presented a significant difference; it also showed the best diagnostic performance for AKI [area under the receiver operating characteristic curve (AUC) =0.915] and moderate negative correlations with SCr and BUN. On the first day of renal damage, MTR was significantly reduced in the cortex (22.7%±0.04%, P=0.013), the outer stripe of the outer medulla (24.7%±1.6%, P<0.001), and the inner stripe of the outer medulla (27.0%±1.5%, P<0.001) compared with the control group. Longitudinally, MTR increased steadily with AKI progression. Conclusions: The MTR obtained from CEST-MRI is sensitive to the pathological changes in RM-induced AKI, indicating its potential clinical utility for the assessment of kidney diseases.
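The two core CEST quantities used above have simple definitions that are easy to state in code; a minimal sketch (variable names are illustrative, and S0 denotes the unsaturated reference signal):

```python
def mtr(s0, s_sat):
    """Magnetization transfer ratio: fractional signal drop under RF saturation."""
    return (s0 - s_sat) / s0

def mtr_asym(s_neg, s_pos, s0):
    """Z-spectrum asymmetry at +/- offset frequency, normalized by S0."""
    return (s_neg - s_pos) / s0
```

For example, a saturated signal of 80 against a reference of 100 gives an MTR of 0.2; tissue changes that alter the semi-solid proton pool shift this ratio, which is the sensitivity the study exploits.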

9.
Front Neurosci ; 17: 1287788, 2023.
Article in English | MEDLINE | ID: mdl-38033538

ABSTRACT

Background: Accurate phase unwrapping is a critical prerequisite for successful applications of phase-related MRI, including quantitative susceptibility mapping (QSM) and susceptibility-weighted imaging. However, many existing 3D phase-unwrapping algorithms face challenges in the presence of severe noise, rapidly changing phase, and open-ended cutlines. Methods: In this study, we introduce a novel 3D phase-unwrapping approach utilizing region partitioning and a local polynomial model. Initially, the method leverages phase partitioning to create initial regions. Noisy voxels connecting areas within these regions are excluded and grouped into residual voxels. The connected regions within the region of interest are then re-identified and categorized into blocks and residual voxels based on voxel-count thresholds. Subsequently, the method sequentially performs inter-block and residual-voxel phase unwrapping using the local polynomial model. The proposed method was evaluated on simulated and in vivo abdominal QSM data, and was compared with the classical Region-growing, Laplacian-based, Graph-cut, and PRELUDE methods. Results: Simulation experiments, conducted under different signal-to-noise ratios and phase-change levels, consistently demonstrate that the proposed method achieves accurate unwrapping results, with mean error ratios not exceeding 0.01%. In contrast, the error ratios of the Region-growing (N/A, 84.47%), Laplacian-based (20.65%, N/A), Graph-cut (2.26%, 20.71%), and PRELUDE (4.28%, 10.33%) methods are all substantially higher than those of the proposed method. In vivo abdominal QSM experiments further confirm the effectiveness of the proposed method in unwrapping phase data and successfully reconstructing susceptibility maps, even in scenarios with significant noise, rapidly changing phase, and open-ended cutlines in a large field of view.
Conclusion: The proposed method demonstrates robust and accurate phase-unwrapping capabilities, positioning it as a promising option for abdominal QSM applications.
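To see what "unwrapping" means concretely, the classical 1-D building block (Itoh's method) is shown below; this is background material, not the paper's 3D algorithm, and its failure condition is exactly what noise and open-ended cutlines trigger in 3D:

```python
import math

def unwrap_1d(phase):
    """Itoh's method: integrate successive phase differences after wrapping
    each into [-pi, pi). Valid only while the true phase changes by less
    than pi per sample -- the condition broken by severe noise and
    open-ended cutlines, which motivates region-based 3D schemes."""
    out = [phase[0]]
    for p_prev, p_curr in zip(phase, phase[1:]):
        d = (p_curr - p_prev + math.pi) % (2.0 * math.pi) - math.pi
        out.append(out[-1] + d)
    return out
```

Region-based methods generalize this by unwrapping reliable blocks first and fitting local models across unreliable voxels instead of integrating through them.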

10.
Biomed Opt Express ; 14(9): 4594-4608, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37791278

ABSTRACT

Endoscopic airway optical coherence tomography (OCT) is a non-invasive, high-resolution imaging modality for the diagnosis and analysis of airway-related diseases. To reliably characterize the 3D structure of the upper airway during OCT imaging, the airway lumen contour must be detected automatically, rotational distortion corrected, and a 3D airway reconstruction performed. Based on a long-range endoscopic OCT imaging system equipped with a magnetic tracker, we present a fully automatic framework to reconstruct a 3D upper-airway model with correct bending anatomy. Our method includes an automatic segmentation method for the upper airway based on a dynamic programming algorithm, an automatic correction of the initial rotation-angle error of the detected 2D airway lumen contour, and an anatomic bending method that uses the centerline detected from the magnetically tracked imaging probe. The proposed automatic reconstruction framework is validated on experimental datasets acquired from two healthy adults. The results show that the proposed framework fully automates 3D airway reconstruction from OCT images and thus has the potential to improve the analysis efficiency of endoscopic OCT images.
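Dynamic-programming lumen segmentation of the kind mentioned above typically reduces to finding a minimum-cost path through an edge-cost image. A generic sketch under that assumption (not the authors' exact formulation):

```python
def dp_contour(cost):
    """Minimum-cost left-to-right path through a 2-D cost image, moving at
    most one row per column: a basic dynamic-programming contour tracker
    of the kind used for lumen segmentation."""
    n_rows, n_cols = len(cost), len(cost[0])
    acc = [[0.0] * n_cols for _ in range(n_rows)]
    back = [[0] * n_cols for _ in range(n_rows)]
    for r in range(n_rows):
        acc[r][0] = cost[r][0]
    for c in range(1, n_cols):
        for r in range(n_rows):
            best, back[r][c] = min(
                (acc[rp][c - 1], rp) for rp in (r - 1, r, r + 1) if 0 <= rp < n_rows)
            acc[r][c] = cost[r][c] + best
    r = min(range(n_rows), key=lambda i: acc[i][n_cols - 1])  # cheapest endpoint
    path = [r]
    for c in range(n_cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

In polar OCT frames the columns are angles and the rows are radii, so the ±1-row constraint enforces a smooth, closed lumen boundary.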

11.
Front Bioeng Biotechnol ; 11: 1236108, 2023.
Article in English | MEDLINE | ID: mdl-37744251

ABSTRACT

Introduction: The estimation of myocardial motion abnormalities has great potential for the early diagnosis of myocardial infarction (MI). This study aims to quantitatively analyze segmental and transmural myocardial motion in MI rats by incorporating two novel strategies: algorithm parameter optimization and transmural motion index (TMI) calculation. Methods: Twenty-one rats were randomly divided into three groups (n = 7 per group): sham, MI, and ischemia-reperfusion (IR). Ultrasound radio-frequency (RF) signals were acquired from each rat heart at 1 day and 28 days after model establishment, yielding six datasets denoted Sham1, Sham28, MI1, MI28, IR1, and IR28. The systolic cumulative displacement was calculated using our previously proposed vectorized normalized cross-correlation (VNCC) method. A semiautomatic regional and layer-specific myocardium-segmentation framework was proposed for transmural and segmental myocardial motion estimation. Two novel strategies were proposed: the displacement-compensated cross-correlation coefficient (DCCCC) for algorithm parameter optimization and the transmural motion index (TMI) for quantitative estimation of the cross-wall transmural motion gradient. Results: The results showed that an overlap of 80% in VNCC guaranteed a more accurate displacement calculation. Compared with the Sham1 group, significant systolic myocardial motion reductions (p < 0.05) were detected in the middle anteroseptal (M-ANT-SEP), basal anteroseptal (B-ANT-SEP), apical lateral (A-LAT), middle inferolateral (M-INF-LAT), and basal inferolateral (B-INF-LAT) walls, as well as a significant TMI drop (p < 0.05) in the M-ANT-SEP wall, in the MI1 rats; significant motion reductions (p < 0.05) were also detected in the B-ANT-SEP and A-LAT walls in the IR1 group. Motion improvements (p < 0.05) were detected in the M-INF-LAT wall in the MI28 group and the apical septal (A-SEP) wall in the IR28 group compared with the MI1 and IR1 groups, respectively. Discussion: Our results show that MI-induced reductions and reperfusion-induced recovery in systolic myocardial contractility can be successfully evaluated using our method, and most post-MI myocardial segments recover systolic function to various extents in the remodeling phase. In conclusion, the proposed ultrasound-based quantitative framework for estimating segmental and transmural myocardial motion has great potential for non-invasive, novel, and early MI detection.
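The displacement estimation underlying NCC-based speckle tracking can be illustrated in one dimension: slide a reference window over the post-motion signal and keep the lag that maximizes the normalized cross-correlation. A toy sketch (window length, lag range, and signals are illustrative, not the VNCC implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 0.0

def estimate_shift(ref, moved, win=16, max_lag=8):
    """Integer displacement of a window estimated by maximizing NCC over
    candidate lags -- the core operation behind speckle-tracking
    displacement estimators."""
    window = ref[:win]
    return max(range(max_lag + 1), key=lambda s: ncc(window, moved[s:s + win]))
```

Practical estimators refine this with sub-sample interpolation and accumulate the per-frame shifts into the cumulative systolic displacement reported above.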

12.
Photoacoustics ; 32: 100536, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37575971

ABSTRACT

Photoacoustic tomography (PAT) images contain inherent distortions due to the imaging system and heterogeneous tissue properties. Improving image quality requires the removal of these system distortions. While model-based approaches and data-driven techniques have been proposed for PAT image restoration, achieving accurate and robust image recovery remains challenging. Recently, deep-learning-based image deconvolution approaches have shown promise for image recovery. However, PAT imaging presents unique challenges, including spatially varying resolution and the absence of ground truth data. Consequently, there is a pressing need for a novel learning strategy specifically tailored for PAT imaging. Herein, we propose a configurable network model named Deep hybrid Image-PSF Prior (DIPP) that builds upon the physical image degradation model of PAT. DIPP is an unsupervised and deeply learned network model that aims to extract the ideal PAT image from complex system degradation. Our DIPP framework captures the degraded information solely from the acquired PAT image, without relying on ground truth or labeled data for network training. Additionally, we can incorporate the experimentally measured Point Spread Functions (PSFs) of the specific PAT system as a reference to further enhance performance. To evaluate the algorithm's effectiveness in addressing multiple degradations in PAT, we conduct extensive experiments using simulation images, publicly available datasets, phantom images, and in vivo small animal imaging data. Comparative analyses with classical analytical methods and state-of-the-art deep learning models demonstrate that our DIPP approach achieves significantly improved restoration results in terms of image details and contrast.
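For context on PSF-based restoration, the classical non-learned counterpart of such a method is Richardson-Lucy deconvolution with a measured PSF; a 1-D sketch (an assumed baseline for illustration, not the DIPP model):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Classical iterative deconvolution with a known, normalized PSF.
    The measured-PSF reference in learned restoration plays an analogous
    role: it encodes the system degradation to be inverted."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est
```

Unlike this fixed iteration, an unsupervised network prior can also handle the spatially varying PSFs that make PAT restoration hard.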

13.
Photoacoustics ; 31: 100506, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37397508

ABSTRACT

Magnetic resonance imaging (MRI) and photoacoustic tomography (PAT) offer two distinct image contrasts. To integrate these two modalities, we present a comprehensive hardware-software solution for the successive acquisition and co-registration of PAT and MRI images in in vivo animal studies. Based on commercial PAT and MRI scanners, our solution includes a 3D-printed dual-modality imaging bed, a 3-D spatial image co-registration algorithm with dual-modality markers, and a robust modality switching protocol for in vivo imaging studies. Using the proposed solution, we successfully demonstrated co-registered hybrid-contrast PAT-MRI imaging that simultaneously displays multi-scale anatomical, functional and molecular characteristics on healthy and cancerous living mice. Week-long longitudinal dual-modality imaging of tumor development reveals information on size, border, vascular pattern, blood oxygenation, and molecular probe metabolism of the tumor micro-environment at the same time. The proposed methodology holds promise for a wide range of pre-clinical research applications that benefit from the PAT-MRI dual-modality image contrast.
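Marker-based co-registration of the kind described reduces, for paired fiducial coordinates, to a least-squares rigid transform (the Kabsch algorithm). A sketch under that assumption (the paper's 3-D algorithm may include additional steps such as scaling or resampling):

```python
import numpy as np

def kabsch_register(src, dst):
    """Least-squares rigid registration (rotation + translation) of paired
    marker coordinates, so that dst ~= rot @ src + t for each point."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_mean).T @ (dst - dst_mean)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return rot, dst_mean - rot @ src_mean
```

With three or more non-collinear dual-modality markers visible in both volumes, this recovers the rotation and translation that map PAT coordinates into the MRI frame.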

14.
Quant Imaging Med Surg ; 13(3): 1550-1562, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36915306

ABSTRACT

Background: To develop an accurate and robust 3-dimensional (3D) phase-unwrapping method for quantitative susceptibility mapping (QSM) that works effectively in the presence of severe noise, disconnected regions, rapid phase changes, and open-ended lines. Methods: We developed a 3D phase-unwrapping method based on voxel clustering and local polynomial modeling, named CLOSE3D. It first explores the 26-neighborhood to calculate the local variation of the phasor and the phase, and then, according to the local variation of the phasor, clusters the phase data into easy-to-unwrap blocks and difficult-to-unwrap residual voxels. Next, CLOSE3D sequentially performs intra-block, inter-block, and residual-voxel unwrapping using the region-growing local polynomial modeling method. CLOSE3D was evaluated in simulation and on in vivo brain QSM data, and was compared with the classical region-growing and region-expanding labeling for unwrapping estimates methods. Results: The simulation experiments showed that CLOSE3D achieved accurate phase-unwrapping results, with a mean error ratio <0.39%, even in the presence of serious noise, disconnected regions, and rapid phase changes. The error ratios of the region-growing (P=0.000 and P=0.000) and region-expanding labeling for unwrapping estimates (P=0.007, P=0.014) methods were both significantly higher than those of CLOSE3D when the noise level was ≥60%. The in vivo brain QSM experiments showed that CLOSE3D unwrapped the phase data and accurately reconstructed quantitative susceptibility data, even with serious noise, rapidly varying phase, or an open-ended cutline. Conclusions: CLOSE3D achieves phase unwrapping with high accuracy and robustness, which will help phase-related 3D magnetic resonance imaging (MRI) applications such as QSM and susceptibility-weighted imaging.

15.
Cancers (Basel) ; 15(3)2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36765889

ABSTRACT

PURPOSE: This study investigates the impact of the aggregation methods used to generate texture features on the robustness of those features in nasopharyngeal carcinoma (NPC), based on 18F-FDG PET/CT images. METHODS: 128 NPC patients were enrolled, and 95 texture features spanning six feature families were extracted for each patient under different aggregation methods. For GLCM and GLRLM features, six aggregation methods were considered; for GLSZM, GLDZM, NGTDM, and NGLDM features, three. The robustness of the features to the aggregation method was assessed by the pair-wise intra-class correlation coefficient (ICC). Furthermore, the effects of discretization and partial volume correction (PVC) on the proportion of ICC categories across all texture features were evaluated using the overall ICC instead of the pair-wise ICC. RESULTS: Twelve features retained excellent pair-wise ICCs across aggregation methods, namely joint average, sum average, autocorrelation, long run emphasis, high grey level run emphasis, short run high grey level emphasis, long run high grey level emphasis, run length variance, SZM high grey level emphasis, DZM high grey level emphasis, high grey level count emphasis, and dependence count percentage. For GLCM and GLRLM features, 19/25 and 14/16 features, respectively, showed excellent pair-wise ICCs across aggregation methods (averaged and merged) on the same dimensional features (2D, 2.5D, or 3D). Different discretization levels and partial volume corrections led to consistent robustness of texture features to the aggregation method. CONCLUSION: Features of different dimensionality with the same aggregation method were less robust than features of the same dimensionality with different aggregation methods. Different discretization levels and PVC algorithms had a negligible effect on the proportion of ICC categories across all texture features.

16.
Magn Reson Med ; 89(5): 1961-1974, 2023 May.
Article in English | MEDLINE | ID: mdl-36705076

ABSTRACT

PURPOSE: This work aims to develop a novel distortion-free 3D-EPI acquisition and image reconstruction technique for fast and robust, high-resolution, whole-brain imaging as well as quantitative T2* mapping. METHODS: The 3D blip-up and -down acquisition (3D-BUDA) sequence is designed for both single- and multi-echo 3D gradient-recalled echo (GRE)-EPI imaging, using multiple shots with blip-up and -down readouts to encode B0 field-map information. Complementary k-space coverage is achieved using controlled aliasing in parallel imaging (CAIPI) sampling across the shots. For image reconstruction, an iterative hard-thresholding algorithm is employed to minimize a cost function that combines field-map-informed parallel imaging with a structured low-rank constraint for multi-shot 3D-BUDA data. Extending 3D-BUDA to multi-echo imaging permits T2* mapping; for this, we propose constructing a joint Hankel matrix along both the echo and shot dimensions to improve the reconstruction. RESULTS: Experimental results on in vivo multi-echo data demonstrate that joint reconstruction along both the echo and shot dimensions improves reconstruction accuracy compared with standard 3D-BUDA reconstruction. CAIPI sampling is further shown to enhance image quality. For T2* mapping, parameter values from 3D-Joint-CAIPI-BUDA and the reference multi-echo GRE are within limits of agreement, as quantified by Bland-Altman analysis. CONCLUSIONS: The proposed technique enables rapid 3D distortion-free high-resolution imaging and T2* mapping. Specifically, 3D-BUDA enables 1-mm isotropic whole-brain imaging in 22 s at 3 T and in 9 s on a 7 T scanner. The combination of multi-echo 3D-BUDA with CAIPI acquisition and joint reconstruction enables distortion-free whole-brain T2* mapping in 47 s at 1.1 × 1.1 × 1.0 mm³ resolution.


Subject(s)
Echo-Planar Imaging ; Image Processing, Computer-Assisted ; Image Processing, Computer-Assisted/methods ; Echo-Planar Imaging/methods ; Imaging, Three-Dimensional/methods ; Brain/diagnostic imaging ; Brain Mapping/methods ; Algorithms
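The Bland-Altman analysis cited above for comparing T2* values boils down to the bias and 95% limits of agreement of the paired differences; a minimal sketch (the data in the test are invented for illustration):

```python
def bland_altman_limits(a, b):
    """Bland-Altman agreement statistics: mean difference (bias) and the
    95% limits of agreement, bias +/- 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Two methods "agree" in this sense when nearly all paired differences fall inside the limits and the bias is clinically negligible.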
17.
Comput Methods Programs Biomed ; 230: 107346, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36716637

ABSTRACT

BACKGROUND AND OBJECTIVE: Predicting the malignant potential of breast lesions from breast ultrasound (BUS) images is a crucial component of computer-aided diagnosis systems for breast cancer. However, since breast lesions in BUS images generally have varied shapes, relatively low contrast, and complex textures, accurately identifying their malignant potential remains challenging. METHODS: In this paper, we propose a multi-scale gradational-order fusion framework that takes full advantage of multi-scale representations incorporating the gradational-order characteristics of BUS images for breast-lesion classification. Specifically, we first construct a spatial context aggregation module to generate multi-scale context representations from the original BUS images. These multi-scale representations are then efficiently fused in a feature-fusion block equipped with dedicated fusion strategies to comprehensively capture the morphological characteristics of breast lesions. To better characterize complex textures and enhance non-linear modeling capability, we further propose an isotropous gradational-order feature module within the feature-fusion block to learn and combine multi-order representations. Finally, these multi-scale gradational-order representations are used to predict the malignant potential of breast lesions. RESULTS: The proposed model was evaluated on three open datasets using 5-fold cross-validation. The experimental results (Accuracy: 85.32%, Sensitivity: 85.24%, Specificity: 88.57%, AUC: 90.63% on dataset A; Accuracy: 76.48%, Sensitivity: 72.45%, Specificity: 80.42%, AUC: 78.98% on dataset B) demonstrate that the proposed method achieves promising performance compared with other deep-learning-based methods for BUS classification.
CONCLUSIONS: The proposed method shows promise for predicting the malignant potential of breast lesions from ultrasound images in an end-to-end manner.


Subject(s)
Breast Neoplasms ; Breast ; Female ; Humans ; Breast/diagnostic imaging ; Breast/pathology ; Breast Neoplasms/diagnostic imaging ; Breast Neoplasms/pathology ; Ultrasonography ; Ultrasonography, Mammary ; Diagnosis, Computer-Assisted/methods
18.
IEEE Trans Neural Netw Learn Syst ; 34(7): 3737-3750, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34596560

ABSTRACT

The Cox proportional hazards model has been widely applied to cancer prognosis prediction. Nowadays, multi-modal data, such as histopathological images and gene data, have advanced this field by providing histologic phenotype and genotype information. However, efficiently fusing and selecting the complementary information in high-dimensional multi-modal data remains challenging for the Cox model, as it is not generally equipped with a feature fusion/selection mechanism. Many previous studies perform feature fusion/selection in the original feature space before Cox modeling. Alternatively, it is desirable to learn a latent shared feature space that is tailored to the Cox model and simultaneously remains sparse. In addition, existing Cox-based models commonly pay little attention to the actual length of the observed time, which may help to boost model performance. In this article, we propose a novel Cox-driven multi-constraint latent representation learning framework for prognosis analysis with multi-modal data. Specifically, for efficient feature fusion, a multi-modal latent space is learned via a bi-mapping approach under ranking and regression constraints. The ranking constraint utilizes the log partial likelihood of the Cox model to induce learning of discriminative representations in a task-oriented manner. Meanwhile, the representations also benefit from the regression constraint, which imposes the supervision of specific survival times on representation learning. To improve generalization and alleviate overfitting, we further introduce similarity and sparsity constraints to encourage extra consistency and sparseness. Extensive experiments on three datasets acquired from The Cancer Genome Atlas (TCGA) demonstrate that the proposed method is superior to state-of-the-art Cox-based models.


Subject(s)
Learning , Neural Networks (Computer) , Generalization (Psychological) , Prognosis , Probability
19.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 7577-7594, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36383577

ABSTRACT

Current survival analysis of cancers confronts two key issues. While the comprehensive perspectives provided by data from multiple modalities often promote the performance of survival models, data with inadequate modalities at the testing phase are far more common in clinical scenarios, which renders multi-modality approaches inapplicable. Additionally, incomplete observations (i.e., censored instances) pose a unique challenge for survival analysis; some models tackle this by relying on strict assumptions or particular attribute distributions, which may limit their applicability. In this paper, we present a mutual-assistance learning paradigm for standalone mono-modality survival analysis of cancers. The mutual assistance implies the cooperation of multiple components and embodies three aspects: 1) it leverages the knowledge of multi-modality data to guide the representation learning of an individual modality via mutual-assistance similarity and geometry constraints; 2) it formulates mutual-assistance regression and ranking functions independent of strong hypotheses to estimate the relative risk, in which a bias vector is introduced to efficiently cope with the censoring problem; 3) it integrates representation learning and survival modeling into a unified mutual-assistance framework, alleviating the requirement on attribute distributions. Extensive experiments on several datasets demonstrate our method can significantly improve the performance of mono-modality survival models.
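Survival models that estimate relative risk under censoring are commonly evaluated with Harrell's concordance index, which scores only pairs whose ordering is actually observable. A minimal sketch (illustrative, not the paper's code):

```python
def concordance_index(risk, time, event):
    """Harrell's C-index: fraction of comparable pairs ordered correctly.

    A pair (i, j) is comparable only when the earlier observed time
    belongs to an uncensored subject; ties in risk count as half-correct.
    """
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 1.0 means every subject who failed earlier was assigned a higher risk score; 0.5 corresponds to random ranking. Censored subjects still contribute as the later member of comparable pairs, which is how ranking-based objectives extract signal from incomplete observations.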


Subject(s)
Algorithms , Neoplasms , Humans , Survival Analysis , Neoplasms/diagnostic imaging , Neoplasms/therapy , Machine Learning
20.
Comput Methods Programs Biomed ; 229: 107267, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36502547

ABSTRACT

OBJECTIVES: We aimed to propose an automatic segmentation method for the left ventricle (LV) from 16-gate electrocardiogram (ECG)-gated 13N-NH3 PET/CT myocardial perfusion imaging (MPI) to improve the performance of LV function assessment. METHODS: Ninety-six cases with confirmed or suspected obstructive coronary artery disease (CAD) were enrolled in this research. The LV myocardial contours were delineated by physicians as ground truth. We developed an automatic segmentation method that introduces the self-attention mechanism into 3D U-Net to capture global image information and thereby achieve fine segmentation of the LV. Three cross-validation tests were performed on each gate (64 vs. 32 cases for training vs. validation). Effectiveness was validated by quantitative metrics (modified Hausdorff distance, MHD; Dice ratio, DR; 3D MHD) as well as cardiac functional parameters (end-systolic volume, ESV; end-diastolic volume, EDV; ejection fraction, EF). Furthermore, the feasibility of the proposed method was evaluated by intra- and inter-observer analyses using DR and 3D MHD. RESULTS: Compared with the backbone network, the proposed approach improved the average DR from 0.905 ± 0.0193 to 0.9202 ± 0.0164 and decreased the average 3D MHD from 0.4611 ± 0.0349 to 0.4304 ± 0.0339. The average relative error of LV volume between the proposed method and ground truth was 1.09 ± 3.66%, with a correlation coefficient of 0.992 ± 0.007 (P < 0.001). The EDV, ESV, and EF derived from the proposed approach were highly correlated with ground truth (r ≥ 0.864, P < 0.001), and the correlation with commercial software was fair (r ≥ 0.871, P < 0.001). The DR and 3D MHD between the contours delineated by the two observers were higher than 0.899 and lower than 0.5194, respectively. CONCLUSION: The proposed approach is highly feasible for automatic segmentation of the LV cavity and myocardium, with the potential to improve the precision of LV function assessment.
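The DR and MHD metrics used in this evaluation are straightforward to compute on binary masks and contour point sets. A minimal NumPy sketch (illustrative, not the study's implementation):

```python
import numpy as np

def dice_ratio(a, b):
    """Dice ratio between two binary masks (1.0 = perfect overlap)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def modified_hausdorff(a_pts, b_pts):
    """Modified Hausdorff distance between two (N, D) point sets:
    the larger of the two mean directed nearest-neighbour distances."""
    # Pairwise Euclidean distances via broadcasting: shape (len(a), len(b)).
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

Unlike the classical Hausdorff distance, which takes the worst-case point mismatch, the modified form averages the nearest-neighbour distances in each direction, so it is less sensitive to a single outlying contour point.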


Subject(s)
Coronary Artery Disease , Myocardial Perfusion Imaging , Humans , Positron Emission Tomography Computed Tomography , Heart Ventricles/diagnostic imaging , Ventricular Function, Left , Coronary Artery Disease/diagnostic imaging , Reproducibility of Results