Results 1 - 20 of 62
1.
IEEE Trans Med Imaging ; PP, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801688

ABSTRACT

Accurately segmenting tubular structures, such as blood vessels or nerves, holds significant clinical implications across various medical applications. However, existing methods often exhibit limitations in achieving satisfactory topological performance, particularly in terms of preserving connectivity. To address this challenge, we propose a novel deep-learning approach, termed Deep Closing, inspired by the well-established classic closing operation. Deep Closing first leverages an AutoEncoder trained in the Masked Image Modeling (MIM) paradigm, enhanced with digital topology knowledge, to effectively learn the inherent shape prior of tubular structures and indicate potentially disconnected regions. Subsequently, a Simple Components Erosion module is employed to generate topology-focused outcomes, which refine the preceding segmentation results, ensuring that all the generated regions are topologically significant. To evaluate the efficacy of Deep Closing, we conduct comprehensive experiments on four datasets: DRIVE, CHASE DB1, DCA1, and CREMI. The results demonstrate that our approach yields considerable improvements in topological performance compared with existing methods. Furthermore, Deep Closing exhibits the ability to generalize and transfer knowledge from external datasets, showcasing its robustness and adaptability. The code for this paper is available at: https://github.com/5k5000/DeepClosing.
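
For readers unfamiliar with the classic closing operation that inspired Deep Closing, the following is a minimal NumPy/SciPy sketch of morphological closing bridging a small gap in a binary tubular mask. It illustrates only the classic operation, not the learned pipeline described above; the toy mask and structuring element are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_closing

# Toy binary "vessel" mask with a one-pixel gap in the middle row.
mask = np.zeros((7, 7), dtype=bool)
mask[3, :] = True
mask[3, 3] = False  # simulated disconnection

# Classic closing = dilation followed by erosion; a 3x3 structuring
# element is enough to bridge the one-pixel gap in this toy example.
closed = binary_closing(mask, structure=np.ones((3, 3), dtype=bool))

print("gap before closing:", mask[3, 3])    # False
print("gap after closing: ", closed[3, 3])  # True
```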

2.
Article in English | MEDLINE | ID: mdl-38315607

ABSTRACT

Multi-Source-Free Unsupervised Domain Adaptation (MSFUDA) requires aggregating knowledge from multiple source models and adapting it to the target domain. Two challenges remain: 1) suboptimal coarse-grained (domain-level) aggregation of multiple source models, and 2) risky semantics propagation based on local structures. In this paper, we propose an evidential learning method for MSFUDA, in which we formulate two uncertainties, i.e., Evidential Prediction Uncertainty (EPU) and Evidential Adjacency-Consistent Uncertainty (EAU), to address these two challenges respectively. The former, EPU, captures the uncertainty of a sample fitted to a source model, which can suggest the preferences of target samples for different source models. Based on this, we develop an EPU-Based Multi-Source Aggregation module to achieve fine-grained, instance-level source knowledge aggregation. The latter, EAU, provides a robust measure of consistency among adjacent samples in the target domain. Utilizing this, we develop an EAU-Guided Local Structure Mining module to ensure the trustworthy propagation of semantics. The two modules are integrated into the Evidential Aggregation and Adaptation Framework (EAAF), and we demonstrate that this framework achieves state-of-the-art performance on three MSFUDA benchmarks. Code is available at https://github.com/SPIresearch/EAAF.
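
As background for the evidential formulation, the sketch below shows only the common evidential-learning recipe such methods build on: non-negative evidence parameterizes a Dirichlet distribution, and the vacuity term K / sum(alpha) acts as a per-sample uncertainty. The function name, shapes and toy logits are assumptions for illustration; the paper's exact EPU and EAU definitions may differ.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits: torch.Tensor):
    """Map raw network outputs to Dirichlet evidence and a vacuity-style
    uncertainty in (0, 1]; higher values mean the source model is less
    confident about the target sample."""
    evidence = F.softplus(logits)          # non-negative evidence per class
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    num_classes = logits.shape[1]
    prob = alpha / strength                # expected class probabilities
    uncertainty = num_classes / strength   # vacuity: K / sum(alpha)
    return prob, uncertainty.squeeze(1)

# Toy usage: three target samples scored by one source model (4 classes).
logits = torch.tensor([[4.0, 0.1, 0.2, 0.1],
                       [1.0, 0.9, 1.1, 1.0],
                       [0.0, 0.0, 0.0, 0.0]])
prob, u = dirichlet_uncertainty(logits)
print(u)  # the flatter the evidence, the larger the uncertainty
```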

3.
IEEE Trans Med Imaging ; 43(4): 1640-1651, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38133966

ABSTRACT

Unsupervised domain adaptation (UDA) aims to mitigate the performance drop of models tested on the target domain, caused by the domain shift between the source and target domains. Most UDA segmentation methods focus on the scenario of a single source domain. However, in practical situations, data with gold-standard labels may be available from multiple sources (domains), and such multi-source training data could provide more information for knowledge transfer. How to utilize this data to achieve better domain adaptation remains to be further explored. This work investigates multi-source UDA and proposes a new framework for medical image segmentation. First, we employ a multi-level adversarial learning scheme to adapt features at different levels between each of the source domains and the target, to improve the segmentation performance. Then, we propose a multi-model consistency loss to transfer the learned multi-source knowledge to the target domain simultaneously. Finally, we validated the proposed framework on two applications, i.e., multi-modality cardiac segmentation and cross-modality liver segmentation. The results showed that our method delivered promising performance and compared favorably to state-of-the-art approaches.
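
As an illustration of a multi-model consistency term (not necessarily the authors' exact loss), one can penalize the average pairwise discrepancy between the target-domain predictions of the per-source models; the PyTorch sketch below uses a symmetric KL divergence on softmax outputs, with hypothetical tensor shapes.

```python
import torch
import torch.nn.functional as F

def multi_model_consistency(logits_list):
    """Average pairwise symmetric KL divergence between the softmax
    predictions of several source-specific models on the same target batch.
    logits_list: list of tensors shaped (B, C, H, W)."""
    probs = [F.softmax(l, dim=1) for l in logits_list]
    logs = [F.log_softmax(l, dim=1) for l in logits_list]
    loss, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            kl_ij = F.kl_div(logs[i], probs[j], reduction="batchmean")
            kl_ji = F.kl_div(logs[j], probs[i], reduction="batchmean")
            loss = loss + 0.5 * (kl_ij + kl_ji)
            pairs += 1
    return loss / max(pairs, 1)

# Toy usage with two source models on a 2-class, 8x8 target batch.
a, b = torch.randn(1, 2, 8, 8), torch.randn(1, 2, 8, 8)
print(multi_model_consistency([a, b]))
```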


Subject(s)
Heart; Liver; Heart/diagnostic imaging; Liver/diagnostic imaging; Image Processing, Computer-Assisted
4.
IEEE Trans Med Imaging ; 42(12): 3752-3763, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37581959

ABSTRACT

Abnormal posture is a common movement disorder in the progression of Parkinson's disease (PD), and this abnormality can increase the risk of falls or even disability. The conventional assessment approach depends on the judgment of well-trained experts using canonical scales. However, this approach requires extensive clinical expertise and is highly subjective. Considering the potential of quantitative susceptibility mapping (QSM) in PD diagnosis, this study explored a QSM-based method for the automated classification of PD patients with and without postural abnormalities. Nevertheless, a major challenge is that unstable non-causal features typically lead to less reliable performance. Therefore, we propose a causality-driven graph-convolutional-network framework based on multi-instance learning, where performance stability is enhanced through the invariant prediction principle and causal interventions. Specifically, we adopt an intervention strategy that combines a non-causal intervenor with causal prediction. A stability constraint is proposed to ensure robust integrated prediction under different interventions. Moreover, an intra-class homogeneity constraint is enforced for each individually-learned causality scoring module to promote the extraction of group-level general features, and hence achieve a balance between subject-specific and group-level features. The proposed method demonstrated promising performance in extensive experiments on a real clinical dataset. Also, the features extracted by our method coincide with those reported in previous medical studies on PD posture abnormalities. In general, our work provides a clinically valuable approach for the automated, objective, and reliable diagnosis of postural abnormalities in patients with Parkinson's disease. Our source code is publicly available at https://github.com/SJTUBME-QianLab/CausalGCN-PDPA.


Asunto(s)
Enfermedad de Parkinson , Postura , Humanos , Enfermedad de Parkinson/diagnóstico por imagen
5.
Med Image Anal ; 89: 102889, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37467643

ABSTRACT

Due to the cross-domain distribution shift arising from diverse medical imaging systems, many deep learning segmentation methods fail to perform well on unseen data, which limits their real-world applicability. Recent works have shown the benefits of extracting domain-invariant representations for domain generalization. However, the interpretability of domain-invariant features remains a great challenge. To address this problem, we propose an interpretable Bayesian framework (BayeSeg) through Bayesian modeling of image and label statistics to enhance model generalizability for medical image segmentation. Specifically, we first decompose an image into a spatially-correlated variable and a spatially-variant variable, assigning hierarchical Bayesian priors to explicitly force them to model the domain-stable shape and domain-specific appearance information, respectively. Then, we model the segmentation as a locally smooth variable related only to the shape. Finally, we develop a variational Bayesian framework to infer the posterior distributions of these explainable variables. The framework is implemented with neural networks, and thus is referred to as deep Bayesian segmentation. Quantitative and qualitative experimental results on prostate segmentation and cardiac segmentation tasks have shown the effectiveness of our proposed method. Moreover, we investigated the interpretability of BayeSeg by explaining the posteriors, and analyzed certain factors that affect the generalization ability through further ablation studies. Our code is released via https://zmiclab.github.io/projects.html.


Subject(s)
Heart; Neural Networks, Computer; Male; Humans; Bayes Theorem; Pelvis; Prostate; Image Processing, Computer-Assisted
6.
Med Image Anal ; 89: 102875, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37441881

ABSTRACT

Medical images are generally acquired with a limited field-of-view (FOV), which can lead to incomplete regions of interest (ROI) and thus pose a great challenge for medical image analysis. This is particularly evident for learning-based multi-target landmark detection, where algorithms can be misled into learning primarily the variation of the background due to the varying FOV, and thus fail to detect the targets. By learning a navigation policy, instead of predicting targets directly, reinforcement learning (RL)-based methods have the potential to tackle this challenge in an efficient manner. Inspired by this, in this work we propose a multi-agent RL framework for simultaneous multi-target landmark detection. This framework aims to learn from incomplete and/or complete images to form implicit knowledge of the global structure, which is consolidated during the training stage for the detection of targets from either complete or incomplete test images. To further explicitly exploit the global structural information from incomplete images, we propose to embed a shape model into the RL process. With this prior knowledge, the proposed RL model can not only localize dozens of targets simultaneously, but also work effectively and robustly in the presence of incomplete images. We validated the applicability and efficacy of the proposed method on various multi-target detection tasks with incomplete images from clinical practice, using body dual-energy X-ray absorptiometry (DXA), cardiac MRI and head CT datasets. Results showed that our method could predict the whole set of landmarks with incomplete training images of up to 80% missing proportion (average distance error 2.29 cm on body DXA), and could detect unseen landmarks in regions with missing image information outside the FOV of target images (average distance error 6.84 mm on 3D half-head CT). Our code will be released via https://zmiclab.github.io/projects.html.
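
To make the navigation-policy idea concrete, here is a toy single-agent environment in the generic RL landmark-detection style: the state is a patch around the current position, actions move the position, and the reward is the decrease in distance to the target. This is a simplified illustration under assumed settings, not the paper's multi-agent framework or shape model.

```python
import numpy as np

class LandmarkEnv:
    """Toy single-agent landmark environment: the agent moves a point
    through a 2D image and is rewarded for getting closer to the (hidden)
    target landmark."""

    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up/down/left/right

    def __init__(self, image: np.ndarray, target: tuple, patch: int = 15):
        self.image, self.target, self.patch = image, np.array(target), patch
        self.pos = np.array(image.shape) // 2  # start at the image centre

    def observe(self):
        # Crop a (patch x patch) window around the current position.
        h = self.patch // 2
        r, c = self.pos
        return self.image[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1]

    def step(self, action: int):
        before = np.linalg.norm(self.pos - self.target)
        move = np.array(self.ACTIONS[action])
        self.pos = np.clip(self.pos + move, 0, np.array(self.image.shape) - 1)
        after = np.linalg.norm(self.pos - self.target)
        reward = before - after          # positive when moving towards the target
        done = after < 1.0               # terminate when the landmark is reached
        return self.observe(), reward, done

env = LandmarkEnv(np.zeros((64, 64)), target=(40, 20))
obs, reward, done = env.step(1)  # move one pixel down (row index + 1)
print(reward, done)
```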


Subject(s)
Algorithms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Radiography; Absorptiometry, Photon; Head
7.
IEEE Trans Med Imaging ; 42(12): 3474-3486, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37347625

ABSTRACT

Myocardial pathology segmentation (MyoPS) is critical for the risk stratification and treatment planning of myocardial infarction (MI). Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide valuable information. For instance, balanced steady-state free precession cine sequences present clear anatomical boundaries, while late gadolinium enhancement and T2-weighted CMR sequences visualize myocardial scar and edema of MI, respectively. Existing methods usually fuse anatomical and pathological information from different CMR sequences for MyoPS, but assume that these images have been spatially aligned. However, MS-CMR images are usually unaligned due to respiratory motion in clinical practice, which poses additional challenges for MyoPS. This work presents an automatic MyoPS framework for unaligned MS-CMR images. Specifically, we design a combined computing model for simultaneous image registration and information fusion, which aggregates multi-sequence features into a common space to extract anatomical structures (i.e., the myocardium). Consequently, we can highlight the informative regions in the common space via the extracted myocardium to improve MyoPS performance, considering the spatial relationship between myocardial pathologies and the myocardium. Experiments on a private MS-CMR dataset and a public dataset from the MYOPS2020 challenge show that our framework achieves promising performance for fully automatic MyoPS.


Subject(s)
Contrast Media; Myocardial Infarction; Humans; Magnetic Resonance Imaging, Cine/methods; Gadolinium; Myocardium/pathology; Myocardial Infarction/diagnostic imaging; Magnetic Resonance Imaging/methods; Predictive Value of Tests
8.
Med Image Anal ; 88: 102869, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37384950

ABSTRACT

Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It allows a combination of complementary anatomical, morphological and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully-automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, the computing methods, the validation strategies, the related clinical workflows and future perspectives. For the computing methodologies, we focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have the potential for wide applicability in the clinic, such as trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. There is also work to do in defining how the well-developed techniques fit into clinical workflows and how much additional and relevant information they introduce. These problems are likely to remain an active field of research, and the questions raised here remain to be answered in the future.


Subject(s)
Cardiovascular Diseases; Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
9.
Med Image Anal ; 87: 102808, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37087838

ABSTRACT

Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from myocardial infarction, and classification of pathology on the myocardium is the key to this assessment. This work defines a new task of medical image analysis, i.e., to perform myocardial pathology segmentation (MyoPS) combining three-sequence cardiac magnetic resonance (CMR) images, which was first proposed in the MyoPS challenge, held in conjunction with MICCAI 2020. Note that in this paper, MyoPS refers to both myocardial pathology segmentation and the challenge. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine the complementary information from the three CMR sequences for pathology segmentation. In this article, we provide details of the challenge, survey the works of fifteen participants, and interpret their methods according to five aspects, i.e., preprocessing, data augmentation, learning strategy, model architecture and post-processing. In addition, we analyze the results with respect to different factors, in order to examine the key obstacles and explore the potential of solutions, as well as to provide a benchmark for future research. The average Dice scores of submitted algorithms were 0.614±0.231 and 0.644±0.153 for myocardial scars and edema, respectively. We conclude that while promising results have been reported, the research is still in the early stage, and more in-depth exploration is needed before a successful application to the clinic. The MyoPS data and evaluation tool remain publicly available upon registration via the challenge homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/).
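
For reference, the reported scores use the Dice similarity coefficient; a minimal implementation over binary masks is sketched below (an illustration only, not the challenge's official evaluation tool).

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping scar masks.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(dice(a, b))  # 2*9 / (16+16) = 0.5625
```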


Subject(s)
Benchmarking; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Heart/diagnostic imaging; Myocardium/pathology; Magnetic Resonance Imaging/methods
10.
IEEE J Biomed Health Inform ; 27(7): 3302-3313, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37067963

ABSTRACT

In recent years, several deep learning models have been proposed to accurately quantify and diagnose cardiac pathologies. These automated tools rely heavily on the accurate segmentation of cardiac structures in MRI images. However, segmentation of the right ventricle is challenging due to its highly complex shape and ill-defined borders. Hence, there is a need for new methods to handle this structure's geometrical and textural complexities, notably in the presence of pathologies such as Dilated Right Ventricle, Tricuspid Regurgitation, Arrhythmogenesis, Tetralogy of Fallot, and Inter-atrial Communication. The last MICCAI challenge on right ventricle segmentation was held in 2012 and included only 48 cases from a single clinical center. As part of the 12th Workshop on Statistical Atlases and Computational Models of the Heart (STACOM 2021), the M&Ms-2 challenge was organized to promote the interest of the research community in right ventricle segmentation in multi-disease, multi-view, and multi-center cardiac MRI. Three hundred sixty CMR cases, including short-axis and long-axis 4-chamber views, were collected from three Spanish hospitals using nine different scanners from three different vendors, and included a diverse set of right and left ventricle pathologies. The solutions provided by the participants show that nnU-Net achieved the best results overall. However, multi-view approaches were able to capture additional information, highlighting the need to integrate multiple cardiac diseases, views, scanners, and acquisition protocols to produce reliable automatic cardiac segmentation algorithms.


Subject(s)
Deep Learning; Heart Ventricles; Humans; Heart Ventricles/diagnostic imaging; Magnetic Resonance Imaging/methods; Algorithms; Heart Atria
11.
IEEE Trans Pattern Anal Mach Intell ; 45(2): 1405-1423, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35349433

ABSTRACT

Modeling the statistics of image priors is useful for image super-resolution, but it has received little attention in the large body of deep learning-based methods. In this work, we propose a Bayesian image restoration framework, where natural image statistics are modeled with the combination of smoothness and sparsity priors. Concretely, we first consider an ideal image as the sum of a smoothness component and a sparsity residual, and model real image degradation including blurring, downscaling, and noise corruption. Then, we develop a variational Bayesian approach to infer their posteriors. Finally, we implement the variational approach for single image super-resolution (SISR) using deep neural networks, and propose an unsupervised training strategy. The experiments on three image restoration tasks, i.e., ideal SISR, realistic SISR, and real-world SISR, demonstrate that our method has superior model generalizability against varying noise levels and degradation kernels and is effective in unsupervised SISR. The code and resulting models are released via https://zmiclab.github.io/projects.html.
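
The degradation model described above (blurring, downscaling, and noise corruption) can be summarized as y = (x * k)↓s + n; the NumPy sketch below implements this forward model with an assumed Gaussian blur kernel, scale factor and noise level chosen purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x: np.ndarray, sigma_blur: float = 1.5, scale: int = 2,
            sigma_noise: float = 0.01, seed: int = 0) -> np.ndarray:
    """Forward degradation y = downscale(blur(x)) + noise for a 2D image
    in [0, 1]; kernel width, scale and noise level are illustrative."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(x, sigma=sigma_blur)     # x * k
    low_res = blurred[::scale, ::scale]                # downscaling by decimation
    return low_res + rng.normal(0.0, sigma_noise, low_res.shape)  # + n

hr = np.random.default_rng(1).random((64, 64))  # stand-in high-resolution image
lr = degrade(hr)
print(hr.shape, "->", lr.shape)                 # (64, 64) -> (32, 32)
```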

12.
IEEE Trans Pattern Anal Mach Intell ; 45(5): 6021-6036, 2023 May.
Article in English | MEDLINE | ID: mdl-36251907

ABSTRACT

Supervised segmentation can be costly, particularly in applications of biomedical image analysis where large-scale manual annotations from experts are generally too expensive to obtain. Semi-supervised segmentation, able to learn from both labeled and unlabeled images, could be an efficient and effective alternative for such scenarios. In this work, we propose a new formulation based on risk minimization, which makes full use of the unlabeled images. Unlike most existing approaches, which explicitly minimize only the prediction risk on the labeled training images, the new formulation also considers the risk on unlabeled images. In particular, this is achieved via an unbiased estimator, based on which we develop a general framework for semi-supervised image segmentation. We validate this framework on three medical image segmentation tasks, namely cardiac segmentation on ACDC2017, optic cup and disc segmentation on the REFUGE dataset, and 3D whole heart segmentation on the MM-WHS dataset. Results show that the proposed estimator is effective, and the segmentation method achieves superior performance and demonstrates great potential compared to other state-of-the-art approaches. Our code and data will be released via https://zmiclab.github.io/projects.html, once the manuscript is accepted for publication.
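
To illustrate the structure of such a combined risk (a labeled term plus a term computed on unlabeled images), the sketch below uses a simple prediction-entropy surrogate for the unlabeled part; the paper instead derives an unbiased risk estimator, so this is a schematic stand-in, not the proposed method.

```python
import torch
import torch.nn.functional as F

def combined_risk(logits_labeled, labels, logits_unlabeled, weight=0.5):
    """Total risk = supervised risk on labeled images + a term computed on
    unlabeled images. Here the unlabeled term is a simple prediction-entropy
    surrogate; the paper instead derives an unbiased risk estimator."""
    sup = F.cross_entropy(logits_labeled, labels)                 # labeled risk
    p = F.softmax(logits_unlabeled, dim=1)
    ent = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1).mean()   # unlabeled term
    return sup + weight * ent

# Toy usage: 2-class segmentation logits on 8x8 patches.
ll = torch.randn(2, 2, 8, 8)
y = torch.randint(0, 2, (2, 8, 8))
lu = torch.randn(4, 2, 8, 8)
print(combined_risk(ll, y, lu))
```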


Subject(s)
Algorithms; Heart; Heart/diagnostic imaging; Image Processing, Computer-Assisted
13.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 9206-9224, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36445992

ABSTRACT

This article presents a generic probabilistic framework for estimating the statistical dependency and finding the anatomical correspondences among an arbitrary number of medical images. The method builds on a novel formulation of the N-dimensional joint intensity distribution by representing the common anatomy as latent variables and estimating the appearance model with nonparametric estimators. Through connection to maximum likelihood and the expectation-maximization algorithm, an information-theoretic metric called X-metric and a co-registration algorithm named X-CoReg are induced, allowing groupwise registration of the N observed images with computational complexity of O(N). Moreover, the method naturally extends to a weakly-supervised scenario where anatomical labels of certain images are provided. This leads to a combined-computing framework implemented with deep learning, which performs registration and segmentation simultaneously and collaboratively in an end-to-end fashion. Extensive experiments were conducted to demonstrate the versatility and applicability of our model, including multimodal groupwise registration, motion correction for dynamic contrast-enhanced magnetic resonance images, and deep combined computing for multimodal medical images. Results show the superiority of our method in various applications in terms of both accuracy and efficiency, highlighting the advantage of the proposed representation of the imaging process. Code is available from https://zmiclab.github.io/projects.html.
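
For intuition, the classical two-image special case of such information-theoretic matching is mutual information estimated from a joint intensity histogram, sketched below; the X-metric generalizes this idea to N images through a latent common-anatomy model, which the sketch does not implement.

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Pairwise mutual information from a joint intensity histogram.
    This is the classical two-image quantity; the X-metric generalises the
    idea to N images via a latent common-anatomy model."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img))                    # high: image vs itself
print(mutual_information(img, rng.random((64, 64))))   # much lower: independent images
```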

14.
Med Image Anal ; 84: 102694, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36495601

ABSTRACT

Myocardial pathology segmentation (MyoPS) can be a prerequisite for the accurate diagnosis and treatment planning of myocardial infarction. However, achieving this segmentation is challenging, mainly due to the inadequate and indistinct information from an image. In this work, we develop an end-to-end deep neural network, referred to as MyoPS-Net, to flexibly combine five-sequence cardiac magnetic resonance (CMR) images for MyoPS. To extract precise and adequate information, we design an effective yet flexible architecture to extract and fuse cross-modal features. This architecture can tackle different numbers of CMR images and complex combinations of modalities, with output branches targeting specific pathologies. To impose anatomical knowledge on the segmentation results, we first propose a module to regularize myocardium consistency and localize the pathologies, and then introduce an inclusiveness loss to utilize relations between myocardial scars and edema. We evaluated the proposed MyoPS-Net on two datasets, i.e., a private one consisting of 50 paired multi-sequence CMR images and a public one from the MICCAI 2020 MyoPS Challenge. Experimental results showed that MyoPS-Net could achieve state-of-the-art performance in various scenarios. Note that in clinical practice, subjects may not have the full set of sequences, e.g., LGE CMR or mapping CMR scans may be missing. We therefore conducted extensive experiments to investigate the performance of the proposed method in dealing with such complex combinations of different CMR sequences. Results demonstrated the superiority and generalizability of MyoPS-Net and, more importantly, indicated its practical clinical applicability. The code has been released via https://github.com/QJYBall/MyoPS-Net.
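
As a rough illustration of how an inclusiveness-style constraint between scar and edema could look, the sketch below penalizes voxels where the predicted scar probability exceeds the predicted edema probability; this is an assumed form for illustration, not the exact loss of MyoPS-Net.

```python
import torch
import torch.nn.functional as F

def inclusiveness_penalty(scar_logits: torch.Tensor, edema_logits: torch.Tensor):
    """Penalise voxels where the predicted scar probability exceeds the
    predicted edema probability, encoding the prior that scar lies within
    edema. An illustrative form, not the paper's exact loss."""
    scar = torch.sigmoid(scar_logits)
    edema = torch.sigmoid(edema_logits)
    return F.relu(scar - edema).mean()

# Toy usage on a 2D slice of pathology logits.
scar, edema = torch.randn(1, 1, 16, 16), torch.randn(1, 1, 16, 16)
print(inclusiveness_penalty(scar, edema))
```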


Subject(s)
Heart; Myocardial Infarction; Humans; Myocardium/pathology; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
15.
IEEE Trans Med Imaging ; 42(7): 2118-2129, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36350867

ABSTRACT

Large training datasets are important for deep learning-based methods. For medical image segmentation, however, it can be difficult to obtain a large number of labeled training images from a single center. Distributed learning, such as swarm learning, has the potential to use multi-center data without breaching data privacy. However, data distributions across centers can vary considerably due to diverse imaging protocols and vendors (known as feature skew). Also, the regions of interest to be segmented can differ, leading to inhomogeneous label distributions (referred to as label skew). With such non-independently and identically distributed (Non-IID) data, distributed learning can result in degraded models. In this work, we propose a novel swarm learning approach, which assembles local knowledge from each center while at the same time overcoming the forgetting of global knowledge during local training. Specifically, the approach first leverages a label-skew-aware loss to preserve the global label knowledge, and then aligns local feature distributions to consolidate global knowledge against local feature skew. We validated our method in three Non-IID scenarios using four public datasets, namely the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) dataset, the Federated Tumor Segmentation (FeTS) dataset, the Multi-Modality Whole Heart Segmentation (MMWHS) dataset and the Multi-Site Prostate T2-weighted MRI segmentation (MSProsMRI) dataset. Results show that our method achieves superior performance over existing methods. Code will be released via https://zmiclab.github.io/projects.html once the paper is accepted.
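
For context, the sketch below shows only the generic merge step shared by swarm/federated schemes, i.e., element-wise averaging of the centers' model parameters; the paper's actual contributions (the label-skew-aware loss and the feature alignment) act during local training and are not implemented here.

```python
import copy
import torch
import torch.nn as nn

def average_state_dicts(models):
    """Merge step shared by swarm/federated schemes: element-wise average of
    the local models' parameters."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        stacked = torch.stack([m.state_dict()[key].float() for m in models])
        avg[key] = stacked.mean(dim=0)
    return avg

# Toy usage: three "centres" training the same small segmentation head.
centres = [nn.Conv2d(3, 2, kernel_size=1) for _ in range(3)]
merged = average_state_dicts(centres)
for m in centres:
    m.load_state_dict(merged)   # every centre continues from the merged weights
```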


Subject(s)
Algorithms; Magnetic Resonance Imaging; Male; Humans; Magnetic Resonance Imaging/methods; Heart/diagnostic imaging; Prostate/pathology; Image Processing, Computer-Assisted/methods
16.
Med Image Anal ; 81: 102528, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35834896

ABSTRACT

Accurate computing, analysis and modeling of the ventricles and myocardium from medical images are important, especially in the diagnosis and treatment management of patients suffering from myocardial infarction (MI). Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) provides an important protocol to visualize MI. However, compared with the other sequences, LGE CMR images with gold-standard labels are particularly limited. This paper presents selected results from the Multi-Sequence Cardiac MR (MS-CMR) Segmentation challenge, held in conjunction with MICCAI 2019. The challenge offered a dataset of paired MS-CMR images, including auxiliary CMR sequences as well as LGE CMR, from 45 patients with cardiomyopathy. It aimed to develop new algorithms, as well as to benchmark existing ones, for LGE CMR segmentation focusing on the myocardial wall of the left ventricle and the blood cavities of the two ventricles. In addition, the paired MS-CMR images could enable algorithms to combine the complementary information from the other sequences for the ventricle segmentation of LGE CMR. Nine representative works were selected for evaluation and comparison, among which three methods are unsupervised domain adaptation (UDA) methods and the other six are supervised. The results showed that the average performance of the nine methods was comparable to the inter-observer variation. In particular, the top-ranking algorithms from both the supervised and UDA methods could generate reliable and robust segmentation results. The success of these methods was mainly attributed to the inclusion of the auxiliary sequences from the MS-CMR images, which provide important label information for the training of deep neural networks. The challenge continues as an ongoing resource, and the gold-standard segmentations as well as the MS-CMR images of both the training and test data are available upon registration via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mscmrseg/).


Subject(s)
Gadolinium; Myocardial Infarction; Benchmarking; Contrast Media; Heart; Humans; Magnetic Resonance Imaging/methods; Myocardial Infarction/diagnostic imaging; Myocardium/pathology
17.
Med Image Anal ; 79: 102428, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35500498

ABSTRACT

A key factor in assessing the state of the heart after myocardial infarction (MI) is to measure whether the myocardial segments are viable after reperfusion or revascularization therapy. Delayed enhancement MRI (DE-MRI), which is performed 10 min after injection of the contrast agent, provides high contrast between viable and nonviable myocardium and is therefore a method of choice to evaluate the extent of MI. This paper presents the results of the EMIDEC challenge, which focused on the automatic assessment of myocardial status. The challenge's main objectives were twofold: first, to evaluate whether deep learning methods can distinguish between non-infarct and pathological exams, i.e., exams with or without a hyperenhanced area; and second, to automatically calculate the extent of myocardial infarction. The publicly available database consists of 150 exams divided into 50 cases without any hyperenhanced area after injection of a contrast agent and 100 cases with myocardial infarction (and thus with a hyperenhanced area on DE-MRI), regardless of their inclusion via the cardiac emergency department. Along with MRI, clinical characteristics are also provided. The results obtained from the submitted works show that the automatic classification of an exam is an achievable task (the best method providing an accuracy of 0.92), and that automatic segmentation of the myocardium is possible. However, the segmentation of the diseased area needs to be improved, mainly due to the small size of these areas and the lack of contrast with the surrounding structures.


Subject(s)
Deep Learning; Myocardial Infarction; Contrast Media; Humans; Magnetic Resonance Imaging/methods; Myocardial Infarction/diagnostic imaging; Myocardium/pathology
18.
J Genet Genomics ; 49(10): 934-942, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35259542

ABSTRACT

Facial and cranial variation represent a multidimensional set of highly correlated and heritable phenotypes. Little is known about the genetic basis explaining this correlation. We develop a software package, ALoSFL, for the simultaneous localization of facial and cranial landmarks from head computed tomography (CT) images, apply it in the analysis of head CT images of 777 Han Chinese women, and obtain a set of phenotypes representing variation in the face, skull and facial soft tissue thickness (FSTT). Association analysis of 301 single nucleotide polymorphisms (SNPs) from 191 distinct genomic loci previously associated with facial variation reveals an unexpectedly large number of loci showing significant associations (P < 1e-3) with cranial phenotypes relative to the null expectation (O/E = 3.39), suggesting that facial and cranial phenotypes share a substantial proportion of genetic components. Adding FSTT to a SNP-only model has a large impact on explaining facial variance. A gene ontology analysis reveals that bone morphogenesis and osteoblast differentiation likely underlie our significant cranial findings. Overall, this study simultaneously investigates the genetic effects on both facial and cranial variation in the same sample, supporting the view that facial variation is a composite phenotype of cranial variation and FSTT.


Subject(s)
Face; Forensic Anthropology; Female; Animals; Face/anatomy & histology; Anatomic Landmarks; Skull/diagnostic imaging; Skull/anatomy & histology; Phenotype
19.
IEEE J Biomed Health Inform ; 26(7): 3104-3115, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35130178

ABSTRACT

Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation. Generally, MAS methods register multiple atlases, i.e., medical images with corresponding labels, to a target image, and the transformed atlas labels can be combined to generate the target segmentation via label fusion schemes. Many conventional MAS methods employ atlases from the same modality as the target image. However, atlases of the same modality may be limited in number or even missing in many clinical applications. Besides, conventional MAS methods suffer from the computational burden of the registration and label fusion procedures. In this work, we design a novel cross-modality MAS framework, which uses atlases available in one modality to segment a target image from another modality. To boost the computational efficiency of the framework, both the image registration and the label fusion are achieved by well-designed deep neural networks. For the atlas-to-target image registration, we propose a bi-directional registration network (BiRegNet), which can efficiently align images from different modalities. For the label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image. SimNet can learn multi-scale information for similarity estimation to improve the performance of label fusion. The proposed framework was evaluated on the left ventricle and liver segmentation tasks of the MM-WHS and CHAOS datasets, respectively. Results have shown that the framework is effective for cross-modality MAS in terms of both registration and label fusion. The code is available at https://github.com/NanYoMy/cmmas.
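
To illustrate the label-fusion step, the sketch below combines warped atlas label maps using weights derived from per-atlas similarity scores (as a SimNet-style network would predict); the shapes, softmax weighting and toy data are assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def fuse_labels(warped_labels: torch.Tensor, similarities: torch.Tensor):
    """Weighted label fusion: warped_labels has shape (N, C, H, W) for N
    atlases with C-class one-hot labels already registered to the target;
    similarities holds one score per atlas. Returns the fused per-class
    probability map of shape (C, H, W)."""
    weights = F.softmax(similarities, dim=0)           # one weight per atlas
    return (weights.view(-1, 1, 1, 1) * warped_labels).sum(dim=0)

# Toy usage: three atlases, two classes, 8x8 target grid.
labels = F.one_hot(torch.randint(0, 2, (3, 8, 8)), 2).permute(0, 3, 1, 2).float()
scores = torch.tensor([0.9, 0.2, 0.5])
fused = fuse_labels(labels, scores)
segmentation = fused.argmax(dim=0)                     # final target segmentation
print(segmentation.shape)                              # torch.Size([8, 8])
```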


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Heart Ventricles; Humans; Magnetic Resonance Imaging/methods
20.
Med Image Anal ; 77: 102360, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35124370

ABSTRACT

Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars. The position and extent of LA scars provide important information on the pathophysiology and progression of atrial fibrillation (AF). Hence, LA LGE MRI computing and analysis are essential for computer-assisted diagnosis and treatment stratification of AF patients. Since manual delineation can be time-consuming and subject to intra- and inter-expert variability, automating this computing is highly desirable; nevertheless, it remains challenging and under-researched. This paper aims to provide a systematic review of computing methods for LA cavity, wall, scar, and ablation gap segmentation and quantification from LGE MRI, and of the related literature for AF studies. Specifically, we first summarize AF-related imaging techniques, particularly LGE MRI. Then, we review the methodologies of the four computing tasks in detail and summarize the validation strategies applied in each task, as well as state-of-the-art results on public datasets. Finally, possible future developments are outlined, with a brief survey of the potential clinical applications of the aforementioned methods. The review indicates that research into this topic is still in its early stages. Although several methods have been proposed, especially for LA cavity segmentation, there is still large scope for further algorithmic development due to performance issues related to the high variability of enhancement appearance and differences in image acquisition.


Subject(s)
Atrial Fibrillation; Gadolinium; Cicatrix; Contrast Media; Heart Atria/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods