Results 1 - 5 of 5
1.
IEEE Trans Med Imaging ; 43(4): 1640-1651, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38133966

ABSTRACT

Unsupervised domain adaptation (UDA) aims to mitigate the performance drop of models tested on a target domain, caused by the domain shift between the source and target domains. Most UDA segmentation methods focus on the scenario of a single source domain. In practical situations, however, data with gold-standard labels may be available from multiple sources (domains), and such multi-source training data can provide more information for knowledge transfer. How to utilize them to achieve better domain adaptation remains to be explored. This work investigates multi-source UDA and proposes a new framework for medical image segmentation. First, we employ a multi-level adversarial learning scheme to adapt features at different levels between each of the source domains and the target, to improve the segmentation performance. Then, we propose a multi-model consistency loss to transfer the learned multi-source knowledge to the target domain simultaneously. Finally, we validated the proposed framework on two applications, i.e., multi-modality cardiac segmentation and cross-modality liver segmentation. The results showed that our method delivered promising performance and compared favorably to state-of-the-art approaches.
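As a minimal sketch of the kind of multi-model consistency loss described above, one common formulation penalizes each source-specific model's prediction on an unlabeled target image for deviating from the ensemble mean. The function name and the exact form (mean-squared error against the mean probability) are illustrative assumptions, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

def multi_model_consistency_loss(logits_list):
    """Consistency loss across source-specific models (illustrative sketch).

    logits_list: list of (B, C, H, W) segmentation logits, one per
    source-specific model, all computed on the same target-domain batch.
    Each model's softmax prediction is pulled toward the ensemble mean.
    """
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    mean_prob = torch.stack(probs).mean(dim=0)
    # Average MSE of each model's prediction against the ensemble mean.
    return sum(F.mse_loss(p, mean_prob) for p in probs) / len(probs)
```

When all models agree exactly, the loss is zero; disagreement on target images is penalized, which transfers knowledge across the source-specific models.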


Subject(s)
Heart , Liver , Heart/diagnostic imaging , Liver/diagnostic imaging , Image Processing, Computer-Assisted
2.
Med Image Anal ; 88: 102869, 2023 08.
Article in English | MEDLINE | ID: mdl-37384950

ABSTRACT

Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It allows a combination of complementary anatomical, morphological and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully-automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, the computing methods, the validation strategies, the related clinical workflows and future perspectives. For the computing methodologies, we focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have potentially wide applicability in the clinic, such as trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. There is also work to do in defining how the well-developed techniques fit into clinical workflows and how much additional and relevant information they introduce. These problems are likely to remain an active field of research, with these questions to be answered in the future.


Subject(s)
Cardiovascular Diseases , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
3.
IEEE Trans Med Imaging ; 42(12): 3474-3486, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37347625

ABSTRACT

Myocardial pathology segmentation (MyoPS) is critical for the risk stratification and treatment planning of myocardial infarction (MI). Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide valuable information. For instance, balanced steady-state free precession cine sequences present clear anatomical boundaries, while late gadolinium enhancement and T2-weighted CMR sequences visualize myocardial scar and edema of MI, respectively. Existing methods usually fuse anatomical and pathological information from different CMR sequences for MyoPS, but assume that these images have been spatially aligned. However, MS-CMR images are usually unaligned due to respiratory motion in clinical practice, which poses additional challenges for MyoPS. This work presents an automatic MyoPS framework for unaligned MS-CMR images. Specifically, we design a combined computing model for simultaneous image registration and information fusion, which aggregates multi-sequence features into a common space to extract anatomical structures (i.e., myocardium). Consequently, we can highlight the informative regions in the common space via the extracted myocardium to improve MyoPS performance, considering the spatial relationship between myocardial pathologies and myocardium. Experiments on a private MS-CMR dataset and a public dataset from the MYOPS2020 challenge show that our framework could achieve promising performance for fully automatic MyoPS.
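A minimal sketch of aggregating multi-sequence features into a common space, as the framework above describes: each sequence's feature map is warped by a displacement field (which in the paper would come from a registration sub-network; here the flows are given as inputs), and the aligned features are fused by averaging. All names and the averaging fusion are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def warp_and_fuse(features, flows):
    """Warp each sequence's features into a common space, then fuse.

    features: list of (B, C, H, W) feature maps, one per CMR sequence.
    flows:    list of (B, 2, H, W) displacement fields in normalized
              [-1, 1] grid coordinates, one per sequence.
    """
    warped = []
    for feat, flow in zip(features, flows):
        B, _, H, W = feat.shape
        # Identity sampling grid in normalized coordinates, (B, H, W, 2).
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)
        grid = base + flow.permute(0, 2, 3, 1)  # displace the grid
        warped.append(F.grid_sample(feat, grid, align_corners=True))
    # Fuse the spatially aligned features by simple averaging.
    return torch.stack(warped).mean(dim=0)
```

A zero flow corresponds to the identity warp, so perfectly pre-aligned sequences reduce to plain feature averaging.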


Subject(s)
Contrast Media , Myocardial Infarction , Humans , Magnetic Resonance Imaging, Cine/methods , Gadolinium , Myocardium/pathology , Myocardial Infarction/diagnostic imaging , Magnetic Resonance Imaging/methods , Predictive Value of Tests
4.
IEEE J Biomed Health Inform ; 26(7): 3104-3115, 2022 07.
Article in English | MEDLINE | ID: mdl-35130178

ABSTRACT

Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation. Generally, MAS methods register multiple atlases, i.e., medical images with corresponding labels, to a target image; the transformed atlas labels can then be combined to generate the target segmentation via label fusion schemes. Many conventional MAS methods employ atlases from the same modality as the target image. However, in many clinical applications the number of atlases of that modality may be limited, or such atlases may be missing entirely. Moreover, conventional MAS methods suffer from the computational burden of the registration and label fusion procedures. In this work, we design a novel cross-modality MAS framework, which uses available atlases from one modality to segment a target image from another modality. To boost the computational efficiency of the framework, both the image registration and the label fusion are achieved by well-designed deep neural networks. For the atlas-to-target image registration, we propose a bi-directional registration network (BiRegNet), which can efficiently align images from different modalities. For the label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image. SimNet can learn multi-scale information for similarity estimation to improve the performance of label fusion. The proposed framework was evaluated on the left ventricle and liver segmentation tasks on the MM-WHS and CHAOS datasets, respectively. Results show that the framework is effective for cross-modality MAS in terms of both registration and label fusion. The code is available at https://github.com/NanYoMy/cmmas.
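The label fusion step above can be sketched as a similarity-weighted vote: each warped atlas label map contributes in proportion to its estimated similarity to the target. In the paper these weights come from SimNet; in this illustrative sketch they are simply given as inputs, and a softmax plus weighted vote is an assumed fusion rule:

```python
import numpy as np

def fuse_labels(atlas_labels, similarities):
    """Similarity-weighted label fusion (illustrative sketch).

    atlas_labels: (n_atlases, n_classes, H, W) one-hot label maps,
                  already warped to the target image space.
    similarities: (n_atlases,) similarity scores of each atlas to the target.
    Returns the fused per-pixel class map of shape (H, W).
    """
    # Softmax over atlases turns similarity scores into fusion weights.
    weights = np.exp(similarities) / np.exp(similarities).sum()
    # Weighted sum over the atlas axis -> (n_classes, H, W).
    fused = np.tensordot(weights, atlas_labels, axes=1)
    return fused.argmax(axis=0)
```

More similar atlases thus dominate the vote, while dissimilar ones are softly down-weighted rather than discarded.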


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Heart Ventricles , Humans , Magnetic Resonance Imaging/methods
5.
Comput Methods Programs Biomed ; 210: 106363, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34478913

ABSTRACT

BACKGROUND AND OBJECTIVE: Computer-aided diagnosis (CAD) systems promote accurate diagnosis and reduce the burden on radiologists. A CAD system for lung cancer diagnosis includes nodule candidate detection and nodule malignancy evaluation. Recently, deep learning-based pulmonary nodule detection has reached satisfactory performance, ready for clinical application. However, deep learning-based nodule malignancy evaluation depends on heuristic inference from the low-dose computed tomography (LDCT) volume to a malignancy probability, and lacks clinical cognition. METHODS: In this paper, we propose a joint radiology analysis and malignancy evaluation network called R2MNet to evaluate pulmonary nodule malignancy via the analysis of radiological characteristics. Radiological features are extracted as channel descriptors to highlight specific regions of the input volume that are critical for nodule malignancy evaluation. In addition, for model explanation, we propose channel-dependent activation mapping (CDAM) to visualize features and shed light on the decision process of deep neural networks (DNNs). RESULTS: Experimental results on the lung image database consortium image collection (LIDC-IDRI) dataset demonstrate that the proposed method achieved an area under the curve (AUC) of 96.27% and 97.52% on nodule radiology analysis and nodule malignancy evaluation, respectively. In addition, explanations via CDAM features showed that the shape and density of nodule regions are two critical factors that influence a nodule being inferred as malignant. This process conforms to the diagnostic cognition of experienced radiologists. CONCLUSION: The network inference process conforms to the diagnostic procedure of radiologists and increases the confidence of the evaluation results by incorporating radiology analysis into nodule malignancy evaluation. Moreover, model interpretation with CDAM features sheds light on the regions DNNs focus on when estimating nodule malignancy probabilities.
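The idea of using radiological features as channel descriptors can be sketched in a squeeze-and-excitation style: a radiology feature vector is mapped to per-channel gates that re-weight a CT feature volume, emphasizing channels relevant to malignancy. The class name, layer sizes, and the single-layer gating network are illustrative assumptions, not the R2MNet architecture:

```python
import torch
import torch.nn as nn

class RadiologyChannelGate(nn.Module):
    """Gate CT feature channels with radiological features (illustrative sketch)."""

    def __init__(self, n_radiology_feats, n_channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_radiology_feats, n_channels),
            nn.Sigmoid(),  # per-channel gate in (0, 1)
        )

    def forward(self, volume_feats, radiology_feats):
        # volume_feats: (B, C, D, H, W) CT feature volume
        # radiology_feats: (B, F) radiological characteristic vector
        gate = self.fc(radiology_feats)  # (B, C) channel descriptors
        # Broadcast the gate over the spatial dimensions.
        return volume_feats * gate[:, :, None, None, None]
```

Since the gates lie in (0, 1), this mechanism can only attenuate channels, steering the network's attention toward regions the radiological features deem critical.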


Subject(s)
Lung Neoplasms , Radiology , Solitary Pulmonary Nodule , Computers , Humans , Lung , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted , Solitary Pulmonary Nodule/diagnostic imaging