Results 1 - 20 of 27
1.
Phys Med Biol ; 69(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38759677

ABSTRACT

Objective. Deep learning algorithms have demonstrated impressive performance by leveraging large labeled datasets. However, acquiring pixel-level annotations for medical image analysis, especially for segmentation tasks, is costly and time-consuming, posing challenges for supervised learning techniques. Existing semi-supervised methods tend to underutilize the representations of unlabeled data and to handle labeled and unlabeled data separately, neglecting their interdependencies. Approach. To address this issue, we introduce the Data-Augmented Attention-Decoupled Contrastive model (DADC). This model incorporates an attention decoupling module and uses contrastive learning to effectively distinguish foreground from background, significantly improving segmentation accuracy. Our approach integrates an augmentation technique that merges information from both labeled and unlabeled data, notably boosting network performance, especially in scenarios with limited labeled data. Main results. We conducted comprehensive experiments on the automated breast ultrasound (ABUS) dataset, and the results demonstrate that DADC outperforms existing methods in segmentation performance.
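The foreground/background contrastive idea can be illustrated with a minimal InfoNCE-style loss. This is a generic sketch, not the DADC implementation; the temperature `tau` and the embedding layout are illustrative assumptions:

```python
import math

def info_nce(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style contrastive loss: pull the anchor embedding toward
    same-class (e.g. foreground) embeddings and push it away from
    opposite-class (background) embeddings."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    pos = sum(math.exp(cos(anchor, p) / tau) for p in positives)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))  # near zero when positives dominate
```

A well-separated anchor yields a near-zero loss, while an anchor that sits closer to the background set is penalized heavily.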


Subject(s)
Image Processing, Computer-Assisted , Supervised Machine Learning , Image Processing, Computer-Assisted/methods , Humans , Ultrasonography, Mammary/methods , Deep Learning
2.
Phys Med Biol ; 69(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38759673

ABSTRACT

Accurate segmentation of tumor regions in automated breast ultrasound (ABUS) images is of paramount importance in computer-aided diagnosis systems. However, the inherent diversity of tumors and imaging interference pose great challenges to ABUS tumor segmentation. In this paper, we propose a global and local feature interaction model combined with graph fusion (GLGM) for 3D ABUS tumor segmentation. In GLGM, we construct a dual-branch encoder-decoder in which both local and global features can be extracted. In addition, a global and local feature fusion module is designed, which employs the deepest semantic interaction to facilitate information exchange between local and global features. To improve segmentation performance for small tumors, a graph convolution-based shallow feature fusion module is also designed; it exploits shallow features to enhance the feature expression of small tumors in both the local and global domains. The proposed method is evaluated on a private ABUS dataset and a public ABUS dataset. In the private dataset, small tumors (volume smaller than 1 cm³) account for over 50% of the cases. Experimental results show that the proposed GLGM model outperforms several state-of-the-art segmentation models in 3D ABUS tumor segmentation, particularly in segmenting small tumors.
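The graph convolution used for shallow feature fusion boils down to neighbourhood averaging followed by a linear projection. A minimal sketch of one propagation step over plain Python lists; the mean-degree normalization here is a simplifying assumption, not GLGM's exact module:

```python
def graph_conv(adj, feats, weight):
    """One graph-convolution step: add self-loops, average each node's
    features with its neighbours', then project with a weight matrix."""
    n = len(adj)
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # aggregate: degree-normalized neighbourhood mean
    agg = [[sum(a[i][k] * feats[k][j] for k in range(n)) / deg[i]
            for j in range(len(feats[0]))] for i in range(n)]
    # project: agg @ weight
    return [[sum(agg[i][k] * weight[k][j] for k in range(len(weight)))
             for j in range(len(weight[0]))] for i in range(n)]
```

With an identity projection, two connected nodes simply blend their features, which is exactly the smoothing effect that propagates small-tumor evidence between neighbouring shallow-feature nodes.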


Subject(s)
Breast Neoplasms , Image Processing, Computer-Assisted , Ultrasonography, Mammary , Humans , Breast Neoplasms/diagnostic imaging , Ultrasonography, Mammary/methods , Image Processing, Computer-Assisted/methods , Automation , Imaging, Three-Dimensional/methods
3.
Phys Med Biol ; 69(1)2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38052091

ABSTRACT

Objective. In recent years, deep learning-based methods have become the mainstream for medical image segmentation, and accurate segmentation of automated breast ultrasound (ABUS) tumors plays an essential role in computer-aided diagnosis. However, existing deep learning models typically require a large number of computations and parameters. Approach. To address this problem, we propose a novel knowledge distillation method for ABUS tumor segmentation. Tumor and non-tumor regions from different cases tend to have similar representations in the feature space. Based on this, we propose to decouple features into positive (tumor) and negative (non-tumor) pairs and design a decoupled contrastive learning method: a contrastive loss forces the student network to mimic the tumor and non-tumor features of the teacher network. In addition, we design a ranking loss based on distance metrics in the feature space to address the problem of hard-negative mining in medical image segmentation. Main results. The effectiveness of our knowledge distillation method is evaluated on the private ABUS dataset and a public hippocampus dataset. The experimental results demonstrate that our proposed method achieves state-of-the-art performance in ABUS tumor segmentation. Notably, after distilling knowledge from the teacher network (3D U-Net), the Dice similarity coefficient (DSC) of the student network (small 3D U-Net) improves by 7%. Moreover, the DSC of another student network (3D HR-Net) reaches 0.780, very close to that of the teacher network, while the two student networks have only 6.8% and 12.1% of the parameters of 3D U-Net, respectively. Significance. This research introduces a novel knowledge distillation method for ABUS tumor segmentation that significantly reduces computational demands while achieving state-of-the-art performance. The method promises enhanced accuracy and feasibility for computer-aided diagnosis in diverse imaging scenarios.
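The two ingredients, feature mimicking and distance ranking, can be sketched as simple scalar losses. These are generic stand-ins under stated assumptions (cosine similarity for mimicking, a hinge margin for ranking), not the paper's exact formulations:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def mimic_loss(student_feats, teacher_feats):
    """Feature-mimicking distillation: 1 minus the mean cosine similarity
    between paired student and teacher feature vectors."""
    sims = [cosine(s, t) for s, t in zip(student_feats, teacher_feats)]
    return 1.0 - sum(sims) / len(sims)

def ranking_loss(d_pos, d_neg, margin=0.5):
    """Hinge-style ranking on feature-space distances: positive (tumor)
    pairs should be closer than negative pairs by at least `margin`."""
    return max(0.0, d_pos - d_neg + margin)
```

When student features align perfectly with the teacher's, the mimic loss vanishes; the ranking loss is zero once the margin between tumor-pair and non-tumor-pair distances is respected.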


Subject(s)
Neoplasms , Humans , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted
4.
IEEE J Biomed Health Inform ; 27(6): 2944-2955, 2023 06.
Article in English | MEDLINE | ID: mdl-37030813

ABSTRACT

Multimodal magnetic resonance imaging (MRI) contains complementary information in anatomical and functional images that supports accurate diagnosis and treatment evaluation of lung cancers. However, effectively exploiting the complementary information in chest MRI images remains challenging due to the lack of rigorous registration. In this paper, a novel method, named the coco-attention mechanism, is proposed that can effectively exploit the complementary information in weakly paired images for accurate tumor segmentation. The coco-attention module consists of two parts: multi-modal co-attention (MultiCo-attn) and multi-level coordinate attention (MultiCord-attn). The former aims to obtain tumor-aware deep features for accurate tumor localization, and the latter aims to highlight the tumor area for more precise segmentation. Specifically, MultiCo-attn extracts complementary information from multimodal high-dimensional semantic features using a bidirectional algorithm to generate attention maps focused on the tumor region, and then uses the attention maps to enhance the feature representations. MultiCord-attn leverages multi-level feature information to highlight tumor regions by adjusting the weight of each point in the feature map. We evaluate the proposed method on lung tumor segmentation with a clinical dataset of 90 chest MRI scans of non-small cell lung cancer (NSCLC). The results show that the proposed method is effective for tumor segmentation in weakly paired images and achieves significant improvement (p < 0.005) over several commonly used multimodal segmentation methods. Furthermore, the ablation experiments confirm the effectiveness and interpretability of the proposed coco-attention module.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Algorithms , Image Processing, Computer-Assisted
5.
Comput Med Imaging Graph ; 104: 102169, 2023 03.
Article in English | MEDLINE | ID: mdl-36586196

ABSTRACT

Registration of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is challenging, as rapid intensity changes caused by a contrast agent lead to large registration errors. To address this problem, we propose a novel multi-domain image-to-image translation (MDIT) network based on image disentangling for separating motion from contrast changes before registration. In particular, the DCE images are disentangled into a domain-invariant content space (motion) and a domain-specific attribute space (contrast changes). The disentangled representations are then used to generate images in which the contrast changes have been removed from the motion. After that, the resulting deformations can be directly derived from the generated images using free-form deformation (FFD) registration. The method is tested on 10 lung DCE-MRI cases. The proposed method reaches an average root mean squared error of 0.3 ± 0.41, and the separation takes about 2.4 s per case. Results show that the proposed method improves registration efficiency without losing registration accuracy compared with several state-of-the-art registration methods.


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Lung , Motion , Contrast Media
6.
Med Image Anal ; 82: 102589, 2022 11.
Article in English | MEDLINE | ID: mdl-36095905

ABSTRACT

Accurate segmentation of breast masses in 3D automated breast ultrasound (ABUS) plays an important role in breast cancer analysis. Deep convolutional networks have become a promising approach for segmenting ABUS images. However, designing an effective network architecture is time-consuming and relies heavily on specialists' experience and prior knowledge. To address this issue, we introduce a searchable segmentation network (denoted Auto-DenseUNet) based on neural architecture search (NAS) that finds the optimal architecture automatically for the ABUS mass segmentation task. Concretely, a novel search space is designed based on a densely connected structure to enhance the gradient and information flows throughout the network. To encourage multiscale information fusion, a set of searchable multiscale aggregation nodes between the down-sampling and up-sampling parts of the network is further designed, so that all operators within the dense connection structure or between any two aggregation nodes can be searched to find the optimal structure. Finally, a novel decoupled search training strategy is introduced during architecture search to alleviate the memory limitation caused by continuous relaxation in NAS. The proposed Auto-DenseUNet method has been evaluated on our ABUS dataset of 170 volumes (from 107 patients), split at the patient level into 120 training volumes and 50 testing volumes. Experimental results on the testing volumes show that our searched architecture performs better than several human-designed segmentation models on the 3D ABUS mass segmentation task, indicating the effectiveness of the proposed method.
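The continuous relaxation that makes the search differentiable (and whose memory cost the decoupled training strategy targets) can be sketched as a softmax-weighted mixture of candidate operations, in the spirit of DARTS; the operator set and the α parametrization below are illustrative assumptions:

```python
import math

def mixed_op(x, ops, alphas):
    """Continuously relaxed candidate edge: every candidate operation is
    evaluated and blended with softmax(architecture weights), so the
    architecture parameters can be learned by gradient descent."""
    m = max(alphas)  # stabilized softmax
    exps = [math.exp(a - m) for a in alphas]
    z = sum(exps)
    return sum((e / z) * op(x) for e, op in zip(exps, ops))
```

Because every candidate op must be evaluated and kept in memory for the backward pass, the memory footprint grows with the size of the operator set, which is the limitation the decoupled search strategy is designed to relieve.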


Subject(s)
Breast Neoplasms , Imaging, Three-Dimensional , Humans , Female , Imaging, Three-Dimensional/methods , Ultrasonography, Mammary/methods , Neural Networks, Computer , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods
7.
Comput Methods Programs Biomed ; 221: 106891, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35623209

ABSTRACT

BACKGROUND AND OBJECTIVE: Automated breast ultrasound (ABUS) imaging technology has been widely used in clinical diagnosis, and accurate lesion segmentation in ABUS images is essential in computer-aided diagnosis (CAD) systems. Although deep learning-based approaches have been widely employed in medical image analysis, the large variety of lesions and the imaging interference make ABUS lesion segmentation challenging. METHODS: In this paper, we propose a novel deepest semantically guided multi-scale feature fusion network (DSGMFFN) for lesion segmentation in 2D ABUS slices. To cope with the large variety of lesions, a deepest semantically guided decoder (DSGNet) and a multi-scale feature fusion model (MFFM) are designed, in which the deepest semantic information is fully utilized to guide decoding and feature fusion: the deepest features are given the highest weight in the fusion process and participate in every decoding stage. To address imaging interference, a novel mixed attention mechanism is developed, integrating spatial self-attention and channel self-attention to capture the correlations among pixels and channels and highlight the lesion region. RESULTS: The proposed DSGMFFN is evaluated on 3742 slices of 170 ABUS volumes. The experimental results indicate that DSGMFFN achieves 84.54% Dice similarity coefficient (DSC) and 73.24% intersection over union (IoU). CONCLUSIONS: The proposed method outperforms state-of-the-art methods in ABUS lesion segmentation, alleviating incorrect segmentation caused by lesion variety and imaging interference in ABUS images.
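The two reported metrics are standard and easy to state precisely for binary masks; a minimal sketch over flattened 0/1 masks:

```python
def dice_iou(pred, gt):
    """Dice = 2|P∩G| / (|P| + |G|); IoU = |P∩G| / |P∪G|,
    for flattened binary masks given as 0/1 sequences."""
    inter = sum(p and g for p, g in zip(pred, gt))
    p_sum, g_sum = sum(pred), sum(gt)
    union = p_sum + g_sum - inter
    dice = 2 * inter / (p_sum + g_sum) if p_sum + g_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

The two are monotonically related (IoU = DSC / (2 − DSC)), which is why papers often report both from the same confusion counts.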


Subject(s)
Image Processing, Computer-Assisted , Ultrasonography, Mammary , Diagnosis, Computer-Assisted , Female , Humans , Image Processing, Computer-Assisted/methods , Ultrasonography, Mammary/methods
8.
Acad Radiol ; 29 Suppl 2: S73-S81, 2022 02.
Article in English | MEDLINE | ID: mdl-33495072

ABSTRACT

RATIONALE AND OBJECTIVES: To investigate the effect of intralesional heterogeneity on differentiating benign and malignant pulmonary lesions, quantitative magnetic resonance imaging (MRI) radiomics and machine learning methods were adopted. MATERIALS AND METHODS: A total of 176 patients with multiparametric MRI were involved in this exploratory study. To investigate the effect of intralesional heterogeneity on lesion classification, a radiomics model called the tumor heterogeneity model was developed and compared with the conventional radiomics model based on the entire tumor. In the tumor heterogeneity model, each lesion was divided into five sublesions according to spatial location using a clustering algorithm. From the five sublesions in the multiparametric MRI sequences, 1100 radiomics features were extracted. The recursive feature elimination method was employed to select features, and a support vector machine classifier was used to distinguish benign from malignant lesions. Classification performance was evaluated with the receiver operating characteristic curve, with the area under the curve (AUC) as the figure of merit. Three-fold cross-validation (CV), with and without nesting, was used to validate the model. RESULTS: The tumor heterogeneity model (AUC = 0.74 ± 0.04 and 0.90 ± 0.03 for CV with and without nesting, respectively) outperforms the conventional model (AUC = 0.68 ± 0.04 and 0.87 ± 0.03). The difference between the two models is statistically significant (p = 0.03) for lesions greater than 18.80 cm³. CONCLUSION: Intralesional heterogeneity influences the classification of pulmonary lesions, and the tumor heterogeneity model tends to perform better than the conventional radiomics model.
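Dividing a lesion into spatial sub-lesions by clustering voxel coordinates can be sketched with a plain k-means loop. The paper's exact clustering algorithm is not specified here, so this is a generic stand-in; the deterministic initialization from the first k points is an assumption made for reproducibility:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on voxel coordinates: splits a lesion into k spatial
    sub-lesions (the study uses five; k is a free parameter here)."""
    centers = points[:k]  # deterministic init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - c) ** 2 for a, c in zip(p, ctr)) for ctr in centers]
            clusters[d.index(min(d))].append(p)
        centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters
```

Radiomics features are then extracted per cluster, so spatially distinct parts of a heterogeneous lesion contribute separate feature vectors.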


Subject(s)
Magnetic Resonance Imaging , Multiparametric Magnetic Resonance Imaging , Humans , Machine Learning , Magnetic Resonance Imaging/methods , Retrospective Studies , Support Vector Machine
9.
IEEE J Biomed Health Inform ; 26(1): 301-311, 2022 01.
Article in English | MEDLINE | ID: mdl-34003755

ABSTRACT

Tumor segmentation in 3D automated breast ultrasound (ABUS) plays an important role in breast disease diagnosis and surgical planning. However, automatic segmentation of tumors in 3D ABUS images remains challenging due to large variations in tumor shape and size and uncertain tumor locations among patients. In this paper, we develop a novel cross-model attention-guided tumor segmentation network with a hybrid loss for 3D ABUS images. Specifically, we incorporate tumor location into the segmentation network by combining an improved 3D Mask R-CNN head with V-Net in an end-to-end architecture. Furthermore, we introduce a cross-model attention mechanism that aggregates the segmentation probability map from the improved 3D Mask R-CNN into each feature extraction level of the V-Net. We then design a hybrid loss to balance the contribution of each part of the proposed cross-model segmentation network. We conduct extensive experiments on 170 3D ABUS volumes from 107 patients. Experimental results show that our method outperforms other state-of-the-art methods, achieving a Dice similarity coefficient (DSC) of 64.57%, Jaccard coefficient (JC) of 53.39%, recall (REC) of 64.43%, precision (PRE) of 74.51%, 95th-percentile Hausdorff distance (95HD) of 11.91 mm, and average surface distance (ASD) of 4.63 mm. Our code will be available online (https://github.com/zhouyuegithub/CMVNet).
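The 95HD reported above is the robust variant of the Hausdorff distance: it takes the 95th percentile of the directed surface-to-surface distances instead of the maximum, ignoring the worst 5% of outlier points. A minimal sketch over point sets (percentile conventions vary between toolkits, so the ceiling-based one here is an assumption):

```python
import math

def hd95(a, b):
    """95th-percentile Hausdorff distance between two point sets a and b."""
    def directed(src, dst):
        # for each point in src, distance to its nearest point in dst
        return sorted(min(math.dist(p, q) for q in dst) for p in src)
    def pct95(xs):
        idx = min(len(xs) - 1, math.ceil(0.95 * len(xs)) - 1)
        return xs[idx]
    return max(pct95(directed(a, b)), pct95(directed(b, a)))
```

This brute-force version is O(|a|·|b|); production metrics over full segmentation surfaces use distance transforms instead, but the definition is the same.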


Subject(s)
Neoplasms , Ultrasonography, Mammary , Breast/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted , Ultrasonography, Mammary/methods
10.
Comput Methods Programs Biomed ; 209: 106313, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34364182

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate segmentation of breast masses in 3D automated breast ultrasound (ABUS) images plays an important role in qualitative and quantitative ABUS image analysis. Yet this task is challenging due to the low signal-to-noise ratio and serious artifacts in ABUS images, the large variation in the shape and size of breast masses, and the small training dataset compared with natural images. The purpose of this study is to address these difficulties by designing a dilated densely connected U-Net (D2U-Net) together with an uncertainty focus loss. METHODS: A lightweight yet effective densely connected segmentation network is constructed to extensively explore feature representations in the small ABUS dataset. To deal with the high variation in the shape and size of breast masses, a set of hybrid dilated convolutions is integrated into the dense blocks of the D2U-Net. We further propose an uncertainty focus loss to put more attention on unreliable network predictions, especially the ambiguous mass boundaries caused by the low signal-to-noise ratio and artifacts. Our segmentation algorithm is evaluated on an ABUS dataset of 170 volumes from 107 patients. Ablation analysis and comparisons with existing methods are conducted to verify the effectiveness of the proposed method. RESULTS: Experimental results demonstrate that the proposed algorithm outperforms existing methods on the 3D ABUS mass segmentation task, with a Dice similarity coefficient, Jaccard index, and 95% Hausdorff distance of 69.02%, 56.61%, and 4.92 mm, respectively. CONCLUSIONS: The proposed method is effective in segmenting breast masses in our small ABUS dataset, especially breast masses with large shape and size variations.
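One plausible reading of an uncertainty focus loss is a pixel-wise cross-entropy re-weighted by predictive entropy, so ambiguous boundary pixels contribute more to the gradient. This is a hedged sketch under that assumption, not the paper's exact formulation; `gamma` is an invented knob controlling how strongly uncertain pixels are emphasized:

```python
import math

def uncertainty_focus_bce(probs, labels, gamma=1.0):
    """BCE where each pixel is re-weighted by 1 + gamma * predictive entropy,
    so uncertain (boundary) pixels get extra attention."""
    eps = 1e-7
    total, wsum = 0.0, 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1 - eps)
        entropy = -(p * math.log(p) + (1 - p) * math.log(1 - p))
        w = 1.0 + gamma * entropy
        bce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += w * bce
        wsum += w
    return total / wsum  # weight-normalized mean
```

Confident correct predictions contribute almost nothing, while a hesitant prediction on a boundary pixel is both costlier and more heavily weighted.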


Subject(s)
Breast , Ultrasonography, Mammary , Algorithms , Breast/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted , Ultrasonography , Uncertainty
11.
Med Image Anal ; 70: 101918, 2021 05.
Article in English | MEDLINE | ID: mdl-33676100

ABSTRACT

Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a lightweight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over single-task learning counterparts.


Subject(s)
Breast Neoplasms , Ultrasonography, Mammary , Breast Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Female , Humans , Ultrasonography
12.
IEEE Trans Image Process ; 30: 2935-2946, 2021.
Article in English | MEDLINE | ID: mdl-33560987

ABSTRACT

Unsupervised cross domain (UCD) person re-identification (re-ID) aims to apply a model trained on a labeled source domain to an unlabeled target domain. It faces huge challenges as the identities have no overlap between these two domains. At present, most UCD person re-ID methods perform "supervised learning" by assigning pseudo labels to the target domain, which leads to poor re-ID performance due to the pseudo label noise. To address this problem, a multi-loss optimization learning (MLOL) model is proposed for UCD person re-ID. In addition to using the information of clustering pseudo labels from the perspective of supervised learning, two losses are designed from the view of similarity exploration and adversarial learning to optimize the model. Specifically, in order to alleviate the erroneous guidance brought by the clustering error to the model, a ranking-average-based triplet loss learning and a neighbor-consistency-based loss learning are developed. Combining these losses to optimize the model results in a deep exploration of the intra-domain relation within the target domain. The proposed model is evaluated on three popular person re-ID datasets, Market-1501, DukeMTMC-reID, and MSMT17. Experimental results show that our model outperforms the state-of-the-art UCD re-ID methods with a clear advantage.
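The ranking-average idea, trusting an average over the k closest ranked neighbours instead of a single (possibly mislabeled) nearest neighbour, and the underlying triplet margin loss can be sketched as follows. Both the margin and k are illustrative assumptions, not MLOL's exact hyperparameters:

```python
def triplet_loss(d_ap, d_an, margin=0.3):
    """Triplet margin loss on distances: the anchor-positive distance should
    be smaller than the anchor-negative distance by at least `margin`."""
    return max(0.0, d_ap - d_an + margin)

def ranking_average_distance(dists, k=3):
    """Average of the k smallest distances: a noise-robust stand-in for a
    single nearest neighbour when pseudo labels may be wrong."""
    top = sorted(dists)[:k]
    return sum(top) / len(top)
```

Feeding ranking-averaged distances into the triplet loss dampens the effect of a single spurious nearest neighbour created by a clustering error.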


Subject(s)
Biometric Identification/methods , Image Processing, Computer-Assisted/methods , Unsupervised Machine Learning , Algorithms , Databases, Factual , Humans , Video Recording
13.
Ultrasonics ; 110: 106271, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33166786

ABSTRACT

Accurate breast mass segmentation in automated breast ultrasound (ABUS) greatly aids breast cancer diagnosis and treatment. However, the lack of clear boundaries and the significant variation in mass shapes make automatic segmentation very challenging. In this paper, a novel automatic tumor segmentation method, SC-FCN-BLSTM, is proposed by incorporating bi-directional long short-term memory (BLSTM) and a spatial-channel attention (SC-attention) module into a fully convolutional network (FCN). To reduce the performance degradation caused by ambiguous boundaries and varying tumor sizes, the SC-attention module is designed to integrate both finer-grained spatial information and rich semantic information. Since ABUS data are three-dimensional, utilizing inter-slice context can improve segmentation performance; a BLSTM module with SC-attention is therefore constructed to model the correlation between slices, employing inter-slice context to assist segmentation and eliminate false positives. The proposed method is verified on our private ABUS dataset of 124 patients with 170 volumes, including 3636 labeled 2D slices. The Dice similarity coefficient (DSC), recall, precision, and Hausdorff distance (HD) of the proposed method are 0.8178, 0.8067, 0.8292, and 11.1367, respectively. Experimental results demonstrate that the proposed method offers improved segmentation results compared with existing deep learning-based methods.


Subject(s)
Breast Neoplasms/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Ultrasonography, Mammary/methods , Algorithms , Diagnosis, Computer-Assisted , Female , Humans
14.
IEEE Trans Med Imaging ; 40(2): 673-687, 2021 02.
Article in English | MEDLINE | ID: mdl-33136541

ABSTRACT

Image registration of lung dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is challenging because rapid changes in intensity lead to non-realistic deformations in intensity-based registration methods. To address this problem, we propose a novel landmark-based registration framework that incorporates landmark information into a group-wise registration. Robust principal component analysis is used to separate motion from the intensity changes caused by a contrast agent. Landmark pairs are detected on the resulting motion components and then incorporated into an intensity-based registration through a constraint term. To reduce the negative effect of inaccurate landmark pairs on registration, an adaptive landmark weighting constraint is proposed. The method for calculating landmark weights is based on the assumption that the displacement of a well-matched landmark is consistent with those of its neighbors. The proposed method was tested on 20 clinical lung DCE-MRI image series, using both visual inspection and quantitative assessment for evaluation. Experimental results show that the proposed method effectively reduces non-realistic deformations in registration and improves registration performance compared with several state-of-the-art registration methods.
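The neighbor-consistency assumption translates directly into a weighting rule: compare each landmark's displacement with the mean displacement of its neighbours and down-weight outliers. A sketch with a Gaussian kernel; the kernel choice, the neighbour lists, and `sigma` are illustrative assumptions:

```python
import math

def landmark_weights(displacements, neighbors, sigma=1.0):
    """Weight each landmark pair by how consistent its displacement vector
    is with the mean displacement of its neighbours: consistent matches get
    weight near 1, outliers decay toward 0."""
    weights = []
    for i, d in enumerate(displacements):
        nb = neighbors[i]
        mean = [sum(displacements[j][k] for j in nb) / len(nb)
                for k in range(len(d))]
        dev2 = sum((a - b) ** 2 for a, b in zip(d, mean))
        weights.append(math.exp(-dev2 / (2 * sigma ** 2)))
    return weights
```

These weights then scale each landmark's contribution to the constraint term, so a mismatched pair cannot drag the deformation field.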


Subject(s)
Algorithms , Contrast Media , Lung/diagnostic imaging , Magnetic Resonance Imaging , Motion
15.
IEEE Trans Med Imaging ; 40(1): 431-443, 2021 01.
Article in English | MEDLINE | ID: mdl-33021936

ABSTRACT

Accurate breast mass segmentation of automated breast ultrasound (ABUS) images plays a crucial role in 3D breast reconstruction, which can assist radiologists in surgery planning. Although convolutional neural networks have great potential for breast mass segmentation given the remarkable progress of deep learning, the lack of annotated data limits the performance of deep CNNs. In this article, we present an uncertainty-aware temporal ensembling (UATE) model for semi-supervised ABUS mass segmentation. Specifically, a temporal ensembling segmentation (TEs) model is designed to segment breast masses using a few labeled images and a large number of unlabeled images. Since the network output contains both correct and unreliable predictions, treating each prediction equally in the pseudo-label update and loss calculation may degrade network performance. To alleviate this problem, an uncertainty map is estimated for each image; an adaptive ensembling momentum map and an uncertainty-aware unsupervised loss are then designed and integrated with the TEs model. The effectiveness of the proposed UATE model is verified mainly on an ABUS dataset of 107 patients with 170 volumes, including 13382 labeled 2D slices. The Jaccard index (JI), Dice similarity coefficient (DSC), pixel-wise accuracy (AC), and Hausdorff distance (HD) of the proposed method on the testing set are 63.65%, 74.25%, 99.21%, and 3.81 mm, respectively. Experimental results demonstrate that our semi-supervised method outperforms the fully supervised method and achieves promising results compared with existing semi-supervised methods.
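The adaptive ensembling momentum can be sketched as a per-pixel exponential moving average whose momentum grows with the estimated uncertainty, so unreliable predictions change the pseudo label less. The linear momentum schedule below is an assumption for illustration, not UATE's exact map:

```python
def update_ensemble(ensemble, pred, uncertainty, base_momentum=0.6):
    """Temporal-ensembling EMA with uncertainty-adaptive momentum:
    momentum m ramps from base_momentum (certain, u=0) to 1.0 (fully
    uncertain, u=1), at which point the old ensemble is kept unchanged."""
    out = []
    for e, p, u in zip(ensemble, pred, uncertainty):
        m = base_momentum + (1.0 - base_momentum) * u  # u in [0, 1]
        out.append(m * e + (1.0 - m) * p)
    return out
```

A certain pixel pulls the ensemble toward the new prediction, while a maximally uncertain pixel leaves the pseudo label untouched.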


Subject(s)
Image Processing, Computer-Assisted , Ultrasonography, Mammary , Female , Humans , Neural Networks, Computer , Ultrasonography , Uncertainty
16.
Med Phys ; 47(11): 5669-5680, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32970838

ABSTRACT

PURPOSE: Automated breast ultrasound (ABUS) has drawn attention in breast disease detection and diagnosis applications, but reviewing the hundreds of slices produced by ABUS is time-consuming. In this paper, a tumor detection method for ABUS images based on a convolutional neural network is proposed. METHODS: First, integrating multitask learning with YOLOv3, an improved YOLOv3 detection network is designed to detect tumor candidates in two-dimensional (2D) slices. Two-dimensional detection treats each slice separately, leading to large differences in the position and score of a tumor candidate across adjacent slices; due to the influence of artifacts, noise, and mammary tissue, 2D detection may also include many false-positive regions. To alleviate these problems, a rescoring algorithm is first designed, and then a three-dimensional volume-forming and false-positive (FP) reduction scheme is built. RESULTS: The method was tested on 340 volumes (124 patients, 181 tumors) with fivefold cross-validation. It achieved sensitivities of 90%, 85%, 80%, 75%, and 70% at 7.42, 3.31, 1.62, 1.23, and 0.88 false positives per volume, respectively. CONCLUSION: Compared with existing ABUS tumor detection methods, our method achieves promising results.
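The rescoring step exploits the fact that a real tumor persists across adjacent slices while noise-driven detections do not. A minimal version is a moving average of per-slice candidate scores; the window size and the plain averaging scheme are assumptions, not the paper's exact algorithm:

```python
def rescore(slice_scores, window=1):
    """Smooth per-slice candidate scores with a moving average over
    adjacent slices, damping isolated 2D false positives that have no
    3D support while preserving runs of consistent detections."""
    n = len(slice_scores)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        out.append(sum(slice_scores[lo:hi]) / (hi - lo))
    return out
```

An isolated single-slice spike is pulled down toward its silent neighbours, after which a fixed threshold separates persistent candidates from noise before 3D volume forming.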


Subject(s)
Breast Neoplasms , Neoplasms , Algorithms , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Interpretation, Computer-Assisted , Neural Networks, Computer , Ultrasonography, Mammary
17.
Med Biol Eng Comput ; 58(9): 2095-2105, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32654016

ABSTRACT

Lung diffusion-weighted magnetic resonance imaging (DWI) has shown promising value in lung lesion detection, diagnosis, differentiation, and staging. However, respiratory and cardiac motion, blood flow, and lung hysteresis may contribute to blurring, resulting in unclear lung images that can adversely affect diagnostic performance. The purpose of this study is to reduce DWI blurring and assess its positive effect on diagnosis. This retrospective study includes 71 patients. In this paper, a motion correction and noise removal method using low-rank decomposition is proposed, which reduces DWI blurring by exploiting the spatiotemporal continuity of the sequences. Deblurring performance is evaluated by qualitative and quantitative assessment, and diagnostic performance for lung cancer is measured by the area under the curve (AUC). In the qualitative assessment, the deformation of lung masses is reduced, the blurring of lung tumor edges is alleviated, and noise in the apparent diffusion coefficient (ADC) map is greatly reduced. For the quantitative assessment, mutual information (MI) and the Pearson correlation coefficient (Pearson-Coff) are 1.30 and 0.82 before decomposition and 1.40 and 0.85 after decomposition; the differences in both MI and Pearson-Coff are statistically significant (p < 0.05). Regarding the positive effect of deblurring on lung cancer diagnosis, the AUC improved from 0.731 to 0.841 using three-fold cross-validation. We conclude that the low-rank matrix decomposition method is promising for reducing the errors in lung DWI images caused by noise and artifacts and for improving diagnosis. Further investigations are warranted to understand the full utility of low-rank decomposition for lung DWI images.


Subject(s)
Diffusion Magnetic Resonance Imaging/methods , Lung Neoplasms/diagnostic imaging , Lung/diagnostic imaging , Adult , Aged , Algorithms , Area Under Curve , Artifacts , Biomedical Engineering , Diffusion Magnetic Resonance Imaging/statistics & numerical data , Female , Humans , Image Interpretation, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/statistics & numerical data , Lung Neoplasms/classification , Male , Middle Aged , Motion , Retrospective Studies , Signal-To-Noise Ratio , Spatio-Temporal Analysis , Young Adult
18.
Comput Methods Programs Biomed ; 195: 105518, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32480189

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic detection of masses in mammograms is a significant challenge and plays a crucial role in assisting radiologists with accurate diagnosis. In this paper, a bilateral image analysis method based on a Convolutional Neural Network (CNN) is developed for mass detection in mammograms. METHODS: The proposed bilateral mass detection method consists of two networks: a registration network for registering bilateral mammograms and a Siamese-Faster-RCNN network for mass detection using a pair of registered mammograms. In the first step, a self-supervised learning network is built to learn the spatial transformation between bilateral mammograms. This network directly estimates the spatial transformation by maximizing an image-wise similarity metric, so no corresponding-point labeling is needed. In the second step, an end-to-end network combining a Region Proposal Network (RPN) and a Siamese Fully Connected (Siamese-FC) network is designed. Unlike existing methods, the designed network integrates mass detection on a single image with comparison of registered bilateral images. RESULTS: The proposed method is evaluated on three datasets (the publicly available INbreast dataset and the private BCPKUPH and TXMD datasets). On INbreast, it achieves a 0.88 true positive rate (TPR) with 1.12 false positives per image (FPs/I); on BCPKUPH, 0.85 TPR with 1.86 FPs/I; and on TXMD, 0.85 TPR with 2.70 FPs/I. CONCLUSIONS: The registration experiments show that the proposed method is suitable for bilateral mass detection, and the mass detection experiments show that it performs better than a unilateral mass detection method, different bilateral connection schemes, and image-level fusion bilateral schemes.
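TPR and FPs/I come from matching predicted boxes to ground-truth boxes. A common greedy IoU-based matching is shown here as a generic sketch; the 0.5 IoU threshold is an assumption, not the paper's criterion:

```python
def evaluate_detections(dets, gts, iou_thr=0.5):
    """Greedily match detections to ground-truth boxes (x1, y1, x2, y2).
    A detection overlapping an unmatched GT box with IoU >= iou_thr is a
    true positive; otherwise it counts as a false positive."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)
    matched, tp, fp = set(), 0, 0
    for d in dets:
        best, best_iou = None, iou_thr
        for gi, g in enumerate(gts):
            if gi in matched:
                continue
            v = iou(d, g)
            if v >= best_iou:
                best, best_iou = gi, v
        if best is None:
            fp += 1
        else:
            matched.add(best)
            tp += 1
    tpr = tp / len(gts) if gts else 0.0
    return tpr, fp
```

Summing false positives over a dataset and dividing by the number of images gives the FPs/I figure reported above.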


Subject(s)
Mammography , Neural Networks, Computer , Image Processing, Computer-Assisted
19.
Sensors (Basel) ; 20(11)2020 Jun 02.
Article in English | MEDLINE | ID: mdl-32498321

ABSTRACT

A 3D ultrasound image reconstruction technique, named probe sector matching (PSM), is proposed in this paper for a freehand linear-array ultrasound probe equipped with multiple sensors that provide the position and attitude of the transducer and the pressure between the transducer and the target surface. The proposed PSM method includes three main steps. First, the imaging target and the working range of the probe are set as the center and the radius of the imaging field of view, respectively; to reconstruct a 3D volume, the positions of all necessary probe sectors are pre-calculated inversely to form a sector database. Second, 2D cross-sectional probe sectors, with the corresponding optical position, attitude, and pressure information, are collected as the ultrasound probe moves around the imaging target. Finally, an improved 3D Hough transform is used to match the plane of the current probe sector to the existing sector images in the sector database. Once all pre-calculated probe sectors have been acquired and matched into the 3D space defined by the sector database, the 3D ultrasound reconstruction is complete. The PSM is validated through two experiments: a virtual simulation using a numerical model and a lab experiment using a real physical model. The experimental results show that PSM effectively reduces the errors caused by changes in the target position due to uneven surface pressure or inhomogeneity of the transmission media. We conclude that the PSM proposed in this study may help in designing a lightweight, inexpensive, and flexible ultrasound device with accurate 3D imaging capability.
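Common to any freehand 3D ultrasound reconstruction, including the sector-based scheme above, is the core step of mapping each pixel of a tracked 2D frame through the sensed probe pose into a voxel volume. This is a generic nearest-neighbour sketch of that step, not the paper's PSM algorithm; the function name and pose convention are assumptions:

```python
import numpy as np

def insert_frame(volume, frame, rotation, translation, voxel_size):
    """Scatter a tracked 2D ultrasound frame into a 3D voxel volume.

    rotation (3x3) and translation (3,) give the frame's pose in the
    volume coordinate system, as reported by the position/attitude sensor;
    voxel_size is one voxel's edge length in the same units as translation.
    Nearest-neighbour placement; gaps between frames would be filled
    by a later hole-filling step.
    """
    h, w = frame.shape
    # Pixel coordinates in the frame plane (z = 0 in probe coordinates).
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(h * w)], axis=0)
    world = rotation @ pts + translation[:, None]   # transform into volume space
    idx = np.round(world / voxel_size).astype(int)  # nearest voxel index
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]
    return volume

# With an identity pose, the frame lands in the z = 0 slice of the volume.
vol = insert_frame(np.zeros((4, 4, 4)),
                   np.arange(16, dtype=float).reshape(4, 4),
                   np.eye(3), np.zeros(3), 1.0)
```

The pressure sensor's role in PSM is precisely to correct the pose fed into such a transform when the probe deforms the target surface.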

20.
Eur Radiol ; 30(8): 4595-4605, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32222795

ABSTRACT

OBJECTIVES: To develop and validate a radiomics model based on multiparametric magnetic resonance imaging (MRI) for classification of pulmonary lesions, and to identify the optimal machine learning methods. MATERIALS AND METHODS: This retrospective analysis included 201 patients (143 malignancies, 58 benign lesions). Radiomics features were extracted from multiparametric MRI, including T2-weighted imaging (T2WI), T1-weighted imaging (T1WI), and the apparent diffusion coefficient (ADC) map. Three feature selection methods, recursive feature elimination (RFE), the t test, and the least absolute shrinkage and selection operator (LASSO), and three classification methods, linear discriminant analysis (LDA), support vector machine (SVM), and random forest (RF), were used to distinguish benign from malignant pulmonary lesions. Performance was compared by AUC, sensitivity, accuracy, precision, and specificity. Analysis of performance differences across three randomly drawn cross-validation sets verified the stability of the results. RESULTS: For most single MR sequences and combinations of multiple MR sequences, the RFE feature selection method with the SVM classifier performed best, followed by RFE with RF. The radiomics model based on multiple sequences showed higher diagnostic accuracy than any single sequence for every machine learning method. Using RFE with SVM, the joint model of T1WI, T2WI, and ADC achieved the highest performance, with AUC = 0.88 ± 0.02 (sensitivity 83%; accuracy 82%; precision 91%; specificity 79%) on the test set. CONCLUSION: Quantitative radiomics features based on multiparametric MRI perform well in differentiating malignant from benign lung lesions. The combination of RFE with SVM is superior to the other feature selection and classifier combinations. KEY POINTS: • The radiomics approach has the potential to distinguish between benign and malignant pulmonary lesions. • A radiomics model based on multiparametric MRI performs better than single-sequence models. • The machine learning combination of RFE with SVM performs best in the current cohort.
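The study's winning pipeline pairs RFE with a linear SVM. RFE's core idea — fit a linear model, rank features by the magnitude of their weights, drop the weakest, and refit — can be illustrated in a few lines. This sketch substitutes an ordinary least-squares fit for the SVM to stay dependency-free, and the synthetic data are purely illustrative:

```python
import numpy as np

def rfe_ls(X, y, n_keep):
    """Recursive feature elimination with a least-squares linear model
    standing in for the linear SVM used in the study."""
    keep = np.arange(X.shape[1])
    while len(keep) > n_keep:
        # Refit on the surviving features, then drop the weakest one.
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep = np.delete(keep, np.argmin(np.abs(w)))
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
# Only features 1 and 4 carry signal; the rest are noise.
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.normal(size=100)
print(sorted(rfe_ls(X, y, 2).tolist()))  # recovers features 1 and 4
```

In the radiomics setting, X would hold the features extracted from T1WI, T2WI, and ADC maps, and the retained subset would then be passed to the final SVM classifier.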


Subject(s)
Lung Diseases/classification , Lung/diagnostic imaging , Machine Learning , Multiparametric Magnetic Resonance Imaging/methods , Adult , Aged , Cohort Studies , Female , Humans , Lung Diseases/diagnosis , Male , Middle Aged , ROC Curve , Retrospective Studies , Support Vector Machine , Young Adult