Results 1 - 5 of 5
1.
BME Front ; 2022: 9763284, 2022.
Article in English | MEDLINE | ID: mdl-37850158

ABSTRACT

Objective and Impact Statement. We adopt a deep learning model for bone osteolysis prediction on computed tomography (CT) images of murine breast cancer bone metastases. Given the bone CT scans at previous time steps, the model incorporates the bone-cancer interactions learned from the sequential images and generates future CT images. Its ability to predict the development of bone lesions in cancer-invaded bones can assist in assessing the risk of impending fractures and choosing proper treatments for breast cancer bone metastasis. Introduction. Breast cancer often metastasizes to bone, causes osteolytic lesions, and results in skeletal-related events (SREs) including severe pain and even fatal fractures. Although current imaging techniques can detect macroscopic bone lesions, predicting the occurrence and progression of bone lesions remains a challenge. Methods. We adopt a temporal variational autoencoder (T-VAE) model that utilizes a combination of variational autoencoders and long short-term memory networks to predict bone lesion emergence on our micro-CT dataset containing sequential images of murine tibiae. Given the CT scans of murine tibiae at early weeks, our model can learn the distribution of their future states from data. Results. We test our model against other deep learning-based prediction models on the bone lesion progression prediction task. Our model produces much more accurate predictions than existing models under various evaluation metrics. Conclusion. We develop a deep learning framework that can accurately predict and visualize the progression of osteolytic bone lesions. It will assist in planning and evaluating treatment strategies to prevent SREs in breast cancer patients.
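The T-VAE described above couples a variational autoencoder with a long short-term memory network over latent codes. As a rough, hypothetical sketch of that data flow (not the authors' implementation: the toy sizes, the linear encoder/decoder standing in for the VAE halves, and the single hand-rolled LSTM cell are all illustrative assumptions), the prediction step might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

IMG, LATENT, HIDDEN = 64, 8, 16  # toy sizes: flattened scan, latent code, LSTM state

# Toy linear "encoder" and "decoder" standing in for the VAE halves.
W_enc = rng.normal(0, 0.1, (LATENT, IMG))
W_dec = rng.normal(0, 0.1, (IMG, LATENT))

# One LSTM cell over latent codes (all four gates stacked row-wise).
W_x = rng.normal(0, 0.1, (4 * HIDDEN, LATENT))
W_h = rng.normal(0, 0.1, (4 * HIDDEN, HIDDEN))
b = np.zeros(4 * HIDDEN)
W_out = rng.normal(0, 0.1, (LATENT, HIDDEN))  # hidden state -> next latent

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(z, h, c):
    gates = W_x @ z + W_h @ h + b
    i, f, g, o = np.split(gates, 4)          # input, forget, cell, output gates
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def predict_next_scan(scans):
    """Encode each past scan, roll the LSTM over the latent codes,
    then decode the predicted next latent into a future scan."""
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    for scan in scans:
        z = W_enc @ scan           # encode scan to latent code
        h, c = lstm_step(z, h, c)  # update temporal state
    z_next = W_out @ h             # predicted latent at the next time step
    return W_dec @ z_next          # decode back to image space

past_scans = [rng.normal(size=IMG) for _ in range(3)]  # e.g. scans from weeks 1-3
future = predict_next_scan(past_scans)
print(future.shape)  # (64,)
```

A real T-VAE would additionally sample the latent code from a learned Gaussian posterior and train with a reconstruction plus KL objective; the sketch only shows the sequential encode-recur-decode loop.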

2.
Neural Netw ; 132: 66-74, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32861915

ABSTRACT

Caricature generation is an interesting yet challenging task. The primary goal is to generate a plausible caricature with reasonable exaggerations given a face image. Conventional caricature generation approaches mainly use low-level geometric transformations such as image warping to generate exaggerated images, which lack richness and diversity in terms of content and style. The recent progress in generative adversarial networks (GANs) makes it possible to learn an image-to-image transformation from data so as to generate diverse images. However, directly applying GAN-based models to this task leads to unsatisfactory results due to the large variance in the caricature distribution. Moreover, conventional models typically require pixel-wise paired training data, which largely limits their usage scenarios. In this paper, we model caricature generation as a weakly paired image-to-image translation task, and propose CariGAN to address these issues. Specifically, to enforce reasonable exaggeration and facial deformation, manually annotated caricature facial landmarks are used as an additional condition to constrain the generated image. Furthermore, an image fusion mechanism is designed to encourage our model to focus on the key facial parts so that more vivid details in these regions can be generated. Finally, a diversity loss is proposed to encourage the model to produce diverse results. Extensive experiments on a large-scale "WebCaricature" dataset show that the proposed CariGAN can generate more visually plausible caricatures with greater diversity than the state-of-the-art models.
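The diversity loss is described above only at a high level. One common formulation of such a loss (an assumption for illustration, not necessarily the paper's exact definition) penalizes the generator when two different noise vectors map to nearly identical outputs, by comparing the output distance to the noise distance:

```python
import numpy as np

def diversity_loss(out1, out2, z1, z2, eps=1e-8):
    """Encourage distinct noise vectors to produce distinct images:
    the loss shrinks as the output distance grows relative to the
    noise distance, pushing the generator away from mode collapse."""
    d_out = np.linalg.norm(out1 - out2)
    d_z = np.linalg.norm(z1 - z2)
    return d_z / (d_out + eps)

rng = np.random.default_rng(1)
z1, z2 = rng.normal(size=8), rng.normal(size=8)

# A collapsed generator (ignores z) is penalized far more than a diverse one.
x = rng.normal(size=32)                  # conditioning face image, flattened
collapsed = (x, x)                       # same output for both noise vectors
diverse = (x + 0.5 * np.pad(z1, (0, 24)),
           x + 0.5 * np.pad(z2, (0, 24)))

print(diversity_loss(*collapsed, z1, z2) > diversity_loss(*diverse, z1, z2))  # True
```

In training, this term would be added to the adversarial and landmark-conditioning losses, so diversity is traded off against fidelity rather than pursued alone.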


Subject(s)
Automated Facial Recognition/methods , Caricatures as Topic , Image Processing, Computer-Assisted/methods , Face , Family Characteristics , Female , Humans , Male
3.
IEEE Trans Med Imaging ; 39(3): 634-643, 2020 03.
Article in English | MEDLINE | ID: mdl-31395543

ABSTRACT

Current deep neural network based approaches to computed tomography (CT) metal artifact reduction (MAR) are supervised methods that rely on synthesized metal artifacts for training. However, as synthesized data may not accurately simulate the underlying physical mechanisms of CT imaging, the supervised methods often generalize poorly to clinical applications. To address this problem, we propose, to the best of our knowledge, the first unsupervised learning approach to MAR. Specifically, we introduce a novel artifact disentanglement network that disentangles the metal artifacts from CT images in the latent space. It supports different forms of generation (e.g., artifact reduction, artifact transfer, and self-reconstruction) with specialized loss functions to obviate the need for supervision with synthesized data. Extensive experiments show that when applied to a synthesized dataset, our method addresses metal artifacts significantly better than the existing unsupervised models designed for natural image-to-image translation problems, and achieves comparable performance to existing supervised models for MAR. When applied to clinical datasets, our method demonstrates better generalization ability over the supervised models. The source code of this paper is publicly available at https://github.com/liaohaofu/adn.
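The core idea of disentangling artifacts in latent space can be sketched in miniature. The following is a hypothetical toy (linear encoders/decoders and the dimensions are illustrative assumptions, not the network in the linked repository): the encoder splits an image into a content code and an artifact code, and recombining codes yields the three forms of generation the abstract lists.

```python
import numpy as np

rng = np.random.default_rng(2)
IMG, CONTENT, ART = 32, 6, 2  # toy dimensions (illustrative only)

W_c = rng.normal(0, 0.1, (CONTENT, IMG))  # encoder for anatomical content
W_a = rng.normal(0, 0.1, (ART, IMG))      # encoder for the artifact component
D_c = rng.normal(0, 0.1, (IMG, CONTENT))  # decoder from content alone
D_a = rng.normal(0, 0.1, (IMG, ART))      # additive artifact decoder

def encode(img):
    """Split an image into (content code, artifact code)."""
    return W_c @ img, W_a @ img

def decode(content, artifact=None):
    """Decode content, optionally re-injecting an artifact code."""
    out = D_c @ content
    if artifact is not None:
        out = out + D_a @ artifact
    return out

x_artifact = rng.normal(size=IMG)   # CT image with metal artifacts
x_clean = rng.normal(size=IMG)      # unpaired artifact-free CT image

c1, a1 = encode(x_artifact)
c2, _ = encode(x_clean)

reduced = decode(c1)          # artifact reduction: drop the artifact code
transferred = decode(c2, a1)  # artifact transfer onto the clean image
self_recon = decode(c1, a1)   # self-reconstruction of the input
```

The unsupervised training signal comes from loss terms tying these outputs together (e.g., self-reconstruction and cycle-style constraints), so no paired clean/artifact images are required.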


Subject(s)
Image Processing, Computer-Assisted/methods , Metals/isolation & purification , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Artifacts , Humans , Machine Learning
4.
ACS Biomater Sci Eng ; 6(11): 6241-6252, 2020 11 09.
Article in English | MEDLINE | ID: mdl-33449646

ABSTRACT

Structural bone allograft transplantation remains one of the common strategies for repair and reconstruction of large bone defects. Due to the loss of the periosteum that covers the outer surface of the cortical bone, the healing and incorporation of allografts are extremely slow and limited. To enhance the biological performance of allografts, herein, we report a novel and simple approach for engineering a periosteum mimetic coating on the surface of structural bone allografts via polymer-mediated electrospray deposition. This approach enables coating of allografts with precisely controlled composition and thickness. In addition, the periosteum mimetic coating can be tailored to achieve desired drug release profiles by making use of an appropriate biodegradable polymer or polymer blend. The efficacy study in a murine segmental femoral bone defect model demonstrates that the allograft coating composed of poly(lactic-co-glycolic acid) and bone morphogenetic protein-2 mimicking peptide significantly improves allograft healing as evidenced by decreased fibrotic tissue formation, increased periosteal bone formation, and enhanced osseointegration. Taken together, this study provides a platform technology for engineering a periosteum mimetic coating which can greatly promote bone allograft healing. This technology could eventually result in an off-the-shelf and multifunctional structural bone allograft for highly effective repair and reconstruction of large segmental bone defects. The technology can also be used to improve the performance of other medical implants by modifying their surfaces.


Subject(s)
Mesenchymal Stem Cells , Periosteum , Allografts , Animals , Bone Transplantation , Mice , Tissue Engineering
5.
IEEE Trans Med Imaging ; 37(5): 1266-1275, 2018 05.
Article in English | MEDLINE | ID: mdl-29727289

ABSTRACT

Automatic vertebrae identification and localization from arbitrary computed tomography (CT) images is challenging. Vertebrae usually share similar morphological appearance. Because of pathology and the arbitrary field-of-view of CT scans, one can hardly rely on the existence of some anchor vertebrae or parametric methods to model the appearance and shape. To solve the problem, we argue that: 1) one should make use of the short-range contextual information, such as the presence of some nearby organs (if any), to roughly estimate the target vertebrae; and 2) due to the unique anatomic structure of the spinal column, vertebrae have a fixed sequential order, which provides important long-range contextual information to further calibrate the results. We propose a robust and efficient vertebrae identification and localization system that can inherently learn to incorporate both the short- and long-range contextual information in a supervised manner. To this end, we develop a multi-task 3-D fully convolutional neural network to effectively extract the short-range contextual information around the target vertebrae. For the long-range contextual information, we propose a multi-task bidirectional recurrent neural network to encode the spatial and contextual information among the vertebrae of the visible spinal column. We demonstrate the effectiveness of the proposed approach on a challenging data set, and the experimental results show that our approach outperforms the state-of-the-art methods by a significant margin.
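The long-range step above exploits the fixed top-to-bottom ordering of vertebrae via a bidirectional recurrence. As a minimal sketch (a plain tanh recurrence over per-vertebra feature vectors; the feature and state sizes are illustrative assumptions, and the real system uses a multi-task recurrent network on 3-D FCN features), a bidirectional encoding might look like:

```python
import numpy as np

rng = np.random.default_rng(3)
FEAT, HIDDEN = 8, 4  # per-vertebra feature size and RNN state size (toy values)

W_x = rng.normal(0, 0.1, (HIDDEN, FEAT))
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))

def rnn_pass(feats):
    """Simple tanh recurrence over a sequence of per-vertebra features."""
    h = np.zeros(HIDDEN)
    out = []
    for f in feats:
        h = np.tanh(W_x @ f + W_h @ h)
        out.append(h)
    return out

def bidirectional_encode(feats):
    """Concatenate top-down and bottom-up passes so each vertebra's
    representation reflects its fixed position in the spinal sequence."""
    fwd = rnn_pass(feats)               # top-down pass
    bwd = rnn_pass(feats[::-1])[::-1]   # bottom-up pass, re-aligned
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# e.g. short-range features for five visible vertebrae from the 3-D FCN stage
feats = [rng.normal(size=FEAT) for _ in range(5)]
codes = bidirectional_encode(feats)
print(len(codes), codes[0].shape)  # 5 (8,)
```

Because each output code mixes information flowing from both ends of the visible spine, an ambiguous vertebra can be identified from its neighbors even when its own appearance is uninformative.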


Subject(s)
Image Processing, Computer-Assisted/methods , Spine/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Deep Learning , Humans