1.
Article in English | MEDLINE | ID: mdl-38083363

ABSTRACT

Prostate cancer (PCa) is one of the most prevalent cancers in men. Early diagnosis plays a pivotal role in reducing mortality from clinically significant PCa (csPCa). In recent years, bi-parametric magnetic resonance imaging (bpMRI) has attracted great attention for the detection and diagnosis of csPCa. bpMRI overcomes some limitations of multi-parametric MRI (mpMRI), such as the use of contrast agents, the longer imaging time, and the higher cost, while achieving detection performance comparable to mpMRI. However, inter-reader agreement is currently low for prostate MRI. Advances in artificial intelligence (AI) have propelled the development of deep learning (DL)-based computer-aided detection and diagnosis (CAD) systems. Most existing DL models for csPCa identification, however, are restricted by the scale of the available data and the scarcity of labels. In this paper, we propose a self-supervised pre-training scheme named SSPT-bpMRI, whose image-restoration pretext task integrates four different image transformations to improve the performance of DL algorithms. Specifically, we explored the potential value of self-supervised pre-training in fully supervised and weakly supervised settings. Experiments on the publicly available PI-CAI dataset demonstrate that our model outperforms the fully supervised or weakly supervised model alone.
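The abstract does not name the four transformations; as a hedged illustration of an image-restoration pretext task, the sketch below corrupts a 2-D slice with one of four hypothetical transformations (intensity inversion, cutout, local patch shuffling, nonlinear remapping) and pairs it with the original as the restoration target. The transform choices are invented for illustration, not taken from the paper.

```python
import numpy as np

def corrupt(volume, rng):
    """Apply one of four illustrative transformations; the restoration
    pretext task would train a network to undo the corruption."""
    choice = rng.integers(4)
    out = volume.copy()
    if choice == 0:                      # intensity inversion
        out = out.max() - out
    elif choice == 1:                    # cutout (in-painting target)
        x, y = rng.integers(0, volume.shape[0] - 8, size=2)
        out[x:x + 8, y:y + 8] = 0.0
    elif choice == 2:                    # local patch shuffling
        x, y = rng.integers(0, volume.shape[0] - 8, size=2)
        patch = out[x:x + 8, y:y + 8].ravel()   # ravel() copies the view
        rng.shuffle(patch)
        out[x:x + 8, y:y + 8] = patch.reshape(8, 8)
    else:                                # nonlinear intensity remapping
        out = out ** 2
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))
pair = (corrupt(img, rng), img)          # (network input, restoration target)
```

A pre-training loop would sample such pairs and minimize a reconstruction loss before fine-tuning on the supervised csPCa task.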


Subject(s)
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostate/pathology , Artificial Intelligence , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Multiparametric Magnetic Resonance Imaging/methods
2.
Article in English | MEDLINE | ID: mdl-38083742

ABSTRACT

Positron emission tomography (PET) is the most sensitive molecular imaging modality routinely applied in modern healthcare. The radioactivity from the injected tracer dose is a major concern in PET imaging and limits its clinical applications; however, reducing the dose leads to image quality inadequate for diagnostic practice. Motivated by the need to produce high-quality images from minimal-dose scans, convolutional neural network (CNN)-based methods have been developed to synthesize high-quality PET from low-dose counterparts. Previous CNN-based studies usually map low-dose PET directly into feature space without considering the dose reduction level. In this study, a novel approach named CG-3DSRGAN (Classification-Guided Generative Adversarial Network with Super Resolution Refinement) is presented. Specifically, a multi-task coarse generator, guided by a classification head, allows a more comprehensive understanding of the noise-level features present in the low-dose data, resulting in improved image synthesis. Moreover, to recover the spatial details of standard PET, an auxiliary super-resolution network, Contextual-Net, is proposed as a second training stage to narrow the gap between the coarse prediction and standard PET. We compared our method to state-of-the-art methods on whole-body PET with different dose reduction factors (DRFs). Experiments demonstrate that our method outperforms the others at all DRFs. Clinical Relevance: low-dose PET, PET recovery, GAN, task-driven image synthesis, super resolution.


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Neural Networks, Computer
3.
J Digit Imaging ; 36(6): 2356-2366, 2023 12.
Article in English | MEDLINE | ID: mdl-37553526

ABSTRACT

Coronavirus disease 2019 (COVID-19) is caused by Severe Acute Respiratory Syndrome Coronavirus 2, which enters the body via the angiotensin-converting enzyme 2 (ACE2) receptor and alters its gene expression. Altered ACE2 plays a crucial role in the pathogenesis of COVID-19. Gene expression profiling, however, is invasive and costly, and is not routinely performed. In contrast, medical imaging such as computed tomography (CT) captures imaging features that depict abnormalities, and it is widely available. Computerized quantification of image features has enabled 'radiogenomics', a research discipline that identifies image features associated with molecular characteristics. A radiogenomic analysis relating ACE2 to COVID-19 has yet to be performed, primarily due to the lack of ACE2 expression data among COVID-19 patients. Similar to COVID-19, patients with lung adenocarcinoma (LUAD) exhibit altered ACE2 expression, and LUAD data are abundant. We present a radiogenomics framework to derive image features (ACE2-RGF) associated with ACE2 expression data from LUAD. The ACE2-RGF was then used as a surrogate biomarker for ACE2 expression. We adopted conventional feature selection techniques, including ElasticNet and LASSO. Our results show that: (i) the ACE2-RGF encoded a distinct collection of image features compared to conventional techniques; (ii) the ACE2-RGF can classify COVID-19 patients from normal subjects with performance comparable to conventional feature selection techniques, with an AUC of 0.92; and (iii) the ACE2-RGF can effectively identify patients with critical illness, with an AUC of 0.85. These findings provide unique insights for automated COVID-19 analysis and future research.
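The abstract names LASSO among its feature selection techniques. A minimal sketch of LASSO-based selection on synthetic data (the feature matrix `X` standing in for CT features and `ace2` for expression values are invented here; the paper's actual pipeline is more involved) might look like:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n, p = 80, 30
X = rng.standard_normal((n, p))                 # stand-in CT image features
# synthetic "expression" driven by two of the features plus small noise
ace2 = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.standard_normal(n)

# L1 regularization zeroes out uninformative coefficients
lasso = Lasso(alpha=0.1).fit(X, ace2)
selected = np.flatnonzero(lasso.coef_)          # surviving feature indices
```

With the L1 penalty, only features that actually carry signal about the target keep nonzero coefficients, which is the sense in which LASSO performs selection.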


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Angiotensin-Converting Enzyme 2 , Peptidyl-Dipeptidase A/genetics , Peptidyl-Dipeptidase A/metabolism , SARS-CoV-2/metabolism , Tomography, X-Ray Computed
4.
J Struct Biol ; 215(1): 107940, 2023 03.
Article in English | MEDLINE | ID: mdl-36709787

ABSTRACT

Cryo-electron microscopy (cryo-EM) single-particle analysis is a revolutionary imaging technique for resolving and visualizing biomacromolecules. Image alignment in cryo-EM is an important, basic step for improving the precision of image distance calculation. However, it is a very challenging task due to the high noise and low signal-to-noise ratio of the images. We therefore propose a new deep unsupervised difference learning (UDL) strategy with a novel pseudo-label guided learning network architecture and apply it to pair-wise image alignment in cryo-EM. The training framework is fully unsupervised. Furthermore, a variant of UDL called joint UDL (JUDL) is also proposed, which can utilize the similarity information of the whole dataset and thus further increase alignment precision. Assessments on both real-world and synthetic cryo-EM single-particle image datasets suggest that the new unsupervised joint alignment method achieves more accurate alignment results. Our method is highly efficient by taking advantage of GPU devices. The source code is publicly available at http://www.csbio.sjtu.edu.cn/bioinf/JointUDL/ for academic use.


Subject(s)
Single Molecule Imaging , Software , Cryoelectron Microscopy/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods
5.
IEEE Trans Med Imaging ; 42(4): 1185-1196, 2023 04.
Article in English | MEDLINE | ID: mdl-36446017

ABSTRACT

Anomaly detection in fundus images remains challenging because fundus images often contain diverse types of lesions with varied locations, sizes, shapes, and colors. Current methods achieve anomaly detection mainly by reconstructing, or separating lesions from, the fundus image background under the guidance of a set of normal fundus images. The reconstruction methods, however, ignore the constraint from lesions. The separation methods primarily model the diverse lesions with pixel-based independent and identically distributed (i.i.d.) properties, neglecting the individualized variations of different lesion types and their structural properties. Hence, these methods may struggle to distinguish lesions from fundus image backgrounds, especially in the presence of normal personalized variations (NPV). To address these challenges, we propose a patch-based non-i.i.d. mixture of Gaussians (MoG) to model diverse lesions, adapting to their statistical distribution variations across fundus images and to their patch-like structural properties. Further, we introduce the weighted Schatten p-norm as the metric of low-rank decomposition, enhancing the accuracy of the learned fundus image backgrounds and reducing false positives caused by NPV. With the individualized modeling of the diverse lesions and the background learning, fundus image backgrounds and NPV are finely learned and subsequently distinguished from diverse lesions, ultimately improving anomaly detection. The proposed method is evaluated on two real-world databases and one artificial database, outperforming state-of-the-art methods.
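The weighted Schatten p-norm mentioned above is defined on a matrix's singular values, commonly written as ||X||_{w,Sp} = (Σᵢ wᵢ σᵢᵖ)^{1/p}. A small numpy sketch of just this quantity (with p = 1 and unit weights it reduces to the nuclear norm used in standard low-rank decomposition):

```python
import numpy as np

def weighted_schatten_p_norm(X, w, p):
    """(sum_i w_i * sigma_i^p)^(1/p) over the singular values of X."""
    sigma = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    return float(np.sum(w * sigma ** p) ** (1.0 / p))

A = np.diag([4.0, 3.0])                          # singular values are 4 and 3
print(weighted_schatten_p_norm(A, np.ones(2), 1))   # prints 7.0 (nuclear norm)
```

Choosing p < 1 penalizes large singular values less aggressively than the nuclear norm, which is why such norms are popular for tighter low-rank surrogates.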


Subject(s)
Fundus Oculi , Normal Distribution , Databases, Factual
6.
IEEE Trans Cybern ; 53(6): 3532-3545, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34851845

ABSTRACT

Motion estimation is a fundamental step in dynamic medical image processing for assessing target organ anatomy and function. However, existing image-based motion estimation methods, which optimize the motion field by evaluating local image similarity, are prone to implausible estimates, especially in the presence of large motion. In addition, correct anatomical topology is difficult to preserve because the global image context is not well incorporated into motion estimation. In this study, we provide a novel dense-sparse-dense (DSD) motion estimation framework comprising two stages. In the first stage, we process the raw dense image to extract sparse landmarks representing the target organ's anatomical topology, discarding redundant information unnecessary for motion estimation. For this purpose, we introduce an unsupervised 3-D landmark detection network to extract spatially sparse but representative landmarks for the target organ's motion estimation. In the second stage, we derive the sparse motion displacement from the extracted sparse landmarks of two images at different time points. We then present a motion reconstruction network that constructs the motion field by projecting the sparse landmarks' displacement back into the dense image domain. Furthermore, we employ the motion field estimated by our two-stage DSD framework as an initialization and boost the motion estimation quality with lightweight yet effective iterative optimization. We evaluate our method on two dynamic medical imaging tasks, modeling cardiac motion and lung respiratory motion, respectively. Our method produces superior motion estimation accuracy compared to existing methods. Moreover, extensive experimental results demonstrate that our solution can extract well-representative anatomical landmarks without any manual annotation.
Our code is publicly available online: https://github.com/yyguo-sjtu/DSD-3D-Unsupervised-Landmark-Detection-Based-Motion-Estimation.


Subject(s)
Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Motion
7.
Sci Rep ; 12(1): 2173, 2022 02 09.
Article in English | MEDLINE | ID: mdl-35140267

ABSTRACT

Radiogenomic relationship (RR) analysis aims to identify statistically significant correlations between medical image features and molecular characteristics derived from analysing tissue samples. Previous radiogenomics studies mainly relied on a single category of image feature extraction techniques (ETs): (i) handcrafted ETs that encompass visual imaging characteristics, curated from the knowledge of human experts, and (ii) deep ETs that quantify abstract-level imaging characteristics from large data. Prior studies therefore failed to leverage the complementary information accessible by fusing the ETs. In this study, we propose a fused feature signature (FFSig): a selection of image features from handcrafted and deep ETs (e.g., transfer learning and fine-tuning of deep learning models). We evaluated the FFSig's ability to represent RRs better than individual ET approaches using two public datasets: the first, used to build the FFSig, comprises 89 patients with non-small cell lung cancer (NSCLC) with gene expression data and CT images of the thorax and upper abdomen for each patient; the second NSCLC dataset, comprising 117 patients with CT images and RNA-Seq data, was used as the validation set. Our results show that the FFSig encoded complementary imaging characteristics of tumours and identified more RRs with a broader range of genes related to important biological functions such as tumourigenesis. We suggest that the FFSig has the potential to identify important RRs that may assist cancer diagnosis and treatment in the future.
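A minimal sketch of the fusion idea: pool hypothetical handcrafted and deep feature matrices and rank the fused candidates by absolute correlation with a target gene's expression. All names and data here are invented for illustration; the paper's actual FFSig selection is more involved than a single correlation ranking.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
handcrafted = rng.standard_normal((n, 10))   # e.g., shape/texture statistics
deep = rng.standard_normal((n, 20))          # e.g., CNN activations
# synthetic target gene tied to one handcrafted feature
expression = handcrafted[:, 2] + 0.05 * rng.standard_normal(n)

fused = np.hstack([handcrafted, deep])       # candidate pool for the signature
corr = np.array([abs(np.corrcoef(fused[:, j], expression)[0, 1])
                 for j in range(fused.shape[1])])
ffsig = np.argsort(corr)[::-1][:5]           # keep the 5 most correlated features
```

The point of fusing before selecting is that either ET family can contribute features the other misses, so the selected signature can mix both.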


Subject(s)
Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/genetics , Imaging Genomics , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/genetics , Deep Learning , Gene Ontology , Humans , RNA-Seq , Tomography, X-Ray Computed , Transcriptome
8.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 4776-4792, 2022 09.
Article in English | MEDLINE | ID: mdl-33755558

ABSTRACT

Saliency detection by humans refers to the ability to identify pertinent information using our perceptive and cognitive capabilities. While human perception is attracted by visual stimuli, our cognitive capability derives from constructing concepts of reasoning. Saliency detection has gained intensive interest with the aim of resembling the human 'perceptual' system. However, saliency related to human 'cognition', particularly the analysis of complex salient regions (the 'cogitating' process), is yet to be fully exploited. We propose to resemble human cognition, coupled with human perception, to improve saliency detection. We recognize saliency in three phases ('Seeing', 'Perceiving', 'Cogitating'), mimicking the human perceptive and cognitive reading of an image. In our method, the 'Seeing' phase relates to human perception, and we formulate the 'Perceiving' and 'Cogitating' phases, related to the human cognition system, via deep neural networks (DNNs) to construct a new module (Cognitive Gate) that enhances the DNN features for saliency detection. To the best of our knowledge, this is the first work to establish DNNs that resemble human cognition for saliency detection. In our experiments, our approach outperformed 17 benchmark DNN methods on six well-recognized datasets, demonstrating that resembling human cognition improves saliency detection.


Subject(s)
Algorithms , Neural Networks, Computer , Cognition , Humans
9.
Phys Med Biol ; 66(24)2021 12 07.
Article in English | MEDLINE | ID: mdl-34818637

ABSTRACT

Objective: Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the management of soft-tissue sarcomas (STSs). Distant metastases (DM) are the leading cause of death in STS patients, and early detection is important to effectively manage tumors with surgery, radiotherapy, and chemotherapy. In this study, we aim to detect DM early in patients with STS using their PET-CT data. Approach: We derive a new convolutional neural network method for early DM detection. The novelty of our method is the introduction of a constrained hierarchical multi-modality feature learning approach to integrate functional imaging (PET) features with anatomical imaging (CT) features. In addition, we removed the reliance on manual input, e.g. tumor delineation, for extracting imaging features. Main results: Our experimental results on a well-established benchmark PET-CT dataset show that our method achieved the highest accuracy (0.896) and AUC (0.903) scores when compared to state-of-the-art methods (unpaired Student's t-test, p-value < 0.05). Significance: Our method could be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.


Subject(s)
Deep Learning , Sarcoma , Soft Tissue Neoplasms , Humans , Neural Networks, Computer , Positron Emission Tomography Computed Tomography/methods , Sarcoma/diagnostic imaging , Soft Tissue Neoplasms/diagnostic imaging
10.
J Chem Inf Model ; 61(9): 4795-4806, 2021 09 27.
Article in English | MEDLINE | ID: mdl-34523929

ABSTRACT

Cryo-electron microscopy (cryo-EM) single-particle image analysis is a powerful technique for resolving the structures of biomacromolecules, but the cryo-EM image has a low signal-to-noise ratio. For both two-dimensional image analysis and three-dimensional density map analysis, image alignment is an important step for improving the precision of image distance calculation. In this paper, we introduce a new algorithm for two-dimensional pairwise alignment of cryo-EM particle images, based on the Fourier transform and power spectrum analysis. Compared to existing heuristic iterative alignment methods, our method uses the signal distribution and signal features in the images' power spectra to compute the alignment parameters directly. It does not require iterative computation and is robust against cryo-EM image noise. Both theoretical analysis and experimental results suggest that our power-spectrum-feature-based alignment method is highly computationally efficient and offers effective alignment results. The algorithm is publicly available at www.csbio.sjtu.edu.cn/bioinf/EMAF/ for academic use.
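As a related minimal illustration of computing alignment parameters directly from the Fourier domain without iteration (translation only; this is classic phase correlation, not the paper's power-spectrum-feature method), the circular shift between two images can be read off the normalized cross-power spectrum in closed form:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Recover the circular (dy, dx) shift between two images directly
    from the phase of the cross-power spectrum (no iteration)."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(moved)
    R = np.conj(F) * G
    R /= np.maximum(np.abs(R), 1e-12)        # keep only the phase term
    corr = np.fft.ifft2(R).real              # a delta at the shift offset
    return tuple(int(v) for v in np.unravel_index(np.argmax(corr), corr.shape))

rng = np.random.default_rng(1)
particle = rng.random((64, 64))
shifted = np.roll(particle, shift=(3, 5), axis=(0, 1))
print(phase_correlation_shift(particle, shifted))   # (3, 5)
```

Because the answer comes from a single FFT round trip, this style of alignment is fast and, since only the phase is kept, comparatively robust to noise, which is the same motivation the abstract gives for spectral alignment.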


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Cryoelectron Microscopy , Signal-To-Noise Ratio , Single Molecule Imaging
11.
Front Oncol ; 11: 723345, 2021.
Article in English | MEDLINE | ID: mdl-34589429

ABSTRACT

OBJECTIVES: Accurate assessment of lymph node metastases (LNMs) and the preoperative nodal (N) stage is critical for the precise treatment of patients with gastric cancer (GC). The diagnostic performance of current imaging procedures used for this assessment, however, is sub-optimal. Our aim was to investigate the value of preoperative 18F-FDG PET/CT radiomic features for predicting LNMs and the N stage. METHODS: We retrospectively collected clinical and 18F-FDG PET/CT imaging data of 185 patients with GC who underwent total or partial radical gastrectomy. Patients were allocated to training and validation sets using a stratified split at a fixed 8:2 ratio. A total of 2,100 radiomic features were extracted from the 18F-FDG PET/CT scans. After selecting radiomic features with random forest, relevancy-based, and sequential forward selection methods, a BalancedBagging ensemble classifier was established for the preoperative prediction of LNMs, and a OneVsRest classifier for the N stage. Model performance was primarily evaluated by AUC and accuracy and confirmed on the independent validation set. Feature-importance and correlation analyses were also conducted. We also compared the predictive performance of our radiomic models to that of contrast-enhanced CT (CECT) and 18F-FDG PET/CT. RESULTS: The 185 patients comprised 127 men and 58 women, with a median age of 62 years (range, 22-86). One CT feature and one PET feature were selected to predict LNMs and achieved the best performance (AUC: 82.2%, accuracy: 85.2%). This radiomic model also detected some LNMs that were missed by CECT (19.6%) and 18F-FDG PET/CT (35.7%). For predicting the N stage, four CT features and one PET feature were selected (AUC: 73.7%, accuracy: 62.3%). Of note, a proportion of patients in the validation set whose LNMs were incorrectly staged by CECT (57.4%) and 18F-FDG PET/CT (55%) were diagnosed correctly by our radiomic model.
CONCLUSION: We developed and validated two machine learning models based on preoperative 18F-FDG PET/CT images that have predictive value for LNMs and the N stage in GC. These models show promise as a potentially useful adjunct to current staging approaches for patients with GC.
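A hedged sketch of the shape of the second (N-stage) model, using scikit-learn's OneVsRestClassifier on synthetic three-class labels. The feature matrix and label rule are invented for illustration, and the paper's BalancedBagging LNM model is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(3)
n = 150
X = rng.standard_normal((n, 5))              # stand-in selected radiomic features
# synthetic N0/N1/N2 labels driven by the first two features
n_stage = (X[:, 0] + 0.3 * rng.standard_normal(n) > 0).astype(int) \
          + (X[:, 1] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, n_stage, test_size=0.2, random_state=0, stratify=n_stage)

# one binary classifier per N stage, combined by highest score
clf = OneVsRestClassifier(LogisticRegression()).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

OneVsRest decomposes the multi-class N-stage problem into one binary problem per stage, which makes per-stage performance easy to inspect.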

12.
EBioMedicine ; 69: 103471, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34229277

ABSTRACT

BACKGROUND: Metabolic syndrome (MetS) is closely related to the excessive accumulation of visceral adipose tissue (VAT). Quantitative measurements of VAT are commonly applied in clinical practice to assess metabolic risk; however, it remains largely unknown whether the texture of VAT can evaluate visceral adiposity, stratify MetS, and predict surgery-induced weight loss. METHODS: 675 Chinese adult volunteers and 63 obese patients (who underwent bariatric surgery) were enrolled. Texture features were extracted from the VAT of computed tomography (CT) scans, and machine learning was applied to identify significant imaging biomarkers associated with metabolic-related traits. FINDINGS: Combined with sex, ten VAT texture features achieved areas under the curve (AUCs) of 0.872, 0.888, 0.961, and 0.947 for predicting the prevalence of insulin resistance, MetS, central obesity, and visceral obesity, respectively. A novel imaging biomarker, RunEntropy, was identified as significantly associated with major metabolic outcomes, and a 3.5-year follow-up of 338 volunteers demonstrated its long-term effectiveness. More importantly, the preoperative imaging biomarkers yielded high AUCs and accuracies for estimating surgery responses, including the percentage of excess weight loss (%EWL) (0.867 and 74.6%), postoperative BMI group (0.930 and 76.1%), postoperative insulin resistance (0.947 and 88.9%), and excess visceral fat loss (the proportion of visceral fat reduced over 50%; 0.928 and 84.1%). INTERPRETATION: This study shows that the texture features of VAT have significant clinical implications for evaluating metabolic disorders and predicting surgery-induced weight loss. FUNDING: The complete list of funders can be found in the Acknowledgement section.


Subject(s)
Bariatric Surgery/adverse effects , Intra-Abdominal Fat/diagnostic imaging , Metabolic Diseases/diagnostic imaging , Postoperative Complications/diagnostic imaging , Tomography, X-Ray Computed/methods , Weight Loss , Adult , Female , Humans , Male
13.
IEEE Trans Cybern ; 51(12): 5907-5920, 2021 Dec.
Article in English | MEDLINE | ID: mdl-31976925

ABSTRACT

As a fundamental requirement of many computer vision systems, saliency detection has experienced substantial progress in recent years based on deep neural networks (DNNs). Most DNN-based methods rely on either sparse or dense labeling and are thus subject to the inherent limitations of the chosen labeling scheme. DNN dense labeling captures salient objects mainly from global features, which are often hampered by other visually distinctive regions. On the other hand, DNN sparse labeling is usually impeded by the inaccurate pre-segmentation of images that it depends on. To address these limitations, we propose a new framework consisting of two pathways and an Aggregator that progressively integrates the DNN sparse and dense labeling schemes to derive the final saliency map. In our "zipper"-type aggregation, we propose a multiscale kernels approach to extract optimal criteria for saliency detection, suppressing non-salient regions in the sparse labeling while guiding the dense labeling to recognize the more complete extent of the saliency. We demonstrate that our method outperforms 11 other state-of-the-art methods in saliency detection across six well-recognized benchmark datasets.


Subject(s)
Neural Networks, Computer
14.
IEEE J Biomed Health Inform ; 25(5): 1686-1698, 2021 05.
Article in English | MEDLINE | ID: mdl-32841131

ABSTRACT

Laparoscopic videos are increasingly acquired for various purposes, including surgical training and quality assurance, owing to the wide adoption of laparoscopy in minimally invasive surgery. However, viewing large amounts of laparoscopic video is very time consuming, which prevents the value of laparoscopic video archives from being fully exploited. In this paper, a dictionary-selection-based video summarization method is proposed to effectively extract keyframes for fast access to laparoscopic videos. Firstly, unlike the low-level features used in most existing summarization methods, deep features are extracted from a convolutional neural network to effectively represent video frames. Secondly, based on this deep representation, laparoscopic video summarization is formulated as a diverse and weighted dictionary selection model, in which image quality is taken into account to select high-quality keyframes, and a diversity regularization term is added to reduce redundancy among the selected keyframes. Finally, an iterative algorithm with a rapid convergence rate is designed for model optimization, and the convergence of the proposed method is analyzed. Experimental results on a recently released laparoscopic dataset demonstrate the clear superiority of the proposed method. The proposed method can facilitate access to key information in surgeries, the training of junior clinicians, explanations to patients, and the archiving of case files.
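A toy sketch in the same spirit as quality-plus-diversity keyframe selection: greedily pick frames, trading an invented `quality` score against redundancy (cosine similarity) with frames already chosen. The paper solves a dictionary selection model with an iterative optimizer; this greedy surrogate only illustrates the quality/diversity trade-off.

```python
import numpy as np

def select_keyframes(features, quality, k, lam=0.5):
    """Greedily pick k frames, trading frame quality against cosine
    redundancy with already-selected frames."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(feats)):
            if i in selected:
                continue
            redundancy = max((feats[i] @ feats[j] for j in selected), default=0.0)
            score = quality[i] - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

rng = np.random.default_rng(2)
deep_feats = rng.standard_normal((40, 16))   # CNN features, one row per frame
deep_feats[10] = deep_feats[3]               # an exact-duplicate frame
quality = rng.random(40) * 0.5
quality[3], quality[10] = 0.99, 0.98         # the duplicates are the "best" frames
summary = select_keyframes(deep_feats, quality, k=5)
```

Without the redundancy penalty, both duplicate frames would be chosen on quality alone; the penalty makes the second copy far less attractive once the first is in the summary.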


Subject(s)
Laparoscopy , Algorithms , Humans , Minimally Invasive Surgical Procedures , Neural Networks, Computer , Video Recording
15.
Transl Lung Cancer Res ; 9(3): 549-562, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32676319

ABSTRACT

BACKGROUND: Identification of epidermal growth factor receptor (EGFR) mutation types is crucial before tyrosine kinase inhibitor (TKI) treatment. Radiomics is a new strategy for noninvasively predicting the genetic status of cancer. In this study, we aimed to develop a predictive model based on 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET/CT) radiomic features to identify specific EGFR mutation subtypes. METHODS: We retrospectively studied 18F-FDG PET/CT images of 148 patients with isolated lung lesions, scanned in two hospitals with different CT settings (slice thickness: 3 and 5 mm, respectively). The tumor regions were manually segmented on the PET/CT images, and 1,570 radiomic features (1,470 from CT and 100 from PET) were extracted from the tumor regions. First, 794 radiomic features insensitive to the different CT settings were selected using the Mann-Whitney U test, and collinear features were further removed by recursively calculating the variance inflation factor. Then, multiple supervised machine learning models were applied to identify prognostic radiomic features through: (I) a multivariate random forest to select features of high importance in discriminating different EGFR mutation statuses; (II) a logistic regression model to select the features of highest predictive value for the EGFR subtypes. The EGFR mutation prediction model was constructed from the prognostic radiomic features using the popular XGBoost machine-learning algorithm and validated using 3-fold cross-validation. The performance of the prediction model was analyzed using the receiver operating characteristic (ROC) curve and measured with the area under the curve (AUC). RESULTS: Two sets of prognostic radiomic features were found for specific EGFR mutation subtypes: 5 radiomic features for EGFR exon 19 deletions, and 5 radiomic features for the EGFR exon 21 L858R missense mutation.
The corresponding radiomic predictors achieved prediction accuracies of 0.77 and 0.92 in terms of AUC, respectively. Combining these two predictors, an overall model for predicting EGFR mutation positivity was also constructed, with an AUC of 0.87. CONCLUSIONS: We established predictive models based on radiomic analysis of 18F-FDG PET/CT images, which achieved satisfactory predictive power in identifying EGFR mutation status as well as specific EGFR mutation subtypes in lung cancer.
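The collinearity-removal step described in the methods (recursively computing the variance inflation factor, VIF) can be sketched in pure numpy: regress each feature on the others, compute VIF = 1/(1 - R²), and drop the worst feature until every VIF falls below a threshold. The data and threshold here are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2) when
    regressing column j on the remaining columns (with intercept)."""
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    Z = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r2 = 1.0 - (y - Z @ beta).var() / y.var()
    return 1.0 / max(1.0 - r2, 1e-12)

def drop_collinear(X, threshold=10.0):
    """Recursively drop the feature with the highest VIF."""
    keep = list(range(X.shape[1]))
    while True:
        vifs = [vif(X[:, keep], i) for i in range(len(keep))]
        worst = int(np.argmax(vifs))
        if vifs[worst] < threshold:
            return keep
        keep.pop(worst)

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 6))
X[:, 5] = X[:, 0] + 0.01 * rng.standard_normal(100)   # near-duplicate feature
kept = drop_collinear(X)
```

The near-duplicate pair drives both columns' VIFs far above the threshold, so one of the two is dropped and the remaining independent features survive.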

16.
Theranostics ; 10(12): 5565-5577, 2020.
Article in English | MEDLINE | ID: mdl-32373231

ABSTRACT

Chondral and osteochondral defects caused by trauma or pathological changes commonly progress to total joint degradation, even resulting in disability. Cartilage restoration is a great challenge because of the tissue's avascularity and limited proliferative ability. Additionally, precise diagnosis with non-invasive detection techniques is challenging, which compounds the problems associated with treating chondral disease. Methods: To achieve a theranostic goal, we used an integrated strategy exploiting a multifunctional nanoprobe based on chitosan-modified Fe3O4 nanoparticles, which spontaneously self-assemble with the oppositely charged small-molecule growth factor kartogenin (KGN). This nanoprobe produced distinctly brighter T2-weighted magnetic resonance (MR) imaging, allowing its use as a positive contrast agent, and could be applied for accurate diagnosis and osteochondral regeneration therapy. Results: The nanoprobe was first investigated using adipose tissue-derived stem cells (ADSCs) and was found to be a novel positive contrast agent that also plays a significant role in stimulating ADSC differentiation into chondrocytes. This self-assembled probe was not only biocompatible both in vitro and in vivo, contributing to cellular internalization, but was also used successfully to distinguish normal from damaged tissue in T2-weighted MR imaging. The combination was systematically shown to be biosafe via the decrease of apparent MR signals and the elimination of ferroferric oxide over a 12-week regeneration period. Conclusion: We established a novel method for osteochondral disease diagnosis and reconstruction. With the Fe3O4-CS/KGN nanoprobe, the defect position is easy to distinguish, and the probe can serve as a tool for dynamic observation as well as a stem cell-based therapy for directed chondral differentiation.


Subject(s)
Anilides/pharmacology , Cartilage Diseases/therapy , Chitosan/chemistry , Chondrocytes/cytology , Mesenchymal Stem Cells/cytology , Nanoparticles/administration & dosage , Phthalic Acids/pharmacology , Anilides/chemistry , Animals , Biocompatible Materials/chemistry , Biocompatible Materials/pharmacology , Cartilage Diseases/metabolism , Cartilage Diseases/pathology , Cells, Cultured , Chondrocytes/drug effects , Chondrocytes/metabolism , Disease Models, Animal , Ferrosoferric Oxide/chemistry , Ferrosoferric Oxide/pharmacology , Magnetic Resonance Imaging/methods , Male , Mesenchymal Stem Cells/drug effects , Mesenchymal Stem Cells/metabolism , Nanoparticles/chemistry , Phthalic Acids/chemistry , Rabbits , Regeneration/physiology
17.
IEEE Trans Med Imaging ; 39(7): 2385-2394, 2020 07.
Article in English | MEDLINE | ID: mdl-32012005

ABSTRACT

The accuracy and robustness of image classification with supervised deep learning depend on the availability of large-scale labelled training data. In medical imaging, such large labelled datasets are sparse, mainly owing to the complexity of manual annotation. Deep convolutional neural networks (CNNs) with transferable knowledge have been employed as a solution to limited annotated data through: 1) fine-tuning generic knowledge with a relatively small amount of labelled medical imaging data, and 2) learning image representations that are invariant across domains. These approaches, however, still rely on labelled medical image data. Our aim is to use a new hierarchical unsupervised feature extractor to reduce reliance on annotated training data. Our unsupervised approach uses a multi-layer zero-bias convolutional auto-encoder that constrains the transformation of generic features from a pre-trained CNN (for natural images) into non-redundant and locally relevant features for the medical image data. We also propose a context-based feature augmentation scheme to improve the discriminative power of the feature representation. We evaluated our approach on three public medical image datasets and compared it to state-of-the-art supervised CNNs. Our unsupervised approach achieved better accuracy than other conventional unsupervised methods and baseline fine-tuned CNNs.


Subject(s)
Diagnostic Imaging , Neural Networks, Computer
18.
Eur J Nucl Med Mol Imaging ; 47(5): 1116-1126, 2020 05.
Article in English | MEDLINE | ID: mdl-31982990

ABSTRACT

PURPOSE: Pathologic complete response (pCR) to neoadjuvant chemotherapy (NAC) is commonly accepted as the gold standard for assessing outcome after NAC in breast cancer patients. 18F-Fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) has unique value in tumor staging, predicting prognosis, and evaluating treatment response. Our aim was to determine whether radiomic predictors from PET/CT could predict therapeutic efficacy in breast cancer patients prior to NAC. METHODS: This retrospective study included 100 breast cancer patients who received NAC; 2210 PET/CT radiomic features were extracted. Unsupervised and supervised machine learning models were used to identify the prognostic radiomic predictors through the following: (1) selection of the significant (p < 0.05) imaging features via consensus clustering and the Wilcoxon signed-rank test; (2) selection of the most discriminative features via a univariate random forest (Uni-RF) and a Pearson correlation matrix (PCM); and (3) determination of the most predictive features from a traversal feature selection (TFS) based on a multivariate random forest (RF). The prediction model was constructed with an RF, validated with 10-fold cross-validation repeated 30 times, and then independently validated. The performance of the radiomic predictors was measured in terms of area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS: The PET/CT radiomic predictors achieved a prediction accuracy of 0.857 (AUC = 0.844) on the training split set and 0.767 (AUC = 0.722) on the independent validation set. When age was incorporated, the accuracy increased to 0.857 (AUC = 0.958) on the split set and to 0.8 (AUC = 0.73) on the independent validation set; both outperformed the clinical prediction model. We also found a close association between the radiomic features, receptor expression, and tumor T stage.
CONCLUSION: Radiomic predictors from pre-treatment PET/CT scans when combined with patient age were able to predict pCR after NAC. We suggest that these data will be valuable for patient management.
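The Pearson-correlation-matrix step in the feature-selection pipeline above can be sketched as a greedy redundancy filter. The 0.9 threshold and keep-first-seen strategy are assumptions for illustration, not the authors' exact procedure: a feature is discarded when it correlates too strongly with any feature already kept.

```python
import numpy as np

def drop_correlated_features(X, threshold=0.9):
    """Greedy redundancy filter using a Pearson correlation matrix.

    X : (n_samples, n_features) radiomic feature matrix
    Returns the indices of kept features; a feature is dropped if its
    absolute correlation with any previously kept feature is >= threshold.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))    # (f, f) Pearson matrix
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(1)
base = rng.standard_normal((50, 3))
# Feature 3 is a near-duplicate of feature 0 and should be dropped.
X = np.column_stack([base, base[:, 0] + 0.01 * rng.standard_normal(50)])
print(drop_correlated_features(X))                 # → [0, 1, 2]
```

With thousands of radiomic features, a filter like this reduces the search space before heavier wrappers such as the traversal feature selection are applied.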


Subject(s)
Breast Neoplasms , Fluorodeoxyglucose F18 , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/drug therapy , Humans , Models, Statistical , Neoadjuvant Therapy , Positron Emission Tomography Computed Tomography , Prognosis , Radiopharmaceuticals , Retrospective Studies
19.
ACS Biomater Sci Eng ; 6(11): 6276-6284, 2020 11 09.
Article in English | MEDLINE | ID: mdl-33449656

ABSTRACT

Articular cartilage has a highly organized structure and supports tremendous mechanical loads. Repairing defective articular cartilage is a great challenge because the avascular nature of cartilage limits its regenerative ability. Aiming to facilitate chondrogenic differentiation and cartilage regeneration, we recently explored a novel combination therapy using soluble poly-l-lysine/Kartogenin (L-K) nanoparticles and a poly(lactic-co-glycolic acid) (PLGA)/methacrylated hyaluronic acid (PLHA) complex scaffold. Its potential use for joint cartilage reconstruction was investigated through L-K nanoparticles stimulating adipose-derived stem cells (ADSCs) on the PLHA scaffold, which ultimately differentiated into cartilage in vivo. In this study, on one hand, an effective self-assembly method was established for obtaining uniform L-K nanoparticles. Cytotoxicity assays in vitro showed the nanoparticles to be biocompatible with ADSCs, and immunofluorescence showed that they accelerated the secretion of type II collagen by ADSCs in a dose-dependent manner. On the other hand, the porous PLHA scaffold was manufactured by a combination of coprecipitation and ultraviolet (UV) cross-linking. Nanoindentation verified that PLHA had a stiffness close to that of actual cartilage tissue. Microscopic observation further confirmed that the PLHA platform supported proliferation and chondrogenesis of ADSCs in vitro. In the presence of ADSCs, 12-week regeneration of an osteochondral defect with the combination therapy showed that smooth and intact cartilage tissue successfully regenerated. Furthermore, the results of the combination therapy were superior to those of phosphate-buffered saline (PBS) only, KGN, or KGN/PLHA treatment. Magnetic resonance imaging (MRI) and histological assessment indicated that the renascent tissue gradually regenerated while the PLHA scaffold degraded.
In conclusion, we have developed a novel multidimensional combination therapy of cartilage defect repair that facilitated cartilage regeneration. This strategy has a great clinical translational potential for articular cartilage repair in the near future.


Subject(s)
Chondrogenesis , Polymers , Anilides , Phthalic Acids , Regeneration , Tissue Scaffolds
20.
Med Image Anal ; 56: 140-151, 2019 08.
Article in English | MEDLINE | ID: mdl-31229759

ABSTRACT

The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems. Such supervised approaches, however, are difficult to implement in the medical domain, where large volumes of labelled data are difficult to obtain due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. We propose a new convolutional sparse kernel network (CSKN), a hierarchical unsupervised feature learning framework that addresses the challenge of learning representative visual features in medical image analysis domains where annotated training data are lacking. Our framework has three contributions: (i) we extend kernel learning to identify and represent invariant features across image sub-patches in an unsupervised manner; (ii) we initialise our kernel learning with a layer-wise pre-training scheme that leverages the sparsity inherent in medical images to extract initial discriminative features; and (iii) we adapt a multi-scale spatial pyramid pooling (SPP) framework to capture subtle geometric differences between learned visual features. We evaluated our framework in medical image retrieval and classification on three public datasets. Our results show that our CSKN achieved better accuracy than other conventional unsupervised methods and accuracy comparable to methods that used state-of-the-art supervised convolutional neural networks (CNNs). Our findings indicate that our unsupervised CSKN provides an opportunity to leverage unannotated big data in medical imaging repositories.
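The multi-scale spatial pyramid pooling step in contribution (iii) can be sketched as follows; the pyramid levels and the choice of max-pooling here are illustrative assumptions rather than the paper's exact configuration. A feature map is pooled over successively finer grids and the results are concatenated, so inputs of different spatial sizes yield descriptors of the same fixed length.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a feature map over a pyramid of grids.

    fmap   : (H, W, C) feature map of arbitrary spatial size
    levels : grid sizes; level g contributes g*g max-pooled bins
    Returns a 1-D vector of length C * sum(g*g for g in levels).
    """
    H, W, C = fmap.shape
    pooled = []
    for g in levels:
        # Split each spatial axis into g roughly equal bins.
        rows = np.array_split(np.arange(H), g)
        cols = np.array_split(np.arange(W), g)
        for r in rows:
            for c in cols:
                cell = fmap[np.ix_(r, c)]              # (h_bin, w_bin, C)
                pooled.append(cell.max(axis=(0, 1)))   # per-channel max
    return np.concatenate(pooled)

rng = np.random.default_rng(2)
a = spatial_pyramid_pool(rng.standard_normal((13, 17, 8)))
b = spatial_pyramid_pool(rng.standard_normal((20, 9, 8)))
print(a.shape, b.shape)    # both (168,): 8 channels * (1 + 4 + 16) bins
```

The fixed output length is what lets features learned on variable-sized image sub-patches feed a single downstream classifier.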


Subject(s)
Diagnostic Imaging , Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Unsupervised Machine Learning , Humans