Results 1 - 20 of 99
1.
J Struct Biol ; 215(1): 107940, 2023 03.
Article in English | MEDLINE | ID: mdl-36709787

ABSTRACT

Cryo-electron microscopy (cryo-EM) single-particle analysis is a revolutionary imaging technique for resolving and visualizing biomacromolecules. Image alignment is an important, basic step in cryo-EM that improves the precision of image distance calculations; however, it is very challenging because of the high noise and low signal-to-noise ratio of the images. We therefore propose a new deep unsupervised difference learning (UDL) strategy with a novel pseudo-label-guided learning network architecture and apply it to pairwise image alignment in cryo-EM. The training framework is fully unsupervised. Furthermore, we propose a variant of UDL, joint UDL (JUDL), which utilizes the similarity information of the whole dataset and thus further increases alignment precision. Assessments on both real-world and synthetic cryo-EM single-particle image datasets suggest that the new unsupervised joint alignment method achieves more accurate alignment results. Our method is highly efficient by taking advantage of GPU devices. The source code is publicly available at http://www.csbio.sjtu.edu.cn/bioinf/JointUDL/ for academic use.


Subjects
Single Molecule Imaging; Software; Cryoelectron Microscopy/methods; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods
2.
J Digit Imaging ; 36(6): 2356-2366, 2023 12.
Article in English | MEDLINE | ID: mdl-37553526

ABSTRACT

Coronavirus disease 2019 (COVID-19) is caused by Severe Acute Respiratory Syndrome Coronavirus 2, which enters the body via angiotensin-converting enzyme 2 (ACE2) and alters its gene expression. Altered ACE2 plays a crucial role in the pathogenesis of COVID-19. Gene expression profiling, however, is invasive and costly, and is not routinely performed. In contrast, medical imaging such as computed tomography (CT) captures imaging features that depict abnormalities, and it is widely available. Computerized quantification of image features has enabled 'radiogenomics', a research discipline that identifies image features associated with molecular characteristics. Radiogenomics analysis of ACE2 in COVID-19 has yet to be performed, primarily because of the lack of ACE2 expression data among COVID-19 patients. Similar to COVID-19 patients, patients with lung adenocarcinoma (LUAD) exhibit altered ACE2 expression, and LUAD data are abundant. We present a radiogenomics framework to derive image features (ACE2-RGF) associated with ACE2 expression data from LUAD. The ACE2-RGF was then used as a surrogate biomarker for ACE2 expression. We adopted conventional feature selection techniques, including ElasticNet and LASSO. Our results show that: i) the ACE2-RGF encoded a distinct collection of image features compared to conventional techniques; ii) the ACE2-RGF classified COVID-19 patients from normal subjects with performance comparable to conventional feature selection techniques, with an AUC of 0.92; and iii) the ACE2-RGF effectively identified patients with critical illness, with an AUC of 0.85. These findings provide unique insights for automated COVID-19 analysis and future research.


Subjects
COVID-19; Humans; COVID-19/diagnostic imaging; Angiotensin-Converting Enzyme 2; Peptidyl-Dipeptidase A/genetics; Peptidyl-Dipeptidase A/metabolism; SARS-CoV-2/metabolism; Tomography, X-Ray Computed
3.
J Chem Inf Model ; 61(9): 4795-4806, 2021 09 27.
Article in English | MEDLINE | ID: mdl-34523929

ABSTRACT

Cryo-electron microscopy (cryo-EM) single-particle image analysis is a powerful technique for resolving structures of biomacromolecules, but a key challenge is that cryo-EM images have a low signal-to-noise ratio. For both two-dimensional image analysis and three-dimensional density map analysis, image alignment is an important step in improving the precision of the image distance calculation. In this paper, we introduce a new algorithm for two-dimensional pairwise alignment of cryo-EM particle images based on the Fourier transform and power spectrum analysis. Compared to existing heuristic iterative alignment methods, our method uses the signal distribution and signal features of the images' power spectra to directly compute the alignment parameters. It does not require iterative computation and is robust against cryo-EM image noise. Both theoretical analysis and experimental results suggest that our power-spectrum-feature-based alignment method is highly computationally efficient and offers effective alignment results. The algorithm is publicly available at www.csbio.sjtu.edu.cn/bioinf/EMAF/ for academic use.
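The Fourier-domain idea can be illustrated with classical phase correlation, which recovers a translational offset directly from the cross-power spectrum of two images. This is a minimal sketch for intuition only, not the paper's power-spectrum-feature algorithm (which also handles rotation and heavy noise):

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (dy, dx) shift to apply to `moving` so it aligns
    with `ref`. The cross-power spectrum of two circularly shifted images is
    a pure phase ramp whose inverse FFT peaks at the offset."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (the FFT wraps around)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Because the computation is a pair of FFTs and an argmax, it is non-iterative, which mirrors the abstract's point about avoiding heuristic iterative alignment.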


Subjects
Algorithms; Image Processing, Computer-Assisted; Cryoelectron Microscopy; Signal-To-Noise Ratio; Single Molecule Imaging
4.
Eur J Nucl Med Mol Imaging ; 47(5): 1116-1126, 2020 05.
Article in English | MEDLINE | ID: mdl-31982990

ABSTRACT

PURPOSE: Pathologic complete response (pCR) to neoadjuvant chemotherapy (NAC) is commonly accepted as the gold standard for assessing outcome after NAC in breast cancer patients. 18F-Fluorodeoxyglucose positron emission tomography/computed tomography (PET/CT) has unique value in tumor staging, predicting prognosis, and evaluating treatment response. Our aim was to determine whether radiomic predictors from PET/CT could predict therapeutic efficacy in breast cancer patients prior to NAC. METHODS: This retrospective study included 100 breast cancer patients who received NAC; 2210 PET/CT radiomic features were extracted. Unsupervised and supervised machine learning models were used to identify prognostic radiomic predictors through the following: (1) selection of the significant (p < 0.05) imaging features via consensus clustering and the Wilcoxon signed-rank test; (2) selection of the most discriminative features via univariate random forest (Uni-RF) and the Pearson correlation matrix (PCM); and (3) determination of the most predictive features from a traversal feature selection (TFS) based on a multivariate random forest (RF). The prediction model was constructed with RF, validated with 10-fold cross-validation repeated 30 times, and then independently validated. The performance of the radiomic predictors was measured in terms of area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). RESULTS: The PET/CT radiomic predictors achieved a prediction accuracy of 0.857 (AUC = 0.844) on the training split set and 0.767 (AUC = 0.722) on the independent validation set. When age was incorporated, the accuracy increased to 0.857 (AUC = 0.958) for the split set and 0.8 (AUC = 0.73) for the independent validation set, both outperforming the clinical prediction model. We also found a close association between the radiomic features, receptor expression, and tumor T stage.
CONCLUSION: Radiomic predictors from pre-treatment PET/CT scans, when combined with patient age, were able to predict pCR after NAC. We suggest that these data will be valuable for patient management.
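The AUC values reported above can be understood through the Mann-Whitney formulation of the area under the ROC curve. A minimal, dependency-free sketch of the metric (not the authors' pipeline):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive is scored above a randomly
    chosen negative (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([1, 0, 1, 0], [0.9, 0.8, 0.4, 0.1])` scores three of the four positive/negative pairs correctly, giving 0.75.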


Subjects
Breast Neoplasms; Fluorodeoxyglucose F18; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/drug therapy; Humans; Models, Statistical; Neoadjuvant Therapy; Positron Emission Tomography Computed Tomography; Prognosis; Radiopharmaceuticals; Retrospective Studies
5.
Eur Radiol ; 29(6): 2958-2967, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30643940

ABSTRACT

OBJECTIVES: To determine the integrative value of clinical, hematological, and computed tomography (CT) radiomic features for survival prediction in locally advanced non-small cell lung cancer (LA-NSCLC) patients. METHODS: Radiomic, clinical, and hematological features of 118 LA-NSCLC cases were first extracted and analyzed. Stable and prognostic radiomic features were then automatically selected using the consensus clustering method with either the Cox proportional hazards (CPH) model or random survival forest (RSF) analysis. Predictive radiomic, clinical, and hematological parameters were subsequently fitted into a final prognostic model using both the CPH and RSF models. A multimodality nomogram was established from the fitted model and cross-validated. Finally, calibration curves were generated comparing predicted and actual survival status. RESULTS: Radiomic features selected by clustering combined with CPH were more predictive, with a C-index of 0.699, compared to 0.648 for clustering combined with RSF. Based on the multivariate CPH model, our integrative nomogram achieved a C-index of 0.792 and retained 0.743 in the cross-validation analysis, outperforming the radiomic, clinical, and hematological models alone. The calibration curves showed agreement between predicted and actual values for 1-year and 2-year survival prediction. Interestingly, the selected important radiomic features were significantly correlated with platelet levels, the platelet/lymphocyte ratio (PLR), and the lymphocyte/monocyte ratio (LMR) (all p values < 0.05). CONCLUSIONS: The integrative nomogram incorporating CT radiomic, clinical, and hematological features improved survival prediction in LA-NSCLC patients and offers a feasible and practical reference for their individualized management.
KEY POINTS: • An integrative nomogram incorporating CT radiomic, clinical, and hematological features was constructed and cross-validated to predict the prognosis of LA-NSCLC patients. • The integrative nomogram outperformed the radiomic, clinical, and hematological models alone. • The nomogram permits non-invasive, comprehensive, and dynamic evaluation of LA-NSCLC phenotypes and provides a feasible and practical reference for the individualized management of LA-NSCLC patients.
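The C-index used to score these survival models is Harrell's concordance index. A small illustrative implementation, handling right-censoring in the standard way (a sketch, not the authors' code):

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index: among comparable pairs (the earlier
    time is an observed event), the fraction where the shorter survival
    received the higher predicted risk (ties in risk count one half)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if subject i had an observed event
            # strictly before subject j's (event or censoring) time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect concordance, which puts the reported values of 0.699-0.792 in context.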


Subjects
Biomarkers, Tumor/blood; Carcinoma, Non-Small-Cell Lung/diagnosis; Lung Neoplasms/diagnosis; Neoplasm Staging/methods; Tomography, X-Ray Computed/methods; Carcinoma, Non-Small-Cell Lung/blood; Carcinoma, Non-Small-Cell Lung/mortality; China/epidemiology; Female; Follow-Up Studies; Humans; Lung Neoplasms/blood; Lung Neoplasms/mortality; Male; Middle Aged; Nomograms; Prognosis; Survival Rate/trends; Time Factors
6.
J Biomed Inform ; 79: 117-128, 2018 03.
Article in English | MEDLINE | ID: mdl-29366586

ABSTRACT

Pulmonary cancer is considered one of the major causes of death worldwide. Computer-assisted diagnosis (CADx) systems have been designed for the detection of lung cancer. The Internet of Things (IoT) has enabled ubiquitous internet access to biomedical datasets and techniques; as a result, progress in CADx has been significant. Unlike conventional CADx, deep learning techniques have the basic advantage of automatic feature exploitation, as they can learn mid- and high-level image representations. We propose a computer-assisted decision support system for pulmonary cancer that uses a novel deep-learning-based model and metastasis information obtained from a Medical Body Area Network (MBAN). The proposed model, DFCNet, is based on a deep fully convolutional neural network (FCNN) that classifies each detected pulmonary nodule into one of four lung cancer stages. The performance of the proposed method is evaluated on different datasets with varying scan conditions, and the proposed classifier is compared with existing CNN techniques. The overall accuracies of the CNN and DFCNet were 77.6% and 84.58%, respectively. Experimental results illustrate the effectiveness of the proposed method for the detection and classification of lung cancer nodules, and demonstrate its potential for helping radiologists improve nodule detection accuracy and efficiency.


Subjects
Decision Support Systems, Clinical; Diagnosis, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Neoplasm Staging/methods; Tomography, X-Ray Computed; Algorithms; Databases, Factual; Decision Making; Humans; Image Processing, Computer-Assisted/methods; Internet; Lung/diagnostic imaging; Machine Learning; Neural Networks, Computer; Pattern Recognition, Automated; Software; Solitary Pulmonary Nodule/diagnostic imaging; Symptom Assessment
7.
BMC Bioinformatics ; 17(1): 465, 2016 Nov 16.
Article in English | MEDLINE | ID: mdl-27852213

ABSTRACT

BACKGROUND: Bioimage classification is a fundamental problem for many important biological studies that require accurate cell phenotype recognition, subcellular localization, and histopathological classification. In this paper, we present a new bioimage classification method that is generally applicable to a wide variety of classification problems. We propose a high-dimensional multi-modal descriptor that combines multiple texture features. We also design a novel subcategory discriminant transform (SDT) algorithm that further enhances the discriminative power of the descriptors by learning convolution kernels to reduce within-class variation and increase between-class difference. RESULTS: We evaluated our method on eight different bioimage classification tasks using the publicly available IICBU 2008 database. Each task comprises a separate dataset, and the collection represents typical subcellular, cellular, and tissue-level classification problems. Our method demonstrates improved classification accuracy (0.9 to 9%) on six tasks compared to state-of-the-art approaches. We also find that SDT outperforms well-known dimension reduction techniques, with, for example, a 0.2 to 13% improvement over linear discriminant analysis. CONCLUSIONS: We present a general bioimage classification method comprising a highly descriptive visual feature representation and a learning-based discriminative feature transformation algorithm. Our evaluation on the IICBU 2008 database demonstrates improved performance over the state of the art on six different classification tasks.


Subjects
Image Processing, Computer-Assisted/methods; Algorithms; Animals; Databases, Factual; Discriminant Analysis; Humans
8.
BMC Bioinformatics ; 17: 309, 2016 Aug 19.
Article in English | MEDLINE | ID: mdl-27538893

ABSTRACT

BACKGROUND: Direct volume rendering is a flexible and effective approach for inspecting large volumetric data such as medical and biological images. In conventional volume rendering, setting up a meaningful illumination environment is often time consuming. Moreover, conventional illumination approaches usually assign the same values of the illumination model's variables to different structures manually, and thus neglect important illumination variations due to structural differences. RESULTS: We introduce a novel topology-based illumination design paradigm for volume rendering that automates illumination parameter definition meaningfully. Topological features are extracted from the contour tree of an input volumetric dataset. Illumination design is automated with respect to four aspects: attenuation, distance, saliency, and contrast perception. To better distinguish structures and maximize differences in their perceived illuminance, a two-phase topology-aware illuminance perception contrast model is proposed based on the psychological concept of the Just-Noticeable Difference. CONCLUSIONS: The proposed approach allows meaningful and efficient automatic generation of illumination in volume rendering. Our results show that the approach is more effective in depth and shape depiction, and provides higher perceptual differences between structures.


Subjects
Algorithms; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Signal-To-Noise Ratio
9.
J Digit Imaging ; 26(6): 1025-39, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23846532

ABSTRACT

Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a requirement for appropriate methods to search the collections for images that have characteristics similar to the case(s) of interest. Content-based image retrieval (CBIR) is an image search technique that complements the conventional text-based retrieval of images by using visual features, such as color, texture, and shape, as search criteria. Medical CBIR is an established field of study that is beginning to realize promise when applied to multidimensional and multimodality medical data. In this paper, we present a review of state-of-the-art medical CBIR approaches in five main categories: two-dimensional image retrieval, retrieval of images with three or more dimensions, the use of nonimage data to enhance the retrieval, multimodality image retrieval, and retrieval from diverse datasets. We use these categories as a framework for discussing the state of the art, focusing on the characteristics and modalities of the information used during medical image retrieval.
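The core operation of a CBIR system is ranking database images by feature similarity to a query. A minimal sketch using cosine similarity over precomputed feature vectors (the visual feature extraction itself, e.g. color or texture descriptors, is assumed to have happened upstream):

```python
import numpy as np

def retrieve(query_feat, db_feats, k=3):
    """Rank database images by cosine similarity of their feature vectors
    to the query; return the indices of the top-k matches, best first."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    db = db_feats / (np.linalg.norm(db_feats, axis=1, keepdims=True) + 1e-12)
    sims = db @ q  # cosine similarity of each database image to the query
    return np.argsort(-sims)[:k].tolist()
```

Real medical CBIR systems replace the brute-force scan with an index and use far richer descriptors, but the query-by-similarity contract is the same.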


Subjects
Diagnostic Imaging/standards; Image Interpretation, Computer-Assisted; Information Storage and Retrieval/methods; Radiology Information Systems; Cross-Sectional Studies; Database Management Systems/organization & administration; Diagnostic Imaging/statistics & numerical data; Female; Humans; Male; Multimodal Imaging/standards; Multimodal Imaging/statistics & numerical data; Sensitivity and Specificity; Software; Systems Integration
10.
Article in English | MEDLINE | ID: mdl-38083363

ABSTRACT

Prostate cancer (PCa) is one of the most prevalent cancers in men. Early diagnosis plays a pivotal role in reducing mortality from clinically significant PCa (csPCa). In recent years, bi-parametric magnetic resonance imaging (bpMRI) has attracted great attention for the detection and diagnosis of csPCa. bpMRI overcomes some limitations of multi-parametric MRI (mpMRI), such as the use of contrast agents, the time required for imaging, and the cost, while achieving detection performance comparable to mpMRI. However, inter-reader agreement is currently low for prostate MRI. Advances in artificial intelligence (AI) have propelled the development of deep learning (DL)-based computer-aided detection and diagnosis (CAD) systems. However, most existing DL models for csPCa identification are restricted by the scale of the data and the scarcity of labels. In this paper, we propose a self-supervised pre-training scheme named SSPT-bpMRI, with an image restoration pretext task integrating four different image transformations, to improve the performance of DL algorithms. Specifically, we explored the potential value of self-supervised pre-training in both fully supervised and weakly supervised situations. Experiments on the publicly available PI-CAI dataset demonstrate that our model outperforms the fully supervised or weakly supervised model alone.
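The abstract does not spell out the four transformations, but a common corruption for restoration-style pretext tasks is random patch masking, sketched here purely for illustration; the pretraining target would be to reconstruct the original image from the corrupted copy:

```python
import numpy as np

def mask_patches(image, patch=8, frac=0.3, rng=None):
    """One plausible restoration pretext corruption (an assumption, not the
    paper's actual transformations): zero out a random fraction of
    non-overlapping patches of a 2-D image."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if rng.random() < frac:
                out[y:y + patch, x:x + patch] = 0.0
    return out  # network input; the uncorrupted `image` is the target
```

A network trained to invert such corruptions learns anatomy-aware features without labels, which can then be fine-tuned for csPCa detection.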


Subjects
Multiparametric Magnetic Resonance Imaging; Prostatic Neoplasms; Male; Humans; Prostate/pathology; Artificial Intelligence; Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Multiparametric Magnetic Resonance Imaging/methods
11.
Article in English | MEDLINE | ID: mdl-38083742

ABSTRACT

Positron emission tomography (PET) is the most sensitive molecular imaging modality routinely applied in modern healthcare. The high radioactivity caused by the injected tracer dose is a major concern in PET imaging and limits its clinical applications; however, reducing the dose leads to inadequate image quality for diagnostic practice. Motivated by the need to produce high-quality images from low-dose scans, convolutional neural network (CNN) based methods have been developed to synthesize standard-dose PET from low-dose counterparts. Previous CNN-based studies usually map low-dose PET directly into feature space without considering the dose reduction level. In this study, a novel approach named CG-3DSRGAN (Classification-Guided Generative Adversarial Network with Super Resolution Refinement) is presented. Specifically, a multi-tasking coarse generator, guided by a classification head, allows for a more comprehensive understanding of the noise-level features present in the low-dose data, resulting in improved image synthesis. Moreover, to recover the spatial details of standard PET, an auxiliary super-resolution network, Contextual-Net, is proposed as a second training stage to narrow the gap between the coarse prediction and standard PET. We compared our method to state-of-the-art methods on whole-body PET with different dose reduction factors (DRFs). Experiments demonstrate that our method outperforms the others at all DRFs. Clinical Relevance: low-dose PET, PET recovery, GAN, task-driven image synthesis, super resolution.


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Neural Networks, Computer
12.
IEEE Trans Cybern ; 53(6): 3532-3545, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34851845

ABSTRACT

Motion estimation is a fundamental step in dynamic medical image processing for assessing target organ anatomy and function. However, existing image-based motion estimation methods, which optimize the motion field by evaluating local image similarity, are prone to implausible estimates, especially in the presence of large motion. In addition, the correct anatomical topology is difficult to preserve because the global image context is not well incorporated into motion estimation. In this study, we provide a novel dense-sparse-dense (DSD) motion estimation framework comprising two stages. In the first stage, we process the raw dense image to extract sparse landmarks representing the target organ's anatomical topology and discard redundant information that is unnecessary for motion estimation. For this purpose, we introduce an unsupervised 3-D landmark detection network to extract spatially sparse but representative landmarks for the target organ's motion estimation. In the second stage, we derive sparse motion displacements from the extracted landmarks of two images at different time points. We then present a motion reconstruction network that constructs the motion field by projecting the sparse landmark displacements back into the dense image domain. Furthermore, we use the motion field estimated by our two-stage DSD framework as an initialization and boost the motion estimation quality with lightweight yet effective iterative optimization. We evaluated our method on two dynamic medical imaging tasks, modeling cardiac motion and lung respiratory motion, respectively. Our method produced superior motion estimation accuracy compared to existing methods. Moreover, extensive experimental results demonstrate that our solution can extract representative anatomical landmarks without any manual annotation.
Our code is publicly available online: https://github.com/yyguo-sjtu/DSD-3D-Unsupervised-Landmark-Detection-Based-Motion-Estimation.
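The second stage's projection of sparse landmark displacements back to a dense field can be approximated, for intuition, by inverse-distance weighting; this is a simple stand-in for the paper's learned motion-reconstruction network, not its actual method:

```python
import numpy as np

def sparse_to_dense_idw(landmarks, displacements, grid_shape, eps=1e-6):
    """Spread displacement vectors known only at sparse (y, x) landmarks
    onto a dense (h, w) grid by inverse-squared-distance weighting."""
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    field = np.zeros((h, w, 2))
    weights = np.zeros((h, w))
    for (ly, lx), d in zip(landmarks, displacements):
        dist2 = (ys - ly) ** 2 + (xs - lx) ** 2 + eps
        wgt = 1.0 / dist2
        field += wgt[..., None] * np.asarray(d, float)
        weights += wgt
    return field / weights[..., None]
```

Near each landmark the field approaches that landmark's displacement, and it varies smoothly in between, which is the qualitative behavior the reconstruction network is trained to produce with anatomical awareness.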


Subjects
Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Motion
13.
IEEE Trans Med Imaging ; 42(4): 1185-1196, 2023 04.
Article in English | MEDLINE | ID: mdl-36446017

ABSTRACT

Anomaly detection in fundus images remains challenging because fundus images often contain diverse types of lesions with various locations, sizes, shapes, and colors. Current methods achieve anomaly detection mainly by reconstructing the fundus image background, or separating it from the image, under the guidance of a set of normal fundus images. Reconstruction methods, however, ignore the constraint from lesions. Separation methods primarily model the diverse lesions with pixel-based independent and identically distributed (i.i.d.) properties, neglecting the individualized variations of different types of lesions and their structural properties. Hence, these methods may have difficulty distinguishing lesions from fundus image backgrounds, especially in the presence of normal personalized variations (NPV). To address these challenges, we propose a patch-based non-i.i.d. mixture of Gaussians (MoG) to model diverse lesions, adapting to the variations of their statistical distributions across fundus images and to their patch-like structural properties. Further, we introduce the weighted Schatten p-norm as the metric for low-rank decomposition to enhance the accuracy of the learned fundus image backgrounds and reduce false positives caused by NPV. With individualized modeling of the diverse lesions and background learning, fundus image backgrounds and NPV are finely learned and subsequently distinguished from diverse lesions, ultimately improving anomaly detection. The proposed method was evaluated on two real-world databases and one artificial database, outperforming state-of-the-art methods.


Subjects
Fundus Oculi; Normal Distribution; Databases, Factual
14.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 4776-4792, 2022 09.
Article in English | MEDLINE | ID: mdl-33755558

ABSTRACT

Saliency detection by humans refers to the ability to identify pertinent information using our perceptive and cognitive capabilities. While human perception is attracted by visual stimuli, our cognitive capability derives from the construction of concepts for reasoning. Saliency detection has attracted intensive interest with the aim of resembling the human 'perceptual' system. However, saliency related to human 'cognition', particularly the analysis of complex salient regions (the 'cogitating' process), has yet to be fully exploited. We propose to resemble human cognition, coupled with human perception, to improve saliency detection. We recognize saliency in three phases ('Seeing', 'Perceiving', and 'Cogitating'), mimicking the human's perceptive and cognitive thinking about an image. In our method, the 'Seeing' phase relates to human perception, and we formulate the 'Perceiving' and 'Cogitating' phases, related to the human cognitive system, via deep neural networks (DNNs) to construct a new module (Cognitive Gate) that enhances the DNN features for saliency detection. To the best of our knowledge, this is the first work to establish DNNs that resemble human cognition for saliency detection. In our experiments, our approach outperformed 17 benchmark DNN methods on six well-recognized datasets, demonstrating that resembling human cognition improves saliency detection.


Subjects
Algorithms; Neural Networks, Computer; Cognition; Humans
15.
Sci Rep ; 12(1): 2173, 2022 02 09.
Article in English | MEDLINE | ID: mdl-35140267

ABSTRACT

Radiogenomics relationships (RRs) aim to identify statistically significant correlations between medical image features and molecular characteristics from tissue sample analysis. Previous radiogenomics studies mainly relied on a single category of image feature extraction techniques (ETs): (i) handcrafted ETs that encompass visual imaging characteristics curated from the knowledge of human experts, or (ii) deep ETs that quantify abstract-level imaging characteristics from large data. Prior studies therefore failed to leverage the complementary information accessible by fusing the ETs. In this study, we propose a fused feature signature (FFSig): a selection of image features from handcrafted and deep ETs (e.g., transfer learning and fine-tuning of deep learning models). We evaluated the FFSig's ability to represent RRs better than individual ET approaches using two public datasets: the first, used to build the FFSig, comprises 89 patients with non-small cell lung cancer (NSCLC) with gene expression data and CT images of the thorax and upper abdomen for each patient; the second NSCLC dataset, comprising 117 patients with CT images and RNA-Seq data, was used as the validation set. Our results show that the FFSig encoded complementary imaging characteristics of tumours and identified more RRs, with a broader range of genes related to important biological functions such as tumourigenesis. We suggest that the FFSig has the potential to identify important RRs that may assist cancer diagnosis and treatment in the future.


Subjects
Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/genetics; Imaging Genomics; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics; Deep Learning; Gene Ontology; Humans; RNA-Seq; Tomography, X-Ray Computed; Transcriptome
16.
IEEE Trans Cybern ; 51(12): 5907-5920, 2021 Dec.
Article in English | MEDLINE | ID: mdl-31976925

ABSTRACT

As a fundamental requirement of many computer vision systems, saliency detection has experienced substantial progress in recent years based on deep neural networks (DNNs). Most DNN-based methods rely on either sparse or dense labeling and are thus subject to the inherent limitations of the chosen labeling scheme. DNN dense labeling captures salient objects mainly from global features, which are often hampered by other visually distinctive regions. On the other hand, DNN sparse labeling is usually impeded by inaccurate pre-segmentation of the images it depends on. To address these limitations, we propose a new framework consisting of two pathways and an aggregator that progressively integrates the DNN sparse and dense labeling schemes to derive the final saliency map. In our 'zipper'-type aggregation, we propose a multiscale kernels approach to extract optimal criteria for saliency detection, suppressing non-salient regions in the sparse labeling while guiding the dense labeling to recognize a more complete extent of the saliency. We demonstrate that our method outperforms 11 other state-of-the-art methods in saliency detection across six well-recognized benchmark datasets.


Subjects
Neural Networks, Computer
17.
IEEE J Biomed Health Inform ; 25(5): 1686-1698, 2021 05.
Article in English | MEDLINE | ID: mdl-32841131

ABSTRACT

Laparoscopic videos are increasingly acquired for various purposes, including surgical training and quality assurance, due to the wide adoption of laparoscopy in minimally invasive surgery. However, viewing large numbers of laparoscopic videos is very time consuming, which prevents the value of laparoscopic video archives from being fully exploited. In this paper, a dictionary-selection-based video summarization method is proposed to effectively extract keyframes for fast access to laparoscopic videos. Firstly, unlike the low-level features used in most existing summarization methods, deep features are extracted from a convolutional neural network to effectively represent video frames. Secondly, based on this deep representation, laparoscopic video summarization is formulated as a diverse and weighted dictionary selection model, in which image quality is taken into account to select high-quality keyframes, and a diversity regularization term is added to reduce redundancy among the selected keyframes. Finally, an iterative algorithm with a rapid convergence rate is designed for model optimization, and the convergence of the proposed method is analyzed. Experimental results on a recently released laparoscopic dataset demonstrate the clear superiority of the proposed method. The proposed method can facilitate access to key information in surgeries, the training of junior clinicians, explanations to patients, and the archiving of case files.
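The quality-plus-diversity trade-off can be approximated, for intuition, by a greedy rule that repeatedly picks the frame whose quality best exceeds its redundancy with already-selected keyframes; this sketch is not the paper's dictionary-selection optimization:

```python
import numpy as np

def select_keyframes(features, quality, k, lam=1.0):
    """Greedy stand-in for diverse, quality-weighted keyframe selection:
    redundancy is the maximum cosine similarity to frames chosen so far."""
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(feats)):
            if i in selected:
                continue
            redundancy = max((feats[i] @ feats[j] for j in selected), default=0.0)
            score = quality[i] - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```

With deep frame features and an image-quality score per frame, this favors high-quality frames that look unlike anything already in the summary, which is the behavior the dictionary model encodes more rigorously.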


Subjects
Laparoscopy, Algorithms, Humans, Minimally Invasive Surgical Procedures, Neural Networks, Computer, Video Recording
18.
Phys Med Biol ; 66(24)2021 12 07.
Article in English | MEDLINE | ID: mdl-34818637

ABSTRACT

Objective. Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the management of soft-tissue sarcomas (STSs). Distant metastases (DM) are the leading cause of death in STS patients, and early detection is important to effectively manage tumors with surgery, radiotherapy and chemotherapy. In this study, we aim to detect DM early in patients with STS using their PET-CT data. Approach. We derive a new convolutional neural network method for early DM detection. The novelty of our method is the introduction of a constrained hierarchical multi-modality feature learning approach to integrate functional imaging (PET) features with anatomical imaging (CT) features. In addition, we removed the reliance on manual input, e.g. tumor delineation, for extracting imaging features. Main results. Our experimental results on a well-established benchmark PET-CT dataset show that our method achieved the highest accuracy (0.896) and AUC (0.903) scores when compared to the state-of-the-art methods (unpaired Student's t-test, p-value < 0.05). Significance. Our method could be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.
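As a rough illustration of multi-modality feature integration, the sketch below pools simple statistics from each modality volume and concatenates them into one vector. The paper's constrained hierarchical CNN learns far richer features; this is purely a late-fusion toy with assumed statistics and function names:

```python
import numpy as np

def fuse_pet_ct_features(pet_volume, ct_volume):
    """Toy late fusion: pool per-modality summary statistics and
    concatenate PET (functional) and CT (anatomical) features.
    No manual tumor delineation is required, only the raw volumes.
    """
    def pooled_stats(vol):
        # Four global statistics as a stand-in for learned CNN features.
        return np.array([vol.mean(), vol.std(), vol.max(), vol.min()])
    return np.concatenate([pooled_stats(pet_volume), pooled_stats(ct_volume)])
```

The fused vector would then feed a classifier predicting presence of distant metastases.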


Subjects
Deep Learning, Sarcoma, Soft Tissue Neoplasms, Humans, Neural Networks, Computer, Positron Emission Tomography Computed Tomography/methods, Sarcoma/diagnostic imaging, Soft Tissue Neoplasms/diagnostic imaging
19.
Front Oncol ; 11: 723345, 2021.
Article in English | MEDLINE | ID: mdl-34589429

ABSTRACT

OBJECTIVES: The accurate assessment of lymph node metastases (LNMs) and the preoperative nodal (N) stage are critical for the precise treatment of patients with gastric cancer (GC). The diagnostic performance, however, of current imaging procedures used for this assessment is sub-optimal. Our aim was to investigate the value of preoperative 18F-FDG PET/CT radiomic features to predict LNMs and the N stage. METHODS: We retrospectively collected clinical and 18F-FDG PET/CT imaging data of 185 patients with GC who underwent total or partial radical gastrectomy. Patients were allocated to training and validation sets using the stratified method at a fixed ratio (8:2). A total of 2,100 radiomic features were extracted from the 18F-FDG PET/CT scans. After selecting radiomic features by the random forest, relevance-based, and sequential forward selection methods, the BalancedBagging ensemble classifier was established for the preoperative prediction of LNMs, and the OneVsRest classifier for the N stage. The performance of the models was primarily evaluated by the AUC and accuracy, and validated on the independent validation set. Analyses of feature importance and correlation were also conducted. We also compared the predictive performance of our radiomic models to that of contrast-enhanced CT (CECT) and 18F-FDG PET/CT. RESULTS: There were 185 patients (127 men, 58 women) with a median age of 62 years (range, 22-86 years). One CT feature and one PET feature were selected to predict LNMs and achieved the best performance (AUC: 82.2%, accuracy: 85.2%). This radiomic model also detected some LNMs that were missed in CECT (19.6%) and 18F-FDG PET/CT (35.7%). For predicting the N stage, four CT features and one PET feature were selected (AUC: 73.7%, accuracy: 62.3%). Of note, a proportion of patients in the validation set whose LNMs were incorrectly staged by CECT (57.4%) and 18F-FDG PET/CT (55%) were diagnosed correctly by our radiomic model.
CONCLUSION: We developed and validated two machine learning models based on preoperative 18F-FDG PET/CT images that have predictive value for LNMs and the N stage in GC. These predictive models show promise as a potentially useful adjunct to current staging approaches for patients with GC.
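The sequential forward selection step, with AUC as the evaluation metric, can be sketched as follows. The mean-of-selected-features criterion is a toy stand-in for scoring with a fitted classifier such as the BalancedBagging ensemble, and all names here are illustrative:

```python
import numpy as np

def auc_score(y, s):
    """Rank-based AUC: probability a random positive outranks a random
    negative (the Mann-Whitney U formulation); ties count half."""
    y, s = np.asarray(y), np.asarray(s)
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def forward_select(X, y, n_features=2):
    """Greedy sequential forward selection: at each step add the feature
    that, combined with those already chosen, best separates the classes
    by AUC. Scores candidates by the mean of the chosen feature columns."""
    chosen = []
    for _ in range(n_features):
        best, best_auc = None, -1.0
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            score = X[:, chosen + [j]].mean(axis=1)
            a = auc_score(y, score)
            if a > best_auc:
                best, best_auc = j, a
        chosen.append(best)
    return chosen, best_auc
```

A real radiomics pipeline would wrap this in cross-validation and refit the final classifier on the chosen subset.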

20.
EBioMedicine ; 69: 103471, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34229277

ABSTRACT

BACKGROUND: Metabolic syndrome (MetS) is highly related to the excessive accumulation of visceral adipose tissue (VAT). Quantitative measurements of VAT are commonly applied in clinical practice for the assessment of metabolic risk; however, it remains largely unknown whether the texture of VAT can evaluate visceral adiposity, stratify MetS and predict surgery-induced weight loss effects. METHODS: 675 Chinese adult volunteers and 63 obese patients (who underwent bariatric surgery) were enrolled. Texture features were extracted from the VAT regions of computed tomography (CT) scans, and machine learning was applied to identify significant imaging biomarkers associated with metabolic-related traits. FINDINGS: Combined with sex, ten VAT texture features achieved areas under the curve (AUCs) of 0.872, 0.888, 0.961, and 0.947 for predicting the prevalence of insulin resistance, MetS, central obesity, and visceral obesity, respectively. A novel imaging biomarker, RunEntropy, was identified to be significantly associated with major metabolic outcomes, and a 3.5-year follow-up in 338 volunteers demonstrated its long-term effectiveness. More importantly, the preoperative imaging biomarkers yielded high AUCs and accuracies for estimating responses to surgery, including the percentage of excess weight loss (%EWL) (0.867 and 74.6%), postoperative BMI group (0.930 and 76.1%), postoperative insulin resistance (0.947 and 88.9%), and excess visceral fat loss (the proportion of visceral fat reduced over 50%; 0.928 and 84.1%). INTERPRETATION: This study shows that the texture features of VAT have significant clinical implications in evaluating metabolic disorders and predicting surgery-induced weight loss effects. FUNDING: The complete list of funders can be found in the Acknowledgement section.
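Texture features such as RunEntropy are derived from gray-level run-length statistics. The sketch below is a simplified illustration on a quantized image: it computes entropy over (gray level, run length) pairs along rows only, unlike full GLRLM implementations that aggregate several directions:

```python
import numpy as np
from collections import Counter

def run_entropy(quantized_image):
    """Toy run-length entropy over rows of a gray-level-quantized image.

    Counts maximal runs of equal gray levels, builds a probability
    distribution over (gray level, run length) pairs, and returns its
    Shannon entropy in bits. Homogeneous textures score near 0;
    fine-grained, irregular textures score higher.
    """
    runs = Counter()
    for row in quantized_image:
        i = 0
        while i < len(row):
            j = i
            while j < len(row) and row[j] == row[i]:
                j += 1
            runs[(row[i], j - i)] += 1
            i = j
    total = sum(runs.values())
    p = np.array([c / total for c in runs.values()])
    return float(-(p * np.log2(p)).sum())
```

In a radiomics pipeline the input would be the VAT region of a CT slice quantized to a small number of gray levels.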


Subjects
Bariatric Surgery/adverse effects, Intra-Abdominal Fat/diagnostic imaging, Metabolic Diseases/diagnostic imaging, Postoperative Complications/diagnostic imaging, Tomography, X-Ray Computed/methods, Weight Loss, Adult, Female, Humans, Male