Results 1 - 20 of 21
1.
Lab Invest; 101(4): 450-462, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32829381

ABSTRACT

Radiomics has potential advantages in the noninvasive histopathological and molecular diagnosis of gliomas. We aimed to develop a novel image signature (IS)-based radiomics model to achieve multilayered preoperative diagnosis and prognostic stratification of gliomas. We established three separate case cohorts comprising 655 glioma patients and carried out a retrospective study. Image and clinical data from the three cohorts were used for training (N = 188), cross-validation (N = 411), and independent testing (N = 56) of the IS model. All tumors were segmented from magnetic resonance (MR) images by a 3D U-Net, followed by extraction of high-throughput network features, which were referred to as the IS. The IS was then used to perform noninvasive histopathological diagnosis and molecular subtyping. Moreover, a new IS-based clustering method was applied for prognostic stratification in IDH-wild-type lower-grade glioma (IDHwt LGG) and triple-negative glioblastoma (1p/19q-retained/IDH-wild-type/TERTp-wild-type GBM). The average accuracies of histological diagnosis and molecular subtyping were 89.8% and 86.1% in the cross-validation cohort, and 83.9% and 80.4% in the independent testing cohort. The IS-based clustering method successfully divided IDHwt LGG into two subgroups with distinct median overall survival times (48.63 vs. 38.27 months, P = 0.023), and divided triple-negative GBM into two subgroups with different median overall survival times (36.8 vs. 18.2 months, P = 0.013). Our findings demonstrate that the novel IS-based radiomics model is an effective tool for noninvasive histo-molecular pathological diagnosis and prognostic stratification of gliomas. The IS model shows potential for future routine use in clinical practice.
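
As a rough illustration of the stratification step described above (not the authors' code), the sketch below clusters hypothetical image-signature features into two groups with scikit-learn and compares their median overall survival; the feature matrix and survival times are placeholders for the network-derived IS and clinical follow-up.

# A minimal sketch, assuming IS features were already extracted upstream from the
# segmentation network; `is_features` and `os_months` are hypothetical placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
is_features = rng.normal(size=(120, 256))                 # placeholder (n_patients, n_features) IS matrix
os_months = rng.gamma(shape=2.0, scale=20.0, size=120)    # placeholder overall survival times

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(is_features)
)
for k in (0, 1):
    print(f"cluster {k}: n={np.sum(labels == k)}, median OS={np.median(os_months[labels == k]):.1f} months")
# A real analysis would compare the two groups with a log-rank test that accounts
# for censoring (e.g., lifelines.statistics.logrank_test).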


Subjects
Brain Neoplasms/diagnostic imaging, Deep Learning, Glioma/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Adolescent, Adult, Aged, Brain Neoplasms/pathology, Female, Glioma/pathology, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Molecular Diagnostic Techniques, Prognosis, Retrospective Studies, Young Adult
2.
Int J Neurosci; 128(7): 608-618, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29183170

ABSTRACT

PURPOSE OF THE STUDY: Because primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM) require entirely different therapeutic regimens, accurate differentiation of the two diseases by noninvasive imaging techniques is important for clinical decision-making. MATERIALS AND METHODS: Thirty cases of PCNSL and 66 cases of GBM with conventional T1-contrast magnetic resonance imaging (MRI) were analyzed in this study. A convolutional neural network was used to segment the tumors automatically. A modified scale-invariant feature transform (SIFT) method was utilized to extract three-dimensional local voxel-arrangement information from the segmented tumors. A Fisher vector was used to normalize the dimensionality of the SIFT features. An improved genetic algorithm (GA) was used to select the SIFT features with PCNSL/GBM discrimination ability. The dataset was divided into a cross-validation cohort and an independent validation cohort at a ratio of 2:1. A support vector machine with leave-one-out cross-validation, based on 20 cases of PCNSL and 44 cases of GBM, was employed to build and validate the differentiation model. RESULTS: Among 16,384 high-throughput features, 1356 features showed significant differences between PCNSL and GBM with p < 0.05, and 420 features with p < 0.001. A total of 496 features were finally chosen by the improved GA. The proposed method differentiated PCNSL from GBM with an area under the ROC curve (AUC) of 99.1% (98.2%), accuracy of 95.3% (90.6%), sensitivity of 85.0% (80.0%) and specificity of 100% (95.5%) on the cross-validation cohort (independent validation cohort). CONCLUSIONS: Owing to the local voxel-arrangement characterization provided by SIFT features, the proposed method achieved more competitive PCNSL vs. GBM differentiation performance using conventional MRI than methods based on advanced MRI.
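
The final modeling step, an SVM evaluated with leave-one-out cross-validation, can be sketched as below; the 496 selected features and the 20/44 class split are placeholders standing in for the study data, and this is an illustration rather than the published pipeline.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 496))            # placeholder for the 496 GA-selected features
y = np.array([0] * 20 + [1] * 44)         # 0 = PCNSL, 1 = GBM (illustrative 20/44 split)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
prob = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("LOOCV AUC:", roc_auc_score(y, prob))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))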


Subjects
Central Nervous System Neoplasms/diagnostic imaging, Glioblastoma/diagnostic imaging, Lymphoma/diagnostic imaging, Magnetic Resonance Imaging/methods, Adult, Aged, Cohort Studies, Decision Making, Diagnosis, Differential, Female, Humans, Image Processing, Computer-Assisted, Male, Middle Aged, Sensitivity and Specificity
3.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi; 35(5): 754-760, 2018 Oct 25.
Article in Chinese | MEDLINE | ID: mdl-30370715

ABSTRACT

Differentiating primary central nervous system lymphoma (PCNSL) from glioblastoma (GBM) is of great clinical significance because the two tumors require markedly different therapeutic regimens. In this paper, we propose a system based on sparse representation for automatic classification of PCNSL and GBM. The proposed system distinguishes the two tumors by exploiting their different texture detail on T1-contrast magnetic resonance imaging (MRI) images. First, inspired by the radiomics workflow, we designed a dictionary-learning and sparse-representation-based method to extract texture information; with this approach, tumors of different volumes and shapes were transformed into 968 quantitative texture features. Next, to address redundancy in the extracted features, feature selection based on iterative sparse representation was used to select key texture features with high stability and discriminative power. Finally, the selected key features were used for differentiation with a sparse representation classification (SRC) method. Under ten-fold cross-validation, the proposed approach achieved an accuracy of 96.36%, sensitivity of 96.30%, and specificity of 96.43%. The experimental results show that our approach not only effectively distinguishes the two tumors but is also robust in practical application, since it avoids parameter extraction from advanced MRI images.
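
A minimal sketch of sparse representation classification (SRC), the decision rule named above: a test sample is sparsely coded over a dictionary whose columns are training samples, and it is assigned to the class whose atoms give the smallest reconstruction residual. Feature counts and data below are illustrative assumptions, not the paper's dictionary.

import numpy as np
from sklearn.linear_model import Lasso

def src_predict(D, train_labels, x, alpha=0.01):
    """D: (n_features, n_train) column-normalized dictionary; x: (n_features,) test sample."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coef = coder.fit(D, x).coef_                                      # sparse code over all training atoms
    residuals = {}
    for c in np.unique(train_labels):
        mask = (train_labels == c)
        residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])    # class-wise reconstruction residual
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
D = rng.normal(size=(968, 60))                        # placeholder: 968 texture features x 60 training cases
D /= np.linalg.norm(D, axis=0, keepdims=True)
labels = np.array([0] * 30 + [1] * 30)                # 0 = PCNSL, 1 = GBM (illustrative)
x = D[:, 3] + 0.05 * rng.normal(size=968)             # a noisy copy of a class-0 atom
print("predicted class:", src_predict(D, labels, x))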

4.
Eur Radiol; 27(8): 3509-3522, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28004160

ABSTRACT

OBJECTIVE: The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. METHODS: A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the true IDH1 status obtained from Sanger sequencing. Another independent validation cohort containing 30 patients was used to further test the method. RESULTS: A total of 671 high-throughput features were extracted and quantized. 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach achieved an accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic (ROC) curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. CONCLUSIONS: Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively from conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. KEY POINTS: • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated with IDH1 status. • The area under the ROC curve of the proposed estimation method reached 0.86.
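
A hedged sketch of the estimation workflow: feature selection and a classifier are wrapped in one pipeline so that selection is re-fit inside every leave-one-out fold, avoiding information leakage. The feature matrix, labels, and the use of SelectKBest in place of the paper's genetic-algorithm selection are assumptions for illustration.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 671))                 # placeholder radiomics feature matrix
y = rng.integers(0, 2, size=110)                # placeholder IDH1 status (0 = wild-type, 1 = mutant)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=110),              # stand-in for the paper's genetic-algorithm selection
    SVC(kernel="linear", probability=True),
)
prob = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, prob))     # selection happens inside each fold, so no leakage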


Subjects
Brain Neoplasms/genetics, Glioma/genetics, Isocitrate Dehydrogenase/genetics, Mutation, Adolescent, Adult, Aged, Biomarkers, Tumor/genetics, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Child, Child, Preschool, Cohort Studies, Female, Glioma/diagnostic imaging, Glioma/pathology, Humans, Image Interpretation, Computer-Assisted/methods, Infant, Infant, Newborn, Magnetic Resonance Imaging/methods, Male, Middle Aged, Neoplasm Grading, Neoplasm Proteins/genetics, Prognosis, ROC Curve, Retrospective Studies, Sensitivity and Specificity, Young Adult
5.
Fundam Res; 4(2): 291-299, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38933506

ABSTRACT

The separation and transport of photogenerated charge carriers inside photocathodes can greatly influence the performance of photoelectrochemical (PEC) H2 production devices. Coupling TiO2 with p-type semiconductors to construct heterojunction structures is one of the most widely used strategies to facilitate charge separation and transport. However, the band positions of TiO2 do not match all p-type semiconductors perfectly. Here, taking antimony selenide (Sb2Se3) as an example, a rational strategy was developed by introducing a viologen electron-transfer mediator (ETM)-containing polymeric film (poly-1,1'-diallyl-[4,4'-bipyridine]-1,1'-diium, denoted PV2+) at the interface between Sb2Se3 and TiO2 to regulate the energy band alignment and thereby inhibit the recombination of photogenerated charge carriers at the interfaces. With Pt as a catalyst, the constructed Sb2Se3/PV2+/TiO2/Pt photocathode showed superior PEC hydrogen generation activity, with a photocurrent density of -18.6 mA cm-2 vs. a reversible hydrogen electrode (RHE) and a half-cell solar-to-hydrogen efficiency (HC-STH) of 1.54% at 0.17 V vs. RHE, much better than the corresponding Sb2Se3/TiO2/Pt photocathode without PV2+ (-9.8 mA cm-2, 0.51% at 0.10 V vs. RHE).
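
For reference, the half-cell solar-to-hydrogen efficiency of a photocathode is commonly computed as below (assumed standard definition, not an equation quoted from the paper); |j_ph| is the photocurrent density, E_RHE the applied potential vs. RHE, and P_sun the incident solar power density (typically 100 mW cm-2 under AM 1.5G illumination).

\[
  \eta_{\mathrm{HC\text{-}STH}} \;=\;
  \frac{\lvert j_{\mathrm{ph}} \rvert \times \bigl(E_{\mathrm{RHE}} - E^{0}_{\mathrm{H^{+}/H_{2}}}\bigr)}
       {P_{\mathrm{sun}}} \times 100\%,
  \qquad E^{0}_{\mathrm{H^{+}/H_{2}}} = 0\ \mathrm{V\ vs.\ RHE}
\]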

6.
Res Sq; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38464127

ABSTRACT

Designing proteins with improved functions requires a deep understanding of how sequence and function are related, a vast space that is hard to explore. The ability to efficiently compress this space by identifying functionally important features is extremely valuable. Here, we first establish a method called EvoScan to comprehensively segment and scan the high-fitness sequence space to obtain anchor points that capture its essential features, especially in high dimensions. Our approach is compatible with any biomolecular function that can be coupled to a transcriptional output. We then develop deep learning and large language models to accurately reconstruct the space from these anchors, allowing computational prediction of novel, highly fit sequences without prior homology-derived or structural information. We apply this hybrid experimental-computational method, which we call EvoAI, to a repressor protein and find that only 82 anchors are sufficient to compress the high-fitness sequence space with a compression ratio of 10^48. The extreme compressibility of the space informs both applied biomolecular design and understanding of natural evolution.

7.
IEEE Trans Med Imaging; 42(6): 1885-1896, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37022408

ABSTRACT

Background samples provide key contextual information for segmenting regions of interest (ROIs). However, they often cover a diverse set of structures, making it difficult for the segmentation model to learn good decision boundaries with high sensitivity and precision. The issue stems from the highly heterogeneous nature of the background class, which results in multi-modal distributions. Empirically, we find that neural networks trained with a heterogeneous background class struggle to map the corresponding contextual samples to compact clusters in feature space. As a result, the distribution of background logit activations may shift across the decision boundary, leading to systematic over-segmentation across different datasets and tasks. In this study, we propose context label learning (CoLab) to improve the context representations by decomposing the background class into several subclasses. Specifically, we train an auxiliary network as a task generator, along with the primary segmentation model, to automatically generate context labels that positively affect the ROI segmentation accuracy. Extensive experiments are conducted on several challenging segmentation tasks and datasets. The results demonstrate that CoLab can guide the segmentation model to map the logits of background samples away from the decision boundary, resulting in significantly improved segmentation accuracy. Code is available at https://github.com/ZerojumpLine/CoLab.


Subjects
Neural Networks, Computer, Semantics, Image Processing, Computer-Assisted
8.
Metabolites; 13(8), 2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37623897

ABSTRACT

Alzheimer's disease (AD) represents a significant public health concern in modern society. Metabolic syndrome (MetS), which includes diabetes mellitus (DM) and obesity, represents a modifiable risk factor for AD. MetS and AD are interconnected through various mechanisms, such as mitochondrial dysfunction, oxidative stress, insulin resistance (IR), vascular impairment, inflammation, and endoplasmic reticulum (ER) stress. Therefore, a multi-targeted and safer approach to intervention is needed. 10-Hydroxy-2-decenoic acid (10-HDA), a hydroxy fatty acid unique to royal jelly, has shown promising anti-neuroinflammatory, blood-brain barrier (BBB)-preserving, and neurogenesis-promoting properties. In this paper, we summarize the relationship between MetS and AD and introduce 10-HDA as a potential intervention nutrient. In addition, molecular docking is performed to explore the metabolic tuning properties of 10-HDA with associated macromolecules such as GLP-1R, PPARs, GSK-3, and TREM2. In conclusion, there is a close relationship between AD and MetS, and 10-HDA shows potential as a beneficial nutritional intervention for both AD and MetS.

9.
IEEE Trans Med Imaging; 42(11): 3323-3335, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37276115

ABSTRACT

This paper presents an effective and general data augmentation framework for medical image segmentation. We adopt a computationally efficient and data-efficient gradient-based meta-learning scheme to explicitly align the distributions of training and validation data, where the latter is used as a proxy for unseen test data. We improve on current data augmentation strategies with two core designs. First, we learn class-specific training-time data augmentation (TRA), effectively increasing the heterogeneity within the training subsets and tackling the class imbalance common in segmentation. Second, we jointly optimize TRA and test-time data augmentation (TEA), which are closely connected, as both aim to align the training and test data distributions, but have so far been considered separately in previous works. We demonstrate the effectiveness of our method on four medical image segmentation tasks across different scenarios with two state-of-the-art segmentation models, DeepMedic and nnU-Net. Extensive experimentation shows that the proposed data augmentation framework can significantly and consistently improve segmentation performance compared with existing solutions. Code is publicly available at https://github.com/ZerojumpLine/JCSAugment.

10.
IEEE Trans Med Imaging; 42(4): 1095-1106, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36417741

ABSTRACT

Deep learning models usually suffer from the domain shift issue, where models trained on one source domain do not generalize well to other unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains when training data are only available from one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation, a scenario in which domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach that exposes a segmentation model to synthesized domain-shifted training examples. Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks, which augment training images with diverse appearance transformations; 2) further, we show that spurious correlations among objects in an image are detrimental to domain robustness: such correlations might be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention, achieved by resampling the appearances of potentially correlated objects independently. The proposed approach is validated on three cross-domain segmentation scenarios: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-site prostate MRI segmentation. The proposed approach yields consistent performance gains compared with competitive methods when tested on unseen domains.
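
The appearance-transformation idea above ("a family of randomly-weighted shallow networks") can be sketched as follows: an image is passed through a small, freshly re-initialized convolutional stack to synthesize a new intensity/texture appearance while preserving anatomy. The layer count, channel width and intensity rescaling are illustrative assumptions rather than the paper's configuration.

import torch
import torch.nn as nn

def random_appearance_transform(image: torch.Tensor, hidden: int = 8) -> torch.Tensor:
    """image: (B, 1, H, W) tensor; returns an appearance-perturbed image of the same shape."""
    layers, in_ch = [], 1
    for _ in range(3):                                   # a shallow stack of randomly weighted convs
        conv = nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1)
        nn.init.kaiming_normal_(conv.weight)             # fresh random weights on every call
        layers += [conv, nn.LeakyReLU(0.2)]
        in_ch = hidden
    layers.append(nn.Conv2d(in_ch, 1, kernel_size=1))
    net = nn.Sequential(*layers)
    with torch.no_grad():
        out = net(image)
        # rescale back to the original intensity range so only appearance, not content, changes
        out = (out - out.min()) / (out.max() - out.min() + 1e-8)
        lo, hi = image.min(), image.max()
        return out * (hi - lo) + lo

augmented = random_appearance_transform(torch.rand(2, 1, 64, 64))
print(augmented.shape)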


Assuntos
Pelve , Próstata , Masculino , Humanos
11.
IEEE Trans Med Imaging; 42(3): 697-712, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36264729

ABSTRACT

Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.


Assuntos
Cavidade Abdominal , Aprendizado Profundo , Humanos , Algoritmos , Encéfalo/diagnóstico por imagem , Abdome/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos
12.
IEEE J Biomed Health Inform; 26(7): 3059-3067, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34982706

ABSTRACT

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and ultrasound (US), two common modalities for clinical breast tumor diagnosis besides mammography, can provide different and complementary information about the same tumor regions. Although many machine learning methods have been proposed for breast tumor classification based on either single modality, it remains unclear how to further boost classification performance by utilizing paired multi-modality information with different dimensions. In this paper, we propose the MRI-US multi-modality network (MUM-Net) to classify breast tumors into different subtypes based on 3D MR and 2D US images. The key insight of MUM-Net is that we explicitly distill modality-agnostic features for tumor classification. Specifically, we first adopt a discrimination-adaption module to decompose features into modality-agnostic and modality-specific components with min-max training strategies. Then, we propose a feature fusion module that increases the compactness of the modality-agnostic features by utilizing an affinity matrix with nearest-neighbour selection. We built a paired MRI-US breast tumor classification dataset containing 502 cases with three clinical indicators to validate the proposed method. In three tasks, namely lymph node metastasis, histological grade and Ki-67 level, MUM-Net achieves AUC scores of 0.8581, 0.8965 and 0.8577, outperforming counterparts based on a single task or a single modality by a wide margin. In addition, we find that the extracted modality-agnostic features help the network focus on the tumor regions in both modalities.


Subjects
Breast Neoplasms, Magnetic Resonance Imaging, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging/methods, Ultrasonography, Ultrasonography, Mammary
13.
Med Image Anal; 82: 102597, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36095907

ABSTRACT

The success of neural networks on medical image segmentation tasks typically relies on large labeled datasets for model training. However, acquiring and manually labeling a large medical image set is resource-intensive, expensive, and sometimes impractical due to data-sharing and privacy issues. To address this challenge, we propose AdvChain, a generic adversarial data augmentation framework aimed at improving both the diversity and effectiveness of training data for medical image segmentation tasks. AdvChain augments data dynamically, generating randomly chained photometric and geometric transformations that resemble realistic yet challenging imaging variations to expand the training data. By jointly optimizing the data augmentation model and a segmentation network during training, challenging examples are generated to enhance network generalizability on the downstream task. The proposed adversarial data augmentation does not rely on generative networks and can be used as a plug-in module in general segmentation networks. It is computationally efficient and applicable to both low-shot supervised and semi-supervised learning. We analyze and evaluate the method on two MR image segmentation tasks: cardiac segmentation and prostate segmentation with limited labeled data. The results show that the proposed approach can alleviate the need for labeled data while improving model generalization ability, indicating its practical value in medical imaging applications.


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, Humans, Male, Image Processing, Computer-Assisted/methods, Supervised Machine Learning
14.
IEEE Trans Med Imaging; 40(3): 1065-1077, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33351758

ABSTRACT

Class imbalance poses a challenge for developing unbiased, accurate predictive models. In particular, in image segmentation, neural networks may overfit to the foreground samples from small structures, which are often heavily under-represented in the training set, leading to poor generalization. In this study, we provide new insights into the problem of overfitting under class imbalance by inspecting the network behavior. We find empirically that, when training with limited data and strong class imbalance, at test time the distribution of logit activations may shift across the decision boundary, while samples of the well-represented class seem unaffected. This bias leads to a systematic under-segmentation of small structures. The phenomenon is consistently observed across different databases, tasks and network architectures. To tackle this problem, we introduce new asymmetric variants of popular loss functions and regularization techniques, including a large-margin loss, focal loss, adversarial training, mixup and data augmentation, which are explicitly designed to counter logit shift of the under-represented classes. Extensive experiments are conducted on several challenging segmentation tasks. Our results demonstrate that the proposed modifications to the objective function can lead to significantly improved segmentation accuracy compared to baselines and alternative approaches.
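
One way to realize the asymmetry described above is a focal-style loss whose down-weighting term is applied only to background pixels, so the under-represented foreground keeps full cross-entropy weight. The sketch below illustrates that idea for binary segmentation; it is not the paper's exact formulation or hyper-parameters.

import torch
import torch.nn.functional as F

def asymmetric_focal_loss(logits: torch.Tensor, target: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """logits, target: (B, 1, H, W); target in {0, 1} with 1 = foreground."""
    p = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    # modulate only the background term; the foreground term keeps full weight
    weight = (1.0 - target) * p.pow(gamma) + target * 1.0
    return (weight * bce).mean()

loss = asymmetric_focal_loss(torch.randn(2, 1, 32, 32), torch.randint(0, 2, (2, 1, 32, 32)).float())
print(loss.item())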


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, Databases, Factual
15.
IEEE Trans Cybern; 51(7): 3441-3454, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31484151

ABSTRACT

Thin-section magnetic resonance imaging (MRI) can provide higher-resolution anatomical structures and more precise clinical information than thick-section images. However, thin-section MRI is not always available because of imaging cost. In multicenter retrospective studies, much of the data is acquired in thick sections with varying section thickness. The lack of thin-section data and the differences in section thickness pose considerable difficulties for studies based on large-scale image data. In this article, we introduce DeepVolume, a two-step deep learning architecture that addresses the challenge of accurate thin-section MR image reconstruction. The first stage is a brain-structure-aware network, in which thick-section MR images in the axial and sagittal planes are fused by a multitask 3D U-Net with prior knowledge of brain volume segmentation, encouraging the reconstruction to have correct brain structure. The second stage is a spatial-connection-aware network, in which the preliminary reconstruction results are adjusted slice by slice by a recurrent convolutional network embedding a convolutional long short-term memory (LSTM) block, enhancing the precision of the reconstruction by utilizing the previously unassessed sagittal information. We used 305 paired brain MRI samples with section thicknesses of 1.0 mm and 6.5 mm. Extensive experiments show that DeepVolume produces state-of-the-art reconstruction results by embedding more anatomical knowledge. Furthermore, considering DeepVolume as an intermediate step, the practical and clinical value of our method is validated by applying it to brain volume estimation and voxel-based morphometry. The results show that DeepVolume provides much more reliable brain volume estimation in the normalized space from thick-section MR images than traditional solutions.


Subjects
Brain/diagnostic imaging, Deep Learning, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Adolescent, Brain/anatomy & histology, Brain/physiology, Child, Child, Preschool, Humans, Infant, Infant, Newborn
16.
NPJ Digit Med; 4(1): 60, 2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33782526

ABSTRACT

Data privacy mechanisms are essential for rapidly scaling medical training databases to capture the heterogeneity of patient data distributions toward robust and generalizable machine learning systems. In the current COVID-19 pandemic, a major focus of artificial intelligence (AI) is interpreting chest CT, which can be readily used in the assessment and management of the disease. This paper demonstrates the feasibility of a federated learning method for detecting COVID-19-related CT abnormalities, with external validation on patients from a multinational study. We recruited 132 patients from seven centers in different countries: three internal hospitals in Hong Kong for training and testing, and four external, independent datasets from Mainland China and Germany for validating model generalizability. We also conducted case studies on longitudinal scans for automated estimation of lesion burden in hospitalized COVID-19 patients. We explore federated learning algorithms to develop a privacy-preserving AI model for COVID-19 medical image diagnosis with good generalization capability on unseen multinational datasets. Federated learning could provide an effective mechanism during pandemics to rapidly develop clinically useful AI across institutions and countries, overcoming the burden of centrally aggregating large amounts of sensitive data.
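
A minimal sketch of the federated-averaging idea underlying such a setup: each site trains locally and only model weights (never images) are aggregated on a server, weighted by local sample counts. The model, site sizes and one-step local training below are illustrative placeholders, not the study's configuration.

import copy
import torch
import torch.nn as nn

def federated_average(client_states, client_sizes):
    """Weighted average of client state_dicts (FedAvg-style aggregation)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
    return avg

global_model = nn.Linear(16, 2)                       # placeholder for a CT-abnormality classifier
client_sizes = [40, 55, 37]                           # e.g., three hospitals with different cohort sizes
client_states = []
for n in client_sizes:                                # one sketched local update per site
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=0.1)
    x, y = torch.randn(n, 16), torch.randint(0, 2, (n,))
    opt.zero_grad()
    nn.functional.cross_entropy(local(x), y).backward()
    opt.step()
    client_states.append(local.state_dict())

global_model.load_state_dict(federated_average(client_states, client_sizes))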

17.
IEEE Trans Med Imaging; 39(10): 3053-3063, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32275586

ABSTRACT

There is clinical evidence that suppressing the bone structures in chest X-rays (CXRs) improves diagnostic value, whether for radiologists or for computer-aided diagnosis. However, bone-free CXRs are not always accessible. We hereby propose a coarse-to-fine CXR bone suppression approach using structural priors derived from unpaired computed tomography (CT) images. In the low-resolution stage, we use the digitally reconstructed radiograph (DRR) image computed from CT as a bridge to connect CT and CXR. We then perform CXR bone decomposition by leveraging the DRR bone decomposition model learned from unpaired CTs and domain adaptation between CXR and DRR. To further mitigate the domain differences between CXRs and DRRs and speed up learning convergence, we perform all the above operations in the Laplacian of Gaussian (LoG) domain. After obtaining the bone decomposition result in the DRR, we upsample it to high resolution, based on which the bone region in the original high-resolution CXR is cropped and processed to produce a high-resolution bone decomposition result. Finally, the produced bone image is subtracted from the original high-resolution CXR to obtain the bone suppression result. We conduct experiments and clinical evaluations on two benchmark CXR databases and show that (i) the proposed method outperforms state-of-the-art unsupervised CXR bone suppression approaches; (ii) bone-suppressed CXRs help radiologists reduce their false-negative rate for lung diseases from 15% to 8%; and (iii) state-of-the-art disease classification performance is achieved by learning a deep network that takes the original CXR and its bone-suppressed image as inputs.
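
A small illustration of working in the Laplacian-of-Gaussian (LoG) domain: both CXR and DRR images are band-pass filtered before decomposition, which emphasizes structure and suppresses low-frequency intensity differences between the two domains. The sigma value and normalization below are assumptions for illustration, not the paper's settings.

import numpy as np
from scipy.ndimage import gaussian_laplace

def to_log_domain(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Return the LoG response of a 2D image, normalized to zero mean / unit std."""
    response = gaussian_laplace(image.astype(np.float32), sigma=sigma)
    return (response - response.mean()) / (response.std() + 1e-8)

cxr = np.random.rand(256, 256)          # placeholder chest X-ray
drr = np.random.rand(256, 256)          # placeholder digitally reconstructed radiograph
cxr_log, drr_log = to_log_domain(cxr), to_log_domain(drr)
print(cxr_log.shape, drr_log.shape)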


Subjects
Lung Diseases, Tomography, X-Ray Computed, Bone and Bones/diagnostic imaging, Humans, Radiography, Thoracic, Thorax, X-Rays
18.
Sci Rep; 7(1): 5467, 2017 Jul 14.
Article in English | MEDLINE | ID: mdl-28710497

ABSTRACT

Deep learning-based radiomics (DLR) was developed to extract deep information from multiple modalities of magnetic resonance (MR) images. The performance of DLR for predicting the mutation status of isocitrate dehydrogenase 1 (IDH1) was validated in a dataset of 151 patients with low-grade glioma. A modified convolutional neural network (CNN) structure with 6 convolutional layers and a fully connected layer of 4096 neurons was used to segment tumors. Instead of calculating image features from the segmented images, as is typical for conventional radiomics approaches, image features were obtained by normalizing the information of the last convolutional layers of the CNN. A Fisher vector was used to encode the CNN features from image slices of different sizes. High-throughput features with dimensionality greater than 1.6 × 10^4 were obtained from the CNN. Paired t-tests and F-scores were used to select CNN features able to discriminate IDH1 status. On the same dataset, the area under the receiver operating characteristic curve (AUC) of the conventional radiomics method was 86% for IDH1 estimation, whereas for DLR the AUC was 92%. The AUC of IDH1 estimation was further improved to 95% using DLR based on multiple-modality MR images. DLR could be a powerful way to extract deep information from medical images.
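
A hedged sketch of Fisher-vector encoding of per-slice CNN features: a small Gaussian mixture model is fitted to local descriptors, and a variable number of descriptors per case is encoded as a fixed-length vector of posterior-weighted gradients with respect to the component means (a simplified Fisher vector; the paper's exact encoding and dimensions are not reproduced here).

import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(descriptors: np.ndarray, gmm: GaussianMixture) -> np.ndarray:
    """descriptors: (n_local, d) -> fixed-length vector of size n_components * d."""
    q = gmm.predict_proba(descriptors)                       # (n_local, K) posteriors
    n, _ = descriptors.shape
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])   # diagonal covariances
        parts.append((q[:, k:k + 1] * diff).sum(axis=0) / (n * np.sqrt(gmm.weights_[k])))
    return np.concatenate(parts)

rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(2000, 64))              # pooled CNN features from many slices (placeholder)
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(train_descriptors)
case_descriptors = rng.normal(size=(37, 64))                 # one patient's per-slice features (placeholder)
fv = fisher_vector_means(case_descriptors, gmm)
print(fv.shape)                                              # fixed length (8 * 64,) regardless of slice count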


Subjects
Deep Learning, Glioma/diagnostic imaging, Glioma/pathology, Isocitrate Dehydrogenase/genetics, Magnetic Resonance Imaging, Adult, Cohort Studies, Female, Glioma/genetics, Humans, Male, Neoplasm Grading, Neoplasm Invasiveness, Neural Networks, Computer, ROC Curve, Time Factors
19.
Comput Assist Surg (Abingdon); 22(sup1): 18-25, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28914549

ABSTRACT

Glioblastoma is the most aggressive malignant brain tumor, with poor prognosis. Radiomics is a newly emerging and promising technique for revealing the complex relationships between high-throughput medical image features and deep information about disease, including pathology, biomarkers and genomics. An approach was developed to investigate the relationship between magnetic resonance imaging (MRI) features and the age-related origins of glioblastomas based on a quantitative radiomics method. A fully automatic image segmentation method was applied to segment the tumor regions from three-dimensional MRI images, and 555 features were then extracted from the image data. By analyzing large numbers of quantitative image features, predictive and prognostic information can be obtained with the radiomics approach. Ninety-six patients with pathologically diagnosed glioblastoma were divided into two age groups (<45 and ≥45 years old). As expected, 101 features were consistent with the age grouping (t-test, p < .05), and unsupervised clustering of those features was also coherent with the age difference (t-test, p = .006). In conclusion, glioblastomas in different age groups present statistically distinct radiomics-feature patterns, indicating that glioblastomas in different age groups may have different pathologic, proteomic, or genetic origins.


Subjects
Brain Neoplasms/diagnostic imaging, Brain Neoplasms/mortality, Glioblastoma/diagnostic imaging, Glioblastoma/mortality, Imaging, Three-Dimensional, Magnetic Resonance Imaging/methods, Adult, Age Factors, Brain Neoplasms/pathology, Cluster Analysis, Cohort Studies, Disease Progression, Evaluation Studies as Topic, Female, Glioblastoma/pathology, Humans, Kaplan-Meier Estimate, Male, Middle Aged, Prognosis, Retrospective Studies, Risk Assessment, Survival Analysis, Time Factors
20.
J Healthc Eng; 2017: 9283480, 2017.
Article in English | MEDLINE | ID: mdl-29065666

ABSTRACT

This work proposes a novel automatic three-dimensional (3D) magnetic resonance imaging (MRI) segmentation method intended for the clinical diagnosis of glioma, the most common and aggressive brain tumor. The method combines a multipathway convolutional neural network (CNN) with a fully connected conditional random field (CRF). First, 3D information is introduced into the CNN, enabling more accurate recognition of gliomas with low contrast. Then, the fully connected CRF is added as a postprocessing step to produce a more delicate delineation of the glioma boundary. The method was applied to T2-FLAIR MRI images of 160 low-grade glioma patients. With 59 cases used for training and manual segmentation as the ground truth, the Dice similarity coefficient (DSC) of our method was 0.85 on the test set of 101 MRI images. These results were better than those of another state-of-the-art CNN method, which achieved a DSC of 0.76 on the same dataset, demonstrating that our method produces better segmentations of low-grade gliomas.
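
For reference, the Dice similarity coefficient reported above is the standard overlap measure DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B; a small self-contained helper (not code from the paper) is sketched below.

import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True     # toy prediction
b = np.zeros((64, 64), dtype=bool); b[20:52, 20:52] = True     # toy ground truth
print(round(dice_coefficient(a, b), 3))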


Subjects
Brain Neoplasms/diagnostic imaging, Glioma/diagnostic imaging, Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Female, Humans, Imaging, Three-Dimensional, Magnetic Resonance Imaging, Male