Results 1 - 20 of 61
1.
IEEE Trans Med Imaging ; PP, 2021 Nov 29.
Article in English | MEDLINE | ID: mdl-34843432

ABSTRACT

Due to the lack of properly annotated medical data, exploring the generalization capability of deep models has become a public concern. Zero-shot learning (ZSL) has emerged in recent years to equip deep models with the ability to recognize unseen classes. However, existing studies mainly focus on natural images and utilize linguistic models to extract auxiliary information for ZSL. It is impractical to apply natural-image ZSL solutions directly to medical images, since medical terminology is highly domain-specific and linguistic models for it are not easy to acquire. In this work, we propose a new paradigm of ZSL specifically for medical images that utilizes cross-modality information. We make three main contributions with the proposed paradigm. First, we extract prior knowledge about the segmentation targets, called relation prototypes, from a prior model, and then propose a cross-modality adaptation module that passes the prototypes on to the zero-shot model. Second, we propose a relation prototype awareness module to make the zero-shot model aware of the information contained in the prototypes. Last but not least, we develop an inheritance attention module that recalibrates the relation prototypes to enhance the inheritance process. The proposed framework is evaluated on two public cross-modality datasets: a cardiac dataset and an abdominal dataset. Extensive experiments show that the proposed framework significantly outperforms the state of the art.

2.
IEEE Trans Med Imaging ; PP, 2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34752391

ABSTRACT

Computed tomography (CT) images are often impaired by unfavorable artifacts caused by metallic implants within patients, which adversely affect subsequent clinical diagnosis and treatment. Although existing deep-learning-based approaches have achieved promising success on metal artifact reduction (MAR) for CT images, most of them treat the task as a general image restoration problem and utilize off-the-shelf network modules for image quality enhancement. Hence, such frameworks often lack sufficient model interpretability for the specific task. Besides, existing MAR techniques largely neglect the intrinsic prior knowledge underlying metal-corrupted CT images, which is beneficial for improving MAR performance. In this paper, we propose a deep interpretable convolutional dictionary network (DICDNet) specifically for the MAR task. In particular, we first observe that metal artifacts typically present non-local streaking and star-shaped patterns in CT images. Based on these observations, a convolutional dictionary model is deployed to encode the metal artifacts. To solve the model, we propose a novel optimization algorithm based on the proximal gradient technique. With only simple operators, the iterative steps of the proposed algorithm can be easily unfolded into corresponding network modules with specific physical meanings. Comprehensive experiments on synthesized and clinical datasets substantiate the effectiveness of the proposed DICDNet as well as its superior interpretability, compared to current state-of-the-art MAR methods. Code is available at https://github.com/hongwang01/DICDNet.
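The unfolded optimization in DICDNet is built on proximal gradient steps for a convolutional dictionary model. As a rough illustration of the mechanism, the sketch below runs plain (non-convolutional) ISTA-style updates for A ≈ D M with a soft-threshold proximal operator; the sizes, step size, and sparsity weight are hypothetical toys, not the paper's actual formulation.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm: shrink toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_step(codes, dictionary, artifact, step, lam):
    """One proximal-gradient (ISTA) update for sparse codes M in A ≈ D @ M.

    codes      : current sparse coefficient matrix M
    dictionary : dictionary D whose columns are artifact atoms
    artifact   : observed artifact layer A to be encoded
    step       : gradient step size (small enough for convergence)
    lam        : sparsity weight; the prox threshold is step * lam
    """
    grad = dictionary.T @ (dictionary @ codes - artifact)  # data-fit gradient
    return soft_threshold(codes - step * grad, step * lam)

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
M_true = np.zeros((8, 4)); M_true[:2] = 1.0    # sparse ground-truth codes
A = D @ M_true
M = np.zeros_like(M_true)
for _ in range(200):
    M = ista_step(M, D, A, step=0.1, lam=0.01)
print(np.linalg.norm(D @ M - A) < 0.5)         # residual shrinks
```

In DICDNet's unrolling, each such iteration becomes one network stage, with the dictionary and thresholds learned rather than fixed.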

3.
Article in English | MEDLINE | ID: mdl-34637384

ABSTRACT

As of March 31, 2021, the coronavirus disease 2019 (COVID-19) had reportedly infected more than 127 million people and caused over 2.5 million deaths worldwide. Timely diagnosis of COVID-19 is crucial for the management of individual patients as well as containment of the highly contagious disease. Having realized the clinical value of non-contrast chest computed tomography (CT) for the diagnosis of COVID-19, researchers have proposed deep learning (DL) based automated methods to aid radiologists in reading the huge quantities of CT exams produced during the pandemic. In this work, we address an overlooked problem in training deep convolutional neural networks for COVID-19 classification using real-world multi-source data, namely, the data source bias problem. The data source bias problem refers to the situation in which certain sources of data comprise only a single class of data, and training with such source-biased data may make DL models learn to distinguish data sources instead of COVID-19. To overcome this problem, we propose MIx-aNd-Interpolate (MINI), a conceptually simple, easy-to-implement, efficient yet effective training strategy. The proposed MINI approach generates volumes of the absent class by combining samples collected from different hospitals, which enlarges the sample space of the original source-biased dataset. Experimental results on a large collection of real patient data (1,221 COVID-19 and 1,520 negative CT images, the latter consisting of 786 community-acquired pneumonia and 734 non-pneumonia cases) from eight hospitals and health institutions show that: 1) MINI can improve COVID-19 classification performance upon the baseline (which does not deal with the source bias), and 2) MINI is superior to competing methods in terms of the extent of improvement.
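MINI's core operation, as described, is to synthesize volumes of the class absent at a given hospital by combining samples across hospitals. Below is a minimal mixup-style sketch of that idea; the linear interpolation form and the `alpha` weight are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def mix_and_interpolate(vol_a, vol_b, alpha=0.5):
    """Hypothetical MINI-style mixing: blend two CT volumes from different
    hospitals to synthesize a sample of the class absent at one source.
    alpha weighs vol_a's contribution; labels would mix with the same weight."""
    assert vol_a.shape == vol_b.shape
    return alpha * vol_a + (1.0 - alpha) * vol_b

rng = np.random.default_rng(1)
covid_hosp_a = rng.random((4, 8, 8))    # toy "COVID-19" volume, hospital A
normal_hosp_b = rng.random((4, 8, 8))   # toy "negative" volume, hospital B
mixed = mix_and_interpolate(covid_hosp_a, normal_hosp_b, alpha=0.7)
print(mixed.shape)   # → (4, 8, 8)
```

The synthesized volume carries partial evidence of both classes, so a hospital that contributed only one class no longer maps one-to-one onto a label.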

4.
IEEE Trans Med Imaging ; PP, 2021 Oct 04.
Article in English | MEDLINE | ID: mdl-34606452

ABSTRACT

In semi-supervised medical image segmentation, most previous works draw on the common assumption that higher entropy means higher uncertainty. In this paper, we investigate a novel method of estimating uncertainty. We observe that when a pixel's segmentation result becomes inconsistent under moderately different misclassification costs, that pixel exhibits relative uncertainty in its segmentation. Therefore, we present a new semi-supervised segmentation model, namely the conservative-radical network (CoraNet for short), based on our uncertainty estimation and a separate self-training strategy. In particular, our CoraNet model consists of three major components: a conservative-radical module (CRM), a certain region segmentation network (C-SN), and an uncertain region segmentation network (UC-SN), which can be alternately trained in an end-to-end manner. We have extensively evaluated our method on various segmentation tasks with publicly available benchmark datasets, including CT pancreas, MR endocardium, and MR multi-structure segmentation on the ACDC dataset. Compared with the current state of the art, our CoraNet demonstrates superior performance. In addition, we analyze its connection with, and difference from, conventional methods of uncertainty estimation in semi-supervised medical image segmentation.
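The uncertainty criterion above can be made concrete with a toy per-pixel example: segment the same foreground probabilities under a conservative and a radical misclassification-cost setting, and flag the pixels whose decision flips. The specific cost values below are illustrative assumptions, not CoraNet's actual configuration.

```python
import numpy as np

def cost_weighted_mask(prob_fg, fg_cost, bg_cost):
    """Segment with asymmetric misclassification costs: call a pixel foreground
    when the cost-weighted foreground evidence exceeds the background's."""
    return (prob_fg * fg_cost) > ((1.0 - prob_fg) * bg_cost)

def uncertainty_by_inconsistency(prob_fg):
    """CoraNet-style estimate (sketched): pixels whose label flips between a
    conservative (high false-positive cost) and a radical (low false-positive
    cost) setting are marked uncertain."""
    conservative = cost_weighted_mask(prob_fg, fg_cost=1.0, bg_cost=2.0)
    radical = cost_weighted_mask(prob_fg, fg_cost=2.0, bg_cost=1.0)
    return conservative != radical   # True where the decision is inconsistent

prob = np.array([0.1, 0.4, 0.6, 0.9])
print(uncertainty_by_inconsistency(prob))   # flags the two borderline pixels
```

Confidently high or low probabilities agree under both cost settings; only mid-range pixels flip, which is the "relative uncertainty" the CRM exploits.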

5.
IEEE Trans Med Imaging ; PP, 2021 Oct 04.
Article in English | MEDLINE | ID: mdl-34606453

ABSTRACT

Medical images from multiple centres often suffer from the domain shift problem, so that deep learning models trained on one domain usually fail to generalize well to another. One potential solution is the generative adversarial network (GAN), which can translate images between different domains. Nevertheless, existing GAN-based approaches are prone to fail at preserving image objects in image-to-image (I2I) translation, which reduces their practicality for domain adaptation tasks. In this regard, we propose a novel GAN (named IB-GAN) to preserve image objects during cross-domain I2I adaptation. Specifically, we integrate an information bottleneck constraint into the typical cycle-consistency-based GAN to discard superfluous information (e.g., domain information) and maintain the consistency of disentangled content features for image-object preservation. The proposed IB-GAN is evaluated on three tasks: polyp segmentation using colonoscopic images, optic disc and cup segmentation in fundus images, and whole heart segmentation using multi-modal volumes. We show that the proposed IB-GAN can generate realistic translated images and remarkably boost the generalization of widely used segmentation networks (e.g., U-Net).

6.
IEEE Trans Med Imaging ; PP, 2021 Aug 16.
Article in English | MEDLINE | ID: mdl-34398751

ABSTRACT

Unsupervised domain adaptation (UDA) methods have shown their promising performance in the cross-modality medical image segmentation tasks. These typical methods usually utilize a translation network to transform images from the source domain to target domain or train the pixel-level classifier merely using translated source images and original target images. However, when there exists a large domain shift between source and target domains, we argue that this asymmetric structure, to some extent, could not fully eliminate the domain gap. In this paper, we present a novel deep symmetric architecture of UDA for medical image segmentation, which consists of a segmentation sub-network, and two symmetric source and target domain translation sub-networks. To be specific, based on two translation sub-networks, we introduce a bidirectional alignment scheme via a shared encoder and two private decoders to simultaneously align features 1) from source to target domain and 2) from target to source domain, which is able to effectively mitigate the discrepancy between domains. Furthermore, for the segmentation sub-network, we train a pixel-level classifier using not only original target images and translated source images, but also original source images and translated target images, which could sufficiently leverage the semantic information from the images with different styles. Extensive experiments demonstrate that our method has remarkable advantages compared to the state-of-the-art methods in three segmentation tasks, i.e., cross-modality cardiac, BraTS, and abdominal multi-organ segmentation.

7.
Med Image Anal ; 74: 102214, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34464837

ABSTRACT

Medical image segmentation has achieved excellent progress with large-scale datasets, which empower us to train potent deep convolutional neural networks (DCNNs). However, labeling such large-scale datasets is laborious and error-prone, making noisy (or incorrect) labels a ubiquitous problem in real-world scenarios. In addition, data collected from different sites usually exhibit significant data distribution shift (or domain shift). As a result, noisy labels and domain shift are two common problems in medical imaging application scenarios, especially in medical image segmentation, and they significantly degrade the performance of deep learning models. In this paper, we identify a novel problem hidden in medical image segmentation, namely unsupervised domain adaptation on noisy labeled data, and propose a novel algorithm named "Self-Cleansing Unsupervised Domain Adaptation" (S-CUDA) to address it. S-CUDA sets up a realistic scenario to solve the above problems simultaneously, where the training data (i.e., source domain) not only shows domain shift w.r.t. the unsupervised test data (i.e., target domain) but also contains noisy labels. The key idea of S-CUDA is to learn noise-excluding and domain-invariant knowledge from noisy supervised data, which is then applied to the highly corrupted data for label cleansing and further data recycling, as well as to the test data with domain shift for supervised propagation. To this end, we propose a novel framework leveraging noisy-label learning and domain adaptation techniques to cleanse the noisy labels and learn from trustable clean samples, thus enabling robust adaptation and prediction on the target domain. Specifically, we train two peer adversarial networks to identify high-confidence clean data and exchange them with their companions to eliminate the error accumulation problem and narrow the domain gap simultaneously.
In the meantime, high-confidence noisy data are detected and cleansed in order to reuse the contaminated training data. Therefore, our proposed method can not only cleanse the noisy labels in the training set but also take full advantage of the existing noisy data to update the parameters of the network. For evaluation, we conduct experiments on two popular datasets (REFUGE and Drishti-GS) for optic disc (OD) and optic cup (OC) segmentation, and on another public multi-vendor dataset for spinal cord gray matter (SCGM) segmentation. Experimental results show that our proposed method can cleanse noisy labels efficiently and obtain a model with better generalization performance at the same time, outperforming previous state-of-the-art methods by a large margin. Our code can be found at https://github.com/zzdxjtu/S-cuda.
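The peer-network exchange described above follows the small-loss selection idea familiar from co-teaching: each network keeps its smallest-loss samples as likely-clean and hands them to its peer. A schematic sketch follows; the keep ratio and loss values are made up, and S-CUDA's actual selection additionally involves adversarial and confidence components.

```python
import numpy as np

def select_clean_for_peer(losses, keep_ratio=0.6):
    """Small-loss selection: return indices of the likely-clean samples
    (lowest per-sample loss) to be used for training the peer network."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:k]

loss_net_a = np.array([0.2, 1.5, 0.1, 0.9, 0.3])   # per-sample losses, net A
loss_net_b = np.array([0.25, 1.4, 0.15, 0.2, 1.1]) # per-sample losses, net B
clean_for_b = select_clean_for_peer(loss_net_a)    # A's picks train B
clean_for_a = select_clean_for_peer(loss_net_b)    # B's picks train A
print(sorted(clean_for_b.tolist()), sorted(clean_for_a.tolist()))
# → [0, 2, 4] [0, 2, 3]
```

Because each network trains on samples selected by the other, its own early mistakes are less likely to be reinforced, which is the error-accumulation argument above.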

8.
Eur Radiol ; 2021 Aug 21.
Article in English | MEDLINE | ID: mdl-34417848

ABSTRACT

OBJECTIVES: The molecular subtyping of diffuse gliomas is important. The aim of this study was to establish predictive models based on preoperative multiparametric MRI. METHODS: A total of 1016 diffuse glioma patients were retrospectively collected from Beijing Tiantan Hospital. Patients were randomly divided into training (n = 780) and validation (n = 236) sets. According to the 2016 WHO classification, diffuse gliomas can be classified via four binary classification tasks (tasks I-IV). Predictive models based on radiomics and a deep convolutional neural network (DCNN) were developed respectively, and their performances were compared using receiver operating characteristic (ROC) curves. Additionally, the radiomics and DCNN features were visualized and compared with the t-distributed stochastic neighbor embedding technique and Spearman's correlation test. RESULTS: In the training set, the areas under the curve (AUCs) of the DCNN models (ranging from 0.99 to 1.00) outperformed the radiomics models in all tasks, and the accuracies of the DCNN models (ranging from 0.90 to 0.94) outperformed the radiomics models in tasks I, II, and III. In the independent validation set, the accuracies of the DCNN models outperformed the radiomics models in all tasks (0.74-0.83), and the AUCs of the DCNN models (0.85-0.89) outperformed the radiomics models in tasks I, II, and III. DCNN features demonstrated stronger discriminative capability than the radiomics features in the feature visualization analysis, and the overall correlations between them were weak. CONCLUSIONS: Both the radiomics and DCNN models could preoperatively predict the molecular subtypes of diffuse gliomas, and the latter performed better in most circumstances. KEY POINTS: • The molecular subtypes of diffuse gliomas could be predicted with MRI. • Deep learning features tend to outperform radiomics features in large cohorts. • The correlation between the radiomics features and DCNN features was low.

9.
IEEE Trans Med Imaging ; 40(12): 3641-3651, 2021 12.
Article in English | MEDLINE | ID: mdl-34197318

ABSTRACT

As labeled anomalous medical images are usually difficult to acquire, especially for rare diseases, deep learning based methods, which heavily rely on large amounts of labeled data, cannot yield a satisfactory performance. Compared to anomalous data, normal images without the need for lesion annotation are much easier to collect. In this paper, we propose an anomaly detection framework, namely SALAD, extracting self-supervised and translation-consistent features for anomaly detection. The proposed SALAD is a reconstruction-based method, which learns the manifold of normal data through an encode-and-reconstruct translation between image and latent spaces. In particular, two constraints (i.e., a structure similarity loss and a center constraint loss) are proposed to regulate the cross-space (i.e., image and feature) translation, which enforce the model to learn translation-consistent and representative features from the normal data. Furthermore, a self-supervised learning module is engaged in our framework to further boost the anomaly detection accuracy by deeply exploiting useful information from the raw normal data. An anomaly score, as a measure to separate the anomalous data from the healthy ones, is constructed based on the learned self-supervised-and-translation-consistent features. Extensive experiments are conducted on optical coherence tomography (OCT) and chest X-ray datasets. The experimental results demonstrate the effectiveness of our approach.
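A reconstruction-based anomaly score of the kind described, combining image-space reconstruction error with a latent-space distance to a center learned from normal data, can be sketched as follows. The toy linear encoder/decoder and the weight `w` are illustrative assumptions, not SALAD's actual networks or losses.

```python
import numpy as np

def anomaly_score(image, encode, decode, center, w=0.5):
    """Score = reconstruction error + weighted latent distance to the normal
    center; models trained only on normal data reconstruct anomalies poorly,
    so high scores flag anomalous inputs."""
    z = encode(image)
    recon = decode(z)
    rec_err = np.mean((image - recon) ** 2)
    feat_err = np.linalg.norm(z - center)
    return rec_err + w * feat_err

# Toy linear "autoencoder" fit to a 1-D normal manifold (values near 1.0).
encode = lambda x: np.array([x.mean()])
decode = lambda z: np.full(4, z[0])
center = np.array([1.0])

normal = np.array([1.0, 1.1, 0.9, 1.0])
anomalous = np.array([1.0, 5.0, 0.9, 1.0])   # one out-of-manifold value
print(anomaly_score(normal, encode, decode, center) <
      anomaly_score(anomalous, encode, decode, center))   # → True
```

Thresholding such a score separates anomalous from healthy samples without ever needing anomalous training labels.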

10.
Med Image Anal ; 70: 102006, 2021 05.
Article in English | MEDLINE | ID: mdl-33690025

ABSTRACT

Cervical cancer causes the fourth-most cancer-related deaths of women worldwide. Early detection of cervical intraepithelial neoplasia (CIN) can significantly increase the survival rate of patients. The World Health Organization (WHO) divides CIN into three grades (CIN1, CIN2 and CIN3). In clinical practice, different CIN grades require different treatments. Although existing studies have proposed computer aided diagnosis (CAD) systems for cervical cancer diagnosis, most of them fail to accurately separate CIN1 from CIN2/3, due to the similar appearances under colposcopy. To boost the accuracy of CAD systems, we construct a colposcopic image dataset for GRAding cervical intraepithelial Neoplasia with fine-grained lesion Description (GRAND). The dataset consists of colposcopic images collected from 8,604 patients along with their pathological reports. Additionally, we invited experienced colposcopists to annotate two main clues usually adopted for clinical diagnosis of CIN grade, i.e., the texture of acetowhite epithelium (TAE) and the appearance of blood vessels (ABV). A multi-rater model using the annotated clues is benchmarked on our dataset. The proposed framework contains several sub-networks (raters) to exploit the fine-grained lesion features, TAE and ABV, respectively, by contrastive learning, and a backbone network to extract global information from colposcopic images. A comprehensive experiment is conducted on our GRAND dataset. The experimental results demonstrate the benefit of using the additional lesion descriptions (TAE and ABV), which increase the CIN grading accuracy by over 10%. Furthermore, we conduct a human-machine comparison to evaluate the potential of the proposed benchmark framework for clinical applications. In particular, three colposcopists at different professional levels (intern, in-service and professional) were invited to compete with our benchmark framework on the same extra test set; our framework achieves a CIN grading accuracy comparable to that of a professional colposcopist.


Subjects
Cervical Intraepithelial Neoplasia , Uterine Cervical Neoplasms , Benchmarking , Cervical Intraepithelial Neoplasia/diagnostic imaging , Colposcopy , Female , Humans , Pregnancy , Uterine Cervical Neoplasms/diagnostic imaging
11.
Article in English | MEDLINE | ID: mdl-33587702

ABSTRACT

Electroencephalogram (EEG) has been widely used in brain computer interface (BCI) due to its convenience and reliability. The EEG-based BCI applications are majorly limited by the time-consuming calibration procedure for discriminative feature representation and classification. Existing EEG classification methods either heavily depend on the handcrafted features or require adequate annotated samples at each session for calibration. To address these issues, we propose a novel dynamic joint domain adaptation network based on adversarial learning strategy to learn domain-invariant feature representation, and thus improve EEG classification performance in the target domain by leveraging useful information from the source session. Specifically, we explore the global discriminator to align the marginal distribution across domains, and the local discriminator to reduce the conditional distribution discrepancy between sub-domains via conditioning on deep representation as well as the predicted labels from the classifier. In addition, we further investigate a dynamic adversarial factor to adaptively estimate the relative importance of alignment between the marginal and conditional distributions. To evaluate the efficacy of our method, extensive experiments are conducted on two public EEG datasets, namely, Datasets IIa and IIb of BCI Competition IV. The experimental results demonstrate that the proposed method achieves superior performance compared with the state-of-the-art methods.


Subjects
Brain-Computer Interfaces , Algorithms , Electroencephalography , Humans , Learning , Reproducibility of Results
12.
IEEE Trans Neural Netw Learn Syst ; 32(2): 535-545, 2021 02.
Article in English | MEDLINE | ID: mdl-32745012

ABSTRACT

In the context of motor imagery, electroencephalography (EEG) data vary from subject to subject, such that the performance of a classifier trained on data of multiple subjects from a specific domain typically degrades when applied to a different subject. While collecting enough samples from each subject would address this issue, it is often too time-consuming and impractical. To tackle this problem, we propose a novel end-to-end deep domain adaptation method to improve the classification performance on a single subject (target domain) by taking useful information from multiple subjects (source domain) into consideration. Specifically, the proposed method jointly optimizes three modules: a feature extractor, a classifier, and a domain discriminator. The feature extractor learns discriminative latent features by mapping the raw EEG signals into a deep representation space. A center loss is further employed to constrain an invariant feature space and reduce intra-subject nonstationarity. Furthermore, the domain discriminator matches the feature distribution shift between source and target domains via an adversarial learning strategy. Finally, based on the consistent deep features from both domains, the classifier is able to leverage information from the source domain and accurately predict labels in the target domain at test time. To evaluate our method, we have conducted extensive experiments on two real public EEG datasets, Data Sets IIa and IIb of Brain-Computer Interface (BCI) Competition IV. The experimental results validate the efficacy of our method, which is therefore promising for reducing calibration time and promoting the development of BCI.
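The center loss mentioned above penalizes the distance between each deep feature and its class center, shrinking intra-class (here, intra-subject) scatter. A minimal sketch with a simple running center update follows; the update rule and learning rate are standard choices assumed for illustration, not taken from the paper.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: mean squared distance of each feature to its class center,
    pulling same-class features together in the representation space."""
    return 0.5 * np.mean(np.sum((features - centers[labels]) ** 2, axis=1))

def update_centers(features, labels, centers, lr=0.5):
    """Move each class center toward the mean of the features assigned to it."""
    new = centers.copy()
    for c in range(len(centers)):
        mask = labels == c
        if mask.any():
            new[c] += lr * (features[mask].mean(axis=0) - centers[c])
    return new

feats = np.array([[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]])   # toy deep features
labels = np.array([0, 0, 1])
centers = np.zeros((2, 2))
centers = update_centers(feats, labels, centers)
loss = center_loss(feats, labels, centers)
print(round(loss, 3))   # → 0.117
```

In training, this term is added to the classification loss, so features stay discriminative while intra-class variation shrinks.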

13.
J Cancer Res Clin Oncol ; 147(3): 821-833, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32852634

ABSTRACT

PURPOSE: Microvascular invasion (MVI) is a valuable predictor of survival in hepatocellular carcinoma (HCC) patients. This study developed predictive models using eXtreme Gradient Boosting (XGBoost) and deep learning based on CT images to predict MVI preoperatively. METHODS: In total, 405 patients were included. A total of 7302 radiomic features and 17 radiological features were extracted by a radiomics feature extraction package and radiologists, respectively. We developed an XGBoost model based on radiomics features, radiological features and clinical variables, and a three-dimensional convolutional neural network (3D-CNN), to predict MVI status. We then compared the efficacy of the two models. RESULTS: Of the 405 patients, 220 (54.3%) were MVI positive, and 185 (45.7%) were MVI negative. The areas under the receiver operating characteristic curves (AUROCs) of the Radiomics-Radiological-Clinical (RRC) Model and the 3D-CNN Model in the training set were 0.952 (95% confidence interval (CI) 0.923-0.973) and 0.980 (95% CI 0.959-0.993), respectively (p = 0.14). The AUROCs of the RRC Model and the 3D-CNN Model in the validation set were 0.887 (95% CI 0.797-0.947) and 0.906 (95% CI 0.821-0.960), respectively (p = 0.83). Based on the MVI status predicted by the RRC and 3D-CNN Models, the mean recurrence-free survival (RFS) was significantly better in the predicted MVI-negative group than in the predicted MVI-positive group (RRC Model: 69.95 vs. 24.80 months, p < 0.001; 3D-CNN Model: 64.06 vs. 31.05 months, p = 0.027). CONCLUSION: The RRC and 3D-CNN Models showed considerable efficacy in identifying MVI preoperatively. These machine learning models may facilitate decision-making in HCC treatment but require further validation.


Subjects
Carcinoma, Hepatocellular/blood supply , Deep Learning , Liver Neoplasms/blood supply , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/pathology , Cohort Studies , Disease-Free Survival , Female , Humans , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Male , Microcirculation , Middle Aged , Models, Statistical , Neovascularization, Pathologic/diagnostic imaging , Neovascularization, Pathologic/pathology , Retrospective Studies
14.
Med Image Anal ; 67: 101832, 2021 01.
Article in English | MEDLINE | ID: mdl-33166776

ABSTRACT

Segmentation of medical images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) used for visualizing diseased atrial structures, is a crucial first step for ablation treatment of atrial fibrillation. However, direct segmentation of LGE-MRIs is challenging due to the varying intensities caused by contrast agents. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the 2018 Left Atrium Segmentation Challenge using 154 3D LGE-MRIs, currently the world's largest atrial LGE-MRI dataset, with associated left atrium labels segmented by three medical experts, ultimately attracting 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed, including subgroup and hyper-parameter analyses, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show that the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that two sequentially used CNNs, in which a first CNN performs automatic region-of-interest localization and a subsequent CNN performs refined regional segmentation, achieved superior results compared with traditional methods and machine learning approaches containing a single CNN. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for atrial LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field.
Furthermore, the findings from this study can potentially be extended to other imaging datasets and modalities, having an impact on the wider medical imaging community.
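The Dice score reported as the challenge's primary metric can be computed directly from binary masks. A small self-contained check (the mask shapes are toys):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True   # 16 pixels
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True       # 16 pixels
print(dice_score(pred, gt))   # → 0.5625
```

The top challenge entry's 93.2% corresponds to this quantity averaged over test volumes; the surface-to-surface distance metric additionally measures boundary agreement in millimetres.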


Subjects
Benchmarking , Gadolinium , Algorithms , Heart Atria/diagnostic imaging , Humans , Magnetic Resonance Imaging
15.
Med Image Anal ; 67: 101876, 2021 01.
Article in English | MEDLINE | ID: mdl-33197863

ABSTRACT

Fully convolutional networks (FCNs) trained with abundant labeled data have been proven to be a powerful and efficient solution for medical image segmentation. However, FCNs often fail to achieve satisfactory results due to the lack of labeled data and significant variability of appearance in medical imaging. To address this challenging issue, this paper proposes a conjugate fully convolutional network (CFCN), where pairwise samples are input to capture a rich context representation and guide each other through a fusion module. To avoid the overfitting introduced by intra-class heterogeneity and boundary ambiguity with a small number of training samples, we propose to explicitly exploit prior information from the label space, termed proxy supervision. We further extend the CFCN to a compact conjugate fully convolutional network (C2FCN), which has just one head fitting the proxy supervision, avoiding the two additional decoder branches that CFCN requires to fit the ground truth of the input pairs. In the test phase, the segmentation probability is inferred from the logical relation implied in the learned proxy supervision. Quantitative evaluation on the Liver Tumor Segmentation (LiTS) and Combined (CT-MR) Healthy Abdominal Organ Segmentation (CHAOS) datasets shows that the proposed framework achieves a significant performance improvement on both binary and multi-category segmentation, especially with a limited amount of training data. The source code is available at https://github.com/renzhenwang/pairwise_segmentation.


Subjects
Image Processing, Computer-Assisted , Liver Neoplasms , Diagnostic Imaging , Humans , Liver Neoplasms/diagnostic imaging , Probability
16.
IEEE Trans Med Imaging ; 40(10): 2656-2671, 2021 10.
Article in English | MEDLINE | ID: mdl-33338014

ABSTRACT

Medical image segmentation has achieved remarkable advancements using deep neural networks (DNNs). However, DNNs often need large amounts of data and annotations for training, both of which can be difficult and costly to obtain. In this work, we propose a unified framework for generalized low-shot (one- and few-shot) medical image segmentation based on distance metric learning (DML). Unlike most existing methods, which only deal with the lack of annotations while assuming an abundance of data, our framework works with extreme scarcity of both, which is ideal for rare diseases. Via DML, the framework learns a multimodal mixture representation for each category and performs dense predictions based on cosine distances between the pixels' deep embeddings and the category representations. The multimodal representations effectively utilize inter-subject similarities and intra-class variations to overcome overfitting due to extremely limited data. In addition, we propose adaptive mixing coefficients for the multimodal mixture distributions to adaptively emphasize the modes better suited to the current input. The representations are implicitly embedded as weights of the fully connected (fc) layer, such that the cosine distances can be computed efficiently via forward propagation. In our experiments on brain MRI and abdominal CT datasets, the proposed framework achieves superior performance for low-shot segmentation compared with standard DNN-based (3D U-Net) and classical registration-based (ANTs) methods, e.g., achieving mean Dice coefficients of 81%/69% for brain tissue/abdominal multi-organ segmentation using a single training sample, as compared to 52%/31% and 72%/35% by the U-Net and ANTs, respectively.
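The prototype-as-fc-weights trick described above reduces dense cosine-distance prediction to a normalized matrix product. A sketch with single-mode prototypes follows; the multimodal mixtures and adaptive mixing coefficients of the actual framework are omitted for brevity.

```python
import numpy as np

def cosine_dense_predict(embeddings, prototypes):
    """Dense prediction by cosine similarity: L2-normalize pixel embeddings and
    category prototypes, then a plain matmul (equivalently, an fc layer whose
    weights are the normalized prototypes) yields per-class scores."""
    e = embeddings / np.linalg.norm(embeddings, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    return e @ p.T   # (num_pixels, num_classes) cosine similarities

pixels = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 2.0]])  # toy pixel embeddings
protos = np.array([[1.0, 0.0], [0.0, 1.0]])              # one prototype/class
scores = cosine_dense_predict(pixels, protos)
print(scores.argmax(axis=1))   # → [0 1 1]
```

Because the normalization folds into the weights, inference is a single forward pass; only the prototype construction differs from a standard classification head.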


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Magnetic Resonance Imaging
17.
Sci Rep ; 10(1): 21122, 2020 12 03.
Article in English | MEDLINE | ID: mdl-33273592

ABSTRACT

The current outbreak of coronavirus disease 2019 (COVID-19) has recently been declared a pandemic and has spread over 200 countries and territories. Forecasting the long-term trend of the COVID-19 epidemic can help health authorities determine the transmission characteristics of the virus and take appropriate prevention and control strategies beforehand. Previous studies that solely applied traditional epidemic models or machine learning models were subject to underfitting or overfitting problems. We propose a new model named Dynamic-Susceptible-Exposed-Infective-Quarantined (D-SEIQ), by making appropriate modifications to the Susceptible-Exposed-Infective-Recovered (SEIR) model and integrating machine-learning-based parameter optimization under epidemiologically rational constraints. We used the model to predict the long-term reported cumulative numbers of COVID-19 cases in China from January 27, 2020. We evaluated our model on officially reported confirmed cases from three different regions in China, and the results proved the effectiveness of our model in simulating and predicting the trend of the COVID-19 outbreak. In the China-excluding-Hubei area, within 7 days of the first public report, our model successfully and accurately predicted the long-term trend up to 40 days and the exact date of the outbreak peak. The predicted cumulative number (12,506) by March 10, 2020, was only 3.8% different from the actual number (13,005). The parameters obtained by our model demonstrated the effectiveness of prevention and intervention strategies on epidemic control in China. The prediction results for five other countries suggested the external validity of our model. The integrated approach of epidemic and machine learning models can accurately forecast the long-term trend of the COVID-19 outbreak, and the model parameters also provide insights into the analysis of COVID-19 transmission and the effectiveness of interventions in China.
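The SEIR backbone that D-SEIQ modifies can be illustrated with a simple forward-Euler integration. The rates below are generic textbook-style values, not the parameters fitted in the study, and the quarantine compartment and learned constraints of D-SEIQ are omitted.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, n, dt=1.0):
    """One Euler step of the classic SEIR model.
    beta: transmission rate, sigma: 1/incubation period, gamma: recovery rate."""
    new_exposed = beta * s * i / n     # S -> E
    new_infective = sigma * e          # E -> I
    new_recovered = gamma * i          # I -> R
    return (s - dt * new_exposed,
            e + dt * (new_exposed - new_infective),
            i + dt * (new_infective - new_recovered),
            r + dt * new_recovered)

n = 1_000_000.0
s, e, i, r = n - 10, 0.0, 10.0, 0.0    # seed 10 infectives
for _ in range(120):                   # simulate 120 days
    s, e, i, r = seir_step(s, e, i, r, beta=0.5, sigma=1/5.2, gamma=1/10, n=n)
print(abs((s + e + i + r) - n) < 1e-3)   # population is conserved → True
```

D-SEIQ keeps this compartmental skeleton but fits the rate parameters by machine learning under epidemiological constraints, which is what lets it match reported cumulative case curves.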


Subjects
COVID-19/epidemiology , Pandemics/statistics & numerical data , China , Forecasting/methods , Humans , Models, Statistical
18.
BMC Med ; 18(1): 406, 2020 12 22.
Article in English | MEDLINE | ID: mdl-33349257

ABSTRACT

BACKGROUND: Colposcopy diagnosis and directed biopsy are key components of cervical cancer screening programs, but their performance is limited by the requirement for experienced colposcopists. This study aimed to develop and validate a Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) for grading colposcopic impressions and guiding biopsies. METHODS: Anonymized digital records of 19,435 patients were obtained from six hospitals across China. These records included colposcopic images, clinical information, and pathological results (the gold standard). The data were randomly assigned (7:1:2) to a training set and a tuning set for developing CAIADS, and to a validation set for evaluating performance. RESULTS: Agreement between CAIADS-graded colposcopic impressions and pathology findings was higher than that of colposcopies interpreted by colposcopists (82.2% versus 65.9%, kappa 0.750 versus 0.516, p < 0.001). For detecting pathological high-grade squamous intraepithelial lesions or worse (HSIL+), CAIADS showed higher sensitivity than colposcopist interpretation at either biopsy threshold (low-grade or worse: 90.5%, 95% CI 88.9-91.4% versus 83.5%, 81.5-85.3%; high-grade or worse: 71.9%, 69.5-74.2% versus 60.4%, 57.9-62.9%; all p < 0.001), whereas specificities were similar (low-grade or worse: 51.8%, 49.8-53.8% versus 52.0%, 50.0-54.1%; high-grade or worse: 93.9%, 92.9-94.9% versus 94.9%, 93.9-95.7%; all p > 0.05). CAIADS also demonstrated superior ability to predict biopsy sites, with a median mean intersection-over-union (mIoU) of 0.758. CONCLUSIONS: CAIADS has potential to assist beginners and to improve the diagnostic quality of colposcopy and biopsy in detecting cervical precancer and cancer.
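The headline agreement statistic above (kappa 0.750 versus 0.516) is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch follows; the label names and counts are made-up toy data, not the study's.

```python
# Cohen's kappa: chance-corrected agreement between two label sequences,
# e.g. graded colposcopic impressions versus pathology findings.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (observed agreement - expected agreement) / (1 - expected)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (observed - expected) / (1 - expected)

pred = ["normal", "lsil", "hsil", "hsil", "normal", "lsil"]   # toy gradings
truth = ["normal", "lsil", "hsil", "lsil", "normal", "hsil"]  # toy pathology
kappa = cohens_kappa(pred, truth)  # 4/6 observed, 1/3 expected -> 0.5
```

Here 4 of 6 labels agree (observed 0.667) while balanced marginals give an expected agreement of 1/3, so kappa is 0.5 despite 66.7% raw agreement, which is why the study reports both figures.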


Subjects
Artificial Intelligence , Carcinoma, Squamous Cell/diagnosis , Colposcopy/methods , Early Detection of Cancer/methods , Uterine Cervical Neoplasms/diagnosis , Adult , Aged , Biopsy/methods , Biopsy/statistics & numerical data , Carcinoma, Squamous Cell/pathology , Carcinoma, Squamous Cell/prevention & control , China/epidemiology , Colposcopy/statistics & numerical data , Data Accuracy , Diagnostic Tests, Routine/methods , Early Detection of Cancer/statistics & numerical data , Female , Humans , Middle Aged , Neoplasm Grading/methods , Predictive Value of Tests , Pregnancy , Reproducibility of Results , Uterine Cervical Neoplasms/pathology , Uterine Cervical Neoplasms/prevention & control , Young Adult
19.
IEEE J Biomed Health Inform ; 24(10): 2787-2797, 2020 10.
Article in English | MEDLINE | ID: mdl-32816680

ABSTRACT

Coronavirus Disease 2019 (COVID-19) has spread rapidly worldwide since it was first reported. Timely diagnosis of COVID-19 is crucial both for disease control and for patient care. Non-contrast thoracic computed tomography (CT) has been identified as an effective diagnostic tool, yet the outbreak has placed tremendous pressure on radiologists reading the exams and may lead to fatigue-related misdiagnosis. Reliable automatic classification algorithms could be very helpful; however, they usually require a considerable number of COVID-19 cases for training, which are difficult to acquire in a timely manner. Meanwhile, effectively utilizing the existing archive of non-COVID-19 data (the negative samples) in the presence of severe class imbalance is another challenge, and the sudden outbreak necessitates fast algorithm development. In this work, we propose a novel approach for effective and efficient training of COVID-19 classification networks using a small number of COVID-19 CT exams and an archive of negative samples. Concretely, a novel self-supervised learning method is proposed to extract features from the COVID-19 and negative samples. Two kinds of soft labels ('difficulty' and 'diversity') are then generated for the negative samples by computing the earth mover's distances between the features of the negative and COVID-19 samples, from which the data 'value' of each negative sample can be assessed. A preset number of negative samples are selected accordingly and fed to the neural network for training. Experimental results show that our approach achieves superior performance using about half of the negative samples, substantially reducing model training time.
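The core selection idea, scoring negative samples by the earth mover's distance between their features and the COVID-19 features, can be sketched in one dimension, where the EMD between equal-size samples reduces to the mean absolute difference of sorted values. This is a simplified 'difficulty'-only sketch (the paper also uses a 'diversity' soft label), and the feature vectors below are random stand-ins for the self-supervised embeddings, not real data.

```python
# Toy 'value' scoring of negative samples via 1-D earth mover's distance
# between feature distributions; hard negatives (closest to the COVID-19
# feature distribution) are selected for training.

def emd_1d(xs, ys):
    """EMD between two equal-size 1-D samples = mean |sorted(xs) - sorted(ys)|."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def rank_negatives(neg_feats, pos_feats, k):
    """Pick the k negatives whose feature distributions are closest to positives."""
    scored = sorted(neg_feats.items(), key=lambda kv: emd_1d(kv[1], pos_feats))
    return [name for name, _ in scored[:k]]

covid_feats = [0.9, 1.1, 1.0, 0.95]        # stand-in COVID-19 embeddings
negatives = {
    "neg_a": [0.1, 0.2, 0.15, 0.1],        # easy: far from COVID-19 features
    "neg_b": [0.8, 1.0, 0.9, 1.05],        # hard: close to COVID-19 features
    "neg_c": [0.5, 0.6, 0.55, 0.5],        # intermediate
}
selected = rank_negatives(negatives, covid_feats, k=2)  # ["neg_b", "neg_c"]
```

For real multi-dimensional embeddings one would use a proper optimal-transport solver rather than this sorted-sample shortcut, which is exact only in 1-D.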


Subjects
Betacoronavirus , Clinical Laboratory Techniques/statistics & numerical data , Coronavirus Infections/diagnostic imaging , Coronavirus Infections/diagnosis , Pandemics , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/diagnosis , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , Supervised Machine Learning , Tomography, X-Ray Computed/statistics & numerical data , Algorithms , COVID-19 , COVID-19 Testing , Cohort Studies , Computational Biology , Coronavirus Infections/classification , Deep Learning , Diagnostic Errors/statistics & numerical data , Humans , Neural Networks, Computer , Pandemics/classification , Pneumonia, Viral/classification , Retrospective Studies , SARS-CoV-2
20.
IEEE Trans Med Imaging ; 39(12): 4174-4185, 2020 12.
Article in English | MEDLINE | ID: mdl-32755853

ABSTRACT

Fully convolutional neural networks have made promising progress in joint liver and liver-tumor segmentation. Instead of following the debate over 2D versus 3D networks (for example, balancing large-scale 2D pretraining against 3D context), in this paper we identify the wide variation in the ratio between intra- and inter-slice resolutions as a crucial obstacle to performance. To tackle the mismatch between intra- and inter-slice information, we propose a slice-aware 2.5D network that extracts discriminative features for each slice using not only in-plane semantics but also out-of-plane coherence. Specifically, we present a slice-wise multi-input multi-output architecture to instantiate this design paradigm, containing a Multi-Branch Decoder (MD) with a Slice-centric Attention Block (SAB) for learning slice-specific features and a Densely Connected Dice (DCD) loss that regularizes inter-slice predictions to be coherent and continuous. With these innovations, we achieve state-of-the-art results on the MICCAI 2017 Liver Tumor Segmentation (LiTS) dataset. We also test our model on the ISBI 2019 Segmentation of THoracic Organs at Risk (SegTHOR) dataset, and the results demonstrate the robustness and generalizability of the proposed method on other segmentation tasks.
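The building block of Dice-based losses such as the DCD loss above is the soft Dice overlap between a predicted probability map and a mask; coupling Dice terms across adjacent slices is one way to encourage inter-slice coherence. The sketch below illustrates that idea only: the smoothing term, flattened-list representation, and adjacent-pair coupling are assumptions of this sketch, not the paper's exact formulation.

```python
# Soft Dice overlap and a toy inter-slice coherence penalty that rewards
# adjacent slice predictions for agreeing with each other.

def soft_dice(pred, target, eps=1e-6):
    """Dice overlap between a probability map and a mask (both flattened lists)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

def slice_coherence_penalty(slice_preds):
    """Sum of (1 - Dice) over adjacent slice pairs; 0 means perfect coherence."""
    pairs = zip(slice_preds, slice_preds[1:])
    return sum(1.0 - soft_dice(a, b) for a, b in pairs)

# Three toy slice predictions: the first two are similar, the third diverges,
# so the second pair contributes most of the penalty.
pred_slices = [[0.9, 0.8, 0.1], [0.85, 0.75, 0.2], [0.1, 0.2, 0.9]]
penalty = slice_coherence_penalty(pred_slices)
```

In a segmentation loss this penalty would be added to the per-slice Dice terms against the ground truth, so the network is trained both to match each slice's mask and to produce continuous predictions across slices.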


Subjects
Image Processing, Computer-Assisted , Liver Neoplasms , Neural Networks, Computer , Humans , Liver Neoplasms/diagnostic imaging , Organs at Risk