Results 1 - 8 of 8
1.
Bioinformatics; 38(2): 461-468, 2022 Jan 03.
Article in English | MEDLINE | ID: mdl-34559177

ABSTRACT

MOTIVATION: Drug response prediction (DRP) plays an important role in precision medicine (e.g. for cancer analysis and treatment). Recent advances in deep learning algorithms make it possible to predict drug responses accurately based on genetic profiles. However, existing methods ignore the potential relationships among genes, and similarity among cell lines/drugs is rarely considered explicitly. RESULTS: We propose a novel DRP framework, called TGSA, to make better use of prior domain knowledge. TGSA consists of Twin Graph neural networks for Drug Response Prediction (TGDRP) and a Similarity Augmentation (SA) module that fuses fine-grained and coarse-grained information. Specifically, TGDRP abstracts cell lines as graphs based on STRING protein-protein association networks and uses Graph Neural Networks (GNNs) for representation learning. SA views DRP as an edge regression problem on a heterogeneous graph and utilizes GNNs to smooth the representations of similar cell lines/drugs. In addition, we introduce an auxiliary pre-training strategy to remedy the identified limitations of scarce data and poor out-of-distribution generalization. Extensive experiments on the GDSC2 dataset demonstrate that TGSA consistently outperforms all state-of-the-art baselines under various experimental settings. We further evaluate the effectiveness and contribution of each component of TGSA via ablation experiments. The promising performance of TGSA shows enormous potential for clinical applications in precision medicine. AVAILABILITY AND IMPLEMENTATION: The source code is available at https://github.com/violet-sto/TGSA. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
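The twin-graph idea described above can be illustrated with a minimal sketch: one graph encoder for the cell-line (gene) graph and one for the drug (atom) graph, with their pooled embeddings concatenated for drug-response regression. This is a simplified, hedged reconstruction in plain PyTorch, not the authors' implementation (see the linked repository); all class names, dimensions, and the toy graphs are hypothetical.

```python
# Minimal twin-GNN sketch for drug response regression (illustrative only;
# the actual TGSA implementation is at https://github.com/violet-sto/TGSA).
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W), with a dense adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj is assumed to be a row-normalized adjacency with self-loops.
        return torch.relu(adj @ self.lin(x))

class TwinGNNRegressor(nn.Module):
    """Encodes a cell-line (gene) graph and a drug (atom) graph, then regresses a response value."""
    def __init__(self, gene_dim, atom_dim, hidden=64):
        super().__init__()
        self.cell_gnn = SimpleGCNLayer(gene_dim, hidden)
        self.drug_gnn = SimpleGCNLayer(atom_dim, hidden)
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, gene_x, gene_adj, atom_x, atom_adj):
        cell_emb = self.cell_gnn(gene_x, gene_adj).mean(dim=0)   # mean-pool over genes
        drug_emb = self.drug_gnn(atom_x, atom_adj).mean(dim=0)   # mean-pool over atoms
        return self.head(torch.cat([cell_emb, drug_emb], dim=-1))

# Toy usage with random graphs standing in for a STRING-based gene graph and a drug molecule.
genes, atoms = 30, 12
model = TwinGNNRegressor(gene_dim=3, atom_dim=8)
pred = model(torch.randn(genes, 3), torch.eye(genes),
             torch.randn(atoms, 8), torch.eye(atoms))
print(pred.shape)  # torch.Size([1])
```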


Subjects
Neoplasms, Neural Networks (Computer), Humans, Algorithms, Software, Precision Medicine, Proteins
2.
Clin Exp Ophthalmol; 50(7): 714-723, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35704615

ABSTRACT

BACKGROUND: To evaluate artificial intelligence (AI) models based on objective indices and on raw corneal data from the Scheimpflug Pentacam HR system (OCULUS Optikgeräte GmbH, Wetzlar, Germany) for detecting the clinically unaffected eyes of patients with asymmetric keratoconus (AKC). METHODS: A total of 1108 eyes of 1108 patients were enrolled, including 430 eyes from normal control subjects, 231 clinically unaffected eyes from patients with AKC, and 447 eyes from keratoconus (KC) patients. Eyes were divided into a training set (664 eyes), a test set (222 eyes) and a validation set (222 eyes). AI models were built based on objective indices (XGBoost, LGBM, LR and RF) and on the raw data of the entire cornea (KerNet). The discriminating performance of the AI models was evaluated by accuracy and the area under the ROC curve (AUC). RESULTS: The KerNet model showed strong overall discriminating power in the test (accuracy = 94.67%, AUC = 0.985) and validation (accuracy = 94.12%, AUC = 0.990) sets, exceeding the index-derived AI models (accuracy = 84.02%-86.98%, AUC = 0.944-0.968). In the test set, the KerNet model demonstrated good diagnostic power for the AKC group (accuracy = 95.24%, AUC = 0.984). The validation set likewise confirmed that the KerNet model was useful for diagnosing the AKC group (accuracy = 94.12%, AUC = 0.983). CONCLUSIONS: KerNet outperformed all the index-derived AI models. Based on the raw data of the entire cornea, KerNet was helpful for distinguishing the clinically unaffected eyes of patients with AKC from normal eyes.
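As a hedged illustration of the index-derived baselines and of the accuracy/AUC evaluation described above, the sketch below trains a random-forest classifier on synthetic features standing in for Pentacam-derived indices and reports both metrics on test and validation splits. The feature dimensions, labels, and split sizes are placeholders, not the study's data.

```python
# Illustrative sketch of an index-based classifier evaluated by accuracy and AUC.
# Synthetic features stand in for Pentacam-derived corneal indices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1108, 12))          # 12 placeholder corneal indices per eye
y = rng.integers(0, 2, size=1108)        # 0 = normal, 1 = AKC/KC (toy labels)

# Roughly 60/20/20 train/test/validation split, mirroring the 664/222/222 eyes above.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
for name, Xs, ys in [("test", X_test, y_test), ("validation", X_val, y_val)]:
    acc = accuracy_score(ys, clf.predict(Xs))
    auc = roc_auc_score(ys, clf.predict_proba(Xs)[:, 1])
    print(f"{name}: accuracy={acc:.3f}, AUC={auc:.3f}")
```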


Subjects
Keratoconus, Artificial Intelligence, Cornea, Corneal Pachymetry, Corneal Topography/methods, Humans, Keratoconus/diagnosis, ROC Curve, Retrospective Studies, Tomography
3.
IEEE J Biomed Health Inform; 26(4): 1411-1421, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34314364

ABSTRACT

Accurate cervical lesion detection (CLD) methods using colposcopic images are in high demand in computer-aided diagnosis (CAD) for the automatic diagnosis of High-grade Squamous Intraepithelial Lesions (HSIL). However, compared to natural scene images, the specific characteristics of colposcopic images, such as low contrast, visual similarity, and ambiguous lesion boundaries, make it difficult to accurately locate HSIL regions and significantly impede the performance of existing CLD approaches. To tackle these difficulties and better capture cervical lesions, we develop novel feature-enhancing mechanisms from both global and local perspectives, and propose a new discriminative CLD framework, called CervixNet, with a Global Class Activation (GCA) module and a Local Bin Excitation (LBE) module. Specifically, the GCA module learns discriminative features by introducing an auxiliary classifier, and guides our model to focus on HSIL regions while ignoring noisy regions. It globally facilitates the feature extraction process and helps boost feature discriminability. Further, our LBE module excites lesion features in a local manner, allowing lesion regions to be enhanced in a more fine-grained way by explicitly modelling the inter-dependencies among the bins of a proposal feature. Extensive experiments on 9888 clinical colposcopic images verify the superiority of our method (AP.75 = 20.45) over state-of-the-art models on four widely used metrics.
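The local bin-excitation idea, re-weighting the bins of a proposal feature by modelling inter-bin dependencies, resembles squeeze-and-excitation applied over spatial bins. The sketch below is an assumption-laden reconstruction in PyTorch, not the CervixNet code; the module name, bottleneck sizes, and bin layout are hypothetical.

```python
# Hedged sketch of a bin-excitation module: re-weights the spatial bins of an
# RoI-pooled proposal feature by modelling dependencies among bins.
import torch
import torch.nn as nn

class BinExcitation(nn.Module):
    def __init__(self, bins, reduction=4):
        super().__init__()
        # Small bottleneck MLP that produces one excitation weight per bin.
        self.fc = nn.Sequential(
            nn.Linear(bins, bins // reduction), nn.ReLU(inplace=True),
            nn.Linear(bins // reduction, bins), nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (N, C, H, W) proposal features; bins = H * W.
        n, c, h, w = x.shape
        squeezed = x.mean(dim=1).flatten(1)           # (N, H*W): channel-averaged bin descriptor
        weights = self.fc(squeezed).view(n, 1, h, w)  # one weight per spatial bin
        return x * weights                            # excite/suppress bins, keep channels intact

# Toy usage on a 7x7 RoI-pooled proposal feature.
feat = torch.randn(2, 256, 7, 7)
out = BinExcitation(bins=49)(feat)
print(out.shape)  # torch.Size([2, 256, 7, 7])
```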


Assuntos
Colposcopia , Neoplasias do Colo do Útero , Colposcopia/métodos , Feminino , Humanos , Gravidez , Neoplasias do Colo do Útero/diagnóstico por imagem , Neoplasias do Colo do Útero/patologia
4.
IEEE Trans Med Imaging; 40(10): 2575-2588, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33606628

ABSTRACT

Many existing supervised deep learning methods for medical image segmentation suffer from the expensive burden of data annotation for model training. Recently, few-shot segmentation methods were proposed to alleviate this burden, but such methods often show poor adaptability to the target tasks. By prudently introducing interactive learning into the few-shot learning strategy, we develop a novel few-shot segmentation approach called Interactive Few-shot Learning (IFSL), which not only addresses the annotation burden of medical image segmentation models but also tackles the common issues of known few-shot segmentation methods. First, we design a new few-shot segmentation structure, called the Medical Prior-based Few-shot Learning Network (MPrNet), which uses only a few annotated samples (e.g., 10 samples) as support images to guide the segmentation of query images without any pre-training. Then, we propose an Interactive Learning-based Test Time Optimization Algorithm (IL-TTOA) to strengthen MPrNet on the fly for the target task in an interactive fashion. To the best of our knowledge, our IFSL approach is the first to allow few-shot segmentation models to be optimized and strengthened on the target tasks in an interactive and controllable manner. Experiments on four few-shot segmentation tasks show that our IFSL approach outperforms the state-of-the-art methods by more than 20% in the DSC metric. In particular, the interactive optimization algorithm (IL-TTOA) contributes a further ~10% DSC improvement for the few-shot segmentation models.
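To make the interactive test-time optimization idea concrete, here is a hedged sketch: a pre-built segmentation model is fine-tuned on the fly using a handful of user-provided scribble labels and a Dice loss, then produces the final mask. The network, loss, scribble format, and update schedule are placeholders rather than the IL-TTOA algorithm itself.

```python
# Hedged sketch of interactive test-time optimization for segmentation:
# fine-tune a model on sparse user scribbles before predicting the final mask.
import torch
import torch.nn as nn

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Placeholder segmentation network (stands in for a few-shot model such as MPrNet).
net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

image = torch.randn(1, 1, 64, 64)        # query image
scribble = torch.zeros(1, 1, 64, 64)     # sparse user annotation (1 = foreground click)
scribble[..., 30:34, 30:34] = 1.0
labelled = (scribble > 0).float()        # which pixels the user actually labelled

for step in range(10):                   # a few interactive refinement steps
    pred = net(image)
    loss = dice_loss(pred * labelled, scribble * labelled)  # supervise only on labelled pixels
    opt.zero_grad()
    loss.backward()
    opt.step()

final_mask = (net(image) > 0.5).float()
print(final_mask.mean().item())
```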


Assuntos
Aprendizado Profundo , Treinamento por Simulação , Algoritmos
5.
IEEE J Biomed Health Inform; 25(10): 3700-3708, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33232248

ABSTRACT

Colorectal cancer (CRC) is one of the most life-threatening malignancies. Colonoscopy pathology examination can identify cells of early-stage colon tumors in small tissue image slices, but such examination is time-consuming and exhausting on high-resolution images. In this paper, we present a new framework for colonoscopy pathology whole slide image (WSI) analysis, including lesion segmentation and tissue diagnosis. Our framework contains an improved U-shape network with a VGG net as the backbone, together with dedicated training and inference schemes. Based on the characteristics of colonoscopy pathology WSIs, we introduce a specific sampling strategy for sample selection and a transfer learning strategy for model training in our training scheme. In addition, we propose a specific loss function, the class-wise DSC loss, to train the segmentation network. In our inference scheme, we apply a sliding-window based sampling strategy for patch generation and a diploid ensemble (data ensemble and model ensemble) for the final prediction. We use the predicted segmentation mask to generate the classification probability that a WSI is malignant. To the best of our knowledge, DigestPath 2019 is the first challenge and the first public dataset available on colonoscopy tissue screening and segmentation, and our proposed framework yields good performance on this dataset. Our framework achieved a DSC of 0.7789 and an AUC of 1 on the online test dataset, and we won the [Formula: see text] place in the DigestPath 2019 Challenge (task 2). Our code is available at https://github.com/bhfs9999/colonoscopy_tissue_screen_and_segmentation.
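The class-wise DSC loss and the sliding-window patch sampling mentioned above can be sketched as follows. This is a generic reconstruction under stated assumptions, not the code from the linked repository; patch size, stride, and class count are placeholders.

```python
# Hedged sketch: a class-wise Dice (DSC) loss and a sliding-window patch generator
# for whole slide image (WSI) inference. Sizes and strides are placeholders.
import torch

def class_wise_dice_loss(probs, target_onehot, eps=1e-6):
    """probs, target_onehot: (N, C, H, W); returns the mean Dice loss over classes."""
    dims = (0, 2, 3)
    inter = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()

def sliding_window_patches(wsi, patch=512, stride=256):
    """Yield (y, x, crop) patches covering a (C, H, W) slide tensor."""
    _, h, w = wsi.shape
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            yield y, x, wsi[:, y:y + patch, x:x + patch]

# Toy usage on random tensors standing in for predictions and a small slide.
probs = torch.softmax(torch.randn(2, 2, 64, 64), dim=1)
target = torch.nn.functional.one_hot(torch.randint(0, 2, (2, 64, 64)), 2).permute(0, 3, 1, 2).float()
print(class_wise_dice_loss(probs, target).item())
print(sum(1 for _ in sliding_window_patches(torch.randn(3, 1024, 1024))))  # number of patches
```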


Assuntos
Aprendizado Profundo , Colonoscopia , Processamento de Imagem Assistida por Computador , Redes Neurais de Computação
6.
Article in English | MEDLINE | ID: mdl-32356757

ABSTRACT

Higher-resolution biopsy slice images reveal many details, which are widely used in medical practice. However, taking high-resolution slice images is more costly than taking low-resolution ones. In this paper, we propose a joint framework containing a novel transfer learning strategy and a deep super-resolution framework to generate high-resolution slice images from low-resolution ones. The super-resolution framework, called SRFBN+, is obtained by modifying the state-of-the-art SRFBN framework; specifically, the structure of the feedback block of SRFBN is modified to be more flexible. In addition, it is challenging to apply typical transfer learning strategies directly to tasks on slice images, as the patterns on different types of biopsy slice images vary. To this end, we propose a novel transfer learning strategy, called Channel Fusion Transfer Learning (CF-Trans). CF-Trans builds a middle domain by fusing the data manifolds of the source domain and the target domain, serving as a springboard for knowledge transfer. Thus, in the transfer learning setting, SRFBN+ can be trained on the source domain, then on the middle domain, and finally on the target domain. Experiments on biopsy slice images validate that SRFBN+ works well in generating super-resolution slice images and that CF-Trans is an efficient transfer learning strategy.
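The staged source-to-middle-to-target training described above can be sketched as a three-stage fine-tuning schedule. This is a schematic interpretation only: the "middle domain" below is a simple mixed sample pool, whereas CF-Trans fuses data manifolds at the channel level, and the toy 2x upsampler stands in for SRFBN+.

```python
# Hedged sketch of a three-stage transfer schedule: source -> fused "middle" domain -> target.
# The fusion below just interleaves samples from both domains; all names are placeholders.
import random
import torch
import torch.nn as nn

def train_stage(model, samples, epochs=1, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for low_res, high_res in samples:
            opt.zero_grad()
            loss_fn(model(low_res), high_res).backward()
            opt.step()

def make_middle_domain(source, target):
    """Builds the intermediate domain by mixing source and target image pairs."""
    mixed = list(source) + list(target)
    random.shuffle(mixed)
    return mixed

# Placeholder super-resolution model (stands in for SRFBN+): a toy 2x upsampler.
model = nn.Sequential(nn.Conv2d(3, 12, 3, padding=1), nn.PixelShuffle(2))
make_pairs = lambda n: [(torch.randn(1, 3, 16, 16), torch.randn(1, 3, 32, 32)) for _ in range(n)]
source_pairs, target_pairs = make_pairs(4), make_pairs(4)

train_stage(model, source_pairs)                                    # stage 1: source domain
train_stage(model, make_middle_domain(source_pairs, target_pairs))  # stage 2: middle domain
train_stage(model, target_pairs)                                    # stage 3: target domain
```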


Assuntos
Biópsia/métodos , Aprendizado Profundo , Processamento de Imagem Assistida por Computador/métodos , Microscopia/métodos , Algoritmos , Colo/patologia , Biologia Computacional , Bases de Dados Factuais , Feminino , Humanos , Ovário/patologia
7.
IEEE J Biomed Health Inform; 25(10): 3898-3910, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33979295

ABSTRACT

Keratoconus is one of the most severe corneal diseases; it is difficult to detect at the early stage (i.e., sub-clinical keratoconus) and can result in vision loss. In this paper, we propose a novel end-to-end deep learning approach, called KerNet, which processes the raw data of the Pentacam HR system (consisting of five numerical matrices) to detect keratoconus and sub-clinical keratoconus. Specifically, KerNet is a convolutional neural network containing five branches as the backbone with a multi-level fusion architecture. The five branches receive the five matrices separately and effectively capture the features of the different matrices through several cascaded residual blocks. The multi-level fusion architecture (i.e., low-level fusion and high-level fusion) takes into account the correlation among the five slices and fuses the extracted features for better prediction. Experimental results show that: (1) our approach outperforms state-of-the-art methods on an in-house dataset, by ~1% in keratoconus detection accuracy and ~4% in sub-clinical keratoconus detection accuracy; (2) the attention maps visualized by Grad-CAM show that KerNet places more attention on the inferior temporal part for sub-clinical keratoconus, which previous clinical studies have identified as the key region for ophthalmologists to detect sub-clinical keratoconus. To the best of our knowledge, we are the first to propose an end-to-end deep learning approach utilizing the raw data obtained by the Pentacam HR system for keratoconus and sub-clinical keratoconus detection. Further, the prediction performance and the clinical significance of KerNet were evaluated and confirmed by two clinical experts. Our code is available at https://github.com/upzheng/Keratoconus.
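A hedged sketch of the five-branch, multi-level-fusion idea: five small convolutional branches, one per Pentacam matrix, whose features are fused both at a low level (channel-wise concatenation of feature maps) and at a high level (concatenation of pooled embeddings). The layer sizes, matrix resolution, and the three-class output are illustrative assumptions, not the published KerNet architecture (see the linked repository).

```python
# Hedged sketch of a five-branch CNN with low-level and high-level fusion,
# one branch per Pentacam raw matrix. Dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class FiveBranchNet(nn.Module):
    def __init__(self, num_classes=3, width=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(5)
        ])
        # Low-level fusion: concatenate the five branch feature maps channel-wise.
        self.fuse_low = nn.Conv2d(5 * width, width, 1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # High-level fusion: concatenate pooled per-branch embeddings with the fused map.
        self.classifier = nn.Linear(5 * width + width, num_classes)

    def forward(self, matrices):          # matrices: (N, 5, H, W), one channel per raw matrix
        feats = [branch(matrices[:, i:i + 1]) for i, branch in enumerate(self.branches)]
        low = torch.relu(self.fuse_low(torch.cat(feats, dim=1)))
        high = [self.pool(f).flatten(1) for f in feats] + [self.pool(low).flatten(1)]
        return self.classifier(torch.cat(high, dim=1))

# Toy usage: five raw matrices at a placeholder spatial size.
logits = FiveBranchNet()(torch.randn(2, 5, 128, 128))
print(logits.shape)  # torch.Size([2, 3]) -> e.g. normal / sub-clinical KC / KC (toy labels)
```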


Assuntos
Aprendizado Profundo , Ceratocone , Córnea , Humanos , Ceratocone/diagnóstico , Redes Neurais de Computação
8.
Front Oncol; 10: 1769, 2020.
Article in English | MEDLINE | ID: mdl-33014870

ABSTRACT

Interleukin-1 receptor associated kinase-1 (IRAK1) plays important roles in inflammation, infection, and autoimmune diseases; however, only a few inhibitors have been discovered. In this study, a discriminatory structure-based virtual screening (SBVS) was first employed, but only one active compound (compound 1, IC50 = 2.25 µM) was identified. The low hit rate (2.63%), which derives from the weak discriminatory power of docking among high-scoring molecules, was observed in our virtual screening (VS) process for IRAK1 inhibitors. We therefore constructed an artificial intelligence (AI) method that employs a support vector machine (SVM) model integrating information from molecular docking, pharmacophore scoring, and molecular descriptors to enhance the traditional IRAK1-VS protocol. Using the AI model, the VS of IRAK1 inhibitors excluded over 50% of the inactive compounds, which significantly improved the prediction accuracy of the SBVS model. Moreover, four active molecules (two of which exhibited IC50 values comparable to compound 1) were accurately identified from a set of highly similar candidates. Among them, the compounds with better activity exhibited good selectivity against IRAK4. The AI-assisted workflow could serve as an effective tool for enhancing SBVS.
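The AI-assisted filtering step can be illustrated with a hedged sketch: an SVM trained on docking scores, pharmacophore scores, and molecular descriptors is used to discard likely-inactive compounds among docking hits before follow-up. The feature names, score ranges, labels, and threshold are synthetic placeholders, not the study's model or data.

```python
# Hedged sketch of an SVM filter over virtual-screening features
# (docking score, pharmacophore score, molecular descriptors). Synthetic data only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.normal(-8, 1.5, n),    # placeholder docking score
    rng.uniform(0, 5, n),      # placeholder pharmacophore fit score
    rng.normal(0, 1, (n, 6)),  # placeholder molecular descriptors
])
y = rng.integers(0, 2, n)      # 1 = active, 0 = inactive (toy labels)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X[:150], y[:150])

# Keep only candidates the SVM scores above a threshold, discarding likely inactives.
proba = model.predict_proba(X[150:])[:, 1]
kept = np.where(proba >= 0.5)[0]
print(f"kept {kept.size} of {proba.size} docking hits for follow-up")
```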
