Results 1 - 5 of 5
1.
Bioinformatics ; 40(4)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38561176

ABSTRACT

MOTIVATION: Understanding the intermolecular interactions of ligand-target pairs is key to guiding the optimization of drug research on cancers, and it can greatly reduce the workload of wet labs. Several improved computational methods have been introduced and exhibit promising performance on these identification tasks, but two pitfalls restrict their practical application: (i) existing methods do not sufficiently consider how multigranular molecule representations influence the interaction patterns between proteins and compounds; and (ii) existing methods seldom explicitly model the binding sites involved in an interaction, which limits prediction quality and interpretability and may create unexpected obstacles for biological researchers.

RESULTS: To address these issues, we present DrugMGR, a deep multigranular drug representation model capable of predicting both binding affinities and binding regions for each ligand-target pair. We conduct consistent experiments on three benchmark datasets with existing methods and introduce a new dedicated dataset to better validate binding-site prediction. For practical application, target-specific compound identification tasks are also carried out to validate the model's capability for real-world compound screening. Moreover, visualizations of several practical interaction scenarios provide interpretable insights into the predictions. DrugMGR achieves excellent overall performance on these datasets, demonstrating its advantages over state-of-the-art methods. Thus, DrugMGR can be fine-tuned for downstream tasks such as identifying potential compounds that target proteins relevant to clinical treatment.

AVAILABILITY AND IMPLEMENTATION: https://github.com/lixiaokun2020/DrugMGR.
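The abstract gives only a high-level description of the model. Below is a minimal PyTorch sketch of the general idea of fusing multigranular compound features (atom, fragment, and whole-molecule) with protein residue features through cross-attention, producing both an affinity score and per-residue binding-region weights. All module names, dimensions, and the attention-based readout are illustrative assumptions, not the published DrugMGR architecture; see the linked repository for the actual implementation.

```python
# Hypothetical sketch: multigranular drug-target interaction model.
# Not the published DrugMGR implementation; see the authors' repository.
import torch
import torch.nn as nn

class MultiGranularDTI(nn.Module):
    def __init__(self, atom_dim=64, frag_dim=64, mol_dim=64, res_dim=64, hid=128):
        super().__init__()
        # Project each granularity of the compound (atom, fragment, whole molecule)
        # into a shared embedding space.
        self.atom_proj = nn.Linear(atom_dim, hid)
        self.frag_proj = nn.Linear(frag_dim, hid)
        self.mol_proj = nn.Linear(mol_dim, hid)
        self.res_proj = nn.Linear(res_dim, hid)
        # Cross-attention: protein residues attend to compound features; the
        # attention weights can be read as crude per-residue binding-region scores.
        self.cross_attn = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.affinity_head = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, atom_x, frag_x, mol_x, res_x):
        # atom_x: (B, Na, atom_dim), frag_x: (B, Nf, frag_dim),
        # mol_x: (B, mol_dim), res_x: (B, Nr, res_dim)
        drug = torch.cat([self.atom_proj(atom_x),
                          self.frag_proj(frag_x),
                          self.mol_proj(mol_x).unsqueeze(1)], dim=1)  # (B, Na+Nf+1, hid)
        prot = self.res_proj(res_x)                                    # (B, Nr, hid)
        fused, attn = self.cross_attn(prot, drug, drug)                # fused: (B, Nr, hid)
        site_scores = attn.mean(dim=-1)                                # (B, Nr) binding-region weights
        pooled = torch.cat([fused.mean(dim=1), drug.mean(dim=1)], dim=-1)
        affinity = self.affinity_head(pooled).squeeze(-1)              # (B,) predicted affinity
        return affinity, site_scores

model = MultiGranularDTI()
affinity, sites = model(torch.randn(2, 30, 64), torch.randn(2, 8, 64),
                        torch.randn(2, 64), torch.randn(2, 200, 64))
```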


Subjects
Proteins, Ligands, Proteins/chemistry, Binding Sites
2.
Comput Biol Med ; 166: 107541, 2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37804779

ABSTRACT

Colorectal cancer (CRC) is the most prevalent malignant tumor of the digestive system and a formidable global health challenge, ranking as the fourth leading cause of cancer-related deaths worldwide. Despite considerable advances in understanding and treating CRC, recurrence and metastasis remain major causes of high morbidity and mortality during treatment. Colonoscopy is currently the predominant method for CRC screening, and artificial intelligence has emerged as a promising tool for aiding polyp diagnosis. Unfortunately, most segmentation methods suffer from limited accuracy and poor generalization across datasets, and slow processing speed in particular has become a major obstacle. In this study, we propose a fast and efficient polyp segmentation framework based on a Large-Kernel Receptive Field Block (LK-RFB) and a Global Parallel Partial Decoder (GPPD). The proposed ColonNet has been extensively tested and proven effective, achieving a Dice coefficient of over 0.910 at over 102 FPS on the CVC-300 dataset. Compared with state-of-the-art (SOTA) methods, ColonNet achieves the highest FPS while delivering the best or comparable segmentation performance on five publicly available datasets, establishing a new SOTA. The code will be released at: https://github.com/SPECTRELWF/ColonNet.
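The LK-RFB is described only by name in the abstract. The following PyTorch sketch shows what a large-kernel receptive field block typically looks like: parallel depthwise convolutions with increasingly large kernels, fused by a 1x1 convolution and a residual connection. The kernel sizes and layout are assumptions, not the released ColonNet code.

```python
# Hypothetical sketch of a large-kernel receptive field block (LK-RFB style).
# Kernel sizes and structure are assumptions; see the authors' ColonNet repository.
import torch
import torch.nn as nn

class LargeKernelRFB(nn.Module):
    def __init__(self, channels, kernel_sizes=(7, 11, 21)):
        super().__init__()
        # Parallel depthwise convolutions with increasingly large kernels
        # enlarge the receptive field at modest cost.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * (len(kernel_sizes) + 1), channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = [x] + [branch(x) for branch in self.branches]
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection

block = LargeKernelRFB(32)
out = block(torch.randn(1, 32, 88, 88))  # shape preserved: (1, 32, 88, 88)
```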

3.
Med Image Anal ; 90: 102944, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37708709

ABSTRACT

In this work, we address the task of tumor cellularity (TC) estimation with a novel framework based on the label distribution learning (LDL) paradigm. We propose a self-ensemble label distribution learning framework (SLDL) to resolve the challenges of existing LDL-based methods, including difficulty in exploiting inter-rater ambiguity, generating proper and flexible label distributions, and accurately recovering TC values. SLDL makes four main contributions, each shown to be effective in extensive experiments. First, we propose an expertness-aware conditional VAE for diversified single-rater modeling and an attention-based multi-rater fusion strategy that enables effective exploitation of inter-rater ambiguity. Second, we propose a template-based label distribution generation method that is tailored to the TC estimation task and constructs label distributions from annotation priors. Third, we propose a novel restricted distribution loss that significantly improves TC value estimation by effectively regularizing learning with a unimodal loss and a regression loss. Fourth, to the best of our knowledge, we are the first to simultaneously leverage inter-rater and intra-rater variability to address the label ambiguity issue in breast tumor cellularity estimation. Experimental results on the public BreastPathQ dataset demonstrate that SLDL outperforms existing methods by a large margin and achieves new state-of-the-art results on the TC estimation task. The code will be available from https://github.com/PerceptionComputingLab/ULTRA.
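The template-based label distribution and the restricted distribution loss are described only at a high level. Below is a minimal PyTorch sketch of one plausible reading: a Gaussian template centered on each annotated TC value over discretized bins, trained with a KL-divergence term plus a regression term on the expected TC. The template shape, bin count, and loss weighting are assumptions, not the published SLDL implementation.

```python
# Hypothetical sketch: Gaussian template label distributions for tumor cellularity (TC)
# plus a combined distribution/regression loss. Not the published SLDL implementation.
import torch
import torch.nn.functional as F

BINS = torch.linspace(0.0, 1.0, steps=101)  # discretized TC values in [0, 1]

def template_distribution(tc_value, sigma=0.05):
    """Gaussian label distribution centered on the annotated TC value."""
    logits = -((BINS - tc_value) ** 2) / (2 * sigma ** 2)
    return torch.softmax(logits, dim=-1)

def sldl_style_loss(pred_logits, tc_target, lambda_reg=1.0):
    """KL divergence to the template distribution + L1 regression on the expected TC."""
    pred_log_prob = F.log_softmax(pred_logits, dim=-1)
    target_dist = torch.stack([template_distribution(t) for t in tc_target])
    kl = F.kl_div(pred_log_prob, target_dist, reduction="batchmean")
    expected_tc = (pred_log_prob.exp() * BINS).sum(dim=-1)  # recover a scalar TC value
    reg = F.l1_loss(expected_tc, tc_target)
    return kl + lambda_reg * reg

loss = sldl_style_loss(torch.randn(4, 101), torch.tensor([0.10, 0.35, 0.60, 0.90]))
```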

4.
Eur Radiol ; 32(10): 7163-7172, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35488916

ABSTRACT

OBJECTIVE: To develop novel deep learning networks (DLNs) that incorporate an automatic segmentation network (ASN) for morphological analysis, and to evaluate their performance in diagnosing breast cancer on automated breast ultrasound (ABUS).

METHODS: A total of 769 breast tumors were enrolled in this study and randomly divided into a training set (n = 600) and a test set (n = 169). The novel DLNs (ResNet34 v2, ResNet50 v2, ResNet101 v2) add a new ASN to the traditional ResNet networks to extract morphological information of breast tumors. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic (ROC) curve (AUC), and average precision (AP) were calculated. The diagnostic performance of the novel DLNs was compared with that of two radiologists with different levels of experience.

RESULTS: The ResNet34 v2 model had higher specificity (76.81%) and PPV (82.22%) than the other two, the ResNet50 v2 model had higher accuracy (78.11%) and NPV (72.86%), and the ResNet101 v2 model had higher sensitivity (85.00%). According to the AUCs and APs, the ResNet101 v2 model produced the best result (AUC 0.85, AP 0.90) among the six DLNs. The novel DLNs performed better than the novice radiologist, increasing the F1 score from 0.77 to 0.78, 0.81, and 0.82, respectively; however, their diagnostic performance remained worse than that of the experienced radiologist.

CONCLUSIONS: The novel DLNs performed better than the traditional DLNs and may help novice radiologists improve their diagnostic performance for breast cancer in ABUS.

KEY POINTS: • A novel automatic segmentation network for extracting morphological information was successfully developed and implemented with ResNet deep learning networks. • The novel deep learning networks in our research performed better than the traditional deep learning networks in the diagnosis of breast cancer using ABUS images. • The novel deep learning networks in our research may help novice radiologists improve their diagnostic performance.
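The abstract reports a standard set of binary-classification metrics. The short scikit-learn sketch below shows how these metrics are typically computed from ground-truth labels and predicted malignancy scores; it illustrates only the evaluation, not the proposed networks, and the threshold of 0.5 is an assumption.

```python
# Sketch: computing the evaluation metrics reported in the abstract
# (accuracy, sensitivity, specificity, PPV, NPV, AUC, AP) with scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score, average_precision_score

def classification_metrics(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on malignant cases
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "auc": roc_auc_score(y_true, y_score),
        "ap": average_precision_score(y_true, y_score),
    }

metrics = classification_metrics([0, 1, 1, 0, 1], [0.2, 0.8, 0.6, 0.4, 0.9])
```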


Subjects
Breast Neoplasms, Deep Learning, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Sensitivity and Specificity, Ultrasonography, Mammary/methods
5.
IEEE Trans Med Imaging ; 37(8): 1943-1954, 2018 08.
Article in English | MEDLINE | ID: mdl-29994627

ABSTRACT

Segmentation of brain tumors from magnetic resonance imaging (MRI) data sets is of great importance for improved diagnosis, growth rate prediction, and treatment planning. However, automating this process is challenging due to severe partial volume effects and considerable variability in tumor structures and imaging conditions, especially for gliomas. In this paper, we introduce a new methodology that combines random forests and an active contour model for the automated segmentation of gliomas from multimodal volumetric MR images. Specifically, we employ a feature representation learning strategy to effectively exploit both local and contextual information from multimodal images for tissue segmentation, using modality-specific random forests as the feature learning kernels. Different levels of structural information are subsequently integrated into concatenated and connected random forests to infer the glioma structure. Finally, a novel multiscale patch-driven active contour model refines the inferred structure by taking advantage of sparse representation techniques. Results reported on public benchmarks reveal that our architecture achieves accuracy competitive with state-of-the-art brain tumor segmentation methods while being computationally efficient.
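As a rough illustration of the modality-specific random forest idea, the sketch below trains one forest per MR modality on flattened intensity patches and feeds their concatenated class-probability outputs into a second-stage forest, using toy random data. It omits the learned feature kernels, the hierarchical structure inference, and the patch-driven active contour refinement, so it should be read as a simplified, assumption-laden example rather than the paper's pipeline.

```python
# Hypothetical sketch: modality-specific random forests producing per-voxel tissue
# probabilities from multimodal MR patches. Not the paper's full pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels, patch_size, n_classes = 500, 5 * 5 * 5, 4  # toy sizes

# One flattened intensity patch per voxel, for each imaging modality.
modalities = {
    "t1": rng.normal(size=(n_voxels, patch_size)),
    "t2": rng.normal(size=(n_voxels, patch_size)),
    "flair": rng.normal(size=(n_voxels, patch_size)),
}
labels = rng.integers(0, n_classes, size=n_voxels)  # tissue label per voxel

# Train one random forest per modality, then concatenate their class-probability
# outputs as a learned feature representation for a second-stage forest.
stage1 = {name: RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
          for name, X in modalities.items()}
stacked = np.hstack([forest.predict_proba(modalities[name])
                     for name, forest in stage1.items()])
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(stacked, labels)
tissue_probs = stage2.predict_proba(stacked)  # per-voxel class probabilities
```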


Subjects
Brain Neoplasms/diagnostic imaging, Brain/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Machine Learning, Magnetic Resonance Imaging/methods, Algorithms, Humans, Imaging, Three-Dimensional