1.
PeerJ ; 12: e17184, 2024.
Article in English | MEDLINE | ID: mdl-38560451

ABSTRACT

Background: Single-cell annotation plays a crucial role in the analysis of single-cell genomics data. Despite the existence of numerous single-cell annotation algorithms, a comprehensive tool for integrating and comparing these algorithms is still lacking. Methods: This study investigated widely adopted single-cell annotation algorithms. Ten were selected based on whether they follow a reference dataset-dependent or a marker gene-dependent approach: SingleR, Seurat, sciBet, scmap, CHETAH, scSorter, sc.type, cellID, scCATCH, and SCINA. Building upon these algorithms, we developed an R package named scAnnoX for the integration and comparative analysis of single-cell annotation algorithms. Results: The scAnnoX software package provides a cohesive framework for annotating cells in scRNA-seq data, enabling researchers to more efficiently perform comparative analyses among the cell type annotations contained in scRNA-seq datasets. The integrated environment of scAnnoX streamlines the testing, evaluation, and comparison of the various algorithms. Among the ten annotation tools evaluated, SingleR, Seurat, sciBet, and scSorter emerged as the top performers in terms of prediction accuracy, with SingleR and sciBet demonstrating particularly superior performance, offering guidance for users. Interested parties can access the scAnnoX package at https://github.com/XQ-hub/scAnnoX.
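The comparison scAnnoX performs boils down to scoring each algorithm's predicted cell types against a reference labeling. A minimal sketch of that idea in Python (scAnnoX itself is an R package; the method names and labels below are illustrative, not scAnnoX output):

```python
# Hypothetical sketch: rank cell-type annotation algorithms by how often
# their predicted label matches a reference labeling for each cell.
def annotation_accuracy(predicted, reference):
    """Fraction of cells whose predicted type matches the reference."""
    assert len(predicted) == len(reference)
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference)

# Toy reference labeling and two toy algorithm outputs (illustrative only).
reference = ["T cell", "B cell", "T cell", "NK cell"]
predictions = {
    "SingleR": ["T cell", "B cell", "T cell", "NK cell"],
    "scmap":   ["T cell", "B cell", "NK cell", "NK cell"],
}

ranked = sorted(predictions.items(),
                key=lambda kv: annotation_accuracy(kv[1], reference),
                reverse=True)
for name, pred in ranked:
    print(f"{name}: {annotation_accuracy(pred, reference):.2f}")
```

Accuracy against a trusted reference is only one axis of comparison; the paper's benchmark is the authoritative account of how the ten tools were evaluated.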


Subjects
Single-Cell Analysis , Software , Algorithms , Genomics , Existentialism
2.
ACS Sens ; 8(8): 3158-3166, 2023 08 25.
Article in English | MEDLINE | ID: mdl-37489756

ABSTRACT

Graphically encoded hydrogel microparticle (HMP)-based bioassays are diagnostic tools characterized by exceptional multiplex detectability and robust sensitivity and specificity. In particular, deep learning enables fast and accurate analysis of HMPs with diverse graphical codes. However, previous studies trained models on plain particles, which proved disadvantageous for accurately analyzing HMPs loaded with functional nanomaterials. Furthermore, the manual data annotation used in existing approaches is highly labor-intensive and time-consuming. In this study, we present an efficient deep-learning-based analysis of encoded HMPs with diverse graphical codes and functional nanomaterials, utilizing auto-annotation and synthetic data mixing for model training. Auto-annotation raised the throughput of dataset preparation to 0.11 s per image. Using synthetic data mixing, a mean average precision of 0.88 was achieved in the analysis of encoded HMPs with magnetic nanoparticles, an approximately twofold improvement over the standard method. To evaluate the practical applicability of the proposed automatic analysis strategy, single-image analysis was performed after a triplex immunoassay for preeclampsia-related protein biomarkers. Finally, we achieved a processing throughput of 0.353 s per sample for analyzing the resulting image.
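The headline figure here, mean average precision (mAP), is the mean over classes of the average precision (AP): the area under the precision-recall curve traced as detections are swept from most to least confident. A minimal sketch of AP for one class (binary hit/miss labels are illustrative; the paper's evaluation protocol is the authoritative definition):

```python
# Sketch: average precision as the area under the precision-recall curve,
# accumulated as detections are visited in order of decreasing confidence.
def average_precision(scores_and_labels):
    """scores_and_labels: list of (confidence, is_true_positive) pairs.

    Each true positive adds precision-at-that-rank times its share of
    recall (1 / total positives) to the running area.
    """
    ranked = sorted(scores_and_labels, key=lambda x: x[0], reverse=True)
    total_pos = sum(lab for _, lab in ranked)
    tp = fp = 0
    ap = 0.0
    for _, lab in ranked:
        if lab:
            tp += 1
            ap += (tp / (tp + fp)) * (1 / total_pos)
        else:
            fp += 1
    return ap
```

mAP is then the unweighted mean of this quantity across the particle classes being detected.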


Subjects
Deep Learning , Hydrogels , Image Processing, Computer-Assisted/methods , Biomarkers , Immunoassay/methods
4.
BMC Med Inform Decis Mak ; 22(1): 229, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36050674

ABSTRACT

BACKGROUND: Extracting metastatic information from previous radiologic text reports is important; however, laborious annotation has limited the usability of these texts. We developed a deep-learning model for extracting primary lung cancer sites, metastatic lymph node involvement, and distant metastasis information from PET-CT reports to determine lung cancer stages. METHODS: PET-CT reports, fully written in English, were acquired from two cohorts of patients with lung cancer diagnosed at a tertiary hospital between January 2004 and March 2020. One cohort of 20,466 PET-CT reports was used for the training and validation sets, and the other cohort of 4190 PET-CT reports was used as an additional test set. A pre-processing model (Lung Cancer Spell Checker) was applied to correct typographical errors, and pseudo-labelling was used for training the model. The deep-learning model was constructed using a convolutional-recurrent neural network. The performance metrics for the prediction model were accuracy, precision, sensitivity, micro-AUROC, and AUPRC. RESULTS: For the extraction of primary lung cancer location, the model showed a micro-AUROC of 0.913 and 0.946 in the validation set and the additional test set, respectively. For metastatic lymph nodes, the model showed a sensitivity of 0.827 and a specificity of 0.960. In predicting distant metastasis, the model showed a micro-AUROC of 0.944 and 0.950 in the validation and the additional test set, respectively. CONCLUSION: Our deep-learning method could be used for extracting lung cancer stage information from PET-CT reports and may facilitate lung cancer studies by alleviating laborious annotation by clinicians.
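The micro-AUROC reported above pools every (sample, class) score from the multi-label extraction task into a single binary ranking problem before computing the area under the ROC curve. A minimal, dependency-free sketch of that pooling (the data here is toy, not from the paper):

```python
# Sketch: AUROC via the probability that a random positive outscores a
# random negative (ties count half), then a micro-averaged multi-label
# variant that flattens all per-class scores into one binary task.
def auroc(scores, labels):
    """labels are 0/1; returns the rank-based AUROC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def micro_auroc(score_matrix, label_matrix):
    """Pool every (sample, class) entry into one binary ranking problem."""
    flat_scores = [s for row in score_matrix for s in row]
    flat_labels = [y for row in label_matrix for y in row]
    return auroc(flat_scores, flat_labels)
```

Micro-averaging weights each prediction equally, so frequent classes dominate; a macro average (per-class AUROC, then mean) would weight each class equally instead.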


Subjects
Deep Learning , Lung Neoplasms , Humans , Lung , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Natural Language Processing , Neoplasm Staging , Positron Emission Tomography Computed Tomography/methods
5.
Comput Med Imaging Graph ; 99: 102091, 2022 07.
Article in English | MEDLINE | ID: mdl-35803034

ABSTRACT

Most learning-based magnetic resonance image (MRI) segmentation methods rely on manual annotation for supervision, which is extremely tedious, especially when multiple anatomical structures are involved. In this work, we develop a hybrid framework named Spine-GFlow that combines image features learned by a CNN model with anatomical priors for multi-tissue segmentation in sagittal lumbar MRI. Our framework requires no manual annotation and is robust against image feature variation caused by different imaging settings and/or underlying pathology. Our contributions include: 1) a rule-based method that automatically generates weak annotation (an initial seed area), 2) a novel proposal generation method that integrates multi-scale image features and anatomical priors, 3) a comprehensive loss for CNN training that optimizes pixel classification and feature distribution simultaneously. Spine-GFlow has been validated on 2 independent datasets: HKDDC (containing images obtained from 3 different machines) and IVDM3Seg. The segmentation results for vertebral bodies (VB), intervertebral discs (IVD), and the spinal canal (SC) are evaluated quantitatively using intersection over union (IoU) and the Dice coefficient. Results show that our method, without requiring manual annotation, achieves segmentation performance comparable to a model trained with full supervision (mean Dice 0.914 vs 0.916).
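Both evaluation metrics named above compare the overlap between a predicted segmentation mask and the ground truth, differing only in how they normalize it. A minimal sketch with masks represented as sets of pixel coordinates (representation is illustrative; segmentation code typically uses boolean arrays):

```python
# Sketch: the two overlap metrics used to score segmentation masks.
# Masks are sets of pixel coordinates; ground truth vs prediction.
def iou(pred, target):
    """Intersection over union: |A ∩ B| / |A ∪ B|."""
    return len(pred & target) / len(pred | target)

def dice(pred, target):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(pred & target) / (len(pred) + len(target))
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank methods identically, but Dice is more forgiving of partial overlap, which is why both are commonly reported.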


Subjects
Intervertebral Disc , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods , Intervertebral Disc/diagnostic imaging , Intervertebral Disc/pathology , Lumbosacral Region , Magnetic Resonance Imaging/methods