1.
Mod Pathol ; 37(11): 100563, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39025402

ABSTRACT

The biopsy Gleason score is an important prognostic marker for prostate cancer patients. It is, however, subject to substantial variability among pathologists. Artificial intelligence (AI)-based algorithms employing deep learning have shown their ability to match pathologists' performance in assigning Gleason scores, with the potential to enhance pathologists' grading accuracy. The performance of Gleason AI algorithms in research is mostly reported on common benchmark data sets or within public challenges. In contrast, many commercial algorithms are evaluated in clinical studies, for which data are not publicly released. As commercial AI vendors typically do not publish performance on public benchmarks, comparison between research and commercial AI is difficult. The aims of this study are to evaluate and compare the performance of top-ranked public and commercial algorithms using real-world data. We curated a diverse data set of whole-slide prostate biopsy images through crowdsourcing, containing images with a range of Gleason scores and from diverse sources. Predictions were obtained from 5 top-ranked public algorithms from the Prostate cANcer graDe Assessment (PANDA) challenge and 2 commercial Gleason grading algorithms. Additionally, 10 pathologists (A.C., C.R., J.v.I., K.R.M.L., P.R., P.G.S., R.G., S.F.K.J., T.v.d.K., X.F.) evaluated the data set in a reader study. Overall, the pairwise quadratic weighted kappa among pathologists ranged from 0.777 to 0.916. Both public and commercial algorithms showed high agreement with pathologists, with quadratic weighted kappas ranging from 0.617 to 0.900. Commercial algorithms performed on par with or outperformed the top public algorithms.
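The quadratic weighted kappa reported above penalizes disagreements between two raters by the squared distance between their ordinal grades. A minimal pure-Python sketch of the metric; the example ratings are illustrative, not data from the study:

```python
def quadratic_weighted_kappa(r1, r2, n_classes):
    """Quadratic weighted kappa between two raters' ordinal labels 0..n_classes-1."""
    assert len(r1) == len(r2)
    n = len(r1)
    # Observed confusion matrix between the two raters
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1
    # Marginal histograms, used to build the chance-expected matrix
    hist1 = [r1.count(c) for c in range(n_classes)]
    hist2 = [r2.count(c) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            # Quadratic penalty: squared grade distance, normalized to [0, 1]
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]
            den += w * hist1[i] * hist2[j] / n
    return 1.0 - num / den

# Perfect agreement yields kappa = 1.0; chance-level agreement yields ~0.
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # 1.0
```

Because the penalty grows quadratically with grade distance, confusing adjacent Gleason grade groups costs far less than confusing distant ones, which suits an ordinal grading scale.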

2.
BMC Med Inform Decis Mak ; 22(1): 102, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35428335

ABSTRACT

BACKGROUND: There is progress to be made in building artificially intelligent systems to detect abnormalities that are not only accurate but can handle the true breadth of findings that radiologists encounter in body (chest, abdomen, and pelvis) computed tomography (CT). Currently, the major bottleneck for developing multi-disease classifiers is a lack of manually annotated data. The purpose of this work was to develop high-throughput multi-label annotators for body CT reports that can be applied across a variety of abnormalities, organs, and disease states, thereby mitigating the need for human annotation. METHODS: We used a dictionary approach to develop rule-based algorithms (RBA) for extraction of disease labels from radiology text reports. We targeted three organ systems (lungs/pleura, liver/gallbladder, kidneys/ureters) with four diseases per system based on their prevalence in our dataset. To expand the algorithms beyond pre-defined keywords, attention-guided recurrent neural networks (RNN) were trained using the RBA-extracted labels to classify reports as being positive for one or more diseases or normal for each organ system. The effects of random initialization versus pre-trained embeddings, as well as different training dataset sizes, on disease classification performance were evaluated. The RBA was tested on a subset of 2158 manually labeled reports and performance was reported as accuracy and F-score. The RNN was tested against a test set of 48,758 reports labeled by RBA and performance was reported as area under the receiver operating characteristic curve (AUC), with 95% CIs calculated using the DeLong method. RESULTS: Manual validation of the RBA confirmed 91-99% accuracy across the 15 different labels. Our models extracted disease labels from 261,229 radiology reports of 112,501 unique subjects. Pre-trained models outperformed random initialization across all diseases.
As the training dataset size was reduced, performance was robust except for a few diseases with a relatively small number of cases. Pre-trained classification AUCs reached > 0.95 for all four disease outcomes and normality across all three organ systems. CONCLUSIONS: Our label-extracting pipeline was able to encompass a variety of cases and diseases in body CT reports by generalizing beyond strict rules with exceptional accuracy. The method described can be easily adapted to enable automated labeling of hospital-scale medical data sets for training image-based disease classifiers.
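A dictionary-style rule-based annotator of the kind described can be sketched as keyword matching with a crude negation check over a preceding text window. The disease keywords, negation cues, and window size below are illustrative assumptions, not the authors' actual dictionaries:

```python
import re

# Illustrative dictionaries; the paper's RBAs used curated, per-disease rules.
DISEASE_KEYWORDS = {
    "emphysema": ["emphysema", "emphysematous"],
    "pleural_effusion": ["pleural effusion", "effusions"],
}
NEGATION_CUES = ["no ", "without ", "negative for "]

def label_report(text):
    """Return a dict of disease -> bool labels for one radiology report."""
    text = text.lower()
    labels = {}
    for disease, keywords in DISEASE_KEYWORDS.items():
        found = False
        for kw in keywords:
            for m in re.finditer(re.escape(kw), text):
                # Crude negation check: scan a short window before the match
                window = text[max(0, m.start() - 25):m.start()]
                if not any(cue in window for cue in NEGATION_CUES):
                    found = True
        labels[disease] = found
    return labels

print(label_report("There is no pleural effusion. Mild emphysema is noted."))
```

Rules like these are brittle (long-range negation and hedged findings slip through), which is exactly the motivation the abstract gives for training RNNs on the RBA-extracted labels to generalize beyond the keyword lists.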


Subjects
Deep Learning , Abdomen , Humans , Neural Networks, Computer , Pelvis/diagnostic imaging , Tomography, X-Ray Computed
3.
Comput Biol Med ; 170: 108018, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38281317

ABSTRACT

In histopathology practice, scanners, tissue processing, staining, and image acquisition protocols vary from center to center, resulting in subtle variations in images. Vanilla convolutional neural networks are sensitive to such domain shifts. Data augmentation is a popular way to improve domain generalization. Currently, state-of-the-art domain generalization in computational pathology is achieved using a manually curated set of augmentation transforms. However, manual tuning of augmentation parameters is time-consuming and can lead to sub-optimal generalization performance. Meta-learning frameworks can provide efficient ways to find optimal training hyper-parameters, including data augmentation. In this study, we hypothesize that an automated search of augmentation hyper-parameters can provide superior generalization performance and reduce experimental optimization time. We select four state-of-the-art automatic augmentation methods from general computer vision and investigate their capacity to improve domain generalization in histopathology. We analyze their performance on data from 25 centers across two different tasks: tumor metastasis detection in lymph nodes and breast cancer tissue type classification. On tumor metastasis detection, most automatic augmentation methods achieve comparable performance to state-of-the-art manual augmentation. On breast cancer tissue type classification, the leading automatic augmentation method significantly outperforms state-of-the-art manual data augmentation.
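Automatic augmentation methods of the kind evaluated here replace per-transform hand-tuning with a small search over a shared policy, for example a RandAugment-style policy parameterized only by the number of sampled operations and a global magnitude. A toy sketch on a numeric "image"; the transform pool and operations are illustrative stand-ins, not the study's actual transforms:

```python
import random

# Illustrative transform pool: each op maps (image, magnitude) -> image.
TRANSFORM_POOL = {
    "brightness": lambda x, m: [v + m for v in x],        # additive shift
    "contrast":   lambda x, m: [v * (1 + m) for v in x],  # multiplicative scale
    "identity":   lambda x, m: x,                          # no-op
}

def rand_augment(image, n_ops, magnitude, rng=random):
    """Apply n_ops transforms sampled from the pool at a shared magnitude."""
    for name in rng.choices(list(TRANSFORM_POOL), k=n_ops):
        image = TRANSFORM_POOL[name](image, magnitude)
    return image

# With magnitude 0 every op in this toy pool is a no-op,
# so the "image" comes back unchanged regardless of the sampled ops.
print(rand_augment([0.5, 0.2], n_ops=2, magnitude=0.0))  # [0.5, 0.2]
```

Collapsing the augmentation space to two scalars is what makes the search cheap enough to automate, which is the trade-off against manually curated per-transform parameters that the study investigates.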


Subjects
Breast Neoplasms , Deep Learning , Humans , Female , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Breast