Results 1 - 20 of 82
1.
J Biomed Inform; 149: 104576, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38101690

ABSTRACT

INTRODUCTION: Machine learning algorithms are expected to work side-by-side with humans in decision-making pipelines. Thus, the ability of classifiers to make reliable decisions is of paramount importance. Deep neural networks (DNNs) represent the state-of-the-art models for real-world classification. Although the strength of activation in DNNs is often correlated with the network's confidence, in-depth analyses are needed to establish whether they are well calibrated. METHOD: In this paper, we demonstrate the use of DNN-based classification tools to benefit cancer registries by automating information extraction of disease at diagnosis and at surgery from electronic text pathology reports from the US National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) population-based cancer registries. In particular, we introduce multiple methods for selective classification to achieve a target level of accuracy on multiple classification tasks while minimizing the rejection amount, that is, the number of electronic pathology reports for which the model's predictions are unreliable. We evaluate the proposed methods by comparing our approach with the current in-house deep learning-based abstaining classifier. RESULTS: Overall, all the proposed selective classification methods reach the targeted level of accuracy or higher in a trade-off analysis aimed at minimizing the rejection rate. On in-distribution validation and holdout test data, all the proposed methods achieve the required target level of accuracy on all tasks with a lower rejection rate than the deep abstaining classifier (DAC). Interpreting the results for the out-of-distribution test data is more complex; nevertheless, in this case as well, the best of the proposed methods achieving 97% accuracy or higher has a lower rejection rate than the DAC. CONCLUSIONS: We show that although both approaches can flag those samples that should be manually reviewed and labeled by human annotators, the newly proposed methods retain a larger fraction and do so without retraining, thus offering a reduced computational cost compared with the in-house deep learning-based abstaining classifier.
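To make the selective-classification setup concrete, a minimal sketch follows, assuming an array `probs` of validation-set softmax outputs and gold `labels`; the function name and the 97% default target are illustrative, not the paper's implementation.

```python
import numpy as np

def min_rejection_threshold(probs, labels, target_acc=0.97):
    """Smallest confidence threshold whose accepted subset still meets the
    accuracy target, i.e., the threshold that minimizes rejections."""
    conf = probs.max(axis=1)                   # model confidence per report
    pred = probs.argmax(axis=1)
    order = np.argsort(conf)[::-1]             # most confident first
    running_acc = np.cumsum(pred[order] == labels[order]) \
        / np.arange(1, len(labels) + 1)
    ok = np.where(running_acc >= target_acc)[0]
    if len(ok) == 0:
        return 1.0                             # abstain on everything
    return conf[order][ok[-1]]                 # accept the largest valid set
```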


Subjects
Deep Learning; Humans; Uncertainty; Neural Networks, Computer; Algorithms; Machine Learning
2.
J Biomed Inform; 125: 103957, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34823030

ABSTRACT

In the last decade, the widespread adoption of electronic health record documentation has created huge opportunities for information mining. Natural language processing (NLP) techniques using machine and deep learning are becoming increasingly widespread for information extraction tasks from unstructured clinical notes. Disparities in performance when deploying machine learning models in the real world have recently received considerable attention. In the clinical NLP domain, the robustness of convolutional neural networks (CNNs) for classifying cancer pathology reports under natural distribution shifts remains understudied. In this research, we aim to quantify and improve the performance of CNN text classifiers on out-of-distribution (OOD) datasets resulting from the natural evolution of clinical text in pathology reports. We identified class imbalance due to the differing prevalence of cancer types as one source of the performance drop and analyzed the impact of previous methods for addressing class imbalance when deploying models in real-world domains. Our results show that our novel class-specialized ensemble technique outperforms other methods for the classification of rare cancer types in terms of macro F1 scores. We also found that traditional ensemble methods perform better on the most frequent classes, leading to higher micro F1 scores. Based on our findings, we formulate a series of recommendations for other machine learning practitioners on how to build robust models with extremely imbalanced datasets in biomedical NLP applications.
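The macro-versus-micro F1 distinction driving these findings is easy to see on toy data (illustrative numbers only, not the study's):

```python
from sklearn.metrics import f1_score

y_true = [0] * 95 + [1] * 5        # class 1 stands in for a rare cancer type
y_pred = [0] * 100                 # a model that ignores the rare class
print(f1_score(y_true, y_pred, average="micro"))                   # 0.95
print(f1_score(y_true, y_pred, average="macro", zero_division=0))  # ~0.49
```

Micro F1 rewards getting the dominant classes right, while macro F1 averages per-class scores and so exposes the rare-class failure.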


Subjects
Natural Language Processing; Neoplasms; Electronic Health Records; Humans; Machine Learning; Neural Networks, Computer
3.
BMC Bioinformatics; 22(1): 113, 2021 Mar 9.
Article in English | MEDLINE | ID: mdl-33750288

ABSTRACT

BACKGROUND: Automated text classification has many important applications in the clinical setting; however, obtaining labelled data for training machine learning and deep learning models is often difficult and expensive. Active learning techniques may mitigate this challenge by reducing the amount of labelled data required to effectively train a model. In this study, we analyze the effectiveness of 11 active learning algorithms for classifying subsite and histology from cancer pathology reports, using a convolutional neural network as the text classification model. RESULTS: We compare the performance of each active learning strategy using two differently sized datasets and two different classification tasks. Our results show that on all tasks and dataset sizes, all active learning strategies except diversity-sampling strategies outperformed random sampling, i.e., no active learning. On our large dataset (15K initial labelled samples, with 15K additional labelled samples added at each iteration of active learning), there was no clear winner among the active learning strategies. On our small dataset (1K initial labelled samples, with 1K additional labelled samples added at each iteration), margin and ratio uncertainty sampling performed better than all other active learning techniques. We found that, compared to random sampling, active learning strongly helps performance on rare classes by focusing annotation on underrepresented classes. CONCLUSIONS: Active learning can save annotation cost by helping human annotators efficiently and intelligently select which samples to label. Our results show that a dataset constructed using effective active learning techniques requires less than half the amount of labelled data to achieve the same performance as a dataset constructed using random sampling.
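A hedged sketch of the two best-performing acquisition scores, margin and ratio uncertainty sampling, given a matrix of softmax outputs; the pool-querying usage at the end is hypothetical:

```python
import numpy as np

def margin_uncertainty(probs):
    """High when the top-2 class probabilities are close together."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return 1.0 - (top2[:, 1] - top2[:, 0])

def ratio_uncertainty(probs):
    """High when the runner-up probability nearly matches the winner."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 0] / top2[:, 1]

# Hypothetical usage: score an unlabeled pool, then label the top 1K.
# probs = model.predict_proba(unlabeled_pool)
# query_idx = np.argsort(margin_uncertainty(probs))[-1000:]
```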


Subjects
Machine Learning; Neoplasms; Algorithms; Humans; Neoplasms/genetics; Neoplasms/pathology; Neural Networks, Computer
4.
Am J Epidemiol; 190(11): 2405-2419, 2021 Nov 2.
Article in English | MEDLINE | ID: mdl-34165150

ABSTRACT

Hydroxychloroquine (HCQ) was proposed as an early therapy for coronavirus disease 2019 (COVID-19) after in vitro studies indicated possible benefit. Previous in vivo observational studies have presented conflicting results, though recent randomized clinical trials have reported no benefit from HCQ among patients hospitalized with COVID-19. We examined the effects of HCQ alone and in combination with azithromycin in a hospitalized population of US veterans with COVID-19, using a propensity score-adjusted survival analysis with imputation of missing data. According to electronic health record data from the US Department of Veterans Affairs health care system, 64,055 US veterans were tested for the virus that causes COVID-19 between March 1, 2020 and April 30, 2020. Of the 7,193 veterans who tested positive, 2,809 were hospitalized, and 657 individuals were prescribed HCQ within the first 48 hours of hospitalization for the treatment of COVID-19. There was no apparent benefit associated with HCQ receipt, alone or in combination with azithromycin, and there was an increased risk of intubation when HCQ was used in combination with azithromycin (hazard ratio = 1.55; 95% confidence interval: 1.07, 2.24). In conclusion, we assessed the effectiveness of HCQ with or without azithromycin in the treatment of patients hospitalized with COVID-19, using a national sample of the US veteran population. Using rigorous study design and analytic methods to reduce confounding and bias, we found no evidence of a survival benefit from the administration of HCQ.
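One plausible reading of "propensity score-adjusted survival analysis" is inverse-probability-of-treatment weighting feeding a weighted Cox model; the sketch below uses invented column names and is not the study's code:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ipw_cox(df, covariates):
    # Propensity of receiving HCQ given baseline covariates
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["hcq"])
    ps = ps_model.predict_proba(df[covariates])[:, 1]
    # Inverse-probability-of-treatment weights
    df = df.assign(w=df["hcq"] / ps + (1 - df["hcq"]) / (1 - ps))
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "hcq", "w"]],
            duration_col="time", event_col="event",
            weights_col="w", robust=True)  # robust SEs with sampling weights
    return cph  # cph.hazard_ratios_["hcq"] gives the adjusted HR
```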


Subjects
Anti-Bacterial Agents/therapeutic use; Azithromycin/therapeutic use; COVID-19 Drug Treatment; Hospitalization/statistics & numerical data; Hydroxychloroquine/therapeutic use; Veterans/statistics & numerical data; Aged; Aged, 80 and over; Anti-Bacterial Agents/adverse effects; Azithromycin/adverse effects; COVID-19/mortality; Drug Therapy, Combination; Female; Humans; Hydroxychloroquine/adverse effects; Intention to Treat Analysis; Machine Learning; Male; Middle Aged; Pharmacoepidemiology; Retrospective Studies; SARS-CoV-2; Treatment Outcome; United States/epidemiology
5.
J Biomed Inform; 110: 103564, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32919043

ABSTRACT

OBJECTIVE: In machine learning, classification task performance generally increases when bootstrap aggregation (bagging) is applied. However, bagging deep neural networks takes tremendous amounts of computational resources and training time. The research question we aimed to answer is whether we could achieve higher task performance scores and accelerate training by dividing a problem into sub-problems. MATERIALS AND METHODS: The data used in this study consist of free text from electronic cancer pathology reports. We applied bagging and partitioned data training using Multi-Task Convolutional Neural Network (MT-CNN) and Multi-Task Hierarchical Convolutional Attention Network (MT-HCAN) classifiers. We split a big problem into 20 sub-problems, resampled the training cases 2,000 times, and trained the deep learning model for each bootstrap sample and each sub-problem, thus generating up to 40,000 models. We performed the training of many models concurrently in a high-performance computing environment at Oak Ridge National Laboratory (ORNL). RESULTS: We demonstrated that aggregation of the models improves task performance compared with the single-model approach, which is consistent with other research studies; and we demonstrated that the two proposed partitioned bagging methods achieved higher classification accuracy scores on four tasks. Notably, the improvements were significant for the extraction of cancer histology data, which had more than 500 class labels in the task; these results show that data partitioning may alleviate the complexity of the task. In contrast, the methods did not achieve superior scores for the tasks of site and subsite classification. Intrinsically, since data partitioning was based on the primary cancer site, the accuracy depended on the determination of the partitions, which needs further investigation and improvement. CONCLUSION: Results of this research demonstrate that (1) the data partitioning and bagging strategy achieved higher performance scores, and (2) we achieved faster training, leveraging the high-performance Summit supercomputer at ORNL.
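A toy sketch of the partition-then-bag strategy, assuming reports are first routed to a sub-problem (e.g., by primary cancer site) and each partition trains its own bag of bootstrap models; `make_model` is a hypothetical factory:

```python
import numpy as np

def train_partitioned_bags(X, y, partition_ids, make_model, n_boot=5):
    """Route cases to sub-problems, then train a bag of bootstrap models
    per partition. Prediction (not shown) votes within the routed bag."""
    rng = np.random.default_rng(0)
    models = {}
    for pid in np.unique(partition_ids):
        idx = np.where(partition_ids == pid)[0]
        models[pid] = []
        for _ in range(n_boot):
            boot = rng.choice(idx, size=len(idx), replace=True)
            models[pid].append(make_model().fit(X[boot], y[boot]))
    return models
```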


Subjects
Neoplasms; Neural Networks, Computer; Computing Methodologies; Humans; Information Storage and Retrieval; Machine Learning
6.
BMC Bioinformatics; 19(Suppl 18): 485, 2018 Dec 21.
Article in English | MEDLINE | ID: mdl-30577756

ABSTRACT

BACKGROUND: Manual extraction of information from electronic pathology (epath) reports to populate the Surveillance, Epidemiology, and End Results (SEER) database is labor intensive. Automating the data extraction using machine learning (ML) and natural language processing (NLP) is desirable to reduce the human labor required to populate the SEER database and to improve the timeliness of the data. This enables scaling up registry efficiency and collection of new data elements. To ensure the integrity, quality, and continuity of the SEER data, the misclassification error of ML and NLP algorithms needs to be negligible. Current algorithms fail to achieve the precision of human experts, who can bring additional information into their assessments. Differences in registry format and the desire to develop a common information extraction platform further complicate the ML/NLP tasks. The purpose of our study is to develop triage rules to partially automate registry workflow and improve the precision of the auto-extracted information. RESULTS: This paper presents a mathematical framework to improve the precision of a classifier beyond that of the Bayes classifier by selectively classifying the items that are most likely to be correct. This results in a triage rule that classifies only a subset of the items. We characterize the optimal triage rule and demonstrate its usefulness in the problem of classifying cancer site from electronic pathology reports to achieve a desired precision. CONCLUSIONS: From the mathematical formalism, we propose a heuristic triage rule based on post-processing the softmax output of standard machine learning algorithms. We show, in test cases, that the triage rule significantly improves the classification accuracy.
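One standard way to write such a rule is a reject option in the style of Chow, where a threshold tau, tuned on held-out data, stands in for the optimal rule the authors derive:

```latex
\[
\hat{y}(x) =
\begin{cases}
  \arg\max_k \hat{p}(k \mid x), & \text{if } \max_k \hat{p}(k \mid x) \ge \tau,\\
  \text{reject (route to manual review)}, & \text{otherwise.}
\end{cases}
\]
```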


Subjects
Computers/trends; Databases, Factual/trends; Triage/methods; Bayes Theorem; Humans; Information Storage and Retrieval
7.
BMC Bioinformatics; 19(Suppl 18): 488, 2018 Dec 21.
Article in English | MEDLINE | ID: mdl-30577743

ABSTRACT

BACKGROUND: Deep learning (DL) has advanced the state of the art in bioinformatics applications, resulting in a trend toward increasingly sophisticated and computationally demanding models trained on larger and larger data sets. This vastly increased computational demand challenges the feasibility of conducting cutting-edge research. One solution is to distribute the vast computational workload across multiple computing cluster nodes with data-parallelism algorithms. In this study, we used a high-performance computing environment and implemented the Downpour Stochastic Gradient Descent algorithm for data parallelism to train a convolutional neural network (CNN) for the natural language processing task of information extraction from a massive dataset of cancer pathology reports. We evaluated the scalability improvements using data-parallel training on the Titan supercomputer at the Oak Ridge Leadership Computing Facility. To evaluate scalability, we used different numbers of worker nodes and performed a set of experiments comparing the effects of different training batch sizes and optimizer functions. RESULTS: We found that Adadelta would consistently converge at a lower validation loss, though it required over twice as many training epochs as the fastest-converging optimizer, RMSProp. The Adam optimizer consistently achieved a close second-lowest minimum validation loss significantly faster; with batch sizes of 16 and 32, the network converged in only 4.5 training epochs. CONCLUSIONS: We demonstrated that the networked training process is scalable across multiple compute nodes communicating with a message passing interface while achieving higher classification accuracy compared to a traditional machine learning algorithm.
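Downpour SGD itself is asynchronous, with workers pushing gradients to a parameter server; the much simplified synchronous analogue below conveys the core of data-parallel training (toy sizes; mpi4py assumed available, run under e.g. mpirun -n 4):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()

weights = np.zeros(1000)                # toy stand-in for CNN parameters
local_grad = np.random.randn(1000)      # stand-in for a backprop gradient
                                        # computed on this rank's data shard
avg_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, avg_grad, op=MPI.SUM)
weights -= 0.01 * (avg_grad / size)     # identical SGD step on every rank
```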


Subjects
Computing Methodologies; Deep Learning/trends; Neoplasms/diagnosis; Comprehension; Humans; Neoplasms/pathology; Neural Networks, Computer
8.
J Biomed Inform; 61: 110-8, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27044930

ABSTRACT

Cancer surveillance data are collected every year in the United States via the National Program of Cancer Registries (NPCR) and the Surveillance, Epidemiology and End Results (SEER) Program of the National Cancer Institute (NCI). General trends are closely monitored to measure the nation's progress against cancer. The objective of this study was to apply a novel web informatics approach for enabling fully automated monitoring of cancer mortality trends. The approach involves automated collection and text mining of online obituaries to derive the age distribution, geospatial, and temporal trends of cancer deaths in the US. Using breast and lung cancer as examples, we mined 23,850 cancer-related and 413,024 general online obituaries spanning the timeframe 2008-2012. There was high correlation between the web-derived mortality trends and the official surveillance statistics reported by NCI with respect to the age distribution (ρ=0.981 for breast; ρ=0.994 for lung), the geospatial distribution (ρ=0.939 for breast; ρ=0.881 for lung), and the annual rates of cancer deaths (ρ=0.661 for breast; ρ=0.839 for lung). Additional experiments investigated the effect of sample size on the consistency of the web-based findings. Overall, our study findings support web informatics as a promising, cost-effective way to dynamically monitor spatiotemporal cancer mortality trends.
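The reported agreement could be computed along these lines; rank correlation is shown as one plausible estimator, with toy numbers rather than the study's data:

```python
from scipy.stats import spearmanr

web_counts  = [120, 340, 560, 410, 290]   # obituary-derived deaths by age band
seer_counts = [100, 330, 600, 420, 300]   # official surveillance statistics
rho, p = spearmanr(web_counts, seer_counts)
print(f"rho = {rho:.3f} (p = {p:.3g})")
```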


Subjects
Internet; Medical Informatics; Neoplasms/mortality; Population Surveillance; SEER Program; Breast Neoplasms; Humans; Incidence; Lung Neoplasms; Mortality; United States/epidemiology
9.
Bioinformatics; 30(1): 104-14, 2014 Jan 1.
Article in English | MEDLINE | ID: mdl-24078710

ABSTRACT

MOTIVATION: Life stories of diseased and healthy individuals are abundantly available on the Internet. Collecting and mining such online content can offer many valuable insights into patients' physical and emotional states throughout the pre-diagnosis, diagnosis, treatment and post-treatment stages of the disease compared with those of healthy subjects. However, such content is widely dispersed across the web. Using traditional query-based search engines to manually collect relevant materials is rather labor intensive and often incomplete due to resource constraints in terms of human query composition and result parsing efforts. The alternative option, blindly crawling the whole web, has proven inefficient and unaffordable for e-health researchers. RESULTS: We propose a user-oriented web crawler that adaptively acquires user-desired content on the Internet to meet the specific online data source acquisition needs of e-health researchers. Experimental results on two cancer-related case studies show that the new crawler can substantially accelerate the acquisition of highly relevant online content compared with the existing state-of-the-art adaptive web crawling technology. For the breast cancer case study using the full training set, the new method achieves a cumulative precision between 74.7 and 79.4% from 5 h of execution through the end of the 20-h crawling session, compared with a cumulative precision between 32.8 and 37.0% for the peer method over the same period. For the lung cancer case study using the full training set, the new method achieves a cumulative precision between 56.7 and 61.2% over the same window, compared with 29.3 and 32.4% for the peer method. Using the reduced training set in the breast cancer case study, the cumulative precision of our method is between 44.6 and 54.9%, whereas that of the peer method is between 24.3 and 26.3%; for the lung cancer case study using the reduced training set, the cumulative precisions of our method and the peer method are between 35.7 and 46.7% and between 24.1 and 29.6%, respectively. These numbers show a consistently superior accuracy of our method in discovering and acquiring user-desired online content for e-health research. AVAILABILITY AND IMPLEMENTATION: The implementation of our user-oriented web crawler is freely available to non-commercial users via the following Web site: http://bsec.ornl.gov/AdaptiveCrawler.shtml. The Web site provides a step-by-step guide on how to execute the web crawler implementation. In addition, the Web site provides the two study datasets, including manually labeled ground truth, initial seeds and the crawling results reported in this article.


Subjects
Internet; Computational Biology/methods; Humans; Neoplasms; Software; Time Factors; User-Computer Interface
10.
BJR Artif Intell; 1(1): ubae006, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38828430

ABSTRACT

Innovation in medical imaging artificial intelligence (AI)/machine learning (ML) demands extensive data collection, algorithmic advancements, and rigorous performance assessments encompassing aspects such as generalizability, uncertainty, bias, fairness, trustworthiness, and interpretability. Achieving widespread integration of AI/ML algorithms into diverse clinical tasks will demand a steadfast commitment to overcoming issues in model design, development, and performance assessment. The complexities of AI/ML clinical translation present substantial challenges, requiring engagement with relevant stakeholders, assessment of cost-effectiveness for user and patient benefit, timely dissemination of information relevant to robust functioning throughout the AI/ML lifecycle, consideration of regulatory compliance, and feedback loops for real-world performance evidence. This commentary addresses several hurdles for the development and adoption of AI/ML technologies in medical imaging. Comprehensive attention to these underlying and often subtle factors is critical not only for tackling the challenges but also for exploring novel opportunities for the advancement of AI in radiology.

11.
bioRxiv; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38826407

ABSTRACT

The expansion of biobanks has significantly propelled genomic discoveries, yet the sheer scale of data within these repositories poses formidable computational hurdles, particularly in handling the extensive matrix operations required by prevailing statistical frameworks. In this work, we introduce computational optimizations to the SAIGE (Scalable and Accurate Implementation of Generalized Mixed Model) algorithm, notably employing a GPU-based distributed computing approach to tackle these challenges. We applied these optimizations to conduct a large-scale genome-wide association study (GWAS) across 2,068 phenotypes derived from electronic health records of 635,969 diverse participants from the Veterans Affairs (VA) Million Veteran Program (MVP). Our strategies enabled scaling the analysis up to over 6,000 nodes on the Department of Energy (DOE) Oak Ridge Leadership Computing Facility (OLCF) Summit high-performance computer (HPC), resulting in a 20-fold acceleration compared to the baseline model. We also provide a Docker container with our optimizations that was successfully used on multiple cloud infrastructures with the UK Biobank and All of Us datasets, where we showed significant time and cost benefits over the baseline SAIGE model.

12.
BJR Artif Intell; 1(1): ubae003, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38476957

ABSTRACT

The adoption of artificial intelligence (AI) tools in medicine poses challenges to existing clinical workflows. This commentary discusses the necessity of context-specific quality assurance (QA), emphasizing the need for robust QA measures with quality control (QC) procedures that encompass (1) acceptance testing (AT) before clinical use, (2) continuous QC monitoring, and (3) adequate user training. The discussion also covers essential components of AT and QA, illustrated with real-world examples. We also highlight what we see as the shared responsibility of manufacturers or vendors, regulators, healthcare systems, medical physicists, and clinicians to enact appropriate testing and oversight to ensure a safe and equitable transformation of medicine through AI.

13.
Med Phys; 50(2): e1-e24, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36565447

ABSTRACT

Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.


Subjects
Artificial Intelligence; Diagnosis, Computer-Assisted; Humans; Reproducibility of Results; Diagnosis, Computer-Assisted/methods; Diagnostic Imaging; Machine Learning
14.
Med Phys; 50(3): e53-e61, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36705550

ABSTRACT

Over several months, representatives from the U.S. Department of Energy (DOE) Office of Science and the National Institutes of Health (NIH) had a number of meetings that led to the conclusion that innovations in the Nation's health care could be realized by more directed interactions between NIH and DOE. It became clear that the expertise amassed and the instrumentation advances developed at the DOE physical science laboratories to enable cutting-edge research in particle physics could also feed innovation in medical healthcare. To meet their scientific mission, the DOE laboratories created advances in such technologies as particle beam generation, radioisotope production, high-energy particle detection and imaging, superconducting particle accelerators, superconducting magnets, cryogenics, high-speed electronics, artificial intelligence, and big data. To move forward, NIH and DOE initiated the process of convening a joint workshop, which occurred on July 12th and 13th, 2021. This Special Report presents a summary of the findings of the collaborative workshop and introduces the goals of the next one.


Subjects
Biomedical Research; Natural Science Disciplines; United States; Artificial Intelligence; National Institutes of Health (U.S.); Laboratories
15.
Cancer Biomark; 33(2): 185-198, 2022.
Article in English | MEDLINE | ID: mdl-35213361

ABSTRACT

BACKGROUND: With the use of artificial intelligence and machine learning techniques for biomedical informatics, security and privacy concerns over the data and subject identities have also become an important issue and essential research topic. Without intentional safeguards, machine learning models may find patterns and features that improve task performance but that are associated with private personal information. OBJECTIVE: The privacy vulnerability of deep learning models for information extraction from medical textual content needs to be quantified, since the models are exposed to private health information and personally identifiable information. The objective of the study is to quantify the privacy vulnerability of deep learning models for natural language processing and to explore a proper way of securing patients' information to mitigate confidentiality breaches. METHODS: The target model is the multitask convolutional neural network for information extraction from cancer pathology reports, where the data for training the model come from multiple state population-based cancer registries. This study proposes the following schemes for collecting vocabularies from the cancer pathology reports: (a) words appearing in multiple registries, and (b) words with higher mutual information. We performed membership inference attacks on the models in high-performance computing environments. RESULTS: The comparison outcomes suggest that the proposed vocabulary selection methods resulted in lower privacy vulnerability while maintaining the same level of clinical task performance.
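A hedged sketch of scheme (b): rank vocabulary words by mutual information with the class label and keep the top k; the bag-of-words binarization and cutoffs are assumptions, and the paper's exact estimator may differ:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def select_vocabulary(texts, labels, k=2000):
    vec = CountVectorizer(binary=True, min_df=5)  # word presence/absence
    X = vec.fit_transform(texts)
    mi = mutual_info_classif(X, labels, discrete_features=True)
    top = np.argsort(mi)[::-1][:k]                # k most informative words
    return vec.get_feature_names_out()[top].tolist()
```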


Subjects
Confidentiality; Deep Learning; Information Storage and Retrieval/methods; Natural Language Processing; Neoplasms/epidemiology; Artificial Intelligence; Deep Learning/standards; Humans; Neoplasms/pathology; Registries
16.
IEEE J Biomed Health Inform; 26(6): 2796-2803, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35020599

ABSTRACT

Recent applications of deep learning have shown promising results for classifying unstructured text in the healthcare domain. However, the reliability of models in production settings has been hindered by imbalanced data sets in which a small subset of the classes dominates. In the absence of adequate training data, rare classes necessitate additional model constraints for robust performance. Here, we present a strategy for incorporating short sequences of text (i.e., keywords) into training to boost model accuracy on rare classes. In our approach, we assemble a set of keywords, including short phrases, associated with each class. The keywords are then used as additional data during each batch of model training, resulting in a training loss that has contributions from both raw data and keywords. We evaluate our approach on the classification of cancer pathology reports, which shows a substantial increase in model performance for rare classes. Furthermore, we analyze the impact of keywords on model output probabilities for bigrams, providing a straightforward method to identify model difficulties on limited training data.
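A minimal sketch of the augmented objective: the batch loss mixes a term on raw reports with a term on class keywords; `model`, the tensors, and the mixing weight `alpha` are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def keyword_augmented_loss(model, x_reports, y_reports,
                           x_keywords, y_keywords, alpha=0.3):
    """Total batch loss with contributions from raw reports and from the
    class-associated keyword sequences added to the batch."""
    loss_data = F.cross_entropy(model(x_reports), y_reports)
    loss_kw = F.cross_entropy(model(x_keywords), y_keywords)
    return (1 - alpha) * loss_data + alpha * loss_kw
```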


Subjects
Reproducibility of Results; Data Collection; Humans
17.
JAMIA Open; 5(3): ooac075, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36110150

ABSTRACT

Objective: We aim to reduce overfitting and model overconfidence by distilling the knowledge of an ensemble of deep learning models into a single model for the classification of cancer pathology reports. Materials and Methods: We consider a text classification problem that involves 5 individual tasks. The baseline model consists of a multitask convolutional neural network (MtCNN), and the implemented ensemble (teacher) consists of 1000 MtCNNs. We performed knowledge transfer by training a single model (student) with soft labels derived through the aggregation of ensemble predictions. We evaluate performance based on accuracy and abstention rates by using softmax thresholding. Results: The student model outperforms the baseline MtCNN in terms of abstention rates and accuracy, thereby allowing the model to be used with a larger volume of documents when deployed. The highest boost was observed for subsite and histology, for which the student model classified an additional 1.81% of reports for subsite and 3.33% for histology. Discussion: Ensemble predictions provide a useful strategy for quantifying the uncertainty inherent in labeled data and thereby enable the construction of soft labels with estimated probabilities for multiple classes for a given document. Training models with the derived soft labels reduces model confidence on difficult-to-classify documents, thereby leading to a reduction in the number of highly confident wrong predictions. Conclusions: Ensemble model distillation is a simple tool to reduce model overconfidence in problems with extreme class imbalance and noisy datasets. These methods can facilitate the deployment of deep learning models in high-risk domains with low computational resources where minimizing inference time is required.
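The distillation step might look like the following sketch, where the soft labels are the ensemble-averaged predicted distributions; the temperature and names are assumptions, not the paper's exact recipe:

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_probs, T=2.0):
    """KL divergence between the student's tempered distribution and the
    ensemble-averaged soft labels (teacher_probs, precomputed offline)."""
    log_q = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_q, teacher_probs, reduction="batchmean") * (T * T)
```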

18.
Radiat Res; 197(4): 434-445, 2022 Apr 1.
Article in English | MEDLINE | ID: mdl-35090025

ABSTRACT

With a widely attended virtual kickoff event on January 29, 2021, the National Cancer Institute (NCI) and the Department of Energy (DOE) launched a series of 4 interactive, interdisciplinary workshops, plus a final concluding "World Café" on March 29, 2021, focused on advancing computational approaches for predictive oncology in the clinical and research domains of radiation oncology. These events reflect 3,870 human hours of virtual engagement with representation from 8 DOE national laboratories and the Frederick National Laboratory for Cancer Research (FNL), 4 research institutes, 5 cancer centers, 17 medical schools and teaching hospitals, 5 companies, 5 federal agencies, 3 research centers, and 27 universities. Here we summarize the workshops, beginning with their background. Participants identified twelve key questions, and collaborative parallel ideas, as the focus of work going forward to advance the field. These were then used to define short-term and longer-term "Blue Sky" goals. In addition, the group determined key success factors for predictive oncology in the context of radiation oncology, if not the future of all of medicine. These are: cross-discipline collaboration, targeted talent development, development of mechanistic mathematical and computational models and tools, and access to high-quality multiscale data that bridges mechanisms to phenotype. The workshop participants reported feeling energized and highly motivated to pursue next steps together to address the unmet needs in radiation oncology specifically and in cancer research generally, and that NCI and DOE project goals align at the convergence of radiation therapy and advanced computing.


Subjects
Radiation Oncology; Academies and Institutes; Humans; National Cancer Institute (U.S.); Radiation Oncology/education; United States
19.
J Biomed Inform; 44(5): 815-23, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21554985

ABSTRACT

Development of a computational decision aid for a new medical imaging modality is typically a long and complicated process. It consists of collecting data in the form of images and annotations, developing image processing and pattern recognition algorithms for analysis of the new images, and finally testing the resulting system. Since new imaging modalities are developed more rapidly than ever before, any effort to decrease the time and cost of this development process could maximize the benefit of the new imaging modality to patients by making the computer aids quickly available to the radiologists who interpret the images. In this paper, we make a step in this direction and investigate the possibility of translating knowledge about the detection problem from one imaging modality to another. Specifically, we present a computer-aided detection (CAD) system for mammographic masses that uses a mutual information-based template matching scheme with intelligently selected templates. We have previously presented the principles of template matching with mutual information for mammography. In this paper, we present an implementation of those principles in a complete computer-aided detection system. The proposed system, through an automatic optimization process, chooses the most useful templates (mammographic regions of interest) using a large database of previously collected and annotated mammograms. Through this process, knowledge about the task of detecting masses in mammograms is incorporated into the system. Then, we evaluate whether our system, developed for screen-film mammograms, can be successfully applied not only to other mammograms but also to digital breast tomosynthesis (DBT) reconstructed slices without adding any DBT cases for training. Our rationale is that since mutual information is known to be a robust inter-modality image similarity measure, it has high potential for transferring knowledge between modalities in the context of the mass detection task. Experimental evaluation of the system on mammograms showed competitive performance compared with other mammography CAD systems recently published in the literature. When the system was applied "as-is" to DBT, its performance was notably worse than that for mammograms. However, with a simple additional preprocessing step, the performance of the system reached levels similar to that obtained for mammograms. In conclusion, the presented CAD system not only performed competitively on screen-film mammograms but also performed robustly on DBT, showing that direct transfer of knowledge across breast imaging modalities for mass detection is in fact possible.
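The similarity measure at the heart of the system can be sketched as mutual information between a template and an image region, estimated from a joint grayscale histogram (the binning is an illustrative choice):

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    """MI between two image patches from their joint grayscale histogram."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of patch_b
    nz = pxy > 0                          # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())
```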


Subjects
Breast/pathology; Mammography/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Algorithms; Breast Neoplasms/diagnosis; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Female; Humans; Pattern Recognition, Automated
20.
Nat Rev Cancer; 21(12): 747-752, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34535775

ABSTRACT

STANDFIRST: Artificial intelligence and machine learning techniques are breaking into biomedical research and health care, which importantly includes cancer research and oncology, where the potential applications are vast. These include detection and diagnosis of cancer, subtype classification, optimization of cancer treatment and identification of new therapeutic targets in drug discovery. While the big data used to train machine learning models may already exist, leveraging this opportunity to realize the full promise of artificial intelligence in both the cancer research space and the clinical space will first require significant obstacles to be surmounted. In this Viewpoint article, we asked four experts for their opinions on how we can begin to implement artificial intelligence while ensuring standards are maintained, so as to transform cancer diagnosis and the prognosis and treatment of patients with cancer and to drive biological discovery.


Subjects
Biomedical Research; Neoplasms; Artificial Intelligence; Drug Discovery/methods; Humans; Machine Learning; Medical Oncology; Neoplasms/diagnosis; Neoplasms/therapy