Results 1 - 20 of 37
1.
J Biopharm Stat ; : 1-19, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889012

ABSTRACT

BACKGROUND: Positive and negative likelihood ratios (PLR and NLR) are important metrics of accuracy for diagnostic devices with a binary output. However, the properties of Bayesian and frequentist interval estimators of PLR/NLR have not been extensively studied and compared. In this study, we explore the potential use of the Bayesian method for interval estimation of PLR/NLR and, more broadly, for interval estimation of the ratio of two independent proportions. METHODS: We develop a Bayesian-based approach for interval estimation of PLR/NLR for use as part of a diagnostic device performance evaluation. Our approach applies to the broader setting of interval estimation for any ratio of two independent proportions. We compare score and Bayesian interval estimators for the ratio of two proportions in terms of coverage probability (CP) and expected interval width (EW) via extensive experiments and applications to two case studies. A supplementary experiment was also conducted to assess the performance of the proposed exact Bayesian method under different priors. RESULTS: Our experimental results show that the overall mean CP for Bayesian interval estimation is consistent with that for the score method (0.950 vs. 0.952), and the overall mean EW for the Bayesian method is shorter than that for the score method (15.929 vs. 19.724). Application to two case studies showed that the intervals estimated using the Bayesian and frequentist approaches are very similar. DISCUSSION: Our numerical results indicate that the proposed Bayesian approach has CP performance comparable to the score method while yielding higher precision (i.e., a shorter EW).
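
The interval construction described above can be conveyed with a small simulation. The sketch below is not the authors' implementation; it is a minimal Monte Carlo approximation of a Bayesian credible interval for the positive likelihood ratio, assuming independent Beta(1, 1) priors on sensitivity and 1 - specificity. The counts, priors, and credible level are illustrative assumptions only; the paper's exact Bayesian method may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 counts from a diagnostic study (illustrative only):
# tp/fn among diseased subjects, fp/tn among non-diseased subjects.
tp, fn = 90, 10
fp, tn = 15, 85

n_draws = 200_000
# Independent Beta posteriors under uniform Beta(1, 1) priors.
sens = rng.beta(tp + 1, fn + 1, n_draws)             # P(T+ | D+)
one_minus_spec = rng.beta(fp + 1, tn + 1, n_draws)   # P(T+ | D-)

plr = sens / one_minus_spec  # posterior draws of the positive likelihood ratio
lo, hi = np.percentile(plr, [2.5, 97.5])

print(f"Posterior median PLR: {np.median(plr):.2f}")
print(f"95% equal-tailed credible interval: ({lo:.2f}, {hi:.2f})")
```

The same pattern applies to any ratio of two independent proportions: draw from each Beta posterior, form the ratio, and summarize the draws.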

2.
J Biopharm Stat ; 33(5): 611-638, 2023 09 03.
Article in English | MEDLINE | ID: mdl-36710380

ABSTRACT

A limitation of common measures of diagnostic test performance, such as sensitivity and specificity, is that they do not consider the relative importance of false negative and false positive test results, which are likely to have different clinical consequences. Therefore, the use of classification or prediction measures alone to compare diagnostic tests or biomarkers can be inconclusive for clinicians. Comparing tests on net benefit can be more conclusive because the clinical consequences of misdiagnoses are considered. The literature has suggested evaluating binary diagnostic tests based on net benefit but has not considered diagnostic tests that classify more than two disease states, e.g., stroke subtype (large-artery atherosclerosis, cardioembolism, small-vessel occlusion, stroke of other determined etiology, stroke of undetermined etiology), skin lesion subtype, breast cancer subtype (benign, mass, calcification, architectural distortion, etc.), METAVIR liver fibrosis stage (F0-F4), histopathological classification of cervical intraepithelial neoplasia (CIN), prostate Gleason grade, and brain injury (intracranial hemorrhage, mass effect, midline shift, cranial fracture). Other diseases have more than two stages, such as Alzheimer's disease (dementia due to Alzheimer's disease, mild cognitive impairment (MCI) due to Alzheimer's disease, and preclinical, presymptomatic Alzheimer's disease). In diseases with more than two states, the benefits and risks may vary between states. This paper extends the net-benefit approach for evaluating binary diagnostic tests to multi-state clinical conditions, ruling a clinical condition in or out based on the adverse consequences of work-up delay (due to a false negative test result) and unnecessary work-up (due to a false positive test result). We demonstrate our approach with numerical examples and real data.
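
For the binary special case that this paper generalizes, net benefit is commonly computed by weighing true positives against false positives at a chosen harm-to-benefit ratio. The sketch below illustrates only that binary form; the counts and the harm weight are invented for the example, and the multi-state extension described in the abstract requires state-specific benefits and harms not shown here.

```python
def net_benefit(tp, fp, n, harm_to_benefit):
    """Binary-test net benefit per subject.

    tp, fp : counts of true-positive and false-positive results
    n      : total number of subjects tested
    harm_to_benefit : relative harm of an unnecessary work-up (false
        positive) versus the benefit of a correctly triggered work-up
        (true positive), e.g. threshold odds p_t / (1 - p_t).
    """
    return tp / n - (fp / n) * harm_to_benefit

# Illustrative counts for two hypothetical tests on the same 1000 subjects.
n = 1000
print(net_benefit(tp=80, fp=120, n=n, harm_to_benefit=0.25))  # test A: 0.05
print(net_benefit(tp=70, fp=40,  n=n, harm_to_benefit=0.25))  # test B: 0.06
```

In this made-up example, test B has higher net benefit despite detecting fewer cases, because it triggers far fewer unnecessary work-ups; this is the kind of conclusion accuracy alone cannot deliver.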


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Stroke , Male , Humans , Alzheimer Disease/diagnosis , Cognitive Dysfunction/diagnosis , Sensitivity and Specificity , Stroke/diagnosis , Diagnostic Tests, Routine , Neuropsychological Tests
3.
Cancer ; 128 Suppl 4: 883-891, 2022 02 15.
Article in English | MEDLINE | ID: mdl-35133658

ABSTRACT

Multicancer screening is a promising approach to improving the detection of preclinical disease, but current technologies have limited ability to identify precursor or early stage lesions, and approaches for developing the evidentiary chain are unclear. Frameworks to enable development and evaluation from discovery through evidence of clinical effectiveness are discussed.


Subject(s)
Early Detection of Cancer , Neoplasms , Humans , Mass Screening , Neoplasms/diagnosis
4.
Biom J ; 64(2): 225-234, 2022 02.
Article in English | MEDLINE | ID: mdl-33377537

ABSTRACT

In their paper, Liu et al. (2020) pointed out illogical discrepancies between subgroup and overall causal effects for some efficacy measures, in particular the odds and hazard ratios. As the authors show, the culprit is subgroups having prognostic effects within treatment arms. In response to their provocative findings, we found that the odds and hazard ratios are logic-respecting when the subgroups are purely predictive, that is, when the distribution of the potential outcome under the control treatment is homogeneous across subgroups. We also found that when the odds and hazard ratio causal estimands are redefined in terms of the joint distribution of the potential outcomes, the discrepancies are resolved under specific models in which the potential outcomes are conditionally independent. In response to other discussion points in the paper, we also provide remarks on association versus causation, confounding, statistical computing software, and dichotomania.
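
The type of discrepancy at issue can be reproduced with a few lines of arithmetic. The sketch below is a hypothetical two-subgroup example (not taken from Liu et al.): both subgroup odds ratios equal 2, yet the marginal odds ratio falls outside the subgroup range because the subgroups are prognostic, i.e., their control-arm risks differ.

```python
def odds(p):
    return p / (1 - p)

def odds_ratio(p1, p0):
    return odds(p1) / odds(p0)

# Two equally sized subgroups with different control-arm risks (prognostic
# subgroups) but identical subgroup odds ratios of 2.
p0_a, p0_b = 0.10, 0.50                        # control risks
p1_a = 2 * odds(p0_a) / (1 + 2 * odds(p0_a))   # treated risk implied by OR = 2
p1_b = 2 * odds(p0_b) / (1 + 2 * odds(p0_b))

print(odds_ratio(p1_a, p0_a))  # 2.0 in subgroup A
print(odds_ratio(p1_b, p0_b))  # 2.0 in subgroup B

# Marginal (overall) risks in a 50/50 mixture of the two subgroups.
p0 = 0.5 * (p0_a + p0_b)
p1 = 0.5 * (p1_a + p1_b)
print(odds_ratio(p1, p0))      # ~1.72: outside the subgroup range [2, 2]
```

If the control risks are made equal (purely predictive subgroups), the marginal odds ratio again lies between the subgroup odds ratios, consistent with the logic-respecting result described above.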


Subject(s)
Logic , Software , Plant Extracts , Proportional Hazards Models , Randomized Controlled Trials as Topic
5.
Pharm Stat ; 20(5): 965-978, 2021 09.
Article in English | MEDLINE | ID: mdl-33942971

ABSTRACT

How do we communicate nuanced regulatory information to different audiences, recognizing that the consumer audience is very different from the physician audience? In particular, how do we communicate the heterogeneity of treatment effects - the potential differences in treatment effects based on sex, race, and age? That is a fundamental question at the heart of this panel discussion. Each panelist addressed a specific "challenge question" during their 5-minute presentation, and the list of questions is provided. The presentations were followed by a question and answer session with members of the audience and the panelists.

6.
Clin Infect Dis ; 63(6): 812-7, 2016 09 15.
Article in English | MEDLINE | ID: mdl-27193750

ABSTRACT

The medical community needs systematic and pragmatic approaches for evaluating the benefit-risk trade-offs of diagnostics that assist in medical decision making. Benefit-Risk Evaluation of Diagnostics: A Framework (BED-FRAME) is a strategy for pragmatic evaluation of diagnostics designed to supplement traditional approaches. BED-FRAME evaluates diagnostic yield and addresses 2 key issues: (1) that diagnostic yield depends on prevalence, and (2) that different diagnostic errors carry different clinical consequences. As such, evaluating and comparing diagnostics depends on prevalence and the relative importance of potential errors. BED-FRAME provides a tool for communicating the expected clinical impact of diagnostic application and the expected trade-offs of diagnostic alternatives. BED-FRAME is a useful fundamental supplement to the standard analysis of diagnostic studies that will aid in clinical decision making.


Subject(s)
Decision Support Systems, Clinical , Diagnosis, Computer-Assisted , Risk Assessment/methods , Actinobacteria , Anti-Bacterial Agents/therapeutic use , Drug Resistance, Bacterial , Gram-Positive Bacterial Infections/drug therapy , Humans , Models, Statistical , Prevalence
8.
J Biopharm Stat ; 26(6): 1083-1097, 2016.
Article in English | MEDLINE | ID: mdl-27548805

ABSTRACT

Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit-risk may be more conclusive because the clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure: the average of sensitivity and specificity weighted for prevalence and the relative importance of false positive and false negative testing errors, a weight also interpretable as the cost-benefit ratio of treating non-diseased versus diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or the cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer, with test-positive subjects being referred to colonoscopy.
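
One plausible rendering of the quantities described above is sketched below: diagnostic yield as the expected TP/FN/FP/TN distribution per 100,000 screened subjects, and weighted accuracy as an average of sensitivity and specificity with weights formed from prevalence and a relative-importance factor r for false positive versus false negative errors. The exact parameterization used in the paper may differ, and all numbers here are illustrative.

```python
def diagnostic_yield(sens, spec, prevalence, n=100_000):
    """Expected TP/FN/FP/TN counts in a hypothetical screened population."""
    diseased = n * prevalence
    healthy = n - diseased
    return {
        "TP": diseased * sens,
        "FN": diseased * (1 - sens),
        "FP": healthy * (1 - spec),
        "TN": healthy * spec,
    }

def weighted_accuracy(sens, spec, prevalence, r):
    """Prevalence- and importance-weighted average of sensitivity and
    specificity; r is the importance of a false positive relative to a
    false negative (one plausible form, assumed for illustration)."""
    w_d = prevalence             # weight on sensitivity
    w_h = r * (1 - prevalence)   # weight on specificity
    return (w_d * sens + w_h * spec) / (w_d + w_h)

# Hypothetical colorectal-cancer screening test at 0.7% prevalence.
print(diagnostic_yield(sens=0.92, spec=0.87, prevalence=0.007))
print(weighted_accuracy(sens=0.92, spec=0.87, prevalence=0.007, r=0.05))
```

Plotting weighted_accuracy over a range of prevalence or r values, as the abstract proposes, shows how the preferred test can switch as the clinical context changes.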


Subject(s)
Diagnostic Tests, Routine , Risk Assessment , Colonoscopy , Colorectal Neoplasms/diagnosis , False Negative Reactions , False Positive Reactions , Humans , Prevalence , Sensitivity and Specificity
9.
Clin Infect Dis ; 61(5): 800-6, 2015 Sep 01.
Article in English | MEDLINE | ID: mdl-26113652

ABSTRACT

Clinical trials that compare strategies to optimize antibiotic use are of critical importance but are limited by competing risks that distort outcome interpretation, complexities of noninferiority trials, large sample sizes, and inadequate evaluation of benefits and harms at the patient level. The Antibacterial Resistance Leadership Group strives to overcome these challenges through innovative trial design. Response adjusted for duration of antibiotic risk (RADAR) is a novel methodology utilizing a superiority design and a 2-step process: (1) categorizing patients into an overall clinical outcome (based on benefits and harms), and (2) ranking patients with respect to a desirability of outcome ranking (DOOR). DOORs are constructed by assigning higher ranks to patients with (1) better overall clinical outcomes and (2) shorter durations of antibiotic use for similar overall clinical outcomes. DOOR distributions are compared between antibiotic use strategies. The probability that a randomly selected patient will have a better DOOR if assigned to the new strategy is estimated. DOOR/RADAR represents a new paradigm in assessing the risks and benefits of new strategies to optimize antibiotic use.
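
At the estimation step, the DOOR comparison described above reduces to a pairwise-comparison (Mann-Whitney-type) probability: the chance that a randomly chosen patient on the new strategy has a better rank than a randomly chosen patient on the control strategy, with ties split evenly. The sketch below is a minimal illustration with made-up ranks, not the ARLG implementation.

```python
import itertools
import numpy as np

def door_probability(new, control):
    """P(random 'new' patient has a better DOOR than a random 'control'
    patient), with ties counted as one half. Lower rank = better outcome."""
    wins = ties = 0
    for a, b in itertools.product(new, control):
        if a < b:
            wins += 1
        elif a == b:
            ties += 1
    n_pairs = len(new) * len(control)
    return (wins + 0.5 * ties) / n_pairs

# Hypothetical DOOR ranks (1 = best overall outcome with shortest antibiotic
# duration); purely illustrative.
new_strategy = np.array([1, 1, 2, 2, 3, 4])
control      = np.array([2, 3, 3, 4, 4, 5])
print(door_probability(new_strategy, control))  # ~0.81 with these ranks
```

A confidence interval for this probability would be obtained in practice with standard U-statistic variance estimates or the bootstrap.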


Subject(s)
Anti-Bacterial Agents/administration & dosage , Anti-Bacterial Agents/therapeutic use , Clinical Trials as Topic , Drug Resistance, Bacterial , Research Design , Bacterial Infections/drug therapy , Humans , Patient Safety , Risk , Treatment Outcome
10.
J Med Imaging (Bellingham) ; 11(1): 014501, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38283653

ABSTRACT

Purpose: Understanding an artificial intelligence (AI) model's ability to generalize to its target population is critical to ensuring the safe and effective usage of AI in medical devices. A traditional generalizability assessment relies on the availability of large, diverse datasets, which are difficult to obtain in many medical imaging applications. We present an approach for enhanced generalizability assessment by examining the decision space beyond the available testing data distribution. Approach: Vicinal distributions of virtual samples are generated by interpolating between triplets of test images. The generated virtual samples leverage the characteristics already in the test set, increasing the sample diversity while remaining close to the AI model's data manifold. We demonstrate the generalizability assessment approach on the non-clinical tasks of classifying patient sex, race, COVID status, and age group from chest x-rays. Results: Decision region composition analysis for generalizability indicated that a disproportionately large portion of the decision space belonged to a single "preferred" class for each task, despite comparable performance on the evaluation dataset. Evaluation using cross-reactivity and population shift strategies indicated a tendency to overpredict samples as belonging to the preferred class (e.g., COVID negative) for patients whose subgroup was not represented in the model development data. Conclusions: An analysis of an AI model's decision space has the potential to provide insight into model generalizability. Our approach uses the analysis of composition of the decision space to obtain an improved assessment of model generalizability in the case of limited test data.
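
The abstract describes generating vicinal distributions of virtual samples by interpolating between triplets of test images. A minimal sketch of one such construction is given below, assuming convex combinations with Dirichlet-distributed weights applied to arrays of equal shape; the authors' exact interpolation scheme, any preprocessing, and the downstream decision-region analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def vicinal_samples(img_a, img_b, img_c, n_samples=10, alpha=1.0):
    """Generate virtual images as convex combinations of a triplet of test
    images. Weights are drawn from a symmetric Dirichlet(alpha), so every
    virtual sample lies inside the triangle spanned by the three originals
    in pixel space (an assumed, deliberately simple scheme)."""
    weights = rng.dirichlet([alpha] * 3, size=n_samples)   # shape (n, 3)
    stack = np.stack([img_a, img_b, img_c], axis=0)        # shape (3, H, W)
    return np.tensordot(weights, stack, axes=(1, 0))       # shape (n, H, W)

# Toy 64x64 arrays standing in for real chest x-rays.
a, b, c = (rng.random((64, 64)) for _ in range(3))
virtual = vicinal_samples(a, b, c, n_samples=5)
print(virtual.shape)  # (5, 64, 64)
```

Feeding such virtual samples through the trained model and tallying the predicted classes is one way to estimate the decision-region composition discussed in the results.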

11.
Clin Trials ; 10(5): 666-76, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23983159

ABSTRACT

BACKGROUND: Biomarker assays can be evaluated for analytical performance (the ability of the assay to measure the biomarker quantity) and clinical performance (the ability of the assay result to inform on the clinical condition of interest). Additionally, a biomarker assay is said to have clinical utility if it ultimately improves patient outcomes when used as intended. PURPOSE: This article reviews analytical and clinical performance studies of biomarker assay tests and also some designs of clinical utility studies. RESULTS: Appropriate design and statistical analysis of analytical and clinical evaluation studies depend on the intended clinical use of the test. Key aspects of valid performance studies include using subjects who are independent of those used to develop the test, masking users of the test to any other available test or reference results, and including subjects with unavailable results in the primary statistical analysis (an intention-to-diagnose analysis). Ingenuity in study design and analysis may be required for efficient and unbiased estimation of performance. LIMITATIONS: Performance studies need to be carefully planned because they are prone to many sources of bias. Analytical inaccuracy can hamper the clinical performance of biomarkers. CONCLUSIONS: As biomedical research and technology advance, challenges in study design and statistical analysis will continue to emerge for analytical and clinical performance studies of biomarker assays. Although not emphasized in some circles, the analytical performance of a biomarker assay is important to characterize. Analytical performance studies have many study design and statistical analysis challenges that deserve further attention.


Subject(s)
Biomarkers , Biomedical Research/methods , Diagnostic Techniques and Procedures , Research Design , Data Interpretation, Statistical , Humans , Reproducibility of Results , Sensitivity and Specificity
12.
Ther Innov Regul Sci ; 57(3): 453-463, 2023 05.
Article in English | MEDLINE | ID: mdl-36869194

ABSTRACT

The use of Bayesian statistics to support regulatory evaluation of medical devices began in the late 1990s. We review the literature, focusing on recent developments of Bayesian methods, including hierarchical modeling of studies and subgroups, borrowing strength from prior data, effective sample size, Bayesian adaptive designs, pediatric extrapolation, benefit-risk decision analysis, use of real-world evidence, and diagnostic device evaluation. We illustrate how these developments were utilized in recent medical device evaluations. In Supplementary Material, we provide a list of medical devices for which Bayesian statistics were used to support approval by the US Food and Drug Administration (FDA), including those since 2010, the year the FDA published their guidance on Bayesian statistics for medical devices. We conclude with a discussion of current and future challenges and opportunities for Bayesian statistics, including artificial intelligence/machine learning (AI/ML) Bayesian modeling, uncertainty quantification, Bayesian approaches using propensity scores, and computational challenges for high dimensional data and models.
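
Of the developments listed above, borrowing strength from prior data is the easiest to convey in closed form for a binomial endpoint. The sketch below is a generic power-prior illustration with a conjugate beta-binomial model; the counts, the discount factor, and the effective-sample-size approximation are invented for the example and do not represent any FDA-reviewed analysis.

```python
from scipy import stats

# Hypothetical historical study and current pivotal study of a device success rate.
prior_successes, prior_n = 45, 60
curr_successes,  curr_n  = 30, 40
a0 = 0.5   # power-prior discount: 0 = ignore historical data, 1 = pool fully

# Conjugate update: Beta(1, 1) initial prior, historical data downweighted by a0.
alpha = 1 + a0 * prior_successes + curr_successes
beta = 1 + a0 * (prior_n - prior_successes) + (curr_n - curr_successes)
posterior = stats.beta(alpha, beta)

lo, hi = posterior.ppf([0.025, 0.975])
print(f"Posterior mean success rate: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
print(f"Approximate effective sample size borrowed: {a0 * prior_n:.0f} subjects")
```

Hierarchical models and adaptive borrowing rules, also discussed in the review, generalize this idea by letting the data determine how much weight the historical information receives.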


Subject(s)
Artificial Intelligence , Research Design , United States , Humans , Child , Bayes Theorem , Sample Size , United States Food and Drug Administration
13.
Acad Radiol ; 30(2): 159-182, 2023 02.
Article in English | MEDLINE | ID: mdl-36464548

ABSTRACT

Multiparametric quantitative imaging biomarkers (QIBs) offer distinct advantages over single, univariate descriptors because they provide a more complete measure of complex, multidimensional biological systems. In disease, where structural and functional disturbances occur across a multitude of subsystems, multivariate QIBs are needed to measure the extent of system malfunction. This paper, the first Use Case in a series of articles on multiparameter imaging biomarkers, considers multiple QIBs as a multidimensional vector to represent all relevant disease constructs more completely. The approach proposed offers several advantages over QIBs as multiple endpoints and avoids combining them into a single composite that obscures the medical meaning of the individual measurements. We focus on establishing statistically rigorous methods to create a single, simultaneous measure from multiple QIBs that preserves the sensitivity of each univariate QIB while incorporating the correlation among QIBs. Details are provided for metrological methods to quantify the technical performance. Methods to reduce the set of QIBs, test the superiority of the mp-QIB model to any univariate QIB model, and design study strategies for generating precision and validity claims are also provided. QIBs of Alzheimer's Disease from the ADNI merge data set are used as a case study to illustrate the methods described.
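
One standard way to form a single, simultaneous measure from a vector of correlated QIBs, while retaining each component and its correlation structure, is a Mahalanobis-type distance from a reference population. The sketch below is a generic illustration with simulated data; the reference distribution, covariance, and QIB names are assumptions and do not reproduce the specific mp-QIB construction or the ADNI analysis in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated reference (e.g., cognitively normal) population for three correlated
# QIBs: hippocampal volume, cortical thickness, FDG-PET SUVR (hypothetical units).
mean_ref = np.array([7.0, 2.5, 1.3])
cov_ref = np.array([[0.50, 0.10, 0.05],
                    [0.10, 0.04, 0.01],
                    [0.05, 0.01, 0.02]])
reference = rng.multivariate_normal(mean_ref, cov_ref, size=500)

mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of a QIB vector from the reference
    population: a single measure that accounts for QIB correlations."""
    d = x - mu
    return float(d @ cov_inv @ d)

patient = np.array([6.1, 2.2, 1.1])   # hypothetical patient QIB vector
print(f"Squared Mahalanobis distance: {mahalanobis_sq(patient):.2f}")
```

Because the distance is driven by the full covariance, a patient who is only mildly abnormal on each QIB individually can still be flagged as clearly abnormal jointly, which is the motivation for treating the QIBs as a vector rather than as separate endpoints.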


Subject(s)
Alzheimer Disease , Diagnostic Imaging , Humans , Diagnostic Imaging/methods , Biomarkers , Alzheimer Disease/diagnostic imaging
14.
Acad Radiol ; 30(2): 215-229, 2023 02.
Article in English | MEDLINE | ID: mdl-36411153

ABSTRACT

This paper is the fifth in a five-part series on statistical methodology for performance assessment of multi-parametric quantitative imaging biomarkers (mpQIBs) for radiomic analysis. Radiomics is the process of extracting visually imperceptible features from radiographic medical images using data-driven algorithms. We refer to the radiomic features as data-driven imaging markers (DIMs), which are quantitative measures discovered under a data-driven framework from images beyond visual recognition but evident as patterns of disease processes irrespective of whether or not ground truth exists for the true value of the DIM. This paper aims to set guidelines on how to build machine learning models using DIMs in radiomics and to apply and report them appropriately. We provide a list of recommendations, named RANDAM (an abbreviation of "Radiomic ANalysis and DAta Modeling"), for analysis, modeling, and reporting in a radiomic study to make machine learning analyses in radiomics more reproducible. RANDAM contains five main components to use in reporting radiomics studies: design, data preparation, data analysis and modeling, reporting, and material availability. Real case studies in lung cancer research are presented along with simulation studies to compare different feature selection methods and several validation strategies.
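
A recurring reproducibility pitfall that recommendations of this kind address is performing feature selection outside the cross-validation loop. The sketch below shows the safe pattern with scikit-learn on synthetic data: selection is embedded in a pipeline so it is re-fit inside every fold. The dataset, feature count, and model choice are illustrative and not taken from the RANDAM case studies.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomics table: many features, few informative.
X, y = make_classification(n_samples=120, n_features=500, n_informative=10,
                           random_state=0)

# Selection and modeling live inside one pipeline, so the univariate filter is
# re-estimated on the training portion of every fold (no selection leakage).
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Selecting the 20 "best" features on the full dataset before cross-validating would inflate the apparent AUC, which is exactly the kind of irreproducible result the reporting checklist is meant to prevent.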


Subject(s)
Lung Neoplasms , Multiparametric Magnetic Resonance Imaging , Humans , ROC Curve , Multiparametric Magnetic Resonance Imaging/methods , Diagnostic Imaging , Lung Neoplasms/diagnostic imaging , Lung
15.
Acad Radiol ; 30(2): 196-214, 2023 02.
Article in English | MEDLINE | ID: mdl-36273996

ABSTRACT

Combinations of multiple quantitative imaging biomarkers (QIBs) are often able to predict the likelihood of an event of interest such as death or disease recurrence more effectively than single imaging measurements can alone. The development of such multiparametric quantitative imaging and evaluation of its fitness of use differs from the analogous processes for individual QIBs in several key aspects. A computational procedure to combine the QIB values into a model output must be specified. The output must also be reproducible and be shown to have reasonably strong ability to predict the risk of an event of interest. Attention must be paid to statistical issues not often encountered in the single QIB scenario, including overfitting and bias in the estimates of model performance. This is the fourth in a five-part series on statistical methodology for assessing the technical performance of multiparametric quantitative imaging. Considerations for data acquisition are discussed and recommendations from the literature on methodology to construct and evaluate QIB-based models for risk prediction are summarized. The findings in the literature upon which these recommendations are based are demonstrated through simulation studies. The concepts in this manuscript are applied to a real-life example involving prediction of major adverse cardiac events using automated plaque analysis.


Subject(s)
Diagnostic Imaging , Humans , Diagnostic Imaging/methods , Biomarkers , Computer Simulation
16.
Acad Radiol ; 30(2): 183-195, 2023 02.
Article in English | MEDLINE | ID: mdl-36202670

ABSTRACT

This manuscript is the third in a five-part series related to statistical assessment methodology for technical performance of multi-parametric quantitative imaging biomarkers (mp-QIBs). We outline approaches and statistical methodologies for developing and evaluating a phenotype classification model from a set of multiparametric QIBs. We then describe validation studies of the classifier for precision, diagnostic accuracy, and interchangeability with a comparator classifier. We follow with an end-to-end real-world example of development and validation of a classifier for atherosclerotic plaque phenotypes. We consider diagnostic accuracy and interchangeability to be clinically meaningful claims for a phenotype classification model informed by mp-QIB inputs, aiming to provide tools to demonstrate agreement between imaging-derived characteristics and clinically established phenotypes. Understanding that we are working in an evolving field, we close our manuscript with an acknowledgement of existing challenges and a discussion of where additional work is needed. In particular, we discuss the challenges involved with technical performance and analytical validation of mp-QIBs. We intend for this manuscript to further advance the robust and promising science of multiparametric biomarker development.


Subject(s)
Diagnostic Imaging , Diagnostic Imaging/methods , Biomarkers , Phenotype
17.
Acad Radiol ; 30(2): 147-158, 2023 02.
Article in English | MEDLINE | ID: mdl-36180328

ABSTRACT

Multiparameter quantitative imaging incorporates anatomical, functional, and/or behavioral biomarkers to characterize tissue, detect disease, identify phenotypes, define longitudinal change, or predict outcome. Multiple imaging parameters are sometimes considered separately but ideally are evaluated collectively. Often they are reduced to Likert-scale interpretations, ignoring correlations among the quantitative properties that could yield better reproducibility or outcome prediction. In this paper we present three use cases of multiparameter quantitative imaging: i) multidimensional descriptor, ii) phenotype classification, and iii) risk prediction. A fourth application based on data-driven markers from radiomics is also presented. We describe the technical performance characteristics and their metrics common to all use cases, and provide a structure for the development, estimation, and testing of multiparameter quantitative imaging. This paper serves as an overview for a series of individual articles on the four applications, providing the statistical framework for multiparameter imaging applications in medicine.


Subject(s)
Diagnostic Imaging , Reproducibility of Results , Diagnostic Imaging/methods , Biomarkers , Phenotype
18.
J Clin Transl Sci ; 7(1): e212, 2023.
Article in English | MEDLINE | ID: mdl-37900353

ABSTRACT

Increasing emphasis on the use of real-world evidence (RWE) to support clinical policy and regulatory decision-making has led to a proliferation of guidance, advice, and frameworks from regulatory agencies, academia, professional societies, and industry. A broad spectrum of studies use real-world data (RWD) to produce RWE, ranging from randomized trials with outcomes assessed using RWD to fully observational studies. Yet, many proposals for generating RWE lack sufficient detail, and many analyses of RWD suffer from implausible assumptions, other methodological flaws, or inappropriate interpretations. The Causal Roadmap is an explicit, itemized, iterative process that guides investigators to prespecify study design and analysis plans; it addresses a wide range of guidance within a single framework. By supporting the transparent evaluation of causal assumptions and facilitating objective comparisons of design and analysis choices based on prespecified criteria, the Roadmap can help investigators to evaluate the quality of evidence that a given study is likely to produce, specify a study to generate high-quality RWE, and communicate effectively with regulatory agencies and other stakeholders. This paper aims to disseminate and extend the Causal Roadmap framework for use by clinical and translational researchers; three companion papers demonstrate applications of the Causal Roadmap for specific use cases.

19.
JCO Precis Oncol ; 6: e2100372, 2022 08.
Article in English | MEDLINE | ID: mdl-35952319

ABSTRACT

PURPOSE: As immune checkpoint inhibitors (ICI) become increasingly used in frontline settings, identifying early indicators of response is needed. Recent studies suggest a role for circulating tumor DNA (ctDNA) in monitoring response to ICI, but uncertainty exists in the generalizability of these studies. Here, the role of ctDNA for monitoring response to ICI is assessed through a standardized approach, using clinical trial data from five independent studies. PATIENTS AND METHODS: Patient-level clinical and ctDNA data were pooled and harmonized from 200 patients across five independent clinical trials investigating the treatment of patients with non-small-cell lung cancer with programmed cell death-1 (PD-1)/programmed death ligand-1 (PD-L1)-directed monotherapy or in combination with chemotherapy. CtDNA levels were measured using different ctDNA assays across the studies. Maximum variant allele frequencies were calculated using all somatic tumor-derived variants in each unique patient sample to correlate ctDNA changes with overall survival (OS) and progression-free survival (PFS). RESULTS: We observed strong associations between reductions in ctDNA levels from on-treatment liquid biopsies and improved OS (hazard ratio, 2.28; 95% CI, 1.62 to 3.20; P < .001) and PFS (hazard ratio, 1.76; 95% CI, 1.31 to 2.36; P < .001). Changes in maximum variant allele frequency ctDNA values showed strong associations across the different outcomes. CONCLUSION: In this pooled analysis of five independent clinical trials, consistent and robust associations between reductions in ctDNA and outcomes were found across multiple end points assessed in patients with non-small-cell lung cancer treated with an ICI. Additional tumor types, stages, and drug classes should be included in future analyses to further validate these findings. CtDNA may serve as an important tool in clinical development and an early indicator of treatment benefit.


Subject(s)
Antineoplastic Agents, Immunological , Carcinoma, Non-Small-Cell Lung , Circulating Tumor DNA , Lung Neoplasms , Antineoplastic Agents, Immunological/therapeutic use , Biomarkers, Tumor/genetics , Carcinoma, Non-Small-Cell Lung/drug therapy , Circulating Tumor DNA/genetics , Clinical Trials as Topic , Humans , Immune Checkpoint Inhibitors/pharmacology , Lung Neoplasms/drug therapy , Prognosis
20.
J Biopharm Stat ; 21(5): 954-70, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21830925

ABSTRACT

Studies of the accuracy of medical tests to diagnose the presence or absence of disease can suffer from an inability to verify the true disease state in everyone. When verification is missing at random (MAR), the missing data mechanism can be ignored in likelihood-based inference. However, this assumption may not hold even approximately. When verification is nonignorably missing, the most general model of the distribution of disease state, test result, and verification indicator is overparameterized. Parameters are only partially identified, creating regions of ignorance for maximum likelihood estimators. For studies of a single test, we use Bayesian analysis to implement the most general nonignorable model, a reduced nonignorable model with identifiable parameters, and the MAR model. Simple Gibbs sampling algorithms are derived that enable computation of the posterior distribution of test accuracy parameters. In particular, the posterior distribution is easily obtained for the most general nonignorable model, which makes relatively weak assumptions about the missing data mechanism. For this model, the posterior distribution combines two sources of uncertainty: ignorance in the estimation of partially identified parameters, and imprecision due to finite sampling variability. We compare the three models on data from a study of the accuracy of scintigraphy to diagnose liver disease.
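
As a companion to the abstract, the sketch below illustrates only the simplest of the three models compared: the MAR model, in which verification depends only on the test result, so the posterior of the accuracy parameters can be simulated directly from independent Beta posteriors without Gibbs sampling. The counts and uniform priors are invented; the nonignorable models in the paper additionally require an explicit model for the verification mechanism and involve genuinely partial identification, which this sketch does not capture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical verification-bias data (illustrative counts only).
# All subjects receive the test; disease status is verified in a subset.
n_pos, n_neg = 300, 700      # test-positive / test-negative subjects
ver_pos, d_pos = 200, 120    # verified among T+, diseased among those verified
ver_neg, d_neg = 100, 5      # verified among T-, diseased among those verified

n_draws = 100_000
# Under MAR, disease prevalence within each test stratum can be estimated from
# the verified subjects alone; use uniform Beta(1, 1) priors throughout.
p_d_given_pos = rng.beta(d_pos + 1, ver_pos - d_pos + 1, n_draws)
p_d_given_neg = rng.beta(d_neg + 1, ver_neg - d_neg + 1, n_draws)
p_pos = rng.beta(n_pos + 1, n_neg + 1, n_draws)

# Bayes' rule converts the stratum prevalences into accuracy parameters.
prev = p_d_given_pos * p_pos + p_d_given_neg * (1 - p_pos)
sens = p_d_given_pos * p_pos / prev
spec = (1 - p_d_given_neg) * (1 - p_pos) / (1 - prev)

for name, draws in [("sensitivity", sens), ("specificity", spec)]:
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name}: median {np.median(draws):.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")
```

Under the nonignorable models, the stratum prevalences among unverified subjects are no longer pinned down by the verified data, which is what produces the regions of ignorance described in the abstract.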


Subject(s)
Diagnostic Tests, Routine/statistics & numerical data , Disease , Liver Diseases/diagnosis , Models, Statistical , Radionuclide Imaging/statistics & numerical data , Research Design/statistics & numerical data , Algorithms , Bayes Theorem , Diagnostic Tests, Routine/trends , False Negative Reactions , Humans , Likelihood Functions , Liver Diseases/metabolism , Models, Theoretical , Regression Analysis , Reproducibility of Results , Sensitivity and Specificity