ABSTRACT
BACKGROUND & AIMS: Better surveillance tests for hepatocellular carcinoma (HCC) are needed. The GALAD score (gender, age, AFP-L3, α-fetoprotein [AFP], and des-γ-carboxyprothrombin) has been shown to have excellent sensitivity and specificity for HCC in phase 2 studies. We performed a phase 3 biomarker validation study to compare GALAD with AFP in detecting HCC. METHODS: This is a prospective study of patients with cirrhosis enrolled at 7 centers. Surveillance for HCC was performed every 6 months at each site, and HCC diagnosis was confirmed per American Association for the Study of Liver Diseases guidelines. Blood for biomarker research was obtained at each follow-up visit and stored in a biorepository. Measurements of AFP, AFP-L3, and des-γ-carboxyprothrombin were performed in a FujiFilm laboratory by staff blinded to clinical data. The performance of GALAD in detecting HCC was retrospectively evaluated within 12 months before the clinical diagnosis. All analyses were conducted by an unblinded statistician in the Early Detection Research Network (EDRN) data management and coordinating center. RESULTS: A total of 1,558 patients with cirrhosis were enrolled and followed for a median of 2.2 years. A total of 109 patients developed HCC (76 very early or early stage), with an annual incidence rate of 2.4%. The areas under the curve for AFP and GALAD within 12 months before HCC diagnosis were 0.66 and 0.78 (P < .001), respectively. Using a GALAD cutoff of -1.36, specificity was 82%, and sensitivity at 12 months before HCC diagnosis was 62%. For comparison, AFP at 82% specificity showed 41% sensitivity at 12 months before HCC diagnosis (P = .001). CONCLUSIONS: The GALAD score, compared with AFP, improves the detection of HCC within 12 months before the actual diagnosis.
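To make concrete how the score above combines its five inputs, here is a minimal Python sketch. The coefficients are taken from the originally published GALAD model (Johnson et al.), not from this abstract, so treat them as an assumption; only the -1.36 cutoff and its operating characteristics come from the study summarized above.

```python
import math

# GALAD score for HCC risk from sex, age, and three serum biomarkers.
# Coefficients are from Johnson et al.'s original GALAD publication
# (an assumption; this abstract does not restate them).
def galad_score(age, male, afp, afp_l3, dcp):
    """age in years; male = 1 for men, 0 for women; AFP in ng/mL;
    AFP-L3 as a percentage; DCP in mAU/mL."""
    return (-10.08
            + 0.09 * age
            + 1.67 * male
            + 2.34 * math.log10(afp)
            + 0.04 * afp_l3
            + 1.33 * math.log10(dcp))

def galad_positive(score, cutoff=-1.36):
    # The study's operating point: scores above -1.36 are flagged
    # (82% specificity, 62% sensitivity 12 months before diagnosis).
    return score > cutoff
```

The score is linear in age, sex, and AFP-L3 but logarithmic in AFP and DCP, which compresses the heavy right tails of those two markers.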
ABSTRACT
Background Abbreviated MRI is a proposed paradigm shift for hepatocellular carcinoma (HCC) surveillance, but data on its performance are lacking for histopathologically confirmed early-stage HCC. Purpose To evaluate the sensitivity and specificity of dynamic contrast-enhanced abbreviated MRI for early-stage HCC detection, using surgical pathologic findings as the reference standard. Materials and Methods This retrospective study was conducted at three U.S. liver transplant centers in patients with cirrhosis who underwent liver resection or transplant between January 2009 and December 2019 and standard "full" liver MRI with and without contrast enhancement within 3 months before surgery. Patients who had HCC-directed treatment before surgery were excluded. Dynamic abbreviated MRI examinations were simulated from the presurgical full MRI by selecting the coronal T2-weighted and axial three-dimensional fat-suppressed T1-weighted dynamic contrast-enhanced sequences at precontrast, late arterial, portal venous, and delayed phases. Two abdominal radiologists at each center independently interpreted the simulated abbreviated examinations with use of the Liver Imaging Reporting and Data System version 2018. Patients with any high-risk liver observations (>LR-3) were classified as positive; otherwise, they were classified as negative. With liver pathologic findings as the reference standard for the presence versus absence of early-stage HCC, the sensitivity, specificity, and their 95% CIs were calculated. Logistic regression was used to identify factors associated with correct classification. Results A total of 161 patients with early-stage HCC (median age, 62 years [IQR, 58-67 years]; 123 men) and 138 patients without HCC (median age, 55 years [IQR, 47-63 years]; 85 men) were confirmed with surgical pathologic findings. 
The sensitivity and specificity of abbreviated MRI were 88.2% (142 of 161 patients) (95% CI: 83.5, 92.5) and 89.1% (123 of 138 patients) (95% CI: 84.4, 93.8), respectively. Sensitivity was lower for Child-Pugh class B or C versus Child-Pugh class A cirrhosis (64.1% vs 94.2%; P < .001). Conclusion With surgical pathologic findings as the reference standard, dynamic abbreviated MRI had high sensitivity and specificity for early-stage hepatocellular carcinoma detection in patients with compensated cirrhosis but lower sensitivity in those with decompensated cirrhosis. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Kim in this issue.
Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Male; Humans; Middle Aged; Carcinoma, Hepatocellular/epidemiology; Liver Neoplasms/epidemiology; Retrospective Studies; Contrast Media; Magnetic Resonance Imaging/methods; Liver Cirrhosis/diagnostic imaging; Sensitivity and Specificity; Gadolinium DTPA
ABSTRACT
Referral strategies based on risk scores and medical tests are commonly proposed. Direct assessment of their clinical utility requires implementing the strategy and is not possible in the early phases of biomarker research. Prior to late-phase studies, net benefit measures can be used to assess the potential clinical impact of a proposed strategy. Validation studies, in which the biomarker defines a prespecified referral strategy, are a gold standard approach to evaluating biomarker potential. Uncertainty, quantified by a confidence interval, is important to consider when deciding whether a biomarker warrants an impact study, fails to demonstrate clinical potential, or requires more data before a decision can be made. We establish distribution theory for empirical estimators of net benefit and propose empirical estimators of variance. The primary results are for the most commonly employed estimators of net benefit: from cohort and unmatched case-control samples, and for point estimates and net benefit curves. Novel estimators of net benefit under stratified two-phase and categorically matched case-control sampling are proposed and distribution theory developed. Results for common variants of net benefit and for estimation from right-censored outcomes are also presented. We motivate and demonstrate the methodology with examples from lung cancer research and highlight its application to study design.
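The empirical cohort estimator that this work builds on can be sketched in a few lines: the standard net benefit formula NB(t) = TP/n − (t/(1−t))·FP/n at risk threshold t. This is only the point estimator; the paper's variance estimators and sampling-design corrections are omitted.

```python
def net_benefit(risks, outcomes, threshold):
    """Empirical net benefit of referring patients whose predicted risk
    meets the threshold t: NB(t) = TP/n - (t/(1-t)) * FP/n.
    risks: predicted probabilities; outcomes: 1 = event, 0 = no event."""
    n = len(risks)
    tp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 1)
    fp = sum(1 for r, y in zip(risks, outcomes) if r >= threshold and y == 0)
    return tp / n - (threshold / (1 - threshold)) * fp / n

# A net benefit curve is this estimator swept over a grid of thresholds.
def net_benefit_curve(risks, outcomes, thresholds):
    return [(t, net_benefit(risks, outcomes, t)) for t in thresholds]
```

The odds weight t/(1−t) encodes the harm-benefit trade-off implied by referring at threshold t, which is why net benefit is reported in units of true positives per patient.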
Subjects
Research Design; Biomarkers; Case-Control Studies; Humans; Uncertainty
ABSTRACT
Before implementing a biomarker in routine clinical care, it must demonstrate clinical utility by leading to clinical actions that positively affect patient-relevant outcomes. Randomized controlled early detection utility trials, especially those targeting a mortality endpoint, are challenging due to their high costs and prolonged duration. Special design considerations are required to determine the clinical utility of early detection assays. This commentary reports on discussions among the National Cancer Institute's Early Detection Research Network investigators, outlining the recommended process for carrying out single-organ biomarker-driven clinical utility studies. We present the early detection utility studies in the context of phased biomarker development. We describe aspects of the studies related to the features of biomarker tests, the clinical context of endpoints, the performance criteria for later phase evaluation, and study size. We discuss novel adaptive design approaches for improving the efficiency and practicality of clinical utility trials. We recommend using multiple strategies, including adopting real-world evidence, emulated trials, and mathematical modeling, to circumvent the challenges in conducting early detection utility trials.
Subjects
Biomarkers, Tumor; Early Detection of Cancer; Neoplasms; Research Design; Humans; Biomarkers, Tumor/blood; Biomarkers, Tumor/analysis; Early Detection of Cancer/methods; Neoplasms/diagnosis
ABSTRACT
BACKGROUND: Colorectal cancer (CRC) screening is underutilized despite evidence that screening improves survival. Since healthcare provider recommendation is a strong predictor of CRC screening completion, providers are encouraged to engage eligible patients in collaborative decision-making that attends to patients' values, needs, and preferences for guideline-concordant screening modalities. METHODS: This three-arm randomized controlled trial is testing the effectiveness of an evidence-based video intervention informing patients of screening choices, delivered in a clinic prior to a healthcare appointment. We hypothesize that participants randomized to watch a basic video describing CRC and screening in addition to an informed choice video showing the advantages and disadvantages of the fecal immunochemical test (FIT), stool DNA FIT (s-DNA FIT), and colonoscopy (Arm 3) will exhibit a greater proportion of time adherent to CRC screening guidelines after 1, 3, and 6 years than those who only watch the basic video (Arm 2) or no video at all (Arm 1). Primary care and Obstetrics/Gynecology clinics across the United States are recruiting 5,280 patients, half of whom have never been screened and half of whom previously screened but are currently not guideline adherent. Participants complete surveys prior to and following an index appointment to self-report personal, cognitive, and environmental factors potentially associated with screening. Proportion of time adherent to screening guidelines will be assessed using medical record data and supplemented with annual surveys self-reporting screening. CONCLUSION: Results will provide evidence on the effectiveness of informational and motivational videos to encourage CRC screening that can be easily integrated into clinical practice. ClinicalTrials.gov: NCT05246839.
Subjects
Colorectal Neoplasms; Early Detection of Cancer; Colonoscopy; Humans; Mass Screening; Occult Blood; Prospective Studies; United States
ABSTRACT
Decision curves are a tool for evaluating the population impact of using a risk model for deciding whether to undergo some intervention, which might be a treatment to help prevent an unwanted clinical event or invasive diagnostic testing such as biopsy. The common formulation of decision curves is based on an opt-in framework. That is, a risk model is evaluated based on the population impact of using the model to opt high-risk patients into treatment in a setting where the standard of care is not to treat. Opt-in decision curves display the population net benefit of the risk model in comparison to the reference policy of treating no patients. In some contexts, however, the standard of care in the absence of a risk model is to treat everyone, and the potential use of the risk model would be to opt low-risk patients out of treatment. Although opt-out settings were discussed in the original decision curve paper, opt-out decision curves are underused. We review the formulation of opt-out decision curves and discuss their advantages for interpretation and inference when treat-all is the standard.
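The opt-in versus opt-out contrast described above can be sketched with the standard decision-curve formulas (in the style of Vickers and Elkin); this is a minimal illustration, not the paper's inference machinery. Under opt-in, the model is compared with treat-none (net benefit 0); under opt-out, the reference is treat-all, and the model's value is the gain from sparing low-risk patients treatment.

```python
def nb_model(risks, outcomes, t):
    """Net benefit of treating patients with predicted risk >= t."""
    n = len(risks)
    tp = sum(y for r, y in zip(risks, outcomes) if r >= t)
    fp = sum(1 - y for r, y in zip(risks, outcomes) if r >= t)
    return tp / n - (t / (1 - t)) * fp / n

def nb_treat_all(outcomes, t):
    """Net benefit of the treat-all reference policy (opt-out baseline)."""
    prev = sum(outcomes) / len(outcomes)
    return prev - (t / (1 - t)) * (1 - prev)

def nb_opt_out(risks, outcomes, t):
    """Gain from using the model to opt low-risk patients out of
    treatment, relative to treat-all. Algebraically this equals
    (t/(1-t)) * TN/n - FN/n: averted treatments among non-cases,
    minus missed cases, in units of true positives."""
    return nb_model(risks, outcomes, t) - nb_treat_all(outcomes, t)
```

An opt-out decision curve plots `nb_opt_out` against the threshold t; the model only shows value where this difference is positive, i.e., where sparing treatments outweighs the cases missed.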
Subjects
Clinical Decision-Making; Decision Making; Decision Support Techniques; Delivery of Health Care; Risk Management/methods; Cost-Benefit Analysis; Humans; Policy; Risk; Risk Assessment; Standard of Care
ABSTRACT
BACKGROUND: Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. METHODS: We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. RESULTS: All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. CONCLUSION: Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications.