Results 1 - 20 of 24
2.
Cancer ; 130(5): 770-780, 2024 03 01.
Article in English | MEDLINE | ID: mdl-37877788

ABSTRACT

BACKGROUND: Recent therapeutic advances and screening technologies have improved survival among patients with lung cancer, who are now at high risk of developing second primary lung cancer (SPLC). Recently, an SPLC risk-prediction model (called SPLC-RAT) was developed and validated using data from population-based epidemiological cohorts and clinical trials, but real-world validation has been lacking. The predictive performance of SPLC-RAT was evaluated in a hospital-based cohort of lung cancer survivors. METHODS: The authors analyzed data from 8448 ever-smoking patients diagnosed with initial primary lung cancer (IPLC) in 1997-2006 at Mayo Clinic, with each patient followed for SPLC through 2018. They evaluated the predictive performance of SPLC-RAT and further explored the potential of improving SPLC detection through risk model-based surveillance using SPLC-RAT versus existing clinical surveillance guidelines. RESULTS: Of 8448 IPLC patients, 483 (5.7%) developed SPLC over 26,470 person-years. The application of SPLC-RAT showed high discrimination (area under the receiver operating characteristic curve: 0.81). When the cohort was stratified by a 10-year risk threshold of ≥5.6% (i.e., the 80th percentile from the SPLC-RAT development cohort), the observed SPLC incidence was significantly elevated in the high-risk versus low-risk subgroup (13.1% vs. 1.1%, p < 1 × 10⁻⁶). The risk-based surveillance through SPLC-RAT (≥5.6% threshold) outperformed the National Comprehensive Cancer Network guidelines, with higher sensitivity (86.4% vs. 79.4%) and specificity (38.9% vs. 30.4%), and required 20% fewer computed tomography follow-ups to detect one SPLC (162 vs. 202). CONCLUSION: In a large, hospital-based cohort, the authors validated the predictive performance of SPLC-RAT in identifying survivors at high risk of SPLC and showed its potential to improve SPLC detection through risk-based surveillance. PLAIN LANGUAGE SUMMARY: Lung cancer survivors have a high risk of developing second primary lung cancer (SPLC). However, no evidence-based guidelines for SPLC surveillance are available for lung cancer survivors. Recently, an SPLC risk-prediction model was developed and validated using data from population-based epidemiological cohorts and clinical trials, but real-world validation has been lacking. Using a large, real-world cohort of lung cancer survivors, we showed the high predictive accuracy and risk-stratification ability of the SPLC risk-prediction model. Furthermore, we demonstrated the potential to enhance efficiency in detecting SPLC using risk model-based surveillance strategies compared to existing consensus-based clinical guidelines, including those of the National Comprehensive Cancer Network.
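
To make the surveillance metrics above concrete, here is a minimal Python sketch that applies a 5.6% risk threshold to synthetic risk scores and computes sensitivity, specificity, and flagged patients per detected SPLC. The risks and outcomes are randomly generated for illustration only; they are not the study's data, and the sketch is not the SPLC-RAT model.

```python
import numpy as np

# Illustrative sketch only: synthetic risks and outcomes, not study data or SPLC-RAT output.
rng = np.random.default_rng(0)
risk = rng.uniform(0.0, 0.20, size=8448)       # hypothetical 10-year SPLC risks
developed_splc = rng.random(8448) < risk       # hypothetical observed SPLC outcomes

flagged = risk >= 0.056                        # the >=5.6% surveillance threshold from the abstract
tp = np.sum(flagged & developed_splc)
fn = np.sum(~flagged & developed_splc)
fp = np.sum(flagged & ~developed_splc)
tn = np.sum(~flagged & ~developed_splc)

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("flagged patients per detected SPLC:", (tp + fp) / tp)
```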


Subjects
Cancer Survivors , Lung Neoplasms , Neoplasms, Second Primary , Humans , Lung Neoplasms/diagnosis , Lung Neoplasms/epidemiology , Lung Neoplasms/therapy , Risk , Smoking , Lung
3.
JAMA Oncol ; 9(12): 1640-1648, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37883107

ABSTRACT

Importance: The revised 2021 US Preventive Services Task Force (USPSTF) guidelines for lung cancer screening have been shown to reduce disparities in screening eligibility and performance between African American and White individuals vs the 2013 guidelines. However, potential disparities across other racial and ethnic groups in the US remain unknown. Risk model-based screening may reduce racial and ethnic disparities and improve screening performance, but neither validation of key risk prediction models nor their screening performance has been examined by race and ethnicity. Objective: To validate and recalibrate the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial 2012 (PLCOm2012) model (a well-established risk prediction model based on a predominantly White population) across races and ethnicities in the US and evaluate racial and ethnic disparities and screening performance through risk-based screening using PLCOm2012 vs the USPSTF 2021 criteria. Design, Setting, and Participants: In a population-based cohort design, the Multiethnic Cohort Study enrolled participants in 1993-1996, followed up through December 31, 2018. Data analysis was conducted from April 1, 2022, to May 19, 2023. A total of 105 261 adults with a smoking history were included. Exposures: The 6-year lung cancer risk, calculated through the recalibrated PLCOm2012 (ie, PLCOm2012-Update), and screening eligibility based on a 6-year risk threshold greater than or equal to 1.3%, which yields eligibility similar to that under the USPSTF 2021 guidelines. Outcomes: Predictive accuracy, screening eligibility-incidence (E-I) ratio (ie, ratio of the number of eligible to incident cases), and screening performance (sensitivity, specificity, and number needed to screen to detect 1 lung cancer). Results: Of 105 261 participants (60 011 [57.0%] men; mean [SD] age, 59.8 [8.7] years), consisting of 19 258 (18.3%) African American, 27 227 (25.9%) Japanese American, 21 383 (20.3%) Latino, 8368 (7.9%) Native Hawaiian/Other Pacific Islander, and 29 025 (27.6%) White individuals, 1464 (1.4%) developed lung cancer within 6 years from enrollment. The PLCOm2012-Update showed good predictive accuracy across races and ethnicities (area under the curve, 0.72-0.82). The USPSTF 2021 criteria yielded a large disparity among African American individuals, whose E-I ratio was 53% lower vs White individuals (E-I ratio: 9.5 vs 20.3; P < .001). Under risk-based screening (PLCOm2012-Update 6-year risk ≥1.3%), the disparity between African American and White individuals was substantially reduced (E-I ratio: 15.9 vs 18.4; P < .001), with minimal disparities observed in persons of other minoritized groups, including Japanese American, Latino, and Native Hawaiian/Other Pacific Islander individuals. Risk-based screening yielded superior overall and race- and ethnicity-specific performance to the USPSTF 2021 criteria, with higher overall sensitivity (67.2% vs 57.7%) and lower number needed to screen (26 vs 30) at similar specificity (76.6%). Conclusions: The findings of this cohort study suggest that risk-based lung cancer screening can reduce racial and ethnic disparities and improve screening performance across races and ethnicities vs the USPSTF 2021 criteria.
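
As a rough illustration of the eligibility-incidence (E-I) ratio and number needed to screen described above, the following sketch uses invented counts (not the Multiethnic Cohort data) to show how these quantities are computed for one group.

```python
# Illustrative sketch with hypothetical counts, not the Multiethnic Cohort data.
def screening_summary(n_eligible, n_incident_cases, n_cases_among_eligible):
    e_i_ratio = n_eligible / n_incident_cases          # screening-eligible persons per incident case
    sensitivity = n_cases_among_eligible / n_incident_cases
    nns = n_eligible / n_cases_among_eligible          # number needed to screen per cancer detected
    return e_i_ratio, sensitivity, nns

# Hypothetical group: 4000 eligible persons, 250 incident lung cancers, 170 of them eligible.
print(screening_summary(4000, 250, 170))
```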


Subjects
Early Detection of Cancer , Lung Neoplasms , Male , Adult , Humans , Middle Aged , Female , Cohort Studies , Lung Neoplasms/diagnosis , Lung Neoplasms/epidemiology , Ethnicity , Hispanic or Latino
4.
Circulation ; 148(12): 950-958, 2023 09 19.
Article in English | MEDLINE | ID: mdl-37602376

ABSTRACT

BACKGROUND: Previous studies comparing percutaneous coronary intervention (PCI) with coronary artery bypass grafting (CABG) in patients with multivessel coronary disease not involving the left main have shown significantly lower rates of death, myocardial infarction (MI), or stroke after CABG. These studies did not routinely use current-generation drug-eluting stents or fractional flow reserve (FFR) to guide PCI. METHODS: FAME 3 (Fractional Flow Reserve versus Angiography for Multivessel Evaluation) is an investigator-initiated, multicenter, international, randomized trial involving patients with 3-vessel coronary artery disease (not involving the left main coronary artery) in 48 centers worldwide. Patients were randomly assigned to receive FFR-guided PCI using zotarolimus drug-eluting stents or CABG. The prespecified key secondary end point of the trial reported here is the 3-year incidence of the composite of death, MI, or stroke. RESULTS: A total of 1500 patients were randomized to FFR-guided PCI or CABG. Follow-up was achieved in >96% of patients in both groups. There was no difference in the incidence of the composite of death, MI, or stroke after FFR-guided PCI compared with CABG (12.0% versus 9.2%; hazard ratio [HR], 1.3 [95% CI, 0.98-1.83]; P=0.07). The rates of death (4.1% versus 3.9%; HR, 1.0 [95% CI, 0.6-1.7]; P=0.88) and stroke (1.6% versus 2.0%; HR, 0.8 [95% CI, 0.4-1.7]; P=0.56) were not different. MI occurred more frequently after PCI (7.0% versus 4.2%; HR, 1.7 [95% CI, 1.1-2.7]; P=0.02). CONCLUSIONS: At 3-year follow-up, there was no difference in the incidence of the composite of death, MI, or stroke after FFR-guided PCI with current-generation drug-eluting stents compared with CABG. There was a higher incidence of MI after PCI compared with CABG, with no difference in death or stroke. These results provide contemporary data to allow improved shared decision-making between physicians and patients with 3-vessel coronary artery disease. REGISTRATION: URL: https://www.clinicaltrials.gov; Unique identifier: NCT02100722.


Subjects
Coronary Artery Disease , Fractional Flow Reserve, Myocardial , Myocardial Infarction , Percutaneous Coronary Intervention , Stroke , Humans , Coronary Artery Disease/surgery , Follow-Up Studies , Percutaneous Coronary Intervention/adverse effects , Coronary Artery Bypass/adverse effects , Stroke/epidemiology , Stroke/etiology
5.
Neuroradiology ; 65(11): 1605-1617, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37269414

ABSTRACT

PURPOSE: This study aimed to assess and externally validate the performance of a deep learning (DL) model for the interpretation of non-contrast computed tomography (NCCT) scans of patients with suspected traumatic brain injury (TBI). METHODS: This retrospective, multi-reader study included patients with suspected TBI who were transported to the emergency department and underwent NCCT scans. Eight reviewers with varying levels of training and experience (two neuroradiology attendings, two neuroradiology fellows, two neuroradiology residents, one neurosurgery attending, and one neurosurgery resident) independently evaluated NCCT head scans. The same scans were evaluated using version 5.0 of the DL model icobrain tbi. The ground truth was established by consensus among the study reviewers after a thorough assessment of all accessible clinical and laboratory data as well as follow-up imaging studies, including NCCT and magnetic resonance imaging. The outcomes of interest included neuroimaging radiological interpretation system (NIRIS) scores; the presence of midline shift, mass effect, hemorrhagic lesions, hydrocephalus, and severe hydrocephalus; and measurements of midline shift and volumes of hemorrhagic lesions. Agreement was assessed using the weighted Cohen's kappa coefficient, diagnostic performance was compared using the McNemar test, and measurements were compared using Bland-Altman plots. RESULTS: One hundred patients were included, with the DL model successfully categorizing 77 scans. The median age was 48 years overall, 44.5 years in the omitted group and 48 years in the included group. The DL model demonstrated moderate agreement with the ground truth, trainees, and attendings. With the DL model's assistance, trainees' agreement with the ground truth improved. The DL model showed high specificity (0.88) and positive predictive value (0.96) in classifying NIRIS scores as 0-2 or 3-4. Trainees and attendings had the highest accuracy (0.95). The DL model's performance in classifying various TBI CT imaging common data elements was comparable to that of trainees and attendings. The average difference for the DL model in quantifying the volume of hemorrhagic lesions was 6.0 mL with a wide 95% confidence interval (CI) of -68.32 to 80.22, and for midline shift, the average difference was 1.4 mm with a 95% CI of -3.4 to 6.2. CONCLUSION: While the DL model outperformed trainees in some aspects, attendings' assessments remained superior in most instances. Using the DL model as an assistive tool benefited trainees, improving their NIRIS score agreement with the ground truth. Although the DL model showed high potential in classifying some TBI CT imaging common data elements, further refinement and optimization are necessary to enhance its clinical utility.
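
The reported mean differences and wide intervals are the kind of output a Bland-Altman analysis produces. The sketch below computes a mean difference and 95% limits of agreement on synthetic hemorrhage-volume measurements; treating the reported interval as limits of agreement, and the assumed ~6 mL bias, are assumptions for illustration only.

```python
import numpy as np

# Minimal Bland-Altman sketch on synthetic hemorrhage-volume measurements (mL); not study data.
rng = np.random.default_rng(1)
reference = rng.uniform(1.0, 60.0, size=77)             # hypothetical reference volumes
model = reference + rng.normal(6.0, 35.0, size=77)      # hypothetical model volumes (assumed ~6 mL bias)

diff = model - reference
mean_diff = diff.mean()
half_width = 1.96 * diff.std(ddof=1)                    # 95% limits of agreement
print(f"mean difference: {mean_diff:.1f} mL")
print(f"limits of agreement: {mean_diff - half_width:.1f} to {mean_diff + half_width:.1f} mL")
```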


Subjects
Brain Injuries, Traumatic , Deep Learning , Hydrocephalus , Humans , Retrospective Studies , Brain Injuries, Traumatic/diagnostic imaging , Tomography, X-Ray Computed/methods , Neuroimaging/methods
6.
JCO Precis Oncol ; 6: e2200220, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36201713

ABSTRACT

PURPOSE: Brain metastasis is common in lung cancer, and treatment of brain metastasis can lead to significant morbidity. Although early detection of brain metastasis may improve outcomes, there are no prediction models to identify high-risk patients for brain magnetic resonance imaging (MRI) surveillance. Our goal was to develop a machine learning-based clinicogenomic prediction model to estimate patient-level brain metastasis risk. METHODS: A penalized regression competing risk model was developed using 330 patients diagnosed with lung cancer between January 2014 and June 2019 and followed through June 2021 at Stanford HealthCare. The main outcome was time from the diagnosis of distant metastatic disease to the development of brain metastasis, death, or censoring. RESULTS: Among the 330 patients, 84 (25%) developed brain metastasis over 627 person-years, with a 1-year cumulative brain metastasis incidence of 10.2% (95% CI, 6.8 to 13.6). Features selected for model inclusion were histology, cancer stage, age at diagnosis, primary site, and RB1 and ALK alterations. The prediction model yielded high discrimination (area under the curve, 0.75). When the cohort was stratified by risk using a 1-year risk threshold of > 14.2% (85th percentile), the high-risk group had an increased 1-year cumulative incidence of brain metastasis versus the low-risk group (30.8% v 6.1%, P < .01). Of 48 high-risk patients, 24 developed brain metastasis, and of these, 12 patients had brain metastasis detected more than 7 months after the last brain MRI. Patients who missed this 7-month window had larger brain metastases (58% v 33% largest diameter > 10 mm; odds ratio, 2.80; CI, 0.51 to 13) versus those who had MRIs more frequently. CONCLUSION: The proposed model can identify high-risk patients, who may benefit from more intensive brain MRI surveillance to reduce morbidity of subsequent treatment through early detection.
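
To illustrate the risk-threshold stratification described above, here is a simplified sketch that flags patients above an 85th-percentile predicted risk and compares 1-year event proportions between groups. It uses synthetic risks, ignores censoring and competing risks, and is not the study's penalized competing-risk model.

```python
import numpy as np

# Simplified risk-stratification sketch with synthetic predictions; not the study's model,
# and censoring/competing risks are ignored here for brevity.
rng = np.random.default_rng(2)
risk_1yr = rng.beta(2, 15, size=330)                 # hypothetical predicted 1-year risks
event_1yr = rng.random(330) < risk_1yr               # hypothetical 1-year brain-metastasis events

threshold = np.quantile(risk_1yr, 0.85)              # 85th-percentile cutoff, as in the abstract
high = risk_1yr >= threshold
print(f"threshold: {threshold:.3f}")
print(f"1-year incidence, high-risk: {event_1yr[high].mean():.1%}")
print(f"1-year incidence, low-risk:  {event_1yr[~high].mean():.1%}")
```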


Subjects
Brain Neoplasms , Lung Neoplasms , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/secondary , Humans , Lung Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Receptor Protein-Tyrosine Kinases , Retrospective Studies
7.
JNCI Cancer Spectr ; 6(3), 2022 05 02.
Article in English | MEDLINE | ID: mdl-35642317

ABSTRACT

BACKGROUND: In 2021, the US Preventive Services Task Force (USPSTF) revised its lung cancer screening guidelines to expand screening eligibility. We evaluated screening sensitivities and racial and ethnic disparities under the 2021 USPSTF criteria vs alternative risk-based criteria in a racially and ethnically diverse population. METHODS: In the Multiethnic Cohort, we evaluated the proportion of ever-smoking lung cancer cases eligible for screening (ie, screening sensitivity) under the 2021 USPSTF criteria and under risk-based criteria through the PLCOm2012 model (6-year risk ≥1.51%). We also calculated the screening disparity (ie, absolute sensitivity difference) for each of 4 racial or ethnic groups (African American, Japanese American, Latino, Native Hawaiian) vs White cases. RESULTS: Among 5900 lung cancer cases, 43.3% were screen eligible under the 2021 USPSTF criteria. Screening sensitivities varied by race and ethnicity, with Native Hawaiian (56.7%) and White (49.6%) cases attaining the highest sensitivities and Latino (37.3%), African American (38.4%), and Japanese American (40.0%) cases attaining the lowest. Latino cases had the greatest screening disparity vs White cases at 12.4%, followed by African American (11.2%) and Japanese American (9.6%) cases. Under risk-based screening, the overall screening sensitivity increased to 75.7%, and all racial and ethnic groups had increased sensitivities (54.5%-91.9%). Whereas the screening disparity decreased to 5.1% for African American cases, it increased to 28.6% for Latino cases and 12.8% for Japanese American cases. CONCLUSIONS: In the Multiethnic Cohort, racial and ethnic disparities decreased but persisted under the 2021 USPSTF lung cancer screening guidelines. Risk-based screening through PLCOm2012 may increase screening sensitivities and help to reduce disparities in some, but not all, racial and ethnic groups. Further optimization of risk-based screening strategies across diverse populations is needed.
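
The screening sensitivity and absolute disparity measures above can be computed as in the sketch below; the case counts are hypothetical stand-ins chosen only to mirror the reported percentages, not the actual Multiethnic Cohort counts.

```python
# Illustrative sketch of screening sensitivity and absolute disparity vs White cases.
# The counts are hypothetical stand-ins, not the Multiethnic Cohort data.
cases_total    = {"White": 1500, "African American": 1200, "Latino": 900}
cases_eligible = {"White": 744,  "African American": 461,  "Latino": 336}

sensitivity = {g: cases_eligible[g] / cases_total[g] for g in cases_total}
disparity = {g: sensitivity["White"] - sensitivity[g]
             for g in sensitivity if g != "White"}
print(sensitivity)   # proportion of cases that are screen eligible, per group
print(disparity)     # absolute sensitivity difference vs White cases
```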


Subjects
Early Detection of Cancer , Lung Neoplasms , Cohort Studies , Ethnicity , Humans , Lung Neoplasms/diagnosis , Mass Screening
8.
Circulation ; 145(22): 1655-1662, 2022 05 31.
Article in English | MEDLINE | ID: mdl-35369704

ABSTRACT

BACKGROUND: Previous studies have shown that quality of life improves after coronary revascularization, more so after coronary artery bypass grafting (CABG) than after percutaneous coronary intervention (PCI). This study aimed to evaluate the effect of fractional flow reserve guidance and current-generation zotarolimus drug-eluting stents on quality of life after PCI compared with CABG. METHODS: The FAME 3 trial (Fractional Flow Reserve Versus Angiography for Multivessel Evaluation) is a multicenter, international trial including 1500 patients with 3-vessel coronary artery disease who were randomly assigned to either CABG or fractional flow reserve-guided PCI. Quality of life was measured using the European Quality of Life-5 Dimensions (EQ-5D) questionnaire at baseline and at 1 and 12 months. The Canadian Cardiovascular Class angina grade and working status were assessed at the same time points and at 6 months. The primary objective was to compare the EQ-5D summary index at 12 months. Secondary end points included angina grade and work status. RESULTS: The EQ-5D summary index at 12 months did not differ between the PCI and CABG groups (difference, 0.001 [95% CI, -0.016 to 0.017]; P=0.946). The trajectory of EQ-5D during the 12 months differed (P<0.001) between PCI and CABG: at 1 month, EQ-5D was 0.063 (95% CI, 0.047 to 0.079) higher in the PCI group. A similar trajectory was found for the EQ (EuroQol) visual analogue scale. The proportion of patients with Canadian Cardiovascular Class 2 or greater angina at 12 months was 6.2% in the PCI group versus 3.1% in the CABG group (odds ratio, 2.5 [95% CI, 0.96-6.8]). A greater percentage of younger patients (<65 years old) were working at 12 months in the PCI group than in the CABG group (68% versus 57%; odds ratio, 3.9 [95% CI, 1.7-8.8]). CONCLUSIONS: In the FAME 3 trial, quality of life after fractional flow reserve-guided PCI with current-generation drug-eluting stents was similar to that after CABG at 1 year. The rate of significant angina was low in both groups and not significantly different. The trajectory of improvement in quality of life was significantly better after PCI, as was working status in those <65 years old. REGISTRATION: URL: https://www.clinicaltrials.gov; Unique identifier: NCT02100722.


Subjects
Coronary Artery Disease , Fractional Flow Reserve, Myocardial , Percutaneous Coronary Intervention , Aged , Angina Pectoris , Canada , Coronary Artery Bypass/adverse effects , Coronary Artery Bypass/methods , Coronary Artery Disease/surgery , Humans , Percutaneous Coronary Intervention/adverse effects , Percutaneous Coronary Intervention/methods , Quality of Life , Treatment Outcome
9.
J Am Soc Echocardiogr ; 35(7): 752-761.e11, 2022 07.
Article in English | MEDLINE | ID: mdl-35257895

ABSTRACT

BACKGROUND: Fetal echocardiography is a major diagnostic imaging modality for prenatal detection of critical congenital heart disease. Diagnostic accuracy is essential for appropriate planning of delivery and neonatal care. The relationship between study comprehensiveness and diagnostic error is not well understood. The aim of this study was to test the hypothesis that high fetal echocardiographic study comprehensiveness would be associated with low diagnostic error. Diagnostic errors were defined as discordant fetal and postnatal diagnoses and were further characterized by potential causes, contributors, and clinical significance. METHODS: Fetal echocardiographic examinations performed at Lucile Packard Children's Hospital in which fetuses with critical congenital heart disease were anticipated to require postnatal surgical or catheter intervention in the first year of life were identified using the fetal cardiology program database. For this cohort, initial fetal echocardiographic images were reviewed and given a fetal echocardiography comprehensiveness score (FECS). Fetal diagnoses obtained from initial fetal echocardiographic images and reports were compared with postnatal diagnoses confirmed by transthoracic echocardiography and other imaging studies and/or surgery to determine diagnostic error. The relationship between FECS and diagnostic error was evaluated using multivariable logistic regression. RESULTS: Of the 304 initial fetal echocardiographic studies, diagnostic error (discrepant diagnosis, false negative, or false positive) occurred in 92 cases (30.3%). FECS was not associated with diagnostic error, but low FECS (≤80% complete) was associated with false negatives and procedural/conditional (P < .001) and technical (P = .005) contributors compared with high FECS (>80% complete). Cognitive factors made up the largest proportion of contributors to error. CONCLUSIONS: The comprehensiveness of fetal echocardiographic studies was not related to diagnostic error. The most common contributors to error were cognitive factors. Echocardiography laboratories can work to mitigate preventable cognitive error through quality improvement initiatives.


Subjects
Cardiology , Heart Defects, Congenital , Child , Echocardiography/methods , Female , Fetal Heart/diagnostic imaging , Fetus , Heart Defects, Congenital/diagnostic imaging , Humans , Infant, Newborn , Pregnancy , Prenatal Diagnosis/methods , Ultrasonography, Prenatal/methods
10.
N Engl J Med ; 386(2): 128-137, 2022 01 13.
Article in English | MEDLINE | ID: mdl-34735046

ABSTRACT

BACKGROUND: Patients with three-vessel coronary artery disease have been found to have better outcomes with coronary-artery bypass grafting (CABG) than with percutaneous coronary intervention (PCI), but studies in which PCI is guided by measurement of fractional flow reserve (FFR) have been lacking. METHODS: In this multicenter, international, noninferiority trial, patients with three-vessel coronary artery disease were randomly assigned to undergo CABG or FFR-guided PCI with current-generation zotarolimus-eluting stents. The primary end point was the occurrence within 1 year of a major adverse cardiac or cerebrovascular event, defined as death from any cause, myocardial infarction, stroke, or repeat revascularization. Noninferiority of FFR-guided PCI to CABG was prespecified as an upper boundary of less than 1.65 for the 95% confidence interval of the hazard ratio. Secondary end points included a composite of death, myocardial infarction, or stroke; safety was also assessed. RESULTS: A total of 1500 patients underwent randomization at 48 centers. Patients assigned to undergo PCI received a mean (±SD) of 3.7±1.9 stents, and those assigned to undergo CABG received 3.4±1.0 distal anastomoses. The 1-year incidence of the composite primary end point was 10.6% among patients randomly assigned to undergo FFR-guided PCI and 6.9% among those assigned to undergo CABG (hazard ratio, 1.5; 95% confidence interval [CI], 1.1 to 2.2), findings that were not consistent with noninferiority of FFR-guided PCI (P = 0.35 for noninferiority). The incidence of death, myocardial infarction, or stroke was 7.3% in the FFR-guided PCI group and 5.2% in the CABG group (hazard ratio, 1.4; 95% CI, 0.9 to 2.1). The incidences of major bleeding, arrhythmia, and acute kidney injury were higher in the CABG group than in the FFR-guided PCI group. CONCLUSIONS: In patients with three-vessel coronary artery disease, FFR-guided PCI was not found to be noninferior to CABG with respect to the incidence of a composite of death, myocardial infarction, stroke, or repeat revascularization at 1 year. (Funded by Medtronic and Abbott Vascular; FAME 3 ClinicalTrials.gov number, NCT02100722.).
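
The prespecified noninferiority rule stated above reduces to comparing the upper bound of the hazard ratio's 95% confidence interval with the 1.65 margin, as in this minimal sketch using the values reported in the abstract.

```python
# Minimal sketch of the prespecified noninferiority rule: FFR-guided PCI is noninferior
# to CABG only if the upper bound of the hazard ratio's 95% CI is below 1.65.
def noninferior(hr_upper_ci, margin=1.65):
    return hr_upper_ci < margin

# Reported in the abstract: hazard ratio 1.5, 95% CI 1.1 to 2.2.
print(noninferior(2.2))   # False -> noninferiority of FFR-guided PCI not shown
```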


Subjects
Coronary Artery Bypass , Coronary Stenosis/surgery , Fractional Flow Reserve, Myocardial , Percutaneous Coronary Intervention/methods , Aged , Cardiovascular Diseases/epidemiology , Coronary Artery Bypass/adverse effects , Coronary Stenosis/mortality , Female , Humans , Kaplan-Meier Estimate , Length of Stay , Male , Middle Aged , Operative Time , Percutaneous Coronary Intervention/adverse effects , Reoperation , Stents
11.
J Natl Cancer Inst ; 114(1): 87-96, 2022 01 11.
Article in English | MEDLINE | ID: mdl-34255071

ABSTRACT

BACKGROUND: With advancing therapeutics, lung cancer (LC) survivors are rapidly increasing in number. Although mounting evidence suggests LC survivors have a high risk of second primary lung cancer (SPLC), there is no validated prediction model available for clinical use to identify high-risk LC survivors for SPLC. METHODS: Using data from 6325 ever-smokers in the Multiethnic Cohort (MEC) study diagnosed with initial primary lung cancer (IPLC) in 1993-2017, we developed a prediction model for 10-year SPLC risk after IPLC diagnosis using cause-specific Cox regression. We evaluated the model's clinical utility using decision curve analysis and externally validated it using 2 population-based datasets, the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO) and the National Lung Screening Trial (NLST), which included 2963 and 2844 IPLC cases (101 and 93 SPLC cases), respectively. RESULTS: Over 14,063 person-years, 145 (2.3%) ever-smoking IPLC patients developed SPLC in MEC. Our prediction model demonstrated high predictive accuracy (Brier score = 2.9, 95% confidence interval [CI] = 2.4 to 3.3) and discrimination (area under the receiver operating characteristic curve [AUC] = 81.9%, 95% CI = 78.2% to 85.5%) based on bootstrap validation in MEC. Stratification by the estimated risk quartiles showed that the observed SPLC incidence was statistically significantly higher in the 4th vs 1st quartile (9.5% vs 0.2%; P < .001). Decision curve analysis indicated that across a wide range of 10-year risk thresholds from 1% to 20%, the model yielded a larger net benefit than hypothetical all-screening or no-screening scenarios. External validation using PLCO and NLST showed an AUC of 78.8% (95% CI = 74.6% to 82.9%) and 72.7% (95% CI = 67.7% to 77.7%), respectively. CONCLUSIONS: We developed and validated an SPLC prediction model based on large population-based cohorts. The proposed prediction model can help identify high-risk LC patients for SPLC and can be incorporated into clinical decision making for SPLC surveillance and screening.
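
Decision curve analysis compares the net benefit of a model-based screening rule with screen-all and screen-none strategies at each risk threshold. The sketch below shows the standard net-benefit calculation on synthetic risks and outcomes; it is not the MEC, PLCO, or NLST data and not the published model.

```python
import numpy as np

# Minimal decision-curve sketch: net benefit of risk-based screening vs screen-all and
# screen-none, on synthetic risks/outcomes (not the MEC, PLCO, or NLST data).
def net_benefit(risk, outcome, threshold):
    n = len(outcome)
    flagged = risk >= threshold
    tp = np.sum(flagged & outcome)
    fp = np.sum(flagged & ~outcome)
    return tp / n - fp / n * threshold / (1.0 - threshold)

rng = np.random.default_rng(3)
risk = rng.beta(1.2, 40.0, size=6325)           # hypothetical 10-year SPLC risks
outcome = rng.random(6325) < risk               # hypothetical SPLC outcomes

for t in (0.01, 0.05, 0.10, 0.20):
    nb_model = net_benefit(risk, outcome, t)
    nb_all = net_benefit(np.ones_like(risk), outcome, t)   # screen everyone
    print(f"threshold {t:.2f}: model {nb_model:.4f}, screen-all {nb_all:.4f}, screen-none 0")
```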


Subjects
Lung Neoplasms , Neoplasms, Second Primary , Early Detection of Cancer , Humans , Lung , Lung Neoplasms/diagnosis , Lung Neoplasms/epidemiology , Male , Neoplasms, Second Primary/diagnosis , Neoplasms, Second Primary/epidemiology , Neoplasms, Second Primary/etiology , Smoking/adverse effects , Smoking/epidemiology
12.
J Neurosurg ; 135(6): 1725-1741, 2021 Apr 02.
Article in English | MEDLINE | ID: mdl-33799297

ABSTRACT

OBJECTIVE: The CyberKnife (CK) has emerged as an effective frameless and noninvasive method for treating a myriad of neurosurgical conditions. Here, the authors conducted an extensive retrospective analysis and review of the literature to elucidate the trend for CK use in the management paradigm for common neurosurgical diseases at their institution. METHODS: A literature review (January 1990-June 2019) and clinical review (January 1999-December 2018) were performed using, respectively, online research databases and the Stanford Research Repository of patients with intracranial and spinal lesions treated with CK at Stanford. For each disease considered, the coefficient of determination (r2) was estimated as a measure of CK utilization over time. A change in treatment modality was assessed using a t-test, with statistical significance assessed at the 0.05 alpha level. RESULTS: In over 7000 patients treated with CK for various brain and spinal lesions over the past 20 years, a positive linear trend (r2 = 0.80) in the system's use was observed. CK gained prominence in the management of intracranial and spinal arteriovenous malformations (AVMs; r2 = 0.89 and 0.95, respectively); brain and spine metastases (r2 = 0.97 and 0.79, respectively); benign tumors such as meningioma (r2 = 0.85), vestibular schwannoma (r2 = 0.76), and glomus jugulare tumor (r2 = 0.89); glioblastoma (r2 = 0.54); and trigeminal neuralgia (r2 = 0.81). A statistically significant difference in the change in treatment modality to CK was observed in the management of intracranial and spinal AVMs (p < 0.05), and while the treatment of brain and spine metastases, meningioma, and glioblastoma trended toward the use of CK, the change in treatment modality for these lesions was not statistically significant. CONCLUSIONS: Evidence suggests the robust use of CK for treating a wide range of neurological conditions.
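
The trend measure used throughout this abstract is the coefficient of determination of a linear fit to annual case counts; the sketch below shows that calculation on hypothetical counts, not the Stanford Research Repository data.

```python
import numpy as np

# Sketch of the trend measure used above: r^2 of a linear fit to annual case counts.
# The counts are hypothetical, not the Stanford Research Repository data.
rng = np.random.default_rng(4)
years = np.arange(1999, 2019)
cases = 20 + 18 * (years - 1999) + rng.normal(0, 25, years.size)

slope, intercept = np.polyfit(years, cases, 1)
predicted = slope * years + intercept
ss_res = np.sum((cases - predicted) ** 2)
ss_tot = np.sum((cases - cases.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"slope: {slope:.1f} cases/year, r^2: {r_squared:.2f}")
```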

13.
J Am Coll Radiol ; 17(5): 597-605, 2020 May.
Article in English | MEDLINE | ID: mdl-32371000

ABSTRACT

PURPOSE: The aim of this study was to determine whether participation in Radiology Support, Communication and Alignment Network (R-SCAN) results in a reduction of inappropriate imaging in a wide range of real-world clinical environments. METHODS: This quality improvement study used imaging data from 27 US academic and private practices that completed R-SCAN projects between January 25, 2015, and August 8, 2018. Each project consisted of baseline, educational (intervention), and posteducational phases. Baseline and posteducational imaging cases were rated as high, medium, or low value on the basis of validated ACR Appropriateness Criteria®. Four cohorts were generated: a comprehensive cohort that included all eligible practices and three topic-specific cohorts that included practices that completed projects of specific Choosing Wisely topics (pulmonary embolism, adnexal cyst, and low back pain). Changes in the proportion of high-value cases after R-SCAN intervention were assessed for each cohort using generalized estimating equation logistic regression, and changes in the number of low-value cases were analyzed using Poisson regression. RESULTS: Use of R-SCAN in the comprehensive cohort resulted in a greater proportion of high-value imaging cases (from 57% to 79%; odds ratio, 2.69; 95% confidence interval, 1.50-4.86; P = .001) and 345 fewer low-value cases after intervention (incidence rate ratio, 0.45; 95% confidence interval, 0.29-0.70; P < .001). Similar changes in proportion of high-value cases and number of low-value cases were found for the pulmonary embolism, adnexal cyst, and low back pain cohorts. CONCLUSIONS: R-SCAN participation was associated with a reduced likelihood of inappropriate imaging and is thus a promising tool to enhance the quality of patient care and promote wise use of health care resources.


Subjects
Radiology , Cohort Studies , Communication , Diagnostic Imaging , Humans , Radiography
14.
J Neurosurg ; 134(5): 1435-1446, 2020 May 15.
Article in English | MEDLINE | ID: mdl-32413851

ABSTRACT

OBJECTIVE: Cavernous sinus meningioma (CSM) can affect visual function and require expeditious treatment to prevent permanent visual loss. Authors of this retrospective study sought to determine the factors associated with visual functional outcomes in CSM patients treated with surgery, stereotactic radiosurgery (SRS), or stereotactic radiation therapy (SRT), alone or in combination. METHODS: Consecutive patients with CSM who had presented at an academic tertiary care hospital from 2000 to 2018 were identified through retrospective chart review. Visual function, comprising visual eye deficit (VED), optic disc (OD) appearance, intraocular pressure (IOP), and extraocular movement (EOM), was assessed before and after treatment for CSM. VED with visual acuity (VA) ≤ 20/200 and visual field defect ≥ -11 dB, pale OD appearance in the ipsilateral or contralateral eye, increased ipsilateral IOP, and/or EOM restriction were defined as a poor visual functional outcome. Multivariable logistic regression was used to evaluate the associations between pretreatment visual functional assessment and posttreatment visual outcomes. RESULTS: The study cohort included 44 patients (73% female; median age 55 years), with a median clinical follow-up of 14 months. Ipsilateral VED improved, remained stable, or worsened, respectively, in 0%, 33.4%, and 66.6% of the patients after subtotal resection (STR) alone; in 52.6%, 31.6%, and 15.8% after STR plus radiation treatment; in 28.5%, 43.0%, and 28.5% after gross-total resection (GTR) alone; and in 56.3%, 43.7%, and 0% after radiation treatment (SRS or SRT) alone. Contralateral VED remained intact in all the patients after STR alone and those with radiation treatment (SRS or SRT) alone; however, it improved, remained stable, or worsened in 10.5%, 84.2%, and 5.3% after STR plus radiation treatment and in 43.0%, 28.5%, and 28.5% after GTR alone. EOM remained intact, fully recovered, remained stable, and worsened, respectively, in 0%, 50%, 50%, and 0% of the patients after STR alone; in 36.8%, 47.4%, 15.8%, and 0% of the patients after STR with radiation treatment; in 57.1%, 0%, 28.6%, and 14.3% of the patients after GTR alone; and in 56.2%, 37.5%, 6.3%, and 0% of the patients after radiation treatment (SRS or SRT) alone. In multivariable analyses adjusted for age, tumor volume, and treatment modality, initial ipsilateral poor VED (OR 10.1, 95% CI 1.05-97.2, p = 0.04) and initial ipsilateral pale OD appearance (OR 21.1, 95% CI 1.6-270.5, p = 0.02) were associated with poor ipsilateral VED posttreatment. Similarly, an initial pale OD appearance (OR 15.7, 95% CI 1.3-199.0, p = 0.03), initial poor VED (OR 21.7, 95% CI 1.2-398.6, p = 0.03), and a higher IOP in the ipsilateral eye (OR 55.3, 95% CI 1.7-173.9, p = 0.02) were associated with an ipsilateral pale OD appearance posttreatment. Furthermore, a higher initial ipsilateral IOP (OR 35.9, 95% CI 3.3-400.5, p = 0.004) was indicative of a higher IOP in the ipsilateral eye posttreatment. Finally, initial restricted EOM was indicative of restricted EOM posttreatment (OR 20.6, 95% CI 18.7-77.0, p = 0.02). CONCLUSIONS: Pretreatment visual functional assessment predicts visual outcomes in patients with CSM and can be used to identify patients at greater risk for vision loss.


Subjects
Cavernous Sinus/surgery , Meningeal Neoplasms/surgery , Meningioma/surgery , Postoperative Complications/etiology , Vision Disorders/etiology , Adolescent , Adult , Aged , Aged, 80 and over , Eye Movements , Female , Follow-Up Studies , Humans , Intraocular Pressure , Male , Middle Aged , Optic Disk/pathology , Retrospective Studies , Treatment Outcome , Vision Disorders/diagnosis , Visual Acuity , Visual Fields , Young Adult
15.
Paediatr Anaesth ; 30(5): 564-570, 2020 05.
Article in English | MEDLINE | ID: mdl-32037665

ABSTRACT

BACKGROUND: Patients supported with a ventricular assist device are predisposed to severe bleeding at the time of orthotopic heart transplant due to several risk factors, including anticoagulation with vitamin K antagonists. Kcentra, a four-factor prothrombin complex concentrate, has been approved by the FDA for warfarin reversal in adults prior to urgent surgery. There is a lack of published data on the preoperative use of four-factor prothrombin complex concentrates in pediatric patients undergoing cardiac surgery. METHODS: This is a single-center retrospective analysis of pediatric patients with a continuous-flow ventricular assist device who underwent heart transplant, comparing patients who received Kcentra for anticoagulation reversal with a historical patient cohort who did not. Consecutive patients from January 2013 to December 2017 were analyzed. The primary outcome was volume of blood product transfusion prior to cardiopulmonary bypass initiation. Secondary outcomes included blood product transfusion after cardiopulmonary bypass intraoperatively and up to 24 hours postoperatively, chest tube output within 24 hours of surgery, time to extubation, incidence of thromboembolism, and post-transplant length of stay. RESULTS: From 2013 to 2017, 31 patients with continuous-flow ventricular assist devices underwent heart transplant, with 27 patients included in the analysis. Fifteen patients received Kcentra compared with 12 patients who received fresh-frozen plasma for anticoagulation reversal. Compared with the control group, patients who received Kcentra had less packed red blood cells, fresh-frozen plasma, and platelets transfused prior to cardiopulmonary bypass initiation. The Kcentra group also received less packed red blood cells on bypass and less packed red blood cells after cardiopulmonary bypass termination. There were no differences in chest tube output, time to extubation, intensive care unit length of stay, or overall hospital length of stay. Neither group had thromboembolic complications detected during the first seven postoperative days. CONCLUSION: This small retrospective study suggests that preoperative warfarin reversal with Kcentra reduces blood product exposure in pediatric patients with ventricular assist devices undergoing heart transplant.


Subjects
Anticoagulants/adverse effects , Blood Coagulation Factors/therapeutic use , Blood Coagulation/drug effects , Heart Transplantation , Hemorrhage/prevention & control , Warfarin/adverse effects , Adolescent , Child , Female , Humans , Male , Retrospective Studies
16.
J Vasc Access ; 21(4): 467-474, 2020 Jul.
Article in English | MEDLINE | ID: mdl-31774037

ABSTRACT

BACKGROUND: Protease-activated receptor-1 antagonism by vorapaxar could facilitate arteriovenous fistula maturation but may increase bleeding risk. OBJECTIVE: The primary objective of the Vorapaxar Study for Maturation of arteriovenous fistula for Hemodialysis Access (VorapAccess) was to determine if vorapaxar improves arteriovenous fistula functional maturation in patients with end-stage renal disease. METHODS: VorapAccess was a randomized, placebo-controlled, double-blind pilot trial comparing 2.5 mg vorapaxar per day with placebo for twelve weeks starting on day two after arteriovenous fistula creation. The primary outcome was time to functional maturation, defined as successful cannulation for six hemodialysis sessions within three weeks. The planned sample size was 50 participants. The study was terminated early after withdrawal of planned financial support. Given the small number of randomized patients, we performed descriptive analyses without inference testing. RESULTS: A total of 13 participants were randomly allocated to study drug (six vorapaxar and seven placebo). The median age was 56 years, and seven participants (54%) were female. The median (minimum-maximum) time to functional maturation was 169 (77-287) days in the vorapaxar group and 145 (48-198) days in the placebo group. Six of the 13 (46%) participants had arteriovenous fistula functional maturation within 180 days; two of six (33%) in the vorapaxar group and four of seven (57%) in the placebo group. There was one bleeding event in the placebo group. CONCLUSION: Fewer than half of participants had functional maturation within 180 days after surgery, suggesting a major need for agents or strategies that enhance arteriovenous fistula maturation.


Subjects
Arteriovenous Shunt, Surgical , Kidney Failure, Chronic/therapy , Lactones/therapeutic use , Platelet Aggregation Inhibitors/therapeutic use , Pyridines/therapeutic use , Renal Dialysis , Upper Extremity/blood supply , Aged , Arteriovenous Shunt, Surgical/adverse effects , California , Double-Blind Method , Early Termination of Clinical Trials , Female , Hemorrhage/chemically induced , Humans , Kidney Failure, Chronic/diagnosis , Lactones/adverse effects , Male , Middle Aged , Pilot Projects , Platelet Aggregation Inhibitors/adverse effects , Pyridines/adverse effects , Receptor, PAR-1/antagonists & inhibitors , Renal Dialysis/adverse effects , Risk Factors , Time Factors , Treatment Outcome
17.
Pediatr Transplant ; 23(1): e13330, 2019 02.
Article in English | MEDLINE | ID: mdl-30506612

ABSTRACT

Due to limited and conflicting data in pediatric patients, long-term routine surveillance endomyocardial biopsy (RSB) in pediatric heart transplant (HT) remains controversial. We sought to characterize the rate of positive RSB and determine factors associated with RSB-detected rejection. Records of patients transplanted at a single institution from 1995 to 2015 with >2 years of post-HT biopsy data were reviewed for RSB-detected rejections occurring >2 years post-HT. We illustrated the trajectory of significant rejections (ISHLT Grade ≥3A/2R) among total RSB performed over time and used multivariable logistic regression to model the association between time and risk of rejection. We estimated Kaplan-Meier freedom from rejection rates by patient characteristics and used the log-rank test to assess differences in rejection probabilities. We identified the best-fitting Cox proportional hazards regression model. In 140 patients, 86% did not have any episodes of significant RSB-detected rejection >2 years post-HT. The overall empirical rate of RSB-detected rejection >2 years post-HT was 2.9/100 patient-years. The percentage of rejection among 815 RSB was 2.6% and remained stable over time. Years since transplant remained unassociated with rejection risk after adjusting for patient characteristics (OR = 0.98; 95% CI 0.78-1.23; P = 0.86). Older age at HT was the only factor that remained significantly associated with risk of RSB-detected rejection under multivariable Cox analysis (P = 0.008). Most pediatric patients did not have RSB-detected rejection beyond 2 years post-HT, and the majority of those who did were older at the time of HT. Indiscriminate long-term RSB in pediatric heart transplant should be reconsidered given the low rate of detected rejection.
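
The empirical rate quoted above is events per 100 patient-years; the sketch below shows the arithmetic with hypothetical counts chosen only to reproduce the reported 2.9 per 100 patient-years, not the study's actual event count or person-time.

```python
# Sketch of the empirical rate quoted above (events per 100 patient-years).
# The counts are hypothetical, chosen only to reproduce the reported 2.9/100 patient-years.
events = 24            # hypothetical RSB-detected rejections >2 years post-HT
patient_years = 830    # hypothetical person-time at risk beyond 2 years post-HT
rate_per_100py = 100 * events / patient_years
print(f"{rate_per_100py:.1f} rejections per 100 patient-years")
```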


Subjects
Endocardium/pathology , Graft Rejection/diagnosis , Heart Transplantation , Myocardium/pathology , Adolescent , Aftercare , Biopsy , Child , Child, Preschool , Female , Follow-Up Studies , Graft Rejection/etiology , Graft Rejection/pathology , Humans , Infant , Infant, Newborn , Kaplan-Meier Estimate , Logistic Models , Male , Proportional Hazards Models , Risk Factors , Young Adult
18.
Cureus ; 9(10): e1798, 2017 Oct 24.
Article in English | MEDLINE | ID: mdl-29282442

ABSTRACT

Introduction: This study's objective was to compare the overall survivals (OSs) and various parameters of patients with 1-3 versus ≥ 4 brain metastases post-CyberKnife radiosurgery (CKRS) (Accuray, Sunnyvale, California) alone. Methods: Charts of 150 patients, from 2009-2014, treated with only CKRS for brain metastases were reviewed retrospectively for overall survival (OS) and patient, tumor, and imaging characteristics. Parameters included demographics, Eastern Cooperative Oncology Group (ECOG) performance scores, number and control of extracranial disease (ECD) sites, cause of death (COD), histology, tumor volume (TV), and post-CKRS whole brain radiotherapy (WBRT). The imaging characteristics assessed were time of complete response (CR), partial response (PR), stable imaging or local failure (LF), and distal brain failure (DBF). Patients and their data were divided into those with 1-3 (group 1) versus ≥ 4 brain metastases (group 2). For each CR and LF patient, the absolute neutrophil count (ANC), absolute lymphocyte count (ALC), and ANC/ALC ratio (NLR) were obtained, when available, at the time of CKRS. Results: Both group 1 and group 2 had a median OS of 13 months. The median age for the 115 group 1 patients versus the 35 group 2 patients was 62 versus 56 years. Group 1 had slightly more males and group 2, females. The predominant ECOG score for each group was 1, and the number of ECD sites was one and two, respectively. Uncontrolled ECD occurred in the majority of both group 1 and group 2 patients. The main COD was ECD in both groups. The prevalent tumor histology for groups 1 and 2 was non-small cell lung carcinoma. Median TVs were 1.08 cc versus 1.42 cc for groups 1 and 2, respectively. The majority of patients in both groups did not undergo post-CKRS WBRT. Imaging outcomes were LC (CR, PR, or stable imaging) in 93 (80.9%) group 1 and 26 (74.3%) group 2 patients, of whom 32 (27.8%) and six (17.1%) had CR; 38 (33.0%) and 18 (51.4%), PR; and 23 (20.0%) and two (5.7%), stable imaging. LF was the outcome in 22 (19.1%) and nine (25.7%) patients, and DBF occurred in 62 (53.9%) and 21 (60.0%), respectively. Uni- and multivariable analyses showed that a lower ECOG score, a greater number of ECD sites, and uncontrolled ECD were independently and significantly associated with greater mortality risk, with and without accounting for other covariates. At CKRS, 19 group 1 and 2 CR patients had a mean ANC of 5.88 K/µL and a mean ALC of 1.31 K/µL, and 13 (68%) of 19 had NLRs ≤ five, while 11 with LFs had a mean ANC of 5.22 K/µL and a mean ALC of 0.93 K/µL, and seven (64%) had NLRs > five. An NLR ≤ five and a high ALC were associated with a CR, and an NLR > five and a low ALC with an LF. Conclusions: Median OS post-CKRS was 13 months both for patients with 1-3 brain metastases and for those with ≥ 4. This is the only study in the literature to evaluate OS in patients with 1-3 and ≥ 4 brain metastases who were treated with CKRS alone. For groups 1 and 2 patients combined, 119 (79.3%) had LC and 38 (25.3%) had CR. The ANC, ALC, and NLR values are likely predictive of CR and LF outcomes.
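
The NLR rule described above is simply the ratio of the absolute neutrophil count to the absolute lymphocyte count; this minimal sketch evaluates it at the group means reported in the abstract.

```python
# Minimal sketch of the neutrophil-to-lymphocyte ratio (NLR) rule described above;
# the example values are the group means reported in the abstract.
def nlr(anc_k_per_ul, alc_k_per_ul):
    return anc_k_per_ul / alc_k_per_ul

print(nlr(5.88, 1.31))   # CR group means -> ~4.5 (NLR <= 5)
print(nlr(5.22, 0.93))   # LF group means -> ~5.6 (NLR > 5)
```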

19.
Am Heart J ; 188: 147-155, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28577670

ABSTRACT

OBJECTIVE: To examine the safety and efficacy of cangrelor in patients with single-vessel disease (SVD) and multi-vessel disease (MVD). BACKGROUND: Cangrelor, an intravenous, rapidly acting P2Y12 inhibitor, is superior to clopidogrel in reducing ischemic events among patients receiving percutaneous coronary intervention (PCI). METHODS: We studied a modified intention-to-treat population of patients with SVD and MVD from the CHAMPION PHOENIX trial. The primary efficacy outcome was the composite of death, myocardial infarction (MI), ischemia-driven revascularization (IDR), and stent thrombosis (ST) at 48 hours. The key safety outcome was non-coronary artery bypass grafting GUSTO severe bleeding at 48 hours. RESULTS: Among 10,921 patients, 5,220 (48%) had SVD and 5,701 (52%) had MVD. MVD patients were older and more often had diabetes, hyperlipidemia, hypertension, prior stroke, and prior MI. After adjustment, MVD patients had similar rates of 48-hour death/MI/IDR/ST (6.3% vs 4.2%, adjusted odds ratio [OR] 1.6 [95% CI 0.42-6.06]) and GUSTO severe bleeding (0.1% vs 0.2%, P=.67) compared with SVD patients. Consistent with overall trial findings, cangrelor use reduced ischemic complications in patients with both SVD (3.9% vs 4.5%; OR 0.86, 95% CI 0.65-1.12) and MVD (5.5% vs 7.2%; OR 0.74, 95% CI 0.6-0.92, P-interaction=.43). GUSTO severe bleeding outcomes were not significantly increased with cangrelor or clopidogrel in either SVD or MVD patients. CONCLUSION: In the CHAMPION PHOENIX trial, MVD and SVD patients had similar ischemic outcomes at 48 hours and 30 days. Cangrelor consistently reduced ischemic complications in both SVD and MVD patients without a significant increase in GUSTO severe bleeding.


Subjects
Adenosine Monophosphate/analogs & derivatives , Coronary Artery Disease/therapy , Myocardial Infarction/prevention & control , Percutaneous Coronary Intervention , Postoperative Complications/prevention & control , Adenosine Monophosphate/administration & dosage , Administration, Oral , Aged , Cause of Death/trends , Clopidogrel , Coronary Angiography , Coronary Artery Disease/diagnosis , Dose-Response Relationship, Drug , Double-Blind Method , Female , Follow-Up Studies , Global Health , Humans , Incidence , Infusions, Intravenous , Male , Middle Aged , Myocardial Infarction/epidemiology , Postoperative Complications/epidemiology , Purinergic P2Y Receptor Antagonists/administration & dosage , Survival Rate/trends , Ticlopidine/administration & dosage , Ticlopidine/analogs & derivatives , Time Factors
20.
Am J Kidney Dis ; 69(6): 771-779, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28063734

ABSTRACT

BACKGROUND: Controversy exists about any differences in longer-term safety across different intravenous iron formulations routinely used in hemodialysis (HD) patients. We exploited a natural experiment to compare outcomes of patients initiating HD therapy in facilities that predominantly (in ≥90% of their patients) used iron sucrose versus sodium ferric gluconate complex. STUDY DESIGN: Retrospective cohort study of incident HD patients. SETTING & PARTICIPANTS: Using the US Renal Data System, we hard-matched HD facilities predominantly using ferric gluconate with similar facilities using iron sucrose on geographic region and center characteristics. Subsequently, incident HD patients were assigned to their facility's iron formulation exposure. INTERVENTION: Facility-level use of iron sucrose versus ferric gluconate. OUTCOMES: Patients were followed up for mortality from any, cardiovascular, or infectious causes. Medicare-insured patients were followed up for infectious and cardiovascular (stroke or myocardial infarction) hospitalizations and for composite outcomes with the corresponding cause-specific deaths. MEASUREMENTS: Hazard ratios (HRs). RESULTS: We matched 2,015 iron sucrose facilities with 2,015 ferric gluconate facilities, in which 51,603 patients (iron sucrose, 24,911; ferric gluconate, 26,692) subsequently initiated HD therapy. All recorded patient characteristics were balanced between groups. Over 49,989 person-years, 10,381 deaths (3,908 cardiovascular and 1,209 infectious) occurred. Adjusted all-cause (HR, 0.98; 95% CI, 0.93-1.03), cardiovascular (HR, 0.96; 95% CI, 0.89-1.03), and infectious mortality (HR, 0.98; 95% CI, 0.86-1.13) did not differ between iron sucrose and ferric gluconate facilities. Among Medicare beneficiaries, no differences between ferric gluconate and iron sucrose facilities were observed in fatal or nonfatal cardiovascular events (HR, 1.01; 95% CI, 0.93-1.09). The composite infectious end point occurred less frequently in iron sucrose versus ferric gluconate facilities (HR, 0.92; 95% CI, 0.88-0.96). LIMITATIONS: Unobserved selection bias from nonrandom treatment assignment. CONCLUSIONS: Patients initiating HD therapy in facilities almost exclusively using iron sucrose versus ferric gluconate had similar longer-term outcomes. However, there was a small decrease in infectious hospitalizations and deaths among patients dialyzing in facilities predominantly using iron sucrose. This difference may be due to residual confounding, random chance, or a causal effect.


Subjects
Anemia/drug therapy , Ferric Compounds/therapeutic use , Glucaric Acid/therapeutic use , Hematinics/therapeutic use , Kidney Failure, Chronic/therapy , Renal Dialysis , Administration, Intravenous , Aged , Anemia/complications , Cardiovascular Diseases/mortality , Cause of Death , Female , Ferric Oxide, Saccharated , Humans , Infections/mortality , Kidney Failure, Chronic/complications , Male , Middle Aged , Mortality , Proportional Hazards Models , Retrospective Studies