Results 1 - 20 of 160
1.
BMC Bioinformatics ; 25(1): 251, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085787

ABSTRACT

BACKGROUND: Detecting event triggers in biomedical texts, which contain domain knowledge and context-dependent terms, is more challenging than in general-domain texts. Most state-of-the-art models rely mainly on external resources such as linguistic tools and knowledge bases to improve system performance. However, they lack effective mechanisms to obtain semantic clues from label specification and sentence context. Given its success in image classification, label representation learning is a promising approach to enhancing biomedical event trigger detection models by leveraging the rich semantics of pre-defined event type labels. RESULTS: In this paper, we propose the Biomedical Label-based Synergistic representation Learning (BioLSL) model, which effectively utilizes event type labels by learning their correlation with trigger words and enriches the representation contextually. The BioLSL model consists of three modules. Firstly, the Domain-specific Joint Encoding module employs a transformer-based, domain-specific pre-trained architecture to jointly encode input sentences and pre-defined event type labels. Secondly, the Label-based Synergistic Representation Learning module learns the semantic relationships between input texts and event type labels, and generates a Label-Trigger Aware Representation (LTAR) and a Label-Context Aware Representation (LCAR) for enhanced semantic representations. Finally, the Trigger Classification module makes structured predictions, where each label is predicted with respect to its neighbours. We conduct experiments on three benchmark BioNLP datasets, namely MLEE, GE09, and GE11, to evaluate our proposed BioLSL model. Results show that BioLSL has achieved state-of-the-art performance, outperforming the baseline models. CONCLUSIONS: The proposed BioLSL model demonstrates good performance for biomedical event trigger detection without using any external resources. This suggests that label representation learning and context-aware enhancement are promising directions for improving the task. The key enhancement is that BioLSL effectively learns to construct semantic linkages between the event mentions and type labels, which provide the latent information of label-trigger and label-context relationships in biomedical texts. Moreover, additional experiments on BioLSL show that it performs exceptionally well with limited training data under the data-scarce scenarios.
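As a rough illustration of the joint encoding idea described above, the hedged sketch below encodes pre-defined event-type label names together with a sentence using a biomedical pre-trained transformer and scores each sentence token against the label segment with scaled dot-product attention. It is not the released BioLSL code; the checkpoint name, the label set, and the example sentence are assumptions.

```python
# Minimal, illustrative sketch (not the BioLSL implementation): jointly encode
# event-type label names with an input sentence, then compute token-to-label
# affinities as a stand-in for the label-trigger aware representation.
import torch
from transformers import AutoTokenizer, AutoModel

CKPT = "dmis-lab/biobert-base-cased-v1.1"   # assumed domain-specific encoder
tok = AutoTokenizer.from_pretrained(CKPT)
enc = AutoModel.from_pretrained(CKPT)

labels = ["gene expression", "binding", "phosphorylation", "positive regulation"]
sentence = "IL-2 induces phosphorylation of STAT5 in T cells."

# Joint encoding: labels form segment A, the sentence forms segment B of one pair.
batch = tok(" ; ".join(labels), sentence, return_tensors="pt")
with torch.no_grad():
    H = enc(**batch).last_hidden_state.squeeze(0)        # (seq_len, hidden)

seg = batch["token_type_ids"].squeeze(0)                  # 0 = label segment, 1 = sentence segment
label_vecs, token_vecs = H[seg == 0], H[seg == 1]

# Softmax-normalised dot products between sentence tokens and label-segment tokens.
att = torch.softmax(token_vecs @ label_vecs.T / H.size(-1) ** 0.5, dim=-1)
print(att.shape)   # (sentence-segment tokens, label-segment tokens)
```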


Subject(s)
Semantics, Natural Language Processing, Machine Learning, Data Mining/methods, Algorithms
2.
Ann Surg Oncol ; 30(5): 2883-2894, 2023 May.
Article in English | MEDLINE | ID: mdl-36749504

ABSTRACT

BACKGROUND: Measures taken to address the COVID-19 pandemic interrupted routine diagnosis and care for breast cancer. The aim of this study was to characterize the effects of the pandemic on breast cancer care in a statewide cohort. PATIENTS AND METHODS: Using data from a large health information exchange, we retrospectively analyzed the timing of breast cancer screening and identified a cohort of newly diagnosed patients with any stage of breast cancer to further assess the information available about their surgical treatments. We compared data for four subgroups: pre-lockdown (preLD) 25 March to 16 June 2019; lockdown (LD) 23 March to 3 May 2020; reopening (RO) 4 May to 14 June 2020; and post-lockdown (postLD) 22 March to 13 June 2021. RESULTS: During LD and RO, screening mammograms in the cohort decreased by 96.3% and 36.2%, respectively. Overall breast cancer diagnosis and surgery volumes decreased by up to 38.7%, and the median time to surgery was prolonged from 1.5 months to 2.4 months for LD and 1.8 months for RO. Interestingly, higher mean DCIS diagnosis volume (5.0 per week vs. 3.1 per week, p < 0.05) and surgery volume (14.8 vs. 10.5, p < 0.05) were found for postLD compared with preLD, while median time to surgery was shorter (1.2 months vs. 1.5 months, p < 0.0001). However, postLD average weekly screening and diagnostic mammogram volumes did not fully recover to preLD levels (2055.3 vs. 2326.2, p < 0.05; 574.2 vs. 624.1, p < 0.05). CONCLUSIONS: Breast cancer diagnosis and treatment patterns were interrupted during the lockdown and remained altered 1 year later. Screening in primary care should be expanded to mitigate possible longer-term effects of these interruptions.


Subject(s)
Breast Neoplasms, COVID-19, Health Information Exchange, Humans, Female, Breast Neoplasms/diagnosis, Breast Neoplasms/epidemiology, Breast Neoplasms/surgery, COVID-19/epidemiology, Pandemics, Retrospective Studies, Early Detection of Cancer, Communicable Disease Control, COVID-19 Testing
3.
Am J Obstet Gynecol ; 224(6): 599.e1-599.e18, 2021 06.
Article in English | MEDLINE | ID: mdl-33460585

ABSTRACT

BACKGROUND: Intrauterine devices are effective and safe, long-acting reversible contraceptives, but the risk of uterine perforation occurs with an estimated incidence of 1 to 2 per 1000 insertions. The European Active Surveillance Study for Intrauterine Devices, a European prospective observational study that enrolled 61,448 participants (2006-2012), found that women breastfeeding at the time of device insertion or with the device inserted at ≤36 weeks after delivery had a higher risk of uterine perforation. The Association of Uterine Perforation and Expulsion of Intrauterine Device (APEX-IUD) study was a Food and Drug Administration-mandated study designed to reflect current United States clinical practice. The aims of the APEX-IUD study were to evaluate the risk of intrauterine device-related uterine perforation and device expulsion among women who were breastfeeding or within 12 months after delivery at insertion. OBJECTIVE: We aimed to describe the APEX-IUD study design, methodology, and analytical plan and present population characteristics, size of risk factor groups, and duration of follow-up. STUDY DESIGN: APEX-IUD study was a retrospective cohort study conducted in 4 organizations with access to electronic health records: Kaiser Permanente Northern California, Kaiser Permanente Southern California, Kaiser Permanente Washington, and Regenstrief Institute in Indiana. Variables were identified through structured data (eg, diagnostic, procedural, medication codes) and unstructured data (eg, clinical notes) via natural language processing. Outcomes include uterine perforation and device expulsion; potential risk factors were breastfeeding at insertion, postpartum timing of insertion, device type, and menorrhagia diagnosis in the year before insertion. Covariates include demographic characteristics, clinical characteristics, and procedure-related variables, such as difficult insertion. The first potential date of inclusion for eligible women varies by research site (from January 1, 2001 to January 1, 2010). Follow-up begins at insertion and ends at first occurrence of an outcome of interest, a censoring event (device removal or reinsertion, pregnancy, hysterectomy, sterilization, device expiration, death, disenrollment, last clinical encounter), or end of the study period (June 30, 2018). Comparisons of levels of exposure variables were made using Cox regression models with confounding adjusted by propensity score weighting using overlap weights. RESULTS: The study population includes 326,658 women with at least 1 device insertion during the study period (Kaiser Permanente Northern California, 161,442; Kaiser Permanente Southern California, 123,214; Kaiser Permanente Washington, 20,526; Regenstrief Institute, 21,476). The median duration of continuous enrollment was 90 (site medians 74-177) months. The mean age was 32 years, and the population was racially and ethnically diverse across the 4 sites. The mean body mass index was 28.5 kg/m2, and of the women included in the study, 10.0% had menorrhagia ≤12 months before insertion, 5.3% had uterine fibroids, and 10% were recent smokers; furthermore, among these women, 79.4% had levonorgestrel-releasing devices, and 19.5% had copper devices. 
Across sites, 97,824 women had an intrauterine device insertion at ≤52 weeks after delivery, of whom 94,817 (97%) had breastfeeding status at insertion determined; in addition, 228,834 women had intrauterine device insertion at >52 weeks after delivery or no evidence of a delivery in their health record. CONCLUSION: Combining retrospective data from multiple sites allowed for a large and diverse study population. Collaboration with clinicians in the study design and validation of outcomes ensured that the APEX-IUD study results reflect current United States clinical practice. Results from this study will provide clinicians with valuable information, based on real-world evidence, about risk factors for intrauterine device perforation and expulsion.
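The analytic plan above (Cox regression with confounding adjusted by propensity-score overlap weights) can be sketched as follows. This is a hedged, minimal illustration: the column names and the tiny data frame are hypothetical and are not APEX-IUD data or code.

```python
# Hedged sketch: propensity-score overlap weights feeding a weighted Cox model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "breastfeeding": [1, 0, 1, 0, 1, 0, 1, 0],   # exposure at insertion (hypothetical)
    "age":           [28, 34, 31, 25, 39, 29, 33, 41],
    "bmi":           [24.0, 31.2, 27.5, 22.8, 29.9, 26.1, 30.4, 28.7],
    "time_months":   [12.0, 30.5, 6.2, 48.0, 3.1, 60.0, 22.4, 18.9],
    "perforation":   [0, 1, 1, 0, 0, 0, 1, 0],   # outcome event indicator
})

# 1) Propensity score for the exposure.
ps_model = LogisticRegression().fit(df[["age", "bmi"]], df["breastfeeding"])
ps = ps_model.predict_proba(df[["age", "bmi"]])[:, 1]

# 2) Overlap weights: 1 - e(x) for exposed, e(x) for unexposed.
df["ow"] = df["breastfeeding"] * (1 - ps) + (1 - df["breastfeeding"]) * ps

# 3) Weighted Cox model comparing exposure groups.
cph = CoxPHFitter()
cph.fit(df[["time_months", "perforation", "breastfeeding", "ow"]],
        duration_col="time_months", event_col="perforation",
        weights_col="ow", robust=True)
cph.print_summary()
```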


Subject(s)
Breast Feeding, Intrauterine Devices/adverse effects, Postpartum Period, Uterine Perforation/etiology, Adult, Clinical Protocols, Female, Follow-Up Studies, Humans, Intrauterine Device Expulsion, Logistic Models, Middle Aged, Practice Patterns, Physicians', Research Design, Retrospective Studies, Risk Assessment, Risk Factors, Time Factors, United States/epidemiology, Uterine Perforation/epidemiology
4.
BMC Med Inform Decis Mak ; 21(1): 112, 2021 04 03.
Article in English | MEDLINE | ID: mdl-33812369

ABSTRACT

BACKGROUND: Many patients with atrial fibrillation (AF) remain undiagnosed despite availability of interventions to reduce stroke risk. Predictive models to date are limited by data requirements and theoretical usage. We aimed to develop a model for predicting the 2-year probability of AF diagnosis and implement it as proof-of-concept (POC) in a production electronic health record (EHR). METHODS: We used a nested case-control design with data from the Indiana Network for Patient Care. The development cohort came from 2016 to 2017 (outcome period) and 2014 to 2015 (baseline). A separate validation cohort used outcome and baseline periods shifted 2 years before the respective development cohort times. Machine learning approaches were used to build the predictive models. Patients ≥ 18 years, later restricted to age ≥ 40 years, with at least two encounters and no AF during baseline, were included. In the 6-week EHR prospective pilot, the model was silently implemented in the production system at a large safety-net urban hospital. Three new and two previous logistic regression models were evaluated using receiver operating characteristic analysis. The number, characteristics, and CHA2DS2-VASc scores of patients identified by the model in the pilot are presented. RESULTS: After restricting age to ≥ 40 years, 31,474 AF cases (mean age, 71.5 years; female 49%) and 22,078 controls (mean age, 59.5 years; female 61%) comprised the development cohort. A 10-variable model using age, acute heart disease, albumin, body mass index, chronic obstructive pulmonary disease, gender, heart failure, insurance, kidney disease, and shock yielded the best performance (C-statistic, 0.80 [95% CI 0.79-0.80]). The model performed well in the validation cohort (C-statistic, 0.81 [95% CI 0.80-0.81]). In the EHR pilot, 7916/22,272 (35.5%; mean age, 66 years; female 50%) were identified as higher risk for AF; 5582 (70%) had a CHA2DS2-VASc score ≥ 2. CONCLUSIONS: Using variables commonly available in the EHR, we created a predictive model to identify 2-year risk of developing AF in those previously without diagnosed AF. Successful POC implementation of the model in an EHR provided a practical strategy to identify patients who may benefit from interventions to reduce their stroke risk.
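A minimal sketch of the modelling and evaluation step described above, fitting a logistic regression on EHR-style predictors and reporting the C-statistic on a held-out set, is shown below. The synthetic data and coefficients are placeholders, not the INPC cohort or the published model.

```python
# Illustrative sketch: logistic regression risk model evaluated with the
# C-statistic (area under the ROC curve) on simulated EHR-like predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(65, 12, n),      # age
    rng.integers(0, 2, n),      # acute heart disease
    rng.normal(3.8, 0.5, n),    # albumin
    rng.normal(29, 6, n),       # body mass index
    rng.integers(0, 2, n),      # COPD
    rng.integers(0, 2, n),      # gender (female = 1)
    rng.integers(0, 2, n),      # heart failure
    rng.integers(0, 2, n),      # insurance (public = 1)
    rng.integers(0, 2, n),      # kidney disease
    rng.integers(0, 2, n),      # shock
])
# Toy outcome process, not the published coefficients.
logit = -6 + 0.05 * X[:, 0] + 0.8 * X[:, 6] + 0.5 * X[:, 8]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
c_stat = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation C-statistic: {c_stat:.2f}")
```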


Subject(s)
Atrial Fibrillation, Stroke, Adult, Aged, Atrial Fibrillation/diagnosis, Atrial Fibrillation/epidemiology, Electronic Health Records, Female, Humans, Indiana, Middle Aged, Predictive Value of Tests, Prospective Studies, Risk Assessment, Risk Factors, Stroke/diagnosis, Stroke/epidemiology
5.
J Gen Intern Med ; 34(12): 2804-2811, 2019 12.
Article in English | MEDLINE | ID: mdl-31367875

ABSTRACT

BACKGROUND: Cessation counseling and pharmacotherapy are recommended for hospitalized smokers, but better coordination between cessation counselors and providers might improve utilization of pharmacotherapy and enhance smoking cessation. OBJECTIVE: To compare smoking cessation counseling combined with care coordination post-hospitalization to counseling alone on uptake of pharmacotherapy and smoking cessation. DESIGN: Unblinded, randomized clinical trial. PARTICIPANTS: Hospitalized smokers referred from primarily rural hospitals. INTERVENTIONS: Counseling only (C) consisted of telephone counseling provided during the hospitalization and post-discharge. Counseling with care coordination (CCC) provided similar counseling supplemented by feedback to the smoker's health care team and help for the smoker in obtaining pharmacotherapy. At 6 months post-hospitalization, persistent smokers were re-engaged with either CCC or C. MAIN MEASURES: Utilization of pharmacotherapy and smoking cessation at 3, 6, and 12 months post-discharge. KEY RESULTS: Among 606 smokers randomized, 429 (70.8%) completed the 12-month assessment and 580 (95.7%) were included in the primary analysis. Use of any cessation pharmacotherapy between 0 and 6 months (55.2%) and between 6 and 12 months (47.1%) post-discharge was similar across treatment arms, though use of prescription-only pharmacotherapy between months 6-12 was significantly higher in the CCC group (30.1%) compared with the C group (18.6%) (RR, 1.61 [95% CI, 1.08-2.41]). Self-reported abstinence rates of 26.2%, 20.3%, and 23.4% at months 3, 6, and 12, respectively, were comparable across the two treatment arms. Of those smoking at month 6, 12.5% reported abstinence at month 12. Validated smoking cessation at 12 months was 19.3% versus 16.9% in the CCC and C groups, respectively (RR, 1.13 [95% CI, 0.80-1.61]). CONCLUSION: Supplemental care coordination, provided by counselors outside of the health care team, failed to improve smoking cessation beyond that achieved by cessation counseling alone. Re-engagement of smokers 6 months post-discharge can lead to new quitters, at which time care coordination might facilitate use of prescription medications. TRIAL REGISTRATION: NCT01063972.


Subject(s)
Continuity of Patient Care, Counseling/methods, Patient Discharge, Smoking Cessation/methods, Telemedicine/methods, Telephone, Adult, Continuity of Patient Care/trends, Counseling/trends, Female, Follow-Up Studies, Humans, Male, Middle Aged, Patient Discharge/trends, Telemedicine/trends, Tobacco Use Cessation Devices/trends
6.
Int Arch Allergy Immunol ; 178(2): 201-210, 2019.
Article in English | MEDLINE | ID: mdl-30544116

ABSTRACT

BACKGROUND: Dermatophagoides pteronyssinus (DP) and Blomia tropicalis (BT) are the dominant house dust mites inducing allergic diseases in tropical climates. It is not known whether the efficacy of DP subcutaneous immunotherapy (SCIT) is similar in patients sensitized to DP alone or to both DP and BT. METHODS: Ninety-five children (5-17 years old) affected by asthma with rhinitis and sensitized to both DP and BT received 3 years of DP-SCIT. Clinical symptom and medication scores and serum-specific IgE and IgG4 were evaluated during DP-SCIT. Patients were grouped based on DP and BT co-sensitization or cross-reactivity, according to positive or negative IgE to the BT major allergen (BTMA). RESULTS: After 3 years of DP-SCIT, all patients had significant reductions in symptoms and medication use. In all, 65% of the patients were free of asthma symptoms and medication use; in addition, 3% were free of rhinitis symptoms. FEV1 in all patients was greater than 95% of predicted. DP-SCIT induced significant increases in DP- and BT-specific IgG4. In 50% of patients, DP-specific IgG4 increased more than 67-fold. BT-specific IgG4 increased more than 2.5-fold. A moderate correlation (r = 0.48-0.61, p < 0.01) was found between specific IgE against DP and BT in the BTMA- group (n = 34) before and after DP-SCIT, whereas no correlation was found in the BTMA+ group (n = 61). The two BTMA groups responded similarly with regard to clinical improvement and increase in specific IgG4 to both DP and BT. No safety findings of concern were reported in either group. CONCLUSION: DP-SCIT may be of clinical benefit to patients with IgE sensitization to both DP and BT. DP-SCIT induces IgG4 that cross-reacts with BT allergens.


Subject(s)
Antigens, Dermatophagoides/immunology, Asthma/immunology, Asthma/therapy, Dermatophagoides pteronyssinus/immunology, Desensitization, Immunologic, Rhinitis, Allergic/immunology, Rhinitis, Allergic/therapy, Adolescent, Animals, Asthma/diagnosis, Child, Child, Preschool, Desensitization, Immunologic/adverse effects, Desensitization, Immunologic/methods, Female, Humans, Immunoassay, Immunoglobulin E/blood, Immunoglobulin E/immunology, Immunoglobulin G/blood, Immunoglobulin G/immunology, Male, Respiratory Function Tests, Rhinitis, Allergic/diagnosis
7.
BMC Pediatr ; 19(1): 174, 2019 05 29.
Article in English | MEDLINE | ID: mdl-31142302

ABSTRACT

BACKGROUND: Prolonged neonatal jaundice (PNNJ) is often caused by breast milk jaundice, but it can also point to other serious conditions (biliary atresia, congenital hypothyroidism). When babies with PNNJ receive a routine set of laboratory investigations to detect serious but uncommon conditions, there is always a tendency to over-investigate a large number of well, breastfed babies. A local unpublished survey in Perak state, Malaysia, revealed that the diagnostic criteria and initial management of PNNJ were not standardized. This study aimed to evaluate and improve the current management of PNNJ in the administrative region of Perak. METHODS: A 3-phase quasi-experimental community study was conducted from April 2012 to June 2013. Phase I was a cross-sectional study to review the current practice of PNNJ management. Phase II was an interventional phase involving the implementation of a new protocol. Phase III was a 6-month post-intervention audit. A registry of PNNJ was implemented to record the incidence rate. A self-reporting surveillance system was put in place to receive any reports of biliary atresia, urinary tract infection, or congenital hypothyroidism cases. RESULTS: In Phase I, 12 hospitals responded, and 199 case notes were reviewed. In Phase II, a new protocol was developed and implemented in all government health facilities in Perak. In Phase III, the 6-month post-intervention audit showed significant improvements when comparing mean pre- and post-intervention scores: history taking scores (p < 0.001), family history details (p < 0.05), physical examination documentation (p < 0.001), and total investigations done per patient (from 9.01 to 5.81, p < 0.001). The number of patient visits decreased from 2.46 to 2.2 per patient. The incidence of PNNJ was high (incidence rate of 158 per 1000 live births). CONCLUSIONS: The new protocol standardized and improved the quality of care, with better clinical assessment and a reduction in unnecessary laboratory investigations. TRIAL REGISTRATION: Research registration number: NMRR-12-105-11288.


Subject(s)
Clinical Audit, Clinical Protocols/standards, Disease Management, Jaundice, Neonatal, Quality Improvement, Algorithms, Biliary Atresia/complications, Biliary Atresia/diagnosis, Cross-Sectional Studies, Family Health, Humans, Infant, Newborn, Jaundice, Neonatal/diagnosis, Jaundice, Neonatal/etiology, Jaundice, Neonatal/therapy, Malaysia, Medical History Taking, Medical Records, Neonatal Screening/standards, Physical Examination, Practice Guidelines as Topic, Referral and Consultation/standards, Registries/statistics & numerical data
9.
Support Care Cancer ; 26(12): 4049-4055, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29869719

ABSTRACT

PURPOSE: History of cancer is significantly associated with increases in healthcare costs, worse work performance, and higher absenteeism in the workplace. This is particularly important as most cancer survivors return to employment. Sleep disturbance is a largely overlooked potential contributor to these changes. METHODS: Data from 9488 state employees participating in the Kansas State employee wellness program were used to assess cancer history, sleep disturbance, healthcare expenditures, work performance ratings, and absenteeism. Participants were categorized as having had no history of breast or prostate cancer, a past history only with no current cancer treatment, or current treatment for breast or prostate cancer. Indirect mediation analyses determined whether sleep disturbance mediated the influence of cancer status on outcomes. RESULTS: Employees receiving treatment for breast or prostate cancer had significantly greater healthcare expenditures and absenteeism than those with a past history or no history of cancer (ps < .0001). Sleep disturbance significantly mediated the impact of cancer on healthcare expenditures and absenteeism (ps < .05), accounting for 2 and 8% of the impact of cancer on healthcare expenditure and missed full days of work, respectively. CONCLUSIONS: The worse outcomes observed among employees receiving treatment for breast and prostate cancer, the most common forms of cancer among women and men, were partially explained by the impacts of cancer and treatment for cancer on sleep disturbance. These findings suggest that preventing or addressing sleep disturbance may result in economic benefits in addition to improvements in health and quality of life.
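The indirect (mediation) analysis mentioned above can be illustrated with a simple product-of-coefficients sketch: regress the mediator on the exposure, regress the outcome on both, then report the indirect effect and proportion mediated. The simulated variables below are placeholders for treatment status, sleep disturbance, and healthcare expenditures; the study itself applied dedicated indirect mediation analyses to wellness-program data.

```python
# Hedged sketch of a product-of-coefficients mediation analysis on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
treatment = rng.integers(0, 2, n)                      # receiving cancer treatment (0/1)
sleep = 2 + 1.5 * treatment + rng.normal(0, 1, n)      # sleep disturbance score
expend = 3000 + 400 * treatment + 250 * sleep + rng.normal(0, 500, n)

# Path a: exposure -> mediator.
a = sm.OLS(sleep, sm.add_constant(treatment)).fit().params[1]

# Path b and direct effect: outcome on exposure and mediator.
fit_y = sm.OLS(expend, sm.add_constant(np.column_stack([treatment, sleep]))).fit()
direct, b = fit_y.params[1], fit_y.params[2]

indirect = a * b
total = direct + indirect
print(f"indirect effect: {indirect:.1f}, proportion mediated: {indirect / total:.2%}")
```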


Subject(s)
Absenteeism, Cancer Survivors/psychology, Health Expenditures/trends, Quality of Life/psychology, Sleep Wake Disorders/complications, Adolescent, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Workplace, Young Adult
10.
Cancer ; 123(12): 2338-2351, 2017 Jun 15.
Article in English | MEDLINE | ID: mdl-28211937

ABSTRACT

BACKGROUND: Annual computed tomography (CT) scans are a component of the current standard of care for the posttreatment surveillance of survivors of colorectal cancer (CRC) after curative-intent resection. The authors conducted a retrospective study with the primary aim of assessing patient, physician, and organizational characteristics associated with the receipt of CT surveillance among veterans. METHODS: The Department of Veterans Affairs Central Cancer Registry was used to identify patients diagnosed with AJCC collaborative stage I to III CRC between 2001 and 2009. Patient sociodemographic and clinical (ie, CRC stage and comorbidity) characteristics, provider specialty, and organizational characteristics were measured. Hierarchical multivariable logistic regression models were used to assess the association of patient, provider, and organizational characteristics with receipt of 1) consistently guideline-concordant care (at least 1 CT every 12 months for both of the first 2 years of CRC surveillance) versus no CT receipt and 2) potential overuse (>1 CT every 12 months during the first 2 years of CRC surveillance) of CRC surveillance using CT. The authors also analyzed the impact of the 2005 American Society of Clinical Oncology update to the CRC surveillance guidelines on care received over time. RESULTS: Among 2263 survivors of stage II/III CRC who were diagnosed after 2005, 19.4% of patients received no surveillance CT, whereas potential overuse occurred in both surveillance years for 14.9% of patients. Guideline-concordant care was associated with younger age, higher stage of disease (stage III vs stage II), and geographic region. In adjusted analyses, younger age and higher stage of disease (stage III vs stage II) were found to be associated with overuse. There was no significant difference in the annual rate of CT scanning across time periods (year ≤ 2005 vs year > 2005). CONCLUSIONS: Among a minority of veteran survivors of CRC, both underuse and potential overuse of CT surveillance were present. Patient factors, but not provider or organizational characteristics, were significantly associated with patterns of care. The 2005 change in American Society of Clinical Oncology guidelines did not appear to have an impact on rates of surveillance CT. Cancer 2017;123:2338-2351. © 2017 American Cancer Society.


Subject(s)
Adenocarcinoma/diagnostic imaging, Colorectal Neoplasms/diagnostic imaging, Guideline Adherence/statistics & numerical data, Hospitals, Veterans/statistics & numerical data, Neoplasm Recurrence, Local/diagnostic imaging, Registries, Survivors, Tomography, X-Ray Computed/statistics & numerical data, Adenocarcinoma/pathology, Adenocarcinoma/surgery, Adult, Age Factors, Aged, Aged, 80 and over, Colorectal Neoplasms/pathology, Colorectal Neoplasms/surgery, Female, Humans, Logistic Models, Male, Medical Overuse/statistics & numerical data, Middle Aged, Multivariate Analysis, Neoplasm Staging, Practice Guidelines as Topic, Retrospective Studies, United States, United States Department of Veterans Affairs
11.
Crit Care Med ; 45(5): 851-857, 2017 May.
Article in English | MEDLINE | ID: mdl-28263192

ABSTRACT

OBJECTIVES: Delirium severity is independently associated with longer hospital stays, nursing home placement, and death in patients outside the ICU. Delirium severity in the ICU is not routinely measured because the available instruments are difficult to complete in critically ill patients. We designed our study to assess the reliability and validity of a new ICU delirium severity tool, the Confusion Assessment Method for the ICU-7 delirium severity scale. DESIGN: Observational cohort study. SETTING: Medical, surgical, and progressive ICUs of three academic hospitals. PATIENTS: Five hundred eighteen adult (≥ 18 yr) patients. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patients received the Confusion Assessment Method for the ICU, Richmond Agitation-Sedation Scale, and Delirium Rating Scale-Revised-98 assessments. A 7-point scale (0-7) was derived from responses to the Confusion Assessment Method for the ICU and Richmond Agitation-Sedation Scale items. Confusion Assessment Method for the ICU-7 showed high internal consistency (Cronbach's α = 0.85) and good correlation with Delirium Rating Scale-Revised-98 scores (correlation coefficient = 0.64). Known-groups validity was supported by the separation of mechanically ventilated and nonventilated assessments. Median Confusion Assessment Method for the ICU-7 scores demonstrated good predictive validity with higher odds (odds ratio = 1.47; 95% CI = 1.30-1.66) of in-hospital mortality and lower odds (odds ratio = 0.8; 95% CI = 0.72-0.9) of being discharged home after adjusting for age, race, gender, severity of illness, and chronic comorbidities. Higher Confusion Assessment Method for the ICU-7 scores were also associated with increased length of ICU stay (p = 0.001). CONCLUSIONS: Our results suggest that Confusion Assessment Method for the ICU-7 is a valid and reliable delirium severity measure among ICU patients. Further research comparing it to other delirium severity measures, its use in delirium efficacy trials, and real-life implementation is needed to determine its role in research and clinical practice.
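Internal consistency of a multi-item severity scale, reported above as Cronbach's alpha, can be computed as in the short sketch below. The item matrix is simulated for illustration; the actual CAM-ICU-7 items come from CAM-ICU features and the RASS.

```python
# Hedged sketch: Cronbach's alpha for a small item matrix of severity scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_assessments, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(518, 1))                                 # underlying severity
items = np.clip(np.rint(latent + rng.normal(0, 0.6, (518, 4))) + 1, 0, 2)  # simulated 0-2 items
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```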


Subject(s)
Delirium/diagnosis, Intensive Care Units/statistics & numerical data, Psychiatric Status Rating Scales/standards, Severity of Illness Index, Adult, Aged, Attention, Consciousness, Female, Hospitals, University, Humans, Male, Mental Status Schedule, Middle Aged, Patient Discharge, Prospective Studies, Reproducibility of Results, Socioeconomic Factors
12.
Crit Care Med ; 44(9): 1727-34, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27276344

ABSTRACT

OBJECTIVES: Delirium is a highly prevalent syndrome of acute brain dysfunction among critically ill patients that has been linked to multiple risk factors, such as age, preexisting cognitive impairment, and use of sedatives; but to date, the relationship between race and delirium is unclear. We conducted this study to identify whether African-American race is a risk factor for developing ICU delirium. DESIGN: A prospective cohort study. SETTING: Medical and surgical ICUs of a university-affiliated, safety net hospital in Indianapolis, IN. PATIENTS: A total of 2,087 consecutive admissions with 1,008 African Americans admitted to the ICU services from May 2009 to August 2012. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Incident delirium was defined as first positive Confusion Assessment Method for the ICU result after an initial negative Confusion Assessment Method for the ICU; and prevalent delirium was defined as positive Confusion Assessment Method for the ICU on first Confusion Assessment Method for the ICU assessment. The overall incident delirium rate in African Americans was 8.7% compared with 10.4% in Caucasians (p = 0.26). The prevalent delirium rate was 14% in both African Americans and Caucasians (p = 0.95). Significant age and race interactions were detected for incident delirium (p = 0.02) but not for prevalent delirium (p = 0.3). The hazard ratio for incident delirium for African Americans in the 18-49 years age group compared with Caucasians of similar age was 0.4 (0.1-0.9). The hazard and odds ratios for incident and prevalent delirium in other groups were not different. CONCLUSIONS: African-American race does not confer any additional risk for developing incident or prevalent delirium in the ICU. Instead, younger African Americans tend to have lower rates of incident delirium compared with Caucasians of similar age.


Subject(s)
Black or African American/psychology, Critical Care, Delirium/ethnology, Adolescent, Adult, Aged, Critical Illness, Delirium/diagnosis, Female, Humans, Incidence, Male, Middle Aged, Outcome Assessment, Health Care, Prevalence, Prospective Studies, Risk Factors, Young Adult
13.
J Cancer Educ ; 31(3): 421-9, 2016 09.
Article in English | MEDLINE | ID: mdl-26507744

ABSTRACT

Participation in cancer prevention trials (CPT) is lower than 3% among high-risk healthy individuals, and racial/ethnic minorities are the most under-represented. Novel recruitment strategies are therefore needed. Online health risk assessment (HRA) serves as a gateway component of nearly all employee wellness programs (EWPs) and may be a missed opportunity. This study aimed to explore employees' interest, willingness, motivators, and barriers to releasing their HRA responses to an external secure research database for recruitment purposes. We used qualitative research methods (focus groups and individual interviews) to examine employees' interest and willingness to release their online HRA responses to an external, secure database to register as potential CPT participants. Fifteen structured interviews (40% of study participants were of racial/ethnic minority) were conducted, and responses reached saturation after four interviews. All employees showed interest and willingness to release their online HRA responses to register as a potential CPT participant. Content analyses revealed that 91% of participants were motivated to do so, and the major motivators were to (1) obtain help in finding personally relevant prevention trials, (2) help people they know who are affected by cancer, and/or (3) increase knowledge about CPT. A subset of participants (45%) expressed barriers to releasing their HRA responses due to concerns about the credibility and security of the external database. Online HRA may be a feasible but underutilized recruitment method for cancer prevention trials. EWP-sponsored HRA shows promise for the development of a large, centralized registry of racially/ethnically representative potential CPT participants.


Subject(s)
Clinical Trials as Topic/methods, Clinical Trials as Topic/psychology, Motivation, Neoplasms/prevention & control, Patient Selection, Research Design, Adult, Female, Focus Groups, Health Knowledge, Attitudes, Practice, Health Promotion, Humans, Male, Middle Aged, Neoplasms/psychology, Occupational Health Services, Qualitative Research, Risk Assessment, Socioeconomic Factors
14.
Clin Gastroenterol Hepatol ; 13(13): 2323-32.e1-9, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26122761

ABSTRACT

BACKGROUND & AIMS: In outpatients undergoing endoscopic retrograde cholangiopancreatography (ERCP) with anesthesia, rates of and risk factors for admission are unclear. We aimed to develop a model that would allow physicians to predict hospitalization of patients during postanesthesia recovery. METHODS: We conducted a retrospective study of data from ERCPs performed on outpatients from May 2012 through October 2013 at the Indiana University School of Medicine. Medical records were abstracted for preanesthesia, intra-anesthesia, and early (within the first hour) postanesthesia characteristics potentially associated with admission. Significant factors associated with admission were incorporated into a logistic regression model to identify subgroups with low, moderate, or high probabilities for admission. The population was divided into training (first 12 months) and validation (last 6 months) sets to develop and test the model. RESULTS: We identified 3424 ERCPs during the study period; 10.7% of patients were admitted to the hospital, and 3.7% developed post-ERCP pancreatitis. Postanesthesia recovery times were significantly longer for patients requiring admission (362.6 ± 213.0 minutes vs 218.4 ± 71.8 minutes for patients not admitted; P < .0001). A higher proportion of admitted patients had high-risk indications. Admitted patients also had more severe comorbidities, higher baseline levels of pain, longer procedure times, performance of sphincter of Oddi manometry, higher pain during the first hour after anesthesia, and greater use of opiates or anxiolytics. A multivariate regression model identified patients who were admitted with a high level of accuracy in the training set (area under the curve, 0.83) and fair accuracy in the validation set (area under the curve, 0.78). On the basis of this model, nearly 50% of patients could be classified as low risk for admission. CONCLUSION: By using factors that can be assessed through the first hour after ERCP, we developed a model that accurately predicts which patients are likely to be admitted to the hospital. Rates of admission after outpatient ERCP are low, so a policy of prolonged observation might be unnecessary.


Subject(s)
Ambulatory Care/methods, Cholangiopancreatography, Endoscopic Retrograde/adverse effects, Decision Support Techniques, Pancreatitis/epidemiology, Patient Admission, Adult, Aged, Female, Hospitals, University, Humans, Indiana/epidemiology, Male, Middle Aged, Retrospective Studies, Risk Assessment
15.
Crit Care Med ; 42(12): e791-5, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25402299

ABSTRACT

OBJECTIVES: Mechanically ventilated critically ill patients receive significant amounts of sedatives and analgesics that increase their risk of developing coma and delirium. We evaluated the impact of a "Wake-up and Breathe Protocol" at our local ICU on sedation and delirium. DESIGN: A pre/post implementation study design. SETTING: A 22-bed mixed surgical and medical ICU. PATIENTS: Seven hundred two consecutive mechanically ventilated ICU patients from June 2010 to January 2013. INTERVENTIONS: Implementation of daily paired spontaneous awakening trials (daily sedation vacation plus spontaneous breathing trials) as a quality improvement project. MEASUREMENTS AND MAIN RESULTS: After implementation of our program, there was an increase in the mean Richmond Agitation Sedation Scale scores on weekdays of 0.88 (p < 0.0001) and an increase in the mean Richmond Agitation Sedation Scale scores on weekends of 1.21 (p < 0.0001). After adjusting for age, race, gender, severity of illness, primary diagnosis, and ICU, the incidence and prevalence of delirium did not change post implementation of the protocol (incidence: 23% pre vs 19.6% post; p = 0.40; prevalence: 66.7% pre vs 55.3% post; p = 0.06). The combined prevalence of delirium/coma decreased from 90.8% pre protocol implementation to 85% postimplementation (odds ratio, 0.505; 95% CI, 0.299-0.853; p = 0.01). CONCLUSIONS: Implementing a "Wake Up and Breathe Program" resulted in reduced sedation among critically ill mechanically ventilated patients but did not change the incidence or prevalence of delirium.


Subject(s)
Critical Illness, Deep Sedation/methods, Delirium/prevention & control, Respiration, Artificial/methods, Respiration, Adult, Aged, Clinical Protocols, Coma/prevention & control, Female, Humans, Incidence, Intensive Care Units, Length of Stay, Male, Middle Aged
16.
Stat Med ; 33(3): 500-13, 2014 Feb 10.
Article in English | MEDLINE | ID: mdl-24038175

ABSTRACT

In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.
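The core idea, verifying the gold standard only in a subsample that over-samples discordant classifications and weighting estimates by the inverse of the sampling probabilities, can be sketched as below. This is a simplified illustration with made-up parameters; it does not reproduce the paper's variance formulas or the derived optimal allocation.

```python
# Hedged sketch: two-phase verification with discordant pairs over-sampled and
# inverse-probability weighting for the difference in sensitivities.
import numpy as np

rng = np.random.default_rng(3)
n = 20000
truth = rng.binomial(1, 0.3, n)                    # true status (only observed when verified)
rule_a = np.where(truth == 1, rng.binomial(1, 0.90, n), rng.binomial(1, 0.05, n))
rule_b = np.where(truth == 1, rng.binomial(1, 0.85, n), rng.binomial(1, 0.04, n))

# Phase 2: verify discordant pairs with high probability, concordant pairs rarely.
discordant = rule_a != rule_b
p_sample = np.where(discordant, 0.8, 0.05)
sampled = rng.binomial(1, p_sample).astype(bool)
w = 1.0 / p_sample                                 # inverse-probability weights

def weighted_sensitivity(pred, gold, weights, mask):
    pos = mask & (gold == 1)
    return (weights[pos] * pred[pos]).sum() / weights[pos].sum()

sens_a = weighted_sensitivity(rule_a, truth, w, sampled)
sens_b = weighted_sensitivity(rule_b, truth, w, sampled)
print(f"estimated sensitivity difference: {sens_a - sens_b:.3f}")
```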


Subject(s)
Algorithms, Biometry/methods, Classification/methods, Models, Statistical, Predictive Value of Tests, Female, Humans, Male
17.
Stat Med ; 33(24): 4250-65, 2014 Oct 30.
Article in English | MEDLINE | ID: mdl-24935712

ABSTRACT

Record linkage methods commonly use a traditional latent class model to classify record pairs from different sources as true matches or non-matches. This approach was first formally described by Fellegi and Sunter and assumes that the agreement in fields is independent conditional on the latent class. Consequences of violating the conditional independence assumption include bias in parameter estimates from the model. We sought to further characterize the impact of conditional dependence on the overall misclassification rate, sensitivity, and positive predictive value in the record linkage problem when the conditional independence assumption is violated. Additionally, we evaluate various methods to account for the conditional dependence. These methods include loglinear models with appropriate interaction terms identified through the correlation residual plot as well as Gaussian random effects models. The proposed models are used to link newborn screening data obtained from a health information exchange. On the basis of simulations, loglinear models with interaction terms demonstrated the best misclassification rate, although this type of model cannot accommodate other data features such as continuous measures for agreement. Results indicate that Gaussian random effects models, which can handle additional data features, perform better than assuming conditional independence and in some situations perform as well as the loglinear model with interaction terms.
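For readers unfamiliar with the traditional model the paper builds on, the sketch below shows basic Fellegi-Sunter scoring under conditional independence: each field contributes a log likelihood-ratio weight based on its m and u probabilities. The field set and probabilities are invented for illustration and are not estimates from the newborn-screening linkage.

```python
# Hedged sketch: composite Fellegi-Sunter match weight under conditional independence.
import math

# m = P(fields agree | true match), u = P(fields agree | non-match); illustrative values.
fields = {
    "last_name":     {"m": 0.95, "u": 0.02},
    "first_name":    {"m": 0.90, "u": 0.05},
    "date_of_birth": {"m": 0.98, "u": 0.001},
    "zip_code":      {"m": 0.85, "u": 0.10},
}

def match_weight(agreement: dict) -> float:
    """Sum of log2 likelihood ratios; larger positive values favour a true match."""
    total = 0.0
    for field, params in fields.items():
        m, u = params["m"], params["u"]
        if agreement[field]:
            total += math.log2(m / u)
        else:
            total += math.log2((1 - m) / (1 - u))
    return total

pair = {"last_name": True, "first_name": True, "date_of_birth": True, "zip_code": False}
print(f"composite match weight: {match_weight(pair):.2f}")
```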


Subject(s)
Algorithms, Biometry/methods, Confidence Intervals, Medical Records/classification, Models, Statistical, Computer Simulation, Female, Humans, Indiana, Infant, Newborn, Male, Neonatal Screening/standards
18.
Proc Natl Acad Sci U S A ; 108(46): E1146-55, 2011 Nov 15.
Article in English | MEDLINE | ID: mdl-22006328

ABSTRACT

Autosomal dominant hypophosphatemic rickets (ADHR) is unique among the disorders involving Fibroblast growth factor 23 (FGF23) because individuals with R176Q/W and R179Q/W mutations in the FGF23 (176)RXXR(179)/S(180) proteolytic cleavage motif can cycle from unaffected status to delayed onset of disease. This onset may occur in physiological states associated with iron deficiency, including puberty and pregnancy. To test the role of iron status in development of the ADHR phenotype, WT and R176Q-Fgf23 knock-in (ADHR) mice were placed on control or low-iron diets. Both the WT and ADHR mice receiving low-iron diet had significantly elevated bone Fgf23 mRNA. WT mice on a low-iron diet maintained normal serum intact Fgf23 and phosphate metabolism, with elevated serum C-terminal Fgf23 fragments. In contrast, the ADHR mice on the low-iron diet had elevated intact and C-terminal Fgf23 with hypophosphatemic osteomalacia. We used in vitro iron chelation to isolate the effects of iron deficiency on Fgf23 expression. We found that iron chelation in vitro resulted in a significant increase in Fgf23 mRNA that was dependent upon Mapk. Thus, unlike other syndromes of elevated FGF23, our findings support the concept that late-onset ADHR is the product of gene-environment interactions whereby the combined presence of an Fgf23-stabilizing mutation and iron deficiency can lead to ADHR.


Subject(s)
Familial Hypophosphatemic Rickets/genetics, Fibroblast Growth Factors/genetics, Iron Deficiencies, Anemia, Iron-Deficiency/complications, Animals, Familial Hypophosphatemic Rickets/physiopathology, Female, Fibroblast Growth Factor-23, Gene-Environment Interaction, Glucuronidase/metabolism, Hypophosphatemia/genetics, Klotho Proteins, MAP Kinase Signaling System, Male, Mice, Mice, Transgenic, Osteocytes/cytology, Osteomalacia/genetics, Phenotype, Protein Structure, Tertiary, Rats
19.
Ann Am Thorac Soc ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39012168

ABSTRACT

RATIONALE: Observational studies report a significant protective effect of antifibrotics on mortality among patients with idiopathic pulmonary fibrosis. Many of these studies, however, were subject to immortal time bias due to the mishandling of delayed antifibrotic initiation. OBJECTIVES: To evaluate the antifibrotic effect on mortality among patients with idiopathic pulmonary fibrosis using appropriate statistical methods that avoid immortal time bias. METHODS: Using a large administrative database, we identified 10,289 patients with idiopathic pulmonary fibrosis, of whom 2,300 used antifibrotics. Treating delayed antifibrotic initiation as a time-dependent variable, three statistical methods were used to control for baseline characteristics and avoid immortal time bias. Stratified analysis was performed for patients who initiated antifibrotics early and those who initiated treatment late. For comparison, methods that mishandle immortal time bias were also applied. A simulation study was conducted to demonstrate the performance of these models in a wide range of scenarios. MEASUREMENTS AND MAIN RESULTS: All three statistical methods yielded non-significant results for the antifibrotic effect on mortality, with the stratified analysis for patients with early antifibrotic initiation suggesting evidence for reduced mortality risk: HR=0.89 (95% CI: 0.79-1.01, p=0.08) for all patients and HR=0.85 (95% CI: 0.73-0.98, p=0.03) for patients who were 65 years or older. Methods that mishandle immortal time bias demonstrated significantly lower mortality risk for antifibrotic users. The bias of these methods was evident in the simulation study, where the appropriate methods performed well with little to no bias. CONCLUSIONS: Findings in this study did not confirm an association between antifibrotics and mortality, with a stratified analysis showing support for a potential treatment effect with early treatment initiation.
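The key methodological point above, treating delayed antifibrotic initiation as a time-dependent exposure so that pre-initiation person-time is not credited to the treated group, can be sketched with a start-stop (counting-process) Cox model. The toy long-format data and column names below are illustrative assumptions, not the administrative database or the study's code.

```python
# Hedged sketch: time-varying exposure Cox model that avoids immortal time bias.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long (start-stop) format: the antifibrotic flag switches to 1 at initiation,
# so months lived before initiation count as unexposed person-time.
rows = [
    # id, start, stop, antifibrotic, event (death)
    (1, 0,  6, 0, 0), (1, 6, 30, 1, 1),   # initiated at month 6, died at 30
    (2, 0, 24, 0, 1),                      # never treated, died at 24
    (3, 0,  3, 0, 0), (3, 3, 40, 1, 0),   # initiated at month 3, censored
    (4, 0, 36, 0, 0),                      # never treated, censored
    (5, 0, 12, 0, 0), (5, 12, 20, 1, 1),  # initiated at month 12, died at 20
    (6, 0, 15, 0, 1),                      # never treated, died at 15
    (7, 0,  9, 0, 0), (7, 9, 50, 1, 0),   # initiated at month 9, censored
    (8, 0, 28, 0, 0),                      # never treated, censored
]
long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "antifibrotic", "event"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```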

20.
Neural Netw ; 178: 106424, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38875934

ABSTRACT

In natural language processing, fact verification is a challenging task that requires retrieving multiple evidence sentences from a reliable corpus to verify the authenticity of a claim. Although most current deep learning methods use the attention mechanism for fact verification, they do not impose attentional constraints on important related words in the claim and evidence sentences, so attention is sometimes misallocated to irrelevant words. In this paper, we propose a syntactic evidence network (SENet) model which incorporates entity keywords, syntactic information, and sentence attention for fact verification. The SENet model extracts entity keywords from claim and evidence sentences, uses a pre-trained syntactic dependency parser to extract the corresponding syntactic sentence structures, and incorporates the extracted syntactic information into the attention mechanism for language-driven word representation. In addition, a sentence attention mechanism is applied to obtain a richer semantic representation. We conducted experiments on the FEVER and UKP Snopes datasets for performance evaluation. Our SENet model achieved 78.69% Label Accuracy and 75.63% FEVER Score on the FEVER dataset, and 65.0% precision and 61.2% macro F1 on the UKP Snopes dataset. The experimental results show that the proposed SENet model outperforms the baseline models and achieves state-of-the-art performance for fact verification.
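As a rough illustration of how a pre-trained dependency parser could feed syntactic constraints into attention, the hedged sketch below builds a head-dependent adjacency mask over a claim sentence. The spaCy model name and the masking scheme are assumptions for illustration; SENet additionally uses entity keywords and sentence-level attention.

```python
# Hedged sketch: derive a syntactic adjacency mask from a dependency parse that
# could be used to restrict attention to syntactically related word pairs.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")           # pre-trained syntactic dependency parser (assumed)
claim = "The Eiffel Tower was completed in 1889 in Paris."
doc = nlp(claim)

n = len(doc)
syntax_mask = np.eye(n, dtype=bool)           # every token may attend to itself
for tok in doc:
    syntax_mask[tok.i, tok.head.i] = True     # token -> its syntactic head
    syntax_mask[tok.head.i, tok.i] = True     # head -> dependent

# In a model, disallowed pairs could receive -inf added to the attention logits,
# concentrating attention on syntactically related words.
for tok in doc:
    related = [doc[j].text for j in np.flatnonzero(syntax_mask[tok.i])]
    print(f"{tok.text:>10} -> {related}")
```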
