Results 1 - 20 of 158
1.
Neural Netw ; 178: 106424, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38875934

ABSTRACT

In natural language processing, fact verification is a challenging task that requires retrieving multiple evidence sentences from a reliable corpus to verify the authenticity of a claim. Although most current deep learning methods use attention mechanisms for fact verification, they do not impose attentional constraints on the important related words in the claim and evidence sentences, so attention is spent on irrelevant words. In this paper, we propose a syntactic evidence network (SENet) model that incorporates entity keywords, syntactic information, and sentence attention for fact verification. The SENet model extracts entity keywords from the claim and evidence sentences, uses a pre-trained syntactic dependency parser to extract the corresponding syntactic sentence structures, and incorporates the extracted syntactic information into the attention mechanism for language-driven word representation. In addition, a sentence attention mechanism is applied to obtain a richer semantic representation. We conducted experiments on the FEVER and UKP Snopes datasets for performance evaluation. SENet achieved 78.69% label accuracy and a 75.63% FEVER score on the FEVER dataset, and 65.0% precision and 61.2% macro F1 on the UKP Snopes dataset. The experimental results show that the proposed SENet model outperforms the baseline models and achieves state-of-the-art performance for fact verification.
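
As a rough, unofficial illustration of the idea of constraining attention with syntactic structure (not the authors' SENet code), the sketch below uses spaCy's dependency parser to build a mask that only lets each token attend to itself, its syntactic head, and its dependents; the model name en_core_web_sm and the way the mask is applied are assumptions.

    # Hedged sketch: restrict attention weights to syntactically related tokens.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import numpy as np
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def syntactic_attention_mask(text: str) -> np.ndarray:
        """Return an n x n 0/1 mask; mask[i, j] = 1 means token i may attend to token j."""
        doc = nlp(text)
        n = len(doc)
        mask = np.eye(n)                       # every token attends to itself
        for tok in doc:
            mask[tok.i, tok.head.i] = 1        # ... and to its syntactic head
            for child in tok.children:
                mask[tok.i, child.i] = 1       # ... and to its dependents
        return mask

    def masked_softmax(scores: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Zero out attention to syntactically unrelated words before normalizing."""
        scores = np.where(mask > 0, scores, -1e9)
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    claim = "The film was directed by a Canadian filmmaker."
    m = syntactic_attention_mask(claim)
    attn = masked_softmax(np.random.rand(len(m), len(m)), m)   # toy scores for demo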

3.
Ann Surg Oncol ; 30(5): 2883-2894, 2023 May.
Article in English | MEDLINE | ID: mdl-36749504

ABSTRACT

BACKGROUND: Measures taken to address the COVID-19 pandemic interrupted routine diagnosis and care for breast cancer. The aim of this study was to characterize the effects of the pandemic on breast cancer care in a statewide cohort. PATIENTS AND METHODS: Using data from a large health information exchange, we retrospectively analyzed the timing of breast cancer screening and identified a cohort of newly diagnosed patients with any stage of breast cancer to further assess the information available about their surgical treatments. We compared data for four subgroups: pre-lockdown (preLD) 25 March to 16 June 2019; lockdown (LD) 23 March to 3 May 2020; reopening (RO) 4 May to 14 June 2020; and post-lockdown (postLD) 22 March to 13 June 2021. RESULTS: During LD and RO, screening mammograms in the cohort decreased by 96.3% and 36.2%, respectively. The overall breast cancer diagnosis and surgery volumes decreased by up to 38.7%, and the median time to surgery was prolonged from 1.5 months to 2.4 months for LD and 1.8 months for RO. Interestingly, higher mean DCIS diagnosis volumes (5.0 per week vs. 3.1 per week, p < 0.05) and surgery volumes (14.8 vs. 10.5, p < 0.05) were found for postLD compared with preLD, while median time to surgery was shorter (1.2 months vs. 1.5 months, p < 0.0001). However, postLD average weekly screening and diagnostic mammogram volumes did not fully recover to preLD levels (2055.3 vs. 2326.2, p < 0.05; 574.2 vs. 624.1, p < 0.05). CONCLUSIONS: Breast cancer diagnosis and treatment patterns were interrupted during the lockdown and remained altered 1 year later. Screening in primary care should be expanded to mitigate possible longer-term effects of these interruptions.
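
For readers who want to reproduce this kind of period comparison on their own data, here is a minimal pandas sketch; the file names, column names, and the 30.44-day month are illustrative assumptions, not the study's actual HIE extract or code.

    # Hedged sketch: weekly mammogram volume and median time to surgery by period.
    import pandas as pd

    periods = {
        "preLD":  ("2019-03-25", "2019-06-16"),
        "LD":     ("2020-03-23", "2020-05-03"),
        "RO":     ("2020-05-04", "2020-06-14"),
        "postLD": ("2021-03-22", "2021-06-13"),
    }

    def label_period(date):
        for name, (start, end) in periods.items():
            if pd.Timestamp(start) <= date <= pd.Timestamp(end):
                return name
        return None

    mammo = pd.read_csv("screening_mammograms.csv", parse_dates=["exam_date"])
    mammo["period"] = mammo["exam_date"].apply(label_period)
    weeks = {p: (pd.Timestamp(e) - pd.Timestamp(s)).days / 7 for p, (s, e) in periods.items()}
    weekly_volume = mammo["period"].value_counts() / pd.Series(weeks)

    surg = pd.read_csv("surgeries.csv", parse_dates=["diagnosis_date", "surgery_date"])
    surg["period"] = surg["diagnosis_date"].apply(label_period)
    surg["months_to_surgery"] = (surg["surgery_date"] - surg["diagnosis_date"]).dt.days / 30.44

    print(weekly_volume)
    print(surg.groupby("period")["months_to_surgery"].median())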


Subject(s)
Breast Neoplasms , COVID-19 , Health Information Exchange , Humans , Female , Breast Neoplasms/diagnosis , Breast Neoplasms/epidemiology , Breast Neoplasms/surgery , COVID-19/epidemiology , Pandemics , Retrospective Studies , Early Detection of Cancer , Communicable Disease Control , COVID-19 Testing
4.
Calif J Health Promot ; 19(1): 76-83, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34566536

ABSTRACT

BACKGROUND AND PURPOSE: Asian-Americans suffer from significant liver cancer disparity caused by chronic hepatitis B virus (HBV) infection. Understanding psychosocial predictors of HBV screening is critical to designing effective interventions. METHODS: Chinese-, Korean-, and Vietnamese-Americans in the Baltimore-Washington metropolitan region (N=877) were recruited from community-based organizations. Applying the Social Cognitive Theory (SCT), three main theoretical constructs (knowledge, outcome expectancy, and self-efficacy) were tested. Descriptive analyses using Chi-square and ANOVA and multivariate logistic regression models were conducted. RESULTS: About 47% of participants reported ever having screening for HBV. Vietnamese-Americans had the lowest HBV screening rate (39%), followed by Korean-Americans (46%) and Chinese-Americans (55%). Multiple logistic regression analyses showed significant effects of HBV-related knowledge on screening in all three groups, whereas self-efficacy had significant effects in the Chinese and Korean subgroups, but not Vietnamese. HBV outcome expectancy had no effect on the screening outcome in any of the groups. Additionally, consistent in all three groups, those who had lived in the United States longer were less likely to have screening. CONCLUSION: HBV screening rates in Asian Americans remain low; targeted interventions need to consider the differences across ethnic subgroups and address the psychosocial risk factors.
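
A hedged sketch of the kind of group-stratified multivariate logistic regression described above; the file name, column names, and coding of the SCT constructs are illustrative assumptions, not the study's instrument or analysis code.

    # Hedged sketch: logistic regression of HBV screening on SCT constructs, by group.
    # Assumes predictors are already numeric and the outcome is coded 0/1.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("hbv_survey.csv")   # hypothetical analytic file
    predictors = ["knowledge_score", "outcome_expectancy", "self_efficacy",
                  "age", "years_in_us"]

    for group in ["Chinese", "Korean", "Vietnamese"]:
        sub = df[df["ethnic_group"] == group]
        X = sm.add_constant(sub[predictors])
        fit = sm.Logit(sub["ever_screened"], X).fit(disp=False)
        print(group)
        print(np.exp(fit.params).round(2))   # odds ratios per predictor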

5.
Learn Health Syst ; 5(3): e10281, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34277946

ABSTRACT

INTRODUCTION: Learning health systems (LHSs) are usually created and maintained by single institutions or healthcare systems. The Indiana Learning Health System Initiative (ILHSI) is a new multi-institutional, collaborative regional LHS initiative led by the Regenstrief Institute (RI) and developed in partnership with five additional organizations: two Indiana-based health systems, two schools at Indiana University, and our state-wide health information exchange. We report our experiences and lessons learned during the initial 2-year phase of developing and implementing the ILHSI. METHODS: The initial goals of the ILHSI were to instantiate the concept, establish partnerships, and perform LHS pilot projects to inform expansion. We established shared governance and technical capabilities, conducted a literature review-based and regional environmental scan, and convened key stakeholders to iteratively identify focus areas, and select and implement six initial joint projects. RESULTS: The ILHSI successfully collaborated with its partner organizations to establish a foundational governance structure, set goals and strategies, and prioritize projects and training activities. We developed and deployed strategies to effectively use health system and regional HIE infrastructure and minimize information silos, a frequent challenge for multi-organizational LHSs. Successful projects were diverse and included deploying a Fast Healthcare Interoperability Standards (FHIR)-based tool across emergency departments state-wide, analyzing free-text elements of cross-hospital surveys, and developing models to provide clinical decision support based on clinical and social determinants of health. We also experienced organizational challenges, including changes in key leadership personnel and varying levels of engagement with health system partners, which impacted initial ILHSI efforts and structures. Reflecting on these early experiences, we identified lessons learned and next steps. CONCLUSIONS: Multi-organizational LHSs can be challenging to develop but present the opportunity to leverage learning across multiple organizations and systems to benefit the general population. Attention to governance decisions, shared goal setting and monitoring, and careful selection of projects are important for early success.

6.
BMC Med Inform Decis Mak ; 21(1): 112, 2021 04 03.
Article in English | MEDLINE | ID: mdl-33812369

ABSTRACT

BACKGROUND: Many patients with atrial fibrillation (AF) remain undiagnosed despite availability of interventions to reduce stroke risk. Predictive models to date are limited by data requirements and theoretical usage. We aimed to develop a model for predicting the 2-year probability of AF diagnosis and implement it as proof-of-concept (POC) in a production electronic health record (EHR). METHODS: We used a nested case-control design with data from the Indiana Network for Patient Care. The development cohort came from 2016 to 2017 (outcome period) and 2014 to 2015 (baseline). A separate validation cohort used outcome and baseline periods shifted 2 years before the respective development cohort times. Machine learning approaches were used to build the predictive model. Patients ≥ 18 years, later restricted to age ≥ 40 years, with at least two encounters and no AF during baseline, were included. In the 6-week EHR prospective pilot, the model was silently implemented in the production system at a large safety-net urban hospital. Three new and two previous logistic regression models were evaluated using receiver-operating characteristic analysis. Number, characteristics, and CHA2DS2-VASc scores of patients identified by the model in the pilot are presented. RESULTS: After restricting age to ≥ 40 years, 31,474 AF cases (mean age, 71.5 years; female 49%) and 22,078 controls (mean age, 59.5 years; female 61%) comprised the development cohort. A 10-variable model using age, acute heart disease, albumin, body mass index, chronic obstructive pulmonary disease, gender, heart failure, insurance, kidney disease, and shock yielded the best performance (C-statistic, 0.80 [95% CI 0.79-0.80]). The model performed well in the validation cohort (C-statistic, 0.81 [95% CI 0.80-0.81]). In the EHR pilot, 7,916/22,272 (35.5%; mean age, 66 years; female 50%) were identified as higher risk for AF; 5,582 (70%) had a CHA2DS2-VASc score ≥ 2. CONCLUSIONS: Using variables commonly available in the EHR, we created a predictive model to identify 2-year risk of developing AF in those previously without diagnosed AF. Successful POC implementation of the model in an EHR provided a practical strategy to identify patients who may benefit from interventions to reduce their stroke risk.
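
A minimal sketch of fitting and scoring a 10-variable logistic model of this kind; the feature names mirror the abstract, but the data file, preprocessing, and random train/test split are assumptions (the study used a temporally separate validation cohort rather than a random split).

    # Hedged sketch: 10-variable logistic model with the C-statistic (ROC AUC).
    # Assumes categorical fields (gender, insurance, ...) are already numerically encoded.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    features = ["age", "acute_heart_disease", "albumin", "bmi", "copd",
                "gender", "heart_failure", "insurance", "kidney_disease", "shock"]
    df = pd.read_csv("af_cohort.csv")           # one row per patient; af_dx = 0/1 outcome

    X_tr, X_te, y_tr, y_te = train_test_split(
        df[features], df["af_dx"], test_size=0.3, random_state=0, stratify=df["af_dx"])

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    c_statistic = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"C-statistic: {c_statistic:.2f}")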


Subject(s)
Atrial Fibrillation , Stroke , Adult , Aged , Atrial Fibrillation/diagnosis , Atrial Fibrillation/epidemiology , Electronic Health Records , Female , Humans , Indiana , Middle Aged , Predictive Value of Tests , Prospective Studies , Risk Assessment , Risk Factors , Stroke/diagnosis , Stroke/epidemiology
7.
Am J Obstet Gynecol ; 224(6): 599.e1-599.e18, 2021 06.
Article in English | MEDLINE | ID: mdl-33460585

ABSTRACT

BACKGROUND: Intrauterine devices are effective and safe, long-acting reversible contraceptives, but the risk of uterine perforation occurs with an estimated incidence of 1 to 2 per 1000 insertions. The European Active Surveillance Study for Intrauterine Devices, a European prospective observational study that enrolled 61,448 participants (2006-2012), found that women breastfeeding at the time of device insertion or with the device inserted at ≤36 weeks after delivery had a higher risk of uterine perforation. The Association of Uterine Perforation and Expulsion of Intrauterine Device (APEX-IUD) study was a Food and Drug Administration-mandated study designed to reflect current United States clinical practice. The aims of the APEX-IUD study were to evaluate the risk of intrauterine device-related uterine perforation and device expulsion among women who were breastfeeding or within 12 months after delivery at insertion. OBJECTIVE: We aimed to describe the APEX-IUD study design, methodology, and analytical plan and present population characteristics, size of risk factor groups, and duration of follow-up. STUDY DESIGN: The APEX-IUD study was a retrospective cohort study conducted in 4 organizations with access to electronic health records: Kaiser Permanente Northern California, Kaiser Permanente Southern California, Kaiser Permanente Washington, and Regenstrief Institute in Indiana. Variables were identified through structured data (eg, diagnostic, procedural, medication codes) and unstructured data (eg, clinical notes) via natural language processing. Outcomes included uterine perforation and device expulsion; potential risk factors were breastfeeding at insertion, postpartum timing of insertion, device type, and menorrhagia diagnosis in the year before insertion. Covariates included demographic characteristics, clinical characteristics, and procedure-related variables, such as difficult insertion. The first potential date of inclusion for eligible women varies by research site (from January 1, 2001 to January 1, 2010). Follow-up begins at insertion and ends at first occurrence of an outcome of interest, a censoring event (device removal or reinsertion, pregnancy, hysterectomy, sterilization, device expiration, death, disenrollment, last clinical encounter), or end of the study period (June 30, 2018). Comparisons of levels of exposure variables were made using Cox regression models with confounding adjusted by propensity score weighting using overlap weights. RESULTS: The study population includes 326,658 women with at least 1 device insertion during the study period (Kaiser Permanente Northern California, 161,442; Kaiser Permanente Southern California, 123,214; Kaiser Permanente Washington, 20,526; Regenstrief Institute, 21,476). The median duration of continuous enrollment was 90 (site medians 74-177) months. The mean age was 32 years, and the population was racially and ethnically diverse across the 4 sites. The mean body mass index was 28.5 kg/m2, and of the women included in the study, 10.0% had menorrhagia ≤12 months before insertion, 5.3% had uterine fibroids, and 10% were recent smokers; furthermore, among these women, 79.4% had levonorgestrel-releasing devices, and 19.5% had copper devices. Across sites, 97,824 women had an intrauterine device insertion at ≤52 weeks after delivery, of whom 94,817 (97%) had breastfeeding status at insertion determined; in addition, 228,834 women had intrauterine device insertion at >52 weeks after delivery or no evidence of a delivery in their health record. CONCLUSION: Combining retrospective data from multiple sites allowed for a large and diverse study population. Collaboration with clinicians in the study design and validation of outcomes ensured that the APEX-IUD study results reflect current United States clinical practice. Results from this study will provide clinicians with valuable information, based on real-world evidence, about risk factors for intrauterine device perforation and expulsion.
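
To make the analytic plan concrete, here is a hedged sketch of propensity-score overlap weighting followed by a weighted Cox model, in the spirit of the analysis described; the column names, confounder list, and file are hypothetical, and the published analysis is considerably more elaborate.

    # Hedged sketch: overlap weights + weighted Cox regression (lifelines).
    # Assumes confounders are already numeric; exposure here is breastfeeding at insertion.
    import pandas as pd
    from lifelines import CoxPHFitter
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("iud_cohort.csv")   # one row per insertion

    confounders = ["age", "bmi", "parity", "insertion_year"]
    ps_model = LogisticRegression(max_iter=1000).fit(df[confounders], df["breastfeeding"])
    df["pscore"] = ps_model.predict_proba(df[confounders])[:, 1]

    # Overlap weights: exposed women get (1 - PS), unexposed women get PS.
    df["ow"] = df["pscore"].where(df["breastfeeding"] == 0, 1 - df["pscore"])

    cph = CoxPHFitter()
    cph.fit(df[["followup_days", "perforation", "breastfeeding", "ow"]],
            duration_col="followup_days", event_col="perforation",
            weights_col="ow", robust=True)
    cph.print_summary()   # hazard ratio for the exposure, adjusted via weighting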


Subject(s)
Breast Feeding , Intrauterine Devices/adverse effects , Postpartum Period , Uterine Perforation/etiology , Adult , Clinical Protocols , Female , Follow-Up Studies , Humans , Intrauterine Device Expulsion , Logistic Models , Middle Aged , Practice Patterns, Physicians' , Research Design , Retrospective Studies , Risk Assessment , Risk Factors , Time Factors , United States/epidemiology , Uterine Perforation/epidemiology
8.
Chest ; 159(6): 2346-2355, 2021 06.
Article in English | MEDLINE | ID: mdl-33345951

ABSTRACT

BACKGROUND: Chronic cough (CC) of 8 weeks or more affects about 10% of adults and may lead to expensive treatments and reduced quality of life. Incomplete diagnostic coding complicates identifying CC in electronic health records (EHRs). Natural language processing (NLP) of EHR text could improve detection. RESEARCH QUESTION: Can NLP be used to identify cough in EHRs, and to characterize adults and encounters with CC? STUDY DESIGN AND METHODS: A Midwestern EHR system was used to identify patients aged 18 to 85 years during 2005 to 2015. NLP was used to evaluate text notes, except prescriptions and instructions, for mentions of cough. Two physicians and a biostatistician reviewed 12 sets of 50 encounters each, with iterative refinements, until the positive predictive value for cough encounters exceeded 90%. Cough encounters were then identified by NLP, International Classification of Diseases, 10th Revision codes, or medications. Three encounters spanning 56 to 120 days defined CC. Descriptive statistics summarized patients and encounters, including referrals. RESULTS: Optimizing NLP required identifying and eliminating cough denials, instructions, and historical references. Of 235,457 cough encounters, 23% had a relevant diagnostic code or medication. Applying chronicity to cough encounters identified 23,371 patients (61% women) with CC. NLP alone identified 74% of these patients; diagnoses or medications alone identified 15%. The positive predictive value of NLP in the reviewed sample was 97%. Referrals for cough occurred for 3.0% of patients; pulmonary medicine was the most common initial referral destination (64% of referrals). LIMITATIONS: Some patients with diagnosis codes for cough, encounters at intervals greater than 4 months, or multiple acute cough episodes may have been misclassified. INTERPRETATION: NLP successfully identified a large cohort with CC. Most patients were identified through NLP alone rather than diagnoses or medications. NLP improved detection of patients nearly sevenfold, addressing the gap in the ability to identify and characterize the CC disease burden. Nearly all cases appeared to be managed in primary care. Identifying these patients is important for characterizing treatment and unmet needs.
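
The production NLP pipeline was considerably more sophisticated, but a toy sketch of the two key ideas reported here, filtering out cough denials/instructions and then applying the 3-encounter, 56-120 day chronicity rule, might look like this (regexes and function names are illustrative assumptions):

    # Hedged sketch: rule-based cough mention detection plus the chronicity rule.
    import re
    from datetime import timedelta

    COUGH = re.compile(r"\bcough(ing|s)?\b", re.IGNORECASE)
    EXCLUDE = re.compile(r"\b(denies|no|without|if you develop)\b[^.]{0,40}\bcough",
                         re.IGNORECASE)

    def is_cough_encounter(note_text: str) -> bool:
        """True if the note mentions cough and is not an obvious denial/instruction."""
        return bool(COUGH.search(note_text)) and not EXCLUDE.search(note_text)

    def has_chronic_cough(encounters):
        """encounters: list of (date, note_text) tuples for one patient."""
        dates = sorted(d for d, text in encounters if is_cough_encounter(text))
        for i in range(len(dates)):
            for j in range(i + 2, len(dates)):          # at least three encounters
                if timedelta(days=56) <= dates[j] - dates[i] <= timedelta(days=120):
                    return True
        return False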


Subject(s)
Cough/diagnosis , Electronic Health Records , Pulmonary Medicine/statistics & numerical data , Adolescent , Adult , Aged , Aged, 80 and over , Chronic Disease , Female , Humans , Male , Middle Aged , United States , Young Adult
9.
Materials (Basel) ; 13(18)2020 Sep 10.
Article in English | MEDLINE | ID: mdl-32927905

ABSTRACT

Our cities, parks, beaches, and oceans have been contaminated for many years with millions of tonnes of unsightly and toxic cigarette butts (CBs). This study presents and discusses some of the results of an ongoing study on recycling CBs in fired-clay bricks. Energy savings: the energy value of CBs with remnant tobacco was found to be 16.5 MJ/kg. If just 2.5% of all bricks produced annually worldwide included 1% CB content, all of the CBs currently produced could be recycled in bricks, and it is estimated that global firing energy consumption could be reduced by approximately 20 billion MJ (megajoules). This is approximately equal to the energy used by one million homes in Victoria, Australia, every year. Bacteriological study: CBs were investigated for the presence of ten common bacteria in two pilot studies. Staphylococcus spp. and Pseudomonas aeruginosa were detected in fresh used CB samples, and Listeria spp. were detected in old used CB samples. All of the CB samples except the dried sample had significant counts of Bacillus spp. Some species of the bacteria detected in this study are pathogenic, and further confirmation and comprehensive microbiological study are needed in this area. Contact with naphthalene balls had a significant disinfecting effect on Bacillus spp. in CBs. The implementation procedure for recycling CBs in bricks, odour from volatile organic compound (VOC) emissions in CBs, sterilization methods, CB collection systems, and safety instructions were also investigated and are discussed. Proposal: considering the combined risks from the many highly toxic chemicals and possible pathogens in cigarette butts, it is proposed that littering of this waste anywhere in cities or the environment be strictly prohibited and that offenders be heavily fined.
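
As a hedged back-of-envelope check of the roughly 20 billion MJ figure, multiplying the reported energy value of CBs by an assumed worldwide CB mass of about 1.2 million tonnes per year (a commonly cited estimate that is not stated in this abstract) lands close to it:

    # Hedged arithmetic check; the 1.2 million tonne figure is an assumption.
    cb_mass_kg = 1.2e6 * 1000            # ~1.2 million tonnes of CBs per year -> kg
    energy_value_mj_per_kg = 16.5        # energy value of CBs, from the abstract
    savings_mj = cb_mass_kg * energy_value_mj_per_kg
    print(f"{savings_mj / 1e9:.1f} billion MJ")   # ~19.8, i.e. approximately 20 billion MJ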

10.
Asian Bioeth Rev ; 12(4): 529-537, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32837563

ABSTRACT

Malaysia confirmed its first four patients with COVID-19 on 25 January 2020. In the same week, the World Health Organization declared the outbreak a public health emergency of international concern. The pandemic has since challenged the ethics and practice of medicine. There is palpable tension from the conflict between public health initiatives and individuals' rights. Ensuring equitable care and distribution of health resources for patients with and without COVID-19 is a recurring ethical challenge for clinicians. Palliative care aims to mitigate suffering caused by life-limiting illness, and this crisis has created new awareness of, and urgency about, ensuring that it reaches all who need it. We share here the palliative care perspectives and ethical challenges encountered during the COVID-19 pandemic in Malaysia.

11.
Adv Ther ; 37(1): 552-565, 2020 01.
Article in English | MEDLINE | ID: mdl-31828610

ABSTRACT

INTRODUCTION: Most cases of small cell lung cancer (SCLC) are diagnosed at an advanced stage. The objective of this study was to investigate patient characteristics, survival, chemotherapy treatments, and health care use after a diagnosis of advanced SCLC in subjects enrolled in a health system network. METHODS: This was a retrospective cohort study of patients aged ≥ 18 years who either were diagnosed with stage III/IV SCLC or who progressed to advanced SCLC during the study period (2005-2015). Patients identified from the Indiana State Cancer Registry and the Indiana Network for Patient Care were followed from their advanced diagnosis index date until the earliest date of the last visit, death, or the end of the study period. Patient characteristics, survival, chemotherapy regimens, associated health care visits, and durations of treatment were reported. Time-to-event analyses were performed using the Kaplan-Meier method. RESULTS: A total of 498 patients with advanced SCLC were identified, of whom 429 were newly diagnosed with advanced disease and 69 progressed to advanced disease during the study period. Median survival from the index diagnosis date was 13.2 months. First-line (1L) chemotherapy was received by 464 (93.2%) patients, most commonly carboplatin/etoposide, received by 213 (45.9%) patients, followed by cisplatin/etoposide (20.7%). Ninety-five (20.5%) patients progressed to second-line (2L) chemotherapy, where topotecan monotherapy (20.0%) was the most common regimen, followed by carboplatin/etoposide (14.7%). Median survival was 10.1 months from 1L initiation and 7.7 months from 2L initiation. CONCLUSION: Patients in a regional health system network diagnosed with advanced SCLC were treated with chemotherapy regimens similar to those in earlier reports based on SEER-Medicare data. Survival of patients with advanced SCLC was poor, illustrating the lack of progress over several decades in the treatment of this lethal disease and highlighting the need for improved treatments.
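
A minimal lifelines sketch of the Kaplan-Meier estimate used for the survival figures above; the file and column names are hypothetical, not the registry extract itself.

    # Hedged sketch: Kaplan-Meier median survival from the advanced-diagnosis index date.
    import pandas as pd
    from lifelines import KaplanMeierFitter

    df = pd.read_csv("sclc_cohort.csv", parse_dates=["index_date", "end_date"])
    df["months"] = (df["end_date"] - df["index_date"]).dt.days / 30.44   # died = 1/0 event flag

    kmf = KaplanMeierFitter()
    kmf.fit(durations=df["months"], event_observed=df["died"])
    print("Median survival (months):", kmf.median_survival_time_)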


Subject(s)
Antineoplastic Combined Chemotherapy Protocols/administration & dosage , Lung Neoplasms/drug therapy , Small Cell Lung Carcinoma/drug therapy , Adult , Aged , Carboplatin/therapeutic use , Cisplatin/administration & dosage , Epirubicin/administration & dosage , Etoposide/administration & dosage , Female , Humans , Lung Neoplasms/mortality , Male , Medicare , Middle Aged , Retrospective Studies , Small Cell Lung Carcinoma/mortality , Survival Analysis , Treatment Outcome , United States
12.
Heart Lung ; 49(2): 112-116, 2020.
Article in English | MEDLINE | ID: mdl-31879037

ABSTRACT

BACKGROUND: In-hospital respiratory outcomes of non-surgical patients with undiagnosed obstructive sleep apnea (OSA), particularly those with significant comorbidities, are not well defined. Undiagnosed and untreated OSA may be associated with increased cardiopulmonary morbidity. STUDY OBJECTIVES: To evaluate respiratory failure outcomes in patients identified as at risk for OSA by the Berlin Questionnaire (BQ). METHODS: This was a retrospective study conducted using electronic health records at a large health system. The BQ was administered at admission to screen for OSA in medical-service patients under 80 years of age meeting the following health system criteria: (1) BMI greater than 30; (2) any of the following comorbid diagnoses: hypertension, heart failure, acute coronary syndrome, pulmonary hypertension, arrhythmia, cerebrovascular event/stroke, or diabetes. Patients with known OSA or undergoing surgery were excluded. Patients were classified as high-risk or low-risk for OSA based on the BQ score as follows: low-risk (0 or 1 category with a positive score on the BQ); high-risk (2 or more categories with a positive score on the BQ). The primary outcome was respiratory failure during the index hospital stay, defined by any of the following: orders for conventional ventilation or intubation; at least two instances of oxygen saturation less than 88% by pulse oximetry; at least two instances of respiratory rate over 30 breaths per minute; or any orders placed for non-invasive mechanical ventilation without a previous diagnosis of sleep apnea. Propensity scores were used to control for patient characteristics. RESULTS: Records of 15,253 patients were assessed. There were no significant differences in the composite outcome of respiratory failure by risk of OSA (high risk: 11%, low risk: 10%, p = 0.55). When respiratory failure was defined as need for ventilation, more patients in the low-risk group experienced invasive mechanical ventilation (high-risk: 1.8% vs. low-risk: 2.3%, p = 0.041). Mortality was lower in patients at high risk for OSA (0.86%) than in those at low risk (1.53%, p < 0.001). CONCLUSIONS: Further prospective studies are needed to understand the contribution of undiagnosed OSA to in-hospital respiratory outcomes.
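
A tiny sketch of the BQ risk rule stated above (two or more positive categories means high risk); the function is illustrative, not the health system's screening code.

    # Hedged sketch: classify OSA risk from the Berlin Questionnaire category results.
    def bq_risk(category_positive):
        """category_positive: one True/False per BQ category (three categories)."""
        return "high-risk" if sum(category_positive) >= 2 else "low-risk"

    print(bq_risk([True, True, False]))    # -> high-risk
    print(bq_risk([False, True, False]))   # -> low-risk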


Subject(s)
Respiration, Artificial , Respiratory Insufficiency/epidemiology , Sleep Apnea, Obstructive/complications , Adult , Aged , Comorbidity , Female , Hospitalization , Humans , Length of Stay , Male , Middle Aged , Oximetry , Retrospective Studies , Risk Factors , Stroke/epidemiology , Surveys and Questionnaires
13.
PLoS One ; 14(8): e0218759, 2019.
Article in English | MEDLINE | ID: mdl-31437170

ABSTRACT

BACKGROUND: Data on initiation and utilization of direct-acting antiviral therapies for hepatitis C virus infection in the United States are limited. This study evaluated treatment initiation, time to treatment, and real-world effectiveness of direct-acting antiviral therapy in individuals with hepatitis C virus infection treated during the first 2 years of availability of all-oral direct-acting antiviral therapies. METHODS: A retrospective cohort analysis was undertaken using electronic medical records and chart review abstraction of hepatitis C virus-infected individuals aged >18 years diagnosed with chronic hepatitis C virus infection between January 1, 2014, and December 31, 2015 from the Indiana University Health database. RESULTS: Eight hundred thirty people initiated direct-acting antiviral therapy during the 2-year observation window. The estimated incidence of treatment initiation was 8.8%±0.34% at the end of year 1 and 15.0%±0.5% at the end of year 2. Median time to initiating therapy was 300 days. Using a Cox regression analysis, positive predictors of treatment initiation included age (hazard ratio, 1.008), prior hepatitis C virus treatment (1.74), cirrhosis (2.64), and history of liver transplant (1.5). History of drug abuse (0.43), high baseline alanine aminotransferase levels (0.79), hepatitis B virus infection (0.41), and self-pay (0.39) were negatively associated with treatment initiation. In the evaluable population (n = 423), 83.9% (95% confidence interval, 80.1-87.3%) of people achieved sustained virologic response. CONCLUSION: In the early years of the direct-acting antiviral era, <10% of people diagnosed with chronic hepatitis C virus infection received direct-acting antiviral treatment; median time to treatment initiation was 300 days. Future analyses should evaluate time to treatment initiation among those with less advanced fibrosis.
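
For the headline effectiveness number, here is a hedged sketch of a binomial confidence interval for the sustained virologic response rate; the count of 355 responders is an assumption chosen to match the reported 83.9% of 423 evaluable patients, and the study's exact CI method may differ.

    # Hedged sketch: SVR proportion with a Wilson 95% confidence interval.
    from statsmodels.stats.proportion import proportion_confint

    responders, evaluable = 355, 423        # 355 is assumed to match ~83.9%
    low, high = proportion_confint(responders, evaluable, alpha=0.05, method="wilson")
    print(f"SVR {responders / evaluable:.1%} (95% CI {low:.1%}-{high:.1%})")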


Subject(s)
Antiviral Agents/therapeutic use , Hepatitis C, Chronic/drug therapy , Administration, Oral , Adult , Aged , Antiviral Agents/administration & dosage , Cohort Studies , Drug Combinations , Female , Hepatitis C, Chronic/virology , Humans , Indiana , Male , Middle Aged , Retrospective Studies , Sustained Virologic Response , Time-to-Treatment , United States
14.
Materials (Basel) ; 12(16)2019 Aug 07.
Article in English | MEDLINE | ID: mdl-31394815

ABSTRACT

Fibres have been used in construction materials for a very long time. Previous research and investigations into natural and synthetic fibres have shown promising results, as their presence provides significant benefits to the overall physical and mechanical properties of the composite material. Compared with traditional reinforcement, the proportion of fibre required is significantly lower, making fibre reinforcement efficient in terms of both energy and cost. More recently, waste fibres have been studied for their potential as reinforcement in construction materials. The build-up of waste materials all around the world is a known issue, as landfill space is limited, and the incineration process requires considerable energy and produces unwanted emissions. The utilisation of waste fibres in construction materials can alleviate these issues and promote environmentally friendly and sustainable solutions that work in the industry. This study reviews the types, properties, and applications of different fibres used in a wide range of materials in the construction industry, including concrete, asphalt concrete, soil, earth materials, blocks and bricks, composites, and other applications.

15.
J Gen Intern Med ; 34(12): 2804-2811, 2019 12.
Article in English | MEDLINE | ID: mdl-31367875

ABSTRACT

BACKGROUND: Cessation counseling and pharmacotherapy are recommended for hospitalized smokers, but better coordination between cessation counselors and providers might improve utilization of pharmacotherapy and enhance smoking cessation. OBJECTIVE: To compare smoking cessation counseling combined with care coordination post-hospitalization to counseling alone on uptake of pharmacotherapy and smoking cessation. DESIGN: Unblinded, randomized clinical trial. PARTICIPANTS: Hospitalized smokers referred from primarily rural hospitals. INTERVENTIONS: Counseling only (C) consisted of telephone counseling provided during the hospitalization and post-discharge. Counseling with care coordination (CCC) provided similar counseling supplemented by feedback to the smoker's health care team and help for the smoker in obtaining pharmacotherapy. At 6 months post-hospitalization, persistent smokers were re-engaged with either CCC or C. MAIN MEASURES: Utilization of pharmacotherapy and smoking cessation at 3, 6, and 12 months post-discharge. KEY RESULTS: Among 606 smokers randomized, 429 (70.8%) completed the 12-month assessment and 580 (95.7%) were included in the primary analysis. Use of any cessation pharmacotherapy between 0 and 6 months (55.2%) and between 6 and 12 months (47.1%) post-discharge was similar across treatment arms, though use of prescription-only pharmacotherapy between months 6-12 was significantly higher in the CCC group (30.1%) compared with the C group (18.6%) (RR, 1.61 (95% CI, 1.08, 2.41)). Self-reported abstinence rates of 26.2%, 20.3%, and 23.4% at months 3, 6, and 12, respectively, were comparable across the two treatment arms. Of those smoking at month 6, 12.5% reported abstinence at month 12. Validated smoking cessation at 12 months was 19.3% versus 16.9% in the CCC and C groups, respectively (RR, 1.13 (95% CI, 0.80, 1.61)). CONCLUSION: Supplemental care coordination, provided by counselors outside of the health care team, failed to improve smoking cessation beyond that achieved by cessation counseling alone. Re-engagement of smokers 6 months post-discharge can lead to new quitters, at which time care coordination might facilitate use of prescription medications. TRIAL REGISTRATION: NCT01063972.
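
A hedged sketch of how a risk ratio with a log-normal 95% CI of the kind reported here is computed; the per-arm counts below are invented for illustration (the abstract does not give the denominators for this outcome), so the interval will not exactly reproduce the published 1.08-2.41.

    # Hedged sketch: risk ratio and 95% CI for prescription pharmacotherapy use.
    import math

    a, n1 = 48, 160    # assumed events / n in the CCC arm (~30%)
    b, n2 = 30, 160    # assumed events / n in the C arm (~19%)

    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo, hi = rr * math.exp(-1.96 * se_log_rr), rr * math.exp(1.96 * se_log_rr)
    print(f"RR {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")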


Subject(s)
Continuity of Patient Care , Counseling/methods , Patient Discharge , Smoking Cessation/methods , Telemedicine/methods , Telephone , Adult , Continuity of Patient Care/trends , Counseling/trends , Female , Follow-Up Studies , Humans , Male , Middle Aged , Patient Discharge/trends , Telemedicine/trends , Tobacco Use Cessation Devices/trends
16.
Clin Epidemiol ; 11: 635-643, 2019.
Article in English | MEDLINE | ID: mdl-31413641

ABSTRACT

OBJECTIVE: To validate algorithms identifying uterine perforations and intrauterine device (IUD) expulsions and to ascertain availability of breastfeeding status at the time of IUD insertion. STUDY DESIGN AND SETTING: Four health care systems with electronic health records (EHRs) participated: Kaiser Permanente Northern California (KPNC), Kaiser Permanente Southern California (KPSC), Kaiser Permanente Washington (KPWA), and Regenstrief Institute (RI). The study included women ≤50 years of age with an IUD insertion. Site-specific algorithms using structured and unstructured data were developed and a sample validated by EHR review. Positive predictive values (PPVs) of the algorithms were calculated. Breastfeeding status was assessed in a random sample of 125 women at each research site with IUD placement within 52 weeks postpartum. RESULTS: The study population included 282,028 women with 325,582 IUD insertions. The PPVs for uterine perforation were KPNC 77%, KPSC 81%, KPWA 82%, and RI 47%; PPVs for IUD expulsion were KPNC 77%, KPSC 87%, KPWA 68%, and RI 37%. Across all research sites, breastfeeding status at the time of IUD insertion was determined for 94% of those sampled. CONCLUSIONS: Algorithms with a high PPV for uterine perforation and IUD expulsion were developed at 3 of the 4 research sites. Breastfeeding status at the time of IUD insertion could be determined at all research sites. Our findings suggest that a study to evaluate the associations of breastfeeding and postpartum IUD insertions with risk of uterine perforation and IUD expulsion can be successfully conducted retrospectively; however, automated application of algorithms must be supplemented with chart review for some outcomes at one research site due to low PPV.

17.
J Manag Care Spec Pharm ; 25(5): 544-554, 2019 May.
Article in English | MEDLINE | ID: mdl-31039062

ABSTRACT

BACKGROUND: Statins are effective in helping prevent cardiovascular disease (CVD). However, studies suggest that only 20%-64% of patients taking statins achieve reasonable low-density lipoprotein cholesterol (LDL-C) thresholds. On-treatment levels of LDL-C remain a key predictor of residual CVD event risk. OBJECTIVES: To (a) determine how many patients on statins achieved the therapeutic threshold of LDL-C < 100 mg per dL (general cohort) and < 70 mg per dL (secondary prevention cohort, or subcohort, with preexisting CVD); (b) estimate the number of potentially avoidable CVD events if the threshold were reached; and (c) forecast potential cost savings. METHODS: A retrospective, longitudinal cohort study using electronic health record data from the Indiana Network for Patient Care (INPC) was conducted. The INPC provides comprehensive information about patients in Indiana across health care organizations and care settings. Patients were aged > 45 years and seen between January 1, 2012, and October 31, 2016 (ensuring study of contemporary practice), were statin-naive for 12 months before the index date of initiating statin therapy, and had an LDL-C value recorded 6-18 months after the index date. Subsequent to descriptive cohort analysis, the theoretical CVD risk reduction achievable by reaching the threshold was calculated using Framingham Risk Score and Cholesterol Treatment Trialists' Collaboration formulas. Estimated potential cost savings used published first-year costs of CVD events, adjusted for inflation and discounted to the present day. RESULTS: Of the 89,267 patients initiating statins, 30,083 (33.7%) did not achieve the LDL-C threshold (subcohort: 58.1%). In both groups, not achieving the threshold was associated with patients who were female, black, and those who had reduced medication adherence. Higher levels of preventive aspirin use and antihypertensive treatment were associated with threshold achievement. In both cohorts, approximately 64% of patients above the threshold were within 30 mg per dL of the respective threshold. Adherence to statin therapy regimen, judged by a medication possession ratio of ≥ 80%, was 57.4% in the general cohort and 56.7% in the subcohort. Of the patients who adhered to therapy, 23.7% of the general cohort and 50.5% of the subcohort had LDL-C levels that did not meet the threshold. 10-year CVD event risk in the at-or-above threshold group was 22.78% (SD = 17.24%) in the general cohort and 29.56% (SD = 18.19%) in the subcohort. By reducing LDL-C to the threshold, a potential relative risk reduction of 14.8% in the general cohort could avoid 1,173 CVD events over 10 years (subcohort: 15.7% and 454 events). Given first-year inpatient and follow-up costs of $37,300 per CVD event, this risk reduction could save about $1,455 per patient treated to reach the threshold (subcohort: $1,902; 2017 U.S. dollars) over a 10-year period. CONCLUSIONS: Across multiple health care systems in Indiana, between 34% (general cohort) and 58% (secondary prevention cohort) of patients treated with statins did not achieve therapeutic LDL-C thresholds. Based on current CVD event risk and cost projections, such patients seem to be at increased risk and may represent an important and potentially preventable burden on health care costs. DISCLOSURES: Funding support for this study was provided by Merck (Kenilworth, NJ). Chase and Boggs are employed by Merck. Simpson is a consultant to Merck and Pfizer. The other authors have nothing to disclose.
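
A hedged back-of-envelope check of the cost figure for the general cohort, using only numbers quoted in the abstract; the published model also applied inflation adjustment and discounting, which this simple arithmetic ignores.

    # Hedged arithmetic check of the ~$1,455-per-patient savings estimate.
    events_avoided = 1173            # CVD events avoidable over 10 years (general cohort)
    cost_per_event = 37_300          # first-year inpatient + follow-up cost per event (USD)
    patients_above_threshold = 30_083

    savings_per_patient = events_avoided * cost_per_event / patients_above_threshold
    print(f"${savings_per_patient:,.0f} per patient treated to threshold")   # ~ $1,454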


Subject(s)
Cardiovascular Diseases/prevention & control , Cholesterol, LDL/blood , Health Services Needs and Demand/statistics & numerical data , Hydroxymethylglutaryl-CoA Reductase Inhibitors/administration & dosage , Hyperlipidemias/drug therapy , Aged , Cardiovascular Diseases/blood , Cardiovascular Diseases/economics , Cholesterol, LDL/drug effects , Cost Savings/statistics & numerical data , Cost of Illness , Cost-Benefit Analysis , Dose-Response Relationship, Drug , Female , Health Care Costs/statistics & numerical data , Humans , Hydroxymethylglutaryl-CoA Reductase Inhibitors/economics , Hyperlipidemias/blood , Hyperlipidemias/economics , Indiana , Longitudinal Studies , Male , Middle Aged , Retrospective Studies , Treatment Outcome
18.
BMC Pediatr ; 19(1): 174, 2019 05 29.
Article in English | MEDLINE | ID: mdl-31142302

ABSTRACT

BACKGROUND: Prolonged neonatal jaundice (PNNJ) is often caused by breast milk jaundice, but it could also point to other serious conditions (biliary atresia, congenital hypothyroidism). When babies with PNNJ receive a routine set of laboratory investigations to detect serious but uncommon conditions, there is always a tendency to over-investigate a large number of well, breastfed babies. A local unpublished survey in Perak state of Malaysia revealed that the diagnostic criteria and initial management of PNNJ were not standardized. This study aims to evaluate and improve the current management of PNNJ in the administrative region of Perak. METHODS: A 3-phase quasi-experimental community study was conducted from April 2012 to June 2013. Phase I was a cross-sectional study to review the current practice of PNNJ management. Phase II was an interventional phase involving the implementation of a new protocol. Phase III was a 6-month post-interventional audit. A registry of PNNJ was implemented to record the incidence rate. A self-reporting surveillance system was put in place to receive any reports of biliary atresia, urinary tract infection, or congenital hypothyroidism cases. RESULTS: In Phase I, 12 hospitals responded, and 199 case notes were reviewed. In Phase II, a new protocol was developed and implemented in all government health facilities in Perak. In Phase III, the 6-month post-intervention audit showed significant improvements in pre- versus post-intervention mean scores: history-taking scores (p < 0.001), family history details (p < 0.05), physical examination documentation (p < 0.001), and total investigations done per patient (from 9.01 to 5.81, p < 0.001). The total number of visits was reduced from 2.46 to 2.2 per patient. The incidence of PNNJ was found to be high (158 per 1000 live births). CONCLUSIONS: The new protocol standardized and improved the quality of care with better clinical assessment and a reduction in unnecessary laboratory investigations. TRIAL REGISTRATION: Research registration number: NMRR-12-105-11288.


Subject(s)
Clinical Audit , Clinical Protocols/standards , Disease Management , Jaundice, Neonatal , Quality Improvement , Algorithms , Biliary Atresia/complications , Biliary Atresia/diagnosis , Cross-Sectional Studies , Family Health , Humans , Infant, Newborn , Jaundice, Neonatal/diagnosis , Jaundice, Neonatal/etiology , Jaundice, Neonatal/therapy , Malaysia , Medical History Taking , Medical Records , Neonatal Screening/standards , Physical Examination , Practice Guidelines as Topic , Referral and Consultation/standards , Registries/statistics & numerical data
19.
Hosp Pract (1995) ; 47(1): 42-45, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30409047

ABSTRACT

BACKGROUND: Rapid response teams (RRTs) reduce mortality by intervening in the hours preceding arrest. Implementation of these teams varies across institutions. SETTING AND DESIGN: Our health-care system has two different RRT models at two hospitals: Hospital A does not utilize a proactive rounder while Hospital B does. We studied the patterns of RRT calls at each hospital, focusing on the differences between night and day and during nursing shift transitions. RESULTS: The presence of proactive surveillance appeared to be associated with an increased total number of RRT calls, with more than twice as many calls made at the smaller Hospital B as at Hospital A. Hospital B had more calls in the daytime compared to the nighttime. Both hospitals showed a surge during the night-to-day shift transition (7-8am) compared to the preceding nighttime. Hospital A additionally showed a surge in calls during the day-to-night shift transition (7-8pm) compared to the preceding daytime. CONCLUSIONS: Differences in the diurnal patterns of RRT activation exist between hospitals even within the same system. As a continuously learning system, each hospital should consider tracking these patterns to identify its unique vulnerabilities. More calls are noted between 7 and 8am compared to the overnight hours. This may represent the reestablishment of the 'afferent' arm of the RRT as the hospital returns to daytime staffing and activity. Factors that influence the impact of proactive rounding on RRT performance may deserve further study.


Subject(s)
Emergency Treatment/standards , Heart Arrest/therapy , Hospital Rapid Response Team/standards , Intensive Care Units/standards , Night Care/standards , Hospitalization/statistics & numerical data , Humans
20.
Int Arch Allergy Immunol ; 178(2): 201-210, 2019.
Article in English | MEDLINE | ID: mdl-30544116

ABSTRACT

BACKGROUND: Dermatophagoides pteronyssinus (DP) and Blomia tropicalis (BT) are the dominant house dust mites inducing allergic diseases in tropical climates. It is not known whether the efficacy of DP subcutaneous immunotherapy (SCIT) is similar in patients sensitized to DP alone or to both DP and BT. METHOD: Ninety-five children (5-17 years old) affected by asthma with rhinitis and sensitized to both DP and BT received 3 years of DP-SCIT. Clinical symptom and medication scores and serum specific IgE and IgG4 were evaluated during DP-SCIT. Patients were grouped based on DP and BT co-sensitization or cross-reactivity, according to positive or negative IgE to the BT major allergen (BTMA). RESULTS: After 3 years of DP-SCIT, all patients had significant reductions in symptoms and medication use. In all, 65% of the patients were free of asthma symptoms and medication use; in addition, 3% were free of rhinitis symptoms. FEV1 in all patients was greater than 95% of predicted. DP-SCIT induced significant increases in DP- and BT-specific IgG4. In 50% of patients, DP-specific IgG4 increased more than 67-fold. BT-specific IgG4 increased more than 2.5-fold. A moderate correlation (r = 0.48-0.61, p < 0.01) was found between specific IgE against DP and BT in the BTMA- group (n = 34) before and after DP-SCIT, whereas no correlation was found in the BTMA+ group (n = 61). The two BTMA groups responded similarly with regard to clinical improvement and increases in specific IgG4 to both DP and BT. No safety findings of concern were reported in either group. CONCLUSION: DP-SCIT may be of clinical benefit to patients with IgE sensitization to both DP and BT. DP-SCIT induces IgG4 antibodies that cross-react with BT allergens.


Subject(s)
Antigens, Dermatophagoides/immunology , Asthma/immunology , Asthma/therapy , Dermatophagoides pteronyssinus/immunology , Desensitization, Immunologic , Rhinitis, Allergic/immunology , Rhinitis, Allergic/therapy , Adolescent , Animals , Asthma/diagnosis , Child , Child, Preschool , Desensitization, Immunologic/adverse effects , Desensitization, Immunologic/methods , Female , Humans , Immunoassay , Immunoglobulin E/blood , Immunoglobulin E/immunology , Immunoglobulin G/blood , Immunoglobulin G/immunology , Male , Respiratory Function Tests , Rhinitis, Allergic/diagnosis