Results 1 - 20 of 851
1.
Proc Natl Acad Sci U S A ; 120(31): e2216021120, 2023 08.
Article in English | MEDLINE | ID: mdl-37490532

ABSTRACT

Wastewater monitoring has provided health officials with early warnings for new COVID-19 outbreaks, but to date, no approach has been validated to distinguish signal (sustained surges) from noise (background variability) in wastewater data to alert officials to the need for heightened public health response. We analyzed 62 wk of data from 19 sites participating in the North Carolina Wastewater Monitoring Network to characterize wastewater metrics around the Delta and Omicron surges. We found that wastewater data identified outbreaks 4 to 5 d before case data (reported on the earlier of the symptom start date or test collection date), on average. At most sites, correlations between wastewater and case data were similar regardless of how wastewater concentrations were normalized and whether calculated with county-level or sewershed-level cases, suggesting that officials may not need to geospatially align case data with sewershed boundaries to gain insights into disease transmission. Although wastewater trend lines captured clear differences in the Delta versus Omicron surge trajectories, no single wastewater metric (detectability, percent change, or flow-population normalized viral concentrations) reliably signaled when these surges started. After iteratively examining different combinations of these three metrics, we developed the Covid-SURGE (Signaling Unprecedented Rises in Groupwide Exposure) algorithm, which identifies unprecedented signals in the wastewater data. With a true positive rate of 82%, a false positive rate of 7%, and strong performance during both surges and in small and large sites, our algorithm provides public health officials with an automated way to flag community-level COVID-19 surges in real time.
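The abstract does not publish the Covid-SURGE decision rule itself, so the following is a hypothetical sketch of how its three metrics (detectability, percent change, and normalized concentration exceeding recent history) might be combined into a weekly surge flag. The function name, thresholds, and baseline window are illustrative assumptions, not the published algorithm.

```python
def flag_surge(concentrations, pct_change_threshold=50.0, baseline_weeks=8):
    """Return one boolean per week; True flags a candidate surge.

    concentrations: weekly flow/population-normalized viral concentrations,
    with None marking a non-detect.
    """
    flags = []
    for week, value in enumerate(concentrations):
        if value is None or week == 0:            # metric 1: detectability
            flags.append(False)
            continue
        prev = concentrations[week - 1]
        baseline = [v for v in concentrations[max(0, week - baseline_weeks):week]
                    if v is not None]
        if prev is None or prev == 0 or not baseline:
            flags.append(False)
            continue
        pct_change = 100.0 * (value - prev) / prev   # metric 2: percent change
        unprecedented = value > max(baseline)        # metric 3: above recent history
        flags.append(pct_change >= pct_change_threshold and unprecedented)
    return flags
```

For example, a week jumping from 12 to 30 after a stable stretch is flagged, while a steady decline never is.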


Subjects
COVID-19 , Humans , COVID-19/epidemiology , Wastewater , Algorithms , Benchmarking , Disease Outbreaks , RNA, Viral
2.
Am J Epidemiol ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38918039

ABSTRACT

There is a dearth of safety data on maternal outcomes after perinatal medication exposure. Data-mining for unexpected adverse event occurrence in existing datasets is a potentially useful approach. One method, the Poisson tree-based scan statistic (TBSS), assumes that the expected outcome counts, based on incidence of outcomes in the control group, are estimated without error. This assumption may be difficult to satisfy with a small control group. Our simulation study evaluated the effect of imprecise incidence proportions from the control group on TBSS' ability to identify maternal outcomes in pregnancy research. We simulated base case analyses with "true" expected incidence proportions and compared these to imprecise incidence proportions derived from sparse control samples. We varied parameters impacting Type I error and statistical power (exposure group size, outcome's incidence proportion, and effect size). We found that imprecise incidence proportions generated by a small control group resulted in inaccurate alerting, inflation of Type I error, and removal of very rare outcomes for TBSS analysis due to "zero" background counts. Ideally, the control size should be at least several times larger than the exposure size to limit the number of false positive alerts and retain statistical power for true alerts.
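A toy simulation (not the actual tree-based scan statistic) can illustrate the mechanism the study describes: estimating the background incidence from a small control group makes the Poisson expectation noisy, which inflates false alerts even when the exposure group has no true excess. All parameter values below are illustrative assumptions.

```python
import math
import random

def poisson_upper_tail(k, mu):
    """P(X >= k) for X ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(k))

def false_alert_rate(true_p, n_control, n_exposed, n_sims=1000, alpha=0.05, seed=7):
    """Fraction of simulations alerting although exposure carries no true excess."""
    rng = random.Random(seed)
    alerts = 0
    for _ in range(n_sims):
        # Background incidence estimated from a finite control sample:
        # this estimation error is the phenomenon under study.
        p_hat = sum(rng.random() < true_p for _ in range(n_control)) / n_control
        observed = sum(rng.random() < true_p for _ in range(n_exposed))
        mu = p_hat * n_exposed            # "expected" count treated as error-free
        if mu == 0:
            continue  # rare outcome drops out entirely ("zero" background count)
        if poisson_upper_tail(observed, mu) < alpha:
            alerts += 1
    return alerts / n_sims
```

Comparing a small control group against a much larger one at the same true incidence shows the inflation of the false alert rate directly.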

3.
Br J Haematol ; 204(5): 1660-1671, 2024 May.
Article in English | MEDLINE | ID: mdl-38419589

ABSTRACT

The supply of blood components and products in sufficient quantities is key to any effective health care system. This report describes the challenges faced by the English blood service, NHS Blood and Transplant (NHSBT), towards the end of the COVID-19 pandemic, which in October 2022 led to an Amber Alert being declared to hospitals indicating an impending blood shortage. The impact on the hospital transfusion services and clinical users is explained. The actions taken by NHSBT to mitigate the blood supply challenges and ensure equity of transfusion support for hospitals in England including revisions to the national blood shortage plans are described. This report focuses on the collaboration and communication between NHSBT, NHS England (NHSE), Department of Health and Social Care (DHSC), National Blood Transfusion Committee (NBTC), National Transfusion Laboratory Managers Advisory Group for NBTC (NTLM), National Transfusion Practitioners Network, the medical Royal Colleges and clinical colleagues across the NHS.


Assuntos
Doadores de Sangue , Transfusão de Sangue , COVID-19 , SARS-CoV-2 , Humanos , Inglaterra , COVID-19/epidemiologia , Transfusão de Sangue/estatística & dados numéricos , Doadores de Sangue/provisão & distribuição , Bancos de Sangue/provisão & distribuição , Medicina Estatal/organização & administração , Pandemias
4.
BMC Med ; 22(1): 408, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304846

ABSTRACT

BACKGROUND: Although electronic alerts are being increasingly implemented for patients with acute kidney injury (AKI), their effect remains unclear. Therefore, we conducted this meta-analysis to investigate their impact on the care and outcomes of AKI patients. METHODS: PubMed, Embase, Cochrane Library, and Clinical Trial Registries databases were systematically searched for relevant studies from inception to March 2024. Randomized controlled trials comparing electronic alerts with usual care in patients with AKI were selected. RESULTS: Six studies including 40,146 patients met the inclusion criteria. The pooled results showed that electronic alerts did not improve mortality rates (relative risk (RR) = 1.02, 95% confidence interval (CI) = 0.97-1.08, P = 0.44), reduce creatinine levels (mean difference (MD) = -0.21, 95% CI = -1.60-1.18, P = 0.77), or slow AKI progression (RR = 0.97, 95% CI = 0.90-1.04, P = 0.40). Instead, electronic alerts increased the odds of dialysis and of AKI documentation (RR = 1.14, 95% CI = 1.05-1.25, P = 0.002; RR = 1.21, 95% CI = 1.01-1.44, P = 0.04, respectively), although the trial sequential analysis (TSA) could not confirm these results. No differences were observed in other care-centered outcomes, including renal consults and investigations, between the alert and usual care groups. CONCLUSIONS: Electronic alerts increased rates of dialysis and AKI documentation in AKI patients, which likely reflected improved recognition and early intervention. However, these changes did not improve the survival or kidney function of AKI patients. The findings warrant further research to comprehensively evaluate the impact of electronic alerts.
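As a worked illustration of the pooling behind figures like RR = 1.02 (95% CI 0.97-1.08), here is a generic fixed-effect inverse-variance pooling of study relative risks. It is a textbook sketch under standard assumptions, not a reproduction of this meta-analysis.

```python
import math

def pooled_rr(studies):
    """studies: list of (rr, ci_lower, ci_upper). Returns (pooled RR, lo, hi)."""
    weights, log_rrs = [], []
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
        weights.append(1.0 / se ** 2)                    # inverse-variance weight
        log_rrs.append(math.log(rr))
    pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))
```

Pooling happens on the log scale because relative risks are multiplicative; a single study pools to itself, which is a quick sanity check.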


Assuntos
Injúria Renal Aguda , Injúria Renal Aguda/terapia , Humanos , Ensaios Clínicos Controlados Aleatórios como Assunto , Resultado do Tratamento
5.
J Pediatr ; 269: 113973, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38401785

ABSTRACT

OBJECTIVE: To test whether different clinical decision support tools increase clinician orders and patient completions relative to standard practice and each other. STUDY DESIGN: A pragmatic, patient-randomized clinical trial in the electronic health record was conducted between October 2019 and April 2020 at Geisinger Health System in Pennsylvania, with 4 arms: care gap (a passive listing recommending screening); alert (a panel promoting and enabling lipid screen orders); both; and a standard-practice control arm with no guideline-based notification. Data were analyzed for 13 346 9- to 11-year-old patients seen within Geisinger primary care, cardiology, urgent care, or nutrition clinics, or who had an endocrinology visit. Principal outcomes were lipid screening orders by clinicians and completions by patients within 1 week of orders. RESULTS: Patients in the active arms (care gap and/or alert) were significantly more likely (P < .05) than controls to have lipid screening tests ordered and completed, with odds ratios (ORs) ranging from 1.67 (95% CI 1.28-2.19) to 5.73 (95% CI 4.46-7.36) for orders and 1.54 (95% CI 1.04-2.27) to 2.90 (95% CI 2.02-4.15) for completions. Alerts, with or without care gaps listed, outperformed care gaps alone on orders, with ORs ranging from 2.92 (95% CI 2.32-3.66) to 3.43 (95% CI 2.73-4.29). CONCLUSIONS: Electronic alerts can increase lipid screening orders and completions, suggesting clinical decision support can improve guideline-concordant screening. The study also highlights electronic record-based patient randomization as a way to determine the relative effectiveness of support tools. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT04118348.


Assuntos
Sistemas de Apoio a Decisões Clínicas , Programas de Rastreamento , Criança , Feminino , Humanos , Masculino , Registros Eletrônicos de Saúde , Lipídeos/sangue , Programas de Rastreamento/métodos
6.
J Cardiovasc Electrophysiol ; 35(8): 1548-1558, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38818537

ABSTRACT

INTRODUCTION: During the SARS-CoV-2 COVID-19 pandemic, the global health system needed to review important processes involved in daily routines, such as outpatient activities within the hospital, including in-office follow-up visits for cardiac implantable electronic devices (CIEDs). The aim of this study is to describe our 3.5 years of real-world experience with fully remote CIED follow-up, evaluate the success rate of remote transmissions, and assess the adopted organizational model. METHODS: From April 2020 to November 2023, all patients with an activated and well-functioning remote monitoring (RM) system and automatic algorithms, such as autocapture and autosensing, underwent exclusively remote follow-up. Unscheduled in-office visits were prompted only by remote yellow or red alerts. Patients were divided into two groups based on the available technology: Manual Transmission System (MTS) and Automatic Transmission System (ATS). The ATS group, in addition to daily transmission of any yellow or red alerts, was checked at least every 15 days to confirm a valid connection, and an automatic transmission was scheduled once a year irrespective of any alerts. The MTS group provided a manual transmission every 6 months. RESULTS: One thousand nine hundred thirty-seven consecutive patients were included in the study. By the end of November 2023, a total of 1409 patients (1192 in the ATS and 217 in the MTS group) were still actively followed by our remote clinic (384 had died, 137 were dismissed, and 7 transferred). The overall transmission success rate with the adopted organizational model was 96.6% in the ATS group (connection index) and 87% in the MTS group. Conventional in-hospital follow-up visits decreased by 44%. Total clinic working time, the sum of time spent on in-hospital and remote follow-up, rose initially but then progressively fell to 25% below its initial level. All-cause mortality was 7.5% per year among remote follow-up patients and 8.3% among in-office patients (p = NS). In the ATS group, no device malfunction was reported to our remote clinic that had not already been detected through the appropriate alerts. CONCLUSIONS: The available technology makes it possible to move safely to a fully remote clinic without overwhelming the clinic workflow. With an appropriate organizational model, high transmission success rates can be maintained. Automatic transmissions allow more frequent monitoring of patients with CIEDs.


Assuntos
COVID-19 , Desfibriladores Implantáveis , Marca-Passo Artificial , Humanos , COVID-19/epidemiologia , Masculino , Feminino , Idoso , Modelos Organizacionais , Pessoa de Meia-Idade , Telemedicina/organização & administração , Fatores de Tempo , Tecnologia de Sensoriamento Remoto , Estudos Retrospectivos
7.
J Cardiovasc Electrophysiol ; 35(2): 341-345, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38164063

ABSTRACT

INTRODUCTION: The increasing use of insertable cardiac monitors (ICMs) for long-term continuous arrhythmia monitoring creates a high volume of transmissions and a significant workload for clinics. The ability to remotely reprogram device alert settings without in-office patient visits was recently introduced, but its impact on clinic workflow compared with the previous ICM iteration is unknown. METHODS: The aim of this real-world study was to evaluate the impact of device reprogramming capabilities on ICM alert burden and on clinic workflow. Deidentified data were obtained from US patients; a total of 19 525 patients receiving a LINQ II ICM were propensity score-matched with 19 525 patients implanted with the LINQ TruRhythm (TR) ICM based on age and reason for monitoring. RESULTS: After reprogramming, ICM alerts were reduced by 20.5% (p < .001). Compared with patients monitored with LINQ TR, patients with LINQ II had their device reprogrammed sooner after implant and more frequently during follow-up. Adoption of remote programming was projected to yield annual total clinic time savings of 211 h per 100 ICM patients managed. CONCLUSION: These data suggest that utilization of ICM alert reprogramming has increased with remote capabilities, which may reduce clinic and patient burden for ICM follow-up and free clinician time for other valuable patient care activities.


Assuntos
Arritmias Cardíacas , Eletrocardiografia Ambulatorial , Humanos , Arritmias Cardíacas/diagnóstico , Arritmias Cardíacas/terapia , Doença do Sistema de Condução Cardíaco
8.
Transfusion ; 64(4): 665-673, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38456520

ABSTRACT

BACKGROUND: Microbial screening of platelet concentrates (PC) with automated culture methods is widely implemented to reduce septic transfusion reactions. Herein, detection of bacterial contamination in PC was compared between units prepared in plasma and a mix of plasma and platelet additive solution (PAS) and between the BACT/ALERT 3D and next generation BACT/ALERT VIRTUO systems. STUDY DESIGN/METHODS: Double apheresis units were split into single units, diluted in either PAS (PAS-PC) or plasma (plasma-PC), and tested for in vitro quality and sterility prior to spiking with ~30 CFU/unit of Staphylococcus epidermidis, Staphylococcus aureus, Serratia marcescens, and Klebsiella pneumoniae or ~10 CFU/mL of Cutibacterium acnes. Spiked PC were sampled for BACT/ALERT testing (36 and 48 h post-spiking) and colony counts (24, 36, and 48 h post-spiking). Times to detection (TtoD) and bacterial loads were compared between PC products and BACT/ALERT systems (N = 3). RESULTS: Bacterial growth was similar in plasma-PC and PAS-PC. No significant differences in TtoD were observed between plasma-PC and PAS-PC at the 36-h sampling time except for S. epidermidis which grew faster in plasma-PC and C. acnes which was detected earlier in PAS-PC (p < .05). Detection of facultative bacteria was 1.3-2.2 h sooner in VIRTUO compared with 3D (p < .05) while TtoD for C. acnes was not significantly different between the two systems. DISCUSSION: Comparable bacterial detection was observed in plasma-PC and PAS-PC with PC sampling performed at 36-h post blood collection. PC sampling at ≤36 h could result in faster detection of facultative pathogenic organisms with the VIRTUO system and improved PC safety.


Assuntos
Remoção de Componentes Sanguíneos , Infecções Estafilocócicas , Humanos , Plaquetas/microbiologia , Preservação de Sangue/métodos , Staphylococcus epidermidis , Transfusão de Plaquetas
9.
Psychophysiology ; 61(5): e14508, 2024 May.
Article in English | MEDLINE | ID: mdl-38164815

ABSTRACT

In emergency medical services, paramedics are informed of an emergency call by a high-intensity acoustic alarm called the "call alert." Sudden, loud sounds like the call alert may cause a startle response and be experienced as aversive. Studies have identified an association between the call alert and adverse health effects in first responders; conceivably, these adverse health effects might be reduced by modifying the call alert to blunt its startling and aversive properties. Here, we assessed whether the call alert causes a startle response and whether its startling and aversive properties are reduced when the call alert is preceded by a weak acoustic "prepulse," a process referred to as "prepulse inhibition" (PPI). Paramedics (n = 50; 34M:13F:3 not reported; ages 20-68) were exposed to four call alerts (two with and two without a prepulse) in counterbalanced order. Responses were measured using electromyography (measuring blink amplitude), visual analog scales (quantifying perceived call alert intensity and aversiveness), and an electrocardiogram (assessing heart rate). Paramedics responded to the call alert with a startle reflex blink and an increased heart rate. Acoustic prepulses significantly reduced the amplitude of the call alert-induced startle blink, the perceived sound intensity, and the perceived "dislike" of the call alert. These findings confirm that the call alert is associated with an acoustic startle response in paramedics; adding a prepulse to the call alert can reduce its startling and aversive properties. Conceivably, such reductions might also diminish adverse health effects associated with the call alert in first responders.
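For readers unfamiliar with the metric, prepulse inhibition of the startle blink is conventionally summarized as the percent reduction in blink amplitude on prepulse trials relative to pulse-alone trials; a minimal sketch, with amplitudes in arbitrary EMG units:

```python
def percent_ppi(pulse_alone_amplitude, prepulse_amplitude):
    """Percent reduction of the startle blink when a prepulse precedes the pulse."""
    return 100.0 * (1.0 - prepulse_amplitude / pulse_alone_amplitude)
```

A blink of 150 units on prepulse trials against 200 units on pulse-alone trials gives 25% inhibition; identical amplitudes give 0%.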


Assuntos
Serviços Médicos de Emergência , Inibição Pré-Pulso , Humanos , Reflexo de Sobressalto/fisiologia , Estimulação Acústica , Eletromiografia
10.
Qual Life Res ; 33(7): 1985-1995, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38771558

ABSTRACT

PURPOSE: Clinical benefits result from electronic patient-reported outcome (ePRO) systems that enable remote symptom monitoring. Although clinically useful, real-time alert notifications for severe or worsening symptoms can overburden nurses. Thus, we aimed to algorithmically identify likely non-urgent alerts that could be suppressed. METHODS: We evaluated alerts from the PRO-TECT trial (Alliance AFT-39) in which oncology practices implemented remote symptom monitoring. Patients completed weekly at-home ePRO symptom surveys, and nurses received real-time alert notifications for severe or worsening symptoms. During parts of the trial, patients and nurses each indicated whether alerts were urgent or could wait until the next visit. We developed an algorithm for suppressing alerts based on patient assessment of urgency and model-based predictions of nurse assessment of urgency. RESULTS: 593 patients participated (median age = 64 years, 61% female, 80% white, 10% reported never using computers/tablets/smartphones). Patients completed 91% of expected weekly surveys. 34% of surveys generated an alert, and 59% of alerts prompted immediate nurse actions. Patients considered 10% of alerts urgent. Of the remaining cases, nurses considered alerts urgent more often when patients reported any worsening symptom compared to the prior week (33% of alerts with versus 26% without any worsening symptom, p = 0.009). The algorithm identified 38% of alerts as likely non-urgent that could be suppressed with acceptable discrimination (sensitivity = 80%, 95% CI [76%, 84%]; specificity = 52%, 95% CI [49%, 55%]). CONCLUSION: An algorithm can identify remote symptom monitoring alerts likely to be considered non-urgent by nurses, and may assist in fostering nurse acceptance and implementation feasibility of ePRO systems.
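A minimal sketch of the suppression logic described above: an alert is suppressed only when the patient marked it non-urgent and a model predicts the nurse would agree. The stand-in predictor simply echoes the abstract's 33% versus 26% nurse-urgency rates by worsening-symptom status; the threshold and function names are illustrative assumptions, not the trial's fitted model.

```python
def predicted_nurse_urgency(any_worsening_symptom):
    """Illustrative stand-in for the trial's model, echoing the abstract's
    33% vs 26% nurse-urgency rates by worsening-symptom status."""
    return 0.33 if any_worsening_symptom else 0.26

def should_suppress(patient_says_urgent, nurse_urgency_probability, threshold=0.3):
    """Suppress only alerts that are non-urgent by both assessments."""
    if patient_says_urgent:
        return False          # a patient-flagged alert is never suppressed
    return nurse_urgency_probability < threshold
```

With these illustrative numbers, alerts without any worsening symptom fall below the threshold and are suppressed, while those with a worsening symptom are kept.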


Assuntos
Algoritmos , Medidas de Resultados Relatados pelo Paciente , Humanos , Feminino , Masculino , Pessoa de Meia-Idade , Idoso , Neoplasias , Inquéritos e Questionários , Adulto
11.
Endocr Pract ; 30(7): 657-662, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38679387

ABSTRACT

OBJECTIVE: Guidelines recommend screening all individuals with resistant hypertension for primary aldosteronism (PA), but less than 2% are screened. We aimed to develop a noninterruptive Best Practice Alert (BPA) and assess whether its implementation in the electronic health record improved PA screening rates among individuals with apparent treatment-resistant hypertension (aTRH). METHODS: We implemented a noninterruptive BPA on 9/17/2022 at our ambulatory primary care, endocrinology, nephrology, and cardiology clinics. We assessed clinical parameters of people with aTRH before (9/17/2021-9/16/2022) and after (9/17/2022-9/16/2023) the BPA was implemented. The noninterruptive BPA, embedded with an order set, identified people with aTRH and recommended screening for PA if it was not previously performed. RESULTS: There were 10 944 and 11 463 people with aTRH who attended office visits during the 12 months before and after the BPA implementation, respectively. There were no statistically significant differences in median age (P = .096), sex (P = .577), race (P = .753), and ethnicity (P = .472) between the pre- and post-BPA implementation groups. There was a significant increase in PA screening orders placed (227 [2.1%] vs 476 [4.2%], P < .001) and PA screening labs performed (169 [1.5%] vs 382 [3.3%], P < .001) after BPA implementation. PA screening tests were positive in 26% (44/169) and 23% (88/382) of people in the pre- and post-BPA groups, respectively (P = .447). CONCLUSION: Implementation of a real-time electronic health record BPA doubled the screening rate for PA among people with aTRH; however, the overall screening rate remained low.


Assuntos
Hiperaldosteronismo , Hipertensão , Programas de Rastreamento , Humanos , Hiperaldosteronismo/diagnóstico , Hiperaldosteronismo/complicações , Hiperaldosteronismo/sangue , Feminino , Masculino , Pessoa de Meia-Idade , Hipertensão/diagnóstico , Programas de Rastreamento/métodos , Programas de Rastreamento/normas , Adulto , Guias de Prática Clínica como Assunto , Idoso , Registros Eletrônicos de Saúde
12.
Regul Toxicol Pharmacol ; 150: 105646, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38777300

ABSTRACT

Environmental exposures are a major cause of cancer, yet their carcinogenicity has not been fully evaluated; identifying potential carcinogens that have not yet been evaluated is therefore critical for safety. This study is the first to propose a weight of evidence (WoE) approach based on computational methods to prioritize potential carcinogens. Computational methods such as read-across, structural alerts, (quantitative) structure-activity relationships, and chemical-disease associations were evaluated and integrated. Four different WoE approaches were evaluated; compared to the best single method, the WoE-1 approach gained improvements of 0.21 and 0.39 in the area under the receiver operating characteristic curve (AUC) and the Matthews correlation coefficient (MCC), respectively. The evaluation of 681 environmental exposures beyond IARC Groups 1-2B prioritized 52 chemicals of high carcinogenic concern, of which 21 compounds were known or suspected carcinogens, and eight compounds were identified as potential carcinogens for the first time. This study illustrates that the WoE approach can effectively combine complementary computational methods and can be used to prioritize chemicals of carcinogenic concern.
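A hypothetical sketch of the weight-of-evidence idea: each computational line of evidence votes with a weight, and chemicals are ranked by the combined score. The weighting scheme, cutoff, and names below are illustrative, not the paper's WoE-1 formulation.

```python
def woe_score(evidence, weights=None):
    """evidence: dict mapping method name -> carcinogenicity call in [0, 1].
    Returns the weighted mean across lines of evidence."""
    weights = weights or {method: 1.0 for method in evidence}
    total = sum(weights[m] for m in evidence)
    return sum(weights[m] * call for m, call in evidence.items()) / total

def prioritize(chemicals, cutoff=0.7):
    """chemicals: dict name -> evidence dict. Returns names scoring at or
    above the cutoff, ranked by descending WoE score."""
    scores = {name: woe_score(ev) for name, ev in chemicals.items()}
    return sorted((n for n, s in scores.items() if s >= cutoff),
                  key=lambda n: -scores[n])
```

A chemical flagged strongly by several methods outranks one flagged weakly by a single method, which is the complementarity the abstract describes.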


Assuntos
Carcinógenos , Exposição Ambiental , Humanos , Carcinógenos/toxicidade , Exposição Ambiental/efeitos adversos , Relação Quantitativa Estrutura-Atividade , Medição de Risco , Animais
13.
BMC Health Serv Res ; 24(1): 204, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38355492

ABSTRACT

BACKGROUND: We identified that Stanford Health Care had a significant number of patients who, after discharge, were found by the utilization review committee not to meet the Centers for Medicare and Medicaid Services (CMS) 2-midnight benchmark for inpatient status. Some of the charges incurred during the care of these patients are written off and known as Medicare 1-day write-offs. This study aims to evaluate the use of a Best Practice Alert (BPA) feature in the electronic medical record (Epic) to ensure appropriate designation of a patient's hospitalization status as either inpatient or outpatient in accordance with the CMS 2-midnight length-of-stay benchmark, thereby reducing the number of associated write-offs. METHODS: We incorporated a BPA into the Epic electronic medical record (EMR) that would prompt the discharging provider and the case manager to review the patient's inpatient designation prior to discharge and change the designation to observation when deemed appropriate. Patients who met the inclusion criteria (Medicare fee-for-service insurance, inpatient length of stay (LOS) of less than 2 midnights, inpatient designation at the time of discharge, hospitalization at an acute level of care, and membership in one of 37 listed hospital services at the time of signing of the discharge order) were randomized to have the BPA either silent or active over a three-month period from July 18, 2019, to October 18, 2019. RESULTS: A total of 88 patients were included in this study: 40 in the control arm and 48 in the intervention arm. In the intervention arm, 8 patients (8/48, 16.7%) had an inpatient status designation despite potentially meeting Medicare guidelines for an observation stay, compared with 23 patients (23/40, 57.5%) in the control arm (p = 0.001). The estimated number of write-offs in the control arm was 17 (73.9% of 23 inpatient patients), while in the intervention arm it was 1 (12.5% of 8 inpatient patients), after accounting for patients who may have met inpatient criteria for other reasons based on case manager note review. CONCLUSION: This is, to our knowledge, the first time a BPA has been used in this manner to reduce the number of Medicare 1-day write-offs.


Assuntos
Medicare , Melhoria de Qualidade , Idoso , Humanos , Estados Unidos , Hospitalização , Tempo de Internação , Alta do Paciente
14.
J Dairy Sci ; 107(8): 6052-6064, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38554821

ABSTRACT

The use of sensor-based measures of rumination time as a parameter for early disease detection has received a lot of attention in scientific research. This study aimed to assess the accuracy of health alerts triggered by a sensor-based accelerometer system within 2 different management strategies on a commercial dairy farm. Multiparous Holstein cows were enrolled during the dry-off period and randomly allocated to conventional (CON) or sensor-based (SEN) management groups at calving. All cows were monitored for disorders for a minimum of 10 DIM following standardized operating procedures (SOP). The CON group (n = 199) followed an established monitoring protocol on the farm. The health alerts of this group were not available during the study but were later included in the analysis. The SEN group (n = 197) was only investigated when the sensor system triggered a health alert, and a more intensive monitoring approach was implemented according to the SOP. To analyze the efficiency of the health alerts in detecting disorders, the sensitivity (SE) and specificity (SP) of health alerts were determined for the CON group. In addition, all cows were divided into 3 subgroups based on their health status and the status of the health alerts in order to retrospectively compare the course of rumination time. Most health alerts (87%, n = 217) occurred on DIM 1. For the confirmation of diagnoses, health alerts showed SE and SP levels of 71% and 47% for CON cows. In SEN cows, SE of 71% and 75% and SP of 48% and 43% were found for the detection of ketosis and hypocalcemia, respectively. The rumination time of the subgroups was affected by DIM and the interaction between DIM and the status of health alert and health condition.
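The sensitivity (SE) and specificity (SP) figures above come from cross-tabulating health alerts against confirmed diagnoses; a minimal sketch of that computation, assuming one (alert, diagnosis) pair per cow:

```python
def alert_performance(records):
    """records: iterable of (alert_fired, disorder_diagnosed) boolean pairs.
    Returns (sensitivity, specificity)."""
    tp = fn = tn = fp = 0
    for alert, diseased in records:
        if diseased:
            tp, fn = tp + alert, fn + (not alert)
        else:
            tn, fp = tn + (not alert), fp + alert
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```

Sensitivity is the share of diseased cows that triggered an alert; specificity is the share of healthy cows that did not.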


Assuntos
Parto , Animais , Bovinos , Feminino , Doenças dos Bovinos/diagnóstico , Indústria de Laticínios/métodos , Gravidez , Lactação
15.
J Dairy Sci ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39245172

ABSTRACT

The growing use of automated systems in the dairy industry generates a vast amount of cow-level data daily, creating opportunities to use these data to support real-time decision-making. Currently, various commercial systems offer built-in alert algorithms to identify cows requiring attention. To our knowledge, no work has compared the predictive ability of models that account for herd-level variability against that of automated systems. Long Short-Term Memory (LSTM) models are machine learning models capable of learning temporal patterns and making predictions based on time series data. The objective of our study was to evaluate the ability of LSTM models to identify a health alert associated with a ketosis diagnosis (HAK) using deviations of daily milk yield, milk fat-to-protein ratio (FPR), number of successful milkings, rumination time, and activity index from the herd median by parity and DIM, considering various time series lengths and numbers of days before HAK. Additionally, we aimed to use an Explainable Artificial Intelligence method to understand the relationships between input variables and model outputs. Data on daily milk yield, milk FPR, number of successful milkings, rumination time, activity, and health events during 0 to 21 d in milk (DIM) were retrospectively obtained from a commercial Holstein dairy farm in northern Indiana from February 2020 to January 2023. A total of 1,743 cows were included in the analysis (non-HAK = 1,550; HAK = 193). Variables were transformed based on deviations from the herd median by parity and DIM. Six LSTM models were developed to identify HAK 1, 2, and 3 d before farm diagnosis using historic cow-level data with varying time series lengths. Model performance was assessed using repeated stratified 10-fold cross-validation with 20 repeats. The Shapley additive explanations (SHAP) framework was used for model explanation.
Model accuracy was 83, 74, and 70%, balanced error rate was 17 to 18, 26 to 28, and 34%, sensitivity was 81 to 83, 71 to 74, and 62%, specificity was 83, 74, and 71%, positive predictive value was 38, 25 to 27, and 21%, negative predictive value was 97 to 98, 95 to 96, and 94%, and area under the curve was 0.89 to 0.90, 0.80 to 0.81, and 0.72 for models identifying HAK 1, 2, and 3 d before diagnosis, respectively. Performance declined as the time interval between identification and farm diagnosis increased, and extending the time series length did not improve model performance. Model explanation revealed that cows with lower milk yield, number of successful milkings, rumination time, and activity, and higher milk FPR compared with herdmates of the same parity and DIM were more likely to be classified as HAK. Our results demonstrate the potential of LSTM models in identifying HAK using deviations of daily milk production variables, rumination time, and activity index from the herd median by parity and DIM. Future studies are needed to evaluate the performance of health alerts using LSTM models controlling for herd-specific metrics against commercial built-in algorithms in multiple farms and for other disorders.
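The core input transformation described above, expressing each cow-day measurement as a deviation from the herd median for the same parity and DIM, can be sketched as follows; the field names are illustrative, and the LSTM itself is omitted.

```python
from statistics import median

def herd_deviations(rows):
    """rows: dicts with keys 'cow', 'parity', 'dim', 'milk_kg'. Returns a dict
    mapping (cow, dim) -> deviation from the (parity, dim) herd median."""
    groups = {}
    for r in rows:
        groups.setdefault((r["parity"], r["dim"]), []).append(r["milk_kg"])
    medians = {key: median(values) for key, values in groups.items()}
    return {(r["cow"], r["dim"]): r["milk_kg"] - medians[(r["parity"], r["dim"])]
            for r in rows}
```

Median-centering within parity and DIM groups is what lets a model trained on one herd generalize its notion of "below herdmates" rather than absolute yield.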

16.
BMC Med Inform Decis Mak ; 24(1): 255, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285367

ABSTRACT

BACKGROUND: The aim was to develop and deploy an automated clinical alert system to enhance patient care and streamline healthcare operations. Structured and unstructured data from multiple sources are used to generate near real-time alerts for specific clinical scenarios, with the additional goal of improving clinical decision-making through accuracy and reliability. METHODS: The automated clinical alert system, named Smart Watchers, was developed using Apache NiFi and Python scripts to create flexible data processing pipelines and customisable clinical alerts. A comparative analysis between Smart Watchers and the legacy Elastic Watchers was conducted to evaluate performance metrics such as accuracy, reliability, and scalability. The evaluation involved measuring the time taken for manual data extraction through the electronic patient record (EPR) front-end and comparing it with the automated data extraction process using Smart Watchers. RESULTS: Deployment of Smart Watchers showed consistent time savings of 90% to 98.67% compared with manual data extraction through the EPR front-end. The results demonstrate the efficiency of Smart Watchers in automating data extraction and alert generation, significantly reducing the time required for these tasks, in a scalable manner, when compared with manual methods. CONCLUSIONS: The research underscores the utility of an automated clinical alert system, whose portability facilitated its use across multiple clinical settings. The successful implementation and positive impact of the system lay a foundation for future technological innovations in this rapidly evolving field.


Subjects
Electronic Health Records , Humans , Electronic Health Records/standards , Information Storage and Retrieval/methods
17.
J Korean Med Sci ; 39(4): e40, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38288541

ABSTRACT

BACKGROUND: To minimize the spread of seasonal influenza epidemics, the Korea Disease Control and Prevention Agency issues an influenza epidemic alert using an epidemic threshold formula based on the influenza-like illness (ILI) rate. However, the pattern of respiratory infectious diseases, including seasonal influenza, has changed unusually since the coronavirus disease 2019 (COVID-19) pandemic, making it increasingly important to detect the onset of an epidemic earlier than the existing alert system allows. Accordingly, this study proposes the Time Derivative (TD) method as a supplementary approach to the existing influenza alert system for the early detection of seasonal influenza epidemics. METHODS: The usefulness of the TD method as an early epidemic alert system was evaluated by applying it to weekly ILI rates from past epidemic seasons, from the 2013-2014 season through the 2022-2023 season, and comparing its alert times with those of the actual influenza epidemic alerts. RESULTS: Among the seasons with influenza epidemics (all except the 2020-2021 and 2021-2022 seasons), the TD method produced an early epidemic alert for every season except 2017-2018 and 2022-2023. CONCLUSION: The TD method is a time series analysis that enables early epidemic alerts in real time without relying on past epidemic information. It can serve as an alternative approach when setting an epidemic threshold from historical data is difficult, as when the typical seasonal epidemic pattern of respiratory viruses, including influenza, has changed following the COVID-19 pandemic.


Subjects
COVID-19 , Influenza A Virus, H1N1 Subtype , Influenza, Human , Virus Diseases , Humans , Influenza, Human/diagnosis , Influenza, Human/epidemiology , Influenza, Human/prevention & control , Pandemics , Virus Diseases/epidemiology , COVID-19/epidemiology , Seasons
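The abstract above does not reproduce the TD formula. One plausible reading of a time-derivative alert, sketched here under stated assumptions, is to flag an epidemic when the week-over-week rise in the ILI rate stays above a threshold for consecutive weeks; the threshold value and the consecutive-week rule below are illustrative, not the published method.

```python
# Illustrative time-derivative early-alert rule on weekly ILI rates.
# rise_threshold and consecutive are assumed parameters, not values
# from the study.

def td_alert(ili_rates, rise_threshold=0.5, consecutive=2):
    """Return the first week index at which the ILI rate has risen by
    more than rise_threshold per week for `consecutive` weeks in a row,
    or None if no sustained rise occurs."""
    run = 0
    for t in range(1, len(ili_rates)):
        if ili_rates[t] - ili_rates[t - 1] > rise_threshold:
            run += 1
            if run >= consecutive:
                return t
        else:
            run = 0
    return None
```

Because the rule uses only the current and preceding weeks, it needs no historical epidemic baseline, which matches the abstract's claim that the TD method works without past epidemic information.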
18.
Ren Fail ; 46(2): 2400552, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39252153

ABSTRACT

OBJECTIVES: To determine whether clinical decision support systems (CDSS) for acute kidney injury (AKI) would enhance patient outcomes in terms of mortality, dialysis, and acute kidney injury progression. METHODS: The systematic review and meta-analysis included the relevant randomized controlled trials (RCTs) retrieved from PubMed, EMBASE, Web of Science, Cochrane, and SCOPUS databases until 21st January 2024. The meta-analysis was performed using RevMan 5.4.1. PROSPERO ID: CRD42024517399. RESULTS: Our meta-analysis included ten RCTs with 18,355 patients. There was no significant difference between CDSS and usual care in all-cause mortality (RR: 1.00 with 95% CI [0.93, 1.07], p = 0.91) and renal replacement therapy (RR: 1.11 with 95% CI [0.99, 1.24], p = 0.07). However, CDSS was significantly associated with a decreased incidence of hyperkalemia (RR: 0.27 with 95% CI [0.10, 0.73], p = 0.01) and increased eGFR change (MD: 1.97 with 95% CI [0.47, 3.48], p = 0.01). CONCLUSIONS: CDSS were not associated with clinical benefit in patients with AKI, with no effect on all-cause mortality or the need for renal replacement therapy. However, CDSS reduced the incidence of hyperkalemia and improved eGFR change in AKI patients.


Subjects
Acute Kidney Injury , Decision Support Systems, Clinical , Randomized Controlled Trials as Topic , Humans , Acute Kidney Injury/therapy , Acute Kidney Injury/mortality , Renal Replacement Therapy/methods , Glomerular Filtration Rate , Hyperkalemia/etiology , Hyperkalemia/therapy , Hyperkalemia/mortality , Renal Dialysis
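Risk-ratio estimates with 95% confidence intervals, like those reported in the meta-analysis above, follow the standard log-RR normal approximation. A minimal sketch of that calculation; the event counts in the test are hypothetical, not data from the included trials:

```python
# Risk ratio (RR) with 95% CI via the standard log-RR normal
# approximation: SE(ln RR) = sqrt(1/a - 1/n1 + 1/b - 1/n2).
import math

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    """Return (RR, CI lower, CI upper) for a two-arm 2x2 comparison."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

A CI that excludes 1.0, as with the hyperkalemia result (RR 0.27, CI [0.10, 0.73]), corresponds to a statistically significant effect at the 5% level.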
19.
BMC Med Educ ; 24(1): 835, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095851

ABSTRACT

BACKGROUND: Medical universities often face the ongoing challenge of identifying and supporting at-risk students to enhance retention rates and academic success. This study explores a comprehensive analysis of perceived at-risk factors impeding academic and career aspirations and compares the perspectives of students and faculty in a medical school. METHODS: We focused on first and second-year medical (MBBS) students and teaching faculty in an international medical college offering a twinning program in India and Malaysia. Our investigation involved a comprehensive assessment of 25 at-risk factors through Likert-type questionnaires distributed to 250 MBBS students and 50 teaching faculty. RESULTS: Our findings revealed distinct disparities in perceptions between faculty and students regarding mean scores of classroom engagement (p = 0.017), procrastination (p = 0.001), unrealistic goals (p = 0.026), emotional/behavioral problems (p = 0.008), limited key social skills (p = 0.023), and a non-supportive home environment (p = 0.001). These differences underscore the need for increased communication and understanding between faculty and students to address these risk factors effectively. In contrast, no significant disparities were observed among faculty and students' perceptions concerning mean scores of various potential at-risk factors, including academic unpreparedness, cultural/language barriers, individual guidance/mentoring, limited communication skills, racism/sexism, self-confidence, self-respect, self-concept, motivation, underprepared for current academic challenges, self-discipline, negative social network, negative peer culture, transportation time, college financial cost, college evaluation culture bias, broken college relationships, teaching methodology, and learning disabilities. However, varying degrees of influence were perceived by faculty and students, suggesting the importance of individualized support. 
CONCLUSION: This study contributes to the academic community by shedding light on the multifaceted nature of at-risk factors influencing student success. It underscores the need for proactive measures and tailored interventions to enhance student retention in higher education and academic achievement, fostering a sustainable foundation for lifelong learning and growth.


Subjects
Education, Medical, Undergraduate , Students, Medical , Humans , Students, Medical/psychology , Female , Male , Risk Factors , Malaysia , India , Faculty, Medical/psychology , Academic Success , Surveys and Questionnaires , Young Adult , Adult
20.
J Formos Med Assoc ; 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38548525

ABSTRACT

BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic has significantly impacted the supply and transfusion of blood components. This study aims to evaluate changes in blood collection and transfusions during the period following the nationwide Level 3 alert (May-July 2021). METHODS: We retrieved usage data for red blood cells (RBC) from the Taiwan National Health Insurance (NHI) database 2019-2021. RESULTS: During the Level 3 alert period, approximately 85% of COVID-19 cases (11,455/13,624) were in Taipei. In Taipei, blood collection declined by 26.34% and RBC transfusions decreased by 17.14% compared to pre-pandemic levels. RBC usage decreased across all service types, with a significant decrease observed in hematology/oncology by 15.62% (-483 patients, -2,425 units). In non-Taipei regions, blood collection declined by 12.54%, rebounding around one month earlier than in Taipei. The decline in RBC transfusions occurred one month later than in Taipei, with a much lower magnitude (4.57%). Strain on the blood supply occurred in May and June in both Taipei and non-Taipei regions. Among 7,532 hospitalized COVID-19 patients, approximately 6.9% patients required a total of 1,873 RBC transfusions. The rapid increase in COVID-19 inpatients did not significantly increase the burden of blood demands. SUMMARY: During the Level 3 alert, the most significant decline in both RBC collection and transfusions was observed in Taipei. In non-Taipei regions, the decrease in RBC use was only marginal. Notably, there was a significant decrease in RBC use in hematology/oncology in Taipei. This study supports transfusion specialists in seeking efficient ways to address similar future challenges.
