Results 1-20 of 77

1.
BMC Nurs ; 23(1): 143, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38429750

ABSTRACT

BACKGROUND: In low- and middle-income countries like Kenya, critical care facilities are limited, meaning acutely ill patients are managed in the general wards. Nurses in these wards are expected to detect and respond to patient deterioration to prevent cardiac arrest or death. This study examined nurses' vital signs documentation practices during clinical deterioration and explored factors influencing their ability to detect and respond to deterioration. METHODS: This convergent parallel mixed methods study was conducted in the general medical and surgical wards of three hospitals in Kenya's coastal region. Quantitative data on the extent to which nurses monitored and documented vital signs in the 24 h before a cardiac arrest (death) occurred were retrieved from patients' medical records. In-depth, semi-structured interviews were conducted with twenty-four purposefully drawn registered nurses working in the three hospitals' adult medical and surgical wards. RESULTS: This study reviewed 405 patient records and found that most vital signs documentation was done in the nursing notes rather than the vital signs observation chart. During the 24 h prior to death, respiratory rate was the least documented vital sign, appearing in only 1.2% of the records. Only a very small percentage of patients had any vital sign documented at all six time points, i.e. four-hourly. Thematic analysis of the interview data identified five broad themes related to detecting and responding promptly to deterioration: insufficient monitoring of vital signs, limited availability of equipment and supplies, staffing conditions and workload, lack of training and guidelines, and communication and teamwork constraints among healthcare workers. CONCLUSION: The study showed that nurses did not consistently monitor and record vital signs in the general wards. They also worked in suboptimal ward environments that did not support their ability to promptly detect and respond to clinical deterioration. The findings illustrate the importance of implementing standardised systems for patient assessment and alert mechanisms for deterioration response. Furthermore, creating a supportive work environment is imperative in empowering nurses to identify and respond to patient deterioration. Addressing these issues is not only beneficial for the nurses but, more importantly, for the well-being of the patients they serve.
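
The four-hourly completeness check behind these results reduces to a simple aggregation. A minimal pandas sketch, with hypothetical column names (not the study's actual dataset):

```python
# Sketch of a 4-hourly vital-signs documentation audit.
# Column names ('record_id', 'vital', 'hours_before_event') are hypothetical.
import pandas as pd

obs = pd.DataFrame({
    "record_id": [1, 1, 1, 2],
    "vital": ["resp_rate", "pulse", "resp_rate", "pulse"],
    "hours_before_event": [2, 6, 23, 3],
})

# Assign each observation to one of six 4-hour windows (0-3, 4-7, ..., 20-23).
obs["window"] = obs["hours_before_event"] // 4

# A vital is fully documented for a record if all six windows are covered;
# the mean over records gives the proportion with complete 4-hourly charting.
complete = obs.groupby(["record_id", "vital"])["window"].nunique().eq(6)
print(complete.groupby(level="vital").mean())
```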

2.
BMC Med Res Methodol ; 23(1): 8, 2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36631766

ABSTRACT

BACKGROUND: In the older general population, neurodegenerative diseases (NDs) are associated with increased disability and decreased physical and cognitive function. Detecting risk factors can help implement prevention measures. Deep neural networks (DNNs), a class of machine-learning algorithms, could be an alternative to Cox regression for tabular datasets with many predictive features. We aimed to compare the performance of different types of DNNs with regularized Cox proportional hazards models in predicting NDs in the older general population. METHODS: We performed a longitudinal analysis with participants of the English Longitudinal Study of Ageing. We included men and women with no NDs at baseline, aged 60 years and older, assessed every 2 years from 2004-2005 (wave 2) to 2016-2017 (wave 8). The features were a set of 91 epidemiological and clinical baseline variables. The outcome was new events of Parkinson's disease, Alzheimer's disease or dementia. After applying multiple imputation, we trained three DNN algorithms: Feedforward, TabTransformer, and Dense Convolutional (Densenet). In addition, we trained two algorithms based on Cox models: Elastic Net regularization (CoxEn) and selected features (CoxSf). RESULTS: 5433 participants were included at wave 2. During follow-up, 12.7% of participants developed NDs. Although all five models predicted ND events, discriminative ability was best with TabTransformer (Uno's C-statistic (coefficient (95% confidence interval)) 0.757 (0.702, 0.805)). TabTransformer also showed better time-dependent balanced accuracy (0.834 (0.779, 0.889)) and specificity (0.855 (0.773, 0.909)) than the other models. With CoxSf (hazard ratio (95% confidence interval)), age (10.0 (6.9, 14.7)), poor hearing (1.3 (1.1, 1.5)) and weight loss (1.3 (1.1, 1.6)) were associated with a higher ND risk. In contrast, executive function (0.3 (0.2, 0.6)), memory (0.0 (0.0, 0.1)), increased gait speed (0.2 (0.1, 0.4)), vigorous physical activity (0.7 (0.6, 0.9)) and higher BMI (0.4 (0.2, 0.8)) were associated with a lower ND risk. CONCLUSION: TabTransformer is promising for prediction of NDs with heterogeneous tabular datasets with numerous features, and it can handle censored data. However, Cox models perform well and are easier to interpret than DNNs, so they remain a good choice for predicting NDs.
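
The CoxEn-style baseline here (an elastic-net-penalized Cox model) is straightforward to sketch. Below is a minimal example using the lifelines library on synthetic data, not the ELSA variables; note lifelines reports Harrell's C, whereas the paper's Uno's C-statistic would need a censoring-weighted estimator such as scikit-survival's concordance_index_ipcw.

```python
# Elastic-net-regularized Cox PH model, in the spirit of the CoxEn baseline.
# Synthetic data; penalizer and l1_ratio values are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(70, 6, n),
    "gait_speed": rng.normal(0.9, 0.2, n),
    "bmi": rng.normal(27, 4, n),
})
df["T"] = rng.exponential(10, n)   # follow-up time (years)
df["E"] = rng.integers(0, 2, n)    # 1 = incident ND, 0 = censored

# l1_ratio mixes lasso (1.0) and ridge (0.0) penalties: elastic net.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()
print("Harrell's C:", cph.concordance_index_)
```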


Subject(s)
Neurodegenerative Diseases , Male , Humans , Female , Middle Aged , Aged , Cohort Studies , Longitudinal Studies , Neurodegenerative Diseases/diagnosis , Neurodegenerative Diseases/epidemiology , Machine Learning , Neural Networks, Computer
3.
BMC Med Res Methodol ; 23(1): 232, 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37833647

ABSTRACT

BACKGROUND: Growth studies rely on longitudinal measurements, typically represented as trajectories. However, anthropometry is prone to errors that can generate outliers. While various methods are available for detecting outlier measurements, a gold standard has yet to be identified, and there is no established method for outlying trajectories. Thus, outlier types and their effects on growth pattern detection still need to be investigated. This work aimed to assess the performance of six methods at detecting different types of outliers, propose two novel methods for outlier trajectory detection, and evaluate how outliers affect growth pattern detection. METHODS: We included 393 healthy infants from The Applied Research Group for Kids (TARGet Kids!) cohort and 1651 children with severe malnutrition from the co-trimoxazole prophylaxis clinical trial. We injected outliers of three types and six intensities and applied four outlier detection methods for measurements (model-based and World Health Organization cut-off-based) and two for trajectories. We also assessed growth pattern detection before and after outlier injection using time series clustering and latent class mixed models. RESULTS: Error type, intensity, and population affected method performance. Model-based outlier detection methods performed best for measurements, with precision between 5.72% and 99.89%, especially for low and moderate error intensities. The clustering-based outlier trajectory method had precision of 14.93-99.12%. Combining methods improved the detection rate for outlier measurements to 21.82%. Finally, when comparing growth groups with and without outliers, the outliers were shown to alter group membership by 57.9-79.04%. CONCLUSIONS: World Health Organization cut-off-based techniques performed well in a few very particular cases (extreme errors of high intensity), while model-based techniques performed well especially for moderate errors of low intensity. Clustering-based outlier trajectory detection performed exceptionally well across all types and intensities of errors, indicating a potential strategic change in how outliers in growth data are viewed. Finally, the importance of detecting outliers was shown, given its impact on child growth studies, as demonstrated by comparing results of growth group detection.
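
The WHO cut-off approach benchmarked here amounts to flagging anthropometric z-scores outside fixed limits. A minimal sketch, assuming z-scores are precomputed and using the commonly cited WHO flag limits (the study's exact thresholds may differ):

```python
# WHO-style biologically implausible value flags on anthropometric z-scores.
# Limits below are the commonly cited WHO flags; treat them as illustrative.
import pandas as pd

WHO_FLAGS = {"haz": (-6, 6), "waz": (-6, 5), "whz": (-5, 5)}

def flag_outliers(df: pd.DataFrame) -> pd.DataFrame:
    """Return a boolean frame marking measurements outside WHO flag limits."""
    flags = {}
    for col, (lo, hi) in WHO_FLAGS.items():
        flags[col] = (df[col] < lo) | (df[col] > hi)
    return pd.DataFrame(flags, index=df.index)

growth = pd.DataFrame({"haz": [-1.2, -6.8, 0.4],
                       "waz": [0.1, -2.0, 5.6],
                       "whz": [1.0, -0.5, -5.3]})
print(flag_outliers(growth))
```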


Subject(s)
Child Development , Research Design , Child , Humans , Cluster Analysis , Infant
4.
BMC Infect Dis ; 23(1): 362, 2023 May 30.
Article in English | MEDLINE | ID: mdl-37254064

ABSTRACT

BACKGROUND: Although tuberculosis (TB) patients coinfected with HIV are at risk of poor treatment outcomes, there is a paucity of data on changing trends in TB/HIV co-infection and treatment outcomes. This study aims to estimate the burden of TB/HIV co-infection over time, describe the treatment available to TB/HIV patients, and estimate the effect of TB/HIV co-infection on TB treatment outcomes. METHODS: This was a retrospective analysis of TB surveillance data from two counties in Kenya (Nyeri and Kilifi), 2012‒2020. All TB patients aged ≥ 18 years were included. The main exposure was HIV status, categorised as infected, negative, or unknown. World Health Organization TB treatment outcomes were explored: cured, treatment completed, treatment failure, defaulted/lost to follow-up (LTFU), died, and transferred out. Time at risk ran from the date of starting TB treatment to the date of the event or six months later, whichever came first, and Cox proportional hazards models with shared frailties were used to estimate the effects of TB/HIV co-infection on TB treatment outcomes. RESULTS: The study included 27,285 patients, median (IQR) age 37 (29‒49) years and 64% male. 23,986 (88%) were new TB cases and 91% were started on the 2RHZE/4RH anti-TB regimen. Overall, 7879 (29%, 95% CI 28‒30%) were HIV infected. The proportion of HIV-infected patients was 32% in 2012 and declined to 24% in 2020 (trend P-value = 0.01). Uptake of ART (95%) and cotrimoxazole prophylaxis (99%) was high. Overall, 84% of patients completed six months of TB treatment, 2084 (7.6%) died, 4.3% were LTFU, 0.9% failed treatment and 2.8% transferred out. HIV status was associated with lower odds of completing TB treatment: infected vs negative (aOR 0.56, 95% CI 0.52‒0.61) and unknown vs negative (aOR 0.57, 95% CI 0.44‒0.73). Both HIV infection and unknown HIV status were associated with a higher hazard of death (aHR 2.40 (95% CI 2.18‒2.63) and 1.93 (95% CI 1.44‒2.56), respectively) and of defaulting treatment/LTFU (aHR 1.16 (95% CI 1.01‒1.32) and 1.55 (95% CI 1.02‒2.35), respectively). HIV status had no effect on the hazards of transferring out or treatment failure. CONCLUSION: The overall burden of TB/HIV coinfection was within previous pooled estimates. Our findings support the need for systematic HIV testing, as those with unknown status had TB treatment outcomes similar to the HIV infected.
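
lifelines has no shared-frailty Cox model, so a Cox fit with cluster-robust standard errors is one pragmatic stand-in for the shared-frailty analysis described above. A sketch with hypothetical variable names on synthetic data:

```python
# Cox PH with cluster-robust variance as an approximation to the
# shared-frailty model described above. Column names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "hiv_infected": rng.integers(0, 2, n),
    "age": rng.normal(37, 12, n),
    "facility_id": rng.integers(0, 20, n),   # clustering unit
})
df["T"] = rng.exponential(6, n).clip(max=6)  # months on TB treatment
df["death"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
# cluster_col requests a robust sandwich variance grouped by facility;
# lifelines excludes this column from the regression covariates.
cph.fit(df, duration_col="T", event_col="death", cluster_col="facility_id")
cph.print_summary()
```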


Subject(s)
Coinfection , HIV Infections , Latent Tuberculosis , Tuberculosis , Humans , Male , Adult , Middle Aged , Female , HIV Infections/complications , HIV Infections/drug therapy , HIV Infections/epidemiology , Retrospective Studies , Longitudinal Studies , Coinfection/drug therapy , Coinfection/epidemiology , Coinfection/complications , Kenya/epidemiology , Antitubercular Agents/therapeutic use , Tuberculosis/complications , Tuberculosis/drug therapy , Tuberculosis/epidemiology , Treatment Outcome , Latent Tuberculosis/drug therapy
5.
Parasitol Res ; 122(3): 801-814, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36683088

ABSTRACT

Aedes aegypti is an important vector of several arboviruses, including dengue and chikungunya viruses. Accurate identification of larval habitats of Ae. aegypti is considered an essential step in targeted control. This study determined Ae. aegypti productivity in selected larval habitats in Msambweni, Kwale County, Kenya. Three sequential larval habitat surveys were conducted. The first survey was a habitat census (baseline), through which 83 representative larval habitats were identified and selected. The second and third surveys involved estimating the daily productivity of the 83 selected larval habitats for 30 consecutive days during a wet and a dry season, respectively. Of 664 larval habitats examined at baseline, 144 (21.7%) were found to be infested with Ae. aegypti larvae. At baseline, the majority (71%) of the pupae were collected from two of six larval habitat types: tires and pots. Multivariate analysis identified habitat type and the habitat being movable as the predictors of pupal abundance. During the 30-day daily pupal production surveys, only a few of the habitats harbored pupae persistently. Pupae were found in 28% and 12% of the larval habitats during the wet and dry seasons, respectively. In the wet season, drums, tires, and pots were identified as the key habitat types, accounting for 85% of all pupae sampled. Three habitats (all drums) accounted for 80% of all the pupae collected in the dry season. Predictors of pupal productivity in the wet season were habitat type, place (whether the habitat was located at the back or front of the house), habitat purpose (use of the water in the habitat), and source of water. Although the multivariate model for habitat type did not converge, habitat type and habitat size were the only significant predictors during the dry season. Drums, pots, and tires were the sources of more than 85% of Ae. aegypti pupae, reinforcing the "key container concept." Targeting these three types of habitats makes epidemiological sense, especially during the dry season.
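
The "key container" percentages above are a share-of-total aggregation over habitat types. A minimal sketch with invented counts:

```python
# Share of total pupal production by habitat type ("key container concept").
# Counts are invented for illustration, not the study's data.
import pandas as pd

pupae = pd.DataFrame({
    "habitat_type": ["drum", "tire", "pot", "bucket", "drum", "tire"],
    "pupae_count": [120, 45, 30, 5, 80, 20],
})

share = (pupae.groupby("habitat_type")["pupae_count"].sum()
              .sort_values(ascending=False))
share_pct = 100 * share / share.sum()
print(share_pct.round(1))
print("top-3 share:", share_pct.head(3).sum().round(1), "%")
```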


Subject(s)
Aedes , Dengue , Animals , Pupa , Larva , Kenya , Mosquito Vectors , Ecosystem , Seasons , Water
6.
BMC Med ; 19(1): 122, 2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34082778

ABSTRACT

BACKGROUND: Diagnosing bacterial meningitis is essential to optimise the type and duration of antimicrobial therapy and so limit mortality and sequelae. In sub-Saharan Africa, many public hospitals lack laboratory capacity and rely on clinical features to empirically treat or not treat meningitis. We investigated whether clinical features of bacterial meningitis identified prior to the introduction of conjugate vaccines still discriminate meningitis in children aged ≥60 days. METHODS: We conducted a retrospective cohort study to validate, after the introduction of conjugate vaccines, seven clinical features identified in 2002 (KCH-2002: bulging fontanel, neck stiffness, cyanosis, seizures outside the febrile convulsion age range, focal seizures, impaired consciousness, or fever without malaria parasitaemia) and the Integrated Management of Childhood Illness (IMCI) signs (neck stiffness, lethargy, impaired consciousness, or seizures), all assessed at admission, in discriminating bacterial meningitis. Children aged ≥60 days hospitalised between 2012 and 2016 at Kilifi County Hospital were included in this analysis. Meningitis was defined as a positive cerebrospinal fluid (CSF) culture, an organism observed on CSF microscopy, a positive CSF antigen test, leukocytes ≥50/µL, or a CSF-to-blood glucose ratio <0.1. RESULTS: Among 12,837 admissions, 98 (0.8%) had meningitis. The presence of KCH-2002 signs had a sensitivity of 86% (95% CI 77-92) and specificity of 38% (95% CI 37-38). Exclusion of 'fever without malaria parasitaemia' reduced sensitivity to 58% (95% CI 48-68) and increased specificity to 80% (95% CI 79-80). IMCI signs had a sensitivity of 80% (95% CI 70-87) and specificity of 62% (95% CI 61-63). CONCLUSIONS: A lower prevalence of bacterial meningitis and less typical signs than in 2002 explain the lower performance of the KCH-2002 signs. Clinicians and policymakers should be aware of the number of lumbar punctures (LPs) or empirical treatments needed for each case of meningitis. Establishing basic capacity for CSF analysis is essential to exclude bacterial meningitis in children with potential signs.
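
Validating a clinical rule against a reference case definition reduces to a 2x2 table. A sketch of sensitivity and specificity with Wilson 95% intervals, on invented counts (not the study's data):

```python
# Sensitivity/specificity of a clinical sign set against a reference
# case definition, with Wilson 95% CIs. Counts are invented.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 40, 10     # cases with / without the signs
tn, fp = 450, 500   # non-cases without / with the signs

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, method="wilson")
print(f"sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```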


Subject(s)
Child, Hospitalized , Meningitis, Bacterial , Child , Humans , Infant , Kenya/epidemiology , Meningitis, Bacterial/diagnosis , Meningitis, Bacterial/epidemiology , Retrospective Studies , Spinal Puncture
7.
BMC Med ; 19(1): 222, 2021 Sep 20.
Article in English | MEDLINE | ID: mdl-34538239

ABSTRACT

BACKGROUND: Despite adherence to WHO guidelines, inpatient mortality among sick children admitted to hospital with complicated severe acute malnutrition (SAM) remains unacceptably high. Several studies have examined risk factors present at admission for mortality. However, risks may evolve during admission with medical and nutritional treatment or deterioration. Currently, no specific guidance exists for assessing daily treatment response. This study aimed to determine the prognostic value of monitoring clinical signs on a daily basis for assessing mortality risk during hospitalization in children with SAM. METHODS: This is a secondary analysis of data from a randomized trial (NCT02246296) among 843 hospitalized children with SAM. Daily clinical signs were prospectively collected during ward rounds. Multivariable extended Cox regression using backward feature selection was performed to identify daily clinical warning signs (CWS) associated with time to death within the first 21 days of hospitalization. Predictive models were subsequently developed, and their prognostic performance evaluated using Harrell's concordance index (C-index) and time-dependent area under the curve (tAUC). RESULTS: The inpatient case fatality ratio was 16.3% (n=127). The following CWS assessed daily were found to be independent predictors of inpatient mortality: symptomatic hypoglycemia, reduced consciousness, chest indrawing, not being able to complete feeds, nutritional edema, diarrhea, and fever. Daily risk scores computed using these 7 CWS, together with MUAC<10.5cm at admission as an additional CWS, predicted the survival outcome of children with SAM with a C-index of 0.81 (95% CI 0.77-0.86). Moreover, counting signs among the top 5 CWS (reduced consciousness, symptomatic hypoglycemia, chest indrawing, not being able to complete feeds, and MUAC<10.5cm) provided a simpler tool with similar prognostic performance (C-index of 0.79; 95% CI 0.74-0.84). Having 1 or 2 of these CWS on any day during hospitalization was associated with a 3-fold or 11-fold increased mortality risk compared with no signs, respectively. CONCLUSIONS: This study provides evidence for structured monitoring of daily CWS as recommended clinical practice, as it improves prediction of inpatient mortality among sick children with complicated SAM. We propose a simple counting tool to guide healthcare workers in assessing treatment response for these children. TRIAL REGISTRATION: NCT02246296.
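
The proposed counting tool is deliberately simple: tally how many of the top-5 warning signs are present each day. A sketch (the escalation logic below is illustrative, not the study's calibrated model):

```python
# Daily counting tool over the five top clinical warning signs (CWS).
# The paper reports roughly 3-fold and 11-fold mortality risk for 1 and
# 2 signs versus none; the escalation message below is illustrative only.
TOP5_CWS = ("reduced_consciousness", "symptomatic_hypoglycemia",
            "chest_indrawing", "unable_to_complete_feeds",
            "muac_below_10_5cm")

def cws_count(day_assessment: dict) -> int:
    """Count how many of the five warning signs are present today."""
    return sum(bool(day_assessment.get(sign, False)) for sign in TOP5_CWS)

today = {"chest_indrawing": True, "unable_to_complete_feeds": True}
n = cws_count(today)
print(f"{n} warning sign(s) present today")
if n >= 1:
    print("escalate review: elevated inpatient mortality risk")
```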


Subject(s)
Malnutrition , Severe Acute Malnutrition , Child , Hospitalization , Humans , Infant , Inpatients , Risk Factors
8.
BMC Med Res Methodol ; 21(1): 89, 2021 Apr 27.
Article in English | MEDLINE | ID: mdl-33906605

ABSTRACT

BACKGROUND: Survival analysis methods (SAMs) are central to analysing time-to-event outcomes. Appropriate application and reporting of such methods are important to ensure correct interpretation of the data. In this study, we systematically review the application and reporting of SAMs in studies of tuberculosis (TB) patients in Africa. It is the first review to assess the application and reporting of SAMs in this context. METHODS: Systematic review of studies involving TB patients from Africa published in English between January 2010 and April 2020. Studies were eligible if they reported use of SAMs. Application and reporting of SAMs were evaluated based on seven author-defined criteria. RESULTS: Seventy-six studies were included, with patient numbers ranging from 56 to 182,890. Forty-three (57%) studies involved a statistician/epidemiologist. The number of published papers per year applying SAMs increased from two in 2010 to 18 in 2019 (P = 0.004). Sample size estimation was not reported by 67 (88%) studies. A total of 22 (29%) studies did not report summary follow-up time. The survival function was commonly presented using Kaplan-Meier survival curves (n = 51, 67%) and group comparisons were performed using log-rank tests (n = 44, 58%). Sixty-seven (91%), 3 (4.1%) and 4 (5.4%) studies reported Cox proportional hazards, competing risk and parametric survival regression models, respectively. A total of 37 (49%) studies involved hierarchically clustered data, of which 28 (76%) did not adjust for the clustering in the analysis. Reporting was adequate in 4.0%, 1.3% and 6.6% of studies for sample size estimation, plotting of survival curves and testing of the survival regression models' underlying assumptions, respectively. Forty-five (59%), 52 (68%) and 73 (96%) studies adequately reported comparison of survival curves, follow-up time and measures of effect, respectively. CONCLUSION: The quality of reporting of survival analyses remains inadequate despite their increasing application. Because similar reporting deficiencies may be common for other diseases in low- and middle-income countries, reporting guidelines, additional training, and more capacity building are needed, along with more vigilance by reviewers and journal editors.
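
The two elements most often reported in the reviewed studies, Kaplan-Meier curves and log-rank comparisons, take only a few lines with lifelines. A sketch on synthetic data:

```python
# Kaplan-Meier survival curves and a log-rank test, the two elements most
# often reported in the reviewed TB studies. Synthetic data for illustration.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
t_a, e_a = rng.exponential(12, 200), rng.integers(0, 2, 200)
t_b, e_b = rng.exponential(9, 200), rng.integers(0, 2, 200)

kmf = KaplanMeierFitter()
kmf.fit(t_a, event_observed=e_a, label="group A")
ax = kmf.plot_survival_function()
kmf.fit(t_b, event_observed=e_b, label="group B")
kmf.plot_survival_function(ax=ax)

res = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print("log-rank p =", res.p_value)
```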


Subject(s)
Tuberculosis , Africa/epidemiology , Humans , Kaplan-Meier Estimate , Sample Size , Survival Analysis , Tuberculosis/diagnosis , Tuberculosis/epidemiology
9.
Matern Child Nutr ; 16(2): e12913, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31756291

ABSTRACT

Hospital readmission is common among children with complicated severe acute malnutrition (cSAM) but not well characterised. Two distinct cSAM phenotypes, marasmus and kwashiorkor, exist, but their pathophysiology and whether the same phenotype persists at relapse are unclear. We aimed to test the association between cSAM phenotype at index admission and at readmission following recovery. We performed a secondary data analysis from a multicentre randomised trial in Kenya with 1-year active follow-up. The main outcome was cSAM phenotype upon hospital readmission. Among 1,704 HIV-negative children with cSAM discharged in the trial, 177 children contributed a total of 246 readmissions with cSAM. cSAM readmission was associated with age < 12 months (p = .005), but not with site, sex, season, or cSAM phenotype. Of these, 42 children contributed 44 readmissions with cSAM that occurred after a monthly visit at which SAM was confirmed absent (cSAM relapse). The cSAM phenotype was sustained during cSAM relapse. The adjusted odds ratio for presenting with kwashiorkor at readmission after kwashiorkor at index admission was 39.3 (95% confidence interval (CI) 2.69-1,326; p = .01), and for presenting with marasmus at readmission after kwashiorkor at index admission it was 0.02 (95% CI 0.001-0.037; p = .01). To validate this finding, we examined readmissions to Kilifi County Hospital, Kenya, occurring at least 2 months after an admission with cSAM. Among 2,412 children with cSAM discharged alive, there were 206 readmissions with cSAM. The phenotype at readmission was significantly influenced by the phenotype at index admission (p < .001). This is the first report describing the phenotype and rate of cSAM recurrence.
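
At its core, the phenotype-persistence result is an odds ratio from a 2x2 table of index versus readmission phenotype. A sketch with a Woolf (log) confidence interval on invented counts:

```python
# Odds ratio with Woolf 95% CI for phenotype persistence at readmission.
# Counts are invented, not the study's data.
import math

a, b = 18, 4    # kwashiorkor at index: kwashiorkor / marasmus at readmission
c, d = 6, 30    # marasmus at index:    kwashiorkor / marasmus at readmission

or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Woolf method
lo, hi = (math.exp(math.log(or_) + s * 1.96 * se) for s in (-1, 1))
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```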


Subject(s)
Child Nutrition Disorders/epidemiology , Hospitalization/statistics & numerical data , Patient Readmission/statistics & numerical data , Severe Acute Malnutrition/epidemiology , Age Factors , Child Nutrition Disorders/therapy , Child, Preschool , Cohort Studies , Female , Follow-Up Studies , Humans , Infant , Kenya/epidemiology , Male , Phenotype , Recurrence , Retrospective Studies , Severe Acute Malnutrition/therapy
10.
PLoS Med ; 16(2): e1002747, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30807589

ABSTRACT

BACKGROUND: Children with medically complicated severe acute malnutrition (SAM) have a high risk of inpatient mortality. Diarrhea, carbohydrate malabsorption, and refeeding syndrome may contribute to early mortality and delayed recovery. We tested the hypothesis that a lactose-free, low-carbohydrate F75 milk would limit these risks, thereby reducing the number of days in the stabilization phase. METHODS AND FINDINGS: In a multicenter double-blind trial, hospitalized severely malnourished children were randomized to receive standard formula (F75) or isocaloric modified F75 (mF75) without lactose and with reduced carbohydrate. The primary endpoint was time to stabilization, as defined by the World Health Organization (WHO), with intention-to-treat analysis. Secondary outcomes included in-hospital mortality, diarrhea, and biochemical features of malabsorption and refeeding syndrome. The trial was registered at clinicaltrials.gov (NCT02246296). Four hundred eighteen and 425 severely malnourished children were randomized to F75 and mF75, respectively, with 516 (61%) enrolled in Kenya and 327 (39%) in Malawi. Children with a median age of 16 months were enrolled between 4 December 2014 and 24 December 2015. One hundred ninety-four (46%) children assigned to F75 and 188 (44%) assigned to mF75 had diarrhea at admission. Median time to stabilization was 3 days (IQR 2-5 days), which was similar between randomized groups (0.23 [95% CI -0.13 to 0.60], P = 0.59). There was no evidence of effect modification by diarrhea at admission, age, edema, or HIV status. Thirty-six and 39 children died before stabilization in the F75 and mF75 arms, respectively (P = 0.84). Cumulative days with diarrhea (P = 0.27), enteral (P = 0.42) or intravenous (P = 0.19) fluids, other serious adverse events before stabilization, and serum and stool biochemistry at day 3 did not differ between groups. The main limitation was that the primary outcome of clinical stabilization was based on WHO guidelines, comprising clinical evidence of recovery from acute illness as well as metabolic stabilization evidenced by recovery of appetite. CONCLUSIONS: Empirically treating hospitalized severely malnourished children during the stabilization phase with a lactose-free, reduced-carbohydrate milk formula did not improve clinical outcomes. The biochemical analyses suggest that the lactose-free formulae may still exceed a carbohydrate load threshold for intestinal absorption, which may limit their usefulness in the context of complicated SAM. TRIAL REGISTRATION: ClinicalTrials.gov NCT02246296.
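
Comparing median time to stabilization between arms can be sketched with a bootstrap for the difference in medians. The example below uses synthetic, uncensored times; the trial's actual analysis would account for censoring, which this sketch deliberately ignores.

```python
# Bootstrap 95% CI for the difference in median time to stabilization.
# Synthetic, uncensored data for illustration only.
import numpy as np

rng = np.random.default_rng(3)
f75 = rng.gamma(2.0, 1.8, 418)    # days to stabilization, standard F75
mf75 = rng.gamma(2.0, 1.8, 425)   # days to stabilization, modified F75

def boot_diff_median(x, y, n_boot=5000):
    """Percentile bootstrap CI for median(x) - median(y)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (np.median(rng.choice(x, x.size)) -
                    np.median(rng.choice(y, y.size)))
    return np.percentile(diffs, [2.5, 97.5])

lo, hi = boot_diff_median(f75, mf75)
print(f"median difference 95% CI: ({lo:.2f}, {hi:.2f}) days")
```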


Subject(s)
Child, Hospitalized , Diet, Carbohydrate-Restricted/methods , Lactose , Milk , Severe Acute Malnutrition/diet therapy , Adolescent , Animals , Child , Child, Preschool , Double-Blind Method , Female , Humans , Infant , Male , Severe Acute Malnutrition/diagnosis