1.
Malawi Med J ; 36(1): 7-12, 2024 Mar.
Article in English | MEDLINE | ID: mdl-39086370

ABSTRACT

Introduction: Ventriculoperitoneal shunt insertion (VPSI) and endoscopic third ventriculostomy (ETV) are the major procedures for treating pediatric hydrocephalus. However, studies comparing motor development following the two treatments are limited. Objective: We aimed to determine motor development outcomes in children with hydrocephalus up to 2 years of age after undergoing VPSI or ETV, to identify which surgical approach yields better motor outcomes and may be more effective for Malawian children. Methods: This was a cross-sectional study in which we recruited two groups of participants: one group consisted of children with hydrocephalus treated with a VP shunt, whilst the other group had been treated with ETV, at least 6 months prior to this study. Participants were identified from hospital records and were invited for neurodevelopmental assessment using the Malawi Developmental Assessment Tool (MDAT). Results: A total of 152 children treated for hydrocephalus within an 18-month period met the inclusion criteria. Upon follow-up and tracing, we recruited 25 children who had been treated: 12 had VPSI and 13 had ETV. MDAT revealed delays in both assessed motor domains: 19 of the 25 children had delayed gross motor development, whilst 16 of 25 had delayed fine motor development. There was no significant difference between the shunted and ETV groups. Conclusion: Children with hydrocephalus demonstrate delays in motor development six to 18 months after treatment with either VPSI or ETV. This may necessitate early and prolonged intensive rehabilitation to restore motor function after surgery. Long-term follow-up studies with bigger sample sizes are required to detect the effects of the two treatment approaches.


Subject(s)
Hydrocephalus , Ventriculoperitoneal Shunt , Ventriculostomy , Humans , Hydrocephalus/surgery , Ventriculoperitoneal Shunt/adverse effects , Cross-Sectional Studies , Ventriculostomy/methods , Male , Female , Infant , Child, Preschool , Treatment Outcome , Third Ventricle/surgery , Malawi , Child Development , Motor Skills
2.
Front Vet Sci ; 11: 1399040, 2024.
Article in English | MEDLINE | ID: mdl-39086769

ABSTRACT

EU Member States should ensure that they implement adequate health surveillance schemes in all aquaculture farming areas, as appropriate for the type of production. This study presents the results of applying the FAO's Surveillance Evaluation Tool (SET) to assess the Spanish disease surveillance system for farmed fish species; although the tool has been applied previously in livestock production, it is applied here to aquaculture for the first time. Overall, there were important score differences between trout and marine fish (seabass and seabream) surveillance, with higher scores for trout in the following areas: Institutional (70.8% versus 50.0%), Laboratory (91.7% versus 47.2%), and Surveillance activities (75.3% versus 61.3%). For the other categories, the values were lower and no significant differences were found. However, most surveillance efforts focused only on trout, for which there are EU- and WOAH-listed (notifiable) diseases. In contrast, for seabream and seabass, for which there are no listed diseases, it was considered that surveillance efforts should nevertheless be in place and should focus on the identification of abnormal mortalities and emerging diseases, for which there are as yet no standardized, harmonized methodologies.

3.
Cureus ; 16(7): e63581, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39087151

ABSTRACT

Our study aimed to establish the risk of selection bias in randomized controlled trials (RCT) that were overall rated as having "low bias" risk according to Cochrane's Risk of Bias, version 2 (RoB 2) tool. A systematic literature search of current systematic reviews of RCTs was conducted. From the identified reviews, RCTs with overall "high bias" and "low bias" RoB 2 risk ratings were extracted. All RCTs were statistically tested for selection bias risk. From the test results, true positive, true negative, false positive, or false negative ratings were established, and the false omission rate (FOR) with a 95% confidence interval (CI) was computed. Subgroup analysis was conducted by computing the negative likelihood ratio (-LR) concerning RoB 2 domain 1 ratings: bias arising from the randomization process. A total of 1070 published RCTs (median publication year: 2018; interquartile range: 2013-2020) were identified and tested. We found that 7.61% of all "low bias" (RoB 2)-rated RCTs were of high selection bias risk (FOR 7.61%; 95% CI: 6.31%-9.14%) and that the likelihood for high selection bias risk in "low bias" (RoB 2 domain 1)-rated RCTs was 6% higher than that for low selection bias risk (-LR: 1.06; 95% CI: 0.98-1.15). These findings raise issues about the validity of "low bias" risk ratings using Cochrane's RoB 2 tool as well as about the validity of some of the results from recently published RCTs. Our results also suggest that the likelihood of a "low bias" risk-rated body of clinical evidence being actually bias-free is low, and that generalization based on a limited, pre-specified set of appraisal criteria may not justify a high level of confidence that such evidence reflects the true treatment effect.
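For readers wanting to reproduce the arithmetic behind the abstract's headline numbers, the false omission rate and negative likelihood ratio are simple to compute. The sketch below uses made-up counts, not the study's data, and since the abstract does not state its confidence-interval method, a Wilson score interval is assumed:

```python
import math

def false_omission_rate(fn, tn, z=1.96):
    """FOR = FN / (FN + TN): the share of "low bias"-rated trials that
    actually carry high selection bias risk.  Returns the point estimate
    plus a Wilson score interval (z = 1.96 gives a 95% CI)."""
    n = fn + tn
    p = fn / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return p, centre - half, centre + half

def negative_likelihood_ratio(sensitivity, specificity):
    """-LR = (1 - sensitivity) / specificity; a value near 1 means a
    "low bias" rating barely shifts the post-test odds."""
    return (1 - sensitivity) / specificity

# Hypothetical counts: 7 truly high-risk trials among 100 rated "low bias".
point, ci_low, ci_high = false_omission_rate(7, 93)
```

A FOR near the abstract's 7.61% with a CI excluding zero is what drives the paper's concern about "low bias" ratings.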

4.
J Behav Addict ; 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39088282

ABSTRACT

Background: Gaming Disorder was included as an addictive disorder in the latest version of the International Classification of Diseases (ICD-11), published in 2022. The present study aimed to develop a screening tool for Gaming Disorder, the Gaming Disorder Identification Test (GADIT), based on the four ICD-11 diagnostic criteria: impaired control, increasing priority, continued gaming despite harm, and functional impairment. Method: We reviewed 297 questionnaire items from 48 existing gaming addiction scales and selected 68 items based on content validity. Two datasets were collected: 1) an online panel (N = 803) from Australia, the United States, the United Kingdom, and Canada, split into a development set (N = 589) and a validation set (N = 214); and 2) a university sample (N = 408) from Australia. Item response theory and confirmatory factor analyses were conducted to select eight items to form the GADIT. Validity was established by regressing the GADIT against known correlates of Gaming Disorder. Results: Confirmatory factor analyses of the GADIT showed good model fit (RMSEA = <0.001-0.108; CFI = 0.98-1.00), and internal consistency was excellent (Cronbach's alphas = 0.77-0.92). GADIT scores were strongly associated with the Internet Gaming Disorder Test (IGDT-10), and significantly associated with gaming intensity, eye fatigue, hand pain, wrist pain, back or neck pain, and excessive in-game purchases, in both the validation and the university sample datasets. Conclusion: The GADIT has strong psychometric properties in two independent samples from four English-speaking countries collected through different channels, and showed validity against existing scales and variables associated with Gaming Disorder. A cut-off of 5 is tentatively recommended for screening for Gaming Disorder.

6.
Public Health Nurs ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092927

ABSTRACT

The aim of this study was to adapt the National Aeronautics and Space Administration Task Load Index (NASA-TLX) to the home care setting and translate and validate it in Italian. An online questionnaire containing the Italian version of the NASA-TLX adapted to the home care setting was administered to home care nurses to measure workload. The Content Validity Index and exploratory and confirmatory factor analyses were used to measure the psychometric characteristics of the modified NASA-TLX. The modified Italian version, the NASA-TLX_HC-IT, showed good psychometric characteristics in measuring the workload of home care nurses, with excellent fit indices. The reliability, calculated with Cronbach's alpha, was 0.73, indicating adequate reliability. A negative correlation between workload and job satisfaction among home care nurses, as well as a positive association between high workload and intention to leave the workplace, was verified. The modified Italian version of the NASA-TLX_HC-IT was confirmed to be a valid and reliable instrument to measure workload in home care nursing. Furthermore, the correlation between workload and the intention to leave the workplace among home care nurses was an important result that community nursing managers should consider in order to prevent a shortage of home care nurses.

7.
R Soc Open Sci ; 11(6): 240161, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39092146

ABSTRACT

Capuchins can employ several strategies to deal with environmental challenges, such as using stone tools to access encapsulated resources. Nut-cracking is customary in several capuchin populations and can be affected by ecological and cultural factors; however, data on success and efficiency are only known for two wild populations. In this work, using camera traps, we assessed palm nut-cracking success and efficiency in two newly studied wild bearded capuchin populations (Sapajus libidinosus) and compared them with other sites. We tested the hypothesis that overall success and efficiency of nut-cracking would be similar between sites when processing similar resources, finding partial support for it. Although using hammerstones of different sizes, capuchins had a similar success frequency. However, efficiency (number of strikes to crack a nut) differed, with one population being more efficient. We also tested whether success and efficiency varied between sexes in adults, predicting that adult males would be more successful and efficient when cracking hard nuts. We found no differences between the sexes at one site but found sex differences at the other, although also for the low-resistance nut, which was unexpected. Our data add to the knowledge of capuchin nut-cracking behavioural flexibility, variation, and potential cultural traits.

8.
Clin Orthop Surg ; 16(4): 578-585, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39092296

ABSTRACT

Background: Morphological differences among various ethnicities can significantly impact the reliability of acromiohumeral interval (AHI) measurements in diagnosing massive rotator cuff tears. This variation raises questions about the generalizability of AHI studies conducted in Western populations to the Asian population. Consequently, the primary objective of this study was to develop a novel parameter that can enhance the diagnosis of massive rotator cuff tears, irrespective of morphometric disparities between individuals of different ethnic backgrounds. Methods: A 10-year retrospective analysis of shoulder arthroscopic surgery patients was conducted, categorizing them into 3 groups based on intraoperative findings: those without rotator cuff tears, those with non-massive tears, and those with massive tears. AHI-glenoid ratio (AHIGR) was measured by individuals with varying academic backgrounds, and its diagnostic performance was compared to AHI. Sensitivity, specificity, accuracy, and intra- and inter-rater reliability were evaluated. Results: AHIGR exhibited significantly improved sensitivity, specificity, and accuracy as a diagnostic tool for massive rotator cuff tears, compared to AHI. A proposed cut-off point of AHIGR ≤ 0.2 yielded comparable results to AHI < 7 mm. Intra- and inter-rater reliability was excellent among different observers. Conclusions: AHIGR emerges as a promising diagnostic tool for massive rotator cuff tears, offering improved sensitivity and specificity compared to AHI. Its reproducibility among diverse observers underscores its potential clinical utility. While further research with larger and more diverse patient cohorts is necessary, AHIGR offers significant potential as a reference for enhancing the assessment of massive rotator cuff tears.


Subject(s)
Rotator Cuff Injuries , Humans , Rotator Cuff Injuries/diagnostic imaging , Retrospective Studies , Male , Female , Middle Aged , Aged , Acromion/diagnostic imaging , Arthroscopy , Adult , Humerus/diagnostic imaging , Reproducibility of Results , Sensitivity and Specificity , Shoulder Joint/diagnostic imaging
9.
Cureus ; 16(7): e63704, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39092365

ABSTRACT

INTRODUCTION: The traditional approach to neonatal early-onset sepsis (NEOS) management, involving maternal risk factors and nonspecific neonatal symptoms, usually leads to unnecessary antibiotic use. This study addresses these concerns by evaluating the Kaiser sepsis calculator (KSC) in guiding antibiotic therapy for NEOS, especially in high-incidence facilities (over 4/1,000 live births), by comparing it against the 2010 Centers for Disease Control and Prevention (CDC) guidelines for neonates ≥34 weeks with suspected sepsis, thereby emphasizing its implications for personalized patient care. METHODS: This is a prospective observational study. All neonates of 34 gestational weeks or more, presenting with either maternal risk factors or sepsis symptoms within 12 hours of birth, were included in the study. The analysis focused on antibiotic recommendations by the 2010 CDC guidelines versus those by the KSC at presumed (0.5/1,000) and actual (16/1,000) sepsis incidence rates. RESULTS: NEOS was identified in 14 cases (14.1%). Compared to the 2010 CDC guidelines, at an incidence rate of 16 per 1,000, the KSC resulted in a significant 32.3% reduction in antibiotic treatment (74 cases (74.7%) vs. 42 cases (42.4%), respectively; p < 0.001). The calculator advised immediate antibiotic utilization for 13 out of 14 (92.9%) diagnosed cases, suggesting further evaluation for the remaining case. When a presumed incidence of 0.5/1,000 was applied, the KSC indicated antibiotics less frequently than when using the actual rate of 16/1,000 (p < 0.001), with two missed NEOS cases. CONCLUSIONS: Using the KSC led to a decrease of 32 cases (32.3%) in unnecessary antibiotic prescriptions compared to adherence to the 2010 CDC guidelines. However, setting a presumed incidence below the actual rate risked missing NEOS. The calculator was effective when actual local incidence rates were used, ensuring no missed cases needing antibiotics.

10.
Int Health ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39093915

ABSTRACT

BACKGROUND: Latent tuberculosis infection (LTBI) remains a significant challenge, as there is no gold standard diagnostic test. Current methods used for identifying LTBI are the interferon-γ release assay (IGRA), which is based on a blood test, and the tuberculin skin test (TST), which has low sensitivity. Both these tests are inadequate, primarily because they have limitations with the low bacterial burden characteristic of LTBI. This highlights the need for the development and adoption of more specific and accurate diagnostic tests to effectively identify LTBI. Herein we estimate the cost-effectiveness of the Cy-Tb test as compared with the TST for LTBI diagnosis. METHODS: An economic modelling study was conducted from a health system perspective using decision tree analysis, which is widely used for cost-effectiveness analysis with transition probabilities. Our goal was to estimate the incremental cost and number of TB cases prevented from LTBI using the Cy-Tb diagnostic test along with TB preventive therapy (TPT). Secondary data such as demographic characteristics, treatment outcomes, diagnostic test results, and cost data for the TST and Cy-Tb tests were collected from the published literature. The incremental cost-effectiveness ratio was calculated for the Cy-Tb test as compared with the TST. Uncertainty in the model was evaluated using one-way sensitivity analysis and probabilistic sensitivity analysis. RESULTS: The study findings indicate that diagnosing an additional LTBI case with the Cy-Tb test, and preventing a TB case by providing TPT prophylaxis, requires an additional cost of 18,658 Indian rupees (US$223.5). The probabilistic sensitivity analysis indicated that using the Cy-Tb test for diagnosing LTBI was cost-effective as compared with TST testing. If the cost of the Cy-Tb test is reduced, it becomes a cost-saving strategy.
CONCLUSIONS: The Cy-Tb test for diagnosing LTBI is cost-effective at the current price, and price negotiations could further change it into a cost-saving strategy. This finding emphasizes the need for healthcare providers and policymakers to consider implementing the Cy-Tb test to maximize economic benefits. Bulk procurements can also be considered to further reduce costs and increase savings.
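The incremental cost-effectiveness ratio reported above follows the standard definition (difference in cost divided by difference in effect). The minimal sketch below uses hypothetical costs and case counts, not the study's decision-tree inputs:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio:
    ICER = (cost_new - cost_old) / (effect_new - effect_old),
    e.g. rupees per additional TB case prevented."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: the new test strategy costs 1,500 vs. 500 per cohort
# and prevents 10 vs. 5 cases -> 200 currency units per extra case.
extra_cost_per_case = icer(1500.0, 500.0, 10, 5)
```

An intervention is then judged "cost-effective" by comparing this ratio against a willingness-to-pay threshold, which is what the probabilistic sensitivity analysis above varies.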

11.
J Neurosurg Spine ; : 1-11, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39094195

ABSTRACT

OBJECTIVE: The goal of this study was to compare rates of dysphagia and patient-reported outcomes (PROs) following long-segment (≥ 3 levels) anterior cervical spinal fusion (ACF) and posterior cervical spinal fusion (PCF) at 3 and 12 months postoperatively. PROs were also compared for patients with dysphagia versus those without dysphagia. METHODS: A prospectively collected quality improvement database was used to identify patients who had a long-segment cervical spinal fusion. Cohorts were divided into ACF and PCF groups. Eating Assessment Tool-10 scores and PROs were obtained for all patients preoperatively and at 3 and 12 months postoperatively to compare. Multivariate analysis was also performed to evaluate risk factors for dysphagia. RESULTS: A total of 132 patients met the inclusion criteria, 77 of whom had undergone ACF and 55 of whom had undergone PCF. Dysphagia rates between ACF and PCF cohorts were similar at baseline (13.0% vs 18.2%, p = 0.4). New-onset dysphagia rates were also comparable at 3-month follow-up (39.7% vs 23.1%, p = 0.08) and 12-month follow-up (32.6% vs 32.4%, p > 0.99). Patients who underwent PCF had worse Neck Disability Index (NDI) scores at 3 months than did patients with ACF (13.67 ± 9.49 vs 10.55 ± 6.24, respectively; p = 0.03). There were significantly higher NDI scores for patients with dysphagia at 3 months in both the ACF and PCF groups and at 12 months for those in the PCF group. Analogously, EuroQol-5 Dimensions scores were worse for patients with dysphagia; however, this was only significant for patients in the ACF group at 3 months. There were no significant risk factors for the development of dysphagia found on multivariate analysis. CONCLUSIONS: Similar rates and severity of dysphagia were seen following ACF and PCF at 3- and 12-month follow-up. This suggests that long-term dysphagia following cervical fusion surgery may be due to structural changes from the fusion rather than the surgical approach. 
However, the ACF cohort was significantly younger, and this may have partially accounted for the findings. PROs were also compared for patients with and without dysphagia, demonstrating worsened outcomes in some domains for patients who presented with dysphagia at 3- and 12-month follow-up. This suggests that dysphagia may be associated with a decreased quality of life after cervical fusion.

12.
JMIR Med Inform ; 12: e56361, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39093715

ABSTRACT

Background: Some research has already reported the diagnostic value of artificial intelligence (AI) in different endoscopy outcomes. However, the evidence is confusing and of varying quality. Objective: This review aimed to comprehensively evaluate the credibility of the evidence on AI's diagnostic accuracy in endoscopy. Methods: Before the study began, the protocol was registered on PROSPERO (CRD42023483073). First, 2 researchers searched PubMed, Web of Science, Embase, and the Cochrane Library using comprehensive search terms. Then, researchers screened the articles and extracted information. We used A Measurement Tool to Assess Systematic Reviews 2 (AMSTAR2) to evaluate the quality of the articles. When multiple studies addressed the same outcome, we chose the study with the higher-quality evaluation for further analysis. To ensure the reliability of the conclusions, we recalculated each outcome. Finally, the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach was used to evaluate the credibility of the outcomes. Results: A total of 21 studies were included for analysis. Through AMSTAR2, it was found that 8 research methodologies were of moderate quality, while the other studies were regarded as having low or critically low quality. The sensitivity and specificity of 17 different outcomes were analyzed. There were 4 studies on the esophagus, 4 on the stomach, and 4 on colorectal regions. Two studies were associated with capsule endoscopy, two with laryngoscopy, and one with ultrasonic endoscopy. In terms of sensitivity, gastroesophageal reflux disease had the highest accuracy rate, reaching 97%, while the invasion depth of colon neoplasia, at 71%, had the lowest. On the other hand, the specificity for colorectal cancer was the highest, reaching 98%, while gastrointestinal stromal tumors, at only 80%, had the lowest specificity.
The GRADE evaluation suggested that the reliability of most outcomes was low or very low. Conclusions: AI proved valuable in endoscopic diagnoses, especially of esophageal and colorectal diseases. These findings provide a theoretical basis for developing and evaluating AI-assisted systems, which are aimed at assisting endoscopists in carrying out examinations, leading to improved patient health outcomes. However, further high-quality research is needed to fully validate AI's effectiveness.

13.
Sci Rep ; 14(1): 17978, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095451

ABSTRACT

In this paper, a combination of theoretical modeling, finite element simulation, and experimental methods is employed to investigate the forming mechanism and evolutionary pattern of the stagnant region during mechanical scratching with a diamond wedge tool. The study is structured as follows: Firstly, a theoretical calculation model for the geometric parameters of the stagnant region on the formed groove surface is established based on the contact friction partition mechanism and slip-line field theory. The model indicates that the geometric parameters lB-sg, lV-sg, and ∆lsg of the stagnant region are determined by the length of the stagnant region lp-sg in the plastic flow plane and the transformation parameters. Secondly, the formation process of the stagnant region in mechanical scratching is investigated using an orthogonal cutting simulation model with a negative rake angle tool. The results reveal that the stagnant region is a plastic deformation region formed due to the geometrical characteristics of the negative front surface of the scratching tool and its excessive extrusion, which leads to the formation of adhesive friction within the material. Thirdly, the characteristics of the stagnant region are determined through scratching experiments. Compared to the material in the plastic flow region, the material within the stagnant region exhibits finer and denser microstructures, reduced surface hardening peaks and hardened layer depths, and significantly improved surface roughness. Finally, the evolutionary pattern of the stagnant region under the influence of scratching processing parameters is examined based on the theoretical calculation model of the geometric parameters and the scratching experiment. 
The findings indicate that as the wedge angle of the scratching tool decreases, the relief angle increases, the absolute value of the rotation angle around the Y-axis decreases, the scratching speed decreases, and the material's plastic adherence improves, the PI/k value decreases, the lp-sg value increases, and consequently, the geometric parameters lB-sg, lV-sg, and ∆lsg of the stagnant region on the formed groove surface also increase. The deviation analysis of the geometric parameters of the stagnant region reveals a consistent trend between the theoretical and experimental values of lV-sg and ∆lsg, with maximum deviations of 15 µm and 4.13%, respectively. This study provides theoretical and experimental evidence for the establishment of the theoretical model of the stagnant region in mechanical scratching, the analysis of its forming mechanism, and the control of the stagnant region geometric parameters on the formed groove surface.

14.
J Epidemiol ; 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098039

ABSTRACT

BACKGROUND: To date, a simple assessment tool to evaluate early low-nutrition risk in the general older population has not been available. This study aimed to create such a tool and examined its reliability and criterion-related validity. METHODS: 1,192 community-dwelling older adults with a mean age of 74.7 (SD 5.8) years responded to a questionnaire consisting of 48 (Hatoyama) or 34 items (Kusatsu), which have been reported to be associated with nutritional state in older people. Item analysis was conducted on the 34 common items, and items were selected based on the following criteria: adequate pass rates and discriminative power, no gender or regional differences, and a certain level of commonality based on factor analysis. Next, the factor structure of the candidate items was examined through exploratory factor analysis, and confirmatory factor analysis was conducted on the final scale structure. Furthermore, Spearman's partial rank correlation coefficients (sex- and age-adjusted) between the created index and important health indicators were examined to determine criterion-related validity. RESULTS: Finally, we obtained a semantically coherent structure of 4 factors (named health beliefs, dietary status, physical activity, and food-related quality of life) totaling 13 items; confirmatory factor analysis of the 4-factor solution yielded good model fit values: χ2(59) = 275.4 (p < 0.001), CFI = 0.930, and RMSEA = 0.056. The factor loadings ranged from 0.43 to 0.82, indicating adequate loadings. The reliability of the index was shown to be high by Good-Poor analysis and Cronbach's α. The index showed statistically significant correlations with all health indicators. CONCLUSIONS: We have developed a simple assessment tool to evaluate early low-nutrition risk in the general older population.
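As a reference for the reliability statistic used in this and several other abstracts here, Cronbach's α can be computed directly from item scores. The sketch below is a self-contained illustration on toy data, not the study's dataset:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale: items is a list of columns, one per
    item, each holding every respondent's score on that item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)          # number of items in the scale
    n = len(items[0])       # number of respondents

    def var(xs):            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

Two perfectly correlated items give α = 1.0; values around 0.7-0.9, as reported in these abstracts, indicate acceptable-to-good internal consistency.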

15.
Int Dent J ; 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098480

ABSTRACT

INTRODUCTION AND AIMS: In the face of escalating oral cancer rates, the application of large language models like Generative Pretrained Transformer (GPT)-4 presents a novel pathway for enhancing public awareness about prevention and early detection. This research aims to explore the capabilities and possibilities of GPT-4 in addressing open-ended inquiries in the field of oral cancer. METHODS: Using 60 questions accompanied by reference answers, covering concepts, causes, treatments, nutrition, and other aspects of oral cancer, evaluators from diverse backgrounds were selected to evaluate the capabilities of GPT-4 and a customized version. A P value under .05 was considered significant. RESULTS: Analysis revealed that GPT-4 and its adaptation notably excelled in answering open-ended questions, with the majority of responses receiving high scores. Although the median score for standard GPT-4 was marginally better, statistical tests showed no significant difference in capabilities between the two models (P > .05). Although evaluators' diverse backgrounds yielded statistically significant differences in ratings (P < .05), a post hoc test and comprehensive analysis demonstrated that both editions of GPT-4 showed equivalent capabilities in answering questions concerning oral cancer. CONCLUSIONS: GPT-4 has demonstrated its capability to furnish responses to open-ended inquiries concerning oral cancer. Utilizing this advanced technology to boost public awareness about oral cancer is viable and holds much potential. When unable to locate pertinent information, GPT-4 resorts to its inherent knowledge base or recommends consulting professionals after offering some basic information. Therefore, it cannot supplant the expertise and clinical judgment of surgical oncologists and could be used as an adjunctive evaluation tool.

16.
Eur Radiol Exp ; 8(1): 87, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090324

ABSTRACT

BACKGROUND: Severe chronic obstructive pulmonary disease (COPD) often results in hyperinflation and flattening of the diaphragm. An automated computed tomography (CT)-based tool for quantifying diaphragm configuration, a biomarker for COPD, was developed in-house and tested in a large cohort of COPD patients. METHODS: We used the LungQ platform to extract the lung-diaphragm intersection, as direct diaphragm segmentation is challenging. The tool computed the diaphragm index (surface area/projected surface area) as a measure of diaphragm configuration on inspiratory scans in a COPDGene subcohort. Visual inspection of 250 randomly selected segmentations served as a quality check. Associations between the diaphragm index, Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages, forced expiratory volume in 1 s (FEV1) % predicted, and CT-derived emphysema scores were explored using analysis of variance and Pearson correlation. RESULTS: The tool yielded incomplete segmentation in 9.2% (2.4% major defect, 6.8% minor defect) of 250 randomly selected cases. In 8431 COPDGene subjects (4240 healthy; 4191 COPD), the diaphragm index was increasingly lower with higher GOLD stages (never-smoked 1.83 ± 0.16; GOLD-0 1.79 ± 0.18; GOLD-1 1.71 ± 0.15; GOLD-2 1.67 ± 0.16; GOLD-3 1.58 ± 0.14; GOLD-4 1.54 ± 0.11) (p < 0.001). Associations were found between the diaphragm index and both FEV1% predicted (r = 0.44, p < 0.001) and emphysema score (r = -0.36, p < 0.001). CONCLUSION: We developed an automated tool to quantify the diaphragm configuration in chest CT. The diaphragm index was associated with COPD severity, FEV1% predicted, and emphysema score. RELEVANCE STATEMENT: Due to the hypothesized relationship between diaphragm dysfunction and diaphragm configuration in COPD patients, automatic quantification of diaphragm configuration may prove useful in evaluating treatment efficacy in terms of lung volume reduction.
KEY POINTS: Severe COPD changes diaphragm configuration to a flattened state, impeding function. An automated tool quantified diaphragm configuration on chest-CT providing a diaphragm index. The diaphragm index was correlated to COPD severity and may aid treatment assessment.
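The diaphragm index above is a surface-area-to-projected-area ratio: a perfectly flat diaphragm scores 1.0, and a domed one scores higher. The toy sketch below illustrates the same geometric quantity on a small height grid; it is not the LungQ implementation, just the underlying ratio:

```python
import math

def surface_index(z, dx=1.0, dy=1.0):
    """Ratio of 3-D surface area to projected (flat) area for a height
    grid z[i][j].  A flat surface gives 1.0; curvature raises the ratio,
    analogous to the diaphragm index on a segmented CT surface."""
    rows, cols = len(z), len(z[0])
    area = flat = 0.0
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Four corners of this grid cell, split into two triangles.
            p = [(i * dx, j * dy, z[i][j]),
                 ((i + 1) * dx, j * dy, z[i + 1][j]),
                 (i * dx, (j + 1) * dy, z[i][j + 1]),
                 ((i + 1) * dx, (j + 1) * dy, z[i + 1][j + 1])]
            for a, b, c in ((p[0], p[1], p[2]), (p[1], p[3], p[2])):
                u = [b[k] - a[k] for k in range(3)]
                v = [c[k] - a[k] for k in range(3)]
                # Triangle area = half the cross-product magnitude.
                cx = u[1] * v[2] - u[2] * v[1]
                cy = u[2] * v[0] - u[0] * v[2]
                cz = u[0] * v[1] - u[1] * v[0]
                area += 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)
            flat += dx * dy
    return area / flat
```

A plane tilted at 45° gives an index of √2 ≈ 1.41, which is roughly the range reported for the flattened diaphragms of severe COPD versus ~1.8 in never-smokers.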


Subject(s)
Diaphragm , Pulmonary Disease, Chronic Obstructive , Tomography, X-Ray Computed , Humans , Pulmonary Disease, Chronic Obstructive/diagnostic imaging , Pulmonary Disease, Chronic Obstructive/physiopathology , Diaphragm/diagnostic imaging , Diaphragm/physiopathology , Tomography, X-Ray Computed/methods , Male , Female , Middle Aged , Aged , Forced Expiratory Volume
17.
BMC Emerg Med ; 24(1): 147, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39148043

ABSTRACT

BACKGROUND: Emergency department (ED) crowding is a major patient safety concern and has a negative impact on healthcare systems and healthcare providers. We hypothesized that it would be feasible to control crowding by employing a multifaceted approach consisting of systematically fast-tracking patients who are mostly not in need of a hospital stay, as assessed by an initial nurse and treated by decision-competent physicians. METHODS: Data from 120,901 patients registered in a secondary care ED from the 4th quarter of 2021 to the 1st quarter of 2024 were drawn from the electronic health record's data warehouse using the SAP Web Intelligence tool and processed in the Python programming language. Crowding was compared before and after transformation of the ED from a uniform department into a high-flow (α) and a low-flow (β) section, with patient placement in gurneys/chairs or beds, respectively. Patients putatively not in need of hospitalization were identified by a nurse, placed in the α section, and assessed and treated by decision-competent physicians. The incidence of crowding, the number of patients admitted per day, and readmittances within 72 h following ED admission were determined before and after the changes. Values are numbers of patients, mean ± SEM, and mean differences with 95% CIs. Statistical significance was ascertained using Student's two-tailed t-test for unpaired values. RESULTS: Crowding above 130% amounted to 123.8 h before the ED changes and 19.3 h after, a difference of -104.6 ± 23.9 h (95% CI -159.9 to -49.3; Δ% -84; p = 0.002). The number of patients per day was unchanged, amounting to 135.8 and 133.5 patients/day (Δ = -1.7 patients; 95% CI -6.3 to 1.6; p = 0.21). There was no change in readmittances within 72 h, amounting to 9.0% versus 9.5% (Δ% = 0.5; 95% CI -0.007 to 1.0; p > 0.052).
CONCLUSION: It appears feasible to abate crowding with unchanged patient admission and without an increase in readmittances by fast-track assessment and treatment of patients who are not in need of hospitalization.
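The before/after comparison above rests on Student's two-tailed t-test for unpaired values. As a minimal, stdlib-only Python sketch of that statistic (the hour counts below are made-up illustrations, not the study's data):

```python
import statistics as st

def unpaired_t(a, b):
    """Student's two-sample t statistic with pooled (equal-variance) SD.

    Returns (t, mean_difference, standard_error).
    """
    na, nb = len(a), len(b)
    diff = st.mean(a) - st.mean(b)
    # Pooled variance weights each sample's variance by its degrees of freedom.
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5
    return diff / se, diff, se

# Hypothetical quarterly crowding hours, before vs after the reorganisation:
t, diff, se = unpaired_t([120.0, 128.0, 124.0], [18.0, 21.0, 19.0])
```

A 95% CI for the mean difference, as reported in the abstract, would then be diff ± t_crit × se, with t_crit taken from the t-distribution at na + nb - 2 degrees of freedom (in practice via a stats library rather than by hand).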


Subject(s)
Crowding , Emergency Service, Hospital , Humans , Male , Female , Middle Aged , Adult , Aged , Hospitalization
18.
Ergonomics ; : 1-13, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39154216

ABSTRACT

This study proposes a generic approach for creating human factors-based assessment tools to enhance operational system quality by reducing errors. The approach was driven by experiences and lessons learned in creating the warehouse error prevention (WEP) tool and other system engineering tools. The generic approach consists of 1) identifying tool objectives, 2) identifying system failure modes, 3) specifying design-related quality risk factors for each failure mode, 4) designing the tool, 5) conducting user evaluations, and 6) validating the tool. The WEP tool exemplifies this approach and identifies human factors related to design flaws associated with quality risk factors in warehouse operations. The WEP tool can be used at the initial stage of design or later for process improvement and training. While this process can be adapted for various contexts, further study is necessary to support the teams in creating tools to identify design-related human factors contributing to quality issues.


This paper describes a generic approach to creating human factors-based quality assessment tools. The approach is illustrated with the Warehouse Error Prevention (WEP) tool, which is designed to help users identify HF-related quality risk factors in warehouse system designs (available for free: Setayesh et al. 2022b).

19.
Cureus ; 16(7): e64734, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39156261

ABSTRACT

Background Dengue fever poses a significant health burden globally, particularly in tropical and subtropical regions. Early diagnosis and effective management are crucial in reducing morbidity and mortality associated with the disease. Bedside abdominal ultrasound has emerged as a promising tool for assessing dengue patients, providing real-time imaging of abdominal organs, and aiding clinical decision-making. Materials and methods This cross-sectional study was conducted on 55 adult patients presenting to the emergency department of Manipal Hospital, Bengaluru, from March 2017 to March 2018. Adult patients presenting with signs and symptoms suggestive of dengue fever were included. Clinical data, laboratory investigations, and bedside abdominal ultrasound findings were systematically recorded and analyzed using appropriate statistical methods. Results Descriptive statistics revealed characteristic clinical measurements and symptom ratings observed in dengue fever patients. Frequency distributions highlighted common symptoms encountered, while statistical analyses demonstrated significant associations between ultrasonic parameters, disease severity, and outcomes. The study found notable correlations between ultrasonic findings and dengue severity levels, emphasizing the potential of bedside ultrasound as a diagnostic and prognostic tool. Conclusion Bedside abdominal ultrasound shows promise as a valuable adjunctive tool in assessing dengue fever patients. The significant associations between ultrasonic parameters and disease severity suggest its utility in risk stratification and guiding clinical management decisions. Incorporating bedside ultrasound into routine practice may improve patient care and outcomes in dengue fever management. Further research is warranted to validate these findings and explore additional bedside ultrasound applications in dengue fever diagnosis and prognosis.

20.
J Med Internet Res ; 26: e47733, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39159448

ABSTRACT

BACKGROUND: Previous studies have demonstrated telemedicine to be an effective tool to complement rheumatology care and address workforce shortage. With the COVID-19 outbreak, telemedicine experienced a massive upswing. An earlier analysis revealed that the motivation of patients with rheumatic and musculoskeletal diseases to use telemedicine is closely connected to their disease. It remains unclear which factors are associated with patients' motivation to use telemedicine in certain rheumatic and musculoskeletal diseases groups, such as rheumatoid arthritis (RA). OBJECTIVE: This study aims to identify factors that determine the willingness to try telemedicine among patients diagnosed with RA. METHODS: We conducted a secondary analysis of data from a German nationwide cross-sectional survey among patients with RA. Bayesian univariate logistic regression analysis was applied to the data to determine which factors were associated with willingness to try telemedicine. Predictor variables (covariates) studied individually included sociodemographic factors (eg, age, sex) and health characteristics (eg, health status). All the variables positively and negatively associated with willingness to try telemedicine in the univariate analyses were then considered for Bayesian model averaging analysis after a selection based on the variance inflation factor (≤ 2.5) to identify determinants of willingness to try telemedicine. RESULTS: Among 438 surveyed patients in the initial study, 210 were diagnosed with RA (47.9%). Among them, 146 (69.5%) answered either yes or no regarding willingness to try telemedicine and were included in the analysis. A total of 22 variables (22/55, 40%) were associated with willingness to try telemedicine (region of practical equivalence ≤5%). A total of 9 determinant factors were identified using Bayesian model averaging analysis.
Positive determinants included desiring telemedicine services provided by a rheumatologist (odds ratio [OR] 13.7, 95% CI 5.55-38.3), having prior knowledge of telemedicine (OR 2.91, 95% CI 1.46-6.28), residing in a town (OR 2.91, 95% CI 1.21-7.79) or city (OR 0.56, 95% CI 0.23-1.27), and perceiving one's health status as moderate (OR 1.87, 95% CI 0.94-3.63). Negative determinants included the lack of an electronic device (OR 0.1, 95% CI 0.01-0.62), absence of home internet access (OR 0.1, 95% CI 0.02-0.39), self-assessment of health status as bad (OR 0.44, 95% CI 0.21-0.89) or very bad (OR 0.47, 95% CI 0.06-2.06), and being aged between 60 and 69 years (OR 0.48, 95% CI 0.22-1.04) or older than 70 years (OR 0.38, 95% CI 0.16-0.85). CONCLUSIONS: The results suggest that some patients with RA will not have access to telemedicine without further support. Older patients, those not living in towns, those without adequate internet access, those reporting a bad health status, and those not owning electronic devices might be excluded from the digital transformation in rheumatology and might not have access to adequate RA care. These patient groups certainly require support for the use of digital rheumatology care.
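The odds ratios and interval bounds above are back-transformed from logistic-regression coefficients on the log-odds scale. As an illustrative Python sketch of that back-transformation (a frequentist Wald-style interval with a 1.96 normal quantile is assumed here for simplicity; the study itself used Bayesian analysis, whose intervals are credible intervals):

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Exponentiate a log-odds coefficient and its Wald interval to the OR scale.

    beta: logistic-regression coefficient (log odds ratio)
    se:   standard error of the coefficient
    Returns (odds_ratio, ci_lower, ci_upper).
    """
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# A coefficient of 0 corresponds to OR = 1, i.e. no association:
or_, lo, hi = odds_ratio_with_ci(0.0, 0.5)
```

An interval whose lower bound crosses 1 (as for the "moderate health status" factor, OR 1.87, CI 0.94-3.63) leaves the direction of the association uncertain.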


Subject(s)
Arthritis, Rheumatoid , Bayes Theorem , COVID-19 , Motivation , Telemedicine , Humans , Arthritis, Rheumatoid/therapy , Telemedicine/statistics & numerical data , Cross-Sectional Studies , Germany , Male , Female , Middle Aged , Aged , Adult , Surveys and Questionnaires , Patient Acceptance of Health Care/statistics & numerical data , SARS-CoV-2