Results 1 - 20 of 803
1.
Health Technol Assess ; 28(27): 1-97, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38940695

ABSTRACT

Background: Anterior cruciate ligament injury of the knee is common and leads to decreased activity and an increased risk of secondary osteoarthritis of the knee. Management of patients with a non-acute anterior cruciate ligament injury can be non-surgical (rehabilitation) or surgical (reconstruction). However, insufficient evidence exists to guide treatment. Objective(s): To determine in patients with non-acute anterior cruciate ligament injury and symptoms of instability whether a strategy of surgical management (reconstruction) without prior rehabilitation was more clinically effective and cost-effective than non-surgical management (rehabilitation). Design: A pragmatic, multicentre, superiority, randomised controlled trial with two-arm parallel groups and 1:1 allocation. Due to the nature of the interventions, no blinding could be carried out. Setting: Twenty-nine NHS orthopaedic units in the United Kingdom. Participants: Patients with a symptomatic (instability) non-acute anterior cruciate ligament-injured knee. Interventions: Patients in the surgical management arm underwent surgical anterior cruciate ligament reconstruction as soon as possible and without any further rehabilitation. Patients in the rehabilitation arm attended physiotherapy sessions and were listed for reconstructive surgery only if instability continued following rehabilitation. Surgery following initial rehabilitation was an expected outcome for many patients and within protocol. Main outcome measures: The primary outcome was the Knee Injury and Osteoarthritis Outcome Score 4 at 18 months post randomisation. Secondary outcomes included return to sport/activity, intervention-related complications, patient satisfaction, expectations of activity, generic health quality of life, knee-specific quality of life and resource usage. Results: Three hundred and sixteen participants were recruited between February 2017 and April 2020, with 156 randomised to surgical management and 160 to rehabilitation. Forty-one per cent (n = 65) of those allocated to rehabilitation underwent subsequent reconstruction within 18 months, with 38% (n = 61) completing rehabilitation and not undergoing surgery. Seventy-two per cent (n = 113) of those allocated to surgery underwent reconstruction within 18 months. Follow-up at the primary outcome time point was 78% (n = 248; surgical, n = 128; rehabilitation, n = 120). Both groups improved over time. Adjusted mean Knee Injury and Osteoarthritis Outcome Score 4 scores at 18 months had increased to 73.0 in the surgical arm and to 64.6 in the rehabilitation arm. The adjusted mean difference was 7.9 (95% confidence interval 2.5 to 13.2; p = 0.005) in favour of surgical management. The per-protocol analyses supported the intention-to-treat results, with all treatment effects favouring surgical management at a level reaching statistical significance. There was a significant difference in Tegner Activity Score at 18 months. Sixty-eight per cent (n = 65) of surgery patients did not reach their expected activity level compared to 73% (n = 63) in the rehabilitation arm. There were no differences between groups in surgical complications (n = 1 surgery, n = 2 rehab) or clinical events (n = 11 surgery, n = 12 rehab). Of surgery patients, 82.9% were satisfied compared to 68.1% of rehabilitation patients.
Health economic analysis found that surgical management led to improved health-related quality of life compared to non-surgical management (0.052 quality-adjusted life-years, p = 0.177), but with higher NHS healthcare costs (£1107, p < 0.001). The incremental cost-effectiveness ratio for the surgical management programme versus rehabilitation was £19,346 per quality-adjusted life-year gained. Using thresholds of £20,000 and £30,000 per quality-adjusted life-year, surgical management is cost-effective in the UK setting, with probabilities of being the most cost-effective option of 51% and 72%, respectively. Limitations: Not all surgical patients underwent reconstruction, but this did not affect trial interpretation. Adherence to physiotherapy was patchy, but the trial was designed to be pragmatic. Conclusions: Surgical management (reconstruction) for non-acute anterior cruciate ligament-injured patients was superior to non-surgical management (rehabilitation). Although physiotherapy can still provide benefit, later-presenting non-acute anterior cruciate ligament-injured patients benefit more from surgical reconstruction without delaying for a prior period of rehabilitation. Future work: Confirmatory studies, and studies exploring the influence of fidelity and compliance, will be useful. Trial registration: This trial is registered as Current Controlled Trials ISRCTN10110685; ClinicalTrials.gov Identifier: NCT02980367. Funding: This award was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme (NIHR award ref: 14/140/63) and is published in full in Health Technology Assessment; Vol. 28, No. 27. See the NIHR Funding and Awards website for further award information.
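
As a rough guide to how the headline economic figures relate, the incremental cost-effectiveness ratio (ICER) is simply the incremental cost divided by the incremental quality-adjusted life-years. The sketch below uses the rounded figures quoted above, so it does not exactly reproduce the published £19,346 per quality-adjusted life-year (which comes from unrounded model estimates), and the 51%/72% figures come from the trial's probabilistic analysis rather than from this point estimate.

```python
# Illustrative sketch only: rounded summary figures from the abstract, not the
# trial's underlying economic model, so the ratio differs from the published value.
incremental_cost = 1107.0      # additional NHS cost of surgical management (GBP)
incremental_qalys = 0.052      # additional quality-adjusted life-years gained

icer = incremental_cost / incremental_qalys
print(f"ICER with rounded inputs: ~GBP {icer:,.0f} per QALY gained")   # ~21,300
```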


The study aimed to find out whether it is better to offer surgical reconstruction or rehabilitation first to patients with a more long-standing injury of their anterior cruciate ligament in their knee. This injury causes physical giving way of the knee and/or sensations of it being wobbly (instability). The instability can affect daily activities, work, sport and can lead to arthritis. There are two main treatment options for this problem: non-surgical rehabilitation (prescribed exercises and advice from physiotherapists) or an operation by a surgeon to replace the damaged ligament (anterior cruciate ligament reconstruction). Although studies have highlighted the best option for a recently injured knee, the best management was not known for patients with a long-standing injury, perhaps occurring several months previously. Because the surgery is expensive to the NHS (around £100 million per year), it was also important to look at the costs involved. We carried out a study recruiting 316 non-acute anterior cruciate ligament-injured patients from 29 different hospitals and allocated each patient to either surgery or rehabilitation as their treatment option. We measured how well they did with special function and activity scores, patient satisfaction and costs of treatment. Patients in both groups improved substantially. It was expected that some patients in the rehabilitation group would want surgery if non-surgical management was unsuccessful. Forty-one per cent of patients who initially underwent rehabilitation subsequently elected to have reconstructive surgery. Overall, the patients allocated to the surgical reconstruction group had better results in terms of knee function and stability, activity level and satisfaction with treatment than patients allocated to the non-operative rehabilitation group. There were few problems or complications with either treatment option. Although the surgery was a more expensive treatment option, it was found to be cost-effective in the UK setting. The evidence can be discussed in shared decision-making with anterior cruciate ligament-injured patients. Both strategies of management led to improvement. Although a rehabilitation strategy can be beneficial, especially for recently injured patients, it is advised that later-presenting non-acute and more long-standing anterior cruciate ligament-injured patients undergo surgical reconstruction without necessarily delaying for a period of rehabilitation.


Subject(s)
Anterior Cruciate Ligament Injuries , Anterior Cruciate Ligament Reconstruction , Cost-Benefit Analysis , Humans , Male , Female , Anterior Cruciate Ligament Injuries/surgery , Anterior Cruciate Ligament Injuries/rehabilitation , Adult , United Kingdom , Anterior Cruciate Ligament Reconstruction/rehabilitation , Quality of Life , Quality-Adjusted Life Years , Middle Aged , Young Adult , State Medicine , Joint Instability/surgery , Joint Instability/rehabilitation , Adolescent , Technology Assessment, Biomedical
2.
Radiol Artif Intell ; : e230431, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775671

ABSTRACT

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop an artificial intelligence (AI) deep learning tool capable of predicting future breast cancer risk from a current negative screening mammographic examination and to evaluate the model on data from the UK National Health Service Breast Screening Program. Materials and Methods The OPTIMAM Mammography Imaging Database contains screening data, including mammograms and information on interval cancers, for > 300,000 women who attended screening at three different sites in the UK from 2012 onward. Cancer-free screening examinations from women aged 50-70 years were obtained and classified as risk-positive or risk-negative based on the occurrence of cancer within 3 years of the original examination. Examinations with confirmed cancer and images containing implants were excluded. From the resulting 5264 risk-positive and 191488 risk-negative examinations, training (n = 89285) validation (n = 2106) and test (n = 39351) datasets were produced for model development and evaluation. The AI model was trained to predict future cancer occurrence based on screening mammograms and patient age. Performance was evaluated on the test dataset using the area under the receiver operating characteristic curve (AUC) and compared across subpopulations to assess potential biases. Interpretability of the model was explored, including with saliency maps. Results On the hold-out test set, the AI model achieved an overall AUC of 0.70 (95% CI: 0.69, 0.72). There was no evidence of a difference in performance across the three sites, between patient ethnicities or across age-groups Visualization of saliency maps and sample images provided insights into the mammographic features associated with AI-predicted cancer risk. Conclusion The developed AI tool showed good performance on a multisite, UK-specific dataset. ©RSNA, 2024.

3.
Health Soc Care Deliv Res ; 12(14): 1-182, 2024 May.
Article in English | MEDLINE | ID: mdl-38794956

ABSTRACT

Background: Acute inpatient mental health services report high levels of safety incidents. The application of patient safety theory has been sparse, particularly concerning interventions that proactively seek patient perspectives. Objective(s): To develop and evaluate a theoretically based, digital monitoring tool to collect real-time information from patients on acute adult mental health wards about their perceptions of ward safety. Design: Theory-informed mixed-methods study. A prototype digital monitoring tool was developed from a co-design approach, implemented in hospital settings, and subjected to qualitative and quantitative evaluation. Setting and methods: Phase 1: scoping review of the literature on patient involvement in safety interventions in acute mental health care; evidence scan of digital technology in mental health contexts; qualitative interviews with mental health patients and staff about perspectives on ward safety. This, alongside stakeholder engagement with advisory groups, service users and health professionals, informed the development processes. Most data collection was virtual. Phase 1 resulted in the technical development of a theoretically based digital monitoring tool that collected patient feedback for proactive safety monitoring. Phase 2: implementation of the tool in six adult acute mental health wards across two UK NHS trusts; evaluation via focused ethnography and qualitative interviews. Statistical analysis of WardSonar data and routine ward data involved construction of an hour-by-hour data set per ward, permitting detailed analysis of the use of the WardSonar tool. Participants: A total of 8 patients and 13 mental health professionals participated in Phase 1 interviews; 33 staff and 34 patients participated in Phase 2 interviews. Interventions: Patients could use a web application (the WardSonar tool) to record real-time perceptions of ward safety. Staff could access aggregated, anonymous data to inform timely interventions. Results: Coronavirus disease 2019 restrictions greatly impacted the study. Stakeholder engagement permeated the project. Phase 1 delivered a theory-based, collaboratively designed digital tool for proactive patient safety monitoring. Phase 2 showed that the tool was user friendly and broadly acceptable to patients and staff. The aggregated safety data were infrequently used by staff. Feasibility depended on engaged staff and embedding use of the tool in ward routines. There is strong evidence that an incident leads to an increased probability of further incidents within the next 4 hours. This puts a measure on the extent to which social/behavioural contagion persists. There is weak evidence to suggest that an incident leads to greater use of the WardSonar tool in the following hour, but none to suggest that ward atmosphere predicts future incidents. Therefore, how often patients use the tool seems to send a stronger signal about potential incidents than patients' real-time reports about ward atmosphere. Limitations: Implementation was limited to two NHS trusts. Coronavirus disease 2019 impacted design processes, including stakeholder engagement, implementation and evaluation of the monitoring tool in routine clinical practice. Higher uptake could enhance the validity of the results. Conclusions: WardSonar has the potential to provide a valuable route for patients to communicate safety concerns.
The WardSonar monitoring tool has a strong patient perspective and uses proactive real-time safety monitoring rather than traditional retrospective data review. Future work: The WardSonar tool can be refined and tested further in a post-Coronavirus disease 2019 context. Study registration: This study is registered as ISRCTN14470430. Funding: This award was funded by the National Institute for Health and Care Research (NIHR) Health and Social Care Delivery Research programme (NIHR award ref: NIHR128070) and is published in full in Health and Social Care Delivery Research; Vol. 12, No. 14. See the NIHR Funding and Awards website for further award information.


Mental health wards can feel unsafe. We know that patients and staff have different ideas about what makes a hospital ward safe or unsafe. Patients are often the first to know when the atmosphere on a ward becomes tense but, often, no one asks them for input or feedback at the time. We worked with service users and staff to develop new technology to make it easy for patients to tell staff about changes in the ward atmosphere. We put everyone's ideas together and some technical developers then built a digital safety tool to use on a tablet computer. Patients put in anonymous information about the ward atmosphere and staff can read it straight away. We tested it on six adult acute mental health wards for 10 weeks. We asked patients and staff what they thought about the tool and we looked at how it was being used. Patients and staff liked the look of the tool on the tablet computer. Some staff said they did not need it because they could tell how patients were feeling, but patients told us that staff did not talk with them much and did not always know when patients were feeling tense. Coronavirus disease 2019 made life difficult on the wards. Most ward managers said the tool could be helpful, but they had not had time to get used to it on the wards. Occasionally, the tablet computers were out of action. Many staff tried hard to use the tool. Most patient information was gathered when it was calm, perhaps because staff were not too busy to help them. We found that this tool could help staff know about tensions on the ward, but they need to get used to it and bring it into ward routines.


Subject(s)
COVID-19 , Patient Safety , Humans , Adult , Male , Female , COVID-19/epidemiology , Psychiatric Department, Hospital/organization & administration , United Kingdom , Qualitative Research , Middle Aged , Digital Technology , Mental Health Services/organization & administration , State Medicine/organization & administration , Patient Participation/methods
4.
Breast Cancer Res ; 26(1): 85, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38807211

ABSTRACT

BACKGROUND: Abbreviated breast MRI (FAST MRI) is being introduced into clinical practice to screen women with mammographically dense breasts or with a personal history of breast cancer. This study aimed to optimise diagnostic accuracy through the adaptation of interpretation-training. METHODS: A FAST MRI interpretation-training programme (short presentations and guided hands-on workstation teaching) was adapted to provide additional training during the assessment task (interpretation of an enriched dataset of 125 FAST MRI scans) by giving readers feedback about the true outcome of each scan immediately after each scan was interpreted (formative assessment). Reader interaction with the FAST MRI scans used software developed for the study (RiViewer) that recorded reader opinions and reading times for each scan. The training programme was additionally adapted for remote e-learning delivery. STUDY DESIGN: Prospective, blinded interpretation of an enriched dataset by multiple readers. RESULTS: 43 mammogram readers completed the training, 22 who interpreted breast MRI in their clinical role (Group 1) and 21 who did not (Group 2). Overall sensitivity was 83% (95% CI 81-84%; 1994/2408), specificity was 94% (95% CI 93-94%; 7806/8338), readers' agreement with the true outcome was kappa = 0.75 (95% CI 0.74-0.77) and the diagnostic odds ratio was 70.67 (95% CI 61.59-81.09). Group 1 readers showed similar sensitivity (84%) to Group 2 (82%, p = 0.14), but slightly higher specificity (94% v. 93%, p = 0.001). Concordance with the ground truth increased significantly with the number of FAST MRI scans read through the formative assessment task (p = 0.002) but by differing amounts depending on whether or not a reader had previously attended FAST MRI training (interaction p = 0.02). Concordance with the ground truth was significantly associated with reading batch size (p = 0.02), tending to worsen when more than 50 scans were read per batch. Group 1 took a median of 56 seconds (range 8-47,466) to interpret each FAST MRI scan compared with 78 seconds (range 14-22,830; p < 0.0001) for Group 2. CONCLUSIONS: Provision of immediate feedback to mammogram readers during the assessment test set reading task increased specificity for FAST MRI interpretation and achieved high diagnostic accuracy. The optimal reading-batch size for FAST MRI was 50 reads per batch. Trial registration (25/09/2019): ISRCTN16624917.
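
As a quick cross-check, the pooled sensitivity, specificity and diagnostic odds ratio can be reconstructed from the counts quoted above; a minimal sketch (point estimates only, since the confidence-interval method is not given in the abstract):

```python
# Recomputing the pooled accuracy metrics from the abstract's counts:
# 1994/2408 abnormal reads correctly flagged, 7806/8338 normal reads correctly cleared.
tp, total_pos = 1994, 2408
tn, total_neg = 7806, 8338
fn = total_pos - tp            # 414 missed
fp = total_neg - tn            # 532 false recalls

sensitivity = tp / total_pos                 # ~0.83
specificity = tn / total_neg                 # ~0.94
dor = (tp * tn) / (fp * fn)                  # ~70.7, matching the reported 70.67

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, DOR={dor:.1f}")
```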


Subject(s)
Breast Neoplasms , Learning Curve , Magnetic Resonance Imaging , Mammography , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/diagnosis , Magnetic Resonance Imaging/methods , Mammography/methods , Middle Aged , Early Detection of Cancer/methods , Prospective Studies , Aged , Sensitivity and Specificity , Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Breast/pathology
5.
Sci Total Environ ; 927: 172118, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38569959

ABSTRACT

Declines in insect pollinators have been linked to a range of causative factors such as disease, loss of habitats, the quality and availability of food, and exposure to pesticides. Here, we analysed an extensive dataset generated from pesticide screening of foraging insects, pollen-nectar stores/beebread, pollen and ingested nectar across three species of bees collected at 128 European sites set in two types of crop. In this paper, we aimed to (i) derive a new index to summarise key aspects of complex pesticide exposure data and (ii) understand the links between pesticide exposures depicted by the different matrices, bee species and apple orchards versus oilseed rape crops. We found that summary indices were highly correlated with the number of pesticides detected in the related matrix but not with which pesticides were present. Matrices collected from apple orchards generally contained a higher number of pesticides (7.6 pesticides per site) than matrices from sites collected from oilseed rape crops (3.5 pesticides), with fungicides being highly represented in apple crops. A greater number of pesticides were found in pollen-nectar stores/beebread and pollen matrices compared with nectar and bee body matrices. Our results show that for a complete assessment of pollinator pesticide exposure, it is necessary to consider several different exposure routes and multiple species of bees across different agricultural systems.


Subject(s)
Crops, Agricultural , Environmental Monitoring , Pesticides , Pollination , Animals , Bees/physiology , Pesticides/analysis , Pollen , Malus , Environmental Exposure/statistics & numerical data
6.
Altern Lab Anim ; 52(3): 149-154, 2024 May.
Article in English | MEDLINE | ID: mdl-38606566

ABSTRACT

In the cosmetics sector, many products such as shampoos have a probability of accidental ocular exposure during their routine use. One very specific safety parameter is the residence time of the substance on the corneal surface, as prolonged exposure may cause injury. In this study, we developed a system that simulates corneal exposure to blinking and tear flow, for comparing the corneal clearance times of viscous detergent formulations. The Ex Vivo Eye Irritation Test (EVEIT), which uses corneal explants from discarded rabbit eyes from an abattoir, was used as the basis for the new system. To simulate blinking, we developed a silicone wiping membrane to regularly move across the corneal surface, under conditions of constant addition and aspiration of fluid, to mimic tear flow. Six shampoo formulations were tested and were shown to differ widely in their corneal clearance time. Three groups could be identified according to the observed clearance times (fast, intermediate and slow); the reference shampoo had the shortest clearance time of all tested formulations. With this new system, it is now possible to investigate an important physicochemical parameter, i.e. corneal clearance time, for the consideration of ocular safety during the development of novel cosmetic formulations.


Subject(s)
Blinking , Cornea , Animals , Rabbits , Cornea/drug effects , Blinking/drug effects , Animal Testing Alternatives/methods , Hair Preparations , Tears/drug effects
7.
Adv Rheumatol ; 64(1): 31, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38650049

ABSTRACT

BACKGROUND: To illustrate how (standardised) effect sizes (ES) vary based on calculation method and to provide considerations for improved reporting. METHODS: Data from three trials of tanezumab in subjects with osteoarthritis were analyzed. ES of tanezumab versus comparator for WOMAC Pain (outcome) was defined as least squares difference between means (mixed model for repeated measures analysis) divided by a pooled standard deviation (SD) of outcome scores. Three approaches to computing the SD were evaluated: Baseline (the pooled SD of WOMAC Pain values at baseline [pooled across treatments]); Endpoint (the pooled SD of these values at the time primary endpoints were assessed); and Median (the median pooled SD of these values based on the pooled SDs across available timepoints). Bootstrap analyses were used to compute 95% confidence intervals (CI). RESULTS: ES (95% CI) of tanezumab 2.5 mg based on Baseline, Endpoint, and Median SDs in one study were - 0.416 (- 0.796, - 0.060), - 0.195 (- 0.371, - 0.028), and - 0.196 (- 0.373, - 0.028), respectively; negative values indicate pain improvement. This pattern of ES differences (largest with Baseline SD, smallest with Endpoint SD, Median SD similar to Endpoint SD) was consistent across all studies and doses of tanezumab. CONCLUSION: Differences in ES affect interpretation of treatment effect. Therefore, we advocate clearly reporting individual elements of ES in addition to its overall calculation. This is particularly important when ES estimates are used to determine sample sizes for clinical trials, as larger ES will lead to smaller sample sizes and potentially underpowered studies. TRIAL REGISTRATION: Clinicaltrials.gov NCT02697773, NCT02709486, and NCT02528188.
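
To make the central point concrete, the sketch below shows how a single least squares mean difference yields different standardised effect sizes depending on which pooled SD is used as the denominator; all numbers are hypothetical and are not taken from the tanezumab trials.

```python
# Hypothetical illustration of ES = least squares mean difference / pooled SD.
# Baseline SDs are often smaller than endpoint SDs (eligibility criteria restrict
# baseline scores), so baseline-anchored effect sizes tend to look larger.
lsmean_difference = -2.0       # hypothetical treatment-vs-comparator WOMAC Pain difference

pooled_sds = {
    "baseline SD": 4.8,        # hypothetical
    "endpoint SD": 10.3,       # hypothetical
    "median SD": 10.2,         # hypothetical
}

for label, sd in pooled_sds.items():
    print(f"ES using {label}: {lsmean_difference / sd:.3f}")
```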


Subject(s)
Antibodies, Monoclonal, Humanized , Osteoarthritis , Randomized Controlled Trials as Topic , Humans , Antibodies, Monoclonal, Humanized/therapeutic use , Data Interpretation, Statistical , Osteoarthritis/drug therapy , Pain Measurement , Treatment Outcome
8.
JMIR Form Res ; 8: e53726, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38607663

ABSTRACT

BACKGROUND: Acute mental health services report high levels of safety incidents that involve both patients and staff. The potential for patients to be involved in interventions to improve safety within a mental health setting is acknowledged, and there is a need for interventions that proactively seek the patient perspective of safety. Digital technologies may offer opportunities to address this need. OBJECTIVE: This research sought to design and develop a digital real-time monitoring tool (WardSonar) to collect and collate daily information from patients in acute mental health wards about their perceptions of safety. We present the design and development process and underpinning logic model and programme theory. METHODS: The first stage involved a synthesis of the findings from a systematic review and evidence scan, interviews with patients (n=8) and health professionals (n=17), and stakeholder engagement. Cycles of design activities and discussion followed with patients, staff, and stakeholder groups, to design and develop the prototype tool. RESULTS: We drew on patient safety theory and the concepts of contagion and milieu. The data synthesis, design, and development process resulted in three prototype components of the digital monitoring tool (WardSonar): (1) a patient recording interface that asks patients to input their perceptions into a tablet computer, to assess how the ward feels and whether the direction is changing, that is, "getting worse" or "getting better"; (2) a staff dashboard and functionality to interrogate the data at different levels; and (3) a public-facing ward interface. The technology is available as open-source code. CONCLUSIONS: Recent patient safety policy and research priorities encourage innovative approaches to measuring and monitoring safety. We developed a digital real-time monitoring tool to collect information from patients in acute mental health wards about perceived safety, to support staff to respond and intervene to changes in the clinical environment more proactively.
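
As a purely illustrative sketch of the architecture described above (the real open-source WardSonar code may use different names, scales and structures), anonymous patient entries might be modelled and aggregated per ward and hour for the staff dashboard along these lines:

```python
# Hypothetical data model and hourly aggregation; field names and the 1-5
# atmosphere scale are assumptions, not taken from the WardSonar source code.
from dataclasses import dataclass
from collections import defaultdict
from datetime import datetime

@dataclass
class AtmosphereEntry:
    ward_id: str
    recorded_at: datetime
    atmosphere: int        # assumed scale, e.g. 1 (calm) to 5 (very tense)
    direction: str         # "getting better" or "getting worse"

def hourly_mean_atmosphere(entries):
    """Aggregate anonymous patient entries per ward and hour for the staff dashboard."""
    buckets = defaultdict(list)
    for entry in entries:
        hour = entry.recorded_at.replace(minute=0, second=0, microsecond=0)
        buckets[(entry.ward_id, hour)].append(entry.atmosphere)
    return {key: sum(values) / len(values) for key, values in buckets.items()}
```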

9.
Health Rep ; 35(3): 3-17, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38527107

ABSTRACT

Background: Small area estimation refers to statistical modelling procedures that leverage information or "borrow strength" from other sources or variables. This is done to enhance the reliability of estimates of characteristics or outcomes for areas that do not contain sufficient sample sizes to provide disaggregated estimates of adequate precision and reliability. There is growing interest in secondary research applications for small area estimates (SAEs). However, it is crucial to assess the analytic value of these estimates when used as proxies for individual-level characteristics or as distinct measures that offer insights at the area level. This study assessed novel area-level community belonging measures derived using small area estimation and examined associations with individual-level measures of community belonging and self-rated health. Data and methods: SAEs of community belonging within census tracts produced from the 2016-2019 cycles of the Canadian Community Health Survey (CCHS) were merged with respondent data from the 2020 CCHS. Multinomial logistic regression models were run between area-level SAEs, individual-level sense of community belonging, and self-rated health on the study sample of people aged 18 years and older. Results: Area-level community belonging was associated with individual-level community belonging, even after adjusting for individual-level sociodemographic characteristics, despite limited agreement between individual- and area-level measures. Living in a neighbourhood with low community belonging was associated with higher odds of reporting being in fair or poor health, versus being in very good or excellent health (odds ratio: 1.53; 95% confidence interval: 1.22, 1.91), even after adjusting for other factors such as individual-level sense of community belonging, which was also associated with self-rated health. Interpretation: Area-level and individual-level sense of community belonging were independently associated with self-rated health. The novel SAEs of community belonging can be used as distinct measures of neighbourhood-level community belonging and should be understood as complementary to, rather than proxies for, individual-level measures of community belonging.


Subject(s)
Health Status , Residence Characteristics , Humans , Socioeconomic Factors , Reproducibility of Results , Canada , Health Surveys
10.
Sci Rep ; 14(1): 3524, 2024 02 12.
Article in English | MEDLINE | ID: mdl-38347035

ABSTRACT

Infectious and parasitic agents (IPAs) and their associated diseases are major environmental stressors that jeopardize bee health, both alone and in interaction with other stressors. Their impact on pollinator communities can be assessed by studying multiple sentinel bee species. Here, we analysed the field exposure of three sentinel managed bee species (Apis mellifera, Bombus terrestris and Osmia bicornis) to 11 IPAs (six RNA viruses, two bacteria, three microsporidia). The sentinel bees were deployed at 128 sites in eight European countries adjacent to either oilseed rape fields or apple orchards during crop bloom. Adult bees of each species were sampled before their placement and after crop bloom. The IPAs were detected and quantified using a harmonised, high-throughput and semi-automated qPCR workflow. We describe differences among bee species in IPA profiles (richness, diversity, detection frequencies, loads and their change upon field exposure, and exposure risk), with no clear patterns related to the country or focal crop. Our results suggest that the most frequent IPAs in adult bees are more appropriate for assessing the bees' IPA exposure risk. We also report positive correlations of IPA loads, supporting potential IPA transmission among sentinels and suggesting that careful consideration should be given when introducing managed pollinators into ecologically sensitive environments.


Subject(s)
Bacteria , Pollination , Bees , Animals , Europe
11.
Value Health ; 27(4): 469-477, 2024 04.
Article in English | MEDLINE | ID: mdl-38307389

ABSTRACT

OBJECTIVES: The EQ-5D-5L is a commonly used health-related quality of life instrument for evaluating interventions in patients receiving dialysis; however, the minimal important difference (MID) that constitutes a meaningful treatment effect for this population has not been established. This study aims to estimate the MID for the EQ-5D-5L utility index in dialysis patients. METHODS: 6-monthly EQ-5D-5L measurements were collected from adult dialysis patients between April 2017 and November 2020 at a renal network in Sydney, Australia. EQ-VAS and Integrated Palliative care Outcome Scale Renal symptom burden scores were collected simultaneously and used as anchors. MID estimates for the EQ-5D-5L utility index were derived using anchor-based and distribution-based methods. RESULTS: A total of 352 patients with ≥1 EQ-5D-5L observation were included, constituting 1127 observations. Mean EQ-5D-5L utility index at baseline was 0.719 (SD ± 0.267), and mean EQ-5D-5L utility decreased over time by -0.017 per year (95% CI -0.029 to -0.006, P = .004). Using cross-sectional anchor-based methods, MID estimates ranged from 0.073 to 0.107. Using longitudinal anchor-based methods, MID for improvement and deterioration ranged from 0.046 to 0.079 and -0.111 to -0.048, respectively. Using receiver operating characteristic curves, MID for improvement and deterioration ranged from 0.037 to 0.122 and -0.074 to -0.063, respectively. MID estimates from distribution-based methods were consistent with anchor-based estimates. CONCLUSIONS: Anchor-based and distribution-based approaches provided EQ-5D-5L utility index MID estimates ranging from 0.034 to 0.134. These estimates can inform the target difference or "effect size" for clinical trial design among dialysis populations.
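
For orientation, common distribution-based MID heuristics can be applied directly to the baseline SD reported above; the abstract does not state which distribution-based formulas the authors used, so this is only a generic sketch.

```python
# Generic distribution-based MID rules of thumb applied to the reported baseline
# EQ-5D-5L utility SD (0.267); not necessarily the study's exact methods.
baseline_sd = 0.267

half_sd_mid = 0.5 * baseline_sd        # ~0.134, the upper end of the reported 0.034-0.134 range
small_effect_mid = 0.2 * baseline_sd   # ~0.053, a conventional "small effect" benchmark

print(f"0.5 x SD: {half_sd_mid:.3f}")
print(f"0.2 x SD: {small_effect_mid:.3f}")
```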


Subject(s)
Quality of Life , Renal Dialysis , Adult , Humans , Cross-Sectional Studies , Surveys and Questionnaires , Psychometrics
12.
G3 (Bethesda) ; 14(4)2024 04 03.
Article in English | MEDLINE | ID: mdl-38334143

ABSTRACT

Pollinators are vital for food security and the maintenance of terrestrial ecosystems. Bumblebees are important pollinators across northern temperate, arctic, and alpine ecosystems, yet are in decline across the globe. Vairimorpha bombi is a parasite belonging to the fungal class Microsporidia that has been implicated in the rapid decline of bumblebees in North America, where it may be an emerging infectious disease. To investigate the evolutionary basis of pathogenicity of V. bombi, we sequenced and assembled its genome using Oxford Nanopore and Illumina technologies and performed phylogenetic and genomic evolutionary analyses. The genome assembly for V. bombi is 4.73 Mb, from which we predicted 1,870 protein-coding genes and 179 tRNA genes. The genome assembly has low repetitive content and low GC content. V. bombi's genome assembly is the smallest of the Vairimorpha and closely related Nosema genera, but larger than those found in the Encephalitozoon and Ordospora sister clades. Orthology and phylogenetic analysis revealed 18 core conserved single-copy microsporidian genes, including the histone acetyltransferase (HAT) GCN5. Surprisingly, V. bombi was unique among the microsporidia in not encoding the second predicted HAT ESA1. The V. bombi genome assembly annotation included 265 unique genes (i.e. not predicted in other microsporidia genome assemblies), 20% of which encode a secretion signal, which is a significant enrichment. Intriguingly, of the 36 microsporidian genomes we analyzed, 26 also had a significant enrichment of secretion signals encoded by unique genes, ranging from 6 to 71% of those predicted genes. These results suggest that microsporidia are under selection to generate and purge diverse and unique genes encoding secreted proteins, potentially contributing to or facilitating infection of their diverse hosts. Furthermore, V. bombi shares 5/7 conserved spore wall proteins (SWPs) with its closest relative V. ceranae (which primarily infects honeybees), while also uniquely encoding four additional SWPs. This gene class is thought to be essential for infection, providing both environmental protection and recognition and uptake into the host cell. Together, our results show that SWPs and unique genes encoding a secretion signal are rapidly evolving in the microsporidia, suggesting that they underpin key pathobiological traits including host specificity and pathogenicity.


Subject(s)
Ecosystem , Microsporidia , Nosema , Bees/genetics , Animals , Phylogeny , Nosema/genetics , North America
13.
Mutat Res ; 828: 111853, 2024.
Article in English | MEDLINE | ID: mdl-38401335

ABSTRACT

The widespread use of chemicals and the presence of chemical and metal residues in various foods, beverages, and other consumables have raised concerns about the potential for enhanced toxicity. This study assessed the cytotoxic effects of Piperonyl butoxide (PBO) and its enhancement in combination with major contaminant chemicals, including Imidacloprid and metals, using different cytotoxic and genotoxic assays in Chinese hamster ovary (CHO) cells. PBO exhibited elevated cytotoxic effects in poly(ADP-ribose) polymerase (PARP)-deficient CHO mutants but not in glutathione S-transferase-deficient CHO mutants. PBO cytotoxicity was enhanced by the PARP inhibitor Olaparib. PBO cytotoxicity was also enhanced with co-exposure to Imidacloprid, Lead Chloride, or Sodium Selenite. PBO induces γH2AX foci formation and apoptosis. The induction of DNA damage markers was elevated with PARP deficiency and co-exposure to Imidacloprid, Lead Chloride, or Sodium Selenite. Moreover, PBO triggers the formation of etch pits on plastic surfaces. These results revealed novel mechanisms of PBO cytotoxicity associated with PARP and synergistic effects with other environmental pollutants. The toxicological mechanisms underlying exposure to various combinations at different concentrations, including concentrations below the permitted limit of intake or the level of concern, require further study.


Subject(s)
Cricetulus , Drug Synergism , Neonicotinoids , Nitro Compounds , Piperonyl Butoxide , Animals , CHO Cells , Neonicotinoids/toxicity , Nitro Compounds/toxicity , Piperonyl Butoxide/toxicity , Imidazoles/toxicity , Cricetinae , Apoptosis/drug effects , DNA Damage/drug effects , Lead/toxicity , Piperazines/toxicity , Insecticides/toxicity , Poly(ADP-ribose) Polymerase Inhibitors/pharmacology , Phthalazines
14.
Hypertension ; 81(4): 851-860, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38288610

ABSTRACT

BACKGROUND: Increased cardiovascular risk following preeclampsia is well established and there are signs of early cardiovascular aging 6 months postpartum. This study assessed whether blood pressure (BP) and other cardiovascular measures are abnormal 2 years postpartum in the same cohort to determine ongoing risk markers. METHODS: Six months and 2 years postpartum, BP was measured using sphygmomanometry, 24-hour ambulatory BP monitoring, and noninvasive central BP. Anthropometric measures and blood and urine biochemistry were performed. Cross-sectional comparisons between preeclampsia and normotensive pregnancy (NP) groups and longitudinal comparisons within each group were made at 6 months and 2 years. RESULTS: Two years postpartum, 129 NP and 52 preeclampsia women who also had 6-month measures were studied. At both time points, the preeclampsia group had significantly higher BP (office BP at 2 years, 112±12/72±8 versus 104±9/67±7 mm Hg for NP, P<0.001; mean ambulatory BP, 116±9/73±8 versus 106±8/67±6 mm Hg for NP, P<0.001). No significant BP changes were noted from 6 months to 2 years within either group. Office BP thresholds of 140 mm Hg systolic and 90 mm Hg diastolic classified 2% of preeclampsia and 0% of NP women as hypertensive at 2 years. American Heart Association 2017 criteria (above normal, >120/80 mm Hg) classified 25% versus 8% (P<0.002), as did our reference range threshold of 122/79 mm Hg. American Heart Association criteria classified 60% of women post-preeclampsia versus 16% after NP as having above-normal ambulatory BP (P<0.001). Other cardiovascular risk markers more common 2 years post-preeclampsia included higher body mass index (median 26.6 versus 23.1, P=0.003) and insulin resistance. CONCLUSIONS: After preeclampsia, women have significantly higher BP at 6 months and 2 years postpartum, and have higher body mass index and insulin-resistance scores, increasing their future cardiovascular risk. Regular cardiovascular risk screening should be implemented for all who have experienced preeclampsia.


Subject(s)
Cardiovascular Diseases , Hypertension , Pre-Eclampsia , Pregnancy , Female , Humans , Pre-Eclampsia/diagnosis , Pre-Eclampsia/epidemiology , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/etiology , Cross-Sectional Studies , Risk Factors , Hypertension/diagnosis , Blood Pressure/physiology , Heart Disease Risk Factors
15.
Kidney Int ; 105(1): 35-45, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38182300

ABSTRACT

Integrated kidney care requires synergistic linkage between preventative care for people at risk for chronic kidney disease and health services providing care for people with kidney disease, ensuring holistic and coordinated care as people transition between acute and chronic kidney disease and the 3 modalities of kidney failure management: conservative kidney management, transplantation, and dialysis. People with kidney failure have many supportive care needs throughout their illness, regardless of treatment modality. Kidney supportive care is therefore a vital part of this integrated framework, but is nonexistent, poorly developed, and/or poorly integrated with kidney care in many settings, especially in low- and middle-income countries. To address this, the International Society of Nephrology has (i) coordinated the development of consensus definitions of conservative kidney management and kidney supportive care to promote international understanding and awareness of these active treatments; and (ii) identified key considerations for the development and expansion of conservative kidney management and kidney supportive care programs, especially in low resource settings, where access to kidney replacement therapy is restricted or not available. This article presents the definitions for conservative kidney management and kidney supportive care; describes their core components with some illustrative examples to highlight key points; and describes some of the additional considerations for delivering conservative kidney management and kidney supportive care in low resource settings.


Subject(s)
Delivery of Health Care, Integrated , Renal Insufficiency, Chronic , Renal Insufficiency , Humans , Kidney , Renal Insufficiency, Chronic/diagnosis , Renal Insufficiency, Chronic/therapy , Conservative Treatment
16.
Commun Biol ; 7(1): 125, 2024 01 24.
Article in English | MEDLINE | ID: mdl-38267685

ABSTRACT

Marine heatwaves (MHWs) cause disruption to marine ecosystems, deleteriously impacting macroflora and fauna. However, effects on microorganisms are relatively unknown despite ocean temperature being a major determinant of assemblage structure. Using data from thousands of Southern Hemisphere samples, we reveal that during an "unprecedented" 2015/16 Tasman Sea MHW, temperatures approached or surpassed the upper thermal boundary of many endemic taxa. Temperate microbial assemblages underwent a profound transition to niche states aligned with sites over 1000 km equatorward, adapting to higher temperatures and lower nutrient conditions brought on by the MHW. MHW conditions also modulate seasonal patterns of microbial diversity and support novel assemblage compositions. The most significant effects of MHWs on microbial assemblages occurred during warmer months, when temperatures exceeded the upper climatological bounds. Trends in microbial response across several MHWs in different locations suggest that these are emergent properties of temperate ocean warming, which may facilitate monitoring, prediction and adaptation efforts.


Subject(s)
Ecosystem , Infrared Rays , Nutrients , Temperature
17.
J Ren Nutr ; 34(2): 177-184, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37918642

ABSTRACT

BACKGROUND: Frailty and malnutrition are both associated with worsening morbidity and mortality and become more prevalent in the elderly and as kidney function declines. Anorexia and reduced oral intake are common features of both frailty and malnutrition. However, there are sparse data evaluating the impact of other gastrointestinal (GI) symptoms, such as taste changes, on rates of frailty and malnutrition in people with kidney failure. The aim of this study is to describe the prevalence of frailty and malnutrition and their association with dietary intake and nutrition-related symptoms in people with kidney failure. METHODS: This observational study recruited people with kidney failure who were commencing Conservative Kidney Management or elderly people (aged > 75 years) newly commenced on dialysis from 3 renal units. Participants underwent assessments of frailty, nutritional status, dietary intake, and GI symptom burden when they attended clinic appointments, approximately every 6 months. RESULTS: Of the 85 participants, 57% were assessed as being frail and 33% were assessed as being malnourished. Participants assessed as frail reported more GI symptoms (3 vs. 2, P < .001) that were more severe (1.75 vs. 1.0, P < .001) compared to nonfrail participants. Being malnourished was associated with a 5 times higher chance of being frail (odds ratio 5.8; 95% confidence interval 1.5, 21.8; P = .015) and having more severe symptoms was associated with a 2 times higher chance (odds ratio 2.8; 95% CI 1.1, 7.0; P = .026) of being frail. In addition to experiencing more GI symptoms, that were more severe, participants who were malnourished consumed significantly less energy (1234 kcal vs. 1400 kcal, P = .01) and protein (51 g vs. 74 g, P < .001). CONCLUSIONS: Frailty and malnutrition are common and are associated with a higher GI symptom burden and poorer dietary intake. Future research is needed to determine effective interventions targeting frailty and malnutrition, including nutrition-related symptoms and optimal protein intake.


Subject(s)
Frailty , Malnutrition , Renal Insufficiency , Aged , Humans , Frailty/epidemiology , Frailty/complications , Prospective Studies , Nutrition Assessment , Malnutrition/diagnosis , Nutritional Status , Eating , Renal Insufficiency/complications , Renal Insufficiency/epidemiology , Frail Elderly , Geriatric Assessment
18.
Nature ; 628(8007): 355-358, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38030722

ABSTRACT

Sustainable agriculture requires balancing crop yields with the effects of pesticides on non-target organisms, such as bees and other crop pollinators. Field studies have demonstrated that agricultural use of neonicotinoid insecticides can negatively affect wild bee species [1,2], leading to restrictions on these compounds [3]. However, besides neonicotinoids, field-based evidence of the effects of landscape pesticide exposure on wild bees is lacking. Bees encounter many pesticides in agricultural landscapes [4-9] and the effects of this landscape exposure on colony growth and development of any bee species remain unknown. Here we show that the many pesticides found in bumble bee-collected pollen are associated with reduced colony performance during crop bloom, especially in simplified landscapes with intensive agricultural practices. Our results from 316 Bombus terrestris colonies at 106 agricultural sites across eight European countries confirm that the regulatory system fails to sufficiently prevent pesticide-related impacts on non-target organisms, even for a eusocial pollinator species in which colony size may buffer against such impacts [10,11]. These findings support the need for postapproval monitoring of both pesticide exposure and effects to confirm that the regulatory process is sufficiently protective in limiting the collateral environmental damage of agricultural pesticide use.


Subject(s)
Insecticides , Pesticides , Bees , Animals , Pesticides/toxicity , Insecticides/toxicity , Neonicotinoids/toxicity , Agriculture , Pollen
19.
Pediatr Res ; 95(1): 275-284, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37674022

ABSTRACT

BACKGROUND: Intrauterine exposure to hypertensive disorders of pregnancy, including gestational hypertension (GH) and preeclampsia (PE), may influence infant growth and have long-term health implications. This study aimed to compare growth outcomes of infants exposed to a normotensive pregnancy (NTP), GH, or PE from birth to 2 years. METHODS: Infants were children of women enroled in the prospective Postpartum Physiology, Psychology and Paediatric (P4) cohort study who had NTP, GH or PE. Birth, 6-month (age-corrected) and 2-year (age-corrected) weight z-scores, change in weight z-scores, rapid weight gain (≥0.67 increase in weight z-score) and conditional weight gain z-scores were calculated to assess infant growth (NTP = 240, GH = 19, PE = 66). RESULTS: Infants exposed to PE compared to NTP or GH had significantly lower birth weight and length z-scores, but there were no differences in growth outcomes at 6 months or 2 years. GH and PE-exposed infants had significantly greater weight z-score gain [95% CI] (PE = 0.93 [0.66-1.18], GH = 1.03 [0.37-1.68], NTP = 0.45 [0.31-0.58], p < 0.01) and rapid weight gain (GH = 63%, PE = 59%, NTP = 42%, p = 0.02) from birth to 2 years, which remained significant for PE-exposed infants after confounder adjustment. CONCLUSION: In this cohort, GH and PE were associated with accelerated infant weight gain that may increase future cardiometabolic disease risk. IMPACT: Preeclampsia exposed infants were smaller at birth, compared with normotensive pregnancy and gestational hypertension exposed infants, but caught up in growth by 2 years of age. Both preeclampsia and gestational hypertension exposed infants had significantly accelerated weight gain from birth to 2 years, which remained significant for preeclampsia exposed infants after adjustment for confounders including small for gestational age. Monitoring of growth patterns in infants born following exposure to a hypertensive disorder of pregnancy may be indicated to prevent accelerated weight gain trajectories and obesity.
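
The rapid-weight-gain criterion used above is simple enough to express directly; a minimal sketch based on the abstract's definition (the z-scores themselves would come from a growth reference such as the WHO standards, and the example values are hypothetical):

```python
# Rapid weight gain per the abstract's definition: an increase of >= 0.67 in
# weight-for-age z-score between birth and follow-up. Example values are hypothetical.
RAPID_GAIN_THRESHOLD = 0.67

def is_rapid_weight_gain(z_birth: float, z_followup: float) -> bool:
    return (z_followup - z_birth) >= RAPID_GAIN_THRESHOLD

# Hypothetical preeclampsia-exposed infant: small at birth, catching up by 2 years.
print(is_rapid_weight_gain(z_birth=-0.8, z_followup=0.2))   # True (gain of 1.0)
```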


Subject(s)
Hypertension, Pregnancy-Induced , Pre-Eclampsia , Infant, Newborn , Pregnancy , Infant , Humans , Child , Female , Cohort Studies , Prospective Studies , Weight Gain
20.
J Comput Assist Tomogr ; 48(1): 1-11, 2024.
Article in English | MEDLINE | ID: mdl-37574655

ABSTRACT

The Fontan procedure is the definitive treatment for patients with single-ventricle physiology. Surgical advances have led to a growing number of patients surviving into adulthood. Fontan-associated liver disease (FALD) encompasses a spectrum of pathologic liver changes that occur secondary to altered physiology, including congestion, fibrosis, and the development of liver masses. Assessment of FALD is difficult and relies on imaging used alongside clinical, laboratory, and pathology information. Ultrasound, computed tomography, and magnetic resonance imaging are capable of demonstrating physiologic and hepatic parenchymal abnormalities commonly seen in FALD. Several novel imaging techniques, including magnetic resonance elastography, are under study for use as biomarkers for FALD progression. Imaging has a central role in the detection and characterization of liver masses as benign or malignant. Benign FNH-like masses are commonly encountered; however, these can display atypical features and be mistaken for hepatocellular carcinoma (HCC). Fontan patients are at elevated risk for HCC, which is a feared complication and has a poor prognosis in this population. While imaging screening for HCC is widely advocated, no consensus has been reached regarding an optimal surveillance regimen.


Subject(s)
Carcinoma, Hepatocellular , Liver Diseases , Liver Neoplasms , Humans , Carcinoma, Hepatocellular/pathology , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery , Liver Diseases/diagnostic imaging , Liver/diagnostic imaging , Ultrasonography , Fibrosis , Liver Cirrhosis