Results 1 - 20 of 136
1.
Development ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39157903

ABSTRACT

Ciliopathies are characterized by the absence or dysfunction of primary cilia. Although cognitive impairments are a common feature of ciliopathies, how cilia dysfunction affects neuronal development has not been characterized in detail. Here, we show that primary cilium-mediated signaling is required cell-autonomously by neurons during neural circuit formation. In particular, a functional primary cilium is crucial during axonal pathfinding for the switch in responsiveness of axons at a choice point, or intermediate target. Using different animal models and in vivo, ex vivo, and in vitro experiments, we provide evidence for a critical role of primary cilium-mediated signaling in long-range axon guidance. The primary cilium on the cell body of commissural neurons transduces long-range guidance signals sensed by growth cones navigating an intermediate target. Extending our finding that Shh is required for the rostral turn of post-crossing commissural axons, we suggest a model implicating the primary cilium in Shh signaling upstream of a transcriptional change of axon guidance receptors, which in turn mediate the repulsive response of post-crossing commissural axons to floorplate-derived Shh.

3.
J Surg Res ; 301: 504-511, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39042979

ABSTRACT

INTRODUCTION: Large language models like Chat Generative Pre-Trained Transformer (ChatGPT) are increasingly used in academic writing. Faculty may consider the use of artificial intelligence (AI)-generated responses a form of cheating. We sought to determine whether general surgery residency faculty could detect AI- versus human-written responses to a text prompt, hypothesizing that faculty would not be able to reliably differentiate the two. METHODS: Ten essays were generated using a text prompt, "Tell us in 1-2 paragraphs why you are considering the University of Rochester for General Surgery residency" (current trainees: n = 5, ChatGPT: n = 5). Ten blinded faculty reviewers rated essays (ten-point Likert scale) on the following criteria: desire to interview, relevance to the general surgery residency, overall impression, and whether AI- or human-generated; scores and identification error rates were compared between the groups. RESULTS: There were no differences between groups in percent of total points (ChatGPT 66.0 ± 13.5%, human 70.0 ± 23.0%, P = 0.508) or identification error rates (ChatGPT 40.0 ± 35.0%, human 20.0 ± 30.0%, P = 0.175). All but one of the essays were identified incorrectly by at least two reviewers. Essays identified as human-generated received higher overall impression scores (area under the curve: 0.82 ± 0.04, P < 0.01). CONCLUSIONS: Whether use of AI tools for academic purposes should constitute academic dishonesty is controversial. We demonstrate that human- and AI-generated essays are similar in quality, but that there is bias against presumed AI-generated essays. Because faculty cannot reliably differentiate human- from AI-generated essays, this bias may be misdirected. AI tools are becoming ubiquitous, and their use is not easily detected. Faculty should expect these tools to play increasing roles in medical education.
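
As a rough illustration of the two analyses reported here (a between-group score comparison and an AUC for overall impression by presumed origin), the following sketch uses invented ratings and judgments; none of the numbers are study data.

    import numpy as np
    from scipy import stats
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Invented reviewer ratings: 10 reviewers x 5 essays per group
    chatgpt = rng.normal(6.6, 1.4, 50)
    human = rng.normal(7.0, 2.3, 50)

    # Between-group comparison of scores (two-sided t-test)
    t_stat, p_value = stats.ttest_ind(chatgpt, human)

    # Discrimination of overall impression by presumed origin:
    # invented judgments, biased toward scoring higher-rated essays as "human"
    impression = np.concatenate([chatgpt, human])
    presumed_human = (impression + rng.normal(0, 1.5, 100) > 6.8).astype(int)
    auc = roc_auc_score(presumed_human, impression)
    print(f"p = {p_value:.3f}, AUC = {auc:.2f}")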

4.
PLoS Med ; 21(6): e1004375, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38829821

ABSTRACT

BACKGROUND: In Australian remote communities, First Nations children with otitis media (OM)-related hearing loss are disproportionately at risk of developmental delay and poor school performance, compared to those with normal hearing. Our objective was to compare OM-related hearing loss in children randomised to one of 2 pneumococcal conjugate vaccine (PCV) formulations. METHODS AND FINDINGS: In 2 sequential parallel, open-label, randomised controlled trials (the PREVIX trials), eligible infants were first allocated 1:1:1 at age 28 to 38 days to standard or mixed PCV schedules, then at age 12 months allocated 1:1 to PCV13 (13-valent pneumococcal conjugate vaccine, +P) or PHiD-CV10 (10-valent pneumococcal Haemophilus influenzae protein D conjugate vaccine, +S). Here, we report prevalence and level of hearing loss outcomes in the +P and +S groups at 6-monthly scheduled assessments from age 12 to 36 months. From March 2013 to September 2018, 261 infants were enrolled and 461 hearing assessments were performed. Prevalence of hearing loss was 78% (25/32) in the +P group and 71% (20/28) in the +S group at baseline, declining to 52% (28/54) in the +P group and 56% (33/59) in the +S group at age 36 months. At the primary endpoint age of 18 months, prevalence of moderate (disabling) hearing loss was 21% (9/42) in the +P group and 41% (20/49) in the +S group (difference -19%; 95% confidence interval (CI) [-38, -1]; p = 0.07), and prevalence of no hearing loss was 36% (15/42) in the +P group and 16% (8/49) in the +S group (difference 19%; 95% CI [2, 37]; p = 0.05). At subsequent time points, prevalence of moderate hearing loss remained lower in the +P group: differences -3% (95% CI [-23, 18]; p = 1.00) at age 24 months, -12% (95% CI [-30, 6]; p = 0.29) at age 30 months, and -9% (95% CI [-23, 5]; p = 0.25) at age 36 months. A major limitation was the small sample size, and hence low power to reach statistical significance, reducing confidence in the effect size. CONCLUSIONS: In this study, we observed a high prevalence and persistence of moderate (disabling) hearing loss throughout early childhood. We found a lower prevalence of moderate hearing loss and correspondingly higher prevalence of no hearing loss in the +P group, which may have substantial benefits for high-risk children, their families, and society, but warrants further investigation. TRIAL REGISTRATION: ClinicalTrials.gov NCT01735084 and NCT01174849.
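
The group differences above are simple two-proportion risk differences; a minimal sketch with a Wald 95% CI reproduces the primary-endpoint numbers (9/42 vs 20/49). The function name is ours, and the trial's exact interval method may differ.

    from math import sqrt

    def risk_difference(x1, n1, x2, n2, z=1.96):
        """Risk difference p1 - p2 with a Wald 95% confidence interval."""
        p1, p2 = x1 / n1, x2 / n2
        diff = p1 - p2
        se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return diff, diff - z * se, diff + z * se

    # Moderate hearing loss at 18 months: +P 9/42 vs +S 20/49
    diff, lo, hi = risk_difference(9, 42, 20, 49)
    print(f"difference {diff:.0%} (95% CI [{lo:.0%}, {hi:.0%}])")
    # -> difference -19% (95% CI [-38%, -1%]), matching the report above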


Subject(s)
Hearing Loss , Otitis Media , Pneumococcal Vaccines , Humans , Infant , Pneumococcal Vaccines/administration & dosage , Pneumococcal Vaccines/therapeutic use , Hearing Loss/epidemiology , Australia/epidemiology , Child, Preschool , Female , Male , Otitis Media/epidemiology , Otitis Media/prevention & control , Prevalence , Vaccines, Conjugate/administration & dosage , Pneumococcal Infections/prevention & control , Pneumococcal Infections/epidemiology , Immunization Schedule
5.
J Pediatr Surg ; 59(7): 1378-1387, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38631997

ABSTRACT

CONTEXT: Neighborhood and built environment encompass one key area of the Social Determinants of Health (SDOH) and are frequently assessed using area-level indices. OBJECTIVE: We sought to systematically review the pediatric surgery literature for use of commonly applied area-level indices and to compare their utility for prediction of outcomes. DATA SOURCES: A literature search was conducted using PubMed, Ovid MEDLINE, Ovid MEDLINE Epub Ahead of Print, PsycInfo, and an artificial intelligence search tool (1/2013-2/2023). STUDY SELECTION: Inclusion required pediatric surgical patients in the US, a surgical intervention performed, and use of an area-level metric. DATA EXTRACTION: Extraction domains included study, patient, and procedure characteristics. RESULTS: The Area Deprivation Index is the most consistent and commonly accepted index. It is also the most granular, as it uses Census Block Groups. The Child Opportunity Index is less granular (Census Tract) but incorporates pediatric-specific predictors of risk. Results with the Social Vulnerability Index, Neighborhood Deprivation Index, and Neighborhood Socioeconomic Status were less consistent. LIMITATIONS: All studies were retrospective, and quality varied from good to fair. CONCLUSIONS: While each index has strengths and limitations, standardization on ideal metric(s) for the pediatric surgical population will help build the inferential power needed to move from understanding the role of SDOH to building meaningful interventions toward equity in care. TYPE OF STUDY: Systematic Review. LEVEL OF EVIDENCE: Level III.


Subject(s)
Built Environment , Perioperative Care , Social Determinants of Health , Humans , Child , Perioperative Care/methods , Perioperative Care/standards , Residence Characteristics , Neighborhood Characteristics , Surgical Procedures, Operative/statistics & numerical data
6.
J Clin Exp Hepatol ; 14(4): 101363, 2024.
Article in English | MEDLINE | ID: mdl-38495462

ABSTRACT

Rejection following liver transplantation continues to affect transplant recipients, although rates have decreased over time with advances in immunosuppression management. The diagnosis of rejection remains challenging, with liver biopsy still the reference standard. Proper classification of rejection type and severity is imperative, as this guides management and ultimately graft preservation. Future areas of promise include non-invasive testing for the detection of rejection, to reduce the morbidity associated with invasive testing, and further advances in immunosuppression management to reduce treatment toxicities while minimizing rejection-related morbidity.

7.
J Am Coll Surg ; 239(2): 134-144, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38357984

ABSTRACT

BACKGROUND: Assigning trauma team activation (TTA) levels for trauma patients is a classification task that machine learning models can help optimize. However, performance depends on the "ground-truth" labels used for training. Our purpose was to investigate 2 ground truths, the Cribari matrix and the Need for Trauma Intervention (NFTI), for labeling training data. STUDY DESIGN: Data were retrospectively collected from the institutional trauma registry and electronic medical record, including all pediatric patients (age <18 years) who triggered a TTA (January 2014 to December 2021). Three ground-truth labelings were used for the training data: (1) Cribari (Injury Severity Score >15 = full activation), (2) NFTI (positive for any of 6 criteria = full activation), and (3) the union of Cribari+NFTI (either positive = full activation). RESULTS: Of 1,366 patients triaged by trained staff, 143 (10.47%) were considered undertriaged using Cribari, 210 (15.37%) using NFTI, and 273 (19.99%) using Cribari+NFTI. NFTI and Cribari+NFTI were more sensitive to undertriage in patients with penetrating mechanisms of injury (p = 0.006), specifically stab wounds (p = 0.014), compared with Cribari, but Cribari indicated overtriage in more patients who required prehospital airway management (p < 0.001) or CPR (p = 0.017) and who had lower mean Glasgow Coma Scale scores on presentation (p < 0.001). The mortality rate was higher in the Cribari overtriage group (7.14%, n = 9) compared with NFTI and Cribari+NFTI (0.00%, n = 0, p = 0.005). CONCLUSIONS: To prioritize patient safety, Cribari+NFTI appears best for training a machine learning algorithm to predict the TTA level.
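
The three labeling schemes reduce to simple predicates; a minimal sketch follows, with field names that are our own assumptions rather than the registry's actual schema.

    def cribari(patient) -> bool:
        """Cribari: full activation if Injury Severity Score > 15."""
        return patient["iss"] > 15

    def nfti(patient) -> bool:
        """NFTI: full activation if any of the six criteria is positive."""
        return any(patient["nfti_criteria"])  # list of six booleans

    def cribari_or_nfti(patient) -> bool:
        """Union label: full activation if either scheme is positive."""
        return cribari(patient) or nfti(patient)

    example = {"iss": 12, "nfti_criteria": [False, True, False, False, False, False]}
    print(cribari(example), nfti(example), cribari_or_nfti(example))
    # -> False True True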


Subject(s)
Injury Severity Score , Triage , Wounds and Injuries , Humans , Child , Retrospective Studies , Wounds and Injuries/therapy , Wounds and Injuries/diagnosis , Wounds and Injuries/mortality , Female , Male , Child, Preschool , Adolescent , Triage/standards , Triage/methods , Machine Learning , Trauma Centers , Patient Care Team/organization & administration , Infant , Registries
8.
Circulation ; 149(12): 905-913, 2024 03 19.
Article in English | MEDLINE | ID: mdl-37830200

ABSTRACT

BACKGROUND: Life's Simple 7 (LS7) is an easily calculated and interpreted metric of cardiovascular health based on 7 domains: smoking, diet, physical activity, body mass index, blood pressure, cholesterol, and fasting glucose. The Life's Essential 8 (LE8) metric was subsequently introduced, adding sleep metrics and revising the previous 7 domains. Although calculating LE8 requires additional information, we hypothesized that it would be a more reliable index of cardiovascular health. METHODS: Both the LS7 and LE8 metrics yield scores with higher values indicating lower risk. These were calculated among 11,609 Black and White participants free of baseline cardiovascular disease (CVD) in the Reasons for Geographic and Racial Differences in Stroke study, enrolled in 2003 to 2007 and followed for a median of 13 years. Differences in 10-year risk of incident CVD (coronary heart disease or stroke) were calculated as a function of LS7 and LE8 scores using Kaplan-Meier and proportional hazards analyses. Differences in incident CVD discrimination were quantified by the difference in the c-statistic. RESULTS: For both LS7 and LE8, the 10-year risk was approximately 5% for participants around the 99th percentile of scores, and approximately 4 times higher (20%) for participants around the first percentile. Comparing LS7 to LE8, 10-year risk was nearly identical for individuals at the same relative position in the score distribution. For example, the "cluster" of 2013 participants with an LS7 score of 7 was at the 35.8th percentile of the LS7 score distribution and had an estimated 10-year CVD risk of 8.4% (95% CI, 7.2%-9.8%). At a similar location in the LE8 distribution, the 1457 participants with an LE8 score of 60 ± 2.5, at the 39.4th percentile of LE8 scores, had a 10-year CVD risk of 8.5% (95% CI, 7.1%-10.1%), similar to the cluster defined by LS7. The age-race-sex adjusted c-statistic was 0.691 (95% CI, 0.667-0.705) for the LS7 model and 0.695 (95% CI, 0.681-0.709) for LE8 (P for difference, 0.12). CONCLUSIONS: Both LS7 and LE8 were associated with incident CVD, with the discrimination of the 2 indices practically indistinguishable. As the simpler metric, LS7 may be favored for use by the general population and clinicians.
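
The percentile-matching comparison above can be sketched with scipy's percentileofscore; the score distributions below are simulated placeholders, not REGARDS data.

    import numpy as np
    from scipy.stats import percentileofscore

    rng = np.random.default_rng(1)
    ls7 = rng.integers(0, 15, 11609)   # LS7 scores span roughly 0-14
    le8 = rng.uniform(0, 100, 11609)   # LE8 scores span 0-100

    # Locate a score "cluster" within each distribution
    pct_ls7 = percentileofscore(ls7, 7)
    pct_le8 = percentileofscore(le8, 60)
    print(f"LS7 = 7 sits at the {pct_ls7:.1f}th percentile; "
          f"LE8 = 60 sits at the {pct_le8:.1f}th percentile")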


Subject(s)
Cardiovascular Diseases , Stroke , Humans , United States/epidemiology , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/epidemiology , Risk Factors , Smoking/epidemiology , Heart Disease Risk Factors , Stroke/diagnosis , Stroke/epidemiology
9.
J Pediatr Surg ; 59(1): 74-79, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37865573

ABSTRACT

BACKGROUND: The assignment of trauma team activation levels can be conceptualized as a classification task. Machine learning models can be used to optimize classification predictions. Our purpose was to demonstrate proof of concept for a machine learning tool for predicting trauma team activation levels in pediatric patients with traumatic injuries. METHODS: Following IRB approval, we retrospectively collected data from the institutional trauma registry and electronic medical record at our Pediatric Trauma Center for all patients (age <18 y) who triggered a trauma team activation (1/2014-12/2021), including demographics, mechanisms of injury, comorbidities, pre-hospital interventions, numeric variables, and the six "Need for Trauma Intervention (NFTI)" criteria. Three machine learning models (Logistic Regression, Random Forest, Support Vector Machine) were tested 1000 times in separate trials, using the union of the Cribari and NFTI metrics as ground truth (Injury Severity Score >15 or positive for any of 6 NFTI criteria = full activation). Model performance was quantified and compared to emergency department (ED) staff. RESULTS: ED staff had 75% accuracy, an area under the curve (AUC) of 0.73 ± 0.04, and an F1 score of 0.49. The best performing machine learning model, the support vector machine, had 80% accuracy, an AUC of 0.81 ± 4.1e-5, and an F1 score of 0.80, with less variance than the other models and ED staff. CONCLUSIONS: All machine learning models outperformed ED staff on all performance metrics. These results suggest that data-driven methods can optimize trauma team activations in the ED, with potential improvements in both patient safety and hospital resource utilization. TYPE OF STUDY: Economic/Decision Analysis or Modeling Studies. LEVEL OF EVIDENCE: II.
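
A minimal sklearn sketch of the model comparison described here; synthetic features and labels stand in for the registry variables and the Cribari+NFTI ground truth.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1366, 20))                        # stand-in predictors
    y = (X[:, 0] + rng.normal(size=1366) > 1).astype(int)  # stand-in labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "support vector machine": SVC(probability=True, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        prob = model.predict_proba(X_te)[:, 1]
        print(f"{name}: acc={accuracy_score(y_te, pred):.2f} "
              f"AUC={roc_auc_score(y_te, prob):.2f} F1={f1_score(y_te, pred):.2f}")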


Subject(s)
Emergency Service, Hospital , Triage , Humans , Child , Retrospective Studies , Triage/methods , Trauma Centers , Machine Learning
10.
Nutr Clin Pract ; 39(1): 109-116, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38030572

ABSTRACT

A key component of nutrition support is considering immunosuppressive agents, their interactions with nutrients, and how the side effects of these medications influence nutrition support. The immunosuppression of the solid organ-transplant recipient involves the individualized titration of multiple therapeutic agents to prevent allorecognition and, thus, rejection of the transplanted organ. Induction immunosuppression comprises the agents used at the time of transplant to prevent early rejection. Maintenance immunosuppression typically consists of oral medications taken for life. Regular therapeutic monitoring of immunosuppression is necessary to balance the risk of rejection with that of infections and malignancy. In the acute-care setting, multidisciplinary collaboration, including pharmacy and nutrition, is needed to optimize the route of administration, titration, and side effects of immunosuppression. Long-term nutrition management after transplant is also vital to avoid exacerbating the adverse effects of immunosuppressive therapies, including diabetes mellitus, hypertension, dyslipidemia, obesity, and bone loss. This review summarizes common immunosuppressive agents currently utilized in solid organ-transplant recipients and factors that may influence decisions on nutrition support.


Subject(s)
Organ Transplantation , Transplant Recipients , Humans , Graft Rejection/prevention & control , Graft Rejection/drug therapy , Immunosuppression Therapy/adverse effects , Immunosuppressive Agents/adverse effects , Organ Transplantation/adverse effects
11.
Pediatrics ; 152(6)2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37941523

ABSTRACT

OBJECTIVES: To determine whether the rate of severe intraventricular hemorrhage (IVH) or death among preterm infants receiving placental transfusion with umbilical cord milking (UCM) is noninferior to that with delayed cord clamping (DCC). METHODS: Noninferiority randomized controlled trial comparing UCM versus DCC in preterm infants born at 28 to 32 weeks' gestation, recruited between June 2017 and September 2022 from 19 university and private medical centers in 4 countries. The primary outcome was Grade III/IV IVH or death, evaluated at a 1% noninferiority margin. RESULTS: Among 1019 infants (UCM n = 511 and DCC n = 508), all completed the trial from birth through initial hospitalization (mean gestational age 31 weeks, 44% female). For the primary outcome, 7 of 511 (1.4%) infants randomized to UCM developed severe IVH or died, compared to 7 of 508 (1.4%) infants randomized to DCC (rate difference 0.01%, 95% confidence interval: -1.4% to 1.4%, P = .99). CONCLUSIONS: In this randomized controlled trial of UCM versus DCC among preterm infants born between 28 and 32 weeks' gestation, there was no difference in the rates of severe IVH or death. UCM may be a safe alternative to DCC in premature infants born at 28 to 32 weeks who require resuscitation.
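
The reported interval can be reproduced with simple Wald risk-difference arithmetic; a self-contained sketch follows (the trial's prespecified noninferiority procedure may differ from this simplification).

    from math import sqrt

    # Severe IVH or death: UCM 7/511 vs DCC 7/508
    p1, p2 = 7 / 511, 7 / 508
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / 511 + p2 * (1 - p2) / 508)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"rate difference {diff:.2%} (95% CI {lo:.1%} to {hi:.1%})")
    # -> rate difference -0.01% (95% CI -1.4% to 1.4%); noninferiority is
    #    then judged against the prespecified 1% margin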


Subject(s)
Infant, Premature , Umbilical Cord Clamping , Infant, Newborn , Humans , Female , Infant , Pregnancy , Male , Umbilical Cord/surgery , Placenta , Gestational Age , Cerebral Hemorrhage/etiology , Constriction
12.
Rheumatology (Oxford) ; 62(Suppl_4): iv8-iv13, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37855679

ABSTRACT

OBJECTIVES: This study had two aims: (i) to investigate outcomes of medication tapering in stable RA patients on biologic or targeted synthetic disease-modifying anti-rheumatic drugs (bDMARDs/tsDMARDs) and conventional synthetic DMARDs (csDMARDs) in a real-world prospective cohort; and (ii) to evaluate possible predictors of flare with medication taper. METHODS: A prospective cohort of patients with RA in sustained remission or low disease activity while on stable bDMARDs/tsDMARDs, with or without csDMARDs, for at least 6 months underwent medication tapering or stopping and was tracked for 2 years. Patients were evaluated for flares in four groups: no taper, only bDMARD/tsDMARD taper, only csDMARD taper, and both csDMARD and bDMARD/tsDMARD taper. RESULTS: The RHEUMTAP cohort included 131 patients who met the eligibility criteria, of whom 52 underwent a medication taper. Flares were experienced by 15 patients in the taper groups and two in the no-taper group. Patients undergoing any taper or stop were 10 times more likely to experience a flare than those not tapered (HR 10.43, 95% CI 2.98-36.53, P = 0.0002). The group tapering bDMARDs/tsDMARDs had a 31 times higher risk of flare (HR 31.43, 95% CI 6.35-155.55, P < 0.0001) than the no-taper group. Patients tapering both csDMARDs and bDMARDs/tsDMARDs had an 18 times higher risk of flare than the no-taper group (HR 18.45, 95% CI 2.55-133.37, P = 0.0039). The csDMARD-only taper group had a 91% lower risk of flare than the bDMARD/tsDMARD taper group (HR 0.09, 95% CI 0.01-0.69, P = 0.0213). CONCLUSION: In our real-world prospective RHEUMTAP cohort study on the outcomes of different medication tapering groups in well-controlled RA, patients who tapered or stopped bDMARDs/tsDMARDs, with or without background therapy, were more likely to experience a flare than patients who did not taper any medications and those who tapered only csDMARDs.
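
The hazard ratios above come from time-to-flare modelling; a minimal Cox proportional hazards sketch using the lifelines library follows, with a simulated data frame in place of the RHEUMTAP cohort.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 131
    df = pd.DataFrame({
        "taper_b_ts": rng.integers(0, 2, n),  # bDMARD/tsDMARD tapered (0/1)
        "taper_cs": rng.integers(0, 2, n),    # csDMARD tapered (0/1)
        "months": rng.uniform(1, 24, n),      # follow-up to flare or censoring
        "flare": rng.integers(0, 2, n),       # 1 = flare observed
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="flare")
    cph.print_summary()  # the exp(coef) column gives the hazard ratios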


Subject(s)
Antirheumatic Agents , Arthritis, Rheumatoid , Biological Products , Humans , Prospective Studies , Cohort Studies , Arthritis, Rheumatoid/drug therapy , Antirheumatic Agents/therapeutic use , Risk , Biological Products/therapeutic use
13.
BMC Public Health ; 23(1): 2020, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848929

ABSTRACT

BACKGROUND: The impact of young drivers' motor vehicle crashes (MVCs) is substantial: young drivers constitute only 14% of the US population but contribute to 30% of all fatal and nonfatal injuries due to MVCs and 35% ($25 billion) of all medical and lost-productivity costs. The current best-practice policy approach, Graduated Driver Licensing (GDL) programs, is effective primarily by delaying licensure and restricting crash opportunity. There is a critical need for interventions that target families to complement GDL. Consequently, we will determine whether a comprehensive parent-teen intervention, the Drivingly Program, reduces teens' risk of a police-reported MVC in the first 12 months of licensure. Drivingly is based on strong preliminary data and targets multiple risk and protective factors by delivering intervention content to teens, and their parents, at the learner and early independent licensing phases. METHODS: Eligible participants are aged 16 to 17.33 years, have a learner's permit in Pennsylvania, have practiced no more than 10 hours, and have at least one parent/caregiver supervising. Participants are recruited from the general community and through the Children's Hospital of Philadelphia's Recruitment Enhancement Core. Teen-parent dyads are randomized 1:1 to Drivingly or a usual-practice control group. Drivingly participants receive access to an online curriculum, which has 16 lessons for parents and 13 for teens, and an online logbook; website usage is tracked. Parents receive two brief psychoeducational sessions with a trained health coach, and teens receive an on-road driving intervention and feedback session after 4.5 months in the study plus access to DriverZed, the AAA Foundation's online hazard training program. Teens complete surveys at baseline, 3 months post-baseline, at licensure, and 3, 6, and 12 months post-licensure. Parents complete surveys at baseline, 3 months post-baseline, and at teen licensure. The primary endpoint is police-reported MVCs within the first 12 months of licensure; crash data are provided by the Pennsylvania Department of Transportation. DISCUSSION: Most evaluations of teen driver safety programs have significant methodological limitations, including lack of random assignment, insufficient statistical power, and reliance on self-reported MVCs instead of police reports. Results will identify pragmatic and sustainable solutions for MVC prevention in adolescence. TRIAL REGISTRATION: ClinicalTrials.gov # NCT03639753.


Subject(s)
Automobile Driving , Adolescent , Humans , Accidents, Traffic/prevention & control , Licensure , Parents , Transportation
14.
Clin Cardiol ; 46(11): 1418-1425, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37605862

ABSTRACT

BACKGROUND: The association of hypertension (HTN) severity and control with the risk of incident atrial fibrillation (AF) is unclear. HYPOTHESIS: Increased HTN severity and poorer blood pressure control would be associated with an increased risk of incident AF. METHODS: This analysis included 9485 participants (mean age 63 ± 8 years; 56% women; 35% Black). Participants were stratified into six mutually exclusive groups at baseline: normotension (n = 1629), prehypertension (n = 704), controlled HTN (n = 2224), uncontrolled HTN (n = 4123), controlled apparent treatment-resistant hypertension (aTRH) (n = 88), and uncontrolled aTRH (n = 717). Incident AF was ascertained at the follow-up visit, defined by either electrocardiogram or self-reported medical history of a physician diagnosis. Multivariable logistic regression analyses adjusted for demographic and clinical variables. RESULTS: Over an average follow-up of 9.3 years, 868 incident AF cases were detected. Compared to those with normotension, incident AF risk was highest for those with aTRH (controlled aTRH: odds ratio (OR) 2.95, 95% confidence interval (CI) 1.60-5.43; uncontrolled aTRH: OR 2.47, 95% CI 1.76-3.48). The increase in AF risk was smaller for those on no more than three antihypertensive agents, regardless of blood pressure control (controlled: OR 1.72, 95% CI 1.30-2.29; uncontrolled: OR 1.56, 95% CI 1.14-2.13). CONCLUSIONS: The risk of developing AF is increased in all individuals with HTN, and is highest in those with aTRH regardless of blood pressure control. A more aggressive approach that focuses on lifestyle and pharmacologic measures to either prevent HTN or better control it during earlier stages may be particularly beneficial in reducing related AF risk.
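
A sketch of the kind of multivariable logistic model described, with simulated data and a reduced covariate set standing in for the study's variables:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 9485
    groups = ["normotension", "prehtn", "ctrl_htn", "unctrl_htn",
              "ctrl_atrh", "unctrl_atrh"]
    df = pd.DataFrame({
        "group": rng.choice(groups, n),
        "age": rng.normal(63, 8, n),
        "af": rng.integers(0, 2, n),   # simulated incident-AF indicator
    })

    # Odds ratios for each HTN group relative to normotension, adjusted for age
    fit = smf.logit("af ~ C(group, Treatment('normotension')) + age",
                    data=df).fit(disp=0)
    print(np.exp(fit.params))      # odds ratios
    print(np.exp(fit.conf_int()))  # 95% CIs on the OR scale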


Subject(s)
Atrial Fibrillation , Hypertension , Stroke , Humans , Female , Middle Aged , Aged , Male , Atrial Fibrillation/diagnosis , Atrial Fibrillation/epidemiology , Atrial Fibrillation/complications , Race Factors , Risk Factors , Hypertension/drug therapy , Hypertension/epidemiology , Hypertension/complications , Antihypertensive Agents/therapeutic use , Stroke/epidemiology , Stroke/etiology , Stroke/prevention & control
16.
Am J Health Syst Pharm ; 80(21): 1542-1549, 2023 10 25.
Article in English | MEDLINE | ID: mdl-37471466

ABSTRACT

PURPOSE: Post-transplantation anemia (PTA) is common in kidney transplant recipients, and patients are frequently treated with erythropoiesis-stimulating agents such as darbepoetin alfa. The optimal dosing of darbepoetin alfa remains controversial. METHODS: This retrospective cohort study involved kidney transplant recipients who received darbepoetin alfa at 2 clinics. Patients were stratified into 2 groups: those who received a fixed dose of 200 µg and those who received a weight-based dose of 0.45 µg/kg. The dosing interval varied depending on clinical response, clinic visit timing, and the frequency allowed by insurance. The primary outcome was achieving a hemoglobin concentration of at least 10 g/dL without blood transfusion by 12 weeks after darbepoetin alfa initiation. RESULTS: Of the 110 patients in the study, 45% received weight-based dosing and 55% received fixed dosing. Darbepoetin alfa was initiated significantly earlier after transplantation in the fixed-dose group (median of 14 vs 20 days; P = 0.003). The weight-based group received more doses of darbepoetin alfa (median of 4 vs 2 doses; P = 0.002) yet had a significantly lower cumulative exposure (125 vs 590 µg; P < 0.001). The median time between doses was 9 days (interquartile range, 7-14 days) in the weight-based group and 12 days (7-32 days) in the fixed-dose group (P = 0.04). Patients in the weight-based group achieved the primary outcome more frequently, although the difference was not statistically significant (67.3% vs 47.5%; P = 0.059). There was no significant difference in secondary or safety outcomes between the groups. CONCLUSION: Weight-based and fixed dosing of darbepoetin alfa did not differ in the achievement of a hemoglobin concentration of at least 10 g/dL without blood transfusion at 12 weeks after initiation, with significantly lower cumulative darbepoetin alfa utilization in the weight-based group. Weight-based dosing of darbepoetin alfa in PTA appears to be safe and effective, with the potential for significant patient and health-system cost savings.
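
The arithmetic behind the exposure difference is straightforward; a small sketch of the two dosing rules follows (the rounding convention is our assumption, not the protocol's):

    def darbepoetin_dose_ug(weight_kg: float, strategy: str) -> float:
        """Per-dose amount under the two strategies compared above."""
        if strategy == "fixed":
            return 200.0                       # fixed 200 ug per dose
        if strategy == "weight_based":
            return round(0.45 * weight_kg, 1)  # 0.45 ug/kg per dose
        raise ValueError(f"unknown strategy: {strategy}")

    # For an 80-kg recipient: 36 ug weight-based vs 200 ug fixed, so even
    # with more doses (median 4 vs 2) cumulative exposure stays far lower
    # (reported median cumulative exposure: 125 vs 590 ug).
    print(darbepoetin_dose_ug(80, "weight_based"))  # 36.0
    print(darbepoetin_dose_ug(80, "fixed"))         # 200.0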


Subject(s)
Anemia , Hematinics , Kidney Transplantation , Humans , Darbepoetin alfa/adverse effects , Kidney Transplantation/adverse effects , Retrospective Studies , Anemia/diagnosis , Anemia/drug therapy , Anemia/etiology , Hemoglobins/analysis , Hemoglobins/therapeutic use , Hematinics/adverse effects , Treatment Outcome
17.
Bull Math Biol ; 85(6): 47, 2023 04 25.
Article in English | MEDLINE | ID: mdl-37186175

ABSTRACT

Mathematical modelling with differential equations is a standard approach to study and predict treatment outcomes for population-level and patient-specific responses. Fractional calculus has recently been applied to the mathematical modelling of tumour growth, but its use introduces complexities that may not be warranted. Here, we use patient data of radiation-treated tumours to discuss the benefits and limitations of introducing fractional derivatives into three standard models of tumour growth. The fractional derivative introduces a history dependence into the growth function, which requires a continuous death-rate term for radiation treatment. This newly proposed radiation-induced death-rate term improves computational efficiency in both ordinary and fractional derivative models. This computational speed-up will benefit common simulation tasks such as model parameterization and the construction and running of virtual clinical trials.
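
For orientation, a generic formulation of this kind of model (not necessarily the paper's exact equations): the Caputo fractional derivative of order $\alpha \in (0, 1)$ is

$$ {}^{C}D_t^{\alpha} V(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{V'(s)}{(t-s)^{\alpha}} \, ds, $$

so the instantaneous growth rate depends on the tumour's entire history. A fractional logistic growth model with a radiation-induced death-rate term $d(t)$ then reads

$$ {}^{C}D_t^{\alpha} V(t) = \lambda V(t) \left( 1 - \frac{V(t)}{K} \right) - d(t)\, V(t), $$

where $V$ is tumour volume, $\lambda$ the growth rate, and $K$ the carrying capacity; in the limit $\alpha \to 1$ the ordinary logistic model is recovered.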


Subject(s)
Models, Biological , Neoplasms , Humans , Mathematical Concepts , Neoplasms/radiotherapy , Models, Theoretical , Computer Simulation
18.
J Surg Res ; 290: 71-82, 2023 10.
Article in English | MEDLINE | ID: mdl-37210758

ABSTRACT

BACKGROUND: Short bowel syndrome is the most common cause of intestinal failure, with morbidity and mortality linked to remnant small intestine length. There is no current standard for noninvasive bowel length measurement. MATERIALS AND METHODS: The literature was systematically searched for articles describing measurements of small intestine length from radiographic studies. Inclusion required reporting intestinal length as an outcome and use of diagnostic imaging for length assessment compared to a ground truth. Two reviewers independently screened studies for inclusion, extracted data, and assessed study quality. RESULTS: Eleven studies met the inclusion criteria and reported small intestinal length measurement using four imaging modalities: barium follow-through, ultrasound, computed tomography, and magnetic resonance. Five barium follow-through studies reported variable correlations with intraoperative measurements (r = 0.43-0.93); most (3/5) reported underestimation of length. Ultrasound studies (n = 2) did not correlate with ground truths. Two computed tomography studies reported moderate-to-strong correlations with pathologic (r = 0.76) and intraoperative measurements (r = 0.99). Five studies of magnetic resonance showed moderate-to-strong correlations with intraoperative or postmortem measurements (r = 0.70-0.90). Vascular imaging software was used for measurements in two studies, and a segmentation algorithm in one. CONCLUSIONS: Noninvasive measurement of small intestine length is challenging. Three-dimensional imaging modalities reduce the risk of length underestimation, which is common with two-dimensional techniques; however, they also require longer times to perform length measurements. Automated segmentation has been trialed for magnetic resonance enterography, but this method does not translate directly to standard diagnostic imaging. While three-dimensional images are the most accurate for length measurement, they are limited in their ability to measure intestinal dysmotility, an important functional measure in patients with intestinal failure. Future work should validate automated segmentation and measurement software using standard diagnostic imaging protocols.


Subject(s)
Intestinal Failure , Short Bowel Syndrome , Humans , Barium , Intestine, Small/surgery , Short Bowel Syndrome/surgery , Magnetic Resonance Imaging/methods
20.
Eur J Nutr ; 62(6): 2441-2448, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37119297

ABSTRACT

BACKGROUND: We examined whether the risk of incident atrial fibrillation (AF) in a large, biracial, prospective cohort is lower in participants who adhere to heart-healthy dietary patterns and higher in participants who adhere to less heart-healthy diets. METHODS: Between 2003 and 2007, the REasons for Geographic and Racial Differences in Stroke (REGARDS) cohort study enrolled 30,239 Black and White Americans aged 45 years or older. Dietary patterns (convenience, plant-based, sweets, Southern, and alcohol and salads) and the Mediterranean diet score (MDS) were derived from food frequency questionnaire data. The primary outcome was incident AF at the 2013-2016 follow-up visit, defined by either electrocardiogram or self-reported medical history of a physician diagnosis. RESULTS: This study included 8977 participants (mean age 63 ± 8.3 years; 56% women; 30% Black) free of AF at baseline who completed the follow-up exam an average of 9.4 years later. A total of 782 incident AF cases were detected. In multivariable logistic regression analyses, neither the MDS (odds ratio (OR) per SD increment = 1.03; 95% confidence interval (CI) 0.95-1.11) nor the plant-based dietary pattern (OR per SD increment = 1.03; 95% CI 0.94-1.12) was associated with AF risk. Additionally, an increased AF risk was not associated with any of the less healthy dietary patterns. CONCLUSIONS: While specific dietary patterns have been associated with AF risk factors, our findings fail to show an association between dietary patterns and AF development.
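
The "OR per SD increment" above corresponds to a logistic model with a z-scored predictor; a minimal sketch with simulated data follows (the study's full covariate adjustment is omitted).

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 8977
    mds = rng.normal(4.5, 1.7, n)   # simulated Mediterranean diet scores
    af = rng.integers(0, 2, n)      # simulated incident-AF indicator

    mds_z = (mds - mds.mean()) / mds.std()   # z-score: coefficient is per SD
    fit = sm.Logit(af, sm.add_constant(mds_z)).fit(disp=0)
    or_per_sd = np.exp(fit.params[1])
    lo, hi = np.exp(fit.conf_int()[1])
    print(f"OR per SD increment = {or_per_sd:.2f} (95% CI {lo:.2f}-{hi:.2f})")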


Subject(s)
Atrial Fibrillation , Diet, Mediterranean , Stroke , Humans , Female , Middle Aged , Aged , Male , Atrial Fibrillation/epidemiology , Cohort Studies , Prospective Studies , Race Factors , Stroke/epidemiology , Risk Factors