ABSTRACT
Postmenopausal osteoporosis arises from imbalanced osteoclast and osteoblast activity, and mounting evidence suggests a role for the osteoimmune system in bone homeostasis. Bisphosphonates (BPs) are antiresorptive agents, but their treatment failure rate can be as high as 40%. Here, we performed single-cell RNA sequencing on peripheral immune cells from carefully selected postmenopausal women: non-osteoporotic, osteoporosis improved after BP treatment, and BP-failed cases. We found an increase in myeloid cells in patients with osteoporosis (specifically, T cell receptor+ macrophages). Furthermore, lymphoid lineage cells varied significantly, notably elevated natural killer (NK) cells in the BP-failed group. Moreover, we provide extensive lists of candidate biomarkers within the immune cells that exhibit condition-dependent differences. Cell-cell interaction analysis revealed osteoporosis- and BP-failure-specific cellular information flows. These findings deepen our insight into osteoporosis pathology and enhance our understanding of the role of immune heterogeneity in postmenopausal osteoporosis and BP treatment failure.
Subject(s)
Bone Density Conservation Agents, Postmenopausal Osteoporosis, Osteoporosis, Humans, Female, Diphosphonates/pharmacology, Diphosphonates/therapeutic use, Postmenopausal Osteoporosis/drug therapy, Postmenopausal Osteoporosis/genetics, Bone Density, Bone Density Conservation Agents/pharmacology, Bone Density Conservation Agents/therapeutic use, Osteoporosis/drug therapy, Osteoporosis/genetics, Gene Expression Profiling
ABSTRACT
Interleukin-7 (IL-7) availability determines the size and proliferative state of the resting T cell pool. However, the mechanisms that regulate steady-state IL-7 amounts are unclear. Using experimental lymphopenic mouse models and IL-7-induced homeostatic proliferation to measure IL-7 availability in vivo, we found that radioresistant cells were the source of IL-7 for both CD4+ and CD8+ T cells. Hematopoietic lineage cells, although irrelevant as a source of IL-7, were primarily responsible for limiting IL-7 availability via their expression of IL-7R. Unexpectedly, innate lymphoid cells were found to have a potent influence on IL-7 amounts in the primary and secondary lymphoid tissues. These results demonstrate that IL-7 homeostasis is achieved through consumption by multiple subsets of innate and adaptive immune cells.
Subject(s)
Hematopoietic Stem Cells/physiology, Interleukin-7/metabolism, Lymphocytes/immunology, Lymphopenia/immunology, T Lymphocytes/physiology, Adaptive Immunity, Animals, Cell Proliferation, Cultured Cells, Animal Disease Models, Homeostasis, Humans, Innate Immunity, Interleukin-7/genetics, Interleukin-7/immunology, Mice, Inbred C57BL Mice, Knockout Mice, Radiation Tolerance, Interleukin-7 Receptors/genetics, Interleukin-7 Receptors/metabolism
ABSTRACT
PURPOSE OF REVIEW: As artificial intelligence and machine learning technologies continue to develop, they are being increasingly used to improve the scientific understanding and clinical care of patients with severe disorders of consciousness following acquired brain damage. Here we review recent studies that utilized these techniques to reduce diagnostic and prognostic uncertainty in disorders of consciousness and to better characterize patients' responses to novel therapeutic interventions. RECENT FINDINGS: Most papers have focused on differentiating between unresponsive wakefulness syndrome and minimally conscious state, utilizing artificial intelligence to better analyze functional neuroimaging and electroencephalography data. They often proposed new features using conventional machine learning rather than deep learning algorithms. In studies predicting the outcome of patients with disorders of consciousness, recovery was most often defined by the Glasgow Outcome Scale, and traditional machine learning techniques were used in most cases. Machine learning has also been employed to predict the effects of novel therapeutic interventions (e.g., zolpidem and transcranial direct current stimulation). SUMMARY: Artificial intelligence and machine learning can assist in clinical decision-making, including the diagnosis, prognosis, and therapy of patients with disorders of consciousness. The performance of these models can be expected to improve significantly with the use of deep learning techniques.
Subject(s)
Artificial Intelligence, Consciousness Disorders, Machine Learning, Humans, Consciousness Disorders/diagnosis, Consciousness Disorders/physiopathology, Electroencephalography/methods
ABSTRACT
Following the worldwide surge in mpox (monkeypox) in 2022, cases have persisted in Asia, including South Korea; sexual contact is presumed to be the predominant mode of transmission, with a discernible rise in prevalence among immunocompromised patients. Treatment with drugs such as tecovirimat can give rise to drug-resistant mutations, presenting obstacles to treatment. This study aimed to ascertain the presence of tecovirimat-related resistance mutations through genomic analysis of monkeypox virus isolated from a reported case involving prolonged viral shedding in South Korea. Here, tecovirimat-resistant mutations, previously identified in the B.1 clade, were observed in the B.1.3 clade, which is predominant in South Korea. These mutations exhibited diverse patterns across different samples from the same patient, reflecting the varied distribution of viral subpopulations in different anatomical regions. The A290V and A288P mutant strains we isolated hold promise for elucidating these mechanisms, enabling a comprehensive analysis of viral pathogenesis, replication strategies, and host interactions. Our findings imply that acquired drug-resistance mutations may present a challenge to individual patient treatment. Moreover, they have the potential to give rise to transmitted drug-resistance mutations, thereby imposing a burden on the public health system. Consequently, meticulous genomic surveillance among immunocompromised patients, as conducted in this research, is of paramount importance.
Subject(s)
Benzamides, Immunocompromised Host, Humans, Virus Shedding, Isoindoles, Mutation, Republic of Korea
ABSTRACT
OBJECTIVE: This study compares baseline clinical characteristics, physical function testing, and patient-reported outcomes for patients undergoing primary cytoreductive surgery versus neoadjuvant chemotherapy, with the goal of better understanding unique patient needs at diagnosis. METHODS: Patients with suspected advanced stage (IIIC/IV) epithelial ovarian cancer undergoing either primary cytoreductive surgery or neoadjuvant chemotherapy were enrolled in a single-institution, non-randomized prospective behavioral intervention trial of prehabilitation. Baseline clinical characteristics were abstracted. Physical function was evaluated using the Short Physical Performance Battery, Fried Frailty Index, gait speed, and grip strength. Patient-reported outcomes were evaluated using Patient-Reported Outcomes Measurement Information System metrics and the Perceived Stress Scale. RESULTS: There were no significant differences in demographics or clinical characteristics between cohorts at enrollment, with the exception of performance status, clinical stage, and albumin. While gait speed and grip strength were lower amongst neoadjuvant chemotherapy patients, there were no significant differences in physical function using the Short Physical Performance Battery and Fried Frailty Index. Patients in the neoadjuvant chemotherapy cohort reported decreased perception of physical function and increased fatigue on Patient-Reported Outcomes Measurement Information System metrics. A larger proportion of patients in the neoadjuvant cohort reported severe levels of emotional distress and anxiety, as well as greater perceived stress at diagnosis. CONCLUSIONS: Our findings suggest that patients undergoing neoadjuvant chemotherapy for advanced ovarian cancer present with increased psychosocial distress and decreased perception of physical function at diagnosis and may benefit most from early introduction of supportive care.
ABSTRACT
PURPOSE: Clinical benefits result from electronic patient-reported outcome (ePRO) systems that enable remote symptom monitoring. Although clinically useful, real-time alert notifications for severe or worsening symptoms can overburden nurses. Thus, we aimed to algorithmically identify likely non-urgent alerts that could be suppressed. METHODS: We evaluated alerts from the PRO-TECT trial (Alliance AFT-39) in which oncology practices implemented remote symptom monitoring. Patients completed weekly at-home ePRO symptom surveys, and nurses received real-time alert notifications for severe or worsening symptoms. During parts of the trial, patients and nurses each indicated whether alerts were urgent or could wait until the next visit. We developed an algorithm for suppressing alerts based on patient assessment of urgency and model-based predictions of nurse assessment of urgency. RESULTS: 593 patients participated (median age = 64 years, 61% female, 80% white, 10% reported never using computers/tablets/smartphones). Patients completed 91% of expected weekly surveys. 34% of surveys generated an alert, and 59% of alerts prompted immediate nurse actions. Patients considered 10% of alerts urgent. Of the remaining cases, nurses considered alerts urgent more often when patients reported any worsening symptom compared to the prior week (33% of alerts with versus 26% without any worsening symptom, p = 0.009). The algorithm identified 38% of alerts as likely non-urgent that could be suppressed with acceptable discrimination (sensitivity = 80%, 95% CI [76%, 84%]; specificity = 52%, 95% CI [49%, 55%]). CONCLUSION: An algorithm can identify remote symptom monitoring alerts likely to be considered non-urgent by nurses, and may assist in fostering nurse acceptance and implementation feasibility of ePRO systems.
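The two-step suppression logic described above can be sketched as follows. This is a hypothetical illustration, not the PRO-TECT algorithm itself; the function names, probability threshold, and example data are all invented:

```python
# Hypothetical sketch of the two-step alert-suppression rule: an alert is
# suppressed only when the patient marks it non-urgent AND a model's
# predicted probability that a nurse would deem it urgent is low.
# The 0.2 threshold is illustrative, not a PRO-TECT parameter.

def suppress_alert(patient_says_urgent: bool,
                   predicted_nurse_urgency: float,
                   threshold: float = 0.2) -> bool:
    """Return True if the alert can be held until the next visit."""
    if patient_says_urgent:
        return False  # patient-flagged urgency always passes through
    return predicted_nurse_urgency < threshold

alerts = [
    {"patient_urgent": True,  "p_nurse": 0.10},
    {"patient_urgent": False, "p_nurse": 0.05},
    {"patient_urgent": False, "p_nurse": 0.45},
]
suppressed = [suppress_alert(a["patient_urgent"], a["p_nurse"]) for a in alerts]
```

The design mirrors the paper's safeguard that patient-reported urgency is never overridden by the model; only patient-cleared alerts are candidates for suppression.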
Subject(s)
Algorithms, Patient-Reported Outcome Measures, Humans, Female, Male, Middle Aged, Aged, Neoplasms, Surveys and Questionnaires, Adult
ABSTRACT
BACKGROUND: We derived meaningful individual-level change thresholds for worsening in selected Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE®) items and their composite scores. METHODS: We used two data sources: the PRO-TECT trial (Alliance AFT-39), which collected PRO-CTCAE data from adults with advanced cancer at 26 United States (U.S.) community oncology practices, and the PRO-CTCAE validation study, which collected PRO-CTCAE data from adults undergoing chemotherapy or radiation therapy at nine U.S. cancer centers or community oncology practices. Both studies administered selected PRO-CTCAE items and EORTC QLQ-C30 scales. Conceptually relevant QLQ-C30 domains were used as anchors to estimate meaningful change thresholds for deterioration in corresponding PRO-CTCAE items and their composite scores. Items or composites with a |ρ| ≥ 0.30 correlation with QLQ-C30 scales were included. Changes in PRO-CTCAE scores and composites were estimated for patients who met or exceeded a 10-point deterioration on the corresponding QLQ-C30 scale. Change scores were computed between baseline and the 3-month timepoint in PRO-TECT, and in the PRO-CTCAE validation study between baseline and a single follow-up visit that occurred between 1 and 7 weeks later. For each PRO-CTCAE item, change scores could range from -4 to 4; for a composite, change scores could range from -3 to 3. RESULTS: Change scores in QLQ-C30 and PRO-CTCAE were available for 406 and 792 patients in PRO-TECT and the validation study, respectively. Across QLQ-C30 scales, the proportion of patients with a 10-point or greater worsening on QLQ-C30 ranged from 15% to 30% in the PRO-TECT data and 13% to 34% in the validation data. Across PRO-CTCAE items, anchor-based meaningful change estimates for deterioration ranged from 0.05 to 0.30 (mean 0.19) in the PRO-TECT data and from 0.19 to 0.53 (mean 0.36) in the validation data.
For composites, they ranged from 0.06 to 0.27 (mean 0.17) in the PRO-TECT data and 0.22 to 0.51 (mean 0.37) in the validation data. CONCLUSION: In both datasets, the minimal meaningful individual-level change threshold for worsening was one point for all items and composite scores. ClinicalTrials.gov: NCT03249090 (AFT-39), NCT02158637 (MC1091).
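The anchor-based estimation described above can be illustrated with a minimal sketch: among patients whose anchor scale (QLQ-C30) worsened by at least 10 points, the mean change on the target item is taken as the meaningful change estimate. The function, sign conventions, and data below are illustrative assumptions, not study values:

```python
# Toy anchor-based meaningful-change estimate. Convention assumed here:
# worsening is a NEGATIVE change on a QLQ-C30 function scale and a
# POSITIVE change on a PRO-CTCAE item. Data are fabricated.

def anchor_based_threshold(records, anchor_cutoff=10):
    """records: list of (qlq_c30_change, pro_ctcae_change) pairs.
    Returns the mean target-item change among anchor-deteriorated patients."""
    deteriorated = [p for q, p in records if q <= -anchor_cutoff]
    return sum(deteriorated) / len(deteriorated)

data = [(-20, 1), (-10, 0), (-15, 0), (0, 0), (5, -1)]
estimate = anchor_based_threshold(data)  # mean over the 3 deteriorated patients
```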
ABSTRACT
BACKGROUND/AIMS: The Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE®) was developed to capture symptomatic adverse events from the patient perspective. We aim to describe statistical properties of PRO-CTCAE items and summary scores and to provide evidence for recommendations regarding PRO-CTCAE administration and reporting. METHODS: Using data from the PRO-CTCAE validation study (NCT02158637), prevalence, means, and standard deviations of PRO-CTCAE items, composite scores, and mean and maximum scores across attributes (frequency, severity, and/or interference) of symptomatic adverse events were calculated. For each adverse event, correlations and agreement between attributes, correlations between attributes and composite scores, and correlations between composite, mean, and maximum scores were estimated. RESULTS: PRO-CTCAE items were completed by 899 patients with various cancer types. Most patients reported experiencing one or more adverse events, the most prevalent being fatigue (87.7%), sad/unhappy feelings (66.0%), anxiety (63.6%), pain (63.2%), insomnia (61.8%), and dry mouth (60.0%). Attributes were moderately to strongly correlated within an adverse event (r = 0.53 to 0.77, all p < 0.001) but not fully concordant (κ_weighted = 0.26 to 0.60, all p < 0.001), with interference demonstrating the lowest mean scores and prevalence among attributes of the same adverse event. Attributes were moderately to strongly correlated with composite scores (r = 0.67 to 0.97, all p < 0.001). Composite scores were moderately to strongly correlated with mean and maximum scores for the same adverse event (r = 0.69 to 0.94, all p < 0.001). Correlations between composite scores of different adverse events varied widely (r = 0.04 to 0.68) but were moderate to strong for conceptually related adverse events.
CONCLUSIONS: Results provide evidence for PRO-CTCAE administration and reporting recommendations that the full complement of attributes be administered for each adverse event, and that attributes as well as summary scores be reported.
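The distinction drawn above between correlation and agreement across attributes of one adverse event can be illustrated with a small sketch. The paired frequency/interference scores below are fabricated, and the study's weighted κ statistic is replaced here by simple percent agreement for brevity:

```python
import math

# Correlation vs agreement for two attributes of one adverse event,
# each scored 0-4 as in PRO-CTCAE. Data are invented for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

freq         = [0, 1, 2, 3, 4, 2, 1, 3]
interference = [0, 0, 1, 2, 3, 2, 1, 2]
r = pearson_r(freq, interference)
agreement = sum(f == i for f, i in zip(freq, interference)) / len(freq)
```

With these toy scores the attributes are strongly correlated (r ≈ 0.93) yet agree exactly on only 3 of 8 patients, mirroring the abstract's point that high correlation does not imply concordance.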
ABSTRACT
BACKGROUND: Cardiac arrest (CA) is one of the leading causes of death among patients in the intensive care unit (ICU). Although many CA prediction models with high sensitivity have been developed to anticipate CA, their practical application has been challenging due to a lack of generalization and validation. Additionally, the heterogeneity among patients in different ICU subtypes has not been adequately addressed. OBJECTIVE: This study aims to propose a clinically interpretable ensemble approach for the timely and accurate prediction of CA within 24 hours, regardless of patient heterogeneity, including variations across different populations and ICU subtypes. Additionally, we conducted patient-independent evaluations to emphasize the model's generalization performance and analyzed interpretable results that can be readily adopted by clinicians in real time. METHODS: Patients were retrospectively analyzed using data from the Medical Information Mart for Intensive Care-IV (MIMIC-IV) and the eICU Collaborative Research Database (eICU-CRD). To address the underperformance of existing models, we constructed our framework using feature sets based on vital signs, multiresolution statistical analysis, and the Gini index, with a 12-hour window to capture the unique characteristics of CA. We extracted 3 types of features from each database to compare CA prediction performance between high-risk patient groups from MIMIC-IV and patients without CA from eICU-CRD. After feature extraction, we developed a tabular network (TabNet) model using feature screening with cost-sensitive learning. To assess real-time CA prediction performance, we used 10-fold leave-one-patient-out cross-validation and a cross-dataset method. We evaluated MIMIC-IV and eICU-CRD across different cohort populations and ICU subtypes within each database. Finally, external validation using the eICU-CRD and MIMIC-IV databases was conducted to assess the model's generalization ability.
The decision mask of the proposed method was used to capture the interpretability of the model. RESULTS: The proposed method outperformed conventional approaches across different cohort populations in both MIMIC-IV and eICU-CRD. Additionally, it achieved higher accuracy than baseline models for various ICU subtypes within both databases. The interpretable prediction results can enhance clinicians' understanding of CA prediction by serving as a statistical comparison between non-CA and CA groups. Next, we tested the eICU-CRD and MIMIC-IV data sets using models trained on MIMIC-IV and eICU-CRD, respectively, to evaluate generalization ability. The results demonstrated superior performance compared with baseline models. CONCLUSIONS: Our novel framework for learning unique features provides stable predictive power across different ICU environments. Most of the interpretable global information reveals statistical differences between CA and non-CA groups, demonstrating its utility as an indicator for clinical decisions. Consequently, the proposed CA prediction system is a clinically validated algorithm that enables clinicians to intervene early based on CA prediction information and can be applied to clinical trials in digital health.
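The patient-independent evaluation idea, in which records from one patient never appear in both training and test sets, can be sketched as a grouped split by patient ID. This is a simplified stand-in for the study's 10-fold leave-one-patient-out protocol; the data and round-robin fold-assignment rule are illustrative:

```python
# Grouped cross-validation split: whole patients, not individual records,
# are assigned to folds, so no patient leaks across the train/test boundary.
# The round-robin assignment and toy records are illustrative only.

def group_folds(records, n_folds):
    """records: list of (patient_id, features). Returns n_folds lists of
    records in which each patient appears in exactly one fold."""
    patients = sorted({pid for pid, _ in records})
    fold_of = {pid: i % n_folds for i, pid in enumerate(patients)}
    folds = [[] for _ in range(n_folds)]
    for pid, feats in records:
        folds[fold_of[pid]].append((pid, feats))
    return folds

records = [("p1", 0.1), ("p1", 0.2), ("p2", 0.3), ("p3", 0.4), ("p2", 0.5)]
folds = group_folds(records, 2)
patients_per_fold = [{pid for pid, _ in f} for f in folds]
```

Each fold then serves in turn as the held-out test set, which is what makes the reported performance "patient-independent".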
Subject(s)
Cardiac Arrest, Intensive Care Units, Machine Learning, Humans, Retrospective Studies, Cardiac Arrest/mortality, Male, Female, Middle Aged, Aged
ABSTRACT
[This corrects the article DOI: .].
ABSTRACT
We aimed to characterize the genomes of monkeypox virus isolates from the Far East, providing insights into viral transmission and evolution. Genomic analysis was conducted on 8 isolates obtained from patients with monkeypox virus disease in the Republic of Korea between May 2022 and early 2023. These isolates were classified into Clade IIb. Distinct lineages, including B.1.1, A.2.1, and B.1.3, were observed in the 2022 and 2023 isolates, with only the B.1.3 lineage detected in the six isolates from 2023. These genetic features were specific to Far East isolates (the Republic of Korea, Japan, and Taiwan), distinguishing them from the diverse lineages found in the Americas, Europe, Africa, and Oceania. In early 2023, the identification of the B.1.3 lineage of monkeypox virus in six patients with no overseas travel history is considered an indicator of the potential initiation of local transmission in the Republic of Korea.
Subject(s)
Viral Genome, Monkeypox virus, Mpox, Phylogeny, Republic of Korea/epidemiology, Humans, Mpox/epidemiology, Mpox/virology, Monkeypox virus/genetics, Monkeypox virus/isolation & purification, Epidemics, Genomics/methods, Male, Viral RNA/genetics, Female
ABSTRACT
Photon avalanching nanoparticles (ANPs) exhibit extremely nonlinear upconverted emission valuable for subdiffraction imaging, nanoscale sensing, and optical computing. Avalanching has been demonstrated with Tm³⁺-, Pr³⁺-, or Nd³⁺-doped nanocrystals, but their emission is limited to a few wavelengths and materials. Here, we utilize Gd³⁺-assisted energy migration to tune the emission wavelengths of Tm³⁺-sensitized ANPs and generate highly nonlinear emission from Eu³⁺, Tb³⁺, Ho³⁺, and Er³⁺ ions. The upconversion intensities of these spectrally discrete ANPs scale with nonlinearity factor s = 10–17 under 1064 nm excitation at power densities as low as 7 kW cm⁻². This strategy for imprinting avalanche behavior on remote emitters can be extended to fluorophores adjacent to ANPs, as we demonstrate with CdS/CdSe/CdS core/shell/shell quantum dots. ANPs with rationally designed energy transfer networks provide the means to transform conventional linear emitters into highly nonlinear ones, expanding the use of photon avalanching in biological, chemical, and photonic applications.
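For readers unfamiliar with how a nonlinearity factor s is estimated: for avalanching emission that scales as I ∝ P^s, s is the slope of log(I) versus log(P). The sketch below fits that slope by least squares on synthetic data generated with s = 12; real ANP measurements would replace the synthetic intensities:

```python
import math

# Estimate the nonlinearity factor s as the log-log slope of emission
# intensity vs excitation power density. Data below are synthetic,
# generated with s = 12 purely to illustrate the fit.

def nonlinearity_factor(powers, intensities):
    xs = [math.log(p) for p in powers]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # least-squares slope

powers = [7.0, 8.0, 9.0, 10.0]         # e.g. kW/cm^2
intensities = [p ** 12 for p in powers]  # synthetic I = P^12
s = nonlinearity_factor(powers, intensities)
```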
ABSTRACT
Recent studies have identified a urinary microbiome, dispelling the myth of urine sterility. Intravesical bacillus Calmette-Guérin (BCG) therapy is the preferred treatment for intermediate to high-risk non-muscle-invasive bladder cancer (BCa), although resistance occurs in 30-50% of cases. Progression to muscle-invasive cancer necessitates radical cystectomy. Our research uses 16S rRNA gene sequencing to investigate how the urinary microbiome influences BCa and its response to BCG therapy. Urine samples were collected via urethral catheterization from patients with benign conditions and non-muscle-invasive BCa, all of whom underwent BCG therapy. We utilized 16S rRNA gene sequencing to analyze the bacterial profiles and metabolic pathways in these samples. These pathways were validated using a real metabolite dataset, and we developed predictive models for malignancy and BCG response. In this study, 87 patients participated, including 29 with benign diseases and 58 with BCa. We noted distinct bacterial compositions between benign and malignant samples, indicating the potential role of the toluene degradation pathway in mitigating BCa development. Responders to BCG had differing microbial compositions and higher quinolone synthesis than non-responders, with two Bifidobacterium species being prevalent among responders, associated with prolonged recurrence-free survival. Additionally, we developed highly accurate predictive models for malignancy and BCG response. Our study delved into the mechanisms behind malignancy and BCG responses by focusing on the urinary microbiome and metabolic pathways. We pinpointed specific beneficial microbes and developed clinical models to predict malignancy and BCG therapy outcomes. These models can track recurrence and facilitate early predictions of treatment responses.
Subject(s)
BCG Vaccine, Microbiota, 16S Ribosomal RNA, Urinary Bladder Neoplasms, Humans, Urinary Bladder Neoplasms/microbiology, Urinary Bladder Neoplasms/drug therapy, BCG Vaccine/therapeutic use, Male, Female, 16S Ribosomal RNA/genetics, Aged, Middle Aged, Bacteria/genetics, Bacteria/classification
ABSTRACT
BACKGROUND: Nurses in neurointensive care units (NCUs) commonly use physical restraint (PR) to prevent adverse events such as unplanned removal of devices (URDs) or falls. However, PR use should be based on evidence-based decisions, as it has drawbacks. Unfortunately, there is a lack of research-based PR protocols to support decision-making for nurses, especially for neurocritical patients. AIM: This study developed a restraint decision tree for neurocritical patients (RDT-N) to assist nurses in making PR decisions. We assessed its effectiveness in reducing PR use and adverse events. STUDY DESIGN: This study employed a baseline and post-intervention test design at an NCU with 19 beds and 45 nurses in a tertiary hospital in a metropolitan city in South Korea. Two hundred thirty-seven adult patients were admitted during the study period. During the intervention, nurses were trained on the RDT-N. PR use and adverse events during the baseline and post-intervention periods were compared. RESULTS: Post-intervention, the total number of restrained patients decreased (from 20.7% to 16.3%; χ2 = 7.68, p = .006), and the average number of PRs applied per restrained patient decreased (from 2.42 to 1.71; t = 5.74, p < .001). The most frequently used PR type changed from extremity cuff to mitten (χ2 = 397.62, p < .001). No falls occurred during the study periods. In contrast, URDs at baseline were 18.67 cases per 1000 patient days in the high-risk group and 5.78 cases per 1000 patient days in the moderate-risk group, whereas no URD cases were reported post-intervention. CONCLUSIONS: The RDT-N effectively reduced PR use and adverse events. Its application can enhance patient-centred care based on individual conditions and potential risks in NCUs. RELEVANCE TO CLINICAL PRACTICE: Nurses can use the RDT-N to assess the need for PR when caring for neurocritical patients, reducing PR use and adverse events.
Subject(s)
Decision Trees, Intensive Care Units, Physical Restraint, Humans, Physical Restraint/statistics & numerical data, Physical Restraint/psychology, Republic of Korea, Male, Female, Middle Aged, Critical Care Nursing, Adult
ABSTRACT
In longitudinal studies, it is not uncommon to make multiple attempts to collect a measurement after baseline. Recording whether these attempts are successful provides useful information for the purposes of assessing missing data assumptions. This is because measurements from subjects who provide the data after numerous failed attempts may differ from those who provide the measurement after fewer attempts. Previous models for these designs were parametric and/or did not allow sensitivity analysis. For the former, there are always concerns about model misspecification and for the latter, sensitivity analysis is essential when conducting inference in the presence of missing data. Here, we propose a new approach which minimizes issues with model misspecification by using Bayesian nonparametrics for the observed data distribution. We also introduce a novel approach for identification and sensitivity analysis. We re-analyze the repeated attempts data from a clinical trial involving patients with severe mental illness and conduct simulations to better understand the properties of our approach.
Subject(s)
Mental Disorders, Statistical Models, Humans, Bayes Theorem, Longitudinal Studies
ABSTRACT
PURPOSE: When conducting trials aimed at improving cancer-related and/or cancer treatment-related toxicities, it is important to determine the best means of measuring patients' symptoms. METHODS: The authors of this manuscript have extensive experience with the conduct of symptom-control clinical trials. This experience is utilized to provide insight into the best means of measuring symptoms caused by cancer and/or cancer therapy. RESULTS: Patient-reported outcome data are preferable for measuring bothersome symptoms and for determining toxicities caused by treatment approaches, offering more accurate and detailed information than health care practitioners' recorded impressions of patient experiences. Well-validated, patient-friendly measures are recommended when available. When such measures are not readily available, face-valid, single-item numerical rating scales are effective instruments for documenting both treatment trial outcomes and cancer treatment side effects/toxicities. CONCLUSION: The use of numerical rating scales is an effective means of measuring symptoms caused by cancer or cancer treatments and/or alleviated by symptom-control treatment approaches.
Subject(s)
Neoplasms, Humans, Neoplasms/complications, Neoplasms/therapy, Treatment Outcome
ABSTRACT
BACKGROUND: This study compares classical test theory (CTT) and item response theory (IRT) frameworks for determining reliable change. Reliable change, followed by anchoring to change in categorically distinct responses on a criterion measure, is a useful method to detect meaningful change on a target measure. METHODS: Adult cancer patients were recruited from five cancer centers. Baseline and follow-up assessments at 6 weeks were administered. We investigated short forms derived from PROMIS® item banks on anxiety, depression, fatigue, pain intensity, pain interference, and sleep disturbance. We detected reliable change using the reliable change index (RCI). We derived the T-scores corresponding to the RCI calculated under the IRT and CTT frameworks using PROMIS® short forms. For changes that were reliable, meaningful change was identified using patient-reported change on PRO-CTCAE by at least one level. For both CTT and IRT approaches, we applied one-sided tests to detect reliable improvement or worsening using the RCI. We compared the percentages of patients with reliable change and reliable/meaningful change. RESULTS: The amount of change in T-score corresponding to an RCI_CTT of 1.65 ranged from 5.1 to 9.2 depending on the domain. The amount of change corresponding to an RCI_IRT of 1.65 varied across the score range, and the minimum change ranged from 3.0 to 8.2 depending on the domain. Across domains, RCI_CTT and RCI_IRT classified 80% to 98% of the patients consistently. When there was disagreement, RCI_IRT tended to identify more patients as having reliably changed than RCI_CTT if scores at both timepoints were in the range of 43 to 78 for anxiety, 45 to 70 for depression, 38 to 80 for fatigue, 35 to 78 for sleep disturbance, and 48 to 74 for pain interference, owing to smaller standard errors in these ranges under the IRT method. The CTT method found more changes than IRT for the pain intensity domain, which used a shorter form.
Using RCI_CTT, 22% to 66% of patients had reliable change in either direction depending on the domain, and among these patients, 62% to 83% had meaningful change. Using RCI_IRT, 37% to 68% had reliable change in either direction, and among these patients, 62% to 81% had meaningful change. CONCLUSION: Applying the two-step criteria demonstrated in this study, we determined how much change is needed to declare reliable change at different levels of baseline scores. We offer reference values for the percentage of patients who meaningfully change for investigators using the PROMIS instruments in oncology.
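For orientation, the CTT form of the reliable change index (in the Jacobson-Truax style) can be sketched as below. The standard deviation and reliability values are illustrative, not the PROMIS short-form estimates used in the study; note that with SD = 10 and reliability = 0.90, the T-score change needed to reach RCI = 1.65 lands inside the 5.1 to 9.2 range reported above:

```python
import math

# Classical-test-theory reliable change index: observed change divided by
# the standard error of the difference score. SD and reliability here are
# illustrative values, not the study's per-domain estimates.

def rci_ctt(score_t1, score_t2, sd_baseline, reliability):
    se_diff = sd_baseline * math.sqrt(2.0) * math.sqrt(1.0 - reliability)
    return (score_t2 - score_t1) / se_diff

# T-score change needed to reach RCI = 1.65 with SD = 10, reliability = 0.90:
needed = 1.65 * 10 * math.sqrt(2.0) * math.sqrt(1.0 - 0.90)
```

Under IRT, the same logic applies but the standard error varies across the score range, which is why the IRT thresholds in the study differ by baseline score.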
Subject(s)
Neoplasms, Quality of Life, Adult, Humans, Quality of Life/psychology, Pain, Anxiety/diagnosis, Patient-Reported Outcome Measures, Fatigue
ABSTRACT
Childhood intimate partner violence (IPV) exposure increases the likelihood of internalizing and externalizing problems. There is substantial variability in children's outcomes following IPV exposure, but the reasons behind this are unclear, particularly among preschool-age children. The current study aimed to examine the direct and indirect effects of IPV on preschoolers' mental health through parent factors (parenting and parental depression), exploring child temperament as a potential moderator of the relation between IPV and child outcomes. Participants were 186 children (85 girls) and their parents living in the United States. Data were initially collected when children were age three, with follow-up at ages four and six. Both parents' baseline IPV perpetration had adverse effects on child outcomes. Mothers' IPV was associated with greater paternal depression, paternal overreactivity, and maternal laxness, whereas fathers' IPV was associated with more paternal overreactivity. Only paternal depression mediated the effect of mothers' IPV on child outcomes. Parenting did not mediate, nor did child temperament moderate, the relation between IPV and child outcomes. The results highlight the need to address parental mental health in families experiencing IPV and underscore the need for further exploration of individual- and family-level mechanisms of adjustment following IPV exposure.
ABSTRACT
BACKGROUND: Cardiac arrest (CA) is the leading cause of death in critically ill patients. Clinical research has shown that early identification of CA reduces mortality. Algorithms capable of predicting CA with high sensitivity have been developed using multivariate time series data. However, these algorithms suffer from a high rate of false alarms, and their results are not clinically interpretable. OBJECTIVE: We propose an ensemble approach using multiresolution statistical features and cosine similarity-based features for the timely prediction of CA. Furthermore, this approach provides clinically interpretable results that can be adopted by clinicians. METHODS: Patients were retrospectively analyzed using data from the Medical Information Mart for Intensive Care-IV database and the eICU Collaborative Research Database. Based on the multivariate vital signs of a 24-hour time window for adults diagnosed with heart failure, we extracted multiresolution statistical and cosine similarity-based features. These features were used to develop gradient-boosting decision trees. To address the class imbalance between CA and non-CA cases, we adopted cost-sensitive learning. Then, 10-fold cross-validation was performed to check the consistency of the model performance, and the Shapley additive explanations algorithm was used to capture the overall interpretability of the proposed model. Next, external validation using the eICU Collaborative Research Database was performed to check the generalization ability. RESULTS: The proposed method yielded an overall area under the receiver operating characteristic curve (AUROC) of 0.86 and an area under the precision-recall curve (AUPRC) of 0.58. In terms of timely prediction, the proposed model achieved an AUROC above 0.80 for predicting CA events up to 6 hours in advance. The proposed method simultaneously improved precision and sensitivity to increase the AUPRC, which reduced the number of false alarms while maintaining high sensitivity.
This result indicates that the predictive performance of the proposed model is superior to that of models reported in previous studies. Next, we demonstrated the effect of feature importance on the clinical interpretability of the proposed method and compared feature effects between the non-CA and CA groups. Finally, external validation was performed using the eICU Collaborative Research Database, yielding an AUROC of 0.74 and an AUPRC of 0.44 in a general intensive care unit population. CONCLUSIONS: The proposed framework can provide clinicians with more accurate CA prediction results and reduce false alarm rates, as shown through internal and external validation. In addition, clinically interpretable prediction results can facilitate clinicians' understanding. Furthermore, the similarity of vital sign changes can provide insight into temporal pattern changes preceding CA in patients with heart failure-related diagnoses. Our system is therefore feasible for routine clinical use, and in future work the proposed CA prediction system can be developed into and verified as a clinically mature application in the digital health field.
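As a minimal illustration of the cost-sensitive learning step described above, the sketch below upweights the rare event class when fitting a gradient boosting classifier. The synthetic data, feature count, and inverse-frequency weighting scheme are illustrative assumptions, not the study's actual features, parameters, or pipeline.

```python
# Sketch: cost-sensitive gradient boosting for a rare-event (CA-like) task.
# All data here are synthetic stand-ins for the study's multiresolution features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 8))  # stand-ins for vital-sign-derived features
# Rare positive class, weakly linked to the first two features
p = 1 / (1 + np.exp(-(X[:, 0] + X[:, 1] - 3.3)))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive learning: upweight the minority (event) class so that
# missing an event costs more than raising a false alarm.
pos_weight = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)
w = np.where(y_tr == 1, pos_weight, 1.0)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_tr, y_tr, sample_weight=w)
auroc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUROC: {auroc:.2f}")
```

In the same spirit as the study, discrimination is then summarized with AUROC (and, for imbalanced data, AUPRC would be the more informative companion metric).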
Asunto(s)
Paro Cardíaco , Insuficiencia Cardíaca , Adulto , Humanos , Inteligencia Artificial , Estudios Retrospectivos , Paro Cardíaco/diagnóstico , Paro Cardíaco/terapia , Insuficiencia Cardíaca/diagnóstico , Hospitales
RESUMEN
BACKGROUND: Growing public interest in and awareness of the significance of sleep are driving demand for sleep monitoring at home. In addition to various commercially available wearable and nearable devices, sound-based sleep staging via deep learning is emerging as a promising alternative because of its convenience and potential accuracy. However, sound-based sleep staging has so far been studied only with in-laboratory sound data. Real-world sleep environments (homes) contain abundant background noise, in contrast to quiet, controlled environments such as laboratories. Sound-based sleep staging at home has not been investigated, although it is essential for practical daily use. The main challenges are the lack of home audio data annotated with sleep stages and the expected high cost of acquiring enough such data to train a large-scale neural network. OBJECTIVE: This study aims to develop and validate a deep learning method for sound-based sleep staging using audio recordings obtained from various uncontrolled home environments. METHODS: To overcome the lack of home data with known sleep stages, we adopted advanced training techniques and combined home data with hospital data. Model training consisted of 3 components: (1) the original supervised learning using 812 pairs of hospital polysomnography (PSG) and audio recordings, plus 2 newly adopted components; (2) transfer learning from hospital to home sounds by adding 829 smartphone audio recordings made at home; and (3) consistency training using augmented hospital sound data, created by adding 8255 home noise recordings to the hospital audio recordings. In addition, an independent test set was built by collecting 45 pairs of overnight PSG and smartphone audio recordings at home to examine the performance of the trained model. RESULTS: The accuracy of the model on our test set was 76.2% (63.4% for wake, 64.9% for rapid eye movement [REM], and 83.6% for non-REM).
The macro F1-score and mean per-class sensitivity were 0.714 and 0.706, respectively. Performance was robust across subgroups defined by age, gender, BMI, and sleep apnea severity (accuracy 73.4%-79.4%). In an ablation study, we evaluated the contribution of each component. While supervised learning alone achieved an accuracy of 69.2% on home sound data, adding consistency training increased accuracy by a larger margin (+4.3%) than adding transfer learning (+0.1%). The best performance was obtained when both transfer learning and consistency training were adopted (+7.0%). CONCLUSIONS: This study shows that sound-based sleep staging is feasible for home use. By adopting 2 advanced techniques (transfer learning and consistency training), the deep learning model robustly predicts sleep stages from sounds recorded in various uncontrolled home environments, using only a smartphone and no special equipment.
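The consistency-training component described above can be sketched in miniature: the model should produce similar stage predictions for a clean hospital recording and a copy augmented with home noise, and the training objective penalizes divergence between the two. The toy linear "model", feature dimensions, and noise scale below are illustrative assumptions, not the study's actual network or data.

```python
# Sketch of the consistency-training objective: penalize divergence between
# predictions on clean input and on a noise-augmented copy of the same input.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def model(x, W):
    """Toy linear classifier over 4 sleep stages (wake / REM / light / deep)."""
    return softmax(x @ W)

def consistency_loss(x_clean, x_noisy, W):
    """Mean KL divergence between predictions on clean and augmented input."""
    p = model(x_clean, W)  # predictions on clean audio act as the target
    q = model(x_noisy, W)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl))

# Stand-ins for hospital audio features and a home-noise-augmented copy
x = rng.normal(size=(32, 16))
home_noise = 0.3 * rng.normal(size=x.shape)
W = rng.normal(size=(16, 4))

loss_same = consistency_loss(x, x, W)            # identical inputs: loss is 0
loss_aug = consistency_loss(x, x + home_noise, W)  # augmentation: loss > 0
print(loss_same, loss_aug)
```

During training this term would be added to the supervised loss, pushing the network toward predictions that are invariant to home background noise.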