1.
Clin Orthop Relat Res ; 482(9): 1710-1721, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38517402

ABSTRACT

BACKGROUND: Bone metastasis in advanced cancer is challenging because of pain, functional issues, and reduced life expectancy. Treatment planning is complex, with consideration of factors such as location, symptoms, and prognosis. Prognostic models help guide treatment choices, with Skeletal Oncology Research Group machine-learning algorithms (SORG-MLAs) showing promise in predicting survival for initial spinal metastases and extremity metastases treated with surgery or radiotherapy. Improved therapies extend patient lifespans, increasing the risk of subsequent skeletal-related events (SREs). Patients experiencing subsequent SREs often suffer from disease progression, indicating a deteriorating condition. For these patients, a thorough evaluation, including accurate survival prediction, is essential to determine the most appropriate treatment and to avoid aggressive surgery in patients with a poor survival likelihood. However, some variables in the SORG prediction model, such as tumor histology, visceral metastasis, and previous systemic therapies, might remain consistent between initial and subsequent SREs. Given the prognostic difference between patients with and without a subsequent SRE, the efficacy of established prognostic models, which were originally designed for individuals with an initial SRE, in addressing a subsequent SRE remains uncertain. Therefore, it is crucial to verify the model's utility for subsequent SREs. QUESTION/PURPOSE: We aimed to evaluate the reliability of the SORG-MLAs for survival prediction in patients undergoing surgery or radiotherapy for a subsequent SRE in whom both the initial and subsequent SREs occurred in the spine or extremities. METHODS: We retrospectively included 738 patients 20 years or older who received surgery or radiotherapy for initial and subsequent SREs at a tertiary referral center and a local hospital in Taiwan between 2010 and 2019. We excluded 74 patients whose initial SRE was in the spine and whose subsequent SRE occurred in the extremities, and 37 patients whose initial SRE was in the extremities and whose subsequent SRE was in the spine. The rationale was that separate SORG-MLAs were designed exclusively for patients with an initial spine metastasis and for those with an initial extremity metastasis, irrespective of whether they experienced metastatic events in other areas (for example, a patient experiencing an extremity SRE before his or her spinal SRE would still be regarded as a candidate for an initial spinal SRE). Because such patients were already included in previous validation studies, we excluded them to avoid overestimating our results. Five patients with malignant primary bone tumors and 38 patients in whom the origin of the metastasis could not be identified were also excluded, leaving 584 patients for analysis. The 584 included patients were categorized into two subgroups based on the location of the initial and subsequent SREs: the spine group (68% [399]) and the extremity group (32% [185]). No patients were lost to follow-up. Patient data at the time they presented with a subsequent SRE were collected, and survival predictions at this timepoint were calculated using the SORG-MLAs. Multiple imputation with the missForest technique was conducted five times to impute the missing proportions of each predictor.
The effectiveness of the SORG-MLAs was gauged through several statistical measures, including discrimination (measured by the area under the receiver operating characteristic curve [AUC]), calibration, overall performance (Brier score), and decision curve analysis. Discrimination refers to the model's ability to differentiate between those with the event and those without it. An AUC ranges from 0.5 to 1.0, with 0.5 indicating the worst discrimination and 1.0 indicating perfect discrimination; an AUC of 0.7 is considered clinically acceptable. Calibration is the comparison between the frequency of observed events and the predicted probabilities; in ideal calibration, the observed and predicted survival rates are congruent. The logarithm of the observed-to-expected survival ratio [log(O:E)] offers insight into the model's overall calibration by considering the total numbers of observed (O) and expected (E) events. The Brier score measures the mean squared difference between the predicted probability of possible outcomes for each individual and the observed outcomes, ranging from 0 to 1, with 0 indicating perfect overall performance and 1 indicating the worst performance. Because the prevalence of the outcome should also be considered, a null-model Brier score was calculated by assigning each patient a probability equal to the prevalence of the outcome (in this case, the actual survival rate). The benefit of the prediction model is determined by comparing its Brier score with that of the null model: a prediction model whose Brier score is lower than the null model's is deemed to have good overall performance. A decision curve analysis was performed to evaluate the "net benefit," which weighs the true-positive rate against the false-positive rate across a range of "threshold probabilities," the risk-to-benefit ratio of an intervention derived from a comprehensive clinical evaluation and a shared decision-making process. A good predictive model should yield a higher net benefit than the default strategies (treating all patients and treating no patients) across a range of threshold probabilities. RESULTS: For the spine group, the algorithms displayed acceptable AUC results (median AUCs of 0.69 to 0.72) for 42-day, 90-day, and 1-year survival predictions after treatment for a subsequent SRE. The extremity group showed median AUCs ranging from 0.65 to 0.73 for the corresponding survival periods. All Brier scores were lower than those of their null models, indicating good overall performance of the SORG-MLAs in both cohorts. The SORG-MLAs yielded a net benefit for both cohorts; however, they overestimated 1-year survival probabilities in patients with a subsequent SRE in the spine, with a median log(O:E) of -0.60 (95% confidence interval -0.77 to -0.42). CONCLUSION: The SORG-MLAs maintain satisfactory discriminatory capacity and offer considerable net benefit on decision curve analysis, indicating their continued viability as prediction tools in this clinical context. However, the algorithms overestimate 1-year survival rates for patients with a subsequent SRE of the spine, warranting particular caution in this patient group. Clinicians and surgeons should exercise caution when using the SORG-MLAs for survival prediction in these patients and remain aware of potential mispredictions when tailoring treatment plans, with a preference for less invasive treatments.
Ultimately, this study emphasizes the importance of refining prognostic algorithms and developing innovative tools for patients with subsequent SREs, because the life expectancy of patients with bone metastases continues to improve and healthcare providers will encounter these patients more often in daily practice. LEVEL OF EVIDENCE: Level III, prognostic study.
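
The performance metrics described in the abstract above (AUC, Brier score, null-model Brier score, log(O:E) calibration, and decision-curve net benefit) can be illustrated with a minimal sketch. The code below is not the SORG-MLA itself; the outcome labels, predicted probabilities, and threshold are hypothetical, and it assumes Python with NumPy and scikit-learn.

import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

# Hypothetical outcomes (1 = died within the prediction horizon) and predicted risks.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.8, 0.3, 0.6, 0.7, 0.2, 0.4, 0.9, 0.1])

auc = roc_auc_score(y_true, y_prob)       # discrimination: 0.5 (worst) to 1.0 (perfect)
brier = brier_score_loss(y_true, y_prob)  # overall performance: 0 (best) to 1 (worst)

# Null-model Brier score: every patient is assigned the outcome prevalence.
prevalence = y_true.mean()
brier_null = brier_score_loss(y_true, np.full_like(y_prob, prevalence))

# Calibration-in-the-large: log of the observed-to-expected event ratio.
log_oe = np.log(y_true.sum() / y_prob.sum())

# Net benefit at one threshold probability (decision curve analysis evaluates a range).
pt = 0.3
pred_pos = y_prob >= pt
tp = np.sum(pred_pos & (y_true == 1))
fp = np.sum(pred_pos & (y_true == 0))
net_benefit = tp / len(y_true) - fp / len(y_true) * (pt / (1 - pt))

print(f"AUC={auc:.2f} Brier={brier:.3f} null Brier={brier_null:.3f} "
      f"log(O:E)={log_oe:.2f} net benefit={net_benefit:.3f}")

A model whose Brier score falls below the null-model Brier score, and whose net benefit exceeds the treat-all and treat-none strategies across plausible thresholds, would be judged useful, mirroring the criteria applied in the study.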


Subject(s)
Bone Neoplasms , Humans , Bone Neoplasms/secondary , Bone Neoplasms/mortality , Male , Female , Middle Aged , Aged , Retrospective Studies , Reproducibility of Results , Machine Learning , Adult , Prognosis , Predictive Value of Tests , Disease Progression , Risk Assessment , Decision Support Techniques , Risk Factors
2.
J Formos Med Assoc ; 2024 Oct 17.
Article in English | MEDLINE | ID: mdl-39424535

ABSTRACT

AIMS: Hip fractures are a significant health concern, especially in the elderly. Hemiarthroplasty has been the preferred treatment for displaced femoral neck fractures, and ceramic femoral heads have recently become popular because of their claimed durability. This study aimed to determine the long-term outcomes associated with different implant choices in hemiarthroplasty. METHODS: The study sample included patients aged 50 years and older with an index femoral neck fracture admission and hip hemiarthroplasty, identified from Taiwan's National Health Insurance (NHI) claims data (2009-2019). To compare users of the two head types, we performed 1:2 matching of the ceramic group to the metal group according to age, gender, index year, and six major comorbidities. Cumulative incidence rates were assessed for revision, postoperative complications, and medical complications. Cause-specific hazard Cox models were used to estimate hazard ratios for the two implant groups. RESULTS: Among 47,158 patients, 2559 of the 2637 who received ceramic head hemiarthroplasty with a co-payment were successfully matched with 5118 patients receiving metal head prostheses fully covered by the NHI. Over a mean follow-up of 3.12 years, no significant difference was observed in revision rates between the ceramic and metal head groups. The ceramic head group demonstrated significantly lower risks of postoperative complications and of medical complications within 90 days than the metal head group. CONCLUSIONS: In hip hemiarthroplasty, ceramic head implants were associated with lower postoperative and medical complication rates than metal head implants, but there was no difference in revision rates between the two head types.
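
As a rough illustration of the matching and cause-specific Cox modelling described above, the sketch below pairs each ceramic-head patient with two metal-head controls on the stated matching variables and then fits a Cox model for revision, treating competing events as censored. The file name, column names, and lifelines-based workflow are assumptions for illustration, not the study's actual claims-data pipeline.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hemiarthroplasty_cohort.csv")  # hypothetical claims extract

# 1:2 matching on age group, gender, index year, and comorbidity profile
# (toy version: controls are drawn per stratum and may repeat across cases).
keys = ["age_group", "gender", "index_year", "comorbidity_profile"]
ceramic = df[df["head_type"] == "ceramic"]
metal = df[df["head_type"] == "metal"]
pairs = []
for _, case in ceramic.iterrows():
    pool = metal.loc[(metal[keys] == case[keys]).all(axis=1)]
    if len(pool) >= 2:
        pairs.append(pd.concat([case.to_frame().T, pool.sample(2, random_state=0)]))
matched = pd.concat(pairs, ignore_index=True)

# Cause-specific Cox model for revision: death and other events are treated as censored.
matched["revision_event"] = (matched["outcome"] == "revision").astype(int)
matched["ceramic"] = (matched["head_type"] == "ceramic").astype(int)
cph = CoxPHFitter()
cph.fit(matched[["time_to_event_years", "revision_event", "ceramic"]],
        duration_col="time_to_event_years", event_col="revision_event")
cph.print_summary()  # hazard ratio for ceramic vs metal heads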

3.
Article in English | MEDLINE | ID: mdl-37306629

ABSTRACT

BACKGROUND: The Skeletal Oncology Research Group machine-learning algorithm (SORG-MLA) was developed to predict the survival of patients with spinal metastasis. The algorithm was successfully tested in five international institutions using 1101 patients from different continents. The incorporation of 18 prognostic factors strengthens its predictive ability but limits its clinical utility, because some prognostic factors might not be clinically available when a clinician wishes to make a prediction. QUESTIONS/PURPOSES: We performed this study to (1) evaluate the SORG-MLA's performance with missing data and (2) develop an internet-based application to impute the missing data. METHODS: A total of 2768 patients were included in this study. Data items for the 617 patients who were treated surgically were intentionally erased, and the data of the other 2151 patients, who were treated with radiotherapy and medical treatment, were used to impute the artificially missing data. Compared with those who were treated nonsurgically, patients undergoing surgery were younger (median 59 years [IQR 51 to 67 years] versus median 62 years [IQR 53 to 71 years]) and had a higher proportion of patients with at least three spinal metastatic levels (77% [474 of 617] versus 72% [1547 of 2151]), more neurologic deficits (proportion with normal American Spinal Injury Association grade [E] 68% [301 of 443] versus 79% [1227 of 1561]), higher BMI (23 kg/m² [IQR 20 to 25 kg/m²] versus 22 kg/m² [IQR 20 to 25 kg/m²]), higher platelet count (240 × 10³/µL [IQR 173 to 327 × 10³/µL] versus 227 × 10³/µL [IQR 165 to 302 × 10³/µL]), higher lymphocyte count (15 × 10³/µL [IQR 9 to 21 × 10³/µL] versus 14 × 10³/µL [IQR 8 to 21 × 10³/µL]), lower serum creatinine level (0.7 mg/dL [IQR 0.6 to 0.9 mg/dL] versus 0.8 mg/dL [IQR 0.6 to 1.0 mg/dL]), less previous systemic therapy (19% [115 of 617] versus 24% [526 of 2151]), fewer Charlson comorbidities other than cancer (28% [170 of 617] versus 36% [770 of 2151]), and longer median survival. The two patient groups did not differ in other regards. These findings aligned with our institutional philosophy of selecting patients for surgical intervention based on higher levels of favorable prognostic factors (such as BMI or lymphocyte count) and lower levels of unfavorable prognostic factors (such as white blood cell count or serum creatinine level), as well as the degree of spinal instability and the severity of neurologic deficits. This approach aims to identify patients with better survival outcomes and prioritize their surgical intervention accordingly. Seven factors (serum albumin and alkaline phosphatase levels, international normalized ratio, lymphocyte and neutrophil counts, and the presence of visceral or brain metastases) were considered possible missing items based on five previous validation studies and clinical experience. Artificially missing data were imputed using the missForest imputation technique, which had previously been applied and successfully tested with the SORG-MLA in validation studies. Discrimination, calibration, overall performance, and decision curve analysis were applied to evaluate the SORG-MLA's performance. Discrimination was measured with the area under the receiver operating characteristic curve, which ranges from 0.5 to 1.0, with 0.5 indicating the worst discrimination and 1.0 indicating perfect discrimination; an area under the curve of 0.7 is considered clinically acceptable. Calibration refers to the agreement between the predicted outcomes and the actual outcomes.
An ideal calibration model will yield predicted survival rates that are congruent with the observed survival rates. The Brier score measures the squared difference between the actual outcome and the predicted probability, capturing calibration and discrimination simultaneously. A Brier score of 0 indicates perfect prediction, whereas a Brier score of 1 indicates the poorest prediction. A decision curve analysis was performed for the 6-week, 90-day, and 1-year prediction models to evaluate their net benefit across different threshold probabilities. Using the results from our analysis, we developed an internet-based application that facilitates real-time data imputation for clinical decision-making at the point of care, allowing healthcare professionals to address missing data efficiently and effectively. RESULTS: Generally, the SORG-MLA demonstrated good discriminatory ability, with areas under the curve greater than 0.7 in most cases, and good overall performance, with up to 25% improvement in Brier scores in the presence of one to three missing items. The only exceptions were albumin level and lymphocyte count: the SORG-MLA's performance was reduced when these two items were missing, indicating that the SORG-MLA might be unreliable without these values. The model tended to underestimate the patient survival rate. As the number of missing items increased, the model's discriminatory ability was progressively impaired, and a marked underestimation of patient survival rates was observed. Specifically, when three items were missing, the number of actual survivors was up to 1.3 times greater than the number of expected survivors, whereas only a 10% discrepancy was observed when one item was missing. When either two or three items were omitted, the decision curves exhibited substantial overlap, indicating a lack of consistent disparities in performance. This finding suggests that the SORG-MLA consistently generates accurate predictions regardless of which two or three items are omitted. We developed an internet application (https://sorg-spine-mets-missing-data-imputation.azurewebsites.net/) that allows the use of the SORG-MLA with up to three missing items. CONCLUSION: The SORG-MLA generally performed well in the presence of one to three missing items, except for serum albumin level and lymphocyte count, which are essential for adequate predictions even with our modified version of the SORG-MLA. We recommend that future studies develop prediction models that can be used when data are missing, or that provide a means to impute those missing data, because some data are not available at the time a clinical decision must be made. CLINICAL RELEVANCE: The results suggest the algorithm could be helpful when a radiologic evaluation cannot be performed in time owing to a lengthy waiting period, especially in situations in which an early operation could be beneficial. It could help orthopaedic surgeons decide whether to intervene palliatively or extensively, even when the surgical indication is clear.
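
The missForest-style imputation step referenced above can be approximated in Python with scikit-learn's IterativeImputer using random-forest estimators; the original studies used the missForest algorithm itself, and the feature names and file below are illustrative assumptions only.

import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# The seven potentially missing SORG-MLA inputs discussed above (illustrative column names).
features = ["albumin", "alkaline_phosphatase", "inr", "lymphocyte_count",
            "neutrophil_count", "visceral_metastasis", "brain_metastasis"]
X = pd.read_csv("spine_mets_predictors.csv")[features]  # hypothetical extract

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0)
X_imputed = pd.DataFrame(imputer.fit_transform(X), columns=features)

# Binary flags are imputed on a continuous scale here; rounding them back to 0/1
# is a simplification that true missForest handles natively for categorical data.
X_imputed[["visceral_metastasis", "brain_metastasis"]] = (
    X_imputed[["visceral_metastasis", "brain_metastasis"]].round().astype(int))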

4.
Stroke ; 52(7): 2356-2362, 2021 07.
Article in English | MEDLINE | ID: mdl-33874751

ABSTRACT

Background and Purpose: We explored whether high-degree magnetic resonance imaging-visible perivascular spaces in the centrum semiovale (CSO) are more prevalent in cerebral amyloid angiopathy (CAA) than in hypertensive small vessel disease, and their relationship to brain amyloid retention, in patients with primary intracerebral hemorrhage (ICH). Methods: One hundred eight patients with spontaneous ICH who underwent magnetic resonance imaging and Pittsburgh compound B positron emission tomography were enrolled. The topography and severity of enlarged perivascular spaces were compared between CAA-related ICH (CAA-ICH) and hypertensive small vessel disease-related ICH (non-CAA-ICH). Clinical and imaging characteristics associated with high-degree perivascular spaces were evaluated in univariate and multivariable analyses. Univariate and multivariable models were used to evaluate associations between the severity of perivascular spaces in the CSO and amyloid retention in CAA-ICH and non-CAA-ICH cases. Results: Patients with CAA-ICH (n=29) and non-CAA-ICH (n=79) had a similar prevalence of high-degree perivascular spaces in the CSO (44.8% versus 36.7%; P=0.507) and in the basal ganglia (34.5% versus 51.9%; P=0.131). High-degree perivascular spaces in the CSO were independently associated with the presence of lobar microbleeds (odds ratio, 3.0 [95% CI, 1.1-8.0]; P=0.032). Amyloid retention was higher in patients with high-degree than in those with low-degree CSO perivascular spaces in CAA-ICH (global Pittsburgh compound B standardized uptake value ratio, 1.55 [1.33-1.61] versus 1.13 [1.01-1.48]; P=0.003) but not in non-CAA-ICH. In CAA-ICH, the association between cerebral amyloid retention and the degree of perivascular spaces in the CSO remained significant after adjustment for age and lobar microbleed number (P=0.004). Conclusions: Although high-degree magnetic resonance imaging-visible perivascular spaces were equally prevalent in CAA-ICH and non-CAA-ICH in this Asian cohort, the severity of magnetic resonance imaging-visible CSO perivascular spaces may be an indicator of higher brain amyloid deposition in patients with CAA-ICH.


Subject(s)
Cerebral Amyloid Angiopathy/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Cerebral Hemorrhage/diagnostic imaging , Glymphatic System/diagnostic imaging , White Matter/diagnostic imaging , Aged , Aged, 80 and over , Cerebral Amyloid Angiopathy/epidemiology , Cerebral Amyloid Angiopathy/metabolism , Cerebral Cortex/metabolism , Cerebral Hemorrhage/epidemiology , Cerebral Hemorrhage/metabolism , Female , Glymphatic System/metabolism , Humans , Magnetic Resonance Imaging/methods , Male , Middle Aged , Positron-Emission Tomography/methods , Prospective Studies , White Matter/metabolism
6.
Am Surg ; 90(5): 1037-1044, 2024 May.
Article in English | MEDLINE | ID: mdl-38085592

ABSTRACT

BACKGROUND: Outcomes of trauma "walk-in" patients (arriving by private vehicle or on foot) are understudied. We compared outcomes of ground ambulance arrivals versus walk-ins, hypothesizing that delayed resuscitation and uncoordinated care may worsen walk-in outcomes. METHODS: A retrospective analysis of the 2020 American College of Surgeons Trauma Quality Programs (ACS-TQP) database compared outcomes between ambulance arrivals and walk-ins. The primary outcome was in-hospital mortality; transfers from external facilities and air transports were excluded. Data were analyzed with descriptive statistics and bivariate and multivariable logistic regression, including inverse probability weighted regression adjustment with adjustment for injury severity and vital signs. The primary outcome in the 2019 (pre-COVID-19 pandemic) data was analyzed similarly. RESULTS: In 2020, 707,899 patients were analyzed, of whom 556,361 (78.59%) arrived by ambulance and 151,538 (21.41%) were walk-ins. We observed differences in demographics, hospital attributes, medical comorbidities, and injury mechanism. Ambulance patients had more chronic conditions and more severe injuries. Walk-ins had lower in-hospital mortality (850 [0.56%] vs 23,131 [4.16%]) and arrived with better vital signs. Multivariable logistic regression models with inverse probability weighting for regression adjustment, adjusting for injury severity, demographics, injury mechanism, and vital signs, confirmed that walk-in status was associated with lower odds of mortality. In the 2019 (pre-COVID-19 pandemic) database, walk-ins also had lower in-hospital mortality. DISCUSSION: Our results demonstrate better survival rates for walk-ins both before and during the COVID-19 pandemic. Despite the limitation of patient selection bias, this study highlights the need for further research into transportation modes, the geographic and socioeconomic factors affecting patient transport, and the tailoring of management strategies based on patients' mode of arrival.
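
The inverse probability weighted regression adjustment mentioned above can be sketched, in simplified form, as a propensity model for walk-in arrival followed by a weighted outcome regression for in-hospital mortality. The column names, file, and single weighted-regression shortcut below are assumptions for illustration; the study's full IPWRA estimator and confounder set may differ.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("tqp_2020.csv")  # hypothetical extract; walk_in and mortality coded 0/1

confounders = ["iss", "age", "sbp", "heart_rate", "gcs", "penetrating"]
X_ps = sm.add_constant(df[confounders])

# Step 1: propensity of walk-in arrival, then stabilized inverse probability weights.
ps = sm.Logit(df["walk_in"], X_ps).fit(disp=0).predict(X_ps)
p_walk_in = df["walk_in"].mean()
df["ipw"] = (df["walk_in"] * p_walk_in / ps
             + (1 - df["walk_in"]) * (1 - p_walk_in) / (1 - ps))

# Step 2: weighted logistic outcome model for in-hospital mortality.
X_out = sm.add_constant(df[["walk_in"] + confounders])
fit = sm.GLM(df["mortality"], X_out,
             family=sm.families.Binomial(),
             freq_weights=df["ipw"]).fit()
print(fit.summary())  # robust (sandwich) standard errors would be used in practice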


Subject(s)
COVID-19 , Surgeons , Wounds and Injuries , Humans , Retrospective Studies , Pandemics , Ambulances , COVID-19/epidemiology , Trauma Centers , Injury Severity Score
7.
Int J Cardiol ; 407: 132103, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38677333

ABSTRACT

BACKGROUND: Data regarding the prognostic value of left atrial (LA) strain in aortic stenosis (AS) are scarce, especially in Asian populations and in moderate AS. METHODS: Left ventricular global longitudinal strain (LVGLS), LA reservoir strain (LASr), conduit strain (LAScd), and contractile strain (LASct) were measured using automated speckle-tracking echocardiography in consecutive patients with moderate or severe AS. The primary endpoint was a composite of all-cause death (ACD) and major adverse cardiovascular events (MACE; myocardial infarction, syncope, and heart failure hospitalization). RESULTS: Of 712 patients (mean age, 78 ± 12 years; 370 [52%] with moderate AS; 342 [48%] with severe AS), the mean LV ejection fraction (LVEF) was 68% ± 12%. At a median follow-up of 18 months (interquartile range, 11-26 months), the primary endpoint occurred in 93 patients (60 deaths and 35 MACEs), and 221 patients underwent surgical or transcatheter aortic valve replacement (AVR). In the entire cohort, separate multivariable models adjusted for age, Charlson index, symptomatic status, time-dependent AVR, AS severity, LA volume index, and LVEF demonstrated that only LASr was associated with MACE+ACD (hazard ratio, 0.97; P = 0.014). Subgroup analysis for MACE+ACD demonstrated consistent prognostication for LASr in moderate and severe AS, whereas LVGLS was prognostic only in severe AS (all P ≤ 0.023). The optimal MACE+ACD cutoff for LASr derived from spline curves was 21.3%. Adjusted Kaplan-Meier curves demonstrated better event-free survival in patients with LASr >21.3% than in those with LASr ≤21.3% (P = 0.04). CONCLUSIONS: In both moderate and severe AS, only LASr robustly predicted outcomes; inclusion of LASr in the AS staging algorithm should therefore be considered.
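
A minimal sketch of the Kaplan-Meier comparison around the reported LASr cutoff of 21.3% is shown below, using the lifelines library; the column names and file are hypothetical, and the published analysis additionally adjusted for covariates and treated AVR as a time-dependent event.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("as_strain_cohort.csv")  # hypothetical echo/outcome extract
high = df["lasr"] > 21.3                  # reservoir strain above the spline-derived cutoff

kmf = KaplanMeierFitter()
ax = None
for label, group in [("LASr > 21.3%", df[high]), ("LASr <= 21.3%", df[~high])]:
    kmf.fit(group["months_to_event"], group["mace_or_death"], label=label)
    ax = kmf.plot_survival_function(ax=ax)

result = logrank_test(df.loc[high, "months_to_event"], df.loc[~high, "months_to_event"],
                      df.loc[high, "mace_or_death"], df.loc[~high, "mace_or_death"])
print(f"log-rank p = {result.p_value:.3f}")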


Subject(s)
Aortic Valve Stenosis , Asian People , Echocardiography , Severity of Illness Index , Humans , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/surgery , Aortic Valve Stenosis/mortality , Aortic Valve Stenosis/physiopathology , Aortic Valve Stenosis/diagnosis , Male , Female , Aged , Prognosis , Aged, 80 and over , Echocardiography/methods , Heart Atria/diagnostic imaging , Heart Atria/physiopathology , Follow-Up Studies , Ventricular Function, Left/physiology , Atrial Function, Left/physiology , Heart Ventricles/diagnostic imaging , Heart Ventricles/physiopathology , Cohort Studies
8.
Article in English | MEDLINE | ID: mdl-34072276

ABSTRACT

Participation in enjoyable activities is essential for the health and development of young children with and without disabilities. For preschool children with autism spectrum disorder (ASD), there is limited knowledge regarding their participation in play, learning, recreation, and social activities. This preliminary study compared participation between children 2-6 years of age with ASD (n = 25) and age- and sex-matched typically developing (TD) children (n = 25). The Chinese version of the Assessment of Preschool Children's Participation (APCP-C) measures participation in play, skill development, active physical recreation, and social activities. Parents of the children in this study completed the APCP-C by structured interview. The results showed that children with ASD had lower participation diversity and intensity than TD children in play activities. For most APCP-C activities, a lower percentage of children with ASD than of TD children participated in each individual activity. Professionals who serve young children with special needs are encouraged to partner with parents to provide playful and socially enhancing activities for preschool children with ASD.


Subject(s)
Autism Spectrum Disorder , Child, Preschool , Humans , Leisure Activities , Recreation , Social Behavior , Taiwan/epidemiology
9.
J Infect Public Health ; 13(9): 1354-1359, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32376234

ABSTRACT

BACKGROUND: Treatment of latent tuberculosis infection (LTBI) is an important strategy for preventing active disease. Conventional in-person DOT (CDOT) programs are challenged by patient dissatisfaction over problems of convenience and privacy. The present study assessed satisfaction with the DOT program and treatment adherence under synchronous video-observed treatment (SVOT) programs from the patients' perspective. METHODS: A two-part questionnaire was administered to 240 subjects with LTBI who received a 9-month isoniazid treatment regimen along with mandatory DOT monitoring between January 2014 and December 2017. RESULTS: Satisfaction with location arrangements (p<0.001), assurance of treatment adherence (p=0.027), and privacy (p=0.005) was superior in the SVOT group. The overall rate of LTBI treatment completion was 91.25%. One participant (1.25%) in the SVOT group and 20 (12.50%) in the CDOT group quit LTBI treatment (p=0.008). Development of adverse events [adjusted hazard ratio, aHR 8.01 (3.42-18.79)] and concern about privacy infringement by the DOT program [aHR 5.86 (2.69-12.76)] independently increased the risk of withdrawal. The SVOT program [aHR 0.21 (0.06-0.68)] and a belief in the importance of adherence for treatment efficacy [aHR 0.29 (0.08-0.98)] were independent predictors of remaining on treatment. CONCLUSIONS: A comprehensive, patient-centered DOT program enables high treatment adherence for the 9-month isoniazid LTBI regimen. Furthermore, SVOT was associated with superior patient satisfaction, which translated into higher treatment completion rates. As treatment adherence is key to the efficacy of LTBI treatment, SVOT appears to be a reasonable supplement for LTBI treatment programs.


Subject(s)
Directly Observed Therapy/methods , Latent Tuberculosis/drug therapy , Privacy , Remote Consultation/methods , Treatment Adherence and Compliance , Adolescent , Adult , Antitubercular Agents/therapeutic use , Directly Observed Therapy/legislation & jurisprudence , Female , Humans , Isoniazid/therapeutic use , Male , Middle Aged , Stereotyping , Surveys and Questionnaires , Treatment Outcome , Video Recording , Young Adult