Results 1 - 19 of 19
1.
PLoS One ; 18(12): e0294813, 2023.
Article in English | MEDLINE | ID: mdl-38113202

ABSTRACT

OBJECTIVE: Specialty care may improve diabetic foot ulcer outcomes. Medically underserved populations receive less specialty care. We aimed to determine the association between specialty care and ulcer progression, major amputation, or death. If a beneficial association is found, increasing access to specialty care might help advance health equity. RESEARCH DESIGN AND METHODS: We retrospectively analyzed a cohort of Wisconsin and Illinois Medicare patients with diabetic foot ulcers (n = 55,409), stratified by ulcer severity (i.e., early stage, osteomyelitis, or gangrene). Within each stratum, we constructed Kaplan-Meier curves for event-free survival, defining events as ulcer progression, major amputation, or death. Patients were grouped based on whether they received specialty care from at least one of six disciplines: endocrinology, infectious disease, orthopedic surgery, plastic surgery, podiatry, and vascular surgery. Multivariate Cox proportional hazards models estimated the association between specialty care and event-free survival, adjusting for sociodemographic factors and comorbidities, and stratifying on ulcer severity. RESULTS: Patients who received specialty care had longer event-free survival compared to those who did not (log-rank p<0.001 for all ulcer severity strata). After adjustment, receipt of specialty care, compared with never receiving it, remained associated with improved outcomes for all ulcer severities (early stage adjusted hazard ratio 0.34, 95% CI 0.33-0.35, p<0.001; osteomyelitis aHR 0.22, 95% CI 0.20-0.23, p<0.001; gangrene aHR 0.22, 95% CI 0.20-0.24, p<0.001). CONCLUSIONS: Specialty care was associated with longer event-free survival for patients with diabetic foot ulcers. Increased, equitable access to specialty care might improve diabetic foot ulcer outcomes and disparities.
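For readers less familiar with the product-limit method behind these curves, a minimal numpy sketch of the Kaplan-Meier event-free survival estimate follows (toy follow-up times, not study data):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) event-free survival estimate.

    times  : follow-up times until the composite event or censoring
    events : 1 if the event (e.g. ulcer progression, major amputation,
             or death) occurred, 0 if censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    uniq = np.unique(times[events == 1])                  # distinct event times
    at_risk = np.array([(times >= u).sum() for u in uniq])
    d = np.array([((times == u) & (events == 1)).sum() for u in uniq])
    surv = np.cumprod(1.0 - d / at_risk)                  # survival curve
    return uniq, surv

# Toy cohort: follow-up months and event indicators for one severity stratum
t, s = kaplan_meier([3, 5, 5, 8, 12, 12], [1, 1, 0, 1, 0, 1])
```

Grouping patients by receipt of specialty care and comparing the resulting curves (e.g. with a log-rank test) is the unadjusted analogue of the stratified analysis described above.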


Subject(s)
Diabetes Mellitus, Diabetic Foot, Osteomyelitis, Humans, Aged, United States, Diabetic Foot/complications, Retrospective Studies, Gangrene/complications, Medicare, Osteomyelitis/complications
2.
J Am Med Inform Assoc ; 30(2): 292-300, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36308445

ABSTRACT

OBJECTIVE: To develop a machine learning framework to forecast emergency department (ED) crowding and to evaluate model performance under spatial and temporal data drift. MATERIALS AND METHODS: We obtained 4 datasets, identified by location (location 1, a large academic hospital; location 2, a rural hospital) and time period (pre-coronavirus disease [COVID]: January 1, 2019-February 1, 2020; COVID-era: May 15, 2020-February 1, 2021). Our primary target was a binary outcome equal to 1 when the number of patients with acute respiratory illness boarding in the ED for more than 4 h exceeded a prescribed historical percentile. We trained a random forest and used the area under the curve (AUC) to evaluate out-of-sample performance in 2 experiments: (1) we evaluated the impact of sudden temporal drift by training models using pre-COVID data and testing them during the COVID-era; (2) we evaluated the impact of spatial drift by testing models trained at location 1 on data from location 2, and vice versa. RESULTS: The baseline AUC values for ED boarding ranged from 0.54 (pre-COVID at location 2) to 0.81 (COVID-era at location 1). Models trained with pre-COVID data performed similarly to COVID-era models (0.82 vs 0.78 at location 1). Models that were transferred from location 2 to location 1 performed worse than models trained at location 1 (0.51 vs 0.78). DISCUSSION AND CONCLUSION: Our results demonstrate that ED boarding is a predictable metric for ED crowding, models were not significantly impacted by temporal data drift, and any attempts at implementation must consider spatial data drift.
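The drift-evaluation protocol can be outlined with scikit-learn. Everything below is a synthetic stand-in (fabricated features and an arbitrary drift mechanism), not the study's ED data or tuned model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# "Training period" data and a drifted test set: using a different
# generative seed stands in for a distribution shift between settings.
X_pre, y_pre = make_classification(n_samples=1000, n_features=10,
                                   n_informative=5, random_state=1)
X_post, y_post = make_classification(n_samples=500, n_features=10,
                                     n_informative=5, random_state=2)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_pre, y_pre)

# In-distribution AUC vs. AUC under (simulated) data drift
auc_in = roc_auc_score(y_pre, clf.predict_proba(X_pre)[:, 1])
auc_drift = roc_auc_score(y_post, clf.predict_proba(X_post)[:, 1])
```

Comparing the two AUC values is the essence of both the temporal (pre-COVID vs COVID-era) and spatial (location 1 vs location 2) transfer experiments.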


Subject(s)
COVID-19, Crowding, Emergency Service, Hospital, Humans, Forecasting, Pandemics, Retrospective Studies
3.
BMJ Glob Health ; 7(12)2022 12.
Article in English | MEDLINE | ID: mdl-36455988

ABSTRACT

INTRODUCTION: Tuberculosis (TB) is a global health emergency and low treatment adherence among patients is a major barrier to ending the TB epidemic. The WHO promotes digital adherence technologies (DATs) as facilitators for improving treatment adherence in resource-limited settings. However, limited research has investigated whether DATs improve outcomes for high-risk patients (ie, those with a high probability of an unsuccessful outcome), leading to concerns that DATs may cause intervention-generated inequality. METHODS: We conducted secondary analyses of data from a completed individual-level randomised controlled trial in Nairobi, Kenya during 2016-2017, which evaluated the average intervention effect of a novel DAT-based behavioural support programme. We trained a causal forest model to answer three research questions: (1) Was the effect of the intervention heterogeneous across individuals? (2) Was the intervention less effective for high-risk patients? and (3) Can differentiated care improve programme effectiveness and equity in treatment outcomes? RESULTS: We found that individual intervention effects-the percentage point reduction in the likelihood of an unsuccessful treatment outcome-ranged from 4.2 to 12.4, with an average of 8.2. The intervention was beneficial for 76% of patients, and most beneficial for high-risk patients. Differentiated enrolment policies, targeted at high-risk patients, have the potential to (1) increase the average intervention effect of DAT services by up to 28.5% and (2) decrease the population average and standard deviation (across patients) of the probability of an unsuccessful treatment outcome by up to 8.5% and 31.5%, respectively. CONCLUSION: This DAT-based intervention can improve outcomes among high-risk patients, reducing inequity in the likelihood of an unsuccessful treatment outcome.
In resource-limited settings where universal provision of the intervention is infeasible, targeting high-risk patients for DAT enrolment is a worthwhile strategy for programmes that involve human support sponsors, enabling them to achieve the highest possible impact for high-risk patients at a substantially improved cost-effectiveness ratio.
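The differentiated-enrolment idea reduces to ranking patients by estimated risk and enrolling the highest-risk subset. A pure-Python sketch, with invented risks and per-patient effects standing in for causal-forest estimates produced elsewhere:

```python
import numpy as np

def targeted_enrolment(risk, effect, budget):
    """Differentiated-care sketch: enrol the `budget` highest-risk
    patients and report the mean estimated effect among those enrolled.

    risk   : baseline probability of an unsuccessful outcome
    effect : estimated individual intervention effect (percentage-point
             reduction), e.g. from a fitted causal forest
    """
    risk = np.asarray(risk)
    effect = np.asarray(effect)
    chosen = np.argsort(risk)[::-1][:budget]   # highest-risk first
    return chosen, effect[chosen].mean()

# Illustrative numbers only (not trial data)
risk = np.array([0.10, 0.40, 0.25, 0.60, 0.05])
effect = np.array([4.2, 9.0, 7.5, 12.4, 4.5])
chosen, avg = targeted_enrolment(risk, effect, budget=2)
```

When effects are largest for high-risk patients, as reported above, targeting raises the average realised effect relative to enrolling patients at random.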


Subject(s)
Digital Technology, Tuberculosis, Humans, Kenya, Treatment Outcome, Tuberculosis/prevention & control, Probability
5.
JMIR Aging ; 5(3): e36975, 2022 08 04.
Article in English | MEDLINE | ID: mdl-35925654

ABSTRACT

BACKGROUND: People living with Alzheimer disease and related dementias (ADRD) require prolonged and complex care that is primarily managed by informal caregivers who face significant unmet needs regarding support for communicating and coordinating across their informal care network. To address this unmet need, we developed CareVirtue, which provides (1) the ability to invite care network members; (2) a care guide detailing the care plan; (3) a journal where care network members can document, communicate, and coordinate; (4) a shared calendar; and (5) vetted geolocated caregiver resources. OBJECTIVE: This study aims to evaluate CareVirtue's feasibility based on: (1) Who used CareVirtue? (2) How did caregivers use CareVirtue? (3) How did caregivers perceive the acceptability of CareVirtue? (4) What factors were associated with CareVirtue use? METHODS: We conducted a feasibility study with 51 care networks over a period of 8 weeks and used a mixed methods approach that included both quantitative CareVirtue usage data and semistructured interviews. RESULTS: Care networks ranged from 1 to 8 members. Primary caregivers were predominantly female (38/51, 75%), White (44/51, 86%), married (37/51, 73%), college educated (36/51, 71%), and were, on average, 60.3 (SD 9.8) years of age, with 18% (9/51) living in a rural area. CareVirtue usage varied along 2 axes (total usage and type of usage), with heterogeneity in how the most engaged care networks interacted with CareVirtue. Interviews identified a range of ways CareVirtue was useful, including practically, organizationally, and emotionally. On the Behavioral Intention Scale, 72% (26/36) of primary caregivers reported an average score of at least 3, indicating an above average intention to use. The average was 81.8 (SD 12.8) for the System Usability Scale score, indicating "good" usability, and 3.4 (SD 1.0) for perceived usefulness, suggesting above average usefulness. 
The average confidence score increased significantly over the study duration from 7.8 in week 2 to 8.9 in week 7 (P=.005; r=0.91, 95% CI 0.84-0.95). The following sociodemographic characteristics were associated with posting in the journal: retired (mean 59.5 posts for retired caregivers and mean 16.9 for nonretired caregivers), income (mean 13 posts for those reporting >US $100K and mean 55.4 for those reporting

6.
BMC Health Serv Res ; 22(1): 639, 2022 May 13.
Article in English | MEDLINE | ID: mdl-35562823

ABSTRACT

BACKGROUND: Pre-hospital and emergency services in Indonesia are still developing. Despite recent improvements in the Indonesian healthcare system, issues with the provision of pre-hospital and emergency services persist. The demand for pre-hospital and emergency services has not been the subject of previous research and, therefore, has not been fully understood. Our research explored the utilization of emergency medical services by patients attending hospital emergency departments in Jakarta, Indonesia. METHODS: The study used a cross-sectional survey design involving five general hospitals (four government-funded and one private). Each patient's demographic profile, medical conditions, time to treatment, and mode of transport to reach the hospital were analysed using descriptive statistics. RESULTS: A total of 1964 (62%) patients were surveyed. The median age of patients was 44 years with an interquartile range (IQR) of 26 to 58 years. Life-threatening conditions such as trauma and cardiovascular disease were found in 8.6% and 6.6% of patients, respectively. The majority of patients with trauma travelled to the hospital using a motorcycle or car (59.8%). An ambulance was used by only 9.3% of all patients, and 38% of patients reported that they were not aware of the availability of ambulances. Ambulance response time was longer than for other modes of transportation (median: 24 minutes and IQR: 12 to 54 minutes). The longest time to treatment was experienced by patients with neurological disease, with a median time of 120 minutes (IQR: 78 to 270 minutes). Patients who used ambulances incurred higher costs than those who did not. CONCLUSION: The low utilization of emergency ambulances in Jakarta could be attributed to patients' lack of awareness of medical symptoms and the existence of ambulance services, and patients' disinclination to use ambulances due to high costs and long response times. The emergency ambulance services can be improved by increasing population awareness of symptoms that warrant the use of ambulances and reducing the cost burden related to ambulance use.


Subject(s)
Emergency Medical Services, Facilities and Services Utilization, Adult, Cross-Sectional Studies, Emergency Service, Hospital, Hospitals, Humans, Indonesia/epidemiology, Middle Aged
7.
Resuscitation ; 169: 31-38, 2021 12.
Article in English | MEDLINE | ID: mdl-34678334

ABSTRACT

BACKGROUND: Although several Utstein variables are known to independently improve survival, how they moderate the effect of emergency medical service (EMS) response times on survival is unknown. OBJECTIVES: To quantify how public location, witnessed status, bystander CPR, and bystander AED shock individually and jointly moderate the effect of EMS response time delays on OHCA survival. METHODS: This retrospective cohort study was a secondary analysis of the Resuscitation Outcomes Consortium Epistry-Cardiac Arrest database (December 2005 to June 2015). We included all adult, non-traumatic, non-EMS witnessed, and EMS-treated OHCAs from eleven sites across the US and Canada. We trained a logistic regression model with standard Utstein control variables and interaction terms between EMS response time and the four aforementioned OHCA characteristics. RESULTS: 102,216 patients were included. Three of the four characteristics - witnessed OHCAs (OR = 0.962), bystander CPR (OR = 0.968) and public location (OR = 0.980) - increased the negative effect of a one-minute delay on the odds of survival. In contrast, a bystander AED shock decreased the negative effect of a one-minute response time delay on the odds of survival (OR = 1.064). The magnitude of the effect of a one-minute delay in EMS response time on the odds of survival ranged from 1.3% to 9.8% (average: 5.3%), depending on the underlying OHCA characteristics. CONCLUSIONS: Delays in EMS response time had the largest reduction in survival odds for OHCAs that did not receive a bystander AED shock but were witnessed, occurred in public, and/or received bystander CPR. A bystander AED shock appears to be protective against a delay in EMS response time.
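The moderation analysis amounts to a logistic regression with response-time interaction terms. The sketch below uses fabricated coefficients and data (not the Epistry cohort) to show how per-minute odds ratios with and without a bystander AED shock can be recovered from the fitted coefficients:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20000

# Synthetic OHCA records: response time in minutes and a bystander-AED
# shock indicator. The generating coefficients are invented for
# illustration and are not the study's estimates.
resp = rng.uniform(2, 15, n)
aed = rng.integers(0, 2, n)
logit = -0.5 - 0.12 * resp + 0.8 * aed + 0.06 * resp * aed
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# The interaction column lets the AED shock moderate the per-minute penalty
X = np.column_stack([resp, aed, resp * aed])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)  # ~unpenalized
b = model.coef_[0]

or_delay_no_aed = float(np.exp(b[0]))        # OR per minute, no AED shock
or_delay_aed = float(np.exp(b[0] + b[2]))    # OR per minute, with AED shock
```

An odds ratio below 1 for a one-minute delay, pulled closer to 1 when the interaction coefficient is positive, mirrors the protective pattern reported above.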


Subject(s)
Cardiopulmonary Resuscitation, Emergency Medical Services, Out-of-Hospital Cardiac Arrest, Adult, Humans, Out-of-Hospital Cardiac Arrest/therapy, Reaction Time, Retrospective Studies
8.
JAMIA Open ; 4(1): ooab004, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33796821

ABSTRACT

OBJECTIVES: The objectives of this study are to construct the high definition phenotype (HDP), a novel time-series data structure composed of both primary and derived parameters, using heterogeneous clinical sources and to determine whether different predictive models can utilize the HDP in the neonatal intensive care unit (NICU) to improve neonatal mortality prediction in clinical settings. MATERIALS AND METHODS: A total of 49 primary data parameters were collected from July 2018 to May 2020 from eight level-III NICUs. From a total of 1546 patients, 757 patients were found to contain sufficient fixed, intermittent, and continuous data to create HDPs. Two different predictive models utilizing the HDP, one a logistic regression model (LRM) and the other a deep learning long short-term memory (LSTM) model, were constructed to predict neonatal mortality at multiple time points during the patient hospitalization. The results were compared with previous illness severity scores, including SNAPPE, SNAPPE-II, CRIB, and CRIB-II. RESULTS: An HDP matrix, including 12 221 536 minutes of patient stay in NICU, was constructed. The LRM model and the LSTM model performed better than existing neonatal illness severity scores in predicting mortality using the area under the receiver operating characteristic curve (AUC) metric. An ablation study showed that utilizing continuous parameters alone results in an AUC score of >80% for both LRM and LSTM, but combining fixed, intermittent, and continuous parameters in the HDP results in scores >85%. The probability of mortality predictive score has recall and precision of 0.88 and 0.77 for the LRM and 0.97 and 0.85 for the LSTM. CONCLUSIONS AND RELEVANCE: The HDP data structure supports multiple analytic techniques, including the statistical LRM approach and the machine learning LSTM approach used in this study.
LRM and LSTM predictive models of neonatal mortality utilizing the HDP performed better than existing neonatal illness severity scores. Further research is necessary to create HDP-based clinical decision tools to detect the early onset of neonatal morbidities.
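One core step in assembling an HDP-like matrix is aligning intermittent measurements onto the minute-level grid of the continuous signals. A small numpy sketch of last-observation-carried-forward alignment (illustrative values, not the study's pipeline):

```python
import numpy as np

def forward_fill(minutes, sample_times, sample_values):
    """Align intermittent measurements (e.g. labs) onto a minute-level
    grid by carrying the last observation forward; NaN before the first
    sample. Sketch of one step in building an HDP-like matrix.
    """
    values = np.asarray(sample_values, dtype=float)
    idx = np.searchsorted(sample_times, minutes, side="right") - 1
    # Clip avoids negative indexing; np.where masks pre-sample minutes
    return np.where(idx >= 0, values[np.clip(idx, 0, None)], np.nan)

grid = np.arange(0, 10)            # minutes 0..9 of the stay
labs_t = np.array([2, 6])          # lab drawn at minutes 2 and 6
labs_v = np.array([7.30, 7.35])    # e.g. blood pH values
filled = forward_fill(grid, labs_t, labs_v)
```

Stacking fixed covariates, forward-filled intermittent channels, and raw continuous channels column-wise yields a matrix any downstream model (LRM or LSTM) can consume.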

9.
Mol Psychiatry ; 26(7): 3395-3406, 2021 07.
Article in English | MEDLINE | ID: mdl-33658605

ABSTRACT

We identified biologically relevant moderators of response to the tumor necrosis factor (TNF)-α inhibitor, infliximab, among 60 individuals with bipolar depression. Data were derived from a 12-week, randomized, placebo-controlled clinical trial secondarily evaluating the efficacy of infliximab on a measure of anhedonia (i.e., Snaith-Hamilton Pleasure Scale). Three inflammatory biotypes were derived from peripheral cytokine measurements using an iterative, machine learning-based approach. Infliximab-randomized participants classified as biotype 3 exhibited lower baseline concentrations of pro- and anti-inflammatory cytokines and soluble TNF receptor-1 and reported greater pro-hedonic improvements, relative to those classified as biotype 1 or 2. Pretreatment biotypes also moderated changes in neuroinflammatory substrates relevant to infliximab's hypothesized mechanism of action. Neuronal origin-enriched extracellular vesicle (NEV) protein concentrations were reduced to two factors using principal axis factoring: phosphorylated nuclear factor κB (p-NFκB), Fas-associated death domain (p-FADD), and IκB kinase (p-IKKα/β) and TNF receptor-1 (TNFR1) comprised factor "NEV1," whereas phosphorylated insulin receptor substrate-1 (p-IRS1), p38 mitogen-activated protein kinase (p-p38), and c-Jun N-terminal kinase (p-JNK) constituted "NEV2". Among infliximab-randomized subjects classified as biotype 3, NEV1 scores were decreased at weeks 2 and 6 and increased at week 12, relative to baseline, and NEV2 scores increased over time. Decreases in NEV1 scores and increases in NEV2 scores were associated with greater reductions in anhedonic symptoms in our classification and regression tree model (r2 = 0.22, RMSE = 0.08). Our findings provide preliminary evidence supporting the hypothesis that the pro-hedonic effects of infliximab require modulation of multiple TNF-α signaling pathways, including NF-κB, IRS1, and MAPK.
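As a loose illustration of deriving biotypes from a cytokine panel: the study used an iterative machine-learning approach, but the flavor of the clustering step can be sketched with standardized features and plain k-means on synthetic data (planted groups stand in for the three biotypes):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic peripheral cytokine panel (rows: participants, cols: analytes).
# Three planted concentration levels stand in for the derived biotypes;
# the low group loosely mimics biotype 3's lower baseline concentrations.
low = rng.normal(0.5, 0.1, (20, 6))
mid = rng.normal(1.5, 0.1, (20, 6))
high = rng.normal(3.0, 0.1, (20, 6))
cytokines = np.vstack([low, mid, high])

# Standardize analytes so no single cytokine dominates the distance metric
z = StandardScaler().fit_transform(cytokines)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
```

Once participants carry a cluster label, moderation analyses (e.g. biotype × treatment-arm effects on anhedonia) proceed on the labelled groups.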


Subject(s)
Bipolar Disorder, Infliximab/therapeutic use, Biomarkers, Bipolar Disorder/drug therapy, Humans, Insulin Receptor Substrate Proteins, MAP Kinase Signaling System, NF-kappa B, Tumor Necrosis Factor-alpha
10.
J Med Internet Res ; 23(1): e20123, 2021 01 21.
Article in English | MEDLINE | ID: mdl-33475518

ABSTRACT

BACKGROUND: The impending scale-up of noncommunicable disease screening programs in low- and middle-income countries, coupled with limited health resources, requires that such programs be as accurate as possible at identifying patients at high risk. OBJECTIVE: The aim of this study was to develop machine learning-based risk stratification algorithms for diabetes and hypertension that are tailored for the at-risk population served by community-based screening programs in low-resource settings. METHODS: We trained and tested our models by using data from 2278 patients collected by community health workers through door-to-door and camp-based screenings in the urban slums of Hyderabad, India between July 14, 2015 and April 21, 2018. We determined the best models for predicting short-term (2-month) risk of diabetes and hypertension (a model for diabetes and a model for hypertension) and compared these models to previously developed risk scores from the United States and the United Kingdom by using prediction accuracy as characterized by the area under the receiver operating characteristic curve (AUC) and the number of false negatives. RESULTS: We found that models based on random forest had the highest prediction accuracy for both diseases and were able to outperform the US and UK risk scores in terms of AUC by 35.5% for diabetes (improvement of 0.239 from 0.671 to 0.910) and 13.5% for hypertension (improvement of 0.094 from 0.698 to 0.792). For a fixed screening specificity of 0.9, the random forest model was able to reduce the expected number of false negatives by 620 patients per 1000 screenings for diabetes and 220 patients per 1000 screenings for hypertension. This improvement reduces the cost of incorrect risk stratification by US $1.99 (or 35%) per screening for diabetes and US $1.60 (or 21%) per screening for hypertension.
CONCLUSIONS: In the next decade, health systems in many countries are planning to spend significant resources on noncommunicable disease screening programs and our study demonstrates that machine learning models can be leveraged by these programs to effectively utilize limited resources by improving risk stratification.
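The fixed-specificity comparison can be made concrete: pick the score cutoff that achieves the target specificity among true negatives, then count the false negatives that remain. A deterministic numpy sketch (toy scores, not the screening data):

```python
import numpy as np

def threshold_at_specificity(scores, y, target_spec=0.9):
    """Choose the score cutoff giving specificity >= target_spec among
    true negatives, then count false negatives among true positives.
    Mirrors, in outline only, a fixed-specificity model comparison.
    """
    scores = np.asarray(scores, dtype=float)
    y = np.asarray(y, dtype=int)
    neg = np.sort(scores[y == 0])
    k = int(np.ceil(target_spec * neg.size)) - 1
    thr = neg[k]                  # predict positive only above this cutoff
    fn = int(((scores <= thr) & (y == 1)).sum())
    return thr, fn

# Toy risk scores and true disease labels
scores = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95])
y      = np.array([0,   0,   0,   0,   0,   1,   0,   1,   1,   1])
thr, fn = threshold_at_specificity(scores, y, 0.9)
```

Running this for two competing models at the same specificity makes their false-negative counts directly comparable, which is the comparison reported per 1000 screenings above.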


Subject(s)
Diabetes Mellitus/diagnosis, Hypertension/diagnosis, Machine Learning/standards, Diabetes Mellitus/economics, Early Diagnosis, Female, Humans, Hypertension/economics, Male, Middle Aged, Retrospective Studies, Risk Assessment
11.
Children (Basel) ; 8(1)2020 Dec 22.
Article in English | MEDLINE | ID: mdl-33375101

ABSTRACT

Our objective in this study was to determine if machine learning (ML) can automatically recognize neonatal manipulations, along with associated changes in physiological parameters. A retrospective observational study was carried out in two Neonatal Intensive Care Units (NICUs) between December 2019 and April 2020. Both the video and physiological data (heart rate (HR) and oxygen saturation (SpO2)) were captured during NICU hospitalization. The proposed classification of neonatal manipulations was achieved by a deep learning system consisting of an Inception-v3 convolutional neural network (CNN), followed by transfer learning layers of Long Short-Term Memory (LSTM). Physiological signals prior to manipulations (baseline) were compared to during and after manipulations. The validation of the system was done using the leave-one-out strategy with input of 8 s of video exhibiting manipulation activity. Ten neonates were video recorded during an average length of stay of 24.5 days. Each neonate had an average of 528 manipulations during their NICU hospitalization, with average durations of 28.9 s for patting, 45.5 s for a diaper change, and 108.9 s for tube feeding. The accuracy of the system was 95% for training and 85% for the validation dataset. In neonates <32 weeks' gestation, diaper changes were associated with significant changes in HR and SpO2, and, for neonates ≥32 weeks' gestation, patting and tube feeding were associated with significant changes in HR. The presented system can classify and document the manipulations with high accuracy. Moreover, the study suggests that manipulations impact physiological parameters.
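The baseline-versus-manipulation comparison reduces to contrasting a physiological signal during a detected manipulation window with the window just before it. A numpy sketch with simulated heart-rate values (the study also examined SpO2 and post-manipulation windows):

```python
import numpy as np

def change_from_baseline(hr, start, end, baseline_s=30):
    """Mean heart-rate change during a manipulation relative to the
    window just before it. `hr` is a 1 Hz heart-rate trace; `start` and
    `end` are sample indices of the detected manipulation. Illustrative
    only; window lengths here are assumptions, not the study's settings.
    """
    baseline = hr[max(0, start - baseline_s):start].mean()
    during = hr[start:end].mean()
    return during - baseline

rng = np.random.default_rng(3)
hr = np.concatenate([rng.normal(150, 2, 60),     # resting trace
                     rng.normal(165, 2, 45)])    # during a manipulation
delta = change_from_baseline(hr, start=60, end=105)
```

Aggregating such deltas per manipulation type and gestational-age group is what supports statements like "diaper changes were associated with significant changes in HR."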

12.
J Affect Disord ; 274: 1211-1215, 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32663953

ABSTRACT

The authors regret an error in one of the extracted data points in the meta-analysis. The classification accuracy for Serretti et al. (2007) was corrected to 64% (Table 3b). The overall results before and after this correction remain directionally consistent and are summarized below (Figures 2 and 3; Table 2; results subsection 3.6). The authors apologise for any inconvenience caused.

13.
J Affect Disord ; 241: 519-532, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30153635

ABSTRACT

BACKGROUND: No previous study has comprehensively reviewed the application of machine learning algorithms in mood disorders populations. Herein, we qualitatively and quantitatively evaluate previous studies of machine learning-devised models that predict therapeutic outcomes in mood disorders populations. METHODS: We searched Ovid MEDLINE/PubMed from inception to February 8, 2018 for relevant studies that included adults with bipolar or unipolar depression; assessed therapeutic outcomes with a pharmacological, neuromodulatory, or manual-based psychotherapeutic intervention for depression; applied a machine learning algorithm; and reported predictors of therapeutic response. A random-effects meta-analysis of proportions and meta-regression analyses were conducted. RESULTS: We identified 639 records: 75 full-text publications were assessed for eligibility; 26 studies (n=17,499) and 20 studies (n=6325) were included in qualitative and quantitative review, respectively. Classification algorithms were able to predict therapeutic outcomes with an overall accuracy of 0.82 (95% confidence interval [CI] of [0.77, 0.87]). Pooled estimates of classification accuracy were significantly greater (p < 0.01) in models informed by multiple data types (e.g., composite of phenomenological patient features and neuroimaging or peripheral gene expression data; pooled proportion [95% CI] = 0.93 [0.86, 0.97]) when compared to models with lower-dimension data types (pooled proportion = 0.68 [0.62, 0.74] to 0.85 [0.81, 0.88]). LIMITATIONS: Most studies were retrospective; differences in machine learning algorithms and their implementation (e.g., cross-validation, hyperparameter tuning); cannot infer importance of individual variables fed into learning algorithm. CONCLUSIONS: Machine learning algorithms provide a powerful conceptual and analytic framework capable of integrating multiple data types and sources.
An integrative approach may more effectively model neurobiological components as functional modules of pathophysiology embedded within the complex, social dynamics that influence the phenomenology of mental disorders.
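The quantitative synthesis step, pooling per-study accuracies under a random-effects model, can be sketched with a DerSimonian-Laird estimate on the logit scale (illustrative accuracies and sample sizes, not the reviewed studies):

```python
import numpy as np

def dersimonian_laird(p, n):
    """Random-effects pooling of proportions on the logit scale
    (DerSimonian-Laird). p: per-study accuracies, n: per-study sizes.
    A sketch of the meta-analytic step, not the paper's exact code.
    """
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    y = np.log(p / (1 - p))                       # logit accuracies
    v = 1.0 / (n * p * (1 - p))                   # within-study variances
    w = 1.0 / v
    y_fixed = (w * y).sum() / w.sum()             # fixed-effect pool
    q = (w * (y - y_fixed) ** 2).sum()            # heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(p) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)
    y_re = (w_re * y).sum() / w_re.sum()
    return float(1 / (1 + np.exp(-y_re))), tau2   # back-transformed pool

pooled, tau2 = dersimonian_laird([0.70, 0.85, 0.90], [100, 200, 150])
```

The pooled value always lies between the smallest and largest study accuracy, and tau2 quantifies the between-study heterogeneity that motivates the random-effects choice.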


Subject(s)
Algorithms, Antidepressive Agents/therapeutic use, Depressive Disorder/drug therapy, Diagnosis, Computer-Assisted, Machine Learning, Adult, Depressive Disorder/diagnosis, Female, Humans, Male, Neuroimaging, Retrospective Studies, Treatment Outcome
14.
Phys Med Biol ; 63(19): 195004, 2018 09 21.
Article in English | MEDLINE | ID: mdl-29998853

ABSTRACT

Current practice for treatment planning optimization can be both inefficient and time consuming. In this paper, we propose an automated planning methodology that aims to combine both explorative and prescriptive approaches for improving the efficiency and the quality of the treatment planning process. Given a treatment plan, our explorative approach explores trade-offs between different objectives and finds an acceptable region for objective function weights via inverse optimization. Intuitively, the shape and size of these regions describe how 'sensitive' a patient is to perturbations in objective function weights. We then develop an integer programming-based prescriptive approach that exploits the information encoded by these regions to find a set of five representative objective function weight vectors such that for each patient there exists at least one representative weight vector that can produce a high quality treatment plan. Using 315 patients from Princess Margaret Cancer Centre, we show that the produced treatment plans are comparable and, for [Formula: see text] of cases, improve upon the inversely optimized plans that are generated from the historical clinical treatment plans.
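The prescriptive step is, in outline, a covering problem: choose a handful of weight vectors so that every patient has at least one that yields an acceptable plan. The paper solves this with integer programming; the greedy heuristic below only conveys the structure on a toy instance:

```python
# Greedy set-cover sketch of selecting representative objective-function
# weight vectors. acceptable[j] = set of patient ids for which candidate
# weight vector j produces an acceptable plan (toy sets, not clinic data).

def pick_representative_weights(acceptable, k=5):
    """Pick up to k candidate vectors, each round adding the one that
    covers the most not-yet-covered patients."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(acceptable)),
                   key=lambda j: len(acceptable[j] - covered))
        if not acceptable[best] - covered:
            break                         # no remaining gain; stop early
        chosen.append(best)
        covered |= acceptable[best]
    return chosen, covered

# Toy instance: 4 candidate weight vectors, 6 patients
acceptable = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {1, 5}]
chosen, covered = pick_representative_weights(acceptable, k=5)
```

An integer program can certify the minimum number of vectors needed; the greedy version is only a feasibility-style illustration of the same selection problem.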


Subject(s)
Prostatic Neoplasms/radiotherapy, Radiotherapy Planning, Computer-Assisted/methods, Radiotherapy, Intensity-Modulated/methods, Humans, Male, Radiotherapy Dosage
15.
Phys Med Biol ; 63(10): 105004, 2018 05 10.
Article in English | MEDLINE | ID: mdl-29633957

ABSTRACT

We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate 'inverse plans' that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. 
It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to automatically generate a new plan given a predicted or updated target DVH, respectively.
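The dose-volume comparison can be illustrated by inverting two cumulative DVHs to dose-at-volume values and taking the median absolute difference; the linear toy curves below stand in for planning-system output:

```python
import numpy as np

# Toy cumulative DVHs on a common dose grid: value = % volume receiving
# at least the given dose. Invented linear curves, for illustration only.
dose = np.linspace(0, 60, 121)            # Gy, 0.5 Gy steps
clinical = 100 - 1.5 * dose               # strictly decreasing, 100 -> 10
inverse = 100 - 1.45 * dose               # slightly "cooler" inverse plan

# Dose-at-volume D(v): invert the monotone curves by interpolation
# (np.interp needs increasing xp, hence the [::-1] reversals)
v_levels = np.array([20.0, 50.0, 80.0])   # % volume levels
d_clin = np.interp(v_levels, clinical[::-1], dose[::-1])
d_inv = np.interp(v_levels, inverse[::-1], dose[::-1])
median_dose_diff = float(np.median(np.abs(d_inv - d_clin)))
```

Summarizing many structures this way yields statements of the form "median differences between the clinical and inverse DVHs were within 1.1 Gy."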


Subject(s)
Organs at Risk/radiation effects, Oropharyngeal Neoplasms/radiotherapy, Radiotherapy Planning, Computer-Assisted/methods, Radiotherapy Planning, Computer-Assisted/standards, Radiotherapy, Intensity-Modulated/methods, Canada, Humans, Radiotherapy Dosage
16.
Med Phys ; 45(7): 2875-2883, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29679492

ABSTRACT

PURPOSE: The purpose of this study was to automatically generate radiation therapy plans for oropharynx patients by combining knowledge-based planning (KBP) predictions with an inverse optimization (IO) pipeline. METHODS: We developed two KBP approaches, the bagging query (BQ) method and the generalized principal component analysis-based (gPCA) method, to predict achievable dose-volume histograms (DVHs). These approaches generalize existing methods by predicting physically feasible organ-at-risk (OAR) and target DVHs in sites with multiple targets. Using leave-one-out cross validation, we applied both models to a large dataset of 217 oropharynx patients. The predicted DVHs were input into an IO pipeline that generated treatment plans (BQ and gPCA plans) via an intermediate step that estimated objective function weights for an inverse planning model. The KBP predictions were compared to the clinical DVHs for benchmarking. To assess the complete pipeline, we compared the BQ and gPCA plans to both the predictions and clinical plans. To isolate the effect of the KBP predictions, we put clinical DVHs through the IO pipeline to produce clinical inverse optimized (CIO) plans. This approach also allowed us to estimate the complexity of the clinical plans. The BQ and gPCA plans were benchmarked against the CIO plans using DVH differences and clinical planning criteria. Iso-complexity plans (relative to CIO) were also generated and evaluated. RESULTS: The BQ method tended to predict that less dose is delivered than what was observed in the clinical plans, while the gPCA predictions were more similar to clinical DVHs. Both populations of KBP predictions were reproduced with inverse plans to within a median DVH difference of 3 Gy. Clinical planning criteria for OARs were satisfied most frequently by the BQ plans (74.4%), 6.3 percentage points more than the clinical plans. Meanwhile, target criteria were satisfied most frequently by the gPCA plans (90.2%), 21.2 percentage points more than the clinical plans. However, once the complexity of the plans was constrained to that of the CIO plans, the performance of the BQ plans degraded significantly. In contrast, the gPCA plans still satisfied more clinical criteria than both the clinical and CIO plans, with the most notable improvement being in target criteria. CONCLUSION: Our automated pipeline can successfully use DVH predictions to generate high-quality plans without human intervention. Between the two KBP methods, gPCA plans tended to achieve comparable performance to clinical plans, even when controlling for plan complexity, whereas BQ plans tended to underperform.


Subject(s)
Oropharyngeal Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted/methods , Automation , Humans , Organs at Risk/radiation effects , Principal Component Analysis , Radiotherapy Dosage , Radiotherapy, Intensity-Modulated/adverse effects
17.
Circulation ; 135(25): 2454-2465, 2017 Jun 20.
Article in English | MEDLINE | ID: mdl-28254836

ABSTRACT

BACKGROUND: Public access defibrillation programs can improve survival after out-of-hospital cardiac arrest, but automated external defibrillators (AEDs) are rarely available for bystander use at the scene. Drones are an emerging technology that can deliver an AED to the scene of an out-of-hospital cardiac arrest for bystander use. We hypothesize that a drone network designed with the aid of a mathematical model combining both optimization and queuing can reduce the time to AED arrival. METHODS: We applied our model to 53 702 out-of-hospital cardiac arrests that occurred in the 8 regions of the Toronto Regional RescuNET between January 1, 2006, and December 31, 2014. Our primary analysis quantified the drone network size required to deliver an AED 1, 2, or 3 minutes faster than historical median 911 response times for each region independently. A secondary analysis quantified the reduction in drone resources required if RescuNET was treated as a large coordinated region. RESULTS: The region-specific analysis determined that 81 bases and 100 drones would be required to deliver an AED ahead of median 911 response times by 3 minutes. In the most urban region, the 90th percentile of the AED arrival time was reduced by 6 minutes and 43 seconds relative to historical 911 response times in the region. In the most rural region, the 90th percentile was reduced by 10 minutes and 34 seconds. A single coordinated drone network across all regions required 39.5% fewer bases and 30.0% fewer drones to achieve similar AED delivery times. CONCLUSIONS: An optimized drone network designed with the aid of a novel mathematical model can substantially reduce the AED delivery time to an out-of-hospital cardiac arrest event.
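The base-placement side of such a drone network can be caricatured with a greedy coverage heuristic. Everything below is assumed for illustration: the arrest locations, candidate base sites, and DRONE_SPEED are synthetic; straight-line travel at constant speed stands in for real flight dynamics; and the greedy loop is a simple surrogate for the paper's integrated optimization-and-queuing model, which it does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic historical arrest locations and candidate base sites on a plane.
arrests = rng.uniform(0, 100, size=(500, 2))      # km coordinates
candidates = rng.uniform(0, 100, size=(40, 2))

DRONE_SPEED = 1.5  # km per minute (assumed cruise speed)

def response_times(bases):
    """Per-arrest drone response time from the nearest open base (minutes)."""
    d = np.linalg.norm(arrests[:, None, :] - candidates[bases][None, :, :], axis=2)
    return d.min(axis=1) / DRONE_SPEED

def greedy_bases(target_90th, max_bases=40):
    """Greedily open bases until the 90th-percentile response time meets the target."""
    chosen = []
    while len(chosen) < max_bases:
        best = min(
            (i for i in range(len(candidates)) if i not in chosen),
            key=lambda i: np.percentile(response_times(chosen + [i]), 90),
        )
        chosen.append(best)
        if np.percentile(response_times(chosen), 90) <= target_90th:
            break
    return chosen

bases = greedy_bases(target_90th=10.0)
p90 = np.percentile(response_times(bases), 90)
```

A greedy heuristic like this gives feasible but generally suboptimal placements; the study's contribution is precisely that an exact optimization model, coupled with queuing to size the drone fleet at each base, does substantially better than ad hoc siting.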


Subject(s)
Cardiopulmonary Resuscitation/standards , Defibrillators/standards , Emergency Medical Services/standards , Models, Theoretical , Out-of-Hospital Cardiac Arrest/therapy , Time-to-Treatment/standards , Aged , Cardiopulmonary Resuscitation/methods , Cardiopulmonary Resuscitation/trends , Defibrillators/trends , Emergency Medical Services/methods , Emergency Medical Services/trends , Female , Humans , Male , Middle Aged , Ontario/epidemiology , Out-of-Hospital Cardiac Arrest/epidemiology , Time-to-Treatment/trends
18.
Med Phys ; 43(3): 1212-21, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26936706

ABSTRACT

PURPOSE: To determine how training set size affects the accuracy of knowledge-based treatment planning (KBP) models. METHODS: The authors selected four models from three classes of KBP approaches, corresponding to three distinct quantities that KBP models may predict: dose-volume histogram (DVH) points, DVH curves, and objective function weights. DVH point prediction is done using the best plan from a database of similar clinical plans; DVH curve prediction employs principal component analysis and multiple linear regression; and objective function weight prediction uses either logistic regression or K-nearest neighbors. The authors trained each KBP model using training sets of sizes n = 10, 20, 30, 50, 75, 100, 150, and 200. The authors set aside 100 randomly selected patients from their cohort of 315 prostate cancer patients from Princess Margaret Cancer Center to serve as a validation set for all experiments. For each value of n, the authors randomly selected 100 different training sets with replacement from the remaining 215 patients. Each of the 100 training sets was used to train a model for each value of n and for each KBP approach. To evaluate the models, the authors predicted the KBP endpoints for each of the 100 patients in the validation set. To estimate the minimum required sample size, the authors used statistical testing to determine if the median error for each sample size from 10 to 150 is equal to the median error for the maximum sample size of 200. RESULTS: The minimum required sample size was different for each model. The DVH point prediction method predicts two dose metrics for the bladder and two for the rectum. The authors found that more than 200 samples were required to achieve consistent model predictions for all four metrics. For DVH curve prediction, the authors found that at least 75 samples were needed to accurately predict the bladder DVH, while only 20 samples were needed to predict the rectum DVH.
Finally, for objective function weight prediction, at least 10 samples were needed to train the logistic regression model, while at least 150 samples were required to train the K-nearest neighbor model. CONCLUSIONS: The minimum sample size needed to accurately train KBP models for prostate cancer depends on the specific model and the endpoint to be predicted. The authors' results may provide a lower bound for more complicated tumor sites.
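The experimental design above (repeated training draws of increasing size, evaluated against a fixed validation set) can be sketched as a learning-curve experiment. The data-generating model, the simple linear predictor, and the median-absolute-error metric below are illustrative assumptions, not the authors' KBP models or endpoints:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic planning data: one geometry feature predicting one dose metric.
def make_patients(n):
    x = rng.uniform(0, 1, n)
    y = 40 + 20 * x + rng.normal(0, 2, n)        # dose metric in Gy with noise
    return x, y

x_val, y_val = make_patients(100)                # fixed validation set
pool_x, pool_y = make_patients(215)              # pool to draw training sets from

def median_error(n, repeats=100):
    """Median absolute validation error over repeated size-n training draws."""
    errs = []
    for _ in range(repeats):
        idx = rng.choice(215, size=n, replace=True)   # sample with replacement
        A = np.column_stack([np.ones(n), pool_x[idx]])
        coef, *_ = np.linalg.lstsq(A, pool_y[idx], rcond=None)
        pred = coef[0] + coef[1] * x_val
        errs.append(np.median(np.abs(pred - y_val)))
    return float(np.median(errs))

sizes = [10, 20, 30, 50, 75, 100, 150, 200]
curve = {n: median_error(n) for n in sizes}
```

The study's "minimum required sample size" is then the smallest n at which the error distribution is statistically indistinguishable from that at n = 200; here one would compare the per-draw error samples at each n against those at 200 with a nonparametric test.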


Subject(s)
Radiotherapy Planning, Computer-Assisted/methods , Humans , Male , Prostatic Neoplasms/radiotherapy , Radiotherapy, Intensity-Modulated , Sample Size
19.
Med Phys ; 42(4): 1586-95, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25832049

ABSTRACT

PURPOSE: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. METHODS: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. RESULTS: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. 
CONCLUSIONS: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.
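Of the three models, the distance-weighted KNN is the simplest to sketch. The geometry features and weight vectors below are synthetic surrogates for the OV/OVS features and inverse-optimized objective weights described above, and the two-component weight vector (bladder, rectum) is an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cohort: two geometry features (OV and OVS surrogates) and a
# two-component objective weight vector (bladder, rectum) summing to 1.
n = 315
feats = rng.uniform(0, 1, size=(n, 2))
w_bladder = 1.0 / (1.0 + np.exp(-(2.0 * feats[:, 0] - 1.5 * feats[:, 1])))
weights = np.column_stack([w_bladder, 1.0 - w_bladder])

def knn_predict(query, k=5):
    """Distance-weighted k-NN prediction of the objective weight vector."""
    d = np.linalg.norm(feats - query, axis=1)
    nearest = np.argsort(d)[:k]                  # indices of the k closest patients
    inv = 1.0 / (d[nearest] + 1e-8)              # inverse-distance weights
    pred = (inv[:, None] * weights[nearest]).sum(axis=0) / inv.sum()
    return pred / pred.sum()                     # renormalize to a valid weight vector

pred = knn_predict(np.array([0.6, 0.3]))
```

Predicting all weight components jointly, as here, is what lets the model capture the bladder-rectum trade-off; predicting each weight independently could produce combinations never seen in clinically acceptable plans.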


Subject(s)
Machine Learning , Prostatic Neoplasms/radiotherapy , Radiotherapy, Intensity-Modulated/methods , Datasets as Topic , Humans , Logistic Models , Male , Organs at Risk/radiation effects , Photons/therapeutic use , Prostate/radiation effects , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Rectum/radiation effects , Retrospective Studies , Urinary Bladder/radiation effects