Results 1 - 20 of 75
1.
Entropy (Basel) ; 26(4)2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38667889

ABSTRACT

We consider a constructive definition of the multivariate Pareto that factorizes the random vector into a radial component and an independent angular component. The former follows a univariate Pareto distribution, and the latter is defined on the surface of the positive orthant of the infinity-norm unit hypercube. We propose a method for inferring the distribution of the angular component by identifying its support as the limit of the positive orthant of the unit p-norm spheres, and we introduce a projected gamma family of distributions defined through the normalization of a vector of independent random gammas onto this space. This serves to construct a flexible family of distributions obtained as a Dirichlet process mixture of projected gammas. For model assessment, we discuss scoring methods appropriate to distributions on the unit hypercube. In particular, working with the energy score criterion, we develop a kernel metric that produces a proper scoring rule and present a simulation study to compare different modeling choices using the proposed metric. Using our approach, we describe the dependence structure of extreme values in integrated vapor transport (IVT) data, which describe the flow of atmospheric moisture along the coast of California. We find clear but heterogeneous geographical dependence.
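
A minimal numpy sketch of the projected-gamma construction described above: independent gamma variates are normalized by their p-norm so that the angular vector lies on the positive orthant of the unit p-norm sphere, and a univariate Pareto radial component is attached; as p grows, the support approaches the infinity-norm surface. The dimensions, shape parameters, and tail index below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def projected_gamma_sample(shape_params, p=20.0, n=1000, rng=rng):
    """Angular vectors: independent gammas normalized onto the unit p-norm sphere."""
    d = len(shape_params)
    g = rng.gamma(shape=shape_params, scale=1.0, size=(n, d))   # independent gamma draws
    norm = np.linalg.norm(g, ord=p, axis=1, keepdims=True)      # p-norm of each vector
    return g / norm                                             # positive-orthant sphere points

angles = projected_gamma_sample(shape_params=[2.0, 1.0, 0.5])   # illustrative 3-d example
radial = 1.0 + rng.pareto(a=1.0, size=len(angles))              # univariate Pareto radial part
pareto_vectors = radial[:, None] * angles                       # radial x angular factorization
print(pareto_vectors[:3])
```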

2.
Stat Med ; 42(9): 1338-1352, 2023 04 30.
Article in English | MEDLINE | ID: mdl-36757145

ABSTRACT

Outcome-dependent sampling (ODS) is a commonly used class of sampling designs to increase estimation efficiency in settings where response information (and possibly adjuster covariates) is available, but the exposure is expensive and/or cumbersome to collect. We focus on ODS within the context of a two-phase study, where in Phase One the response and adjuster covariate information is collected on a large cohort that is representative of the target population, but the expensive exposure variable is not yet measured. In Phase Two, using response information from Phase One, we selectively oversample a subset of informative subjects in whom we collect the expensive exposure information. Importantly, the Phase Two sample is no longer representative, and we must use ascertainment-correcting analysis procedures for valid inferences. In this paper, we focus on likelihood-based analysis procedures, particularly a conditional-likelihood approach and a full-likelihood approach. Whereas the full likelihood retains incomplete Phase One data for subjects not selected into Phase Two, the conditional likelihood explicitly conditions on Phase Two sample selection (i.e., it is a "complete case" analysis procedure). These designs and analysis procedures are typically implemented assuming a known, parametric model for the response distribution. In this paper, however, we implement analyses using a novel semi-parametric extension of generalized linear models (SPGLM) to develop likelihood-based procedures with improved robustness to misspecification of distributional assumptions. We specifically focus on the common setting where standard GLM distributional assumptions are not satisfied (e.g., a misspecified mean/variance relationship). We aim to provide practical design guidance and flexible tools for practitioners in these settings.
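
The Phase Two selection step lends itself to a short illustration. The sketch below, a hypothetical example rather than the paper's design, oversamples subjects in the tails of the Phase One response before the expensive exposure is measured; the column names, quantile cutoffs, and stratum sizes are all assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical Phase One cohort: response y and a cheap adjuster covariate z are observed;
# the expensive exposure exists but has not yet been measured.
n = 5000
phase1 = pd.DataFrame({"id": np.arange(n), "z": rng.normal(size=n), "y": rng.normal(size=n)})

def ods_phase2(df, n_low, n_mid, n_high, rng=rng):
    """Outcome-dependent Phase Two selection: oversample the tails of the response."""
    lo, hi = df["y"].quantile([0.1, 0.9])
    strata = {"low": df[df.y <= lo], "mid": df[(df.y > lo) & (df.y < hi)], "high": df[df.y >= hi]}
    sizes = {"low": n_low, "mid": n_mid, "high": n_high}
    picks = [strata[k].sample(n=sizes[k], random_state=int(rng.integers(1 << 31))) for k in strata]
    return pd.concat(picks)

phase2 = ods_phase2(phase1, n_low=150, n_mid=100, n_high=150)
# The exposure would now be measured only on `phase2`; any analysis must correct for this
# non-representative ascertainment (e.g., a conditional or full likelihood as in the paper).
print(phase2["y"].describe())
```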


Subjects
Statistical Models, Humans, Linear Models, Likelihood Functions
3.
BMC Med Res Methodol ; 23(1): 87, 2023 04 10.
Article in English | MEDLINE | ID: mdl-37038100

ABSTRACT

BACKGROUND: Multi-state models are used to study several clinically meaningful research questions. Depending on the research question of interest and the information contained in the data, different multi-state structures and modelling choices can be applied. We aim to explore different research questions using a series of multi-state models of increasing complexity when studying repeated prescriptions data, while also evaluating different modelling choices. METHODS: We develop a series of research questions regarding the probability of being under antidepressant medication across time using multi-state models, among Swedish women diagnosed with breast cancer (n = 18,313) and an age-matched population comparison group of cancer-free women (n = 92,454), using a register-based database (Breast Cancer Data Base Sweden 2.0). Research questions were formulated ranging from simple to more composite ones. Depending on the research question, multi-state models were built with structures ranging from simpler ones, such as single-event survival analysis and competing risks, up to complex bidirectional and recurrent multi-state structures that take into account the recurring start and stop of medication. We also investigate modelling choices, such as choosing a timescale for the transition rates and borrowing information across transitions. RESULTS: Each structure has its own utility and answers a specific research question. However, the more complex structures (bidirectional, recurrent) enable accounting for the intermittent nature of prescribed medication data. These structures deliver estimates of the probability of being under medication and of the total time spent under medication over the follow-up period. Sensitivity analyses over different definitions of the medication cycle and different choices of timescale when modelling the transition intensity rates show that the estimates of total probabilities of being in a medication cycle over follow-up derived from the complex structures are quite stable. CONCLUSIONS: Each research question requires the definition of an appropriate multi-state structure, with more composite questions requiring a corresponding increase in the complexity of that structure. When a research question relates to an outcome of interest that repeatedly changes over time, such as medication status based on prescribed medication, the use of novel multi-state models of adequate complexity, coupled with sensible modelling choices, can successfully address composite, more realistic research questions.
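
A toy simulation can make the bidirectional, recurrent structure concrete. The sketch below, which is not the paper's fitted model, treats "off medication", "on medication", and "death" as a continuous-time Markov chain with constant, made-up transition rates and estimates the probability of being on medication over follow-up.

```python
import numpy as np

rng = np.random.default_rng(2)

# States: 0 = off medication, 1 = on medication, 2 = dead (absorbing).
# Constant, purely illustrative transition rates per year.
rates = {(0, 1): 0.30, (0, 2): 0.02, (1, 0): 0.50, (1, 2): 0.03}

def simulate_path(t_max=10.0, rng=rng):
    """Simulate one subject's trajectory through the off/on/dead structure."""
    t, state, path = 0.0, 0, [(0.0, 0)]
    while t < t_max and state != 2:
        moves = [(dest, r) for (src, dest), r in rates.items() if src == state]
        total = sum(r for _, r in moves)
        t += rng.exponential(1.0 / total)                 # waiting time to the next transition
        if t >= t_max:
            break
        probs = np.array([r for _, r in moves]) / total
        state = rng.choice([dest for dest, _ in moves], p=probs)
        path.append((t, state))
    return path

def state_at(path, t):
    """State occupied at time t (set by the last transition at or before t)."""
    return [s for u, s in path if u <= t][-1]

paths = [simulate_path() for _ in range(5000)]
grid = np.linspace(0.0, 10.0, 41)
p_on = [np.mean([state_at(p, t) == 1 for p in paths]) for t in grid]
print(dict(zip(grid[::10], np.round(p_on[::10], 3))))     # P(on medication) over follow-up
```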


Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/drug therapy, Local Neoplasm Recurrence, Antidepressive Agents/therapeutic use, Registries, Drug Prescriptions
4.
J Toxicol Environ Health A ; 86(7): 217-229, 2023 04 03.
Article in English | MEDLINE | ID: mdl-36809963

ABSTRACT

Probabilistic survival methods have been used in health research to analyze risk factors and adverse health outcomes associated with COVID-19. The aim of this study was to employ a probabilistic model, selected among three distributions (exponential, Weibull, and lognormal), to investigate the time from hospitalization to death and determine the mortality risks among hospitalized patients with COVID-19. A retrospective cohort study was conducted for patients hospitalized due to COVID-19 within 30 days in Londrina, Brazil, between January 2021 and February 2022, registered in the database for severe acute respiratory infections (SIVEP-Gripe). Graphical and Akaike Information Criterion (AIC) methods were used to compare the efficiency of the three probabilistic models. The results from the final model were presented as hazard and event time ratios. Our study comprised 7,684 individuals, with an overall case fatality rate of 32.78%. The data suggested that older age, male sex, a severe comorbidity score, intensive care unit admission, and invasive ventilation significantly increased the risk of in-hospital mortality. Our study highlights the conditions that confer higher risks for adverse clinical outcomes attributed to COVID-19. The step-by-step process for selecting appropriate probabilistic models may be extended to other investigations in health research to provide more reliable evidence on this topic.
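
The distribution-selection step can be sketched with the lifelines library (an assumption; the study does not state its software): fit the three candidate distributions to right-censored time-from-hospitalization-to-death data and keep the one with the lowest AIC. The data below are synthetic placeholders.

```python
import numpy as np
from lifelines import ExponentialFitter, LogNormalFitter, WeibullFitter

rng = np.random.default_rng(3)

# Placeholder data: days from hospitalization to death, administratively censored at 30 days.
true_times = rng.weibull(1.4, 500) * 12.0
observed = true_times <= 30
durations = np.minimum(true_times, 30.0)

fitters = {"exponential": ExponentialFitter(), "weibull": WeibullFitter(), "lognormal": LogNormalFitter()}
aics = {}
for name, f in fitters.items():
    f.fit(durations, event_observed=observed)     # maximum likelihood fit with right censoring
    aics[name] = f.AIC_

print(aics, "-> lowest AIC:", min(aics, key=aics.get))
```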


Assuntos
COVID-19 , Humanos , Masculino , SARS-CoV-2 , Estudos Retrospectivos , América Latina , Hospitalização
5.
Lifetime Data Anal ; 29(2): 342-371, 2023 04.
Article in English | MEDLINE | ID: mdl-36472759

ABSTRACT

Nested case-control sampled event time data under a highly stratified proportional hazards model, in which the number of strata increases in proportion to the sample size, are described and analyzed. The data can be characterized as stratified sampling from the event time risk sets, and the analysis approach of Borgan et al. (Ann Stat 23:1749-1778, 1995) is adapted to accommodate both the stratification and case-control sampling from the stratified risk sets. Conditions for the consistency and asymptotic normality of the maximum partial likelihood estimator are provided, and the results are used to compare the efficiency of the stratified analysis to an unstratified analysis when the baseline hazards can be semi-parametrically modeled in two special cases. Using the stratified sampling representation of the stratified analysis, methods for absolute risk estimation described by Borgan et al. (1995) for nested case-control data are used to develop methods for absolute risk estimation under the stratified model. The methods are illustrated by a year-of-birth-stratified analysis of radon exposure and lung cancer mortality in a cohort of uranium miners from the Colorado Plateau.
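
A small sketch of the sampling design itself may help: for every case, controls are drawn from subjects of the same stratum (here, year of birth) who are still at risk at the case's event time. The cohort, stratum variable, and number of controls per case below are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical cohort: event time, event indicator, and a stratification variable (birth year).
n = 2000
cohort = pd.DataFrame({"id": np.arange(n),
                       "birth_year": rng.integers(1900, 1930, size=n),
                       "time": rng.exponential(20.0, size=n),
                       "event": rng.random(n) < 0.15})

def sample_nested_case_control(df, m_controls=2, rng=rng):
    """For each case, sample controls from the same-stratum risk set at the case's event time."""
    rows = []
    for case in df[df.event].itertuples():
        risk_set = df[(df.birth_year == case.birth_year) & (df.time >= case.time) & (df.id != case.id)]
        controls = risk_set.sample(n=min(m_controls, len(risk_set)),
                                   random_state=int(rng.integers(1 << 31)))
        rows.append((case.id, case.time, case.id, 1))
        rows.extend((case.id, case.time, c, 0) for c in controls.id)
    return pd.DataFrame(rows, columns=["set_id", "set_time", "subject", "case"])

print(sample_nested_case_control(cohort).head())
```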


Subjects
Lung Neoplasms, Humans, Proportional Hazards Models, Case-Control Studies, Cohort Studies, Sample Size
6.
Biometrics ; 78(1): 128-140, 2022 03.
Article in English | MEDLINE | ID: mdl-33249556

ABSTRACT

In biomedical practice, multiple biomarkers are often combined using a prespecified classification rule with a tree structure for diagnostic decisions. The classification structure and the cutoff point at each node of the tree are usually chosen on an ad hoc basis, depending on decision makers' experience. There is a lack of analytical approaches that lead to optimal prediction performance and that guide the choice of optimal cutoff points in a prespecified classification tree. In this paper, we propose to search for and estimate the optimal decision rule through an approach of rank correlation maximization. The proposed method is flexible, theoretically sound, and computationally feasible when many biomarkers are available for classification or prediction. Using the proposed approach, for a prespecified tree-structured classification rule, we can guide the choice of optimal cutoff points at the tree nodes and estimate the optimal prediction performance from multiple biomarkers combined.
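
A toy version of the rank-correlation idea, under assumptions not taken from the paper: for a fixed two-marker "and" rule, grid-search the pair of cutoffs that maximizes Kendall's tau between the rule's output and the outcome.

```python
import numpy as np
from itertools import product
from scipy.stats import kendalltau

rng = np.random.default_rng(5)

# Synthetic biomarkers and a binary outcome (placeholders for real data).
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = (0.8 * x1 + 0.6 * x2 + rng.normal(scale=0.8, size=n)) > 0

def tree_rule(x1, x2, c1, c2):
    """Prespecified 'and' structure: classify positive only if both markers exceed their cutoffs."""
    return (x1 > c1) & (x2 > c2)

# Candidate cutoffs taken at the empirical quantiles of each marker.
grid1 = np.quantile(x1, np.linspace(0.05, 0.95, 19))
grid2 = np.quantile(x2, np.linspace(0.05, 0.95, 19))
best = max(product(grid1, grid2), key=lambda c: kendalltau(tree_rule(x1, x2, *c), y)[0])
print("cutoffs maximizing rank correlation:", np.round(best, 2))
```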


Subjects
Biomarkers
7.
Biom J ; 64(7): 1161-1177, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35708221

ABSTRACT

In competing risks settings where the events are death due to cancer and death due to other causes, it is common practice to use time since diagnosis as the timescale for all competing events. However, attained age has been proposed as a more natural choice of timescale for modeling other-cause mortality. We examine the choice of using time since diagnosis versus attained age as the timescale when modeling other-cause mortality, assuming that the hazard rate is a function of attained age, and how this choice can influence the cumulative incidence functions (CIFs) derived using flexible parametric survival models. An initial analysis of the colon cancer data from the population-based Swedish Cancer Register indicates such an influence. A simulation study is conducted in order to assess the impact of the choice of timescale for other-cause mortality on the bias of the estimated CIFs and how different factors may influence the bias. We also use regression standardization methods in order to obtain marginal CIF estimates. Using time since diagnosis as the timescale for all competing events leads to a low degree of bias in the CIF for cancer mortality (CIF1) under all approaches. It also leads to a low degree of bias in the CIF for other-cause mortality (CIF2), provided that the effect of age at diagnosis is included in the model with sufficient flexibility, with higher bias under scenarios where a covariate has a time-varying effect on the hazard rate for other-cause mortality on the attained age scale.
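
Numerically, the CIFs combine the cause-specific hazards as CIF_k(t) = integral from 0 to t of h_k(u) S(u) du, with S the all-cause survival. The sketch below illustrates this with a cancer hazard on the time-since-diagnosis scale and a Gompertz-like other-cause hazard on the attained-age scale; the hazard forms and the age at diagnosis are illustrative, not the paper's fitted models.

```python
import numpy as np

t = np.linspace(0.0, 20.0, 2001)                    # years since diagnosis
age_at_dx = 70.0                                    # illustrative age at diagnosis

h_cancer = 0.05 * np.exp(-0.10 * t)                 # cause 1: hazard on the time-since-diagnosis scale
h_other = 1e-4 * np.exp(0.09 * (age_at_dx + t))     # cause 2: Gompertz-like hazard on attained age

dt = t[1] - t[0]
S = np.exp(-np.cumsum(h_cancer + h_other) * dt)     # all-cause survival

cif_cancer = np.cumsum(h_cancer * S) * dt           # CIF_1(t): integral of h_1(u) S(u) du
cif_other = np.cumsum(h_other * S) * dt             # CIF_2(t)
print("10-year CIFs:", round(cif_cancer[t <= 10][-1], 3), round(cif_other[t <= 10][-1], 3))
```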


Subjects
Regression Analysis, Bias, Computer Simulation, Incidence, Proportional Hazards Models, Risk Assessment
8.
J Environ Manage ; 308: 114639, 2022 Apr 15.
Article in English | MEDLINE | ID: mdl-35151104

ABSTRACT

Forests play a vital role in maintaining the global carbon balance. In recent years, however, forest ecosystems worldwide have come under increasing threat from climate change and deforestation. Monitoring forests, and specifically forest biomass, is essential for tracking changes in carbon stocks and the global carbon cycle. However, developing countries lack the capacity to actively monitor forest carbon stocks, which ultimately adds uncertainty to estimates of country-specific contributions to global carbon emissions. In India, authorities use field-based measurements to estimate biomass, which becomes unfeasible to implement at finer scales due to higher costs. To address this, the present study proposed a framework to monitor above-ground biomass (AGB) at finer scales using open-source satellite data. The framework integrated four machine learning (ML) techniques with field surveys and satellite data to provide continuous spatial estimates of AGB at finer resolution. The application of this framework is exemplified as a case study for a dry deciduous tropical forest in India. The results revealed that, for wet-season Sentinel-2 satellite data, the Random Forest (adjusted R2 = 0.91) and Artificial Neural Network (adjusted R2 = 0.77) ML models were better suited for estimating AGB in the study area. For dry-season satellite data, all the ML models failed to estimate AGB adequately (adjusted R2 between -0.05 and 0.43). Ensemble analysis of the ML predictions not only made the results more reliable but also quantified spatial uncertainty in the predictions as a metric to identify their robustness.
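
A hedged scikit-learn sketch of the modelling step described above: regress plot-level AGB on satellite-derived predictors with a Random Forest and a neural network, score with adjusted R2, and average the predictions as a simple ensemble whose spread serves as an uncertainty proxy. The feature matrix is synthetic and stands in for Sentinel-2 bands/indices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)

# Synthetic stand-in for plot-level Sentinel-2 predictors and field-measured AGB (Mg/ha).
X = rng.normal(size=(300, 8))
y = 50 + X @ (rng.normal(size=8) * 10) + rng.normal(scale=5, size=300)

def adjusted_r2(y_true, y_pred, n_features):
    r2, n = r2_score(y_true, y_pred), len(y_true)
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {"random_forest": RandomForestRegressor(n_estimators=500, random_state=0),
          "ann": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)}

preds = {}
for name, model in models.items():
    preds[name] = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "adjusted R2:", round(adjusted_r2(y_te, preds[name], X.shape[1]), 3))

stack = np.vstack(list(preds.values()))
ensemble_mean, ensemble_sd = stack.mean(axis=0), stack.std(axis=0)   # spread = uncertainty proxy
print("mean ensemble spread:", round(float(ensemble_sd.mean()), 2))
```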


Subjects
Ecosystem, Remote Sensing Technology, Biomass, Carbon, Machine Learning, Tropical Climate
9.
Value Health ; 24(11): 1643-1650, 2021 11.
Article in English | MEDLINE | ID: mdl-34711365

ABSTRACT

OBJECTIVES: To compare finite mixture models with common survival models with respect to how well they fit heterogeneous data used to estimate mean survival times required for cost-effectiveness analysis. METHODS: Publicly available overall survival (OS) and progression-free survival (PFS) curves were digitized to produce nonproprietary data. Regression models based on the following distributions were fit to the data: Weibull, lognormal, log-logistic, generalized F, generalized gamma, Gompertz, mixture of 2 Weibulls, and mixture of 3 Weibulls. A second set of analyses was performed based on data in which patients who had not experienced an event by 30 months were censored. Model performance was compared based on the Akaike information criterion (AIC). RESULTS: For PFS, the 3-Weibull mixture (AIC = 479.94) and 2-Weibull mixture (AIC = 488.24) models outperformed the other models by more than 40 points and produced the most accurate estimates of mean survival times. For OS, the AIC values for all models were similar (all within 4 points). The means for the 3-Weibull mixture model (17.60 months) and the 2-Weibull mixture model (17.59 months) were the closest to the Kaplan-Meier mean estimate (17.58 months). The results and conclusions from the censored analysis of PFS were similar to those from the uncensored PFS analysis. On the basis of extrapolated mean OS, all models produced estimates within 10% of the Kaplan-Meier mean survival time. CONCLUSIONS: Finite mixture models offer a flexible modeling approach that has benefits over standard parametric models when analyzing heterogeneous data to estimate the survival times needed for cost-effectiveness analysis.
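
A hedged scipy sketch of fitting one of the candidate models, a two-component Weibull mixture, to right-censored data by direct maximum likelihood, then reading off the AIC and the implied mean survival time. The parameterization, starting values, and synthetic data are illustrative, not the study's specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gamma as gamma_fn
from scipy.stats import weibull_min

rng = np.random.default_rng(7)

# Synthetic heterogeneous data: a mixture of two Weibulls, censored at 30 months.
t_true = np.where(rng.random(600) < 0.6, rng.weibull(1.2, 600) * 8.0, rng.weibull(2.5, 600) * 25.0)
event = t_true <= 30
t_obs = np.minimum(t_true, 30.0)

def neg_loglik(params):
    """Right-censored log-likelihood of a 2-Weibull mixture; parameters on unconstrained scales."""
    w = expit(params[0])                               # mixing weight in (0, 1)
    k1, s1, k2, s2 = np.exp(params[1:])                # shapes and scales > 0
    pdf = w * weibull_min.pdf(t_obs, k1, scale=s1) + (1 - w) * weibull_min.pdf(t_obs, k2, scale=s2)
    sf = w * weibull_min.sf(t_obs, k1, scale=s1) + (1 - w) * weibull_min.sf(t_obs, k2, scale=s2)
    return -np.sum(np.where(event, np.log(pdf + 1e-300), np.log(sf + 1e-300)))

fit = minimize(neg_loglik, x0=[0.0, 0.0, np.log(10.0), 0.5, np.log(20.0)],
               method="Nelder-Mead", options={"maxiter": 20000})

w = expit(fit.x[0]); k1, s1, k2, s2 = np.exp(fit.x[1:])
mean_surv = w * s1 * gamma_fn(1 + 1 / k1) + (1 - w) * s2 * gamma_fn(1 + 1 / k2)
print(f"AIC = {2 * len(fit.x) + 2 * fit.fun:.1f}, extrapolated mean survival = {mean_surv:.2f} months")
```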


Subjects
Cost-Benefit Analysis, Progression-Free Survival, Survival Rate, Clinical Trials as Topic, Humans, Kaplan-Meier Estimate, Statistical Models
10.
BMC Med Res Methodol ; 21(1): 177, 2021 08 28.
Article in English | MEDLINE | ID: mdl-34454428

ABSTRACT

BACKGROUND: Non-proportional hazards are common with time-to-event data, but the majority of randomised clinical trials (RCTs) are designed and analysed using approaches which assume the treatment effect follows proportional hazards (PH). Recent advances in oncology treatments have identified two forms of non-PH of particular importance: a time lag until treatment becomes effective, and an early effect of treatment that ceases after a period of time. In sample size calculations for treatment effects on time-to-event outcomes, where information is based on the number of events rather than the number of participants, correct specification of the baseline hazard rate is of crucial importance, among other considerations. Under PH, the shape of the baseline hazard has no effect on the resultant power and magnitude of treatment effects using standard analytical approaches. However, in a non-PH context the appropriateness of analytical approaches can depend on the shape of the underlying hazard. METHODS: A simulation study was undertaken to assess the impact of clinically plausible non-constant baseline hazard rates on the power, magnitude and coverage of commonly utilized regression-based measures of treatment effect and tests of survival curve difference for these two forms of non-PH used in RCTs with time-to-event outcomes. RESULTS: In the presence of even mild departures from PH, the power, average treatment effect size and coverage were adversely affected. Depending on the nature of the non-proportionality, non-constant event rates could further exacerbate or somewhat ameliorate the losses in power, treatment effect magnitude and coverage observed. No single summary measure of treatment effect was able to adequately describe the full extent of a potentially time-limited treatment benefit whilst maintaining power at nominal levels. CONCLUSIONS: Our results show the increased importance of considering plausible, potentially non-constant event rates when non-proportionality of treatment effects could be anticipated. In planning clinical trials with the potential for non-PH, even modest departures from an assumed constant baseline hazard could appreciably impact the power to detect treatment effects, depending on the nature of the non-PH. Comprehensive analysis plans may be required to accommodate the description of time-dependent treatment effects.
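
One of the two non-PH patterns, a lag before the treatment takes effect, can be simulated under a non-constant (Weibull) baseline hazard by inverting the piecewise cumulative hazard, and the log-rank power estimated empirically. The sketch assumes the lifelines library is available; the hazard parameters, lag, follow-up, and sample sizes are illustrative.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(8)

k, lam, lag, hr_after_lag = 1.5, 12.0, 3.0, 0.6     # Weibull baseline; effect (HR 0.6) starts at 3 months

def H0(t):                                          # baseline cumulative hazard
    return (t / lam) ** k

def H0_inv(h):                                      # its inverse
    return lam * h ** (1.0 / k)

def draw_times(n, treated):
    e = rng.exponential(size=n)                     # unit exponentials; solve H(T) = e by inversion
    if not treated:
        return H0_inv(e)
    excess = np.maximum(e - H0(lag), 0.0)           # hazard is reduced only after the lag
    return np.where(e <= H0(lag), H0_inv(e), H0_inv(H0(lag) + excess / hr_after_lag))

def one_trial(n_per_arm=300, follow_up=24.0):
    t_c, t_t = draw_times(n_per_arm, False), draw_times(n_per_arm, True)
    res = logrank_test(np.minimum(t_c, follow_up), np.minimum(t_t, follow_up),
                       event_observed_A=t_c <= follow_up, event_observed_B=t_t <= follow_up)
    return res.p_value < 0.05

print("empirical log-rank power under a 3-month lag:", round(np.mean([one_trial() for _ in range(500)]), 3))
```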


Subjects
Research Design, Computer Simulation, Humans, Proportional Hazards Models, Sample Size
11.
Sensors (Basel) ; 21(2)2021 Jan 10.
Article in English | MEDLINE | ID: mdl-33435201

ABSTRACT

The soil water retention curve (SWRC) describes the relationship between soil water content (θ) and soil water potential (ψ) and provides fundamental information for quantifying and modeling soil water entry, storage, flow, and groundwater recharge processes. While traditionally it is measured in a laboratory through cumbersome and time-intensive methods, soil sensors measuring in-situ θ and ψ show strong potential to estimate the in-situ SWRC. The objective of this study was to estimate the in-situ SWRC at different depths under two different soil types by integrating θ and ψ measured with two commercial sensors based on time-domain reflectometry (TDR) and dielectric water potential (e.g., MPS-6) principles. Parametric models were used to quantify the θ-ψ relationships at various depths and were compared to laboratory-measured SWRCs. The results of the study show that combining TDR and MPS-6 sensors can be used to estimate plant-available water and the SWRC, with a mean difference of -0.03 to 0.23 m³ m⁻³ between the modeled and laboratory data, which could be caused by the sensors' lack of site-specific calibration or possible air entrapment in the field soil. However, consistent trends (with differences in magnitude) indicated the potential of these sensors for estimating in-situ and dynamic SWRCs at depth and provided a way forward in overcoming resource-intensive laboratory measurements.
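
The abstract refers to "parametric models" for the θ-ψ relationship without naming them; a common choice (assumed here, not stated in the paper) is a van Genuchten-type retention function, which can be fit to paired sensor readings with scipy's curve_fit. The data, parameter bounds, and the 33/1500 kPa plant-available-water convention below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """van Genuchten retention function; psi is suction (kPa, positive), theta in m3/m3."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Stand-in for co-located TDR (theta) and matric-potential (psi) sensor readings.
psi = np.logspace(0, 3, 60)                                   # 1 to 1000 kPa suction
theta = van_genuchten(psi, 0.05, 0.42, 0.08, 1.6) + rng.normal(scale=0.01, size=psi.size)

p0 = [0.05, 0.40, 0.05, 1.5]                                  # rough starting values
bounds = ([0.0, 0.2, 1e-4, 1.01], [0.2, 0.6, 1.0, 4.0])       # keep parameters physically plausible
params, _ = curve_fit(van_genuchten, psi, theta, p0=p0, bounds=bounds)
print(dict(zip(["theta_r", "theta_s", "alpha", "n"], np.round(params, 3))))

# Plant-available water is often approximated as theta(33 kPa) - theta(1500 kPa).
print("plant-available water:", round(van_genuchten(33.0, *params) - van_genuchten(1500.0, *params), 3))
```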

12.
J Intern Med ; 287(6): 723-733, 2020 06.
Article in English | MEDLINE | ID: mdl-32012369

ABSTRACT

OBJECTIVES: A family history of colorectal cancer (CRC) is an established risk factor for developing CRC, whilst the impact of family history on prognosis is unclear. The present study assessed the association between family history and prognosis and, based on current evidence, explored whether this association was modified by age at diagnosis. METHODS: Using data from the Swedish Colorectal Cancer Registry (SCRCR) linked with the Multigeneration Register and the National Cancer Register, we identified 31 801 patients with a CRC diagnosed between 2007 and 2016. The SCRCR is a clinically rich database which includes information on cancer stage, grade, location, treatment, complications and postoperative follow-up. RESULTS: We estimated excess mortality rate ratios (EMRR) for relative survival and hazard ratios (HR) for disease-free survival with 95% confidence intervals (CIs) using flexible parametric models. We found no association between family history and relative survival (EMRR = 0.96, 95% CI: 0.89-1.03, P = 0.21) or disease-free survival (HR = 0.98, 95% CI: 0.91-1.06, P = 0.64). However, age was found to modify the impact of family history on prognosis. Young patients (<50 years at diagnosis) with a positive family history had less advanced (i.e. stage I and II) cancers than those with no family history (OR = 0.71, 95% CI: 0.56-0.89, P = 0.004) and lower excess mortality even after adjusting for cancer stage (EMRR = 0.63, 95% CI: 0.47-0.84, P = 0.002). CONCLUSIONS: Our results suggest that young individuals with a family history of CRC may have greater health awareness, attend opportunistic screening and adopt lifestyle changes, leading to earlier diagnosis and better prognosis.


Subjects
Colorectal Neoplasms/genetics, Medical History Taking/statistics & numerical data, Age Factors, Aged, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/mortality, Disease-Free Survival, Female, Health Knowledge, Attitudes, Practice, Humans, Male, Middle Aged, Prognosis, Registries, Risk Factors, Survival Analysis, Sweden/epidemiology
13.
Stat Med ; 39(22): 2949-2961, 2020 09 30.
Article in English | MEDLINE | ID: mdl-32519771

ABSTRACT

Pseudo-observations based on the nonparametric Kaplan-Meier estimator of the survival function have been proposed as an alternative to the widely used Cox model for analyzing censored time-to-event data. Using a spline-based estimator of the survival function has potential benefits over the nonparametric approach in terms of reduced variability. We propose to define pseudo-observations based on a flexible parametric estimator and to use these in regression models to estimate parameters related to the cumulative risk. We report the results of a simulation study that compares the empirical standard errors of estimates based on parametric and nonparametric pseudo-observations in various settings. Our simulations show that in some situations there is a substantial gain, in terms of reduced variability, from using the proposed parametric pseudo-observations compared with the nonparametric pseudo-observations. The gain can be measured as a reduction of the empirical standard error by up to about one third, corresponding to the precision gain from a 125% larger sample size. We illustrate the use of the proposed method in a brief data example.
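
For reference, the nonparametric pseudo-observations being compared against can be computed with the usual jackknife formula, theta_i = n*theta_hat - (n - 1)*theta_hat_(-i), applied to the Kaplan-Meier survival at a chosen time point. The sketch below uses the lifelines library and synthetic data; the parametric version discussed in the paper would replace the Kaplan-Meier call with a spline-based survival estimate.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(10)

# Placeholder right-censored data and a time point of interest.
t_true, cens = rng.exponential(10.0, 200), rng.exponential(15.0, 200)
durations, event = np.minimum(t_true, cens), t_true <= cens
t_star = 5.0

def km_surv(d, e, t):
    return float(KaplanMeierFitter().fit(d, event_observed=e).survival_function_at_times(t).iloc[0])

n = len(durations)
S_full = km_surv(durations, event, t_star)
pseudo = np.empty(n)
for i in range(n):                                    # leave-one-out jackknife
    mask = np.arange(n) != i
    pseudo[i] = n * S_full - (n - 1) * km_surv(durations[mask], event[mask], t_star)

# 1 - pseudo would then be analysed in a GLM (e.g., cloglog link) to model the cumulative risk at t_star.
print(round(pseudo.mean(), 3), "vs KM estimate", round(S_full, 3))
```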


Subjects
Survival Analysis, Computer Simulation, Humans, Proportional Hazards Models, Sample Size
14.
BMC Med Res Methodol ; 20(1): 17, 2020 01 29.
Article in English | MEDLINE | ID: mdl-31996148

ABSTRACT

BACKGROUND: Patients infected with the Human Immunodeficiency Virus (HIV) are susceptible to many diseases. In these patients, the occurrence of one disease alters the chance of contracting another. Under such circumstances, methods for competing risks are required. Recently, competing risks analyses within the scope of flexible parametric models have emerged to address this requirement. These lesser-known analyses have considerable advantages over conventional methods. METHODS: Using data from the Multicenter AIDS Cohort Study (MACS), this paper reviews and applies competing risks flexible parametric models to analyze the risk of the first disease (AIDS or non-AIDS) among HIV-infected patients. We compared two alternative subdistribution hazard flexible parametric models (SDHFPM1 and SDHFPM2) with the Fine & Gray model. To make a complete inference, we also fitted cause-specific hazard flexible parametric models for each event separately. RESULTS: Both SDHFPM1 and SDHFPM2 provided results consistent with the Fine & Gray model regarding the magnitude of coefficients and risk estimates; however, the competing risks flexible parametric models provided more efficient and smoother estimates of the baseline risks of the first disease. We found that age at HIV diagnosis indirectly affected the risk of AIDS as the first event by increasing the number of patients who experienced a non-AIDS disease prior to AIDS among those older than 40 years. Other significant covariates had direct effects on the risks of AIDS and non-AIDS. DISCUSSION: The choice of an appropriate model depends on the research goals and computational challenges. SDHFPM1 models each event separately and requires calculating censoring weights, which is time-consuming. In contrast, SDHFPM2 models all events simultaneously and is more appropriate for large datasets; however, when the focus is on one particular event, SDHFPM1 is preferable.


Subjects
AIDS-Related Opportunistic Infections/epidemiology, Acquired Immunodeficiency Syndrome/pathology, Coinfection/epidemiology, Statistical Models, Cohort Studies, Statistical Data Interpretation, Humans, Male, Proportional Hazards Models, Prospective Studies, Risk Assessment/methods, Risk Factors
15.
Biom J ; 62(4): 989-1011, 2020 07.
Article in English | MEDLINE | ID: mdl-31957910

ABSTRACT

Cure models are used in time-to-event analysis when not all individuals are expected to experience the event of interest, or when the survival of the considered individuals reaches the same level as that of the general population. These scenarios correspond to a plateau in the survival and relative survival functions, respectively. The main parameters of interest in cure models are the proportion of individuals who are cured, termed the cure proportion, and the survival function of the uncured individuals. Although numerous cure models have been proposed in the statistical literature, there is no consensus on how to formulate these. We introduce a general parametric formulation of mixture cure models and a new class of cure models, termed latent cure (LC) models, together with a general estimation framework and software, which enable fitting of a wide range of different models. Through simulations, we assess the statistical properties of the models with respect to the cure proportion and the survival of the uncured individuals. Finally, we illustrate the models using survival data on colon cancer, which typically display a plateau in the relative survival. As demonstrated in the simulations, mixture cure models, which are not guaranteed to be constant after a finite time point, tend to produce accurate estimates of the cure proportion and the survival of the uncured. However, these models are very unstable in certain cases due to identifiability issues, whereas LC models generally provide stable results at the price of more biased estimates.
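
As a point of reference for the general formulation, the classical mixture cure model sets S(t) = pi + (1 - pi) S_u(t), with pi the cure proportion and S_u the survival of the uncured. The sketch below fits this textbook version (not the paper's generalized or latent cure formulation) with a Weibull S_u by maximum likelihood on synthetic right-censored data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import weibull_min

rng = np.random.default_rng(11)

# Synthetic data with a cured fraction: 40% never experience the event; follow-up ends at 15 years.
n = 800
cured = rng.random(n) < 0.4
t_true = np.where(cured, np.inf, rng.weibull(1.5, n) * 4.0)
event = t_true <= 15
t_obs = np.minimum(t_true, 15.0)

def neg_loglik(params):
    """Mixture cure likelihood: events come from the uncured Weibull; censored subjects may be cured."""
    pi = expit(params[0])                             # cure proportion
    k, s = np.exp(params[1:])                         # Weibull shape and scale of the uncured
    f_u = weibull_min.pdf(t_obs, k, scale=s)
    S_u = weibull_min.sf(t_obs, k, scale=s)
    ll = np.where(event, np.log((1 - pi) * f_u + 1e-300), np.log(pi + (1 - pi) * S_u + 1e-300))
    return -ll.sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0, np.log(5.0)], method="Nelder-Mead")
print("estimated cure proportion:", round(float(expit(fit.x[0])), 3))
```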


Subjects
Biometry/methods, Statistical Models, Colonic Neoplasms/therapy, Humans, Survival Analysis
16.
Biom J ; 62(1): 136-156, 2020 01.
Article in English | MEDLINE | ID: mdl-31661560

ABSTRACT

When modeling survival data, it is common to assume that the (log-transformed) survival time (T) is conditionally independent of the (log-transformed) censoring time (C) given a set of covariates. There are numerous situations in which this assumption is not realistic, and a number of correction procedures have been developed for different models. However, in most cases, either some prior knowledge about the association between T and C is required, or some auxiliary information or data are assumed to be available. When this is not the case, the application of many existing methods turns out to be limited. The goal of this paper is to overcome this problem by developing a flexible parametric model, namely a type of transformed linear model. We show that the association between T and C is identifiable in this model. The performance of the proposed method is investigated both asymptotically and through finite-sample simulations. We also develop a formal goodness-of-fit test to assess the quality of the fitted model. Finally, the approach is applied to data coming from a study on liver transplants.


Subjects
Biometry/methods, Statistical Models, Survival Analysis
17.
Stat Med ; 38(20): 3896-3910, 2019 09 10.
Article in English | MEDLINE | ID: mdl-31209905

ABSTRACT

In the competing risks setting, we account for death according to a specific cause, and the quantities of interest are usually the cause-specific hazards (CSHs) and the cause-specific cumulative probabilities. A cause-specific cumulative probability can be obtained from a combination of the CSHs or via the subdistribution hazard. Here, we modeled the CSH with flexible hazard-based regression models using B-splines for the baseline hazard and time-dependent (TD) effects. We derived the variance of the cause-specific cumulative probabilities at the population level using the multivariate delta method and showed how we could easily quantify the impact of a covariate on the cumulative probability scale using covariate-adjusted cause-specific cumulative probabilities and their difference. We conducted a simulation study to evaluate the performance of this approach in its ability to estimate the cumulative probabilities using different functions for the cause-specific log baseline hazard and with or without a TD effect. In the scenario with a TD effect, we tested both well-specified and misspecified models. We showed that the flexible regression models perform nearly as well as the nonparametric method, if we allow enough flexibility for the baseline hazards. Moreover, neglecting the TD effect hardly affects the cumulative probability estimates of the whole population but impacts them in the various subgroups. We illustrated our approach using data from people diagnosed with monoclonal gammopathy of undetermined significance and provided the R code to derive those quantities, as an extension of the R package mexhaz.


Subjects
Survival Analysis, Computer Simulation, Humans, Probability, Regression Analysis
18.
Crit Care ; 23(1): 377, 2019 11 27.
Article in English | MEDLINE | ID: mdl-31775837

ABSTRACT

BACKGROUND: African children hospitalised with severe febrile illness have a high risk of mortality. The Fluid Expansion As Supportive Therapy (FEAST) trial (ISRCTN 69856593) demonstrated increased mortality risk associated with fluid boluses, but the temporal relationship to bolus therapy and the underlying mechanism remain unclear. METHODS: In a post hoc retrospective analysis, flexible parametric models were used to compare the change in mortality risk post-randomisation in children allocated to bolus therapy with 20-40 ml/kg 5% albumin or 0.9% saline over 1-2 h versus no bolus (control, 4 ml/kg/hour maintenance), overall and for different terminal clinical events (cardiogenic, neurological, respiratory, or unknown/other). RESULTS: In total, 2097 and 1041 children were randomised to bolus and no bolus, respectively, of whom 254 (12%) and 91 (9%) died within 28 days. Median (IQR) bolus fluid received in the bolus groups by 4 h was 20 (20, 40) ml/kg and was the same at 8 h; total fluids received in the bolus groups at 4 h and 8 h were 38 (28, 43) ml/kg and 40 (30, 50) ml/kg, respectively. Total fluid volumes received in the control group by 4 h and 8 h were median (IQR) 10 (6, 15) ml/kg and 10 (10, 26) ml/kg, respectively. Mortality risk was greatest 30 min post-randomisation in both groups, declining sharply to 4 h and then more slowly to 28 days. Maximum mortality risk was similar in the bolus and no bolus groups; however, the risk declined more slowly in the bolus group, with significantly higher mortality risk compared to the no bolus group from 1.6 to 101 h (4 days) post-randomisation. The delay in the decline in mortality risk in the bolus groups was most pronounced for cardiogenic modes of death. CONCLUSIONS: The increased risk from bolus therapy was not due to a mechanism occurring immediately after bolus administration. Excess mortality risk in the bolus group resulted from a slower decrease in mortality risk over the ensuing 4 days. Thus, administration of modest bolus volumes appeared to prevent mortality risk declining at the same rate that it would have done without a bolus, rather than the harm associated with boluses resulting from a concurrent increased risk of death peri-bolus administration. TRIAL REGISTRATION: ISRCTN69856593. Date of registration 15 December 2008.


Subjects
Fluid Therapy, Infections, Child, Humans, Resuscitation, Retrospective Studies, Time
19.
Sensors (Basel) ; 18(5)2018 May 17.
Article in English | MEDLINE | ID: mdl-29772818

ABSTRACT

Thermal comfort has become a topical issue in building performance assessment as well as in energy efficiency. Three methods are mainly recognized for its assessment. Two of them, based on standardized methodologies, address the problem either by considering the indoor environment under steady-state conditions (PMV and PPD) or by treating users as active subjects whose thermal perception is influenced by outdoor climatic conditions (adaptive approach). The latter method is the starting point for investigating thermal comfort from an overall perspective, by considering endogenous variables besides the traditional physical and environmental ones. Following this perspective, the paper describes the results of an in-field investigation of thermal conditions through the use of nearable and wearable solutions, parametric models and machine learning techniques. The aim of the research is to explore the reliability of IoT-based solutions combined with advanced algorithms, in order to create a replicable framework for the assessment and improvement of user thermal satisfaction. For this purpose, an experimental test in real offices was carried out involving eight workers. Parametric models are applied for the assessment of thermal comfort; IoT solutions are used to monitor the environmental variables and the users' parameters; and the machine learning CART method is used to predict the users' profiles and their thermal comfort perception with respect to the indoor environment.
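
The CART step can be sketched with scikit-learn's decision tree (an assumption about tooling; the paper does not name its implementation): a classifier trained on monitored environmental and personal variables to predict a thermal sensation class. The feature names and synthetic data below are placeholders for the study's IoT measurements.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(12)

# Synthetic stand-ins for monitored variables: air temperature, relative humidity, air speed,
# clothing insulation, metabolic rate, and heart rate from a wearable.
n = 600
X = np.column_stack([rng.normal(24, 2, n), rng.normal(45, 10, n), rng.normal(0.1, 0.05, n),
                     rng.normal(0.7, 0.2, n), rng.normal(1.2, 0.2, n), rng.normal(75, 10, n)])
score = 0.8 * (X[:, 0] - 24) - 0.03 * (X[:, 1] - 45) + rng.normal(scale=1.0, size=n)
y = np.digitize(score, [-1.0, 1.0])                   # illustrative classes: 0 cool, 1 neutral, 2 warm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
cart = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(cart.score(X_te, y_te), 3))
print(export_text(cart, feature_names=["t_air", "rh", "v_air", "clo", "met", "hr"]))
```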

20.
Early Child Res Q ; 42: 158-169, 2018.
Article in English | MEDLINE | ID: mdl-29391663

ABSTRACT

Increasingly, states establish different thresholds on the Early Childhood Environment Rating Scale-Revised (ECERS-R) and use these thresholds to inform high-stakes decisions. However, the validity of the ECERS-R for these purposes is not well established. The objective of this study is to identify thresholds on the ECERS-R that are associated with preschool-aged children's social and cognitive development. Applying non-parametric modeling to the nationally representative Early Childhood Longitudinal Study Birth Cohort (ECLS-B) dataset, we found that once classrooms achieved a score of 3.4 on the overall ECERS-R composite score, there was a leveling-off effect, such that no additional improvements in children's social, cognitive, or language outcomes were observed. Additional analyses found that ECERS-R subscales that focused on teaching and caregiving processes, as opposed to the physical environment, did not show leveling-off effects. The findings suggest that the usefulness of the ECERS-R for discerning associations with children's outcomes may be limited to certain score ranges or subscales.
