Results 1 - 20 of 28
1.
J Surv Stat Methodol ; 11(1): 260-283, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36714298

ABSTRACT

Multiple imputation (MI) is a popular and well-established method for handling missing data in multivariate data sets, but its practicality for massive and complex data sets has been questioned. One such data set is the Panel Study of Income Dynamics (PSID), a longstanding and extensive survey of household income and wealth in the United States. Missing data for this survey are currently handled with traditional hot deck methods because of their simple implementation; however, the univariate hot deck produces large random wealth fluctuations. MI is effective but faces operational challenges. We use a sequential regression/chained-equation approach, implemented in the software IVEware, to multiply impute cross-sectional wealth data in the 2013 PSID, and compare analyses of the resulting imputed data with those from the current hot deck approach. Practical difficulties, such as non-normally distributed variables, skip patterns, categorical variables with many levels, and multicollinearity, are described together with our approaches to overcoming them. We evaluate imputation quality and validity with internal diagnostics and external benchmarking data. MI improves on the existing hot deck approach by better preserving correlation structures, such as the associations between PSID wealth components and the relationships between household net worth and sociodemographic factors, and it facilitates general-purpose completed-data analyses. MI incorporates highly predictive covariates into the imputation models and increases efficiency. We recommend the practical implementation of MI and expect greater gains when the fraction of missing information is large.
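The sequential regression/chained-equation idea can be sketched in a few lines. This is a toy stand-in for IVEware, not its actual algorithm: the variable names are invented, a simple linear model is used for every column, and only one imputed data set is drawn (real MI repeats the draw several times):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three correlated "wealth component" variables (hypothetical names),
# with values missing in two of them; column 0 is kept fully observed.
n = 200
home = rng.normal(100, 20, n)
stocks = 0.5 * home + rng.normal(0, 5, n)
debt = 0.3 * home - 0.2 * stocks + rng.normal(0, 5, n)
X = np.column_stack([home, stocks, debt])
miss = rng.random((n, 3)) < np.array([0.0, 0.2, 0.2])
X_obs = np.where(miss, np.nan, X)

def chained_impute(X, n_iter=10):
    """Sequential regression (chained-equation) imputation, one column at a time.

    Each incomplete column is regressed on all other columns (using their
    current filled-in values) over its observed rows; missing entries are
    replaced by predictions plus a residual draw, as in stochastic imputation.
    """
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # crude starting fill
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            obs = ~miss[:, j]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            resid_sd = np.std(X[obs, j] - A[obs] @ beta)
            X[miss[:, j], j] = A[miss[:, j]] @ beta + rng.normal(0, resid_sd, miss[:, j].sum())
    return X

X_imp = chained_impute(X_obs)
assert not np.isnan(X_imp).any()
```

Because imputations are drawn conditional on the other variables, the between-variable correlations survive, which is the advantage over a univariate hot deck noted above.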

2.
J Off Stat ; 37(3): 751-769, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34566235

ABSTRACT

A non-probability sampling mechanism arising from non-response or non-selection is likely to bias estimates of parameters with respect to a target population of interest. This bias poses a unique challenge when selection is 'non-ignorable', i.e. dependent upon the unobserved outcome of interest, since it is then undetectable and thus cannot be ameliorated. We extend a simulation study by Nishimura et al. [International Statistical Review, 84, 43-62 (2016)], adding two recently published statistics: the so-called 'standardized measure of unadjusted bias (SMUB)' and 'standardized measure of adjusted bias (SMAB)', which explicitly quantify the extent of bias (in the case of SMUB) or non-ignorable bias (in the case of SMAB) under the assumption that a specified amount of non-ignorable selection exists. Our findings suggest that this new sensitivity diagnostic is more correlated with, and more predictive of, the true, unknown extent of selection bias than other diagnostics, even when the underlying assumed level of non-ignorability is incorrect.

3.
Stat Med ; 40(21): 4609-4628, 2021 09 20.
Article in English | MEDLINE | ID: mdl-34405912

ABSTRACT

Randomized clinical trials with outcomes measured longitudinally are frequently analyzed using either random effects models or generalized estimating equations. Both approaches assume that the dropout mechanism is missing at random (MAR) or missing completely at random (MCAR). We propose a Bayesian pattern-mixture model to incorporate missingness mechanisms that might be missing not at random (MNAR), where the distribution of the outcome measure at the follow-up time t_k, conditional on the prior history, differs across the patterns of missing data. We then perform sensitivity analysis on estimates of the parameters of interest. The sensitivity parameters relate the distribution of the outcome of interest between subjects from a missing-data pattern at time t_k with that of the observed subjects at time t_k. The large number of sensitivity parameters is reduced by treating them as random with a prior distribution having some pre-specified mean and variance, which are varied to explore the sensitivity of inferences. The MAR mechanism is a special case of the proposed model, allowing a sensitivity analysis of deviations from MAR. The proposed approach is applied to data from the Trial of Preventing Hypertension.
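One common concrete form of such a sensitivity analysis is a delta adjustment: MAR-based imputations for the dropout pattern are shifted by a sensitivity parameter before analysis. This toy sketch (invented numbers; a simple shift rather than the paper's Bayesian prior on the sensitivity parameters) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Completers' outcomes and MAR-based imputations for dropouts (toy numbers).
observed = rng.normal(120.0, 10.0, 80)
imputed_mar = rng.normal(120.0, 10.0, 20)

# Delta adjustment: shift the dropouts' imputed values by a sensitivity
# parameter, mimicking a pattern-mixture model in which the missing-data
# pattern has a different outcome distribution; delta = 0 recovers MAR.
means = {}
for delta in (0.0, 2.5, 5.0):
    full = np.concatenate([observed, imputed_mar + delta])
    means[delta] = full.mean()
    print(f"delta={delta:4.1f}  estimated mean={means[delta]:7.2f}")

assert means[0.0] < means[2.5] < means[5.0]
```

Reporting how the estimate moves as delta grows quantifies how far inferences depend on the MAR assumption.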


Subjects
Models, Statistical; Outcome Assessment, Health Care; Bayes Theorem; Data Collection; Humans; Longitudinal Studies; Patient Dropouts; Randomized Controlled Trials as Topic
4.
J Clin Epidemiol ; 134: 79-88, 2021 06.
Article in English | MEDLINE | ID: mdl-33539930

ABSTRACT

Missing data are ubiquitous in medical research. Although there is increasing guidance on how to handle missing data, practice is changing slowly and misapprehensions abound, particularly in observational research. Importantly, the lack of transparency around methodological decisions is threatening the validity and reproducibility of modern research. We present a practical framework for handling and reporting the analysis of incomplete data in observational studies, which we illustrate using a case study from the Avon Longitudinal Study of Parents and Children. The framework consists of three steps: 1) Develop an analysis plan specifying the analysis model and how missing data are going to be addressed. An important consideration is whether a complete records analysis is likely to be valid, whether multiple imputation or an alternative approach is likely to offer benefits, and whether a sensitivity analysis regarding the missingness mechanism is required; 2) Examine the data, checking that the methods outlined in the analysis plan are appropriate, and conduct the preplanned analysis; and 3) Report the results, including a description of the missing data, details on how the missing data were addressed, and the results from all analyses, interpreted in light of the missing data and the clinical relevance. This framework seeks to support researchers in thinking systematically about missing data and transparently reporting their potential effect on the study results, thereby increasing confidence in and the reproducibility of research findings.


Subjects
Observational Studies as Topic/methods; Research Design/standards; Adult; Child; Data Interpretation, Statistical; Humans; Longitudinal Studies; Reproducibility of Results
5.
Stat Med ; 40(7): 1653-1677, 2021 03 30.
Article in English | MEDLINE | ID: mdl-33462862

ABSTRACT

We consider comparative effectiveness research (CER) from observational data with two or more treatments. In observational studies, the estimation of causal effects is prone to bias due to confounders related to both treatment and outcome. Methods based on propensity scores are routinely used to correct for such confounding biases. A large fraction of propensity score methods in the current literature consider the case of either two treatments or a continuous outcome. There is extensive literature on multiple treatments and on binary outcomes separately, but interest often lies in their intersection, for which the literature is still evolving. The contribution of this article is to focus on this intersection and compare across methods, some of which are fairly recent. We describe propensity-based methods for the case where more than two treatments are being compared and the outcome is binary. We assess the relative performance of these methods through a set of simulation studies. The methods are applied to assess the effect of four common therapies for castration-resistant advanced-stage prostate cancer. The data consist of medical and pharmacy claims from a large national private health insurance network, with the adverse outcome being admission to the emergency room within a short time window of treatment initiation.
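A toy version of propensity weighting with more than two treatments and a binary outcome can illustrate the setting. For simplicity the simulation below reuses its own true propensity scores, where a real analysis would estimate them (e.g., with a multinomial logistic regression of treatment on covariates); all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observational data: one confounder x, three treatments, binary outcome.
# Treatment assignment depends on x, and so does the outcome (confounding).
n = 5000
x = rng.normal(size=n)
logits = np.column_stack([0.0 * x, 0.8 * x, -0.8 * x])
ps = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
t = (rng.random(n)[:, None] > np.cumsum(ps, axis=1)).sum(axis=1)
y = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * x + np.where(t == 1, 0.7, 0.0))))).astype(float)

# Inverse-probability-of-treatment weighting: each unit with t = k gets weight
# 1/ps[:, k], yielding an estimate of P(Y = 1) had everyone received k.
est = []
for k in range(3):
    w = 1.0 / ps[t == k, k]
    est.append(np.sum(w * y[t == k]) / np.sum(w))

print([round(e, 3) for e in est])
assert est[1] > est[0] and est[1] > est[2]  # only treatment 1 raises P(Y=1)
```

Unweighted group means here would be confounded by x; the weighting removes that confounding because the true (or well-estimated) propensities balance x across the three treatment groups.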


Subjects
Comparative Effectiveness Research; Models, Statistical; Bias; Causality; Computer Simulation; Humans; Male; Propensity Score
6.
J Surv Stat Methodol ; 8(5): 932-964, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33381610

ABSTRACT

With the current focus of survey researchers on "big data" that are not selected by probability sampling, measures of the degree of potential sampling bias arising from this nonrandom selection are sorely needed. Existing indices of this degree of departure from probability sampling, like the R-indicator, are based on functions of the propensity of inclusion in the sample, estimated by modeling the inclusion probability as a function of auxiliary variables. These methods are agnostic about the relationship between the inclusion probability and survey outcomes, which is a crucial feature of the problem. We propose a simple index of degree of departure from ignorable sample selection that corrects this deficiency, which we call the standardized measure of unadjusted bias (SMUB). The index is based on normal pattern-mixture models for nonresponse applied to this sample selection problem and is grounded in the model-based framework of nonignorable selection first proposed in the context of nonresponse by Don Rubin in 1976. The index depends on an inestimable parameter that measures the deviation from selection at random, which ranges between the values zero and one. We propose the use of a central value of this parameter, 0.5, for computing a point index, and computing the values of SMUB at zero and one to provide a range of the index in a sensitivity analysis. We also provide a fully Bayesian approach for computing credible intervals for the SMUB, reflecting uncertainty in the values of all of the input parameters. The proposed methods have been implemented in R and are illustrated using real data from the National Survey of Family Growth.
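A rough numerical illustration of the SMUB construction follows. It assumes, per my reading of the pattern-mixture setup, that the index scales the standardized difference between the selected-sample and population means of a regression proxy by a factor moving from the proxy-outcome correlation rho at phi = 0 to 1/rho at phi = 1; the exact algebra should be checked against the paper and its R implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a population where selection depends partly on Y (non-ignorable).
N = 100_000
z = rng.normal(size=N)                  # auxiliary variable, known for everyone
y = 0.8 * z + rng.normal(0, 0.6, N)     # survey outcome, observed only if selected
p_sel = 1 / (1 + np.exp(-(0.5 * z + 0.5 * y - 1.5)))
sel = rng.random(N) < p_sel

def smub(phi, y_s, x_s, x_pop_mean):
    """SMUB(phi) sketch; x is the proxy (prediction of y from z).

    phi = 0 corresponds to selection at random given the proxy, phi = 1 to
    selection driven entirely by y. Treat this algebra as an assumption.
    """
    rho = np.corrcoef(x_s, y_s)[0, 1]
    g = (phi + (1 - phi) * rho) / (phi * rho + (1 - phi))
    return g * (x_s.mean() - x_pop_mean) / x_s.std()

# Proxy: fitted values of y ~ z computed in the selected sample.
b1 = np.cov(z[sel], y[sel])[0, 1] / np.var(z[sel])
b0 = y[sel].mean() - b1 * z[sel].mean()
x = b0 + b1 * z
lo, mid, hi = (smub(p, y[sel], x[sel], x.mean()) for p in (0.0, 0.5, 1.0))
true_bias = (y[sel].mean() - y.mean()) / y[sel].std()
print(f"SMUB(0)={lo:.3f}  SMUB(0.5)={mid:.3f}  SMUB(1)={hi:.3f}  true={true_bias:.3f}")
assert lo < mid < hi
```

Evaluating the index at phi = 0 and 1 brackets the plausible bias, which is exactly the sensitivity-analysis use recommended in the abstract.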

7.
J Surv Stat Methodol ; 7(3): 334-364, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31428658

ABSTRACT

The most widespread method of computing confidence intervals (CIs) in complex surveys is to add and subtract the margin of error (MOE) from the point estimate, where the MOE is the estimated standard error multiplied by the suitable Gaussian quantile. This Wald-type interval is used by the American Community Survey (ACS), the largest US household sample survey. For inferences on small proportions with moderate sample sizes, this method often results in marked under-coverage and a lower CI endpoint below 0. We assess via simulation the coverage and width, in complex sample surveys, of seven alternatives to the Wald interval for a binomial proportion, with the sample size replaced by the 'effective sample size,' that is, the sample size divided by the design effect. Building on previous work by the present authors, our simulations address the impact of clustering, stratification, different stratum sampling fractions, and stratum-specific proportions. We show that all intervals undercover when there is clustering and design effects are computed from a simple design-based estimator of sampling variance. Coverage can be better calibrated for the alternatives to Wald by improving estimation of the effective sample size through superpopulation modeling. This approach is more effective in our simulations than previously proposed modifications of the effective sample size. We recommend intervals of the Wilson or Bayes uniform prior form, with the Jeffreys prior interval not far behind.
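The effective-sample-size substitution is mechanical. Here is a minimal sketch of the Wilson score interval with n replaced by n/deff; the 1.96 quantile and the toy numbers are illustrative, not from the paper:

```python
from math import sqrt

def wilson_ci(p_hat, n_eff, z=1.96):
    """Wilson score interval, with n replaced by the effective sample size
    n_eff = n / deff (deff = design effect), as described above."""
    denom = 1 + z**2 / n_eff
    center = (p_hat + z**2 / (2 * n_eff)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n_eff + z**2 / (4 * n_eff**2))
    return center - half, center + half

# Small proportion under clustering: n = 400 with design effect 2 -> n_eff = 200.
p_hat, n_eff = 0.01, 400 / 2
lo, hi = wilson_ci(p_hat, n_eff)
wald_lo = p_hat - 1.96 * sqrt(p_hat * (1 - p_hat) / n_eff)
assert wald_lo < 0 < lo < p_hat < hi  # Wald dips below 0 here; Wilson does not
```

The shrinkage of the center toward 0.5 and the added z²/(4n²) term are what keep the Wilson endpoints inside [0, 1] for small proportions.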

8.
J R Stat Soc Ser C Appl Stat ; 68(5): 1465-1483, 2019 Nov.
Article in English | MEDLINE | ID: mdl-33304001

ABSTRACT

Rising costs of survey data collection and declining response rates have caused researchers to turn to non-probability samples to make descriptive statements about populations. However, unlike probability samples, non-probability samples may produce severely biased descriptive estimates due to selection bias. The paper develops and evaluates a simple model-based index of the potential selection bias in estimates of population proportions due to non-ignorable selection mechanisms. The index depends on an inestimable parameter ranging from 0 to 1 that captures the amount of deviation from selection at random and is thus well suited to a sensitivity analysis. We describe modified maximum likelihood and Bayesian estimation approaches and provide new and easy-to-use R functions for their implementation. We use simulation studies to evaluate the ability of the proposed index to reflect selection bias in non-probability samples and show how the index outperforms a previously proposed index that relies on an underlying normality assumption. We demonstrate the use of the index in practice with real data from the National Survey of Family Growth.

9.
Prev Med ; 111: 299-306, 2018 06.
Article in English | MEDLINE | ID: mdl-29155224

ABSTRACT

Accidents are a leading cause of death among U.S. active duty personnel. Understanding accident deaths during wartime could facilitate future operational planning and inform risk prevention efforts. This study expands prior research, identifying health risk factors associated with U.S. Army accident deaths during the Afghanistan and Iraq wars. Military records for 2004-2009 enlisted, active duty, Regular Army soldiers were analyzed using logistic regression modeling to identify mental health, injury, and polypharmacy (multiple narcotic and/or psychotropic medications) predictors of accident deaths for currently, previously, and never deployed groups. Deployed soldiers with anxiety diagnoses showed higher risk of accident death. Over half had anxiety diagnoses prior to being deployed, suggesting that anticipatory anxiety or symptom recurrence may contribute to high risk. For previously deployed soldiers, traumatic brain injury (TBI) indicated higher risk. Two-thirds of these soldiers had their first TBI medical encounter while non-deployed, but mild, combat-related TBIs may have gone undetected during deployments. Post-Traumatic Stress Disorder (PTSD) predicted higher risk for never deployed soldiers, as did polypharmacy, which may relate to reasons for deployment ineligibility. Health risk predictors for Army accident deaths are identified and potential practice and policy implications discussed. Further research could test for replicability and expand models to include unobserved factors or modifiable mechanisms related to high risk. PTSD predicted high risk among those never deployed, suggesting the importance of identification, treatment, and prevention of non-combat traumatic events. Finally, risk predictors overlapped with those identified for suicides, suggesting that effective intervention might reduce both types of deaths.


Subjects
Accidents, Occupational/mortality; Mental Disorders/diagnosis; Military Personnel/statistics & numerical data; Polypharmacy; Wounds and Injuries; Accidents, Occupational/prevention & control; Adult; Female; Humans; Male; Risk Assessment; Risk Factors; United States/epidemiology
10.
Stat Med ; 35(17): 2894-906, 2016 07 30.
Article in English | MEDLINE | ID: mdl-26888661

ABSTRACT

A case study is presented assessing the impact of missing data on the analysis of daily diary data from a study evaluating the effect of a drug for the treatment of insomnia. The primary analysis averaged daily diary values for each patient into a weekly variable. Following the commonly used approach, missing daily values within a week were ignored provided there was a minimum number of diary reports (i.e., at least 4). A longitudinal model was then fit with treatment, time, and patient-specific effects. A treatment effect at a pre-specified landmark time was obtained from the model. Weekly values following dropout were regarded as missing, but intermittent daily missing values were obscured. Graphical summaries and tables are presented to characterize the complex missing data patterns. We use multiple imputation for daily diary data to create completed data sets so that exactly 7 daily diary values contribute to each weekly patient average. Standard analysis methods are then applied for landmark analysis of the completed data sets, and the resulting estimates are combined using the standard multiple imputation approach. The observed data are subject to digit heaping and patterned responses (e.g., identical values for several consecutive days), which makes accurate modeling of the response data difficult. Sensitivity analyses under different modeling assumptions for the data were performed, along with pattern mixture models assessing the sensitivity to the missing at random assumption. The emphasis is on graphical displays and computational methods that can be implemented with general-purpose software. Copyright © 2016 John Wiley & Sons, Ltd.
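The "standard multiple imputation approach" for combining the completed-data estimates is Rubin's rules. A minimal sketch, with hypothetical landmark treatment-effect estimates and variances from m = 5 completed data sets:

```python
from statistics import mean, variance

def rubin_pool(estimates, variances):
    """Rubin's rules: pool point estimates from m completed-data analyses and
    combine within- and between-imputation variance."""
    m = len(estimates)
    qbar = mean(estimates)               # pooled point estimate
    ubar = mean(variances)               # within-imputation variance
    b = variance(estimates)              # between-imputation variance
    total = ubar + (1 + 1 / m) * b       # total variance of qbar
    return qbar, total

# Hypothetical estimates from m = 5 imputed data sets (illustrative numbers).
q, t = rubin_pool([1.2, 1.0, 1.3, 1.1, 1.15], [0.04, 0.05, 0.045, 0.04, 0.05])
assert abs(q - 1.15) < 1e-9
assert t > 0.045  # total variance exceeds the average within-imputation variance
```

The between-imputation component b is what propagates the uncertainty due to the missing daily diary values into the landmark estimate's standard error.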


Subjects
Clinical Trials as Topic; Data Accuracy; Data Interpretation, Statistical; Self Report; Humans; Sleep Initiation and Maintenance Disorders/therapy; Software
11.
J Clin Epidemiol ; 67(1): 15-32, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24262770

ABSTRACT

OBJECTIVES: To recommend methodological standards in the prevention and handling of missing data for primary patient-centered outcomes research (PCOR). STUDY DESIGN AND SETTING: We searched National Library of Medicine Bookshelf and Catalog as well as regulatory agencies' and organizations' Web sites in January 2012 for guidance documents that had formal recommendations regarding missing data. We extracted the characteristics of included guidance documents and recommendations. Using a two-round modified Delphi survey, a multidisciplinary panel proposed mandatory standards on the prevention and handling of missing data for PCOR. RESULTS: We identified 1,790 records and assessed 30 as having relevant recommendations. We proposed 10 standards as mandatory, covering three domains. First, the single best approach is to prospectively prevent missing data occurrence. Second, use of valid statistical methods that properly reflect multiple sources of uncertainty is critical when analyzing missing data. Third, transparent and thorough reporting of missing data allows readers to judge the validity of the findings. CONCLUSION: We urge researchers to adopt rigorous methodology and promote good science by applying best practices to the prevention and handling of missing data. Developing guidance on the prevention and handling of missing data for observational studies and studies that use existing records is a priority for future research.


Subjects
Biomedical Research/standards; Patient Outcome Assessment; Research Design/standards; Consensus; Humans
12.
Genome Biol Evol ; 4(8): 709-19, 2012.
Article in English | MEDLINE | ID: mdl-22593551

ABSTRACT

Gene sequences are routinely used to determine the topologies of unrooted phylogenetic trees, but many of the most important questions in evolution require knowing both the topologies and the roots of trees. However, general algorithms for calculating rooted trees from gene and genomic sequences in the absence of gene paralogs are few. Using the principles of evolutionary parsimony (EP) (Lake JA. 1987a. A rate-independent technique for analysis of nucleic acid sequences: evolutionary parsimony. Mol Biol Evol. 4:167-181) and its extensions (Cavender, J. 1989. Mechanized derivation of linear invariants. Mol Biol Evol. 6:301-316; Nguyen T, Speed TP. 1992. A derivation of all linear invariants for a nonbalanced transversion model. J Mol Evol. 35:60-76), we explicitly enumerate all linear invariants that solely contain rooting information and derive algorithms for rooting gene trees directly from gene and genomic sequences. These new EP linear rooting invariants allow one to determine rooted trees, even in the complete absence of outgroups and gene paralogs. EP rooting invariants are explicitly derived for three taxon trees, and rules for their extension to four or more taxa are provided. The method is demonstrated using 18S ribosomal DNA to illustrate how the new animal phylogeny (Aguinaldo AMA et al. 1997. Evidence for a clade of nematodes, arthropods, and other moulting animals. Nature 387:489-493; Lake JA. 1990. Origin of the metazoa. Proc Natl Acad Sci USA 87:763-766) may be rooted directly from sequences, even when they are short and paralogs are unavailable. These results are consistent with the current root (Philippe H et al. 2011. Acoelomorph flatworms are deuterostomes related to Xenoturbella. Nature 470:255-260).


Subjects
Classification/methods; Eukaryota/classification; Evolution, Molecular; Genetic Techniques; Phylogeny; Algorithms; Animals; Base Sequence; Eukaryota/genetics; Humans; Models, Genetic; Molecular Sequence Data; RNA, Ribosomal, 18S/genetics
13.
Surv Methodol ; 38(2): 203-214, 2012 Dec.
Article in English | MEDLINE | ID: mdl-29200607

ABSTRACT

This paper develops two Bayesian methods for inference about finite population quantiles of continuous survey variables from unequal probability sampling. The first method estimates cumulative distribution functions of the continuous survey variable by fitting a number of probit penalized spline regression models on the inclusion probabilities. The finite population quantiles are then obtained by inverting the estimated distribution function. This method is quite computationally demanding. The second method predicts non-sampled values by assuming a smoothly-varying relationship between the continuous survey variable and the probability of inclusion, by modeling both the mean function and the variance function using splines. The two Bayesian spline-model-based estimators yield a desirable balance between robustness and efficiency. Simulation studies show that both methods yield smaller root mean squared errors than the sample-weighted estimator and the ratio and difference estimators described by Rao, Kovar, and Mantel (RKM 1990), and are more robust to model misspecification than the regression through the origin model-based estimator described in Chambers and Dunstan (1986). When the sample size is small, the 95% credible intervals of the two new methods have closer to nominal confidence coverage than the sample-weighted estimator.
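The sample-weighted estimator that the two spline methods are compared against can be sketched as inversion of a weighted empirical CDF. This is a simplified stand-in for the baseline, not the Bayesian spline estimators themselves:

```python
import numpy as np

def weighted_quantile(y, w, q):
    """Sample-weighted quantile: invert the weighted empirical CDF,
    with w = 1 / inclusion probability (design weights)."""
    order = np.argsort(y)
    y_sorted = np.asarray(y, dtype=float)[order]
    w_sorted = np.asarray(w, dtype=float)[order]
    cdf = np.cumsum(w_sorted) / np.sum(w_sorted)
    return y_sorted[np.searchsorted(cdf, q)]

# Equal weights reduce to the ordinary empirical quantile:
med = weighted_quantile([1, 2, 3, 4, 5], [1, 1, 1, 1, 1], 0.5)
# Upweighting the small values (as if they were under-sampled) pulls the median down:
med_w = weighted_quantile([1, 2, 3, 4, 5], [4, 4, 1, 1, 1], 0.5)
assert med == 3 and med_w == 2
```

The spline-model-based estimators in the paper aim to beat this baseline by smoothing the outcome-inclusion-probability relationship rather than relying on weights alone.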

14.
Biometrics ; 67(4): 1434-41, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21627627

ABSTRACT

In clinical trials, a biomarker (S) that is measured after randomization and is strongly associated with the true endpoint (T) can often provide information about T and hence the effect of a treatment (Z) on T. A useful biomarker can be measured earlier than T and cost less than T. In this article, we consider the use of S as an auxiliary variable and examine the information recovery from using S for estimating the treatment effect on T, when S is completely observed and T is partially observed. In an ideal but often unrealistic setting, when S satisfies Prentice's definition for perfect surrogacy, there is the potential for substantial gain in precision by using data from S to estimate the treatment effect on T. When S is not close to a perfect surrogate, it can provide substantial information only under particular circumstances. We propose to use a targeted shrinkage regression approach that data-adaptively takes advantage of the potential efficiency gain yet avoids the need to make a strong surrogacy assumption. Simulations show that this approach strikes a balance between bias and efficiency gain. Compared with competing methods, it has better mean squared error properties and can achieve substantial efficiency gain, particularly in a common practical setting when S captures much but not all of the treatment effect and the sample size is relatively small. We apply the proposed method to a glaucoma data example.


Subjects
Data Interpretation, Statistical; Endpoint Determination/methods; Glaucoma/epidemiology; Glaucoma/therapy; Outcome Assessment, Health Care/methods; Randomized Controlled Trials as Topic/methods; Biomarkers/analysis; Glaucoma/diagnosis; Humans; Prevalence; Prognosis; Randomized Controlled Trials as Topic/statistics & numerical data; Treatment Outcome
15.
J R Stat Soc Ser C Appl Stat ; 59(5): 821-838, 2010 11 01.
Article in English | MEDLINE | ID: mdl-21132099

ABSTRACT

This work is motivated by a quantitative Magnetic Resonance Imaging study of the differential tumor/healthy tissue change in contrast uptake induced by radiation. The goal is to determine the time in which there is maximal contrast uptake (a surrogate for permeability) in the tumor relative to healthy tissue. A notable feature of the data is its spatial heterogeneity. Zhang, Johnson, Little, and Cao (2008a and 2008b) discuss two parallel approaches to "denoise" a single image of change in contrast uptake from baseline to one follow-up visit of interest. In this work we extend the image model to explore the longitudinal profile of the tumor/healthy tissue contrast uptake in multiple images over time. We fit a two-stage model. First, we propose a longitudinal image model for each subject. This model simultaneously accounts for the spatial and temporal correlation and denoises the observed images by borrowing strength both across neighboring pixels and over time. We propose to use the Mann-Whitney U statistic to summarize the tumor contrast uptake relative to healthy tissue. In the second stage, we fit a population model to the U statistic and estimate when it achieves its maximum. Our initial findings suggest that the maximal contrast uptake of the tumor core relative to healthy tissue peaks around three weeks after initiation of radiotherapy, though this warrants further investigation.

16.
J Med Internet Res ; 12(4): e52, 2010 Nov 18.
Article in English | MEDLINE | ID: mdl-21087922

ABSTRACT

BACKGROUND: The Internet provides us with tools (user metrics or paradata) to evaluate how users interact with online interventions. Analysis of these paradata can lead to design improvements. OBJECTIVE: The objective was to explore the qualities of online participant engagement in an online intervention. We analyzed the paradata in a randomized controlled trial of alternative versions of an online intervention designed to promote consumption of fruit and vegetables. METHODS: Volunteers were randomized to 1 of 3 study arms involving several online sessions. We created 2 indirect measures of breadth and depth to measure different dimensions and dynamics of program engagement based on factor analysis of paradata measures of Web pages visited and time spent online with the intervention materials. Multiple regression was used to assess influence of engagement on retention and change in dietary intake. RESULTS: Baseline surveys were completed by 2513 enrolled participants. Of these, 86.3% (n = 2168) completed the follow-up surveys at 3 months, 79.6% (n = 2027) at 6 months, and 79.4% (n = 1995) at 12 months. The 2 tailored intervention arms exhibited significantly more engagement than the untailored arm (P < .01). Breadth and depth measures of engagement were significantly associated with completion of follow-up surveys (odds ratios [OR] = 4.11 and 2.12, respectively, both P values < .001). The breadth measure of engagement was also significantly positively associated with a key study outcome, the mean increase in fruit and vegetable consumption (P < .001). CONCLUSIONS: By exploring participants' exposures to online interventions, paradata are valuable in explaining the effects of tailoring in increasing participant engagement in the intervention. Controlling for intervention arm, greater engagement is also associated with retention of participants and positive change in a key outcome of the intervention, dietary change. 
This paper demonstrates the utility of paradata capture and analysis for evaluating online health interventions. TRIAL REGISTRATION: NCT00169312; http://clinicaltrials.gov/ct2/show/NCT00169312 (Archived by WebCite at http://www.webcitation.org/5u8sSr0Ty).


Subjects
Community Participation/statistics & numerical data; Counseling/methods; Guideline Adherence/statistics & numerical data; Health Promotion/methods; Internet/statistics & numerical data; Adult; Feeding Behavior; Female; Fruit; Humans; Male; Middle Aged; Retention (Psychology); Self Care/methods; Surveys and Questionnaires; Therapy, Computer-Assisted/methods; Vegetables; Young Adult
17.
Stat Med ; 29(17): 1769-78, 2010 Jul 30.
Article in English | MEDLINE | ID: mdl-20552576

ABSTRACT

Disclosure limitation is an important consideration in the release of public use data sets. It is particularly challenging for longitudinal data sets, since information about an individual accumulates with repeated measures over time. Research on disclosure limitation methods for longitudinal data has been very limited. We consider here problems created by high ages in cohort studies. Because of the risk of disclosure, ages of very old respondents can often not be released; in particular, this is a specific stipulation of the Health Insurance Portability and Accountability Act (HIPAA) for the release of health data for individuals. Top-coding of individuals beyond a certain age is a standard way of dealing with this issue, and it may be adequate for cross-sectional data, when a modest number of cases are affected. However, this approach leads to serious loss of information in longitudinal studies when individuals have been followed for many years. We propose and evaluate an alternative to top-coding for this situation based on multiple imputation (MI). This MI method is applied to a survival analysis of simulated data, and data from the Charleston Heart Study (CHS), and is shown to work well in preserving the relationship between hazard and covariates.


Subjects
Cohort Studies; Data Interpretation, Statistical; Disclosure; Longitudinal Studies; Age Factors; Computer Simulation; Female; Heart Diseases; Humans; Male
18.
Bayesian Anal ; 5(1): 189-212, 2010.
Article in English | MEDLINE | ID: mdl-20448832

ABSTRACT

This work is motivated by a quantitative Magnetic Resonance Imaging study of the relative change in tumor vascular permeability during the course of radiation therapy. The differences in tumor and healthy brain tissue physiology and pathology constitute a notable feature of the image data: spatial heterogeneity with respect to its contrast uptake profile (a surrogate for permeability) and radiation-induced changes in this profile. To account for these spatial aspects of the data, we employ a Gaussian hidden Markov random field (MRF) model. The model incorporates a latent set of discrete labels from the MRF governed by a spatial regularization parameter. We estimate the MRF regularization parameter and treat the number of MRF states as a random variable and estimate it via a reversible jump Markov chain Monte Carlo algorithm. We conduct simulation studies to examine the performance of the model and compare it with a recently proposed method using the Expectation-Maximization (EM) algorithm. Simulation results show that the Bayesian algorithm performs as well, if not slightly better than the EM based algorithm. Results on real data suggest that the tumor "core" vascular permeability increases relative to healthy tissue three weeks after starting radiotherapy, which may be an opportune time to initiate chemotherapy and warrants further investigation.

19.
J Sch Health ; 80(2): 80-7, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20236406

ABSTRACT

BACKGROUND: Asthma is a serious problem for low-income preteens living in disadvantaged communities. Among the chronic diseases of childhood and adolescence, asthma has the highest prevalence and related health care use. School-based asthma interventions have proven successful for older and younger students, but results have not been demonstrated for those in middle school. METHODS: This randomized controlled study screened students 10-13 years of age in 19 middle schools in low-income communities in Detroit, Michigan. Of the 6,872 students who were screened, 1,292 students were identified with asthma. Schools were matched and randomly assigned to Program 1 or 2 or control. Baseline, 12-, and 24-month data were collected by telephone (parents), at school (students), and from school system records. Measures were the students' asthma symptoms, quality of life, academic performance, self-regulation, and asthma management practices. Data were analyzed using multiple imputation with sequential regression analysis. Mixed models and Poisson regressions were used to develop final models. RESULTS: Neither program produced significant change in asthma symptoms or quality of life. One produced improved school grades (p = .02). The other enhanced self-regulation (p = .01) at 24 months. Both slowed the decline in self-regulation in undiagnosed preteens at 12 months and increased self-regulation at 24 months (p = .04; p = .003). CONCLUSION: Programs had effects on academic performance and self-regulation capacities of students. More developmentally focused interventions may be needed for students at this transitional stage. Disruptive factors in the schools may have reduced both program impact and the potential for outcome assessment.


Subjects
Asthma/therapy; Health Education/methods; Health Knowledge, Attitudes, Practice; Black or African American; Asthma/complications; Child; Curriculum; Female; Follow-Up Studies; Humans; Male; Michigan; Program Evaluation; Quality of Life; Self Care/methods; Severity of Illness Index; Social Environment; Socioeconomic Factors; Surveys and Questionnaires; Urban Population
20.
Epidemiology ; 21 Suppl 4: S51-7, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20220524

ABSTRACT

BACKGROUND: The goal of the present study was to quantify the population-based background serum concentrations of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) by using data from the reference population of the 2005 University of Michigan Dioxin Exposure Study (UMDES) and the 2003-2004 National Health and Nutrition Examination Survey (NHANES). METHODS: Multiple imputation was used to impute the serum TCDD concentrations below the limit of detection by combining the 2 data sources. The background mean, quartiles, and 95th percentile serum TCDD concentrations were estimated by age and sex by using linear and quantile regressions for complex survey data. RESULTS: Any age- and sex-specific mean, quartiles, and 95th percentiles of background serum TCDD concentrations of study participants between ages 18 and 85 years can be estimated from the regressions for the UMDES reference population and the NHANES non-Hispanic white population. For example, for a 50-year-old man in the reference population of UMDES, the mean, quartiles, and 95th percentile serum TCDD concentrations are estimated to be 1.1, 0.6, 1.1, 1.8, and 3.3 parts per trillion, respectively. The study also shows that the UMDES reference population is a valid reference population for serum TCDD concentrations for other predominantly white populations in Michigan. CONCLUSION: The serum TCDD concentrations increased with age, and more steeply in women than in men, and hence estimation of background concentrations must be adjusted for age and sex. The methods and results discussed in this article have wide application in studies of the concentrations of chemicals in human serum and in environmental samples.


Subjects
Environmental Pollutants/blood; Nutrition Surveys; Polychlorinated Dibenzodioxins/blood; Adolescent; Adult; Aged; Aged, 80 and over; Environmental Exposure/analysis; Female; Humans; Male; Michigan; Middle Aged; Reference Values; Regression Analysis; Young Adult