Results 1 - 17 of 17
1.
Brain Topogr ; 36(6): 797-815, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37626239

ABSTRACT

Event-related potentials (ERPs) recorded on the surface of the head are a mixture of signals from many sources in the brain due to volume conduction. As a result, the spatial resolution of ERPs is quite low. Blind source separation can help to recover source signals from multichannel ERP recordings. In this study, we present a novel implementation of a method for decomposing multichannel ERPs into components, based on modeling the second-order statistics of the ERPs. We also report a new implementation of the Bayesian Information Criterion (BIC), which is used to select the optimal number of hidden signals (components) in the original ERPs. We tested these methods using both synthetic datasets and real ERP data. Testing showed that the ERP decomposition method can reconstruct the source signals from their mixture with acceptable accuracy even when these signals overlap significantly in time and in the presence of noise. The use of BIC allows us to determine the correct number of source signals at the signal-to-noise ratios commonly observed in ERP studies. The proposed approach was compared with conventionally used methods for the analysis of ERPs. The new method makes it possible to observe phenomena that are hidden by other signals in the original ERPs. The proposed method for decomposing a multichannel ERP into components can be useful for studying cognitive processes in laboratory settings, as well as in clinical studies.
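The abstract does not give the exact BIC formulation used; as a hedged illustration, the idea of choosing the number of hidden components by penalized fit can be sketched with a rank-k SVD approximation and a generic Gaussian-residual BIC (the parameter count and criterion below are assumptions for the sketch, not the paper's):

```python
import numpy as np

def bic_for_components(X, k):
    """Generic BIC for a rank-k approximation of data matrix X
    (n samples x m channels). Assumes i.i.d. Gaussian residuals;
    this is a sketch, not the paper's exact criterion."""
    n, m = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k reconstruction
    rss = np.sum((X - Xk) ** 2)
    n_params = k * (n + m)             # loose count: k waveforms + k mixing columns
    return n * m * np.log(rss / (n * m)) + n_params * np.log(n * m)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
# two latent "source" waveforms mixed into 8 channels, plus noise
S = np.stack([np.sin(2 * np.pi * 5 * t), np.exp(-((t - 0.5) ** 2) / 0.01)])
A = rng.normal(size=(8, 2))
X = (A @ S).T + 0.05 * rng.normal(size=(200, 8))

best_k = min(range(1, 6), key=lambda k: bic_for_components(X, k))
```

On this synthetic mixture the criterion favors the true number of components, mirroring the abstract's claim that BIC recovers the source count at realistic signal-to-noise ratios.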


Subject(s)
Electroencephalography, Evoked Potentials, Humans, Electroencephalography/methods, Bayes Theorem, Brain, Brain Mapping/methods
2.
J Am Stat Assoc ; 118(541): 135-146, 2023.
Article in English | MEDLINE | ID: mdl-37346228

ABSTRACT

With rapid advances in information technology, massive datasets are collected in all fields of science, such as biology, chemistry, and social science. Useful or meaningful information is extracted from these data, often through statistical learning or model fitting. In massive datasets, both the sample size and the number of predictors can be large, in which case conventional methods face computational challenges. Recently, an innovative and effective sampling scheme based on leverage scores via singular value decomposition has been proposed to select rows of a design matrix as a surrogate for the full data in linear regression. Analogously, variable screening can be viewed as selecting columns of the design matrix. However, effective variable selection along this line of thinking remains elusive. In this article, we bridge this gap and propose a weighted leverage variable screening method that utilizes both the left and right singular vectors of the design matrix. We show theoretically and empirically that the predictors selected using our method consistently include the true predictors, not only for linear models but also for complicated general index models. Extensive simulation studies show that the weighted leverage screening method is highly computationally efficient and effective. We also demonstrate its success in identifying carcinoma-related genes using spatial transcriptome data.
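For readers unfamiliar with leverage scores: the leverage of a row of the design matrix is the squared norm of the corresponding row of the left singular vectors. The paper's weighted method also uses the right singular vectors; this minimal numpy sketch shows only the basic row scores:

```python
import numpy as np

def leverage_scores(X):
    """Row leverage scores of design matrix X: squared row norms of the
    left singular vectors U. They lie in [0, 1] and sum to rank(X)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U ** 2, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
scores = leverage_scores(X)   # one score per observation (row)
```

Rows with high scores dominate the column space of X, which is why sampling them gives a good surrogate for the full regression.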

3.
BMC Med Inform Decis Mak ; 23(1): 101, 2023 May 25.
Article in English | MEDLINE | ID: mdl-37231392

ABSTRACT

BACKGROUND: This study used machine learning techniques to evaluate cardiovascular disease (CVD) risk factors and the relationship between sex and these risk factors. The objective was pursued in the context of CVD being a major global cause of death and the need for accurate identification of risk factors for timely diagnosis and improved patient outcomes. The researchers conducted a literature review to address previous studies' limitations in using machine learning to assess CVD risk factors. METHODS: This study analyzed data from 1024 patients to identify the significant CVD risk factors by sex. The data, comprising 13 features such as demographic, lifestyle, and clinical factors, were obtained from the UCI repository and preprocessed to eliminate missing information. The analysis was performed using principal component analysis (PCA) and latent class analysis (LCA) to determine the major CVD risk factors and to identify any homogeneous subgroups among male and female patients. Data analysis was performed using XLSTAT software, which provides a comprehensive suite of tools for data analysis, machine learning, and statistical solutions for MS Excel. RESULTS: The study showed significant sex differences in CVD risk factors. Eight of the 13 risk factors affected male and female patients, and of these eight, four were shared by males and females. Latent profiles of CVD patients were identified, suggesting the presence of subgroups among CVD patients. These findings provide valuable insights into the impact of sex differences on CVD risk factors. Moreover, they have important implications for healthcare professionals, who can use this information to develop individualized prevention and treatment plans. The results highlight the need for further research to elucidate these disparities better and to develop more effective CVD prevention measures.
CONCLUSIONS: The study explored sex differences in CVD risk factors and the presence of subgroups among CVD patients using ML techniques. The results revealed sex-specific differences in risk factors and the existence of subgroups among CVD patients, thus providing essential insights for personalized prevention and treatment plans. Hence, further research is necessary to understand these disparities better and to improve CVD prevention.


Subject(s)
Cardiovascular Diseases, Humans, Male, Female, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/prevention & control, Latent Class Analysis, Principal Component Analysis, Risk Factors, Heart Disease Risk Factors
4.
JMIR Public Health Surveill ; 9: e38371, 2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36395334

ABSTRACT

BACKGROUND: Many nations swiftly designed and executed government policies to contain the rapid rise in COVID-19 cases. Government actions can be broadly segmented into movement and mass-gathering restrictions (such as travel restrictions and lockdowns), public awareness measures (such as face covering and hand washing), emergency health care investment, and social welfare provisions (such as welfare schemes to distribute food and shelter to the poor). The Blavatnik School of Government, University of Oxford, tracked various policy initiatives by governments across the globe and released them as composite indices. We assessed the overall government response to the COVID-19 pandemic using the Oxford Comprehensive Health Index (CHI) and Stringency Index (SI). OBJECTIVE: This study aims to demonstrate the utility of the CHI and SI in gauging and evaluating government responses for containing the spread of COVID-19. We expect a significant inverse relationship between the policy indices (CHI and SI) and the COVID-19 severity indices (morbidity and mortality). METHODS: In this ecological study, we analyzed data from 2 publicly available data sources released between March 2020 and October 2021: the Oxford Covid-19 Government Response Tracker and the World Health Organization. We used autoregressive integrated moving average (ARIMA) and seasonal ARIMA models to model the data. The performance of different models was assessed using a combination of evaluation criteria: adjusted R2, root mean square error, and Bayesian information criterion. RESULTS: Implementation of policies by governments to contain the COVID-19 crisis resulted in higher CHI and SI at the beginning. Although the values of the CHI and SI gradually fell, they consistently remained above 80 points. During the initial investigation, we found that cases per million (CPM) and deaths per million (DPM) followed the same trend. However, the final CPM and DPM models were seasonal ARIMA (3,2,1)(1,0,1) and ARIMA (1,1,1), respectively.
This study does not support the hypothesis that COVID-19 severity (CPM and DPM) is associated with stringent policy measures (CHI and SI). CONCLUSIONS: Our study concludes that the policy measures (CHI and SI) do not explain the change in the epidemiological indicators (CPM and DPM). The study reiterates our understanding that strict policies do not necessarily lead to better compliance and may overwhelm overstretched physical health systems. Twenty-first-century problems thus demand 21st-century solutions. The digital ecosystem was instrumental in the timely collection, curation, cloud storage, and communication of data. Thus, digital epidemiology can and should be successfully integrated into existing surveillance systems for better disease monitoring, management, and evaluation.
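The evaluation criteria named above have standard closed forms; a small sketch with generic formulas (not necessarily the exact implementation used in the study) is:

```python
import numpy as np

def model_fit_metrics(y, y_hat, n_params):
    """Adjusted R^2, RMSE, and a Gaussian-likelihood BIC for a fitted model.
    Textbook formulas for illustration only."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    tss = np.sum((y - np.mean(y)) ** 2)
    r2 = 1 - rss / tss
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_params - 1)
    rmse = np.sqrt(rss / n)
    bic = n * np.log(rss / n) + n_params * np.log(n)
    return adj_r2, rmse, bic

# toy observed vs fitted values (invented numbers, purely illustrative)
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
adj_r2, rmse, bic = model_fit_metrics(y, y_hat, n_params=2)
```

Lower BIC and RMSE and higher adjusted R^2 indicate a better candidate model, which is how competing ARIMA orders are ranked.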


Subject(s)
COVID-19, Ecosystem, Humans, Bayes Theorem, Pandemics/prevention & control, COVID-19/epidemiology, COVID-19/prevention & control, Communicable Disease Control, Government, India/epidemiology
5.
J Clin Exp Hepatol ; 12(1): 118-128, 2022.
Article in English | MEDLINE | ID: mdl-35068792

ABSTRACT

BACKGROUND: Gastrointestinal candidiasis is an often neglected and potentially serious infection in patients with cirrhosis. Therefore, we evaluated the prevalence, risk factors, and outcomes of esophageal candidiasis (EC) in these patients and performed a systematic review to summarize the available evidence on EC in cirrhosis. METHODS: Consecutive patients with cirrhosis scheduled for esophagogastroduodenoscopy (EGD) at a tertiary care institute were screened for EC (cases) between January 2019 and March 2020. EC was diagnosed on EGD findings and/or brush cytology. Controls (without EC) were recruited randomly, and the risk factors and outcomes of EC were compared between cases and controls. Four electronic databases were searched for studies describing EC in cirrhosis. Prevalence estimates of EC were pooled by random-effects meta-analysis, and heterogeneity was assessed by I2. A checklist for prevalence studies was used to evaluate the risk of bias in the studies. RESULTS: EC was diagnosed in 100 of 2762 patients with cirrhosis (3.6%). Patients with EC had a higher model for end-stage liver disease (MELD) score (12.4 vs. 11.2; P = 0.007) and more acute-on-chronic liver failure (ACLF) (26% vs. 10%; P = 0.003) and concomitant bacterial infections (24% vs. 7%; P = 0.001) than controls. A multivariable model including recent alcohol binge, hepatocellular carcinoma (HCC), upper gastrointestinal (UGI) bleed, ACLF, diabetes, and MELD predicted the development of EC in cirrhosis with excellent discrimination (C-index: 0.918). Six percent of cases developed invasive disease and deteriorated with multiorgan failure, and four patients with EC died on follow-up. Of 236 articles identified, the pooled prevalence of EC from 8 studies (all with low risk of bias) was 2.1% (95% CI: 0.8-5.8). Risk factors and outcomes of EC in cirrhosis were not reported in the literature. CONCLUSIONS: EC is not a rare infection in patients with cirrhosis, and it may predispose to invasive candidiasis and untimely death.
Alcohol binge, HCC, UGI bleed, ACLF, diabetes, and higher MELD are independent predictors of EC in cirrhosis. At-risk patients with cirrhosis, or those with deglutition symptoms, should be promptly screened and treated for EC.

6.
Entropy (Basel) ; 25(1), 2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36673154

ABSTRACT

In this paper, the LASSO method with the extended Bayesian information criterion (EBIC) for feature selection in high-dimensional models is studied. We propose the use of the energy distance correlation in place of the ordinary correlation coefficient to measure the dependence of two variables. The energy distance correlation detects both linear and non-linear association between two variables, unlike the ordinary correlation coefficient, which detects only linear association. EBIC is adopted as the stopping criterion. It is shown that the new method is more powerful than Luo and Chen's method for feature selection. This is demonstrated by simulation studies and illustrated by a real-life example. It is also proved that the new algorithm is selection-consistent.
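The energy distance correlation mentioned here has a simple sample estimator (Székely's distance correlation, built from doubly centered pairwise distance matrices); a numpy sketch showing how it detects a non-linear dependence that the ordinary correlation coefficient misses:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample (energy) distance correlation of two 1-D samples.
    Detects non-linear as well as linear dependence."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])      # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])      # pairwise distances in y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(2)
x = rng.normal(size=500)
dcor_nonlinear = distance_correlation(x, x ** 2)   # strong non-linear dependence
pearson = abs(np.corrcoef(x, x ** 2)[0, 1])        # near zero for symmetric x
```

Here x and x^2 are perfectly dependent, yet their Pearson correlation is near zero; the distance correlation picks the dependence up, which is exactly the property the screening method exploits.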

7.
Entropy (Basel) ; 22(3), 2020 Mar 17.
Article in English | MEDLINE | ID: mdl-33286116

ABSTRACT

MaxEnt is a popular maximum entropy-based algorithm originally developed for modelling species distributions, but increasingly used for land-cover classification. In this article, we used MaxEnt as a single-class land-cover classifier and explored whether the procedures recommended for generating high-quality species distribution models also apply to generating high-accuracy land-cover classifications. We used remote sensing imagery and randomly selected ground-truth points for four types of land cover (built, grass, deciduous, evergreen) to generate 1980 classification maps with MaxEnt. We calculated different discrimination accuracy and model quality metrics to determine whether these metrics were suitable proxies for estimating the accuracy of land-cover classification outcomes. Correlation analysis between model quality metrics showed consistent patterns in the relationships between metrics, but not for all land covers. The relationships between model quality metrics and land-cover classification accuracy were land-cover dependent: while for built cover there were no consistent patterns of correlation for any quality metric, for grass, evergreen, and deciduous covers there was a consistent association between quality metrics and classification accuracy. We recommend evaluating the accuracy of land-cover classification results using proper discrimination accuracy coefficients (e.g., Kappa, Overall Accuracy) rather than placing all confidence in a model's quality metrics as reliable indicators of land-cover classification results.
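The recommended discrimination accuracy coefficients are straightforward to compute from a confusion matrix; a minimal sketch of Overall Accuracy and Cohen's Kappa (the counts below are invented for illustration):

```python
import numpy as np

def overall_accuracy_and_kappa(conf):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference/ground truth, columns = predicted class)."""
    conf = np.asarray(conf, float)
    total = conf.sum()
    po = np.trace(conf) / total                          # observed agreement
    pe = np.sum(conf.sum(0) * conf.sum(1)) / total ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# toy 2-class confusion matrix, e.g. built vs non-built
conf = [[40, 10],
        [5, 45]]
oa, kappa = overall_accuracy_and_kappa(conf)
```

Kappa discounts agreement expected by chance, which is why it is preferred over raw accuracy for imbalanced land-cover maps.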

8.
Stat Methods Med Res ; 29(12): 3605-3622, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33019901

ABSTRACT

Despite a large choice of models, functional forms, and types of effects, the selection of excess hazard models for prediction of population cancer survival is not widespread in the literature. We propose multi-model inference based on excess hazard models selected using the Akaike information criterion or the Bayesian information criterion for the prediction and projection of cancer survival. We evaluate the properties of this approach using empirical data for patients diagnosed with breast, colon, or lung cancer in 1990-2011. We artificially censor the data on 31 December 2010 and predict five-year survival for the 2010 and 2011 cohorts. We compare these predictions to the observed five-year cohort estimates of cancer survival and contrast them with predictions from an a priori selected simple model and from the period approach. We illustrate the approach by replicating it for cohorts of patients for whom stage at diagnosis and other important prognostic factors are available. We find that model-averaged predictions and projections of survival show close to minimal differences from the Pohar-Perme estimate of survival in many instances, particularly in subgroups of the population. Advantages of information-criterion-based model selection include (i) a transparent model-building strategy, (ii) accounting for model selection uncertainty, (iii) no a priori assumptions about effects, and (iv) projections for patients outside the sample.
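Multi-model inference typically weights each candidate model by its information criterion; a sketch using standard Akaike weights (the AIC and survival values below are invented for illustration, and the same recipe works with BIC):

```python
import numpy as np

def akaike_weights(ic_values):
    """Akaike weights: relative support for each model given its AIC/BIC.
    Standard multi-model-inference formula, not the paper's code."""
    ic = np.asarray(ic_values, float)
    delta = ic - ic.min()              # difference from the best model
    w = np.exp(-delta / 2)
    return w / w.sum()

w = akaike_weights([100.0, 102.0, 110.0])
# model-averaged five-year survival prediction from three candidate models
averaged_pred = float(np.dot(w, [0.81, 0.79, 0.70]))
```

The averaged prediction leans heavily on the best-supported model while still propagating model-selection uncertainty, which is advantage (ii) in the abstract.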


Subject(s)
Neoplasms, Bayes Theorem, Cohort Studies, Humans, Proportional Hazards Models, Survival Analysis
9.
Front Genet ; 11: 431, 2020.
Article in English | MEDLINE | ID: mdl-32499813

ABSTRACT

BACKGROUND: Multivariate testing tools that integrate multiple genome-wide association studies (GWAS) have become important as the number of phenotypes gathered from study cohorts and biobanks has increased. While these tools have been shown to boost statistical power considerably over univariate tests, an important remaining challenge is to interpret which traits are driving the multivariate association and which traits are just passengers with minor contributions to the genotype-phenotypes association statistic. RESULTS: We introduce MetaPhat, a novel bioinformatics tool to conduct GWAS of multiple correlated traits using univariate GWAS results and to decompose multivariate associations into sets of central traits based on intuitive trace plots that visualize Bayesian Information Criterion (BIC) and P-value statistics of multivariate association models. We validate MetaPhat with Global Lipids Genetics Consortium GWAS results, and we apply MetaPhat to univariate GWAS results for 21 heritable and correlated polyunsaturated lipid species from 2,045 Finnish samples, detecting seven independent loci associated with a cluster of lipid species. In most cases, we are able to decompose these multivariate associations to only three to five central traits out of all 21 traits included in the analyses. We release MetaPhat as an open source tool written in Python with built-in support for multi-processing, quality control, clumping and intuitive visualizations using the R software. CONCLUSION: MetaPhat efficiently decomposes associations between multivariate phenotypes and genetic variants into smaller sets of central traits and improves the interpretation and specificity of genome-phenome associations. MetaPhat is freely available under the MIT license at: https://sourceforge.net/projects/meta-pheno-association-tracer.

10.
Sensors (Basel) ; 20(1), 2020 Jan 02.
Article in English | MEDLINE | ID: mdl-31906590

ABSTRACT

The time-difference method is a common approach to measuring wind speed ultrasonically, and its core is the precise determination of the arrival time of the ultrasonic echo signal. However, because of background noise and differences between types of ultrasonic sensors, it is difficult to measure the arrival time of the echo signal accurately in practice. In this paper, a method based on the wavelet transform (WT) and the Bayesian information criterion (BIC) is proposed for determining the arrival time of the echo signal. First, the time-frequency distribution of the echo signal is obtained using the WT, and a rough arrival time is determined. After setting up a time window around the rough arrival-time point, the BIC function is calculated within the window, and the arrival time is determined from the BIC function. The proposed method is tested in a wind tunnel with an ultrasonic anemometer. The experimental results show that, even in the low-signal-to-noise-ratio region, the deviation between most measured values and the preset standard values is within 5 µs, and the standard deviation of the measured wind speed is within 0.2 m/s.
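The abstract does not give the windowed BIC function itself; onset pickers of this family typically minimize a two-segment variance criterion over candidate split points. A hedged sketch (a generic BIC/AIC-style picker, not the authors' exact WT+BIC pipeline):

```python
import numpy as np

def bic_arrival_index(x):
    """Onset index minimising a two-segment variance criterion:
    model the trace as quiet noise before the onset and echo after it.
    Generic onset-picker sketch for illustration."""
    x = np.asarray(x, float)
    n = len(x)
    best_k, best_val = None, np.inf
    for k in range(2, n - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        val = k * np.log(v1) + (n - k) * np.log(v2)
        if val < best_val:
            best_k, best_val = k, val
    return best_k

rng = np.random.default_rng(3)
noise = 0.1 * rng.normal(size=300)
# simulated echo: silence for 200 samples, then a sinusoidal burst
echo = np.concatenate([np.zeros(200), np.sin(2 * np.pi * np.arange(100) / 20)])
onset = bic_arrival_index(noise + echo)
```

The criterion drops sharply when the split point coincides with the change from noise-only to noise-plus-echo statistics, so the minimizer lands near the true arrival sample.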

11.
Mol Biol Evol ; 37(2): 549-562, 2020 Feb 01.
Article in English | MEDLINE | ID: mdl-31688943

ABSTRACT

The Akaike information criterion (AIC), AICc, and Bayesian information criterion (BIC) are widely used for model selection in phylogenetics; however, their theoretical justification and performance have not been carefully examined in this setting. Here, we investigate these methods under simple and complex phylogenetic models. We show that AIC can give a biased estimate of its intended target, the expected predictive log-likelihood (EPLnL) or, equivalently, the expected Kullback-Leibler divergence between the estimated model and the true distribution of the data. Reasons for bias include commonly occurring issues such as small edge lengths or, in mixture models, small weights. The use of partitioned models is another issue that can cause problems for information criteria. We show that for partitioned models a different BIC correction is required for it to be a valid approximation to a Bayes factor. The commonly used AICc correction is not clearly defined for partitioned models and can actually create a substantial bias when the number of parameters gets large, as is the case with larger trees and partitioned models. Bias-corrected cross-validation corrections are shown to provide better approximations to EPLnL than AIC. We also illustrate how EPLnL, the estimation target of AIC, can sometimes favor an incorrect model, and we give reasons why selection of incorrectly under-partitioned models might be desirable in partitioned-model settings.
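The AICc correction discussed here is the standard small-sample formula; a short sketch showing how its penalty explodes as the parameter count approaches the sample size, the behavior the abstract warns about for large partitioned models:

```python
import math

def aic(log_lik, k):
    """Standard AIC from a maximized log-likelihood and k parameters."""
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    """Small-sample corrected AIC. The correction term grows without
    bound as k approaches n - 1."""
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

# same model (10 parameters, log-likelihood -100), two sample sizes
penalty_small_n = aicc(-100.0, 10, 15) - aic(-100.0, 10)
penalty_large_n = aicc(-100.0, 10, 1000) - aic(-100.0, 10)
```

With n = 15 the correction adds 55 units of penalty; with n = 1000 it is nearly negligible, illustrating why AICc becomes unstable when partitioned models push k toward n.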


Subject(s)
Computational Biology/methods, Phylogeny, Algorithms, Bayes Theorem, Likelihood Functions, Genetic Models, Genetic Selection
12.
Biometrics ; 76(1): 47-60, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31350909

ABSTRACT

Conditional screening approaches have emerged as a powerful alternative to the commonly used marginal screening, as they can identify marginally weak but conditionally important variables. However, most existing conditional screening methods need to fix the initial conditioning set, which may determine the ultimately selected variables. If the conditioning set is not properly chosen, the methods may produce false negatives and positives. Moreover, screening approaches typically need to involve tuning parameters and extra modeling steps in order to reach a final model. We propose a sequential conditioning approach by dynamically updating the conditioning set with an iterative selection process. We provide its theoretical properties under the framework of generalized linear models. Powered by an extended Bayesian information criterion as the stopping rule, the method will lead to a final model without the need to choose tuning parameters or threshold parameters. The practical utility of the proposed method is examined via extensive simulations and analysis of a real clinical study on predicting multiple myeloma patients' response to treatment based on their genomic profiles.
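A minimal sketch of the idea, sequential forward selection with an extended BIC (EBIC) stopping rule, here for an ordinary linear model rather than the paper's generalized linear models (the gamma value and parameter counting are common defaults, not necessarily the authors'):

```python
import numpy as np

def ebic(rss, n, k, p, gamma=0.5):
    """Extended BIC for a linear model with k of p predictors selected."""
    return n * np.log(rss / n) + k * np.log(n) + 2 * gamma * k * np.log(p)

def forward_select(X, y, gamma=0.5):
    """Greedy forward selection that stops when EBIC no longer improves.
    Linear-model sketch of the sequential-conditioning idea."""
    n, p = X.shape
    selected = []
    current = ebic(np.sum((y - y.mean()) ** 2), n, 0, p, gamma)
    while True:
        best = None
        for j in range(p):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            val = ebic(rss, n, len(selected) + 1, p, gamma)
            if best is None or val < best[0]:
                best = (val, j)
        if best is None or best[0] >= current:
            return selected   # no candidate improves EBIC: stop
        current = best[0]
        selected.append(best[1])

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 50))
y = 2 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=200)
chosen = forward_select(X, y)
```

Because EBIC itself is the stopping rule, no separate tuning or threshold parameter is needed, which is the practical advantage the abstract highlights.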


Subject(s)
Biometry/methods, Linear Models, Algorithms, Bayes Theorem, Computer Simulation, Gene Expression Profiling/statistics & numerical data, Humans, Likelihood Functions, Logistic Models, Statistical Models, Multiple Myeloma/genetics, Multiple Myeloma/therapy
13.
World Allergy Organ J ; 12(9): 100057, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31641405

ABSTRACT

BACKGROUND: The natural history of allergic sensitization in childhood, and its impact on the development of allergic disease, needs to be clarified. This study aims to identify allergic sensitization and morbidity patterns during the first 8 years of life. METHODS: The study was conducted in the ongoing population-based prospective Pollution and Asthma Risk: an Infant Study (PARIS) birth cohort. Sensitization profiles were identified by k-means clustering based upon allergen-specific IgE levels measured at 18 months and 8/9 years. Allergic morbidity profiles were identified by latent class analysis based on symptoms, symptom severity, treatments, and lifetime doctor diagnoses of asthma, allergic rhinitis, and atopic dermatitis, and on lower respiratory infections before 2 years. RESULTS: Five sensitization and 5 allergic morbidity patterns were established in 714 children. Children who were not sensitized or who had isolated and low allergen-specific sensitization were grouped together (76.8%). A profile of early and transient sensitization to foods that increased the risk of asthma later in childhood was identified (4.9%). Children strongly sensitized (≥3.5 kUA/L) to house dust mite at 8/9 years (9.0%) had the highest risk of asthma and allergic rhinitis. Finally, a profile of sensitization to timothy grass pollen at 8/9 years (5.3%) was related to respiratory allergic diseases, as was a profile of early-onset and persistent sensitization (4.1%), the latter also being strongly associated with atopic dermatitis. CONCLUSIONS & CLINICAL RELEVANCE: We show that accurate assessment of the risk of allergic disease should rely on the earliness and multiplicity of sensitization, the allergens involved, and allergen-specific IgE levels, rather than considering allergic sensitization solely as a dichotomous variable (allergen-specific IgE ≥0.35 kUA/L), as is usually done. This is particularly striking for house dust mite.
We are hopeful that, pending further confirmation in other populations, our findings will improve clinical practice as part of an approach to allergic disease prevention.

14.
Psychometrika ; 84(3): 802-829, 2019 09.
Article in English | MEDLINE | ID: mdl-31297664

ABSTRACT

Typical Bayesian methods for models with latent variables (or random effects) involve directly sampling the latent variables along with the model parameters. In high-level software code for model definitions (using, e.g., BUGS, JAGS, Stan), the likelihood is therefore specified as conditional on the latent variables. This can lead researchers to perform model comparisons via conditional likelihoods, where the latent variables are considered model parameters. In other settings, however, typical model comparisons involve marginal likelihoods, where the latent variables are integrated out. This distinction is often overlooked despite the fact that it can have a large impact on the comparisons of interest. In this paper, we clarify and illustrate these issues, focusing on the comparison of conditional and marginal Deviance Information Criteria (DICs) and Watanabe-Akaike Information Criteria (WAICs) in psychometric modeling. The conditional/marginal distinction corresponds to whether the model should be predictive for the clusters that are in the data or for new clusters (where "clusters" typically correspond to higher-level units like people or schools). Correspondingly, we show that marginal WAIC corresponds to leave-one-cluster-out cross-validation, whereas conditional WAIC corresponds to leave-one-unit-out cross-validation. These results lead to recommendations on the general application of the criteria to models with latent variables.


Subject(s)
Bayes Theorem, Computer Simulation/standards, Latent Class Analysis, Likelihood Functions, Cluster Analysis, Epidemiologic Measurements, Humans, Male, Markov Chains, Monte Carlo Method, Predictive Value of Tests, Psychometrics, Software
15.
J Multivar Anal ; 173: 268-290, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31007300

ABSTRACT

Forward regression, a classical variable screening method, has been widely used for model building when the number of covariates is relatively low. However, forward regression is seldom used in high-dimensional settings because of the cumbersome computation and unknown theoretical properties. Some recent works have shown that forward regression, coupled with an extended Bayesian information criterion (EBIC)-based stopping rule, can consistently identify all relevant predictors in high-dimensional linear regression settings. However, the results are based on the sum of residual squares from linear models and it is unclear whether forward regression can be applied to more general regression settings, such as Cox proportional hazards models. We introduce a forward variable selection procedure for Cox models. It selects important variables sequentially according to the increment of partial likelihood, with an EBIC stopping rule. To our knowledge, this is the first study that investigates the partial likelihood-based forward regression in high-dimensional survival settings and establishes selection consistency results. We show that, if the dimension of the true model is finite, forward regression can discover all relevant predictors within a finite number of steps and their order of entry is determined by the size of the increment in partial likelihood. As partial likelihood is not a regular density-based likelihood, we develop some new theoretical results on partial likelihood and use these results to establish the desired sure screening properties. The practical utility of the proposed method is examined via extensive simulations and analysis of a subset of the Boston Lung Cancer Survival Cohort study, a hospital-based study for identifying biomarkers related to lung cancer patients' survival.

16.
Ultrasonics ; 66: 111-124, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26596649

ABSTRACT

This paper investigates the use of sparse priors in creating original two-dimensional beamforming methods for ultrasound imaging. The proposed approaches detect the strong reflectors in the scanned medium based on the well-known Bayesian information criterion used in statistical modeling. Moreover, they allow a parametric selection of the level of speckle in the final beamformed image. These methods are applied to simulated data and to recorded experimental data. Their performance is evaluated using the standard image quality metrics: contrast ratio (CR), contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR). A comparison is made with the classical delay-and-sum and minimum variance beamforming methods to confirm the ability of the proposed methods to precisely detect the number and positions of the strong reflectors in a sparse medium, and to reduce the speckle and greatly enhance the contrast in a non-sparse medium. We confirm that our methods improve the contrast of the final image for both simulated and experimental data. In all experiments, the proposed approaches tend to preserve the speckle, which can be of major interest in clinical examinations, as it can contain useful information. In sparse media we achieve a large improvement in contrast compared with the classical methods.
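As a baseline for the comparison mentioned above, classical delay-and-sum beamforming simply realigns the channel signals by their geometric delays and sums them; a toy one-dimensional sketch (integer-sample delays, which real beamformers refine by interpolation):

```python
import numpy as np

def delay_and_sum(channel_data, delays):
    """Classical delay-and-sum for one focal point: shift each channel
    back by its delay (in samples) and average. Reference sketch only;
    the paper's sparse Bayesian methods replace this summation step."""
    out = np.zeros_like(channel_data[0])
    for ch, d in zip(channel_data, delays):
        out += np.roll(ch, -d)
    return out / len(channel_data)

# a pulse arriving at 4 channels with channel-dependent delays
n = 64
pulse = np.zeros(n)
pulse[10:13] = 1.0
delays = [0, 2, 4, 6]
channels = [np.roll(pulse, d) for d in delays]

focused = delay_and_sum(channels, delays)        # delays compensated: coherent sum
unfocused = delay_and_sum(channels, [0, 0, 0, 0])  # no compensation: smeared
```

With the correct delays the channels add coherently and the pulse is recovered at full amplitude; without compensation the energy smears across samples, which is the contrast/resolution loss the sparse methods aim to beat.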


Subject(s)
Ultrasonography/methods, Bayes Theorem, Echocardiography, Humans, Theoretical Models
17.
J Allergy Clin Immunol ; 132(3): 575-583.e12, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23906378

ABSTRACT

BACKGROUND: Previous studies have suggested the presence of different childhood wheeze phenotypes through statistical modeling based on parentally reported wheezing. OBJECTIVE: We sought to investigate whether joint modeling of observations from both medical records and parental reports helps to more accurately define wheezing disorders during childhood and whether incorporating information from medical records better characterizes severity. METHODS: In a population-based birth cohort (n = 1184), we analyzed data from 2 sources (parentally reported current wheeze at 4 follow-ups and physician-confirmed wheeze from medical records in each year from birth to age 8 years) to determine classes of children who differ in wheeze trajectories. We tested the validity of these classes by examining their relationships with objective outcomes (lung function, airway hyperreactivity, and atopy), asthma medication, and severe exacerbations. RESULTS: Longitudinal latent class modeling identified a 5-class model that best described the data. We assigned classes as follows: no wheezing (53.3%), transient early wheeze (13.7%), late-onset wheeze (16.7%), persistent controlled wheeze (13.1%), and persistent troublesome wheeze (PTW; 3.2%). Longitudinal trajectories of atopy and lung function differed significantly between classes. Patients in the PTW class had diminished lung function and more hyperreactive airways compared with all other classes. We observed striking differences in exacerbations, hospitalizations, and unscheduled visits, all of which were markedly higher in patients in the PTW class compared with those in the other classes. For example, the risk of exacerbation was much higher in patients in the PTW class compared with patients with persistent controlled wheeze (odds ratio [OR], 3.58; 95% CI, 1.27-10.09), late-onset wheeze (OR, 15.92; 95% CI, 5.61-45.15), and transient early wheeze (OR, 12.24; 95% CI, 4.28-35.03). 
CONCLUSION: We identified a novel group of children with persistent troublesome wheezing, who have markedly different outcomes compared with persistent wheezers with controlled disease.
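The odds ratios reported above come from the cohort's longitudinal latent class model; for illustration only, a Wald-type odds ratio with a 95% CI can be computed from a 2x2 table as follows (the counts below are invented, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    Textbook formula; the paper's ORs come from a fitted model."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```

A CI that excludes 1 (as here, and as in the abstract's reported ORs) indicates a statistically significant association at the 5% level.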


Subject(s)
Biological Models, Respiratory Sounds/classification, Allergens/immunology, Bronchial Hyperreactivity/immunology, Bronchial Hyperreactivity/physiopathology, Child, Preschool Child, Female, Humans, Immediate Hypersensitivity/immunology, Immediate Hypersensitivity/physiopathology, Immunoglobulin E/blood, Immunoglobulin E/immunology, Infant, Male, Parents, Physicians, Respiratory Sounds/immunology, Respiratory Sounds/physiopathology, Spirometry, Surveys and Questionnaires