3.
Article in English | MEDLINE | ID: mdl-33080869

ABSTRACT

The COVID-19 outbreak follows different patterns in the general population and amongst nursing home patients. We investigate the time from symptom onset to diagnosis and to hospitalization, as well as the length of stay (LoS) in hospital, and whether these differ between population groups. Sciensano collected information on 14,618 patients hospitalized with COVID-19 in 114 Belgian hospitals between 14 March and 12 June 2020. The distributions of the different event times for the different patient groups are estimated accounting for interval censoring and right truncation of the time intervals. The times between symptom onset and hospitalization or diagnosis are similar, with the median delay between symptom onset and hospitalization ranging between 3 and 10.4 days, depending on the age of the patient (longest delay in the age group 20-60 years) and on whether or not the patient lives in a nursing home (an additional 2 days for patients from nursing homes). The median LoS in hospital varies between 3 and 10.4 days, with the LoS increasing with age. The hospital LoS for patients who recover is shorter for patients living in a nursing home, but the time to death is longer for these patients. Over the course of the first wave, the LoS decreased.
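As a rough illustration of the kind of estimation described above, the sketch below fits a Weibull delay distribution to interval-censored onset-to-hospitalization times by maximum likelihood. The data, the Weibull choice, and the omission of right truncation (which the study does account for) are all simplifying assumptions.

```python
# Minimal sketch: ML fit of a Weibull delay distribution to interval-censored
# onset-to-hospitalization times. Data and the Weibull choice are illustrative;
# right truncation (handled in the paper) is ignored here for brevity.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Each delay is only known to lie in [lower, upper) days (interval censoring).
lower = np.array([2, 0, 5, 3, 7, 1, 4, 6, 2, 3], dtype=float)
upper = lower + 1.0  # e.g. delays reported in whole days

def neg_log_lik(params):
    shape, scale = np.exp(params)  # optimize on the log scale to keep both positive
    prob = weibull_min.cdf(upper, shape, scale=scale) - weibull_min.cdf(lower, shape, scale=scale)
    return -np.sum(np.log(np.clip(prob, 1e-12, None)))

fit = minimize(neg_log_lik, x0=np.log([1.5, 5.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
median_delay = weibull_min.median(shape_hat, scale=scale_hat)
print(f"shape={shape_hat:.2f}, scale={scale_hat:.2f}, median delay={median_delay:.1f} days")
```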


Subjects
Coronavirus Infections/mortality; Coronavirus Infections/therapy; Hospitalization/statistics & numerical data; Pneumonia, Viral/mortality; Pneumonia, Viral/therapy; Time-to-Treatment/statistics & numerical data; Adult; Aged; Belgium/epidemiology; Data Interpretation, Statistical; Humans; Length of Stay/statistics & numerical data; Middle Aged; Nursing Homes/statistics & numerical data; Pandemics; Treatment Outcome; Young Adult
4.
Lancet Oncol; 21(10): e488-e494, 2020 10.
Article in English | MEDLINE | ID: mdl-33002444

ABSTRACT

Patient-reported outcome (PRO) measures describe how a patient feels or functions and are increasingly being used in benefit-risk assessments in the development of cancer drugs. However, PRO research objectives are often ill-defined in clinical cancer trials, which can lead to misleading conclusions about patient experiences. The estimand framework is a structured approach to aligning a clinical trial objective with the study design, including endpoints and analysis. The estimand framework uses a multidisciplinary approach and can improve design, analysis, and interpretation of PRO results. On the basis of the International Council for Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use E9(R1) addendum, we provide an overview of the estimand framework intended for a multistakeholder audience. We apply the estimand framework to a hypothetical trial for breast cancer, using physical function to develop specific PRO research objectives. This Policy Review is not an endorsement of a specific study design or outcome; rather, it is meant to show the application of principles of the estimand framework to research study design and add to ongoing discussion. Use of the estimand framework to review medical products and label PROs in oncology can improve communication between stakeholders and ultimately provide a clearer interpretation of patient experience in the development of oncological drugs.


Subjects
Clinical Trial Protocols as Topic; Medical Oncology/standards; Patient Reported Outcome Measures; Antineoplastic Agents/therapeutic use; Data Interpretation, Statistical; Drug Development/legislation & jurisprudence; Drug Development/standards; Humans; Interdisciplinary Communication; Medical Oncology/statistics & numerical data; Neoplasms/drug therapy; Research Design/standards
5.
BMC Med Res Methodol; 20(1): 248, 2020 10 06.
Article in English | MEDLINE | ID: mdl-33023505

ABSTRACT

BACKGROUND: Classic epidemic curves - counts of daily events or cumulative events over time - emphasise temporal changes in the growth or size of epidemic outbreaks. Like any graph, these curves have limitations: they are impractical for comparisons of large and small outbreaks or of asynchronous outbreaks, and they do not display the relative growth rate of the epidemic. Our aim was to propose two additional graphical displays for the monitoring of epidemic outbreaks that overcome these limitations. METHODS: The first graph shows the growth of the epidemic as a function of its size; specifically, the logarithm of new cases on a given day, N(t), is plotted against the logarithm of cumulative cases C(t). Logarithm transformations facilitate comparisons of outbreaks of different sizes, and the lack of a time scale overcomes the need to establish a starting time for each outbreak. Notably, on this graph, exponential growth corresponds to a straight line with a slope equal to one. The second graph represents the logarithm of the relative rate of growth of the epidemic over time; specifically, log10(N(t)/C(t-1)) is plotted against time (t) since the 25th event. We applied these methods to daily death counts attributed to COVID-19 in selected countries, reported up to June 5, 2020. RESULTS: In most countries, the log(N) over log(C) plots showed initially a near-linear increase in COVID-19 deaths, followed by a sharp downturn. They enabled comparisons of small and large outbreaks (e.g., Switzerland vs UK), and identified outbreaks that were still growing at near-exponential rates (e.g., Brazil or India). The plots of log10(N(t)/C(t-1)) over time showed a near-linear decrease (on a log scale) of the relative growth rate of most COVID-19 epidemics, and identified countries in which this decrease failed to set in during the early weeks (e.g., USA) or abated late in the outbreak (e.g., Portugal or Russia). CONCLUSIONS: The plot of log(N) over log(C) displays simultaneously the growth and size of an epidemic, and allows easy identification of exponential growth. The plot of the logarithm of the relative growth rate over time highlights an essential parameter of epidemic outbreaks.
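The two displays are straightforward to reproduce. Below is a minimal matplotlib sketch on a simulated daily count series; the series and variable names are illustrative, not the study's data.

```python
# Sketch of the two proposed displays, using a simulated bell-shaped outbreak.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
t = np.arange(120)
expected = 500 * np.exp(-((t - 60) ** 2) / (2 * 15.0 ** 2))  # bell-shaped epidemic curve
N = rng.poisson(expected)          # daily new events N(t)
C = np.cumsum(N)                   # cumulative events C(t)
start = int(np.argmax(C >= 25))    # first day with at least 25 cumulative events

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Display 1: log(new) against log(cumulative); exponential growth is a line of slope 1.
ax1.plot(np.log10(C[start:]), np.log10(np.clip(N[start:], 1, None)), ".-")
ax1.set_xlabel("log10 C(t)")
ax1.set_ylabel("log10 N(t)")

# Display 2: log of the relative growth rate N(t)/C(t-1) over time since the 25th event.
rel = np.clip(N[start + 1:], 1, None) / C[start:-1]
ax2.plot(np.arange(len(rel)), np.log10(rel), ".-")
ax2.set_xlabel("days since 25th event")
ax2.set_ylabel("log10 N(t)/C(t-1)")

plt.tight_layout()
plt.show()
```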


Subjects
Coronavirus Infections/epidemiology; Models, Theoretical; Pneumonia, Viral/epidemiology; Betacoronavirus; Data Interpretation, Statistical; Epidemiologic Methods; Humans; Pandemics
6.
Glob Heart; 15(1): 39, 2020 05 20.
Article in English | MEDLINE | ID: mdl-32923333

ABSTRACT

An in-depth analysis of the data collected in several countries on the pre-infection characteristics of patients who have developed severe forms of COVID-19 could form the basis for tools to estimate individual risk and to tailor protective measures for a safer route through Phase 2 of the pandemic.


Subjects
Coronavirus Infections/epidemiology; Coronavirus Infections/prevention & control; Pandemics/prevention & control; Pneumonia, Viral/epidemiology; Pneumonia, Viral/prevention & control; Coronavirus Infections/transmission; Data Interpretation, Statistical; Female; Humans; Male; Pneumonia, Viral/transmission; Risk Assessment; Risk Factors; Time Factors
7.
Gesundheitswesen; 82(8-09): 716-722, 2020 Sep.
Article in German | MEDLINE | ID: mdl-32961567

ABSTRACT

" There are more and more good reasons for using existing care data, with the focus in particular on the use of register data. The associated, clearly structured methodological procedure has so far been insufficiently combined, prepared and presented transparently. The German Network for Health Services Research (DNVF) has therefore set up an ad hoc commission for the use of routine practice data (RWE/RWD). The rapid report prepared by IQWiG on the scientific development of concepts for "generation of care-related data and their evaluation for the purpose of benefit assessment of medicinal products according to § 35a SGB V" is an essential step for the use of register data for the generation of evidence. The "Memorandum Register - Update 2019" published by DNVF 2020 also describes the requirements and methodological foundations of registers. Best practice examples from oncology, which are based on the uniform oncological basic data set for clinical cancer registration (§ 65c SGB V), show, for example, that guidelines can be checked and recommendations for guidelines and necessary interventions can be derived in the sense of knowledge-generating health services research using register data. At the same time, however, there are no clear quality requirements and structured formal and content-related procedures in the areas of data consolidation, data verification and the use of specific methods depending on the question at hand. The previously inconsistent requirements are to be revised and a method guide for the use of suited data is to be developed and published. The first chapter of the manual on methods of care-related data explains the objective and structure of the manual. It explains why the use of the term "routine practice data" is more effective than the use of the terms Real Word Data (RWD) and Real World Evidence (RWE). By avoiding the term "real world" it should be emphasized in particular that high-quality research can also be based on routine practice data (e. g. register-based comparative studies).


Subjects
Health Services Research; Research Design; Data Analysis; Data Interpretation, Statistical; Germany
8.
BMC Med Res Methodol; 20(1): 220, 2020 08 31.
Article in English | MEDLINE | ID: mdl-32867708

ABSTRACT

BACKGROUND: Because of unknown features of COVID-19 and the complexity of the affected population, standard clinical trial designs for treatments may not be optimal in such patients. We propose two independent clinical trial designs based on careful grouping of patients and outcome measures. METHODS: Using the World Health Organization ordinal scale of patient status, we classify treatable patients (Stages 3-7) into two risk groups. Patients in Stages 3, 4 and 5 are categorized as the intermediate-risk group, while patients in Stages 6 and 7 are categorized as the high-risk group. To ensure that an intervention, if deemed efficacious, is promptly made available to vulnerable patients, we propose for the intermediate-risk group a group sequential design incorporating stratification by four factors, two interim analyses, and a toxicity monitoring rule. The primary response variable (a binary variable) is based on the proportion of patients discharged from hospital by day 15, and the goal is to detect a significant improvement in this response rate. For the high-risk group, we propose a group sequential design incorporating stratification by three factors and two interim analyses, with no toxicity monitoring. The primary response variable for this design is 30-day mortality, with the goal of detecting a meaningful reduction in the mortality rate. RESULTS: Required sample sizes and toxicity boundaries are calculated for each scenario. Sample size requirements for designs with interim analyses are marginally greater than for those without. In addition, for both the intermediate-risk and high-risk groups, the required sample size with two interim analyses is almost identical to that with just one interim analysis. CONCLUSIONS: We recommend using a binary composite-endpoint outcome for patients in Stage 3, 4 or 5, with 90% power to detect an improvement of 20% in the response rate, and a 30-day mortality outcome for those in Stage 6 or 7, with 90% power to detect a 15% (effect size) reduction in the mortality rate. For the intermediate-risk group, two interim analyses for efficacy evaluation along with toxicity monitoring are encouraged. For the high-risk group, two interim analyses without toxicity monitoring are advised.
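For orientation, the sketch below shows the standard two-proportion sample-size calculation underlying the intermediate-risk design (90% power for a 20-percentage-point improvement in response rate). The 50% control-arm discharge rate is an assumed value, and the group-sequential and stratification adjustments described in the paper are not reproduced here.

```python
# Sketch: per-arm sample size for comparing two proportions (normal approximation),
# as used when powering a trial to detect an absolute improvement in response rate.
# The 50% control-arm rate is an illustrative assumption; group-sequential
# boundaries and stratification would inflate this further.
from scipy.stats import norm

def n_per_arm(p_control, delta, alpha=0.05, power=0.90):
    p_treat = p_control + delta
    p_bar = (p_control + p_treat) / 2
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control) + p_treat * (1 - p_treat)) ** 0.5) ** 2
    return num / delta ** 2

print(round(n_per_arm(0.50, 0.20)))   # roughly 124 patients per arm
```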


Subjects
Betacoronavirus; Coronavirus Infections/therapy; Data Interpretation, Statistical; Pneumonia, Viral/therapy; Research Design; Clinical Trials, Phase II as Topic; Clinical Trials, Phase III as Topic; Humans; Outcome Assessment, Health Care; Pandemics
10.
Drug Des Devel Ther; 14: 3803-3813, 2020.
Article in English | MEDLINE | ID: mdl-32982184

ABSTRACT

Objective: This study aimed to evaluate the pharmacological mechanisms of antiviral drugs against the novel coronavirus disease (COVID-19) and the study designs of clinical trials registered with the International Clinical Trials Registry Platform (ICTRP). Methods: Clinical trials involving antiviral drugs for treating COVID-19 were retrieved from the ICTRP database. For each trial, the study design, number of participants, primary endpoints, source register, antiviral mechanism, and results were evaluated. Results: On June 10, 2020, 145 eligible clinical trials were retrieved from the ICTRP, of which 99 (68.3%) were randomized trials, 109 (75.2%) were parallel-assignment trials, 38 (26.2%) were double- or single-blinded, 130 (89.7%) involved two groups, and 75 (51.6%) included more than 100 participants; clinical improvement or recovery and virus-negative conversion were the two most common primary endpoints, accounting for 40.7% and 18.6%, respectively. The drugs were divided according to their antiviral mechanism into HIV reverse transcriptase inhibitors, RNA-dependent RNA polymerase inhibitors, HIV protease inhibitors (PIs), hepatitis C virus NS3 PIs, and anti-influenza drugs. Conclusion: This study evaluated the design characteristics of clinical trials of antiviral drugs for treating COVID-19, as well as the drugs' mechanisms of action and antiviral efficacy. The results of these trials could serve as a reference for future clinical trials on COVID-19 treatment and prevention.


Subjects
Antiviral Agents/therapeutic use; Clinical Trials as Topic; Coronavirus Infections/drug therapy; Pneumonia, Viral/drug therapy; Registries; Antiviral Agents/pharmacology; Betacoronavirus/drug effects; Data Collection; Data Interpretation, Statistical; Data Management; Drug Combinations; Humans; Pandemics; Research Design
11.
Epidemiol Infect; 148: e192, 2020 08 26.
Article in English | MEDLINE | ID: mdl-32843111

ABSTRACT

Given the fast worldwide spread of the novel coronavirus (COVID-19) and its classification by the World Health Organization (WHO) as one of the worst pandemics in history, several scientific studies using various statistical and mathematical models have been carried out to predict and study the likely evolution of this pandemic. In the present paper, we present a brief study aiming to predict the probability of reaching a new record number of COVID-19 cases in Lebanon, based on record theory, giving more insight into the rate of the disease's rapid spread in Lebanon. The main advantage of record theory is that it avoids several statistical constraints concerning the choice of the underlying distribution and the quality of the residuals. In addition, this theory can be used in cases where the number of available observations is somewhat small. Moreover, it offers an alternative solution when machine learning techniques and long-term memory models are inapplicable because they need a considerable amount of data to perform well. The originality of this paper lies in presenting a new statistical approach allowing the early detection of unexpected phenomena such as the COVID-19 pandemic. For this purpose, we used epidemiological data from Johns Hopkins University to predict the trend of COVID-19 in Lebanon. Our method is useful for calculating the probability of reaching a new record as well as for studying the propagation of the disease. It also computes the probabilities of the waiting time to observe the next COVID-19 record. Our results clearly confirm the quick spread of the disease in Lebanon over a short time.
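For background, in classical record theory for an exchangeable sequence the probability that the n-th observation sets a new record is 1/n, and the expected number of records among the first n observations is the harmonic sum. The sketch below computes these two quantities; the paper's application to strongly trending epidemic counts requires a richer model that is not reproduced here.

```python
# Sketch of two classical record-theory quantities for an i.i.d./exchangeable
# sequence: P(observation n is a record) = 1/n, and E[#records up to n] = H_n.
# Trending epidemic counts, as analysed in the paper, need a richer model.
from fractions import Fraction

def prob_new_record(n: int) -> Fraction:
    """Probability that the n-th observation exceeds all previous ones."""
    return Fraction(1, n)

def expected_records(n: int) -> float:
    """Expected number of records among the first n observations (harmonic sum)."""
    return float(sum(Fraction(1, k) for k in range(1, n + 1)))

print(prob_new_record(30))              # 1/30
print(round(expected_records(30), 2))   # ~3.99
```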


Subjects
Coronavirus Infections/epidemiology; Pneumonia, Viral/epidemiology; Betacoronavirus; Coronavirus Infections/transmission; Data Interpretation, Statistical; Humans; Lebanon/epidemiology; Models, Statistical; Pandemics; Pneumonia, Viral/transmission
13.
Curr Opin Ophthalmol; 31(5): 351-356, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32740068

ABSTRACT

PURPOSE OF REVIEW: The use of artificial intelligence (AI) in ophthalmology has increased dramatically. However, interpretation of these studies can be a daunting prospect for the ophthalmologist without a background in computer or data science. This review aims to share some practical considerations for interpretation of AI studies in ophthalmology. RECENT FINDINGS: It can be easy to get lost in the technical details of studies involving AI. Nevertheless, it is important for clinicians to remember that the fundamental questions in interpreting these studies remain unchanged - What does this study show, and how does this affect my patients? Being guided by familiar principles like study purpose, impact, validity, and generalizability, these studies become more accessible to the ophthalmologist. Although it may not be necessary for nondomain experts to understand the exact AI technical details, we explain some broad concepts in relation to AI technical architecture and dataset management. SUMMARY: The expansion of AI into healthcare and ophthalmology is here to stay. AI systems have made the transition from bench to bedside, and are already being applied to patient care. In this context, 'AI education' is crucial for ophthalmologists to be confident in interpretation and translation of new developments in this field to their own clinical practice.


Subjects
Artificial Intelligence; Data Interpretation, Statistical; Ophthalmologists; Delivery of Health Care; Humans; Ophthalmology/methods
14.
PLoS One; 15(8): e0237779, 2020.
Article in English | MEDLINE | ID: mdl-32834004

ABSTRACT

Microbiome data consist of operational taxonomic unit (OTU) counts characterized by zero-inflation, over-dispersion, and grouping structure among samples. Currently, statistical testing methods are commonly used to identify OTUs that are associated with a phenotype. Their limitations include that the validity of p-values/q-values depends sensitively on the correctness of the models, and that statistical significance does not necessarily imply predictivity. Predictive analysis using methods such as the LASSO is an alternative approach for identifying associated OTUs and for measuring how well the phenotype can be predicted from OTUs and other covariates. We investigate three strategies for performing predictive analysis: (1) LASSO: fitting a LASSO multinomial logistic regression model to all OTU counts after a specific transformation; (2) screening+GLM: screening OTUs using q-values returned by fitting a GLMM to each OTU, then fitting a GLM to the subset of selected OTUs; (3) screening+LASSO: fitting a LASSO to the subset of OTUs selected with the GLMM. We conducted empirical studies using three simulated datasets generated from Dirichlet-multinomial models and a real gut microbiome dataset related to Parkinson's disease to investigate the performance of the three strategies. Our simulation studies show that the LASSO with an appropriate variable transformation performs remarkably well on zero-inflated data. Our real data analysis shows that Parkinson's disease can be predicted with high accuracy from selected OTUs (after the binary transformation), age, and sex (Error Rate = 0.199, AUC = 0.872, AUPRC = 0.912). These results provide strong evidence of a relationship between Parkinson's disease and the gut microbiome.
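A minimal sketch of strategy (1) is given below: an L1-penalised (LASSO) logistic regression on presence/absence-transformed OTU counts plus age and sex, using scikit-learn. The synthetic data and the penalty strength are illustrative assumptions.

```python
# Sketch of strategy (1): L1-penalised (LASSO) logistic regression on
# presence/absence-transformed OTU counts plus age and sex. Data are synthetic;
# the penalty strength C would be tuned by cross-validation in practice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 300
otu_counts = rng.negative_binomial(1, 0.7, size=(n, p))   # sparse, overdispersed counts
otu_binary = (otu_counts > 0).astype(float)                # binary transformation
age = rng.normal(60, 10, size=(n, 1))
sex = rng.integers(0, 2, size=(n, 1)).astype(float)
X = np.hstack([otu_binary, age, sex])
y = rng.integers(0, 2, size=n)                             # phenotype (e.g. disease status)

lasso = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(3))

lasso.fit(X, y)
print("OTUs selected:", int((lasso.coef_[0][:p] != 0).sum()))
```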


Subjects
Bacteria/classification; Data Interpretation, Statistical; Gastrointestinal Microbiome/genetics; Models, Biological; Parkinson Disease/diagnosis; Adult; Age Factors; Aged; Aged, 80 and over; Bacteria/genetics; Bacteria/isolation & purification; Cohort Studies; Computer Simulation; DNA, Bacterial/isolation & purification; Datasets as Topic; Female; Humans; Logistic Models; Male; Middle Aged; Parkinson Disease/microbiology; Predictive Value of Tests; Prognosis; RNA, Ribosomal, 16S/genetics; Sex Factors
15.
BMJ; 370: m2898, 2020 08 26.
Article in English | MEDLINE | ID: mdl-32847800

ABSTRACT

OBJECTIVE: To assess the risk of bias associated with missing outcome data in systematic reviews. DESIGN: Imputation study. SETTING: Systematic reviews. POPULATION: 100 systematic reviews that included a group-level meta-analysis with a statistically significant effect on a patient-important dichotomous efficacy outcome. MAIN OUTCOME MEASURES: Median percentage change in the relative effect estimate when applying each of the following assumptions: four commonly discussed but implausible assumptions (best case scenario, none had the event, all had the event, and worst case scenario) and four plausible assumptions for missing data based on the informative missingness odds ratio (IMOR) approach (IMOR 1.5 (least stringent), IMOR 2, IMOR 3, IMOR 5 (most stringent)); percentage of meta-analyses that crossed the threshold of the null effect for each method; and percentage of meta-analyses that qualitatively changed direction of effect for each method. Sensitivity analyses based on the eight different methods of handling missing data were conducted. RESULTS: 100 systematic reviews with 653 randomised controlled trials were included. When applying the implausible but commonly discussed assumptions, the median change in the relative effect estimate varied from 0% to 30.4%. The percentage of meta-analyses crossing the threshold of the null effect varied from 1% (best case scenario) to 60% (worst case scenario), and 26% changed direction with the worst case scenario. When applying the plausible assumptions, the median percentage change in the relative effect estimate varied from 1.4% to 7.0%. The percentage of meta-analyses crossing the threshold of the null effect varied from 6% (IMOR 1.5) to 22% (IMOR 5), and 2% changed direction with the most stringent assumption (IMOR 5). CONCLUSION: Even when plausible assumptions are applied to the outcomes of participants with definite missing data, the average change in the pooled relative effect estimate is substantive, and almost a quarter (22%) of meta-analyses crossed the threshold of the null effect. Systematic review authors should present the potential impact of missing outcome data on their effect estimates and use it to inform their overall GRADE (grading of recommendations assessment, development, and evaluation) ratings of risk of bias and their interpretation of the results.
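As an illustration of the IMOR approach for a single two-arm trial, the sketch below assumes that participants with missing outcomes have IMOR times the odds of the event observed among followed-up participants and recomputes the odds ratio; the counts are invented for the example.

```python
# Sketch: adjusting a single trial's 2x2 result for missing outcome data with an
# informative missingness odds ratio (IMOR): missing participants are assumed to
# have IMOR times the odds of the event observed among followed-up participants.
# Counts below are illustrative.

def imor_adjusted_or(events, observed, missing, imor=(1.0, 1.0)):
    """events/observed/missing are (treatment, control) tuples; returns adjusted OR."""
    risks = []
    for e, n, m, k in zip(events, observed, missing, imor):
        p_obs = e / n
        odds_missing = k * p_obs / (1 - p_obs)
        p_missing = odds_missing / (1 + odds_missing)
        risks.append((e + p_missing * m) / (n + m))   # overall event risk in the arm
    pt, pc = risks
    return (pt / (1 - pt)) / (pc / (1 - pc))

base = imor_adjusted_or(events=(30, 45), observed=(100, 100), missing=(20, 15))
stringent = imor_adjusted_or(events=(30, 45), observed=(100, 100),
                             missing=(20, 15), imor=(2.0, 2.0))
print(round(base, 2), round(stringent, 2))
```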


Subjects
Meta-Analysis as Topic; Research Design/standards; Systematic Reviews as Topic; Bias; Data Interpretation, Statistical; Humans; Randomized Controlled Trials as Topic
16.
Med Image Anal; 65: 101794, 2020 10.
Article in English | MEDLINE | ID: mdl-32781377

ABSTRACT

The COVID-19 pandemic is causing a major outbreak in more than 150 countries around the world, having a severe impact on the health and lives of many people globally. One of the crucial steps in fighting COVID-19 is the ability to detect infected patients early enough and put them under special care. Detecting this disease from radiography and radiology images is perhaps one of the fastest ways to diagnose patients. Some early studies showed specific abnormalities in the chest radiograms of patients infected with COVID-19. Inspired by earlier works, we study the application of deep learning models to detect COVID-19 patients from their chest radiography images. We first prepare a dataset of 5000 chest X-rays from publicly available datasets. Images exhibiting COVID-19 disease were identified by a board-certified radiologist. Transfer learning on a subset of 2000 radiograms was used to train four popular convolutional neural networks, ResNet18, ResNet50, SqueezeNet, and DenseNet-121, to identify COVID-19 disease in the analyzed chest X-ray images. We evaluated these models on the remaining 3000 images, and most of these networks achieved a sensitivity rate of 98% (±3%), with a specificity rate of around 90%. Besides sensitivity and specificity rates, we also present the receiver operating characteristic (ROC) curve, precision-recall curve, average precision, and confusion matrix of each model. We also used a technique to generate heatmaps of lung regions potentially infected by COVID-19 and show that the generated heatmaps contain most of the infected areas annotated by our board-certified radiologist. While the achieved performance is very encouraging, further analysis on a larger set of COVID-19 images is required to obtain a more reliable estimate of the accuracy rates. The dataset, model implementations (in PyTorch), and evaluations are all made publicly available for the research community at https://github.com/shervinmin/DeepCovid.git.
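A minimal sketch of the transfer-learning step (fine-tuning a pretrained ResNet18 for two-class chest X-ray classification) is shown below. The dataset path, hyperparameters, and the choice to freeze the backbone are illustrative assumptions, not the authors' exact setup, which is available in the linked repository.

```python
# Sketch: transfer learning with a pretrained ResNet18 for 2-class chest X-ray
# classification. The dataset path, hyperparameters, and freezing of the backbone
# are illustrative assumptions; the authors' exact setup is in their repository.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("chest_xrays/train", transform=transform)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new 2-class head (COVID vs non-COVID)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```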


Subjects
Coronavirus Infections/diagnostic imaging; Datasets as Topic; Deep Learning; Pneumonia, Viral/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Radiography, Thoracic; Betacoronavirus; Data Interpretation, Statistical; Diagnosis, Differential; Humans; Neural Networks, Computer; Pandemics; Predictive Value of Tests; Sensitivity and Specificity
18.
Value Health; 23(7): 918-927, 2020 07.
Article in English | MEDLINE | ID: mdl-32762994

ABSTRACT

OBJECTIVES: To develop efficient approaches for fitting network meta-analysis (NMA) models with time-varying hazard ratios (such as fractional polynomials and piecewise constant models) to allow practitioners to investigate a broad range of models rapidly and to achieve a more robust and comprehensive model selection strategy. METHODS: We reformulated the fractional polynomial and piecewise constant NMA models using analysis of variance-like parameterization. With this approach, both models are expressed as generalized linear models (GLMs) with time-varying covariates. Such models can be fitted efficiently with standard frequentist techniques. We applied our approach to the example data from the study by Jansen et al, in which fractional polynomial NMA models were introduced. RESULTS: Fitting frequentist fixed-effect NMAs for a large initial set of candidate models took less than 1 second with standard GLM routines. This allowed for model selection from a large range of hazard ratio structures by comparing a set of criteria including Akaike information criterion/Bayesian information criterion, visual inspection of goodness-of-fit, and long-term extrapolations. The "best" models were then refitted in a Bayesian framework. Estimates agreed very closely. CONCLUSIONS: NMA models with time-varying hazard ratios can be explored efficiently with a stepwise approach. A frequentist fixed-effect framework enables rapid exploration of different models. The best model can then be assessed further in a Bayesian framework to capture and propagate uncertainty for decision-making.
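As background for the piecewise constant model, a piecewise exponential hazard can be fitted as a Poisson GLM with an interval factor and a log person-time offset, the standard equivalence that a frequentist GLM approach can exploit. The sketch below illustrates this for a single synthetic study; it is not the authors' network meta-analysis code.

```python
# Sketch: a piecewise-constant (piecewise exponential) hazard model fitted as a
# Poisson GLM -- one row per subject x time interval, event indicator as outcome,
# log person-time as offset. Data are synthetic; this is a single-study
# illustration, not the authors' network meta-analysis code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
long = pd.DataFrame({
    "interval": rng.choice(["0-3m", "3-6m", "6-12m"], size=600),
    "treatment": rng.integers(0, 2, size=600),
    "persontime": rng.uniform(0.5, 3.0, size=600),
})
long["event"] = rng.binomial(1, 0.15, size=600)

model = smf.glm("event ~ C(interval) + treatment",
                data=long,
                family=sm.families.Poisson(),
                offset=np.log(long["persontime"])).fit()
print(model.summary())
# exp(coef on treatment) is the hazard ratio, here assumed constant across intervals;
# an interval-by-treatment interaction would let the hazard ratio vary over time.
```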


Subjects
Data Interpretation, Statistical; Models, Statistical; Network Meta-Analysis; Bayes Theorem; Humans; Linear Models; Time Factors
19.
PLoS Med; 17(8): e1003232, 2020 08.
Article in English | MEDLINE | ID: mdl-32764746

ABSTRACT

BACKGROUND: Obesity is closely related to the development of insulin resistance and type 2 diabetes (T2D). The prevention of T2D has become imperative to stem the rising rates of this disease. Weight loss is highly effective in preventing T2D; however, the at-risk pool is large, and a clinically meaningful metric for risk stratification to guide interventions remains a challenge. The objective of this study is to predict T2D risk using full-information continuous analysis of nationally sampled data from white and black American adults aged ≥45 years. METHODS AND FINDINGS: A sample of 12,043 black (33%) and white individuals from a population-based cohort, REasons for Geographic And Racial Differences in Stroke (REGARDS) (enrolled 2003-2007), was observed through 2013-2016. The mean participant age was 63.12 ± 8.62 years, and 43.7% were male. Mean BMI was 28.55 ± 5.61 kg/m2. Risk factors for T2D regularly recorded in the primary care setting were used to evaluate future T2D risk using Bayesian logistic regression. External validation was performed using 9,710 participants (19% black) from the Atherosclerosis Risk in Communities (ARIC) study (enrolled 1987-1989), observed through 1996-1998. The mean participant age in this cohort was 53.86 ± 5.65 years, and 44.6% were male. Mean BMI was 27.15 ± 4.92 kg/m2. Predictive performance was assessed using receiver operating characteristic (ROC) curves and area under the curve (AUC) statistics. The primary outcome was incident T2D. By 2016 in REGARDS, there were 1,602 incident cases of T2D. Risk factors used to predict T2D progression included age, sex, race, BMI, triglycerides, high-density lipoprotein, blood pressure, and blood glucose. The Bayesian logistic model (AUC = 0.79) outperformed the Framingham risk score (AUC = 0.76), the American Diabetes Association risk score (AUC = 0.64), and a cardiometabolic disease staging system (using Adult Treatment Panel III criteria) (AUC = 0.75). Validation in ARIC was robust (AUC = 0.85). Main limitations are that the REGARDS sample generalizes only to older black and white Americans and that time to T2D diagnosis was not available. CONCLUSIONS: Our results show that a Bayesian logistic model using full-information continuous predictors has high predictive discrimination and can be used to quantify race- and sex-specific T2D risk, providing a new, powerful predictive tool. This tool can be used in T2D prevention efforts, including weight loss therapy, by allowing clinicians to target high-risk individuals in a manner that could be used to optimize outcomes.
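The general workflow, fitting a logistic risk model on continuous predictors and assessing discrimination with the ROC AUC, can be sketched as below. An L2 penalty corresponds to the maximum a posteriori estimate under a Gaussian prior and is used here as a rough frequentist stand-in for the paper's fully Bayesian fit; the predictors and data are synthetic.

```python
# Sketch of the general workflow: a penalised logistic risk model on continuous
# predictors, evaluated by ROC AUC. An L2 penalty is the MAP estimate under a
# Gaussian prior -- a rough stand-in for a fully Bayesian model. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.normal(63, 9, n),      # age
    rng.normal(28.5, 5.6, n),  # BMI
    rng.normal(130, 40, n),    # triglycerides
    rng.normal(50, 14, n),     # HDL
    rng.normal(128, 17, n),    # systolic blood pressure
    rng.normal(98, 12, n),     # fasting glucose
])
logit = -8 + 0.02 * X[:, 0] + 0.08 * X[:, 1] + 0.03 * X[:, 5]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated incident T2D

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", C=1.0, max_iter=1000))
model.fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```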


Subjects
African Americans; Data Interpretation, Statistical; Diabetes Mellitus, Type 2/blood; Diabetes Mellitus, Type 2/epidemiology; European Continental Ancestry Group; Aged; Aged, 80 and over; Bayes Theorem; Blood Glucose/metabolism; Cohort Studies; Diabetes Mellitus, Type 2/diagnosis; Female; Follow-Up Studies; Humans; Incidence; Insulin Resistance/physiology; Logistic Models; Longitudinal Studies; Male; Middle Aged; Obesity/blood; Obesity/diagnosis; Obesity/epidemiology; Predictive Value of Tests; Reproducibility of Results
20.
Zhonghua Yu Fang Yi Xue Za Zhi; 54(7): 804-812, 2020 Jul 06.
Article in Chinese | MEDLINE | ID: mdl-32842307

ABSTRACT

Repeated measurement data are a common type of data in medicine and cannot simply be compared at each time point; a dedicated statistical method should be used to analyse this kind of data. Three common statistical methods for repeated measurement data are introduced: repeated measures analysis of variance, generalized estimating equations, and multilevel models. The software implementation and interpretation of the results for the three methods are also explained using worked cases. Additionally, we compare the practical application of the three methods, in order to help clinical researchers analyse repeated measurement data correctly and improve the efficiency of their data analysis.
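A sketch of two of the three approaches, a GEE with an exchangeable working correlation and a random-intercept multilevel model, is given below using statsmodels on a synthetic long-format dataset; repeated measures ANOVA is omitted, and the variable names are illustrative.

```python
# Sketch of two of the three approaches on a long-format repeated-measures
# dataset: a GEE with exchangeable working correlation and a random-intercept
# multilevel (mixed) model. Data and variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
subjects, times = 60, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(subjects), times),
    "time": np.tile(np.arange(times), subjects),
    "group": np.repeat(rng.integers(0, 2, subjects), times),
})
subject_effect = np.repeat(rng.normal(0, 1.0, subjects), times)
df["y"] = 5 + 0.5 * df["time"] + 1.0 * df["group"] + subject_effect + rng.normal(0, 1, len(df))

# Generalized estimating equations with an exchangeable working correlation.
gee = smf.gee("y ~ time + group", groups="subject", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()
print(gee.summary())

# Multilevel (random-intercept) model.
mixed = smf.mixedlm("y ~ time + group", data=df, groups=df["subject"]).fit()
print(mixed.summary())
```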


Subjects
Models, Statistical; Research Design; Data Interpretation, Statistical; Software