Results 1 - 4 of 4
1.
Contemp Clin Trials; 125: 107057, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36539060

ABSTRACT

BACKGROUND: Effective recruitment and retention strategies are essential in clinical trials. METHODS: The MemAID trial consisted of 12 visits during 24 weeks of intranasal insulin or placebo treatment and 24 weeks of post-treatment follow-up in older people with and without diabetes. Enhanced retention strategies were implemented mid-study to address a high dropout rate. Baseline variables used in Cox regression models to identify dropout risk factors were demographic and social characteristics, functional measures, metabolic and cardiovascular parameters, and medications. RESULTS: 244 participants were randomized; 13 (5.3%) were discontinued due to adverse events. Of the remaining 231 randomized participants, 65 (28.1%) dropped out and 166 (71.9%) did not. The non-retention group included 95 participants not exposed to the retention strategies, of whom 43 (45.2%) dropped out. The retention group included 136 participants exposed to the enhanced retention strategies, of whom 22 (16.2%) dropped out. Dropout risk factors included being unmarried, longer diabetes duration, use of oral antidiabetics (compared with non-use), worse executive function, and chronic pain. After adjusting for exposure to retention strategies, a worse baseline executive function composite score (p = 0.001) and a chronic pain diagnosis (p = 0.032) were independently associated with a greater risk of dropping out. The probability of dropping out decreased with longer exposure to retention strategies, and the dropout rate per month decreased from 4.1% to 1.8% (p = 0.04) under the retention strategies. CONCLUSIONS: Baseline characteristics allow prediction of dropout from a clinical trial in older participants. Retention strategies were effective at minimizing the impact of dropout-related risk factors. TRIAL REGISTRATION: ClinicalTrials.gov NCT2415556, registered 3/23/2015 (www.clinicaltrials.gov).


Subjects
Chronic Pain , Diabetes Mellitus, Type 2 , Humans , Aged , Diabetes Mellitus, Type 2/drug therapy , Insulin/therapeutic use , Hypoglycemic Agents/therapeutic use , Administration, Intranasal
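Entry 1 above summarizes dropout over time in a trial cohort. As an illustration of this kind of retention analysis (not the authors' Cox regression, and with made-up data), a minimal Kaplan-Meier retention curve in pure Python:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier retention curve.
    times: months of follow-up; events: 1 = dropped out, 0 = completed/censored.
    Returns [(month, retention_probability)] at each month with a dropout."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        at_risk = n - i          # participants still in the study at time t
        drops = 0
        while i < n and data[i][0] == t:
            drops += data[i][1]
            i += 1
        if drops:
            surv *= 1 - drops / at_risk
            curve.append((t, surv))
    return curve

# toy data: 10 participants; dropouts at months 2, 5, 5; seven complete 12 months
times  = [2, 5, 5, 12, 12, 12, 12, 12, 12, 12]
events = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
curve = kaplan_meier(times, events)
```

With the toy data, estimated retention falls to 0.9 after the month-2 dropout and to 0.7 after the two month-5 dropouts; participants who complete the study only shrink the risk set.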
2.
J Am Med Inform Assoc; 19(5): 809-16, 2012.
Article in English | MEDLINE | ID: mdl-22707743

ABSTRACT

OBJECTIVE: This study explores active learning algorithms as a way to reduce the need for large training sets in medical text classification tasks. DESIGN: Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared with that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. MEASUREMENTS: Classification accuracy and area under the receiver operating characteristic (ROC) curve were generated for each algorithm at different sample sizes. The performance of the active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why performance varies across datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. RESULTS: The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning on all five datasets, while CMB was better than passive learning on four. We found strong correlations between dataset diversity and DIV performance, as well as between dataset uncertainty and the performance of the DIST algorithm. CONCLUSION: For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty.


Subjects
Data Mining/methods , Natural Language Processing , Algorithms , Artificial Intelligence , Humans , ROC Curve
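Entry 2 above describes distance-based (DIST) query selection: at each round, the learner asks for a label on the unlabeled document closest to the decision boundary. A minimal pure-Python sketch of that idea, using class centroids as a stand-in for the paper's classifier; the data and the centroid heuristic are illustrative assumptions, not the authors' implementation:

```python
import math

def centroid(points):
    """Mean point of a list of equal-length coordinate tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def dist_query(pool, class0, class1):
    """DIST-style query: pick the unlabeled point lying closest to the
    boundary between the two class centroids (smallest distance gap)."""
    c0, c1 = centroid(class0), centroid(class1)
    return min(pool, key=lambda x: abs(math.dist(x, c0) - math.dist(x, c1)))

# toy 2-D feature vectors; the point near the midline between the classes
# is the most informative query, while points deep inside either class are not
class0 = [(0.0, 0.0), (1.0, 0.0)]
class1 = [(5.0, 5.0), (6.0, 5.0)]
pool = [(0.5, 0.2), (3.0, 2.4), (5.5, 4.9)]
query = dist_query(pool, class0, class1)
```

In a full active learning loop, the queried point would be labeled, moved into the training set, the classifier retrained, and the selection repeated until the labeling budget is spent.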
3.
BMC Med Inform Decis Mak; 12: 8, 2012 Feb 15.
Article in English | MEDLINE | ID: mdl-22336388

ABSTRACT

BACKGROUND: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. METHODS: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an unweighted fitting method. RESULTS: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the dataset and sampling method, it took between 80 and 560 annotated samples to achieve mean absolute and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline unweighted method (p < 0.05). CONCLUSIONS: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an unweighted algorithm described in previous literature. It can help researchers determine the annotation sample size needed for supervised machine learning.


Subjects
Algorithms , Learning Curve , Problem-Based Learning/methods , Sample Size , Data Interpretation, Statistical , Diagnosis, Computer-Assisted , Humans , Models, Statistical , Nonlinear Dynamics , Pattern Recognition, Automated , Predictive Value of Tests , Probability Learning , Reproducibility of Results , Stochastic Processes
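Entry 3 above fits an inverse power law to the early points of a learning curve and extrapolates. One common form of that model is acc(x) = a - b * x**(-c), where a is the asymptotic accuracy, b the decay amplitude, and c the decay rate. A minimal sketch of weighted least-squares fitting via grid search; the paper uses a proper nonlinear optimizer, and the parameter grids and weighting scheme here are assumptions:

```python
def fit_inverse_power(xs, ys, ws):
    """Fit acc(x) = a - b * x**(-c) by weighted least squares over a coarse
    parameter grid (a stand-in for the paper's nonlinear optimizer).
    xs: training-set sizes; ys: observed accuracies; ws: point weights."""
    best = None
    for a in [0.80 + 0.01 * i for i in range(21)]:     # asymptote: 0.80..1.00
        for b in [0.1 * j for j in range(1, 21)]:      # amplitude: 0.1..2.0
            for c in [0.05 * k for k in range(1, 21)]: # decay:     0.05..1.0
                sse = sum(w * (y - (a - b * x ** -c)) ** 2
                          for x, y, w in zip(xs, ys, ws))
                if best is None or sse < best[0]:
                    best = (sse, a, b, c)
    return best[1:]

# synthetic learning curve generated from a = 0.9, b = 0.5, c = 0.5
xs = [10, 50, 100, 200, 400]
ys = [0.9 - 0.5 * x ** -0.5 for x in xs]
ws = xs  # weight larger samples more heavily (an assumed weighting scheme)
a, b, c = fit_inverse_power(xs, ys, ws)
```

Once fitted, the model predicts accuracy at unseen sample sizes (e.g. acc(5000) = a - b * 5000**(-c)), which is how a target performance can be translated into a required annotation budget.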