ABSTRACT
BACKGROUND: Over the past several decades, metrics have been defined to assess the quality of various types of models and to compare their performance according to their capacity to explain the variance found in real-life data. However, available validation methods are mostly designed for statistical regressions rather than for mechanistic models. To our knowledge, in the latter case there are no consensus standards, for instance for validating predictions against real-world data given the variability and uncertainty of those data. In this work, we focus on the prediction of time-to-event curves, using as an application example a mechanistic model of non-small cell lung cancer. We designed four empirical methods to assess both model performance and the reliability of predictions: two based on bootstrapped versions of statistical tests, the log-rank and the combined weighted log-rank (MaxCombo), and two based on bootstrapped prediction intervals, referred to here as raw coverage and the juncture metric. We also introduced the notion of observation-time uncertainty to take into account the real-life delay between the moment an event happens and the moment it is observed and reported. RESULTS: We highlight the advantages and disadvantages of these methods according to their application context. We show that the context of use of the model has an impact on the model validation process. By using several validation metrics, we highlighted the model's limitations in predicting disease evolution across the whole population of mutations at once, and showed that it was more effective at making specific predictions in the target mutation populations. The choice and use of a single metric could have led to an erroneous validation of the model and its context of use.
CONCLUSIONS: With this work, we stress the importance of choosing metrics judiciously, and show how using a combination of metrics can be more relevant when validating a given model and its predictions within a specific context of use. We also show that the reliability of the results depends both on the metric and on the statistical comparisons, and that the conditions of application and the type of available information need to be taken into account when choosing the best validation strategy.
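As an illustration of the "raw coverage" idea described above, the sketch below computes the fraction of observed Kaplan-Meier survival points that fall inside a bootstrapped 95% prediction band. The data, the band construction, and the simplification of ignoring censoring are illustrative assumptions, not the implementation used in the study.

```python
import numpy as np

def km_curve(times, grid):
    """Kaplan-Meier-style survival estimate (censoring ignored for
    simplicity), evaluated on a common time grid."""
    times = np.asarray(times)
    return np.array([(times > t).mean() for t in grid])

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 10.0, 21)

# "Model" predictions: bootstrap replicates of simulated event times
replicates = [km_curve(rng.exponential(5.0, 100), grid) for _ in range(500)]
band_lo, band_hi = np.percentile(replicates, [2.5, 97.5], axis=0)

# "Observed" data, drawn from a nearby but not identical distribution
observed = km_curve(rng.exponential(5.5, 100), grid)

# Raw coverage: share of observed survival points inside the band
coverage = float(np.mean((observed >= band_lo) & (observed <= band_hi)))
print(coverage)
```

A coverage close to 1 suggests the prediction band is consistent with the observations; the juncture metric described in the abstract would instead compare the band with a confidence band around the observed curve.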
Subjects
Adenocarcinoma of Lung , Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Carcinoma, Non-Small-Cell Lung/genetics , Reproducibility of Results , Uncertainty , Lung Neoplasms/genetics , Adenocarcinoma of Lung/genetics , ErbB Receptors/genetics
ABSTRACT
Reimbursement of drugs by public or private insurance systems is increasingly problematic, including in supposedly "rich" countries. There is an international consensus on the benefit of health technology assessment for clarifying collective reimbursement decisions, which includes taking account of the target population of the new drug. The authors argue for the urgent need for better quantification of the target population, which must include a qualitative description of that population and a scientific extrapolation of it, the latter certainly being the most challenging problem.
Subjects
Insurance, Health, Reimbursement/economics , Insurance, Pharmaceutical Services/economics , Reimbursement Mechanisms/economics , Decision Making , Humans , Technology Assessment, Biomedical/methods
ABSTRACT
BACKGROUND: The UK Prospective Diabetes Study showed that metformin decreases mortality compared to diet alone in overweight patients with type 2 diabetes mellitus. Since then, it has been the first-line treatment for overweight patients with type 2 diabetes. However, metformin-sulphonylurea bitherapy may increase mortality. METHODS AND FINDINGS: This meta-analysis of randomised controlled trials evaluated metformin efficacy (in studies of metformin versus diet alone, versus placebo, and versus no treatment; metformin as an add-on therapy; and metformin withdrawal) against cardiovascular morbidity and mortality in patients with type 2 diabetes. We searched Medline, Embase, and the Cochrane database. Primary end points were all-cause mortality and cardiovascular death. Secondary end points included all myocardial infarctions, all strokes, congestive heart failure, peripheral vascular disease, leg amputations, and microvascular complications. Thirteen randomised controlled trials (13,110 patients) were retrieved; 9,560 patients were given metformin, and 3,550 patients were given conventional treatment or placebo. Metformin did not significantly affect the primary outcomes: all-cause mortality, risk ratio (RR) = 0.99 (95% CI: 0.75 to 1.31), and cardiovascular mortality, RR = 1.05 (95% CI: 0.67 to 1.64). The secondary outcomes were also unaffected by metformin treatment: all myocardial infarctions, RR = 0.90 (95% CI: 0.74 to 1.09); all strokes, RR = 0.76 (95% CI: 0.51 to 1.14); heart failure, RR = 1.03 (95% CI: 0.67 to 1.59); peripheral vascular disease, RR = 0.90 (95% CI: 0.46 to 1.78); leg amputations, RR = 1.04 (95% CI: 0.44 to 2.44); and microvascular complications, RR = 0.83 (95% CI: 0.59 to 1.17). For all-cause mortality and cardiovascular mortality, there was significant heterogeneity when including the UK Prospective Diabetes Study subgroups (I² = 41% and 59%, respectively).
There was an interaction with sulphonylurea as a concomitant treatment for all-cause mortality and myocardial infarction (p = 0.10 and 0.02, respectively). CONCLUSIONS: Although metformin is considered the gold standard, its benefit/risk ratio remains uncertain. We cannot exclude a 25% reduction or a 31% increase in all-cause mortality, nor a 33% reduction or a 64% increase in cardiovascular mortality. Further studies are needed to clarify this situation.
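The risk ratios above are pooled across trials; as a reminder of how such pooling works, here is a minimal fixed-effect, inverse-variance sketch on made-up trial results (the numbers are illustrative, not those of this meta-analysis).

```python
import numpy as np

# Hypothetical per-trial risk ratios with 95% CIs (illustrative only)
rr = np.array([0.90, 1.10, 0.95])
ci_lo = np.array([0.70, 0.80, 0.60])
ci_hi = np.array([1.16, 1.51, 1.50])

log_rr = np.log(rr)
# Standard errors recovered from the CI width on the log scale
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)
w = 1.0 / se**2                      # inverse-variance weights

pooled_log = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

pooled_rr = float(np.exp(pooled_log))
pooled_ci = (float(np.exp(pooled_log - 1.96 * pooled_se)),
             float(np.exp(pooled_log + 1.96 * pooled_se)))
print(pooled_rr, pooled_ci)
```

When the pooled confidence interval spans 1, as here, neither a clinically relevant benefit nor harm can be excluded, which is exactly the situation the conclusion above describes.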
Subjects
Diabetes Mellitus, Type 2/drug therapy , Diabetic Angiopathies/mortality , Metformin/therapeutic use , Outcome Assessment, Health Care , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/mortality , Humans , Overweight/complications , Overweight/mortality , Sulfonylurea Compounds/adverse effects
ABSTRACT
Health technology assessment (HTA) aims to be a systematic, transparent, unbiased synthesis of the clinical efficacy, safety, and value of medical products (MPs) to help policymakers, payers, clinicians, and industry make informed decisions. The evidence available for HTA has gaps, impeding timely prediction of individual long-term effects in real clinical practice. Moreover, the appraisal of an MP requires cross-stakeholder communication and engagement. Both aspects may benefit from extended use of modeling and simulation. Modeling is used in HTA for data synthesis and health-economic projections. In parallel, regulatory consideration of model-informed drug development (MIDD) has brought attention to mechanistic modeling techniques that could in fact be relevant for HTA. Their ability to extrapolate and generate personalized predictions makes mechanistic MIDD approaches suitable for supporting the translation of clinical trial data into real-world evidence. In this perspective, we therefore discuss concrete examples of how mechanistic models could address HTA-related questions. We shed light on different stakeholders' contributions and needs in the appraisal phase and suggest how mechanistic modeling strategies and reporting can contribute to this effort. Several barriers still separate the HTA space from the clinical development space with regard to modeling: the lack of an adapted model validation framework for the decision-making process, inconsistent and unclear support by stakeholders, limited generalizable use cases, and the absence of appropriate incentives. To address this challenge, we suggest intensifying the collaboration between competent authorities, drug developers, and modelers, with the aim of making mechanistic models central to the evidence generation, synthesis, and appraisal of HTA, so that the totality of mechanistic and clinical evidence can be leveraged by all relevant stakeholders.
ABSTRACT
Respiratory disease trials are profoundly affected by non-pharmaceutical interventions (NPIs) against COVID-19 because they perturb the existing regular patterns of all seasonal viral epidemics. To address trial design under such uncertainty, we developed an epidemiological model of respiratory tract infection (RTI) coupled to a mechanistic description of viral RTI episodes. We explored the impact of reduced viral transmission (mimicking NPIs) using a virtual population and in silico trials of the bacterial lysate OM-85 as prophylaxis for RTI. Ratio-based efficacy metrics are only impacted under strict lockdown, whereas absolute benefit is already affected by intermediate NPIs (e.g., mask-wearing). Consequently, despite NPIs, trials may meet their relative efficacy endpoints (provided recruitment hurdles can be overcome) but are difficult to assess with respect to clinical relevance. These results advocate reporting a variety of metrics for benefit assessment and using adaptive trial designs with adapted statistical analyses. They also call into question eligibility criteria that are misaligned with the actual disease burden.
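The distinction between ratio-based and absolute efficacy metrics can be illustrated with a toy calculation; the incidence figures and the constant rate ratio of 0.70 are assumptions for illustration only, not outputs of the model described above.

```python
# Constant relative effect, shrinking absolute benefit (toy numbers):
# a prophylaxis removing 30% of RTI episodes (rate ratio 0.70), at
# background incidences meant to mimic different NPI intensities.
results = {}
for label, baseline in [("no NPI", 4.0), ("masks", 2.0), ("lockdown", 0.5)]:
    treated = 0.70 * baseline
    results[label] = {
        "rate_ratio": treated / baseline,        # unchanged by NPIs
        "absolute_benefit": baseline - treated,  # shrinks with NPIs
    }
print(results)
```

Even in this idealised case, a trial powered on the rate ratio could succeed under NPIs while the number of episodes actually prevented per patient collapses, which is why reporting both kinds of metric matters.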
Subjects
COVID-19 , Respiration Disorders , Respiratory Tract Infections , Virus Diseases , COVID-19/prevention & control , Clinical Trials as Topic , Communicable Disease Control/methods , Humans , Respiratory Tract Infections/epidemiology , SARS-CoV-2 , Virus Diseases/epidemiology
ABSTRACT
In order to propose a more precise definition and explore how to reduce ethical losses in randomized controlled clinical trials (RCTs), we set out to identify trial participants who do not contribute to demonstrating that the treatment in the experimental arm is superior to that in the control arm. RCTs emerged in the middle of the last century as the gold standard for assessing efficacy, becoming the cornerstone of the value of new therapies, yet their ethical grounds are a matter of debate. We introduce the concept of unnecessary participants in RCTs: the sum of non-informative participants and non-responders. Non-informative participants carry no information about the efficacy measured in the trial, in contrast to responders, who carry all the information required to conclude on the treatment's efficacy. Non-responders present the event whether or not they are treated with the experimental treatment. Unnecessary participants carry the burden of having to participate in a clinical trial without benefiting from it, which might include experiencing side effects. Thus, these unnecessary participants carry the ethical loss that is inherent to the RCT methodology. By contrast, responders to the experimental treatment bear its entire efficacy in the RCT. Starting from the proportions observed in a real placebo-controlled trial from the literature, we carried out simulations of RCTs, progressively increasing the proportion of responders up to 100%. We show that the number of unnecessary participants decreases steadily until the RCT's ethical loss reaches a minimum. In parallel, the trial sample size decreases (and presumably its cost as well), while the trial's statistical power increases, as shown by the increase of the chi-square statistic comparing event rates between the two arms. Thus, we expect that increasing the proportion of responders in RCTs would contribute to making them more ethically acceptable, with fewer false-negative outcomes.
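A minimal sketch of the simulation logic described above, assuming a toy trial in which non-responders have the event regardless of treatment and responders avoid it only when treated; the sample size and event rates are illustrative assumptions, not the proportions used in the study.

```python
from scipy.stats import chi2_contingency

def trial_chi2(responder_share, n_per_arm=200, control_rate=0.40):
    """Chi-square comparing event rates between arms in a toy trial where
    non-responders have the event regardless of treatment and responders
    avoid it only under the experimental treatment."""
    control_events = round(n_per_arm * control_rate)
    treated_events = round(n_per_arm * control_rate * (1 - responder_share))
    table = [[control_events, n_per_arm - control_events],
             [treated_events, n_per_arm - treated_events]]
    chi2, p, dof, expected = chi2_contingency(table)
    return chi2

few_responders = trial_chi2(0.25)
many_responders = trial_chi2(0.75)
print(few_responders, many_responders)
```

Enriching the trial with responders increases the chi-square at a fixed sample size, which is the same mechanism that lets an enriched trial keep its power with fewer participants.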
Subjects
Randomized Controlled Trials as Topic/ethics
ABSTRACT
The value of in silico methods in drug development and evaluation has been demonstrated repeatedly and convincingly. While their benefits are now unanimously recognized, international standards for their evaluation, accepted by all stakeholders involved, are still to be established. In this white paper, we propose a risk-informed evaluation framework for mechanistic model credibility evaluation. To properly frame the proposed verification and validation activities, concepts such as context of use, regulatory impact and risk-based analysis are discussed. To ensure common understanding between all stakeholders, an overview is provided of relevant in silico terminology used throughout this paper. To illustrate the feasibility of the proposed approach, we have applied it to three real case examples in the context of drug development, using a credibility matrix currently being tested as a quick-start tool by regulators. Altogether, this white paper provides a practical approach to model evaluation, applicable in both scientific and regulatory evaluation contexts.
Subjects
Computer Simulation , Drug Development/methods , Models, Theoretical , Drug Development/legislation & jurisprudence , Humans , Risk Assessment/methods , Terminology as Topic
ABSTRACT
Ischemic stroke involves numerous complex pathophysiological mechanisms, including blood flow reduction, ionic exchanges, spreading depressions, and cell death through necrosis or apoptosis. We used a mathematical model based on these phenomena to study the influence of the intensity and duration of ischemia on the final size of the infarcted area. The model relies on a set of ordinary and partial differential equations. After a sensitivity analysis, the model was used to carry out in silico experiments under various ischemic conditions. The simulation results show that the proportion of apoptotic cells increases when the intensity of ischemia decreases, which contributes to the model's validation. They also show that the influence of ischemia duration on infarct size is more complicated: they suggest that reperfusion is beneficial when performed early in the stroke but may be ineffective or even deleterious when performed later after stroke onset. This aggravation could be explained by the depolarisation waves, which might continue to spread ischemic damage, and by a speeding up of the apoptotic process leading to cell death. The effect of reperfusion on cell death through these two phenomena needs to be studied further in order to develop new therapeutic strategies for stroke patients.
Subjects
Brain Ischemia/pathology , Brain Ischemia/physiopathology , Models, Neurological , Stroke/pathology , Stroke/physiopathology , Algorithms , Apoptosis , Brain Infarction/pathology , Brain Infarction/physiopathology , Cerebrovascular Circulation , Humans , Models, Cardiovascular , Necrosis , Regional Blood Flow , Time Factors
Subjects
Lung Transplantation/adverse effects , Primary Graft Dysfunction/diagnosis , Respiratory Insufficiency/therapy , Algorithms , Allografts , Bronchiolitis Obliterans/etiology , Bronchiolitis Obliterans/mortality , Chronic Disease , Computer Simulation , Graft Rejection , Humans , Lung Transplantation/methods , Phenotype , Primary Graft Dysfunction/mortality , Pulmonary Medicine/standards , Systems Theory , Treatment Outcome
ABSTRACT
Diseases are complex systems. Modelling them, i.e. systems pathophysiology, is a demanding, complicated, multidimensional, multiscale process. In order to achieve the goal of the model, and to optimise a rather time- and resource-consuming process, a relevant and easy-to-apply methodology is required, including guidance for validation. Model development should also be managed as a complicated process, following a strategy elaborated at the outset yet flexible enough to fit each case. A model is a representation of the available knowledge. Not all available knowledge has the same level of evidence, and there is large variability in the values of parameters (e.g. affinity constants or ionic currents) across the literature. In addition, in a complex biological system there are always values lacking for a few, or sometimes many, parameters. These three aspects are sources of uncertainty about the range of validity of models and raise unsolved problems for designing a relevant model. Tools and techniques are needed for integrating the range of experimental parameter values, the level of evidence, and missing data.
Subjects
Disease , Models, Biological , Systems Biology/methods , Animals , Computer Simulation , Humans
ABSTRACT
Ischemic stroke is the third leading cause of death in industrialised countries, but no satisfactory treatment is currently available. The hundreds of neuroprotective drugs developed to block the ischemic cascade gave very promising results in animal models, but clinical trials performed with these drugs showed no beneficial effects in stroke patients. Many hypotheses have been advanced to explain this discrepancy, among them the morphological and functional differences between human and rodent brains. The discrepancy could be partly due to differences in white matter and glial cell proportions between human and rodent brains. To test this hypothesis, we built a mathematical model of the main early pathophysiological mechanisms of stroke in rodent and human brains. This two-scale model relies on a set of ordinary differential equations. We built two versions of the model (for human and rodent brains) differing in their white matter and glial cell proportions, and then carried out in silico experiments with various neuroprotective drugs. The simulation results obtained with a sodium channel blocker show that the proportion of penumbra recovery is much higher in the rodent than in the human brain, and the results are similar with other neuroprotective drugs tested during phase III trials. This in silico investigation suggests that the proportions of glial cells and white matter influence neuroprotective drug efficacy. It reinforces the hypothesis that histological and morphological differences between rodent and human brains can partly explain the failure of these agents in clinical trials.
Subjects
Blood Flow Velocity/drug effects , Brain Ischemia/prevention & control , Brain Ischemia/physiopathology , Cerebrovascular Circulation/drug effects , Models, Neurological , Neuroprotective Agents/administration & dosage , Stroke/prevention & control , Stroke/physiopathology , Animals , Computer Simulation , Humans
ABSTRACT
Tumor angiogenesis is the process by which new blood vessels are formed, enhancing the oxygenation and growth of tumors. As angiogenesis is recognized as a critical event in cancer development, considerable efforts have been made to identify inhibitors of this process, and cytostatic treatments that target the molecular events of angiogenesis have been developed with some success. However, it is usually difficult to assess the effectiveness of targeted therapies preclinically, and apparently promising compounds sometimes fail in clinical trials. We have developed a multiscale mathematical model of angiogenesis and tumor growth. At the molecular level, the model focuses on the competition between pro- and anti-angiogenic substances, modeled on the basis of pharmacological laws. At the tissue scale, the model uses partial differential equations to describe the spatio-temporal changes in cancer cells during three stages of the cell cycle, as well as those of the endothelial cells that constitute the blood vessel walls. The model is used to qualitatively assess how effective endostatin gene therapy is. Endostatin is an endogenous anti-angiogenic substance; the gene therapy entails overexpressing endostatin in the tumor and the surrounding tissue. Simulations show that there is a critical treatment dose below which increasing the duration of treatment leads to a loss of efficacy. This theoretical model may be useful for evaluating the efficacy of therapies targeting angiogenesis, and could therefore contribute to the design of prospective clinical trials.
Subjects
Angiogenesis Inhibitors/therapeutic use , Models, Biological , Neoplasms/blood supply , Neovascularization, Pathologic/therapy , Angiopoietins/metabolism , Endostatins/biosynthesis , Endostatins/genetics , Endothelium, Vascular/pathology , Genetic Therapy/methods , Humans , Neoplasm Proteins/metabolism , Neoplasms/metabolism , Neoplasms/therapy , Neovascularization, Pathologic/metabolism , Neovascularization, Pathologic/pathology , Oxygen Consumption/physiology , Treatment Outcome , Vascular Endothelial Growth Factor A/metabolism
ABSTRACT
BACKGROUND: Numerous studies have examined the validity of available scores to predict absolute cardiovascular risk. DESIGN: We developed a virtual population based on data representative of the French population and compared the performance of the two most popular risk equations for predicting cardiovascular death: Framingham and SCORE. METHODS: A population was built from official French demographic statistics and summarized data from representative observational studies. The 10-year coronary and cardiovascular death risks, and their ratio, were computed for each individual with the SCORE and Framingham equations. The resulting rates were compared with those derived from national vital statistics. RESULTS: Framingham overestimated French coronary deaths by a factor of 2.8 in men and 1.9 in women, and cardiovascular deaths by 1.5 in men and 1.3 in women. SCORE overestimated coronary death by a factor of 1.6 in men and 1.7 in women, and underestimated cardiovascular death (ratios of 0.94 in men and 0.85 in women). Our results revealed an exaggerated share of coronary deaths among the cardiovascular deaths predicted by Framingham, with predicted coronary death exceeding cardiovascular death for some individual profiles. Sensitivity analyses gave some insight into the internal inconsistency of the Framingham equations. CONCLUSION: The evidence indicates that SCORE should be preferred over Framingham to predict cardiovascular death risk in the French population, and this discrepancy between prediction scores is likely to be observed in other populations. To improve the validation of risk equations, specific guidelines should be issued to harmonize outcome definitions across epidemiologic studies, and prediction models should be calibrated for risk differences across space and time.
Subjects
Cardiovascular Diseases/mortality , Health Status Indicators , Adult , Cardiovascular Diseases/etiology , Computer Simulation , Female , France/epidemiology , Humans , Male , Middle Aged , Predictive Value of Tests , Reproducibility of Results , Risk Assessment , Risk Factors , Time Factors
ABSTRACT
BACKGROUND: In medical practice, it is generally accepted that the 'effect model' describing the relationship between baseline risk and risk under treatment is linear, i.e. the 'relative risk' is constant. Absolute benefit is then proportional to a patient's baseline risk, and the treatment is most effective among high-risk patients. Alternatively, the 'effect model' becomes curvilinear when the 'odds ratio' is considered constant. However, these two models are based on purely empirical considerations, and there is still no theoretical approach supporting either the linear or the non-linear relation. PRESENTATION OF THE HYPOTHESIS: From logistic and sigmoidal Emax (Hill) models, we derived a phenomenological model that can integrate both beneficial and harmful effects. Instead of a linear relation, our model suggests that the relationship is curvilinear, i.e. moderate-risk patients gain most from the treatment, as opposed to those at low or high risk. TESTING THE HYPOTHESIS: Two approaches can be proposed to investigate such a model in practice. The retrospective approach is to perform a meta-analysis of clinical trials with subgroups of patients covering a wide range of baseline risks. The prospective approach is to perform a large clinical trial in which patients are recruited into several pre-stratified diverse- and high-risk groups. IMPLICATIONS OF THE HYPOTHESIS: For the quantification of the treatment effect under such a model, the discrepancy between the odds ratio (OR) and the relative risk (RR) may be related not only to the level of risk under control conditions (Rc), but also to the characteristics of the dose-effect relation and the dose administered. In the proposed approach, OR may be considered constant over the whole range of Rc, depending only on the intrinsic characteristics of the treatment. Therefore, OR should be preferred over RR to summarize information on treatment efficacy.
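The curvilinear effect model under a constant odds ratio can be made concrete with a short computation; the odds ratio of 0.5 is an arbitrary illustrative value, not one derived from the hypothesis paper.

```python
import numpy as np

def risk_under_treatment(rc, odds_ratio):
    """Convert baseline risk to risk under treatment assuming a constant
    odds ratio: odds_t = OR * odds_c, converted back to a probability."""
    odds_t = odds_ratio * rc / (1.0 - rc)
    return odds_t / (1.0 + odds_t)

rc = np.linspace(0.01, 0.99, 99)               # baseline risks Rc
benefit = rc - risk_under_treatment(rc, 0.5)   # absolute benefit, OR = 0.5

# The benefit curve is curvilinear: largest for moderate baseline risks,
# small for both low- and high-risk patients.
peak_rc = float(rc[np.argmax(benefit)])
print(peak_rc)
```

With a constant OR of 0.5, the absolute benefit peaks near Rc = 2 - sqrt(2), around 0.59, illustrating the claim that moderate-risk patients gain the most under this model.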
ABSTRACT
BACKGROUND: Randomised, double-blind, clinical trial methodology minimises bias in the measurement of treatment efficacy. However, most phase III trials in non-orphan diseases do not include individuals from the population to whom efficacy findings will be applied in the real world. Thus, a translation process must be used to infer effectiveness for these populations. Current conventional translation processes are not formalised and do not have a clear theoretical or practical base. There is a growing need for accurate translation, both for public health considerations and for supporting the shift towards personalised medicine. OBJECTIVE: Our objective was to assess the results of translation of efficacy data to population efficacy from two simulated clinical trials for two drugs in three populations, using conventional methods. METHODS: We simulated three populations, two drugs with different efficacies and two trials with different sampling protocols. RESULTS: With few exceptions, current translation methods do not result in accurate population effectiveness predictions. The reason for this failure is the non-linearity of the translation method. One of the consequences of this inaccuracy is that pharmacoeconomic and postmarketing surveillance studies based on direct use of clinical trial efficacy metrics are flawed. CONCLUSION: There is a clear need to develop and validate functional and relevant translation approaches for the translation of clinical trial efficacy to the real-world setting.
ABSTRACT
BACKGROUND AND OBJECTIVES: A relation between the size of treatment efficacy and the severity of the disease has been postulated, and observed to be linear for a few therapies. We have called this relation the effect model. Our objectives were to demonstrate that the relation is general and not necessarily linear. STUDY DESIGN AND SETTING: We extended the number of observed effect models. We then established three numerical models of treatment activity corresponding to the three modes of action we identified, and used these models to simulate the relation. RESULTS: Empirical evidence confirms the effect model and suggests that it may be linear over a short range of event frequency. However, it provides an incomplete understanding of the phenomenon because of the inescapable limitations of data from randomized clinical trials. Numerical modeling and simulation show that the real effect model is likely to be more complicated, and probably linear only in rare instances. The effect model is general; it depends on factors related to the individual, the disease, and the outcome. CONCLUSION: Contrary to common assumption, since the effect model is often curvilinear, the relative risk cannot be taken as constant. The effect model should be taken into account when discovering and developing new therapies, when making health care policy decisions, and when adjusting clinical decisions to the patient's risk profile.
Subjects
Models, Statistical , Treatment Outcome , Adrenergic beta-Antagonists/therapeutic use , Humans , Myocardial Infarction/drug therapy , Myocardial Infarction/mortality , Randomized Controlled Trials as Topic/methods , Severity of Illness Index
ABSTRACT
The astrocytic response to stroke is extremely complex and incompletely understood. On the one hand, astrocytes are known to be neuroprotective when extracellular glutamate or potassium is slightly increased; on the other hand, they are considered to contribute to the extracellular glutamate increase during severe ischaemia. A mathematical model is used to reproduce the dynamics of the membrane potentials, intracellular and extracellular concentrations, and volumes of neurons and astrocytes during ischaemia, in order to study the role of astrocytes in grey matter during the first hour of a stroke. Under conditions of mild ischaemia, astrocytes are observed to take up glutamate via the glutamate transporter and potassium via the Na/K/Cl cotransporter, which limits the glutamate and potassium increases in the extracellular space. By contrast, under conditions of severe ischaemia, astrocytes appear unable to maintain potassium homeostasis; moreover, they are shown to contribute to the excitotoxicity process by expelling glutamate out of the cells via the reversed glutamate transporter. A detailed understanding of astrocytic function and its influence on neuron survival during stroke is necessary to improve neuroprotective strategies for stroke patients.
Subjects
Astrocytes , Models, Neurological , Periaqueductal Gray/physiopathology , Stroke/physiopathology , Amino Acid Transport System X-AG/metabolism , Astrocytes/metabolism , Biological Transport , Brain Ischemia/metabolism , Brain Ischemia/physiopathology , Diffusion , Glutamic Acid/metabolism , Humans , Potassium/metabolism , Severity of Illness Index , Sodium-Potassium-Chloride Symporters/metabolism , Stroke/pathology
ABSTRACT
The multiple testing problem is a tricky one, and there is no entirely correct solution. It is better avoided through careful thinking before beginning the experiment. In fundamental and clinical pharmacology, several established concepts allow researchers to minimize the cases in which the issue arises.
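One classical way to see why careful planning matters: with many tests, the family-wise false-positive rate inflates quickly, and a Bonferroni-type correction restores it, at a cost in power. The numbers below are a standard textbook illustration, not taken from the abstract.

```python
# Family-wise error rate for m independent tests at level alpha
alpha, m = 0.05, 10

fwer_uncorrected = 1 - (1 - alpha) ** m            # about 0.40: a false
                                                   # positive is likely
bonferroni_alpha = alpha / m                       # test each at 0.005
fwer_bonferroni = 1 - (1 - bonferroni_alpha) ** m  # back below 0.05
print(fwer_uncorrected, fwer_bonferroni)
```

Pre-specifying a single primary endpoint, as trial protocols require, is the design-level version of the same idea: it keeps m small so no correction is needed.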
Subjects
Data Interpretation, Statistical , Models, Statistical , Randomized Controlled Trials as Topic/statistics & numerical data , Anticholesteremic Agents/administration & dosage , Anticholesteremic Agents/therapeutic use , Atorvastatin , Cholesterol/blood , Clinical Protocols , Dose-Response Relationship, Drug , Heptanoic Acids/administration & dosage , Heptanoic Acids/therapeutic use , Humans , Pyrroles/administration & dosage , Pyrroles/therapeutic use , Randomized Controlled Trials as Topic/methods , Research Design
ABSTRACT
To investigate the potential benefits of two modes of evidence-based knowledge transfer ('active' and 'passive') in terms of improvement in prescription intent, knowledge, and real prescriptions in practice, we performed an open randomized controlled trial (CardioDAS) using a factorial design (two tested interventions: 'active' and 'passive' knowledge transfer) and a hierarchical structure (clusters of physicians at the department level). The participants were cardiologists working in French public hospitals. In the 'passive' transfer group, cardiologists received evidence-based knowledge material (available on the Internet) every week for 1 year. In the 'active' transfer group, two knowledge brokers (EA, PN) visited the participating departments every 2 months for 1 year, for 2 h per visit. The primary outcome was the adjusted absolute mean change in score (difference between post- and pre-study sessions) on answers to simulated cases assessing the intention to prescribe. Secondary outcomes were the change in answers to a multiple-choice questionnaire (MCQ) assessing knowledge, and the conformity of real prescriptions to evidence-based references, assessing behavioral change. Twenty-two French units (departments) of cardiology were randomized (72 participating cardiologists). In the 'active' transfer group, the primary outcome improved more than in the control group (P = 0.031 at the department level; absolute mean improvement of 5 points/100). The change in knowledge (MCQ) was also significant (P = 0.039 at the department level; absolute mean improvement of 6 points/100). However, no benefit was shown in terms of prescription conformity to evidence. For the 'passive' mode of knowledge transfer, no improvement was identified for any of the three outcomes.
CardioDAS findings confirm that 'active' knowledge transfer has some impact on participants' intention to prescribe and on their knowledge, but no effect on the behavioral outcome. 'Passive' transfer seems far less efficient. In addition, the size of the benefit remains small and its practical consequences limited.
Subjects
Cardiology/education , Education, Medical, Continuing/methods , Evidence-Based Medicine/education , Knowledge , Humans
ABSTRACT
OBJECTIVE: To study the performance of a new method designed to measure discrepancies between real prescriptions and evidence-based reference treatments. METHODS: Two indices (additive and multiplicative) are proposed to summarize the deviation between prescription and reference. Deviations thought to be observed in a population of prescribers are simulated in diverse hypothetical situations, in the presence or absence of evidence-based references. The performance of the two indices is compared and their sensitivity to change is explored. RESULTS: Both indices are sensitive to variations in prescriber behaviour. The additive index allows a more accurate analysis of deviation, while the multiplicative index is simpler to implement and interpret but more sensitive to change. CONCLUSION: The two deviation indices may be used as new tools in surveys or trials dealing with prescribing practices.
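A sketch of what additive and multiplicative deviation indices might look like; these particular formulas and the example scores are assumptions for illustration, not the definitions proposed in the study.

```python
def additive_index(deviations):
    """Mean per-component deviation; each deviation in [0, 1],
    0 meaning full conformity to the reference (assumed form)."""
    return sum(deviations) / len(deviations)

def multiplicative_index(deviations):
    """One minus the product of per-component conformities; a single
    large deviation dominates, making this index more sensitive to
    change (assumed form)."""
    conformity = 1.0
    for d in deviations:
        conformity *= (1.0 - d)
    return 1.0 - conformity

# e.g. three prescription components: a slight dose deviation, exact
# conformity, and a large wrong-drug deviation (hypothetical scores)
deviations = [0.1, 0.0, 0.5]
add_idx = additive_index(deviations)
mult_idx = multiplicative_index(deviations)
print(add_idx, mult_idx)
```

On the same inputs the multiplicative form reacts more strongly to the single large deviation than the additive mean does, mirroring the sensitivity trade-off described in the results.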