1.
Innov Clin Neurosci ; 17(7-9): 30-40, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-33520402

ABSTRACT

Objective: The goal of the Depression Inventory Development (DID) project is to develop a comprehensive and psychometrically sound rating scale for major depressive disorder (MDD) that reflects current diagnostic criteria and conceptualizations of depression. We report here the evaluation of the current DID item bank using Classical Test Theory (CTT), Item Response Theory (IRT), and Rasch Measurement Theory (RMT). Methods: The present study was part of a larger multisite, open-label study conducted by the Canadian Biomarker Integration Network in Depression (ClinicalTrials.gov: NCT01655706). Trained raters administered the 32 DID items at each of two visits (MDD: baseline, n=211 and Week 8, n=177; healthy participants: baseline, n=112 and Week 8, n=104). The DID's "grid" structure operationalizes the intensity and frequency of each item, with clear symptom definitions and a structured interview guide; the current iteration assesses symptoms related to anhedonia, cognition, fatigue, general malaise, motivation, anxiety, negative thinking, pain, and appetite. Participants were also administered the Montgomery-Åsberg Depression Rating Scale (MADRS) and Quick Inventory of Depressive Symptomatology-Self-Report (QIDS-SR), which allowed DID items to be evaluated against existing "benchmark" items. CTT was used to assess data quality/reliability (i.e., missing data, skewness, scoring frequency, internal consistency), IRT to assess individual item performance by modelling an item's ability to discriminate levels of depressive severity (as assessed by the MADRS), and RMT to assess how the items perform together as a scale to capture a range of depressive severity (item targeting). Together, these analyses provided empirical evidence on which to base decisions about which DID items to remove, modify, or advance.
Results: Of the 32 DID items evaluated, eight items were identified by CTT as problematic, displaying low variability in the range of responses, floor effects, and/or skewness; and four items were identified by IRT to show poor discriminative properties that would limit their clinical utility. Five additional items were deemed to be redundant. The remaining 15 DID items all fit the Rasch model, with person and item difficulty estimates indicating satisfactory item targeting, though with lower precision in participants with mild levels of depression. These 15 DID items also showed good internal consistency (alpha=0.95, with inter-item correlations ranging from r=0.49 to r=0.84), and all items were sensitive to change following antidepressant treatment (baseline vs. Week 8). RMT revealed problematic item targeting for the MADRS and QIDS-SR, including an absence of MADRS items targeting participants with mild/moderate depression and an absence of QIDS-SR items targeting participants with mild or severe depression. Conclusion: The present study applied CTT, IRT, and RMT to assess the measurement properties of the DID items and identify those that should be advanced, modified, or removed. Of the 32 items evaluated, 15 items showed good measurement properties. These items (along with previously evaluated items) will provide the basis for validation of a penultimate DID scale assessing anhedonia, cognitive slowing, concentration, executive function, recent memory, drive, emotional fatigue, guilt, self-esteem, hopelessness, tension, rumination, irritability, reduced appetite, insomnia, sadness, worry, suicidality, and depressed mood. The strategies adopted by the DID process provide a framework for rating scale development and validation.
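The internal-consistency figure reported above (alpha=0.95) is Cronbach's alpha, which compares the summed per-item variances to the variance of the total score. A minimal sketch of the computation, using invented item scores rather than the study's data:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a list of per-item score lists (one list per item,
    one entry per participant)."""
    k = len(item_scores)
    item_vars = [statistics.pvariance(item) for item in item_scores]
    totals = [sum(scores) for scores in zip(*item_scores)]  # total score per participant
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three hypothetical items rated for five participants (illustrative data only)
items = [
    [2, 3, 4, 1, 5],
    [2, 4, 4, 1, 5],
    [3, 3, 5, 2, 4],
]
alpha = cronbach_alpha(items)  # close to 1 when items covary strongly
```

When items measure the same construct, covariance inflates the total-score variance relative to the summed item variances, pushing alpha toward 1.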

2.
J Affect Disord ; 256: 143-147, 2019 09 01.
Article in English | MEDLINE | ID: mdl-31176186

ABSTRACT

The International Society for CNS Clinical Trials and Methodology convened an expert Working Group that assembled consistency/inconsistency flags for the Montgomery-Åsberg Depression Rating Scale (MADRS). Twenty-two flags were identified. Seven are considered strong flags, suggesting that a thorough review of the rating is warranted. The flags were applied to assessments derived from the NEWMEDS data repository. Almost 65% of ratings had at least one inconsistency flag raised, and 22% had two or more. Applying the flags to clinical ratings may improve the reliability of ratings and the validity of trials.
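The Working Group's actual flag definitions are given in the paper; purely for illustration, a consistency check of this kind can be sketched as rules over the ten MADRS item scores. The two rules below are hypothetical, not the published flags:

```python
def raise_flags(madrs_items):
    """Apply two HYPOTHETICAL consistency checks to a dict of MADRS item
    scores (each 0-6). The real Working Group flags differ."""
    flags = []
    total = sum(madrs_items.values())
    # Hypothetical rule: apparent and reported sadness rarely diverge widely
    if abs(madrs_items["apparent_sadness"] - madrs_items["reported_sadness"]) >= 3:
        flags.append("sadness items diverge")
    # Hypothetical rule: near-zero total with high suicidality is implausible
    if total <= 6 and madrs_items["suicidal_thoughts"] >= 4:
        flags.append("high suicidality with minimal overall severity")
    return flags

# An invented rating that trips the first rule
rating = {"apparent_sadness": 1, "reported_sadness": 5, "suicidal_thoughts": 0,
          "inner_tension": 1, "reduced_sleep": 0, "reduced_appetite": 0,
          "concentration": 0, "lassitude": 0, "inability_to_feel": 0, "pessimism": 0}
flags = raise_flags(rating)
```

A rating raising one or more flags would then be routed for the "thorough review" the abstract describes, rather than automatically rejected.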


Subject(s)
Depression/diagnosis; Psychiatric Status Rating Scales/standards; Adult; Female; Humans; Male; Middle Aged; Psychometrics; Reproducibility of Results
3.
Ther Innov Regul Sci ; 53(2): 176-182, 2019 03.
Article in English | MEDLINE | ID: mdl-29758992

ABSTRACT

Monitoring the quality of clinical trial efficacy outcome data has received increased attention in the past decade, with regulatory guidance encouraging it to be conducted proactively and remotely. However, the methods utilized to develop and implement risk-based data monitoring (RBDM) programs vary, and there is a dearth of published material to guide these processes in the context of central nervous system (CNS) trials. We reviewed regulatory guidance published within the past 6 years, generic white papers, and studies applying RBDM to data from CNS clinical trials. Methodological considerations and system requirements necessary to establish an effective, real-time risk-based monitoring platform in CNS trials are presented. Key RBDM terms are defined in the context of CNS trial data, such as "critical data," "risk indicators," "noninformative data," and "mitigation of risk." Additionally, potential benefits of, and challenges associated with, implementation of data quality monitoring are highlighted. Application of methodological and system requirement considerations to real-time monitoring of clinical ratings in CNS trials has the potential to minimize risk and enhance the quality of clinical trial data.


Subject(s)
Central Nervous System Agents/therapeutic use; Clinical Trials as Topic/standards; Humans; Quality Control; Risk
4.
Innov Clin Neurosci ; 13(9-10): 20-31, 2016.
Article in English | MEDLINE | ID: mdl-27974997

ABSTRACT

The Depression Inventory Development project is an initiative of the International Society for CNS Drug Development whose goal is to develop a comprehensive and psychometrically sound measurement tool to be utilized as a primary endpoint in clinical trials for major depressive disorder. Using an iterative process between field testing and psychometric analysis, and drawing upon the expertise of international researchers in depression, the Depression Inventory Development team has established an empirically driven and collaborative protocol for the creation of items to assess symptoms in major depressive disorder. Depression-relevant symptom clusters were identified based on expert clinical and patient input. In addition, as an aid for symptom identification and item construction, the psychometric properties of existing clinical scales (assessing depression and related indications) were evaluated using blinded datasets from pharmaceutical antidepressant drug trials. A series of field tests in patients with major depressive disorder provided the team with data to inform the iterative process of scale development. We report here an overview of the Depression Inventory Development initiative, including results of the third iteration of items assessing symptoms related to anhedonia, cognition, fatigue, general malaise, motivation, anxiety, negative thinking, pain, and appetite. The strategies adopted by the Depression Inventory Development program, as an empirically driven and collaborative process for scale development, have provided the foundation to develop and validate measurement tools in other therapeutic areas as well.

5.
Innov Clin Neurosci ; 13(1-2): 27-33, 2016.
Article in English | MEDLINE | ID: mdl-27413584

ABSTRACT

This paper summarizes the results of the CNS Summit Data Quality Monitoring Workgroup analysis of current data quality monitoring techniques used in central nervous system (CNS) clinical trials. Based on audience polls conducted at the CNS Summit 2014, the panel determined that current techniques used to monitor data and quality in clinical trials are broad and uncontrolled, and lack independent verification. The majority of those polled endorse the value of monitoring data. Case examples of current data quality methodology are presented and discussed. Perspectives of pharmaceutical companies and trial sites regarding data quality monitoring are presented. Potential future developments in CNS data quality monitoring are described. Increased utilization of biomarkers as objective outcomes and for patient selection is considered to be the most impactful development in data quality monitoring over the next 10 years. Additional future outcome measures and patient selection approaches are discussed.

7.
J Clin Psychopharmacol ; 30(2): 193-7, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20520295

ABSTRACT

The use of centralized raters who are remotely linked to sites and interview patients via videoconferencing or teleconferencing has been suggested as a way to improve interrater reliability and interview quality. This study compared the effect of site-based and centralized ratings on patient selection and placebo response in subjects with major depressive disorder. Subjects in a 2-center placebo and active comparator controlled depression trial were interviewed twice at each of 3 time points (baseline, 1-week postbaseline, and endpoint): once by the site rater and once remotely, via videoconference, by a centralized rater. Raters were blind to each other's scores. A site-based score of greater than 17 on the 17-item Hamilton Depression Rating Scale (HDRS-17) was required for study entry. When examining all subjects entering the study, site-based raters' HDRS-17 scores were significantly higher than centralized raters' at baseline and postbaseline but not at endpoint. At baseline, 35% of subjects given an HDRS-17 total score of greater than 17 by a site rater were given a total score lower than 17 by a centralized rater and would have been ineligible to enter the study if the centralized rater's score had been used to determine study entry. The mean placebo change for site raters (7.52) was significantly greater than the mean placebo change for centralized raters (3.18, P < 0.001). Twenty-eight percent were placebo responders (>50% reduction in HDRS) based on site ratings versus 14% for central ratings (P < 0.001). When examining data only from those subjects whom site and centralized raters agreed were eligible for the study, there was no significant difference in the HDRS-17 scores. Findings suggest that the use of centralized raters could significantly change the study sample in a major depressive disorder trial and lead to significantly less change in mood ratings among those randomized to placebo.
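The responder criterion used above, a >50% reduction in HDRS-17 total from baseline, is a straightforward computation. A sketch with invented scores, not the trial's data:

```python
def responder_rate(baseline, endpoint):
    """Fraction of subjects whose HDRS-17 total fell by more than 50%
    from baseline to endpoint."""
    responders = sum(1 for b, e in zip(baseline, endpoint) if (b - e) / b > 0.5)
    return responders / len(baseline)

# Illustrative baseline/endpoint totals for five subjects (invented)
base = [20, 18, 22, 19, 21]
end  = [8, 15, 10, 18, 20]
rate = responder_rate(base, end)  # 2 of 5 subjects cross the 50% threshold
```

Because the threshold is relative to each subject's own baseline, systematically inflated baseline scores (as the abstract suggests site raters produced) make the 50% criterion easier to meet, one mechanism by which rating source can shift apparent placebo response.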


Subject(s)
Depressive Disorder, Major/diagnosis; Depressive Disorder, Major/drug therapy; Patient Selection; Psychiatric Status Rating Scales/standards; Remote Consultation/standards; Cross-Sectional Studies; Depressive Disorder, Major/psychology; Female; Humans; Male; Observer Variation; Placebo Effect; Sertraline/therapeutic use; Single-Blind Method; Treatment Outcome
8.
Int Clin Psychopharmacol ; 23(3): 120-9, 2008 May.
Article in English | MEDLINE | ID: mdl-18408526

ABSTRACT

This report describes the GRID-Hamilton Depression Rating Scale (GRID-HAMD), an improved version of the Hamilton Depression Rating Scale that was developed through a broad-based international consensus process. The GRID-HAMD separates the frequency of the symptom from its intensity for most items, refines several problematic anchors, and integrates both a structured interview guide and consensus-derived conventions for all items. Usability was established in a small three-site sample of convenience, evaluating 29 outpatients, with most evaluators finding the scale easy to use. Test-retest (4-week) and interrater reliability were established in 34 adult outpatients with major depressive disorder, as part of an ongoing clinical trial. In a separate study, interrater reliability was found to be superior to the Guy version of the HAMD, and as good as the Structured Interview Guide for the Hamilton Depression Rating Scale (SIGH-D), across 30 interview pairs. Finally, using the SIGH-D as the criterion standard, the GRID-HAMD demonstrated high concurrent validity. Overall, these data suggest that the GRID-HAMD is an improvement over the original Guy version as well as the SIGH-D in its incorporation of innovative features and preservation of high reliability and validity.


Subject(s)
Depressive Disorder, Major/diagnosis; Interview, Psychological/standards; Psychiatric Status Rating Scales/standards; Surveys and Questionnaires/standards; Adult; Consensus Development Conferences as Topic; Depressive Disorder, Major/psychology; Depressive Disorder, Major/therapy; Humans; International Cooperation; Observer Variation; Pilot Projects; Predictive Value of Tests; Psychometrics; Reproducibility of Results; Treatment Outcome; United States
9.
Psychiatry Res ; 158(1): 99-103, 2008 Feb 28.
Article in English | MEDLINE | ID: mdl-17961715

ABSTRACT

Poor inter-rater reliability (IRR) is an important methodological factor that may contribute to failed trials. The sheer number of raters at diverse sites in multicenter trials presents a formidable challenge in calibration. Videoconferencing allows for the evaluation of IRR of raters at diverse sites by enabling raters at different sites to each independently interview a common patient. This is a more rigorous test of IRR than passive rating of videotapes. To evaluate the potential impact of videoconferencing on IRR, we compared IRR obtained via videoconference to IRR obtained using face-to-face interviews. Four raters at three different locations were paired using all pair-wise combinations of raters. Using videoconferencing, each paired rater independently conducted an interview with the same patient, who was at a third, central location. Raters were blind to each other's scores. The intraclass correlation coefficient (ICC) from this cohort (n=22) was not significantly different from the ICC obtained by a cohort using two face-to-face interviews (n=21) (0.90 vs. 0.93, respectively), nor from a cohort using one face-to-face interview and one remote interview (n=21) (0.88). The mean Hamilton Depression Rating Scale (HAMD) scores obtained were not significantly different. There appears to be no loss of signal using remote methods of calibration compared with traditional face-to-face methods.
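The ICC compared above is conventionally computed from a two-way ANOVA decomposition of a subjects-by-raters table. A sketch of the single-rater, two-way random-effects form, ICC(2,1), using invented HAMD totals (the abstract does not specify which ICC variant was used, so this form is an assumption):

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    for a table of shape (subjects x raters)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    sst = sum((x - grand) ** 2 for row in ratings for x in row)
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical HAMD totals: five patients each scored by two raters
pairs = [[18, 17], [22, 21], [10, 12], [25, 24], [14, 15]]
icc = icc_2_1(pairs)  # near 1 when raters agree closely
```

Unlike a plain correlation, this form penalizes systematic score offsets between raters, which is the relevant property when calibrating raters across sites.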


Subject(s)
Depressive Disorder, Major/diagnosis; Depressive Disorder, Major/psychology; Surveys and Questionnaires; Videoconferencing/statistics & numerical data; Depressive Disorder, Major/epidemiology; Humans; Observer Variation
10.
Depress Anxiety ; 25(9): 774-86, 2008.
Article in English | MEDLINE | ID: mdl-17935212

ABSTRACT

Efforts to improve the Hamilton Rating Scale for Depression (HRSD) have included shortening the scale by selecting the best performing items, lengthening the scale by assessing additional symptoms, modifying the format and scoring of existing items, and developing structured interview guides for administration. We defined item performance exclusively in terms of the ability of items to discriminate differences among levels of depressive severity, a criterion that has not been used to guide any revisions of the HRSD conducted to date. Two techniques derived from item response theory were used to improve the ability of the HRSD to discriminate among individuals with different degrees of depressive severity. Item response curves were used to quantify the ability of items to discriminate among individual differences in depressive severity, on the basis of which the most discriminating items were selected. Maximum likelihood estimates were used to compute an optimal depressive severity score, using all items, but weighting highly discriminating items more than items that did not discriminate well. The utility of each method was evaluated by comparing a subset of optimally discriminating items and maximum likelihood estimates of depressive severity to the Maier-Philipp subscale of the HRSD, in terms of how well the scales discriminate treatment effects. Effect sizes for overall change in depression severity as well as effect sizes differentiating response to treatment versus placebo were evaluated in a sample of 491 patients receiving fluoxetine and 494 patients receiving placebo. Results of analyses identified a new subset of items (IRT-6), selected on the basis of their ability to discriminate among differences in depressive severity, that accounted for more variance in full-scale HRSD scores and was better at detecting change in illness severity than the Maier-Philipp subscale of the HRSD.
The IRT-6 subscale was as good as the Maier-Philipp subscale in differentiating treatment from placebo response. No evidence supporting the benefits of using maximum likelihood estimates to develop optimally performing subscales was found. Implications of the results are discussed in terms of strategies for optimizing the assessment of change in overall depression severity as well as differentiating treatment response.
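Discrimination in the IRT sense can be illustrated with the two-parameter logistic (2PL) item response function, where the slope parameter a controls how sharply an item separates adjacent severity levels around its location b. The parameters below are illustrative, not the fitted values from the study:

```python
import math

def p_endorse(theta, a, b):
    """2PL item response function: probability of endorsing an item given
    latent severity theta, discrimination a, and location (difficulty) b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminating item (a=2.0) separates low from high severity
# sharply around b; a weakly discriminating item (a=0.3) barely does.
sharp = [p_endorse(t, a=2.0, b=0.0) for t in (-2, 0, 2)]
flat  = [p_endorse(t, a=0.3, b=0.0) for t in (-2, 0, 2)]
```

Items with flat curves contribute little information about where a patient sits on the severity continuum, which is the rationale for dropping them when building a subscale like the IRT-6.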


Subject(s)
Antidepressive Agents/therapeutic use; Depression/diagnosis; Depression/drug therapy; Fluoxetine/therapeutic use; Adult; Depression/psychology; Female; Humans; Male; Reproducibility of Results; Severity of Illness Index; Surveys and Questionnaires
12.
Int Clin Psychopharmacol ; 22(4): 187-91, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17519640

ABSTRACT

Clinical trials are becoming increasingly international in scope. Global studies pose unique challenges in training and calibrating raters owing to language and cultural differences. Recent findings that poorly conducted interviews reduce study power make attention to raters' clinical skills critical. In this study, 109 raters from 14 countries went through a two-step certification process on the Hamilton Depression and Anxiety Rating Scales: (i) an online didactic tutorial on scoring conventions, and (ii) applied clinical training, consisting of small language-specific groups in which raters took turns interviewing patients while observed by an expert trainer, and observation and evaluation of individual interviews. Translators were used when native-language trainers were unavailable. Those who were unable to attend the startup meeting received the training individually via telephone. Results found a significant improvement in raters' knowledge of scoring conventions, with the mean number of correct answers on the 20-item test improving from 14.59 to 17.83, P<0.0001. In addition, raters' clinical skills improved significantly, with the mean score on the Rater Applied Performance Scale improving from their first to their second testing from 10.25 to 11.31, P=0.003. These results support the efficacy of this applied training model in improving raters' applied clinical skills in multinational trials.


Asunto(s)
Certificación , Ensayos Clínicos como Asunto/normas , Estudios Multicéntricos como Asunto/normas , Investigadores/educación , Investigadores/normas , Antidepresivos/uso terapéutico , Competencia Clínica/normas , Trastorno Depresivo/tratamiento farmacológico , Humanos , Cooperación Internacional , Lenguaje , Variaciones Dependientes del Observador , Escalas de Valoración Psiquiátrica , Enseñanza/métodos , Telecomunicaciones
13.
Schizophr Res ; 92(1-3): 63-7, 2007 May.
Article in English | MEDLINE | ID: mdl-17336501

ABSTRACT

Problems associated with the clinician-administered rating scales have led to new approaches to improve rater training. These include interactive, on-line didactic tutorials and live, remote evaluation of raters' clinical skills through the use of videoconferencing. The purpose of this study was to evaluate this approach in training novice raters on the administration of the Positive and Negative Symptom Scale (PANSS). Twelve trainees with no prior PANSS experience completed didactic training via CD-ROM and two remote training sessions where they interviewed a standardized patient-actor while being remotely observed in real time and given feedback. Results found a significant improvement in trainees' conceptual knowledge and an improvement in trainees' clinical skills. The use of these technologies allows for training to be more effectively delivered to diverse sites in multi-center trials, and for evaluation of raters' applied clinical skills, an area that has previously been overlooked.


Asunto(s)
Personal de Salud/educación , Internet/estadística & datos numéricos , Esquizofrenia/diagnóstico , Esquizofrenia/epidemiología , Encuestas y Cuestionarios , Enseñanza/métodos , Comunicación por Videoconferencia , Adulto , Femenino , Humanos , Masculino , Variaciones Dependientes del Observador , Satisfacción del Paciente , Proyectos Piloto
14.
J Clin Psychopharmacol ; 26(1): 71-4, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16415710

ABSTRACT

OBJECTIVE: The quality of clinical interviews conducted in industry-sponsored clinical drug trials is an important but frequently overlooked variable that may influence the outcome of a study. We evaluated the quality of Hamilton Rating Scale for Depression (HAM-D) clinical interviews performed at baseline in 2 similar multicenter, randomized, placebo-controlled depression trials sponsored by 2 pharmaceutical companies. METHODS: A total of 104 audiotaped HAM-D clinical interviews were evaluated for interview quality by a blinded expert reviewer using the Rater Applied Performance Scale (RAPS). The RAPS assesses adherence to a structured interview guide, clarification of and follow-up to patient responses, neutrality, rapport, and adequacy of information obtained. RESULTS: HAM-D interviews were brief and cursory, and the quality of the interviews was below what would be expected in a clinical drug trial. Thirty-nine percent of the interviews were conducted in 10 minutes or less, and most interviews were rated fair or unsatisfactory on most RAPS dimensions. CONCLUSIONS: Results from our small sample illustrate that the clinical interview skills of raters who administered the HAM-D were below what many would consider acceptable. Evaluation and training of clinical interview skills should be considered as part of a rater training program.


Asunto(s)
Entrevistas como Asunto , Escalas de Valoración Psiquiátrica , Investigadores/educación , Antidepresivos/uso terapéutico , Depresión/tratamiento farmacológico , Industria Farmacéutica , Adhesión a Directriz , Humanos , Entrevistas como Asunto/métodos , Guías de Práctica Clínica como Asunto , Competencia Profesional , Ensayos Clínicos Controlados Aleatorios como Asunto , Factores de Tiempo
15.
J Psychiatr Res ; 40(3): 192-9, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16197959

ABSTRACT

OBJECTIVE: The evaluation and training of raters who conduct efficacy evaluations in clinical trials is an important methodological variable that is often overlooked. Few rater training programs focus on teaching and assessing applied clinical skills, and even fewer have been empirically examined for efficacy. The goal of this study was to develop a comprehensive, standardized, interactive rater training program using new technologies, and to compare the relative effectiveness of this approach to "traditional" rater training in a multi-center clinical trial. METHOD: 12 sites from a 22-site multi-center study were randomly selected to participate (6=traditional, 6=enriched). Traditional training consisted of an overview of scoring conventions, watching and scoring videotapes with discussion, and observation of interviews in small groups with feedback. Enriched training consisted of an interactive web tutorial, and live, remote observation of trainees conducting interviews with real or standardized patients, via video- or teleconference. Outcome measures included a didactic exam on conceptual knowledge and blinded ratings of trainees' audiotaped interviews. RESULTS: A significant difference was found between enriched and traditional training on pre-to-post training improvement in didactic knowledge, t(27)=4.2, p<0.0001. Enriched trainees' clinical skills also improved significantly more than traditional trainees', t(56)=2.1, p=0.035. All trainees found the applied training helpful, and wanted similar web tutorials for other scales. CONCLUSIONS: Results support the efficacy of enriched rater training in improving both conceptual knowledge and applied skills. Remote technologies enhance training efforts and make training accessible and cost-effective. Future rater training efforts should be subject to empirical evaluation, and include training on applied skills.


Asunto(s)
Depresión/epidemiología , Educación/normas , Internet/instrumentación , Enseñanza/métodos , Tecnología , Competencia Clínica , Demografía , Femenino , Humanos , Entrevista Psicológica , Masculino , Persona de Mediana Edad , Variaciones Dependientes del Observador
17.
J Clin Psychopharmacol ; 25(5): 407-12, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16160614

ABSTRACT

Recent evidence demonstrates that the quality of raters' applied clinical skills is directly related to study outcome. As such, the training and evaluation of raters' clinical skill in administering symptom-rating scales is essential before they are certified to rate patients in clinical trials. This study examined a novel approach to rater training and certification that focused on both conceptual knowledge and applied skills. Forty-six raters (MDs = 14; PhDs = 7; MA = 5; BA/LPN/RN = 20) in a large multicenter depression study went through a 2-step Hamilton Rating Scale for Depression (HAMD) certification process: didactic training, administered online via an interactive Web tutorial, and live, applied training, where raters interviewed depressed patients while being remotely observed via 3-way teleconference. Raters' applied skills were evaluated using the Rater Applied Performance Scale (RAPS), designed specifically to evaluate critical rater behaviors associated with good clinical interviews. Raters received feedback immediately following the interviews; those receiving a failing score were given 2 more opportunities to pass. Each subsequent session was accompanied by feedback and was conducted by a different trainer, who was blind to the results of the previous session as well as to which session number it was, to avoid bias. Raters who failed on the third attempt were excluded from rating patients in the trial. All training and testing occurred prior to the startup meeting. Results found a significant improvement from pre- to post-Web training in raters' knowledge of scoring conventions, P < 0.001. On the applied component, raters' RAPS scores improved significantly on the second attempt following feedback (from 9.05 to 11.58, P < 0.001), and from their second to their third session (from 9.00 to 11.00, P = 0.033). Three raters failed all 3 attempts and were excluded from the study.
Results support the efficacy of the approach in improving both conceptual knowledge and applied interviewing skill.


Asunto(s)
Ensayos Clínicos como Asunto/normas , Estudios Multicéntricos como Asunto/normas , Investigadores/educación , Investigadores/normas , Adulto , Antidepresivos/uso terapéutico , Certificación , Competencia Clínica/normas , Trastorno Depresivo/tratamiento farmacológico , Femenino , Humanos , Internet , Masculino , Escalas de Valoración Psiquiátrica
19.
J Psychiatr Res ; 38(3): 275-84, 2004.
Article in English | MEDLINE | ID: mdl-15003433

ABSTRACT

Although the Hamilton Depression Rating Scale (HAMD) remains the most widely used outcome measure in clinical trials of Major Depressive Disorder, the psychometric properties of the individual HAMD items have not been extensively studied. In the present paper, data from four separate clinical trials conducted independently by two pharmaceutical companies were analyzed to determine the relationship between scores on the individual HAMD items and overall depressive severity in an outpatient population. Option characteristic curves (the probability of scoring a particular option in relation to overall HAMD scores) were generated in order to illustrate the relationship between scoring patterns for each item and the range of total HAMD scores. Results showed that Items 1 (Depressed Mood) and 7 (Work and Activities), and to a lesser degree Items 2 (Guilt), 10 (Anxiety/Psychic), 11 (Anxiety/Somatic), and 13 (Somatic/General), demonstrated a good relationship between item responses and overall depressive severity. However, other items (e.g., Insight, Hypochondriasis) appeared to be more problematic with regard to their ability to discriminate over the full range of depression severity. The present results illustrate that cooperative data sharing between pharmaceutical companies can be a useful tool for improving clinical methods.
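An empirical option characteristic curve of the kind described can be sketched by banding subjects on total HAMD score and tabulating, within each band, the proportion choosing each option of one item. The data below are invented for illustration:

```python
from collections import defaultdict

def option_curves(totals, item_scores, band_width=5):
    """Empirical option characteristic data for one item: for each band of
    total scores, the proportion of subjects choosing each option."""
    counts = defaultdict(lambda: defaultdict(int))
    for total, opt in zip(totals, item_scores):
        counts[total // band_width][opt] += 1
    curves = {}
    for band, opts in sorted(counts.items()):
        n = sum(opts.values())
        curves[band * band_width] = {opt: c / n for opt, c in sorted(opts.items())}
    return curves

# Invented data: Item 1 (Depressed Mood) option plotted against HAMD total
totals = [8, 9, 14, 16, 22, 24, 27, 29]
item1  = [0, 1, 1, 2, 3, 3, 4, 4]
curves = option_curves(totals, item1)
```

For a well-behaved item like Depressed Mood, higher options dominate progressively higher bands; for a problematic item like Insight, the option proportions barely shift across bands, which is what the paper's curves make visible.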


Subject(s)
Depressive Disorder/drug therapy; Depressive Disorder/psychology; Drug Industry; Psychiatric Status Rating Scales/standards; Surveys and Questionnaires; Clinical Trials as Topic; Depressive Disorder/classification; Endpoint Determination; Humans; Psychiatric Status Rating Scales/statistics & numerical data; Psychometrics; Reproducibility of Results; Sensitivity and Specificity; Severity of Illness Index