ABSTRACT
BACKGROUND: Academics in all disciplines increasingly use social media to share their publications on the internet, reaching out to different audiences. In the last few years, specific indicators of social media impact have been developed (eg, Altmetrics), to complement traditional bibliometric indicators (eg, citation count and h-index). In health research, it is unclear whether social media impact also translates into research impact. OBJECTIVE: The primary aim of this study was to systematically review the literature on the impact of using social media on the dissemination of health research. The secondary aim was to assess the correlation between Altmetrics and traditional citation-based metrics. METHODS: We conducted a systematic review to identify studies that evaluated the use of social media to disseminate research published in health-related journals. We specifically looked at studies that described experimental or correlational studies linking the use of social media with outcomes related to bibliometrics. We searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica dataBASE (EMBASE), and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases using a predefined search strategy (International Prospective Register of Systematic Reviews: CRD42017057709). We conducted independent and duplicate study selection and data extraction. Given the heterogeneity of the included studies, we summarized the findings through a narrative synthesis. RESULTS: Of a total of 18,624 retrieved citations, we included 51 studies: 7 (14%) impact studies (answering the primary aim) and 44 (86%) correlational studies (answering the secondary aim). Impact studies reported mixed results with several limitations, including the use of interventions of inappropriately low intensity and short duration. The majority of correlational studies suggested a positive association between traditional bibliometrics and social media metrics (eg, number of mentions) in health research. CONCLUSIONS: We have identified suggestive yet inconclusive evidence on the impact of using social media to increase the number of citations in health research. Further studies with better design are needed to assess the causal link between social media impact and bibliometrics.
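As an illustration of the kind of correlational analysis the secondary aim refers to, the sketch below computes a rank correlation between social media mentions and citation counts for a set of articles. The data values are hypothetical and the code is not drawn from any of the included studies.

```python
# Minimal sketch of a correlational analysis between altmetrics and citations.
# All per-article counts below are invented for illustration only.
from scipy.stats import spearmanr

# Hypothetical per-article counts: (social media mentions, citations)
articles = [
    (120, 45), (3, 2), (40, 18), (0, 1), (15, 9),
    (75, 30), (8, 4), (200, 60), (1, 0), (33, 12),
]

mentions = [a[0] for a in articles]
citations = [a[1] for a in articles]

# Spearman's rho is a common choice because both count distributions are skewed.
rho, p_value = spearmanr(mentions, citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```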
Subjects
Bibliometrics; Biomedical Research/methods; MEDLARS/standards; Social Media/standards; Humans
ABSTRACT
INTRODUCTION: Since the advent of endoscopic ultrasound, the field of gastroenterology, and endoscopy in particular, has evolved, allowing multiple procedures, both diagnostic and therapeutic, to be performed with minimal complications and low mortality. OBJECTIVE: To determine the characteristics of subepithelial lesions of the upper digestive tract assessed by endoscopic ultrasound, and their diagnostic and treatment options. MATERIALS AND METHODS: Retrospective study, based on a literature review and systematic analysis of 95 scientific articles, from which a sample of 40 was selected, found in the Medline and PubMed databases and published within the last 10 years; the search criteria consisted of terms on the diagnosis and treatment of subepithelial lesions by endoscopic ultrasound. RESULTS: Lesions larger than 1 cm had a high risk of malignant transformation, and the fourth layer was the most frequent site of this type of lesion. Histopathology was the complementary confirmatory method. CONCLUSION: Endoscopic ultrasound was the method of choice for characterizing and evaluating subepithelial lesions, whether symptomatic or incidental, although access to it was limited.
Subjects
Humans; Male; Female; Therapeutics; Ultrasound; Biopsy, Needle; Endoscopy, Digestive System; Gastrointestinal Tract; Intestinal Mucosa; MEDLARS; Diagnosis; Diagnostic Techniques, Digestive System; Endoscopy; Gastroenterology
ABSTRACT
BACKGROUND: Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. OBJECTIVE: The objective of this study was to present the Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. METHODS: PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. RESULTS: PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET's analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can also be scaled to a larger corpus while retaining its speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, achieving an average F1 score of 0.91 when predicting clinically relevant phenotypes. CONCLUSIONS: Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy.
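The following is a deliberately simplified, hypothetical sketch of the co-occurrence idea that underlies PIVET-style evidence sets, not the actual PIVET implementation (which relies on an indexed NoSQL corpus and an Aho-Corasick-inspired matcher): it simply counts how often pairs of phenotype terms are mentioned together in article text. The terms and abstracts are invented toy examples.

```python
# Toy sketch of co-occurrence counting for phenotype terms across abstracts.
# Not the PIVET code: substring matching here stands in for an indexed,
# optimized multi-pattern matcher over a large corpus.
from collections import Counter
from itertools import combinations

phenotype_terms = ["heart failure", "ejection fraction", "diuretic"]

abstracts = [
    "patients with heart failure and reduced ejection fraction received a diuretic",
    "ejection fraction improved after initiation of therapy",
    "heart failure admissions were associated with diuretic dosing",
]

pair_counts = Counter()
for text in abstracts:
    # Terms present in this abstract (naive matching for illustration).
    present = sorted(term for term in phenotype_terms if term in text)
    for pair in combinations(present, 2):
        pair_counts[pair] += 1

for (term_a, term_b), n in pair_counts.most_common():
    print(f"{term_a} <-> {term_b}: co-mentioned in {n} abstracts")
```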
Subjects
Information Storage and Retrieval/methods; Internet/instrumentation; MEDLARS/standards; Algorithms; Humans; Phenotype
ABSTRACT
There has been remarkable progress in identifying the causes of genetic conditions as well as in understanding how changes in specific genes cause disease. Though difficult (and often superficial) to parse, an interesting tension involves the emphasis on basic research aimed at dissecting normal and abnormal biology versus more clearly clinical and therapeutic investigations. To examine one facet of this question and to better understand progress in Mendelian-related research, we developed an algorithm that classifies medical literature into three categories (Basic, Clinical, and Management) and conducted a retrospective analysis. We built a supervised machine learning classification model using the Azure Machine Learning (ML) Platform and analyzed the literature (1970-2014) from NCBI's Entrez Gene2Pubmed Database (http://www.ncbi.nlm.nih.gov/gene) using genes from the NHGRI's Clinical Genomics Database (http://research.nhgri.nih.gov/CGD/). We applied our model to 376,738 articles: 288,639 (76.6%) were classified as Basic, 54,178 (14.4%) as Clinical, and 24,569 (6.5%) as Management. The average classification accuracy was 92.2%. The rate of Clinical publication was significantly higher than that of Basic or Management publication. The rate of publication of each article type differed significantly across key eras: the Human Genome Project (HGP) planning phase (1984-1990); HGP launch (1990) to publication (2001); HGP completion to the "Next Generation" advent (2009); and the era following 2009. In conclusion, in addition to the findings regarding the pace and focus of genetic progress, our algorithm produced a database that can be used in a variety of contexts, including automating the identification of management-related literature.
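As a rough, hypothetical stand-in for the kind of supervised text classifier described above (the study used the Azure ML Platform; the sketch below uses scikit-learn instead, with invented toy training examples), such a pipeline might look like this:

```python
# Illustrative text classification pipeline: TF-IDF features + logistic regression.
# The three training texts and labels are invented toy examples, not study data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "knockout mouse model reveals gene function in neuronal development",
    "case series of patients with pathogenic variants and cardiac phenotype",
    "enzyme replacement therapy improves outcomes in affected individuals",
]
train_labels = ["Basic", "Clinical", "Management"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),      # unigram and bigram features
    LogisticRegression(max_iter=1000),        # simple multiclass classifier
)
model.fit(train_texts, train_labels)

# Classify a new abstract-like snippet into one of the three categories.
print(model.predict(["randomized trial of dietary management in this disorder"]))
```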
Subjects
Medical Genetics/trends; MEDLARS; Machine Learning/trends; Medical Informatics/trends; Databases, Genetic; Humans; Internet; Software
Subjects
Access to Information; Biomedical Research/organization & administration; Communication; Information Dissemination; MEDLARS/organization & administration; National Library of Medicine (U.S.)/history; National Library of Medicine (U.S.)/organization & administration; History, 19th Century; History, 20th Century; History, 21st Century; Humans; Interinstitutional Relations; United States
Subjects
Information Dissemination; Information Services; Information Storage and Retrieval; National Library of Medicine (U.S.); District of Columbia; History, 19th Century; History, 20th Century; History, 21st Century; Humans; Information Dissemination/history; Information Dissemination/methods; Information Services/history; Information Services/organization & administration; Information Storage and Retrieval/history; Information Storage and Retrieval/methods; Information Storage and Retrieval/statistics & numerical data; Information Storage and Retrieval/trends; MEDLARS; MEDLINE; Medical Subject Headings/history; National Library of Medicine (U.S.)/economics; National Library of Medicine (U.S.)/history; National Library of Medicine (U.S.)/organization & administration; National Library of Medicine (U.S.)/trends; PubMed; United States
ABSTRACT
In the National Cancer Act of 1971, the Director of the National Cancer Institute (NCI) was given a mandate to "Collect, analyze, and disseminate all data useful in the prevention, diagnosis, and treatment of cancer, including the establishment of an International Cancer Research Data Bank (ICRDB) to collect, catalog, store, and disseminate insofar as feasible the results of cancer research undertaken in any country for the use of any person involved in cancer research in any country" (National Cancer Act of 1971, S 1828, 92nd Congress, 1st Sess (1971)). In subsequent legislation, the audience for NCI's information dissemination activities was expanded to include physicians and other healthcare professionals, patients and their families, and the general public, in addition to cancer researchers. The Institute's response to these legislative requirements was to create what is now known as the Physician Data Query (PDQ®) cancer information database. From its beginnings in 1977 as a database of NCI-sponsored cancer clinical trials, PDQ has grown to include extensive information about cancer treatment, screening, prevention, supportive and palliative care, genetics, drugs, and more. Herein, we describe the history, editorial processes, influence, and global reach of one component of the PDQ database, namely its evidence-based cancer information summaries for health professionals. These summaries are widely recognized as important cancer information and education resources, and they further serve as foundational documents for the development of other cancer information products by NCI and other organizations.
Subjects
Clinical Trials as Topic; Databases, Factual; Health Education; Information Services/history; Oncology; Neoplasms/diagnosis; Neoplasms/therapy; Computer Communication Networks; Diffusion of Innovation; Education, Medical, Continuing; History, 20th Century; History, 21st Century; Humans; Information Dissemination; Information Services/organization & administration; MEDLARS/organization & administration; National Institutes of Health (U.S.); Therapy, Computer-Assisted; United States
ABSTRACT
The retrieval of a given piece of information from memory increases the long-term retention of that information, a phenomenon often called the testing effect. The current study aimed to select and review articles on the testing effect to verify the extent and importance of this phenomenon, presenting the main results of recent research. To accomplish this, a systematic review was conducted of articles on this subject published between 2006 and 2012, a period in which there was a marked increase in the number of publications on the topic. The articles were searched in the Web of Science, PubMed, and PsycINFO databases. The results, which were organized according to test format (recall and recognition tests), demonstrated that tests can be remarkably beneficial to the retention of long-term memories. A theoretical explanation of the cognitive processes involved in this phenomenon still needs to be developed and tested; such an explanation would have important implications for the development of effective educational practices.
Subjects
Memory; Information Systems; MEDLARS; Learning; Research
Subjects
Humans; Biological Science Disciplines; Research; MEDLARS; Cytotoxicity, Immunologic
ABSTRACT
OBJECTIVE: Through a literature review, we investigated the geographic information systems (GIS) methods used to define the food environment and the types of spatial measurements they generate. DESIGN: Review study. SETTING: Searches were conducted in health science databases, including Medline/Pubmed, PsycINFO, Francis and GeoBase. We included studies using GIS-based measures of the food environment published up to 1 June 2008. RESULTS: Twenty-nine papers were included. Two different spatial approaches were identified. The density approach quantifies the availability of food outlets using the buffer method, kernel density estimation or spatial clustering. The proximity approach assesses the distance to food outlets by measuring distances or travel times. GIS network analysis tools enable the modelling of travel time between referent addresses (home) and food outlets for a given transportation network and mode, and the assumption of travel routing behaviours. Numerous studies combined both approaches to compare food outlet spatial accessibility between different types of neighbourhoods or to investigate relationships between characteristics of the food environment and individual food behaviour. CONCLUSIONS: GIS methods provide new approaches for assessing the food environment by modelling spatial accessibility to food outlets. On the basis of the available literature, it appears that only some GIS methods have been used, while other GIS methods combining availability and proximity, such as spatial interaction models, have not yet been applied to this field. Future research would also benefit from a combination of GIS methods with survey approaches to describe both spatial and social food outlet accessibility as important determinants of individual food behaviours.
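To make the density and proximity approaches concrete, the hypothetical sketch below counts food outlets within a 500 m buffer of a residence and measures the straight-line distance to the nearest outlet; real GIS studies would typically use road-network distances or travel times, and all coordinates here are invented.

```python
# Illustration of two spatial accessibility measures described above:
# "density" as a count of outlets within a buffer, and "proximity" as the
# distance to the nearest outlet. Straight-line (haversine) distance is used
# as a simple stand-in for network-based travel distance or time.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

home = (45.760, 4.840)                                      # hypothetical residence
outlets = [(45.770, 4.850), (45.700, 4.900), (45.761, 4.842)]  # hypothetical food outlets

distances = [haversine_km(*home, *outlet) for outlet in outlets]
density_500m = sum(d <= 0.5 for d in distances)  # outlets within a 500 m buffer
proximity = min(distances)                       # distance to the nearest outlet

print(f"Outlets within 500 m: {density_500m}; nearest outlet: {proximity:.2f} km")
```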
Subjects
Environment; Food/statistics & numerical data; Geographic Information Systems; Food Preferences; Food Supply/statistics & numerical data; Humans; MEDLARS; Nutrition Surveys; PubMed; Residence Characteristics; Restaurants/statistics & numerical data; Transportation
ABSTRACT
BACKGROUND: Information technology (IT) is increasingly being used in general practice to manage health care, including type 2 diabetes. However, there is conflicting evidence about whether IT improves diabetes outcomes. This review of the literature on IT-based diabetes management interventions explores whether methodological issues such as sample characteristics, outcome measures, and the mechanisms causing change in the outcome measures could explain some of the inconsistent findings evident in IT-based diabetes management studies. METHODS: Databases were searched using terms related to IT and diabetes management. Articles eligible for review evaluated an IT-based diabetes management intervention in general practice and were published between 1999 and 2009 inclusive in English. Studies that did not include outcome measures were excluded. RESULTS: Four hundred and twenty-five articles were identified, of which sixteen met the inclusion criteria: eleven GP-focused and five patient-focused interventions were evaluated. Nine were RCTs, five were non-randomised controlled trials, and two were single-sample before-and-after designs. Important sample characteristics such as diabetes type, familiarity with IT, and baseline diabetes knowledge were not addressed in any of the studies reviewed. All studies used HbA1c as a primary outcome measure, and nine reported a significant improvement in mean HbA1c over the study period; only two studies reported the HbA1c assay method. Five studies measured diabetes medications and two measured psychological outcomes. Patient lifestyle variables were not included in any of the studies reviewed. IT was the intervention method considered to effect changes in the outcome measures. Only two studies mentioned alternative possible causal mechanisms. CONCLUSION: Several limitations could affect the outcomes of IT-based diabetes management interventions to an unknown degree. These limitations make it difficult to attribute changes solely to such interventions.
Subjects
Diabetes Mellitus/therapy; Disease Management; Medical Informatics/methods; Australia/epidemiology; Clinical Trials as Topic/statistics & numerical data; Diabetes Mellitus/blood; Diabetes Mellitus/epidemiology; Diabetes Mellitus, Type 2/blood; Diabetes Mellitus, Type 2/epidemiology; Diabetes Mellitus, Type 2/therapy; Female; Glycated Hemoglobin/analysis; Guideline Adherence; Humans; MEDLARS; Male; Middle Aged; Outcome Assessment, Health Care/methods; Practice Guidelines as Topic/standards; Research Design; Self Care; Telemedicine/methods; Treatment Outcome
ABSTRACT
OBJECT: Transcranial motor evoked potential (TcMEP) monitoring is frequently used in complex spinal surgeries to prevent neurological injury. Anesthesia, however, can significantly affect the reliability of TcMEP monitoring. Understanding the impact of various anesthetic agents on neurophysiological monitoring is therefore essential. METHODS: A literature search of the National Library of Medicine database was conducted to identify articles pertaining to anesthesia and TcMEP monitoring during spine surgery. Twenty studies were selected and reviewed. RESULTS: Inhalational anesthetics and neuromuscular blockade have been shown to limit the ability of TcMEP monitoring to detect significant changes. Hypothermia can also negatively affect monitoring. Opioids, however, have little influence on TcMEPs. Total intravenous anesthesia regimens can minimize the need for inhalational anesthetics. CONCLUSIONS: In general, selecting the appropriate anesthetic regimen with maintenance of a stable concentration of inhalational or intravenous anesthetics optimizes TcMEP monitoring.
Subjects
Anesthesia/methods; Evoked Potentials, Motor/physiology; Monitoring, Intraoperative/methods; Spine/surgery; Anesthesia/adverse effects; Anesthetics, Inhalation/administration & dosage; Anesthetics, Inhalation/adverse effects; Electric Stimulation/methods; Evoked Potentials, Motor/drug effects; Humans; MEDLARS/statistics & numerical data; Neuromuscular Blockade/adverse effects; Neuromuscular Blockade/methods; Neurophysiology/statistics & numerical data; Orthopedic Procedures/methods; Spinal Cord Injuries/prevention & control
ABSTRACT
The foundation of evidence-based medicine is the critical analysis and synthesis of the best data available concerning a given health problem. These factual data are accessible because of the availability on the Internet of web tools specialized in searching for scientific publications. A bibliographic database is a collection of bibliographic references describing the documents indexed. Such a reference includes at least the title, summary (or abstract), a set of keywords, and the type of publication. To conduct a strategically effective search, it is necessary to formulate the question - clinical, diagnostic, prognostic, or related to treatment or prevention - in a form understandable by the search engine. Moreover, it is necessary to choose the specific database or databases, each of which may have a particular focus, and to analyze the results rapidly in order to refine the strategy. The search for information is facilitated by knowledge of the standardized terms commonly used to describe the desired information. These come from a specific thesaurus devoted to document indexing. The most frequently used is MeSH (Medical Subject Headings). The principal bibliographic database whose references include a set of descriptors from the MeSH thesaurus is the Medical Literature Analysis and Retrieval System Online (Medline), which has in turn become a subset of a still larger bibliographic database called PubMed, which indexes an additional 1.4 million references. Numerous other databases are maintained by national or international entities, including the Cochrane Library, Embase, and the PASCAL and FRANCIS databases.
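As an illustration of querying such a database programmatically, the sketch below sends a MeSH-based query to PubMed through the NCBI E-utilities esearch endpoint; the query string itself is only an example, and a real search strategy would be built and refined by the searcher.

```python
# Example query against PubMed via the NCBI E-utilities "esearch" endpoint.
# The MeSH-tagged query below is illustrative, not a recommended strategy.
import json
import urllib.parse
import urllib.request

query = '"Diabetes Mellitus, Type 2"[MeSH Terms] AND "Telemedicine"[MeSH Terms]'
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
    + urllib.parse.urlencode({"db": "pubmed", "term": query, "retmode": "json", "retmax": 5})
)

with urllib.request.urlopen(url) as response:
    result = json.load(response)

# The JSON response reports the total hit count and the first PubMed IDs.
print("Matching records:", result["esearchresult"]["count"])
print("First PMIDs:", result["esearchresult"]["idlist"])
```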
Subjects
Abstracting and Indexing; Databases as Topic; Databases, Bibliographic; Evidence-Based Medicine; MEDLINE; Medical Subject Headings; PubMed; Humans; Internet; MEDLARS
ABSTRACT
OBJECTIVE: The research provides a chronology of the US National Library of Medicine's (NLM's) contribution to access to the world's biomedical literature through its computerization of biomedical indexes, particularly the Medical Literature Analysis and Retrieval System (MEDLARS). METHOD: Using material gathered from NLM's archives and from personal interviews with people associated with developing MEDLARS and its associated systems, the author discusses key events in the history of MEDLARS. DISCUSSION: From the development of the early mechanized bibliographic retrieval systems of the 1940s to the beginnings of the online, interactive computerized bibliographic search systems of the early 1970s chronicled here, NLM's contributions to automation and bibliographic retrieval have been extensive. CONCLUSION: As NLM's technological experience and expertise grew, innovative bibliographic storage and retrieval systems emerged. NLM's accomplishments regarding MEDLARS were cutting edge, placing the library at the forefront of incorporating mechanization and new technologies into medical information systems.
Subjects
Libraries, Medical/organization & administration; MEDLARS/organization & administration; Subject Headings; Humans; National Library of Medicine (U.S.); United States
ABSTRACT
The Internet has become a social phenomenon of our times and a vital support for the development of health research. That it is a tool for retrieving information and conducting research is not in question. Information science is advancing rapidly, and health professionals who rely on or come into contact with the realities of the Internet find it difficult to keep pace, which reduces the effectiveness and efficiency with which this resource is used. Motivated by these facts, we reviewed some of the resources found on the network of networks, the Internet, and propose a search strategy for retrieving information from a search engine and thereby answering everyday clinical questions. The review focuses in particular on the PubMed and Google search engines. Starting from the construction of a question to find information using the PICO methodology, the professional is guided in defining the search terms that will retrieve information at various levels of scientific rigor and applicable across settings. Finally, we comment on some limitations surrounding information retrieval and on the opportunities offered by Semantic Web initiatives.
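A hypothetical sketch of how a PICO question could be turned into a Boolean search string is shown below; the field tag and terms are illustrative, and a real strategy would be refined against the MeSH thesaurus and the chosen search engine's syntax.

```python
# Toy helper that assembles a Boolean PubMed-style query from PICO components.
# Terms and the [Title/Abstract] field tag are illustrative examples only.
def pico_to_query(population, intervention, comparison, outcome):
    """Combine PICO components into a simple Boolean search string."""
    blocks = []
    for component in (population, intervention, comparison, outcome):
        if component:  # the comparison element is often omitted
            terms = " OR ".join(f'"{t}"[Title/Abstract]' for t in component)
            blocks.append(f"({terms})")
    return " AND ".join(blocks)

query = pico_to_query(
    population=["type 2 diabetes"],
    intervention=["telemedicine", "telehealth"],
    comparison=[],
    outcome=["glycemic control", "HbA1c"],
)
print(query)
```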
Subjects
Information Systems; Internet; MEDLARS
ABSTRACT
This journal is 58 years old, with a publishing identity, style, and quality that have been increasingly recognized throughout Latin America, and it is indexed in LILACS, SciELO, and Latindex. Our journal applied for indexation in the Index Medicus of the US National Library of Medicine, but unfortunately it was not selected. The Sociedad de Neurología, Psiquiatría y Neurocirugía (Sonepsyn) and Revista Chilena de Neuro-Psiquiatría should use this answer as an opportunity to consider deeply the journal we have and the one we want and need, and then to decide whether to continue with the attempt to be indexed and whether we are indeed in a position to reach this goal. If we choose to reapply for indexation in Index Medicus, this will require backing in terms of financing and policy, prioritization and dedication from the next Sonepsyn Board of Directors, professionalization of the publishing committee and of the support team (secretary, abstract translator, proofreader, web page manager, medical illustration specialist), and, above all, contributions from researchers and authors who publish their best papers with us and regularly cite the papers published in this journal. If we choose not to make indexation in Index Medicus a primary goal, we need to move toward a more pragmatic view in deciding what kind of journal we want and need, one that is not necessarily less responsible, significant, or useful.
Subjects
Abstracting and Indexing; Editorial Policies; MEDLARS; Neurology; Periodicals as Topic; Psychiatry
ABSTRACT
An open, parallel, non-sequential, controlled late phase II clinical trial was conducted during 2002 in 150 patients with primary insomnia, according to ICD-10 criteria, with the objective of evaluating the clinical outcomes of auriculotherapy compared with body acupuncture and with the use of nitrazepam as a hypnotic. The Provincial Psychiatric Hospital and the Natural and Traditional Medicine Clinic of the ISCM of Camagüey participated. The sample was divided into three groups with the same number of patients; they were evaluated by weekly clinical interview, and the Cornell Index test was applied at the beginning and end of treatment. The clinical results of the three groups were compared weekly up to the fourth and final week of therapy. Statistical analysis of the results was performed with the Epi-Info 6 program, calculating frequencies, percentages, and probability (P<.05). Patients aged between 30 and 44 years, of female sex, and with anxiety as the main symptom accompanying the insomnia predominated. A history of insomnia predominated in the three groups, most of whom had previously received psychotropic drugs. The insomnia variant characterized by waking several times during sleep was the most common. Auriculotherapy proved to be the most effective of the three therapies used for primary insomnia. Acupuncture was more effective than nitrazepam.