Results 1 - 20 of 70
1.
J Biomed Inform; 79: 71-81, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29454107

ABSTRACT

Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although many experiences of archetype use have been reported in the literature, a comprehensive and formal methodology for archetype modeling does not exist. A modeling methodology is essential for developing quality archetypes, guiding the development of EHR systems, and enabling the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the participants and tools involved. It also describes possible strategies for organizing the modeling process. The proposed methodology is inspired by existing best practices in CIM, software, and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. Its application provided useful feedback, led to improvements, and confirmed its advantages. The conclusion of this work is that a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse across different information systems and EHR projects. Moreover, the proposed methodology can also serve as a reference for CIM development using any other formalism.


Subjects
Electronic Health Records, Medical Informatics/methods, Medical Informatics/standards, Medical Record Linkage, Data Accuracy, Delivery of Health Care, Humans, Reproducibility of Results, Semantics, Software, Terminology as Topic, User-Computer Interface
2.
J Biomed Inform; 55: 143-52, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25910958

ABSTRACT

Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism for defining such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides are typically oriented toward human readability and thus cannot be processed by computers. As a consequence, they must be manually reinterpreted and transformed into an executable language such as Schematron or the Object Constraint Language (OCL). This task can be difficult and error-prone because of the large gap between the two representations. The challenge is to specify implementation guides in a way that humans can read and understand easily and that computers can also process. In this paper, we propose and describe a novel methodology that uses archetypes as the basis for generating implementation guides. We use archetypes to generate formal rules expressed in the Natural Rule Language (NRL), along with other reference materials usually included in implementation guides, such as sample XML instances. We also generate Schematron rules from the NRL rules for validating data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes.
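The kind of rule generation the abstract describes can be illustrated with a minimal sketch: rendering an archetype-style cardinality constraint as a Schematron assert. The element names, the constraint, and the rendering function below are hypothetical illustrations, not the LinkEHR implementation.

```python
# Hypothetical sketch: turning a simple cardinality constraint (as an
# archetype might express it) into a Schematron rule. Element names and
# the constraint shown are illustrative, not taken from LinkEHR.

def schematron_rule(context: str, xpath_test: str, message: str) -> str:
    """Render one Schematron <sch:rule> with a single <sch:assert>."""
    return (
        f'<sch:rule context="{context}">\n'
        f'  <sch:assert test="{xpath_test}">{message}</sch:assert>\n'
        f'</sch:rule>'
    )

# A "blood pressure must contain exactly one systolic element" style constraint:
rule = schematron_rule(
    context="//observation[@code='blood_pressure']",
    xpath_test="count(systolic) = 1",
    message="Exactly one systolic value is required.",
)
print(rule)
```

A real generator would walk the archetype constraint tree and emit one such rule per node, but the string-template shape of the output is the same.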


Subjects
Data Mining/standards, Electronic Health Records/standards, Medical Record Linkage/standards, Practice Guidelines as Topic, User-Computer Interface, Vocabulary, Controlled, Natural Language Processing, Semantics
3.
J Med Syst; 38(1): 4, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24399281

ABSTRACT

The latest advances in eHealth and mHealth have fostered the rapid creation and expansion of mobile applications for health care. One such type of application is the clinical decision support system, which is nowadays implemented in mobile apps to assist health care professionals in their daily clinical decisions. The aim of this paper is twofold: first, to review the systems currently available in the literature and in commercial stores; second, to analyze a sample of applications in order to draw conclusions and recommendations. Two reviews were conducted: a literature review of Scopus, IEEE Xplore, Web of Knowledge, and PubMed, and a commercial review of Google Play and the App Store. Five applications from each review were selected for in-depth analysis to obtain more information about mobile clinical decision support systems. Ninety-two relevant papers and 192 commercial apps were found. Forty-four papers focused exclusively on mobile clinical decision support systems. One hundred seventy-one apps were available on Google Play and 21 on the App Store. The apps are designed for general medicine and 37 different specialties, and share some common features despite targeting different medical fields. The number of mobile clinical decision support applications and their inclusion in clinical practice has risen in recent years. However, developers must pay careful attention to the interface and ease of use, which can otherwise impoverish the user experience.


Subjects
Decision Support Systems, Clinical/instrumentation, Health Personnel/statistics & numerical data, Medicine/statistics & numerical data, Mobile Applications, Humans, Quality of Health Care, User-Computer Interface
4.
NMR Biomed; 26(5): 578-92, 2013 May.
Article in English | MEDLINE | ID: mdl-23239454

ABSTRACT

The current challenge in automatic brain tumor classification based on MRS is to improve the robustness of classification models by explicitly accounting for the probable breach of the independent and identically distributed conditions in the MRS data points. To contribute to this purpose, a new algorithm for the extraction of discriminant MRS features of brain tumors based on a functional approach is presented. Functional data analysis based on region segmentation (RSFDA) builds on the functional data analysis formalism, using nonuniformly distributed B-splines according to spectral regions that are highly correlated. An exhaustive characterization of the method using controlled and real scenarios is presented in this work. The performance of RSFDA was compared with other widely used feature extraction methods. In all simulated conditions, RSFDA proved stable with respect to the number of variables selected and to classification performance under noise and baseline artifacts. Furthermore, in classification with real multicenter datasets, RSFDA and peak integration (PI) obtained better performance than the other feature extraction methods used for comparison. Other advantages of the proposed method are its usefulness in selecting the optimal number of features for classification and its simplified functional representation of the spectra, which helps highlight the discriminative regions of the MR spectrum for each classification task.
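The peak integration (PI) baseline mentioned in the abstract can be sketched directly: each feature is the summed intensity of the spectrum inside a fixed ppm window. The toy spectrum and the window bounds below are illustrative placeholders, not values from the paper.

```python
# Illustrative sketch of peak integration (PI) as a feature-extraction
# baseline for MR spectra: sum intensities over fixed ppm windows.
# The windows and the synthetic spectrum are placeholders.

def peak_integration(ppm, intensity, windows):
    """Return one feature per (lo, hi) window: the summed intensity inside it."""
    feats = []
    for lo, hi in windows:
        feats.append(sum(y for x, y in zip(ppm, intensity) if lo <= x <= hi))
    return feats

# Toy spectrum with two synthetic "peaks" near 2.0 ppm and 3.0 ppm:
ppm = [i / 100 for i in range(100, 400)]          # 1.00 .. 3.99 ppm
intensity = [1.0 if abs(x - 2.0) < 0.05 or abs(x - 3.0) < 0.05 else 0.0
             for x in ppm]
features = peak_integration(ppm, intensity, [(1.9, 2.1), (2.9, 3.1)])
```

Methods such as RSFDA replace these hand-chosen windows with data-driven spectral regions, but the resulting features play the same role in the downstream classifier.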


Subjects
Brain Neoplasms/metabolism, Magnetic Resonance Spectroscopy/methods, Algorithms, Decision Support Systems, Clinical, Humans, ROC Curve
5.
J Biomed Inform; 46(4): 676-89, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23707417

ABSTRACT

Clinical decision-support systems (CDSSs) comprise systems as diverse as sophisticated platforms to store and manage clinical data, tools to alert clinicians of problematic situations, and decision-making tools to assist clinicians. Irrespective of the kind of decision-support task, CDSSs should be smoothly integrated within the clinical information system, interacting with other components, in particular with the electronic health record (EHR). However, despite decades of development, most CDSSs lack interoperability features. We deal with the interoperability problem of CDSSs and EHRs by exploiting the dual-model methodology. This methodology distinguishes a reference model and archetypes. A reference model is a stable and small object-oriented model that describes the generic properties of health record information. Archetypes, in turn, are reusable, domain-specific definitions of clinical concepts in the form of structured and constrained combinations of the entities of the reference model. We rely on archetypes to make the CDSS compatible with EHRs from different institutions. Concretely, we use archetypes to model the clinical concepts the CDSS requires, in conjunction with a series of knowledge-intensive mappings relating the archetypes to the data sources (the EHR and/or other archetypes) they depend on. We introduce a comprehensive approach, including a set of tools and methodological guidelines, for the interoperability of CDSSs and EHRs based on archetypes. Archetypes are used to build a conceptual layer, a kind of virtual health record (VHR), over the EHR whose contents need to be integrated and used in the CDSS, associating them with structural and terminology-based semantics. Subsequently, the archetypes are mapped to the EHR by means of an expressive mapping language and specific-purpose tools. We also describe a case study in which the tools and methodology were employed in a CDSS supporting patient recruitment for a clinical trial on colorectal cancer screening. The use of archetypes not only proved satisfactory for achieving interoperability between CDSSs and EHRs but also offers several advantages, particularly from a data-model perspective. First, the VHR/data models we work with are highly abstract and can incorporate semantic descriptions. Second, archetypes can potentially deal with different EHR architectures, owing to their deliberate independence from the reference model. Third, the archetype instances we obtain are valid instances of the underlying reference model, which would enable, for example, feeding data derived by abstraction mechanisms back into the EHR. Lastly, the medical and technical validity of archetype models would be assured, since in principle clinicians should be the main actors in their development.


Subjects
Clinical Trials as Topic, Decision Support Systems, Clinical, Electronic Health Records, Colorectal Neoplasms/diagnosis, Humans, Medical Record Linkage
6.
J Biomed Inform; 45(4): 746-62, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22142945

ABSTRACT

Possibly the most important requirement for supporting cooperative work among health professionals and institutions is the ability to share EHRs in a meaningful way, and it is widely acknowledged that standardization of data and concepts is a prerequisite for semantic interoperability in any domain. Different international organizations are working on the definition of EHR architectures, but the lack of tools that implement them hinders their broad adoption. In this paper we present ResearchEHR, a software platform whose objective is to facilitate the practical application of EHR standards as a way of reaching the desired semantic interoperability. This platform is suitable not only for developing new systems but also for increasing the standardization of existing ones. The work reported here describes how the platform allows for the editing, validation, and search of archetypes; converts legacy data into normalized archetype extracts; is able to generate applications from archetypes; and, finally, transforms archetypes and data extracts into other EHR standards. We also describe how ResearchEHR has made possible the application of the CEN/ISO 13606 standard in a real environment and the lessons learned from this experience.


Subjects
Database Management Systems, Electronic Health Records/standards, Semantics, Humans, Reproducibility of Results, Systems Integration
7.
Stud Health Technol Inform; 180: 53-7, 2012.
Article in English | MEDLINE | ID: mdl-22874151

ABSTRACT

While HL7 CDA is a widely adopted standard for the documentation of clinical information, the archetype approach proposed by CEN/ISO 13606 and openEHR is gaining recognition as a means of describing domain models and medical knowledge. This paper describes our efforts in combining both standards. Using archetypes as an alternative for defining CDA templates opens up new possibilities, all based on the formal nature of archetypes and their ability to merge medical knowledge and the technical requirements for semantic interoperability of electronic health records into the same artifact. We describe the process followed for the normalization of existing legacy data in a hospital environment, from the importation of the HL7 CDA model into an archetype editor, through the definition of CDA archetypes, to the application of those archetypes to obtain normalized CDA data instances.


Subjects
Electronic Health Records/standards, Health Level Seven, Information Storage and Retrieval/standards, Medical Record Linkage/standards, Quality Assurance, Health Care/standards, Europe
8.
Stud Health Technol Inform; 180: 721-5, 2012.
Article in English | MEDLINE | ID: mdl-22874286

ABSTRACT

Low biomedical data quality (DQ) leads to poor decisions that may affect the care process or the results of evidence-based studies. Most current approaches to DQ leave the shifting behaviour of the concepts underlying the data, and its relation to DQ, unaddressed. There is also no agreement on a common set of DQ dimensions or on how they interact and relate to these shifts. In this paper we propose an organization of biomedical DQ assessment based on these concepts, identifying characteristics and requirements that will facilitate future research. As a result, we define the Data Quality Vector, compiling a unified set of DQ dimensions (completeness, consistency, duplicity, correctness, timeliness, spatial stability, contextualization, predictive value, and reliability) as the foundation for the further development of DQ assessment algorithms and platforms.
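Two of the listed dimensions lend themselves to a concrete sketch. The metric definitions below are minimal illustrations of my own (the paper defines the dimensions conceptually, not as these formulas): completeness as the fraction of non-missing fields, and duplicity as the fraction of records that exactly duplicate an earlier one.

```python
# Minimal sketch (not from the paper) of two Data Quality Vector
# dimensions computed over a toy record set.

def completeness(records, fields):
    """Fraction of (record, field) slots that are filled (not None)."""
    total = len(records) * len(fields)
    filled = sum(1 for r in records for f in fields if r.get(f) is not None)
    return filled / total if total else 1.0

def duplicity(records):
    """Fraction of records that are exact duplicates of an earlier record."""
    seen, dupes = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        dupes += key in seen
        seen.add(key)
    return dupes / len(records) if records else 0.0

records = [
    {"id": 1, "sex": "F", "age": 34},
    {"id": 1, "sex": "F", "age": 34},   # exact duplicate
    {"id": 2, "sex": None, "age": 51},  # one missing field
]
print(completeness(records, ["id", "sex", "age"]))  # 8/9
print(duplicity(records))                           # 1/3
```

Dimensions such as spatial stability or contextualization need reference distributions and metadata, so they do not reduce to one-liners like these.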


Subjects
Databases, Factual, Forms and Records Control/standards, Health Information Systems/standards, Information Storage and Retrieval/standards, Quality Assurance, Health Care/methods, Quality Assurance, Health Care/standards, Research Design/standards, Spain
9.
Neuroimage; 54(2): 940-54, 2011 Jan 15.
Article in English | MEDLINE | ID: mdl-20851199

ABSTRACT

Quantitative magnetic resonance analysis often requires accurate, robust, and reliable automatic extraction of anatomical structures. Recently, template-warping methods incorporating a label fusion strategy have demonstrated high accuracy in segmenting cerebral structures. In this study, we propose a novel patch-based method using expert manual segmentations as priors to achieve this task. Inspired by recent work in image denoising, the proposed nonlocal patch-based label fusion produces accurate and robust segmentation. Validation with two different datasets is presented. In our experiments, the hippocampi of 80 healthy subjects and the lateral ventricles of 80 patients with Alzheimer's disease were segmented. The influence on segmentation accuracy of different parameters such as patch size and number of training subjects was also studied. A comparison with an appearance-based method and a template-based method was also carried out. The highest median kappa index values obtained with the proposed method were 0.884 for hippocampus segmentation and 0.959 for lateral ventricle segmentation.
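The core of nonlocal patch-based label fusion can be sketched in a few lines: each expert-labeled training patch votes for its label with a weight that decays with its intensity distance to the target patch. The 1-D "patches", the labels, and the decay parameter h below are illustrative simplifications of the 3-D method in the paper.

```python
# Hedged sketch of nonlocal patch-based label fusion: weighted voting
# where weights decay exponentially with patch intensity distance.
# 1-D patches and the parameter h are illustrative simplifications.

import math

def fuse_labels(target_patch, training, h=1.0):
    """training: list of (patch, label) pairs; returns the label with
    the largest similarity-weighted vote for the target patch."""
    votes = {}
    for patch, label in training:
        dist2 = sum((a - b) ** 2 for a, b in zip(target_patch, patch))
        w = math.exp(-dist2 / (h * h))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

training = [
    ([0.1, 0.2, 0.1], "hippocampus"),
    ([0.9, 0.8, 0.9], "background"),
    ([0.8, 0.9, 0.8], "background"),
]
print(fuse_labels([0.15, 0.2, 0.1], training))  # hippocampus
```

In the full method the vote is taken over a search neighbourhood in several registered training images, and the parameters (patch size, h) are exactly what the abstract reports studying.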


Subjects
Alzheimer Disease/pathology, Hippocampus/anatomy & histology, Image Interpretation, Computer-Assisted/methods, Lateral Ventricles/anatomy & histology, Magnetic Resonance Imaging, Humans
10.
J Biomed Inform; 44(4): 677-87, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21377545

ABSTRACT

In the last decade, machine learning (ML) techniques have been used to develop classifiers for automatic brain tumour diagnosis. However, the development of these ML models relies on a single training set, and learning stops once this set has been processed. Training these classifiers requires a representative amount of data, but the gathering, preprocessing, and validation of samples is expensive and time-consuming. Therefore, a classical, non-incremental approach to ML requires waiting long enough to collect all the required data. In contrast, an incremental learning approach may allow us to build an initial classifier with a smaller number of samples and update it incrementally as new data are collected. In this study, an incremental learning algorithm for Gaussian Discriminant Analysis (iGDA) based on the Graybill and Deal weighted combination of estimators is introduced. Each time a new set of data becomes available, a new estimation is carried out and combined with the previous estimation. iGDA does not require access to previously used data and is able to include new classes that were not in the original analysis, thus allowing the customization of the models to the distribution of data at a particular clinical center. An evaluation using five benchmark databases was used to assess the behaviour of the iGDA algorithm in terms of stability-plasticity, class inclusion, and order effects. Finally, the iGDA algorithm was applied to automatic brain tumour classification with magnetic resonance spectroscopy and compared with two state-of-the-art incremental algorithms. The empirical results show the ability of the algorithm to learn incrementally, improving the performance of the models when new information is available and converging over time. Furthermore, the algorithm shows a negligible instance and concept order effect, avoiding the bias that such effects could introduce.
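The Graybill-Deal combination that iGDA builds on is simple to state in the scalar case: two independent estimates of the same quantity are combined with weights inversely proportional to their estimated variances. The scalar form below is a sketch; the paper applies the idea to the multivariate Gaussian parameters of each class.

```python
# Scalar sketch of the Graybill-Deal weighted combination of estimators:
# inverse-variance weighting of two independent estimates of a mean.

def graybill_deal(mean1, var1, mean2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * mean1 + w2 * mean2) / (w1 + w2)

# The tighter estimate (smaller variance) dominates the combination:
combined = graybill_deal(10.0, 1.0, 14.0, 4.0)
print(combined)  # 10.8
```

This is what makes the update incremental: once the previous estimate and its variance are kept, the raw data behind them can be discarded, matching the abstract's claim that iGDA does not need access to previously used data.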


Subjects
Algorithms, Artificial Intelligence, Brain Neoplasms/diagnosis, Computational Biology/methods, Discriminant Analysis, Databases, Factual, Humans, Magnetic Resonance Imaging
11.
MAGMA; 24(1): 35-42, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21249420

ABSTRACT

OBJECT: This study demonstrates that 3T SV-MRS data can be used with the currently available automatic brain tumour diagnostic classifiers which were trained on databases of 1.5T spectra. This will allow the existing large databases of 1.5T MRS data to be used for diagnostic classification of 3T spectra, and perhaps also the combination of 1.5T and 3T databases. MATERIALS AND METHODS: Brain tumour classifiers trained with 154 1.5T spectra to discriminate among high grade malignant tumours and common grade II glial tumours were evaluated with a subsequently-acquired set of 155 1.5T and 37 3T spectra. A similarity study between spectra and main brain tumour metabolite ratios for both field strengths (1.5T and 3T) was also performed. RESULTS: Our results showed that classifiers trained with 1.5T samples had similar accuracy for both test datasets (0.87 ± 0.03 for 1.5T and 0.88 ± 0.03 for 3.0T). Moreover, non-significant differences were observed with most metabolite ratios and spectral patterns. CONCLUSION: These results encourage the use of existing classifiers based on 1.5T datasets for diagnosis with 3T (1)H SV-MRS. The large 1.5T databases compiled throughout many years and the prediction models based on 1.5T acquisitions can therefore continue to be used with data from the new 3T instruments.


Subjects
Brain Neoplasms/diagnosis, Databases, Factual, Magnetic Resonance Spectroscopy/methods, Pattern Recognition, Automated/methods, Brain Neoplasms/metabolism, Humans, Protons, Sensitivity and Specificity
12.
Neuroimage; 53(2): 480-90, 2010 Nov 01.
Article in English | MEDLINE | ID: mdl-20600978

ABSTRACT

This paper addresses the problem of accurate voxel-level estimation of tissue proportions in human brain magnetic resonance imaging (MRI). Due to the finite resolution of acquisition systems, MRI voxels can contain contributions from more than a single tissue type. The voxel-level estimation of this fractional content is known as partial volume coefficient estimation. In the present work, two new methods to calculate partial volume coefficients under noisy conditions are introduced and compared with current similar methods. Concretely, a novel Markov Random Field model allowing sharp transitions between partial volume coefficients of neighbouring voxels and an advanced non-local means filtering technique are proposed to reduce the errors due to random noise in partial volume coefficient estimation. In addition, a comparison was made to find out how the different methodologies affect the measurement of brain tissue volumes. Based on the results obtained, the main conclusions are that (1) both Markov Random Field modelling and non-local means filtering improved the partial volume coefficient estimation results, and (2) non-local means filtering was the better of the two strategies for partial volume coefficient estimation.


Subjects
Brain/anatomy & histology, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Algorithms, Computer Simulation, Humans, Markov Chains, Models, Statistical, Phantoms, Imaging, Software
13.
J Magn Reson Imaging; 31(1): 192-203, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20027588

ABSTRACT

PURPOSE: To adapt the so-called nonlocal means filter to deal with magnetic resonance (MR) images with spatially varying noise levels (for both Gaussian and Rician distributed noise). MATERIALS AND METHODS: Most filtering techniques assume an equal noise distribution across the image. When this assumption is not met, the resulting filtering becomes suboptimal. This is the case of MR images with spatially varying noise levels, such as those obtained by parallel imaging (sensitivity-encoded), intensity inhomogeneity-corrected images, or surface coil-based acquisitions. We propose a new method where information regarding the local image noise level is used to adjust the amount of denoising strength of the filter. Such information is automatically obtained from the images using a new local noise estimation method. RESULTS: The proposed method was validated and compared with the standard nonlocal means filter on simulated and real MRI data showing an improved performance in all cases. CONCLUSION: The new noise-adaptive method was demonstrated to outperform the standard filter when spatially varying noise is present in the images.
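The adaptation described here can be illustrated at the level of a single nonlocal-means similarity weight: instead of a global smoothing strength, the strength is tied to a local noise estimate, so the same patch difference is trusted less (and smoothed more) where the noise is higher. The 1-D patches, the parameter names, and the simple h = beta * sigma^2 coupling below are illustrative simplifications, not the paper's exact formulation.

```python
# Illustrative sketch of noise-adaptive nonlocal means: the smoothing
# strength is derived from a LOCAL noise estimate instead of a global
# constant. 1-D patches; the beta coupling is a simplification.

import math

def nlm_weight(patch_a, patch_b, local_sigma, beta=1.0):
    """Similarity weight with denoising strength adapted to local noise."""
    dist2 = sum((a - b) ** 2 for a, b in zip(patch_a, patch_b)) / len(patch_a)
    h2 = beta * (local_sigma ** 2)  # spatially varying strength
    return math.exp(-dist2 / h2)

# The same patch pair gets a larger weight where the noise is higher,
# i.e. their difference is more likely noise than anatomical structure:
w_low_noise = nlm_weight([1.0, 2.0], [1.2, 2.1], local_sigma=0.1)
w_high_noise = nlm_weight([1.0, 2.0], [1.2, 2.1], local_sigma=0.5)
```

This is exactly the behaviour needed for parallel-imaging or surface-coil data, where the noise level varies smoothly across the image.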


Subjects
Algorithms, Artifacts, Brain/anatomy & histology, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Signal Processing, Computer-Assisted, Humans, Magnetic Resonance Imaging/instrumentation, Phantoms, Imaging, Reproducibility of Results, Sensitivity and Specificity
14.
Nucleic Acids Res; 36(10): 3420-35, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18445632

ABSTRACT

Functional genomics technologies have been widely adopted in the biological research of both model and non-model species. Efficient functional annotation of DNA or protein sequences is a major requirement for the successful application of these approaches, as functional information on gene products is often the key to the interpretation of experimental results. Therefore, there is an increasing need for bioinformatics resources that can cope with large amounts of sequence data, produce valuable annotation results, and are easily accessible to laboratories where functional genomics projects are being undertaken. We present the Blast2GO suite as an integrated and biologist-oriented solution for the high-throughput, automatic functional annotation of DNA or protein sequences based on the Gene Ontology vocabulary. The most outstanding Blast2GO features are: (i) the combination of various annotation strategies and tools controlling the type and intensity of annotation, (ii) numerous graphical features, such as the interactive GO-graph visualization for gene-set function profiling and descriptive charts, (iii) general sequence management features, and (iv) high-throughput capabilities. We used the Blast2GO framework to carry out a detailed analysis of annotation behaviour through homology transfer and its impact on functional genomics research. Our aim is to offer biologists useful information to take into account when addressing the task of functionally characterizing their sequence data.


Subjects
Genomics, Sequence Analysis, DNA, Sequence Analysis, Protein, Software, Animals, Computational Biology, Computer Graphics, Databases, Genetic, Expressed Sequence Tags/chemistry, Genes/physiology, Vocabulary, Controlled
15.
Stud Health Technol Inform; 155: 212-8, 2010.
Article in English | MEDLINE | ID: mdl-20543331

ABSTRACT

Building a semantically interoperable Electronic Health Record is one of the most challenging research goals in health informatics. To reach this objective, EHR standards that formally describe health data structures must be used. CEN EN13606 is one of the most promising approaches: it covers the technical needs for semantic interoperability and, at the same time, incorporates a mechanism (the archetype model) that enables clinical domain experts to participate in building an EHR system. In this paper we present EHRflex, a generic system based on archetypes. It empowers clinicians, allowing them to manage their own EHR system in a simple and generic way while ensuring that they work with underlying standardized data structures, which can be exchanged with other people and systems when needed. EHRflex introduces EHR standards into the clinical routine by delivering a technical platform that works directly on archetype-based data.


Subjects
Electronic Health Records, Medical Record Linkage/methods, Medical Records Systems, Computerized/organization & administration, Database Management Systems/organization & administration, Humans, Medical Records Systems, Computerized/standards, Programming Languages, Semantics, Systems Integration, Vocabulary, Controlled
16.
Stud Health Technol Inform; 155: 129-35, 2010.
Article in English | MEDLINE | ID: mdl-20543320

ABSTRACT

In this paper, we present the ResearchEHR project. It focuses on the usability of Electronic Health Record (EHR) sources and EHR standards for building advanced clinical systems. The aim is to support healthcare professionals, institutions, and authorities by providing a set of generic methods and tools for the capture, standardization, integration, description, and dissemination of health-related information. ResearchEHR combines several tools to manage EHRs at two different levels: an internal level that deals with the normalization and semantic upgrading of existing EHRs by using archetypes, and an external level that uses Semantic Web technologies to specify clinical archetypes for advanced EHR architectures and systems.


Subjects
Biomedical Research/methods, Electronic Health Records/organization & administration, Medical Record Linkage/methods, Semantics, Biomedical Research/standards, Electronic Health Records/standards, Humans, Systems Integration
17.
Stud Health Technol Inform; 155: 136-42, 2010.
Article in English | MEDLINE | ID: mdl-20543321

ABSTRACT

Since the approval of the CEN EN13606 standard for electronic health record communication, growing interest in the application of this specification has emerged. The main objective of the standard is to serve as a mechanism for achieving the semantic interoperability of clinical data. This will require an effort to use common terminologies, to normalise the clinical knowledge domain, and to combine all these formalisations with the existing information systems. This paper presents a methodology and tools developed to reach the seamless semantic interoperability of health data in legacy systems, together with several case studies where the developed framework has been applied.


Subjects
Electronic Health Records/organization & administration, Medical Record Linkage/methods, Medical Records Systems, Computerized/organization & administration, Humans, Semantics, Vocabulary, Controlled
18.
MAGMA; 22(1): 5-18, 2009 Feb.
Article in English | MEDLINE | ID: mdl-18989714

ABSTRACT

JUSTIFICATION: Automatic brain tumor classification by MRS has been under development for more than a decade. Nonetheless, to our knowledge, there are no published evaluations of predictive models with unseen cases subsequently acquired in different centers. The multicenter eTUMOUR project (2004-2009), which builds upon previous expertise from the INTERPRET project (2000-2002), has allowed such an evaluation to take place. MATERIALS AND METHODS: A total of 253 pairwise classifiers for glioblastoma, meningioma, metastasis, and low-grade glial diagnosis were inferred based on 211 SV short-TE INTERPRET MR spectra obtained at 1.5 T (PRESS or STEAM, 20-32 ms) and automatically pre-processed. Afterwards, the classifiers were tested with 97 spectra subsequently compiled during eTUMOUR. RESULTS: Based on the subsequently acquired spectra, accuracies of around 90% were achieved for most of the pairwise discrimination problems. The exception was the glioblastoma versus metastasis discrimination, which was below 78%. A clearer definition of metastases may be obtained by other approaches, such as MRSI + MRI. CONCLUSIONS: The prediction of tumor type from in-vivo MRS is possible using classifiers developed from previously acquired data, in different hospitals with different instrumentation, under the same acquisition protocols. This methodology may find application in assisting the diagnosis of new brain tumor cases and in the quality control of multicenter MRS databases.


Subjects
Artificial Intelligence, Biomarkers, Tumor/analysis, Brain Neoplasms/classification, Brain Neoplasms/metabolism, Diagnosis, Computer-Assisted/methods, Magnetic Resonance Spectroscopy/methods, Pattern Recognition, Automated/methods, Algorithms, Brain Neoplasms/diagnosis, Europe, Humans, Reproducibility of Results, Sensitivity and Specificity
19.
Methods Inf Med; 48(3): 291-8, 2009.
Article in English | MEDLINE | ID: mdl-19387507

ABSTRACT

OBJECTIVE: The main goal of this paper is to obtain a classification model based on feed-forward multilayer perceptrons that improves postpartum depression prediction during the 32 weeks after childbirth with high sensitivity and specificity, and to develop a tool to be integrated into a decision support system for clinicians. MATERIALS AND METHODS: Multilayer perceptrons were trained on data from 1397 women who had just given birth at seven Spanish general hospitals, including clinical, environmental, and genetic variables. A prospective cohort study assessed the women just after delivery, at 8 weeks, and at 32 weeks after delivery. The models were evaluated with the geometric mean of accuracies using a hold-out strategy. RESULTS: Multilayer perceptrons showed good performance (high sensitivity and specificity) as predictive models for postpartum depression. CONCLUSIONS: The use of these models in a decision support system can be clinically evaluated in future work. Analysis of the models by pruning yields a qualitative interpretation of the influence of each variable that is of interest for clinical protocols.
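The evaluation metric named in the abstract can be written out directly: the geometric mean of per-class accuracies (which, for two classes, is the geometric mean of sensitivity and specificity), a metric that penalizes models that sacrifice the minority class. The toy labels below are illustrative.

```python
# Geometric mean of per-class accuracies, the evaluation metric named
# in the abstract. For two classes this is sqrt(sensitivity * specificity).

import math

def geometric_mean_accuracy(y_true, y_pred):
    classes = set(y_true)
    accs = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        accs.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return math.prod(accs) ** (1.0 / len(accs))

# Toy example: 3/4 correct on class 1, 1/2 correct on class 0:
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
print(geometric_mean_accuracy(y_true, y_pred))  # sqrt(0.75 * 0.5) ≈ 0.612
```

The metric suits imbalanced screening data such as postpartum depression cohorts, where plain accuracy can look high while missing most positive cases.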


Subjects
Depression, Postpartum/diagnosis, Adult, Algorithms, Cohort Studies, Female, Forecasting, Humans, Logistic Models, Nerve Net, Prospective Studies, Spain
20.
Magn Reson Med; 60(2): 288-98, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18666120

ABSTRACT

This study examines the effect of feature extraction methods prior to automated pattern recognition based on magnetic resonance spectroscopy (MRS) for brain tumor diagnosis. Since individual inspection of spectra is time-consuming and requires specific spectroscopic expertise, the introduction of clinical decision support systems (DSSs) is expected to strongly promote the clinical use of MRS. This study focuses on the feature extraction step in the preprocessing protocol of MRS when using a DSS. On two independent data sets, encompassing single-voxel and multi-voxel data, we observed that accurate performance is achieved by using the full spectra together with a kernel-based technique that handles high-dimensional data, or by using an automated pattern recognition method based on independent component analysis or Relief-F. In addition, these approaches have low cost and are easy to automate. When sophisticated quantification methods are used in a DSS, user interaction should be minimized, and the computationally intensive quantification techniques do not tend to increase performance in these circumstances. The results suggest simplifying the feature reduction step in the preprocessing protocol when a DSS is used purely for classification. This can greatly speed up the execution of classifiers and DSSs and may accelerate their introduction into clinical practice.


Subjects
Artificial Intelligence, Biomarkers, Tumor/analysis, Brain Neoplasms/diagnosis, Brain Neoplasms/metabolism, Diagnosis, Computer-Assisted/methods, Magnetic Resonance Spectroscopy/methods, Pattern Recognition, Automated/methods, Algorithms, Humans, Protons, Reproducibility of Results, Sensitivity and Specificity