Medical Image Retrieval: A Multimodal Approach.
Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning.
Affiliations
  • Cao Y; Department of Computer Science, The University of Massachusetts Lowell, Lowell, MA, USA.
  • Steffey S; Department of Computer Science, The University of Massachusetts Lowell, Lowell, MA, USA.
  • He J; School of Information Science and Engineering, Central South University, Changsha, PR China.
  • Xiao D; College of Computer Science and Electronic Engineering, Hunan University, Changsha, PR China.
  • Tao C; School of Biomedical Informatics, The University of Texas, Health Science Center at Houston, Houston, TX, USA.
  • Chen P; Department of Computer Science, University of Massachusetts Boston, Boston, MA, USA.
  • Müller H; Department of Business Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Medical Informatics, University Hospitals and University of Geneva, Geneva, Switzerland.
Cancer Inform ; 13(Suppl 3): 125-36, 2014.
Article in En | MEDLINE | ID: mdl-26309389
Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis (pLSA) model that integrates the visual and textual information from medical images to bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model that learns a joint density model from multimodal information in order to derive a missing modality. Experimental results on a large volume of real-world medical images show that our new approach is a promising solution for next-generation medical image indexing and retrieval systems.
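The abstract does not specify the authors' extended pLSA model, but the general idea of fitting topic distributions over a combined textual/visual-word vocabulary can be illustrated with standard pLSA trained by EM. The sketch below is an assumption-laden illustration, not the paper's method: it assumes documents are represented as bag-of-"word" counts where textual terms and quantized visual words are simply concatenated into one vocabulary, and all function and variable names are invented here.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit plain pLSA by EM on a (docs x terms) count matrix.

    counts[d, w] = n(d, w), the number of times term w occurs in
    document d. A "term" can be a textual word or a quantized visual
    word, giving a naive multimodal variant over one joint vocabulary.
    Returns P(z|d) with shape (docs, topics) and P(w|z) with shape
    (topics, terms).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_terms = counts.shape
    p_z_d = rng.random((n_docs, n_topics))         # P(z|d)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_terms))        # P(w|z)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # E-step: P(z|d,w) ∝ P(z|d) P(w|z); shape (docs, topics, terms)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)

        # M-step: reweight responsibilities by observed counts n(d,w)
        weighted = counts[:, None, :] * post
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12

    return p_z_d, p_w_z
```

In a retrieval setting, each document's topic vector P(z|d) can then serve as a low-dimensional index: a query image-plus-text pair is folded into the same topic space and ranked against the corpus by, for example, cosine similarity of topic vectors.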
Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Publication year: 2014 Document type: Article