[A medical visual question answering approach based on co-attention networks].
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi; 41(3): 560-568, 2024 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-38932543
ABSTRACT
Recent studies have introduced attention models for medical visual question answering (MVQA). In this setting, modeling "question attention" is just as important as modeling "visual attention". To enable bidirectional reasoning over the attention between medical images and questions, a new MVQA architecture named MCAN was proposed. The architecture incorporates a cross-modal co-attention network, FCAF, which identifies the key words in a question and the salient regions in an image. A meta-learning channel attention module (MLCA) then adaptively assigns a weight to each word and region, reflecting the model's focus on specific words and regions during reasoning. In addition, this study designed a medical domain-specific word embedding model, Med-GloVe, to further improve the model's accuracy and practical value. Experimental results showed that MCAN improved accuracy by 7.7% on free-form questions in the Path-VQA dataset and by 4.4% on closed-form questions in the VQA-RAD dataset, effectively improving the accuracy of medical visual question answering.
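The abstract does not give the internals of FCAF, so the following is only a minimal sketch of the generic affinity-based co-attention idea it builds on: an affinity matrix between question-word features and image-region features is normalized along each axis to yield "question attention" (which words matter to each region) and "visual attention" (which regions matter to each word). All names, dimensions, and the random initialization here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(Q, V, W):
    """Bidirectional co-attention between question words and image regions.

    Q: (n_words, d)   question word features
    V: (n_regions, d) image region features
    W: (d, d)         learned bilinear affinity weights (random stand-in here)
    """
    # Affinity matrix: C[i, j] = similarity of word i and region j.
    C = Q @ W @ V.T                       # (n_words, n_regions)
    attn_q = softmax(C, axis=0)           # per region: distribution over words
    attn_v = softmax(C, axis=1)           # per word: distribution over regions
    Q_att = attn_q.T @ Q                  # (n_regions, d) word-informed features
    V_att = attn_v @ V                    # (n_words, d)   region-informed features
    return Q_att, V_att, attn_q, attn_v

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))           # 5 question words, 8-dim features
V = rng.standard_normal((3, 8))           # 3 image regions
W = rng.standard_normal((8, 8)) * 0.1
Q_att, V_att, attn_q, attn_v = co_attention(Q, V, W)
print(Q_att.shape, V_att.shape)           # (3, 8) (5, 8)
```

In the paper's full architecture, such attention maps would additionally be re-weighted channel-wise by the MLCA module before answer prediction; that step is omitted here since its details are not given in the abstract.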
Database: MEDLINE
Main subject: Neural Networks, Computer
Limits: Humans
Language: Chinese
Journal: Sheng Wu Yi Xue Gong Cheng Xue Za Zhi
Journal subject: Biomedical Engineering
Year: 2024
Type: Article