Multimodal Abstractive Summarization Using Bidirectional Encoder Representations from Transformers with Attention Mechanism.
Argade, Dakshata; Khairnar, Vaishali; Vora, Deepali; Patil, Shruti; Kotecha, Ketan; Alfarhood, Sultan.
Affiliation
  • Argade D; Terna Engineering College, Nerul, Navi Mumbai, 400706, India.
  • Khairnar V; Terna Engineering College, Nerul, Navi Mumbai, 400706, India.
  • Vora D; Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University), Pune, 412115, India.
  • Patil S; Symbiosis Institute of Technology, Pune Campus, Symbiosis International (Deemed University), Pune, 412115, India.
  • Kotecha K; Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology Pune Campus, Symbiosis International (Deemed University) (SIU), Lavale, Pune, 412115, India.
  • Alfarhood S; Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology Pune Campus, Symbiosis International (Deemed University) (SIU), Lavale, Pune, 412115, India.
Heliyon ; 10(4): e26162, 2024 Feb 29.
Article in En | MEDLINE | ID: mdl-38420442
ABSTRACT
In recent decades, abstractive text summarization from multimodal input has attracted many researchers because it can gather information from various sources into a concise summary. However, existing multimodal summarization methods produce summaries only for short videos and perform poorly on lengthy ones. To address these issues, this research presents Multimodal Abstractive Summarization using Bidirectional Encoder Representations from Transformers (MAS-BERT) with an attention mechanism. The purpose of video summarization is to speed up search over large video collections so that users can quickly decide whether a video is relevant by reading its summary. The data is obtained from the publicly available How2 dataset. The textual data, passed through an embedding layer, is encoded with a Bidirectional Gated Recurrent Unit (Bi-GRU) encoder, while the audio and video features are encoded with a Long Short-Term Memory (LSTM) encoder. A BERT-based attention mechanism then combines the modalities, and a Bi-GRU-based decoder generates the summary. Experimental results show that the proposed MAS-BERT achieves a ROUGE-1 score of 60.2, whereas the existing Decoder-only Multimodal Transformer (D-MmT) and the Factorized Multimodal Transformer based Decoder-Only Language model (FLORAL) achieve 49.58 and 56.89, respectively. Our work provides users with better contextual information and a better experience, and could help video-sharing platforms retain customers by allowing users to judge a video's relevance from its summary.
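To make the pipeline described above concrete, here is a minimal PyTorch sketch of the architecture as the abstract outlines it: a Bi-GRU encoder over embedded text, an LSTM encoder for audio/video features, attention-based fusion of the modalities, and a Bi-GRU-based decoder. All layer sizes, the use of nn.MultiheadAttention as a stand-in for the paper's BERT-based attention, and the simplified non-autoregressive decoder stub are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MASBERTSketch(nn.Module):
    """Illustrative sketch of the MAS-BERT pipeline from the abstract.
    Dimensions and layer choices are assumptions, not the paper's values."""

    def __init__(self, vocab_size=30000, embed_dim=256, hidden_dim=256,
                 av_feat_dim=2048, num_heads=8):
        super().__init__()
        # Embedding layer for textual tokens (e.g., How2 transcripts).
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional GRU encoder for the embedded text.
        self.text_encoder = nn.GRU(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
        # LSTM encoder for precomputed audio/video features.
        self.av_encoder = nn.LSTM(av_feat_dim, hidden_dim,
                                  batch_first=True, bidirectional=True)
        # Multi-head attention fusing text (queries) with audio/video
        # (keys/values); a stand-in for the paper's BERT-based attention.
        self.fusion = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                            batch_first=True)
        # Bi-GRU decoder stub producing summary-token logits.
        self.decoder = nn.GRU(2 * hidden_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, text_ids, av_feats):
        text_enc, _ = self.text_encoder(self.embedding(text_ids))
        av_enc, _ = self.av_encoder(av_feats)
        # Cross-modal attention: text attends over audio/video features.
        fused, _ = self.fusion(text_enc, av_enc, av_enc)
        dec, _ = self.decoder(fused)
        return self.out(dec)  # (batch, seq_len, vocab) summary logits

# Smoke test with random inputs (shapes are assumptions).
model = MASBERTSketch()
logits = model(torch.randint(0, 30000, (2, 50)), torch.randn(2, 40, 2048))
print(logits.shape)  # torch.Size([2, 50, 30000])
```

A faithful reproduction would replace the decoder stub with an autoregressive Bi-GRU decoder trained to emit summary tokens step by step; the sketch omits that loop to keep the cross-modal data flow easy to follow.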
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Heliyon Year: 2024 Document type: Article Affiliation country: India Country of publication: United Kingdom
