Text summarization for pharmaceutical sciences using hierarchical clustering with a weighted evaluation methodology.
Dalal, Avinash; Ranjan, Sumit; Bopaiah, Yajna; Chembachere, Divya; Steiger, Nick; Burns, Christopher; Daswani, Varsha.
Affiliation
  • Dalal A; Applied Sciences, Lumilytics LLC, 436 N. Main St. #1004, Doylestown, PA, 18901, USA. avinash.dalal@lumilyticsdata.com.
  • Ranjan S; Decision Sciences, MResult Corporation, 12 Roosevelt Avenue, Mystic, CT, 06355, USA. sumit.ranjan@mresult.com.
  • Bopaiah Y; Decision Sciences, MResult Corporation, 12 Roosevelt Avenue, Mystic, CT, 06355, USA.
  • Chembachere D; Decision Sciences, MResult Corporation, 12 Roosevelt Avenue, Mystic, CT, 06355, USA.
  • Steiger N; Biotherapeutics & Pharmaceutical Sciences, Pfizer Inc., 235 E. 42nd Street, New York, NY, 10017, USA.
  • Burns C; Biotherapeutics & Pharmaceutical Sciences, Pfizer Inc., 235 E. 42nd Street, New York, NY, 10017, USA.
  • Daswani V; Applied Sciences, Lumilytics LLC, 436 N. Main St. #1004, Doylestown, PA, 18901, USA.
Sci Rep; 14(1): 20149, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39209906
ABSTRACT
In the pharmaceutical industry, there is an abundance of regulatory documents used to understand the current regulatory landscape and to make project decisions proactively. Because of the size of these documents, informative summaries are valuable to project teams. We propose a novel solution, MedicoVerse, that summarizes such documents using advanced machine learning techniques. MedicoVerse takes a multi-stage approach: it generates word embeddings for regulatory documents with the SapBERT model, passes these embeddings through a critical hierarchical agglomerative clustering step, and organizes the resulting clusters in a custom data structure. Each cluster is summarized with the bart-large-cnn-samsum model, and the cluster summaries are merged into a comprehensive summary of the original document. We compare MedicoVerse with established models T5, Google Pegasus, and Facebook BART, and with large language models such as Mixtral 8x7B Instruct, GPT-3.5, and Llama-2-70b, using a scoring system that weighs four factors: ROUGE score, BERTScore, business entities, and the Flesch Reading Ease. Our results show that MedicoVerse outperforms the compared models, producing informative summaries of large regulatory documents.
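
The pipeline described in the abstract can be illustrated with a minimal sketch. The sketch assumes the publicly available cambridgeltl/SapBERT-from-PubMedBERT-fulltext and philschmid/bart-large-cnn-samsum checkpoints on Hugging Face; the mean pooling, clustering threshold, and merge step are illustrative choices, not the authors' exact configuration or custom cluster data structure.

    # Sketch of a MedicoVerse-style summarization pipeline (assumed checkpoints and settings).
    import torch
    from transformers import AutoTokenizer, AutoModel, pipeline
    from sklearn.cluster import AgglomerativeClustering

    def embed_sentences(sentences, model_name="cambridgeltl/SapBERT-from-PubMedBERT-fulltext"):
        """Mean-pooled SapBERT embeddings for a list of sentences."""
        tok = AutoTokenizer.from_pretrained(model_name)
        model = AutoModel.from_pretrained(model_name)
        enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc).last_hidden_state            # (batch, seq_len, hidden)
        mask = enc["attention_mask"].unsqueeze(-1)           # ignore padding tokens when pooling
        return ((out * mask).sum(1) / mask.sum(1)).numpy()

    def summarize_document(sentences, distance_threshold=1.0):
        """Cluster sentences hierarchically, summarize each cluster, merge the pieces."""
        emb = embed_sentences(sentences)
        labels = AgglomerativeClustering(
            n_clusters=None, distance_threshold=distance_threshold
        ).fit_predict(emb)
        summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")
        cluster_summaries = []
        for c in sorted(set(labels)):
            chunk = " ".join(s for s, l in zip(sentences, labels) if l == c)
            cluster_summaries.append(summarizer(chunk, truncation=True)[0]["summary_text"])
        return " ".join(cluster_summaries)

The weighted evaluation can be sketched in the same spirit; the weights, the entity-recall metric, and the readability normalization below are placeholders, since the abstract names the four factors but not the exact weighting scheme.

    # Illustrative weighted score over ROUGE, BERTScore, business entities, and Flesch Reading Ease.
    from rouge_score import rouge_scorer
    from bert_score import score as bert_score
    import textstat

    def weighted_summary_score(summary, reference, entities, weights=(0.3, 0.3, 0.2, 0.2)):
        rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, summary)["rougeL"].fmeasure
        _, _, f1 = bert_score([summary], [reference], lang="en")
        bert_f1 = f1.mean().item()
        # Placeholder entity metric: fraction of expected business entities retained in the summary.
        entity_recall = sum(e.lower() in summary.lower() for e in entities) / max(len(entities), 1)
        # Clamp Flesch Reading Ease to [0, 100] and rescale to [0, 1].
        readability = min(max(textstat.flesch_reading_ease(summary), 0), 100) / 100
        w1, w2, w3, w4 = weights
        return w1 * rouge_l + w2 * bert_f1 + w3 * entity_recall + w4 * readability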
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Machine Learning Limit: Humans Language: En Journal: Sci Rep Publication year: 2024 Document type: Article Country of affiliation: United States Country of publication: United Kingdom