Lay Summarization - BioNLP

We used a pre-trained language model (PLM) to train a system that produces a layman's summary of a research publication from the biomedical domain.
We also participated in the related BioLaySumm shared task, hosted by the BioNLP Workshop at ACL 2024 (https://www.codabench.org/competitions/1920/). Our submission (user ID: lkm1ml) was in the top 10 on the leaderboard at the time of submission.
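
For context, generating a lay summary with a FLAN-T5 model follows the standard sequence-to-sequence pattern. The snippet below is a minimal, hypothetical sketch; the checkpoint name, prompt wording, and generation settings are illustrative assumptions, not the exact ones used in flant5-inference.py:

```python
# Hypothetical sketch: lay summarization with a FLAN-T5 checkpoint.
# Checkpoint, prompt, and generation settings are illustrative only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumption: any FLAN-T5 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "..."  # text of a biomedical research publication
prompt = "Summarize the following article for a layperson: " + article

# Truncate long articles to the model's input budget, then decode with beam search.
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```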

This repository contains:

- flant5-finetune.py - Training code to fine-tune a FLAN-T5 model (a general sketch of this kind of fine-tuning appears after this list)
- flant5-inference.py - Code for running inference with the fine-tuned model
- run_model.sh - Entry point for model training and inference
- writeup.txt - A text file describing our approach, results, and experimental settings
- plos.txt and elife.txt - Outputs generated on the test data
- link.txt - A link to the Google Drive folder containing the trained model
- screenshot.jpg - A screenshot of our submission to CodaBench
- prediction_result.zip - Predictions on the test data, as submitted to CodaBench
- requirements.txt - Python dependencies for setting up the environment; other requirements are assumed to be pre-installed or cached
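
As a rough illustration of what the fine-tuning step involves, here is a minimal sequence-to-sequence training sketch using Hugging Face Transformers. The dataset fields, hyperparameters, and file paths are assumptions for illustration and do not reproduce the exact settings of flant5-finetune.py:

```python
# Hypothetical sketch of seq2seq fine-tuning with Hugging Face Transformers.
# Dataset fields, hyperparameters, and paths are assumptions, not the
# repository's actual settings.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumption: JSON lines with "article" and "lay_summary" fields,
# mirroring the structure of the BioLaySumm (eLife/PLOS) data.
data = load_dataset("json", data_files={"train": "train.jsonl"})

def preprocess(batch):
    # Prefix each article with an instruction and tokenize inputs and targets.
    inputs = tokenizer(["summarize: " + a for a in batch["article"]],
                       truncation=True, max_length=1024)
    labels = tokenizer(text_target=batch["lay_summary"],
                       truncation=True, max_length=256)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = data["train"].map(preprocess, batched=True,
                              remove_columns=data["train"].column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="flant5-laysumm",
                                  per_device_train_batch_size=4,
                                  num_train_epochs=3,
                                  learning_rate=3e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```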

DECLARATION

This work was done collaboratively by Tanishq and Lalit during the COL772 Natural Language Processing course (Spring 2024, Prof. Mausam, IIT Delhi).
