This directory contains the code needed to reproduce the training and evaluation for the AAAI 2023 paper:
"CoP: Factual Inconsistency Detection by Controlling the Preference" by Shuaijie She, Xiang Geng, Shujian Huang and Jiajun Chen.
I am reorganizing the code for simplicity and convenience and will release it gradually.
transformers 4.12.5
torch 1.11.0
tensorboard 2.9.0
spacy 3.2.3
en-core-web-sm 3.2.0
nltk 3.7
rouge 1.0.1
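The dependencies above can be installed with pip, for example (a sketch; versions as listed, adjust to your environment):

```bash
# Install the pinned dependencies listed above.
pip install transformers==4.12.5 torch==1.11.0 tensorboard==2.9.0 spacy==3.2.3 nltk==3.7 rouge==1.0.1
# The spaCy English model (en-core-web-sm 3.2.0) is shipped as a separate package.
python -m spacy download en_core_web_sm
```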
Download a pretrained model from Hugging Face (for example, BARTCNN).
Run the script reproduce.sh; example commands are shown below.
--TestOn supports the four data splits mentioned in the paper: ['qagscnn', 'qagsxsum', 'frankcnn', 'frankxsum']
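For example, assuming BARTCNN refers to the facebook/bart-large-cnn checkpoint on the Hugging Face hub and that reproduce.sh takes --TestOn as a command-line argument (the exact interface is defined inside the script), a run might look like:

```bash
# Fetch the pretrained summarizer (assumption: BARTCNN corresponds to facebook/bart-large-cnn).
git lfs install
git clone https://huggingface.co/facebook/bart-large-cnn

# Hypothetical invocation; check reproduce.sh for the options it actually expects.
bash reproduce.sh --TestOn qagscnn
```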
We provide a simple inference example in inference.sh (currently supporting the zero-shot token-level and summary-level tasks); see the example commands after the steps below.
1. Prepare the data (a simple example is provided in data/toy.json)
2. Specify the config in inference.sh
3. Create the output folder
4. Run inference.sh
5. Check the result in output/result.json
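Put together, a minimal run of the steps above might look like the following (assuming the commands are executed from the repository root and inference.sh is already configured to read data/toy.json and write into output/):

```bash
mkdir -p output          # step 3: create the output folder
bash inference.sh        # step 4: run inference with the config set inside the script
cat output/result.json   # step 5: inspect the token/summary-level results
```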
For prompt tuning, look into the PromptTuning folder.
Our experiments were conducted on a single RTX 3090 and take around 10 GB of GPU memory (based on BARTCNN).
If you find our work useful, please consider citing it:
@misc{she2022cop,
title={CoP: Factual Inconsistency Detection by Controlling the Preference},
author={Shuaijie She and Xiang Geng and Shujian Huang and Jiajun Chen},
year={2022},
eprint={2212.01611},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{to update with the AAAI 2023 proceedings,
title={==},
author={==},
journal={==},
year={==}
}