Tokenized dataset: https://drive.google.com/file/d/1tWl99-7tZ1WZRasgsUiC3YGX00ERT7as/view?usp=sharing
Untokenized dataset (without character descriptions; we used this version to train the summarization models): https://drive.google.com/file/d/1KqpLEXxQ85jds8bEOdO-fsSs8S7UY69M/view?usp=sharing