Automatic paper clustering and search tool using fastText from Facebook Research.
Based on CVPR_paper_search_tool by Jin Yamanaka. I decided to split the code into multiple projects:
- AI Papers Scrapper - Downloads paper PDFs and other information from the main AI conferences
- AI Papers Cleaner - Extracts text from paper PDFs and abstracts, and removes uninformative words
- this project - Automatic paper clustering
- AI Papers Searcher - Web app to search papers by keywords or by similar papers
- AI Conferences Info - Contains the titles, abstracts, URLs, and author names extracted from the papers
I also added support for more conferences in a single web app, customized it a little further, and hosted it on PythonAnywhere. You can see a running example of the web app here.
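The core idea behind searching by similar papers can be sketched with a toy nearest-neighbor lookup. In the real project the vectors come from fastText embeddings of the paper text; here the titles and three-dimensional vectors are purely hypothetical illustrations, using only the standard library:

```python
import math

# Toy paper embeddings. In the actual project these would be fastText
# vectors derived from each paper's text; these values are made up.
papers = {
    "Deep Residual Learning": [0.9, 0.1, 0.0],
    "Attention Is All You Need": [0.1, 0.9, 0.2],
    "Image Segmentation Survey": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(title, k=2):
    """Rank the other papers by cosine similarity to the given one."""
    query = papers[title]
    others = ((t, cosine(query, v)) for t, v in papers.items() if t != title)
    return sorted(others, key=lambda tv: -tv[1])[:k]

print(most_similar("Deep Residual Learning"))
```

With these toy vectors, the survey paper ranks closest to the residual-learning paper because their vectors point in similar directions; the same principle, at much higher dimension, drives the clustering and similar-paper search.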
Docker, or for a local installation:
- Python 3.10+
- Poetry
Note: the Poetry installation currently does not work due to a bug when installing fasttext.
To make it easier to run the code, with or without Docker, I created a few helpers. Both approaches use start_here.sh as the entry point. Since there are a few quirks when calling the specific code, this file contains all the commands needed to run it. All you need to do is uncomment the relevant lines and run the script:
```shell
train_paper_finder=1
create_for_app=1
# skip_train_paper_finder=1
```
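A minimal sketch of how a script like start_here.sh could dispatch on these flags; the echoed messages and the commands hinted at in the comments are hypothetical, not the project's actual ones:

```shell
#!/usr/bin/env bash
# Hypothetical dispatch logic: uncommented flags select which steps run.
train_paper_finder=1
create_for_app=1
# skip_train_paper_finder=1

if [ "${train_paper_finder:-0}" = "1" ] && [ "${skip_train_paper_finder:-0}" != "1" ]; then
    echo "training paper finder"          # e.g. poetry run python train.py
fi
if [ "${create_for_app:-0}" = "1" ]; then
    echo "creating files for the web app" # e.g. poetry run python export.py
fi
```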
You first need to install Python Poetry. Then, you can install the dependencies and run the code:

```shell
poetry install
bash start_here.sh
```
To help with the Docker setup, I created a `Dockerfile` and a `Makefile`. The `Dockerfile` contains all the instructions to create the Docker image, and the `Makefile` contains the commands to build the image, run the container, and run the code inside the container. To build the image, simply run:
```shell
make
```
To call `start_here.sh` inside the container, run:

```shell
make run
```
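The two targets above could look roughly like this in the `Makefile`; the image name, mount path, and target names here are illustrative assumptions, not the project's actual values:

```makefile
# Hypothetical sketch of the Makefile. IMAGE and the volume mount
# are made-up placeholders.
IMAGE := paper-finder

build:
	docker build -t $(IMAGE) .

run: build
	docker run --rm -it -v $(PWD):/app $(IMAGE) bash start_here.sh

.PHONY: build run
```

Because `build` is the first target, a bare `make` builds the image, and `make run` starts a container that executes the entry-point script.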