From f99725adbc9cef2fd6cc6eef18a86e3d7c1e5339 Mon Sep 17 00:00:00 2001
From: Chao Ma
Date: Thu, 23 May 2019 10:38:05 +0800
Subject: [PATCH] all demo use python-3 (#555)

---
 examples/mxnet/gat/README.md           |  2 +-
 examples/mxnet/rgcn/README.md          |  6 +++---
 examples/mxnet/sampling/README.md      | 18 +++++++++---------
 examples/mxnet/tree_lstm/README.md     |  2 +-
 examples/pytorch/appnp/README.md       |  2 +-
 examples/pytorch/capsule/README.md     |  4 ++--
 examples/pytorch/dgi/README.md         |  6 +++---
 examples/pytorch/dgmg/README.md        |  4 ++--
 examples/pytorch/gat/README.md         |  8 ++++----
 examples/pytorch/gcn/README.md         |  2 +-
 examples/pytorch/gin/README.md         |  6 +++---
 examples/pytorch/graphsage/README.md   |  2 +-
 examples/pytorch/line_graph/README.md  |  4 ++--
 examples/pytorch/sampling/README.md    | 12 ++++++------
 examples/pytorch/sgc/README.md         |  6 +++---
 examples/pytorch/transformer/README.md |  4 ++--
 examples/pytorch/tree_lstm/README.md   |  2 +-
 17 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/examples/mxnet/gat/README.md b/examples/mxnet/gat/README.md
index 19bea5a88d6c..4b525b2dd55e 100644
--- a/examples/mxnet/gat/README.md
+++ b/examples/mxnet/gat/README.md
@@ -19,5 +19,5 @@ pip install requests
 
 ### Usage (make sure that DGLBACKEND is changed into mxnet)
 ```bash
-DGLBACKEND=mxnet python gat_batch.py --dataset cora --gpu 0 --num-heads 8
+DGLBACKEND=mxnet python3 gat_batch.py --dataset cora --gpu 0 --num-heads 8
 ```
diff --git a/examples/mxnet/rgcn/README.md b/examples/mxnet/rgcn/README.md
index 1ebb1af878e5..3c8c772d1c3e 100644
--- a/examples/mxnet/rgcn/README.md
+++ b/examples/mxnet/rgcn/README.md
@@ -22,15 +22,15 @@ Example code was tested with rdflib 4.2.2 and pandas 0.23.4
 ### Entity Classification
 AIFB: accuracy 97.22% (DGL), 95.83% (paper)
 ```
-DGLBACKEND=mxnet python entity_classify.py -d aifb --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d aifb --testing --gpu 0
 ```
 
 MUTAG: accuracy 76.47% (DGL), 73.23% (paper)
 ```
-DGLBACKEND=mxnet python entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
 ```
 
 BGS: accuracy 79.31% (DGL, n-basese=20, OOM when >20), 83.10% (paper)
 ```
-DGLBACKEND=mxnet python entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
+DGLBACKEND=mxnet python3 entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
 ```
diff --git a/examples/mxnet/sampling/README.md b/examples/mxnet/sampling/README.md
index 278c7302c34b..de2cb46605db 100644
--- a/examples/mxnet/sampling/README.md
+++ b/examples/mxnet/sampling/README.md
@@ -15,44 +15,44 @@ pip install mxnet --pre
 ### Neighbor Sampling & Skip Connection
 cora: test accuracy ~83% with `--num-neighbors 2`, ~84% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
 ```
 
 citeseer: test accuracy ~69% with `--num-neighbors 2`, ~70% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
 ```
 
 pubmed: test accuracy ~78% with `--num-neighbors 3`, ~77% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
 ```
 
 reddit: test accuracy ~91% with `--num-neighbors 3` and `--batch-size 1000`, ~93% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
 ```
 
 
 ### Control Variate & Skip Connection
 cora: test accuracy ~84% with `--num-neighbors 1`, ~84% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
 ```
 
 citeseer: test accuracy ~69% with `--num-neighbors 1`, ~70% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
 ```
 
 pubmed: test accuracy ~79% with `--num-neighbors 1`, ~77% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
 ```
 
 reddit: test accuracy ~93% with `--num-neighbors 1` and `--batch-size 1000`, ~93% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
 ```
 
 ### Control Variate & GraphSAGE-mean
@@ -61,7 +61,7 @@ Following [Control Variate](https://arxiv.org/abs/1710.10568), we use the mean p
 
 reddit: test accuracy 96.1% with `--num-neighbors 1` and `--batch-size 1000`, ~96.2% in [Control Variate](https://arxiv.org/abs/1710.10568) with `--num-neighbors 2` and `--batch-size 1000`
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
 ```
 
 ### Run multi-processing training
diff --git a/examples/mxnet/tree_lstm/README.md b/examples/mxnet/tree_lstm/README.md
index caee485806ab..c8113a37e342 100644
--- a/examples/mxnet/tree_lstm/README.md
+++ b/examples/mxnet/tree_lstm/README.md
@@ -21,7 +21,7 @@ The script will download the [SST dataset] (http://nlp.stanford.edu/sentiment/in
 
 ## Usage
 ```
-python train.py --gpu 0
+python3 train.py --gpu 0
 ```
 
 ## Speed Test
diff --git a/examples/pytorch/appnp/README.md b/examples/pytorch/appnp/README.md
index 20c4720f70b3..d31a2414ec9a 100644
--- a/examples/pytorch/appnp/README.md
+++ b/examples/pytorch/appnp/README.md
@@ -22,7 +22,7 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python train.py --dataset cora --gpu 0
+python3 train.py --dataset cora --gpu 0
 ```
 
 * cora: 0.8370 (paper: 0.850)
diff --git a/examples/pytorch/capsule/README.md b/examples/pytorch/capsule/README.md
index bfa80e1779c4..7b940aae4f81 100644
--- a/examples/pytorch/capsule/README.md
+++ b/examples/pytorch/capsule/README.md
@@ -17,7 +17,7 @@ Training & Evaluation
 ----------------------
 ```bash
 # Run with default config
-python main.py
+python3 main.py
 # Run with train and test batch size 128, and for 50 epochs
-python main.py --batch-size 128 --test-batch-size 128 --epochs 50
+python3 main.py --batch-size 128 --test-batch-size 128 --epochs 50
 ```
diff --git a/examples/pytorch/dgi/README.md b/examples/pytorch/dgi/README.md
index 03de214a6f15..f02bca818c7e 100644
--- a/examples/pytorch/dgi/README.md
+++ b/examples/pytorch/dgi/README.md
@@ -20,15 +20,15 @@ How to run
 Run with following:
 
 ```bash
-python train.py --dataset=cora --gpu=0 --self-loop
+python3 train.py --dataset=cora --gpu=0 --self-loop
 ```
 
 ```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
 ```
 
 ```bash
-python train.py --dataset=pubmed --gpu=0
+python3 train.py --dataset=pubmed --gpu=0
 ```
 
 Results
diff --git a/examples/pytorch/dgmg/README.md b/examples/pytorch/dgmg/README.md
index 0d3fbe3fa73b..3d57b0029c5f 100644
--- a/examples/pytorch/dgmg/README.md
+++ b/examples/pytorch/dgmg/README.md
@@ -10,8 +10,8 @@ Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia.
 
 ## Usage
 
-- Train with batch size 1: `python main.py`
-- Train with batch size larger than 1: `python main_batch.py`.
+- Train with batch size 1: `python3 main.py`
+- Train with batch size larger than 1: `python3 main_batch.py`.
 
 ## Performance
 
diff --git a/examples/pytorch/gat/README.md b/examples/pytorch/gat/README.md
index a1f79ae8ef23..2380a68e3cad 100644
--- a/examples/pytorch/gat/README.md
+++ b/examples/pytorch/gat/README.md
@@ -23,19 +23,19 @@ How to run
 Run with following:
 
 ```bash
-python train.py --dataset=cora --gpu=0
+python3 train.py --dataset=cora --gpu=0
 ```
 
 ```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
 ```
 
 ```bash
-python train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
+python3 train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
 ```
 
 ```bash
-python train_ppi.py --gpu=0
+python3 train_ppi.py --gpu=0
 ```
 
 Results
diff --git a/examples/pytorch/gcn/README.md b/examples/pytorch/gcn/README.md
index ea621c066301..5fa2e3c25c77 100644
--- a/examples/pytorch/gcn/README.md
+++ b/examples/pytorch/gcn/README.md
@@ -28,7 +28,7 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python train.py --dataset cora --gpu 0 --self-loop
+python3 train.py --dataset cora --gpu 0 --self-loop
 ```
 
 * cora: ~0.810 (0.79-0.83) (paper: 0.815)
diff --git a/examples/pytorch/gin/README.md b/examples/pytorch/gin/README.md
index 109f0884e25d..e611ff560c38 100644
--- a/examples/pytorch/gin/README.md
+++ b/examples/pytorch/gin/README.md
@@ -20,12 +20,12 @@ How to run
 
 An experiment on the GIN in default settings can be run with
 ```bash
-python main.py
+python3 main.py
 ```
 
 An experiment on the GIN in customized settings can be run with
 ```bash
-python main.py [--device 0 | --disable-cuda] --dataset COLLAB \
+python3 main.py [--device 0 | --disable-cuda] --dataset COLLAB \
   --graph_pooling_type max --neighbor_pooling_type sum
 ```
 
@@ -35,7 +35,7 @@ Results
 
 Run with following with the double SUM pooling way: (tested dataset: "MUTAG"(default), "COLLAB", "IMDBBINARY", "IMDBMULTI")
 ```bash
-python train.py --dataset MUTAB --device 0 \
+python3 train.py --dataset MUTAB --device 0 \
   --graph_pooling_type sum --neighbor_pooling_type sum
 ```
 
diff --git a/examples/pytorch/graphsage/README.md b/examples/pytorch/graphsage/README.md
index 240d424de96d..eae32929fb73 100644
--- a/examples/pytorch/graphsage/README.md
+++ b/examples/pytorch/graphsage/README.md
@@ -19,7 +19,7 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python graphsage.py --dataset cora --gpu 0
+python3 graphsage.py --dataset cora --gpu 0
 ```
 
 * cora: ~0.8470
diff --git a/examples/pytorch/line_graph/README.md b/examples/pytorch/line_graph/README.md
index 88281befc755..af90425821cc 100644
--- a/examples/pytorch/line_graph/README.md
+++ b/examples/pytorch/line_graph/README.md
@@ -22,12 +22,12 @@ How to run
 
 An experiment on the Stochastic Block Model in default settings can be run with
 ```bash
-python train.py
+python3 train.py
 ```
 
 An experiment on the Stochastic Block Model in customized settings can be run with
 ```bash
-python train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
+python3 train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
   --n-features N_FEATURES --n-graphs N_GRAPH --n-iterations N_ITERATIONS \
   --n-layers N_LAYER --n-nodes N_NODE --model-path MODEL_PATH --radius RADIUS
 ```
diff --git a/examples/pytorch/sampling/README.md b/examples/pytorch/sampling/README.md
index b4e14e35d266..d70d5cfc982b 100644
--- a/examples/pytorch/sampling/README.md
+++ b/examples/pytorch/sampling/README.md
@@ -16,32 +16,32 @@ pip install torch requests
 ### Neighbor Sampling & Skip Connection
 cora: test accuracy ~83% with --num-neighbors 2, ~84% by training on the full graph
 ```
-python gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 citeseer: test accuracy ~69% with --num-neighbors 2, ~70% by training on the full graph
 ```
-python gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 pubmed: test accuracy ~76% with --num-neighbors 3, ~77% by training on the full graph
 ```
-python gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 ### Control Variate & Skip Connection
 cora: test accuracy ~84% with --num-neighbors 1, ~84% by training on the full graph
 ```
-python gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 citeseer: test accuracy ~69% with --num-neighbors 1, ~70% by training on the full graph
 ```
-python gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 pubmed: test accuracy ~77% with --num-neighbors 1, ~77% by training on the full graph
 ```
-python gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
diff --git a/examples/pytorch/sgc/README.md b/examples/pytorch/sgc/README.md
index 0c755221ae79..854517b1d260 100644
--- a/examples/pytorch/sgc/README.md
+++ b/examples/pytorch/sgc/README.md
@@ -22,9 +22,9 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python sgc.py --dataset cora --gpu 0
-python sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
-python sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
+python3 sgc.py --dataset cora --gpu 0
+python3 sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
+python3 sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
 ```
 
 On NVIDIA V100
diff --git a/examples/pytorch/transformer/README.md b/examples/pytorch/transformer/README.md
index 89dd502bfaf5..d9c75d963715 100644
--- a/examples/pytorch/transformer/README.md
+++ b/examples/pytorch/transformer/README.md
@@ -15,13 +15,13 @@ The folder contains training module and inferencing module (beam decoder) for Tr
 - For training:
 
   ```
-  python translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
+  python3 translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
   ```
 
 - For evaluating BLEU score on test set(by enabling `--print` to see translated text):
 
   ```
-  python translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
+  python3 translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
   ```
 
 Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).
diff --git a/examples/pytorch/tree_lstm/README.md b/examples/pytorch/tree_lstm/README.md
index d3493d168290..0e032db23802 100644
--- a/examples/pytorch/tree_lstm/README.md
+++ b/examples/pytorch/tree_lstm/README.md
@@ -24,7 +24,7 @@ pip install torch requests nltk
 
 ## Usage
 ```
-python train.py --gpu 0
+python3 train.py --gpu 0
 ```
 
 ## Speed