
experimental result #1

Open
xiehou-design opened this issue Jun 27, 2023 · 2 comments
@xiehou-design

Hi, your work is very interesting! But I ran into some problems while following the README to reproduce the project. I ran train_eae.sh with CONFIG set to config_eae_ace05e_AMRBart+prefixgen+copy_enccrossprefix_tp40.json, changing only the pretrained-model directory paths in the JSON, but the results I get are lower than those reported in the paper.
The final part of the training log is as follows:

[2023-06-27 15:41:31] - __main__ - Epoch 45
Train 45: 100%|████████████████████████| 1051/1051 [05:07<00:00,  3.42it/s]
[2023-06-27 15:46:38] - __main__ - Average training loss : 0.039728544652462006...
Dev 45: 100%|██████████████████████████████| 38/38 [00:26<00:00,  1.45it/s]
[2023-06-27 15:47:05] - __main__ - --------------------------Dev Scores---------------------------------
[2023-06-27 15:47:05] - __main__ - Role I     - P: 70.79 ( 429/ 606), R: 54.79 ( 429/ 783), F: 61.77
[2023-06-27 15:47:05] - __main__ - Role C     - P: 66.01 ( 400/ 606), R: 51.09 ( 400/ 783), F: 57.60
[2023-06-27 15:47:05] - __main__ - ---------------------------------------------------------------------
[2023-06-27 15:47:05] - __main__ - {'epoch': 45, 'dev_scores': {'arg_id': (0.7079207920792079, 0.5478927203065134, 0.6177105831533477), 'arg_cls': (0.6600660066006601, 0.5108556832694764, 0.5759539236861051)}}
[2023-06-27 15:47:05] - __main__ - Current best
[2023-06-27 15:47:05] - __main__ - {'best_epoch': 36, 'best_scores': {'arg_id': (0.7207357859531772, 0.5504469987228607, 0.6241853729181752), 'arg_cls': (0.6822742474916388, 0.5210727969348659, 0.5908761766835626)}}
[2023-06-27 15:47:05] - __main__ - ./output/eae_ace05e_AMRBart+prefixgen+copy_enccrossprefix_tp40/20230627_101723/train.log
[2023-06-27 15:47:05] - __main__ - Done!

The contents of config_eae_ace05e_AMRBart+prefixgen+copy_enccrossprefix_tp40.json are as follows:

{
    "dataset": "ace05e",
    "gpu_device": 0,
    "seed": 100,
    "train_file": "./processed_data/ace05e_bart/train.w1.oneie.json",
    "dev_file": "./processed_data/ace05e_bart/dev.w1.oneie.json",
    "test_file": "./processed_data/ace05e_bart/test.w1.oneie.json",
    "finetune_dir": "./data/eae_ace05e/",
    "train_finetune_file": "./data/eae_ace05e/train_all.pkl",
    "dev_finetune_file": "./data/eae_ace05e/dev_all.pkl",
    "test_finetune_file": "./data/eae_ace05e/test_all.pkl", 
    "vocab_file": "./data/eae_ace05e/vocab.json",
    "model_type": "AMR+prefixgen+copy",
    "output_dir": "./output/eae_ace05e_AMRBart+prefixgen+copy_enccrossprefix_tp40",
    "cache_dir": "./cache",
    "model_name": "/media/ubuntu/projects/pretrainModel/bart-large",
    "input_style": ["event_type_sent", "triggers", "template", "na_token"],   
    "output_style": ["argument:sentence"], 
    "max_epoch": 45,
    "warmup_epoch": 5,
    "train_batch_size": 4,
    "eval_batch_size": 12,
    "accumulate_step": 1,
    "learning_rate": 1e-05,
    "weight_decay": 1e-05,
    "grad_clipping": 5.0,
    "beam_size": 1,
    "max_length": 250,
    "max_output_length": 50,
    "ignore_first_header": true,
    "use_encoder_prefix": true,
    "use_cross_prefix": true,
    "use_decoder_prefix": false,
    "AMR_model_path": "/media/ubuntu/projects/pretrainModel/AMRBART-large-finetuned-AMR3.0-AMR2Text-v2",
    "latent_dim": 1024,
    "prefix_length": 40,
    "freeze_AMR": false,
    "freeze_bart": false,
    "freeze_prefixprojector": false,
    "pretrained_model_path": null
}
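
Since only the pretrained-model paths were changed, a quick way to rule out a path problem is to confirm that every local path referenced in the config resolves before training. The snippet below is a minimal sketch, not part of the repository; the config filename is the one used in this run, and its location is an assumption (adjust it if the file lives elsewhere):

```python
import json
import os

# Hypothetical helper, not part of the repo: load the config used in this run
# and report which of the referenced local paths actually exist.
CONFIG = "config_eae_ace05e_AMRBart+prefixgen+copy_enccrossprefix_tp40.json"

with open(CONFIG) as f:
    cfg = json.load(f)

path_keys = ["model_name", "AMR_model_path", "train_file", "dev_file",
             "test_file", "vocab_file", "finetune_dir"]
for key in path_keys:
    value = cfg.get(key)
    status = "OK" if value and os.path.exists(value) else "MISSING"
    print(f"{key:>18}: {status}  {value}")
```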
@xiehou-design (Author)

And after replacing CONFIG with ``, the final part of the log is as follows:

Epoch 45
[2023-06-27 19:51:12] - __main__ - Average training loss : 0.037102121859788895...
[2023-06-27 19:51:41] - __main__ - --------------------------Dev Scores---------------------------------
[2023-06-27 19:51:41] - __main__ - Role I     - P: 71.85 ( 434/ 604), R: 55.43 ( 434/ 783), F: 62.58
[2023-06-27 19:51:41] - __main__ - Role C     - P: 67.05 ( 405/ 604), R: 51.72 ( 405/ 783), F: 58.40
[2023-06-27 19:51:41] - __main__ - ---------------------------------------------------------------------
[2023-06-27 19:51:41] - __main__ - {'epoch': 45, 'dev_scores': {'arg_id': (0.7185430463576159, 0.5542784163473818, 0.6258111031002163), 'arg_cls': (0.6705298013245033, 0.5172413793103449, 0.5839942321557319)}}
[2023-06-27 19:51:41] - __main__ - Current best
[2023-06-27 19:51:41] - __main__ - {'best_epoch': 32, 'best_scores': {'arg_id': (0.7337770382695508, 0.5632183908045977, 0.6372832369942195), 'arg_cls': (0.6921797004991681, 0.5312899106002554, 0.6011560693641619)}}
[2023-06-27 19:51:41] - __main__ - ./output/eae_ace05e_AMRRoberta+prefixgen+copy_enccrossprefix_tp40/20230627_155213/train.log
[2023-06-27 19:51:41] - __main__ - Done!

The resulting F1 is low! Looking forward to your reply.

@simon-p-j-r

There is a method called amr_preprocess in data.py, and there is a path inside it that needs to be changed. The code will still run without that change, but as the code comment says, "it's basically a baseline model", so please check whether you updated it. Which GPU you used also matters a lot.
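
To locate that path quickly, a hypothetical check like the one below can help; the file name and search strings are assumptions for illustration, not tooling from the repo. It prints every line of data.py that mentions amr_preprocess or looks like a hard-coded model directory, so you can verify whether the path was updated:

```python
# Hypothetical check, not part of the repo: list lines in data.py that mention
# amr_preprocess or resemble hard-coded pretrained-model paths, so the path the
# comment above refers to can be found and verified.
needles = ("amr_preprocess", "pretrainModel", "AMRBART")

with open("data.py", encoding="utf-8") as f:
    for lineno, line in enumerate(f, 1):
        if any(n in line for n in needles):
            print(f"{lineno}: {line.rstrip()}")
```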
