We tried to reproduce OAG-BERT's downstream task, Author Name Disambiguation, but the results we obtained were far from those given in the paper.
The paper states: "apply the embeddings generated by pre-trained models to solve name disambiguation from scratch", and unsupervised results are reported in Table 1. So I tried to verify this with OAG-BERT-v2, but the results are very poor: when using only the title, the Macro Pairwise F1 score is < 0.2.
At the same time, the paper's GitHub repository open-sources another version, OAG-BERT-sim. I ran the same verification with this model and found the results consistent with those in Table 1 of the paper, which confuses me, because according to GitHub, OAG-BERT-sim is the product of a supervised fine-tuning task.
So, how can I reproduce the results in Table 1?
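For reference, this is how I understand the Macro Pairwise F1 metric mentioned above: per name block, papers by the same author form positive pairs, and the per-block pairwise F1 scores are averaged. The paper's exact evaluation script may differ, so this is a minimal sketch of my assumed metric, not the authors' code:

```python
from itertools import combinations

def pairwise_prf(true_labels, pred_labels):
    """Pairwise precision/recall/F1 for one name block.

    A pair (i, j) of papers counts as positive when both
    carry the same label in the given assignment.
    """
    idx = range(len(true_labels))
    true_pairs = {(i, j) for i, j in combinations(idx, 2)
                  if true_labels[i] == true_labels[j]}
    pred_pairs = {(i, j) for i, j in combinations(idx, 2)
                  if pred_labels[i] == pred_labels[j]}
    tp = len(true_pairs & pred_pairs)
    precision = tp / len(pred_pairs) if pred_pairs else 0.0
    recall = tp / len(true_pairs) if true_pairs else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

def macro_pairwise_f1(blocks):
    """Average the per-block pairwise F1 over all name blocks.

    `blocks` is a list of (true_labels, pred_labels) tuples,
    one tuple per ambiguous author name.
    """
    scores = [pairwise_prf(t, p)[2] for t, p in blocks]
    return sum(scores) / len(scores)
```

Under this definition, a score < 0.2 means the clusters induced by the OAG-BERT-v2 title embeddings agree with the ground-truth authorship on very few paper pairs.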
@Somefive