
Added Llama3-PBM-Nova-70B model #395

Merged
merged 7 commits into tatsu-lab:main on Aug 24, 2024
Conversation

PKU-Baichuan (Contributor)
Can you add our new model, Llama3-PBM-Nova-70B, to the leaderboard?

Llama3-PBM-Nova-70B was developed with carefully designed supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), building on the Meta-Llama-3-70B model.

@@ -1,10 +1,10 @@
,win_rate,standard_error,n_wins,n_wins_base,n_draws,n_total,discrete_win_rate,mode,avg_length,length_controlled_winrate,lc_standard_error
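For reference, the leaderboard file is a plain CSV whose header row is the one shown in the diff above. A minimal sketch of reading one entry with Python's `csv` module — the sample data row and its values below are invented for illustration; only the header comes from the diff:

```python
import csv
import io

# Header taken from the diff above; the data row is a made-up example.
sample = (
    ",win_rate,standard_error,n_wins,n_wins_base,n_draws,n_total,"
    "discrete_win_rate,mode,avg_length,length_controlled_winrate,lc_standard_error\n"
    "Llama3-PBM-Nova-70B,50.0,1.5,400,390,15,805,50.6,community,2000,40.0,0.8\n"
)

# The first column has no name, so DictReader keys it under the empty string;
# that unnamed column holds the model name.
row = next(csv.DictReader(io.StringIO(sample)))
print(row[""], row["win_rate"], row["length_controlled_winrate"])
```

In a real submission the row for the new model is appended to the existing file; the earlier rows must be left intact, which is what the review comment below is about.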
Collaborator


Why remove all those models? Maybe you haven't rebased on main?
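The reviewer's point is that a feature branch cut from an outdated main can look as if it deletes rows that were added upstream in the meantime; rebasing replays only the branch's own commits on top of the current main. A toy demonstration — repository layout, file names, and branch names here are invented, not the actual alpaca_eval history:

```shell
# Toy repo showing why rebasing on main restores "removed" entries:
# the feature branch replays only its own commit onto the updated main.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -qb main
git config user.email pr@example.com
git config user.name  pr-author

printf 'model_a\n' > leaderboard.csv
git add leaderboard.csv
git commit -qm 'initial leaderboard'

git checkout -qb add-model                 # contributor branches from old main
printf 'new model results\n' > results_new_model.csv
git add results_new_model.csv
git commit -qm 'add new model results'

git checkout -q main                       # meanwhile main gains a model
printf 'model_b\n' >> leaderboard.csv
git commit -qam 'add another model'

git checkout -q add-model
git rebase -q main                         # replay our commit onto new main
cat leaderboard.csv                        # both upstream models survive
ls results_new_model.csv                   # and our new file is still there
```

The equivalent fix in a fork is the usual `git fetch upstream && git rebase upstream/main` followed by a force-push of the PR branch.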

PKU-Baichuan (Contributor, Author)


Hi, I just corrected the file.

@YannDubs YannDubs merged commit fe99c17 into tatsu-lab:main Aug 24, 2024
2 checks passed
LLM-Alignment-sh pushed a commit to LLM-Alignment-sh/alpaca_eval that referenced this pull request Aug 28, 2024
* Add files via upload

* Add files via upload

* Delete src/alpaca_eval/leaderboards/data_AlpacaEval_2/weighted_alpaca_eval_gpt4_turbo_leaderboard.csv

* Add files via upload

* Delete results/Llama3-PBM-Nova-70B/weighted_alpaca_eval_gpt4_turbo/leaderboard.csv

* Delete src/alpaca_eval/leaderboards/data_AlpacaEval_2/weighted_alpaca_eval_gpt4_turbo_leaderboard.csv

* Add files via upload