Commit

Fix/better llm response selector (#498)
* moved prompt with task to the end

* openass instead of chatgpt

* longer context and slightly different prompt for transformers

* chatgpt as default selector

* revert proxy file

* revert docker-compose file

* removed configs from response selection

* config fix
smilni authored Jun 27, 2023
1 parent 23f9702 commit 24e159a
Showing 6 changed files with 12 additions and 30 deletions.
8 changes: 8 additions & 0 deletions common/generative_configs/generative_config_long.json
@@ -0,0 +1,8 @@
{
"max_new_tokens": 256,
"min_new_tokens": 8,
"top_p": 0.9,
"temperature": 0.9,
"do_sample": true,
"num_return_sequences": 2
}
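The keys in this new config mirror common keyword arguments of HuggingFace-style text generation (`max_new_tokens`, `top_p`, `do_sample`, and so on), which suggests the file is meant to be loaded and passed through to the generation call as-is. A minimal sketch of that pattern, with the JSON inlined so it runs standalone (the pass-through usage is an assumption, not shown in this diff):

```python
import json

# The config added in this commit, inlined for a self-contained example.
# In the repo it would be read from
# common/generative_configs/generative_config_long.json instead.
CONFIG_JSON = """{
    "max_new_tokens": 256,
    "min_new_tokens": 8,
    "top_p": 0.9,
    "temperature": 0.9,
    "do_sample": true,
    "num_return_sequences": 2
}"""

config = json.loads(CONFIG_JSON)

# Because the keys match generation kwargs, the dict could plausibly be
# splatted straight into a call like model.generate(**inputs, **config).
print(sorted(config))
```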

4 files deleted: the generative configs removed from response selection (file names not shown in this view).

5 changes: 4 additions & 1 deletion response_selectors/llm_based_response_selector/server.py
@@ -58,7 +58,10 @@ def select_response_by_scores(hypotheses, scores):

def select_response(dialog_context, hypotheses, human_uttr_attributes):
try:
-        curr_prompt = PROMPT + "\nHypotheses:\n" + "\n".join([f'"{hyp["text"]}"' for hyp in hypotheses])
+        if "transformers" in GENERATIVE_SERVICE_URL:
+            curr_prompt = "Hypotheses:\n" + "\n".join([f'"{hyp["text"]}"' for hyp in hypotheses]) + "\n" + PROMPT
+        else:
+            curr_prompt = PROMPT + "\nHypotheses:\n" + "\n".join([f'"{hyp["text"]}"' for hyp in hypotheses])
logger.info(f"llm_based_response_selector sends dialog context to llm:\n`{dialog_context}`")
logger.info(f"llm_based_response_selector sends prompt to llm:\n`{curr_prompt}`")

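The change above branches on the generative service URL: for local `transformers` models the task instruction is moved after the hypotheses ("moved prompt with task to the end" in the commit message), while API-based LLMs such as ChatGPT keep the original order. A self-contained sketch of that logic (the `PROMPT` text and the URLs in the usage check are stand-ins, not values from the repo):

```python
# Stand-in for the module-level PROMPT used by the real selector.
PROMPT = "Select the best hypothesis for the dialog context."


def build_selector_prompt(hypotheses, generative_service_url):
    """Assemble the response-selector prompt, ordering the task
    instruction differently depending on the backing LLM service."""
    hyps = "\n".join(f'"{hyp["text"]}"' for hyp in hypotheses)
    if "transformers" in generative_service_url:
        # Local transformers models: hypotheses first, task last.
        return "Hypotheses:\n" + hyps + "\n" + PROMPT
    # API-based LLMs (e.g. ChatGPT): task first, then hypotheses.
    return PROMPT + "\nHypotheses:\n" + hyps
```

For example, with two hypotheses and a hypothetical `transformers` service URL, the returned prompt ends with `PROMPT`; with any other URL it starts with `PROMPT`.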
