In this model you can have several prompts to perform different tasks.
When you train or generate text with a prompt, you pass the model a special token sequence that consists of two parts. The important one is the first part: dummy tokens standing in for the prompt, with length equal to the prompt size. These tokens are not looked up in the embedding matrix; instead they are replaced by the required learned prompt.
So, to train with or use a particular prefix, you pass its index as the first token of the sequence described above.
For N prefixes (prompts) the index lies in [0, N-1].
Here the prefix index for each item in the batch is mapped to its specific trainable parameter: https://github.com/exelents/soft-prompt-tuning/blob/main/soft_embedding.py#L65
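To make the mechanism concrete, here is a minimal sketch of a multi-prompt soft-embedding layer. It is not the repo's exact code: it assumes the layer wraps the model's word-embedding matrix `wte`, stores `n_prompts` separate trainable prompts of `n_tokens` each, and reads the prompt index from the first token of every sequence in the batch.

```python
import torch
import torch.nn as nn

class MultiPromptSoftEmbedding(nn.Module):
    """Sketch: one trainable soft prompt per task, selected per batch item."""

    def __init__(self, wte: nn.Embedding, n_prompts: int, n_tokens: int):
        super().__init__()
        self.wte = wte              # the frozen word-embedding matrix of the LM
        self.n_tokens = n_tokens    # length of each soft prompt
        # One trainable prompt per task: [n_prompts, n_tokens, embed_dim]
        self.learned_embedding = nn.Parameter(
            torch.randn(n_prompts, n_tokens, wte.embedding_dim) * 0.5
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: [batch, n_tokens + seq_len]
        # tokens[:, 0] holds the prefix index (0..n_prompts-1),
        # tokens[:, 1:n_tokens] are dummy placeholders,
        # tokens[:, n_tokens:] are the real input ids.
        prompt_idx = tokens[:, 0]                               # [batch]
        input_embedding = self.wte(tokens[:, self.n_tokens:])   # real tokens
        prompt_embedding = self.learned_embedding[prompt_idx]   # [batch, n_tokens, dim]
        # Replace the dummy block with the selected learned prompt.
        return torch.cat([prompt_embedding, input_embedding], dim=1)
```

In this sketch, during both training and inference you set the first token of each sequence to the task's prefix index; `n_prompts` is therefore the number of tasks/prompts, not the batch size, and the batch size can change freely after training.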
As the README states, the soft embedding code for prompt tuning comes from the repo here.
However, there are a few key changes, most notably the new parameter "n_prompts". Could you please explain what this is and how it is used?
I had a few guesses. Is it there to allow batching? And if so, must we always use the same batch size after we train it?