
Fix the clm-prompt-tuning example that was causing unequal lengths in the label token ids #487

Open

wants to merge 1 commit into base: main

Conversation

bpkapkar


What does this PR do?

This PR corrects a code snippet so that label token ids are padded or truncated based on their own length, in line with the practice recommended in the Hugging Face documentation for prompt-based methods and CLM prompt tuning. The correction resolves issues caused by unequal lengths between input and label token ids.

The code snippet needs a correction in the following line:

labels["input_ids"][i] = [-100] * (max_length - len(sample_input_ids)) + label_input_ids

Change it to:

labels["input_ids"][i] = [-100] * (max_length - len(label_input_ids)) + label_input_ids

This adjustment pads or truncates the label token ids based on their own length, which avoids unequal lengths between the input and label token ids. The same correction is needed in the documentation at https://huggingface.co/docs/peft/main/en/task_guides/prompt_based_methods and https://huggingface.co/docs/peft/main/en/task_guides/clm-prompt-tuning.
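For context, here is a minimal sketch of the padding loop from the linked guide with the proposed one-line fix applied. The surrounding names (batch_size, max_length, tokenizer, model_inputs, labels) are assumed to be defined as in the guide's preprocess_function; this illustrates where the change lands, it is not the full example.

```python
import torch

# Sketch of the second (padding) loop in the guide's preprocess_function.
# Assumes batch_size, max_length, tokenizer, model_inputs, and labels
# were already built as in the linked docs.
for i in range(batch_size):
    sample_input_ids = model_inputs["input_ids"][i]
    label_input_ids = labels["input_ids"][i]
    # Left-pad the inputs and attention mask up to max_length.
    pad_len = max_length - len(sample_input_ids)
    model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * pad_len + sample_input_ids
    model_inputs["attention_mask"][i] = [0] * pad_len + model_inputs["attention_mask"][i]
    # Proposed fix: pad the labels by their own length (len(label_input_ids)),
    # not len(sample_input_ids), so every labels row is exactly max_length long.
    labels["input_ids"][i] = [-100] * (max_length - len(label_input_ids)) + label_input_ids
    # Truncate everything to max_length and convert to tensors.
    model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length])
    model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length])
    labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length])
```

With the original line, the padded labels row has length (max_length - len(sample_input_ids)) + len(label_input_ids), so it comes out longer or shorter than max_length whenever the two lists differ in length; padding by len(label_input_ids) guarantees each row is exactly max_length before truncation.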
Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.

@bpkapkar
Author

Has anyone had a chance to check and review this PR?
PyTorch NLP & Accelerate: @sgugger
Tokenizers: @n1t0, @Narsil
huggingface_hub: @muellerzr, @LysandreJik
