Add support for SentencePiece Tokenizers to add special tokens #222

Open
lllAlexanderlll (Contributor) opened this issue Aug 12, 2024 · 0 comments
Labels: enhancement (New feature or request)
Feature request

Add support for SentencePiece tokenizers to add special tokens, similar to the HuggingFace add_special_tokens method. As a first step, without also implementing resizing of the embedding matrix, it would be enough to allow marking tokens already present in the vocabulary as special tokens, so that they are always matched before non-special tokens during tokenization.
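
One possible shape for this first step, sketched below under my own assumptions (the wrapper name and its methods are hypothetical and not part of this project's API), is a thin wrapper around a SentencePieceProcessor: it records user-declared special tokens that must already exist in the vocabulary, splits the input text on those tokens first, and only then runs normal SentencePiece tokenization on the remaining spans.

```python
# Hypothetical sketch, not this project's actual API: treat vocabulary
# tokens as special tokens that are matched before normal tokenization.
import re
import sentencepiece as spm


class SpecialTokenWrapper:
    def __init__(self, model_file: str, special_tokens: list[str]):
        self.sp = spm.SentencePieceProcessor(model_file=model_file)
        # Only tokens already known to the vocabulary are accepted,
        # so no embedding-matrix resizing is required.
        self.special_ids = {}
        for tok in special_tokens:
            piece_id = self.sp.piece_to_id(tok)
            if piece_id == self.sp.unk_id():
                raise ValueError(f"{tok!r} is not in the vocabulary")
            self.special_ids[tok] = piece_id
        # Longest-first alternation so overlapping special tokens match greedily.
        pattern = "|".join(
            re.escape(t) for t in sorted(special_tokens, key=len, reverse=True)
        )
        self.special_re = re.compile(f"({pattern})")

    def encode(self, text: str) -> list[int]:
        ids = []
        # Split on special tokens first, then tokenize the remaining spans normally.
        for chunk in self.special_re.split(text):
            if not chunk:
                continue
            if chunk in self.special_ids:
                ids.append(self.special_ids[chunk])
            else:
                ids.extend(self.sp.encode(chunk, out_type=int))
        return ids
```

This mirrors how HuggingFace tokenizers handle added special tokens (special tokens are split off before the underlying model tokenizes the rest), but only for tokens the SentencePiece vocabulary already contains.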

Motivation

This feature is needed if we pre-train a model with a SentencePiece tokenizer and later want to, e.g., instruction-tune it. Without it, we must first convert the model and tokenizer to a HuggingFace model and tokenizer.

lllAlexanderlll added the enhancement (New feature or request) label on Aug 12, 2024