
llama: improve tokenization #2

Open
2 tasks
iboB opened this issue Oct 1, 2024 · 0 comments


iboB commented Oct 1, 2024

Currently, Vocab::tokenize is hard to use and inflexible:

  • It takes several relatively obscure arguments which are hard for users to understand
  • It returns a new vector, which is easy to use but requires an allocation on every invocation

So:

  • Create wrapper methods which hide the argument complexity
  • Allow output arguments for the tokenized buffer (see the sketch below)
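
A minimal sketch of what this could look like, assuming a C++ `Vocab` class; the `Token` alias, the flag names (`addSpecial`, `parseSpecial`), and the exact overload set are illustrative assumptions, not the current ac-local signatures:

```cpp
#include <cstdint>
#include <string_view>
#include <vector>

using Token = std::int32_t; // token id type, assumed for this sketch

struct Vocab {
    // Low-level entry point: keeps the detailed knobs and appends into a
    // caller-provided buffer so repeated calls can reuse one allocation.
    void tokenize(std::string_view text, bool addSpecial, bool parseSpecial,
                  std::vector<Token>& out) const;

    // Wrapper with sensible defaults: no obscure flags for the caller to learn.
    void tokenize(std::string_view text, std::vector<Token>& out) const {
        tokenize(text, /*addSpecial=*/true, /*parseSpecial=*/false, out);
    }

    // Wrapper returning a fresh vector for one-off callers who prefer
    // convenience over avoiding the allocation.
    std::vector<Token> tokenize(std::string_view text) const {
        std::vector<Token> out;
        tokenize(text, out);
        return out;
    }
};

// Stub body so the sketch is self-contained; the real tokenizer is elided.
void Vocab::tokenize(std::string_view text, bool /*addSpecial*/,
                     bool /*parseSpecial*/, std::vector<Token>& out) const {
    for (char c : text) out.push_back(static_cast<Token>(c));
}
```

A hot loop could then hold one `std::vector<Token>` and pass it to the buffer overload, while one-off callers keep the simple return-by-value form.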
iboB added the enhancement label on Oct 1, 2024
iboB self-assigned this on Oct 1, 2024
iboB transferred this issue from alpaca-core/ac-local on Nov 18, 2024
Labels: enhancement (New feature or request)
Project status: Todo