llama : better replace_all #8852

Merged: 1 commit into master on Aug 5, 2024

Conversation

ggerganov (Owner)

fix #8841
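
For context, #8841 reports that `replace_all` can spin forever (e.g. when the search string is empty) and performs poorly on large inputs. A minimal sketch of the kind of guarded, single-pass rewrite this PR is about, not necessarily the exact merged code, might look like:

```cpp
#include <string>
#include <utility>

// Replace every occurrence of `search` in `s` with `replace`.
// Guarding against an empty `search` avoids the infinite-loop case from #8841,
// and building the result in a single pass avoids repeated in-place edits.
static void replace_all(std::string & s, const std::string & search, const std::string & replace) {
    if (search.empty()) {
        return; // nothing to search for; bail out instead of never advancing
    }
    std::string builder;
    builder.reserve(s.length());
    size_t last_pos = 0;
    size_t pos;
    while ((pos = s.find(search, last_pos)) != std::string::npos) {
        builder.append(s, last_pos, pos - last_pos); // chunk before the match
        builder.append(replace);                     // the replacement itself
        last_pos = pos + search.length();
    }
    builder.append(s, last_pos, std::string::npos);  // tail after the last match
    s = std::move(builder);
}
```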


@ngxson ngxson (Collaborator) left a comment


LGTM. Actually, I copied the original function from somewhere on the internet without thinking much about it (we always pass a constant string for search anyway, so at least for now search is never empty).

@ggerganov ggerganov merged commit f1ea514 into master Aug 5, 2024
54 checks passed
@LostRuins (Collaborator)

There's also another duplicate of the original function at https://github.com/ggerganov/llama.cpp/blob/master/src/llama-vocab.cpp#L19 which you might wish to replace as well.

arthw pushed a commit to arthw/llama.cpp that referenced this pull request Aug 7, 2024
@ggerganov ggerganov mentioned this pull request Aug 8, 2024
Successfully merging this pull request may close these issues:

Bug: Possible infinite loop and poor performance #8841