
Add LM Caching #31

Merged
merged 9 commits into main on Nov 12, 2024

Conversation

sidjha1 (Collaborator) commented on Nov 12, 2024:

Adds a cache to the LM object. The cache implements LRU eviction. Added tests to verify and demonstrate the behavior.
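
A minimal sketch of what an LRU cache over LM responses might look like, assuming string keys derived from prompts and a fixed capacity. The class and parameter names here (`LRUCache`, `max_size`) are illustrative, not the PR's actual implementation:

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache for LM responses, keyed by prompt (illustrative sketch)."""

    def __init__(self, max_size: int = 1024):
        self.max_size = max_size
        self._store: OrderedDict[str, str] = OrderedDict()

    def get(self, key: str) -> str | None:
        if key not in self._store:
            return None
        # Mark the entry as most recently used on every hit.
        self._store.move_to_end(key)
        return self._store[key]

    def insert(self, key: str, value: str) -> None:
        self._store[key] = value
        self._store.move_to_end(key)
        # Evict the least recently used entry once over capacity.
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)
```

An LM wrapper would consult the cache before issuing a request and insert the response afterward, so repeated prompts skip the API call entirely.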

sidjha1 requested a review from liana313 on November 12, 2024 at 07:09
liana313 (Collaborator) commented:

Rather than having a disable_cache() function, can we move it to a flag in configure, which is false by default?
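
A sketch of the flag-based alternative the reviewer suggests, assuming a module-level configure function; the names `enable_cache` and `_settings` are assumptions for illustration, not the library's actual API:

```python
# Illustrative sketch: `_settings` and `enable_cache` are assumed names.
_settings = {"enable_cache": False}  # caching off by default

def configure(enable_cache: bool = False, **kwargs) -> None:
    """Set global options; caching stays disabled unless opted in."""
    _settings["enable_cache"] = enable_cache
    _settings.update(kwargs)

# Usage: callers opt in explicitly instead of calling disable_cache().
configure(enable_cache=True)
```

This keeps all runtime options in one place and makes the default behavior explicit, instead of requiring a separate toggle function.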

liana313 merged commit 30ff6f7 into main on Nov 12, 2024
5 checks passed