To force the model to respond in JSON format, I am using the ExLlamaV2TokenEnforcerFilter and ExLlamaV2PrefixFilter classes, appending both to a filters list and passing that list as the filters when generating output from the model. Since my use cases are limited, I decided to cache both objects by storing them in a dict and reusing them. After doing this, I observed that system RAM utilization keeps increasing and after a few iterations it leads to an out-of-memory condition: the process usually needs 10-15 GB of system RAM, but over time the usage grows beyond 128 GB and causes OOM. While tracking down which class is responsible, I found that ExLlamaV2TokenEnforcerFilter is not releasing some captured memory, which is what creates the problem.
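For context, here is a minimal sketch of the caching pattern described above. It is an illustration only, not the actual code; names such as `model`, `tokenizer`, `generator`, `schema_name`, and `schema_json` are assumptions:

```python
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.exllamav2 import ExLlamaV2TokenEnforcerFilter
from exllamav2.generator.filters import ExLlamaV2PrefixFilter

# Cache of filter lists, keyed by schema name, reused across generations.
filter_cache = {}

def get_filters(schema_name, schema_json):
    if schema_name not in filter_cache:
        parser = JsonSchemaParser(schema_json)
        filter_cache[schema_name] = [
            ExLlamaV2TokenEnforcerFilter(parser, tokenizer),  # constrain tokens to the JSON schema
            ExLlamaV2PrefixFilter(model, tokenizer, "{"),     # force the output to start as a JSON object
        ]
    return filter_cache[schema_name]

# filters = get_filters("my_schema", my_schema_json)
# The filters are then attached to the generation call, e.g. via the sampler
# settings or the generator's `filters` argument, depending on the exllamav2
# API version in use.
```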
We tried reinitializing certain variables as below, but it did not reclaim any memory.
Hi,
LMFE by default caches all encountered prefixes. The prefix cache cannot be emptied if there are in-flight requests.
However, you can clear it from time to time. If you want to do that without modifying any code, you can do something like:
```python
filter = ExLlamaV2TokenEnforcerFilter(filter_schema_parser, self.tokenizer)
for i in range(10000):
    # use filter here
    if i % 100 == 0:
        filter.token_enforcer.prefix_states = {}  # this is the important line
```
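If the filters are kept in a dict and reused across requests (as in the setup described above), the same reset can be applied to every cached instance from time to time. A rough sketch, assuming a `filter_cache` dict that holds lists of filters:

```python
def clear_prefix_caches(filter_cache):
    # Reset LMFE's prefix cache on every cached ExLlamaV2TokenEnforcerFilter.
    # Only do this when no request is in flight.
    for filters in filter_cache.values():
        for f in filters:
            if hasattr(f, "token_enforcer"):
                f.token_enforcer.prefix_states = {}
```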
Hello,
I tried the suggested solution, but the RAM usage is still increasing, although at a slower rate. Something is still being cached.
I have logged this issue on the ExLlamaV2 repository as well: turboderp/exllamav2#639
I am sharing the code snippet of the complete implementation.