Remove leaky lru_cache #2277
Conversation
Good catch! I hadn't considered that side effect of context objects remaining in the global LRU cache, and I generally agree that the CPU cost in typical usage scenarios is small.
I'm good with this, but I'll give @lafrech and @deckar01 a moment to look if they want. Otherwise I'll plan to merge and release this over the weekend.
In the meantime, do you mind adding yourself to …
I agree with the rationale.
Thanks.
TL;DR: 👍 I tested this using our dump benchmark. Indexing …
The benchmark shows ~30% of the hot path is tuple hashing, at least in …
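The ~30% figure presumably reflects the key-tuple construction and hashing that `lru_cache` performs on every call, hit or miss. A rough, hypothetical illustration of that overhead (this is not the project's actual benchmark; `lookup` is a stand-in):

```python
import timeit
from functools import lru_cache


@lru_cache(maxsize=8)
def lookup(a, b, c):
    return (a, b, c)


lookup(1, 2, 3)  # warm the cache

# Even a cache hit pays for building and hashing the key tuple.
hit = timeit.timeit(lambda: lookup(1, 2, 3), number=1_000_000)
raw = timeit.timeit(lambda: (1, 2, 3), number=1_000_000)
print(f"cached hit: {hit:.3f}s, bare tuple: {raw:.3f}s")
```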
Thanks for the benchmark! I updated …
Problem
After calling the `endpoint`, the `db_session` instance was not freed, causing a warning to be emitted. The reason is that the `MySchema` instance is kept in a global LRU cache. Calling `endpoint` multiple times will eventually release the older `db_session` instances, though.
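As a minimal, self-contained sketch of the leak described above (`endpoint`, `MySchema`, and `db_session` come from this description; `DBSession` and `resolve` are hypothetical stand-ins):

```python
import gc
import weakref
from functools import lru_cache


class DBSession:
    """Hypothetical stand-in for the real database session."""


class MySchema:
    def __init__(self, db_session):
        self.db_session = db_session

    # The cache lives on the function object, shared by all instances,
    # and its keys include `self` -- so every instance that calls this
    # method is retained by a global LRU cache.
    @lru_cache(maxsize=8)
    def resolve(self, field_name):
        return field_name.upper()


def endpoint():
    db_session = DBSession()
    MySchema(db_session).resolve("id")
    return weakref.ref(db_session)


ref = endpoint()
gc.collect()
assert ref() is not None  # still alive: the cache retains the schema,
                          # and the schema retains the session
```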
Analysis
The `@lru_cache(maxsize=8)` decorator was used on a method, causing instances of `self` to be cached and outlive their intended scope. This led to unexpected behavior where instances persisted beyond a single web request, retaining references and consuming memory unnecessarily. This problem was called out by PyCQA/flake8-bugbear#310 but ignored via `noqa`.
Options
- `cachetools.cachedmethod` (requires a third-party dependency)
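For reference, a sketch of what the `cachetools.cachedmethod` option could look like. The cache is per instance, so it is garbage-collected together with the schema and no global structure keeps `self` alive (`resolve` is a hypothetical stand-in for the cached method):

```python
import operator

from cachetools import LRUCache, cachedmethod


class MySchema:
    def __init__(self):
        # Per-instance cache: it dies with the instance, unlike the
        # module-level cache created by @lru_cache on a method.
        self._cache = LRUCache(maxsize=8)

    @cachedmethod(operator.attrgetter("_cache"))
    def resolve(self, field_name):
        return field_name.upper()
```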
Proposal
The LRU cache was introduced in #1309 (the PR reported a 3% performance increase on the test suite). I'm not sure the LRU cache actually has that much of an effect in real-world scenarios (a sketch for observing hit/miss counts follows the list):
- `maxsize` is exhausted, leading to cache misses.
- `load` causes 3 misses, 0 hits (warm -> 3 hits).
- `dump` causes 2 misses, 0 hits (warm -> 3 hits).
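As a hypothetical way to reproduce counts like these, `functools.lru_cache` exposes `cache_info()`; the function below is a stand-in, not marshmallow's actual cached helper:

```python
from functools import lru_cache


@lru_cache(maxsize=8)
def resolve(schema_cls, field_name):
    # Stand-in for the cached lookup the serializer performs per field.
    return field_name.upper()


class UserSchema:
    pass


for name in ("id", "name", "email"):  # cold pass: every lookup misses
    resolve(UserSchema, name)
print(resolve.cache_info())           # hits=0, misses=3

for name in ("id", "name", "email"):  # warm pass: every lookup hits
    resolve(UserSchema, name)
print(resolve.cache_info())           # hits=3, misses=3
```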