Cache profiling (WIP) #188
base: master
Conversation
Hello @mrflip, have you checked this: https://github.com/dominictarr/bench-lru? Are you only trying to assess whether the cache is faster than the map, and what the impact of deletion is here? Note that the Map is sometimes, on some node versions, faster if you handle specific keys such as long strings, etc.
Why not. What would the API look like? Something taking an rng and some alpha params and returning a distributed rng? At some point it might start being too statistically complicated for pandemonium, which is mostly about algorithms, and better fit another lib such as …
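To illustrate the shape being discussed, here is a minimal sketch of such a factory (the name `createParetoRandom`, the `xm` scale parameter, and the defaults are hypothetical, not an existing pandemonium API):

```js
// Hypothetical factory: wraps a uniform rng into a Pareto-distributed rng.
// `rng` is any function returning floats in [0, 1), `alpha` is the shape
// parameter, `xm` the minimum value (scale).
function createParetoRandom(rng, alpha, xm = 1) {
  return function paretoRandom() {
    // Inverse-transform sampling: x = xm / (1 - u)^(1 / alpha)
    return xm / Math.pow(1 - rng(), 1 / alpha);
  };
}

// Usage:
// const random = createParetoRandom(Math.random, 1.16);
// random(); // -> a heavy-tailed float >= 1
```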
This is primarily in service of the later PR for TTL time-expiring the cache, and of knowing what the tradeoffs are on Cache vs Map, not so much of comparisons to other libraries (I found out about this lib via that suite). Mixed in with that PR (I'll pull it back to here) is a shell script to run any of the benchmarks with the node profiler turned on. I'll make the test exercise long strings, shortish strings, and number keys. Do you have any concerns about adding benchmark.js as a dev dependency? I'd also recommend adding the chai library -- it makes tests more explanatory and beautiful to read. I was surprised by how many more and deeper tests we wrote after adopting it. For just one example, it becomes very pleasant to add type guards like …
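As a small illustration of the kind of chai assertion being referred to (not a quote from the PR; the test values are made up):

```js
// Illustrative chai assertions against a mnemonist LRUCache.
const { expect } = require('chai');
const { LRUCache } = require('mnemonist');

const cache = new LRUCache(3);
cache.set('one', 1);

expect(cache).to.be.an.instanceof(LRUCache); // type guard reads like prose
expect(cache.size).to.equal(1);
expect(cache.get('one')).to.be.a('number');
```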
* LRUCache and family: .inspect limits its output, showing the youngest items, an ellipsis, and the oldest item. Options allow dumping the raw object or controlling the size of the output (and the number of items a console.log will mindlessly iterate over).
* LRUCache and family all have inspect wired up to the magic 'nodejs.util.inspect.custom' symbol property that drives console.log output.
* LRUCache and family all have a summaryString method returning e.g. 'LRUCache[8/200]' for a cache with size 8 and capacity 200, wired to the magic Symbol.toStringTag property that drives string interpolation (partially addresses Yomguithereal#129).
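The general Node.js pattern for that wiring looks roughly like this (a sketch of the mechanism using a stand-in class, not the code in this commit):

```js
const util = require('util');

// Stand-in class showing the two magic properties described above.
class CacheSketch {
  constructor(capacity) {
    this.capacity = capacity;
    this.size = 0;
  }

  summaryString() {
    return `CacheSketch[${this.size}/${this.capacity}]`;
  }

  // Used by Object.prototype.toString, so `${cache}` interpolates as
  // '[object CacheSketch[0/200]]'.
  get [Symbol.toStringTag]() {
    return this.summaryString();
  }

  // Drives console.log / util.inspect output.
  [util.inspect.custom]() {
    return this.summaryString();
  }
}

console.log(new CacheSketch(200)); // CacheSketch[0/200]
```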
(force-pushed from 22cea6b to a2d5172)
Still working on this, but here are some benchmarks for the cache classes.
Results so far are that LRUCache is about 1.5-2x as fast as LRUMap across all initial tries, and that the differences between LRUCache and LRUCacheWithDelete (or Map/MapWD) are small.
I have added the ubiquitous benchmark.js library to power this, if that's OK.
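For context, a benchmark.js suite for this kind of comparison typically looks like the sketch below (illustrative only; the key choice, capacity, and measured operations are assumptions, not the benchmark added in this PR):

```js
const Benchmark = require('benchmark');
const { LRUCache, LRUMap } = require('mnemonist');

const CAPACITY = 10000;
const cache = new LRUCache(CAPACITY);
const map = new LRUMap(CAPACITY);

new Benchmark.Suite('set + get')
  .add('LRUCache', () => {
    const key = (Math.random() * CAPACITY * 2) | 0;
    cache.set(key, key);
    cache.get(key);
  })
  .add('LRUMap', () => {
    const key = (Math.random() * CAPACITY * 2) | 0;
    map.set(key, key);
    map.get(key);
  })
  .on('cycle', (event) => console.log(String(event.target)))
  .run();
```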
@Yomguithereal I fumbled together a method to give back a Pareto-distributed random integer, or at least fake it well enough to serve this benchmark, but I suspect you will have good advice on how to do it right. If this isn't built into pandemonium, it would be nice to have a few such distributions in the toolbag.
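One straightforward way to fake it is inverse-transform sampling clamped to the benchmark's key range; a sketch under that assumption (the `alpha` default and the clamping strategy are arbitrary choices here, not necessarily what the benchmark does):

```js
// Draws a heavy-tailed key index in [0, keyCount): a few "hot" keys receive
// most of the accesses, mimicking real cache workloads. Smaller alpha means
// a heavier tail (more spread across keys).
function paretoKeyIndex(keyCount, alpha = 1.16) {
  const x = 1 / Math.pow(1 - Math.random(), 1 / alpha); // Pareto float, x >= 1
  return Math.min(keyCount - 1, Math.floor(x) - 1);
}
```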