Reflect to proxy object, to observe changes made by setters (rebased) #44
Conversation
"should generate replace (changed by setter)", as suggested at https://github.com/Palindrom/JSONPatcherProxy/pull/41/files/f934dc50b8ae31f33135d7e45cc1c1a2aebaceed#r296681603 change `var` to `const`
It still needs the remaining work explained in #41 (comment). I will do it and present it here for review.
because the same handler was defined twice in a row
…sed or not because Reflect API has a performance penalty, it should only be used if the observed object is known to have setters
@tomalec please review the commit in which I make it configurable whether to use the Reflect API or not. I developed 3 modes of operation: assignment, reflection, and auto (sketched below).
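Not the PR's actual implementation, just a self-contained sketch of what the three modes mean for a Proxy `set` trap; the `makeSetTrap` helper and the `recordPatch` callback are made up for illustration:

```js
// Sketch only: the mode names match the benchmark labels below,
// but none of this code is taken from the PR.
function makeSetTrap(mode, recordPatch) {
  return function set(target, key, value, receiver) {
    if (mode === 'reflection') {
      // Reflect.set with the proxy as receiver runs any setter with
      // `this` bound to the proxy, so writes made inside the setter
      // are trapped too (the point of this PR), at a performance cost.
      Reflect.set(target, key, value, receiver);
    } else if (mode === 'auto') {
      // Pay the Reflect cost only when the property has an own setter
      // (see the inherited-setter caveat later in this thread).
      const desc = Object.getOwnPropertyDescriptor(target, key);
      if (desc && desc.set) {
        Reflect.set(target, key, value, receiver);
      } else {
        target[key] = value;
      }
    } else {
      // 'assignment': cheapest path; a setter would run against the raw
      // target, so its side effects would not be observed.
      target[key] = value;
    }
    // A real implementation would distinguish add/replace, escape the path, etc.
    recordPatch({ op: 'replace', path: '/' + String(key), value });
    return true;
  };
}

// Usage of the sketch:
const patches = [];
const observed = new Proxy(
  { firstName: 'Albert' },
  { set: makeSetTrap('auto', p => patches.push(p)) }
);
observed.firstName = 'Joachim';
console.log(patches); // [{ op: 'replace', path: '/firstName', value: 'Joachim' }]
```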
From my benchmarks in Chrome and Firefox:

Observe and generate, small object (JSONPatcherProxy - assignment)
Ops/sec: 76168 ±3.96% Ran 5520 times in 0.072 seconds.
Observe and generate, small object (JSONPatcherProxy - reflection)
Ops/sec: 66270 ±4.19% Ran 4989 times in 0.075 seconds.
Observe and generate, small object (JSONPatcherProxy - auto)
Ops/sec: 73282 ±4.24% Ran 5366 times in 0.073 seconds.
Observe and generate (JSONPatcherProxy - assignment)
Ops/sec: 1868 ±2.68% Ran 133 times in 0.071 seconds.
Observe and generate (JSONPatcherProxy - reflection)
Ops/sec: 1878 ±2.59% Ran 129 times in 0.069 seconds.
Observe and generate (JSONPatcherProxy - auto)
Ops/sec: 1886 ±1.71% Ran 203 times in 0.108 seconds.
Primitive mutation (JSONPatcherProxy - assignment)
Ops/sec: 521795 ±0.95% Ran 26891 times in 0.052 seconds.
Primitive mutation (JSONPatcherProxy - reflection)
Ops/sec: 435184 ±0.41% Ran 22308 times in 0.051 seconds.
Primitive mutation (JSONPatcherProxy - auto)
Ops/sec: 482910 ±1.19% Ran 25780 times in 0.053 seconds.
Complex mutation (JSONPatcherProxy - assignment)
Ops/sec: 13238 ±0.52% Ran 683 times in 0.052 seconds.
Complex mutation (JSONPatcherProxy - reflection)
Ops/sec: 8932 ±1.00% Ran 461 times in 0.052 seconds.
Complex mutation (JSONPatcherProxy - auto)
Ops/sec: 11083 ±1.66% Ran 583 times in 0.053 seconds.
Serialization (JSONPatcherProxy - assignment)
Ops/sec: 2053 ±0.98% Ran 106 times in 0.052 seconds.
Serialization (JSONPatcherProxy - reflection)
Ops/sec: 2060 ±0.86% Ran 106 times in 0.051 seconds.
Serialization (JSONPatcherProxy - auto)
Ops/sec: 2080 ±0.31% Ran 106 times in 0.051 seconds.

In this case, the 3 modes of operation seem unnecessary. I could simplify the config and leave just 2 modes of operation:
WDYT?
😎 Looks too good to be true ;) Have you checked how this change degrades the performance of no-reflection (with this change) vs. no-reflection (without it), i.e. how the additional function call in the no-setter scenario slows down the operations?
Ok, I found a scenario where the own-property check fails, because the setter lives on the prototype:

```js
class Foo {
  set prop(newVal) {
    console.log('setter called on ', this, '.prop:', newVal);
  }
}
class Baz extends Foo {}

const obj = new Baz();
Object.getOwnPropertyDescriptor(obj, 'prop'); // undefined
```

That's why I think we need three ways. Also, I'm not so convinced to …
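For reference, one way to detect setters defined anywhere on the prototype chain is to walk the chain with `Object.getPrototypeOf`. A sketch, not the PR's code, and note that it costs more than a single own-property lookup:

```js
// Sketch: true if `obj` or any object on its prototype chain defines a
// setter for `key`. Not taken from the PR; just illustrates the check.
function hasSetter(obj, key) {
  let current = obj;
  while (current !== null) {
    const desc = Object.getOwnPropertyDescriptor(current, key);
    if (desc) {
      return typeof desc.set === 'function';
    }
    current = Object.getPrototypeOf(current);
  }
  return false;
}

class Foo {
  set prop(newVal) {}
}
class Baz extends Foo {}

console.log(hasSetter(new Baz(), 'prop')); // true
console.log(hasSetter({ prop: 1 }, 'prop')); // false
```

Doing this walk on every write would eat into the savings the auto mode is supposed to provide, which is part of the trade-off discussed here.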
Do you mean checking this kind of micro-benchmark? http://jsbench.github.io/#0bd655b5edb3f5b2560c8031c21332c3 I am not sure this is a bottleneck at all. Of course I could have a completely different trap for each of the 3 modes, but that would make the code harder to maintain.
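For context, the kind of comparison such a micro-benchmark makes (the linked jsbench may differ in its details; this is only an illustrative sketch): a plain assignment vs. the same assignment behind one extra function call.

```js
// Illustrative only; timings will vary and the JIT may inline the helper.
function updateValueByAssignment(target, key, value) {
  target[key] = value;
}

const target = {};
const N = 1e7;

console.time('direct assignment');
for (let i = 0; i < N; i++) {
  target.x = i;
}
console.timeEnd('direct assignment');

console.time('assignment via helper function');
for (let i = 0; i < N; i++) {
  updateValueByAssignment(target, 'x', i);
}
console.timeEnd('assignment via helper function');
```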
Great catch! I guess we would need … I guess that this undermines the reason for having the …
And I am not convinced that we should choose our API based on nanobenchmarking, even if there is a difference (but it seems there isn't: http://jsbench.github.io/#b9f6574b6478e2a9a02ef917c6382ac1). Choosing the API is one thing; optimizing the execution is another.
I was thinking about just running our … I agree we should not put micro-benchmarking over code maintenance, but if we face more perf issues in real scenarios, that's the place to check and profile.
On my machine, your perf for "…

Thinking of YAGNI: as the application author, I usually know whether I used setters or not. Especially if, in the long term, we allow different strategies for different subtrees (or different Palindrom instances for different apps/partials), then the autodiscovery seems like not-so-important sugar. If the (precise) …
…ptors as pointed out in #44 (comment)
# Conflicts:
#	src/jsonpatcherproxy.js
#	test/spec/proxySpec.js
This is now fixed.
Please find the current benchmark below. There are 3 groups of benchmarks, one per update strategy: `_updateValueByAssignment`, `_updateValueByReflection`, and `_updateValueByAuto`. For each group, the base value is provided for the current master (ec7b9bf).

Based on the results in Node, I think that "auto" is still very promising and in fact could be made not only the default, but actually the only available option. In the worst case (complex mutation), it is only 9.5% slower than current master. @tomalec WDYT?

_updateValueByAssignment

Observe and generate, small object (JSONPatcherProxy - assignment)
ec7b9bf: 136720 Ops/sec
92da649: 135498 Ops/sec (0.9% worse)
Observe and generate (JSONPatcherProxy - assignment)
ec7b9bf: 2762 Ops/sec
92da649: 2777 Ops/sec (0.5% better)
Primitive mutation (JSONPatcherProxy - assignment)
ec7b9bf: 781852 Ops/sec
92da649: 793861 Ops/sec (1.5% better)
Complex mutation (JSONPatcherProxy - assignment)
ec7b9bf: 11808 Ops/sec
92da649: 12349 Ops/sec (4.6% better)
Serialization (JSONPatcherProxy - assignment)
ec7b9bf: 1719 Ops/sec
92da649: 1717 Ops/sec (0.1% worse)

_updateValueByReflection

Observe and generate, small object (JSONPatcherProxy - reflection)
ec7b9bf: 136720 Ops/sec
92da649: 115118 Ops/sec (15.8% worse)
Observe and generate (JSONPatcherProxy - reflection)
ec7b9bf: 2762 Ops/sec
92da649: 2797 Ops/sec (1.3% better)
Primitive mutation (JSONPatcherProxy - reflection)
ec7b9bf: 781852 Ops/sec
92da649: 597516 Ops/sec (23.6% worse)
Complex mutation (JSONPatcherProxy - reflection)
ec7b9bf: 11808 Ops/sec
92da649: 8165 Ops/sec (30.9% worse)
Serialization (JSONPatcherProxy - reflection)
ec7b9bf: 1719 Ops/sec
92da649: 1729 Ops/sec (0.6% better)

_updateValueByAuto

Observe and generate, small object (JSONPatcherProxy - auto)
ec7b9bf: 136720 Ops/sec
92da649: 136351 Ops/sec (0.3% worse)
Observe and generate (JSONPatcherProxy - auto)
ec7b9bf: 2762 Ops/sec
92da649: 2793 Ops/sec (1.1% better)
Primitive mutation (JSONPatcherProxy - auto)
ec7b9bf: 781852 Ops/sec
92da649: 781270 Ops/sec (0.1% worse)
Complex mutation (JSONPatcherProxy - auto)
ec7b9bf: 11808 Ops/sec
92da649: 10692 Ops/sec (9.5% worse)
Serialization (JSONPatcherProxy - auto)
ec7b9bf: 1719 Ops/sec
92da649: 1719 Ops/sec (no difference)
It still confuses me how come your auto is faster than …
Rather, I made a microbenchmark for the reflection method that I am using: http://jsbench.github.io/#974afc7ca9d3c7f6c79311a2221cf0ad

Bottom line is that …

The above screenshots show that Chrome is anyway much faster than Firefox. Maybe then let's remove the questionable optimizations and use only …
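To make the behavioral difference concrete (a standalone demo, not the PR's code, with a made-up `makeLoggingProxy` helper and `double` setter): with plain assignment in the `set` trap, a change made by a setter bypasses the proxy; with `Reflect.set` and the proxy as receiver, it is observed too.

```js
// Demo only: none of these names come from JSONPatcherProxy.
function makeLoggingProxy(target, useReflect) {
  return new Proxy(target, {
    set(t, key, value, receiver) {
      console.log('trap:', key, '=', value);
      if (useReflect) {
        Reflect.set(t, key, value, receiver); // setter runs with `this` = proxy
      } else {
        t[key] = value; // setter runs with `this` = raw target, bypassing the trap
      }
      return true;
    }
  });
}

function makeTarget() {
  return {
    value: 0,
    set double(v) {
      this.value = v * 2; // a change made by the setter
    }
  };
}

makeLoggingProxy(makeTarget(), false).double = 21;
// trap: double = 21            (the setter's write to `value` is NOT seen)

makeLoggingProxy(makeTarget(), true).double = 21;
// trap: double = 21
// trap: value = 42             (the setter's write goes through the proxy)
```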
Closing, because this PR turned out not to be needed.
This is PR #41 rebased on current master.