Currently EOS VM OC's code cache is implemented via a boost interprocess allocator that is mapped from a file (much like how chainbase works).
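For reference, the file-backed boost interprocess pattern looks roughly like the sketch below. The file name, segment size, and `shared_bytes` type are illustrative only, not EOS VM OC's actual cache layout; the point is that everything allocated through the segment lands in the mapped file along with boost's own bookkeeping, which is what couples the cache file to a specific boost version.

```cpp
// Illustration only: file name, segment size, and shared_bytes are hypothetical,
// not EOS VM OC's actual code cache layout.
#include <boost/interprocess/managed_mapped_file.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/vector.hpp>

namespace bip = boost::interprocess;

using segment_manager = bip::managed_mapped_file::segment_manager;
using byte_allocator  = bip::allocator<char, segment_manager>;
using shared_bytes    = bip::vector<char, byte_allocator>;

int main() {
   // Open (or create) a file-backed segment; every allocation made through it
   // is persisted in the file's on-disk layout, bookkeeping included.
   bip::managed_mapped_file segment(bip::open_or_create, "code_cache.bin", 64u*1024*1024);

   // Objects constructed inside the segment survive restarts, but only while the
   // boost layout/version that wrote them matches the one reading them back.
   shared_bytes* code = segment.find_or_construct<shared_bytes>("compiled_code")(
      segment.get_segment_manager());
   code->push_back('\x90');  // e.g. append a byte of compiled code
}
```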
This is an over-complication.
It should be refactored to simply de/serialize the code from/to disk on start and stop. This will incur a small performance cost at start/stop, but if the serialized file is compressed I expect the impact to be minimal even on a slow disk (see the sketch after the list below). The benefits are substantial:
- No longer subject to boost version breakage
- Can eliminate the multi-process view of the code cache
- Can eliminate the need for the multi-process architecture to track the lifecycle of the main process' OC `code_cache` instances
- nodeos' data-dir can be on a `noexec` filesystem
- OC's code cache can be placed in huge pages
- It'll be easier to accommodate in-place upgrades
- Probably more
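A minimal sketch of what the start/stop path could look like, using zlib's gzip streams for compression. The `cache_entry`/`code_cache_t` types, the file format, and the `save_cache`/`load_cache` names are hypothetical; they only illustrate the shape of the change, not a proposed on-disk format.

```cpp
// All names here (cache_entry, code_cache_t, save_cache, load_cache, the file
// layout) are hypothetical; this only illustrates the proposed start/stop flow.
#include <zlib.h>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct cache_entry {
   std::vector<uint8_t> machine_code;   // compiled native code for one contract
};
using code_cache_t = std::map<std::string, cache_entry>;  // keyed by code hash

// On stop: stream the cache into a gzip-compressed file.
void save_cache(const code_cache_t& cache, const char* path) {
   gzFile out = gzopen(path, "wb");
   if(!out) return;
   uint64_t count = cache.size();
   gzwrite(out, &count, sizeof(count));
   for(const auto& [key, entry] : cache) {
      uint64_t klen = key.size();
      uint64_t clen = entry.machine_code.size();
      gzwrite(out, &klen, sizeof(klen));
      gzwrite(out, key.data(), (unsigned)klen);
      gzwrite(out, &clen, sizeof(clen));
      gzwrite(out, entry.machine_code.data(), (unsigned)clen);
   }
   gzclose(out);
}

// On start: rebuild the cache; a missing or corrupt file just means an empty
// cache and recompilation on demand.
code_cache_t load_cache(const char* path) {
   code_cache_t cache;
   gzFile in = gzopen(path, "rb");
   if(!in) return cache;
   uint64_t count = 0;
   if(gzread(in, &count, sizeof(count)) == (int)sizeof(count)) {
      for(uint64_t i = 0; i < count; ++i) {
         uint64_t klen = 0, clen = 0;
         if(gzread(in, &klen, sizeof(klen)) != (int)sizeof(klen)) break;
         std::string key(klen, '\0');
         if(gzread(in, key.data(), (unsigned)klen) != (int)klen) break;
         if(gzread(in, &clen, sizeof(clen)) != (int)sizeof(clen)) break;
         cache_entry e;
         e.machine_code.resize(clen);
         if(gzread(in, e.machine_code.data(), (unsigned)clen) != (int)clen) break;
         cache.emplace(std::move(key), std::move(e));
      }
   }
   gzclose(in);
   return cache;
}
```

With this shape the live cache is an ordinary in-memory allocation (heap or huge pages) with no boost layout baked into the file, and the data-dir only ever holds a plain compressed blob.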