[2.0] Optimise caching #167
Currently caching is quite inefficient at its job and needs more intelligent cache invalidation.

Comments
Looks like a good first port of call would be to have the copy method cache the result, so Tapestry doesn't end up copying hundreds of files when the destination is identical to the source!
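A minimal sketch of that idea, assuming a hypothetical copyIfChanged() helper rather than Tapestry's actual copy method: skip the copy when the destination already looks identical to the source, here judged cheaply by size and modification time.

```php
<?php

/**
 * Hypothetical helper, not Tapestry's real API: copy $source to
 * $destination only when the destination is missing or stale.
 * Returns true when a copy was performed, false when skipped.
 */
function copyIfChanged(string $source, string $destination): bool
{
    // Treat matching size plus an mtime at least as new as the
    // source as "identical" and skip the copy entirely.
    if (
        file_exists($destination) &&
        filesize($destination) === filesize($source) &&
        filemtime($destination) >= filemtime($source)
    ) {
        return false;
    }

    // Make sure the destination directory exists before copying.
    if (!is_dir(dirname($destination))) {
        mkdir(dirname($destination), 0777, true);
    }

    copy($source, $destination);
    return true;
}
```

Comparing size and mtime is cheap; hashing file contents would be more robust but costs a full read of every file, which partly defeats the point.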
There could also be a case for reducing the memory footprint, although I have a feeling that optimising the build process will do that anyway.
This should make it so that if you update one source file and it only has one dependent, then only that one file will be compiled rather than all files.

Cache (AST Tree) invalidation:
For FileGenerators, caching can become ineffective given that updating a title in one file of a collection will invalidate that entire collection for the files using it. In the case of blog posts that can mean entire taxonomy and history archives get regenerated. A way around that would be to identify which part of the collection is in use and only invalidate files that depend upon it, but this would be overly complex and therefore prone to bugs, so it is best avoided.

Note: files belong to one content type but also to many collections (such as taxonomy collections); therefore if one file changes it may not invalidate the content type, but it will invalidate itself and any collection it belongs to. The sketch below illustrates this rule.
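A hedged sketch of that rule (the class and method names here are illustrative, not Tapestry's): a changed file invalidates its own cache entry and every collection it belongs to, while the content type's cache entry is left alone.

```php
<?php

// Illustrative only; Tapestry's real cache layer is not structured
// like this.
final class CacheInvalidator
{
    /** @var array<string, string[]> map of file path => collection names */
    private array $memberships;

    /** @var array<string, bool> map of cache key => valid flag */
    private array $valid = [];

    public function __construct(array $memberships)
    {
        $this->memberships = $memberships;
    }

    /**
     * A changed file invalidates itself and every collection it
     * belongs to, but deliberately not its content type.
     *
     * @return string[] the cache keys that were invalidated
     */
    public function invalidate(string $changedFile): array
    {
        $invalidated = ['file:' . $changedFile];
        foreach ($this->memberships[$changedFile] ?? [] as $collection) {
            $invalidated[] = 'collection:' . $collection;
        }
        foreach ($invalidated as $key) {
            $this->valid[$key] = false;
        }
        return $invalidated;
    }
}

// Example: editing one blog post invalidates the post itself plus the
// "blog" and "tag:php" collections it appears in, triggering the
// archive regeneration described above.
$invalidator = new CacheInvalidator([
    'posts/hello-world.md' => ['blog', 'tag:php'],
]);
print_r($invalidator->invalidate('posts/hello-world.md'));
```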
Moved this to 1.0.9 as it will likely take longer than 1.0.8's release cycle.
What I think needs to be done for this is to generate a dependency tree. Look into how https://github.com/dependents/node-dependency-tree generates such data, and possibly have Tapestry output something that Madge can then turn into a graph?
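As a sketch of what that output could look like, the following walks a source directory and prints a JSON adjacency map in the same {"file": ["dep", ...]} shape that `madge --json` emits, so an external tool could graph it. Detecting dependencies via {% include "..." %} tags is an assumption for illustration, not how Tapestry actually tracks them.

```php
<?php

// Hypothetical dependency extraction: scan a file for template
// include tags and treat each included path as a dependency.
function findDependencies(string $path): array
{
    $matches = [];
    preg_match_all(
        '/{%\s*include\s+"([^"]+)"\s*%}/',
        file_get_contents($path),
        $matches
    );
    return array_values(array_unique($matches[1]));
}

// Build the full tree: every file under $sourceDir mapped to its
// dependencies, keyed by path relative to the source directory.
function buildDependencyTree(string $sourceDir): array
{
    $tree = [];
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($sourceDir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($iterator as $file) {
        if ($file->isFile()) {
            $relative = substr($file->getPathname(), strlen($sourceDir) + 1);
            $tree[$relative] = findDependencies($file->getPathname());
        }
    }
    return $tree;
}

echo json_encode(buildDependencyTree('source'), JSON_PRETTY_PRINT), PHP_EOL;
```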
Closed by #311