[2.0] Optimise caching #167

Closed
carbontwelve opened this issue Apr 26, 2017 · 7 comments

@carbontwelve
Member

Currently, caching is quite inefficient at its job.

It needs more intelligent cache invalidation.

@carbontwelve
Member Author

Looks like a good first port of call would be to have the copy method cache the result, so Tapestry doesn't end up copying hundreds of files when the destination is identical to the source!
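
A minimal sketch of the idea, assuming a hypothetical `copyIfChanged()` helper rather than Tapestry's actual copy method; it skips the write entirely when the source and destination already match:

```php
<?php

// Hypothetical helper: skip the copy when the destination already
// matches the source (compared here by size and then checksum).
function copyIfChanged(string $source, string $destination): bool
{
    if (file_exists($destination)
        && filesize($source) === filesize($destination)
        && md5_file($source) === md5_file($destination)
    ) {
        return false; // identical, nothing copied
    }

    if (!is_dir(dirname($destination))) {
        mkdir(dirname($destination), 0777, true);
    }

    return copy($source, $destination);
}
```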

@carbontwelve
Member Author

[profiling screenshot]

The majority of time appears to be spent between ParseContentTypes, Compile and WriteFiles.

@carbontwelve
Member Author

There could also be a case for reducing the memory footprint, although I have a feeling that optimising the build process will do that anyway.

@carbontwelve
Member Author

carbontwelve commented May 17, 2017

  1. Build and cache an AST of the source directory.
  2. If a cached AST exists, test for changes and rebuild only the branches that have changed.

This should make it so that if you update one source file and it only has one dependency then only one file will be compiled rather than all files.
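
A rough sketch of that flow, assuming the cached "AST" boils down to a serialised map of relative path => content hash; anything new, or whose hash differs, is a branch that needs rebuilding (the cache path and map shape are assumptions for illustration):

```php
<?php

// Illustrative only: diff the current source tree against a cached
// snapshot and return just the relative paths that need recompiling.
function changedSourceFiles(string $sourceDir, string $cachePath): array
{
    $previous = file_exists($cachePath)
        ? unserialize(file_get_contents($cachePath))
        : [];

    $current = [];
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($sourceDir, FilesystemIterator::SKIP_DOTS)
    );

    foreach ($files as $file) {
        $relative = substr($file->getPathname(), strlen($sourceDir) + 1);
        $current[$relative] = md5_file($file->getPathname());
    }

    // New or modified files are the branches that need rebuilding;
    // everything else can come straight from the cache.
    $changed = array_keys(array_diff_assoc($current, $previous));

    file_put_contents($cachePath, serialize($current));

    return $changed;
}
```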

Cache (AST Tree) invalidation:

  • If the configuration or application version differs from the one recorded in the cache, the whole cache will be invalidated.

  • ContentTypes should provide a hash via a getHash() method, with the hash being made up of the content type's configuration and the view file associated with it (see the sketch after this list). If its hash is not identical to the cached one then all files within that content type will be re-generated.

  • FileGenerators are linked via the `use` front matter. This makes things a little more complex because you then also have to link each File to a collection, and when any file within that collection changes the whole collection must be invalidated - this is because the template using the collection may use any part of it and may generate dozens of pages in doing so.
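
A sketch of what the proposed getHash() could look like; the constructor and property names are assumptions rather than Tapestry's actual ContentType API, but the point is that changing either the configuration or the associated view file changes the hash:

```php
<?php

// Illustrative ContentType hash: configuration plus the associated view
// file. Property and constructor names are assumptions for this example.
class ContentType
{
    private $config;
    private $viewPath;

    public function __construct(array $config, string $viewPath)
    {
        $this->config = $config;
        $this->viewPath = $viewPath;
    }

    public function getHash(): string
    {
        $viewHash = file_exists($this->viewPath) ? md5_file($this->viewPath) : '';

        return md5(serialize($this->config) . $viewHash);
    }
}
```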

For FileGenerators, caching can become ineffective given that updating the title of one file in a collection will invalidate that entire collection for the files using it. In the case of blog posts that can mean entire taxonomy and history archives get regenerated. A way around that would be to identify which part of the collection is in use and only invalidate the files that depend upon it, but that would be overly complex, and therefore prone to bugs, so it is best avoided.

Note: files belong to one content type but also to many collections (such as taxonomy collections); therefore, if one file changes it may not invalidate its content type, but it will invalidate itself and any collection it belongs to.
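
A minimal sketch of that invalidation rule, with the array shapes assumed for illustration: a changed file marks itself and every collection it belongs to as stale, but not its whole content type:

```php
<?php

// Illustrative invalidation pass. $collections maps a collection name to
// the list of source files it contains (shape assumed for this example).
function invalidate(string $changedFile, array $collections): array
{
    $stale = [$changedFile];

    foreach ($collections as $name => $files) {
        if (in_array($changedFile, $files, true)) {
            // A template iterating this collection may use any part of
            // it, so every file the collection produces must be rebuilt.
            $stale = array_merge($stale, $files);
        }
    }

    return array_unique($stale);
}
```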

@carbontwelve carbontwelve modified the milestones: 1.0.8, 1.0.9 May 23, 2017
@carbontwelve
Member Author

Moved this to 1.0.9 because it will likely take longer than 1.0.8's release cycle.

@carbontwelve carbontwelve modified the milestones: 1.1.0, 1.0.9 Jun 12, 2017
@carbontwelve carbontwelve removed this from the 1.1.0 milestone Jan 4, 2018
@carbontwelve carbontwelve added this to the 2.0.0 milestone Jan 22, 2018
@carbontwelve carbontwelve changed the title Optimise caching [2.0] Optimise caching Jan 22, 2018
@carbontwelve
Member Author

What I think needs to be done for this is to generate a dependency tree, much like Madge does for Node: https://www.npmjs.com/package/madge

Look into how https://github.com/dependents/node-dependency-tree generates such data and possibly have Tapestry output something that Madge can then turn into a graph?
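
One way to sketch that, assuming Tapestry already knows each file's dependencies, would be to dump the data as Graphviz DOT (the format Madge ultimately renders its graph images from); the input shape and file names below are made up:

```php
<?php

// Illustrative: turn a "file => [dependencies]" map into Graphviz DOT,
// which can then be rendered with e.g. `dot -Tsvg deps.dot -o deps.svg`.
function dependencyTreeToDot(array $tree): string
{
    $lines = ['digraph dependencies {'];

    foreach ($tree as $file => $dependencies) {
        foreach ($dependencies as $dependency) {
            $lines[] = sprintf('    "%s" -> "%s";', $file, $dependency);
        }
    }

    $lines[] = '}';

    return implode("\n", $lines) . "\n";
}

// Example with a made-up source tree:
echo dependencyTreeToDot([
    'blog/post-1.md' => ['_templates/post.phtml'],
    'blog/index.md'  => ['_templates/list.phtml', 'blog/post-1.md'],
]);
```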

@carbontwelve
Member Author

Closed by #311
