Unset variables at the end of the foreach loop #4

Status: Open. Wants to merge 1 commit into master.
Conversation

@ghost commented Apr 15, 2015

When running the cronjob script on many siteaccesses at once, there is a risk of a fatal error (memory exhausted).
Unsetting heavy variables and clearing the eZ ContentObjectCache releases memory before generating the sitemap for another siteaccess.

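The approach described above can be sketched roughly as follows. This is a hypothetical outline, not the actual cronjob source: the siteaccess names and local variable names are illustrative, and the eZ Publish call is guarded so the sketch also runs standalone.

```php
<?php
// Hypothetical outline of the cronjob loop; siteaccess names and
// variable names are illustrative, not taken from the actual script.
$siteAccessList = array( 'site_en', 'site_fr' );

foreach ( $siteAccessList as $siteAccess )
{
    // ... fetch $rootNode, iterate content objects, build the sitemap XML ...
    $rootNode = null;
    $object = null;

    // Release the heaviest local variables before the next iteration.
    unset( $rootNode, $object );

    // Clear eZ Publish's in-memory content object cache, if running inside
    // the framework (guarded so this sketch also runs standalone).
    if ( class_exists( 'eZContentObject' ) )
        eZContentObject::clearCache();

    echo "sitemap generated for $siteAccess\n";
}
```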
@brookinsconsulting (Owner)

Hello @yannoz

Apologies if this reply is a little rough; we wanted to get you a prompt first response.

Can you please also unset the following variables: $rootNode and $object?

They may not matter as much, but they appear to be the next biggest of the local variables used, and we do not see how unsetting them could hurt.

Then, after the XML is written to disk, would you unset these variables as well: $root and $dom?

Can you please review the following blog post related to your suggested use of 'eZContentObject::clearCache()'?

http://share.ez.no/blogs/vincent-robic/import-a-million-objects-from-a-csv-file-without-reaching-memory-limit

Because we are only reading from the database and not writing to it, we would like to avoid a potentially expensive call to 'eZContentObject::clearCache()', as it could unnecessarily affect the site(s) currently powering this cronjob.

We don't think the (disk-based) object cache is the real problem in terms of memory usage; rather, it is the global variable usage accumulated over the entire cronjob execution, which 'eZContentObject::clearCache()' would clear to some extent, but not entirely, and with the side effects mentioned above.

This is why we ask that you first test to ensure the following modification does indeed prevent the memory usage problems you report in your use case, and then revise your PR (and flatten it to one commit).

Can you test replacing the call to 'eZContentObject::clearCache()' with the following:

unset( $GLOBALS['eZContentObjectContentObjectCache'] );
unset( $GLOBALS['eZContentObjectDataMapCache'] );
unset( $GLOBALS['eZContentObjectVersionCache'] );
unset( $GLOBALS['eZContentClassAttributeCache'] );
unset( $GLOBALS['eZContentClassObjectCache'] );
unset( $GLOBALS['eZTranslationCacheTable'] );
unset( $GLOBALS['eZDebugGlobalInstance'] );
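As a framework-free sanity check, the effect of dropping a large $GLOBALS entry can be observed in plain PHP. The global key below matches one of the eZ Publish globals listed above, but the payload is synthetic:

```php
<?php
// Simulate a large in-memory cache under the same global key eZ Publish uses.
$GLOBALS['eZContentObjectContentObjectCache'] = array_fill( 0, 100000, str_repeat( 'x', 32 ) );

$before = memory_get_usage();
unset( $GLOBALS['eZContentObjectContentObjectCache'] );
$after = memory_get_usage();

// Unsetting the only reference frees the array's memory immediately.
echo ( $before - $after ) > 0 ? "memory released\n" : "no change\n";
```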

If the above replacements do not solve your memory usage problem, please let us know and we can make additional, more specific suggestions based on actual cronjob execution testing.

This related thread touches on some of the topics discussed: http://share.ez.no/forums/developer/how-to-build-an-update-cli-script-without-memory-problem

EDIT: Removed invalid request to remove newline from cronjob changes. As we were recently reminded, the newline at the end of the file is actually part of the eZ coding standards. Apologies for the confusion.

Can you rename your PR commit (during the flatten) to 'Updated: Bugfix. Unset local and global variables during execution to prevent excessive memory usage' (without the quotes)?

Also can you please make these changes to the other cronjob variations as well (master branch only)?

Apologies in advance for requesting so much in terms of additional code, PR changes and testing. We just want to do the very best we can for everyone who relies on this solution.

Please feel free to share your thoughts.

Cheers,
Brookins Consulting

@ghost (Author) commented Apr 28, 2015

Hey,

I haven't had the time to reply until now, but thanks a lot for your very thorough reply :)
I'll try to get to it in the coming days, depending on my workload.

Yann

@brookinsconsulting (Owner)

Hello @yannoz

Thank you very much for your reply; we were honestly afraid we had scared you away with our extended code change and testing requests.

We very much wish to merge your PR improvements as we know they are vital in larger scale deployments of eZ Publish and BC Google Sitemaps but we do need your help to do this the right way.

Honestly, we have not had a large enough database / content tree to test and implement these improvements ourselves, and without your continued help testing our suggestions, these improvements will probably not land for far longer than we would like.

We understand you're busy; we are very busy ourselves. Thanks again for your update. We look forward to hearing from you again soon, @yannoz.

Cheers,
Brookins Consulting

@brookinsconsulting (Owner)

Gentle ping, @yannoz

We just wanted to check back in with you. We are still very eager to work with you to address this issue. Please let us know what you think.

Cheers,
Brookins Consulting
