strange behaviour when using searchservices v2.0.0 #375

Open
maxodoble opened this issue Oct 27, 2020 · 4 comments

maxodoble commented Oct 27, 2020

Hi,

we are seeing strange behaviour when trying to reindex a not-so-big repository with ACS 6.2.0GA:

  • ACS 6.2.0GA generated with the latest alfresco-docker-installer (uses Search Services 2.0.0)
  • about 1.5 million docs, 400 GB total size
  • index size < 10 GB

The reindex process runs through in about 3 hours. So far so good:

  • during indexing: high CPU/IO usage (as expected)
  • after the indexer is done: CPU/IO usage drops nicely to near zero (test system, almost no active users)
  • searching old docs/new docs: working fine

BUT: after a redeployment of the containers (docker-compose down, then docker-compose up --build --force-recreate -d; the data volumes are external and exposed to the containers via bind mounts) we get the usual CPU spike during startup of the system, and then the alfresco and solr containers show a permanent combined CPU usage of about 75-100%. PostgreSQL CPU usage is also permanently high, and it never stops.
Searching old docs/new docs still works fine though.
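A minimal sketch of how that per-container load can be quantified after the redeployment; the container names (alfresco, solr6, postgres) are assumptions based on a typical alfresco-docker-installer compose file:

    # Recreate the stack as described above (data volumes are external
    # bind mounts, so index and database survive the recreation).
    docker-compose down
    docker-compose up --build --force-recreate -d

    # Snapshot per-container CPU/memory once startup has settled; with the
    # issue present, alfresco and solr6 stay busy even with no active users.
    docker stats --no-stream alfresco solr6 postgres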

The Solr web overview shows both indexes as current (green checkmark).
The OOTB admin console, however, shows in the "Solr tracking status" view that indexing is "active" for both indexes.

Because there are no active users and indexing should be done, I can't explain what ACS/Search Services is trying to do here (and apparently failing to do; it seems stuck somewhere).

By the way, when switching Search Services back to v1.4.3 I don't get this behaviour:

  • the reindex runs through
  • CPU/IO stays stably low, even after a redeployment of the stack.

Any idea how I could investigate/debug the behaviour v2.0.0 is showing?
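One way to see what the trackers are actually doing is the Search Services cores summary report; a hedged sketch, assuming Solr is reachable on localhost:8983 (adjust host/port to your compose mapping):

    # Tracker/transaction state per core; the report includes fields such as
    # "TX Lag" and "Approx transactions remaining" for the alfresco and archive cores.
    curl -s "http://localhost:8983/solr/admin/cores?action=SUMMARY&wt=json"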

maxodoble changed the title from "strange behaviour when using serchservices v2.0.0" to "strange behaviour when using searchservices v2.0.0" on Oct 27, 2020
aborroy (Contributor) commented Oct 27, 2020

Are you using 2.0.0?
If so, note that some defaults in the "shared.properties" file changed compared to the 1.4 release.
Take a look at https://issues.alfresco.com/jira/browse/SEARCH-2400

Can you test that again using 2.0.0.1?
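To see exactly which defaults changed, one option is to diff the shipped shared.properties of both versions; a sketch, assuming the official alfresco/alfresco-search-services images and their standard install path:

    # Dump the default shared.properties from both image versions and compare.
    docker run --rm --entrypoint cat alfresco/alfresco-search-services:1.4.3 \
        /opt/alfresco-search-services/solrhome/conf/shared.properties > shared-1.4.3.properties
    docker run --rm --entrypoint cat alfresco/alfresco-search-services:2.0.0 \
        /opt/alfresco-search-services/solrhome/conf/shared.properties > shared-2.0.0.properties
    diff shared-1.4.3.properties shared-2.0.0.properties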

maxodoble (Author) commented:

Yes, we tried 2.0.0 first, because it's the default in your latest alfresco-docker-generator, right?

Could the differences you linked to between 1.4.3 and 2.0.0 really cause the problems I listed above?

At the moment I can't test 2.0.0.1, unfortunately, because the system is in functional testing. I haven't tried to reproduce the problems with a different test dataset yet, so if I find the time I may try again with 2.0.0.1.

Do you know of any debug settings for looking deeper into what is going on when the indexer isn't finishing or is deadlocking?

lmeunier commented Dec 2, 2020

@maxodoble Could you check in the Tomcat access logs whether you see a lot of requests like "GET /alfresco/service/api/solr/aclchangesets?fromTime=XXX&toTime=XXX&maxResults=XXX"?

On the Solr side, could you enable the following logger: log4j.logger.org.alfresco.solr.tracker.AclTracker=DEBUG?

I think I'm facing the same issue as you, and it seems to be caused by the ACL trackers. As soon as I disable the tracker, the CPU usage drops to zero.
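A hedged sketch of both checks; the container names, the Tomcat access-log location (access logging must be enabled) and the Search Services log4j.properties path are assumptions for a typical docker-compose setup:

    # Count the ACL changeset polling requests in the repository's access log.
    docker exec alfresco sh -c \
        'grep -c "GET /alfresco/service/api/solr/aclchangesets" /usr/local/tomcat/logs/*access_log*.txt'

    # Raise the ACL tracker logging to DEBUG in Search Services, then restart
    # so the new logger setting is picked up.
    docker exec solr6 sh -c \
        'echo "log4j.logger.org.alfresco.solr.tracker.AclTracker=DEBUG" >> /opt/alfresco-search-services/logs/log4j.properties'
    docker restart solr6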

maxodoble (Author) commented:

@lmeunier Sorry, I don't currently have access to the test dataset I used, so unfortunately I can't check this.
