ArrayMemoryError after triggering the weight computation #28
Dear @VKTB, I am aware of this issue and I'm planning to work on it soon. Thank you so much for reporting your finding.
Thank you @nitrosx! I also wanted to check: each time new documents need to be added, do all of the documents already in the service need to be cleared and then re-added? Or can the new documents simply be added and the weight computation then triggered?
Dear @VKTB, you should be able to add a new document or update an existing one and then trigger the weight computation, without clearing all the documents.
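For anyone else following along, the incremental workflow described above might look roughly like the sketch below. Only the `/compute` path is mentioned in this thread; the host, port, `/documents` path, and payload shape are all assumptions, so check the service's actual API before using this.

```python
import json
from urllib import request

BASE = "http://localhost:8000"  # hypothetical host/port for the service


def add_document_request(doc, base=BASE):
    """Build a POST that adds or updates a single document.

    The /documents path and JSON payload are placeholders; the real
    endpoint may differ.
    """
    body = json.dumps(doc).encode()
    return request.Request(
        f"{base}/documents",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def compute_request(base=BASE):
    """Build the POST that triggers the weight computation.

    Only the /compute path comes from this thread.
    """
    return request.Request(f"{base}/compute", data=b"", method="POST")
```

Each request can then be sent with `urllib.request.urlopen(...)` once a document is added or updated, with no need to clear the existing corpus first.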
That makes sense, thank you @nitrosx.
@nitrosx I'm following up on this issue to check whether you have a timeline for when it will be addressed. Thanks!
I am running the service in a Docker container and trying to test it with some data. After populating the service with ~30K documents, I tried triggering the weight computation by sending a `POST` request to the `/compute` endpoint, but the API threw an `ArrayMemoryError` exception.

Based on the error message and what I read online, it looks as though the application is trying to copy all of the data in memory. I increased my VM's RAM from 8 GB to 16 GB this morning because the container was exiting when Docker did not have enough RAM to compute the weights of ~9.7K documents, so the application may not throw this error if I increase the RAM again. However, that will not solve the underlying problem: the more data a facility has, the more RAM it will need. For example, we have ~165K documents in our dev environment (and certainly more on prod), so we would need a great deal of RAM to run this service.
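To give a sense of scale, here is a back-of-the-envelope estimate of what a dense in-memory weight matrix would cost. The vocabulary size (100,000 terms) is a purely hypothetical figure for illustration; the document counts come from the numbers reported above.

```python
# Rough memory estimate for holding a dense float64 term-document
# matrix in RAM.  N_TERMS is a hypothetical vocabulary size, not a
# figure from the service itself.
N_TERMS = 100_000
BYTES_PER_VALUE = 8  # float64


def dense_gib(n_docs, n_terms=N_TERMS):
    """GiB needed for a dense n_docs x n_terms float64 matrix."""
    return n_docs * n_terms * BYTES_PER_VALUE / 2**30


print(f"~30K docs:  {dense_gib(30_000):.1f} GiB")
print(f"~165K docs: {dense_gib(165_000):.1f} GiB")
```

Even with a modest assumed vocabulary, the dense representation grows linearly with the corpus, which matches the behaviour reported here: more documents simply means more RAM.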
If the above error does indeed occur because the application attempts to copy all of the data in memory, then I believe this part will need to be re-implemented differently.
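One possible direction for such a re-implementation is to process the corpus in fixed-size batches so that peak memory is bounded by the batch size rather than the corpus size. The sketch below shows the idea with a TF-IDF-style weighting in plain NumPy; the service's actual data layout and weighting scheme are unknown to me, so treat this as an illustration under those assumptions, not as the project's method.

```python
import numpy as np


def iter_weight_batches(counts, batch_size=1024):
    """Yield (start_row, weights) one batch at a time.

    Only `batch_size` documents are ever dense in memory at once; the
    caller can persist each batch (e.g. back to the document store)
    instead of holding a full (n_docs, n_terms) float matrix.

    `counts` is an (n_docs, n_terms) array of raw term counts -- an
    assumed layout, since the real service's internals are not shown.
    """
    n_docs = counts.shape[0]
    df = (counts > 0).sum(axis=0)                # document frequency per term
    idf = np.log((1 + n_docs) / (1 + df)) + 1.0  # smoothed IDF: one small vector
    for start in range(0, n_docs, batch_size):
        batch = counts[start:start + batch_size].astype(np.float64)
        batch *= idf                             # TF * IDF, broadcast over terms
        norms = np.linalg.norm(batch, axis=1, keepdims=True)
        norms[norms == 0] = 1.0                  # guard against empty documents
        yield start, batch / norms               # L2-normalised rows
```

With this shape, ~165K documents would be streamed through in a few hundred small batches instead of one corpus-sized allocation, which is what appears to trigger the `ArrayMemoryError`.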