
Indexing error: out of memory #262

Open
pavlovdog opened this issue Oct 15, 2024 · 4 comments


pavlovdog commented Oct 15, 2024

Describe the bug
Self-hosted indexer works properly for some short period of time (5-10 minutes), then logs stop and memory consumption starts to grow. When it hits the limit, it fails with heap out of memory. It also feels that during that time, /metrics and /healthz response time starts to grow (discord message link).

<--- Last few GCs --->

[17:0x7fb7b5b576b0]   365147 ms: Mark-Compact 4031.0 (4132.8) -> 4016.8 (4134.6) MB, 3964.42 / 0.00 ms  (average mu = 0.089, current mu = 0.018) allocation failure; scavenge might not succeed
[17:0x7fb7b5b576b0]   369477 ms: Mark-Compact 4032.6 (4134.6) -> 4018.5 (4136.3) MB, 4261.67 / 0.00 ms  (average mu = 0.053, current mu = 0.016) allocation failure; scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

----- Native stack trace -----

Here are the env options I'm using:

NODE_OPTIONS: "--max-old-space-size=4096"
TUI_OFF: "true"
LOG_LEVEL: "trace"
LOG_STRATEGY: "ecs-console"
ENVIO_API_TOKEN: "..."

ENVIO_PG_HOST: "..."
ENVIO_PG_PORT: "..."
ENVIO_PG_USER: "..."
ENVIO_POSTGRES_PASSWORD: "..."
ENVIO_PG_SSL_MODE: "..."
ENVIO_PG_DATABASE: "envio-3"

UNORDERED_MULTICHAIN_MODE: "true"
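Note that the GC log shows the heap capped around 4032 MB, which matches the NODE_OPTIONS value above. One quick experiment, assuming the host or container has RAM to spare, is to raise the cap. This won't fix a genuine leak, but it helps distinguish "working set is simply larger than 4 GB" from "unbounded growth":

```shell
# Sketch: raise the V8 old-space cap to 8 GB (illustrative value;
# assumes the container/host actually has this much memory available).
NODE_OPTIONS="--max-old-space-size=8192"
```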

Local (please complete the following information):

  • envio 2.4.3
  • node v20.12.1
  • pnpm 8.15.4
  • Docker 26.1.4, build 5650f9b

Hosted Service (please complete the following information):

Additional context
It seems to happen only on self-hosted environments; I can't see the error logs in the hosted service. But it's worth double-checking, since I have no access to the restart counter and there is no log search. Feel free to reach out (https://t.me/p0tekhin) if you have any questions.

@JonoPrest
Collaborator

I've just taken a look at the indexer you linked and it is huge! 😅

First, if you have a lot of dynamic contract registrations, try setting ENVIO_MAX_PARTITION_CONCURRENCY. It defaults to 10, and this value multiplied by the number of chains can mean many requests are happening and resolving at the same time, so values can't be garbage collected sequentially.

Secondly, although it shouldn't have too much effect IMO, you can set MAX_QUEUE_SIZE. It defaults to 100,000, but this is divided per chain, so your queues shouldn't be too big per chain.

To be clear, MAX_QUEUE_SIZE is simply the size threshold the queue reaches before it stops making requests. There's currently no way to limit the number of events returned by a hypersync query (which can be many thousands at a time).
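The two knobs above could be added alongside the existing env options. The values below are illustrative experiments, not recommended defaults; the stated defaults (10 and 100,000) come from the comment above:

```shell
# Sketch: tighten concurrency and queue bounds (illustrative values).
# ENVIO_MAX_PARTITION_CONCURRENCY defaults to 10; the effective number of
# concurrent requests scales with this value times the number of chains.
ENVIO_MAX_PARTITION_CONCURRENCY="4"
# MAX_QUEUE_SIZE defaults to 100,000 and is divided per chain.
MAX_QUEUE_SIZE="20000"
```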

@pavlovdog
Author

Thanks for the MAX_QUEUE_SIZE note; I haven't tried it yet! I'll let you know if it helps.

@moose-code
Contributor

Hey @pavlovdog - let us know if this helped :) Could you also bump to v2.7.0? There have been lots of fixes and improvements in the last 2 weeks that could have resolved your issue 👍

@DenhamPreen
Contributor

@pavlovdog, from cross-referencing the hosted service deployment, I believe your self-hosted server is under-resourced.
We're not experiencing the JavaScript heap out of memory error on the hosted service instance you've deployed.
I'm going to close this issue and we can continue our tg conversation to discuss your server configuration. Please feel free to reopen it if you feel it's relevant.

@moose-code reopened this Nov 1, 2024