
apollo-server-cache-memcached doesn't reconnect after ECONNRESET #3837

Closed
kdybicz opened this issue Feb 28, 2020 · 4 comments

kdybicz commented Feb 28, 2020

I'm running my Lambda in a Node.js 12.x container on AWS with the following dependencies:

    "apollo-datasource-rest": "^0.7.0"
    "apollo-server-cache-memcached": "^0.6.4"
        "memcached": "^2.2.2"
    "apollo-server-caching": "^0.5.1"
    "apollo-server-lambda": "^2.10.1"

As long as there are no issues with the connection to the AWS Memcached servers, everything seems to work fine, but sometimes the connection is dropped with the following error:

2020-02-27T21:24:12.533Z	437a4074-e1c3-408f-859d-7717273b5e2f	INFO	Connection error Error: read ECONNRESET
    at TCP.onStreamRead [as _originalOnread] (internal/stream_base_commons.js:200:27)
    at TCP.<anonymous> (/var/task/node_modules/async-listener/glue.js:188:31) {
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'
}

The dropped connection is itself a separate problem, but the core of my situation is that Memcached doesn't try to reconnect to the server after this happens and just hangs. This could be related to issues with the legacy, no-longer-maintained memcached client used by the apollo-server-cache-memcached module. I found these in its repo: 3rd-Eden/memcached#281 and 3rd-Eden/memcached#199

Is there a chance you could update your module to use a more recent, maintained memcached client?

adwaitmathkari commented Jan 29, 2021

Is there any resolution for this issue? I'm facing the same problem: the connection doesn't try to reconnect and doesn't throw any error either, so the whole call hangs until the task times out on AWS Lambda.

kdybicz commented Feb 1, 2021

Keeping in mind the state of that extension, and the fact that it seems to have been dead for the last two years, I would suggest switching to Redis. Unless... you would like to patch it up :)
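
If it helps, here is a minimal sketch of what that switch could look like with apollo-server-lambda and apollo-server-cache-redis. The Redis endpoint is a placeholder and the schema is reduced to a trivial Query, so treat it as a starting point rather than an exact setup:

    import { ApolloServer, gql } from "apollo-server-lambda";
    import { RedisCache } from "apollo-server-cache-redis";

    // Trivial schema so the sketch is self-contained.
    const typeDefs = gql`
      type Query {
        ping: String
      }
    `;
    const resolvers = { Query: { ping: () => "pong" } };

    const server = new ApolloServer({
      typeDefs,
      resolvers,
      // RedisCache accepts ioredis connection options; this endpoint is a placeholder.
      cache: new RedisCache({ host: "my-redis.example.internal", port: 6379 }),
    });

    export const handler = server.createHandler();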

@adwaitmathkari

@kdybicz We actually found the root cause: Lambda was freezing the memcached connections, and when the same Lambda instance was invoked again, the stale connection was being reused, which threw the error. We therefore had to close all the connections at the end of each request.
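
For anyone hitting the same thing, a rough sketch of that workaround using the memcached client directly; the endpoint and key below are placeholders, and this shows the general pattern rather than our exact code:

    import Memcached from "memcached";

    // Hypothetical ElastiCache endpoint.
    const MEMCACHED_HOST = "my-cluster.cfg.use1.cache.amazonaws.com:11211";

    export const handler = async (event: unknown) => {
      // Create the client per invocation so a connection frozen by Lambda
      // is never reused on the next invocation of the same instance.
      const client = new Memcached(MEMCACHED_HOST);
      try {
        // get() stands in for whatever cache work the request actually does.
        const value = await new Promise((resolve, reject) =>
          client.get("some-key", (err, data) => (err ? reject(err) : resolve(data)))
        );
        return { statusCode: 200, body: JSON.stringify({ value }) };
      } finally {
        // end() closes all of the client's active connections, so no stale
        // socket survives into the frozen execution environment.
        client.end();
      }
    };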

glasser commented Oct 20, 2022

We no longer maintain a wrapper around a memcache client implementing our cache interface; instead, we maintain @apollo/utils.keyvAdapter which wraps Keyv implementations such as @keyv/memcache. Issues like this can be addressed with the Keyv project.
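
A minimal sketch of that setup, assuming Apollo Server 4 (@apollo/server) together with keyv, @keyv/memcache, and @apollo/utils.keyvadapter; the memcached endpoint and the trivial schema are placeholders:

    import { ApolloServer } from "@apollo/server";
    import Keyv from "keyv";
    import KeyvMemcache from "@keyv/memcache";
    import { KeyvAdapter } from "@apollo/utils.keyvadapter";

    // Trivial schema so the sketch is self-contained.
    const typeDefs = `#graphql
      type Query {
        ping: String
      }
    `;
    const resolvers = { Query: { ping: () => "pong" } };

    const server = new ApolloServer({
      typeDefs,
      resolvers,
      // KeyvAdapter wraps a Keyv instance to satisfy Apollo's KeyValueCache
      // interface; the memcached endpoint below is a placeholder.
      cache: new KeyvAdapter(
        new Keyv({ store: new KeyvMemcache("my-memcached.example.internal:11211") })
      ),
    });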

glasser closed this as completed Oct 20, 2022
github-actions bot locked as resolved and limited conversation to collaborators Apr 19, 2023