
Nice-grpc client stops sending outbound requests for existing client/channel #321

Open
SoftMemes opened this issue Mar 16, 2023 · 3 comments

Comments

@SoftMemes

First of all, thank you for creating nice-grpc - it does indeed make things nice, especially when used together with code generators like ts-proto.

I am currently experiencing an issue that has proven difficult to reproduce. It may not have anything to do with nice-grpc, but I would appreciate any pointers if anyone has had similar problems, or suggestions for how to troubleshoot.

In short, I have a nice-grpc-based gRPC service that in turn calls another gRPC service with a client created using nice-grpc. I create a channel, and from it a client, once at service startup, and everything works as expected.
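
Roughly, the setup looks like this (the address and the ts-proto-generated DownstreamServiceDefinition are illustrative):

import {createChannel, createClient, Client} from 'nice-grpc';
// Hypothetical ts-proto-generated service definition, for illustration only.
import {DownstreamServiceDefinition} from './generated/downstream';

// Created once at startup and reused for the lifetime of the process.
const channel = createChannel('downstream.internal:443');
const client: Client<typeof DownstreamServiceDefinition> =
  createClient(DownstreamServiceDefinition, channel);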

However, under some circumstances (potentially load related, though I have been unable to reproduce this reliably), the client stops sending requests to the other service. I have some custom logging middleware in place and can see that the client method is called on the service client proxy and the middleware runs, but there is never a response. I also see no trace of the call on the other side (in the service being called).
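
For context, the logging middleware follows the usual nice-grpc client middleware shape, roughly (simplified; names are illustrative):

import {createClientFactory, ClientMiddleware} from 'nice-grpc';

// Logs every outbound call; in the failure mode the '->' line appears,
// but neither the 'ok' nor the 'failed' line ever follows.
const loggingMiddleware: ClientMiddleware = async function* (call, options) {
  console.log('->', call.method.path);
  try {
    const result = yield* call.next(call.request, options);
    console.log('<-', call.method.path, 'ok');
    return result;
  } catch (error) {
    console.log('<-', call.method.path, 'failed:', error);
    throw error;
  }
};

// Replaces the plain createClient() call from the sketch above.
const client = createClientFactory()
  .use(loggingMiddleware)
  .create(DownstreamServiceDefinition, channel);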

Restarting the calling service resolves the problem, so it is definitely related to some in-memory state in the calling service rather than a problem with the service being called or with the network layer.

Are there any known circumstances in which a client would die in this manner?

@aikoven
Contributor

aikoven commented Mar 16, 2023

I think I've seen problems like this. They were too rare for us to reproduce as well. We mostly work around them using deadlines (see the sketch below), but that does not help e.g. for long-running server streams.
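
A minimal sketch of the deadline workaround, assuming the optional nice-grpc-client-middleware-deadline package and an illustrative ts-proto-generated service (someMethod and request are placeholders):

import {createChannel, createClientFactory} from 'nice-grpc';
import {deadlineMiddleware} from 'nice-grpc-client-middleware-deadline';
// Hypothetical ts-proto-generated definition, for illustration only.
import {DownstreamServiceDefinition} from './generated/downstream';

const channel = createChannel('downstream.internal:443');
const client = createClientFactory()
  .use(deadlineMiddleware)
  .create(DownstreamServiceDefinition, channel);

// With a deadline, a stuck call eventually fails with DEADLINE_EXCEEDED
// instead of hanging forever.
const response = await client.someMethod(request, {
  deadline: new Date(Date.now() + 10_000),
});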

It may be a bug in grpc-js. Or in some proxy between the client and the server.

To troubleshoot this, you may enable verbose logging in grpc-js:

import {setLogVerbosity, logVerbosity} from '@grpc/grpc-js';

// Raise grpc-js internal logging to DEBUG.
setLogVerbosity(logVerbosity.DEBUG);

Do this on both sides. It may give some insights.

@SoftMemes
Author

Thank you for the pointers; I will turn on DEBUG logging and see what I can gather.

For anyone else who ends up here in the future: setting the log verbosity by itself doesn't actually produce any output; you also need to set the GRPC_TRACE environment variable (I couldn't find a code equivalent) to "all" or a more specific setting, see grpc/grpc-node#2298.
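
For example, launching the process like this (the entry point is illustrative, with setLogVerbosity already called in code):

GRPC_TRACE=all node dist/server.js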

@anmol242
Contributor

anmol242 commented Aug 6, 2023

I'm not sure if this addresses the problem, but we recently saw similar behaviour in our organization: the gRPC client abruptly stopped delivering requests to our gRPC server, also at random. When we raised the issue in the grpc-node repo, they fixed something in a patch version. Perhaps that patch also resolves the issue reported above.
grpc/grpc-node#2518 (comment)
