First of all, thank you for creating nice-grpc - it does indeed make things nice, especially when used together with code generators like ts-proto.
I am currently experiencing an issue that has proven difficult to reproduce. This may not have anything to do with nice-grpc, but I would appreciate any pointers if anyone has had similar problems, or suggestions for how to troubleshoot.
In short, I have a nice-grpc-based gRPC service that in turn calls another gRPC service, using a client created with nice-grpc. I create a channel, and from it a client, once at startup of this service, and everything works as expected.
However, under some circumstances (potentially load-related, though I have been unable to reproduce this reliably), the client stops sending requests to the other service. I have some custom logging middleware in place and can see that the client method is being called on the service client proxy and that the middleware runs, but there is no response. I can also see no trace of the call on the other side (in the service being called).
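For context, the setup and logging middleware described above look roughly like this (a sketch; `ExampleServiceDefinition` is a placeholder for the actual ts-proto-generated definition):

```ts
import {createChannel, createClientFactory, ClientMiddleware} from 'nice-grpc';
import {ExampleServiceDefinition} from './compiled/example'; // hypothetical path

// Log every outgoing call so we can see whether the client proxy is invoked.
const loggingMiddleware: ClientMiddleware = async function* (call, options) {
  console.log('client call start:', call.method.path);
  try {
    return yield* call.next(call.request, options);
  } finally {
    console.log('client call end:', call.method.path);
  }
};

// Channel and client are created once at startup and reused for all calls.
const channel = createChannel('other-service:4000');
const client = createClientFactory()
  .use(loggingMiddleware)
  .create(ExampleServiceDefinition, channel);
```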
Restarting the calling service resolves the problem, so it is definitely related to some in-memory state in the calling service rather than a problem with the service being called or with the network layer.
Are there any known circumstances in which a client would die in this manner?
I think I've seen problems like this too. They were too rare for me to reproduce as well. We mostly work around them using deadlines, but that doesn't work e.g. for long-running server streams.
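For reference, the deadline workaround can look like this (a sketch assuming the nice-grpc-client-middleware-deadline package; the service definition and method names are placeholders):

```ts
import {createChannel, createClientFactory} from 'nice-grpc';
import {deadlineMiddleware} from 'nice-grpc-client-middleware-deadline';
import {ExampleServiceDefinition} from './compiled/example'; // hypothetical path

const channel = createChannel('other-service:4000');

// The middleware adds a `deadline` call option; the call fails with
// DEADLINE_EXCEEDED if no response arrives in time.
const client = createClientFactory()
  .use(deadlineMiddleware)
  .create(ExampleServiceDefinition, channel);

async function callWithDeadline() {
  return client.exampleUnaryMethod(
    {id: '42'}, // placeholder request
    {deadline: new Date(Date.now() + 10_000)},
  );
}
```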
It may be a bug in grpc-js. Or in some proxy between the client and the server.
To troubleshoot this, you may enable verbose logging in grpc-js:
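For example (a minimal sketch; `setLogVerbosity` and `logVerbosity` are exported by @grpc/grpc-js):

```ts
import {setLogVerbosity, logVerbosity} from '@grpc/grpc-js';

// Raise grpc-js's internal log level to DEBUG.
setLogVerbosity(logVerbosity.DEBUG);

// Note: trace output is additionally gated by the GRPC_TRACE environment
// variable, e.g. start the process with:
//   GRPC_VERBOSITY=DEBUG GRPC_TRACE=all node server.js
```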
Thank you for the pointers; I will turn on DEBUG logging and see what I can gather.
For anyone else who ends up here in the future: setting the log level by itself doesn't actually produce any output; you also need to set the GRPC_TRACE environment variable (I couldn't find a code equivalent) to "all" or a more specific value, see grpc/grpc-node#2298.
I'm not sure if this addresses the problem, but we recently saw similar behaviour in our organization, where the gRPC client abruptly stopped delivering requests to our gRPC server. It also occurred at random. When we raised the issue in the grpc-node repo, they fixed something in a patch version; perhaps that patch will also resolve the issue reported above: grpc/grpc-node#2518 (comment)