Public visor losing transports / can't establish stcpr transport to public visor #1936
When I check back on the number of transports on that public visor now, it's very few: just over a dozen.
This is certainly not the desired behavior; there is no reason for such a significant reduction in the number of transports to a public visor over time. If a transport fails, remote visors should attempt to autoconnect to that visor again. When I check the service discovery, the public visor is still registered there.
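For reference, this is roughly how I'm checking the transport count and the service discovery registration. A sketch only: I'm assuming skywire-cli's `visor tp ls` subcommand and the production service discovery at sd.skycoin.com; the exact subcommands and endpoint may differ by version and deployment.

```bash
# count the transports currently held by the visor
skywire-cli visor tp ls | wc -l

# check whether the public visor is still registered with the service discovery
curl -s "https://sd.skycoin.com/api/services?type=visor" \
  | grep -c 03773464102a7fcb9021962a4a6cf3c6a23739a8720e8b43280b1fd078cfbd6bfb
```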
I further tested setting a persistent transport to the public visor, instead of relying on public autoconnect or manually attempting transport creation.
My local visor was still unable to create an stcpr transport to the public visor.
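For context, the persistent transport was set in the visor's config file, roughly as below. A sketch: I believe the field is `persistent_transports` in current configs, but the exact schema may vary by version.

```json
"persistent_transports": [
  {
    "pk": "03773464102a7fcb9021962a4a6cf3c6a23739a8720e8b43280b1fd078cfbd6bfb",
    "type": "stcpr"
  }
]
```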
When I attempt to create a sudph transport from my local visor to the public visor, it reports success, but when I use that transport in a route to the proxy server, I'm never able to connect to anything via the proxy connection. When I restart the remote visor as a non-public visor and try again, I'm able to access the proxy server again immediately, without issues or errors.
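The manual transport attempts above were made roughly like this. A sketch: the flag and subcommand names are from memory and may differ between skywire-cli versions.

```bash
# attempt an stcpr transport to the public visor (never succeeds)
skywire-cli visor tp add -t stcpr 03773464102a7fcb9021962a4a6cf3c6a23739a8720e8b43280b1fd078cfbd6bfb

# attempt a sudph transport (reports success, but routes over it never connect)
skywire-cli visor tp add -t sudph 03773464102a7fcb9021962a4a6cf3c6a23739a8720e8b43280b1fd078cfbd6bfb
```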
On starting visor 03773464102a7fcb9021962a4a6cf3c6a23739a8720e8b43280b1fd078cfbd6bfb as a public visor, it initially gets a lot of transports, over 200, but this number dwindles over time to less than a dozen. I've observed this a few times.
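For completeness, the relevant config knobs as I understand them: `is_public` is set on the public visor itself so that it registers with the service discovery, while remote visors connect to public visors automatically when `public_autoconnect` is set under `transport`. A sketch; these key names are from recent skywire configs and may differ by version.

```json
{
  "is_public": true,
  "transport": {
    "public_autoconnect": true
  }
}
```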
When I try to use a proxy server for a route that goes through said visor, the bandwidth seems low, but this may be unrelated or anomalous.
I'm unsure whether this is an issue with the deployment itself, as this visor runs inside the deployment.