set the configured protocol transport for service metadata #9490
Conversation
when registering the service with …

meh but now we need to configure all service names as … or we register the service as is and read the port and dns name from metadata? nah, how would we know the dns service name from the grpc server config? we need to make the service names configurable for clients ... 🤔

actually, none of the changes are necessary. the transport is ignored by clients anyway. the reva pool will just skip the service lookup if the address has a …
I added a commit that makes the endpoints used by the gateway configurable again. Together with cs3org/reva#4744 we can now test how ocis scales in kubernetes. To enable retries for cs3 grpc clients (anything that uses the reva pool to get a selector) the endpoint must not be configured as a plain service name, e.g. … (see the sketch below). More things to test: …
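A minimal sketch of what such a client endpoint could look like, assuming a headless kubernetes service for the gateway; the DNS name, namespace and port are placeholders I made up, not values from this PR:

```
"OCIS_REVA_GATEWAY": "dns:///ocis-gateway.ocis.svc.cluster.local:9142",
```

With a `dns:///` target the reva pool can hand the address straight to the grpc client resolver instead of going through the service registry.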
Added a commit to fix the startup of ocis by parsing the … Double checked this works by starting ocis with a gateway that listens on a unix socket:

"OCIS_REVA_GATEWAY": "unix:ocis-reva-gateway.sock",
"GATEWAY_GRPC_ADDR": "ocis-reva-gateway.sock",
"GATEWAY_GRPC_PROTOCOL": "unix",
"STORAGE_USERS_GATEWAY_GRPC_ADDR": "unix:ocis-reva-gateway.sock",
"WEB_GATEWAY_GRPC_ADDR": "unix:ocis-reva-gateway.sock",
moved to dedicated issue: #9718
StorageSharesEndpoint string `yaml:"-"`
AppRegistryEndpoint   string `yaml:"-"`
OCMEndpoint           string `yaml:"-"`
UsersEndpoint         string `yaml:"users_endpoint" env:"GATEWAY_USERS_ENDPOINT" desc:"The USERS API endpoint." introductionVersion:"%%NEXT%%"`
These are a lot of new envvars. Can't we make it more convenient for devops to configure them? I imagine they don't want to maintain another 13 envvars.
If it's only about the protocol, can't we have a global envvar for that and just add hardcoded service names?
If that is not possible, maybe it makes sense to NOT support env configuration? One could still add them to their yaml file if they so choose (a sketch of what that could look like follows below).
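Purely for illustration, such a yaml entry might look roughly like this; only the `users_endpoint` key is taken from the diff above, the nesting of the gateway section and the DNS name are assumptions:

```yaml
# hypothetical ocis config excerpt; hostname, namespace and port are placeholders
gateway:
  users_endpoint: dns:///ocis-users-headless.ocis.svc.cluster.local:9144
```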
Our current default uses the service name, and we need to make this configurable to test dns based load balancing that skips the registry lookup. The grpc client will then use the configured dns:///service or kubernetes:///service name to look up all available endpoints. That is why it has to be 13 different endpoints (see the sketch below).
If we decide to change the default, e.g. to unix sockets on single node deployments, we need to change the helm charts as well. But since we do not change the default from the existing hardcoded service name this is not breaking the chart.
For testing the load balancing, changing env vars in kubernetes is actually a lot easier than providing custom yaml files.
So I don't see any inconvenience for devops right now. And if we decide to change the default WE - as in engineering, not devops - need to update the helm charts anyway.
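As a hedged illustration of two of those endpoints pointed at kubernetes services (the env var names appear in this PR, the service names, namespaces and ports are placeholders):

```
"GATEWAY_USERS_ENDPOINT": "dns:///ocis-users-headless.ocis.svc.cluster.local:9144",
"GATEWAY_PERMISSIONS_ENDPOINT": "kubernetes:///ocis-settings.ocis:9191",
```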
Ok then fine for me. But if these are real envvars they need a proper description please. "The USERS API endpoint." is not explanatory enough.
Not talking about the endpoint envvars like … Taking your comment from above, imho it does not make that much sense to configure all services individually, see your example below.
Why not use additional global envvars and add them to each affected service envvar to define defaults (a sketch of the idea follows below)?
Note that the endpoint envvar descriptions need an update because …
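What this suggestion could look like, expressed with the semicolon-separated env tags that ocis config structs use for global/local pairs; this is only a sketch of the idea, the hypothetical OCIS_GRPC_PROTOCOL variable is not part of this PR:

```go
// Illustrative only: a hypothetical global OCIS_GRPC_PROTOCOL that the
// service-specific GATEWAY_GRPC_PROTOCOL would override; this PR does not add it.
Protocol string `yaml:"protocol" env:"OCIS_GRPC_PROTOCOL;GATEWAY_GRPC_PROTOCOL" desc:"The transport protocol of the grpc service."`
```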
@kobergj please re-review
- Taking the pattern `GATEWAY_service_name_ENDPOINT`: this means that some, but not all, services need to have an endpoint definition in the gateway service. We have a checklist for when a new service is created. This needs to be added as an item, with a description where applicable.
- The README.md of the gateway service needs an update to describe the new settings (with more words) as stated in the changelog: "This allows configuring services to listen on `tcp` or `unix` sockets and clients to use the `dns`, `kubernetes` or `unix` protocol URIs instead of service names."
- `GATEWAY_PERMISSIONS_ENDPOINT` --> "The endpoint of the settings service." The naming of the envvar does not match the service name.
- There is a new envvar `COLLABORATION_GRPC_PROTOCOL`, but just for that service. In former commits we had that scheme in many other services too...?
When the gateway needs to talk to a new service it needs to follow the pattern. This might happen when the CS3 API sees changes that affect the way the gateway interacts with other CS3 services. But that is not part of the process of introducing a new service.
I plan to add that to the documentation, with an example that replaces all internal tcp connections with unix sockets, in a subsequent PR. As that requires more testing I did not want to document this change in depth. The idea is to also test the …
Well, the settings service implements the gRPC permissions endpoint. The documentation tells you what to set this to; the gateway only needs the permissions endpoint of the service. That is why the description differs.
Yes, it was missing from the collaboration service. All other services could already be configured to use unix sockets; this can now also be done for the collaboration service (see the sketch below). Sound remarks! Alas, this PR is more intended to make changing and testing the way ocis behaves in kubernetes possible. The results of that testing will come in another PR.
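A rough sketch of what that could look like for the collaboration service, in the same env style as the gateway example above; `COLLABORATION_GRPC_PROTOCOL` is from this PR, while the `COLLABORATION_GRPC_ADDR` name and the socket path are assumptions made by analogy with the gateway variables:

```
"COLLABORATION_GRPC_PROTOCOL": "unix",
"COLLABORATION_GRPC_ADDR": "ocis-collaboration.sock",
```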
Thanks for the explanations.
You can of course overrule my change request (comments), but 2/3 should be fixed here and not in another PR...
@mmattel I updated the gateway readme to give a hint at what this can do, but it really is only meant as a way to explore different types of deployment. I also added a … Lastly, I changed the description of GATEWAY_PERMISSIONS_ENDPOINT to align with e.g. STORAGE_USERS_OCIS_PERMISSIONS_ENDPOINT, which also uses 'permissions service' in the description and uses the …
Only one small suggestable commit, the rest is 👍
We should clarify somehow that permissions is not a service as written (I first noticed that through your comment about GATEWAY_PERMISSIONS_ENDPOINT and STORAGE_USERS_OCIS_PERMISSIONS_ENDPOINT) - but this is only a docs thing...
set the configured protocol transport for service metadata
This allows using `dns` or `unix` as the grpc protocol for services. Required reva changes have been bumped. Related: …
With this PR we can configure the grpc clients to use dns:///service with headless services in kubernetes to gracefully handle connection errors when a pod goes down.
To also handle scale up we need to make grpc servers force-close connections by setting the keepalive parameters (a sketch of the idea below):
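For illustration, a generic grpc-go server-side keepalive setup along these lines makes clients periodically reconnect and re-resolve their `dns:///` target, so pods added on scale-up start receiving traffic; the durations and wiring are placeholders, not the exact values used by ocis:

```go
package server

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// newGRPCServer returns a server that closes connections after a maximum age,
// forcing clients to reconnect and re-resolve the target so newly scaled-up
// pods are picked up. Durations are illustrative placeholders.
func newGRPCServer() *grpc.Server {
	return grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionAge:      30 * time.Second, // close connections older than this
			MaxConnectionAgeGrace: 10 * time.Second, // let in-flight RPCs finish first
		}),
	)
}
```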