
Customize the number of glanceAPI workers #667

Closed
wants to merge 1 commit

Conversation

fmount
Contributor

@fmount fmount commented Dec 5, 2024

No description provided.

@openshift-ci openshift-ci bot requested review from lewisdenny and viroel December 5, 2024 08:06
Contributor

openshift-ci bot commented Dec 5, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fmount

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved label Dec 5, 2024
@fmount fmount requested a review from konan-abhi December 5, 2024 08:07
@@ -12,7 +12,7 @@ show_multiple_locations={{ .ShowMultipleLocations }}
 enabled_import_methods=[web-download,glance-direct]
 bind_host=localhost
 bind_port=9293
-workers=3
+workers={{ .Workers }}
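For context on what the template change implies: once the value becomes {{ .Workers }}, the operator has to supply a Workers value when rendering glance-api.conf. Below is a minimal, hypothetical sketch of that rendering step using Go's standard text/template package; the struct and field names are illustrative and not taken from the actual glance-operator code.

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// templateData mirrors the kind of values the operator would pass when
// rendering glance-api.conf. The struct and field names are illustrative,
// not the actual glance-operator types.
type templateData struct {
	ShowMultipleLocations bool
	Workers               int
}

const confSnippet = `show_multiple_locations={{ .ShowMultipleLocations }}
enabled_import_methods=[web-download,glance-direct]
bind_host=localhost
bind_port=9293
workers={{ .Workers }}
`

func main() {
	tmpl := template.Must(template.New("glance-api.conf").Parse(confSnippet))

	// With Workers=3 the rendered output matches the previous hard-coded value.
	data := templateData{ShowMultipleLocations: true, Workers: 3}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```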
Contributor

should we cap the maximum number of workers at the number of CPUs?

Contributor Author

We can, yes, but we might need more logic if we don't expose the parameter and instead allow customers to override it via customServiceConfig.
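To illustrate the cap discussed above, here is a hedged sketch of how the value could be clamped before it ever reaches the template; clampWorkers and cpuLimit are hypothetical names, and the real bound would come from the glance-api container's CPU allocation rather than anything shown here.

```go
package main

import "fmt"

// clampWorkers is a hypothetical helper: it bounds the requested worker
// count to the CPU budget of the glance-api container. cpuLimit would come
// from the container's resource limits (e.g. spec.resources.limits.cpu),
// not from the node the operator itself runs on.
func clampWorkers(requested, cpuLimit int) int {
	if requested < 1 {
		return 1 // always keep at least one worker
	}
	if cpuLimit > 0 && requested > cpuLimit {
		return cpuLimit
	}
	return requested
}

func main() {
	fmt.Println(clampWorkers(12, 4)) // 4: capped at the CPU limit
	fmt.Println(clampWorkers(2, 4))  // 2: already within the cap
}
```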

Contributor

@stuggi stuggi left a comment

Why should we add a parameter for this and not use the config customization if we need it? Just wondering, as we don't add parameters for each of the settings in the service config when they can be customized using the customServiceConfig.

@fmount
Contributor Author

fmount commented Dec 5, 2024

> Why should we add a parameter for this and not use the config customization if we need it? Just wondering, as we don't add parameters for each of the settings in the service config when they can be customized using the customServiceConfig.

Mostly to keep consistency with other storage operators, but we can use customServiceConfig in this case.

@fmount
Contributor Author

fmount commented Dec 5, 2024

@konan-abhi @stuggi should we simply document how to override that value?
We might need more logic to control overrides of this parameter and how GlanceAPI can scale when replicas and workers are combined.
We can check what is passed via customServiceConfig and set boundaries for that variable. @stuggi, is this a good/better approach?
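As a rough illustration of the validation idea (checking what is passed via customServiceConfig and bounding it), the sketch below parses an INI-style snippet and clamps any workers override. It assumes the go-ini library (gopkg.in/ini.v1) purely for illustration; the function name, section handling, and hard-coded cap are all hypothetical and not the glance-operator's actual logic.

```go
package main

import (
	"fmt"

	"gopkg.in/ini.v1"
)

// maxGlanceWorkers is an illustrative upper bound; a real implementation
// would derive it from the container's CPU allocation rather than hard-coding it.
const maxGlanceWorkers = 8

// boundedWorkers is a hypothetical validation step: it looks for a "workers"
// override inside the user-provided customServiceConfig snippet and clamps
// it to an acceptable range, falling back to the operator default otherwise.
func boundedWorkers(customServiceConfig string, defaultWorkers int) (int, error) {
	cfg, err := ini.Load([]byte(customServiceConfig))
	if err != nil {
		return 0, fmt.Errorf("cannot parse customServiceConfig: %w", err)
	}

	section := cfg.Section("DEFAULT")
	if !section.HasKey("workers") {
		return defaultWorkers, nil // no override, keep the default
	}

	workers, err := section.Key("workers").Int()
	if err != nil {
		return 0, fmt.Errorf("workers must be an integer: %w", err)
	}

	switch {
	case workers < 1:
		return 1, nil
	case workers > maxGlanceWorkers:
		return maxGlanceWorkers, nil
	default:
		return workers, nil
	}
}

func main() {
	snippet := "[DEFAULT]\nworkers = 32\n"
	w, err := boundedWorkers(snippet, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(w) // 8: the requested 32 exceeds the illustrative cap
}
```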

@fmount fmount marked this pull request as draft December 5, 2024 08:29
@fmount fmount changed the title Add parameter to customize the number of glanceAPI workers Customize the number of glanceAPI workers Dec 5, 2024
@stuggi
Contributor

stuggi commented Dec 5, 2024

I think we should not have added this parameter to any operator

@stuggi
Contributor

stuggi commented Dec 5, 2024

The Manila and Cinder PRs are different: they customize the httpd processes. This should align with what was already done in the keystone PR openstack-k8s-operators/keystone-operator#500. In my opinion, if Glance is different and does not use an httpd process to control this, we should not replicate it here for the Glance config; a user should use the default interface for this.

@fmount
Contributor Author

fmount commented Dec 5, 2024

> The Manila and Cinder PRs are different: they customize the httpd processes. This should align with what was already done in the keystone PR openstack-k8s-operators/keystone-operator#500. In my opinion, if Glance is different and does not use an httpd process to control this, we should not replicate it here for the Glance config; a user should use the default interface for this.

I'm ok with removing this parameter for Glance and parsing customServiceConfig to make sure we stay under certain limits. I will update this patch accordingly! Thanks @stuggi for the input.
BTW, I can align Manila and Cinder with the keystone patch: I simply don't like the naming, and we never agreed on something we could use and spread across the operators. I think we should have periodic API reviews to keep consistency and agree on the parameters in advance.

@stuggi
Contributor

stuggi commented Dec 5, 2024

> The Manila and Cinder PRs are different: they customize the httpd processes. This should align with what was already done in the keystone PR openstack-k8s-operators/keystone-operator#500. In my opinion, if Glance is different and does not use an httpd process to control this, we should not replicate it here for the Glance config; a user should use the default interface for this.

> I'm ok with removing this parameter for Glance and parsing customServiceConfig to make sure we stay under certain limits. I will update this patch accordingly! Thanks @stuggi for the input. BTW, I can align Manila and Cinder with the keystone patch: I simply don't like the naming, and we never agreed on something we could use and spread across the operators. I think we should have periodic API reviews to keep consistency and agree on the parameters in advance.

Agreed, we need to do better on that, and also to get these things consistent across all operators. It's not efficient for this to be done by each group.

@fmount
Contributor Author

fmount commented Dec 5, 2024

> The Manila and Cinder PRs are different: they customize the httpd processes. This should align with what was already done in the keystone PR openstack-k8s-operators/keystone-operator#500. In my opinion, if Glance is different and does not use an httpd process to control this, we should not replicate it here for the Glance config; a user should use the default interface for this.

> I'm ok with removing this parameter for Glance and parsing customServiceConfig to make sure we stay under certain limits. I will update this patch accordingly! Thanks @stuggi for the input. BTW, I can align Manila and Cinder with the keystone patch: I simply don't like the naming, and we never agreed on something we could use and spread across the operators. I think we should have periodic API reviews to keep consistency and agree on the parameters in advance.

> Agreed, we need to do better on that, and also to get these things consistent across all operators. It's not efficient for this to be done by each group.

For the record: this comes from a conversation about scalability we had yesterday within the storage group. The outcome regarding control over scalability, placement, and upper limits involved this workers parameter for Glance and the processes parameter for the Manila and Cinder APIs. I didn't plan to put this set of patches up, but considering how simple it is to add this parameter to the picture, I started with it. That being said, I'm ok with keeping consistency across the operators; I simply think we should plan to either periodically review or centralize the API for the k8s-operators, and not risk going in different directions just because I thought of a different name for a struct.

@fmount
Contributor Author

fmount commented Dec 5, 2024

@stuggi I'm closing this patch for now; I will reopen it in case we need some validation around the workers parameter.

@fmount fmount closed this Dec 5, 2024