Addon Multiple Configs with Same GVK #124

Conversation

haoqing0110 (Member)

Ref: #123

openshift-ci bot requested review from deads2k and qiujian16 on July 8, 2024
haoqing0110 (Member, Author)

/assign @qiujian16 @JoaoBraveCoding


openshift-ci bot commented Jul 8, 2024

@haoqing0110: GitHub didn't allow me to assign the following users: JoaoBraveCoding.

Note that only open-cluster-management-io members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @qiujian16 @JoaoBraveCoding

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

JoaoBraveCoding left a comment


Thank you @haoqing0110 for putting this together, it's a great start!

Configs will be ordered, with the last configured on top. For example, configs in mca will be on top of cma configs, and mca configs with higher indices
will be on top of those with lower indices.

The addon's own controller will determine how to use the configs (e.g., choose one, read all, etc).


I like this approach in the sense that we allow the addon controller to make the decision on which configs to use. However, from the perspective of the addon controller, there is no way to tell which configs were specified by CMA vs MCA, which in a multi-same-GVK scenario might be relevant information.
Example: assume CMA was configured with c1 & c2 and MCA with c3. The addon controller will see c3, c2, and c1, but will not know which configuration was specified in CMA vs MCA.
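A minimal sketch of that scenario (field shapes simplified; the OpenTelemetryCollector references are hypothetical):

```yaml
# cma.spec.installStrategy.placements[].configs -- global configs c1 and c2
configs:
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c1
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c2
---
# mca.spec.configs -- cluster-specific config c3
configs:
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c3
# merged result: the controller sees c3, c2, c1 -- with no field saying
# which entry came from cma and which from mca
```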

haoqing0110 (Member Author)


In the merge option, the configs in mca and cma will be listed in order. For example, if c1 is an OpenTelemetryCollector CR configured to collect traces, c2 is one configured to collect logs, and c3 is a new config to collect logs, the controller will see c3, c2, c1. It could always choose the first one that collects logs, which is c3, and could also choose the other one for traces, which is c1. Does this make sense from the addon's view?
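A sketch of the merged list the addon controller would observe under this option (fields simplified):

```yaml
# mca.status -- last configured on top
configReferences:
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c3   # from mca: collects logs; first log-collecting entry wins
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c2   # from cma: collects logs; shadowed by c3
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c1   # from cma: collects traces; still usable for traces
```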

JoaoBraveCoding

I understand the configs will be shown in order, but this is insufficient to inform the addon whether a resource comes from the global config in CMA or a cluster-specific config in MCA. For example, assume an OTELCollector config that collects logs (c1), but then for a given cluster we are troubleshooting something and want to send only metrics & traces, so we specify c2 & c3 (two separate OpenTelemetryCollectors). From the addon's perspective, it's not clear which config should be used even if they are sorted as (c3, c2, c1). The only alternative is to go through the resources in the Spec of MCA and calculate the intersection with the Status.
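For illustration, the workaround amounts to set arithmetic over the two lists (a sketch, with field shapes simplified):

```yaml
# mca.spec.configs -- what this cluster explicitly requested
configs:
  - {group: opentelemetry.io, resource: opentelemetrycollectors, name: c2}
  - {group: opentelemetry.io, resource: opentelemetrycollectors, name: c3}
# mca.status.configReferences -- merged result: c3, c2, c1
# intersection(spec, status) = {c2, c3}  -> cluster-specific (from mca)
# status minus spec          = {c1}      -> inherited global config (from cma)
```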

After discussing with the team, from our POV we prefer option 2. However, if the OCM team favours option 1, then we do not oppose it either, since we have the workaround of calculating the intersection.

haoqing0110 (Member Author)


OK, this case makes sense. For this proposal, option 2 can be implemented first, since it makes more sense for real use cases. As for the "global configs" issue, to avoid overrides the workaround would be to configure the "global configs" in both cma and mca.

Will update the doc and leave the other 2 options for future needs.

Comment on lines 148 to 149
The gap is that if the r2n1 and r2n2 resources are actually global configs, you may not want them to be overridden by r2n3. Compared with the "Merge" option, this option cannot cover such a case and lacks flexibility.

JoaoBraveCoding

I see this point, but at the same time this approach is more similar to the current behaviour. However, the "global configs" use case is a valid one that could maybe be addressed in a follow-up enhancement. I was thinking along the lines of CMA having a dedicated place for global configs, like it does for defaultConfigs (just an idea).
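To illustrate the idea: cma already carries a per-GVK default via supportedConfigs[].defaultConfig; the globalConfigs field below is not a real API, just a hypothetical sketch of a dedicated slot for non-overridable global configs:

```yaml
# cma.spec -- existing per-GVK default, for comparison
supportedConfigs:
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    defaultConfig:
      name: c-default
# hypothetical: a dedicated list of global configs that mca cannot override
globalConfigs:
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c1
```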


### API changes

`cma.spec.installStrategy.placements[].configs` and `mca.spec.configs` already support multiple configs.

JoaoBraveCoding

Multiple configurations are supported, but since the GVK is used as the identifier, only one is persisted, right?

haoqing0110 (Member Author)


Yes, today a user can define multiple configs with the same GVK at the API level, but only one will be listed in mca.status.
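A sketch of that current behavior (fields simplified):

```yaml
# mca.spec.configs -- two entries sharing the same GVK
configs:
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c1
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c2
# mca.status -- GVK is the identifier today, so only one entry survives
configReferences:
  - group: opentelemetry.io
    resource: opentelemetrycollectors
    name: c2   # illustrative; the other same-GVK entry is dropped
```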

name: n3
```

#### Merge

JoaoBraveCoding

For this strategy: what will happen when we have one resource in the ManagedClusterAddOn and another one with the same name, GVK, and namespace in the ClusterManagementAddOn? I would expect an automatic override, but I'm not sure whether you want to do something to keep both of them in order.

haoqing0110 (Member Author)


If the one in the ClusterManagementAddOn has the same GVK, name, and namespace as the one in the ManagedClusterAddOn, they are actually the same config. In this case, since merge uses gvk + name + namespace as the key, only one will be listed in mca.status.
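In other words, the merge key collapses identical references to one status entry (a sketch):

```yaml
# the same reference appears in both cma and mca
- group: opentelemetry.io
  resource: opentelemetrycollectors
  namespace: observability
  name: c1
# merge key = (opentelemetry.io, opentelemetrycollectors, observability, c1)
# -> mca.status.configReferences lists this config exactly once
```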

haoqing0110 force-pushed the br_multi-configs branch 2 times, most recently from a498083 to 0b1c6c3 on July 25, 2024
JoaoBraveCoding left a comment


Just some suggestions to make the proposal easier to read. But in general LGTM 👍 great job @haoqing0110

haoqing0110 (Member Author)

@qiujian16 please help do a final review.

qiujian16 (Member) left a comment


/approve
/lgtm

openshift-ci bot added the lgtm label on Jul 30, 2024

openshift-ci bot commented Jul 30, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: haoqing0110, qiujian16

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-merge-bot merged commit 3e9e332 into open-cluster-management-io:main on Jul 30, 2024
2 checks passed
haoqing0110 deleted the br_multi-configs branch on July 30, 2024