Tagging/organizing nodes #677

Open · larsks opened this issue Dec 19, 2024 · 3 comments

larsks commented Dec 19, 2024

As a lessee, I want a way to organize the nodes I have leased. That is, if I have 7 nodes:

+--------------------+
| Name               |
+--------------------+
| MOC-R4PAC22U33-S3A |
| MOC-R4PAC22U33-S3C |
| MOC-R4PAC22U31-S1B |
| MOC-R4PAC22U31-S1C |
| MOC-R4PAC22U31-S3A |
| MOC-R4PAC08U33-S1A |
| MOC-R4PAC08U31-S1A |
+--------------------+

I want some way to say, "list the nodes I'm using in cluster1" vs. "list the nodes I'm using in cluster2". In a single-user environment you might be able to manage this using the description field, but in a multi-tenant environment, particularly one with the concept of node owners vs. node lessees, this isn't a viable solution: both the owner and the lessee might want to use that field, potentially for completely different purposes.
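
For illustration, that single-user workaround might look roughly like the following, assuming the baremetal client's --description option on node set and jq for filtering (exact field and column names may vary by release):

# Record a grouping in the node's description (single-user workaround only).
openstack baremetal node set MOC-R4PAC22U31-S3A --description cluster1

# List the nodes whose description reads "cluster1".
openstack baremetal node list --fields name description -f json |
  jq -r '.[] | select(.Description == "cluster1") | .Name'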

This is a tough problem to solve, because it's really endemic to OpenStack as a whole.

The only idea I've been able to come up with is to introduce something like "node groups" -- basically, boxes for holding nodes, so I could do something like:

openstack esi nodegroup create cluster1
openstack esi nodegroup add cluster1 MOC-R4PAC22U31-S3A
openstack esi nodegroup add cluster1 MOC-R4PAC08U33-S1A
openstack esi nodegroup add cluster1 MOC-R4PAC08U31-S1A

This by itself would be useful in scripts, because we could do something like:

# Deploy nodes in nodegroup cluster1
openstack esi nodegroup members cluster1 -f value -c name |
  xargs -n1 openstack baremetal node deploy

And then perhaps use those groups in other commands, like:

openstack esi node network list --group cluster1
openstack esi node network attach --group cluster1 --network cluster1-network

And perhaps introduce some other convenience commands:

openstack esi nodegroup deploy <group>
openstack esi nodegroup undeploy <group>

Etc.

tzumainn self-assigned this Dec 19, 2024

tzumainn (Contributor) commented:

I actually started doing something similar, documented at https://esi.readthedocs.io/en/latest/usage/cluster.html. The main difference is that I've been reluctant to put the concept of a cluster of nodes into ESI-Leap, since in my mind that's entirely a leasing service, and having it keep track of something like this feels like it's crossing some boundary. I think my partial implementation relies on an orchestration file, and then we set a value on the Ironic node to mark it as belonging to a cluster.
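
For reference, a minimal sketch of that node-marking step using the standard baremetal client; the extra-field key esi_cluster is hypothetical here, not necessarily what ESI's orchestration actually writes:

# Mark a leased node as belonging to "cluster1" via the node's extra field.
openstack baremetal node set MOC-R4PAC22U31-S3A --extra esi_cluster=cluster1

# Clear the marker when the node leaves the cluster.
openstack baremetal node unset MOC-R4PAC22U31-S3A --extra esi_cluster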

larsks commented Dec 20, 2024

I really like that idea of having this information stored in the API somewhere. With a group of people responsible for a cluster, requiring everyone to maintain an orchestration file locally seems like asking for trouble ("oh, your local file wasn't up-to-date, so you only operated on a partial set of nodes...").

I'm not even arguing that we should make "cluster" a concept in ESI. I'm just asking for a mechanism for a tenant to categorize nodes in a way that is "local" to the tenant, and that won't conflict with anything the owner wants to do.

Introducing a "nodegroup" resource seems like one way of doing that.

An alternative would be to substantially refactor things and use a model analogous to the OpenShift "persistentVolumeClaim" vs. "persistentVolume" model: the thing you get as a lessee is only a proxy to the actual resource; you can set whatever metadata you want on your proxy, but you don't get direct access to the underlying resource at all.

tzumainn (Contributor) commented:

That's kind of what I did; the cluster ID is stored in an Ironic field, and openstack esi cluster list pulls from that. The difference is that the cluster ID isn't created until the cluster is orchestrated; however, we could potentially get around that, as long as you're fine with the cluster create command requiring at least one node to start off with. Is that sufficient?
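
For illustration only, a rough approximation of that lookup with today's baremetal client, assuming the cluster ID lives under the node's extra field as esi_cluster (a hypothetical key) and that jq is available; the field ESI actually uses may differ:

# List the nodes whose extra field marks them as part of "cluster1".
openstack baremetal node list --fields name extra -f json |
  jq -r '.[] | select(.Extra.esi_cluster == "cluster1") | .Name'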
