
Add patches to fix calculation of num_osds #488

Open

janhorstmann wants to merge 1 commit into main
Conversation

janhorstmann
Contributor

Original commit message:

Subject: [PATCH] ceph-config: fix calculation of `num_osds`

The number of OSDs defined by the `lvm_volumes` variable is added to `num_osds` in the task `Count number of osds for lvm scenario`. Therefore these devices must not be counted again in the task `Set_fact num_osds (add existing osds)`.
There are currently three problems with the existing approach:

  1. Bluestore DB and WAL devices are counted as OSDs
  2. `lvm_volumes` supports a second notation to directly specify logical volumes instead of devices when the `data_vg` key exists. This scenario is not yet accounted for (see the example after this list).
  3. The `difference` filter used to remove devices from `lvm_volumes` returns a list of **unique** elements, thus not accounting for multiple OSDs on a single device
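For reference, the two `lvm_volumes` notations look roughly like this (device paths and volume group/LV names are placeholders, not values from the patch):

```yaml
lvm_volumes:
  # Device notation: ceph-volume creates the data LV on the given device
  - data: /dev/sdb
  # LV notation: data_vg is set, so 'data' names an existing logical volume
  - data: osd-data-1
    data_vg: vg-ceph-osd
```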

The first problem is solved by filtering the list of logical volumes for devices used as `type` `block`.
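To illustrate, here is an abbreviated and hypothetical excerpt of `ceph-volume lvm list --format json` output (the names are invented): each OSD groups one `block` LV with optional `db`/`wal` LVs, and several OSDs may share a backing device:

```json
{
  "0": [
    { "type": "block", "lv_path": "/dev/ceph-vg0/osd-block-0", "devices": ["/dev/sdb"] },
    { "type": "db", "lv_path": "/dev/ceph-dbvg/osd-db-0", "devices": ["/dev/nvme0n1"] }
  ],
  "1": [
    { "type": "block", "lv_path": "/dev/ceph-vg0/osd-block-1", "devices": ["/dev/sdb"] }
  ]
}
```

Only the two `block` entries are OSDs (both on `/dev/sdb`); the `db` entry must not add to the count, and deduplicating the backing devices would miss the second OSD on `/dev/sdb`.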
For the second and third problems, lists are created from `lvm_volumes` containing either paths to devices or paths to logical volumes. For the second problem the output of `ceph-volume` is simply filtered to drop entries whose `lv_path` appears in the list of logical volume paths described above.
To solve the third problem the remaining OSDs in the output are compiled into a list of their backing devices, which is then filtered, device by device rather than with the set-based `difference` filter, to drop devices appearing in the list of devices from `lvm_volumes`.

Fixes: ceph/ceph-ansible#7435

Signed-off-by: Jan Horstmann <[email protected]>
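A rough sketch of the counting logic described above, for illustration only; the register name `ceph_volume_lvm_list` and the helper variable names are assumptions made here, and the actual patch may be structured differently:

```yaml
# Hypothetical sketch of the described approach, not the actual patch.
# 'ceph_volume_lvm_list' is assumed to hold the result of
# 'ceph-volume lvm list --format json'.
- name: Build list of LV paths declared in lvm_volumes (LV notation)
  ansible.builtin.set_fact:
    _lvm_volumes_lv_paths: "{{ _lvm_volumes_lv_paths | default([]) + ['/dev/' ~ item.data_vg ~ '/' ~ item.data] }}"
  loop: "{{ lvm_volumes | default([]) | selectattr('data_vg', 'defined') | list }}"

- name: Set_fact num_osds (add existing osds)
  ansible.builtin.set_fact:
    num_osds: "{{ num_osds | int + (_existing_osd_devices | reject('in', _lvm_volumes_devices) | list | length) }}"
  vars:
    # Problem 1: only LVs of type 'block' represent OSDs; 'db' and 'wal' LVs are skipped
    _existing_block_lvs: "{{ ceph_volume_lvm_list.stdout | default('{}') | from_json | dict2items | map(attribute='value') | flatten | selectattr('type', 'equalto', 'block') | list }}"
    # Problem 2: drop OSDs whose data LV is already declared in lvm_volumes,
    # then collect the backing devices of the remaining OSDs without deduplication
    _existing_osd_devices: "{{ _existing_block_lvs | rejectattr('lv_path', 'in', _lvm_volumes_lv_paths | default([])) | map(attribute='devices') | flatten | list }}"
    # Devices declared in lvm_volumes with the plain device notation; rejecting
    # them one by one (problem 3) keeps multiple OSDs on the same device countable
    _lvm_volumes_devices: "{{ lvm_volumes | default([]) | rejectattr('data_vg', 'defined') | map(attribute='data') | list }}"
```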

@janhorstmann janhorstmann requested a review from berendt May 25, 2024 09:59
@janhorstmann
Contributor Author

This does not seem to be moving along upstream, so I propose adding the patches to our container build.

@janhorstmann janhorstmann force-pushed the fix/ceph-config-fix-calculation-of-num_osds branch from 7308b66 to 1569e58 on May 29, 2024 14:24
@janhorstmann
Contributor Author

Quincy patch fixed, CI is happy now.

Successfully merging this pull request may close these issues.

Calculated value for osd target memory too high for deployments with multiple OSDs per device