How to deploy /dev/sdb, /dev/sdc and use only /dev/nvme0n1 as the db device? #1763
Comments
@akumacxd You might try the drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    limit: 1
  block_db_size: '2G'
But as ceph-volume complains, please remove any GPT headers from the disks.
Those probably need to be removed from sda/b and all the NVMe devices. HTH
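For reference, a minimal sketch of how leftover GPT headers could be cleared, assuming the device holds nothing you still need (shown for /dev/sdb only as an example; this destroys data on that device):

# Wipe GPT/partition metadata so ceph-volume will accept the disk
sgdisk --zap-all /dev/sdb
wipefs --all /dev/sdb

# Alternatively, let ceph-volume clean the device itself
ceph-volume lvm zap --destroy /dev/sdb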
How can I remove the GPT header from /dev/sda? /dev/sda is the OS disk:
node004:~ # lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─vgoo-lvroot 254:0    0   17G  0 lvm  /
  └─vgoo-lvswap 254:1    0    2G  0 lvm  [SWAP]
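Rather than wiping the OS disk, one option might be to keep it out of the selection entirely with a size filter. This is only a sketch: the bounds are an assumption based on the 10 GB data disks and the 20 GB OS disk in this setup, and it is the same approach as the drive_groups.yml shown further down in this thread.

drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    size: '9GB:12GB'   # matches the 10 GB sdb/sdc, never the 20 GB OS disk
  db_devices:
    rotational: 0
    limit: 1
  block_db_size: '2G'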
If I add an OSD disk to a node, the drive group does not find the NVMe disk (nvme0n2):
node004:~ # lsblk
NAME                                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                 8:0    0   20G  0 disk
├─sda1                                              8:1    0    1G  0 part /boot
└─sda2                                              8:2    0   19G  0 part
  ├─vgoo-lvroot                                   254:0    0   17G  0 lvm  /
  └─vgoo-lvswap                                   254:1    0    2G  0 lvm  [SWAP]
sdb                                                 8:16   0   10G  0 disk
└─ceph--block--0515f9d7--3407--46a5--be68--db80fc789dcc-osd--block--9a914f7d--ae9c--451a--ac7e--bcb6cb1fc926 254:4 0 9G 0 lvm
sdc                                                 8:32   0   10G  0 disk
└─ceph--block--9f7394b2--3ad3--4cd8--8267--7e5993af1271-osd--block--79f5920f--b41c--4dd0--94e9--dc85dbb2e7e4 254:5 0 9G 0 lvm
sdd   <== new disk                                  8:48   0   10G  0 disk
sr0                                                11:0    1 1024M  0 rom
nvme0n1                                           259:0    0   20G  0 disk
nvme0n2                                           259:1    0   20G  0 disk
├─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2b295cc9--caff--45ad--a179--d7e3ba46a39d 254:2 0 1G 0 lvm
└─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2244293e--ca96--4847--a5cb--9112f59836fa 254:3 0 1G 0 lvm
nvme0n3                                           259:2    0   20G  0 disk

cat /srv/salt/ceph/configuration/files/drive_groups.yml
# This is the default configuration and
# will create an OSD on all available drives
drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    size: '9GB:12GB'
  db_devices:
    rotational: 0
    limit: 1
  block_db_size: '2G'

admin:~ # salt-run disks.report
node004.example.com:
  |_
    - 0
    - Total OSDs: 1

      Type    Path      LV Size  % of device
      ----------------------------------------------------------------------------------------------------
      [data]  /dev/sdd  9.00 GB  100.0%

admin:~ # salt-run state.orch ceph.stage.3

node004: # lsblk
NAME                                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                 8:0    0   20G  0 disk
├─sda1                                              8:1    0    1G  0 part /boot
└─sda2                                              8:2    0   19G  0 part
  ├─vgoo-lvroot                                   254:0    0   17G  0 lvm  /
  └─vgoo-lvswap                                   254:1    0    2G  0 lvm  [SWAP]
sdb                                                 8:16   0   10G  0 disk
└─ceph--block--0515f9d7--3407--46a5--be68--db80fc789dcc-osd--block--9a914f7d--ae9c--451a--ac7e--bcb6cb1fc926 254:4 0 9G 0 lvm
sdc                                                 8:32   0   10G  0 disk
└─ceph--block--9f7394b2--3ad3--4cd8--8267--7e5993af1271-osd--block--79f5920f--b41c--4dd0--94e9--dc85dbb2e7e4 254:5 0 9G 0 lvm
sdd                                                 8:48   0   10G  0 disk
└─ceph--dc28a338--71c6--4d73--8838--ee098719571b-osd--data--ce8df5f1--7b2e--4641--80e0--7f0e44dee652 254:6 0 9G 0 lvm
sr0                                                11:0    1 1024M  0 rom
nvme0n1                                           259:0    0   20G  0 disk
nvme0n2                                           259:1    0   20G  0 disk
├─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2b295cc9--caff--45ad--a179--d7e3ba46a39d 254:2 0 1G 0 lvm
└─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2244293e--ca96--4847--a5cb--9112f59836fa 254:3 0 1G 0 lvm
nvme0n3                                           259:2    0   20G  0 disk
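To see why nvme0n2 is no longer offered as a db device, it might help to ask ceph-volume directly whether it still treats the partially used NVMe as available. This is only a diagnostic sketch; the paths and VG name are the ones from this node.

# Does ceph-volume still consider these devices usable for new LVs?
node004:~ # ceph-volume inventory /dev/nvme0n2
node004:~ # ceph-volume inventory /dev/nvme0n3

# Which db LVs already live on the shared NVMe volume group?
node004:~ # lvs -o lv_name,vg_name,lv_size ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586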
At present, all my nodes have three NVMe devices of the same model and size. I want to use nvme0n1 as the db device, nvme0n2 for the RGW index, and the last one, nvme0n3, as LVM cache. How should the drive group be configured?
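As far as I know, drive groups only describe OSD data/db/wal layouts, so the RGW-index and LVM-cache parts would likely have to be prepared outside drive_groups.yml. Below is a rough, untested sketch of that manual side; the device class name and the VG/LV names are hypothetical, and the device paths are simply the ones from this thread.

# (1) OSD on nvme0n2 intended to back the RGW index pool, tagged with its own
#     device class so a dedicated CRUSH rule can target it:
ceph-volume lvm create --bluestore --data /dev/nvme0n2 --crush-device-class nvme-index

# (2) LVM cache on nvme0n3 in front of an HDD-backed volume group
#     (VG/LV names are hypothetical):
pvcreate /dev/nvme0n3
vgextend ceph-block-0 /dev/nvme0n3
lvcreate --type cache-pool -l 100%FREE -n cache-0 ceph-block-0 /dev/nvme0n3
lvconvert --type cache --cachepool ceph-block-0/cache-0 ceph-block-0/block-0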
The following steps show how I create an OSD manually, step by step. Can a drive group create OSDs this way?
(1) node004 disk layout
# lsblk
NAME                                              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                 8:0    0   20G  0 disk
├─sda1                                              8:1    0    1G  0 part /boot
└─sda2                                              8:2    0   19G  0 part
  ├─vgoo-lvroot                                   254:0    0   17G  0 lvm  /
  └─vgoo-lvswap                                   254:1    0    2G  0 lvm  [SWAP]
sdb                                                 8:16   0   10G  0 disk
└─ceph--block--0515f9d7--3407--46a5--be68--db80fc789dcc-osd--block--9a914f7d--ae9c--451a--ac7e--bcb6cb1fc926 254:4 0 9G 0 lvm
sdc                                                 8:32   0   10G  0 disk
└─ceph--block--9f7394b2--3ad3--4cd8--8267--7e5993af1271-osd--block--79f5920f--b41c--4dd0--94e9--dc85dbb2e7e4 254:5 0 9G 0 lvm
sdd                                                 8:48   0   10G  0 disk
sr0                                                11:0    1 1024M  0 rom
nvme0n1                                           259:0    0   20G  0 disk
nvme0n2                                           259:1    0   20G  0 disk
├─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2b295cc9--caff--45ad--a179--d7e3ba46a39d 254:2 0 1G 0 lvm
└─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2244293e--ca96--4847--a5cb--9112f59836fa 254:3 0 1G 0 lvm
nvme0n3                                           259:2    0   20G  0 disk

(2) LVS/VGS information
# lvs
LV                                                 VG                                                   Attr       LSize
osd-block-9a914f7d-ae9c-451a-ac7e-bcb6cb1fc926     ceph-block-0515f9d7-3407-46a5-be68-db80fc789dcc      -wi-ao----  9.00g
osd-block-79f5920f-b41c-4dd0-94e9-dc85dbb2e7e4     ceph-block-9f7394b2-3ad3-4cd8-8267-7e5993af1271      -wi-ao----  9.00g
osd-block-db-2244293e-ca96-4847-a5cb-9112f59836fa  ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586  -wi-ao----  1.00g
osd-block-db-2b295cc9-caff-45ad-a179-d7e3ba46a39d  ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586  -wi-ao----  1.00g
osd-block-db-test                                  ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586  -wi-a-----  2.00g
lvroot                                             vgoo                                                 -wi-ao---- 17.00g
lvswap                                             vgoo                                                 -wi-ao----  2.00g

(3) Create the logical volume for the data block:
# vgcreate ceph-block-0 /dev/sdd
# lvcreate -l 100%FREE -n block-0 ceph-block-0

(4) Create the logical volume for the db/wal block:
# lvcreate -L 2GB -n db-0 ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586

(5) LVS/VGS information
# lvs
LV                                                 VG                                                   Attr       LSize
block-0                                            ceph-block-0                                         -wi-a----- 10.00g
osd-block-9a914f7d-ae9c-451a-ac7e-bcb6cb1fc926     ceph-block-0515f9d7-3407-46a5-be68-db80fc789dcc      -wi-ao----  9.00g
osd-block-79f5920f-b41c-4dd0-94e9-dc85dbb2e7e4     ceph-block-9f7394b2-3ad3-4cd8-8267-7e5993af1271      -wi-ao----  9.00g
db-0                                               ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586  -wi-a-----  2.00g
osd-block-db-2244293e-ca96-4847-a5cb-9112f59836fa  ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586  -wi-ao----  1.00g
osd-block-db-2b295cc9-caff-45ad-a179-d7e3ba46a39d  ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586  -wi-ao----  1.00g
lvroot                                             vgoo                                                 -wi-ao---- 17.00g
lvswap                                             vgoo                                                 -wi-ao----  2.00g

(6) Create the OSD with ceph-volume:
# ceph-volume lvm create --bluestore --data ceph-block-0/block-0 --block.db ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586/db-0
# ceph osd tree
ID CLASS WEIGHT  TYPE NAME         STATUS REWEIGHT PRI-AFF
-1       0.07837 root default
-7       0.01959     host node001
 2   hdd 0.00980         osd.2         up  1.00000 1.00000
 5   hdd 0.00980         osd.5         up  1.00000 1.00000
-3       0.01959     host node002
 0   hdd 0.00980         osd.0         up  1.00000 1.00000
 3   hdd 0.00980         osd.3         up  1.00000 1.00000
-5       0.01959     host node003
 1   hdd 0.00980         osd.1         up  1.00000 1.00000
 4   hdd 0.00980         osd.4         up  1.00000 1.00000
-9       0.01959     host node004
 6   hdd 0.00980         osd.6         up  1.00000 1.00000
 7   hdd 0.00980         osd.7         up  1.00000 1.00000
 8   hdd       0         osd.8         up  1.00000 1.00000
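As a quick check afterwards, ceph-volume can report which data and db volumes each OSD ended up with (just a usage note, reusing the VG/LV names from the steps above):

# List all OSDs prepared on this node, including their block and block.db devices:
node004:~ # ceph-volume lvm list

# Or restrict the output to the LV just created:
node004:~ # ceph-volume lvm list ceph-block-0/block-0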
Description of Issue/Question
How do I deploy /dev/sdb and /dev/sdc using only /dev/nvme0n1 as the db device?
In other words, I want to use just a single NVMe device as the db device.
In addition, drive groups always consider the system disk /dev/sda. Could they be designed to ignore a particular disk?
2 HDDs:
Vendor: VMware
Model: VMware Virtual S
Size: 10GB
3 NVMe devices:
Vendor: VMware
Model: VMware Virtual NVMe Disk
Size: 20GB
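Given the hardware above, one hedged alternative to path-based selection could be filtering on the reported model strings. This is an untested sketch; it assumes the drive group model filter matches the model values shown in this inventory.

drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    model: 'VMware Virtual S'          # the two 10 GB HDDs
  db_devices:
    model: 'VMware Virtual NVMe Disk'  # the 20 GB NVMe devices
    limit: 1                           # hand only one NVMe to the db role
  block_db_size: '2G'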
Setup
(Please provide relevant configs and/or SLS files (Be sure to remove sensitive info).)
The disks report shows me that it will use two NVMe disks.
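If only one NVMe should be used, it may be worth re-running the report after each change to drive_groups.yml and only then applying stage 3 (the same commands already used earlier in this thread):

admin:~ # salt-run disks.report
admin:~ # salt-run state.orch ceph.stage.3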
Versions Report
(Provided by running:
rpm -qi salt-minion
rpm -qi salt-master)