Merge pull request #4373 from sunilangadi2/external_migration
Live migration of RBD images from one Ceph cluster to another Ceph cluster.
Showing 5 changed files with 584 additions and 5 deletions.
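The suite added below exercises RBD live migration in import-only mode: the destination cluster is pointed at an image on a source (external) Ceph cluster through a native-format source-spec, and the migration is driven with the prepare, execute, and commit steps of the rbd CLI. As a rough illustration only (not part of this commit), the following Python sketch shows that flow; the pool and image names, the spec path, and the way the source cluster's monitors and keyring are supplied are assumptions made for the example:

import json
import subprocess

def run(cmd):
    """Run a CLI command on the destination cluster's client node and echo it."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical names; the actual pools/images come from rep_pool_config/ec_pool_config in the suite.
SRC_POOL, SRC_IMAGE = "rep_pool_src", "image1"
DST_POOL, DST_IMAGE = "rep_pool_dst", "image1"

# Native-format source spec (type/pool_name/image_name are documented keys).
# How the external source cluster's mon addresses and keyring are referenced is
# intentionally left out here; the test module is assumed to wire that in.
spec = {"type": "native", "pool_name": SRC_POOL, "image_name": SRC_IMAGE}
with open("/tmp/native_spec.json", "w") as f:
    json.dump(spec, f)

# 1. Prepare: create the destination image linked to the external source (import-only).
run(["rbd", "migration", "prepare", "--import-only",
     "--source-spec-path", "/tmp/native_spec.json", f"{DST_POOL}/{DST_IMAGE}"])

# 2. Execute: copy the image data while clients can already use the destination image.
run(["rbd", "migration", "execute", f"{DST_POOL}/{DST_IMAGE}"])

# 3. Commit: finalize the migration once execution has completed.
run(["rbd", "migration", "commit", f"{DST_POOL}/{DST_IMAGE}"])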
suites/squid/rbd/tier-2_rbd_migration_external_ceph.yaml: 169 additions & 0 deletions
@@ -0,0 +1,169 @@
#===============================================================================================
# Tier-level: 2
# Test-Suite: tier-2_rbd_migration_external_ceph.yaml
#
# Cluster Configuration:
#    cephci/conf/squid/rbd/5-node-2-clusters.yaml
#    No of Clusters: 2
#    Each cluster configuration:
#      5-Node cluster (RHEL-8.3 and above)
#      3 MONS, 2 MGR, 3 OSD, 1 Client
#      Node1 - MON, MGR, Installer
#      Node2 - Client
#      Node3 - OSD, MON, MGR
#      Node4 - OSD, MON
#      Node5 - OSD
#===============================================================================================
tests:
  - test:
      name: setup install pre-requisites
      desc: Setup phase to deploy the required pre-requisites for running the tests.
      module: install_prereq.py
      abort-on-fail: true

  - test:
      abort-on-fail: true
      clusters:
        ceph-rbd1:
          config:
            verify_cluster_health: true
            steps:
              - config:
                  command: bootstrap
                  service: cephadm
                  args:
                    mon-ip: node1
                    orphan-initial-daemons: true
                    skip-monitoring-stack: true
              - config:
                  command: add_hosts
                  service: host
                  args:
                    attach_ip_address: true
                    labels: apply-all-labels
              - config:
                  command: apply
                  service: mgr
                  args:
                    placement:
                      label: mgr
              - config:
                  command: apply
                  service: mon
                  args:
                    placement:
                      label: mon
              - config:
                  command: apply
                  service: osd
                  args:
                    all-available-devices: true
        ceph-rbd2:
          config:
            verify_cluster_health: true
            steps:
              - config:
                  command: bootstrap
                  service: cephadm
                  args:
                    mon-ip: node1
                    orphan-initial-daemons: true
                    skip-monitoring-stack: true
              - config:
                  command: add_hosts
                  service: host
                  args:
                    attach_ip_address: true
                    labels: apply-all-labels
              - config:
                  command: apply
                  service: mgr
                  args:
                    placement:
                      label: mgr
              - config:
                  command: apply
                  service: mon
                  args:
                    placement:
                      label: mon
              - config:
                  command: apply
                  service: osd
                  args:
                    all-available-devices: true
      desc: Two ceph cluster deployment for external ceph migration testing
      destroy-cluster: false
      module: test_cephadm.py
      name: deploy two ceph cluster

  - test:
      abort-on-fail: true
      clusters:
        ceph-rbd1:
          config:
            command: add
            id: client.1
            node: node2
            install_packages:
              - ceph-common
              - fio
            copy_admin_keyring: true
        ceph-rbd2:
          config:
            command: add
            id: client.1
            node: node2
            install_packages:
              - ceph-common
              - fio
            copy_admin_keyring: true
      desc: Configure the client node for both the clusters
      destroy-cluster: false
      module: test_client.py
      name: configure client

  - test:
      desc: Enable mon_allow_pool_delete to True for deleting the pools
      module: exec.py
      name: configure mon_allow_pool_delete to True
      abort-on-fail: true
      config:
        cephadm: true
        commands:
          - "ceph config set mon mon_allow_pool_delete true"

  - test:
      desc: Install rbd-nbd and remove any epel packages
      module: exec.py
      name: Install rbd-nbd
      config:
        sudo: true
        commands:
          - "rm -rf /etc/yum.repos.d/epel*"
          - "dnf install rbd-nbd -y"

  - test:
      name: Test image migration with external ceph cluster
      desc: live migration with external ceph native data format
      module: test_rbd_migration_external_native_image.py
      clusters:
        ceph-rbd1:
          config:
            rep_pool_config:
              num_pools: 1
              num_images: 1
              size: 4G
              create_pool_parallely: true
              create_image_parallely: true
              test_ops_parallely: true
              io_size: 1G
            ec_pool_config:
              num_pools: 1
              num_images: 1
              size: 4G
              create_pool_parallely: true
              create_image_parallely: true
              test_ops_parallely: true
              io_size: 1G
      polarion-id: CEPH-83597689
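A common way for a test like this to prove the migration actually moved the data intact (not something this commit spells out, so treat it as a hedged sketch) is to stream "rbd export" output through a checksum on both clusters after the commit step and compare the results. The cluster config paths and pool/image names below are assumptions for illustration:

import hashlib
import subprocess

def rbd_export_md5(pool, image, cluster_conf="/etc/ceph/ceph.conf"):
    """Stream 'rbd export <pool>/<image> -' and return the MD5 of the raw image bytes.

    cluster_conf is an assumption: pointing --conf at the source or destination
    cluster's configuration file selects which cluster the export reads from.
    """
    proc = subprocess.Popen(
        ["rbd", "export", f"{pool}/{image}", "-", "--conf", cluster_conf],
        stdout=subprocess.PIPE,
    )
    digest = hashlib.md5()
    # Read the exported image in 4 MiB chunks to avoid holding it all in memory.
    for chunk in iter(lambda: proc.stdout.read(4 * 1024 * 1024), b""):
        digest.update(chunk)
    proc.wait()
    if proc.returncode != 0:
        raise RuntimeError(f"rbd export failed for {pool}/{image}")
    return digest.hexdigest()

# Hypothetical usage: compare the source image with the migrated destination image.
src_md5 = rbd_export_md5("rep_pool_src", "image1", "/etc/ceph/source.conf")
dst_md5 = rbd_export_md5("rep_pool_dst", "image1", "/etc/ceph/ceph.conf")
assert src_md5 == dst_md5, "data mismatch after live migration"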