arch_single_ceph
This scenario is a variation of the shared storage setup. Here, the storage for virtual machines (VMs) and the image repository are provided by a local Ceph cluster. Running VMs directly from Ceph storage can enhance the fault tolerance of the system in the event of a host failure, although it comes with the drawback of increased I/O latency.
In this scenario Ceph OSD servers are deployed on dedicated hosts. Please refer to the documentation of the ceph-ansible project and to the group variable definitions inside its official git repository for the full guide on how to configure it. one-deploy builds on the ceph-ansible project and introduces the opennebula.deploy.ceph playbook, which is executed before the main deployment.
---
all:
  vars:
    ansible_user: root
    one_version: '6.6'
    one_pass: opennebulapass
    features:
      # Enable the "ceph" feature in one-deploy.
      ceph: true
    ds:
      # Simple datastore setup - use built-in Ceph cluster for datastores 0 (system) and 1 (images).
      mode: ceph
    vn:
      admin_net:
        managed: true
        template:
          VN_MAD: bridge
          PHYDEV: eth0
          BRIDGE: br0
          AR:
            TYPE: IP4
            IP: 172.20.0.100
            SIZE: 48
            NETWORK_ADDRESS: 172.20.0.0
            NETWORK_MASK: 255.255.255.0
            GATEWAY: 172.20.0.1
            DNS: 1.1.1.1

frontend:
  hosts:
    f1: { ansible_host: 172.20.0.6 }

node:
  hosts:
    n1: { ansible_host: 172.20.0.7 }
    n2: { ansible_host: 172.20.0.8 }

ceph:
  children:
    ? mons
    ? mgrs
    ? osds
  vars:
    osd_auto_discovery: true

mons:
  hosts:
    mon1: { ansible_host: 172.20.0.6, monitor_address: 172.20.0.6 }

mgrs:
  hosts:
    mgr1: { ansible_host: 172.20.0.6 }

osds:
  hosts:
    osd1: { ansible_host: 172.20.0.10 }
    osd2: { ansible_host: 172.20.0.11 }
    osd3: { ansible_host: 172.20.0.12 }
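Before running any playbooks you can quickly confirm that Ansible parses the inventory and resolves the ceph group and its children as intended. A minimal check, assuming the inventory above is saved as inventory/ceph.yml:

$ ansible-inventory -i inventory/ceph.yml --graph ceph

The output should list mons, mgrs and osds as child groups of ceph, with the mon1, mgr1 and osd1-osd3 hosts beneath them.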
In this scenario we deploy Ceph OSD servers alongside the OpenNebula KVM nodes (a hyper-converged setup). Here, we limit and reserve CPU and RAM for the Ceph OSDs so that they interfere with the running guest VMs as little as possible.
---
all:
  vars:
    ansible_user: ubuntu
    ensure_keys_for: [ubuntu, root]
    one_pass: opennebulapass
    one_version: '6.6'
    features:
      # Enable the "ceph" feature in one-deploy.
      ceph: true
    ds:
      # Simple datastore setup - use built-in Ceph cluster for datastores 0 (system) and 1 (images).
      mode: ceph
    vn:
      admin_net:
        managed: true
        template:
          VN_MAD: bridge
          PHYDEV: eth0
          BRIDGE: br0
          AR:
            TYPE: IP4
            IP: 172.20.0.200
            SIZE: 48
            NETWORK_ADDRESS: 172.20.0.0
            NETWORK_MASK: 255.255.255.0
            GATEWAY: 172.20.0.1
            DNS: 172.20.0.1

frontend:
  hosts:
    f1: { ansible_host: 172.20.0.6 }

node:
  hosts:
    n1: { ansible_host: 172.20.0.7 }
    n2: { ansible_host: 172.20.0.8 }
    n3: { ansible_host: 172.20.0.9 }

ceph:
  children:
    ? mons
    ? mgrs
    ? osds
  vars:
    osd_memory_target: 4294967296 # 4GiB (default)
    # Assuming all OSDs are of equal size, set up resource limits and reservations
    # for all OSD systemd services.
    ceph_osd_systemd_overrides:
      Service:
        CPUWeight: 200 # 100 is the kernel default
        CPUQuota: 100% # 1 full core
        MemoryMin: "{{ (0.75 * osd_memory_target) | int }}"
        MemoryHigh: "{{ osd_memory_target | int }}"
    # Make sure OSDs keep their memory usage below the value of the "osd_memory_target" fact.
    ceph_conf_overrides:
      osd:
        ? osd memory target
        : "{{ osd_memory_target | int }}"
    osd_auto_discovery: true

mons:
  hosts:
    f1: { ansible_host: 172.20.0.6, monitor_address: 172.20.0.6 }

mgrs:
  hosts:
    f1: { ansible_host: 172.20.0.6 }

osds:
  hosts:
    # NOTE: The Ceph OSDs are deployed alongside the OpenNebula KVM nodes (HCI setup).
    n1: { ansible_host: 172.20.0.7 }
    n2: { ansible_host: 172.20.0.8 }
    n3: { ansible_host: 172.20.0.9 }
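Once the Ceph playbook has been applied, the resource limits above can be checked against the running OSD services on an HCI node. A hedged example, assuming an OSD with ID 0 exists on that node (adjust the ID to your cluster; systemd reports the CPUQuota setting as the CPUQuotaPerSecUSec property):

# Limits applied by ceph_osd_systemd_overrides:
$ systemctl show ceph-osd@0 -p CPUWeight -p CPUQuotaPerSecUSec -p MemoryMin -p MemoryHigh
# Effective memory target of the running OSD daemon:
$ ceph config show osd.0 osd_memory_target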
It is also possible to manage CRUSH Maps in ceph-ansible; please take a look at the partial inventory below:
osds:
  vars:
    # Disable OSD device auto-discovery, as the devices are explicitly specified below (per each OSD node).
    osd_auto_discovery: false
    # Enable CRUSH rule/map management.
    crush_rule_config: true
    create_crush_tree: true
    # Define CRUSH rules.
    crush_rule_hdd:
      name: HDD
      root: root1
      type: host
      class: hdd
      default: false
    crush_rules:
      - "{{ crush_rule_hdd }}"
  hosts:
    osd1:
      ansible_host: 172.20.0.10
      devices:
        - /dev/vdb
        - /dev/vdc
      osd_crush_location: { host: osd1, rack: rack1, root: root1 }
    osd2:
      ansible_host: 172.20.0.11
      devices:
        - /dev/vdb
        - /dev/vdc
      osd_crush_location: { host: osd2, rack: rack2, root: root1 }
    osd3:
      ansible_host: 172.20.0.12
      devices:
        - /dev/vdb
        - /dev/vdc
      osd_crush_location: { host: osd3, rack: rack3, root: root1 }
Running the opennebula.deploy.ceph playbook should result in the following CRUSH hierarchy:
# ceph osd crush tree
ID   CLASS  WEIGHT   TYPE NAME
-15         0.37500  root root1
 -9         0.12500      rack rack1
 -3         0.12500          host osd1
  0    hdd  0.06250              osd.0
  3    hdd  0.06250              osd.3
-11         0.12500      rack rack2
 -7         0.12500          host osd2
  1    hdd  0.06250              osd.1
  4    hdd  0.06250              osd.4
-10         0.12500      rack rack3
 -5         0.12500          host osd3
  2    hdd  0.06250              osd.2
  5    hdd  0.06250              osd.5
 -1         0        root default
Please refer to the official Ceph documentation on CRUSH Maps.
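After the cluster is deployed, the HDD rule defined above can be listed and assigned to a pool. The snippet below is only a sketch: the pool name one is an assumption (the pool typically used by the OpenNebula Ceph datastores); replace it with the pool you actually want to pin to the rule.

# List the CRUSH rules known to the cluster (should include "HDD"):
$ ceph osd crush rule ls
# Assign the rule to an existing pool, e.g. the assumed OpenNebula pool "one":
$ ceph osd pool set one crush_rule HDD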
The one-deploy project comes with the opennebula.deploy.ceph playbook that can be executed as follows:
$ ansible-playbook -i inventory/ceph.yml opennebula.deploy.ceph
The Ceph-related part of the inventory can be identified as shown below (please refer to ceph-ansible's documentation for the full guide on how to configure the ceph-* roles).
ceph:
  children:
    ? mons
    ? mgrs
    ? osds
  vars:
    osd_auto_discovery: true

mons:
  hosts:
    mon1: { ansible_host: 172.20.0.6, monitor_address: 172.20.0.6 }

mgrs:
  hosts:
    mgr1: { ansible_host: 172.20.0.6 }

osds:
  hosts:
    osd1: { ansible_host: 172.20.0.10 }
    osd2: { ansible_host: 172.20.0.11 }
    osd3: { ansible_host: 172.20.0.12 }
1. Prepare the inventory file: Update the ceph.yml file in the inventory to match your infrastructure settings. Please be sure to update or review the following variables:
   - ansible_user, update it if different from root
   - one_pass, change it to the password for the oneadmin account
   - one_version, be sure to use the latest stable version here
   - features.ceph, to enable Ceph in one-deploy
   - ds.mode, to configure Ceph datastores in OpenNebula

2. Check the connection: Verify the network connection, SSH and sudo configuration by running the following command:

   ansible -i inventory/ceph.yml all -m ping -b

3. Site installation: Now we can run the site playbooks that provision a local Ceph cluster and install and configure the OpenNebula services:

   ansible-playbook -i inventory/ceph.yml opennebula.deploy.pre opennebula.deploy.ceph opennebula.deploy.site

Once the execution of the playbooks finishes, your new OpenNebula cloud is ready. You can now head to the verification guide.
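For a quick first check before diving into the verification guide, the following commands can be run. This is only a sketch; host names and datastore layout depend on your inventory and OpenNebula defaults:

# On a Ceph monitor host - the cluster should report HEALTH_OK:
$ ceph -s
# On the Frontend, as the oneadmin user - the system and image datastores should use the Ceph drivers:
$ su - oneadmin -c 'onedatastore list'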
- Requirements & Platform Notes
- Release Notes
- Using the playbooks
- Reference Architectures:
- Verifying the installation
- Advanced Configurations:
- Additional Options:
- Developer Information: