diff --git a/website/versioned_docs/version-v1.0.4/basic-concepts/basic-concepts.md b/website/versioned_docs/version-v1.0.4/basic-concepts/basic-concepts.md new file mode 100644 index 00000000..f1967f88 --- /dev/null +++ b/website/versioned_docs/version-v1.0.4/basic-concepts/basic-concepts.md @@ -0,0 +1,116 @@ +--- +id: version-v1.0.4-basic-concepts +title: Basic Concepts +sidebar_label: Basic Concepts +original_id: basic-concepts +--- + +Before deploying and using IOMesh, familiarity with the following concepts is suggested. + +[**Kubernetes**](https://kubernetes.io/) + +A portable, extensible open source container orchestration platform for managing containerized workloads and services, facilitating both declarative configuration and automation. + +[**Master Node**](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) + +A node that runs the control plane components of the Kubernetes cluster and manages a set of worker nodes. Typically, a Kubernetes cluster has one, three, or five master nodes. + +**Worker Node** + +A worker machine that runs Kubernetes node components and containerized applications. IOMesh is installed, deployed, and running on the worker node. + +[**kubectl**](https://kubernetes.io/docs/reference/kubectl/) + +A command line tool for communicating with the control plane of a Kubernetes cluster through the Kubernetes API. + +**Stateful Application** + +Applications can be stateful or stateless. Stateful applications store data on persistent disk storage for use by the server, client, and other applications. Stateless applications do not store client data on the server when switching sessions. + +**IOMesh Block Storage** + +The IOMesh block storage service for ensuring distributed system consistency and data coherence, managing metadata and local disks, and implementing I/O redirection and high availability. + +**IOMesh Node** + +A worker node in the Kubernetes cluster with a chunk pod installed. 
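
The chunk-pod concept above can be observed directly on a live cluster. As a sketch (assuming the default `iomesh-system` namespace used throughout this guide), list the chunk pods together with the worker node each one runs on; every node hosting a chunk pod is an IOMesh node:

```shell
# List chunk pods and the worker nodes they are scheduled on;
# each worker node hosting a chunk pod is an IOMesh node.
kubectl get pod -n iomesh-system -o wide | grep chunk
```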
+
+**Chunk**
+
+The chunk module within each IOMesh Block Storage component that manages local disks, translates access protocols, and ensures data consistency. A chunk pod on a worker node provides storage services, and each worker node can have only one chunk pod.
+
+**Meta**
+
+The meta module within each IOMesh Block Storage component for metadata management, including storage object management, data replica management, access control, and data consistency. A meta pod on a worker node provides metadata management, and each worker node can have only one meta pod.
+
+**IOMesh CSI Driver**
+
+The CSI driver that adheres to [the CSI standard](https://github.com/container-storage-interface/spec/blob/master/spec.md) and uses RPC (Remote Procedure Call) to manage persistent volumes, delivering reliable and consistent storage for data applications on Kubernetes. Each Kubernetes persistent volume corresponds to an iSCSI LUN in the IOMesh cluster.
+
+**IOMesh Operator**
+
+The IOMesh automated operations and maintenance component. It supports rolling upgrades of IOMesh, scaling nodes out or in, and GitOps, and is responsible for the automatic discovery, allocation, and management of block devices.
+
+[**Namespace**](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
+
+Provides a mechanism for dividing resources in the same cluster into isolated groups that can be created on demand and managed separately within a cluster.
+
+[**StorageClass**](https://kubernetes.io/docs/concepts/storage/storage-classes/)
+
+Provides a way to describe the classes of storage, serving as a template for dynamically provisioning persistent volumes, and allows administrators to specify different attributes for each StorageClass.
+
+[**Persistent Volume**](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
+
+A piece of storage in the cluster, which can be pre-provisioned by the administrator or dynamically provisioned using a StorageClass.
Persistent volumes, like other types of volumes, are implemented using volume plugins, but they have a lifecycle independent of any pod that uses the PV.
+
+[**Persistent Volume Claim**](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
+
+A request for storage by a user. A PVC is conceptually similar to a pod: just as a pod consumes node resources and can request specific amounts of CPU and memory, a PVC consumes PV resources and can request a specific storage size and access mode.
+
+[**Volume Snapshot**](https://kubernetes.io/docs/concepts/storage/volume-snapshots/)
+
+A user request for a snapshot of a volume, similar to how a PVC is a request for storage.
+
+[**Volume Snapshot Class**](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/)
+
+Provides a way to describe the classes of storage when provisioning a volume snapshot. It allows you to specify different attributes for a VolumeSnapshot. These attributes may differ among snapshots taken from the same volume on the storage system and therefore cannot be expressed by using the StorageClass of a PersistentVolumeClaim.
+
+[**Volume Snapshot Content**](https://kubernetes.io/docs/concepts/storage/volume-snapshots/#volume-snapshot-contents)
+
+A snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a persistent volume is a cluster resource.
+
+[**Volume Mode**](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-mode)
+
+An optional API parameter that describes the specific mode of a persistent volume. Kubernetes supports `Filesystem` and `Block` as `volumeModes`.
+
+- `Filesystem`: A volume with volume mode set to `Filesystem` is mounted to a directory by the pod.
+- `Block`: A volume is used as a raw block device, which provides the pod the fastest possible access to the volume, without any filesystem layer between the pod and the volume.
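
To see how StorageClass, PVC, volume mode, and access mode fit together, here is a minimal PVC sketch. The StorageClass name `iomesh-csi-driver` is an assumption for illustration (a class created by a standard IOMesh installation); adjust it to your environment:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: iomesh-csi-driver  # assumed IOMesh StorageClass name
  volumeMode: Block                    # or Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```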
+
+[**Access Mode**](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)
+
+A PV can be mounted on a host using any supported access mode. IOMesh supports the `ReadWriteOnce`, `ReadWriteMany`, and `ReadOnlyMany` access modes; however, `ReadWriteMany` and `ReadOnlyMany` are only available for PVs that use `Block` as the volume mode.
+
+- `ReadWriteOnce`: The volume can be mounted as read-write by a single node. `ReadWriteOnce` can still allow multiple pods to access the volume when the pods are running on the same node.
+
+- `ReadWriteMany`: The volume can be mounted as read-write by many nodes.
+
+- `ReadOnlyMany`: The volume can be mounted as read-only by many nodes.
+
+[**Helm**](https://helm.sh/)
+
+A package manager for Kubernetes that helps you find, share, and use software built for Kubernetes. Helm is required to install IOMesh.
+
+[**Prometheus**](https://prometheus.io/)
+
+An open source system monitoring and alerting toolkit that can be integrated with IOMesh to help you monitor IOMesh storage metrics in real time and receive immediate alerts.
+
+[**Grafana**](https://grafana.com/)
+
+An open source web application that offers real-time charts, graphs, and alerts when connected to supported data sources. It can import the IOMesh dashboard template and alerting rules, allowing you to visualize IOMesh storage metrics.
+
diff --git a/website/versioned_docs/version-v1.0.4/cluster-operations/replace-failed-disk.md b/website/versioned_docs/version-v1.0.4/cluster-operations/replace-failed-disk.md
new file mode 100644
index 00000000..076ec91c
--- /dev/null
+++ b/website/versioned_docs/version-v1.0.4/cluster-operations/replace-failed-disk.md
@@ -0,0 +1,112 @@
+---
+id: version-v1.0.4-replace-failed-disk
+title: Replace Disk
+sidebar_label: Replace Disk
+original_id: replace-failed-disk
+---
+
+The IOMesh Dashboard displays the health status of physical disks for easy monitoring.
If any disk is indicated as `Unhealthy`, `Failing`, or `S.M.A.R.T not passed`, you should replace it with a new disk as soon as possible. + +**Procedure** + +1. Get the meta leader pod name. + ```shell + kubectl get pod -n iomesh-system -l=iomesh.com/meta-leader -o=jsonpath='{.items[0].metadata.name}' + ``` + ```output + iomesh-meta-0 + ``` + +2. Access the meta leader pod. + ```shell + kubectl exec -it iomesh-meta-0 -n iomesh-system -c iomesh-meta bash + ``` + +3. Run the following command multiple times to verify that there are no ongoing migration or recovery tasks in the cluster. + + Ensure that the output value is 0. If any field has a non-zero value, you should wait for it to reach 0. + + ```shell + /opt/iomesh/iomeshctl summary cluster | egrep "recovers|migrates" + ``` + ```output + num_ongoing_recovers: 0 + num_pending_recovers: 0 + num_ongoing_migrates: 0 + num_pending_migrates: 0 + pending_migrates_bytes: 0 + pending_recovers_bytes: 0 + pending_migrates_bytes: 0 + pending_recovers_bytes: 0 + pending_migrates_bytes: 0 + pending_recovers_bytes: 0 + pending_migrates_bytes: 0 + pending_recovers_bytes: 0 + num_ongoing_recovers: 0 + num_pending_recovers: 0 + num_ongoing_migrates: 0 + num_pending_migrates: 0 + pending_migrates_bytes: 0 + pending_recovers_bytes: 0 + ``` + +4. View the disk that requires replacement. In the given example, let's assume that the disk `blockdevice-66312cce9037ae891a099ad83f44d7c9` needs to be replaced. + ```shell + kubectl --namespace iomesh-system get bd -o wide + ``` + ```output + NAME NODENAME PATH FSTYPE SIZE CLAIMSTATE STATUS AGE + blockdevice-41f0c2b60f5d63c677c3aca05c2981ef qtest-k8s-0 /dev/sdc 53687091200 Unclaimed Active 29h + blockdevice-66312cce9037ae891a099ad83f44d7c9 qtest-k8s-1 /dev/sdc 69793218560 Claimed Active 44h + blockdevice-7aff82fe93fac5153b14af3c82d68856 qtest-k8s-2 /dev/sdb 69793218560 Claimed Active 44h + ``` + +5. Run the following command to edit the `deviceMap` of the disk. 
Add the disk name to the field `exclude` under `deviceMap`.
+
+   ```shell
+   kubectl edit iomesh iomesh -n iomesh-system
+   ```
+
+   ```yaml
+   # ...
+   deviceMap:
+     # ...
+     dataStore:
+       selector:
+         matchExpressions:
+         - key: iomesh.com/bd-driverType
+           operator: In
+           values:
+           - HDD
+         matchLabels:
+           iomesh.com/bd-deviceType: disk
+       exclude:
+       - blockdevice-66312cce9037ae891a099ad83f44d7c9
+   # ...
+   ```
+
+6. Repeat Steps 2 and 3 to verify that there are no ongoing migration or recovery tasks in the cluster.
+
+7. Verify that the block device is in the `Unclaimed` state.
+   ```shell
+   kubectl get bd blockdevice-66312cce9037ae891a099ad83f44d7c9 -n iomesh-system
+   ```
+   ```output
+   NAME                                           NODENAME      PATH       FSTYPE   SIZE          CLAIMSTATE   STATUS   AGE
+   blockdevice-66312cce9037ae891a099ad83f44d7c9   qtest-k8s-1   /dev/sdc            69793218560   Unclaimed    Active   44h
+   ```
+
+8. Unplug the disk. The disk will then enter the `Inactive` state.
+
+   Run the following commands in sequence to remove the block device and its corresponding `blockdeviceclaim`.
+
+   > _NOTE:_ It is normal to see a prompt indicating that the `bdc` cannot be found when running the following commands.
+
+   ```shell
+   kubectl patch bdc/blockdevice-66312cce9037ae891a099ad83f44d7c9 -p '{"metadata":{"finalizers":[]}}' --type=merge -n iomesh-system
+   kubectl patch bd/blockdevice-66312cce9037ae891a099ad83f44d7c9 -p '{"metadata":{"finalizers":[]}}' --type=merge -n iomesh-system
+   kubectl delete bdc blockdevice-66312cce9037ae891a099ad83f44d7c9 -n iomesh-system
+   kubectl delete bd blockdevice-66312cce9037ae891a099ad83f44d7c9 -n iomesh-system
+   ```
+9. Plug in the new disk. Refer to [Set Up IOMesh](../deploy-iomesh-cluster/setup-iomesh) for mounting steps.
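
After plugging in the new disk, you can trigger a disk rescan and confirm that a new block device object appears. The NDM restart command below follows the default OpenEBS deployment described in [Set Up IOMesh](../deploy-iomesh-cluster/setup-iomesh):

```shell
# Restart the NDM pods to trigger a disk scan, then list block devices;
# the new disk should appear as an Unclaimed, Active BlockDevice.
kubectl delete pod -n iomesh-system -l app=openebs-ndm
kubectl get bd -n iomesh-system -o wide
```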
+ diff --git a/website/versioned_docs/version-v1.0.4/cluster-operations/scale-out-cluster.md b/website/versioned_docs/version-v1.0.4/cluster-operations/scale-out-cluster.md new file mode 100644 index 00000000..9e070a27 --- /dev/null +++ b/website/versioned_docs/version-v1.0.4/cluster-operations/scale-out-cluster.md @@ -0,0 +1,76 @@ +--- +id: version-v1.0.4-scale-out-cluster +title: Scale Out Cluster +sidebar_label: Scale Out Cluster +original_id: scale-out-cluster +--- + +If you have the IOMesh Enterprise edition, you can scale out the cluster online without interrupting its operation. However, scaling out is not possible with the Community edition that only allows a maximum of three meta or chunk pods. When scaling out the cluster, you can choose to add chunk pods, meta pods, or both at the same time. + +**Prerequisite** + +Ensure an adequate number of worker nodes in the Kubernetes cluster. Each worker node can accommodate only one chunk pod and one meta pod. Therefore, if there are insufficient worker nodes, add them to the Kubernetes cluster before scaling out. + +**Procedure** + +1. Add chunk pods. + + >_NOTE_: A single IOMesh cluster should have a minimum of three chunk pods. The maximum number of chunk pods is determined jointly by the total number of worker nodes in the Kubernetes cluster and the node count specified in the IOMesh license, with a maximum of 255 for the Enterprise edition. + + To increase the capacity of the IOMesh cluster, you can choose to add chunk pods by following these steps: + + - Locate `chunk` in `iomesh.yaml`, the default configuration file exported during IOMesh installation. Then modify the value of `replicaCount`, which represents the total number of chunk pods. + + ```yaml + chunk: + replicaCount: 5 # Enter the number of chunk pods. + ``` + - Apply the modification. + + ```shell + helm upgrade --namespace iomesh-system iomesh iomesh/iomesh --values iomesh.yaml + ``` + - Verify that the modification was successful. 
+ + ```shell + kubectl get pod -n iomesh-system | grep chunk + ``` + + If successful, you should see output like this: + ```output + iomesh-chunk-0 3/3 Running 0 5h5m + iomesh-chunk-1 3/3 Running 0 5h5m + iomesh-chunk-2 3/3 Running 0 5h5m + iomesh-chunk-3 3/3 Running 0 5h5m + iomesh-chunk-4 3/3 Running 0 5h5m + ``` + +2. Add meta pods. + + An optional step. When deploying IOMesh, three meta pods are created in the IOMesh cluster by default. If the number of IOMesh nodes in the Kubernetes cluster is equal to or greater than five, it's recommended to increase the number of meta pods from three to five. Note that the number of supported meta pods in the IOMesh cluster should be either three or five. + + - Locate `meta` in `iomesh.yaml`, the default configuration file exported during IOMesh installation. Then modify the value of `replicaCount`, which represents the number of meta pods. + + ```yaml + meta: + replicaCount: 5 # Change the value to 5. + ``` + - Apply the modification. + ```shell + helm upgrade --namespace iomesh-system iomesh iomesh/iomesh --values iomesh.yaml + ``` + - Verify that the modification was successful. + + ```shell + kubectl get pod -n iomesh-system | grep meta + ``` + + If successful, you should see output like this: + ```output + iomesh-meta-0 2/2 Running 0 5h5m + iomesh-meta-1 2/2 Running 0 5h5m + iomesh-meta-2 2/2 Running 0 5h5m + iomesh-meta-3 2/2 Running 0 5h5m + iomesh-meta-4 2/2 Running 0 5h5m + ``` + diff --git a/website/versioned_docs/version-v1.0.4/deploy-iomesh-cluster/install-iomesh.md b/website/versioned_docs/version-v1.0.4/deploy-iomesh-cluster/install-iomesh.md new file mode 100644 index 00000000..fa46f82a --- /dev/null +++ b/website/versioned_docs/version-v1.0.4/deploy-iomesh-cluster/install-iomesh.md @@ -0,0 +1,404 @@ +--- +id: version-v1.0.4-install-iomesh +title: Install IOMesh +sidebar_label: Install IOMesh +original_id: install-iomesh +--- + +IOMesh can be installed on all Kubernetes platforms using various methods. 
Choose the installation method based on your environment. If the Kubernetes cluster network cannot connect to the public network, opt for custom offline installation. + +- One-click online installation: Use the default settings in the file without custom parameters. +- Custom online installation: Supports custom parameters. +- Custom offline installation: Supports custom parameters. + +## One-Click Online Installation + +**Prerequisite** +- The CPU architecture of the Kubernetes cluster must be Intel x86_64 or Kunpeng AArch64. + +**Limitations** +- The Community Edition is installed by default, which has a 3-node limit. +- Only hybrid disk configurations are allowed. + +**Procedure** + +1. Access a master node. + +2. Run the following command to install IOMesh. Make sure to replace `10.234.1.0/24` with your actual CIDR. After executing the following command, wait for a few minutes. + + > _NOTE:_ One-click online installation utilizes `Helm`, which is included in the following command and will be installed automatically if it is not found. + + ```shell + # The IP address of each worker node running IOMesh must be within the same IOMESH_DATA_CIDR. + export IOMESH_DATA_CIDR=10.234.1.0/24; curl -sSL https://iomesh.run/install_iomesh.sh | bash - + ``` + +3. Verify that all pods are in `Running` state. If so, then IOMesh has been successfully installed. + + ```shell + watch kubectl get --namespace iomesh-system pods + ``` + + > _NOTE:_ IOMesh resources left by running the above commands will be saved for troubleshooting if any error occurs during installation. You can run the command `curl -sSL https://iomesh.run/uninstall_iomesh.sh | sh -` to remove all IOMesh resources from the Kubernetes cluster. + + > _NOTE:_ After installing IOMesh, the `prepare-csi` pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure `open-iscsi`. 
If the installation of `open-iscsi` is successful on all nodes, the system will automatically clean up all `prepare-csi` pods. However, if the installation of `open-iscsi` fails on any node, [manual configuration of open-iscsi](../appendices/setup-worker-node) is required to determine the cause of the installation failure. + + > _NOTE:_ If `open-iscsi` is manually deleted after installing IOMesh, the `prepare-csi` pod will not automatically start to install `open-iscsi` when reinstalling IOMesh. In this case, [manual configuration of open-iscsi](../appendices/setup-worker-node) is necessary. + +## Custom Online Installation + +**Prerequisite** + +Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon x86_64, or Kunpeng AArch64. + +**Procedure** +1. Access a master node in the Kubernetes cluster. + +2. Install `Helm`. Skip this step if `Helm` is already installed. + + ```shell + curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 + chmod 700 get_helm.sh + ./get_helm.sh + ``` + + For more details, refer to **[Installing Helm](https://helm.sh/docs/intro/install/)**. + +3. Add the IOMesh Helm repository. + + ```shell + helm repo add iomesh http://iomesh.com/charts + ``` + +4. Export the IOMesh default configuration file `iomesh.yaml`. + + ```shell + helm show values iomesh/iomesh > iomesh.yaml + ``` + +5. Configure `iomesh.yaml`. + + - Set `dataCIDR` to the CIDR you previously configured in [Prerequisites](../deploy-iomesh-cluster/prerequisites#network-requirements). + + ```yaml + iomesh: + chunk: + dataCIDR: "" # Fill in the dataCIDR you configured in Prerequisites. + ``` + + - Set `diskDeploymentMode` according to your [disk configurations](../deploy-iomesh-cluster/prerequisites#hardware-requirements). The system has a default value of `hybridFlash`. If your disk configuration is all-flash mode, change the value to `allFlash`. 
```yaml
+   diskDeploymentMode: "hybridFlash" # Set the disk deployment mode.
+   ```
+
+   - Specify the CPU architecture. If you have a `hygon_x86_64` Kubernetes cluster, enter `hygon_x86_64`; otherwise, leave the field blank.
+
+   ```yaml
+   platform: ""
+   ```
+
+   - Specify the IOMesh edition. The field is blank by default, and if left unspecified, the system will install the Community edition automatically.
+
+     If you have purchased the Enterprise edition, set the value of `edition` to `enterprise`. For details, refer to [IOMesh Specifications](https://www.iomesh.com/spec).
+
+   ```yaml
+   edition: "" # If left blank, Community Edition will be installed.
+   ```
+
+   - An optional step. The number of IOMesh chunk pods is three by default. If you install IOMesh Enterprise Edition, you can deploy more than three chunk pods.
+
+   ```yaml
+   iomesh:
+     chunk:
+       replicaCount: 3 # Enter the number of chunk pods.
+   ```
+
+   - An optional step. If you want IOMesh to use only the disks of specific Kubernetes nodes, configure the label of the corresponding nodes in the `chunk.podPolicy.affinity` field.
+
+   ```yaml
+   iomesh:
+     chunk:
+       podPolicy:
+         affinity:
+           nodeAffinity:
+             requiredDuringSchedulingIgnoredDuringExecution:
+               nodeSelectorTerms:
+               - matchExpressions:
+                 - key: kubernetes.io/hostname
+                   operator: In
+                   values:
+                   - iomesh-worker-0 # Specify the values of the node label.
+                   - iomesh-worker-1
+   ```
+
+     It is recommended that you only configure `values`. For more configurations, refer to [Pod Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
+
+   - An optional step. Configure the `podDeletePolicy` field to determine whether the system should automatically delete a pod and rebuild it on another healthy node when the Kubernetes node that hosts the pod fails. This configuration applies only to a pod with an IOMesh-created PVC mounted and the access mode set to `ReadWriteOnce`.
+ + If left unspecified, the value of this field will be set to `no-delete-pod` by default, indicating that the system won't automatically delete and rebuild the Pod in case of node failure. + ```yaml + csi-driver: + driver: + controller: + driver: + podDeletePolicy: "no-delete-pod" # Supports "no-delete-pod", "delete-deployment-pod", "delete-statefulset-pod", or "delete-both-statefulset-and-deployment-pod". + ``` + +6. Back on the master node, run the following commands to deploy the IOMesh cluster. + + ```shell + helm install iomesh iomesh/iomesh \ + --create-namespace \ + --namespace iomesh-system \ + --values iomesh.yaml \ + --wait + ``` + + If successful, you should see output like this: + + ```output + NAME: iomesh + LAST DEPLOYED: Wed Jun 30 16:00:32 2021 + NAMESPACE: iomesh-system + STATUS: deployed + REVISION: 1 + TEST SUITE: None + ``` + +7. Verify that all pods are in `Running` state. If so, then IOMesh has been installed successfully. + + ```bash + kubectl --namespace iomesh-system get pods + ``` + + If successful, you should see output like this: + + ```output + NAME READY STATUS RESTARTS AGE + iomesh-blockdevice-monitor-76ddc8cf85-82d4h 1/1 Running 0 3m23s + iomesh-blockdevice-monitor-prober-kk2qf 1/1 Running 0 3m23s + iomesh-blockdevice-monitor-prober-w6g5q 1/1 Running 0 3m23s + iomesh-blockdevice-monitor-prober-z6b7f 1/1 Running 0 3m23s + iomesh-chunk-0 3/3 Running 2 2m17s + iomesh-chunk-1 3/3 Running 0 2m8s + iomesh-chunk-2 3/3 Running 0 113s + iomesh-csi-driver-controller-plugin-856565b79d-brt2j 6/6 Running 0 3m23s + iomesh-csi-driver-controller-plugin-856565b79d-g6rnd 6/6 Running 0 3m23s + iomesh-csi-driver-controller-plugin-856565b79d-kp9ct 6/6 Running 0 3m23s + iomesh-csi-driver-node-plugin-6pbpp 3/3 Running 4 3m23s + iomesh-csi-driver-node-plugin-bpr7x 3/3 Running 4 3m23s + iomesh-csi-driver-node-plugin-krjts 3/3 Running 4 3m23s + iomesh-hostpath-provisioner-6ffbh 1/1 Running 0 3m23s + iomesh-hostpath-provisioner-bqrjp 1/1 Running 0 3m23s + 
iomesh-hostpath-provisioner-rm8ms 1/1 Running 0 3m23s + iomesh-iscsi-redirector-2pc26 2/2 Running 1 2m19s + iomesh-iscsi-redirector-7msvs 2/2 Running 1 2m19s + iomesh-iscsi-redirector-nnbb2 2/2 Running 1 2m19s + iomesh-localpv-manager-6flpl 4/4 Running 0 3m23s + iomesh-localpv-manager-m8qgq 4/4 Running 0 3m23s + iomesh-localpv-manager-p88x7 4/4 Running 0 3m23s + iomesh-meta-0 2/2 Running 0 2m17s + iomesh-meta-1 2/2 Running 0 2m17s + iomesh-meta-2 2/2 Running 0 2m17s + iomesh-openebs-ndm-9chdk 1/1 Running 0 3m23s + iomesh-openebs-ndm-cluster-exporter-68c757948-2lgvr 1/1 Running 0 3m23s + iomesh-openebs-ndm-f6qkg 1/1 Running 0 3m23s + iomesh-openebs-ndm-ffbqv 1/1 Running 0 3m23s + iomesh-openebs-ndm-node-exporter-pnc8h 1/1 Running 0 3m23s + iomesh-openebs-ndm-node-exporter-scd6q 1/1 Running 0 3m23s + iomesh-openebs-ndm-node-exporter-tksjh 1/1 Running 0 3m23s + iomesh-openebs-ndm-operator-bd4b94fd6-zrpw7 1/1 Running 0 3m23s + iomesh-zookeeper-0 1/1 Running 0 3m17s + iomesh-zookeeper-1 1/1 Running 0 2m56s + iomesh-zookeeper-2 1/1 Running 0 2m21s + iomesh-zookeeper-operator-58f4df8d54-2wvgj 1/1 Running 0 3m23s + operator-87bb89877-fkbvd 1/1 Running 0 3m23s + operator-87bb89877-kfs9d 1/1 Running 0 3m23s + operator-87bb89877-z9tfr 1/1 Running 0 3m23s + ``` + > _NOTE:_ After installing IOMesh, the `prepare-csi` pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure `open-iscsi`. If the installation of `open-iscsi` is successful on all nodes, the system will automatically clean up all `prepare-csi` pods. However, if the installation of `open-iscsi` fails on any node, [manual configuration of open-iscsi](../appendices/setup-worker-node) is required to determine the cause of the installation failure. + + > _NOTE:_ If `open-iscsi` is manually deleted after installing IOMesh, the `prepare-csi` pod will not automatically start to install `open-iscsi` when reinstalling IOMesh. 
In this case, [manual configuration of open-iscsi](../appendices/setup-worker-node) is necessary. +## Custom Offline Installation + +**Prerequisite** + +Make sure the CPU architecture of your Kubernetes cluster is Intel x86_64, Hygon x86_64, or Kunpeng AArch64. + +**Procedure** + +1. Download the [IOMesh Offline Installation Package](../appendices/downloads) based on your CPU architecture on the master node and each worker node. + +2. Unpack the installation package on the master node and each worker node. Make sure to replace `` with `v1.0.1` and `` based on your CPU architecture. + - Hygon x86_64: `hygon-amd64` + - Intel x86_64: `amd64` + - Kunpeng AArch64: `arm64` + + ```shell + tar -xf iomesh-offline--.tgz && cd iomesh-offline + ``` +3. Load the IOMesh image on the master node and each worker node. Then execute the corresponding script based on your container runtime and container manager. + + + + + ```shell + docker load --input ./images/iomesh-offline-images.tar + ``` + + ```shell + ctr --namespace k8s.io image import ./images/iomesh-offline-images.tar + ``` + + + ```shell + podman load --input ./images/iomesh-offline-images.tar + ``` + + + +4. On the master node, run the following command to export the IOMesh default configuration file `iomesh.yaml`. + + ```shell + ./helm show values charts/iomesh > iomesh.yaml + ``` + +5. Configure `iomesh.yaml`. + + - Set `dataCIDR` to the data CIDR you previously configured in [Prerequisites](../deploy-iomesh-cluster/prerequisites#network-requirements). + + ```yaml + iomesh: + chunk: + dataCIDR: "" # Fill in the dataCIDR you configured previously in Prerequisites. + ``` + + - Set `diskDeploymentMode` according to your [disk configurations](../deploy-iomesh-cluster/prerequisites#hardware-requirements). The system has a default value of `hybridFlash`. If your disk configuration is all-flash mode, change the value to `allFlash`. + + ```yaml + diskDeploymentMode: "hybridFlash" # Set the disk deployment mode. 
+ ``` + + - Specify the CPU architecture. If you have a `hygon_x86_64` Kubernetes cluster, enter `hygon_x86_64`, or else leave the field blank. + + ```yaml + platform: "" + ``` + + - Specify the IOMesh edition. The field is blank by default, and if left unspecified, the system will install the Community edition automatically. + + If you have purchased the Enterprise edition, set the value of `edition` to `enterprise`. For details, refer to [IOMesh Specifications](https://www.iomesh.com/spec). + + ```yaml + edition: "" # If left blank, Community Edition will be installed. + ``` + + - An optional step. The number of IOMesh chunk pods is 3 by default. If you install IOMesh Enterprise Edition, you can deploy more than 3 chunk pods. + + ```yaml + iomesh: + chunk: + replicaCount: 3 # Specify the number of chunk pods. + ``` + + - An optional step. If you want IOMesh to only use the disks of specific Kubernetes nodes, configure the values of the node label. + + ```yaml + iomesh: + chunk: + podPolicy: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/hostname + operator: In + values: + - iomesh-worker-0 # Specify the values of the node label. + - iomesh-worker-1 + ``` + It is recommended that you only configure `values`. For more configurations, refer to [Pod Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity). + + - An optional step. Configure the `podDeletePolicy` field to determine whether the system should automatically delete the pod and rebuild it on another healthy node when the Kubernetes node that hosts the pod fails. This configuration applies only to the pod with an IOMesh-created PVC mounted and the access mode set to `ReadWriteOnly`. 
+ + If left unspecified, the value of this field will be set to `no-delete-pod` by default, indicating that the system won't automatically delete and rebuild the pod in case of node failure. + ```yaml + csi-driver: + driver: + controller: + driver: + podDeletePolicy: "no-delete-pod" # Supports "no-delete-pod", "delete-deployment-pod", "delete-statefulset-pod", or "delete-both-statefulset-and-deployment-pod". + ``` + +6. Back on the master node, run the following command to deploy the IOMesh cluster. + + ```shell + ./helm install iomesh ./charts/iomesh \ + --create-namespace \ + --namespace iomesh-system \ + --values iomesh.yaml \ + --wait + ``` + If successful, you should see output like this: + + ```output + NAME: iomesh + LAST DEPLOYED: Wed Jun 30 16:00:32 2021 + NAMESPACE: iomesh-system + STATUS: deployed + REVISION: 1 + TEST SUITE: None + ``` + +7. Verify that all pods are in `Running` state. If so, then IOMesh has been installed successfully. + + ```bash + kubectl --namespace iomesh-system get pods + ``` + If successful, you should see output like this: + ```output + NAME READY STATUS RESTARTS AGE + csi-driver-controller-plugin-89b55d6b5-8r2fc 6/6 Running 10 2m8s + csi-driver-controller-plugin-89b55d6b5-d4rbr 6/6 Running 10 2m8s + csi-driver-controller-plugin-89b55d6b5-n5s48 6/6 Running 10 2m8s + csi-driver-node-plugin-9wccv 3/3 Running 2 2m8s + csi-driver-node-plugin-mbpnk 3/3 Running 2 2m8s + csi-driver-node-plugin-x6qrk 3/3 Running 2 2m8s + iomesh-chunk-0 3/3 Running 0 52s + iomesh-chunk-1 3/3 Running 0 47s + iomesh-chunk-2 3/3 Running 0 43s + iomesh-hostpath-provisioner-8fzvj 1/1 Running 0 2m8s + iomesh-hostpath-provisioner-gfl9k 1/1 Running 0 2m8s + iomesh-hostpath-provisioner-htzx9 1/1 Running 0 2m8s + iomesh-iscsi-redirector-96672 2/2 Running 1 55s + iomesh-iscsi-redirector-c2pwm 2/2 Running 1 55s + iomesh-iscsi-redirector-pcx8c 2/2 Running 1 55s + iomesh-meta-0 2/2 Running 0 55s + iomesh-meta-1 2/2 Running 0 55s + iomesh-meta-2 2/2 Running 0 55s + 
iomesh-localpv-manager-jwng7 4/4 Running 0 6h23m + iomesh-localpv-manager-khhdw 4/4 Running 0 6h23m + iomesh-localpv-manager-xwmzb 4/4 Running 0 6h23m + iomesh-openebs-ndm-5457z 1/1 Running 0 2m8s + iomesh-openebs-ndm-599qb 1/1 Running 0 2m8s + iomesh-openebs-ndm-cluster-exporter-68c757948-gszzx 1/1 Running 0 2m8s + iomesh-openebs-ndm-node-exporter-kzjfc 1/1 Running 0 2m8s + iomesh-openebs-ndm-node-exporter-qc9pt 1/1 Running 0 2m8s + iomesh-openebs-ndm-node-exporter-v7sh7 1/1 Running 0 2m8s + iomesh-openebs-ndm-operator-56cfb5d7b6-srfzm 1/1 Running 0 2m8s + iomesh-openebs-ndm-svp9n 1/1 Running 0 2m8s + iomesh-zookeeper-0 1/1 Running 0 2m3s + iomesh-zookeeper-1 1/1 Running 0 102s + iomesh-zookeeper-2 1/1 Running 0 76s + iomesh-zookeeper-operator-7b5f4b98dc-6mztk 1/1 Running 0 2m8s + operator-85877979-66888 1/1 Running 0 2m8s + operator-85877979-s94vz 1/1 Running 0 2m8s + operator-85877979-xqtml 1/1 Running 0 2m8s + ``` + > _NOTE:_ After installing IOMesh, the `prepare-csi` pod will automatically start on all schedulable nodes in the Kubernetes cluster to install and configure `open-iscsi`. If the installation of `open-iscsi` is successful on all nodes, the system will automatically clean up all `prepare-csi` Pods. However, if the installation of `open-iscsi` fails on any node, [manual configuration of open-iscsi](../appendices/setup-worker-node) is required to determine the cause of the installation failure. + + > _NOTE:_ If `open-iscsi` is manually deleted after installing IOMesh, the `prepare-csi` pod will not automatically start to install `open-iscsi` when reinstalling IOMesh. In this case, [manual configuration of open-iscsi](../appendices/setup-worker-node) is necessary. 
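
Both installation paths above require every worker node's data IP to fall within the configured `dataCIDR`. If you want to sanity-check an address before installing, the membership test can be sketched in plain POSIX shell (an illustrative helper, not part of IOMesh):

```shell
# ip_in_cidr IP CIDR: succeed when the IPv4 address lies inside the CIDR.
ip_in_cidr() {
  ip=$1; net=${2%/*}; bits=${2#*/}
  oldifs=$IFS; IFS=.
  # Split each dotted quad into octets and pack it into a 32-bit integer.
  set -- $ip;  ip_n=$((  ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  set -- $net; net_n=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
  IFS=$oldifs
  # Build the network mask from the prefix length and compare network parts.
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip_n & mask )) -eq $(( net_n & mask )) ]
}

# Example: is a node data IP inside the dataCIDR used in this guide?
ip_in_cidr 10.234.1.17 10.234.1.0/24 && echo "in range"
```

For `10.234.1.17` against `10.234.1.0/24` the example prints `in range`; an address outside the CIDR makes the function return a non-zero status.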
+ + + diff --git a/website/versioned_docs/version-v1.0.4/deploy-iomesh-cluster/setup-iomesh.md b/website/versioned_docs/version-v1.0.4/deploy-iomesh-cluster/setup-iomesh.md new file mode 100644 index 00000000..e306029a --- /dev/null +++ b/website/versioned_docs/version-v1.0.4/deploy-iomesh-cluster/setup-iomesh.md @@ -0,0 +1,272 @@ +--- +id: version-v1.0.4-setup-iomesh +title: Set Up IOMesh +sidebar_label: Set Up IOMesh +original_id: setup-iomesh +--- + +After IOMesh is installed, you should mount the block devices, which are the disks on the Kubernetes worker nodes, to the IOMesh cluster so that IOMesh can use them to provide storage. + +## View Block Device Objects +In IOMesh, an individual block device can be viewed as a block device object. To mount block devices on IOMesh, you first need to know which block device objects are available for use. + +IOMesh manages disks on Kubernetes worker nodes with OpenEBS [node-disk-manager (NDM)](https://github.com/openebs/node-disk-manager). When deploying IOMesh, BlockDevice CRs are created in the same namespace as the IOMesh cluster, and you can see the block devices available for use in this namespace. + +**Procedure** + +1. Get block devices in the namespace `iomesh-system`.
+ + ```bash + kubectl --namespace iomesh-system -o wide get blockdevice + ``` + + If successful, you should see output like this: + + ```output + NAME NODENAME PATH FSTYPE SIZE CLAIMSTATE STATUS AGE + blockdevice-f001933979aa613a9c32e552d05a704a iomesh-node-17-19 /dev/sda1 ext4 16000900661248 Unclaimed Active 92d + blockdevice-648c1fffeab61e985aa0f8914278e9d0 iomesh-node-17-19 /dev/sdb 16000900661248 Unclaimed Active 92d + blockdevice-f26f5b30099c20b1f6e993675614c301 iomesh-node-17-18 /dev/sdb 16000900661248 Unclaimed Active 92d + blockdevice-8b697bad8a194069fbfd544e6db2ddb8 iomesh-node-17-19 /dev/sdc 16000900661248 Unclaimed Active 92d + blockdevice-a3579a64869f799a623d3be86dce7c59 iomesh-node-17-18 /dev/sdc 16000900661248 Unclaimed Active 92d + ``` + + > _NOTE:_ + > The field `FSTYPE` of each IOMesh block device should be blank. + + > _NOTE:_ + > The status of a block device is only updated when the disk is plugged or unplugged. Therefore, if a disk is partitioned or formatted, its status will not be updated immediately. To refresh information about disk partitioning and formatting, run the command `kubectl delete pod -n iomesh-system -l app=openebs-ndm` to restart the NDM pod, which will trigger a disk scan. + + 2. View the details of a specific block device object. Make sure to replace `<block-device-name>` with the block device name. + + ```shell + kubectl --namespace iomesh-system -o yaml get blockdevice <block-device-name> + ``` + + If successful, you should see output like this: + ```output + apiVersion: openebs.io/v1alpha1 + kind: BlockDevice + metadata: + annotations: + internal.openebs.io/uuid-scheme: gpt + generation: 1 + labels: + iomesh.com/bd-devicePath: dev.sdb + iomesh.com/bd-deviceType: disk + iomesh.com/bd-driverType: SSD + iomesh.com/bd-serial: 24da000347e1e4a9 + iomesh.com/bd-vendor: ATA + kubernetes.io/hostname: iomesh-node-17-19 + ndm.io/blockdevice-type: blockdevice + ndm.io/managed: "true" + namespace: iomesh-system + name: blockdevice-648c1fffeab61e985aa0f8914278e9d0 + # ...
+ ``` + Labels prefixed with `iomesh.com/bd-` are created by IOMesh and will be used for the device selector. + + | Label | Description | + | --- | --- | + | `iomesh.com/bd-devicePath` | Shows the device path on the worker node.| + | `iomesh.com/bd-deviceType` | Shows if it is a disk or a partition.| + | `iomesh.com/bd-driverType` | Shows the disk type, including SSD and HDD.| + | `iomesh.com/bd-serial` | Shows the disk serial number.| + | `iomesh.com/bd-vendor` | Shows the disk vendor.| + + ## Configure DeviceMap + + Before configuring the device map, familiarize yourself with the mount type and device selector. + + **Mount Type** + |Mode|Mount Type| + |---|---| + |`hybridFlash`|Must configure `cacheWithJournal` and `dataStore`.<br/>• `cacheWithJournal` serves as the performance layer of the storage pool and **MUST** be a partitionable block device with a capacity greater than 60 GB. Two partitions will be created: one for journal and the other for cache. Either SATA or NVMe SSD is recommended.<br/>• `dataStore` is used for the capacity layer of the storage pool. Either SATA or SAS HDD is recommended.| + |`allFlash`|Only need to configure `dataStoreWithJournal`.<br/>`dataStoreWithJournal` is used for the capacity layer of the storage pool. It **MUST** be a partitionable block device with a capacity greater than 60 GB. Two partitions will be created: one for `journal` and the other for `dataStore`. Either SATA or NVMe SSD is recommended.| + + **Device Selector** + |Parameter|Value|Description| + |---|---|---| + |selector | [metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#labelselector-v1-meta) | The label selector to filter block devices. | + |exclude|[block-device-name]| The block device to be excluded. | + + For more information, refer to [Kubernetes Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). + + **Procedure** + 1. Edit `iomesh.yaml`, the default configuration file exported during IOMesh installation. + + ```bash + kubectl edit --namespace iomesh-system iomesh + ``` + + After running the command, locate `chunk` in the file as shown below: + ```yaml + spec: + chunk: + ``` + 2. Configure `deviceMap`. Specifically, copy and paste the `deviceMap` content from the following sample code and fill in the fields `<mount-type>`, `matchLabels`, `matchExpressions`, and `exclude` based on your deployment mode and block device information. The label information `<label-key>` and `<label-value>` can be obtained from Step 2 in [View Block Device Objects](#view-block-device-objects). + > _NOTE:_ The field `FSTYPE` of each IOMesh block device should be blank. Make sure to exclude any block device that has a filesystem on it. + + > _NOTE:_ Do not use disk names in the `deviceMap` in production environments. + + ```yaml + spec: + chunk: + deviceMap: + <mount-type>: + selector: + matchLabels: + <label-key>: <label-value> # Enter key and value as needed. + matchExpressions: + - key: <label-key> + operator: In + values: + - <label-value> + exclude: + - <block-device-name> # Enter the block device name to exclude it. + ``` + 3. Verify that the `CLAIMSTATE` of the block devices you select becomes `Claimed`.
+ + ```bash + kubectl --namespace iomesh-system -o wide get blockdevice + ``` + + If successful, you should see output like this: + + ```output + NAME NODENAME PATH FSTYPE SIZE CLAIMSTATE STATUS AGE + blockdevice-f001933979aa613a9c32e552d05a704a iomesh-node-17-19 /dev/sda1 ext4 16000900661248 Unclaimed Active 92d + blockdevice-648c1fffeab61e985aa0f8914278e9d0 iomesh-node-17-19 /dev/sdb 16000900661248 Claimed Active 92d + blockdevice-f26f5b30099c20b1f6e993675614c301 iomesh-node-17-18 /dev/sdb 16000900661248 Claimed Active 92d + blockdevice-8b697bad8a194069fbfd544e6db2ddb8 iomesh-node-17-19 /dev/sdc 16000900661248 Claimed Active 92d + blockdevice-a3579a64869f799a623d3be86dce7c59 iomesh-node-17-18 /dev/sdc 16000900661248 Claimed Active 92d + blockdevice-a6652946c90d5c3fca5ca452aac5b826 iomesh-node-17-18 /dev/sdd 16000900661248 Unclaimed Active 92d + ``` + +## DeviceMap Examples + +Below are three `deviceMap` examples based on all-flash and hybrid-flash deployment modes. Assuming a Kubernetes cluster has six block devices, the details are as follows: + +```output +NAME NODENAME PATH FSTYPE SIZE CLAIMSTATE STATUS AGE +blockdevice-f001933979aa613a9c32e552d05a704a iomesh-node-17-19 /dev/sda1 ext4 16000900661248 Unclaimed Active 92d +blockdevice-648c1fffeab61e985aa0f8914278e9d0 iomesh-node-17-19 /dev/sdb 16000900661248 Unclaimed Active 92d +blockdevice-f26f5b30099c20b1f6e993675614c301 iomesh-node-17-18 /dev/sdb 16000900661248 Unclaimed Active 92d +blockdevice-8b697bad8a194069fbfd544e6db2ddb8 iomesh-node-17-19 /dev/sdc 16000900661248 Unclaimed Active 92d +blockdevice-a3579a64869f799a623d3be86dce7c59 iomesh-node-17-18 /dev/sdc 16000900661248 Unclaimed Active 92d +blockdevice-a6652946c90d5c3fca5ca452aac5b826 iomesh-node-17-18 /dev/sdd 16000900661248 Unclaimed Active 92d +``` + +You can filter the block devices to be used in IOMesh based on the labels of the block devices. 
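To make the selector semantics concrete before the examples, the following hypothetical Python sketch mimics how a `deviceMap` entry filters block devices: a device is picked only if it matches every `matchLabels` pair, satisfies every `matchExpressions` clause, and is not listed in `exclude`. The function and the shortened device names are illustrative, not part of IOMesh.

```python
def select_devices(devices, selector, exclude=()):
    """Return names of block devices matched by the selector and not excluded."""
    def matches(labels):
        # Every matchLabels pair must match exactly.
        for key, value in selector.get("matchLabels", {}).items():
            if labels.get(key) != value:
                return False
        # Every matchExpressions clause must hold (only `In` is sketched here).
        for expr in selector.get("matchExpressions", []):
            if expr["operator"] == "In" and labels.get(expr["key"]) not in expr["values"]:
                return False
        return True
    return [d["name"] for d in devices
            if matches(d["labels"]) and d["name"] not in exclude]

devices = [
    {"name": "blockdevice-aaa",
     "labels": {"iomesh.com/bd-deviceType": "disk", "iomesh.com/bd-driverType": "SSD"}},
    {"name": "blockdevice-bbb",
     "labels": {"iomesh.com/bd-deviceType": "disk", "iomesh.com/bd-driverType": "HDD"}},
]
selector = {
    "matchLabels": {"iomesh.com/bd-deviceType": "disk"},
    "matchExpressions": [
        {"key": "iomesh.com/bd-driverType", "operator": "In", "values": ["SSD"]},
    ],
}
print(select_devices(devices, selector))  # ['blockdevice-aaa']
```

The same AND semantics apply in the examples below: `matchLabels` and `matchExpressions` narrow the candidate set, and `exclude` removes individual devices by name.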
+ +**Example 1: Hybrid Configuration `deviceMap`** + +In this example, all SSD disks in the Kubernetes cluster are used as `cacheWithJournal`, and all HDD disks are used as `dataStore`. The block devices `blockdevice-a6652946c90d5c3fca5ca452aac5b826` and `blockdevice-f001933979aa613a9c32e552d05a704a` are excluded from the selection. + +```yaml +spec: + # ... + chunk: + # ... + deviceMap: + cacheWithJournal: + selector: + matchLabels: + iomesh.com/bd-deviceType: disk + matchExpressions: + - key: iomesh.com/bd-driverType + operator: In + values: + - SSD + exclude: + - blockdevice-a6652946c90d5c3fca5ca452aac5b826 + dataStore: + selector: + matchExpressions: + - key: iomesh.com/bd-driverType + operator: In + values: + - HDD + exclude: + - blockdevice-f001933979aa613a9c32e552d05a704a + # ... +``` +Note that after the configuration is complete, any additional SSD or HDD disks added to the nodes later will be immediately managed by IOMesh. If you do not want this automatic management behavior, refer to **Example 2: Hybrid Configuration `deviceMap`** for how to create a custom label for disks. + +**Example 2: Hybrid Configuration `deviceMap`** + +In this example, the block devices located at the `/dev/sdb` path in the Kubernetes cluster are used as `cacheWithJournal`, and the block devices located at the `/dev/sdc` path are used as `dataStore`. + +Based on the information of the block devices provided above, the block devices under the `/dev/sdb` and `/dev/sdc` paths are as follows: + +Block devices under `/dev/sdb` path: +- `blockdevice-648c1fffeab61e985aa0f8914278e9d0` +- `blockdevice-f26f5b30099c20b1f6e993675614c301` + +Block devices under `/dev/sdc` path: +- `blockdevice-8b697bad8a194069fbfd544e6db2ddb8` +- `blockdevice-a3579a64869f799a623d3be86dce7c59` + +1. Run the following commands to create a custom label for the block devices under the `/dev/sdb` path in the Kubernetes cluster. `mountType` is the key of the label, and `cacheWithJournal` is the value of the label. 
+ ```shell + kubectl label blockdevice blockdevice-648c1fffeab61e985aa0f8914278e9d0 mountType=cacheWithJournal -n iomesh-system + kubectl label blockdevice blockdevice-f26f5b30099c20b1f6e993675614c301 mountType=cacheWithJournal -n iomesh-system + ``` + +2. Run the following commands to create a custom label for the block devices under the `/dev/sdc` path in the Kubernetes cluster. `mountType` is the key of the label, and `dataStore` is the value of the label. + + ```shell + kubectl label blockdevice blockdevice-8b697bad8a194069fbfd544e6db2ddb8 mountType=dataStore -n iomesh-system + kubectl label blockdevice blockdevice-a3579a64869f799a623d3be86dce7c59 mountType=dataStore -n iomesh-system + ``` + +After the labels are created, the configuration of `deviceMap` is as follows: + +```yaml +spec: + # ... + chunk: + # ... + deviceMap: + cacheWithJournal: + selector: + matchExpressions: + - key: mountType + operator: In + values: + - cacheWithJournal + dataStore: + selector: + matchExpressions: + - key: mountType + operator: In + values: + - dataStore + # ... +``` + +**Example 3: All-Flash Configuration `deviceMap`** + +In this example, all SSD disks in the Kubernetes cluster are used as `dataStoreWithJournal`. The block device `blockdevice-a6652946c90d5c3fca5ca452aac5b826` is excluded from the selection. +```yaml +spec: +# ... +chunk: + # ... + deviceMap: + dataStoreWithJournal: + selector: + matchLabels: + iomesh.com/bd-deviceType: disk + matchExpressions: + - key: iomesh.com/bd-driverType + operator: In + values: + - SSD + exclude: + - blockdevice-a6652946c90d5c3fca5ca452aac5b826 + # ... +``` +Note that after the configuration is complete, any additional SSD or HDD disks added to the nodes later will be immediately managed by IOMesh. If you do not want this automatic management behavior, refer to **Example 2: Hybrid Configuration `deviceMap`** for how to create a custom label for disks. 
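The per-device `kubectl label` commands used in Example 2 can be generated rather than typed by hand. The sketch below is a hypothetical convenience helper (not part of IOMesh) that maps device paths to the desired `mountType` value and emits the corresponding commands.

```python
def label_commands(devices, path_to_mount_type, namespace="iomesh-system"):
    """Emit one `kubectl label` command per device whose path has a mapping."""
    return [
        f"kubectl label blockdevice {d['name']} mountType={mt} -n {namespace}"
        for d in devices
        if (mt := path_to_mount_type.get(d["path"]))
    ]

# Two of the block devices from the listing above, as name/path records.
devices = [
    {"name": "blockdevice-648c1fffeab61e985aa0f8914278e9d0", "path": "/dev/sdb"},
    {"name": "blockdevice-8b697bad8a194069fbfd544e6db2ddb8", "path": "/dev/sdc"},
]
for cmd in label_commands(devices, {"/dev/sdb": "cacheWithJournal",
                                    "/dev/sdc": "dataStore"}):
    print(cmd)
```

Feeding it all device name/path pairs from `kubectl get blockdevice -o wide` reproduces the labeling steps of Example 2 for any cluster size.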
+ diff --git a/website/versions.json b/website/versions.json index 023c32ea..62a83025 100644 --- a/website/versions.json +++ b/website/versions.json @@ -1,4 +1,5 @@ [ + "v1.0.4", "v1.0.3", "v1.0.2", "v1.0.1",