diff --git a/quickstart/faqs.md b/quickstart/faqs.md
index 10dd9fa..a16b7b5 100644
--- a/quickstart/faqs.md
+++ b/quickstart/faqs.md
@@ -87,4 +87,9 @@ In Kubernetes, when a PVC is created with the reclaim policy set to 'Retain', th
 ### How does the PV garbage collector work?
- The PV garbage collector deploys a watcher component, which subscribes to the Kubernetes Persistent Volume deletion events. When a PV is deleted, an event is generated by the Kubernetes API server and is received by this component. Upon a successful validation of this event, the garbage collector deletes the corresponding Mayastor volume resources.
\ No newline at end of file
+ The PV garbage collector deploys a watcher component, which subscribes to the Kubernetes Persistent Volume deletion events. When a PV is deleted, an event is generated by the Kubernetes API server and is received by this component. Upon a successful validation of this event, the garbage collector deletes the corresponding Mayastor volume resources.
+
+### How do I disable CoW for a btrfs filesystem?
+
+To disable CoW (copy-on-write) for a `btrfs` filesystem, add `nodatacow` as a mount option in the StorageClass used to provision the volume, as shown below.
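+
+A minimal sketch of such a StorageClass; the name and the replica/protocol settings are illustrative:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: mayastor-btrfs-nocow   # hypothetical name
+parameters:
+  repl: "1"
+  protocol: nvmf
+  fsType: btrfs
+mountOptions:
+  - nodatacow   # disables copy-on-write for volumes provisioned with this class
+provisioner: io.openebs.csi-mayastor
+```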
diff --git a/quickstart/known-limitations.md b/quickstart/known-limitations.md
index 507bf9f..ec053fb 100644
--- a/quickstart/known-limitations.md
+++ b/quickstart/known-limitations.md
@@ -4,10 +4,6 @@
 Once provisioned, neither Mayastor Disk Pools nor Mayastor Volumes can be re-sized. A Mayastor Pool can have only a single block device as a member. Mayastor Volumes are exclusively thick-provisioned.
-## Snapshots and Clones
-
-Mayastor has no snapshot or cloning capabilities.
-
 ## Volumes are "Highly Durable" but without multipathing are not "Highly Available"
 Mayastor Volumes can be configured \(or subsequently re-configured\) to be composed of 2 or more "children" or "replicas"; causing synchronously mirrored copies of the volumes's data to be maintained on more than one worker node and Disk Pool. This contributes additional "durability" at the persistence layer, ensuring that viable copies of a volume's data remain even if a Disk Pool device is lost.
diff --git a/reference/storage-class-parameters.md b/reference/storage-class-parameters.md
index b3ac783..bc2e7b7 100644
--- a/reference/storage-class-parameters.md
+++ b/reference/storage-class-parameters.md
@@ -12,7 +12,9 @@ The storage class parameter `local` has been deprecated and is a breaking change
 ## "fsType"
-File system that will be used when mounting the volume. The default file system when not specified is 'ext4'. We recommend to use 'xfs' that is considered to be more advanced and performant. Though make sure that XFS is installed on all nodes in the cluster before using it.
+The file system that will be used when mounting the volume.
+The supported file systems are **ext4**, **xfs**, and **btrfs**; the default file system when not specified is **ext4**. We recommend **xfs**, which is considered more advanced and performant.
+Ensure that the requested file system driver is installed on all worker nodes in the cluster before using it.
 ## "ioTimeout"
@@ -81,4 +83,6 @@ By default, the `stsAffinityGroup` feature is disabled. To enable it, modify the
 - When set to `true`, the created clone/restore's filesystem `uuid` will be set to the restore volume's `uuid`. This is important because some file systems, like XFS, do not allow duplicate filesystem `uuid` on the same machine by default.
 - When set to `false`, the created clone/restore's filesystem `uuid` will be same as the orignal volume `uuid`, but it will be mounted using the `nouuid` flag to bypass duplicate `uuid` validation.
-
+{% hint style="info" %}
+This option must be set to `true` when using a `btrfs` filesystem if the application using the restored volume is scheduled concurrently on the same node where the original volume is mounted; see the sketch after this note.
+{% endhint %}
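+
+A minimal sketch of a StorageClass for that btrfs restore scenario; the parameter name `cloneFsIdAsVolumeId` and the other values are assumptions for illustration:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: mayastor-btrfs-restore   # hypothetical name
+parameters:
+  repl: "1"
+  protocol: nvmf
+  fsType: btrfs
+  cloneFsIdAsVolumeId: "true"   # assumed parameter name; the restored volume gets its own filesystem uuid
+provisioner: io.openebs.csi-mayastor
+```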