docs, ADOPTERS: consistent capitalization
Kubevirt -> KubeVirt

Signed-off-by: Dan Kenigsberg <[email protected]>
dankenigsberg committed Jun 16, 2021
1 parent 4510840 commit a5e23f1
Showing 12 changed files with 20 additions and 20 deletions.
4 changes: 2 additions & 2 deletions ADOPTERS.md
@@ -9,5 +9,5 @@ This is a likely incomplete list of KubeVirt adopters - end-users and distributo
| H3C | 2019 | We distribute KubeVirt as part of CloudOS to enable VM workloads on Kubernetes at customer sites . Follow the [link](https://www.h3c.com/en/Products_Technology/Enterprise_Products/Cloud_Computing/Cloud_Computing_Products/H3C_CloudOS/H3C_CloudOS_full-stack/) for more product information |
| NVIDIA | 2018 | NVIDIA's latest computing platform is built on open-source projects like Kubernetes and KubeVirt to power products like [GeForce NOW](https://www.nvidia.com/en-us/geforce-now/) with more to come.
| [CoreWeave](https://www.coreweave.com) | 2020 | A Kubernetes native cloud provider with focus on GPUs at scale. KubeVirt allows us to co-locate non-containerizable workloads such as Virtual Desktops next to compute intensive containers executing on bare metal. All orchestrated via the Kubernetes API leveraging the same network policies and persistent volumes for both VM and containerized workloads. |
-| [Civo](https://www.civo.com) | 2020 | We are using Kubevirt as part of our stack to enable tenant cluster provisioning within Civo cloud. |
-| [SUSE](https://www.suse.com/) | 2020 | SUSE believes Kubevirt is the best open source way to handle Virtual Machines on Kubernetes today. We offer this additional possibility to our customers by leveraging Kubevirt in our products. |
+| [Civo](https://www.civo.com) | 2020 | We are using KubeVirt as part of our stack to enable tenant cluster provisioning within Civo cloud. |
+| [SUSE](https://www.suse.com/) | 2020 | SUSE believes KubeVirt is the best open source way to handle Virtual Machines on Kubernetes today. We offer this additional possibility to our customers by leveraging KubeVirt in our products. |
2 changes: 1 addition & 1 deletion docs/devel/host-devices-and-device-plugins.md
@@ -40,7 +40,7 @@ Here is an example of an expected naming of the variables:
```
PCI_RESOURCE_NVIDIA_COM_TU104GL_Tesla_T4=PCIADDRESS1
```
-This encodes a PCI device with its resource name (provided in the Kubevirt CR) "nvidia.com/TU104GL_Tesla_T4"
+This encodes a PCI device with its resource name (provided in the KubeVirt CR) "nvidia.com/TU104GL_Tesla_T4"
```
PCI_RESOURCE_INTEL_QAT=PCIADDRESS2,PCIADDRESS3,...
MDEV_PCI_RESOURCE_NVIDIA_COM_GRID_T4-1Q=UUID1,UUID2,UUID3,...
```
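
For context, the resource names in these variables come from the `permittedHostDevices` section of the KubeVirt CR. A minimal sketch, assuming the usual `pciVendorSelector`/`resourceName` fields (the vendor:device ID pair is illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    permittedHostDevices:
      pciHostDevices:
        # PCI vendor:device ID pair to match on the host (illustrative value)
        - pciVendorSelector: "10de:1eb8"
          # surfaces in the pod as PCI_RESOURCE_NVIDIA_COM_TU104GL_Tesla_T4
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
```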
2 changes: 1 addition & 1 deletion docs/devel/virtual-machine.md
@@ -139,7 +139,7 @@ spec:
```
The file specification follows the Kubernetes guide. The apiVersion is linked
-with the Kubevirt release cycle.
+with the KubeVirt release cycle.
In the metadata section, there is a *required* field, the **name**. Then
following the spec section, there are two important parts. The **running**, which
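
A minimal `VirtualMachine` manifest along these lines might look as follows (a sketch; the name and memory value are illustrative):

```yaml
# apiVersion tracks the KubeVirt release cycle
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  # the one *required* metadata field
  name: my-vm
spec:
  # whether the VM should currently be running
  running: false
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 64Mi
```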
2 changes: 1 addition & 1 deletion docs/devel/vm-monitoring.md
@@ -23,7 +23,7 @@ While the first two bullet points are easy to understand, the third needs some m
Unresponsive metrics sources
----------------------------

-When we use QEMU on shared storage, like we want to do with Kubevirt, any network issue could cause
+When we use QEMU on shared storage, like we want to do with KubeVirt, any network issue could cause
one or more storage operations to delay, or to be lost entirely.

In that case, the userspace process that requested the operation can end up in the D state,
2 changes: 1 addition & 1 deletion docs/discard-passthrough.md
@@ -1,6 +1,6 @@
# Thick and thin volume provisioning

-Sparsification can make a disk thin-provisioned, in other words it allows to convert the freed space within the disk image into free space back on the host. The [fstrim](https://man7.org/linux/man-pages/man8/fstrim.8.html#:~:text=fstrim%20is%20used%20on%20a,unused%20blocks%20in%20the%20filesystem) utility can be used on a mounted filesystem to discard the blocks not used by the filesystem. In order to be able to sparsify a disk inside the guest, the disk needs to be configured in the [libvirt xml](https://libvirt.org/formatdomain.html) with the option `discard=unmap`. In Kubevirt, every disk is passed as default with this option enabled. It is possible to check if the trim configuration is supported in the guest by running`lsblk -D`, and check the discard options supported on every disk. Example:
+Sparsification can make a disk thin-provisioned, in other words it allows to convert the freed space within the disk image into free space back on the host. The [fstrim](https://man7.org/linux/man-pages/man8/fstrim.8.html#:~:text=fstrim%20is%20used%20on%20a,unused%20blocks%20in%20the%20filesystem) utility can be used on a mounted filesystem to discard the blocks not used by the filesystem. In order to be able to sparsify a disk inside the guest, the disk needs to be configured in the [libvirt xml](https://libvirt.org/formatdomain.html) with the option `discard=unmap`. In KubeVirt, every disk is passed as default with this option enabled. It is possible to check if the trim configuration is supported in the guest by running`lsblk -D`, and check the discard options supported on every disk. Example:
```bash
$ lsblk -D
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
```
2 changes: 1 addition & 1 deletion docs/libvirt-pod-networking.md
@@ -69,7 +69,7 @@ achieve the same, and provide IP addresses to VMIs, not just a interface.
During this document we call each of these _new_ pod interfaces _VMI interface_,
in order to differentiate them from the originally-allocated pod interface
-(`eth0`). The original pod interface (`eth0`) is never modified by Kubevirt,
+(`eth0`). The original pod interface (`eth0`) is never modified by KubeVirt,
and can be used to access libvirtd (through the libvirtd pod IP) or to provide
VMI-centric services through the VMI pod IP.
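
As an illustration of the distinction, a VMI requesting one such secondary interface next to the default pod network might be specified like this (a sketch; the Multus network name is hypothetical):

```yaml
spec:
  domain:
    devices:
      interfaces:
        # backed by the original pod network; eth0 itself stays untouched
        - name: default
          masquerade: {}
        # a VMI interface, added to the pod as a secondary interface
        - name: secondary
          bridge: {}
  networks:
    - name: default
      pod: {}
    - name: secondary
      multus:
        # hypothetical NetworkAttachmentDefinition
        networkName: my-secondary-net
```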

14 changes: 7 additions & 7 deletions docs/localstorage-disks.md
@@ -1,7 +1,7 @@
# Local Storage Placement for VM Disks

This document describes a special handling of `DataVolumes` in the `WaitForFirstConsumer` state.
-`WaitForFirstConsumer` state is available from [CDI v1.21.0](https://github.com/kubevirt/containerized-data-importer/releases/tag/v1.21.0), and the logic to handle this is available from [Kubevirt v0.36.0](https://github.com/kubevirt/kubevirt/releases/tag/v0.36.0)
+`WaitForFirstConsumer` state is available from [CDI v1.21.0](https://github.com/kubevirt/containerized-data-importer/releases/tag/v1.21.0), and the logic to handle this is available from [KubeVirt v0.36.0](https://github.com/kubevirt/kubevirt/releases/tag/v0.36.0)

## Use-case

@@ -22,26 +22,26 @@ This is especially problematic when using a VM with DataVolumeTemplate with many

The solution is to leverage Kubernetes pod scheduler to bind the PVC to a PV on a correct node.
By using a StorageClass with `volumeBindingMode` set to `WaitForFirstConsumer` the binding and provisioning of PV is delayed until a Pod using the PersistentVolumeClaim is created.
-Kubevirt can schedule a special ephemeral pod that becomes a first consumer of the PersistentVolumeClaim.
+KubeVirt can schedule a special ephemeral pod that becomes a first consumer of the PersistentVolumeClaim.
Its only purpose is to be scheduled to a node capable of running VM and by using PVCs to trigger kubernetes to provision and bind PV's on the same node.
-After PVC are bound the `CDI` can do its work and Kubevirt can start the actual VM.
+After PVC are bound the `CDI` can do its work and KubeVirt can start the actual VM.

## Implementation

### Flow

1. A StorageClass with volumeBindingMode=WaitForFirstConsumer is created
2. User creates the VM with DataVolumeTemplate containing
-3. `Kubevirt` creates DataVolume
+3. `KubeVirt` creates DataVolume
4. The `CDI` sees that new DV has unbound PVC with storage class with volumeBindingMode=WaitForFirstConsumer, sets the phase of DV to `WaitForFirstConsumer` and waits for PVC to be bound by some external action.
-5. `Kubevirt` sees the DV in phase `WaitForFirstConsumer`, so it creates an ephemeral pod (basically a virtlauncher pod
+5. `KubeVirt` sees the DV in phase `WaitForFirstConsumer`, so it creates an ephemeral pod (basically a virtlauncher pod
without a VM payload and with `kubevirt.io/ephemeral-provisioning` annotation) only used to force PV provisioning
6. Kubernetes schedules the ephemeral pod, (the node selected meets all the VM requirements), pod requires
the same PVC as the VM so kubenertes has to provision and bind the PV to PVC on a correct node before the pod can be started
7. `CDI` sees that PVC is Bound, changes DV status to "ImportScheduled" (or clone/upload), and tries to start worker pods
-8. `Kubevirt` sees DV status is `ImportScheduled`, it can terminate the ephemeral provisioning pod
+8. `KubeVirt` sees DV status is `ImportScheduled`, it can terminate the ephemeral provisioning pod
8. `CDI` does the Import, marks DV as `Succeeded`
-9. `Kubevirt` creates the virtlauncher pod to start a VM
+9. `KubeVirt` creates the virtlauncher pod to start a VM

This flow differs from standard scenario (import/upload/clone on storage with Immediate binding) by steps 4, 5, 6 and 8.
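
A hedged sketch of the objects behind steps 1 and 2, using a standalone DataVolume for brevity (names, size, and image URL are illustrative; the same flow applies to a DataVolumeTemplate):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-wffc
# e.g. statically provisioned local volumes
provisioner: kubernetes.io/no-provisioner
# delay binding until a pod consumes the PVC
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: local-dv
spec:
  source:
    http:
      # illustrative image source
      url: http://example.com/disk.img
  pvc:
    storageClassName: local-wffc
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
```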

2 changes: 1 addition & 1 deletion docs/metrics.md
@@ -3,7 +3,7 @@
This document aims to help users that are not familiar with all metrics exposed by different KubeVirt components.
All metrics documented here are auto-generated by the utility tool`tools/doc-generator` and reflects exactly what is being exposed.

-## Kubevirt specific metrics
+## KubeVirt-specific metrics

### kubevirt_info
Version information.
2 changes: 1 addition & 1 deletion docs/monitoring-guidelines.md
@@ -19,7 +19,7 @@ For Example, see the following Kubernetes network metrics:

The KubeVirt metrics for vmi should be:
- **kubevirt_vmi**_network_receive_packets_total
-- **Kubevirt_vmi**_network_transmit_packets_total
+- **kubevirt_vmi**_network_transmit_packets_total
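
A hypothetical recording rule built on top of these metrics, following the same naming convention (the rule name and expression are illustrative):

```yaml
groups:
  - name: kubevirt.rules
    rules:
      # illustrative: total packets per VMI, receive plus transmit
      - record: kubevirt_vmi_network_packets_total
        expr: kubevirt_vmi_network_receive_packets_total + kubevirt_vmi_network_transmit_packets_total
```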

### KubeVirt Recording Rules

2 changes: 1 addition & 1 deletion docs/replica-sets.md
@@ -59,7 +59,7 @@ are in a non-final state and which match `spec.selector` in the
replicas meet the ready condition.

*Note* that at the moment when writing this proposal, there exist no
-readiness checks for VirtualMachineInstances in Kubevirt. Therefore a `VirtualMachineInstance` is
+readiness checks for VirtualMachineInstances in KubeVirt. Therefore a `VirtualMachineInstance` is
considered to be ready, when reported by virt-handler as running or migrating.
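
For reference, a minimal `VirtualMachineInstanceReplicaSet` showing the `spec.selector` discussed above (a sketch; names and the memory value are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: my-vmi-rs
spec:
  replicas: 3
  # readiness is counted across VMIs matching this selector
  selector:
    matchLabels:
      kubevirt.io/vmReplicaSet: my-vmi-rs
  template:
    metadata:
      labels:
        kubevirt.io/vmReplicaSet: my-vmi-rs
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 64Mi
```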

In case of a delete failure:
4 changes: 2 additions & 2 deletions docs/update-go-version.md
@@ -1,8 +1,8 @@
# How to update Go version
-A quick guide to update Kubevirt's Go version.
+A quick guide to update KubeVirt's Go version.

To update the Go version we need to update the builder image so that it uses the new version,
-push it to the registry and finally let Kubevirt use the new builder image.
+push it to the registry and finally let KubeVirt use the new builder image.

In addition, [go rules for bazel](https://github.com/bazelbuild/rules_go) have to be updated if the current version does not support the target Go version.

2 changes: 1 addition & 1 deletion docs/vm-configuration.md
@@ -444,7 +444,7 @@ spec:

#### Hotplug

-By default Kubevirt will now add a virtio-scsi controller to support hotplugging disks into a running VM. If for whatever reason you do not want this controller, you can stop KubeVirt from adding it by adding DisableHotplug to the devices section of the VM(I) spec
+By default KubeVirt will now add a virtio-scsi controller to support hotplugging disks into a running VM. If for whatever reason you do not want this controller, you can stop KubeVirt from adding it by adding DisableHotplug to the devices section of the VM(I) spec

```yaml
spec:
```
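
A hedged sketch of such a spec, assuming the `disableHotplug` field name:

```yaml
spec:
  domain:
    devices:
      # opt out of the default virtio-scsi hotplug controller
      disableHotplug: true
```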
