Merge pull request #1121 from JuanmaBM/docs/typos-and-links-fix
docs: correct typos and some broken links.
psav authored Jan 14, 2025
2 parents 1ba6e2c + 5a46e77 commit bc49607
Showing 12 changed files with 38 additions and 38 deletions.
6 changes: 3 additions & 3 deletions docs/clowder-design.md
@@ -42,11 +42,11 @@ applications have. These use cases will be encoded into the API of the
operator, which is of course CRDs. There will be two CRDs:

* ``ClowdEnvironment``
-This CR represents an instance of the entire cloud.redhat.com environment,
+This CRD represents an instance of the entire cloud.redhat.com environment,
e.g. stage or prod. It contains configuration for various aspects of the
environment, implemented by *providers*.

-* ``ClowdApp`` This CR represents a all the configuration an app needs to be deployed into
+* ``ClowdApp`` This CRD represents all the configuration an app needs to be deployed into
the cloud.redhat.com environment, including:

* One or more deployment specs
@@ -63,7 +63,7 @@ How these CRs will be translated into lower level resource types:
![Clowder Flow](img/clowder-flow.svg)

Apps will consume their environmental configuration from a JSON document mounted
-in their app container. This JSON document contains the various configuration
+in their app container. This JSON document contains various configurations
that could be considered common across the platform or common kinds of resources
that would be requested by an app on the platform, including:

14 changes: 7 additions & 7 deletions docs/contributing.md
@@ -41,12 +41,12 @@ func init() {
```

The `ProvName` is an identifier that defines the name of the provider. Notice that the Golang
-pacakge name is the same as this identifier. This is a nice convention and one which should be
+package name is the same as this identifier. This is a nice convention and one which should be
maintained when new providers are added. The next declaration is a MultiResourceIdent. These will be
-discussed in a little mroe detail below, but in short, this is a declaration of the resources that
+discussed in a little more detail below, but in short, this is a declaration of the resources that
this particular provider will create.
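The registration convention described here can be sketched as follows; `ProvName`, the registry map, and the function signature are simplified stand-ins, not Clowder's actual provider machinery:

```go
package main

import "fmt"

// ProvName identifies the provider; by convention it matches the Go
// package name. This sketch uses a plain map where Clowder has real
// registration machinery and resource idents.
const ProvName = "deployment"

var providers = map[string]func(mode string) string{}

func init() {
	// Register the provider under its identifier at package load time.
	providers[ProvName] = func(mode string) string {
		return "running " + ProvName + " provider in mode " + mode
	}
}

func main() {
	fmt.Println(providers["deployment"]("default"))
}
```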

-After that there is the `GetDeployment()` function. Every provider has some kind of `Get*()`
+After that, there is the `GetDeployment()` function. Every provider has some kind of `Get*()`
function, which is responsible for creating deciding which mode to run the provider in. Depending on
the environmental settings, providers can be run in different modes. The `deployment` provider is
a core provider and as such as no modal configuration, i.e. there is only one mode. Providers with
@@ -113,7 +113,7 @@ _environment_ controller and will be reconciled whenever the `ClowdEnvironment`

By contrast, `ClowdApp` modifications trigger the _application_ reconciliation, which first runs
the _environment_ function, in this case `NewDeploymentProvider()` before then running the
-`Provide()` function. This may seem odd and indeed is a design quirk of Clowder that iwill
+`Provide()` function. This may seem odd and indeed is a design quirk of Clowder that will
hopefully be resolved in a later release. Its reasoning is that the environmental resources often
need to provide information to the application level reconciliation, for instance to furnish the
`cdappconfig` with the Kafka broker address. Since this information is calculated by the
@@ -131,7 +131,7 @@ providers that need to modify the resources of other providers result not only i
update the same resources, but also can potentially trigger multiple reconciliations as updates to
Clowder owned resources can trigger these.

-To reduce this burden, the Clowder system will onyl apply resources at the very end of the
+To reduce this burden, the Clowder system will only apply resources at the very end of the
reconciliation. Until that time, resources are stored in the resource cache and providers are able
to retrieve objects from this cache, update them, and then placed the updated versions back in the
cache, so that their changes will be applied at the end of the reconciliation. This is where the
@@ -178,7 +178,7 @@ if err := dp.Cache.Update(CoreDeployment, d); err != nil {

This call sends the object back to the cache where it is copied.

-When another provider wishes to apply updates to this resource, it first needs to retrieve it from the cache. A very simliar example may be seen in the
+When another provider wishes to apply updates to this resource, it first needs to retrieve it from the cache. A very similar example may be seen in the
`serviceaccount` provider:

```golang
@@ -238,7 +238,7 @@ Please refer to the [Conventional Commits](https://www.conventionalcommits.org)
* ``perf``: For performance enhancements to code flow
* ``test``: For any changes to tests

-Using a `!` after the `purpose/scope` denotes a breaking change in the context of Clowder, this should be used whenever the API for either the Clowd* CRD resources, as well as any change to the `cdappconfig.json` spec. An example of a breaing change is shown below:
+Using a `!` after the `purpose/scope` denotes a breaking change in the context of Clowder, this should be used whenever the API for either the Clowd* CRD resources, as well as any change to the `cdappconfig.json` spec. An example of a breaking change is shown below:

```
chore(crd)!: Removes old web field value
2 changes: 1 addition & 1 deletion docs/crc-guide.md
@@ -2,7 +2,7 @@

## Prerequisites

-* Download the [crc binary](https://developers.redhat.com/products/codeready-containers/overview) and follow the instructions to get crc running.
+* Download the [crc binary](https://crc.dev/docs/installing/#installing) and follow the instructions to get crc running.
* Fork or clone the [Clowder repo](https://github.com/RedHatInsights/clowder)
* Install [the Clowder dependencies](https://github.com/RedHatInsights/clowder#dependencies)
* Run `make install`
18 changes: 9 additions & 9 deletions docs/developer-guide.md
@@ -18,10 +18,10 @@
NOTE: If you choose to place the kubebuilder executables in a different path, make sure to
use the ``KUBEBUILDER_ASSETS`` env var when running tests (mentioned in ``Unit Tests`` section below)

-* Install https://kubectl.docs.kubernetes.io/installation/kustomize/binaries/[kustomize]
+* Install [kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/binaries/)
** The install script places a ``kustomize`` binary in whatever directory you ran the above script in. Move this binary to a folder that is on your ``PATH`` or make sure the directory is appended to your ``PATH``

-* Install https://minikube.sigs.k8s.io/docs/start/[minikube]. The latest release we have tested with is https://github.com/kubernetes/minikube/releases/tag/v1.20.0[v1.20.0].
+* Install [minikube](https://minikube.sigs.k8s.io/docs/start/). The latest release we have tested with is [v1.20.0](https://github.com/kubernetes/minikube/releases/tag/v1.20.0).

NOTE: If you want/need to use OpenShift, you can install [Code Ready Containers](https://github.com/RedHatInsights/clowder/blob/master/docs/crc-guide.md), just be aware that it consumes a much larger amount of resources and our test helper scripts are designed to work with minikube.

@@ -30,9 +30,9 @@ We haven't had much success using the docker/podman drivers, and would recommend
### **KVM2-specific notes**

* If you don't have virtualization enabled, follow the guide
-on https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/[the minikube docs]
+on [the minikube docs](https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/)

-* Note that ``virt-host-validate`` may throw errors related to cgroups on Fedora 33 -- which you can https://gitlab.com/libvirt/libvirt/-/issues/94[ignore]
+* Note that ``virt-host-validate`` may throw errors related to cgroups on Fedora 33 -- which you can [ignore](https://gitlab.com/libvirt/libvirt/-/issues/94)

* If you don't want to enter a root password when minikube needs to modify its VM, add your user to the ``libvirt`` group:

@@ -113,8 +113,8 @@ Delve is a Go debugger. It allows you to run your app, set breakpoints, etc from
VS Code an open source IDE that is popular with many of the Clowder developers. Setting up a debugger with VS Code is easy and provides a GUI for setting breakpoints, stepping through code, etc.

* Run the through the Pre-requisites section above and make sure you have minikube running and the CRDs installed.
-* Install https://code.visualstudio.com/[VS Code]
-* Install the https://marketplace.visualstudio.com/items?itemName=golang.Go[Go extension] for VS Code
+* Install [VS Code](https://code.visualstudio.com/)
+* Install the [Go extension](https://marketplace.visualstudio.com/items?itemName=golang.Go) for VS Code
* Open the Clowder code in VS Code
* Create a launch.json file in the .vscode directory. Here's an example launch.json file:
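The example file itself is folded out of this diff view; a minimal VS Code launch configuration for debugging a Go program typically looks like the following (illustrative sketch, not necessarily identical to the one in the docs -- the `program` path in particular is an assumption):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug Clowder",
      "type": "go",
      "request": "launch",
      "mode": "debug",
      "program": "${workspaceFolder}/main.go"
    }
  ]
}
```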
@@ -172,7 +172,7 @@ We include a special make target that will launch a special debug instance of VS
- You will see the green play button next to each test as well as the `run test` and `debug test` buttons above each test
- You can use the run or debug test buttons to run and debug the tests from within VS Code

-NOTE: Some features of VS Code may not work correctly when launched this way. We reccomend only launching code this way when you want to write and debug unit tests.
+NOTE: Some features of VS Code may not work correctly when launched this way. We recommend only launching code this way when you want to write and debug unit tests.

### E2E Testing

@@ -181,7 +181,7 @@
* build your code changes into a docker image (both ``podman`` or ``docker`` supported)
* push the image into a registry
* deploy the operator onto a kubernetes cluster
* run `kuttl`` tests
* run `kuttl` tests

The scripts are:

@@ -253,7 +253,7 @@ Then be sure to add doc changes before committing, e.g.:
## Clowder configuration

Clowder can read a configuration file in order to turn on certain debug options, toggle feature
-flags and perform profiling. By default clowder will read from the file
+flags and performs profiling. By default clowder will read from the file
``/config/clowder_config.json`` to configure itself. When deployed as a pod, it an optional volume
is configured to look for a ``ConfigMap`` in the same namespace, called ``clowder-config`` which
looks similar to the following.
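The example manifest is folded out of this view; a sketch of such a ``ConfigMap`` might look like this (the keys inside ``clowder_config.json`` are assumptions for illustration, as is the namespace -- only the resource name ``clowder-config`` comes from the text above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: clowder-config
  namespace: clowder-system   # assumption: whatever namespace Clowder runs in
data:
  clowder_config.json: |
    {
      "debugOptions": {"pprof": {"enable": true}},
      "features": {"someFeatureFlag": false}
    }
```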
2 changes: 1 addition & 1 deletion docs/faq.md
@@ -95,7 +95,7 @@ first.
### How do I set up an internal port for inter-app communication?
-Two allow two apps to talk together internally without exposing a port to the public use the
+To allow two apps to talk together internally without exposing a port to the public, use the
[`spec.deployments.webServices.private`](https://redhatinsights.github.io/clowder/clowder/dev/api_reference.html#k8s-api-github-com-redhatinsights-clowder-apis-cloud-redhat-com-v1alpha1-privatewebservice)
configuration stanza.
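As a sketch, enabling a private web service in a ``ClowdApp`` might look like this (the ``enabled`` flag and ``envName`` value are assumptions here; consult the linked API reference for the authoritative fields):

```yaml
apiVersion: cloud.redhat.com/v1alpha1
kind: ClowdApp
metadata:
  name: internal-app
spec:
  envName: jumpstart          # assumption: the coupled ClowdEnvironment
  deployments:
    - name: api
      webServices:
        private:
          enabled: true       # expose the private port to other apps only
```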

10 changes: 5 additions & 5 deletions docs/index.md
@@ -111,7 +111,7 @@ relied on for production usage.
### Integration Tests in PR Check

The PR check script uses the operator to deploy a temporary platform
-environment to run tests. Once the tests complete, the the PR check script
+environment to run tests. Once the tests complete, the PR check script
will remove the CRs, triggering the operator to tear down the temporary
environment. Thus the operator needs to *quickly* set up and tear down
environments. This should also be contained in one namespace, as opposed to
@@ -188,7 +188,7 @@ Configuration:
* Prometheus config
* Prometheus push gateway config
* Entitlements service config
-* publicly exposed?
+* Publicly exposed?

``ClowdApp`` resources will always depend on one ``ClowdEnvironment``, referenced
by name in its ``base`` attribute.
@@ -216,7 +216,7 @@ Service hostnames for dependent apps will be added to the JSON configuration
mounted into an app's container; hostnames for apps that are not listed as
dependencies will not be added to the configuration.

-### OptionalDependencies
+### Optional Dependencies

As well as mandatory dependencies, Clowder also supports the concept of optional
dependencies. These will not prevent an app from being deployed and starting up
@@ -227,7 +227,7 @@ so that it can pick up the new configuration.
## Gateway

At this time the operator will intend to use the new gateway configuration base
-on the https://github.com/RedHatInsights/turnpike[Turnpike] project. This will enable the operator to dyanmically update
+on the [Turnpike](https://github.com/RedHatInsights/turnpike) project. This will enable the operator to dynamically update
the routing configuration of the gateway as apps are deployed or removed.

A new CRD will be introduced to persist routing configuration:
@@ -345,7 +345,7 @@ most pods are idle.
## One Operator vs Many

While each app team should be responsible for the operation of their own apps,
-the cost of building and maintaining many operators significantly outweights
+the cost of building and maintaining many operators significantly outweighs
the benefit of placing greater operational responsibility on app teams. Having
to create an operator -- even using the Operator SDK using a shared library
with examples -- is a high barrier to entry for any app team looking to build
4 changes: 2 additions & 2 deletions docs/migration/checklist.md
@@ -21,8 +21,8 @@ and setup backwards compatability
* [ ] ``Dockerfile`` is up to date
* [ ] ``build_deploy.sh`` is pushing to quay
* [ ] ``pr_check.sh`` is building in ephemeral env and passing local tests
-* [ ] App interface entries for [Jenkins jobs are running](https://github.com/RedHatInsights/clowder/tree/master/docs/migration#create-pr-check-and-build-master-jenkins-jobs-in-app-interface)
-* [ ] saas-deploy file [for Bonfire is enabled](https://github.com/RedHatInsights/clowder/tree/master/docs/migration#create-new-saas-deploy-file)
+* [ ] App interface entries for [Jenkins jobs are running](https://github.com/RedHatInsights/clowder/blob/master/docs/migration/migration.md#create-pr-check-and-build-master-jenkins-jobs-in-app-interface)
+* [ ] saas-deploy file [for Bonfire is enabled](https://github.com/RedHatInsights/clowder/blob/master/docs/migration/migration.md#create-new-saas-deploy-file)

## Backwards Compatibility
* [ ] e2e builds are disabled
6 changes: 3 additions & 3 deletions docs/migration/migration.md
@@ -28,9 +28,9 @@ Github or Gitlab, then the migration can proceed.
If it is decided to keep the project on Github, it must be open sourced. This usually entails
several things:

-. Adding a license file to the root of your repository
-. Adding a reference to the license at the top of each source file
-. Reviewing the repo's commit history for sensitive information
+* Adding a license file to the root of your repository
+* Adding a reference to the license at the top of each source file
+* Reviewing the repo's commit history for sensitive information

## Moving a Project from Github to Gitlab

6 changes: 3 additions & 3 deletions docs/sop.md
@@ -5,7 +5,7 @@ Clowder itself.

Clowder utilizes a common configuration format that is presented to each application, no matter
the environment it is running in, enabling a far easier development experience. It governs many
-different aspects of an applications configuration from defining the port it should listen to for
+different aspects of an application's configuration from defining the port it should listen to for
its main web service, to metrics, kafka and others. When using Clowder, the burden of identifying
and defining dependency and core service credentials and connection information is removed.

@@ -39,7 +39,7 @@ providers and their modes, please see the relevant pages.

#### Target Namespace

-Environmental resources, such as the Kafka/Zookeeper from the exmaple in the *Modes* section, will
+Environmental resources, such as the Kafka/Zookeeper from the example in the *Modes* section, will
be placed in the ``ClowdEnvironment``'s target namespace. This is configured by setting the
``targetNamespace`` attribute of the ``ClowdEnvironment``. If it is omitted, a random target
namespace is generated instead. The name of this resource can be found by inspecting the
@@ -82,7 +82,7 @@ exposed to the requesting app. A ``ClowdApp`` will not be deployed if any of it
dependencies do not exist within the coupled ``ClowdEnvironment``.

Infrastructure dependencies, such as Kafka topics and object bucket storage, are defined in the
-``ClowdApp`` spec. More information on each of them is defined in the [API specification](https://redhatinsights.github.io/clowder/api_reference.html#k8s-api-cloud-redhat-com-clowder-v2-apis-cloud-redhat-com-v1alpha1-clowdappspec).
+``ClowdApp`` spec. More information on each of them is defined in the [API specification](https://redhatinsights.github.io/clowder/clowder/dev/api_reference.html#k8s-api-github-com-redhatinsights-clowder-apis-cloud-redhat-com-v1alpha1-clowdappspec).

#### Created Resources

2 changes: 1 addition & 1 deletion docs/usage/app-workflow.md
@@ -24,7 +24,7 @@ Bonfire is a cli tool used to deploy apps with Clowder. Bonfire comes with
a local config option that we'll use to drop our ClowdApp into our minikube
cluster. Read about getting started with bonfire on ephemeral environments [here](https://clouddot.pages.redhat.com/docs/dev/getting-started/ephemeral/index.html)

-We'll use our examples from [Getting Started](https://github.com/RedHatInsights/clowder/blob/master/docs/usage/getting-started.rst) again. First, let's make a custom config for our ClowdApp so that bonfire can deploy it without
+We'll use our examples from [Getting Started](https://github.com/RedHatInsights/clowder/blob/master/docs/usage/getting-started.md) again. First, let's make a custom config for our ClowdApp so that bonfire can deploy it without
us needing to push any configuration into app-interface.

Type `bonfire config edit` and add the following to the 'apps' section:
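The snippet itself is folded out of this view; a local `apps` entry generally points bonfire at a template on disk, roughly like this (the keys and paths below are assumptions for illustration, not an authoritative bonfire config):

```yaml
apps:
  - name: jumpstart
    components:
      - name: jumpstart
        host: local                     # assumption: read the template locally
        repo: ~/clowder-apps/jumpstart  # hypothetical checkout path
        path: clowdapp.yaml             # hypothetical template file
```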
4 changes: 2 additions & 2 deletions docs/usage/getting-started.md
@@ -43,7 +43,7 @@ You can apply the environment's config and wait for it to become "ready" using:
bonfire deploy-env -n jumpstart
```

-This will cause bonfire to apply the [default ephemeral template](https://github.com/RedHatInsights/bonfire/blob/master/bonfire/resources/ephemeral-cluster-clowdenvironment.yaml) and set the ``targetNamespace`` to ``jumpstart```
+This will cause bonfire to apply the [default ephemeral template](https://github.com/RedHatInsights/bonfire/blob/master/bonfire/resources/ephemeral-cluster-clowdenvironment.yaml) and set the ``targetNamespace`` to ``jumpstart``

NOTE: You will only create a ClowdEnvironment in your local minikube. Stage
and Production will have one ClowdEnv, respectively, shared by all apps in
@@ -55,7 +55,7 @@ Let's see what the ClowdEnv does.
kubectl get env env-jumpstart -o yaml
```

-As you can see in the output, we have ``providers``_ for the different services. Some of these providers have caused certain deployments to appear in the environment's ``targetNamespace`` such as kafka, minio, featureflags service, etc.
+As you can see in the output, we have ``providers`` for the different services. Some of these providers have caused certain deployments to appear in the environment's ``targetNamespace`` such as kafka, minio, featureflags service, etc.
These will be used by ClowdApps associated with this environment.

### Accessing services running inside your namespace
2 changes: 1 addition & 1 deletion docs/usage/jobs.md
@@ -2,7 +2,7 @@

Jobs and CronJobs are currently enabled as part of the ClowdApp spec. The
``jobs`` field contains a list of all currently defined jobs. The spec for a
-job is documented in the [Clowder API reference](https://redhatinsights.github.io/clowder/api_reference.html#k8s-api-cloud-redhat-com-clowder-v2-apis-cloud-redhat-com-v1alpha1-job).
+job is documented in the [Clowder API reference](https://redhatinsights.github.io/clowder/clowder/dev/api_reference.html#k8s-api-github-com-redhatinsights-clowder-apis-cloud-redhat-com-v1alpha1-clowdjobinvocation).

Jobs and CronJobs are split by a ``schedule`` field inside your job. If the job
has a ``schedule``, it is assumed to be a CronJob. If not, Clowder runs your
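The Job/CronJob split described above can be sketched as follows (the ``podSpec``/``image`` fields and names are illustrative assumptions; the authoritative spec is in the linked API reference):

```yaml
apiVersion: cloud.redhat.com/v1alpha1
kind: ClowdApp
metadata:
  name: jumpstart
spec:
  envName: jumpstart
  jobs:
    - name: nightly-cleanup
      schedule: "0 3 * * *"   # has a schedule -> treated as a CronJob
      podSpec:
        image: quay.io/example/cleanup:latest
    - name: backfill          # no schedule -> run as a one-off Job
      podSpec:
        image: quay.io/example/backfill:latest
```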
