# Getting Started

## Get an OpenShift cluster (in case you don't have one)

If you already have an OpenShift cluster available, you can skip this section.
If you don't have an OpenShift cluster to experiment on, then the best choice -
at least in our experience - is to use a local instance by installing [Red Hat OpenShift
Local](https://developers.redhat.com/products/openshift-local/getting-started), formerly known as
CodeReady Containers (or CRC).

Once you've followed the steps in the
[installation guide](https://docs.redhat.com/en/documentation/red_hat_openshift_local/2.5/html/getting_started_guide/installation_gsg)
and have a local OpenShift 4 cluster available on your machine, you can move on to the next section
to configure Intersmash to connect to that instance.
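
With a default OpenShift Local installation, starting the cluster and getting its connection details usually
comes down to the following commands (a quick hint only; refer to the OpenShift Local documentation for your version):

```shell
# One-time host setup, then start the local cluster
crc setup
crc start

# Print the kubeadmin/developer credentials and the API URL of the local cluster
crc console --credentials

# Make the bundled oc client available in the current shell
eval $(crc oc-env)
```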

## Set up Intersmash configuration

While interacting with an OpenShift cluster, Intersmash uses two different logical users, namely `admin` and `master`.
These logical users represent two OpenShift user accounts, the former holding administrative permissions.
For example, Intersmash uses the administrative account when creating a namespace for the tests to run, and
the regular account when installing an Operator in the current namespace.

Along with the credentials or access tokens these user accounts would use, additional information
is needed in order to connect to the cluster, e.g. the API URL.

Intersmash uses [XTF](https://github.com/xtf-cz/xtf/?tab=readme-ov-file#configuration)
to connect to an OpenShift cluster. In practice, this means it can either rely on an existing
`.kube/config` file or, alternatively, be configured through a set of properties.

The following is a minimal set of properties that should be configured in order to provide Intersmash with the details
needed to connect to your cluster with the right user accounts (these are typically supplied the way XTF expects them,
e.g. in a `global-test.properties` file in the project root, see the XTF configuration guide linked above):

```properties
xtf.openshift.namespace=intersmash-test
xtf.bm.namespace=intersmash-test-bm
xtf.openshift.url=https://api.my-ocp-cluster:6443
#xtf.openshift.admin.username=admin
#xtf.openshift.admin.password=admin
xtf.openshift.admin.token=sha256~PMKGVnVjkqhvfFtvUnZcX5Nj6jlJ6MUNnomXfFk7kOU
#xtf.openshift.master.username=xpaasqe
#xtf.openshift.master.password=xpaasqe
xtf.openshift.master.token=sha256~VLz5aYr9x-YFSVOlTXvBk4CmfxLzSAg6cjT-WDFvJh8
```

The user credential properties are commented out in the example above, since tokens are provided instead.
See the [XTF configuration guide](https://github.com/xtf-cz/xtf/?tab=readme-ov-file#configuration) for more details.
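
If you prefer tokens, one common way to obtain one for a given user (assuming you can log in with the `oc` client) is:

```shell
# Log in as the desired user, then print the API token of the current session
oc login -u <username> https://api.my-ocp-cluster:6443
oc whoami -t
```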

## Build Intersmash and run an existing test on your cluster

```shell
mvn clean install -DskipTests ; mvn test -pl testsuite -Dtest=KafkaOperatorProvisionerTest
```
In the meantime, keep an eye on what's happening on the cluster:

```shell
oc get pods
```
Once you've verified that everything works, you'll be ready to add your first Intersmash test to your project.

## Create an Intersmash test

### Add Intersmash dependencies

The following dependencies must be added to your project POM:

```xml
<dependencies>
    <!-- contains the Intersmash core annotations, contracts and APIs -->
    <dependency>
        <groupId>org.jboss.intersmash</groupId>
        <artifactId>intersmash-core</artifactId>
    </dependency>
    <!-- provisioning implementations and components -->
    <dependency>
        <groupId>org.jboss.intersmash</groupId>
        <artifactId>intersmash-provisioners</artifactId>
    </dependency>
</dependencies>
```

### Create _application descriptor_(s) and a test class

The following example outlines a simple test scenario in which PostgreSQL and WildFly are used.

```java
public class WildflyOpenShiftApp implements WildflyImageOpenShiftApplication {
    // ... descriptor body elided here, see the full example in the Intersmash repository
}
```
The application's name is declared, and the build of the WildFly image to deploy is retrieved and provided to the framework.
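
The application descriptors are then wired into a test class through the Intersmash annotations. The sketch below is
illustrative only: `PostgresqlApp` and `SampleScenarioTest` are hypothetical names, and the exact annotation packages
should be checked against the Intersmash README and javadoc.

```java
// NOTE: illustrative sketch, not the exact class from the repository.
import org.jboss.intersmash.annotations.Intersmash; // assumed package, verify against your Intersmash version
import org.jboss.intersmash.annotations.Service;
import org.jboss.intersmash.annotations.ServiceUrl;
import org.junit.jupiter.api.Test;

// Declares the services that make up the test scenario: Intersmash provisions them
// before the test methods run and tears them down afterwards.
@Intersmash({
        @Service(PostgresqlApp.class),      // hypothetical PostgreSQL application descriptor
        @Service(WildflyOpenShiftApp.class) // the WildFly application descriptor shown above
})
public class SampleScenarioTest {

    // Injected with the route URL exposed by the provisioned WildFly service.
    @ServiceUrl(WildflyOpenShiftApp.class)
    private String wildflyRouteUrl;

    @Test
    public void applicationIsReachable() {
        // Exercise the deployed application through wildflyRouteUrl, e.g. with an HTTP client.
    }
}
```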
