Added JDBC tracing section in Java Observability #1278

Merged · 8 commits · Dec 13, 2024
98 changes: 98 additions & 0 deletions java/operating-applications/observability.md
@@ -217,6 +217,104 @@ In case you've configured `cf-java-logging-support` as described in [Logging Ser
By default, the ID is accepted and forwarded via HTTP header `X-CorrelationID`. If you want to accept `X-Correlation-Id` header in incoming requests alternatively,
follow the instructions given in the guide [Instrumenting Servlets](https://github.com/SAP/cf-java-logging-support/wiki/Instrumenting-Servlets#correlation-id).

### JDBC Tracing in SAP HANA
To activate JDBC tracing in the SAP HANA JDBC driver, use the driver's [Trace Options](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/4033f8e603504c0faf305ab77627af03.html). You can activate them either by setting datasource properties in the `application.yaml` and restarting the application, or, while the application is running, by using the [command line](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/e411647b03f1425fab1e33bb495c9c42.html).

#### Using datasource properties

In the `application.yaml`, under `cds.dataSource.<service-binding-name>`, specify `hikari.data-source-properties.traceFile` and `hikari.data-source-properties.traceOptions`:

```yaml [srv/src/main/resources/application.yaml]
cds:
  dataSource:
    service-manager: # name of the service binding
      hikari:
        data-source-properties:
          traceFile: "/home/user/jdbctraces/trace_.log" # use a path that is write-accessible
          traceOptions: "CONNECTIONS,API,PACKET"
```

Relative paths work as well. The important thing is to use a directory that the user running the application (user `cnb` in containers built with Cloud Native Buildpacks) has write access to. Otherwise, nothing is written, silently, without any error message.

::: tip
Add an underscore at the end of the trace file's name. It helps readability by separating the name from the string of numbers that the JDBC tracing process appends for the epoch timestamp.

```sh
~/jdbctraces/ $ ls

trace_10324282997834295561.log
trace_107295864860396783.log
trace_10832681394984179734.log
...
```
:::

[Trace Options](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/4033f8e603504c0faf305ab77627af03.html) lists the available command line options. For the datasource property, you only need the option's name, such as `CONNECTIONS`, `API`, or `PACKET`. You can specify more than one option, separated by commas.

This method of activating JDBC tracing requires restarting the application. For cloud deployments on Cloud Foundry, this typically means redeploying via MTA; on Kyma, it means rebuilding the application, re-creating and publishing the container image to the container image registry, and redeploying the application via Helm.
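On Cloud Foundry, such a redeployment could look like this (a minimal sketch, assuming the Cloud MTA Build Tool `mbt` and the MultiApps CF CLI plugin; the archive name is a placeholder):

```sh
# Rebuild the MTA archive after changing application.yaml
mbt build
# Redeploy it (requires the MultiApps CF CLI plugin)
cf deploy mta_archives/<your-app>_<version>.mtar
```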

Once the `application.yaml` of the deployed application contains both `hikari.data-source-properties.traceFile` and `hikari.data-source-properties.traceOptions`, their values can also be overwritten by setting the corresponding environment variables in the container.

For example, to overwrite the trace file path from the `application.yaml`, set an environment variable like the following, where `SERVICE_MANAGER` corresponds to the name of the service binding:
```yaml
CDS_DATASOURCE_SERVICE_MANAGER_HIKARI_DATA_SOURCE_PROPERTIES_TRACEFILE: "/home/cnb/jdbctraces/sm/trace_.log"
```
To overwrite the trace options accordingly:
```yaml
CDS_DATASOURCE_SERVICE_MANAGER_HIKARI_DATA_SOURCE_PROPERTIES_TRACEOPTIONS: "DISTRIBUTIONS"
```
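How you set these environment variables depends on the platform. A minimal sketch, with application and deployment names as placeholders:

```sh
# Cloud Foundry: set the variable, then restart the app
cf set-env <app-name> CDS_DATASOURCE_SERVICE_MANAGER_HIKARI_DATA_SOURCE_PROPERTIES_TRACEFILE "/home/cnb/jdbctraces/sm/trace_.log"
cf restart <app-name>

# Kyma/Kubernetes: patch the deployment's environment (triggers a rollout)
kubectl set env deployment/<deployment-name> CDS_DATASOURCE_SERVICE_MANAGER_HIKARI_DATA_SOURCE_PROPERTIES_TRACEFILE="/home/cnb/jdbctraces/sm/trace_.log"
```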

#### Using the command line

Using the command line to activate JDBC tracing doesn't require an application restart.

However, when running in the cloud, the exact locations of the HANA JDBC driver and the `java` executable depend on the buildpacks used for the CAP Java application. The following assumes the use of the [Cloud Native Buildpacks](https://pages.github.tools.sap/unified-runtime/docs/building-blocks/unified-build-and-deploy/buildpacks) as recommended by the [Unified Runtime](https://pages.github.tools.sap/unified-runtime/).


##### On Kyma

The following steps describe how to open a bash session in the application's container and use the trace options of the HANA JDBC driver:

1. Run bash in the pod that runs the CAP Java application:

First, identify the pod name:
```sh
kubectl get pods
```
Then, in the pod's namespace, run:
```sh
kubectl exec -it pod/<POD_NAME> -- bash
```
to acquire a bash session in the container.

1. Locate java executable and JDBC driver:

By default, `JAVA_HOME` isn't set, and the buildpack contains only minimal tooling, as it tries to minimize the container size. However, the default location of the `java` executable is `/layers/paketo-buildpacks_sap-machine/jre/bin`.

For convenience, store the path into a variable, for example, `JAVA_HOME`:
```sh
export JAVA_HOME=/layers/paketo-buildpacks_sap-machine/jre/bin/
```

The JDBC driver is usually located in `/workspace/BOOT-INF/lib`. Store it into another variable, for example, `JDBC_DRIVER_PATH`:
```sh
export JDBC_DRIVER_PATH=/workspace/BOOT-INF/lib
```
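The driver JAR carries its version in the file name. To find the exact name, you can list the directory:

```sh
# Look up the versioned file name of the HANA JDBC driver
ls $JDBC_DRIVER_PATH | grep ngdbc
# prints, for example: ngdbc-2.20.11.jar (the version in your project will differ)
```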

1. Use the JDBC trace options of the driver, specifying the correct (versioned) name of the `ngdbc.jar`:
```sh
$JAVA_HOME/java -jar $JDBC_DRIVER_PATH/ngdbc-<VERSION>.jar <option>
```

[Trace Options](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/4033f8e603504c0faf305ab77627af03.html) lists the available options.

Before turning on tracing with `TRACE ON`, it's recommended to set `TRACE FILENAME` to a path that the current shell user has write access to, for example:
```sh
$JAVA_HOME/java -jar $JDBC_DRIVER_PATH/ngdbc-<VERSION>.jar TRACE FILENAME ~/tmp/traces/jdbctrace
```
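Afterwards, you can switch tracing on, reproduce the scenario, and switch it off again. A sketch, assuming your driver version supports the `TRACE OFF` command:

```sh
# Switch tracing on (traces are written to the file configured above)
$JAVA_HOME/java -jar $JDBC_DRIVER_PATH/ngdbc-<VERSION>.jar TRACE ON
# ... reproduce the scenario you want to trace ...
# Switch tracing off again
$JAVA_HOME/java -jar $JDBC_DRIVER_PATH/ngdbc-<VERSION>.jar TRACE OFF
```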

1. Read the trace file while the pod is still running, or use [`kubectl cp`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_cp/) to copy the file to your local machine, as sketched below.
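A minimal sketch of copying a trace file off the pod with `kubectl cp`; pod name, trace directory, and file name are placeholders:

```sh
# Copy a single trace file from the pod to the current directory
kubectl cp <POD_NAME>:/home/cnb/tmp/traces/jdbctrace_<TIMESTAMP>.log ./jdbctrace_<TIMESTAMP>.log
```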



## Monitoring { #monitoring }
