The micrometer quickstart demonstrates the use of the Micrometer library in WildFly.
Micrometer is a vendor-neutral facade that allows application developers to collect and report application and system metrics to the backend of their choice in an entirely portable manner. By simply replacing the MeterRegistry used, or by combining several of them in Micrometer's CompositeMeterRegistry, data can be exported to a variety of monitoring systems with no application code changes.
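As an illustration of that portability, here is a minimal standalone sketch, not taken from this quickstart and independent of WildFly's managed registry, that feeds the same meter to two backends through a CompositeMeterRegistry (the registry choices here are arbitrary):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.composite.CompositeMeterRegistry;
import io.micrometer.core.instrument.logging.LoggingMeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class CompositeExample {
    public static void main(String[] args) {
        // One facade, two backends: every measurement is forwarded to both registries.
        CompositeMeterRegistry composite = new CompositeMeterRegistry();
        composite.add(new SimpleMeterRegistry());   // in-memory registry
        composite.add(new LoggingMeterRegistry());  // periodically logs meter values

        Counter counter = composite.counter("demo.counter");
        counter.increment(); // recorded by each registry in the composite
    }
}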
In this quickstart, we will build a small, simple application that shows the usage of a number of Micrometer's Meter implementations. We will also demonstrate how WildFly exports the metric data: via the OpenTelemetry Protocol (OTLP) to the OpenTelemetry Collector. To provide simpler access to the published metrics, the Collector will be configured with a Prometheus endpoint from which we can scrape data.
To complete this guide, you will need:
- Less than 15 minutes
- JDK 11+ installed with JAVA_HOME configured appropriately
- Apache Maven 3.5.3+
- Docker Compose, or alternatively Podman Compose
In the following instructions, replace WILDFLY_HOME with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.
When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.
- Open a terminal and navigate to the root of the WildFly directory.
- Start the WildFly server with the default profile by typing the following command:
$ WILDFLY_HOME/bin/standalone.sh
Note: For Windows, use the WILDFLY_HOME\bin\standalone.bat script.
You enable Micrometer by running JBoss CLI commands. For your convenience, this quickstart batches the commands into a configure-micrometer.cli script provided in the root directory of this quickstart.
- Before you begin, make sure you do the following:
  - Back up the WildFly standalone server configuration as described above.
  - Start the WildFly server with the standalone default profile as described above.
- Review the configure-micrometer.cli file in the root of this quickstart directory. This script adds the configuration that enables Micrometer for the quickstart components. Comments in the script describe the purpose of each block of commands.
- Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME with the path to your server:
$ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=configure-micrometer.cli
Note: For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.
You should see the following result when you run the script:
The batch executed successfully
process-state: reload-required
- You'll need to reload the server configuration after that:
$ WILDFLY_HOME/bin/jboss-cli.sh --connect --commands=reload
By default, WildFly will publish metrics every 10 seconds, so you will soon start seeing errors about a refused connection. This is because we have told WildFly to publish to a server that is not yet running, so we need to fix that. The simplest way to do so is to use Docker Compose to start an instance of the OpenTelemetry Collector.
The Docker Compose configuration file is docker-compose.yaml:
version: "3"
services:
  otel-collector:
    image: otel/opentelemetry-collector
    command: [--config=/etc/otel-collector-config.yaml]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:Z
    ports:
      - 1888:1888   # pprof extension
      - 8888:8888   # Prometheus metrics exposed by the collector
      - 8889:8889   # Prometheus exporter metrics
      - 13133:13133 # health_check extension
      - 4317:4317   # OTLP gRPC receiver
      - 4318:4318   # OTLP HTTP receiver
      - 55679:55679 # zpages extension
      - 1234:1234   # /metrics endpoint
The Collector server configuration file is otel-collector-config.yaml:
extensions:
  health_check:
  pprof:
    endpoint: 0.0.0.0:1777
  zpages:
    endpoint: 0.0.0.0:55679
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
exporters:
  prometheus:
    endpoint: "0.0.0.0:1234"
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
  extensions: [health_check, pprof, zpages]
We can now bring up the collector server instance:
$ docker-compose up
The service should be available almost immediately, which you can verify by pointing your browser at the Prometheus endpoint we have configured: http://localhost:1234/metrics. You should see quite a few metrics listed, none of which come from our application yet. What you are seeing are the system and JVM metrics automatically registered and published by WildFly to give systems and application administrators a comprehensive view of system health and performance.
Note: You may use Podman as an alternative to Docker if you prefer; in that case the command is podman-compose up.
Note: If your environment does not support Docker or Podman, please refer to the OpenTelemetry Collector documentation for alternative ways to install and run the Collector.
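For context on where those JVM and system metrics come from: WildFly registers and publishes them automatically, but in plain Micrometer the same kind of data is produced by meter binders. The sketch below is purely illustrative and is not something you need to do in this quickstart:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.jvm.JvmMemoryMetrics;
import io.micrometer.core.instrument.binder.system.ProcessorMetrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class JvmMetricsExample {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        new JvmMemoryMetrics().bindTo(registry);  // registers jvm.memory.* and jvm.buffer.* gauges
        new ProcessorMetrics().bindTo(registry);  // registers system.cpu.* and process.cpu.usage meters
        registry.getMeters().forEach(meter -> System.out.println(meter.getId().getName()));
    }
}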
Micrometer uses a programmatic approach to metrics definition, as opposed to the more declarative, annotation-based approach of other libraries. Because of that, we need to explicitly register our Meters before they can be used:
@Path("/")
@ApplicationScoped
public class RootResource {
    // ...
    @Inject
    private MeterRegistry registry;

    private Counter performCheckCounter;
    private Counter originalCounter;
    private Counter duplicatedCounter;

    @PostConstruct
    private void createMeters() {
        Gauge.builder("prime.highestSoFar", () -> highestPrimeNumberSoFar)
                .description("Highest prime number so far.")
                .register(registry);
        performCheckCounter = Counter
                .builder("prime.performedChecks")
                .description("How many prime checks have been performed.")
                .register(registry);
        originalCounter = Counter
                .builder("prime.duplicatedCounter")
                .tags(List.of(Tag.of("type", "original")))
                .register(registry);
        duplicatedCounter = Counter
                .builder("prime.duplicatedCounter")
                .tags(List.of(Tag.of("type", "copy")))
                .register(registry);
    }
    // ...
}
Notice that we start by @Injecting the MeterRegistry. This is a WildFly-managed instance, so all an application needs to do is inject it and start using it. Once we have it, we can use it to build and register our meters, which we do in the @PostConstruct method createMeters().
Note: This must be done post-construction, as the MeterRegistry is injected by the container and is not available until the bean instance has been constructed.
In this example, we register several different meter types to demonstrate their use. With those registered, we can start writing application logic:
@GET
@Path("/prime/{number}")
public String checkIfPrime(@PathParam("number") long number) throws Exception {
    performCheckCounter.increment();
    Timer timer = registry.timer("prime.timer");

    return timer.recordCallable(() -> {
        if (number < 1) {
            return "Only natural numbers can be prime numbers.";
        }
        if (number == 1) {
            return "1 is not prime.";
        }
        if (number == 2) {
            return "2 is prime.";
        }
        if (number % 2 == 0) {
            return number + " is not prime, it is divisible by 2.";
        }
        for (int i = 3; i < Math.floor(Math.sqrt(number)) + 1; i = i + 2) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                // ignore the interruption
            }
            if (number % i == 0) {
                return number + " is not prime, is divisible by " + i + ".";
            }
        }
        if (number > highestPrimeNumberSoFar) {
            highestPrimeNumberSoFar = number;
        }
        return number + " is prime.";
    });
}
This method represents a simple REST endpoint that determines whether the number passed as a path parameter is a prime number.
- Make sure you start the WildFly server as described above.
- Open a terminal and navigate to the root directory of this quickstart.
- Type the following command to build the quickstart:
$ mvn clean package
- Type the following command to deploy the quickstart:
$ mvn wildfly:deploy
This deploys the micrometer/target/micrometer.war to the running instance of the server.
You should see a message in the server log indicating that the archive deployed successfully.
You can either access the application via your browser at http://localhost:8080/micrometer/prime/13, or from the command line:
$ curl http://localhost:8080/micrometer/prime/13
It should return a simple document:
13 is prime.
Once WildFly has had enough time to publish a metrics update, you will see your application's meters reported in the Prometheus export. You can also view them from the command line:
$ curl -s http://localhost:1234/metrics | grep "prime_"
# HELP prime_duplicatedCounter
# TYPE prime_duplicatedCounter counter
prime_duplicatedCounter{job="wildfly",type="copy"} 0
prime_duplicatedCounter{job="wildfly",type="original"} 0
# HELP prime_highestSoFar Highest prime number so far.
# TYPE prime_highestSoFar gauge
prime_highestSoFar{job="wildfly"} 13
# HELP prime_performedChecks How many prime checks have been performed.
# TYPE prime_performedChecks counter
prime_performedChecks{job="wildfly"} 1
# HELP prime_timer
# TYPE prime_timer histogram
prime_timer_bucket{job="wildfly",le="+Inf"} 1
prime_timer_sum{job="wildfly"} 10.941035
prime_timer_count{job="wildfly"} 1
Notice that the four meters registered in the @PostConstruct method, as well as the Timer created in our endpoint method, have all been published.
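If you ever want to read these meters back from code rather than from the Prometheus export, Micrometer's search API on the registry can do that. The following is a hedged sketch and not part of the quickstart; the class and method names are hypothetical:

import io.micrometer.core.instrument.MeterRegistry;

public class MeterLookup {
    static void printPrimeMeters(MeterRegistry registry) {
        // Tags distinguish the two counters that share the name "prime.duplicatedCounter".
        double copies = registry.get("prime.duplicatedCounter")
                .tags("type", "copy")
                .counter()
                .count();
        long timedChecks = registry.get("prime.timer").timer().count();
        System.out.println("copy counter = " + copies + ", timed checks = " + timedChecks);
    }
}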
This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.
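As a rough idea of what such a test checks, the sketch below calls the prime endpoint over HTTP and verifies the response body. It is illustrative only; the class name and the use of the JDK HttpClient are assumptions, not the quickstart's actual test code:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PrimeEndpointCheck {
    public static void main(String[] args) throws Exception {
        // The real tests resolve the target server from the server.host system property.
        String serverHost = System.getProperty("server.host", "http://localhost:8080");
        HttpRequest request = HttpRequest.newBuilder(URI.create(serverHost + "/micrometer/prime/13"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200 || !response.body().contains("13 is prime")) {
            throw new AssertionError("Unexpected response: " + response.body());
        }
        System.out.println("Prime endpoint responded as expected.");
    }
}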
Follow these steps to run the integration tests.
- Make sure you start the WildFly server, as previously described.
- Make sure you build and deploy the quickstart, as previously described.
- Type the following command to run the verify goal with the integration-testing profile activated:
$ mvn verify -Pintegration-testing
Note: You may also use the SERVER_HOST environment variable, instead of the server.host system property, to specify the URL of the server where the quickstart is deployed.
When you are finished testing the quickstart, follow these steps to undeploy the archive.
- Make sure you start the WildFly server as described above.
- Open a terminal and navigate to the root directory of this quickstart.
- Type this command to undeploy the archive:
$ mvn wildfly:undeploy
You can restore the original server configuration using either of the following methods.
- You can run the restore-configuration.cli script provided in the root directory of this quickstart.
- You can manually restore the configuration using the backup copy of the configuration file.
- Start the WildFly server as described above.
- Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME with the path to your server:
$ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=restore-configuration.cli
Note: For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.
When you have completed testing the quickstart, you can restore the original server configuration by manually restoring the backup copy of the configuration file.
- If it is running, stop the WildFly server.
- Replace the WILDFLY_HOME/standalone/configuration/standalone.xml file with the backup copy of the file.
Instead of using a standard WildFly server distribution, you can provision a WildFly server to deploy and run the quickstart by activating the Maven profile named provisioned-server when building the quickstart:
$ mvn clean package -Pprovisioned-server
The provisioned WildFly server, with the quickstart deployed, can then be found in the target/server directory. Its usage is similar to a standard server distribution, with the simplification that you never need to specify the server configuration to be started.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>provisioned-server</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <feature-packs>
                        <feature-pack>
                            <location>org.wildfly:wildfly-galleon-pack:${version.server}</location>
                        </feature-pack>
                    </feature-packs>
                    <layers>...</layers>
                    <!-- deploys the quickstart on root web context -->
                    <name>ROOT.war</name>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
Note: Since the plugin configuration above deploys the quickstart on the root web context of the provisioned server, the URL used to access the application should not include the /micrometer path segment after HOST:PORT.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a provisioned server.
Follow these steps to run the integration tests.
- Make sure the server is provisioned:
$ mvn clean package -Pprovisioned-server
- Start the provisioned WildFly server, this time using the WildFly Maven Plugin, which is recommended for testing due to simpler automation. The path to the provisioned server should be specified using the jbossHome system property:
$ mvn wildfly:start -DjbossHome=target/server
- Type the following command to run the verify goal with the integration-testing profile activated, specifying the quickstart's URL using the server.host system property (for a provisioned server this is http://localhost:8080 by default):
$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080
- Shut down the provisioned WildFly server, again using the WildFly Maven Plugin:
$ mvn wildfly:shutdown
You can use the WildFly JAR Maven plug-in to build a WildFly bootable JAR to run this quickstart.
The quickstart pom.xml file contains a Maven profile named bootable-jar, which configures how the bootable JAR is built:
<profile>
    <id>bootable-jar</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-jar-maven-plugin</artifactId>
                <configuration>
                    <feature-pack-location>wildfly@maven(org.jboss.universe:community-universe)#${version.server}</feature-pack-location>
                    <layers>
                        <layer>jaxrs-server</layer>
                        <layer>microprofile-config</layer>
                    </layers>
                    <plugin-options>
                        <jboss-fork-embedded>true</jboss-fork-embedded>
                    </plugin-options>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>
- Build the quickstart bootable JAR with the following command:
$ mvn clean package -Pbootable-jar
- Run the quickstart application contained in the bootable JAR:
$ java -jar target/micrometer-bootable.jar
- You can now interact with the quickstart application.
Note: After the quickstart application is deployed, the bootable JAR includes the application in the root context. Therefore, any URLs related to the application should not include the /micrometer path segment after HOST:PORT.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a bootable jar.
Follow these steps to run the integration tests.
- Make sure the bootable JAR is provisioned:
$ mvn clean package -Pbootable-jar
- Start the WildFly bootable JAR, this time using the WildFly JAR Maven Plugin, which is recommended for testing due to simpler automation:
$ mvn wildfly-jar:start -Djar-file-name=target/micrometer-bootable.jar
- Type the following command to run the verify goal with the integration-testing profile activated, specifying the quickstart's URL using the server.host system property (for a bootable JAR this is http://localhost:8080 by default):
$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080
- Shut down the WildFly bootable JAR, again using the WildFly JAR Maven Plugin:
$ mvn wildfly-jar:shutdown
On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server and to deploy and run the quickstart in the OpenShift environment.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <feature-packs>
                        <feature-pack>
                            <location>org.wildfly:wildfly-galleon-pack:${version.server}</location>
                        </feature-pack>
                        <feature-pack>
                            <location>org.wildfly.cloud:wildfly-cloud-galleon-pack:${version.pack.cloud}</location>
                        </feature-pack>
                    </feature-packs>
                    <layers>...</layers>
                    <name>ROOT.war</name>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
Note that, unlike the provisioned-server profile, this profile uses the cloud feature pack, which enables a configuration tuned for the OpenShift environment.
This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.
- You must be logged in to OpenShift and have an oc client available to connect to OpenShift.
- Helm must be installed to deploy the backend on OpenShift.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
The functionality of this quickstart depends on a running instance of the OpenTelemetry Collector.
To deploy and configure the OpenTelemetry Collector, you will need to apply a set of configurations to your OpenShift cluster:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
data:
  collector.yml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
    exporters:
      logging:
        verbosity: detailed
      prometheus:
        endpoint: "0.0.0.0:1234"
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: []
          exporters: [logging,prometheus]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
        - name: otelcol
          args:
            - --config=/conf/collector.yml
          image: otel/opentelemetry-collector:0.89.0
          volumeMounts:
            - mountPath: /conf
              name: collector-config
      volumes:
        - configMap:
            items:
              - key: collector.yml
                path: collector.yml
            name: collector-config
          name: collector-config
---
apiVersion: v1
kind: Service
metadata:
  name: opentelemetrycollector
spec:
  ports:
    - name: otlp-grpc
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: otlp-http
      port: 4318
      protocol: TCP
      targetPort: 4318
    - name: prometheus
      port: 1234
      protocol: TCP
      targetPort: 1234
  selector:
    app.kubernetes.io/name: opentelemetrycollector
  type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: otelcol-otlp-grpc
  labels:
    app.kubernetes.io/name: microprofile
spec:
  port:
    targetPort: otlp-grpc
  to:
    kind: Service
    name: opentelemetrycollector
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: otelcol-otlp-http
  labels:
    app.kubernetes.io/name: microprofile
spec:
  port:
    targetPort: otlp-http
  to:
    kind: Service
    name: opentelemetrycollector
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: otelcol-prometheus
  labels:
    app.kubernetes.io/name: microprofile
spec:
  port:
    targetPort: prometheus
  to:
    kind: Service
    name: opentelemetrycollector
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
To make things simpler, this configuration is provided in charts/opentelemetry-collector.yaml. To apply it, run the following command in your terminal:
$ oc apply -f charts/opentelemetry-collector.yaml
Note: When you are done with the quickstart, you can remove these resources by running oc delete -f charts/opentelemetry-collector.yaml.
Log in to your OpenShift instance using the oc login command.
The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following command:
$ helm install micrometer -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s
NAME: micrometer
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
oc get deployment micrometer
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: micrometer
deploy:
  replicas: 1
  env:
    - name: OTEL_COLLECTOR_HOST
      value: "opentelemetrycollector"
This will create a new deployment on OpenShift and deploy the application.
If you want to see all the configuration elements available to customize your deployment, you can use the following command:
$ helm show readme wildfly/wildfly
Get the URL of the route to the deployment.
$ oc get route micrometer -o jsonpath="{.spec.host}"
Access the application in your web browser using the displayed URL.
Note: The Maven profile named openshift, described above, is used by the S2I build on OpenShift to provision the server with the quickstart deployed.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.
Run the integration tests using the following command, which runs the verify goal with the integration-testing profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route micrometer --template='{{ .spec.host }}')
Note: The tests use SSL to connect to the quickstart running on OpenShift, so the certificates must be trusted by the machine from which the tests are run.
Micrometer provides a de facto standard way of capturing and publishing metrics to the monitoring solution of your choice. WildFly provides a convenient, out-of-the-box integration of Micrometer to make it easier to capture those metrics and monitor your application’s health and performance. For more information on Micrometer, please refer to the project’s website.