The metrics collector is a companion application to the Spring Cloud Data Flow server.
It collects metrics emitted by Spring Cloud Stream apps and groups them by the stream definition that Data Flow used to deploy them.
All of the out-of-the-box Spring Cloud App Starters currently have the metrics emitter module bundled in them.
You can enable metric emission by setting a destination name for the metrics binding, e.g. spring.cloud.stream.bindings.applicationMetrics.destination=<DESTINATION_NAME>
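For example, an app's configuration could enable emission by pointing the binding at a destination (metrics is a placeholder name here):

```properties
# Enables metric emission by giving the applicationMetrics binding a destination
spring.cloud.stream.bindings.applicationMetrics.destination=metrics
```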
Starting with Spring Cloud Stream 2.0, the default metrics support has been switched to use Micrometer. See Metrics Emitter for more details. For information on how to configure the metrics-emitter for Spring Cloud Stream 1.x applications, see here.
The metrics collector 2.x supports collecting metrics from both Spring Cloud Stream 1.x and 2.x applications.
| Uber Jar | Http Link | Docker Hub Link |
|---|---|---|
| metrics-collector-rabbit | | https://hub.docker.com/r/springcloud/metrics-collector-rabbit/tags/ (`docker pull metrics-collector-rabbit:2.0.0.RELEASE`) |
| metrics-collector-kafka | | https://hub.docker.com/r/springcloud/metrics-collector-kafka/tags/ (`docker pull metrics-collector-kafka:2.0.0.RELEASE`) |
| Uber Jar | Http Link | Docker Hub Link |
|---|---|---|
| metrics-collector-rabbit | | https://hub.docker.com/r/springcloud/metrics-collector-rabbit/tags/ (`docker pull metrics-collector-rabbit:1.0.0.RELEASE`) |
| metrics-collector-kafka-09 | | https://hub.docker.com/r/springcloud/metrics-collector-kafka-09/tags/ (`docker pull metrics-collector-kafka-09:1.0.0.RELEASE`) |
| metrics-collector-kafka-10 | | https://hub.docker.com/r/springcloud/metrics-collector-kafka-10/tags/ (`docker pull metrics-collector-kafka-10:1.0.0.RELEASE`) |
| Uber Jar | Http Link | Docker Hub Link |
|---|---|---|
| metrics-collector-rabbit | | https://hub.docker.com/r/springcloud/metrics-collector-rabbit/tags/ (`docker pull metrics-collector-rabbit:2.0.0.BUILD-SNAPSHOT`) |
| metrics-collector-kafka | | https://hub.docker.com/r/springcloud/metrics-collector-kafka/tags/ (`docker pull metrics-collector-kafka:2.0.0.BUILD-SNAPSHOT`) |
Because apps can use different binder implementations (RabbitMQ, Kafka, JMS), the collector is built using the same support provided by the App Starters, producing one executable uber jar artifact per binder.
To build, you first need to generate the source code for the apps of each binder:
./mvnw clean install -PgenerateApps
This generates an apps folder like the one below:
apps
├── metrics-collector-kafka
├── metrics-collector-rabbit
└── pom.xml
So, assuming your environment has apps deployed using RabbitMQ, cd into the metrics-collector-rabbit folder and run
./mvnw package
You should find the uber jar of the collector inside the target folder.
The collector is an uber jar that follows the same principles as any Spring Cloud Stream app, which means you need to provide connection information for the broker that you are using. Follow the instructions in the Spring Cloud Stream docs on how to configure each binder. If you are deploying on a platform such as Cloud Foundry, you only need to bind a RabbitMQ service to the collector.
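For example, when running the RabbitMQ collector standalone, the standard Spring Boot connection properties apply (the values below are placeholders for your environment):

```properties
# Standard Spring Boot RabbitMQ connection settings (placeholder values)
spring.rabbitmq.host=rabbit.example.com
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
```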
The default destination that the collector listens to is named metrics. You can override this default by setting the property
spring.cloud.stream.bindings.input.destination=<DESTINATION_NAME>
This should match the destination that Spring Cloud Stream applications use to send metrics, which is set using the property
spring.cloud.stream.bindings.applicationMetrics.destination=<DESTINATION_NAME>
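For example, both sides pointed at the same destination (my-metrics is a placeholder name):

```properties
# On the collector (consuming side):
spring.cloud.stream.bindings.input.destination=my-metrics

# On each Spring Cloud Stream application (emitting side):
spring.cloud.stream.bindings.applicationMetrics.destination=my-metrics
```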
Assuming apps have been deployed with spring.cloud.stream.bindings.applicationMetrics.destination=metrics: for the RabbitMQ binder, the collector creates an anonymous consumer bound to an exchange called metrics, while for the Kafka binder, the collector creates a Kafka topic named metrics.
Internally, the collector maintains a cache of the metrics it receives. The default metric emission interval is 60 seconds for SCSt 2.x applications and 5 seconds for SCSt 1.x applications, but it can be tuned per application by using Spring Boot’s metrics exporter scheduling control; refer to the docs here on how to configure your applications.
By default, the collector evicts any metric reading that was not updated in the past 90 seconds, so that SCSt 1.x and 2.x applications are both supported. You can change this value by setting the property spring.cloud.dataflow.metrics.collector.evictionTimeout. The value is a duration such as 10s (bound to a java.time.Duration).
Note: It is important that the eviction timeout is set to a value higher than the emission interval.
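The interplay between the emission interval and the eviction timeout can be sketched as follows. This is a simplified illustration of time-based eviction, not the collector’s actual code; the class and method names are hypothetical. With a 90s timeout, an app emitting every 60s is always refreshed in time, while a stale reading disappears:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a metrics cache with time-based eviction.
class MetricsCache {
    private final Duration evictionTimeout;
    private final Map<String, Instant> lastSeen = new ConcurrentHashMap<>();

    MetricsCache(Duration evictionTimeout) {
        this.evictionTimeout = evictionTimeout;
    }

    // Record (or refresh) a metric reading for an app instance.
    void record(String appInstance, Instant now) {
        lastSeen.put(appInstance, now);
    }

    // Drop readings that were not refreshed within the eviction window.
    void evict(Instant now) {
        lastSeen.entrySet().removeIf(e ->
                Duration.between(e.getValue(), now).compareTo(evictionTimeout) > 0);
    }

    boolean contains(String appInstance) {
        return lastSeen.containsKey(appInstance);
    }

    public static void main(String[] args) {
        MetricsCache cache = new MetricsCache(Duration.ofSeconds(90));
        Instant t0 = Instant.parse("2024-01-01T00:00:00Z");
        cache.record("time.foostream-0", t0);

        // 89s later: still inside the 90s window, reading is kept.
        cache.evict(t0.plusSeconds(89));
        System.out.println(cache.contains("time.foostream-0")); // true

        // 91s later with no refresh: reading is evicted.
        cache.evict(t0.plusSeconds(91));
        System.out.println(cache.contains("time.foostream-0")); // false
    }
}
```

If the eviction timeout were shorter than the emission interval, every reading would be evicted before its next refresh arrived, which is why the note above matters.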
The collector has security enabled by default. You can specify the username and password by using the Spring Boot 2.0 properties spring.security.user.name and spring.security.user.password.
The following is a sample of the commands that one can use to get the collector up and running and see some metrics in the Data Flow UI.
Collector:
java -jar target/metrics-collector-rabbit-2.0.0.BUILD-SNAPSHOT.jar --spring.security.user.name=spring --spring.security.user.password=cloud
Server:
java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.5.0.BUILD-SNAPSHOT.jar --spring.cloud.dataflow.metrics.collector.uri=http://localhost:8080 --spring.cloud.dataflow.metrics.collector.username=spring --spring.cloud.dataflow.metrics.collector.password=cloud
Register SCSt 1.x Apps:
app import --uri http://bit.ly/Celsius-SR1-stream-applications-rabbit-maven
Stream:
stream create --name foostream --definition "time | log"
stream deploy --name foostream --properties "deployer.*.count=2,app.*.spring.cloud.stream.bindings.applicationMetrics.destination=metrics"