Monitor Type: gitlab-workhorse
Accepts Endpoints: Yes
Multiple Instances Allowed: Yes
This is a monitor for GitLab Workhorse, the GitLab service that handles slow HTTP requests. Workhorse includes a built-in Prometheus exporter that this monitor scrapes to gather metrics. By default, the exporter runs on port 9229.
To monitor Workhorse using its Prometheus exporter, use a monitor configuration similar to:
```yaml
monitors:
 - type: gitlab-workhorse
   discoveryRule: port == 9229 # && <other expressions to avoid false-positives on port alone>
```
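As the placeholder comment suggests, matching on the port alone can produce false positives. A sketch of a narrower rule, assuming Workhorse runs in a container whose image name contains `gitlab-workhorse` (the image name, and whether the `container_image` variable is available, depend on your deployment and observer):

```yaml
monitors:
 - type: gitlab-workhorse
   # Assumed image name; combining the port check with an image match avoids
   # matching unrelated services that happen to listen on 9229.
   discoveryRule: container_image =~ "gitlab-workhorse" && port == 9229
```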
See the GitLab monitor for more information.
To activate this monitor in the Smart Agent, add the following to your agent config:
```yaml
monitors:  # All monitor config goes under this key
 - type: gitlab-workhorse
   ...  # Additional config
```
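If you are not using service discovery, a minimal static configuration only needs the required `host` and `port` options; the values below are placeholders for wherever your Workhorse exporter listens:

```yaml
monitors:
 - type: gitlab-workhorse
   host: 127.0.0.1  # placeholder: address of the Workhorse Prometheus exporter
   port: 9229       # Workhorse's default exporter port
```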
For a list of monitor options that are common to all monitors, see Common Configuration.
| Config option | Required | Type | Description |
|---|---|---|---|
| `httpTimeout` | no | `int64` | HTTP timeout duration for both reads and writes. This should be a duration string that is accepted by https://golang.org/pkg/time/#ParseDuration (default: `10s`) |
| `username` | no | `string` | Basic Auth username to use on each request, if any. |
| `password` | no | `string` | Basic Auth password to use on each request, if any. |
| `useHTTPS` | no | `bool` | If true, the agent will connect to the server using HTTPS instead of plain HTTP. (default: `false`) |
| `httpHeaders` | no | `map of strings` | A map of HTTP header names to values. Comma-separated multiple values for the same message-header are supported. |
| `skipVerify` | no | `bool` | If `useHTTPS` is true and this option is also true, the exporter's TLS cert will not be verified. (default: `false`) |
| `caCertPath` | no | `string` | Path to the CA cert that has signed the TLS cert; unnecessary if `skipVerify` is set to false. |
| `clientCertPath` | no | `string` | Path to the client TLS cert to use for TLS-required connections |
| `clientKeyPath` | no | `string` | Path to the client TLS key to use for TLS-required connections |
| `host` | yes | `string` | Host of the exporter |
| `port` | yes | `integer` | Port of the exporter |
| `useServiceAccount` | no | `bool` | Use pod service account to authenticate. (default: `false`) |
| `metricPath` | no | `string` | Path to the metrics endpoint on the exporter server, usually `/metrics` (the default). (default: `/metrics`) |
| `sendAllMetrics` | no | `bool` | Send all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the prometheus exporter monitor directly, since there is no built-in filtering; it only applies when the exporter is embedded in other monitors. (default: `false`) |
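As an illustration of how several of these options fit together, here is a sketch of a monitor that scrapes Workhorse over HTTPS with Basic Auth; the host, credentials, and certificate path are placeholders, and whether TLS and auth are needed depends on how your exporter endpoint is fronted:

```yaml
monitors:
 - type: gitlab-workhorse
   host: gitlab.example.com            # placeholder hostname
   port: 9229
   useHTTPS: true
   caCertPath: /etc/ssl/gitlab-ca.pem  # placeholder path to the signing CA cert
   username: metrics                   # placeholder Basic Auth username
   password: changeme                  # placeholder Basic Auth password
   httpTimeout: 15s
```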
These are the metrics available for this monitor. Metrics that are categorized as container/host (default) are in bold and italics in the list below.
- `gitlab_workhorse_builds_register_handler_open` (gauge): Describes how many requests are currently open in a given state
- `gitlab_workhorse_builds_register_handler_requests` (cumulative): Describes how many requests in different states hit a register handler
- `gitlab_workhorse_git_http_sessions_active` (gauge): Number of Git HTTP request-response cycles currently being handled by gitlab-workhorse
- `gitlab_workhorse_http_in_flight_requests` (gauge): A gauge of requests currently being served by workhorse
- `gitlab_workhorse_http_request_duration_seconds` (cumulative): A histogram of latencies for requests to workhorse
- `gitlab_workhorse_http_request_duration_seconds_bucket` (cumulative): A histogram of latencies for requests to workhorse
- `gitlab_workhorse_http_request_duration_seconds_count` (cumulative): A histogram of latencies for requests to workhorse
- `gitlab_workhorse_http_request_size_bytes` (cumulative): A histogram of sizes of requests to workhorse
- `gitlab_workhorse_http_request_size_bytes_bucket` (cumulative): A histogram of sizes of requests to workhorse
- `gitlab_workhorse_http_request_size_bytes_count` (cumulative): A histogram of sizes of requests to workhorse
- `gitlab_workhorse_http_requests_total` (cumulative): A counter for requests to workhorse
- `gitlab_workhorse_http_time_to_write_header_seconds` (cumulative): A histogram of request durations until the response headers are written
- `gitlab_workhorse_http_time_to_write_header_seconds_bucket` (cumulative): A histogram of request durations until the response headers are written
- `gitlab_workhorse_http_time_to_write_header_seconds_count` (cumulative): A histogram of request durations until the response headers are written
- `gitlab_workhorse_internal_api_failure_response_bytes` (cumulative): How many bytes have been returned by upstream GitLab in API failure/rejection response bodies
- `gitlab_workhorse_keywatcher_keywatchers` (gauge): The number of keys being watched by gitlab-workhorse
- `gitlab_workhorse_keywather_total_messages` (cumulative): How many messages gitlab-workhorse has received in total on pubsub
- `gitlab_workhorse_object_storage_upload_bytes` (cumulative): How many bytes were sent to object storage
- `gitlab_workhorse_object_storage_upload_open` (gauge): Describes how many object storage requests are open now
- `gitlab_workhorse_object_storage_upload_requests` (cumulative): How many object storage requests have been processed
- `gitlab_workhorse_redis_errors` (cumulative): Counts different types of Redis errors encountered by workhorse, by type and destination (redis, sentinel)
- `gitlab_workhorse_redis_total_connections` (cumulative): How many connections gitlab-workhorse has opened in total. Can be used to track the Redis connection rate for this process
- `gitlab_workhorse_send_url_bytes` (cumulative): How many bytes were passed with send URL
- `gitlab_workhorse_send_url_open_requests` (gauge): Describes how many send URL requests are open now
- `gitlab_workhorse_send_url_requests` (cumulative): How many send URL requests have been processed
- `gitlab_workhorse_static_error_responses` (cumulative): How many HTTP responses have been changed to a static error page, by HTTP status code
To emit metrics that are not sent by default, you can add those metrics in the generic monitor-level `extraMetrics` config option. Metrics that are derived from specific configuration options and do not appear in the above list do not need to be added to `extraMetrics`.
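For example, here is a sketch of enabling additional metrics from the list above via `extraMetrics`; the host and port are placeholders, and which of these metrics are already sent by default depends on your agent version:

```yaml
monitors:
 - type: gitlab-workhorse
   host: 127.0.0.1   # placeholder
   port: 9229
   extraMetrics:
    - gitlab_workhorse_http_request_size_bytes
    - gitlab_workhorse_send_url_requests
```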
To see a list of metrics that will be emitted, you can run `agent-status monitors` after configuring this monitor in a running agent instance.