Add keepalived_exporter into the original project #1731
At https://github.com/prometheus/prometheus/wiki/Default-port-allocations it lists https://github.com/cafebazaar/keepalived-exporter as well as the gen2brain exporter above. However, these are not the sort of exporters that would be implemented within keepalived itself; the exporters above trigger keepalived to generate files and then parse those files to provide the data to Prometheus. If keepalived were to implement an exporter, it would be more like HAProxy's PROMEX, so that the application directly generates the output for Prometheus. And then should we do something to work with Grafana, such as providing some dashboards like RabbitMQ does?

I think implementing this is something that someone could helpfully contribute. As a minimum it would be good to start with a framework with a few items exported, ensuring that this includes examples of each data type, and then this could be built upon over time. Of course, this would need to be written in C, since that is the language that keepalived is written in.
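A directly-embedded exporter would emit the Prometheus text exposition format. As a rough sketch only (the metric names, labels, and the `format_vrrp_metrics` helper below are hypothetical, not anything keepalived defines today), a formatter covering a gauge and a counter, two of the basic data types mentioned above, might look like:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: formats a Prometheus text-exposition snippet for
 * one VRRP instance. Metric names and labels are illustrative only;
 * keepalived does not define these today. Returns the number of bytes
 * that snprintf would have written, as usual. */
static int format_vrrp_metrics(char *buf, size_t len,
                               const char *instance, int state,
                               long transitions)
{
    return snprintf(buf, len,
        "# HELP keepalived_vrrp_state VRRP instance state (0=BACKUP, 1=MASTER).\n"
        "# TYPE keepalived_vrrp_state gauge\n"
        "keepalived_vrrp_state{instance=\"%s\"} %d\n"
        "# HELP keepalived_vrrp_transitions_total State transitions since start.\n"
        "# TYPE keepalived_vrrp_transitions_total counter\n"
        "keepalived_vrrp_transitions_total{instance=\"%s\"} %ld\n",
        instance, state, instance, transitions);
}
```

The resulting text is what a scrape of a `/metrics` endpoint would return; how that text gets served is the open design question discussed below.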
I would really like to see a Prometheus endpoint directly in keepalived.
@pqarmitage, HAProxy has an embedded HTTP server for serving Prometheus metrics, but Keepalived currently lacks this feature. To address this, we should add one to export Prometheus metrics from Keepalived. What are your thoughts on using a library for this purpose? For example:
These could simplify the implementation.
Adding a new dependency is not a good option, and if we had to, libcurl would be better. I am not a big fan of extra libs. We need to keep core keepalived features as robust as we can. As Quentin said, this could be a good contribution.
According to https://curl.se/docs/faq.html#Can_I_write_a_server_with_libcur, libcurl is exclusively a client-side library. Therefore, to contribute this, we'll still need to select an HTTP server library. What are your thoughts on using libmicrohttpd https://www.gnu.org/software/libmicrohttpd/?
IMHO, the best way to go would be to add RESTful API support to Keepalived, creating the interaction point via AF_UNIX or any local domain networking, and then use/create an external tool to play around with all that web stuff and tooling. That way we 'just' need to add simple JSON-like interactions to the Keepalived core daemon, which doesn't have to rely on any extra lib. Adding a webserver directly into a routing software core daemon doesn't sound like good design to me: change my mind ;)
The initial idea mentioned above was to implement a Keepalived exporter "similar to HAProxy's PROMEX," integrating it directly into Keepalived. However, I completely agree with you that embedding a web server directly into routing software just for metrics is not a good design choice. Regarding your suggestion to use an external tool, there's already one referenced in the initial message of this issue. It interacts with Keepalived differently, by sending a signal and parsing the /tmp/keepalived.json file, but it achieves the same outcome. From a technical perspective, this functionality is already implemented ;)
What is not implemented is a way to interactively talk to Keepalived; sending a signal and getting the result in a file is not what we can call interactive. The /tmp/keepalived.json dump is not the best way to interact with a live daemon. A RESTful API is not implemented. Being future-proof sounds more like providing a RESTful communication channel, and then an external tool can send requests, parse results, and do whatever fancy stuff is needed. For compatibility we can keep the JSON dump file, but this has to be changed to a RESTful endpoint. I have this design somewhere else but need to find time to insert it into Keepalived. But the additional tools like a Grafana connector or a Prometheus exporter need to be done outside Keepalived.
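The AF_UNIX interaction point proposed here could be consumed by an external tool with plain POSIX sockets, so neither side needs an extra library. The socket path and the line-oriented request format in this sketch are pure assumptions for illustration; no such channel exists in keepalived today:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Hypothetical client for a local-domain control channel: connect,
 * send one request, read one reply. Returns bytes read, or -1 on
 * error. reply_len must be > 0. */
ssize_t query_keepalived(const char *sock_path, const char *request,
                         char *reply, size_t reply_len)
{
    struct sockaddr_un addr;
    int fd;
    ssize_t n;

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, sock_path, sizeof addr.sun_path - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        write(fd, request, strlen(request)) < 0 ||
        (n = read(fd, reply, reply_len - 1)) < 0) {
        close(fd);
        return -1;
    }
    reply[n] = '\0';
    close(fd);
    return n;
}
```

An external Prometheus exporter or Grafana connector could then be built entirely on top of this channel, keeping all the web tooling out of the core daemon, which matches the design argued for above.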
Currently, we have only SNMP and scripts for monitoring.
We could use a Prometheus exporter like many projects do (HAProxy, PostgreSQL, Apache, nginx, etc.).
Maybe the project https://github.com/gen2brain/keepalived_exporter is well-coded and can be integrated into the original project.