
Computing WLS-exporter's CPU use using wls_scrape metrics and number of cores #261

Open
arduinepo opened this issue Feb 8, 2023 · 5 comments

Comments

@arduinepo

arduinepo commented Feb 8, 2023

Hi,
I would like to measure the CPU use of the WLS-exporter by dividing wls_scrape_cpu_seconds by wls_scrape_duration_seconds and by the number of cores, but this formula yields some results above 100% (see the sketch below).
Am I doing something wrong? Thanks for your help.
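
To make the formula concrete, here is a minimal sketch of the kind of expression I am using, assuming both metrics are per-scrape gauges; the divisor of 8 is only an example core count for this host:

```promql
# Exporter CPU fraction during a scrape:
#   CPU-seconds used / wall-clock seconds elapsed / number of cores
# (8 is a placeholder for the actual core count of the host)
wls_scrape_cpu_seconds / wls_scrape_duration_seconds / 8
```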

[screenshot]

@russgold
Member

russgold commented Feb 8, 2023

From the README:

The exporter produces metrics for monitoring its own performance:

  • wls_scrape_mbeans_count_total reports the number of metrics scraped.
  • wls_scrape_duration_seconds reports the time required to do the scrape.
  • wls_scrape_cpu_seconds reports the CPU time used during the scrape.

They will not tell you anything useful about the overall CPU use of WebLogic Server, only about the exporter itself.

Is there something unclear about this documentation that could be improved?

@russgold
Member

russgold commented Feb 8, 2023

It doesn't appear that dividing by the number of cores tells you anything useful, as indicated by this answer to a related question: https://unix.stackexchange.com/questions/211617/why-is-the-range-of-load-average-not-0-1-for-all-cpus-together

@arduinepo
Author

Thanks for your replies.

I already get the JVM CPU load from other metrics, but I need the load of this exporter in order to measure its impact.
It seemed to me that CPU usage could be computed by dividing CPU time by elapsed real time; with several cores, dividing again by the number of cores should then give a number between 0 and 1 (see the worked example below).
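
As a worked example of the arithmetic I have in mind (the numbers are purely illustrative): if one scrape burns 3.2 CPU-seconds over 0.8 s of wall-clock time on an 8-core host, then

$$
\frac{\mathit{cpu\_seconds}}{\mathit{duration\_seconds}} = \frac{3.2}{0.8} = 4
\qquad\Rightarrow\qquad
\frac{4}{8\ \text{cores}} = 0.5
$$

i.e. the work of four cores on average during the scrape, and dividing by the core count is what should bring the figure back into the 0–1 range.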

@arduinepo
Author

As written in that answer, the load average is about the number of processes running, which is not what I need; I need the CPU usage.

If I don't divide by the number of cores, I get values far greater than 1, such as 3, 4 or 5.

@arduinepo arduinepo changed the title Computing CPU use using wls_scrape metrics and number of cores Computing node-exporter's CPU use using wls_scrape metrics and number of cores Feb 9, 2023
@arduinepo arduinepo changed the title Computing node-exporter's CPU use using wls_scrape metrics and number of cores Computing WLS-exporter's CPU use using wls_scrape metrics and number of cores Feb 9, 2023
@arduinepo
Author

I think my formula gives not the CPU usage properly speaking, but the load measured in number of occupied threads: the values are all round, 0.5, 1, 1.5, etc.

[screenshot]
