# Website: 2nd quarter improvements #67
## Comments
This means the validation check will only check the benchmark names and ignore cases where some sizes within a benchmark do not appear in the CSV, right?
We will display it like this, right?
Yes, sounds good.
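For illustration, a name-only validation of the kind described above might look like this minimal sketch (pandas and the `Benchmark` column name are assumptions, not necessarily what the repository uses):

```python
import pandas as pd

def missing_benchmarks(results_csv, expected_names):
    """Return expected benchmark names that never appear in the CSV.

    Only names are compared: a benchmark passes as long as its name
    appears at least once, even if some of its sizes are absent.
    """
    results = pd.read_csv(results_csv)
    return set(expected_names) - set(results["Benchmark"].unique())
```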
Yes, but to clarify, I meant the Timeout value, not the number of benchmarks that time out. (Thanks for checking!) I think the following rows of the
So X is the number of benchmark names, and Y is the number of (benchmark name, benchmark size) combinations. If we had 2 benchmarks with 3 sizes each, X would be 2 and Y would be 6.
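For illustration, X and Y could be computed from the results CSV along these lines (the `Benchmark` and `Size` column names are assumptions):

```python
import pandas as pd

results = pd.read_csv("results.csv")
x = results["Benchmark"].nunique()                         # number of benchmark names
y = len(results[["Benchmark", "Size"]].drop_duplicates())  # (name, size) combinations
# For 2 benchmarks with 3 sizes each: x == 2, y == 6
```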
For now, the MIP information table only displays data from the last iteration, but it contains blank values. Should we filter out these blank values first? Additionally, should we display the information for each size of the benchmarks and include a column for the size information?
Iteration? I think on commit 59f1e54 I ran only one iteration per benchmark, so there should only be one iteration. Perhaps the issue is that there are rows for each size, and some sizes have blank values? Then maybe it is solved by the below point.
Good point, please add a size column to the table.
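A sketch of both suggestions together, assuming the MIP fields are called `Objective` and `Gap` (hypothetical column names):

```python
import pandas as pd

results = pd.read_csv("results.csv")
mip_table = (
    results[["Benchmark", "Size", "Objective", "Gap"]]  # include the size column
    .dropna(subset=["Objective", "Gap"])                # filter out blank values
    .reset_index(drop=True)
)
```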
closes open-energy-transition#67

### Summary

- [x] On commit [59f1e54](open-energy-transition@59f1e54), on Full Results: I can't see results for HiGHS versions '1.5.0.dev0' and '1.6.0.dev0', I only see '1.8.1' ![image](https://github.com/user-attachments/assets/cfae7b93-e1ce-4460-b929-d32930c7785b)
- [x] On the above commit, there's also a [bug in the MIP information table](open-energy-transition#67 (comment)) ![image](https://github.com/user-attachments/assets/238ba668-9069-4984-9751-5d71821aaa2c)
- [x] benchmarks: filter the sizes table to show only those that are in the results CSV ![image](https://github.com/user-attachments/assets/9b82a778-081d-4368-bc8a-17a53fc12460)
- [x] Home: automatically compute the number of solvers, the number of benchmarks (also add sizes), and the timeout from the results CSV file ![image](https://github.com/user-attachments/assets/6de2b836-19b2-4895-a824-ec78064df5fa)
- [x] scaling: the y-axis label should not say `Run Solver`; it should say either runtime or memory, as appropriate
- [x] scaling: can we split the plots to be 1 plot per row instead of 3 per row? With a lot of data, the plots are currently rather small ![image](https://github.com/user-attachments/assets/7f965ab3-30c1-409a-b852-edd2ceb2f5fa) ![image](https://github.com/user-attachments/assets/ab136e61-6c08-415b-bd95-72b603469793)
- [x] Home: combine the SGM tables into one table with the columns "Solver", (solver) "Version", (normalized) "SGM Runtime", (normalized) "SGM Memory", and (number of benchmarks) "Solved"; sort this table by SGM runtime and give it the caption "Results" (a sketch of the SGM computation follows this list) ![image](https://github.com/user-attachments/assets/70be6d0d-a251-4491-8322-69ab664a17da)
- [x] History: also add a plot with "Number of benchmarks solved" (i.e. status = ok) on the y-axis and years on the x-axis, plotting all solvers on the same chart as we do with the existing charts. The aim is to see how the number solved increases with newer solver versions. ![image](https://github.com/user-attachments/assets/4337e4a3-bc8b-4a7c-b76e-d572f487a83b)
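For reference, a minimal sketch of a shifted geometric mean (SGM) of the kind the combined table sorts by. The shift of 10 is a common convention in solver benchmarking (e.g. the Mittelmann benchmarks); the value this repository actually uses may differ:

```python
import numpy as np

def sgm(values, shift=10.0):
    """Shifted geometric mean: exp(mean(log(v + shift))) - shift."""
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.log(values + shift).mean()) - shift)

# Normalizing by the best solver makes the fastest solver score 1.0.
runtimes = {"solver-a": [1.2, 35.0, 600.0], "solver-b": [0.9, 50.0, 600.0]}
sgms = {name: sgm(times) for name, times in runtimes.items()}
best = min(sgms.values())
normalized = {name: value / best for name, value in sgms.items()}
```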
Remaining items from the issue:

- [ ] Some benchmarks (`pypsa-*`) are missing from the scaling page, is there a bug?
- [ ] scaling: the y-axis label should not say `Run Solver`; it should say either runtime or memory, as appropriate
- [ ] Remove benchmarks with `warnings` from the SGM calculation? Or use the TO value? (But then what do we do for memory?) See GLPK errors on JuMP-HiGHS MPS benchmarks #68 (a sketch of the TO option follows this list)
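A sketch of the "use the TO value" option: runs whose status is not ok are charged the full time limit before the SGM is computed. The `Status` and `Runtime` column names and the 3600 s limit are assumptions; memory has no analogous cap, which is the open question above:

```python
import numpy as np
import pandas as pd

TIME_LIMIT = 3600.0  # hypothetical timeout in seconds

results = pd.read_csv("results.csv")
ok = results["Status"].eq("ok")
# Penalize timeouts, warnings, and errors with the full time limit,
# then feed the effective runtimes into the SGM above.
effective_runtime = np.where(ok, results["Runtime"].astype(float), TIME_LIMIT)
```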