Plotting the steps of progressive_val_score #900
anneborcherding started this conversation in Ideas
Replies: 2 comments · 1 reply
-
Hey there! It's a good question. You can do this, which is a bit more terse:

```python
from itertools import count

from river import metrics
from river import ensemble
from river import datasets
from river.evaluate.progressive_validation import _progressive_validation
import matplotlib.pyplot as plt

dataset = datasets.Phishing()
model = ensemble.AdaptiveRandomForestClassifier(n_models=3, seed=42)
metric = metrics.Accuracy()
checkpoints = count(100, 100)  # take a measurement every 100 samples

accuracies = []
for step in _progressive_validation(dataset=dataset, model=model, metric=metric, checkpoints=checkpoints):
    # each step is a dict holding the metric value at that checkpoint
    accuracies.append(step['Accuracy'])

plt.plot(accuracies, label=f'Adaptive Random Forest (final acc: {accuracies[-1]:.2%})')
plt.xlabel("Evaluation Steps")
plt.ylabel("Accuracy")
plt.title("Accuracy during progressive evaluation")
plt.legend()
plt.show()
```

It's a bit ugly because you have to import a private method, as well as format the metric yourself. I'll make this a bit more accessible in the next release. Hope this helps :)
-
Done in #901, you'll see it in the next release.
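For readers landing here later: recent river releases expose a public generator for this, `evaluate.iter_progressive_val_score`. The exact keyword arguments and checkpoint format below are an assumption about the released API rather than something quoted from this thread; a minimal sketch:

```python
from river import datasets, ensemble, evaluate, metrics
import matplotlib.pyplot as plt

dataset = datasets.Phishing()
model = ensemble.AdaptiveRandomForestClassifier(n_models=3, seed=42)
metric = metrics.Accuracy()

accuracies = []
# Each checkpoint is a dict such as {'Accuracy': ..., 'Step': 100}.
for checkpoint in evaluate.iter_progressive_val_score(
    dataset=dataset, model=model, metric=metric, step=100
):
    # Depending on the release, the dict may hold the metric object
    # itself; .get() extracts the raw float either way it is needed.
    accuracies.append(checkpoint["Accuracy"].get())

plt.plot(accuracies)
plt.xlabel("Checkpoint")
plt.ylabel("Accuracy")
plt.show()
```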
-
Hi everyone,
I really like the feature to see the accuracy at different steps of the `progressive_val_score` evaluation, and I was wondering if it would be possible to return the data in a way that makes it easy to plot. Currently, I am using the following workaround:
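(The workaround snippet itself is not shown here. Below is a minimal sketch of one possible version of such a workaround, not necessarily the original code: it runs the stream by hand in test-then-train order with river's `predict_one`/`learn_one` and records `metric.get()` at fixed intervals.)

```python
from river import datasets, ensemble, metrics
import matplotlib.pyplot as plt

dataset = datasets.Phishing()
model = ensemble.AdaptiveRandomForestClassifier(n_models=3, seed=42)
metric = metrics.Accuracy()

accuracies = []
for i, (x, y) in enumerate(dataset, start=1):
    y_pred = model.predict_one(x)  # predict before learning (test-then-train)
    metric.update(y, y_pred)
    model.learn_one(x, y)
    if i % 100 == 0:  # checkpoint every 100 samples
        accuracies.append(metric.get())

plt.plot(accuracies)
plt.xlabel("Evaluation Steps")
plt.ylabel("Accuracy")
plt.show()
```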
This results in the following graph:
Would it be possible to include that data processing in the evaluation itself? If you have any hints on how one could or should integrate it, I could of course create a starting point for discussion and open a pull request.
All the best
Anne