Use k samples for training before evaluating the progressing validation score #756

Answered by MaxHalford
SirPopiel asked this question in Q&A

Hello! What's preventing you from doing that right now? For example:

import itertools
from river import datasets
from river import evaluate
from river import linear_model
from river import metrics
from river import preprocessing

model = (
    preprocessing.StandardScaler() |
    linear_model.LogisticRegression()
)

dataset = iter(datasets.Phishing())

# Warm up: consume the first k samples from the iterator.
# Predictions made here are discarded; calling predict_one simply
# mirrors the predict-then-learn order of progressive validation.
k = 200
for x, y in itertools.islice(dataset, k):
    model.predict_one(x)
    model.learn_one(x, y)

# Progressive validation picks up where the warm-up loop stopped,
# because both loops share the same iterator.
evaluate.progressive_val_score(
    model=model,
    dataset=dataset,
    metric=metrics.ROCAUC(),
    print_every=100
)
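The trick above works because `itertools.islice` draws items from a shared iterator, so a later loop resumes exactly where the slice stopped. A minimal stdlib-only sketch of that mechanism (the `stream`, `warmup`, and `remaining` names are illustrative, not part of river's API):

```python
import itertools

# A shared iterator: islice consumes items from it, so a later
# consumer resumes where the slice stopped -- the same mechanism
# that lets the warm-up loop "use up" the first k samples before
# progressive validation begins.
stream = iter(range(10))

k = 3
warmup = list(itertools.islice(stream, k))  # first k items: [0, 1, 2]
remaining = list(stream)                    # resumes at item k: [3, ..., 9]
```

Note that this only holds if the dataset is wrapped with `iter(...)` once and the same iterator object is passed to both loops; slicing a fresh iterable twice would replay the first k samples instead of skipping them.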

Replies: 1 comment 2 replies

Answer selected by SirPopiel