
Unbiased evaluation of search #35

Open
emilradix opened this issue Oct 22, 2024 · 0 comments
Labels
enhancement New feature or request

Comments

@emilradix
Contributor

Currently, the only method in insert_evals relies on the search method. That means the synthetic dataset it generates depends on the search and data-processing pipeline that was in use at generation time.

This is a problem because in many cases you want to benchmark different embedding / data-processing methods, and right now the dataset will favour methods that resemble whatever was used to generate the evals, rather than the genuinely best ones.

One way to avoid this bias is to have a vision model look at a page image directly and generate questions about it, without involving the search pipeline at all.
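A minimal sketch of what that could look like, assuming an OpenAI-style vision chat API; the function name, prompt wording, and model choice are illustrative assumptions, not part of the existing codebase:

```python
# Sketch: build a chat-completion request that asks a vision-capable model to
# write eval questions answerable from a page image alone, so the resulting
# dataset is independent of any particular search / embedding pipeline.
import base64


def build_vision_eval_request(page_image_path: str, n_questions: int = 3) -> dict:
    """Return a chat-completion payload for generating eval questions
    from a single page image (hypothetical helper)."""
    with open(page_image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    prompt = (
        f"Write {n_questions} questions that can be answered solely from "
        "this page. Return one question per line."
    )
    return {
        "model": "gpt-4o",  # assumption: any vision-capable model would do
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/png;base64,{image_b64}"
                        },
                    },
                ],
            }
        ],
    }
```

The payload could then be sent with any OpenAI-compatible client; because the questions are grounded in the rendered page rather than in search results, the eval set no longer encodes the retrieval choices that happened to be active when it was generated.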

@emilradix emilradix added the enhancement New feature or request label Oct 22, 2024