
Inconsistencies in recrawling/reindexing framework #82

Open
jayaddison opened this issue Dec 11, 2023 · 1 comment
Labels
bug Something isn't working

Comments

@jayaddison
Member

Problem

I discovered a large-ish number of duplicate recipes in the search index today, and have been tracing back through the (re)crawling and (re)indexing logic to figure out how this could have happened.

I think that there are perhaps three logical inconsistencies in the code that cause problems:

  • Unlike the crawl_recipe worker method, the index_recipe worker method does not check for duplicate recipes before (re)indexing content -- and as a result, manual reindexing of recipes may insert duplicates if the operator is not careful when selecting a set of recipes.
  • We had what looked like a bug where crawl_recipe may not have been matching recipes to their earliest-known crawled equivalent. This has been addressed by commit d1e0901 (and follow-up bugfix b2e0ef3, which suggests the need for some code refactoring here).
  • There seems to be tight coupling between the URL requested for recrawling/reindexing by the crawler's reciperadar/recipes.py and the RecipeURL constructed by the backend recipe worker(s). This doesn't seem great; we should be able to accept the oldest-known URL, the latest-known URL, or anything in between and expect reliable behaviour (a toy sketch of that behaviour follows this list).
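
To make the third point concrete, here is a toy sketch of the behaviour we want; `canonical_recipe_id` and the URL map are hypothetical names for illustration, not part of the codebase:

```python
# Toy illustration only: canonical_recipe_id and known_urls are hypothetical
# names, not part of the reciperadar codebase.
def canonical_recipe_id(requested_url: str, known_urls: dict[str, str]) -> str | None:
    # Every URL ever crawled for a recipe maps to the same recipe identifier,
    # so a recrawl/reindex request behaves the same regardless of which
    # historical URL it references.
    return known_urls.get(requested_url)


known_urls = {
    "https://blog.example/2019/chili": "recipe-1",     # oldest-known URL
    "https://blog.example/recipes/chili": "recipe-1",  # latest-known URL
}

assert canonical_recipe_id("https://blog.example/2019/chili", known_urls) == "recipe-1"
assert canonical_recipe_id("https://blog.example/recipes/chili", known_urls) == "recipe-1"
```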

Analysis

Deduplication

The Celery task workflow during recrawling is crawl_recipe -> index_recipe, whereas the reindexing workflow is simply index_recipe. The latter is sometimes more convenient and performant, because it avoids calls to the crawler service and (potentially) external HTTP requests, and so is generally much faster.
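
For illustration, a minimal Celery sketch of the two workflows; the task bodies, arguments, and URLs here are placeholders, not the real implementations:

```python
from celery import Celery, chain

app = Celery("recipes", broker="memory://")


@app.task
def crawl_recipe(url):
    # Placeholder: fetch the page via the crawler service and return
    # whatever identifier index_recipe needs.
    return url


@app.task
def index_recipe(recipe_id):
    # Placeholder: write the recipe document into the search index.
    return recipe_id


# Recrawling: crawl first, then index the result.
recrawl = chain(crawl_recipe.s("https://blog.example/recipes/chili"), index_recipe.s())

# Reindexing: call index_recipe directly, skipping the crawler service
# (and any external HTTP requests) entirely.
reindex = index_recipe.s("https://blog.example/recipes/chili")
```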

However, it's difficult to strictly enforce the set of recipes that operators may select for reindexing, and we should make the backend robust against mistakes there (because they will happen). This implies that we should move the de-duplication logic (which detects duplicates and remaps the src and dst on the Recipe so that duplicates merge) into the index_recipe worker method.
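
As a rough sketch of where that check could live; the Recipe fields, the dict standing in for the search index, and the duplicate-detection rule below are simplified placeholders rather than the actual models:

```python
from dataclasses import dataclass


@dataclass
class Recipe:
    src: str  # earliest-known URL for the recipe
    dst: str  # latest/resolved URL for the recipe


def index_recipe(recipe: Recipe, index: dict[int, Recipe]) -> None:
    # Hypothetical duplicate detection: another indexed recipe resolving to
    # the same destination URL but keyed under a different src.
    duplicate = next(
        (r for r in index.values() if r.dst == recipe.dst and r.src != recipe.src),
        None,
    )
    if duplicate is not None:
        # Remap src/dst onto the existing record so both entries share a key
        # and merge, instead of co-existing as duplicates.
        recipe.src, recipe.dst = duplicate.src, duplicate.dst
    # Documents are keyed on hash(src), so merged records overwrite each
    # other rather than accumulating in the index.
    index[hash(recipe.src)] = recipe
```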

Refactoring

It may make sense to relocate the find_earliest_crawl and find_latest_crawl utility methods from RecipeURL to Recipe, and to always drive those lookups from the resolved URL of the recipe (a rough sketch follows the goals below).

The goals here are multiple, and roughly in this priority order:

  • To ensure that earliest- and latest-known URL lookups for a recipe are handled correctly, robustly, and with the latest-known information at indexing time, so that recipe duplication does not occur.
  • To minimize the number of database queries required to perform the lookups (currently this requires query/construction of a RecipeURL object, for example, and the lookups themselves query the CrawlURL model).
  • To organize the code in a comprehensible way.
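
A rough shape for that relocation, using simplified in-memory stand-ins; in the real schema the crawl history lives in the CrawlURL model, so the lookups would be database queries rather than list scans:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class CrawlRecord:
    url: str
    crawled_at: datetime


@dataclass
class Recipe:
    resolved_url: str
    # Stand-in for the crawl history reachable from the resolved URL; the
    # real lookups would query the CrawlURL model instead of holding a list.
    crawls: list[CrawlRecord] = field(default_factory=list)

    def find_earliest_crawl(self) -> CrawlRecord | None:
        # Drive the lookup from the recipe's own resolved URL / crawl history.
        return min(self.crawls, key=lambda c: c.crawled_at, default=None)

    def find_latest_crawl(self) -> CrawlRecord | None:
        return max(self.crawls, key=lambda c: c.crawled_at, default=None)
```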
jayaddison added the bug label on Dec 11, 2023
@jayaddison
Member Author

I'm coming to the conclusion that we're likely to continue to have some level of duplication within the Recipe model (database table) using the current schema.

As a result I think what is important is to hide-and-redirect the duplicate recipes, while continuing to display a single, primary copy of the recipe.

The modelling problem here is that we use hash(src) of a recipe as the search engine key for the document -- but src tracks the earliest-known URL for a recipe, and may change over time. I don't think that invariant can be broken, and it seems like a good way to anchor recipes to their (best-as-we-can-tell) origins.

So I think the trick is that, as the best-known src changes, we ensure that only one of the n documents -- the one whose ID matches hash(src) -- is displayed at a time. All others should be hidden. See also #71.
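
To illustrate the hide-and-redirect idea (the hash function and helper names here are arbitrary stand-ins, not the project's actual keying code):

```python
import hashlib


def document_id(src: str) -> str:
    # Documents are keyed on a hash of the earliest-known URL (src);
    # sha256 here is illustrative, not necessarily the real hash.
    return hashlib.sha256(src.encode("utf-8")).hexdigest()


def visible_documents(documents: list[dict], best_known_src: str) -> list[dict]:
    # Of the n documents that may exist for a recipe, only the one whose ID
    # matches hash(src) for the current best-known src is displayed; the
    # rest are hidden (and could redirect to the primary copy).
    primary_id = document_id(best_known_src)
    return [doc for doc in documents if doc["id"] == primary_id]
```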
