Cache scraper results #15
Comments
Suggestion: since we're currently doing a full pull when we run the spiders, we might as well wipe and replace the full cache on each run instead of worrying about managing the TTL of the data for now.
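A minimal sketch of that wipe-and-replace approach, assuming the Firestore destination proposed in the next comment, a hypothetical `events` collection name, and a simplified event shape (the real one lives in web/src/types/custom.d.ts):

```ts
import { initializeApp, applicationDefault } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';

initializeApp({ credential: applicationDefault() });
const db = getFirestore();

// Simplified stand-in for the real event type in web/src/types/custom.d.ts.
interface ScrapedEvent {
  title: string;
  url: string;
  startDate: string; // ISO 8601 date string
}

// Wipe the cached collection, then write the results of the fresh pull.
async function replaceCache(events: ScrapedEvent[]): Promise<void> {
  const col = db.collection('events'); // collection name is an assumption
  let batch = db.batch();
  let ops = 0;

  const flushIfFull = async () => {
    if (++ops === 500) { // Firestore batches cap at 500 writes
      await batch.commit();
      batch = db.batch();
      ops = 0;
    }
  };

  // Delete everything currently cached.
  const existing = await col.get();
  for (const doc of existing.docs) {
    batch.delete(doc.ref);
    await flushIfFull();
  }

  // Write the new results under auto-generated document IDs.
  for (const event of events) {
    batch.set(col.doc(), event);
    await flushIfFull();
  }
  if (ops > 0) await batch.commit();
}
```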
I actually decided that maybe we are overthinking this. Let's just store the results in Firestore, scrap the idea of an API, and have the app pull directly from Firestore.
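If the scraper writes straight to Firestore, the web app can read the collection with the client SDK and skip the API entirely. A rough sketch under the same assumptions (an `events` collection, a `startDate` field to sort on):

```ts
import { initializeApp } from 'firebase/app';
import { collection, getDocs, getFirestore, orderBy, query } from 'firebase/firestore';

// Placeholder config; the real values come from the Firebase console.
const app = initializeApp({ projectId: '<your-project-id>' /* plus apiKey, etc. */ });
const db = getFirestore(app);

// Pull the cached events directly from Firestore, with no API layer in between.
export async function fetchEvents() {
  const q = query(collection(db, 'events'), orderBy('startDate'));
  const snapshot = await getDocs(q);
  return snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
}
```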
Here is the current structure of a community event (community-events/web/src/types/custom.d.ts, lines 1 to 16 at ecdc1d6).
Is your feature request related to a problem? Please describe.
Currently, the scraper results are stored in a JSON file. This will not scale well, so we should choose a better caching solution.
Describe the solution you'd like
Ideally, we would store the results in Firestore. This gets rid of the need to maintain and extend an API.
A secondary option is caching the results in a dedicated caching layer (either Memcache or Redis).
Results that are more than 30 days in the past should be expired.
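A scheduled cleanup could handle the 30-day expiry if Firestore ends up as the store. A sketch under the same assumptions as above (an already-initialized admin app, an `events` collection, `startDate` stored as an ISO string so it compares chronologically), intended to run from a cron job or Cloud Function:

```ts
import { getFirestore } from 'firebase-admin/firestore';

const db = getFirestore();

// Remove cached events whose start date is more than 30 days in the past.
export async function expireOldEvents(): Promise<number> {
  const cutoff = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000).toISOString();
  const stale = await db
    .collection('events')            // collection name is an assumption
    .where('startDate', '<', cutoff) // assumes ISO 8601 strings, which sort chronologically
    .get();

  // Assumes fewer than 500 stale documents per run (Firestore's batch limit).
  const batch = db.batch();
  stale.docs.forEach((doc) => batch.delete(doc.ref));
  if (!stale.empty) {
    await batch.commit();
  }
  return stale.size;
}
```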
Describe alternatives you've considered
n/a
Additional context
n/a