[Question] Fetch request url from redis fail #285
Comments
Same error... Found a solution?
Passing a spider argument to the crawl() method of scrapy.core.engine.ExecutionEngine is no longer supported as of Scrapy 2.10.0 (see the release notes). As a workaround, try Scrapy 2.9.0.
It looks like pull request #286, which fixes this, has already existed since August:

```python
from scrapy import version_info as scrapy_version
from scrapy_redis.spiders import RedisSpider


class SomeSpider(RedisSpider):
    # vvv add this override to your spider code
    def schedule_next_requests(self):
        """Schedules a request if available"""
        # TODO: While there is capacity, schedule a batch of redis requests.
        for req in self.next_requests():
            # The spider argument was removed from ExecutionEngine.crawl()
            # in Scrapy 2.10; see https://github.com/scrapy/scrapy/issues/5994
            if scrapy_version >= (2, 6):
                self.crawler.engine.crawl(req)
            else:
                self.crawler.engine.crawl(req, spider=self)
```
Hope the fixed version is released soon.
@rmax would it be possible to release a fix for this? I'm also encountering this issue.
The same problem... |
@rmax would it be possible to release a fix for this? I'm also encountering this issue. Thanks
Thank you for your patience. V0.8.0 has been released 🎉
Description
If I insert the start URL into Redis before running Scrapy, it works.
But if I run Scrapy first and then insert the URL, fetching it fails with an error:
I can't feed URLs dynamically, and Scrapy crashes.
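For context, the dynamic workflow described above pushes a start URL into the Redis list that the running RedisSpider polls. A minimal sketch, assuming the scrapy-redis default key pattern `<spider_name>:start_urls` (the helper names here are hypothetical, not part of scrapy-redis):

```python
def start_urls_key(spider_name: str) -> str:
    """Return the default scrapy-redis start-URLs key for a spider."""
    return f"{spider_name}:start_urls"


def push_start_url(client, spider_name: str, url: str) -> None:
    """LPUSH the URL so a running RedisSpider can pick it up dynamically.

    `client` is any object with a redis-py style lpush(key, value) method.
    """
    client.lpush(start_urls_key(spider_name), url)
```

With a real connection this would be called as `push_start_url(redis.Redis(), "somespider", "https://example.com")`, which is equivalent to `LPUSH somespider:start_urls https://example.com` in redis-cli.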