crawler solution in python #24

Open · wants to merge 10 commits into master
27 changes: 14 additions & 13 deletions .travis.yml
@@ -1,15 +1,16 @@
 sudo: false
-language: node_js
-cache:
-  yarn: true
-  directories:
-    - node_modules
-notifications:
-  email: false
-node_js:
-  - 'stable'
+language:
+  - python
+  - node_js
 before_script:
-  - npm test
-branches:
-  except:
-    - /^v\d+\.\d+\.\d+$/
+  - npm install
+python:
+  - 3.5
+install:
+  - pip install -r requirements.txt
+script:
+  - cd dream11/
+  - scrapyd &
+  - cd ..
+  - npm start &
+  - curl http://localhost:6800/schedule.json -d project=default -d spider=linkspider
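
For context on the new script block: scrapyd starts Scrapy's job server on its default port 6800, and the final curl call asks it to schedule a run of the linkspider spider from the default project. The same scheduling request can be made from Python; a minimal sketch, assuming the requests package is available (it is not listed in requirements.txt):

import requests

# Ask the scrapyd daemon (default port 6800) to schedule a spider run.
resp = requests.post(
    'http://localhost:6800/schedule.json',
    data={'project': 'default', 'spider': 'linkspider'},
)
print(resp.json())  # e.g. {'status': 'ok', 'jobid': '<job id>'}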
35 changes: 35 additions & 0 deletions crawler.py
@@ -0,0 +1,35 @@
+import scrapy
+from heapq import heappush, heappop
+
+# Global min-heap of scraped codes; the smallest is printed when the spider closes.
+code_list = []
+
+
+class MyBaseSpider(scrapy.Spider):
+    name = 'spider'
+    start_urls = ['http://localhost:8080']
+    custom_settings = {
+        'LOG_ENABLED': False,  # a boolean, not the string 'false'
+        'CONCURRENT_REQUESTS': 2,
+        'CONCURRENT_REQUESTS_PER_DOMAIN': 4
+    }
+
+    def __init__(self, url=None, *args, **kwargs):
+        super(MyBaseSpider, self).__init__(*args, **kwargs)
+        self.something = url
+
+    def parse(self, response):
+        local_codes = []
+        for code in response.css('div.codes > h1 ::text'):
+            heappush(local_codes, code.extract())
+        # heappush returns None, so it must not be yielded as an item;
+        # push this page's smallest code onto the global heap instead.
+        if local_codes:
+            heappush(code_list, heappop(local_codes))
+        for next_page in response.css('a'):
+            yield response.follow(next_page, callback=self.parse)
+
+    def closed(self, reason):
+        # print() is a function on Python 3 (Travis runs 3.5).
+        if code_list:
+            print(heappop(code_list))
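
The spider leans on heapq to order what it scrapes: each page's codes go into a local min-heap, the page's smallest code is pushed onto the global code_list heap, and closed() pops the overall smallest. A minimal sketch of that invariant, using hypothetical sample codes:

from heapq import heappush, heappop

code_list = []
for page_codes in [['X9', 'A1'], ['M5']]:  # hypothetical codes from two pages
    local_codes = []
    for code in page_codes:
        heappush(local_codes, code)
    heappush(code_list, heappop(local_codes))  # push each page's minimum
print(heappop(code_list))  # prints 'A1', the smallest code across all pages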
19 changes: 19 additions & 0 deletions crawler.test.py
@@ -0,0 +1,19 @@
+import unittest
+from scrapy.crawler import CrawlerProcess
+from crawler import MyBaseSpider
+
+
+crawler_process = CrawlerProcess()
+
+
+class TestCrawler(unittest.TestCase):
+
+    def test_crawl(self):
+        # Smoke test: schedule the spider and run it to completion.
+        # start() blocks until the crawl finishes and stops the reactor.
+        crawler_process.crawl(MyBaseSpider)
+        crawler_process.start()
+
+
+if __name__ == '__main__':
+    unittest.main()
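
One caveat with this test: CrawlerProcess.start() runs Twisted's reactor, which cannot be restarted inside a single process, so only one crawl batch can run per interpreter invocation. Keyword arguments passed to crawl() are forwarded to the spider's __init__, so the spider's url parameter could be exercised like this (a sketch; the URL is a placeholder):

from scrapy.crawler import CrawlerProcess
from crawler import MyBaseSpider

process = CrawlerProcess()
# Extra keyword arguments are forwarded to MyBaseSpider.__init__.
process.crawl(MyBaseSpider, url='http://localhost:8080')
process.start()  # blocks until the crawl finishes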
1 change: 1 addition & 0 deletions requirements.txt
@@ -0,0 +1 @@
+Scrapy==1.4.0