
Commit ccae929

committed: Updated the code for the crawler section

1 parent 0f19b23 commit ccae929

19 files changed: +521 -28 lines

Day66-75/Scrapy的应用01.md renamed to Day66-75/Scrapy爬虫框架的应用.md

+78 -1

@@ -1,4 +1,4 @@
-## Applications of Scrapy (01)
+## Applications of the Scrapy Crawler Framework

### Scrapy Overview

@@ -101,6 +101,11 @@
2. Write your own spider in the spiders folder.

```Shell
(venv) $ scrapy genspider movie movie.douban.com --template=crawl
```

```Python
# -*- coding: utf-8 -*-

@@ -287,5 +292,77 @@
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```

### Additional Notes

#### XPath Syntax

1. XPath path expressions: XPath uses path expressions to select a node or a set of nodes from an XML document.

2. XPath node types: element, attribute, text, namespace, processing instruction, comment, and the root node.

3. XPath syntax. (Note: the examples below come from the [XPath tutorial](http://www.runoob.com/xpath/xpath-syntax.html) on the [runoob](http://www.runoob.com/) website.)

Sample XML document.

```XML
<?xml version="1.0" encoding="UTF-8"?>

<bookstore>

<book>
  <title lang="eng">Harry Potter</title>
  <price>29.99</price>
</book>

<book>
  <title lang="eng">Learning XML</title>
  <price>39.95</price>
</book>

</bookstore>
```

XPath syntax.

| Path expression | Result |
| --------------- | ------------------------------------------------------------ |
| bookstore | Selects all child nodes of the bookstore element. |
| /bookstore | Selects the root element bookstore. Note: a path that starts with a slash ( / ) always represents an absolute path to an element! |
| bookstore/book | Selects all book elements that are children of bookstore. |
| //book | Selects all book elements, no matter where they are in the document. |
| bookstore//book | Selects all book elements that are descendants of the bookstore element, no matter where they sit below bookstore. |
| //@lang | Selects all attributes named lang. |

XPath predicates.

| Path expression | Result |
| ---------------------------------- | ------------------------------------------------------------ |
| /bookstore/book[1] | Selects the first book element that is a child of bookstore. |
| /bookstore/book[last()] | Selects the last book element that is a child of bookstore. |
| /bookstore/book[last()-1] | Selects the last but one book element that is a child of bookstore. |
| /bookstore/book[position()<3] | Selects the first two book elements that are children of the bookstore element. |
| //title[@lang] | Selects all title elements that have an attribute named lang. |
| //title[@lang='eng'] | Selects all title elements whose lang attribute has the value eng. |
| /bookstore/book[price>35.00] | Selects all book elements of the bookstore element whose price element has a value greater than 35.00. |
| /bookstore/book[price>35.00]/title | Selects all title elements of the book elements of the bookstore element whose price element has a value greater than 35.00. |

Wildcards.

| Path expression | Result |
| ------------ | --------------------------------- |
| /bookstore/* | Selects all child elements of the bookstore element. |
| //* | Selects all elements in the document. |
| //title[@*] | Selects all title elements that have at least one attribute. |

Selecting several paths.

| Path expression | Result |
| -------------------------------- | ------------------------------------------------------------ |
| //book/title \| //book/price | Selects all title and price elements of book elements. |
| //title \| //price | Selects all title and price elements in the document. |
| /bookstore/book/title \| //price | Selects all title elements of the book elements of the bookstore element, as well as all price elements in the document. |
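The expressions in the tables above can be tried out directly with Scrapy's `Selector`, the same selector class spiders use on responses. The snippet below is a minimal sketch (not part of the original lesson) that loads the sample bookstore document and evaluates a few of the expressions; the variable names are arbitrary.

```Python
# A minimal sketch: evaluating XPath expressions from the tables above
# against the sample bookstore document with Scrapy's Selector.
from scrapy.selector import Selector

xml = """
<bookstore>
    <book>
        <title lang="eng">Harry Potter</title>
        <price>29.99</price>
    </book>
    <book>
        <title lang="eng">Learning XML</title>
        <price>39.95</price>
    </book>
</bookstore>
"""

sel = Selector(text=xml, type='xml')
# //book/title - the title of every book element
print(sel.xpath('//book/title/text()').extract())            # ['Harry Potter', 'Learning XML']
# //title[@lang='eng'] - title elements whose lang attribute is "eng"
print(sel.xpath("//title[@lang='eng']/text()").extract())    # ['Harry Potter', 'Learning XML']
# /bookstore/book[price>35.00]/title - titles of books priced above 35.00
print(sel.xpath('/bookstore/book[price>35.00]/title/text()').extract())  # ['Learning XML']
```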
#### Viewing an Element's XPath in the Chrome Browser

![](./res/douban-xpath.png)
Day66-75/Scrapy的应用03.md

Whitespace-only changes.

Day66-75/code/douban/douban/items.py

+18

@@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DoubanItem(scrapy.Item):

    name = scrapy.Field()
    year = scrapy.Field()
    score = scrapy.Field()
    director = scrapy.Field()
    classification = scrapy.Field()
    actor = scrapy.Field()
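Scrapy items behave like dictionaries restricted to their declared fields. A quick hedged illustration (not part of the commit) of how the fields above are accessed:

```Python
# Illustration only: DoubanItem supports dict-style access for its declared fields.
from douban.items import DoubanItem

item = DoubanItem(name=['肖申克的救赎 The Shawshank Redemption'], year=['1994'])
item['score'] = ['9.7']
print(dict(item))       # {'name': [...], 'year': [...], 'score': [...]}
# item['rating'] = 5    # would raise KeyError: 'rating' is not a declared field
```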
Day66-75/code/douban/douban/middlewares.py

+103

@@ -0,0 +1,103 @@
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals


class DoubanSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class DoubanDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.

        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
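The two classes above are the unmodified genspider boilerplate. As a hedged sketch (not part of this commit) of how the hooks usually get filled in, a downloader middleware can rotate the User-Agent header in `process_request`; it only takes effect once it is listed in `DOWNLOADER_MIDDLEWARES` in settings.py. The class name below is illustrative.

```Python
# Sketch only: a downloader middleware that assigns a random User-Agent per request.
# It would need to be enabled in settings.py, e.g.
# DOWNLOADER_MIDDLEWARES = {'douban.middlewares.RandomUserAgentMiddleware': 400}
import random


class RandomUserAgentMiddleware(object):

    USER_AGENTS = [
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
    ]

    def process_request(self, request, spider):
        # Overwrite the User-Agent header before the request reaches the downloader.
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        return None
```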
Day66-75/code/douban/douban/pipelines.py

+43

@@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymongo

from scrapy.exceptions import DropItem
from scrapy.conf import settings


class DoubanPipeline(object):

    def __init__(self):
        connection = pymongo.MongoClient(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        # Drop items that are missing any of the scraped fields
        for field, value in item.items():
            if not value:
                raise DropItem("Missing %s in movie item" % field)
        # Insert the data into the database
        new_movie = {
            "name": item['name'][0],
            "year": item['year'][0],
            "score": item['score'],
            "director": item['director'],
            "classification": item['classification'],
            "actor": item['actor']
        }
        self.collection.insert_one(new_movie)
        spider.logger.debug("Item written to MongoDB database %s/%s" %
                            (settings['MONGODB_DB'], settings['MONGODB_COLLECTION']))
        return item
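`scrapy.conf` is deprecated in newer Scrapy releases; the currently recommended way to read settings is the `from_crawler` hook. Below is a hedged sketch of the same MongoDB pipeline written that way; the class name `MongoDBPipeline` is illustrative, not part of the commit.

```Python
# Sketch only: the same MongoDB pipeline reading its configuration from
# crawler.settings instead of the deprecated scrapy.conf module.
import pymongo


class MongoDBPipeline(object):

    def __init__(self, server, port, db, collection):
        self.client = pymongo.MongoClient(server, port)
        self.collection = self.client[db][collection]

    @classmethod
    def from_crawler(cls, crawler):
        # Pull the connection parameters from settings.py at crawl time.
        s = crawler.settings
        return cls(s.get('MONGODB_SERVER'), s.getint('MONGODB_PORT'),
                   s.get('MONGODB_DB'), s.get('MONGODB_COLLECTION'))

    def process_item(self, item, spider):
        self.collection.insert_one(dict(item))
        return item
```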
Day66-75/code/douban/douban/settings.py

+98

@@ -0,0 +1,98 @@
# -*- coding: utf-8 -*-

# Scrapy settings for douban project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'douban'

SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 3
RANDOMIZE_DOWNLOAD_DELAY = True
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = True

# MongoDB connection settings used by DoubanPipeline
MONGODB_SERVER = '120.77.222.217'
MONGODB_PORT = 27017
MONGODB_DB = 'douban'
MONGODB_COLLECTION = 'movie'

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'douban.middlewares.DoubanSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'douban.middlewares.DoubanDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'douban.pipelines.DoubanPipeline': 400,
}

LOG_LEVEL = 'DEBUG'

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
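Project-wide values such as `DOWNLOAD_DELAY` above can also be overridden for a single spider through the `custom_settings` class attribute. A brief hedged sketch follows; the spider below is illustrative and not part of the commit.

```Python
# Sketch only: per-spider overrides take precedence over the project's settings.py.
import scrapy


class GentleSpider(scrapy.Spider):
    name = 'gentle'
    start_urls = ['https://movie.douban.com/top250']

    custom_settings = {
        'DOWNLOAD_DELAY': 10,         # slow this spider down even further
        'HTTPCACHE_ENABLED': True,    # cache responses while debugging locally
    }

    def parse(self, response):
        self.logger.info('Fetched %s', response.url)
```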
Day66-75/code/douban/douban/spiders/__init__.py

@@ -0,0 +1,4 @@
# This package will contain the spiders of your Scrapy project
#
# Please refer to the documentation for information on how to create and manage
# your spiders.
Day66-75/code/douban/douban/spiders/movie.py

@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from douban.items import DoubanItem


class MovieSpider(CrawlSpider):
    name = 'movie'
    allowed_domains = ['movie.douban.com']
    start_urls = ['https://movie.douban.com/top250']
    # Follow the Top 250 pagination links and parse every movie detail page.
    rules = (
        Rule(LinkExtractor(allow=(r'https://movie.douban.com/top250\?start=\d+.*'))),
        Rule(LinkExtractor(allow=(r'https://movie.douban.com/subject/\d+')), callback='parse_item'),
    )

    def parse_item(self, response):
        sel = Selector(response)
        item = DoubanItem()
        item['name'] = sel.xpath('//*[@id="content"]/h1/span[1]/text()').extract()
        item['year'] = sel.xpath('//*[@id="content"]/h1/span[2]/text()').re(r'\((\d+)\)')
        item['score'] = sel.xpath('//*[@id="interest_sectl"]/div/p[1]/strong/text()').extract()
        item['director'] = sel.xpath('//*[@id="info"]/span[1]/a/text()').extract()
        item['classification'] = sel.xpath('//span[@property="v:genre"]/text()').extract()
        item['actor'] = sel.xpath('//*[@id="info"]/span[3]/a[1]/text()').extract()
        return item
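With the files above in place, the crawl is normally started from the project directory with `scrapy crawl movie`. The snippet below is a hedged alternative sketch (not part of the commit) that launches the same spider from a plain Python script.

```Python
# Sketch only: run the 'movie' spider programmatically instead of via the scrapy CLI.
# Assumes it is executed from the project root, so get_project_settings() can
# locate douban/settings.py (via scrapy.cfg).
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

if __name__ == '__main__':
    process = CrawlerProcess(get_project_settings())
    process.crawl('movie')   # spider name as declared in MovieSpider.name
    process.start()          # blocks until the crawl finishes
```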

0 commit comments