Does Scrapy deduplicate URLs automatically? For example, with the code below, why are the duplicate URLs in start_urls crawled repeatedly when the spider runs?
import scrapy

from testspider.items import TestspiderItem  # item class defined in the project's items.py

class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["baidu.com"]
    # The same URL appears three times on purpose, to test deduplication.
    start_urls = [
        'http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9',
        'http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9',
        'http://baike.baidu.com/fenlei/%E5%A8%B1%E4%B9%90%E4%BA%BA%E7%89%A9',
    ]

    def parse(self, response):
        for sel in response.xpath('//div[@class="grid-list grid-list-spot"]/ul/li'):
            item = TestspiderItem()
            item['title'] = sel.xpath('div[@class="list"]/a/text()')[0].extract()
            item['link'] = sel.xpath('div[@class="list"]/a/@href')[0].extract()
            yield item