
python - Following links, Scrapy web crawler framework

After several readings of the Scrapy docs, I'm still not catching the difference between using CrawlSpider rules and implementing my own link-extraction mechanism in the callback method.

I'm about to write a new web crawler using the latter approach, but only because I had a bad experience in a past project using rules. I'd really like to know exactly what I'm doing and why.

Anyone familiar with this tool?

Thanks for your help!


1 Answer


CrawlSpider inherits from BaseSpider; it just adds rules for extracting and following links. If these rules are not flexible enough for you, use BaseSpider:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
# UItem is this project's Item subclass; the import path below is illustrative.
from myproject.items import UItem


class USpider(BaseSpider):
    """My spider."""

    name = 'uspider'  # placeholder spider name

    start_urls = ['http://www.amazon.com/s/?url=search-alias%3Dapparel&sort=relevance-fs-browse-rank']
    allowed_domains = ['amazon.com']

    def parse(self, response):
        '''Parse main category search page and extract subcategory search link.'''
        self.log('Downloaded category search page.', log.DEBUG)
        if response.meta['depth'] > 5:
            self.log('Categories depth limit reached (recursive links?). Stopping further following.', log.WARNING)
            return

        hxs = HtmlXPathSelector(response)
        subcategories = hxs.select("//div[@id='refinements']/*[starts-with(.,'Department')]/following-sibling::ul[1]/li/a[span[@class='refinementLink']]/@href").extract()
        for subcategory in subcategories:
            # resolve the relative subcategory link against the current page URL
            subcategorySearchLink = urlparse.urljoin(response.url, subcategory)
            yield Request(subcategorySearchLink, callback = self.parseSubcategory)

    def parseSubcategory(self, response):
        '''Parse subcategory search page and extract item links.'''
        hxs = HtmlXPathSelector(response)

        for itemLink in hxs.select('//a[@class="title"]/@href').extract():
            itemLink = urlparse.urljoin(response.url, itemLink)
            self.log('Requesting item page: ' + itemLink, log.DEBUG)
            yield Request(itemLink, callback = self.parseItem)

        try:
            nextPageLink = hxs.select("//a[@id='pagnNextLink']/@href").extract()[0]
            nextPageLink = urlparse.urljoin(response.url, nextPageLink)
            self.log('\nGoing to next search page: ' + nextPageLink + '\n', log.DEBUG)
            yield Request(nextPageLink, callback = self.parseSubcategory)
        except IndexError:
            # no "next page" link on this page - the whole subcategory has been walked
            self.log('Whole category parsed: ' + response.url, log.DEBUG)

    def parseItem(self, response):
        '''Parse item page and extract product info.'''

        hxs = HtmlXPathSelector(response)
        item = UItem()

        # self.extractText is a helper defined elsewhere in this spider (not shown here).
        item['brand'] = self.extractText("//div[@class='buying']/span[1]/a[1]", hxs)
        item['title'] = self.extractText("//span[@id='btAsinTitle']", hxs)
        ...
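
For contrast, here is a minimal sketch of what the rules-based approach could look like for the same site. The class name, the link-extractor XPaths and the callback name are illustrative (they reuse the selectors from the spider above), and the imports are the old-style Scrapy paths used in this answer:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class URulesSpider(CrawlSpider):
    """Illustrative rules-based variant: CrawlSpider extracts and follows links itself."""

    name = 'u_rules'
    start_urls = ['http://www.amazon.com/s/?url=search-alias%3Dapparel&sort=relevance-fs-browse-rank']
    allowed_domains = ['amazon.com']

    rules = (
        # keep following subcategory and "next page" links, no callback needed
        Rule(SgmlLinkExtractor(restrict_xpaths=("//div[@id='refinements']",
                                                "//a[@id='pagnNextLink']")),
             follow=True),
        # send product pages to parse_item (CrawlSpider reserves parse() for itself)
        Rule(SgmlLinkExtractor(restrict_xpaths="//a[@class='title']"),
             callback='parse_item'),
    )

    def parse_item(self, response):
        # extract the product fields here, as in parseItem above
        pass

This is less code, but everything the spider follows has to be expressible as a Rule; the BaseSpider version above trades that brevity for full control inside the callbacks.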

If even BaseSpider's start_urls is not flexible enough for you, override the start_requests method.
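
A minimal sketch of that, assuming the start URLs come from some runtime source; the spider name and the category_urls.txt file are made up for illustration:

from scrapy.http import Request
from scrapy.spider import BaseSpider


class UDynamicStartSpider(BaseSpider):
    """Illustrative: build the initial requests at runtime instead of hard-coding start_urls."""

    name = 'u_dynamic'
    allowed_domains = ['amazon.com']

    def start_requests(self):
        # e.g. read the category search URLs from a file, a database or spider arguments
        with open('category_urls.txt') as f:
            for url in f:
                yield Request(url.strip(), callback=self.parse)

    def parse(self, response):
        # handle each downloaded page as usual
        pass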

