Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
html - Python Scrapy Dynamic Web Sites

I am trying to scrape a very simple web page with the help of Scrapy and its XPath selectors, but for some reason the selectors I have do not work in Scrapy even though they do work in other XPath utilities.

I am trying to parse this snippet of html:

<select id="chapterMenu" name="chapterMenu">

<option value="/111-3640-1/20th-century-boys/chapter-1.html" selected="selected">Chapter 1: Friend</option>

<option value="/111-3641-1/20th-century-boys/chapter-2.html">Chapter 2: Karaoke</option>

<option value="/111-3642-1/20th-century-boys/chapter-3.html">Chapter 3: The Boy Who Bought a Guitar</option>

<option value="/111-3643-1/20th-century-boys/chapter-4.html">Chapter 4: Snot Towel</option>

<option value="/111-3644-1/20th-century-boys/chapter-5.html">Chapter 5: Night of the Science Room</option>

</select>

Scrapy parse_item code:

def parse_item(self, response):
    itemLoader = XPathItemLoader(item=MangaItem(), response=response)
    itemLoader.add_xpath('chapter', '//select[@id="chapterMenu"]/option[@selected="selected"]/text()')
    return itemLoader.load_item()

Scrapy does not extract any text with this, but if I take the same XPath and HTML snippet and run them in an online XPath tester, they work just fine.

If I use this XPath:

//select[@id="chapterMenu"]

I get the correct element, but when I try to access the option elements inside it, I do not get anything.
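For what it's worth, the XPath expression itself is valid against the static snippet. A quick check with Python's standard-library ElementTree, whose limited XPath support covers attribute predicates like this one, confirms that it matches the selected option (the HTML literal below is just the snippet above, trimmed):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the <select> snippet from the question
html = """<select id="chapterMenu" name="chapterMenu">
<option value="/111-3640-1/20th-century-boys/chapter-1.html" selected="selected">Chapter 1: Friend</option>
<option value="/111-3641-1/20th-century-boys/chapter-2.html">Chapter 2: Karaoke</option>
</select>"""

root = ET.fromstring(html)  # root is the <select> element itself
# Same predicate as the Scrapy XPath: option[@selected="selected"]
selected = root.find("./option[@selected='selected']")
print(selected.text)  # Chapter 1: Friend
```

So the expression is not the problem, which points at the HTML that Scrapy actually receives, as the answer below explains.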


1 Answer


Scrapy only performs a GET request for the URL; it is not a web browser and therefore cannot run JavaScript. Because of this, Scrapy alone is not enough to scrape dynamic web pages.

In addition you will need something like Selenium, which essentially gives you an interface to several web browsers and their functionality, including the ability to run JavaScript and retrieve the client-side generated HTML.

Here is a snippet of how one can go about doing this:

from Project.items import SomeItem
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from selenium import webdriver
import time

class RandomSpider(CrawlSpider):

    name = 'RandomSpider'
    allowed_domains = ['random.com']
    start_urls = [
        'http://www.random.com'
    ]

    rules = (
        Rule(LinkExtractor(allow=('some_regex_here',)), callback='parse_item', follow=True),
    )

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # use any browser you wish
        self.browser = webdriver.Firefox()

    def closed(self, reason):
        # called by Scrapy when the spider finishes
        self.browser.quit()

    def parse_item(self, response):
        item = SomeItem()
        self.browser.get(response.url)
        # let JavaScript load
        time.sleep(3)

        # scrape the dynamically generated HTML
        sel = Selector(text=self.browser.page_source)
        item['some_field'] = sel.xpath('some_xpath').get()
        return item
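As a side note, the fixed `time.sleep(3)` is a blunt instrument: it waits too long on fast pages and not long enough on slow ones. Selenium provides an explicit-wait helper for this (`WebDriverWait(driver, timeout).until(...)` from `selenium.webdriver.support.ui`) that polls a condition instead. The core idea, sketched here without a browser so the `wait_until` name and the toy condition are mine, not Selenium's:

```python
import time

def wait_until(condition, timeout=10, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Mirrors the idea behind Selenium's WebDriverWait: return as soon as
    the page state is ready instead of sleeping a fixed amount of time.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %s seconds" % timeout)

# toy usage: the "page" becomes ready on the third poll
state = {"calls": 0}
def fake_ready():
    state["calls"] += 1
    return state["calls"] >= 3

print(wait_until(fake_ready, timeout=2, poll=0.01))  # True
```

In the spider above, the condition would be something like a check that `"chapterMenu" in self.browser.page_source` before handing the HTML to the Selector.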
