I'm trying to create a basic web scraper for Amazon search results. As I iterate through the result pages, I sometimes get to page 5 (sometimes only page 2) before a StaleElementReferenceException is thrown. When I look at the browser after the exception is thrown, I can see that the driver/page did not scroll down to where the page numbers are (the bottom pagination bar).
My code:
driver.get('https://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=sonicare+toothbrush')

for page in range(1, last_page_number + 1):
    driver.implicitly_wait(10)
    bottom_bar = driver.find_element_by_class_name('pagnCur')
    driver.execute_script("arguments[0].scrollIntoView(true);", bottom_bar)
    current_page_number = int(driver.find_element_by_class_name('pagnCur').text)
    if page == current_page_number:
        next_page = driver.find_element_by_xpath(
            '//div[@id="pagn"]/span[@class="pagnLink"]/a[text()="{0}"]'.format(current_page_number + 1))
        next_page.click()
        print('page #', page, ': going to next page')
    else:
        print('page #: ', page, 'error')
I've looked at this question, and I'm guessing that a similar fix can be applied, but I'm not sure how to wait on something on the page that disappears. Also, based on how quickly the print statements appear, I can see that implicitly_wait(10) isn't actually pausing for a full 10 seconds.
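From what I've read, implicitly_wait only makes find_element_* poll for up to 10 seconds when an element isn't in the DOM yet; it returns as soon as a match is found, so it never pauses between clicks. I'm guessing an explicit wait is closer to what I need, something like this rough sketch (same pagnCur locator as in my code above):

# Rough sketch: explicit wait instead of implicitly_wait(10).
# WebDriverWait polls until the condition passes or 10 seconds elapse.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
bottom_bar = wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'pagnCur')))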
The exception points to the line that starts with "driver.execute_script". This is the full exception message:
StaleElementReferenceException: Message: The element reference of <span class="pagnCur"> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed
Sometimes I'll get a ValueError:
ValueError: invalid literal for int() with base 10: ''
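I assume the empty string means the pagnCur text is being read while the page is mid-refresh. A guard like this rough sketch (same names as in my code above) would avoid the crash, although it only masks the symptom rather than fixing the timing:

# Rough sketch: skip the int() conversion while the pagination text is empty,
# e.g. while the page is still refreshing.
page_text = driver.find_element_by_class_name('pagnCur').text.strip()
if page_text:
    current_page_number = int(page_text)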
So these errors/exceptions lead me to believe that my script isn't actually waiting for the page to refresh completely before it re-locates the pagination elements.
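Is something like this the right direction? It's a rough sketch of what I mean by waiting for the refresh: after clicking, wait for the old pagnCur element to detach (go stale) and for a fresh one to appear before the loop reads the page number again. next_page here is the same link my loop already finds.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
old_bottom_bar = driver.find_element_by_class_name('pagnCur')
next_page.click()  # next_page found via the same XPath as in my loop
wait.until(EC.staleness_of(old_bottom_bar))  # old element detaches after navigation
bottom_bar = wait.until(EC.presence_of_element_located((By.CLASS_NAME, 'pagnCur')))  # element on the new page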