In Python I had:
from bs4 import BeautifulSoup

# s is a requests.Session; url, cookies and headers are defined earlier
response = s.get(url, allow_redirects=False, cookies=cookies, headers=headers)
soup = BeautifulSoup(response.text, 'lxml')
reg_cart = soup.find('form', attrs={"name": "regCart"})
registered_courses = [i.a.text for i in reg_cart.find_all('div', attrs={"class": "course-number"})]
Now I want to replace BeautifulSoup with lxml, following this guide:
https://timber.io/blog/an-intro-to-web-scraping-with-lxml-and-python/
I tried to implement what they used there and got:
import lxml.html
doc = lxml.html.fromstring(response.content)
registered_courses = doc.xpath('//div[@class="course-number"]/text()')
But for some reason my output is:
['\n', '\n', '\n', '\n', '\n', '\n', '\n']
whereas previously it correctly showed the course numbers.
How can I fix this? And how can I restrict my XPath to the div tags under the regCart form, rather than matching them anywhere in the response?
For example, the HTML looks something like this:
<form name="regCart" ....>
  <div class="entry-spacer"></div>
  <div class="cart-entry">
    <div class="course-number">
      <a href="https://university.com/rishum/course/236756">236756</a>
    </div>
    <div class="course-name">
      ???? ??????? ??????
    </div>
    <div class="course-points">
      3.0 ??'
    </div>
    <div class="entry-group">
      ????? 13
    </div>
  </div>
  ...
</form>
(The runs of ? are non-Latin text that didn't survive copy-paste.) From this I want to return 236756.
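For what it's worth, here is a minimal self-contained repro of what I'm seeing, run against a hard-coded copy of that snippet instead of the live response (the real form attributes and the other cart entries are omitted; the second XPath at the end is only my guess at a fix, not something I know to be correct):

import lxml.html

# Hard-coded copy of the fragment above; form attributes and the
# remaining cart entries are omitted for brevity
html = """
<form name="regCart">
  <div class="cart-entry">
    <div class="course-number">
      <a href="https://university.com/rishum/course/236756">236756</a>
    </div>
  </div>
</form>
"""

doc = lxml.html.fromstring(html)

# Reproduces the problem: the direct text children of the div are only
# the whitespace around the <a> element, not the link text itself
print(doc.xpath('//div[@class="course-number"]/text()'))
# -> ['\n      ', '\n    ']

# My guess: descend into the <a> and anchor the path on the form.
# This prints ['236756'] here, but I don't know if it's the right way
print(doc.xpath('//form[@name="regCart"]//div[@class="course-number"]/a/text()'))

Is descending into a/text() and prefixing //form[@name="regCart"] the idiomatic way to do this with lxml?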