Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


web scraping - What to do when proxies are giving Max retries exceeded with URL from request module python

I am facing an issue in Python:

HTTPSConnectionPool(host='www.indeed.co.in', port=443): Max retries exceeded with url: /jobs?as_phr=python&l=india&limit=15&sort=date&from=advancedsearch&sr=directhire

To get around it I am using a proxy in my Python script, but after a few successful runs it shows "Max retries exceeded with url" again, and sometimes I get a ReCaptcha instead of results.
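One common mitigation for the block/captcha pattern described above is to rotate through a pool of proxies and pause between requests, so no single proxy exceeds the site's rate limit. This is a sketch, not from the original post; the proxy URLs below are hypothetical placeholders:

```python
import itertools
import random
import time

# Hypothetical proxy pool -- replace with your provider's endpoints.
proxy_pool = itertools.cycle([
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
])

def next_proxies():
    """Return the next proxy from the pool, for both schemes."""
    p = next(proxy_pool)
    return {"http": p, "https": p}

def polite_delay(lo=2.0, hi=5.0):
    """Sleep a random interval between requests; a steady machine-gun
    request rate is what usually triggers blocks and captchas."""
    time.sleep(random.uniform(lo, hi))
```

Each `requests.get` call would then take `proxies=next_proxies()` and be preceded by `polite_delay()`.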

import requests
from bs4 import BeautifulSoup as bs

# Proxy credentials -- fill these in with your provider's details
proxy_host = "host name"
proxy_port = "port number"
proxy_auth = "key"
proxies = {
    "https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
    "http": "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
}

urll = ('https://www.indeed.com/jobs?q=amazon+aurora+-'
        'company%3Aamazon+aurora&l=ohio&radius=100&limit=15&sort=date&sr=directhire')

page = requests.get(urll, proxies=proxies, verify=False)
soup = bs(page.text, 'html.parser')
question from:https://stackoverflow.com/questions/65626511/what-to-do-when-proxies-are-giving-max-retries-exceeded-with-url-from-request-mo
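One thing the script does not do is retry transient failures: a single connection error through the proxy raises "Max retries exceeded" immediately. A `requests.Session` mounted with urllib3's `Retry` spreads attempts out with exponential backoff and sets a browser-like User-Agent, which also reduces the chance of being served a captcha. This is a sketch assuming urllib3 >= 1.26 (older versions name the `allowed_methods` parameter `method_whitelist`):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(proxies=None, total_retries=3, backoff_factor=1.0):
    """Build a Session that retries transient failures with backoff
    instead of failing on the first connection error."""
    session = requests.Session()
    retry = Retry(
        total=total_retries,
        backoff_factor=backoff_factor,            # waits 1s, 2s, 4s, ... between attempts
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET"],
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    if proxies:
        session.proxies.update(proxies)
    # Default python-requests User-Agent is an easy target for blocking.
    session.headers.update({
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
    })
    return session

session = make_session()
# page = session.get(urll, timeout=10)   # same call as before, now with retries
```

The `timeout` argument matters too: without it, a dead proxy can hang the request indefinitely rather than moving on to the next attempt.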


1 Answer

Waiting for answers


...