I'm developing a simple program that will scrape a site. The only problem is that I need to log in first.
I've used the browser_cookie3 library on my PC, but now I want to set the program up on a VPS. Does anyone know which files I need to copy from my PC to keep it running, or can you suggest a better solution? I tried saving the cookies from a session to a txt file while using browser_cookie3, but unfortunately that didn't work.
Here is some code:
import browser_cookie3
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
    'TE': 'Trailers',
}
url = 'http://www.example.com/'

# Read the login cookies straight out of the local Chrome profile
cookies = browser_cookie3.chrome(domain_name='.example.com')

# Route the request through the session so headers and cookies persist
session = requests.Session()
session.headers.update(headers)
session.cookies.update(cookies)

response = session.get(url, timeout=3)
print(response.text)
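One approach that should work, assuming the site identifies you purely by cookies: browser_cookie3 only reads the browser's cookie store, so it can't run on a headless VPS. Instead, export the cookies on the PC to a Netscape-format `cookies.txt` with the standard-library `MozillaCookieJar`, copy that file to the VPS, and load it there. A minimal sketch (the file name is a placeholder, and the export step still needs Chrome on the PC):

```python
from http.cookiejar import MozillaCookieJar

COOKIE_FILE = 'cookies.txt'  # placeholder; copy this file from the PC to the VPS


def export_cookies():
    """Run on the PC: read Chrome's cookies and write them to a portable file."""
    import browser_cookie3  # only needed on the machine where Chrome lives
    jar = MozillaCookieJar(COOKIE_FILE)
    for cookie in browser_cookie3.chrome(domain_name='.example.com'):
        jar.set_cookie(cookie)
    # ignore_discard keeps session cookies, which logins usually rely on
    jar.save(ignore_discard=True, ignore_expires=True)


def import_cookies():
    """Run on the VPS: load the file into a jar requests can use directly."""
    jar = MozillaCookieJar(COOKIE_FILE)
    jar.load(ignore_discard=True, ignore_expires=True)
    return jar
```

On the VPS you can pass the result straight to requests, e.g. `requests.get(url, cookies=import_cookies(), headers=headers)`. Two caveats: session cookies will eventually expire, so you'll need to re-export and re-copy the file now and then, and a plain `pickle` of the jar can fail to round-trip between machines, which may be why the txt-file attempt didn't work.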
question from:
https://stackoverflow.com/questions/65872091/using-browser-cookie3-on-a-ubuntu-vps