I'm trying to write a web-scraping script that tells me whether a website runs WordPress or not, but I get this error:
urllib.error.HTTPError: HTTP Error 403: Forbidden
and I don't understand why, because I'm using these headers, which are supposed to get past it (according to other Stack Overflow answers):
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "fr-fr,en;q=0.5", "Accept-Encoding": "gzip, deflate", "DNT": "1", "Connection": "close", "Upgrade-Insecure-Requests": "1"}
Here is my function:
import urllib.request

import requests


def check_web_wp(url):
    is_wordpress = False
    print(repr(url))
    headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "fr-fr,en;q=0.5",
        "Accept-Encoding": "gzip, deflate",
        "DNT": "1",
        "Connection": "close",
        "Upgrade-Insecure-Requests": "1",
    }
    response = requests.get(url, headers=headers)
    with urllib.request.urlopen(url) as response:
        texte = response.read()
    poste_string = str(texte)
    splitted = poste_string.split()
    for word in splitted:
        if "wordpress" in word:
            is_wordpress = True
            break
    return is_wordpress
def main():
    url = "https://icalendrier.fr/"
    is_wp = check_web_wp(url)
Did I miss something? Or is the website just too heavily secured?
Thanks for your answers.
with urllib.request.urlopen(url) as response: (without the headers) is overwriting your previous response object from response = requests.get(url, headers=headers) (with the headers). The body you then read comes from the urllib call, which sends urllib's default User-Agent instead of your browser-like headers, so the server answers that request with 403.
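Since the urllib call never receives your headers, the simplest fix is to keep only the requests call and read the body from it. A minimal sketch reusing your header dict (the raise_for_status() line is optional, it just turns a 403 into a visible exception instead of scanning an error page):

import requests

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "fr-fr,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
    "DNT": "1",
    "Connection": "close",
    "Upgrade-Insecure-Requests": "1",
}

def check_web_wp(url):
    # One request only, with the browser-like headers attached; no urllib call to overwrite it.
    response = requests.get(url, headers=HEADERS)
    response.raise_for_status()  # raises requests.HTTPError on a 403 rather than continuing
    return "wordpress" in response.text.lower()

If you prefer to stay with urllib, you would need to build a urllib.request.Request(url, headers=headers) object and pass that to urlopen, because urlopen(url) on its own sends urllib's default User-Agent, which many sites block.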