When the crawler script runs, urllib raises:

HTTPError: HTTP Error 418

HTTP 418 here means the target site has detected that the request is not coming from a real browser (urllib's default User-Agent gives the crawler away), so the fix is to disguise the request header.
The User-Agent disguise code is as follows:
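A minimal sketch of the header, assuming the usual trick of sending a browser-style `User-Agent` with `urllib.request` (the Chrome version string below is just an example value):

```python
import urllib.request

# Pretend to be an ordinary desktop Chrome browser instead of
# urllib's default "Python-urllib/3.x" User-Agent.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/114.0.0.0 Safari/537.36"
}
```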
Many people go looking for a fresh User-Agent every time they start a crawler project. That really isn't necessary: one from a previous project works fine, and if it gets rejected during testing, just change the last number in the version string; most of the time that is enough.
And this is what I came up with:
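A sketch of the full request under those assumptions; the `url` below is a placeholder for whatever page you are crawling:

```python
import urllib.request

url = "https://www.example.com/"  # placeholder: the page you are crawling

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/114.0.0.0 Safari/537.36"
}

# Attach the disguised header to the request before opening it.
request = urllib.request.Request(url, headers=headers)
with urllib.request.urlopen(request) as response:
    html = response.read().decode("utf-8")

print(html[:200])  # quick check that the page came back instead of a 418
```

With the header attached, the server treats the request like a normal browser visit and the 418 no longer appears.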
I hope it's helpful to you.