
403 Errors when deployed but not locally


I have several spiders that worked fine for months when deployed on Scrapinghub, but recently they have started failing immediately, with a 403 error on the very first request.

Is this perhaps due to Scrapinghub's standard servers (non-Crawlera) using a common pool of IPs? Most of my target sites are in Australia, so perhaps they have started geoblocking non-Australian IPs?

All of these spiders still work fine from a local machine.

Is the best solution to use Crawlera and set the region to Australia?
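For context, here is roughly what I would put in `settings.py` if I go the Crawlera route. The middleware and the `CRAWLERA_*` settings are from the scrapy-crawlera docs; the region header name and value are my assumption from the Crawlera docs, not something I have tested:

```python
# settings.py sketch: enabling the scrapy-crawlera middleware.
DOWNLOADER_MIDDLEWARES = {
    "scrapy_crawlera.CrawleraMiddleware": 610,
}

CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = "<your-api-key>"  # placeholder

# Ask Crawlera for Australian outgoing IPs on every request.
# NOTE: the header name/value below is my assumption, please correct me.
CRAWLERA_DEFAULT_HEADERS = {
    "X-Crawlera-Region": "AU",
}
```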

If so, is there a way to speed the spiders up? For example, is it faster to use a single Crawlera session (a single IP) for the entire crawl rather than a new Crawlera session for each request?
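To be clear about what I mean by "a single session": as I understand the Crawlera docs, you send `X-Crawlera-Session: create` on the first request, read the assigned session id from the response headers, and then pin every later request to that id so the whole crawl goes out from one IP. A minimal sketch of that header handling (the header name is from the docs as I read them; the response below is mocked):

```python
def session_headers(session_id=None):
    """Headers for the next request: create a session, or reuse an existing one."""
    return {"X-Crawlera-Session": session_id if session_id else "create"}

def extract_session(response_headers):
    """Pull the session id that Crawlera assigned out of a response's headers."""
    return response_headers.get("X-Crawlera-Session")

# First request asks Crawlera to create a session...
first = session_headers()
# ...the response (mocked here) carries the assigned id...
sid = extract_session({"X-Crawlera-Session": "1234567890"})
# ...and every later request pins that id, reusing the same outgoing IP.
later = session_headers(sid)
```

Is this the right mechanism, and does reusing one session actually make the crawl faster, or does it just serialize everything through one IP?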

