Crawlera Basics

Getting started with Crawlera
Crawlera is a smart HTTP/HTTPS downloader that routes your requests through a pool of IP addresses, introducing delays and discarding IPs where necessary to...
Wed, 29 Mar, 2017 at 5:53 PM
Crawlera best practices
When using Crawlera, it's important to keep in mind the following best practices: Set download timeout One of the most common problems our users h...
Tue, 19 Jun, 2018 at 8:30 PM
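The download-timeout advice above can be sketched in Python. This is a minimal sketch, assuming the standard Crawlera proxy endpoint (`proxy.crawlera.com:8010`, API key as the proxy username with an empty password) and the third-party `requests` library; the API key and target URL are placeholders.

```python
def crawlera_proxies(api_key: str) -> dict:
    """Build a requests-style proxies mapping for the Crawlera endpoint.

    The API key is sent as the proxy username, with an empty password.
    """
    proxy = "http://{}:@proxy.crawlera.com:8010".format(api_key)
    return {"http": proxy, "https": proxy}

# A generous download timeout keeps slow-but-alive responses from being
# dropped while Crawlera introduces delays and rotates IPs on your behalf.
# Commented out to avoid a live network call; needs `requests` installed:
# import requests
# response = requests.get(
#     "https://example.com",          # placeholder target
#     proxies=crawlera_proxies("<your API key>"),
#     timeout=600,                    # seconds; tune for your target sites
# )
```

The exact timeout value depends on your sites and plan; the point is to set one well above your client library's default.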
Restricting Crawlera IPs to a specific region
First, go to your Crawlera Dashboard in Scrapinghub and click the 'Create Account' button: Next, under the 'Create New Crawlera Acco...
Fri, 6 Jul, 2018 at 4:54 PM
Fetching HTTPS pages with Crawlera
Crawlera performs HTTPS requests using the CONNECT method, transparently tunnelling packets over the default HTTPS port 443 to and from the destination server in a ...
Wed, 21 Aug, 2019 at 8:54 AM
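For illustration, this is roughly what the CONNECT handshake that an HTTP client sends to the proxy looks like before the TLS tunnel is established. A sketch only: the helper name is hypothetical, and it assumes Crawlera's usual Basic proxy auth with the API key as username and an empty password.

```python
import base64

def connect_request(host: str, api_key: str, port: int = 443) -> bytes:
    """Build the raw CONNECT request a client sends to the proxy before
    tunnelling TLS to host:port."""
    credentials = base64.b64encode("{}:".format(api_key).encode()).decode()
    lines = [
        "CONNECT {}:{} HTTP/1.1".format(host, port),
        "Host: {}:{}".format(host, port),
        "Proxy-Authorization: Basic {}".format(credentials),
        "",  # blank line terminates the request head
        "",
    ]
    return "\r\n".join(lines).encode()
```

After the proxy replies `200 Connection established`, everything that follows on the socket is the client's own TLS session with the destination server.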
Sending POST requests with Crawlera
Crawlera can process POST requests, and they are counted as single requests, just like GET requests. This is commonly used for crawling search results a...
Wed, 26 Apr, 2017 at 3:55 PM
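A POST through Crawlera looks just like any proxied POST; only the form body differs from a GET. A minimal sketch, assuming a hypothetical search endpoint and form fields, the standard `proxy.crawlera.com:8010` endpoint, and the third-party `requests` library:

```python
from urllib.parse import urlencode

# Hypothetical search form fields.
form = {"q": "web scraping", "page": "1"}
body = urlencode(form)  # what goes on the wire as the request body

# Commented out to avoid a live network call; needs `requests` and a
# real Crawlera API key:
# import requests
# proxy = "http://<API key>:@proxy.crawlera.com:8010"
# response = requests.post(
#     "https://example.com/search",   # placeholder endpoint
#     data=form,
#     proxies={"http": proxy, "https": proxy},
# )
```

Crawlera bills this as one request, the same as a GET.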
Customizing Crawlera User Agents
Crawlera will by default use a random desktop-like user agent with every request and discard the User-Agent header that the user sends. This is to further h...
Mon, 22 Apr, 2019 at 4:09 PM
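Since Crawlera discards the client's User-Agent by default, keeping your own requires opting out explicitly. A sketch assuming the `X-Crawlera-UA` request header with the `pass` value described in Crawlera's header documentation; the user-agent string here is a placeholder.

```python
def crawlera_headers(user_agent: str) -> dict:
    """Headers asking Crawlera to pass our User-Agent through instead of
    substituting a random desktop-like one."""
    return {
        "User-Agent": user_agent,     # your own UA string
        "X-Crawlera-UA": "pass",      # tell Crawlera not to replace it
    }
```

Send these headers on each proxied request; without `X-Crawlera-UA: pass`, the `User-Agent` you set is dropped.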
I am running a job on SC with Crawlera but the job is too slow. How can I increase the speed?
Crawlera is a proxy rotator that helps evade bans from the target website by adjusting delays, handling cookies and managing IPs. When Crawlera is add...
Fri, 10 Aug, 2018 at 5:07 AM
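Because Crawlera inserts per-IP delays, throughput comes from parallelism rather than from speeding up any single request. A sketch of issuing requests concurrently with a thread pool, up to your plan's concurrency limit; `fetch` is a hypothetical stand-in for a real download through the proxy.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Placeholder for a real proxied download, e.g.
    # requests.get(url, proxies={...}, timeout=600).
    return url

urls = ["https://example.com/page/{}".format(i) for i in range(10)]

# Run downloads in parallel instead of one at a time; keep max_workers
# at or below your Crawlera plan's concurrent-connection limit.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, urls))
```

In a Scrapy project the equivalent knob is raising the concurrency settings rather than managing threads yourself.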
541 Error for file size too big
You are receiving this error because you are using the Crawlera service to request content larger than 500 MB. To improve the stability and performa...
Fri, 1 Feb, 2019 at 6:01 AM