Answered
han 1 month ago in Scrapy Cloud • updated by Pablo Vaz (Support Engineer) 1 month ago

Hi there, I started using Scrapinghub a week ago and have been using it to scrape some e-commerce websites.


I've noticed that the crawl job for a particular website keeps ending prematurely without any error logs.

In some instances, when I visit the website myself, I find that I've been blocked.

So I activated Crawlera, but the result is the same.


What could I be missing?

Answer

Hi Han, even though we can't provide ban assistance or crawl tuning for standard accounts, there are some best practices you can keep in mind when enabling Crawlera for a particular project.


Please take a few minutes to explore the details in:

Crawlera best practices
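For context, enabling Crawlera in a Scrapy project usually comes down to a few lines in settings.py. The sketch below assumes the scrapy-crawlera middleware is installed; the API key is a placeholder, and the tuning values reflect the general recommendations from the best-practices guide above:

# settings.py -- minimal sketch, assuming scrapy-crawlera is installed
DOWNLOADER_MIDDLEWARES = {
    # Route all requests through Crawlera
    'scrapy_crawlera.CrawleraMiddleware': 610,
}
CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = '<your-api-key>'  # placeholder, use your own key

# Commonly recommended tuning when Crawlera manages throttling:
AUTOTHROTTLE_ENABLED = False   # let Crawlera pace the requests
DOWNLOAD_TIMEOUT = 600         # proxied responses can be slow; allow time
COOKIES_ENABLED = False        # avoid carrying session cookies across IPs

If jobs still end prematurely after this, checking the job's request stats for 503 responses coming back from Crawlera can help show whether the target site is still banning the proxy pool.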


Best regards,

Pablo
