Answered
noa.drach-crawlera 3 days ago in Crawlera • updated by Pablo Vaz (Support Engineer) yesterday at 7:09 p.m.

I have the C10 subscription, and when I try to make 10 parallel calls I get a "parallel connection limit reached" error.


I dispatch the calls to Crawlera in a simple loop:


for (var index = 0; index < 10; index++) { ... }


When I change the loop to run 9 calls it works fine, so it's not clear to me how the limit is being reached.


I contacted you on the support chat and got this response:

"The best way to ensure you make 10 concurrent requests and not go beyond that value is to set the concurrent_requests parameter to 10 in your crawlera settings."


This is my only Crawlera-related code:

var request = require('request');

var new_req = request.defaults({
    proxy: 'http://<API key>:@proxy.crawlera.com:8010'
});

So it's not clear to me what "Crawlera settings" means.

Answer

Hey Noa, I saw your support request through Freshdesk, and Thriveni is assisting you.

We are here for any further inquiries. About the question you posted here, the best way to use Crawlera with Node.js is covered in: https://doc.scrapinghub.com/crawlera.html#node-js
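
As a rough illustration (a generic client-side throttling pattern, not taken from that page; the LIMIT value, queue, and httpbin URLs are just placeholders), you can keep a counter of in-flight requests so the client never dispatches more than your plan's limit at once:

var request = require('request');

// Same `request` package and proxy setup as in your snippet;
// the API key is a placeholder.
var proxiedRequest = request.defaults({
    proxy: 'http://<API key>:@proxy.crawlera.com:8010'
});

var LIMIT = 10;  // C10 plan: at most 10 concurrent requests
var queue = [];  // illustrative list of URLs to fetch
for (var i = 0; i < 30; i++) {
    queue.push('http://httpbin.org/get?page=' + i);
}
var inFlight = 0;

function dispatch(url) {
    inFlight++;
    proxiedRequest(url, function (err, res) {
        inFlight--;
        if (err) {
            console.error(url, err.message);
        } else {
            console.log(res.statusCode, url);
        }
        drain();  // a slot has freed up; start the next queued URL
    });
}

function drain() {
    // Fill free slots without ever exceeding LIMIT in-flight requests.
    while (inFlight < LIMIT && queue.length > 0) {
        dispatch(queue.shift());
    }
}

drain();

With this pattern a new request is only dispatched when a previous one completes, so the client stays at or below the limit even if some responses are slow.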

Crawlera settings are available for Scrapy projects, if you are interested in trying them:

http://scrapy-crawlera.readthedocs.io/en/latest/
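
For context, here is a minimal sketch of what those settings look like in a Scrapy project's settings.py (middleware and setting names per the scrapy-crawlera docs linked above; the API key is a placeholder):

# settings.py -- enable the Crawlera middleware (pip install scrapy-crawlera)
DOWNLOADER_MIDDLEWARES = {
    'scrapy_crawlera.CrawleraMiddleware': 610,
}

CRAWLERA_ENABLED = True
CRAWLERA_APIKEY = '<API key>'  # placeholder

# Cap Scrapy's own parallelism at the plan limit (C10 -> 10), matching
# the "concurrent_requests parameter" mentioned in the chat reply.
CONCURRENT_REQUESTS = 10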


Kind regards,

Pablo
