I have a project that scrapes sites using an R script with RSelenium. My R code talks to a Selenium server running in the pre-built Docker image selenium/standalone-firefox.
Can anyone point me to how I go about integrating Crawlera into this workflow?
Is it correct that I need to install crawlera-headless-proxy? I've tried that, but the crawlera-headless-proxy command never returns, and the IP address seen by the target site remains unchanged.
Is there a way of making a crawlera-headless-proxy docker container work with/around a selenium docker container?
I replied on the support ticket you created for the same issue; posting it here too so others can benefit.
Regarding crawlera-headless-proxy never returning: that is expected. The proxy stays running in the foreground and prints no output until a request is actually sent through it. To see what it is doing, start it with the -d option to run it in debug mode.
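To make a crawlera-headless-proxy container work alongside the Selenium container, one approach is to run both on the same Docker network so the Selenium container can reach the proxy by service name. The sketch below is only a starting point under assumptions: the image name, the command-line flags, and the service names should be checked against the crawlera-headless-proxy README, and the API key placeholder must be replaced with your own.

```
# docker-compose.yml — sketch only, names/flags are assumptions to verify
version: "3"
services:
  headless-proxy:
    image: scrapinghub/crawlera-headless-proxy
    # -d enables debug output; -a passes the Crawlera API key
    command: ["-d", "-a", "<YOUR_CRAWLERA_API_KEY>"]
    ports:
      - "3128:3128"
  selenium:
    image: selenium/standalone-firefox
    ports:
      - "4444:4444"
    depends_on:
      - headless-proxy
```

With this layout, you would configure Firefox (via the proxy capabilities or profile you pass from RSelenium) to use http://headless-proxy:3128 as its HTTP/HTTPS proxy. Note that because the headless proxy intercepts TLS, you may also need to have Firefox accept its certificates (for example via an insecure-certs capability) or install the proxy's CA certificate in the browser profile.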