Making use of the PHP binding for the libcurl library:


<?php

$ch = curl_init();

$url = '';
$proxy = '';
$proxy_auth = '<API KEY>:';

curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_PROXY, $proxy);
curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxy_auth);
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch, CURLOPT_TIMEOUT, 180);
curl_setopt($ch, CURLOPT_CAINFO, '/path/to/crawlera-ca.crt'); // required for HTTPS
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true); // required for HTTPS

$scraped_page = curl_exec($ch);

if ($scraped_page === false) {
    echo 'cURL error: ' . curl_error($ch);
} else {
    echo $scraped_page;
}

curl_close($ch);


Please be sure to download the certificate provided on your Crawlera settings page and set the correct path to the file in your script.
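A wrong path passed to CURLOPT_CAINFO typically surfaces only as an opaque SSL error at request time, so it can help to verify the certificate file up front. A minimal sketch (the function name and path are illustrative, not part of Crawlera's API):

```php
<?php

// Hedged sketch: confirm the downloaded CA certificate is usable before
// passing it to CURLOPT_CAINFO. The path is a placeholder; use wherever
// you saved the file from your Crawlera settings page.
function caCertIsUsable(string $caPath): bool
{
    // Readable and non-empty is a reasonable sanity check.
    return is_readable($caPath) && filesize($caPath) > 0;
}
```

Calling this before `curl_setopt($ch, CURLOPT_CAINFO, ...)` lets you fail fast with a clear message instead of a generic SSL verification error.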

Making use of Guzzle, a PHP HTTP client, in the context of the Symfony framework:


<?php

namespace AppBundle\Controller;

use GuzzleHttp\Client;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Component\HttpFoundation\Response;

class CrawleraController extends Controller
{
    /**
     * @Route("/crawlera", name="crawlera")
     */
    public function crawlAction()
    {
        $url = '';
        $client = new Client(['base_uri' => $url]);
        $crawler = $client->get($url, ['proxy' => 'http://<API KEY>'])->getBody();

        return new Response(
            '<html><body> '.$crawler.' </body></html>'
        );
    }
}
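Guzzle's `proxy` option takes an ordinary proxy URL in which the API key acts as the username and the password is left empty. A small hypothetical helper (not part of Crawlera's documented API; the host and port arguments are placeholders, so take the real values from your Crawlera settings) that assembles that string:

```php
<?php

// Hedged sketch: build the proxy URL string Guzzle expects, with the
// API key as the username and an empty password. Host and port are
// caller-supplied assumptions, not hard-coded Crawlera values.
function buildProxyUrl(string $apiKey, string $host, int $port): string
{
    // rawurlencode() keeps the key safe if it ever contains reserved characters.
    return sprintf('http://%s:@%s:%d', rawurlencode($apiKey), $host, $port);
}
```

The result can be passed directly as the `proxy` request option, e.g. `$client->get($url, ['proxy' => buildProxyUrl($key, $host, $port)])`.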