Here is a code snippet that illustrates how to use Crawlera with the Python Requests library:

import requests

url = ""          # target URL to fetch
proxy_host = ""   # Crawlera proxy host
proxy_port = "8010"
proxy_auth = "<APIKEY>:"  # Make sure to include ':' at the end
proxies = {
    "https": "https://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
    "http": "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port),
}

# verify=False because the proxy re-signs HTTPS responses; alternatively,
# install the Crawlera CA certificate and keep verification enabled.
r = requests.get(url, proxies=proxies, verify=False)

print("""
Requesting [{}]
through proxy [{}]

Request Headers:
{}

Response Time: {}
Response Code: {}
Response Headers:
{}

Response Body:
{}
""".format(url, proxy_host, r.request.headers, r.elapsed.total_seconds(),
           r.status_code, r.headers, r.text))
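The proxy URL embeds the credentials in the standard `user:password@host:port` form: the API key is the username and the password is empty, which is why `proxy_auth` keeps a trailing `:`. A minimal sketch of the assembly, using made-up values for illustration:

```python
# The key and host below are hypothetical placeholders, not real credentials.
proxy_auth = "MYAPIKEY:"          # API key as username, empty password
proxy_host = "proxy.example.com"  # assumed host name for illustration
proxy_port = "8010"

proxy_url = "http://{}@{}:{}/".format(proxy_auth, proxy_host, proxy_port)
print(proxy_url)  # http://MYAPIKEY:@proxy.example.com:8010/
```

Both the `"http"` and `"https"` entries of the `proxies` dict are built the same way, differing only in scheme.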

Note: This code requires Requests version 2.18 or later. Earlier versions can fail with 407 authentication errors when using proxy credentials, so make sure your installed Requests is at least 2.18.