With Crawlera, each request is routed through a different outgoing IP. To make several requests through the same IP, you need to use sessions: each session is internally mapped to a single outgoing IP. 
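As a rough sketch of this header logic (the helper function below is illustrative, not part of the Crawlera API): the first request sends X-Crawlera-Session: create, and subsequent requests reuse the session ID that Crawlera returns in its response headers.

```python
def crawlera_session_headers(session_id=None):
    """Build the session header for a Crawlera request.

    With no session ID, ask Crawlera to create a new session;
    otherwise reuse the given session (and thus the same outgoing IP).
    """
    if session_id is None:
        return {'X-Crawlera-Session': 'create'}  # request a new session
    return {'X-Crawlera-Session': session_id}    # pin the same outgoing IP

# First request creates a session; later requests reuse the ID
# that Crawlera returned (the value here is a made-up example).
first_headers = crawlera_session_headers()
later_headers = crawlera_session_headers('1904abf0697a')
```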

Handling bans

If you get banned while using a session, you will receive a 503 response. This differs from using Crawlera without sessions, where outgoing IPs are rotated automatically. 

So, when using sessions, you need to handle bans yourself, by requesting a new session when the current one gets banned.
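A minimal sketch of that ban-handling decision (the function name and shape are illustrative assumptions, not part of the Crawlera API): on a 503, discard the current session and ask Crawlera to create a fresh one; otherwise keep the session as-is.

```python
def headers_after_response(status, session_id):
    """Return the session header to use for the next request.

    A 503 while using a session means the session's IP is banned,
    so we drop the old session ID and request a new session.
    """
    if status == 503:
        return {'X-Crawlera-Session': 'create'}  # banned: start over
    return {'X-Crawlera-Session': session_id}    # keep the current session
```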

Session duration

Sessions expire 30 minutes after their last use.
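One way to account for this on the client side is to track when each session was last used and stop reusing IDs that are likely expired. The class below is a hypothetical sketch, not part of any Crawlera client library; Crawlera itself expires the session server-side.

```python
import time

SESSION_TTL = 30 * 60  # seconds: sessions expire 30 minutes after last use

class SessionTracker:
    """Track a session's last-use time so stale IDs are not reused."""

    def __init__(self, session_id, now=None):
        self.session_id = session_id
        self.last_used = time.time() if now is None else now

    def touch(self, now=None):
        # Call this every time the session is used for a request
        self.last_used = time.time() if now is None else now

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.last_used > SESSION_TTL
```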

Number of sessions

The limit on concurrent sessions is 100 for the C10 plan and 5000 for all other plans. When the limit is exceeded, Crawlera returns a 400 response with user_session_limit in the X-Crawlera-Error response header.
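Detecting this condition is a simple check on the status code and error header; the helper below is a sketch (the function name is an assumption, only the status code and header come from the text above).

```python
def session_limit_exceeded(status, headers):
    """True if Crawlera rejected the request for exceeding the session limit."""
    return status == 400 and headers.get('X-Crawlera-Error') == 'user_session_limit'
```

When this returns True, a crawler should stop creating sessions (or release existing ones) before retrying.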

If you are using Python and Scrapy, you can set up sessions like this:

# -*- coding: utf-8 -*-
import scrapy

class ToScrapeCSSSpider(scrapy.Spider):
    name = "toscrape-css"

    def start_requests(self):
        # Ask Crawlera to create a new session on the first request
        yield scrapy.Request(
            url="http://quotes.toscrape.com/",
            headers={'X-Crawlera-Session': 'create'},
            callback=self.parse,
        )

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                'text': quote.css("span.text::text").extract_first(),
                'author': quote.css("small.author::text").extract_first(),
                'tags': quote.css("div.tags > a.tag::text").extract(),
            }

        next_page_url = response.css("li.next > a::attr(href)").extract_first()
        if next_page_url is not None:
            # Reuse the session that Crawlera assigned to the first request
            session_id = response.headers.get('X-Crawlera-Session', '')
            yield scrapy.Request(
                url=response.urljoin(next_page_url),
                headers={'X-Crawlera-Session': session_id},
                callback=self.parse,
            )
For more information, please check the Crawlera Sessions API.