With Crawlera, each request is routed through a different outgoing IP. To make multiple requests through the same IP, you need to use sessions: each session maps internally to a single outgoing IP.


Handling bans

If you get banned while using sessions, you will receive a 503 response. This differs from using Crawlera without sessions, where outgoing IPs are rotated automatically.

So, when using sessions, you need to handle bans yourself by requesting a new session whenever the current one gets banned.
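As a minimal sketch of that decision, a helper like the one below (the function name and structure are my own, not part of the Crawlera API) can pick the next `X-Crawlera-Session` header: on a 503 it discards the banned session and asks Crawlera to create a fresh one; on any other status it keeps reusing the current session ID.

```python
def next_session_header(status, session_id):
    """Pick the X-Crawlera-Session header for the next request.

    Hypothetical helper: a 503 means the session's outgoing IP was
    banned, so we drop it and send 'create' to get a new session;
    otherwise we keep riding the same session (and the same IP).
    """
    if status == 503:
        return {'X-Crawlera-Session': 'create'}
    return {'X-Crawlera-Session': session_id}
```

The same check could live in a Scrapy downloader middleware or in each callback; the point is simply that session rotation is your responsibility once you opt into sessions.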


Session duration

Sessions expire 30 minutes after their last use.
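Because of this, a long-running crawler may want to track when its session was last used and request a fresh one once it may have lapsed. A small illustrative sketch (the class name and structure are mine, not part of any Crawlera client library):

```python
import time

SESSION_TTL = 30 * 60  # seconds; sessions expire 30 minutes after last use


class SessionTracker:
    """Illustrative helper that remembers when a session was last used."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock      # injectable clock, handy for testing
        self.session_id = None
        self._last_used = None

    def touch(self, session_id):
        """Record that this session ID was just used on a request."""
        self.session_id = session_id
        self._last_used = self._clock()

    def needs_new_session(self):
        """True if we have no session yet, or it may have expired."""
        if self._last_used is None:
            return True
        return self._clock() - self._last_used >= SESSION_TTL
```

When `needs_new_session()` returns True, send the next request with `X-Crawlera-Session: create` instead of the stored session ID.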


To set this up in code with Python and Scrapy, you would use sessions like this:


# -*- coding: utf-8 -*-
import scrapy


class ToScrapeCSSSpider(scrapy.Spider):
    name = "toscrape-css"
    
    def start_requests(self):
        # 'create' asks Crawlera to start a new session for this request
        yield scrapy.Request(
            'http://quotes.toscrape.com/',
            headers={'X-Crawlera-Session': 'create'},
            callback=self.parse,
        )

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                'text': quote.css("span.text::text").extract_first(),
                'author': quote.css("small.author::text").extract_first(),
                'tags': quote.css("div.tags > a.tag::text").extract()
            }

        next_page_url = response.css("li.next > a::attr(href)").extract_first()
        if next_page_url is not None:
            # Reuse the session ID Crawlera returned so the next page
            # goes out through the same IP (header values are bytes)
            session_id = response.headers.get('X-Crawlera-Session', b'').decode()
            yield scrapy.Request(
                response.urljoin(next_page_url),
                headers={'X-Crawlera-Session': session_id},
            )




For more information, see the Crawlera Sessions API documentation.