To use Crawlera with Puppeteer, set the ignoreHTTPSErrors option of the puppeteer.launch method to true, point the --proxy-server Chromium flag at Crawlera's host and port, and send the Crawlera credentials in the Proxy-Authorization header by means of the page.setExtraHTTPHeaders method.


A sample Node.js script:


const puppeteer = require('puppeteer');

(async () => {
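    // Launch Chromium with Crawlera as the proxy; ignoreHTTPSErrors is set
    // because the proxy intercepts HTTPS traffic with its own certificates.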
    const browser = await puppeteer.launch({
        ignoreHTTPSErrors: true,
        args: [
            `--proxy-server=proxy.crawlera.com:8010`
        ]
    });
    const page = await browser.newPage();

    // Authenticate against Crawlera: the API key is the username and the password is left empty.
    await page.setExtraHTTPHeaders({
        'Proxy-Authorization': 'Basic ' + Buffer.from('<CRAWLERA_APIKEY>:').toString('base64'),
    });
    
    console.log('Opening page ...');
    try {
        await page.goto('https://httpbin.scrapinghub.com/redirect/6', {timeout: 180000});
    } catch(err) {
        console.log(err);
    }
  
    console.log('Taking a screenshot ...');
    await page.screenshot({path: 'screenshot.png'});
    await browser.close();
})();
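

To run the sample, install the puppeteer package (npm install puppeteer), replace <CRAWLERA_APIKEY> with your Crawlera API key, and launch the script with Node.js, e.g. node sample.js (the file name is arbitrary). On success, a screenshot.png file is written to the working directory.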


To speed up page rendering, it is recommended to filter out requests for static files. For instance, the following block of code, added before the Proxy-Authorization header is set, excludes images from the loaded page:


await page.setRequestInterception(true);
page.on('request', request => {
    // resourceType() is a method that returns a string such as 'image' or 'script'.
    if (request.resourceType() === 'image')
        request.abort();
    else
        request.continue();
});
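

The same approach extends to other static assets. Below is a minimal sketch, assuming the same page object as above; the list of blocked resource types is illustrative, not exhaustive:


await page.setRequestInterception(true);
page.on('request', request => {
    // Resource types commonly treated as static assets; this list is only
    // a starting point and should be tuned to the target site.
    const skipped = ['image', 'stylesheet', 'font', 'media'];
    if (skipped.includes(request.resourceType()))
        request.abort();
    else
        request.continue();
});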


See request.resourceType() and page.setRequestInterception(value) for more details.