
Headless Chrome Crawler

Distributed crawler powered by Headless Chrome

Features

Crawlers based on simple requests to HTML files are generally fast. However, they sometimes end up capturing empty bodies, especially when the websites are built on modern frontend frameworks such as AngularJS, React and Vue.js.

Powered by Headless Chrome, the crawler provides simple APIs to crawl these dynamic websites with the following features:

  • Distributed crawling
  • Configure concurrency, delay and retries (see the sketch after this list)
  • Support both depth-first search and breadth-first search algorithms
  • Pluggable cache storage such as Redis
  • Support CSV and JSON Lines for exporting results
  • Pause at the max request count and resume at any time
  • Insert jQuery automatically for scraping
  • Save screenshots as crawling evidence
  • Emulate devices and user agents
  • Priority queue for crawling efficiency
  • Obey robots.txt
  • Follow sitemap.xml
  • Promise support
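Most of these features are toggled through options passed to HCCrawler.launch() or crawler.queue(). Below is a minimal sketch combining a few of them; the option and method names (maxConcurrency, delay, retryCount, maxRequest, obeyRobotsTxt, setMaxRequest, resume) come from the API reference, while the values and URL are illustrative only:

const HCCrawler = require('headless-chrome-crawler');

(async () => {
  const crawler = await HCCrawler.launch({
    maxConcurrency: 1, // Crawl one page at a time (required when delay is set)
    delay: 1000, // Wait a second after each request
    retryCount: 3, // Retry failed requests up to three times
    maxRequest: 10, // Pause automatically after ten requests
    obeyRobotsTxt: true, // Skip pages disallowed by robots.txt
    onSuccess: (result => {
      console.log(result.response.url);
    }),
  });
  await crawler.queue('https://example.com/');
  await crawler.onIdle(); // Temporarily idle after reaching maxRequest
  crawler.setMaxRequest(20); // Raise the limit...
  crawler.resume(); // ...and resume crawling
  await crawler.onIdle();
  await crawler.close();
})();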

Getting Started

Installation

yarn add headless-chrome-crawler
# or "npm i headless-chrome-crawler"

Note: headless-chrome-crawler contains Puppeteer. During installation, it automatically downloads a recent version of Chromium. To skip the download, see Environment variables.
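For example, Puppeteer honors the PUPPETEER_SKIP_CHROMIUM_DOWNLOAD environment variable at install time, so something like the following should skip the download; you would then point the crawler at an existing Chrome binary, e.g. through the executablePath option that HCCrawler.launch() forwards to puppeteer.launch():

PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true yarn add headless-chrome-crawler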

Usage

const HCCrawler = require('headless-chrome-crawler');

(async () => {
  const crawler = await HCCrawler.launch({
    // Function to be evaluated in browsers
    evaluatePage: (() => ({
      title: $('title').text(),
    })),
    // Function to be called with evaluated results from browsers
    onSuccess: (result => {
      console.log(result);
    }),
  });
  // Queue a request
  await crawler.queue('https://example.com/');
  // Queue multiple requests
  await crawler.queue(['https://example.net/', 'https://example.org/']);
  // Queue a request with custom options
  await crawler.queue({
    url: 'https://example.com/',
    // Emulate a tablet device
    device: 'Nexus 7',
    // Enable screenshot by passing options
    screenshot: {
      path: './tmp/example-com.png'
    },
  });
  await crawler.onIdle(); // Resolved when no queue is left
  await crawler.close(); // Close the crawler
})();

Examples

See here for the full examples list. The examples can be run from the root folder as follows:

NODE_PATH=../ node examples/priority-queue.js

API reference

See here for the API reference.

Tips

Distributed crawling

To crawl in distributed mode, use Redis as the shared cache storage. You can then run the same script on multiple machines, and Redis is used to share and distribute the task queues among them.

const HCCrawler = require('headless-chrome-crawler');
const RedisCache = require('headless-chrome-crawler/cache/redis');

const TOP_PAGES = [
  // ...
];

const cache = new RedisCache({
  // ...
});

(async () => {
  const crawler = await HCCrawler.launch({
    maxDepth: 3,
    cache,
  });
  await crawler.queue(TOP_PAGES);
})();

Launch options

HCCrawler.launch()'s options are passed to puppeteer.launch(). It may be useful to set the headless and slowMo options so that you can see what is going on.

HCCrawler.launch({ headless: false, slowMo: 10 });

Also, the args option is passed to the browser instance. A list of Chromium flags can be found here. Passing the --disable-web-security flag is useful for crawling: when the flag is set, links within iframes are collected as links of their parent frames; when it's not, the iframes' src attributes are collected as links.

HCCrawler.launch({ args: ['--disable-web-security'] });

Running tests

All tests except RedisCache's are run with the following command:

yarn test

When you modify RedisCache's code, make sure Redis is installed, start its server, and run all the tests with the following command:

yarn test-all

Enable debug logging

All requests and browser logs are logged via the debug module under the hccrawler namespace.

env DEBUG="hccrawler:*" node script.js
env DEBUG="hccrawler:request" node script.js
env DEBUG="hccrawler:browser" node script.js

FAQ

How is this different from other crawlers?

There are roughly two types of crawlers. One is static and the other is dynamic.

Static crawlers are based on simple requests to HTML files. They are generally fast, but fail to scrape the content when the HTML is rendered dynamically in the browser.

Dynamic crawlers based on PhantomJS and Selenium work magically on such dynamic applications. However, PhantomJS's maintainer has stepped down and recommended switching to Headless Chrome, which is fast and stable. Selenium is still a well-maintained cross-browser platform that runs on Chrome, Safari, IE and so on; however, crawlers do not need such cross-browser support.

This crawler is dynamic and based on Headless Chrome.

How is this different from Puppeteer?

This crawler is built on top of Puppeteer.

Puppeteer provides low- to mid-level APIs for manipulating Headless Chrome, so you can build your own crawler with it. This way you have more control over which features to implement in order to satisfy your needs.
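For comparison, here is a rough sketch of fetching a page title with Puppeteer alone, using its documented launch, newPage, goto and evaluate APIs; note that queueing, retries and link following are all left for you to implement:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/');
  // Extract the title yourself; there is no built-in queue, retry or link following
  const title = await page.evaluate(() => document.title);
  console.log({ title });
  await browser.close();
})();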

However, most crawlers require common features such as following links, obeying robots.txt and so on. This crawler is a general solution for most crawling purposes. If you want to quickly start crawling with Headless Chrome, this crawler is for you.
