Crawler for news feeds based on StormCrawler. Produces WARC files to be stored as part of the CommonCrawl dataset.
- Install Elasticsearch 2.3.1 and Kibana 4.5.1
- Install Apache Storm 1.0
- Clone and compile [https://github.com/DigitalPebble/sc-warc] with
mvn clean install
- Clone and compile [https://github.com/DigitalPebble/storm-crawler] with
mvn clean install
- Start ES and Storm
- Build ES indices with:
curl -L "https://git.io/vaGkv" | bash
The default configuration should work out of the box. The only thing to do is to configure the user agent properties sent in the HTTP request header: open the file conf/crawler-conf.yaml in an editor and fill in the value for http.agent.name as well as all further properties starting with the http.agent. prefix.
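As a sketch, the relevant part of conf/crawler-conf.yaml typically contains the following http.agent.* properties; the values below are placeholders and should be replaced with your own details:

http.agent.name: "ExampleNewsBot"
http.agent.version: "1.0"
http.agent.description: "news feed crawler"
http.agent.url: "http://www.example.com/bot.html"
http.agent.email: "bot@example.com"

These values identify your crawler to the webmasters of the sites being fetched.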
Generate an uberjar:
mvn clean package
Inject some URLs with
storm jar target/crawler-1.0-SNAPSHOT.jar com.digitalpebble.stormcrawler.elasticsearch.ESSeedInjector . seeds/feeds.txt -conf conf/es-conf.yaml -conf conf/crawler-conf.yaml -local
This pushes the newsfeed seeds to the status index and has to be done every time new seeds are added. To delete seeds, use a delete-by-query on the ES status index, or wipe the index clean and reindex everything.
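A seed file is plain text with one feed URL per line; StormCrawler's file spout typically also accepts optional metadata appended as tab-separated key=value pairs. A purely illustrative seeds/feeds.txt (the URLs and the metadata key are made up):

http://www.example.com/news/feed.xml	isFeed=true
http://news.example.org/rss	isFeed=true

To wipe the status index clean before reindexing, one option is to delete the index and recreate it with the script used above, then re-run the injector:

curl -XDELETE 'http://localhost:9200/status/'
curl -L "https://git.io/vaGkv" | bash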
You can check that the URLs have been injected on [http://localhost:9200/status/_search?pretty].
You can then run the crawl topology with:
storm jar target/crawler-1.0-SNAPSHOT.jar com.digitalpebble.stormcrawler.CrawlTopology -conf conf/es-conf.yaml -conf conf/crawler-conf.yaml
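If you only want to test the topology without submitting it to the Storm cluster, you can run it in Storm local mode by appending -local, as done for the injector above:

storm jar target/crawler-1.0-SNAPSHOT.jar com.digitalpebble.stormcrawler.CrawlTopology -conf conf/es-conf.yaml -conf conf/crawler-conf.yaml -local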
The topology will create WARC files in the directory specified in the configuration under the key warc.dir. This directory must be created beforehand.
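For instance, assuming the WARC files should go to /data/warc (a hypothetical path), create the directory first:

mkdir -p /data/warc

and set the corresponding key in conf/crawler-conf.yaml:

warc.dir: "/data/warc"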
See instructions on [https://github.com/DigitalPebble/storm-crawler/tree/master/external/elasticsearch] to install the templates for Kibana.
Build the Docker image from the Dockerfile:
docker build -t newscrawler:1.0 .
Note: the uberjar is included in the Docker image and needs to be built first.
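In other words, when rebuilding, run the Maven build before the Docker build:

mvn clean package
docker build -t newscrawler:1.0 .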
Launch an interactive container:
docker run --net=host \
-p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 \
-p 5601:5601 -p 8080:8080 \
-v .../newscrawl/elasticsearch:/data/elasticsearch \
-v .../newscrawl/warc:/data/warc \
--rm -i -t newscrawler:1.0 /bin/bash
NOTE: don't forget to adapt the paths of the mounted volumes used to persist data on the host.
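For example, assuming the data should live under /var/data/newscrawl on the host (a hypothetical location), create the directories first and substitute them for the ... placeholders above:

mkdir -p /var/data/newscrawl/elasticsearch /var/data/newscrawl/warc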
CAVEAT: Make sure that the Elasticsearch port 9200 is not already in use or mapped by a running ES instance. Otherwise ES commands may affect the running instance!
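A quick, non-intrusive way to check is to query the port before starting the container; if another Elasticsearch instance is already listening, it will answer with its cluster name:

curl -s http://localhost:9200/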
The crawler is launched in the running container by the script
/home/ubuntu/news-crawler/bin/run-crawler.sh
After 1-2 minutes, if everything is up, you can connect to Elasticsearch on port 9200 or Kibana on port 5601.
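For example, to verify from the host that Elasticsearch is reachable and that the status index is being filled:

curl 'http://localhost:9200/_cluster/health?pretty'
curl 'http://localhost:9200/status/_count?pretty'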