NEWS-CRAWL

Crawler for news feeds based on StormCrawler. Produces WARC files to be stored as part of the CommonCrawl dataset.

Prerequisites

A running Elasticsearch instance and an Apache Storm installation are required to run the crawl, along with Maven to build the project. Alternatively, the Docker setup described below runs these services inside a single container.

Configuration

The default configuration should work out of the box. The only required step is to configure the user agent properties sent in the HTTP request header: open the file conf/crawler-conf.yaml in an editor and fill in the value for http.agent.name as well as all further properties starting with the http.agent. prefix.
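
For reference, the relevant part of conf/crawler-conf.yaml might look like the sketch below; the values are placeholders to replace with your own details, and the property names are the standard StormCrawler user agent keys:

# identifies the crawler to the sites it visits
http.agent.name: "mycrawler"
http.agent.version: "1.0"
http.agent.description: "news feed crawler"
http.agent.url: "http://www.example.com/"
http.agent.email: "crawler@example.com"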

Run the crawl

Generate an uberjar:

mvn clean package

Inject the seed URLs with:

storm jar target/crawler-1.0-SNAPSHOT.jar com.digitalpebble.stormcrawler.elasticsearch.ESSeedInjector . seeds/feeds.txt -conf conf/es-conf.yaml -conf conf/crawler-conf.yaml -local

This pushes the news feed seeds to the status index and has to be repeated every time new seeds are added. To delete seeds, either delete them by query in the Elasticsearch index, or wipe the index clean and re-inject everything.
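
For illustration, assuming an Elasticsearch version that provides the _delete_by_query API (5.x and later), removing a subset of seeds could look like this; the prefix query is a placeholder and only works if the url field is indexed verbatim:

# delete all seeds whose URL starts with a given prefix
curl -XPOST 'http://localhost:9200/status/_delete_by_query' \
     -H 'Content-Type: application/json' \
     -d '{"query": {"prefix": {"url": "http://example.com/"}}}'

# or wipe the status index completely and re-inject all seeds afterwards
curl -XDELETE 'http://localhost:9200/status'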

You can check that the URLs have been injected at http://localhost:9200/status/_search?pretty.
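
The same check from the command line, for example with curl:

curl 'http://localhost:9200/status/_search?pretty&size=5'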

You can then run the crawl topology with:

storm jar target/crawler-1.0-SNAPSHOT.jar com.digitalpebble.stormcrawler.CrawlTopology -conf conf/es-conf.yaml -conf conf/crawler-conf.yaml

The topology will create WARC files in the directory specified in the configuration under the key warc.dir. This directory must be created beforehand.
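
For example, with warc.dir set to /data/warc (the path also used by the Docker setup below):

# create the WARC output directory before starting the topology
mkdir -p /data/warc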

Monitor the crawl

See the instructions at https://github.com/DigitalPebble/storm-crawler/tree/master/external/elasticsearch to install the templates for Kibana.

Run the Crawl from a Docker Container

Build the Docker image from the Dockerfile:

docker build -t newscrawler:1.0 .

Note: the uberjar is included in the Docker image and therefore needs to be built first.
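
A full build from scratch is therefore the two steps in order:

# build the uberjar first, then bake it into the image
mvn clean package
docker build -t newscrawler:1.0 .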

Launch an interactive container:

docker run --net=host \
    -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 \
    -p 5601:5601 -p 8080:8080 \
    -v .../newscrawl/elasticsearch:/data/elasticsearch \
    -v .../newscrawl/warc:/data/warc \
    --rm -i -t newscrawler:1.0 /bin/bash

NOTE: don't forget to adapt the paths of the mounted volumes used to persist data on the host.

CAVEAT: Make sure that the Elasticsearch port 9200 is not already in use or mapped by a running ES instance. Otherwise ES commands may affect the running instance!
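
One way to check beforehand (a sketch; any equivalent tool works):

# list sockets already listening on the Elasticsearch ports
ss -ltn | grep -E ':(9200|9300)'

# or probe directly: a JSON banner means an ES instance is already running
curl -s http://localhost:9200/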

The crawler is launched in the running container by the script

/home/ubuntu/news-crawler/bin/run-crawler.sh

After 1-2 minutes, if everything is up, you can connect to Elasticsearch on port 9200 or Kibana on port 5601.
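
A quick way to verify that Elasticsearch inside the container is reachable:

# should report the cluster status once ES is ready
curl 'http://localhost:9200/_cluster/health?pretty'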
