ecs_s3_scraper

A web scraper written in Nightmare.js that outputs HTML files to AWS S3, and can be deployed to AWS ECS.

Goal

The goal of this repo is to take a sample web scraper that runs on a local machine and, through a series of development and tooling steps, deploy it as a service on AWS ECS.

How to get the most out of this repo

Click on a step link to see instructions for running the code in that step. Each step also walks through the code that was written and the issues that were encountered along the way. If you would like to run the final result, go to the Step 5 folder.

Outline of steps

Set up the Nightmare.js example web scraper on the local machine.

Note: If you have followed the steps in ecs_s3_scraper_starter, you can skip this step. The dev log still describes how we got there, if you are interested.
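The starting point for this step is essentially Nightmare's own canonical example: search DuckDuckGo and print the first result's link. The sketch below reproduces that example; the selector IDs come from Nightmare's README and may break if the site's markup changes (run `npm install nightmare` first).

```javascript
// Minimal Nightmare scraper: search DuckDuckGo, print the first result link.
const Nightmare = require('nightmare');
const nightmare = Nightmare({ show: true }); // show: false for headless runs

nightmare
  .goto('https://duckduckgo.com')
  .type('#search_form_input_homepage', 'github nightmare')
  .click('#search_button_homepage')
  .wait('#r1-0 a.result__a')
  .evaluate(() => document.querySelector('#r1-0 a.result__a').href)
  .end()
  .then((link) => console.log(link))
  .catch((error) => console.error('Search failed:', error));
```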

Expand on the example to allow users to pass a keyword as an argument, and save the resulting HTML file to the local machine.

Dockerize the web scraper, i.e. run the code in a Docker container, and obtain the resulting HTML file.
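A rough Dockerfile for this step might look like the sketch below. The base image, package list, and `xvfb-run` wrapper are assumptions rather than the repo's exact setup: Nightmare drives Electron, which expects a display, so headless containers typically run it under a virtual X server.

```dockerfile
# Sketch only — adjust versions and dependencies to your environment.
FROM node:12-slim

# Electron's runtime dependencies plus xvfb for a virtual display.
RUN apt-get update && apt-get install -y \
    xvfb libgtk-3-0 libnss3 libxss1 libasound2 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# xvfb-run gives Electron the X display it expects.
CMD ["xvfb-run", "--", "node", "scrape.js", "example-keyword"]
```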

Configure a mock AWS S3 running in a Docker container, and write the HTML file from the dockerized web scraper to it.
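Writing to a mock S3 is mostly a matter of pointing the AWS SDK at the mock's endpoint instead of the real service. In the sketch below, the endpoint URL, bucket name, and dummy credentials are placeholders (assumptions) for whatever S3-compatible container you run locally:

```javascript
// Sketch: upload scraped HTML to an S3-compatible endpoint.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  endpoint: 'http://localhost:4566', // mock S3 container endpoint (assumption)
  s3ForcePathStyle: true,            // path-style URLs work better with mocks
  accessKeyId: 'test',               // dummy credentials for the mock
  secretAccessKey: 'test',
});

function uploadHtml(keyword, html) {
  return s3
    .putObject({
      Bucket: 'scraper-output', // hypothetical bucket name
      Key: `${keyword}.html`,
      Body: html,
      ContentType: 'text/html',
    })
    .promise();
}
```

Against the real S3 in step 5, the same code works by dropping the `endpoint` override and letting the SDK pick up credentials from the environment or the ECS task role.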

Deploy the dockerized web scraper to AWS ECS, and write the HTML file to the real AWS S3!
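On ECS, the container is described by a task definition. The fragment below is a hedged sketch: the account ID, region, role name, and image URI are placeholders, and the task role is what lets the container write to the real S3 without baked-in credentials.

```json
{
  "family": "ecs-s3-scraper",
  "taskRoleArn": "arn:aws:iam::123456789012:role/scraper-s3-role",
  "containerDefinitions": [
    {
      "name": "scraper",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ecs_s3_scraper:latest",
      "memory": 512,
      "essential": true
    }
  ]
}
```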
