A web scraper built with Nightmare.js that writes HTML files to AWS S3, and can be deployed to AWS ECS.
The goal of this repo is to take a sample web scraper that runs on a local machine and, through a series of development and tooling steps, deploy it as a service on AWS ECS.
Click on a step link for instructions on how to run the code in that step. Each step also walks you through the code that was written and the issues that were encountered along the way. If you would like to run the final result, go to the Step 5 folder.
Set up the Nightmare.js example web scraper on the local machine.
Note: If you have followed the steps in ecs_s3_scraper_starter, you don't need to do this step. The dev log describes how we got there, if you are interested.
Expand on the example to let users pass a keyword as an argument, and save the resulting HTML file to the local machine.
Dockerize the web scraper, i.e. run the code in a Docker container, and retrieve the resulting HTML file.
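A sketch of a Dockerfile for this step. The base image, package list, and script name `scrape.js` are assumptions; the main known wrinkle is that Electron (which Nightmare drives) needs shared libraries and a display that slim images don't ship with:

```dockerfile
# Sketch only: base image, package names, and script name are assumptions.
FROM node:12-slim

# Electron needs a display and shared libraries missing from a slim image;
# xvfb provides a virtual framebuffer to stand in for a real display.
RUN apt-get update && apt-get install -y \
    xvfb libgtk-3-0 libnss3 libxss1 libasound2 libgbm1 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Run under the virtual framebuffer since the container has no real display.
ENTRYPOINT ["xvfb-run", "node", "scrape.js"]
```

To get the HTML file back out, one option is a bind mount, e.g. `docker run -v "$PWD/out:/app/out" scraper "some keyword"` (assuming the script writes into an `out/` directory).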
Configure a mock AWS S3 running in a Docker container, and write the HTML file from the dockerized web scraper to it.
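A sketch of the upload side of this step, assuming LocalStack (on its default edge port 4566) as the mock S3 — the repo may well use a different mock, port, bucket name, or key scheme:

```javascript
// Build the object key for a scrape result (naming scheme is our assumption).
function s3Key(keyword) {
  const safe = keyword
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '')
  return `scrapes/${safe}.html`
}

// Upload a local HTML file to the (mock) S3 endpoint.
async function upload(keyword, file, bucket = 'scraper-output') {
  const AWS = require('aws-sdk') // lazy require: s3Key stays usable without the SDK
  const fs = require('fs')
  const s3 = new AWS.S3({
    // Point at the mock locally; on ECS you would drop this override
    // so the SDK talks to real S3.
    endpoint: process.env.S3_ENDPOINT || 'http://localhost:4566',
    s3ForcePathStyle: true, // path-style URLs avoid per-bucket DNS inside Docker
    accessKeyId: 'test',    // LocalStack accepts any credentials
    secretAccessKey: 'test',
    region: 'us-east-1',
  })
  const Body = await fs.promises.readFile(file)
  return s3.putObject({ Bucket: bucket, Key: s3Key(keyword), Body }).promise()
}
```

Reading the endpoint from an environment variable lets the same code target the mock in local Docker and real S3 once deployed, with only configuration changing between the two.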
Deploy the dockerized web scraper to AWS ECS, and write the HTML file to the real AWS S3!
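A command-level sketch of the deploy flow: push the image to ECR, then register and run an ECS task. The account id, region, repository, cluster, and file names are placeholders, and the task definition JSON (networking, IAM role with S3 write access, etc.) is maintained separately:

```shell
# Create an ECR repository and authenticate Docker to it (AWS CLI v2 syntax).
aws ecr create-repository --repository-name scraper
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the scraper image.
docker build -t scraper .
docker tag scraper:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/scraper:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/scraper:latest

# Register the task definition and run it on the cluster.
aws ecs register-task-definition --cli-input-json file://task-definition.json
aws ecs run-task --cluster scraper-cluster --task-definition scraper
```

The task's IAM role, not static credentials, is what should grant the container write access to the real S3 bucket.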