recipe-scraper

A NodeJS package for scraping recipes from the web.


Installation

npm install recipe-scraper

Usage

// import the module
const recipeScraper = require("recipe-scraper");

// enter a supported recipe url as a parameter - returns a promise
async function someAsyncFunc() {
  ...
  let recipe = await recipeScraper("some.recipe.url");
  ...
}

// using Promise chaining
recipeScraper("some.recipe.url").then(recipe => {
    // do something with recipe
  }).catch(error => {
    // do something with error
  });
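Putting it together, a complete call might look like the sketch below; the url passed to printRecipe is a placeholder for any supported recipe page:

// minimal sketch - "https://some.supported.site/recipe" is a placeholder url
const recipeScraper = require("recipe-scraper");

async function printRecipe(url) {
  try {
    const recipe = await recipeScraper(url);
    console.log(recipe.name);
    recipe.ingredients.forEach(ingredient => console.log(`- ${ingredient}`));
  } catch (error) {
    console.error(error.message);
  }
}

printRecipe("https://some.supported.site/recipe");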

Supported Websites

Don't see a website you'd like to scrape? Open an issue and we'll do our best to add it.

Recipe Object

Depending on the recipe, certain fields may be left blank. All fields are represented as strings or arrays of strings. The name, ingredients, and instructions properties are required for schema validation.

{
    name: "",
    ingredients: [],
    instructions: [],
    tags: [],
    servings: "",
    image: "",
    time: {
      prep: "",
      cook: "",
      active: "",
      inactive: "",
      ready: "",
      total: ""
    }
}
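Because every field is a string or an array of strings, consuming the object is straightforward. A minimal sketch, assuming recipe was returned by recipeScraper:

// minimal sketch - assumes `recipe` holds a resolved result from recipeScraper
console.log(`Servings: ${recipe.servings || "unknown"}`);
console.log(`Total time: ${recipe.time.total || "not listed"}`);
recipe.instructions.forEach((step, index) => {
  console.log(`${index + 1}. ${step}`);
});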

Error Handling

If the url provided is invalid and a domain cannot be parsed from it, an error message will be returned.

recipeScraper("keyboard kitty").catch(error => {
  console.log(error.message);
  // => "Failed to parse domain"
});

If the url provided doesn't match a supported domain, an error message will be returned.

recipeScraper("some.invalid.url").catch(error => {
  console.log(error.message);
  // => "Site not yet supported"
});

If a recipe is not found on a supported domain site, an error message will be returned.

recipeScraper("some.no.recipe.url").catch(error => {
  console.log(error.message);
  // => "No recipe found on page"
});

If a page does not exist or some other 400+ error occurs when fetching, an error message will be returned.

recipeScraper("some.nonexistent.page").catch(error => {
  console.log(error.message);
  // => "No recipe found on page"
});

If a supported url does not contain the proper sub-url to be a valid recipe, an error message will be returned including the sub-url required.

recipeScraper("some.improper.url").catch(error => {
  console.log(error.message);
  // => "url provided must include '#subUrl'"
});
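Because each failure mode surfaces as a rejected promise with a descriptive message, a single catch can branch on the message text. A minimal sketch based on the messages shown above:

// minimal sketch - branches on the error messages documented above
recipeScraper("some.recipe.url")
  .then(recipe => console.log(recipe.name))
  .catch(error => {
    if (error.message === "Failed to parse domain") {
      console.log("Check that the url is well formed");
    } else if (error.message === "Site not yet supported") {
      console.log("Open an issue to request support for this site");
    } else {
      console.log(error.message);
    }
  });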

Bugs

Web scraping relies on the target website keeping its format. When a site changes its markup, the corresponding scraper needs to be updated. Please reach out if you are experiencing an issue.

Contributing

I welcome pull requests that keep the scrapers up to date or add new ones. I'm doing my best to keep this package maintained, and with your help that goal is much more achievable. Please add tests if you add a scraper. Thank you 😁
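As a rough illustration only (the project's actual test framework and fixtures may differ), a test for a new scraper can simply call the package against a known recipe url on the new site and check the required fields; the url below is a placeholder:

// rough illustration - the real test suite's framework and fixtures may differ
const assert = require("assert");
const recipeScraper = require("recipe-scraper");

async function testNewScraper() {
  // "https://newsite.example/some-recipe" is a placeholder url
  const recipe = await recipeScraper("https://newsite.example/some-recipe");
  assert.ok(recipe.name.length > 0);
  assert.ok(recipe.ingredients.length > 0);
  assert.ok(recipe.instructions.length > 0);
}

testNewScraper().catch(error => {
  console.error("scraper test failed:", error.message);
  process.exit(1);
});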