Web scraping is a good way to gather large amounts of data in less time. Worldwide data is increasing, and with that, web scraping has become more important for businesses than ever before.
List of web scraping libraries we'll go through:
1. Request-Promise-Native
It is an HTTP client with which you can make HTTP calls very easily. It also supports HTTPS and follows redirects by default.
Now, let's see an example of how request-promise-native works.
const request = require('request-promise-native');

let scrape = async () => {
  var respo = await request('http://books.toscrape.com/');
  return respo;
};

scrape().then((value) => {
  console.log(value); // HTML code of the website
});
Advantages of using request-promise-native:
- Proxy support
- Custom headers
- HTTP authentication
- TLS/SSL protocol support (all four options appear together in the sketch below)
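All of these options ride on the underlying request configuration object. Here is a minimal sketch combining them; the proxy address and credentials below are placeholders, not working values:

const request = require('request-promise-native');

request({
  uri: 'http://books.toscrape.com/',
  proxy: 'http://127.0.0.1:8080',              // placeholder proxy server
  auth: { user: 'user', pass: 'secret' },      // HTTP basic authentication (placeholder)
  headers: { 'User-Agent': 'my-scraper/1.0' }, // custom header
  strictSSL: true                              // verify TLS/SSL certificates
}).then(html => console.log(html.length))
  .catch(err => console.error(err.message));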
2. Unirest
Unirest is a lightweight HTTP client library from Mashape. Along with JavaScript, it is also available for Java, .NET, Python, Ruby, etc.
- GET request
var unirest = require('unirest');

let scrape = async () => {
  var respo = await unirest.get('http://books.toscrape.com/');
  return respo.body;
};

scrape().then((value) => {
  console.log(value); // Success!
});
- POST request
var unirest = require('unirest');

let scrape = async () => {
  var respo = await unirest.post('http://httpbin.org/anything')
    .headers({ 'X-header': '123' });
  return respo.body;
};

scrape().then((value) => {
  console.log(value); // Success!
});
Response
{
  args: {},
  data: '',
  files: {},
  form: {},
  headers: {
    'Content-Length': '0',
    Host: 'httpbin.org',
    'X-Amzn-Trace-Id': 'Root=1-5ed62f2e-554cdc40bbc0b226c749b072',
    'X-Header': '123'
  },
  json: null,
  method: 'POST',
  origin: '23.238.134.113',
  url: 'http://httpbin.org/anything'
}
- PUT request
var unirest = require('unirest');

let scrape = async () => {
  var respo = await unirest.put('http://httpbin.org/anything')
    .headers({ 'X-header': '123' });
  return respo.body;
};

scrape().then((value) => {
  console.log(value); // Success!
});
Response
{
  args: {},
  data: '',
  files: {},
  form: {},
  headers: {
    'Content-Length': '0',
    Host: 'httpbin.org',
    'X-Amzn-Trace-Id': 'Root=1-5ed62f91-bb2b684e39bbfbb3f36d4b6e',
    'X-Header': '123'
  },
  json: null,
  method: 'PUT',
  origin: '23.63.69.65',
  url: 'http://httpbin.org/anything'
}
In the responses to the POST and PUT requests, you can see that I have added a custom header. We add custom headers to customize the request and, in turn, the response we get back.
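A common real-world use of this is sending a browser-like User-Agent so that simple bot filters do not reject the scraper. A minimal sketch; the User-Agent string is just an illustrative value:

const unirest = require('unirest');

unirest.get('http://httpbin.org/headers')
  .headers({ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)' })
  .then(respo => console.log(respo.body)); // httpbin echoes back the headers it received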
Advantages of using Unirest
- Supports all HTTP methods (GET, POST, DELETE, etc.)
- Supports form uploads
- Supports both streaming and callback interfaces
- HTTP authentication
- Proxy support (combined with authentication in the sketch after this list)
- TLS/SSL protocol support
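Here is the proxy-and-authentication sketch referenced above; the proxy address is a placeholder, while the httpbin endpoint really does expect the credentials user/passwd:

const unirest = require('unirest');

unirest.get('http://httpbin.org/basic-auth/user/passwd')
  .proxy('http://127.0.0.1:8080')                                // placeholder proxy
  .auth({ user: 'user', pass: 'passwd', sendImmediately: true }) // HTTP basic auth
  .then(respo => console.log(respo.status)); // 200 if authentication succeeded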
3. Cheerio
With the Cheerio module, you can use jQuery syntax while working with downloaded web data. Cheerio lets developers focus their attention on the downloaded data, rather than on parsing it.
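To see what that jQuery-style syntax looks like in isolation, here is a tiny sketch that loads an inline HTML snippet instead of a downloaded page:

const cheerio = require('cheerio');

// Load a small HTML fragment and query it exactly as you would with jQuery.
const $ = cheerio.load('<h2 class="title">Hello world</h2>');
$('h2.title').text('Hello there!');
console.log($('h2.title').text()); // Hello there!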
Now, we'll count the number of books available on the first page of the target website.
const request = require('request-promise-native'); // used to download the page
const cheerio = require('cheerio');

let scrape = async () => {
  var respo = await request('http://books.toscrape.com/');
  return respo;
};

scrape().then((value) => {
  const $ = cheerio.load(value);
  var numberofbooks = $('ol[class="row"]').find('li').length;
  console.log(numberofbooks); // 20!
});
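Counting is just the start; the same selectors can pull out the actual data. A minimal sketch that extracts each book's title and price, assuming the books.toscrape.com markup where every book sits in an article.product_pod element:

const request = require('request-promise-native');
const cheerio = require('cheerio');

request('http://books.toscrape.com/').then((html) => {
  const $ = cheerio.load(html);
  $('article.product_pod').each((i, el) => {
    const title = $(el).find('h3 a').attr('title');  // the full title lives in the attribute
    const price = $(el).find('.price_color').text(); // e.g. '£51.77'
    console.log(title, price);
  });
});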
Advantages of using Cheerio
- Familiar syntax: Cheerio implements a subset of core jQuery. It removes all the DOM inconsistencies and browser cruft from the jQuery library, revealing its truly gorgeous API.
- Lightning quick: Cheerio works with a very simple, consistent DOM model. As a result, parsing, manipulating, and rendering are incredibly efficient. Preliminary end-to-end benchmarks suggest that Cheerio is about 8x faster than JSDOM.
- Stunningly flexible: Cheerio can parse nearly any HTML or XML document (see the XML sketch below).
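For example, the same API works on XML once xmlMode is enabled; a small sketch with a made-up feed snippet:

const cheerio = require('cheerio');

// xmlMode tells Cheerio to parse the input as XML rather than HTML.
const $ = cheerio.load('<feed><entry><title>Demo entry</title></entry></feed>', { xmlMode: true });
console.log($('entry > title').text()); // Demo entry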
4. Puppeteer
Puppeteer is a Node.js library that provides a simple but efficient API that lets you control Google's Chrome or Chromium browser.
It also enables you to run Chromium in headless mode (useful for running browsers on servers) and can send and receive requests without the need for a user interface.
It has finer control over the Chrome browser because it does not use any external adapter to control Chrome, and it has Google's backing too.
The great thing is that it works in the background, performing actions as instructed by the API.
We'll see an example of Puppeteer scraping the entire HTML code of our target website.
const puppeteer = require('puppeteer');

let scrape = async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('http://books.toscrape.com/');
  await page.waitFor(1000); // give the page a moment to finish loading
  var result = await page.content();
  await browser.close();
  return result;
};

scrape().then((value) => {
  console.log(value); // complete HTML code of the target URL!
});
What each step means here:
- puppeteer.launch() starts a Chrome browser.
- browser.newPage() opens a new tab.
- page.goto() navigates to the target URL.
- page.content() extracts all the HTML content of the page.
- browser.close() shuts down the Chrome browser.
- Finally, we return the result.
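Because Puppeteer drives a real browser, you can also run selector logic inside the page itself instead of returning the raw HTML. A minimal sketch using page.evaluate to pull the book titles straight out of the DOM; the selector assumes the books.toscrape.com markup:

const puppeteer = require('puppeteer');

let scrapeTitles = async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('http://books.toscrape.com/');
  // This callback runs in the browser context and returns plain data to Node.
  const titles = await page.evaluate(() =>
    Array.from(document.querySelectorAll('article.product_pod h3 a'))
      .map(a => a.getAttribute('title'))
  );
  await browser.close();
  return titles;
};

scrapeTitles().then((titles) => console.log(titles)); // an array of 20 titles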
Advantages of using Puppeteer
- Click elements such as buttons, links, and images
- Automate form submissions
- Navigate pages
- Take a timeline trace to find out where the issues are on a website
- Carry out automated testing for user interfaces and various front-end apps, directly in a browser
- Take screenshots
- Convert sites to PDF files (the last two are sketched below)
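Here is the screenshot-and-PDF sketch referenced above; the output file names are arbitrary:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true }); // PDF generation requires headless mode
  const page = await browser.newPage();
  await page.goto('http://books.toscrape.com/');
  await page.screenshot({ path: 'books.png', fullPage: true }); // capture the whole page
  await page.pdf({ path: 'books.pdf', format: 'A4' });          // render the page as a PDF
  await browser.close();
})();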
5. Osmosis
- Osmosis is an HTML/XML parser and web scraper.
- It is written in Node.js and packed with CSS3/XPath selectors and a lightweight HTTP wrapper.
- It has no large dependencies such as Cheerio.
We'll be working with this page on Wikipedia, which contains population information for the U.S. states.
const osmosis = require('osmosis');

osmosis('https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_population')
  .set({ heading: 'h1', title: 'title' })
  .data(item => console.log(item));
The response will look like this:
{
  heading: 'List of U.S. states and territories by population',
  title: 'List of U.S. states and territories by population - Wikipedia'
}
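Osmosis really shines when you chain selectors. A minimal sketch that walks the rows of the population table on the same page; the table.wikitable selector and cell layout are assumptions about the current Wikipedia markup:

const osmosis = require('osmosis');

osmosis
  .get('https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_population')
  .find('table.wikitable tr')          // each row of the population table
  .set({ cells: ['td'] })              // collect the text of every cell in the row
  .data(row => console.log(row.cells))
  .error(err => console.error(err));   // osmosis reports failed requests here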
Advantages of using Osmosis
- Supports CSS 3.0 and XPath 1.0 selector hybrids
- Load and search AJAX content
- Logs URLs, redirects, and errors
- Cookie jar and custom cookies/headers/user agent
- Login/form submission, session cookies, and basic auth
- Single proxy or multiple proxies and handles proxy failure
- Retries and redirect limits
Conclusion
Web scraping is bound to grow as time progresses. As web scraping applications abound, JavaScript libraries will grow in demand.
While there are several salient JavaScript libraries, it can be puzzling to choose the right one. However, it will eventually boil down to your own respective requirements.
Abhishek Kumar