Web scraping without getting blocked

Introduction

Web scraping or crawling is the process of fetching data from a third-party website by downloading and parsing the HTML code to extract the data you want. 

However, not every website offers an API, and APIs do not always expose every piece of information you need. So scraping is often the only way to extract website data. 

There are numerous use cases for web scraping: 

  • E-commerce price monitoring 
  • News aggregation 
  • Lead generation 
  • SEO (search engine result page monitoring) 
  • Bank account aggregation (Mint in the US, Bankin’ in Europe) 
  • Individuals and researchers building datasets otherwise not available 

The main problem is that most websites do not want to be scraped. They only want to serve content to real users using real web browsers.

So, when you scrape, you don’t want to be recognized as a robot. There are two main ways to seem human: use human tools and emulate human behavior. 

Emulate Human Tools: Headless Chrome 

Why Use Headless Browsing? 

When you open your browser and go to a webpage, it almost always means that you are asking an HTTP server for some content. One of the easiest ways to pull content from an HTTP server is to use a classic command-line tool such as cURL.

The thing is, if you just do cURL www.google.com, Google has many ways to know that you aren’t a human (for example, by looking at the headers). Headers are small pieces of information that go with every HTTP request that hits the servers. One of those pieces of information precisely describes the client making the request: this is the infamous “User-Agent” header. Just by looking at the “User-Agent” header, Google knows that you’re using cURL. Headers are easy to alter with cURL, and copying the “User-Agent” header of a legitimate browser could do the trick. In the real world, you’d need to set more than one header, but it isn’t difficult to artificially forge an HTTP request with cURL or any library to make it look exactly like a request made with a browser. 
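As a minimal sketch, here is what that could look like in Python with the requests library instead of cURL; the header values below mimic a desktop Chrome session and are only illustrative, not a guaranteed way past any particular check.

    import requests

    # Placeholder headers mimicking a desktop Chrome session; the values are
    # illustrative assumptions, not a guaranteed bypass of any specific check.
    HEADERS = {
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
            "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
        ),
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
    }

    response = requests.get("https://www.example.com", headers=HEADERS, timeout=10)
    print(response.status_code)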

Do you speak Javascript? 

The concept is simple: the website embeds a Javascript snippet in its webpage that, once executed, will “unlock” the page. If you are using a real browser, you won’t notice the difference. 

Headless Browsing

Trying to execute Javascript snippets on the side with Node.js is difficult and not robust. And more importantly, as soon as the website has a more complicated check system or is a big single-page application, cURL and pseudo-JS execution with Node.js become useless. So the best way to look like a real browser is to actually use one. 

Headless browsers behave like a real browser, except that you can easily use them programmatically. The most popular is Chrome Headless, a Chrome option that behaves like Chrome without all of the user interface wrapping it. 
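As a minimal sketch, assuming Selenium 4+ and a matching ChromeDriver are installed locally, driving Chrome Headless from Python could look like this (the target URL is a placeholder):

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless=new")  # plain "--headless" on older Chrome versions
    options.add_argument("--window-size=1920,1080")

    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://www.example.com")
        print(driver.title)        # title of the rendered page
        html = driver.page_source  # full DOM after Javascript execution
    finally:
        driver.quit()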

However, it will not be enough, as websites now have tools that detect headless browsers. This arms race has been going on for a long time. 

While these solutions can be easy to run on your local computer, it can be trickier to make them work at scale. 

Browser fingerprinting

Everyone, especially front-end devs, knows that every browser behaves differently. Sometimes it’s about rendering CSS, sometimes Javascript, and sometimes just internal properties. Most of these differences are well known, and it’s now possible to detect whether a browser is actually who it pretends to be. This means the website is asking “do all of the browser properties and behaviors match what I know about the User-Agent sent by this browser?”. 
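One well-known property such checks inspect is navigator.webdriver, which automated browsers expose by default. A small sketch, assuming the same Selenium setup as above, shows what a website would see:

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://www.example.com")
        # Automated browsers set navigator.webdriver to true by default,
        # which is one of the simplest signals a website can check.
        print(driver.execute_script("return navigator.webdriver"))
        # Other commonly compared properties include the User-Agent string,
        # plugins, languages, and canvas/WebGL rendering.
        print(driver.execute_script("return navigator.userAgent"))
    finally:
        driver.quit()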

This is why there’s an everlasting arms race between web scrapers who want to pass themselves as a real browser and websites who want to distinguish headless from the rest. 

Most of the time, when Javascript code tries to detect whether it’s being run in headless mode, it is malware trying to evade behavioral fingerprinting. This means the Javascript will behave nicely inside a scanning environment and badly inside real browsers. And this is why the team behind Chrome’s headless mode is trying to make it indistinguishable from a real user’s web browser, in order to stop malware from doing that. Web scrapers can benefit from this effort. 

Another thing to know is that, while running 20 cURL requests in parallel is trivial and Chrome Headless is fairly easy to use for small use cases, it can be tricky to put at scale. Because it uses lots of RAM, managing more than 20 instances of it is a challenge. 

TLS fingerprinting 

What is it? 

TLS stands for Transport Layer Security and is the successor of SSL, which was basically what the “S” of HTTPS stood for. 

This protocol ensures privacy and data integrity between two or more communicating computer applications (in our case, a web browser or a script and an HTTP server). 

Similar to browser fingerprinting, the goal of TLS fingerprinting is to uniquely identify users based on the way they use TLS. 

How this protocol works can be split into two big parts.

First, when the client connects to the server, a TLS handshake happens. During this handshake, many requests are sent between the two to ensure that each party is actually who it claims to be. 

Then, if the handshake has been successful, the protocol describes how the client and the server should encrypt and decrypt the data in a secure way.

It actually makes sense, as a TLS fingerprint is computed using far fewer parameters than a browser fingerprint. 

Those parameters are, amongst others (a short inspection sketch follows this list): 

  • TLS version 
  • Handshake version 
  • Cipher suites supported 
  • Extensions 
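As a rough illustration of the first items, Python's standard ssl module can show which TLS version and cipher suite your own client ends up negotiating. Keep in mind that a real TLS fingerprint is built from everything offered in the ClientHello (the full cipher list and extensions), which this sketch does not expose:

    import socket
    import ssl

    # Show the TLS version and cipher suite negotiated by this client.
    context = ssl.create_default_context()
    with socket.create_connection(("www.example.com", 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.version())  # e.g. 'TLSv1.3'
            print(tls.cipher())   # (cipher name, protocol version, secret bits)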

Emulate Human Behavior: Proxies, Captcha Solving and Request Patterns 

Proxy Yourself 

A human using a real browser will rarely request 20 pages per second from the same website. So if you want to request a lot of pages from the same website, you have to trick the website into thinking that all those requests come from different places in the world, i.e. from different IP addresses. In other words, you need to use proxies. 
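A minimal sketch of proxy rotation with the Python requests library; the proxy addresses below are placeholders for whatever your provider or pool gives you:

    import random
    import requests

    # Placeholder proxy addresses; in practice these come from your rotating
    # proxy provider or your own pool.
    PROXIES = [
        "http://user:password@proxy1.example.com:8000",
        "http://user:password@proxy2.example.com:8000",
        "http://user:password@proxy3.example.com:8000",
    ]

    def fetch(url: str) -> requests.Response:
        proxy = random.choice(PROXIES)  # different exit IP for each request
        return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

    print(fetch("https://www.example.com").status_code)

Picking a proxy per request rather than per session spreads traffic across the pool, at the cost of losing sticky sessions and cookies.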

One thing to consider is that proxy IPs need to be constantly monitored in order to discard the ones that aren’t working anymore and replace them. 

There are several proxy solutions on the market; the most used rotating proxy providers are Luminati Network, Blazing SEO, and SmartProxy. 

There are also a lot of free proxy lists, but we do not recommend using these because they’re frequently slow and unreliable, and the websites offering these lists aren’t always transparent about where the proxies are located. Free proxy lists are generally public, and therefore their IPs will be automatically banned by most websites. Proxy quality is important: anti-crawling services are known to maintain internal lists of proxy IPs, so any traffic coming from those IPs will also be blocked. Be careful to choose a provider with a good reputation.

Another proxy type that you could look into is mobile, 3G, and 4G proxies. This is helpful for scraping hard-to-scrape mobile-first websites, like social media. 

To make your own proxy you could take a look at Scrapoxy, a great open-source API that lets you build a proxy API on top of different cloud providers. Scrapoxy will create a proxy pool by spinning up instances on various cloud providers (AWS, OVH, Digital Ocean). Then, you’ll be able to configure your client so it uses the Scrapoxy URL as the main proxy, and Scrapoxy will automatically assign a proxy from the pool. 

You could also use the Tor network, aka The Onion Router. It is a worldwide computer network designed to route traffic through many different servers to hide its origin. Tor usage makes network surveillance and traffic analysis very difficult. There are a lot of use cases for Tor, such as privacy, freedom of speech, journalism under authoritarian regimes, and, of course, illegal activities. In the context of web scraping, Tor can hide your IP address and change your bot’s IP address every 10 minutes. The IP addresses of Tor exit nodes are public, and some websites block Tor traffic using a simple rule: if the server receives a request from one of Tor’s public exit nodes, it will block it. That’s why, in many cases, Tor won’t help you compared to classic proxies. It’s also worth noting that traffic through Tor is inherently much slower because of the multiple routing. 
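If you still want to try it, the sketch below routes requests through Tor, assuming a local Tor daemon listening on its default SOCKS port (9050) and requests installed with SOCKS support (pip install "requests[socks]"):

    import requests

    # The socks5h scheme makes DNS resolution also happen through Tor.
    TOR_PROXY = "socks5h://127.0.0.1:9050"

    response = requests.get(
        "https://check.torproject.org",
        proxies={"http": TOR_PROXY, "https": TOR_PROXY},
        timeout=30,
    )
    print(response.status_code)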

Captchas 

Sometimes proxies won’t be enough. Some websites systematically ask you to confirm that you are a human with so-called CAPTCHAs. Most of the time, CAPTCHAs are only displayed to suspicious IPs, so switching proxies will work in those cases. For the other cases, you will need to use a CAPTCHA-solving service. 

While some CAPTCHAs can be automatically resolved with optical character recognition (OCR), the most recent ones have to be solved by hand. 

If you use such a service, on the other side of the API call you will have hundreds of people solving CAPTCHAs for as low as 20 cents an hour. 
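Most solving services follow the same general shape: upload the challenge, poll until someone has answered it, then use the answer in your own form submission. The sketch below talks to a hypothetical captcha-solver.example endpoint; the URLs, parameters, and response fields are placeholders, not any real provider's API.

    import time
    import requests

    # Hypothetical solving service: endpoint, parameters and response format
    # are placeholders, not the API of any real provider.
    API_KEY = "your-api-key"
    SUBMIT_URL = "https://captcha-solver.example/submit"
    RESULT_URL = "https://captcha-solver.example/result"

    def solve_captcha(image_bytes: bytes) -> str:
        # 1. Upload the challenge image and get a job id back.
        job = requests.post(
            SUBMIT_URL,
            data={"key": API_KEY},
            files={"file": image_bytes},
            timeout=30,
        ).json()
        # 2. Poll until a human (or OCR) has produced an answer.
        while True:
            result = requests.get(
                RESULT_URL, params={"key": API_KEY, "id": job["id"]}, timeout=30
            ).json()
            if result["status"] == "done":
                return result["answer"]
            time.sleep(5)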

But then again, even if you solve CAPTCHAs or switch proxies as soon as you see one, websites can still detect your data extraction process. 

Request Pattern 

Another advanced tool used by websites to detect scraping is pattern recognition. So if you plan to scrape every product ID from 1 to 10 000 for the URL www.example.com/product/, try not to do it sequentially or with a constant request rate. You could, for example, maintain a set of integers going from 1 to 10 000, randomly choose one integer from this set, and then scrape that product. 
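A minimal sketch of that idea, shuffling the IDs and randomizing the delay between requests (the URL pattern and delay bounds are just examples):

    import random
    import time
    import requests

    # Randomize both the order of the IDs and the delay between requests so
    # the traffic does not form an obvious sequential, constant-rate pattern.
    product_ids = list(range(1, 10001))
    random.shuffle(product_ids)

    for product_id in product_ids:
        url = f"https://www.example.com/product/{product_id}"
        response = requests.get(url, timeout=10)
        # ... parse response.text here ...
        time.sleep(random.uniform(1.0, 5.0))  # uneven delays look more natural than a fixed sleep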

Some websites also collect statistics on browser fingerprints per endpoint. This means that if you do not change some parameters in your headless browser while targeting a single endpoint, they might block you. 

Websites also tend to monitor the origin of traffic, so if you want to scrape a website in Brazil, try not to do it with proxies in Vietnam.
