
Web Scraping 101 with Python

Kevin Sahin

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.


Introduction:

In this post, which can be read as a follow-up to our ultimate web scraping guide, we will cover almost all the tools Python offers you for web scraping. We will go from the most basic ones to the most advanced, and will cover the pros and cons of each. Of course, we won't be able to cover every aspect of every tool we discuss, but this post should be enough to give you a good idea of which tool does what, and when to use which.

Note: when I talk about Python in this blog post, you should assume that I am talking about Python 3.

0. Web Fundamentals

The internet is really complex: there are many underlying technologies and concepts involved in viewing a simple web page in your browser. I won't attempt to explain everything, but I will show you the most important things you have to understand in order to extract data from the web.

HyperText Transfer Protocol

HTTP uses a client/server model, where an HTTP client (a browser, your Python program, curl, Requests…) opens a connection and sends a message (“I want to see that page: /product”) to an HTTP server (Nginx, Apache…).

Then the server answers with a response (The HTML code for example) and closes the connection. HTTP is called a stateless protocol, because each transaction (request/response) is independent. FTP for example, is stateful.

Basically, when you type a website address in your browser, the HTTP request looks like this:

GET /product/ HTTP/1.1
Host: example.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch, br
Connection: keep-alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36

In the first line of this request, you can see multiple things:

  • the GET verb or method being used, meaning we request data from the specific path /product/. There are other HTTP verbs, you can see the full list here.
  • The version of the HTTP protocol. In this tutorial we will focus on HTTP 1.
  • Multiple header fields

Here are the most important header fields:

  • Host: The domain name of the server. If no port number is given, it is assumed to be 80.
  • User-Agent: Contains information about the client originating the request, including the OS. In this case, it is my web browser (Chrome) on OSX. This header is important because it is either used for statistics (how many users visit my website on mobile vs desktop) or to block bots. Because these headers are sent by the client, they can be modified (this is called “header spoofing”), and that is exactly what we will do with our scrapers: make them look like a normal web browser.
  • Accept: The content types that are acceptable as a response. There are lots of different content types and sub-types: text/plain, text/html, image/jpeg, application/json…
  • Cookie: name1=value1;name2=value2… This header field contains a list of name-value pairs called session cookies, which are used to store data. Cookies are what websites use to authenticate users and/or store data in your browser. For example, when you submit a login form, the server checks whether the credentials you entered are correct; if so, it redirects you and injects a session cookie into your browser. Your browser will then send this cookie with every subsequent request to that server.
  • Referer: This header contains the URL from which the current URL has been requested. This header is important because websites use it to change their behavior based on where the user came from. For example, lots of news websites have a paid subscription and only let you view 10% of a post, but if the user came from a news aggregator like Reddit, they let you view the full content. They use the referer to check this. Sometimes we will have to spoof this header to get to the content we want to extract.

And the list goes on…you can find the full header list here.

A server will respond with something like this:

HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Content-Type: text/html; charset=utf-8

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    ...[HTML CODE]

On the first line, we have a new piece of information: the HTTP code 200 OK. It means the request has succeeded. As with the request headers, there are lots of HTTP codes, split into four common classes: 2XX for successful requests, 3XX for redirects, 4XX for bad requests (the most famous being 404 Not Found), and 5XX for server errors.

Then, if you are sending this HTTP request with your web browser, the browser will parse the HTML code, fetch any associated assets (JavaScript files, CSS files, images…) and render the result in the main window.

In the next parts we will see the different ways to perform HTTP requests with Python and extract the data we want from the responses.

1. Manually opening a socket and sending the HTTP request

Socket

The most basic way to perform an HTTP request in Python is to open a socket and manually send the HTTP request.

import socket

HOST = 'www.google.com'  # Server hostname or IP address
PORT = 80                # Standard HTTP port

client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (HOST, PORT)
client_socket.connect(server_address)

request_header = b'GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n'
client_socket.sendall(request_header)

# Read the response in chunks of bytes until the server closes the connection
response = b''
while True:
    recv = client_socket.recv(1024)
    if not recv:
        break
    response += recv

print(response.decode('utf-8', errors='ignore'))
client_socket.close()

Now that we have the HTTP response, the most basic way to extract data from it is to use regular expressions.

Regular Expressions

A regular expression (RE, or Regex) is a search pattern for strings. With regex, you can search for a particular character/word inside a bigger body of text.

For example, you could identify all phone numbers inside a web page. You can also replace items: for example, you could replace all uppercase tags in a poorly formatted HTML document with lowercase ones. You can also validate some inputs…

The pattern used by the regex is applied from left to right. Each source character is only used once. You may be wondering why it is important to know about regular expressions when doing web scraping.

After all, there are all kinds of Python modules to parse HTML, with XPath or CSS selectors.

In an ideal semantic world, data is easily machine-readable and the information is embedded inside relevant HTML elements with meaningful attributes.

But the real world is messy. You will often find huge amounts of text inside a p element. When you want to extract a specific piece of data inside this huge text, for example a price, a date, or a name, you will have to use regular expressions.

Note: here is a great website to test your regex: https://regex101.com/ and one awesome blog to learn more about them. This post will only cover a small fraction of what you can do with regexes.

Regular expressions can be useful when you have this kind of data:

    <p>Price : 19.99$</p>

We could select this text node with an XPath expression and then use this kind of regex to extract the price:

    ^Price\s:\s(\d+\.\d{2})\$
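
For example, here is a minimal sketch of applying that pattern with Python's re module, assuming we have already extracted the text node from the p element:

import re

# The text node extracted from the <p> element
text = 'Price : 19.99$'

pattern = r'^Price\s:\s(\d+\.\d{2})\$'
match = re.search(pattern, text)
if match:
    print(match.group(1))  # 19.99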

To extract the text inside an HTML tag, it is annoying to use a regex, but doable:

import re

html_content = '<p>Price : 19.99$</p>'

# A raw string avoids the invalid '\/' escape sequence warning
m = re.match(r'<p>(.+)</p>', html_content)
if m:
    print(m.group(1))

As you can see, manually sending the HTTP request with a socket and parsing the response with regular expressions can be done, but it's complicated, and there are higher-level APIs that can make this task easier.

2. urllib3 & LXML

Disclaimer: It is easy to get lost in the urllib universe in Python. You have urllib and urllib2, which are part of the standard library. You can also find urllib3. urllib2 was split into multiple modules in Python 3, and urllib3 will probably not become part of the standard library anytime soon. This whole confusing thing deserves a blog post of its own. In this part, I've made the choice to only talk about urllib3, as it is widely used in the Python world, by pip and requests to name just two.

Urllib3 is a high-level package that allows you to do pretty much whatever you want with an HTTP request. It lets you do what we did above with socket in far fewer lines of code.

import urllib3
http = urllib3.PoolManager()
r = http.request('GET', 'http://www.google.com')
print(r.data)

Much more concise than the socket version. Not only that, but the API is straightforward and you can do many things easily, like adding HTTP headers, using a proxy, POSTing forms …

For example, if we decided to set some headers and use a proxy, we would only have to do the following (you can learn more about proxy servers at bestproxyreviews.com):

import urllib3

user_agent_header = urllib3.make_headers(user_agent="<USER AGENT>")
pool = urllib3.ProxyManager('<PROXY IP>', headers=user_agent_header)
r = pool.request('GET', 'https://www.google.com/')

See? Exactly the same number of lines. However, there are some things urllib3 does not handle very easily. For example, if we want to add a cookie, we have to manually create the corresponding headers and add them to the request.
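
Here is a minimal sketch of what that looks like (the cookie names and values are placeholders):

import urllib3

http = urllib3.PoolManager()
# The Cookie header has to be crafted by hand
headers = {'Cookie': 'name1=value1; name2=value2'}
r = http.request('GET', 'https://www.google.com/', headers=headers)
print(r.status)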

There are also things that urllib3 can do that requests can't, such as the creation and management of connection pools and proxy pools, or fine control over the retry strategy.
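
For example, here is a quick sketch of a custom retry strategy with urllib3 (the numbers below are arbitrary):

import urllib3
from urllib3.util import Retry

# Retry up to 3 times on 500/502/503 responses, with an exponential back-off
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503])

http = urllib3.PoolManager()
r = http.request('GET', 'https://www.google.com/', retries=retries)
print(r.status)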

To put it simply, urllib3 sits between requests and socket in terms of abstraction, although it's way closer to requests than to socket.

This time, to parse the response, we are going to use the lxml package and XPath expressions.

XPath

XPath is a technology that uses path expressions to select nodes or node-sets in an XML document (or HTML document). Like the Document Object Model, XPath has been a W3C standard since 1999. Even if XPath is not a programming language in itself, it allows you to write expressions that directly access a specific node, or a specific node-set, without having to go through the entire HTML tree (or XML tree).

Think of XPath as regexp, but specifically for XML/HTML.

To extract data from an HTML document with XPath we need 3 things:

  • an HTML document
  • some XPath expressions
  • an XPath engine that will run those expressions

To begin, we will use the HTML that we got thanks to urllib3. We just want to extract all the links from the Google homepage, so we will use one simple XPath expression, //a, and we will use LXML to run it. LXML is a fast and easy-to-use XML and HTML processing library that supports XPath.

Installation:

    pip install lxml

Below is the code that comes just after the previous snippet:

from lxml import html

# We reuse the response from urllib3
data_string = r.data.decode('utf-8', errors='ignore')
# We instantiate a tree object from the HTML
tree = html.fromstring(data_string)
# We run the XPath against this HTML
# This returns an array of elements
links = tree.xpath('//a')
for link in links:
    # For each element we can easily get back the URL
    print(link.get('href'))

And the output should look like this:

https://books.google.fr/bkshp?hl=fr&tab=wp
https://www.google.fr/shopping?hl=fr&source=og&tab=wf
https://www.blogger.com/?tab=wj
https://photos.google.com/?tab=wq&pageId=none
http://video.google.fr/?hl=fr&tab=wv
https://docs.google.com/document/?usp=docs_alc
...
https://www.google.fr/intl/fr/about/products?tab=wh

You have to keep in mind that this example is really simple and doesn't really show you how powerful XPath can be (note: this XPath expression should have been //a/@href to avoid having to iterate over links to get their href).
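
For reference, here is what that more direct version looks like, reusing the tree object from the previous snippet:

# Get the href attribute of every link directly, without iterating over elements
hrefs = tree.xpath('//a/@href')
print(hrefs)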

If you want to learn more about XPath you can read this good introduction. The LXML documentation is also well written and is a good starting point.

XPath expressions, like regexes, are really powerful and one of the fastest ways to extract information from HTML. But like regexes, XPath can quickly become messy, hard to read and hard to maintain.

If you'd like to learn more about Xpath, do not hesitate to read my dedicated blog post about XPath applied to web scraping.


3. requests & BeautifulSoup

requests

Requests is the king of Python packages. With more than 11,000,000 downloads, it is the most widely used package for Python.

Installation:

    pip install requests

Making a request with Requests (no comment) is really easy:

import requests

r = requests.get('https://www.scrapingninja.co')
print(r.text)

With Requests it is easy to perform POST requests, handle cookies, query parameters…
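
As a quick illustration, here is a minimal sketch of a POST request with query parameters, form data and a cookie, using the httpbin.org test service (all the values are placeholders):

import requests

r = requests.post(
    'https://httpbin.org/post',
    params={'page': 1},                    # query parameters
    data={'comment': 'Hello'},             # form data sent in the POST body
    cookies={'session': 'my-session-id'},  # cookies attached to the request
)
print(r.status_code)
print(r.json())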

Authentication to Hacker News

Let's say we want to create a tool to automatically submit our blog post to Hacker News or any other forum, like Buffer. We would need to authenticate to those websites before posting our link. That's what we are going to do with Requests and BeautifulSoup!

Here is the Hacker News login form and the associated DOM:

Hacker News login form

There are three <input> tags on this form, the first one has a type hidden with a name “goto” and the two others are the username and password.

If you submit the form in your Chrome browser, you will see that there is a lot going on: a redirect, and a cookie being set. This cookie will be sent by Chrome with every subsequent request so that the server knows you are authenticated.

Doing this with Requests is easy. It will handle redirects automatically for us, and handling cookies can be done with the Session object.

BeautifulSoup

The next thing we will need is BeautifulSoup, which is a Python library that will help us parse the HTML returned by the server, to find out if we are logged in or not.

Installation:

    pip install beautifulsoup4

So all we have to do is to POST these three inputs with our credentials to the /login endpoint and check for the presence of an element that is only displayed once logged in:

import requests
from bs4 import BeautifulSoup

BASE_URL = 'https://news.ycombinator.com'
USERNAME = ""
PASSWORD = ""

s = requests.Session()

data = {"goto": "news", "acct": USERNAME, "pw": PASSWORD}
r = s.post(f'{BASE_URL}/login', data=data)

soup = BeautifulSoup(r.text, 'html.parser')
if soup.find(id='logout') is not None:
    print('Successfully logged in')
else:
    print('Authentication Error')

In order to learn more about BeautifulSoup, we could try to extract every link on the homepage.

By the way, Hacker News offers a powerful API, so we're doing this as an example, but you should use the API instead of scraping it!

The first thing we need to do is to inspect the Hacker News's home page to understand the structure and the different CSS classes that we will have to select:

Hacker news's HTML

We can see that all posts are inside a <tr class="athing"> so the first thing we will need to do is to select all these tags. This can be easily done with:

    links = soup.findAll('tr', class_='athing')

Then for each link, we will extract its id, title, url and rank:

import requests
from bs4 import BeautifulSoup

r = requests.get('https://news.ycombinator.com')
soup = BeautifulSoup(r.text, 'html.parser')
links = soup.findAll('tr', class_='athing')

formatted_links = []

for link in links:
    data = {
        'id': link['id'],
        'title': link.find_all('td')[2].a.text,
        "url": link.find_all('td')[2].a['href'],
        # Use the current link's rank, not the first one in the list
        "rank": int(link.td.span.text.replace('.', ''))
    }
    formatted_links.append(data)

print(formatted_links)

As you saw, Requests and BeautifulSoup are great libraries to extract data and automate different things by posting forms. If you want to do large-scale web scraping projects, you could still use Requests, but you would need to handle lots of things yourself.

If you want to learn more about Python, BeautifulSoup and particularly CSS selectors, you can read this.

When you need to scrape a lot of webpages, there are many things you have to take care of:

  • finding a way to parallelize your code to make it faster
  • handling errors
  • storing results
  • filtering results
  • throttling your requests so you don't overload the server

Fortunately for us, tools exist that can handle those things for us.

grequests

While the requests package is easy to use, you might find it a bit slow if you have hundreds of pages to scrape.

Out of the box, this package only allows you to make synchronous requests, meaning that if you have 25 URLs to scrape, you will have to do them one by one.

So if one page takes 10 seconds to be fetched, it will take 25 * 10 seconds to fetch 25 pages.

import requests

# An array with 25 urls
urls = [...] 

for url in urls:
    result = requests.get(url)

The easiest way to speed up this process is to make several calls at the same time. Instead of sending every request sequentially, you can send requests in batches of 5: you send 5 requests simultaneously, wait for all of them to complete, then send another batch of 5, and so on until you don't have any more URLs to scrape. This way, you can send 25 requests in 5 batches of 5 requests, meaning all the URLs can be scraped in 5 * 10 seconds instead of 25 * 10 seconds. A thread-based version of this idea is sketched below.
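
Here is a minimal sketch of this idea using the standard library's thread pool; it keeps 5 requests in flight at a time rather than strict batches of 5, assuming the same urls list as above:

import requests
from concurrent.futures import ThreadPoolExecutor

# An array with 25 urls
urls = [...]

# 5 worker threads fetch the URLs concurrently, 5 at a time
with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(requests.get, urls))

print(results)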

Usually, this kind of behaviour is implemented using thread-based parallelism, as in the sketch above, but it can be tricky to get right for beginners. Fortunately, there is a version of the requests package that does all the hard work for us.

It's called grequests, for g + requests, the g standing for gevent, an asynchronous Python API widely used for web applications.

What this library allows us to do is send multiple requests at the same time, in a very easy and elegant way.

Here is how to send our 25 initial URLs in batches of 5:

# pip install grequests
import grequests

BATCH_LENGTH = 5

# An array with 25 urls
urls = [...] 
# Our empty results array
results = []

while urls:
    # get our first batch of 5 URLs
    batch = urls[:BATCH_LENGTH]
    # create a set of unsent Requests
    rs = (grequests.get(url) for url in batch)
    # send them all at the same time
    batch_results = grequests.map(rs)
    # appending results to our main results array
    results += batch_results
    # removing fetched URLs from urls
    urls = urls[BATCH_LENGTH:]

print(results)
# [<Response [200]>, <Response [200]>, ..., <Response [200]>, <Response [200]>]

And that's it. grequests is perfect for small scripts but is not suited for production code or high-scale web scraping. For that, we have Scrapy 👇.

4. Web Crawling Frameworks

Scrapy

Scrapy Logo

Scrapy is a powerful Python web scraping and web crawling framework. It provides many features to download web pages asynchronously, then process and save them. It handles multithreading, crawling (the process of going from link to link to find every URL on a website), sitemap crawling, and much more.

Scrapy also has an interactive mode called the Scrapy Shell. With the Scrapy Shell you can quickly test your scraping code, like XPath expressions or CSS selectors.

The downside of Scrapy is that the learning curve is steep: there is a lot to learn.

To follow up on our example about Hacker News, we are going to write a Scrapy Spider that scrapes the first 15 pages of results and saves everything in a CSV file.

You can easily install Scrapy with pip:

    pip install Scrapy

Then you can use the scrapy cli to generate the boilerplate code for our project:

    scrapy startproject hacker_news_scraper

Inside hacker_news_scraper/spiders we will create a new Python file with our Spider's code:

from bs4 import BeautifulSoup
import scrapy


class HnSpider(scrapy.Spider):
    name = "hacker-news"
    allowed_domains = ["news.ycombinator.com"]
    start_urls = [f'https://news.ycombinator.com/news?p={i}' for i in range(1, 16)]

    def parse(self, response):
        soup = BeautifulSoup(response.text, 'html.parser')
        links = soup.findAll('tr', class_='athing')

        for link in links:
            yield {
                'id': link['id'],
                'title': link.find_all('td')[2].a.text,
                "url": link.find_all('td')[2].a['href'],
                "rank": int(link.td.span.text.replace('.', ''))
            }

There are a lot of conventions in Scrapy. Here we define an array of starting URLs. The name attribute will be used to call our Spider with the Scrapy command line.

The parse method will be called on each URL in the start_urls array.

We then need to tune Scrapy a little bit so that our Spider behaves nicely with the target website. To do so, edit the project's settings.py file:

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5

You should always turn this on. It will make sure the target website is not slowed down by your spiders, by analyzing response times and adapting the number of concurrent requests accordingly.
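
If you want to go further, a few other standard Scrapy settings also help you stay polite; the values below are arbitrary examples:

# A base delay between requests to the same website, in seconds
DOWNLOAD_DELAY = 1
# Limit the number of concurrent requests per domain
CONCURRENT_REQUESTS_PER_DOMAIN = 4
# Average number of requests AutoThrottle should send in parallel
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0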

You can run this code with the Scrapy CLI and choose among different output formats (CSV, JSON, XML…):

    scrapy crawl hacker-news -o links.json

And that's it! You will now have all your links in a nicely formatted JSON file.

There is much more to say about this tool, so if you wish to learn more about it, don't hesitate to check out our dedicated blog post about web scraping with Scrapy.

PySpider

PySpider is an alternative to Scrapy, albeit a bit outdated. Its last release was in 2018, but it is still relevant because it does many things out of the box that Scrapy does not.

First, PySpider works well with JavaScript pages (SPAs, Ajax calls) because it comes with PhantomJS, a headless browsing library. In Scrapy you would need to install a middleware to do that.
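
For reference, here is a rough sketch of what a PySpider handler looks like, loosely based on PySpider's default template; fetch_type='js' is what tells it to render the page with PhantomJS:

from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    crawl_config = {}

    def on_start(self):
        self.crawl('https://news.ycombinator.com/',
                   callback=self.index_page,
                   fetch_type='js')  # render the page with PhantomJS

    def index_page(self, response):
        # response.doc is a PyQuery object, queried with CSS selectors
        return [a.attr.href for a in response.doc('a').items()]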

Also, it comes with a nice UI that makes it easy to monitor all your crawling jobs.

PySpider interface
PySpider interface

However, you might still prefer to use Scrapy for several reasons:

  • much better documentation than PySpider, with easy-to-understand guides
  • a built-in HTTP cache system that can really speed up your program
  • automatic HTTP authentication
  • 3XX redirection support, as well as HTML meta refresh redirects

5. Headless browsing

Selenium & Chrome

Scrapy is really nice for large-scale web scraping tasks, but it is not enough if you need to scrape a Single Page Application written with a JavaScript framework, because it won't be able to render the JavaScript code.

It can be challenging to scrape these SPAs because there are often lots of AJAX calls and websocket connections involved. If performance is an issue, you should always try to reproduce the JavaScript code, meaning manually inspecting all the network calls with your browser inspector and replicating the AJAX calls containing the interesting data.
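
As a rough, purely hypothetical sketch of that approach (the endpoint and parameters below are made up; the real ones would come from your browser's Network tab):

import requests

# Hypothetical JSON endpoint spotted in the browser's Network tab
r = requests.get(
    'https://example.com/api/products',
    params={'page': 1},
    headers={'X-Requested-With': 'XMLHttpRequest'},
)
print(r.json())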

In some cases, there are just too many asynchronous HTTP calls involved to get the data you want and it can be easier to just render the page in a headless browser.

Another great use case would be to take a screenshot of a page, and this is what we are going to do with the Hacker News homepage (again !)

You can install the selenium package with pip:

    pip install selenium

You will also need ChromeDriver. On macOS, you can install it with Homebrew:

    brew install chromedriver

Then we just have to import the webdriver from the selenium package, configure Chrome with headless=True, and set a window size (otherwise it is really small):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.headless = True
options.add_argument("--window-size=1920,1200")
driver = webdriver.Chrome(options=options, executable_path=r'/usr/local/bin/chromedriver')
driver.get("https://news.ycombinator.com/")
driver.save_screenshot('hn_homepage.png')
driver.quit()

You should get a nice screenshot of the homepage:

Hacker News's front page

You can do many more things with the Selenium API and Chrome, such as (see the sketch after this list):

  • executing JavaScript
  • filling forms
  • clicking on elements
  • extracting elements with CSS selectors / XPath expressions
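
For example, here is a minimal sketch, reusing the same setup as above, that extracts elements with a CSS selector and executes a bit of JavaScript in the page (the selector is an assumption and may need adapting if Hacker News changes its markup):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.headless = True
options.add_argument("--window-size=1920,1200")
driver = webdriver.Chrome(options=options, executable_path=r'/usr/local/bin/chromedriver')
driver.get("https://news.ycombinator.com/")

# Extract elements with a CSS selector
rows = driver.find_elements(By.CSS_SELECTOR, "tr.athing")
print(f"{len(rows)} posts on the front page")

# Execute JavaScript inside the page
page_height = driver.execute_script("return document.body.scrollHeight")
print(page_height)

driver.quit()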

Selenium and Chrome in headless mode is really the ultimate combination to scrape anything you want. You can automate anything that you could do with your regular Chrome browser.

The big drawback is that Chrome needs lots of memory / CPU power. With some fine-tuning you can reduce the memory footprint to 300-400 MB per Chrome instance, but you still need 1 CPU core per instance.

Don't hesitate to check out our in-depth article about Selenium and Python.

If you want to run several Chrome instances concurrently, you will need powerful servers (the cost goes up quickly) and constant monitoring of resources.

Robobrowser

RoboBrowser is a Pythonic library that will allow you to browse the web by wrapping requests and BeautifulSoup in an easy-to-use interface.

It is not a headless browser per se, because it does not rely on any web browser binary; instead, it's a lightweight library that allows you to write scripts as if you were executing them in a “browser-like” environment.

For example, if you want to log in to Hacker News, instead of manually crafting a request with Requests, you can write this kind of script that will populate the form and “press” the login button:

# pip install RoboBrowser
from robobrowser import RoboBrowser

browser = RoboBrowser()
browser.open('https://news.ycombinator.com/login')

# Get the login form
signin_form = browser.get_form(action='login')

# Fill it out (the Hacker News form fields are named 'acct' and 'pw')
signin_form['acct'].value = 'account'
signin_form['pw'].value = 'secret'

# Submit the form
browser.submit_form(signin_form)

As you can see, the code is really written as if you were manually doing the task in a real browser, even though it is not a real headless browsing library.

RoboBrowser is cool because its lightweight approach allows you to easily parallelize it on your computer. But because it's not using any real browser, it won't be able to deal with JavaScript execution like AJAX calls or Single Page Applications.

Unfortunately, its documentation is also very lightweight, and I would not recommend it for newcomers or for people not already used to the BeautifulSoup or Requests API.

Conclusion

Here is a quick recap table of every technology we discussed in this post. Do not hesitate to tell us in the comments if you know of some resources that you feel belong here.

Name                  socket     urllib3    requests    Scrapy     selenium
Ease of use           - - -      + +        + + +       + +        +
Flexibility           + + +      + + +      + +         + + +      + + +
Speed of execution    + + +      + +        + +         + + +      +
Common use case:

  • socket: writing low-level programming interfaces
  • urllib3: high-level applications that need fine control over HTTP (pip, AWS client, requests, streaming)
  • requests: calling an API; simple applications (in terms of HTTP needs)
  • Scrapy: crawling an important list of websites; filtering, extracting and loading scraped data
  • selenium: JS rendering; scraping SPAs; automated testing; programmatic screenshots

Learn more:

  • socket: official documentation; a great tutorial 👍
  • urllib3: official documentation; pip's usage of urllib3, very interesting
  • requests: official documentation; Requests' usage of urllib3
  • Scrapy: official documentation; Scrapy overview
  • selenium: official documentation; scraping SPAs

I hope you enjoyed this blog post. It was a quick introduction to the most-used Python tools for web scraping. In the next posts, we're going to go deeper into each individual tool and into topics like XPath and CSS selectors.

Happy Scraping
