Which Python library is used for web scraping?

There are various Python libraries that can be used for web scraping, but the most popular ones are:

1. Requests:

Requests is an easy-to-use HTTP library. It hides the complexity of making HTTP/1.1 requests behind a simple API, so you can focus on the page you are scraping rather than on the request itself. In practice, it lets you fetch the HTML or JSON content of any page.

Here's an example of how to get the HTML code of ScrapingBee's blog:

>>> import requests
>>> r = requests.get('https://scrapingbee.com/blog')
>>> r.status_code
200
>>> r.headers['content-type']
'text/html; charset=utf-8'
>>> r.encoding
'utf-8'
>>> r.content
b'<!DOCTYPE html>\n<html lang="en">\n...'
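
Requests can also decode JSON responses directly. Here's a small sketch; it uses httpbin.org, a public HTTP testing service, as an example endpoint (an assumption for illustration, any JSON API would work the same way):

>>> import requests
>>> r = requests.get('https://httpbin.org/json')  # example endpoint, assumed for illustration
>>> r.status_code
200
>>> data = r.json()  # decode the JSON body into Python objects
>>> type(data)
<class 'dict'>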

2. Scrapy:

Scrapy is a fast, high-level web crawling and scraping framework that helps you extract data from pages and even store entire websites. It has a steeper learning curve than the other two libraries, which is why we suggest checking out this tutorial on getting started with Scrapy if you want to use it. A minimal spider sketch follows.
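
Here's a rough sketch of what a Scrapy spider looks like (the start URL and the h2::text CSS selector are illustrative assumptions, so adapt them to the page you are targeting):

import scrapy

class BlogSpider(scrapy.Spider):
    name = 'blog'
    # Illustrative start URL; point this at the site you want to crawl
    start_urls = ['https://scrapingbee.com/blog']

    def parse(self, response):
        # 'h2::text' is an assumed selector; adjust it to the page's markup
        for title in response.css('h2::text').getall():
            yield {'title': title}

You can run a standalone spider like this with scrapy runspider spider.py -o titles.json, without setting up a full Scrapy project.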

3. BeautifulSoup4: 

Beautiful Soup is a library that makes it easy to extract information from web pages. It parses HTML and XML documents and works well in combination with HTTP Python libraries like Requests, building a parse tree that can easily be iterated, searched, and modified.

Here's a simple example:

>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup("<p>Some HTML Code</p>", "html.parser")  # an explicit parser avoids a warning
>>> soup.p # Access the first <p> element
<p>Some HTML Code</p>
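
To show how the two libraries fit together, here's a short sketch that fetches a page with Requests and parses it with Beautiful Soup (the URL reuses the blog from the Requests example above; printing link targets is just an illustration):

import requests
from bs4 import BeautifulSoup

# Fetch the page with Requests, then hand the raw HTML to Beautiful Soup
r = requests.get('https://scrapingbee.com/blog')
soup = BeautifulSoup(r.content, 'html.parser')

# Print the target of every link on the page
for link in soup.find_all('a'):
    print(link.get('href'))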

You can find more on BeautifulSoup4 in this article: What does Beautifulsoup do in Python?
