Web Scraping using Selenium and Python

08 July 2021 (updated) | 9 min read

In the last tutorial we learned how to leverage the Scrapy framework to solve common web scraping problems. Today we are going to take a look at Selenium (with Python ❤️ ) in a step-by-step tutorial.

Selenium refers to a number of different open-source projects used for browser automation. It supports bindings for all major programming languages, including our favorite language: Python.

The Selenium API uses the WebDriver protocol to control a web browser, like Chrome, Firefox or Safari. The browser can run either locally or remotely.

At the beginning of the project (almost 20 years ago!) it was mostly used for cross-browser, end-to-end testing (acceptance tests).

Selenium is still used for testing today, but it has also become a general browser automation platform. And of course, it is used for web scraping!

Selenium is useful when you have to perform an action on a website such as:

  • Clicking on buttons
  • Filling forms
  • Scrolling
  • Taking a screenshot

It is also useful for executing Javascript code. Let's say that you want to scrape a Single Page Application, and you haven't found an easy way to directly call the underlying APIs. In this case, Selenium might be what you need.


We will use Chrome in our example, so make sure you have it installed on your local machine, and download the version of ChromeDriver that matches your Chrome version (ChromeDriver is the WebDriver binary Selenium uses to control Chrome).

To install the Selenium package, as always, I recommend that you create a virtual environment (for example using virtualenv) and then:

pip install selenium


Once you have downloaded both Chrome and Chromedriver and installed the Selenium package, you should be ready to start the browser:

from selenium import webdriver

DRIVER_PATH = '/path/to/chromedriver'
driver = webdriver.Chrome(executable_path=DRIVER_PATH)

This will launch Chrome in headful mode (like regular Chrome, which is controlled by your Python code). You should see a message stating that the browser is controlled by automated software.

Chrome can also run in headless mode (without any graphical user interface), which is handy when running it on a server. See the following example:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.headless = True

driver = webdriver.Chrome(options=options, executable_path=DRIVER_PATH)

The driver.page_source property returns the full page HTML code.

Here are two other interesting WebDriver properties:

  • driver.title gets the page's title
  • driver.current_url gets the current URL (this can be useful when there are redirections on the website and you need the final URL)
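
For example, here is a minimal sketch (reusing the driver from the snippet above; the URL is just an illustration):

driver.get('https://news.ycombinator.com')
print(driver.title)        # the page's <title>
print(driver.current_url)  # the final URL, after any redirects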

Locating Elements

Locating data on a website is one of the main use cases for Selenium, either for a test suite (making sure that a specific element is present/absent on the page) or to extract data and save it for further analysis (web scraping).

There are many methods available in the Selenium API to select elements on the page. You can use:

  • Tag name
  • Class name
  • IDs
  • XPath
  • CSS selectors

We recently published an article explaining XPath. Don't hesitate to take a look if you aren't familiar with XPath.

As usual, the easiest way to locate an element is to open your Chrome dev tools and inspect the element that you need. A cool shortcut for this is to highlight the element you want with your mouse and then press Ctrl + Shift + C or on macOS Cmd + Shift + C instead of having to right click + inspect each time:

Document Object Model


There are many ways to locate an element in Selenium. Let's say that we want to locate the h1 tag in this HTML:

        ... some stuff
        <h1 class="someclass" id="greatID">Super title</h1>

h1 = driver.find_element_by_tag_name('h1')
h1 = driver.find_element_by_class_name('someclass')
h1 = driver.find_element_by_xpath('//h1')
h1 = driver.find_element_by_id('greatID')

All these methods also have find_elements (note the plural) to return a list of elements.

For example, to get all anchors on a page, use the following:

all_links = driver.find_elements_by_tag_name('a')

Some elements aren't easily accessible with an ID or a simple class, and that's when you need an XPath expression. You also might have multiple elements with the same class (the ID is supposed to be unique).

XPath is my favorite way of locating elements on a web page. It's a powerful way to extract any element on a page, based on its absolute position in the DOM, or relative to another element.
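
For instance, here is a hedged sketch of a relative XPath query (the markup and the "Price" label are hypothetical):

# Find the <span> sitting right after a hypothetical label containing "Price"
price = driver.find_element_by_xpath("//label[contains(text(), 'Price')]/following-sibling::span")
print(price.text)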


WebElement

A WebElement is a Selenium object representing an HTML element.

There are many actions that you can perform on those HTML elements. Here are the most useful, with a short sketch after the list:

  • Accessing the text of the element with the property element.text
  • Clicking on the element with element.click()
  • Accessing an attribute with element.get_attribute('class')
  • Sending text to an input with: element.send_keys('mypassword')
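
Putting these together, here is a minimal sketch (the element IDs are hypothetical):

username_input = driver.find_element_by_id('username')  # hypothetical ID
username_input.send_keys('myusername')                  # type into the input
print(username_input.get_attribute('class'))            # read an attribute

submit_button = driver.find_element_by_id('submit')     # hypothetical ID
print(submit_button.text)                               # the element's text
submit_button.click()                                   # click it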

There are some other interesting methods like is_displayed(), which returns True if an element is visible to the user.

This can be useful for avoiding honeypots (like accidentally filling hidden inputs).

Honeypots are mechanisms used by website owners to detect bots. For example, if an HTML input has the attribute type=hidden like this:

<input type="hidden" id="custId" name="custId" value="">

This input value is supposed to be blank. If a bot visits a page and fills all of the inputs on a form with random values, it will also fill the hidden input. A legitimate user would never fill the hidden input, because it is not rendered by the browser.

That's a classic honeypot.
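
To steer clear of this trap, here is a minimal sketch using is_displayed():

# Only fill inputs that a real user could actually see
for input_element in driver.find_elements_by_tag_name('input'):
    if input_element.is_displayed():
        input_element.send_keys('some value')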

Full example

Here is a full example using the Selenium API methods we just covered.

We are going to log into Hacker News:

Hacker News login page

In our example, authenticating to Hacker News is not really useful on its own. However, you could imagine creating a bot to automatically post a link to your latest blog post.

In order to authenticate we need to:

  • Go to the login page using driver.get()
  • Select the username input using driver.find_element_by_* and then element.send_keys() to send text to the input
  • Follow the same process with the password input
  • Click on the login button using element.click()

Should be easy, right? Let's see the code:


USERNAME = 'your_username'  # replace with real credentials
PASSWORD = 'your_password'

driver.get('https://news.ycombinator.com/login')

driver.find_element_by_xpath("//input").send_keys(USERNAME)
driver.find_element_by_xpath("//input[@type='password']").send_keys(PASSWORD)
driver.find_element_by_xpath("//input[@value='login']").click()

Easy, right? Now there is one important thing that is missing here. How do we know if we are logged in?

We could try a couple of things:

  • Check for an error message (like "Wrong password")
  • Check for one element on the page that is only displayed once logged in.

So, we're going to check for the logout button. The logout button has the ID "logout" (easy)!

We can't just check if the element is None, because all of the find_element_by_* methods raise an exception if the element is not found in the DOM. So we have to use a try/except block and catch the NoSuchElementException:

from selenium.common.exceptions import NoSuchElementException

try:
    logout_button = driver.find_element_by_id("logout")
    print('Successfully logged in')
except NoSuchElementException:
    print('Incorrect login/password')

Taking a screenshot

We can easily take a screenshot of the current page using:

driver.save_screenshot('screenshot.png')

Note that a lot of things can go wrong when you take a screenshot with Selenium. First, you have to make sure that the window size is set correctly. Then, you need to make sure that every asynchronous HTTP call made by the frontend Javascript code has finished, and that the page is fully rendered.
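
If you do need to control the viewport before capturing, a minimal sketch:

# Set the window size so the rendered page matches what you expect
driver.set_window_size(1920, 1080)
driver.save_screenshot('screenshot.png')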

In our Hacker News case it's simple and we don't have to worry about these issues.

If you need to make screenshots at scale, feel free to try our new Screenshot API here.

Waiting for an element to be present

Dealing with a website that uses lots of Javascript to render its content can be tricky. These days, more and more sites are using frameworks like Angular, React and Vue.js for their front-end. These front-end frameworks are complicated to deal with because they fire a lot of AJAX calls.

If we had to worry about an asynchronous HTTP call (or many) to an API, there are two ways to solve this:

  • Use a time.sleep(ARBITRARY_TIME) before taking the screenshot.
  • Use a WebDriverWait object.

If you use time.sleep(), you will probably use an arbitrary value. The problem is that you're either waiting too long or not long enough. Also, the website might load slowly on your local WiFi connection but ten times faster on a cloud server. With the WebDriverWait method, you will wait the exact amount of time necessary for your element/data to be loaded.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

element = WebDriverWait(driver, 5).until(
    EC.presence_of_element_located((By.ID, "mySuperId"))
)

This will wait five seconds for an element located by the ID "mySuperId" to be loaded. There are many other interesting expected conditions like:

  • element_to_be_clickable
  • text_to_be_present_in_element
  • visibility_of_element_located

You can find more information about expected conditions in the Selenium documentation.

Executing Javascript

Sometimes, you may need to execute some Javascript on the page. For example, let's say you want to take a screenshot of some information, but you first need to scroll a bit to see it. You can easily do this with Selenium:

javascript = "window.scrollBy(0, 1000);"
driver.execute_script(javascript)
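
execute_script() can also return a value to your Python code, which is handy for reading things the WebDriver API doesn't expose directly. A small sketch:

# A "return" inside the script is passed back to Python
page_height = driver.execute_script("return document.body.scrollHeight")
print(page_height)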

Using a proxy with Selenium Wire

Unfortunately, Selenium's proxy handling is quite basic. For example, it can't handle proxies with authentication out of the box.

To solve this issue, you need to use Selenium Wire. This package extends Selenium's bindings and gives you access to all the underlying requests made by the browser. If you need to use Selenium with a proxy with authentication, this is the package you need.

pip install selenium-wire

This code snippet shows you how to quickly use your headless browser behind a proxy.

# Install the Python selenium-wire library:
# pip install selenium-wire
from seleniumwire import webdriver

proxy_username = "USER_NAME"
proxy_password = "PASSWORD"
proxy_url = "proxy.scrapingbee.com"  # host only; the scheme is added below
proxy_port = 8886

options = {
    "proxy": {
        "http": f"http://{proxy_username}:{proxy_password}@{proxy_url}:{proxy_port}",
        "verify_ssl": False,
    },
}

URL = "https://httpbin.org/headers?json"

driver = webdriver.Chrome(
    executable_path=DRIVER_PATH,
    seleniumwire_options=options,
)
driver.get(URL)

Blocking images and JavaScript

With Selenium, by using the correct Chrome options, you can block some requests from being made.

This can be useful if you need to speed up your scrapers or reduce your bandwidth usage.

To do this, you need to launch Chrome with the options below:

from selenium import webdriver

chrome_options = webdriver.ChromeOptions()

# These preferences block image and Javascript requests
chrome_prefs = {
    "profile.default_content_setting_values": {
        "images": 2,
        "javascript": 2,
    }
}
chrome_options.experimental_options["prefs"] = chrome_prefs

driver = webdriver.Chrome(
    executable_path=DRIVER_PATH,
    options=chrome_options,
)


I hope you enjoyed this blog post! You should now have a good understanding of how the Selenium API works in Python. If you want to know more about how to scrape the web with Python don't hesitate to take a look at our general Python web scraping guide.

Another interesting read is our guide to Puppeteer with Python. Puppeteer is an API to control Chrome; it's much more powerful than Selenium (and it's maintained directly by the Google team).

Selenium is often necessary to extract data from websites using lots of Javascript. The problem is that running lots of Selenium/Headless Chrome instances at scale is hard. This is one of the things we solve with ScrapingBee, our web scraping API.

Selenium is also an excellent tool to automate almost anything on the web.

If you perform repetitive tasks like filling forms or checking information behind a login form where the website doesn't have an API, it might be a good idea to automate them with Selenium. Just don't forget the obligatory xkcd about automation.


Kevin Sahin

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.