In the last tutorial we learned how to leverage the Scrapy framework to solve common web scraping problems. Today we are going to take a look at Selenium (with Python ❤️ ) in a step-by-step tutorial.
Selenium refers to a number of different open-source projects used for browser automation. It supports bindings for all major programming languages, including our favorite language: Python.
The Selenium API uses the WebDriver protocol to control a web browser, like Chrome, Firefox or Safari. The browser can run either locally or remotely.
At the beginning of the project (almost 20 years ago!) it was mostly used for cross-browser, end-to-end testing (acceptance tests).
Now it is still used for testing, but also as a general browser automation platform. And of course, it is used for web scraping!
Selenium is useful when you have to perform an action on a website, such as:

- clicking buttons
- filling out forms
- scrolling through a page
- taking a screenshot
We will use Chrome in our example, so make sure you have it installed on your local machine, along with a version of Chromedriver that matches your Chrome version.
To install the Selenium package, as always, I recommend that you create a virtual environment (for example using virtualenv) and then:
```bash
pip install selenium
```
Once you have downloaded both Chrome and Chromedriver and installed the Selenium package, you should be ready to start the browser:
```python
from selenium import webdriver

DRIVER_PATH = '/path/to/chromedriver'
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
driver.get('https://google.com')
```
This will launch Chrome in headful mode (like regular Chrome, except that it is controlled by your Python code). You should see a message stating that the browser is being controlled by automated software.
To run Chrome in headless mode (without any graphical user interface), which is what you typically want on a server, set the headless option:
```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.headless = True
options.add_argument("--window-size=1920,1200")

driver = webdriver.Chrome(options=options, executable_path=DRIVER_PATH)
driver.get("https://www.nintendo.com/")
print(driver.page_source)
driver.quit()
```
driver.page_source will return the full page HTML code.
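For instance, here is a minimal sketch that saves the rendered HTML to disk for later parsing (the filename is arbitrary):

```python
# Persist the rendered HTML so it can be parsed offline later
with open('page.html', 'w') as f:
    f.write(driver.page_source)
```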
Here are two other interesting WebDriver properties:

- driver.title gets the page's title
- driver.current_url gets the current URL (this can be useful when there are redirections on the website and you need the final URL)
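A quick sketch of both properties in action (the URL is just an example):

```python
driver.get('https://google.com')
print(driver.title)        # the page's <title>
print(driver.current_url)  # the final URL, after any redirects
```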
Locating data on a website is one of the main use cases for Selenium, either for a test suite (making sure that a specific element is present/absent on the page) or to extract data and save it for further analysis (web scraping).
There are many methods available in the Selenium API to select elements on the page. You can use:

- tag names
- class names
- IDs
- XPath expressions
- CSS selectors
We recently published an article explaining XPath. Don't hesitate to take a look if you aren't familiar with XPath.
As usual, the easiest way to locate an element is to open your Chrome dev tools and inspect the element that you need. A cool shortcut is to highlight the element you want with your mouse and then press Ctrl + Shift + C (Cmd + Shift + C on macOS), instead of having to right-click and choose Inspect each time.
There are many ways to locate an element in Selenium. Let's say that we want to locate the h1 tag in this HTML:
```html
<html>
    <head>
        ... some stuff
    </head>
    <body>
        <h1 class="someclass" id="greatID">Super title</h1>
    </body>
</html>
```
```python
h1 = driver.find_element_by_tag_name('h1')           # by tag name
h1 = driver.find_element_by_class_name('someclass')  # by class name
h1 = driver.find_element_by_xpath('//h1')            # by XPath
h1 = driver.find_element_by_id('greatID')            # by ID
```
All these methods also have a find_elements counterpart (note the plural) that returns a list of matching elements.
For example, to get all anchors on a page, use the following:
```python
all_links = driver.find_elements_by_tag_name('a')
```
Some elements aren't easily accessible with an ID or a simple class, and that's when you need an XPath expression. You also might have multiple elements with the same class (the ID is supposed to be unique).
XPath is my favorite way of locating elements on a web page. It's a powerful way to extract any element on a page, based on its absolute position in the DOM or relative to another element.
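For example, here is a sketch of an absolute expression and a relative one (the div class is hypothetical):

```python
# Absolute: walks the DOM from the root; brittle if the layout changes
h1 = driver.find_element_by_xpath('/html/body/h1')

# Relative: finds an <a> anywhere inside a <div> with a given class
link = driver.find_element_by_xpath("//div[@class='content']//a")
```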
WebElement is a Selenium object representing an HTML element.
There are many actions that you can perform on those HTML elements; here are the most useful:

- accessing the text with element.text
- clicking with element.click()
- accessing an attribute with element.get_attribute('href')
- sending text to an input with element.send_keys('some text')
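Here is a quick sketch of those actions on the first link of a page (assuming one exists):

```python
link = driver.find_element_by_tag_name('a')

print(link.text)                   # the visible text of the element
print(link.get_attribute('href'))  # the value of its href attribute
link.click()                       # follow the link
```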
There are some other interesting methods, like is_displayed(), which returns True if an element is visible to the user. This can help you avoid honeypots (like hidden inputs that you should not fill).
Honeypots are mechanisms used by website owners to detect bots. For example, consider an HTML input with the attribute type="hidden", like this one:
```html
<input type="hidden" id="custId" name="custId" value="">
```
This input value is supposed to stay blank. If a bot visits the page and fills all of the inputs on a form with random values, it will also fill the hidden input. A legitimate user would never fill the hidden input, because it is not rendered by the browser.
That's a classic honeypot.
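A minimal sketch of how to steer clear of this trap, assuming you are filling a form whose fields you don't know in advance:

```python
# Only fill inputs that are actually visible to the user,
# so hidden honeypot fields stay empty
for input_el in driver.find_elements_by_tag_name('input'):
    if input_el.is_displayed():
        input_el.send_keys('some value')
```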
Here is a full example using the Selenium API methods we just covered.
We are going to log into Hacker News:
In our example, authenticating to Hacker News is not really useful on its own. However, you could imagine creating a bot to automatically post a link to your latest blog post.
In order to authenticate we need to:

- go to the login page using driver.get()
- select the username input using driver.find_element_by_* and use element.send_keys() to send text to the input
- follow the same process with the password input
- click on the login button using element.click()
Should be easy, right? Let's see the code:
driver.get("https://news.ycombinator.com/login") login = driver.find_element_by_xpath("//input").send_keys(USERNAME) password = driver.find_element_by_xpath("//input[@type='password']").send_keys(PASSWORD) submit = driver.find_element_by_xpath("//input[@value='login']").click()
Easy, right? Now there is one important thing missing here: how do we know if we are logged in?
We could try a couple of things:

- checking for an error message on the page
- checking for an element that only appears once we are logged in, like the logout button
So, we're going to check for the logout button. The logout button has the ID “logout” (easy)!
We can't just check if the element is None, because all of the find_element_by_* methods raise an exception if the element is not found in the DOM. So we have to use a try/except block and catch the NoSuchElementException:
```python
# Don't forget this import:
from selenium.common.exceptions import NoSuchElementException

try:
    logout_button = driver.find_element_by_id("logout")
    print('Successfully logged in')
except NoSuchElementException:
    print('Incorrect login/password')
```
We could also easily take a screenshot using the save_screenshot() method:
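```python
# The filename is up to you
driver.save_screenshot('screenshot.png')
```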
In our Hacker News case it's simple: the page is fully rendered when it arrives, so we don't have to worry about timing issues.
If we had to worry about an asynchronous HTTP call (or many) to an API, there are two ways to solve this:

- use time.sleep(ARBITRARY_TIME) before taking the screenshot
- use WebDriverWait
If you use time.sleep(), you will probably use an arbitrary value. The problem is that you are either waiting too long or not long enough. Also, the website can load slowly on your local Wi-Fi connection but be ten times faster on your cloud server. With the WebDriverWait method, you wait exactly as long as necessary for your element/data to be loaded:
```python
# Don't forget these imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

try:
    element = WebDriverWait(driver, 5).until(
        EC.presence_of_element_located((By.ID, "mySuperId"))
    )
finally:
    driver.quit()
```
This will wait up to five seconds for an element with the ID "mySuperId" to appear in the DOM. There are many other interesting expected conditions, like:

- element_to_be_clickable
- text_to_be_present_in_element
- visibility_of_element_located
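For example, here is a sketch that waits for a (hypothetical) button to become clickable before clicking it:

```python
button = WebDriverWait(driver, 5).until(
    EC.element_to_be_clickable((By.ID, "mySuperButton"))
)
button.click()
```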
You can find more information about this in the Selenium documentation.
I hope you enjoyed this blog post! You should now have a good understanding of how the Selenium API works in Python. If you want to know more about how to scrape the web with Python don't hesitate to take a look at our general Python web scraping guide.
Selenium is also an excellent tool to automate almost anything on the web.
If you perform repetitive tasks, like filling forms or checking information behind a login form where the website doesn't have an API, it may be a good idea to automate them with Selenium. Just don't forget this xkcd: