How to Scrape Google Finance Using Python and ScrapingBee

04 September 2025 | 12 min read

Learning how to scrape Google Finance gives you access to real-time stock prices, company performance metrics, and other financial data. However, scraping stock information isn’t always simple, especially on platforms that receive so much traffic. Dynamic JavaScript elements, frequent layout changes, and IP restrictions all make it difficult for automated scrapers to extract consistent data.

This tutorial will teach you how to build a Google Finance scraper from scratch using Python and our versatile HTML API. We’ll cover everything – from setting up your environment to writing code that automatically handles JavaScript and any connection restrictions.

If you're a developer, analyst, or researcher looking to extract valuable financial data from Google Finance pages with as little effort as possible, this guide will walk you through the entire process of extracting and breaking down HTML content.

Quick Answer (TL;DR)

Understanding how to scrape Google Finance is a lot easier than it looks, and you can do it with simple API calls using our Python SDK. JavaScript rendering is enabled by default for our connections, and we also handle proxy rotation and CAPTCHAs for you. In just a few minutes, you will have an intuitive Google Finance API that provides clean JSON output of real-time stock data.
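
Here is a minimal sketch of what that looks like (the stock code is an example, and the full setup is covered below):

# Minimal sketch: fetch a Google Finance quote page through the ScrapingBee API
from scrapingbee import ScrapingBeeClient

client = ScrapingBeeClient(api_key='YOUR_API_KEY')
response = client.get(
    "https://www.google.com/finance/quote/AMZN:NASDAQ",
    params={'custom_google': 'True', 'premium_proxy': 'True'},
)
print(response.status_code)  # 200 means the page was fetched successfully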

How to Scrape Google Finance with ScrapingBee

If you’re new to web scraping and want to collect data from Google Finance, Python is the best language to begin with. As the world's most popular programming language, it has thousands of free tutorials online, so you can look up any rules or syntax details and accumulate enough knowledge to apply effective web scraping techniques very quickly.

Its clean, beginner-friendly syntax makes automation and data extraction straightforward for first-time developers. Python supports powerful external libraries, including our official SDK, which simplifies scraping dynamic content like real-time stock prices or company details. You can find more tips on its use in our Python Web Scraping Guide.

Set Up Your Environment

Before you start coding, build your Python environment by making sure Python 3.6 or later is installed on your system. You can install it using:

  • The official website: python.org

  • Microsoft Store (easy setup for Windows users)

  • A Linux package manager (for Ubuntu, Fedora, Arch, etc.)


Now, let's integrate the external Python libraries. Launch your Terminal (or Command Prompt on Windows) and use pip install <package-name> to add the tools you need. For scraping Google Finance with our tools, you’ll need the following packages:

  • scrapingbee – The official ScrapingBee Python SDK that simplifies API calls, supports JavaScript rendering, handles proxy rotation automatically, and removes the need for headless browsers when scraping publicly available data.

  • pandas – A powerful data analysis library that transforms raw JSON into structured DataFrames, making it simple to organize, clean, and analyze real-time stock data.

You can install the necessary libraries from your Terminal with a single command:

pip install scrapingbee pandas

Before diving into your web scraping script, there’s one last step: create or log into your ScrapingBee account to access the API. If you're new to our power tools, don't worry, as you won't need to make any financial commitments right out of the gate. Every new user gets a free trial with 1,000 credits valid for 7 days, which is a great way to explore how easy scraping Google Finance can be using our Python SDK.


Once you’ve signed up, you will land on our dashboard. Here, you’ll see a clear overview of your available credits, usage stats, and account settings.

Locate your unique API key in the top-right corner. It is essential for authenticating your requests and tracking usage, so you will need to include it in your script to start making API calls.


Now we can finally start working on our script. First, create a new directory where you’ll store all your scraping files. Inside that folder, create a new text file with a .py extension to indicate it’s a Python script. For example, name your file google_finance_scraper.py to keep things organized.

Next, open the script in your preferred text editor. Begin by importing the libraries you'll need to define your scraping logic. Then, we initialize our client within the "client" variable – this is where you will add your API key:

# Importing the ScrapingBee client from our Python SDK
from scrapingbee import ScrapingBeeClient
# pandas DataFrames for better data formatting
import pandas as pd

# Initializing our API client in the "client" variable
client = ScrapingBeeClient(api_key='YOUR_API_KEY')

Note: While a basic editor like Notepad can work, we highly recommend using an IDE such as VS Code or PyCharm. These tools highlight syntax, catch errors early, and make your coding process much smoother.
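
If you plan to share or commit your script, you may prefer to load the key from an environment variable rather than hardcoding it. Here's a small sketch of that alternative (the variable name SCRAPINGBEE_API_KEY is just an example):

# Optional: read the API key from an environment variable instead of hardcoding it
# (assumes you exported SCRAPINGBEE_API_KEY in your shell beforehand)
import os

from scrapingbee import ScrapingBeeClient

client = ScrapingBeeClient(api_key=os.environ["SCRAPINGBEE_API_KEY"])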

Once the libraries are imported, we can start working on the web scraping logic that will target Google Finance websites. The following line starts the definition of a "Scrape_Google_Finance" function that will store our API calls and other relevant parameters:

# Starts defining a function which contains Google Finance scraping logic
def Scrape_Google_Finance():

After that, we add a block of code which takes user input to extend the base Google Finance URL and target a specific page:

    # finance_term - code for a specific stock
    finance_term = input("Finance_term: ").strip()
    base_finance = "https://www.google.com/finance/quote/"
    # Appending the base URL with the code of a specific stock
    url = f"{base_finance}{finance_term}"
    # Printing the URL to check the result
    print(url)

Note: The indented lines following the "def Scrape_Google_Finance():" line are part of the function's body. In Python, indentation defines the scope of a function. Once the indentation ends, the function is complete. For example, entering AMZN:NASDAQ at the prompt produces the URL https://www.google.com/finance/quote/AMZN:NASDAQ.

Then, before the API call itself, we define additional parameters that will affect how the scraper accesses the website. We start with a "js_scenario" variable, which tells the browser to wait 3 seconds so all elements on the page can load before extracting information.

    js_scenario = {
        "instructions": [
            {"wait": 3000}
        ]
    }
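
A fixed delay is the simplest approach. If you prefer to wait for a specific element instead, js_scenario also accepts a "wait_for" instruction that pauses until a CSS selector appears. Here's a sketch using the container class we identify later in this guide:

    # Alternative: wait until the main data container appears instead of a fixed delay
    # ("div.Gfxi4" is the container selector used later in this tutorial)
    js_scenario = {
        "instructions": [
            {"wait_for": "div.Gfxi4"}
        ]
    }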

Make Your First API Request

To make sure our scraper can access Google Finance data, let's extract the raw HTML content first before parsing stock descriptions and other HTML elements. The "response" variable stores the result of a GET request sent to our URL:

    response = client.get(
        url,
        params={
            "js_scenario": js_scenario,
            'custom_google': 'True',
            'premium_proxy': 'True',
        }
    )

The call includes a "params" dictionary, so let's break down what it does:

  • "js_scenario": js_scenario – Enables JavaScript rendering by executing custom browser-like instructions.

  • 'custom_google': 'True' – Activates our API's specialized handling for Google pages, improving accuracy and reliability when scraping search or finance data from Google's dynamic HTML structure.

  • 'premium_proxy': 'True' – Routes your request through ScrapingBee’s residential-quality IP addresses to avoid blocks, rate limits, and CAPTCHA challenges — especially helpful on Google Finance.

After that, we print the HTTP status code of the response variable (200 indicates a successful connection), followed by the contents of the "result" variable. The final line, placed outside the function body, calls the function so the script actually runs:

    # Check the HTTP status code to make sure the scraper accessed the URL
    print(response.status_code)
    result = response.text
    print(result)

# Call the function
Scrape_Google_Finance()

If everything was done correctly, your output should look like this:

[Screenshot: the raw HTML response]

Not much to work with, right? Let's introduce the "extract_rules" dictionary and add it to the "params" variable to extract clean data from Google Finance, which you will be able to access and read easily via Excel or Google Sheets.

Extract and Clean Data from Google Finance

Our Python SDK has built-in tools for parsing HTML content through CSS selectors. To find the right selectors for stock titles and other data points like day range, year range, and market cap, we must first inspect the page manually.

Go to your browser and open the Google Finance page. Let's stick with the same example: https://www.google.com/finance/quote/AMZN:NASDAQ. Press F12 or right-click the browser page and select "Inspect" to open Developer Tools. Once there, find the HTML element that serves as a container for all relevant information on the site.

[Screenshot: Developer Tools highlighting the main container element]

Then, let's go back to our "Scrape_Google_Finance" function. After the "js_scenario" variable, insert another dictionary – "extract_rules". Here we will define parsing rules for the scraper, without needing additional external libraries like BeautifulSoup. If you want to use it instead, check out our BeautifulSoup Tutorial.

Let's start defining the dictionary and insert the main selector:

    extract_rules = {
        "Finance_result": {
            # The main selector for picking Google Finance data points,
            # manually extracted as shown in the image above
            "selector": "div.Gfxi4",

Now, the hard part: identify CSS selectors for each data point. With them, we will expand the extract_rules with assigned column names. Before adding the selectors to your code, make sure to test them in the Developer Tools search bar.

Note: In Developer Tools, you can find the HTML tags that contain them, and then right-click > Copy > Copy selector. However, Google Finance pages are a bit tricky because they dynamically load HTML elements, so class names mentioned in CSS selectors can change. That is why some selectors are based on the "jsname" attribute.

After going through the HTML code and picking appropriate selectors, here is a complete collection of extract rules that will be displayed in a JSON file:

    extract_rules = {
        "Finance_result": {
            # The main selector picking out each search result
            "selector": "div.Gfxi4",
            # Extracts a list of parameters within the main selector
            "type": "list",
            "output": {
                # Output columns
                "Price": 'div[jsname="ip75Cb"] div.YMlKec',
                "Difference today": 'div[jsname="CGyduf"] span.P2Luy',
                "Previous close": 'div.eYanAe > div.gyFHrc:nth-of-type(2) > div',
                "Day Range": 'div.eYanAe > div.gyFHrc:nth-of-type(3) > div',
                "Year Range": 'div.eYanAe > div.gyFHrc:nth-of-type(4) > div',
                "Market Cap": 'div.eYanAe > div.gyFHrc:nth-of-type(5) > div',
                "Avg Volume": 'div.eYanAe > div.gyFHrc:nth-of-type(6) > div',
                "P/E Ratio": 'div.eYanAe > div.gyFHrc:nth-of-type(7) > div'
            }
        }
    }

Now, change the result variable to parse the response as JSON:

    result = response.json()

Once that is taken care of, try running your scraper once again. The displayed financial data is now clean and easy to read.

[Screenshot: structured JSON output]
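
If you'd like to verify the structure before converting it to a table, here is a quick sketch you can drop inside the function after the request (assuming the extract rules above):

    # Peek at the parsed structure: a dict with a "Finance_result" list,
    # where each entry maps the column names from extract_rules to scraped strings
    data = response.json()
    print(list(data.keys()))                       # e.g. ['Finance_result']
    print(list(data["Finance_result"][0].keys()))  # e.g. ['Price', 'Difference today', ...]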

We can make it even better by transforming the result into a Pandas DataFrame, which will take the information in the "Finance_result" list and transform its key values into columns. Add the following line to transform the "result" variable:

    result = pd.DataFrame(result['Finance_result'])

Then, print the result once again, making sure the print command comes after this conversion. Your outcome should look like this:

[Screenshot: the result printed as a pandas DataFrame]

If we export it to a CSV file, your scraped data can be appended and analyzed by external tools. Add the following line to store the output in a "result.csv" file:

    result.to_csv("result.csv", index=False)

[Screenshot: result.csv opened in a spreadsheet]

By scraping additional URLs, or targeting the same stock at different time intervals, you can append new rows to the file and watch each column expand with additional entries. This way, you can compare the current stock price to historical data, track dividend yield, or just keep collecting prices to make informed investment decisions.
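
For example, here's a hedged sketch of appending each run to the same file with a timestamp column, using the "result" DataFrame from above:

# Optional: append each run to result.csv instead of overwriting it,
# with a timestamp column so runs can be compared over time
import os
from datetime import datetime, timezone

result["Scraped at"] = datetime.now(timezone.utc).isoformat()
result.to_csv(
    "result.csv",
    mode="a",                                 # append instead of overwrite
    header=not os.path.exists("result.csv"),  # write the header row only once
    index=False,
)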

Sample Code for Parsing Data

Want to test the scraper for yourself? There is no better learning practice than following the tutorial while having the full, working code at your disposal.

The following example of a Google Finance scraper covers all the steps mentioned in this guide. It contains the entire scraping logic while leaving room for comfortable upscaling and parameter control through our powerful tools. While we take care of unrestricted access to multiple Google Finance URLs, you can tweak the code to add stock descriptions, financial news, and other relevant data for understanding market trends.

# Importing the ScrapingBee client from our Python SDK
from scrapingbee import ScrapingBeeClient
# pandas DataFrames for better data formatting
import pandas as pd

# Initializing our API client in the "client" variable
client = ScrapingBeeClient(api_key='YOUR_API_KEY')


# Defines a function which contains the Google Finance scraping logic
def Scrape_Google_Finance():
    # Build the target URL from user input
    finance_term = input("Finance_term: ").strip()
    base_finance = "https://www.google.com/finance/quote/"
    url = f"{base_finance}{finance_term}"
    print(url)

    # Wait 3 seconds so dynamic elements can load before extraction
    js_scenario = {
        "instructions": [
            {"wait": 3000}
        ]
    }

    extract_rules = {
        "Finance_result": {
            # The main selector picking out each search result
            "selector": "div.Gfxi4",
            # Extracts a list of parameters within the main selector
            "type": "list",
            "output": {
                # Output columns: double-check and update these selectors, as Google may change them
                "Price": 'div[jsname="ip75Cb"] div.YMlKec',
                "Difference today": 'div[jsname="CGyduf"] span.P2Luy',
                "Previous close": 'div.eYanAe > div.gyFHrc:nth-of-type(2) > div',
                "Day Range": 'div.eYanAe > div.gyFHrc:nth-of-type(3) > div',
                "Year Range": 'div.eYanAe > div.gyFHrc:nth-of-type(4) > div',
                "Market Cap": 'div.eYanAe > div.gyFHrc:nth-of-type(5) > div',
                "Avg Volume": 'div.eYanAe > div.gyFHrc:nth-of-type(6) > div',
                "P/E Ratio": 'div.eYanAe > div.gyFHrc:nth-of-type(7) > div'
            }
        }
    }

    # GET API call parameters
    response = client.get(
        url,
        params={
            "extract_rules": extract_rules,
            "js_scenario": js_scenario,
            'custom_google': 'True',
            'premium_proxy': 'True',
        }
    )

    # Checks the HTTP status code
    print(response.status_code)
    result = response.json()

    # Transforms the result into a pandas DataFrame
    result = pd.DataFrame(result['Finance_result'])
    print(result)
    # Stores the output in a CSV file
    result.to_csv("result.csv", index=False)


Scrape_Google_Finance()

Best Practices for Google Finance Scraping

While there are many popular websites where extracting prices is much more intuitive, you will still encounter sources like Google that try to complicate scraping efforts through CSS changes and IP blocking.

Scraping Google Finance reliably comes down to stability and restraint. Our API allows you to add a js_scenario, so the scraper can interact with the platform until prices and stats fully populate. To avoid persistent restrictions, enable automatic proxy rotation with the "premium_proxy" parameter to avoid IP bans, and space out requests with exponential backoff to respect rate limits.
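
As an illustration, here's a simple sketch of exponential backoff wrapped around the client.get() call from earlier (the attempt count and delays are arbitrary starting points):

import time

def get_with_backoff(client, url, params, max_attempts=4):
    # Retry the request, doubling the pause after each failed attempt
    delay = 2  # seconds before the first retry
    for attempt in range(1, max_attempts + 1):
        response = client.get(url, params=params)
        if response.status_code == 200:
            return response
        if attempt < max_attempts:
            time.sleep(delay)
            delay *= 2
    return response  # caller can inspect the final status code

You could then call get_with_backoff(client, url, params) in place of the direct client.get() call.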

If the platform blocks your entry, you can randomize headers (User-Agent, Accept-Language) and keep request payloads minimal to reduce the fingerprinting that leads to IP blocking, as in the sketch below. For more tips, check out our informative blog on scraping without getting blocked.
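
Here's a sketch of that idea, reusing the url and client from the tutorial's function and assuming the SDK forwards custom headers passed through its headers argument (the User-Agent strings below are examples):

import random

# Example header pools; rotate them so consecutive requests don't look identical
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]
languages = ["en-US,en;q=0.9", "en-GB,en;q=0.8"]

response = client.get(
    url,
    params={'custom_google': 'True', 'premium_proxy': 'True'},
    headers={
        "User-Agent": random.choice(user_agents),
        "Accept-Language": random.choice(languages),
    },
)

Happy scraping!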

Frequently Asked Questions (FAQs)

Is It Legal to Scrape Google Finance?

Yes, scraping publicly available data from Google Finance is legal. However, large-scale or commercial scraping may violate Google’s Terms of Service and lead to your access to the site being restricted.

How Often Can I Scrape Google Finance?

Google actively monitors automated traffic. Too many rapid requests can trigger rate limits or blocks, forcing you back into time-consuming manual scraping. Thankfully, with our API, you can rotate proxies and throttle request frequency to avoid detection.

Can I Get JSON Instead of HTML?

Yes. Our API supports extract rules that let you target specific CSS selectors and return clean, structured JSON. This means you can pull only the fields you need without dealing with the full raw HTML response.

What Happens If My Request Fails?

Occasional failures happen due to timeouts, CAPTCHAs, or layout changes. You can inspect why your request failed by checking its status code (e.g., response.status_code). Our API minimizes these issues with automatic retries, proxy rotation, and CAPTCHA handling.

Kevin Sahin

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.