How to Scrape Google Maps: A Step-by-Step Guide

22 August 2025 | 11 min read

Need business leads or location data from Google Maps but frustrated by constant CAPTCHAs, IP blocks, or unreliable scraping scripts? Scraping is one of the fastest ways to gather high-value information, but Google’s aggressive anti-bot measures turn large-scale data collection into a real challenge.

Access to business names, addresses, ratings, and phone numbers is too valuable to ignore, so users keep finding ways around Google’s automation blocks. But how exactly do they do it?

The answer lies in using a tool that handles JavaScript rendering and proxy rotation for you – like ScrapingBee! In this guide, you can learn about the most reliable way to scrape Google Maps with built-in JavaScript rendering and automatic proxy management.

Let us do the heavy lifting for you.

Step-by-Step: Scraping Google Maps with ScrapingBee

ScrapingBee ensures consistent access to Google Maps data without IP blocks, CAPTCHAs, or broken scripts. With built-in JavaScript rendering and automatic proxy control, here is how you can extract structured data from Google Maps with our scraper API.

Get your API key

To use ScrapingBee, you’ll need an API key. This token authenticates your requests and links them to your account.

  1. Sign up at ScrapingBee
  2. Navigate to your dashboard
  3. Copy your API key from the account section

Set up your environment with ScrapingBee

Let's start the scraping process by setting up your scraping environment with Python. If Python is not installed, head over to python.org/downloads and choose the installer for your operating system.

Windows tip: check the box that says “Add Python to PATH” during installation if you want to run Python from any folder.

Once completed, verify your installation on a terminal (Command Prompt on Windows) by running:

python --version

If installation is successful, you will see your Python version. Then use pip (Python’s package manager) to install the ScrapingBee library from PyPI:

pip install scrapingbee

Now we can finally start working on your Google scraper script. Start by importing the ScrapingBee HTML client, which simplifies API calls and error handling:

from scrapingbee import ScrapingBeeClient

Our client simplifies the integration of your Google Maps data scraper. Now all we have to do is define three objects:

  • client: An instance of the imported ScrapingBeeClient class, which will make a GET request for Google Maps results.
  • response: An object that stores the returned data (it behaves like a requests.Response).
  • params: A dictionary of parameters that defines how the page is fetched.

After putting everything together, the core of your scraper should look like this:

from scrapingbee import ScrapingBeeClient

client = ScrapingBeeClient(api_key='YOUR_API_KEY')

response = client.get(
    'YOUR_URL',
    params={
        # your parameters go here
    }
)

Build your Google Maps search query

To start scraping Google Maps, create a search URL by combining your keyword and location. For example, here is a URL for targeting restaurants in Paris:

https://www.google.com/maps/search/restaurants+in+Paris

This URL is passed as a parameter in your ScrapingBee API call, which extracts the information with the help of JavaScript rendering. Let's add it to your script:

from scrapingbee import ScrapingBeeClient

client = ScrapingBeeClient(api_key='YOUR_API_KEY')

response = client.get(
    'https://www.google.com/maps/search/restaurants+in+Paris',
    params={
        'custom_google': True
    }
)

Make your API call with render_js=true

Google Maps relies heavily on JavaScript to load business listings and location data – the search results we are trying to extract. When you open a search URL like “restaurants in Paris,” the publicly available data is rendered dynamically in the browser, which defeats traditional scrapers that only fetch static HTML content.

By adding the render_js parameter, the scraper uses a headless browser to access the JavaScript‑powered page. With this setting, our API can return the fully rendered content, including business names, addresses, ratings, and more – making it ideal for scraping Google Maps.

The current version of the ScrapingBee HTML API has JavaScript rendering enabled by default, but for clarity and best practice, we will include render_js=true in our requests throughout this tutorial. Let's also add two print statements to show the status code and the returned content. The following script extracts data from the Google Maps website:

from scrapingbee import ScrapingBeeClient

client = ScrapingBeeClient(api_key='YOUR_API_KEY')

response = client.get(
    'https://www.google.com/maps/search/restaurants+in+Paris',
    params={
        'custom_google': True,
        'render_js': True
    }
)

print('Response HTTP Status Code:', response.status_code)
print('Response HTTP Response Body:', response.content)

Parse the returned HTML with selectors

Once your script renders the Google Maps page and returns fully loaded HTML, it will look like a mess. To transform the data into a readable and understandable format, you can parse it with a library like BeautifulSoup, which lets you extract elements by tag name, attribute, or CSS selector (or by XPath, if you pair it with lxml), automatically cleaning up the extracted content.

Go back to your terminal or CMD and use pip to install BeautifulSoup for data parsing:

pip install beautifulsoup4

Now, import the BeautifulSoup library at the beginning of your web scraping script:

from bs4 import BeautifulSoup

Right below our definition of the GET request, add a line that will parse the extracted HTML response:

# Parse the HTML response
soup = BeautifulSoup(response.content, 'html.parser')

After a manual inspection of the rendered HTML code in the browser, we can identify which HTML elements contain the public data we are looking for. For example, many business names appear in the aria-label attribute within existing anchor tags <a>.

At this moment, the parsed data is contained within the soup object. Let's add a line that will define the results object, the result of BeautifulSoup's find_all method:

# Find all <a> tags with an aria-label (usually business names)
results = soup.find_all('a', attrs={'aria-label': True})

To finish things up, add a for loop to print extracted business names:

print("\nExtracted Business Names:")
for tag in results:
    name = tag['aria-label']
    print(name)

Note: Even with render_js=True, some listings may load after the initial page render. Our API captures a snapshot of the page once it’s “visually complete”, but for Google Maps, that might be too early for all listings to appear. Add a short wait with the wait parameter so everything loads before the snapshot.

After putting everything together, your end result should look like this:

from scrapingbee import ScrapingBeeClient
from bs4 import BeautifulSoup

# Set up the ScrapingBee client
client = ScrapingBeeClient(api_key='YOUR_API_KEY')

# Request rendered Google Maps search results
response = client.get(
    'https://www.google.com/maps/search/restaurants+in+Paris',
    params={
        'custom_google': True,
        'render_js': True,
        'wait': 3000  # wait 3 seconds before snapshot
    }
)

# Print HTTP status
print('Response HTTP Status Code:', response.status_code)

# Parse the HTML response
soup = BeautifulSoup(response.content, 'html.parser')

# Find all <a> tags with an aria-label
results = soup.find_all('a', attrs={'aria-label': True})

print("\nExtracted Business Names:")
for tag in results:
    name = tag['aria-label']
    print(name)

After running your script, the loop will print the aria-label value of every retrieved <a> tag – the business names from the rendered page.

Note: The example above targets only one attribute. With more fields, parsing becomes harder, and Google is known to change class names and other markup to deter scraping. Check the BeautifulSoup documentation and manually inspect the target URL to extract your desired fields reliably.

Export and Use the Scraped Data

Once you’ve successfully scraped business listings from Google Maps using our API, the next step is to clean, organize, and save that data as actionable insights. Exporting your results into structured formats like CSV or JSON lets you store the information for analysis.

Python makes this process straightforward. With tools like BeautifulSoup, pandas, and regex, you can extract specific fields (like names and links), format them consistently, and export everything in a few lines of code.

Clean and structure the extracted data

Before exporting, make sure your data is well-structured. Use methods like .strip() to remove extra whitespace, .replace() to clean up unwanted characters, and .lower() or .title() to standardize text casing. You can also use re.sub() (from the re module) for pattern-based cleaning, like removing symbols or extracting numbers. Here is an example of how you can remove leading and trailing whitespace from extracted names with .strip():

name = tag['aria-label'].strip()
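Putting those methods together, here is a minimal sketch of a hypothetical clean_name helper (the function name and the exact character whitelist are our own choices – adjust the patterns to your data):

```python
import re

def clean_name(raw: str) -> str:
    """Trim, collapse whitespace, and drop stray symbols from a scraped name."""
    text = raw.strip()
    text = re.sub(r'\s+', ' ', text)            # collapse runs of whitespace
    text = re.sub(r"[^\w\s&'.,-]", '', text)    # keep letters, digits, and common punctuation
    return text

print(clean_name('  Café   de  Flore!! '))  # Café de Flore
```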

Export to CSV or JSON with pandas

pandas is a powerful Python library for building DataFrames and exporting extracted information to CSV or JSON. Let's start by installing pandas with pip:

pip install pandas

Import pandas near the top of your script:

import pandas as pd

Before the extraction loop, define an empty list that you’ll turn into a DataFrame. Add a loop that includes the URL within the <a> tag and append dictionaries to the list:

businesses = []
for tag in results:
    name = tag['aria-label'].strip()
    href = tag.get('href', '')
    url = f"https://www.google.com{href}" if href.startswith('/') else href
    businesses.append({'name': name, 'url': url})

Now, instead of a list of names, you have a list of dictionaries. Create a pandas DataFrame and export Google Maps data:

# Create DataFrame and export to CSV and JSON
df = pd.DataFrame(businesses)
df.to_csv("restaurants_paris.csv", index=False)
df.to_json("restaurants_paris.json", orient="records", indent=2)

After combining everything, you will have your first Google Maps scraper that can target specific areas within the rendered website. Congrats! Here is the complete script:

import pandas as pd
from scrapingbee import ScrapingBeeClient
from bs4 import BeautifulSoup

# Set up the ScrapingBee client
client = ScrapingBeeClient(api_key='YOUR_API_KEY')

# Request rendered Google Maps search results
response = client.get(
    'https://www.google.com/maps/search/restaurants+in+Paris',
    params={
        'custom_google': True,
        'render_js': True,
        'wait': 3000
    }
)

# Parse the HTML response
soup = BeautifulSoup(response.content, 'html.parser')

# Extract business names and links
businesses = []
results = soup.find_all('a', attrs={'aria-label': True})

for tag in results:
    name = tag['aria-label'].strip()
    href = tag.get('href', '')
    url = f"https://www.google.com{href}" if href.startswith('/') else href
    businesses.append({'name': name, 'url': url})

# Create DataFrame and export to CSV and JSON
df = pd.DataFrame(businesses)
df.to_csv("restaurants_paris.csv", index=False)
df.to_json("restaurants_paris.json", orient="records", indent=2)

print("Data exported to 'restaurants_paris.csv' and 'restaurants_paris.json'")

If you want to reverse the process, check out our blog article on how to parse JSON data with Python.

What You Can Extract from Google Maps

The ScrapingBee API renders Google Maps pages fully, allowing you to extract key business data directly from the visible HTML. By targeting specific HTML attributes and text nodes, you can pull valuable information like names, categories, ratings, contact details, and even geographic coordinates.

Business names and categories

Business names can appear in <a>, <h3>, or <span> elements with accessible labeling, depending on the layout. Categories like “French restaurant” or “Coffee shop” are found nearby, often as small text or tooltips. With ScrapingBee's rendered output, these elements are easy to locate, but stay mindful of constant changes on the Google Maps website, which can stop your scraper from working.
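Conveniently, BeautifulSoup's find_all accepts a list of tag names, so one call can cover all of those element types. The markup below is made up for illustration – inspect the real rendered page before relying on any of it:

```python
from bs4 import BeautifulSoup

# Made-up markup for illustration; the real Google Maps layout varies and changes often.
html = '''
<a aria-label="Le Procope" href="/maps/place/le-procope"></a>
<h3 aria-label="Bistro Paul Bert"></h3>
<span aria-label="Café de Flore"></span>
'''
soup = BeautifulSoup(html, 'html.parser')

# One find_all call searches <a>, <h3>, and <span> tags at once
names = [tag['aria-label'] for tag in
         soup.find_all(['a', 'h3', 'span'], attrs={'aria-label': True})]
print(names)  # ['Le Procope', 'Bistro Paul Bert', 'Café de Flore']
```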


Ratings and reviews

Ratings usually appear as a numeric value (e.g., “4.5”) alongside a star icon, and review counts are displayed in parentheses nearby. These are often found inside aria-label attributes or <span> elements. ScrapingBee’s JS rendering ensures these values are captured even if loaded asynchronously. This data helps you assess the popularity and user satisfaction of competitors and other businesses relevant to your work.
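As a sketch, here is how a rating and review count could be pulled out of an aria-label string with regular expressions. The label text below is hypothetical – the exact wording on Google Maps varies, so inspect the rendered HTML before relying on these patterns:

```python
import re

# Hypothetical aria-label text; adjust the patterns to what the live page shows
label = 'Le Procope 4.4 stars 5,214 Reviews'

rating_match = re.search(r'(\d\.\d)\s+stars', label)
reviews_match = re.search(r'([\d,]+)\s+Reviews', label)

rating = float(rating_match.group(1)) if rating_match else None
reviews = int(reviews_match.group(1).replace(',', '')) if reviews_match else None
print(rating, reviews)  # 4.4 5214
```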


Addresses and contact info

Addresses are displayed below the business name or in the side panel, often wrapped in <span> tags or labeled elements. You can also extract phone numbers, website URLs, and business hours when available. These fields vary by listing, so conditional logic helps ensure clean results.
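The conditional logic can be as simple as checking whether each lookup found a tag before reading it. The class names below are invented for illustration – real Google Maps markup differs and changes frequently:

```python
from bs4 import BeautifulSoup

# Made-up listing markup; real class names differ and change frequently
html = '''
<div class="listing">
  <span class="address">13 Rue de l'Ancienne Comedie, 75006 Paris</span>
  <a class="website" href="https://example.com">Website</a>
</div>
'''
soup = BeautifulSoup(html, 'html.parser')

address_tag = soup.find('span', class_='address')
website_tag = soup.find('a', class_='website')
phone_tag = soup.find('span', class_='phone')  # missing in this listing

# Conditional lookups keep the record clean when a field is absent
record = {
    'address': address_tag.get_text(strip=True) if address_tag else None,
    'website': website_tag['href'] if website_tag else None,
    'phone': phone_tag.get_text(strip=True) if phone_tag else None,
}
print(record)
```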


Geolocation and coordinates

Location coordinates aren't shown directly on the page but can be extracted from the URL or internal JavaScript objects. For example, coordinates often appear in redirected URLs or inside data-url attributes linked to the business map pin. Our tools will help you parse this data from the rendered HTML or final request URLs.
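For example, Google Maps place URLs typically embed coordinates after an “@” as latitude,longitude,zoom, which a short regular expression can capture (the URL below is illustrative):

```python
import re

# Illustrative place URL with '@latitude,longitude,zoom' in the path
url = 'https://www.google.com/maps/place/Louvre+Museum/@48.8606111,2.337644,17z'

match = re.search(r'@(-?\d+\.\d+),(-?\d+\.\d+)', url)
if match:
    lat, lng = float(match.group(1)), float(match.group(2))
    print(lat, lng)  # 48.8606111 2.337644
```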

Start Scraping Google Maps with ScrapingBee Today

ScrapingBee handles everything Google Maps scraping needs – JavaScript rendering, proxy rotation, and headless browsing – all in a simple but powerful API. Just pass the URL to your Google Maps scraper, set render_js=true, and let ScrapingBee return fully loaded HTML. There’s no need to maintain a headless browser or rotate user agents manually.

👉 Start your free trial here and get 1,000 free API calls.

Frequently Asked Questions (FAQs)

How can I extract data from Google Maps?

You can extract data points from Google Maps with the help of our API and the render_js=true parameter. This allows you to load dynamic content like listings and map data. Once the page is fully rendered, you can parse the HTML to collect business names, reviews, addresses, and more.

Is it legal to scrape Google Maps?

Scraping publicly available Google Maps data is generally considered legal, but it may violate Google’s terms of service, so use it responsibly: avoid storing personal information, and use proxy server connections for consistent access to Google search results.

What types of information can be scraped from Google Maps?

With a fully rendered page, you can scrape business names, categories, ratings, review counts, addresses, phone numbers, website links, and even latitude/longitude coordinates. ScrapingBee makes all of this accessible through JavaScript rendering and HTML parsing, as if you were viewing the content in a browser.

What is the best tool for scraping Google Maps?

ScrapingBee is the best option for Google Maps scraping thanks to its JavaScript rendering and automatic proxy handling. While tools like Selenium or Puppeteer are effective, our HTML API just makes things so much easier! Just put your Google Maps URL into the web scraper and receive ready-to-parse HTML.

Kevin Sahin

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.