
How To Scrape Google Trends Data Using PyTrends

14 April 2026 | 8 min read

A Google Trends scraper automates the collection of search interest data from Google Trends, giving you programmatic access to what the world is searching for. Python is the go-to language for this task thanks to libraries like requests, pandas, and pytrends that make data extraction and analysis straightforward. Instead of manually checking trends one keyword at a time, a scraper lets you monitor thousands of search terms across regions and time periods in minutes.

This guide walks you through three practical approaches - the lightweight pytrends library, raw HTTP requests to Google's internal endpoints, and API-based scraping - with working code for each.


Quick Answer (TL;DR)

The fastest way to scrape Google Trends is with Python's pytrends library for small jobs, or a dedicated Google Trends scraper API for production-scale collection. Here's a minimal working example with pytrends:

from pytrends.request import TrendReq

pytrends = TrendReq()
pytrends.build_payload(["python", "javascript"], timeframe="today 3-m")
df = pytrends.interest_over_time()
print(df.head())

For high-volume or long-running projects, an API approach avoids the IP blocks and rate limits that inevitably hit direct scraping.

Google Trends shows the relative popularity of search queries on a scale of 0–100 over a given time range and region. Unlike raw search volume numbers from keyword tools, Trends data reveals how interest changes over time, making it uniquely useful for spotting rising demand, seasonal patterns, and fading topics.
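To make the 0–100 scale concrete, here's a simplified sketch of the rescaling idea in pandas. Google's exact normalization is not public, and the raw weekly counts below are made up for illustration:

```python
import pandas as pd

# Made-up raw weekly search counts for one query
raw = pd.Series(
    [120, 300, 450, 600, 390],
    index=pd.date_range("2024-01-07", periods=5, freq="W"),
)

# Trends-style rescaling: the busiest week becomes 100 and every
# other week is expressed relative to that peak
scaled = (raw / raw.max() * 100).round().astype(int)
print(scaled.tolist())  # [20, 50, 75, 100, 65]
```

This is why a score of 50 means "half as popular as the peak week", not any absolute number of searches.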

Common use cases include:

  • SEO and content planning - find which topics are gaining traction before your competitors do.
  • Market research - compare brand or product interest across regions.
  • Academic research - study public attention to health, political, or social topics.
  • Product launches - time releases to coincide with peak interest.

You can also combine Trends data with broader search engine scraping workflows to build richer datasets.

Key Metrics You Can Extract

Metric | What It Tells You | Example Output
Interest Over Time | Weekly or daily popularity score (0–100) | 2024-01-07: 72, 2024-01-14: 85
Interest by Region | Where a term is most popular | US: 100, UK: 67, DE: 45
Related Queries | Other terms people search alongside yours | "python tutorial", "learn python"
Related Topics | Broader subjects connected to your keyword | "Machine learning", "Data science"
Trending Searches | What's spiking right now in a country | Daily and real-time trending lists

For deeper keyword exploration, a Google related searches API can expand your seed list automatically.

Why Manual Scraping Breaks

Scraping Google Trends directly (without a library or API) is fragile for three reasons:

  1. Rate limits - Google throttles repeated requests from a single IP. After a handful of queries, you'll get empty responses or CAPTCHAs.
  2. Dynamic page structure - Trends uses JavaScript-rendered widgets, not static HTML. A simple requests + BeautifulSoup approach won't see the data at all.
  3. Endpoint changes - Google's internal API endpoints shift without notice, breaking custom scrapers regularly.

These problems are solvable, but they require proxy rotation, session management, and ongoing maintenance, which is why most teams either use pytrends or a managed API.
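If you do go the DIY route, proxy rotation is the first piece you'll need. Here's a minimal round-robin sketch using only the standard library; the proxy URLs are placeholders for your own pool:

```python
from itertools import cycle

# Hypothetical proxy pool: replace with your own endpoints
PROXIES = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
]
_pool = cycle(PROXIES)

def next_proxies():
    """Return a requests-style proxies mapping, advancing the rotation."""
    proxy = next(_pool)
    return {"http": proxy, "https": proxy}

# Usage with requests:
# requests.get(url, proxies=next_proxies(), timeout=10)
```

Each call hands the next request to a different exit IP, which spreads your traffic and delays (but does not eliminate) rate limiting.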

Method 1: Scrape Google Trends With pytrends

pytrends is an unofficial Python wrapper for Google Trends. It reverse-engineers Google's internal endpoints and returns clean DataFrames. It's the best option for small-to-medium workloads.

Step 1: Install Dependencies

pip install pytrends pandas

Step 2: Get Interest Over Time

from pytrends.request import TrendReq

# Initialize with retries and exponential backoff to ride out rate limits
pytrends = TrendReq(hl="en-US", tz=360, retries=3, backoff_factor=1.0)

# Define your search
keywords = ["chatgpt", "gemini", "claude"]
pytrends.build_payload(keywords, cat=0, timeframe="today 12-m", geo="US")

# Fetch interest over time
df = pytrends.interest_over_time()
print(df[keywords].tail(10))

Output:

                chatgpt  gemini  claude
date
2024-12-01         62      38      15
2024-12-08         58      41      17
...
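Weekly scores are noisy, so a rolling average often makes the underlying trend easier to read. A sketch on a stand-in DataFrame shaped like pytrends output (the scores are invented):

```python
import pandas as pd

# Stand-in for the DataFrame pytrends.interest_over_time() returns
df = pd.DataFrame(
    {"chatgpt": [62, 58, 70, 66, 74, 71, 80, 77]},
    index=pd.date_range("2024-11-03", periods=8, freq="W", name="date"),
)

# A 4-week rolling average smooths out week-to-week noise
df["chatgpt_smooth"] = df["chatgpt"].rolling(window=4).mean()
print(df.tail(3))
```

The first three rows of the smoothed column are NaN because a 4-week window needs four observations before it produces a value.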

Step 3: Get Interest by Region

region_df = pytrends.interest_by_region(resolution="COUNTRY", inc_low_vol=True)
top_10 = region_df.sort_values("chatgpt", ascending=False).head(10)
print(top_10[["chatgpt", "gemini", "claude"]])

Step 4: Get Related Queries

related = pytrends.related_queries()
for keyword, data in related.items():
    print(f"\n--- {keyword} ---")
    # "top" or "rising" can be None when Google has no data for the keyword
    if data["top"] is not None:
        print("Top queries:")
        print(data["top"].head(5))
    if data["rising"] is not None:
        print("Rising queries:")
        print(data["rising"].head(5))

Step 5: Get Trending Searches

# Daily trending searches for a specific country
trending = pytrends.trending_searches(pn="united_states")
print(trending.head(20))

# Real-time trending searches (currently trending)
realtime = pytrends.realtime_trending_searches(pn="US")
print(realtime[["title", "entityNames"]].head(10))

Step 6: Export to CSV

df.to_csv("trends_interest_over_time.csv")
region_df.to_csv("trends_by_region.csv")
print("Data saved.")
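When you load the CSV back later, restore the date index explicitly so time-based operations keep working. A quick round-trip sketch (the scores are invented):

```python
import pandas as pd

# Stand-in for pytrends output: a date-indexed score column
df = pd.DataFrame(
    {"python": [70, 75]},
    index=pd.to_datetime(["2024-01-07", "2024-01-14"]).rename("date"),
)
df.to_csv("trends_interest_over_time.csv")

# Restore the date index when reading the file back
loaded = pd.read_csv(
    "trends_interest_over_time.csv", index_col="date", parse_dates=True
)
print(loaded.head())
```

Without index_col and parse_dates, the dates come back as a plain string column.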

Handling Rate Limits in pytrends

Google will block you if you send too many requests. Here's a resilient wrapper:

import time
from pytrends.request import TrendReq

def fetch_trends(keyword_list, timeframe="today 12-m", geo="", sleep=5):
    """Fetch trends for a list of keywords in batches of 5."""
    pytrends = TrendReq(hl="en-US", retries=5, backoff_factor=2.0)
    all_results = []

    # pytrends accepts max 5 keywords per request
    for i in range(0, len(keyword_list), 5):
        batch = keyword_list[i:i+5]
        try:
            pytrends.build_payload(batch, timeframe=timeframe, geo=geo)
            df = pytrends.interest_over_time()
            all_results.append(df)
        except Exception as e:
            print(f"Error on batch {batch}: {e}")
        time.sleep(sleep)  # Wait between batches

    return all_results
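One caveat with batching: each build_payload call is normalized to its own 0–100 scale, so scores from different batches aren't directly comparable. A common workaround is to include a shared anchor keyword in every batch and rescale the other columns against it. A sketch, where rescale_batches and the sample scores are illustrative:

```python
import pandas as pd

def rescale_batches(batches, anchor):
    """Put independently normalized batches on one scale using a
    shared anchor keyword present in every batch."""
    combined = batches[0].copy()
    for df in batches[1:]:
        # The anchor's average score tells us how the two scales relate
        factor = batches[0][anchor].mean() / df[anchor].mean()
        combined = combined.join(df.drop(columns=[anchor]) * factor)
    return combined

# Illustrative scores: both batches include the anchor "python"
b1 = pd.DataFrame({"python": [100, 90], "go": [40, 50]})
b2 = pd.DataFrame({"python": [50, 45], "rust": [20, 30]})
print(rescale_batches([b1, b2], "python"))
```

Pick a mid-popularity anchor: if it scores near 0 in any batch, the ratio becomes unstable.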

Method 2: Raw HTTP Requests to Google's Internal API

If you need more control than pytrends offers, you can call Google's internal Trends endpoints directly. This is more fragile but gives you full flexibility.

Step 1: Identify the Endpoint

Google Trends fetches data from an internal API at https://trends.google.com/trends/api/widgetdata/multiline. The parameters are obtained from a "token" request first.

Step 2: Fetch a Token

import requests
import json

def get_trends_token(keyword, timeframe="today 12-m", geo="US"):
    """Get widget tokens from Google Trends explore page."""
    url = "https://trends.google.com/trends/api/explore"
    params = {
        "hl": "en-US",
        "tz": -60,
        "req": json.dumps({
            "comparisonItem": [
                {"keyword": keyword, "geo": geo, "time": timeframe}
            ],
            "category": 0,
            "property": ""
        }),
    }
    headers = {"User-Agent": "Mozilla/5.0"}
    resp = requests.get(url, params=params, headers=headers)

    # Google prepends ")]}'" to the response — strip it
    data = json.loads(resp.text[5:])
    return data["widgets"]

Step 3: Fetch Interest Over Time

def get_interest_over_time(widgets):
    """Use the token from the TIMESERIES widget to fetch data."""
    widget = next(w for w in widgets if w["id"] == "TIMESERIES")
    url = "https://trends.google.com/trends/api/widgetdata/multiline"
    params = {
        "hl": "en-US",
        "tz": -60,
        "req": json.dumps(widget["request"]),
        "token": widget["token"],
    }
    headers = {"User-Agent": "Mozilla/5.0"}
    resp = requests.get(url, params=params, headers=headers)
    data = json.loads(resp.text[5:])
    return data

# Usage
widgets = get_trends_token("python programming")
timeseries = get_interest_over_time(widgets)

for point in timeseries["default"]["timelineData"][:5]:
    print(f"{point['formattedTime']}: {point['value'][0]}")
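Once you have the timelineData entries, reshaping them into a DataFrame is straightforward. A sketch on a stand-in payload whose fields mirror the ones printed above, assuming "time" is a Unix timestamp in seconds:

```python
import pandas as pd

# Stand-in for the parsed multiline response
timeseries = {
    "default": {
        "timelineData": [
            {"time": "1704585600", "formattedTime": "Jan 7, 2024", "value": [72]},
            {"time": "1705190400", "formattedTime": "Jan 14, 2024", "value": [85]},
        ]
    }
}

rows = [
    {"date": pd.to_datetime(int(p["time"]), unit="s"), "score": p["value"][0]}
    for p in timeseries["default"]["timelineData"]
]
df = pd.DataFrame(rows).set_index("date")
print(df)
```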

Warning: This method is brittle. Google changes these endpoints and token formats without notice. For production use, prefer pytrends or a managed API.

Method 3: Use a Google Trends Scraper API

For production systems that need to scrape thousands of keywords daily, a managed API like ScrapingBee handles proxy rotation, rate limiting, and endpoint changes for you. You send a request; you get clean JSON back.

Step 1: Get an API Key

Sign up at ScrapingBee and grab your API key from the dashboard.

Step 2: Make a Request

import requests

API_KEY = "YOUR_SCRAPINGBEE_API_KEY"

response = requests.get(
    "https://app.scrapingbee.com/api/v1/store/google/trends",
    params={
        "api_key": API_KEY,
        "search": "machine learning",
        "geo": "US",
        "time": "today 12-m",
        "data_type": "TIMESERIES",
    },
)

data = response.json()
for point in data["interest_over_time"][:5]:
    print(point)

Step 3: Loop Over Multiple Keywords

import time
import json

keywords = ["react", "vue", "svelte", "angular", "nextjs"]

for kw in keywords:
    resp = requests.get(
        "https://app.scrapingbee.com/api/v1/store/google/trends",
        params={
            "api_key": API_KEY,
            "search": kw,
            "geo": "US",
            "time": "today 12-m",
            "data_type": "TIMESERIES",
        },
    )
    with open(f"trends_{kw}.json", "w") as f:
        json.dump(resp.json(), f)
    print(f"Saved {kw}")
    time.sleep(1)

For real-time trending data specifically, the Google Trends Trending Now API returns what's spiking right now without any scraping logic on your end.

Which Method Should You Use?

Factor | pytrends | Raw HTTP | Scraper API
Setup time | 5 minutes | 1–2 hours | 10 minutes
Rate limit handling | Basic retries | Manual | Fully managed
Maintenance | Low (community-maintained) | High | None
Scale | ~100 queries/day | ~50 queries/day | Thousands/day
Cost | Free | Free | Paid
Best for | Research, prototyping | Learning, custom needs | Production pipelines

Combining Trends With Other Google Data

Google Trends data becomes more powerful when combined with other sources:

  • Google News - see if trending spikes are driven by news events. Use a workflow to scrape Google News alongside your Trends data.
  • Google Search Results - pair trend data with actual SERP content using Python to scrape Google search results.
  • Google Jobs - track hiring demand for trending skills with a Google Jobs scraper API to see whether search interest correlates with real job postings.

This multi-source approach gives you a fuller picture: Trends tells you what people care about, News tells you why, and Jobs tells you where the money is.

Next Steps

If you're still checking Google Trends manually, you're leaving data on the table. Pick the method that fits your scale:

  1. Just exploring? Install pytrends and run the five-line example at the top of this guide.
  2. Building a recurring report? Use the batch function with sleep timers to stay under rate limits.
  3. Running a production pipeline? Use a managed scraper API and forget about blocks entirely.

Whichever path you take, you'll go from guessing what's trending to knowing—backed by data.

Is it legal to scrape Google Trends?

Scraping publicly available data for research is generally accepted, but it may conflict with Google's Terms of Service. Using an API-based approach reduces legal gray areas since you're accessing data through a structured service rather than scraping the site directly. Consult a lawyer if you're building a commercial product around Trends data.

How often can I update the data?

With pytrends, you can safely pull data a few times per hour if you add delays. A managed API can update trending data every few minutes without throttling. For historical analysis, daily or weekly pulls are usually sufficient.

Can I scrape Google Trends without Python?

Yes. Any language with HTTP support works. You can also use a no-code web scraping API that returns JSON directly, no programming required. Node.js, Ruby, and Go all have HTTP clients that can call the same endpoints.

What's the best way to avoid getting blocked?

Three rules: add delays between requests (3–5 seconds minimum), rotate your IP address using proxies, and keep your request volume reasonable. Or skip the complexity entirely and use a Google Trends API that handles all of this behind the scenes.
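For the delay rule, exponential backoff with jitter works better than a fixed sleep. A minimal sketch; fetch in the commented loop is a placeholder for your own request code:

```python
import random

def polite_delay(attempt, base=3.0, cap=60.0):
    """Exponential backoff with jitter: ~3s, ~6s, ~12s, ... capped at cap."""
    delay = min(cap, base * (2 ** attempt))
    # Randomize so many clients don't retry in lockstep
    return delay * random.uniform(0.5, 1.0)

# Example retry loop (fetch is a placeholder for your request code):
# for attempt in range(5):
#     try:
#         data = fetch()
#         break
#     except Exception:
#         time.sleep(polite_delay(attempt))
```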

Jakub Zielinski

Jakub is a Senior Content Manager at ScrapingBee, a T-shaped content marketer deeply rooted in the IT and SaaS industry.