
Price Scraper With ScrapingBee

24 October 2025 | 13 min read

Building a multi-functional price scraper is one of the best ways to extract data from competitor platforms and study their pricing strategies. Because most e-commerce businesses rely on automated data collection for competitive analysis, having a reliable way to access website data and study market trends gives you a real edge over the competition.

However, researching and analyzing data takes a lot of time, so having the best tools for scraping prices provides a big advantage. In this guide, we will show you how to access web data and start scraping websites with our intuitive HTML API. Stick around to build your first price scraping tool in just a few minutes!

Quick Answer (TL;DR)

Want to start working on structured data fields right out of the gate? Here is a full working Python script for a price scraper that collects product information from Target. If you want to learn more about customizable parameters and available options, check out our ScrapingBee API documentation page.

#Importing our HTML API
from scrapingbee import ScrapingBeeClient 
# pandas dataframes for better data formatting
import pandas as pd 

# Initializing our API client in the "client" variable
client = ScrapingBeeClient(api_key='YOUR_API_KEY')


url_list = [
    'https://www.target.com/s?searchTerm=apple+watch',
    'https://www.target.com/s?searchTerm=garmin+watch',
    'https://www.target.com/s?searchTerm=samsung+watch'
]


# Start the function definition, the indented lines define its boundaries
def price_scraper():
    # Rules for JavaScript Rendering
    js_scenario = {
        "instructions": [
            {"wait": 2000}
        ]
    }
    # Extract rules definition
    extract_rules = {
        "Search_Result": {
            # Main selector that shows the parser which containers should be targeted
            "selector": "div.styles_detailsWrapper__cC4Wx",
            # Outputs a list because we want the name of the product along with its price
            "type": "list",
            "output": {
                # Name of the product
                "Name": "a[data-test='product-title'] div[title]",
                # Price selector
                "Price": "span[data-test='current-price']"
            }
        }
    }
    Search_result_list = []
    for url in url_list:
        response = client.get(
            url,
            params={
                "js_scenario": js_scenario,
                "extract_rules": extract_rules,
                "stealth_proxy": "True",
                "country_code": "us"
            }
        )
        print("STATUS CODE: ", response.status_code)
        result = response.json()
        df = pd.DataFrame(result["Search_Result"])
        # appending a DataFrame for each page to the list
        Search_result_list.append(df)
    # concatenate all DataFrames from Search_result_list into one
    Pages_DataFrame = pd.concat(Search_result_list, ignore_index=True)
    print(Pages_DataFrame)
    Pages_DataFrame.to_csv("Price_extraction.csv", index=False)
        
price_scraper()

What Is a Price Scraper?

A price scraper is a specialized web scraper designed to retrieve the current price of a product, along with optional metadata such as currency or discounts, from an e-commerce or retail website. It automates the extraction of structured pricing data that would otherwise have to be gathered manually.

Competitor price monitoring is one of the main reasons why price scrapers are so popular. Businesses track rival pricing for similar products to remain competitive by aggregating data from multiple stores to identify pricing trends and market extremes.

Other purposes include deal alerts or price tracking systems that notify users when a product’s cost drops below a set threshold, and Minimum Advertised Price (MAP) enforcement to identify retailers violating pricing agreements.
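
As a minimal illustration of the deal-alert idea, a tracker only needs to compare each scraped price against a user-defined threshold. The helper function and product values below are our own hypothetical sketch, not part of any SDK:

```python
# Hypothetical deal-alert helper: flag a product when its scraped price
# drops below a user-defined threshold.
def check_deal(product_name: str, price: float, threshold: float) -> bool:
    if price < threshold:
        print(f"Deal alert: {product_name} is now ${price:.2f} (below ${threshold:.2f})")
        return True
    return False

check_deal("Apple Watch SE", 189.99, 200.00)  # prints a deal alert and returns True
```

In a real tracker, this check would run on every scheduled scrape, with the notification going to email or a messaging webhook instead of stdout.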

Since product pages frequently load data dynamically through JavaScript and deploy anti-scraping protections, building a reliable price scraper requires more than basic HTML requests. It demands tools capable of rendering JavaScript, rotating proxies, and managing rate limits to consistently capture accurate data.

Why Use ScrapingBee for Price Scraping?

Our API removes most of the complexity from building a price scraper by handling the tasks that usually require careful workarounds and browser automation. With our Python SDK, JavaScript rendering is built into the API call itself, which is essential because many modern product pages load price data dynamically. You receive the fully rendered HTML without setting up tools like Selenium or Puppeteer.

Our tools automate proxy rotation and include built-in anti-block features. Instead of maintaining your own proxy list, managing IP rotation, or solving CAPTCHA manually, these layers will be managed internally based on your provided parameters and default settings, minimizing request failures and rate-limit issues.

The API and SDKs are designed for simplicity. You can fetch fully rendered pages or structured data, eliminating the need for local browser orchestration or complex script management. By offloading browser rendering, proxy management, and scaling logistics with our Python SDK, you can concentrate on price extraction and data analysis rather than technical maintenance. The result is a more stable, scalable, and maintainable scraping pipeline.

Challenges in Price Scraping

Before writing code, it’s important to understand the key obstacles in price scraping and how ScrapingBee addresses each one.

Dynamic content is one of the biggest challenges. Many e-commerce sites load price data asynchronously with JavaScript, leaving the initial HTML incomplete. Our API resolves this by rendering JavaScript automatically, ensuring that the returned HTML includes all dynamically loaded prices and product details.

Bot detection and IP blocks are common when scraping at scale, but we can handle these defenses by rotating proxies and managing headers internally. Our API includes built-in CAPTCHA and rate-limit avoidance mechanisms, so your requests appear more human-like without extra configuration.

Rate limiting and throttling can slow or interrupt your data collection. You can customize your connections and JavaScript rendering parameters, and use delays to control request pacing and retries, maintaining steady throughput without triggering server-side restrictions.
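
One simple way to pace requests on your side is a small retry wrapper around the SDK's get call. This is our own client-side sketch, not a built-in ScrapingBee feature; the delay and retry counts are illustrative:

```python
import time

def get_with_retries(client, url, params, max_retries=3, delay=2.0):
    """Call client.get() up to max_retries times, pausing between attempts."""
    response = None
    for attempt in range(max_retries):
        response = client.get(url, params=params)
        if response.status_code == 200:
            return response
        # back off before retrying so we don't hammer the target server
        time.sleep(delay)
    return response
```

Pass your ScrapingBeeClient instance as `client`; a fixed delay keeps the example simple, but exponential backoff is a common refinement.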

Finally, parsing and cleaning price data can be tedious when symbols, separators, or inconsistent formatting appear. If you want a solution that requires fewer external tools, our extract_rules parameter can handle the parsing for you and deliver consistent, structured responses for your data-processing pipeline. You can learn more about accessing e-commerce platforms in our blog about Web scraping challenges.
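
For instance, a scraped price usually arrives as a string like "$1,299.99" or a range such as "$199.99 - $299.99". A small helper (our own sketch, not part of the SDK) can normalize these into floats:

```python
import re

def clean_price(raw: str):
    """Convert a scraped price string to a float.
    For ranges like '$199.99 - $299.99', keep the first (lowest) value.
    Returns None when no number is found."""
    match = re.search(r"\d+(?:\.\d+)?", raw.replace(",", ""))
    return float(match.group()) if match else None

print(clean_price("$1,299.99"))  # 1299.99
```

Keeping the first number of a range is a policy choice; swap in `re.findall` and `max()` if you prefer the upper bound.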

How to Build a Price Scraper With ScrapingBee

Despite many potential hurdles, our web scraping tools are a great way to simplify data extraction for beginners while still targeting popular e-commerce websites to track market trends. Let's start working on our price scraper, which extracts information from Target. Once we are done, you will be able to send Python requests with ScrapingBee and customize the script to work on other complex websites and compare price changes with real-time data.

Step 1: Install and Set Up

Begin by making sure you have Python version 3.6 or newer on your device. Installing Python also installs its package manager, pip, which is used to download external libraries. Once you have Python, open Command Prompt (or Terminal for macOS/Linux users) and type the following line:

pip install scrapingbee pandas

This installs two external libraries that provide our tools and the features needed to extract product data in a readable format:

  • scrapingbee: Our Python SDK with built-in support for JS rendering and proxy management.

  • pandas: Powerful data analysis library used here to structure, clean, and export scraped product data in a readable format (like CSV or DataFrame).

After that, log in to your ScrapingBee account and copy the API key. If you register your account now, you get a free trial of 1,000 credits for a week to test price scraping on different e-commerce websites.


Step 2: Send First Request

Choose a project folder and create a text file with a .py extension, for example: "price_scraper.py". We highly recommend using more sophisticated editors like Visual Studio Code or Notepad++, as syntax highlighting will help you identify errors in the code faster.

Once everything is in place, you can start the script by importing these libraries. After that, create a "client" variable that will give us API access to gather data.

#Importing our HTML API
from scrapingbee import ScrapingBeeClient 
# pandas dataframes for better data formatting
import pandas as pd 

# Initializing our API client in the "client" variable
client = ScrapingBeeClient(api_key='YOUR_API_KEY')

Now we can start working on the main function definition for our web scraper. In this tutorial, we named it "price_scraper". Follow it up with a "js_scenario" variable that contains a dictionary of JavaScript rendering instructions. For example, our code instructs the scraper to wait 3 seconds after loading the page before attempting the extraction.

# Start the function definition, the indented lines define its boundaries
def price_scraper():
    # Rules for JavaScript Rendering
    js_scenario = {
        "instructions": [
            {"wait": 3000}
        ]
    }
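
A plain wait is the simplest instruction, but js_scenario also supports richer steps such as wait_for, scroll_y, and click (see our JavaScript scenario documentation). A sketch — the selector below is illustrative, not taken from Target's real markup:

```python
# A richer js_scenario sketch: wait for the results grid to render,
# scroll down to trigger lazy loading, then pause briefly before extraction.
js_scenario = {
    "instructions": [
        {"wait_for": "div[data-test='product-grid']"},  # illustrative selector
        {"scroll_y": 1000},
        {"wait": 1000}
    ]
}
```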

Before we extract specific data points from the web page, let's establish a working connection and inspect the raw result. The following section assigns the GET API call to a "response" variable. In its parameters, we pass the predefined js_scenario variable and enable premium proxy connections targeting servers in the US:

    response = client.get(
        url,
        params={
            # parameters for the GET API call
            "js_scenario": js_scenario,
            # "extract_rules": extract_rules,
            "premium_proxy": "True",
            "country_code": "us"
        }
    )
    result = response.text
    print(response.status_code)
    print(result)

Note: Our API supports headless browsers, which are enabled by default via the render_js parameter (set to 'true' unless you override it). For a raw extraction, all that is left is to add the URL of the page you want prices from and call the function at the end of the script. In this guide, we will target Target's search results for an Apple Watch:

url = 'https://www.target.com/s?searchTerm=apple+watch'

Invoke the function at the end of the code:

price_scraper()

After running the code, we can see that the extraction was successful, but it outputs all of the HTML syntax and other clutter alongside the actual data points:


Note: As you approach different platforms to scrape data, you will encounter websites that impose different anti-scraping restrictions. For example, Amazon is notoriously hard to scrape, but you can still retrieve detailed information and start scraping Amazon prices.

Step 3: Parse Product Price

To keep everything beginner-friendly, we will use our powerful "extract_rules" parameter, capable of parsing product prices from the raw HTML with very little coding required. To identify the positions of these data points, we have to manually inspect the page with developer tools and find the appropriate CSS selectors.

Before we do that, let's prepare our "extract_rules" variable:

    # Extract rules definition
    extract_rules = {
        "Search_Result": {
            # Main selector that shows the parser which containers should be targeted
            "selector": "",
            # Outputs a list because we want the name of the product along with its price
            "type": "list",
            "output": {
                # Name of the product
                "Name": "",
                # Price selector
                "Price": ""
            }
        }
    }

Now let's go to our URL of Target's Apple Watches to find each container through Developer tools (Press F12 or Right click -> Inspect). After identifying a <div> container with an appropriate class and running it through the search bar within Developer tools, we can see that it corresponds to each product listing card on the page, showing 24 results.
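
Before spending API credits, you can sanity-check candidate selectors locally with BeautifulSoup. The snippet below runs them against a simplified HTML fragment modeled on Target's markup (it assumes the bs4 package is installed; real pages contain far more nesting):

```python
from bs4 import BeautifulSoup

# Simplified stand-in for one product card from the search results page
html = """
<div class="styles_detailsWrapper__cC4Wx">
  <a data-test="product-title"><div title="Apple Watch SE">Apple Watch SE</div></a>
  <span data-test="current-price">$189.99</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for card in soup.select("div.styles_detailsWrapper__cC4Wx"):
    name = card.select_one("a[data-test='product-title'] div[title]")
    price = card.select_one("span[data-test='current-price']")
    print(name["title"], price.get_text())  # Apple Watch SE $189.99
```

If the selectors work here, they will usually work as extract_rules values too, since both use standard CSS selector syntax.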


Add it as the main selector, and then identify the other two selectors for the name and price within its boundaries. After that, your "extract_rules" should look something like this:

    # Extract rules definition
    extract_rules = {
        "Search_Result": {
            # Main selector that shows the parser which containers should be targeted
            "selector": "div.styles_detailsWrapper__cC4Wx",
            # Outputs a list because we want the name of the product along with its price
            "type": "list",
            "output": {
                # Name of the product
                "Name": "a[data-test='product-title'] div[title]",
                # Price selector
                "Price": "span[data-test='current-price']"
            }
        }
    }

After that, we can add it to the list of parameters within the GET API call:

    response = client.get(
        url,
        params={
            "js_scenario": js_scenario,
            # Parsing rules added:
            "extract_rules": extract_rules,
            "premium_proxy": "True",
            "country_code": "us"
        }
    )

Now, head back to the "response" handling and change the "result" assignment to parse the response as JSON. After that, we can transform it into a pandas DataFrame for comfortable price monitoring.

    result = response.json()
    Price_DataFrame = pd.DataFrame(result["Search_Result"])
    print(Price_DataFrame)
    print("STATUS CODE: ", response.status_code)
        
price_scraper()

Once we run the code, we can see that with minimal effort we can extract and monitor competitor prices and combine different data sets from targeted e-commerce sites:
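
Once the results sit in a DataFrame, a couple of pandas lines turn them into a quick comparison. The rows below are illustrative stand-ins in the same shape our extract rules return:

```python
import pandas as pd

# Illustrative rows shaped like the extract_rules output
Price_DataFrame = pd.DataFrame({
    "Name": ["Apple Watch Ultra 2", "Apple Watch SE", "Apple Watch Series 10"],
    "Price": ["$799.99", "$189.99", "$329.99"],
})

# Strip currency symbols so prices sort numerically, then rank cheapest first
Price_DataFrame["Price_num"] = (
    Price_DataFrame["Price"].str.replace(r"[^\d.]", "", regex=True).astype(float)
)
print(Price_DataFrame.sort_values("Price_num"))
```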


Note: If you want an alternative way to parse target websites, check out our BeautifulSoup tutorial.

Step 4: Handle Blocks and CAPTCHAs

Scraping public websites often leads to IP blocks, CAPTCHAs, or JavaScript challenges, yet these public data sources are essential for tracking competitor websites. Without intuitive solutions like our Python SDK, scrapers usually require building custom proxy rotators, handling browser fingerprinting, and constantly updating the setup to avoid getting blocked.

With built-in smart proxies and automatic CAPTCHA avoidance, ScrapingBee handles most anti-bot systems out of the box. Go ahead and test the "premium_proxy" and "stealth_proxy" parameters, and our infrastructure will take care of rotating proxies and managing headers, while solving CAPTCHAs in the background.

Need to scrape a site aggressively or from multiple locations? For developers used to wrestling with detection mechanisms, our API offers a huge productivity boost—letting you focus on data, not defenses.

Add Extra Features to Your Price Scraper

Because very little coding is needed, our API provides a comfortable infrastructure to extract pricing information from multiple URLs with one script. By utilizing automation, businesses can spend less time on the intricacies of scraping and stay competitive while focusing on investment opportunities.

For example, by changing our URL into a list of links, and running the GET API calls through a for loop, we can create multiple DataFrames, connect them into one, and then export everything into a CSV to relay to price comparison tools or share the content with partners via Google Sheets.


Extract Multiple Product Prices

Let's expand our web scraper to extract data from different types of smartwatches on Target. Here is an example of our new URL list:

url_list = [
    'https://www.target.com/s?searchTerm=apple+watch',
    'https://www.target.com/s?searchTerm=garmin+watch',
    'https://www.target.com/s?searchTerm=samsung+watch'
]

Then, before the GET API call, we create an empty list that will collect 3 separate DataFrames before we combine them into one. Because we access 3 URLs, our for loop ends by appending each result to that list:

    Search_result_list = []
    for url in url_list:
        response = client.get(
            url,
            params={
                "js_scenario": js_scenario,
                "extract_rules": extract_rules,
                "stealth_proxy": "True",
                "country_code": "us"
            }
        )
        print("STATUS CODE: ", response.status_code)
        result = response.json()
        df = pd.DataFrame(result["Search_Result"])
        # appending a DataFrame for each page to the list
        Search_result_list.append(df)

After closing the loop, we combine them all with pandas' "concat" method:

    Pages_DataFrame = pd.concat(Search_result_list, ignore_index=True)

Now, the last line of our data extraction function exports the result into a CSV file:

    Pages_DataFrame.to_csv("Price_extraction.csv", index=False)

Once we run the code, we can see that the extraction was successful, outputting 72 rows of price data (24 per page).
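
To keep the CSV current, you can rerun the function on a schedule. Here is a minimal stdlib-only sketch; for production use, a system scheduler such as cron (Linux/macOS) or Task Scheduler (Windows) is usually the better choice:

```python
import time

def run_on_schedule(scrape_fn, interval_seconds, max_runs=None):
    """Rerun scrape_fn at a fixed interval.
    max_runs limits the number of iterations (None means run forever)."""
    runs = 0
    while max_runs is None or runs < max_runs:
        scrape_fn()
        runs += 1
        if max_runs is None or runs < max_runs:
            time.sleep(interval_seconds)
    return runs

# Example: run the scraper once a day
# run_on_schedule(price_scraper, interval_seconds=24 * 3600)
```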


Note: E-commerce platforms can be unpredictable because they impose restrictions on automated data extraction. If you encounter any problems, check out our blog on Web scraping e-commerce sites.

Build Your Own Price Tracker Today

Building a price tracker no longer requires changing IP addresses, managing CAPTCHAs, or configuring browsers. Instead, you can use our API to automate price monitoring through a simple API request. Whether you’re analyzing competitors, building a product comparison engine, or tracking market fluctuations, our infrastructure ensures clean, JavaScript-rendered data without downtime!

Start with your free 1,000-credit trial or check out the ScrapingBee pricing page, and build a working scraper in minutes using our Python SDK. Add more pages, export to CSV, or integrate with Google Sheets for real-time updates!

Frequently Asked Questions (FAQs)

Is price scraping legal?

Yes, scraping publicly available data is generally legal, but e-commerce platforms actively try to detect and block bot traffic on their websites. However, make sure to always avoid collecting personal or non-public information so you don't violate data protection laws.

How do I scrape dynamic pricing pages?

You can collect dynamic prices with the help of flexible extraction rules, access through a wider range of IPs, and easy JavaScript rendering provided by our API. It executes JS automatically, ensuring that asynchronously loaded prices and product details appear in the final HTML.

Can I scrape Amazon for product prices?

Yes, but Amazon uses strong anti-bot systems. Our built-in smart proxy rotation and CAPTCHA handling provide more consistent access to the platform, allowing you to collect pricing information without advanced coding skills!

How do I avoid getting blocked while scraping prices?

Enable the premium_proxy or stealth_proxy parameters on your GET API calls with our Python SDK. These settings simulate natural browser traffic to bypass rate limits and CAPTCHAs. If you still cannot access the platform, check out our guide on How to avoid getting blocked.

Kevin Sahin

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.