Learning how to scrape Amazon prices is a great way to access real-time product data for market research, competitor analysis, and price tracking. However, as the biggest retailer in the world, Amazon imposes many scraping restrictions to keep automated connections away from its sensitive price intelligence.
Amazon pages rely on dynamic JavaScript elements, aggressive anti-bot systems, and geo-based restrictions that make it difficult to extract price data. This tutorial will show you how to extract Amazon product prices with Python and our powerful API, because not every web scraper can handle data from Amazon.
Keep reading to learn how our built-in JavaScript rendering, automatic proxy rotation, and CAPTCHA handling will help you bypass anti-scraping measures and track price trends. Along the way, you will see how to set up your environment, send HTTP requests, and extract currently available deals.
Scraping Amazon Prices with ScrapingBee
Scraping Amazon isn’t as simple as sending a request with the requests or urllib Python libraries. Without the right setup, which our SDK provides, you’ll quickly hit CAPTCHAs, blocked IPs, or incomplete HTML from JavaScript-heavy pages.
You can see exactly how much control you have over your GET API calls on our ScrapingBee Documentation page. To get everything you need for an Amazon scraper, let's prepare a coding environment for building scripts that can withstand the connection restrictions imposed by many e-commerce businesses.
Get Your API Key and Install Requests
First, make sure you have Python, version 3.6 or newer, on your device. You can download it straight from python.org, the Microsoft Store, or your package manager if you are using Linux.
Then, install the required libraries that will control the logic of our scraping process. While Python ships with plenty of built-in tools, some libraries are not built into Python, yet they are wonderful additions that simplify coding, letting you focus on price comparison or inventory management if you deal in products scraped from the Amazon website.
After installing Python, go to your Command Prompt (Terminal for Linux users). We will need two external libraries:
scrapingbee – Our official Python SDK, a collection of tools that simplifies API calls, supports JavaScript rendering, and handles proxy rotation automatically. It also bundles popular web scraping tools like the requests library.
pandas – A powerful data analysis library that transforms JSON data into structured DataFrames, which can be easily exported for viewing in Excel or Google Sheets.
Stay focused, we're not working on the script just yet. In your Command Prompt, enter the following line. It will use Python's package manager, pip, to download the external libraries that will let us send HTTP requests.
pip install scrapingbee pandas
Before diving into your web scraping script, there’s one last step: create or log into your ScrapingBee account, as you will need our API key to initialize the client. Fortunately, our 7-day free trial of 1,000 credits is more than enough to test out the service before making any purchases. We believe it's a good opportunity to witness the simplicity of scraping Amazon data with our Python SDK.
After a successful registration you will see a clear overview of your available credits, usage stats, and account settings. Copy your API key, located at the top-right corner of your dashboard.
And now it's finally time to start working on your Python code. Create a dedicated folder where you’ll store your script. If your code exports data into new files, they will land in the same folder by default. Create a new text file with a .py extension; for example, name your Python file Amazon_Price_Scraper.py.
Open your .py file using an IDE or a text editor of your choice. We recommend using Visual Studio Code because it supports extensions that highlight coding mistakes before you run the script, saving you from wasting credits in the tutorial phase.
Start your script by importing the libraries that we previously downloaded with pip:
from scrapingbee import ScrapingBeeClient
import pandas as pd
# Initializing our HTML API client
client = ScrapingBeeClient(api_key='YOUR_API_KEY')
Note: There is no need to import requests because it’s already included in our Python SDK.
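By the way, hardcoding the key is fine for a quick test, but a safer habit is reading it from an environment variable. Here is a minimal sketch, assuming you exported a variable named SCRAPINGBEE_API_KEY (the variable name is our choice, not a requirement):
import os
from scrapingbee import ScrapingBeeClient

# Read the API key from an environment variable instead of hardcoding it
client = ScrapingBeeClient(api_key=os.environ["SCRAPINGBEE_API_KEY"])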
We start defining the logic for our Amazon Web Scraper with a function definition:
def amazon_prices(asins):
The "asins" parameter will hold the list of ASIN codes which let us open Amazon Product page content without connecting to an account. We will add a list of 3 product ASINs after the function definition.
Next, we define another variable, a "js_scenario" dictionary which contains the rules for JavaScript rendering. It ensures that product prices and other details are fully loaded before our Amazon scraper starts the extraction.
    # Instructions for JavaScript rendering that will scroll the loaded webpage
    js_scenario = {
        "instructions": [
            {"wait": 2000},
            {"evaluate": "window.scrollTo(0, document.body.scrollHeight);"}
        ]
    }
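A fixed 2-second wait is the simplest approach. If you would rather wait for a specific element to appear, the JS scenario also accepts a wait_for instruction (check our documentation for the full instruction list); here is a sketch:
    # Alternative: wait until the product container exists before scrolling
    js_scenario = {
        "instructions": [
            {"wait_for": "#centerCol"},
            {"evaluate": "window.scrollTo(0, document.body.scrollHeight);"}
        ]
    }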
Parse the Price from the HTML
To parse the price from the raw code, we create an "extract_rules" variable: a dictionary of CSS selectors that point at product details. The following section of code is a simple parser that targets product details for price monitoring:
    # Lines are indented because they are part of the function definition
    extract_rules = {
        "Products": {
            "selector": "#centerCol",
            "type": "list",
            "output": {
                "name": "#productTitle",
                "price": "span.a-price.aok-align-center.reinventPricePriceToPayMargin.priceToPay"
            }
        }
    }
In the section above, we used three CSS selectors:
#centerCol – the main product detail container; the parser will not access HTML tags outside of it.
#productTitle – product name/title.
span.a-price.aok-align-center.reinventPricePriceToPayMargin.priceToPay – product price element.
But how did we find those elements? Pick an Amazon product page of your choice and open it in your browser. Then, press F12 (or right-click > Inspect) and identify the section that stores the product name and price, so we can pick it as the main selector in "extract_rules".
If you are not sure whether you picked the right selector, you can right-click the chosen HTML tag in the developer tools and copy its selector:
Now, after assigning #centerCol as the main container, our parser will not access product data outside of the blue area, seen in the image above. In the same way, we picked out the other two selectors to target the name and price data.
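With these rules in place, the parsed response comes back as JSON shaped roughly like this (the values below are placeholders for illustration):
# Roughly the shape of the parsed data our rules produce (placeholder values)
result = {
    "Products": [
        {"name": "Example Product Title", "price": "$19.99"}
    ]
}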
Note: You may encounter websites where working with CSS selectors is extremely difficult. However, there is an alternative way to extract this data from the HTML. XPath is a different technique for parsing web pages, and we highly recommend checking out our blog comparing XPath vs CSS Selectors for Scraping.
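For a quick taste of the alternative, here are rough XPath equivalents of the ID-based selectors above (shown for comparison only; our script sticks with CSS):
# CSS selector     ->  rough XPath equivalent
# "#centerCol"     ->  //*[@id="centerCol"]
# "#productTitle"  ->  //*[@id="productTitle"]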
Make an API Call to Amazon Product Page
Once we add these variables to the list of parameters in the GET API call, our Amazon web scraper will act according to your instructions. You can also add a different User-Agent for extra stealth, or expand your Amazon price scraper with additional product details.
Technically, we will not send just one API call. The following section defines a for loop that sends an API call with the same instructions for each product in our list of ASINs.
We've finally arrived at the API call. Before we start working on it, create an empty list that will store the extracted name and price data for each product. Yup, a list of DataFrames, but it will look nice and clean in the final output.
    all_prices = []
    # Looping GET API calls through different ASIN code URLs
    for asin in asins:
Inside the loop, we send a GET request to each product's URL, adding the previously defined dictionaries as parameters that travel with the connection request. We also include an instruction to attempt 2 retries if the connection fails:
        response = client.get(
            f'https://www.amazon.com/dp/{asin}',
            params={
                "extract_rules": extract_rules,
                "js_scenario": js_scenario
            },
            retries=2
        )
Now, all extracted product data is stored in the "response" variable. To extract just its contents, we use the .json() method and assign the outcome to a "result" variable. That's a lot of variables, but please bear with us!
Once we have our data in the result variable, it is time to transform each result into a Pandas DataFrame. Then, each generated DataFrame is appended into the "all_prices" list:
        # Adding all extracted data to one list
        result = response.json()
        df = pd.DataFrame(result['Products'])
        all_prices.append(df)
Now we can close the loop and concatenate the individual DataFrames into one with the following lines:
    Prices_df = pd.concat(all_prices, ignore_index=True)
    print(Prices_df)
Let's add a bit more. This line of code exports the collected product data to a CSV file:
    Prices_df.to_csv("Amazon_Prices.csv", index=False)
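If you would rather open the results in Excel directly, pandas can also write .xlsx files; this assumes the optional openpyxl package is installed:
    # Optional: also export to Excel (requires the openpyxl package)
    Prices_df.to_excel("Amazon_Prices.xlsx", index=False)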
It is always good practice to track the HTTP status code of your requests. Add the following line to your Amazon web scraper to track any errors (200 is OK). If you want more tips on extracting publicly available data from big retailers, check out our blog on How to Scrape Amazon.
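    # Checks if extraction was successful (200 means OK)
    print(f"{response.status_code}, price extracted")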
All that is left is to add our list of ASINs and call the scraper function:
# Example ASIN list
asin_list = ["B0CTH2QF23", "B0CCDTPDTQ", "B099WTN2TR"]
# Calling the defined scraper function
amazon_prices(asin_list)
After putting everything together, your Command Prompt should look like this:
In the same folder as your script, there should be a new CSV file with the same results:
Handle Anti-Bot and Localization Challenges
Amazon product pages can block automated scraping through IP bans, CAPTCHA challenges, and location-based redirects. On top of that, product prices can differ by region, currency, and device type, making data collection inconsistent across markets.
Additionally, mobile and desktop versions may serve different layouts or dynamic elements, further complicating extraction. However, our API handles many anti-bot and localization challenges automatically, and you can also customize the parameters by introducing different User-Agents and proxy IPs, with the ability to pick the location of your choice.
Combined with JavaScript rendering, this blend of features will help you extract up-to-date Amazon prices and remain in full control of your connection.
While our connection was successful without extra measures, here are additional parameters that you can add to your API call:
"premium_proxy": "True", # Uses highest quality rotating proxies
"country_code": "US", # Forces geo-location to a specific country
"block_resources": "False", # Loads images/CSS/JS normally for full rendering
"render_js": "True", # Enables JavaScript rendering for dynamic pages
"custom_headers": {"User-Agent": "Mozilla/5.0"}, # Mimics real browser headers
Full Code Example
We understand that this tutorial might feel like a lot for those who have never used Python to scrape web data. Here you can find a copy-paste-ready script that uses requests and other web scraping libraries through our Python SDK. Feel free to test it for yourself and scale it up with additional ASINs or more extracted data points:
from scrapingbee import ScrapingBeeClient
import pandas as pd

# Initializing our HTML API client
client = ScrapingBeeClient(api_key='YOUR_API_KEY')

def amazon_prices(asins):
    extract_rules = {
        "Products": {
            "selector": "#centerCol",
            "type": "list",
            "output": {
                "name": "#productTitle",
                "price": "span.a-price.aok-align-center.reinventPricePriceToPayMargin.priceToPay"
            }
        }
    }
    # Instructions for JavaScript rendering that will scroll the loaded webpage
    js_scenario = {
        "instructions": [
            {"wait": 2000},
            {"evaluate": "window.scrollTo(0, document.body.scrollHeight);"}
        ]
    }
    all_prices = []
    # Looping GET API calls through different ASIN code URLs
    for asin in asins:
        response = client.get(
            f'https://www.amazon.com/dp/{asin}',
            params={
                "extract_rules": extract_rules,
                "js_scenario": js_scenario,
                "premium_proxy": "True",
                "country_code": "us"
            },
            retries=2
        )
        # Adding all extracted data to one list
        result = response.json()
        df = pd.DataFrame(result['Products'])
        all_prices.append(df)
    Prices_df = pd.concat(all_prices, ignore_index=True)
    print(Prices_df)
    Prices_df.to_csv("Amazon_Prices.csv", index=False)
    # Checks if extraction was successful (200 means OK)
    print(f"{response.status_code}, price extracted")

# Example ASIN list
asin_list = ["B0CTH2QF23", "B0CCDTPDTQ", "B099WTN2TR"]
# Calling the defined scraper function
amazon_prices(asin_list)
Start Scraping Amazon Prices Now
When you target retail giants like Amazon for price monitoring, you will eventually encounter anti-scraping measures that cut off your data collection efforts. With our Python SDK, one API call is more than enough to add fresh price intelligence to your pricing strategy.
Let us worry about JavaScript rendering and proxy rotation while you manage the extracted data in HTML or JSON to track prices at scale in minutes. Once you've completed your free trial, check out our Pricing Page to continue scraping with our powerful API!
Frequently Asked Questions (FAQs)
Is it legal to scrape Amazon prices?
Yes, scraping publicly available prices is generally legal. However, make sure to review Amazon’s Terms of Service and consult legal counsel to ensure compliance with applicable laws and policies.
How often can I scrape prices from Amazon?
There are no mandatory time intervals for scraping Amazon. Avoid rapid-fire requests: use delays, retries, and proxy rotation to stay under detection thresholds, and schedule scrapes at reasonable intervals to reduce blocks.
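For example, a minimal way to space out requests, assuming Python's built-in time module and the loop from our tutorial:
import time

for asin in asin_list:
    # ...send the API call for this ASIN, as in the tutorial script...
    time.sleep(5)  # pause a few seconds between requests to avoid rapid-fire traffic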
Does Amazon block scraping bots?
Yes. Amazon employs CAPTCHAs, IP rate limits, and region checks. Our API can bypass these obstacles with automatic proxy rotation, server-side JavaScript rendering, smart retries, and configurable headers.
Can ScrapingBee handle product variations like size or color?
Yes. Thanks to our automatic JavaScript Rendering, you can target variation-specific selectors (e.g., selected swatch blocks) or loop through variation URLs/ASINs for precise pricing.

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.