Learning how to scrape Bing search results can feel like navigating a minefield of anti-bot measures and IP blocks. Microsoft's Bing search engine has sophisticated protection systems to detect traditional scraping attempts faster than you can debug your first request failure.
That’s exactly why I use ScrapingBee. Instead of wrestling with proxy rotation, JavaScript rendering, and constantly changing anti-bot measures, this web scraper handles all the complexity for you, letting you scrape search results data without building that infrastructure yourself.
In this guide, I'll walk you through the exact process of using ScrapingBee to scrape Bing search results. You’ll get complete code examples, learn the specific API parameters, and discover techniques that actually work.
Quick Answer (TL;DR)
ScrapingBee can extract Bing search results in a single API call. Our proprietary Bing Search Scraping API handles IP rotation, JavaScript rendering, and geo-targeting automatically. Simply set up your web scraper, specify a search query URL, and our platform will handle the rest.
Here's the complete Python script to help you scrape Bing search results immediately:
import requests
import json
import time
from typing import List, Dict, Optional


class BingScrapingBeeClient:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = 'https://app.scrapingbee.com/api/v1/'

    def scrape_search_results(self, query: str, pages: int = 1) -> List[Dict]:
        """Scrape Bing search results using ScrapingBee"""
        all_results = []

        extract_rules = {
            "results": {
                "selector": "li.b_algo",
                "type": "list",
                "output": {
                    "title": {"selector": "h2"},
                    "link": {"selector": "h2 a", "attr": "href"},
                    "snippet": {"selector": ".b_caption p"},
                    "display_url": {"selector": "cite"}
                }
            },
            "total_results": {"selector": ".sb_count"},
            "related_searches": {
                "selector": ".b_rs a",
                "type": "list"
            }
        }

        for page in range(pages):
            offset = page * 10
            search_url = f"https://www.bing.com/search?q={query.replace(' ', '+')}&first={offset}"

            try:
                response = requests.get(
                    self.base_url,
                    params={
                        'api_key': self.api_key,
                        'url': search_url,
                        'extract_rules': json.dumps(extract_rules),
                        'premium_proxy': True,
                        'country_code': 'US'
                    },
                    timeout=30
                )

                if response.status_code == 200:
                    data = response.json()
                    page_results = data.get('results', [])
                    all_results.extend(page_results)
                    print(f"Scraped page {page + 1}: {len(page_results)} results")
                else:
                    print(f"Error on page {page + 1}: {response.status_code}")
            except requests.exceptions.RequestException as e:
                print(f"Request failed for page {page + 1}: {str(e)}")
                continue

            # Respectful delay between requests
            if page < pages - 1:
                time.sleep(2)

        return all_results

    def save_results(self, results: List[Dict], filename: str):
        """Save results to JSON file"""
        with open(filename, 'w', encoding='utf-8') as f:
            json.dump(results, f, indent=2, ensure_ascii=False)
        print(f"Saved {len(results)} results to {filename}")


# Usage example
if __name__ == "__main__":
    scraper = BingScrapingBeeClient('YOUR_API_KEY')
    results = scraper.scrape_search_results("python web scraping", pages=2)
    scraper.save_results(results, 'bing_results.json')
Ready to dive deeper? Let’s walk through each step and explore advanced techniques that make Bing scraping reliable and scalable.
How to Scrape Bing Search Results with ScrapingBee
Scraping Bing can be tricky because, like other search engines, it uses techniques to block automated traffic. However, this doesn't mean that scraping it is impossible. A reliable web scraper that sends realistic headers, rotates proxies, and renders JavaScript can avoid getting blocked.
Now, if you're not willing to build the infrastructure yourself, ScrapingBee is designed with all the tools needed to scrape search engines, whether it's Bing or Google. The best part is that it requires minimal coding skills. Let's take a quick look at the process.
Scraping Bing typically involves four steps. First, you create the target Bing search URL – for example, a query for SEO keyword optimization, or Bing ads analysis.
Second, you configure your scraper with the right selectors and extraction rules. This way, you'll capture the data you care about, such as organic results, links, snippets, and ads from the first page of the Bing SERPs.
Third, you store data from the structured response, usually as a JSON or CSV file, so it’s ready for analysis or integration into an SEO strategy.
Finally, once you’ve retrieved all the data you need from the Bing search pages, you can download it into a format of your choice. If you want to get familiar with all export options and other advanced techniques, visit [ScrapingBee's Documentation](ScrapingBee Documentation).
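To make the storage step concrete, here's a minimal sketch of writing the structured results to a CSV file. It assumes each result is a dictionary with title, link, and snippet keys, which is the shape produced by the extract rules used later in this guide:

import csv

def save_results_to_csv(results, filename='bing_results.csv'):
    """Write a list of result dictionaries to a CSV file."""
    fieldnames = ['title', 'link', 'snippet']
    with open(filename, 'w', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction='ignore')
        writer.writeheader()
        for row in results:
            writer.writerow({key: row.get(key, '') for key in fieldnames})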
Now, let me show you how to scrape Bing search results step by step.
Step 1: Get Your ScrapingBee API Key
Before you can scrape Bing search results, you need access to ScrapingBee’s API. The setup process is straightforward and takes just a few minutes.
• Go to ScrapingBee.com
• Log in or sign up for a free account
• Once logged in, find your API key in the dashboard
• Copy the key – you’ll need it for all API requests
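One tip before moving on: rather than hardcoding the key in your scripts, you can keep it in an environment variable and read it at runtime. Here's a small sketch (the variable name SCRAPINGBEE_API_KEY is just my own convention):

import os

# Read the API key from the environment instead of hardcoding it,
# e.g. after running: export SCRAPINGBEE_API_KEY="your_key_here"
api_key = os.environ.get('SCRAPINGBEE_API_KEY')
if not api_key:
    raise RuntimeError('SCRAPINGBEE_API_KEY is not set')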
Step 2: Install Required Package
Now, you need the requests library, which is Python’s standard HTTP client. It handles all the communication with ScrapingBee’s API endpoints.
Run this code to install it:
pip install requests
Step 3: Form Your Bing Search URL
Bing utilizes a specific URL format for search queries. Spaces need to be encoded as + symbols or %20.
Here’s how to create proper Bing search URLs:
query = "web scraping tools"
encoded_query = query.replace(" ", "+")
bing_url = f"https://www.bing.com/search?q={encoded_query}"
This is a sample of a properly formatted Bing search URL that ScrapingBee can process. The encoding ensures special characters don’t break the request.
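Replacing spaces works for simple queries, but if your search term contains other special characters (quotes, ampersands, non-ASCII text), Python's built-in urllib.parse.quote_plus handles the full encoding for you:

from urllib.parse import quote_plus

query = 'best "web scraping" tools & APIs'
bing_url = f"https://www.bing.com/search?q={quote_plus(query)}"
print(bing_url)
# https://www.bing.com/search?q=best+%22web+scraping%22+tools+%26+APIs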
Step 4: Make Your First Request to Bing Search Pages
Now that your environment is set up, our platform can process your request by routing it through managed proxies. Then, it loads the page in a real browser and executes JavaScript, allowing the site to render fully.
Finally, it applies your extraction rules and returns only the specified fields as structured JSON. These requests are fully customizable; visit the data extraction documentation for more advanced options.
To kickstart this process, use the following code to send a request to the Bing search engine:
import requests

api_key = 'YOUR_API_KEY'
url = 'https://www.bing.com/search?q=web+scraping+tools'

response = requests.get(
    'https://app.scrapingbee.com/api/v1/',
    params={
        'api_key': api_key,
        'url': url
    }
)

html = response.text
This basic request returns the full HTML content of the Bing search results page, completely rendered, including any JavaScript-generated content.
JavaScript rendering is enabled by default on our platform, so you don’t need to specify this parameter explicitly.
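Before writing extraction rules, I find it helpful to save the rendered HTML locally and inspect it in a browser or editor to confirm which selectors Bing currently uses:

# Save the rendered page so you can inspect Bing's current markup
# and verify selectors such as li.b_algo before defining extract rules
if response.status_code == 200:
    with open('bing_page.html', 'w', encoding='utf-8') as f:
        f.write(response.text)
else:
    print(f"Request failed with status {response.status_code}")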
Step 5: Extract Search Result Data
Getting raw HTML is just the first step. The real power comes from ScrapingBee’s extract_rules feature, which transforms messy HTML into clean, structured JSON data.
For Bing search results, you need to identify the containers that hold titles, URLs, and snippets. Here’s the working code:
import json

extract_rules = {
    "results": {
        "selector": "li.b_algo",
        "type": "list",
        "output": {
            "title": {"selector": "h2"},
            "link": {"selector": "h2 a", "attr": "href"},
            "snippet": {"selector": ".b_caption p"}
        }
    }
}

response = requests.get(
    'https://app.scrapingbee.com/api/v1/',
    params={
        'api_key': api_key,
        'url': url,
        'extract_rules': json.dumps(extract_rules)
    }
)

data = response.json()
print(data)
The extract_rules parameter tells the platform precisely what data to extract and how to structure it. The “li.b_algo” selector targets Bing’s search result containers, while the nested selectors grab specific elements like titles and links.
How Extract Rules Work on Bing
Here’s a complete working example that demonstrates the entire process from query formation to data extraction:
query = "web scraping tools"
encoded_query = query.replace(" ", "+")
url = f"https://www.bing.com/search?q={encoded_query}"

extract_rules = {
    "results": {
        "selector": "li.b_algo",
        "type": "list",
        "output": {
            "title": {"selector": "h2"},
            "link": {"selector": "h2 a", "attr": "href"},
            "snippet": {"selector": ".b_caption p"}
        }
    }
}

response = requests.get(
    'https://app.scrapingbee.com/api/v1/',
    params={
        'api_key': api_key,
        'url': url,
        'extract_rules': json.dumps(extract_rules)
    }
)

results = response.json().get('results', [])

for r in results:
    print(f"Title: {r['title']}")
    print(f"Link: {r['link']}")
    print(f"Snippet: {r['snippet']}")
    print("-" * 40)
Running this example prints the title, link, and snippet for each result. But we're not done yet. To get more out of your scraping project, I'd like to introduce a few advanced techniques you may find useful.
Advanced Bing Scraping Techniques with ScrapingBee
Beyond basic search result extraction, our platform provides advanced features that enhance the robustness and versatility of your Bing scraper. These techniques help you handle edge cases and extract additional data types.
Handling Pagination
Bing search results span multiple pages. Here’s how to scrape across pages systematically:
import time

def scrape_multiple_pages(query, num_pages=3):
    all_results = []
    # Encode spaces so multi-word queries form a valid URL
    encoded_query = query.replace(" ", "+")

    for page in range(num_pages):
        # Bing uses the 'first' parameter for pagination
        offset = page * 10
        url = f"https://www.bing.com/search?q={encoded_query}&first={offset}"

        response = requests.get(
            'https://app.scrapingbee.com/api/v1/',
            params={
                'api_key': api_key,
                'url': url,
                'extract_rules': json.dumps(extract_rules)
            }
        )

        if response.status_code == 200:
            page_results = response.json().get('results', [])
            all_results.extend(page_results)

        # Add delay between requests
        time.sleep(1)

    return all_results
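Assuming api_key and extract_rules are already defined as in the earlier steps, calling the function looks like this:

results = scrape_multiple_pages("web scraping tools", num_pages=3)
print(f"Collected {len(results)} results across 3 pages")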
Geographic Targeting
ScrapingBee supports geo-targeting, which is crucial for location-specific search results:
response = requests.get(
    'https://app.scrapingbee.com/api/v1/',
    params={
        'api_key': api_key,
        'url': url,
        'extract_rules': json.dumps(extract_rules),
        'country_code': 'US',  # Target US results
        'premium_proxy': True
    }
)
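If you want to see how rankings differ between markets, one simple approach is to loop over several country codes and collect each set separately. Here's a sketch that reuses the same request pattern and the api_key, url, and extract_rules variables from above; the country codes I've listed are only examples, so check ScrapingBee's documentation for the supported values:

# Compare localized results across a few markets
results_by_country = {}

for country in ['US', 'GB', 'DE']:
    response = requests.get(
        'https://app.scrapingbee.com/api/v1/',
        params={
            'api_key': api_key,
            'url': url,
            'extract_rules': json.dumps(extract_rules),
            'country_code': country,
            'premium_proxy': True
        }
    )
    if response.status_code == 200:
        results_by_country[country] = response.json().get('results', [])

for country, items in results_by_country.items():
    print(f"{country}: {len(items)} results")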
Custom Headers and User Agents
For additional customization, you can specify custom headers:
custom_headers = {
    'Accept-Language': 'en-US,en;q=0.9',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
}

response = requests.get(
    'https://app.scrapingbee.com/api/v1/',
    params={
        'api_key': api_key,
        'url': url,
        'extract_rules': json.dumps(extract_rules),
        'custom_headers': json.dumps(custom_headers)
    }
)
Using custom headers and user agents makes your web scraper more flexible and less likely to be blocked by websites. It also helps ensure that you receive the data you actually want (e.g., English content instead of a localized version) and keeps your requests consistent.
Building Your First Bing Scraper with ScrapingBee
Let’s put everything together into a production-ready Bing scraper that handles errors, implements best practices, and provides clean output:
import requests
import json
import time
from typing import List, Dict, Optional


class BingScrapingBeeClient:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = 'https://app.scrapingbee.com/api/v1/'

    def scrape_search_results(self, query: str, pages: int = 1) -> List[Dict]:
        """Scrape Bing search results using ScrapingBee"""
        all_results = []

        extract_rules = {
            "results": {
                "selector": "li.b_algo",
                "type": "list",
                "output": {
                    "title": {"selector": "h2"},
                    "link": {"selector": "h2 a", "attr": "href"},
                    "snippet": {"selector": ".b_caption p"},
                    "display_url": {"selector": "cite"}
                }
            },
            "total_results": {"selector": ".sb_count"},
            "related_searches": {
                "selector": ".b_rs a",
                "type": "list"
            }
        }

        for page in range(pages):
            offset = page * 10
            search_url = f"https://www.bing.com/search?q={query.replace(' ', '+')}&first={offset}"

            try:
                response = requests.get(
                    self.base_url,
                    params={
                        'api_key': self.api_key,
                        'url': search_url,
                        'extract_rules': json.dumps(extract_rules),
                        'premium_proxy': True,
                        'country_code': 'US'
                    },
                    timeout=30
                )

                if response.status_code == 200:
                    data = response.json()
                    page_results = data.get('results', [])
                    all_results.extend(page_results)
                    print(f"Scraped page {page + 1}: {len(page_results)} results")
                else:
                    print(f"Error on page {page + 1}: {response.status_code}")
            except requests.exceptions.RequestException as e:
                print(f"Request failed for page {page + 1}: {str(e)}")
                continue

            # Respectful delay between requests
            if page < pages - 1:
                time.sleep(2)

        return all_results

    def save_results(self, results: List[Dict], filename: str):
        """Save results to JSON file"""
        with open(filename, 'w', encoding='utf-8') as f:
            json.dump(results, f, indent=2, ensure_ascii=False)
        print(f"Saved {len(results)} results to {filename}")


# Usage example
if __name__ == "__main__":
    scraper = BingScrapingBeeClient('YOUR_API_KEY')
    results = scraper.scrape_search_results("python web scraping", pages=2)
    scraper.save_results(results, 'bing_results.json')
This production-ready code includes error handling, rate limiting, and structured output. It demonstrates how our platform simplifies what would otherwise be a complex scraping project.
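For larger jobs, you may also want to retry transient failures instead of skipping a page entirely. Below is a minimal, illustrative retry helper with exponential backoff that you could call in place of requests.get inside scrape_search_results; the helper name and retry counts are my own, not part of ScrapingBee's API:

import time
import requests

def get_with_retries(url, params, max_retries=3, timeout=30):
    """Retry transient failures (network errors, 5xx responses) with backoff."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, params=params, timeout=timeout)
            # Return immediately on success or client-side errors (4xx)
            if response.status_code < 500:
                return response
        except requests.exceptions.RequestException:
            pass
        # Exponential backoff: 1s, 2s, 4s, ...
        time.sleep(2 ** attempt)
    return None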
Turning Bing SERPs Into Actionable Data
In this article, I demonstrated how you can scrape Bing search results in just a few simple steps. But don't be fooled by the simplicity. Data extraction from websites like Bing can be challenging without the right tools.
That's why I'm a fan of ScrapingBee's powerful web scraper. If "it simply works" is not enough of an argument, its technical characteristics, like proxy management, JavaScript rendering, and IP rotation, speak for themselves.
This solution allows you to perform SEO tasks – from competitive analysis to search ranking monitoring – in minutes. After all, nobody has time to copy and paste web links or keywords anymore.
Ready to start scraping Bing search results? Sign up for ScrapingBee today and get 1,000 free API calls to test everything we’ve covered.
Frequently Asked Questions (FAQs)
How do I scrape Bing search results without getting blocked?
ScrapingBee handles [scraping without getting blocked](Scraping Without Getting Blocked) automatically through its proxy rotation system, browser fingerprint management, and intelligent rate limiting. When you scrape search engines, these tools are enabled automatically, and the platform adapts to Bing’s anti-bot measures.
Can ScrapingBee handle JavaScript-based Bing features?
Yes, ScrapingBee includes full JavaScript rendering capabilities. The platform uses real browsers to execute JavaScript, ensuring that dynamically loaded content and interactive features are properly captured when scraping Bing search pages.
Does ScrapingBee support geo-targeting for Bing?
ScrapingBee offers comprehensive geo-targeting through country codes and premium proxy locations. You can specify the geographic region for your requests, ensuring you get localized search results that match your target audience.
How much does it cost to scrape Bing with ScrapingBee?
ScrapingBee offers flexible pricing based on API usage, starting with 1,000 free requests for new accounts. Pricing scales with volume, and premium features like geo-targeting and custom headers are available across all paid plans.

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.