Make concurrent requests in PHP

Our API is designed to let you run multiple scraping operations concurrently. That means you can scrape hundreds, thousands, or even millions of pages per day, depending on your plan.

The higher your concurrent request limit, the more calls you can have active in parallel, and the faster you can scrape.

Making concurrent requests in PHP is as easy as forking a child process for each call to our scraping function. The code below makes two concurrent requests to ScrapingBee’s pages and displays the HTML content of each page:

<?php

// Set base url & API key
$BASE_URL = "https://app.scrapingbee.com/api/v1/?";
$API_KEY = "YOUR-API-KEY";

function scrape_website($url) {
    global $BASE_URL;
    global $API_KEY;

    // Set parameters
    $parameters = array(
        'api_key' => $API_KEY,
        'url' => $url // The URL to scrape
    );

    // Get cURL resource
    $ch = curl_init();

    // Build the URL query
    $query = http_build_query($parameters);

    // Set the URL for cURL
    curl_setopt($ch, CURLOPT_URL, $BASE_URL . $query);

    // Set method
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');

    // Return the transfer as a string
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

    // Send the request and save the response to $response
    $response = curl_exec($ch);

    // Stop if the request fails
    if ($response === false) {
        die('Error: "' . curl_error($ch) . '" - Code: ' . curl_errno($ch));
    }

    echo 'HTTP Status Code: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . PHP_EOL;
    echo 'Response Body: ' . $response . PHP_EOL;

    // Close the cURL resource to free up system resources
    curl_close($ch);
}

$urls = array("https://scrapingbee.com/blog", "https://scrapingbee.com/documentation");

foreach ($urls as $url) {
    $pid = pcntl_fork();

    if ($pid == 0) {
        // Child process: scrape one URL, then exit
        echo "Scraping $url\n";
        scrape_website($url);
        exit;
    }
}

// Parent process: wait for both children to finish before exiting
while (pcntl_wait($status) > 0);
?>
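
Note that pcntl_fork() is provided by the pcntl extension, which is not available on Windows and is typically enabled only for the command-line SAPI, so run the script above from a terminal with the PHP CLI. If forking is not an option in your environment, here is a minimal alternative sketch (not part of the original example) that sends the same two requests in parallel from a single process using PHP's built-in curl_multi functions. It assumes the same base URL, API key placeholder and target URLs as above:

<?php

// Minimal sketch (assumption, not from the tutorial above): the same two
// concurrent requests, sent from one process with curl_multi instead of pcntl_fork()
$BASE_URL = "https://app.scrapingbee.com/api/v1/?";
$API_KEY = "YOUR-API-KEY";

$urls = array("https://scrapingbee.com/blog", "https://scrapingbee.com/documentation");

// Create one cURL handle per URL and register it on the multi handle
$mh = curl_multi_init();
$handles = array();

foreach ($urls as $url) {
    $query = http_build_query(array('api_key' => $API_KEY, 'url' => $url));
    $ch = curl_init($BASE_URL . $query);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Run all transfers in parallel until every one of them has finished
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);
    }
} while ($running && $status == CURLM_OK);

// Read each response and free the handles
foreach ($handles as $url => $ch) {
    echo "Scraped $url" . PHP_EOL;
    echo 'HTTP Status Code: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . PHP_EOL;
    echo 'Response Body: ' . curl_multi_getcontent($ch) . PHP_EOL;
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}

curl_multi_close($mh);
?>

The difference is purely operational: pcntl_fork() runs each request in its own process, while curl_multi multiplexes both transfers inside a single process, which also works on Windows.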