Make concurrent requests in Ruby

Our API is designed to let you run multiple scraping operations concurrently. That means you can speed up scraping to hundreds, thousands, or even millions of pages per day, depending on your plan.

The higher your concurrent request limit, the more calls you can have active in parallel, and the faster you can scrape.
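To stay under that limit while still scraping in parallel, you can use a `SizedQueue` as a counting semaphore: a thread must push a token before starting a request and pops it when done, so no more than the allowed number of requests are in flight at once. The sketch below is a minimal illustration of the pattern; the limit of 5 and the `example.com` URLs are placeholders, and the actual HTTP call is stubbed out.

```ruby
require 'thread'

# Hypothetical limit -- set this to your plan's concurrency limit.
MAX_CONCURRENCY = 5

# SizedQueue#push blocks once MAX_CONCURRENCY tokens are queued,
# so at most MAX_CONCURRENCY threads run the guarded section at a time.
semaphore = SizedQueue.new(MAX_CONCURRENCY)

results = Queue.new
urls = (1..20).map { |i| "https://example.com/page-#{i}" } # placeholder URLs

threads = urls.map do |url|
  Thread.new do
    semaphore.push(true)   # acquire a slot (blocks when the queue is full)
    begin
      # ... perform the real HTTP request here ...
      results << url       # record that this URL was processed
    ensure
      semaphore.pop        # release the slot, even if the request raised
    end
  end
end

threads.each(&:join)
puts "Processed #{results.size} URLs"
```

The `ensure` block is important: it guarantees the slot is released even when a request fails, so one error cannot permanently shrink your effective concurrency.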

Making concurrent requests in Ruby is as easy as wrapping our scraping function calls in threads! The code below makes two concurrent requests to ScrapingBee’s pages and displays the HTML content of each:

require 'net/http'
require 'net/https'
require 'addressable/uri'

# Classic (GET)
def send_request(user_url)
    uri = Addressable::URI.parse("https://app.scrapingbee.com/api/v1/")
    api_key = "YOUR-API-KEY"
    uri.query_values = {
      'api_key' => api_key,
      'url' => user_url
    }
    uri = URI(uri)

    # Create HTTPS client
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER

    puts "Sending the request to "+user_url
    # Create Request
    req =

    # Fetch Request
    res = http.request(req)

    # Print response body
    puts res.body
rescue StandardError => e
    puts "HTTP Request failed (#{e.message})"
end

# Create a thread for the first request
first = Thread.new { send_request("https://www.scrapingbee.com") }

# Create a thread for the second request
second = Thread.new { send_request("https://www.scrapingbee.com/blog") }

# Wait for the first thread to finish
first.join
# Wait for the second thread to finish
second.join
puts "Process End"
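The script above only prints each response body. If you want the scraped HTML back in the calling code, `Thread#value` is a convenient alternative to `join`: it waits for the thread to finish and returns the value of its block. The sketch below uses a stand-in string instead of a real `send_request` call, and the `example.com` URLs are placeholders.

```ruby
# Thread#value joins the thread and returns the block's result,
# so you can collect one response per URL in order.
threads = ["https://example.com/a", "https://example.com/b"].map do |url|
  Thread.new { "fetched #{url}" } # stand-in for a real send_request(url)
end

# Blocks until each thread finishes, then yields its return value.
bodies = threads.map(&:value)
puts bodies.length
```

Because `map` preserves order, `bodies` lines up with the input URL list even though the threads may finish in any order.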