Make concurrent requests in Ruby

Our API is designed to let you run multiple scraping operations concurrently. That means you can scrape hundreds, thousands, or even millions of pages per day, depending on your plan.

The higher your concurrent request limit, the more calls you can have active in parallel, and the faster you can scrape.
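
If you have more pages to scrape than your limit allows at once, one simple way to stay within it is to cap the number of in-flight threads. The snippet below is a minimal sketch, not part of the tutorial code: it uses Ruby's SizedQueue as a counting semaphore, and the limit of 5, the example URLs, and the scrape method are placeholders to replace with your own plan limit and request logic.

# Minimal sketch: cap in-flight requests using SizedQueue as a counting semaphore.
# MAX_CONCURRENCY, the URLs, and `scrape` are placeholders for illustration.
MAX_CONCURRENCY = 5
urls = (1..20).map { |i| "https://example.com/page/#{i}" }

slots = SizedQueue.new(MAX_CONCURRENCY)

threads = urls.map do |url|
  slots.push(true) # blocks while MAX_CONCURRENCY requests are already running
  Thread.new do
    begin
      scrape(url) # placeholder for your request logic
    ensure
      slots.pop   # free the slot when this request finishes
    end
  end
end

threads.each(&:join)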

Making concurrent requests in Ruby is as easy as running our scraping function in separate threads! The code below makes two concurrent requests to ScrapingBee's pages and displays the HTML content of each page:

require 'net/http'
require 'net/https'
require 'addressable/uri'

# Classic (GET)
def send_request(user_url)
  uri = Addressable::URI.parse("https://app.scrapingbee.com/api/v1/")
  api_key = "YOUR-API-KEY"
  uri.query_values = {
    'api_key' => api_key,
    'url'     => user_url
  }
  uri = URI(uri.to_s)

  # Create the HTTPS client
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_PEER

  puts "Sending the request to #{user_url}"

  # Create the request
  req = Net::HTTP::Get.new(uri)

  # Send the request and read the response
  res = http.request(req)

  # Print the response body
  puts res.body
rescue StandardError => e
  puts "HTTP Request failed (#{e.message})"
end

# Create a thread for the first request
first = Thread.new { send_request("https://scrapingbee.com/blog") }

# Create a thread for the second request
second = Thread.new { send_request("https://scrapingbee.com/documentation") }

# Wait for the first thread to finish
first.join

# Wait for the second thread to finish
second.join

puts "Process End"