Extract Job Listings, Details and Salaries from Indeed with ScrapingBee and Make.com

11 April 2022 | 13 min read

Manually reading through target pages is rarely the best use of your time. It's slow, and it's easy to miss important changes when you're scrolling through hundreds of listings. Learning how to collect these updates automatically, without writing any code, is therefore a valuable skill.

In this tutorial, we will scrape jobs from indeed.com, one of the most popular job aggregator websites. Web scraping is an excellent tool for extracting valuable information from a large job listing database.

Indeed is loaded with fresh data every day. With Indeed and ScrapingBee, you can search millions of jobs, CVs, company reviews and more, and curate city-specific or industry-specific listings that only display jobs from a particular city or industry.

Here's what we'll do:

  • Search for Customer Service-related jobs posted in a number of cities on indeed.com
  • Implement a ScrapingBee function on Make (formerly Integromat) to retrieve job listings, details and other relevant information
  • Store all extracted data in a spreadsheet for further cleaning and analysis
  • Send these job details via email or newsletter campaign
  • Create an alert for new job listings
  • Schedule the scenario to run at least once a day

Why Scrape Job Data from Job Listing Sites?

In case you are wondering why you even need to scrape data from Indeed, let’s take a look at what you can do with Indeed data.

There are a number of reasons why you might want to extract data from job listing sites:

  • You own a job board or newsletter
  • You're on the job market and are on the hunt for a new and interesting job
  • You want to find out which companies are hiring, what roles are in demand, and how much the average salaries are
  • You’re interested in job statistics to find the next step in your career

A web scraping tool can help you gather all the job data from a specific category and export it to formats like spreadsheets, CSV files, or databases. You can use ScrapingBee to collect job posting information and parse it into a nice, tidy format!

If you’re new to no-code web scraping, we have an article exploring the many aspects of no-code web scraping for coders and no-coders.

Now let’s get into gathering and parsing information from Indeed’s pages using ScrapingBee!

Getting started

We have a clear picture of what we need to do and how web scraping can help. Let's start putting these ideas into action.

ScrapingBee is a web scraping API that handles proxies and headless browsers for you, so you can focus on extracting the data you want, and nothing else.

The API also allows you to:

  • Extract complex structured information from any website with CSS selectors
  • Run JavaScript scenarios
  • Take a screenshot of a website: the full page or just a part of it

First, get your ScrapingBee account if you haven't already got one. For this tutorial, you'll also need a Make (formerly Integromat) account.

Extract data with CSS selectors

To use ScrapingBee to pull visible content from a series of Indeed pages, it will be helpful to analyze the URL and page structure. All of the information on the page is coded with HTML (HyperText Markup Language) tags.

To have ScrapingBee extract a specific element, you pass that element's CSS selector along with the URL of the Indeed.com page you want to scrape. For example, span.companyName matches a <span> element whose class attribute includes companyName.

If you don't know how CSS selectors work, check out our No Code Competitor Monitoring tutorial. It doesn't require any coding knowledge and is very easy to put into practice.

The URL of the page we want to scrape in this tutorial is https://uk.indeed.com/jobs?q=Customer%20Service. The q parameter holds the search query; to build city-specific listings, you can typically narrow the search with a location parameter as well, for example https://uk.indeed.com/jobs?q=Customer%20Service&l=London.

We will start by extracting the following information for the purpose of this tutorial:

  • Job title
  • Link
  • Company name
  • Company location
  • Salary

Depending on your requirements, you can add more, such as the job summary or company rating. The general syntax for extract rules is:

{ "key_name" : "css_selector" }

Extraction rules can look quite complex if you are not familiar with CSS. You may want to read the documentation for more details about how they work.

By adding extraction rules to the API call, you can extract data from pages without parsing the HTML on your end. Each rule pairs a key of your choosing with the CSS selector whose content should populate that key.
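
As a minimal illustration, a single rule that reuses one of the selectors from the full rule set below:

{ "title": "h2.jobTitle>span" }

would return just the job title text of the first matching element, for example { "title": "Customer Service Advisor" }.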

The HTML structure of each listing is basically identical, so we can run our scenario on all of them and get all the details quickly.

If you want to get all elements matching a selector, use the list type option. Since we want every job listing on the page and not just the first one, we set the type to list.

{
	"results": {
		"selector": "a.tapItem.fs-unmask.result.resultWithShelf.sponTapItem.desktop",
		"type": "list",
		"output": {
			"link": {
				"selector": "a.tapItem.fs-unmask.result.resultWithShelf.sponTapItem.desktop",
				"output": "@href"
			},
			"title": "h2.jobTitle>span",
			"name": "span.companyName",
			"location": "div.companyLocation",
			"salary": "div.salary-snippet"
		}
	}
}

This will return a list of all elements matching the selector. For each job listing, the information extracted by the above rules would look like:

"link": "/rc/clk?jk=0d5c8c0dfe43c3d5&fccid=057e594637329eab&vjs=3",
"title": "Customer Service Advisor",
"name": "City of Westminster Council",
"location": "Westminster",
"salary": "£27,009 - £30,063 a year"

You can also explore this feature using our Postman collection, which covers all of ScrapingBee's features.

Test your API Call on ScrapingBee

Let’s use the ScrapingBee dashboard to test our API calls and quickly see how each of these settings works. To get started, you just need to create an account here.

You can set up each action you want to perform using the respective fields in the Request Builder:

1. Request builder.png

Input the link in the URL field and uncheck JavaScript Rendering. Rendering the JavaScript on the page with a headless browser costs 5 credits per request:

1.1 URL required.png

Next, check the “extract rules” box and input the code in your ScrapingBee request builder:

1.2 Extract JSON.png

Click the ‘Try it’ button on the right-hand side of your dashboard to preview the results.

Now let’s get on to automating this process with Make.

Get your ScrapingBee API Key

In order to add a connection on Make.com, you'll need your ScrapingBee API key, which you can find on the ScrapingBee dashboard:

1.3 API Key.png

Copy the API key to your clipboard for later.

Create a new Make (formerly Integromat) Scenario

With Make, you can connect different modules and create scenarios that define what happens when content is updated and which pages should be loaded. There are numerous apps you can use with Make, and you can combine them to run automated tasks.

To get started, create a new Make account or log into your existing account:

Screenshot 2022-03-04 at 09.59.01.png

Now you need to create a scenario to hold the modules that will load and process our data. From the account homepage, go to Scenarios and create a new scenario. A Make scenario is composed of a series of modules that indicate how data should be transferred and transformed between apps and services:

Screenshot 2022-03-04 at 10.04.32.png

Select the app that will trigger your scenario. Click the first “+” sign and select ScrapingBee:

Screenshot 2022-03-04 at 15.42.03.png

Now choose one of the actions from the list. In this case, select the Make an API call action:

Screenshot 2022-03-04 at 15.48.41.png

Next, you will need to create a connection or select one of your previously configured connections for the module.

Create a ScrapingBee connection

Make uses connections to link to different services that require authentication. For example, to perform a scrape, we need to connect to ScrapingBee using our API key.

If you have a connection set up, click the dropdown menu to select your connected account:

5. Add new connection.png

Alternatively, to add a new connection, click “Add” and enter the ScrapingBee API key you copied from the ScrapingBee dashboard:

Screenshot 2022-03-04 at 15.52.27.png

Once the connection is set up, we can configure the action.

Configure Action to make an API call

Set method to GET. This is the default method and is what browsers use to display pages. POST is used to submit forms, but we have no need for that here.

Enter the URL of the Indeed job listing page that you want to watch in the URL field; this is the link we want to scrape:

Enter the URL that you want to watch in the URL field.png

Toggle the box “Show advanced settings”. This is where we enter the data extraction rules that allow ScrapingBee to return the data in a nicely formatted way.

{
	"results": {
		"selector": "a.tapItem.fs-unmask.result.resultWithShelf.sponTapItem.desktop",
		"type": "list",
		"output": {
			"link": {
				"selector": "a.tapItem.fs-unmask.result.resultWithShelf.sponTapItem.desktop",
				"output": "@href"
			},
			"title": "h2.jobTitle>span",
			"name": "span.companyName",
			"location": "div.companyLocation",
			"salary": "div.salary-snippet"
		}
	}
}

Scroll to “Extract Rules (JSON)” and enter the code to extract all the details:

Extract Rules (JSON).png

Set “Render JS” to No. JavaScript rendering is not required for this scraping job and costs extra credits.

Screenshot 2022-03-10 at 11.11.32.png
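
To recap, the module is now configured to perform a ScrapingBee request roughly equivalent to the following parameters (a conceptual summary of the settings above, not the literal field names in Make's UI):

{
	"method": "GET",
	"url": "https://uk.indeed.com/jobs?q=Customer%20Service",
	"render_js": false,
	"extract_rules": "the JSON extraction rules shown above"
}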

Test Connection

You can now test-run the scrape job using the “Run Once” button at the bottom of the page:

Run scenario once.png

Control workflow with Iterator

For the next step in the scenario, we will select “Flow Control” and pick “Iterator”. The Iterator is a separate tool found under the Flow Control section of the tools menu.

An Iterator is a special type of module that allows you to split a large bundle of data into multiple smaller bundles. Here, it will take the array of job listings returned by ScrapingBee and pass them on to the following modules one at a time.

Screenshot 2022-03-04 at 16.34.27.png

Each array item will be output as a separate bundle. This allows the rest of the scenario to run once for each value in the result set, which matters because (as we will see later) we want to add a record to our data store for each scraped job.

To configure the Iterator, select “Body → Results”. This makes it loop over the scraped data and perform any subsequent action for each item:

Iterator.png

Click ‘Ok’ then Run once to preview the results.

The results are then passed one after another to the subsequent modules, so each job listing is handled as its own bundle; if the page returns 15 job cards, for example, the modules that follow will run 15 times.

Now, you will have to create a connection to the corresponding service.

Configure Data store Module

A data store lets you save new data from scenarios and transfer data between scenario runs or between individual scenarios; it works much like a simple database table. Data store modules allow you to add, replace, update, retrieve, delete, search, and count records.

In order to use the Data store module, it is necessary to create a data store in your Make.com account. Now let’s select or add the data store where you want to create a record.

Just like with the ScrapingBee integration, we first need to add a connection. Click the plus (+) sign and add a Data store module:

add a data store module.png

Choose the Add/replace a record action.

Create a Data store

Next, create a data store if you don't already have one set up.

Click the Add data store button:

Create a data store.png

Enter settings for the new data store:

  • Data store name - Enter the name for the data store. E.g. Indeed jobs
  • Data Structure - A data structure lists the columns of the table, with each column's name and data type. Select a data structure that has already been created, or click the Add button to create a new one. If you leave the field empty, the data store will only contain the primary key.
  • Data storage size in MB - Allocate the size for the data store from your total Internal data storage. The reserved amount can be changed at any time later on.
Enter settings for the new data store.png

Set up Data structure

A data structure is a document that describes in detail the format of the data being transferred to Make.

If you have a data structure set up already, then select this from the dropdown menu.

You can alternatively create a data structure by clicking the Add button in the settings of any module that requires one, or by clicking the Create a new data structure button in the Data structure overview section.

  • Data structure name - Enter the name for the data structure you are going to create
  • Specification - Click the Add item button to specify the properties of one column manually, then enter the Name and Type for the data store column and define corresponding properties. Alternatively, use the Generator button to determine the columns from the sample data you provide.

For this tutorial, we will create five columns with the Generator button by inputting the JSON sample data:

{
	"link": "/rc/clk?jk=0d5c8c0dfe43c3d5&fccid=057e594637329eab&vjs=3",
	"title": "Customer Service Advisor",
	"name": "City of Westminster Council",
	"location": "Westminster",
	"salary": "£27,009 - £30,063 a year"
}

The generator will automatically create a data structure based on the sample you entered; in this case it should produce five text columns: link, title, name, location and salary. The generated structure can then be modified manually.

Setting Up the Data Structure.png

Click Save when the data store is set up.

Add/Replace a Record

Now that we have selected or added the data store, let's set up the action that adds or replaces a record in it.

  • Key - Enter a unique key. The key can be used later to retrieve the record. If you leave this field blank, the key will be generated automatically.
  • Overwrite an existing record - Enable this option to overwrite the record. The record you want to overwrite must be specified in the Key field above.
  • Record - Enter the desired values into the record's fields.

The module throws an error if you try to add a record whose key already exists in the data store while the Overwrite an existing record option is disabled.

Map the Title, Link, Name, Salary and Location fields of the record to the corresponding values from the Iterator results:

Map data store fields.png

Note: The link values scraped from Indeed are relative URLs, so it is important to prepend https://uk.indeed.com (with no space and no trailing slash) to the link field. This ensures the stored URL is correctly formatted, as in the example below.
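
Using the sample values from earlier, the mapping turns the relative link into an absolute one (the two keys below are just illustrative labels):

{
	"scraped_link": "/rc/clk?jk=0d5c8c0dfe43c3d5&fccid=057e594637329eab&vjs=3",
	"stored_link": "https://uk.indeed.com/rc/clk?jk=0d5c8c0dfe43c3d5&fccid=057e594637329eab&vjs=3"
}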

Click ‘Ok’. Run the integration again.

Expand the results:

View Data store results.png

You can also go to your Make.com data store to view, edit or delete the records created:

View, edit, rename data store results.png

Push the extracted data to another module

Now that the data is stored on Make.com, you have several options to explore. You can choose to:

  • set up Slack notifications
  • filter the results to those that include a salary
  • send the results via email
  • set up a new email marketing campaign
  • create a new CMS item, and more

All you have to do is add a router and forward the data to other modules. Make.com's built-in Router makes it easy to reuse processed data: it divides a scenario into multiple routes, and each route can handle the data differently.

add a router.png

Additionally, you can filter the data as it passes through the routes by specifying filter conditions that define which bundles are allowed to reach a specific module; for example, you could only pass on listings that include a salary.

Set up a router filter.png

Schedule a Make Scenario

You can specify when and how frequently an activated scenario runs. To do so, go to the scenario detail page and locate the Schedule tab.

To set the schedule, click the clock icon overlaying the ScrapingBee module, change the Run scenario setting, and set the time when it should run:

Screenshot 2022-03-10 at 16.21.52.png

For this tutorial, we will set the interval at which the scenario repeats to 60 minutes (every hour):

time interval in which the scenario should be repeated .png

Finally, turn on scheduling for the scenario. Without it, the scenario will not run automatically, regardless of the scheduling settings.

28. Turn on scheduling.png

Conclusion

There you go. That demonstrates how to use ScrapingBee within a Make scenario to scrape job details from Indeed.

With thousands of jobs across several niches posted daily on Indeed.com, you can get up-to-date information without constant manual effort.

Here’s a quick recap. The scenario consists of the following modules:

  • ScrapingBee > Make an API call
  • Iterator
  • Data store > Add/replace a record
  • Router to Slack, Mailchimp and Email
Screenrecording conclusion.gif

That concludes this tutorial on scraping job listings from Indeed.

Happy Scraping!

Juliet Edjere

Juliet is an automation expert with vast experience working with no code tools.