
How to make HTTP requests in Node.js with fetch API

12 January 2026 | 27 min read

If you're looking for a clear Node.js fetch example, you're in the right place. Making HTTP requests is a core part of most Node.js apps, whether you're calling an API, fetching data from another service, or scraping web pages. The good news is that modern Node.js comes with a native Fetch API. For many use cases, you no longer need to install a separate HTTP client just to make requests. Fetch is built in, promise-based, and works almost the same way it does in the browser.

In this article, we'll start with the basics of using fetch in Node.js, then move on to practical patterns like headers, POST requests, and error handling. After that, we'll look at how those same ideas apply to web scraping, and when it makes sense to move from raw fetch calls to a production-ready setup using ScrapingBee.


Quick answer: How to use the Fetch API in Node.js

If you just want the working pattern and don't care about the theory yet, this is what using the Fetch API in Node.js looks like in real life:

// Make an HTTP GET request
const response = await fetch("https://httpbingo.org/get");

// Always check the HTTP status before parsing the response
// fetch only rejects on network errors, not on 4xx / 5xx
if (!response.ok) {
  throw new Error(`HTTP ${response.status}`);
}

// Parse the response body as JSON
const data = await response.json();

// Use the data
console.log(data);

In Node.js 18 and newer, fetch is available globally, so you can call it directly without importing anything. This is the standard Fetch API flow you'll use for most GET requests in Node.js.

If you're running an older Node.js version, you can get the same API by installing node-fetch. Just keep in mind that modern versions of node-fetch are ESM-only, so for new projects, upgrading Node.js and using the native Fetch API is usually the cleaner and simpler option.

What is the Fetch API in Node.js?

The short version

The Fetch API is a modern interface for making HTTP requests and fetching resources like JSON, text, or files. It originally came from browsers and is now available natively in Node.js, which means no extra libraries just to make a simple request.

Native Fetch in Node.js is a big quality-of-life upgrade. The API is small, predictable, and easy to reason about, especially if you've already used fetch in the browser. Same mental model, same patterns, fewer surprises.

How fetch works in Node.js

At its core, everything starts with a single function call:

fetch(url, options)

  • You pass in the URL you want to request and an optional options object. That object is where you define things like the HTTP method, request headers, or a request body for POST and PUT requests.
  • The fetch call returns a Promise that resolves to a Response object once the request finishes. That response doesn't contain the data yet — it represents the HTTP response and gives you control over how to read the body (JSON, text, streams, and so on).

The Promise-based flow is the real win here. With async and await, you can write request code that runs top to bottom and reads like normal synchronous logic, without callbacks or long .then() chains getting in the way.
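
As a tiny illustration of that flow, here's a GET request that passes an options object explicitly (using the httpbingo.org echo endpoint that the rest of this article leans on):

// The options object is where the method, headers, and body live.
// For a plain GET you can omit it entirely; it's spelled out here for clarity.
const response = await fetch("https://httpbingo.org/get", {
  method: "GET",
  headers: {
    Accept: "application/json"
  }
});

// The Promise has resolved to a Response; the body still has to be read
const data = await response.json();
console.log(data);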

Requests and responses, without the pain

When you call fetch, Node.js creates a Request object behind the scenes. That represents what you're sending out. Most of the time, though, you'll be working with the Response object that comes back.

From that response, you choose how to read the body depending on what the server returned:

  • response.json() for JSON APIs
  • response.text() for plain text
  • response.arrayBuffer() for binary data

This is the exact same mental model you get in the browser. If you've ever used fetch on the frontend, using the Fetch API in Node.js will feel immediately familiar.
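
For binary data, the flow is the same, you just reach for response.arrayBuffer() instead. Here's a minimal sketch that saves an image to disk, assuming httpbingo's /image/png endpoint (any binary URL works the same way):

import { writeFile } from "node:fs/promises";

const response = await fetch("https://httpbingo.org/image/png");

// Read the body as an ArrayBuffer and wrap it in a Node.js Buffer
const buffer = Buffer.from(await response.arrayBuffer());

// Write the bytes to disk; the filename is just an example
await writeFile("test-image.png", buffer);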

Why native fetch is nicer than the old stuff

Before fetch became a first-class thing in Node.js, you basically had two options:

  • Use the built-in http / https modules (http.request, streams, manual headers, more boilerplate than you wanted)
  • Install a library like Axios / got / request (nice ergonomics, but now you've got an extra dependency and a different API to learn)

The core modules are powerful, but they're low-level on purpose — you do more wiring yourself, and the code gets noisy fast once you add timeouts, retries, JSON parsing, and decent error handling.

Native fetch hits a sweet spot: it's standard, Promise-based, and the same mental model you already know from the browser (Request, Response, response.json(), etc.). That means your Node.js request code stays clean and readable, and you can share patterns across frontend and backend without your brain context-switching every five minutes.

Setting up Fetch in Node.js

Native Fetch in modern Node.js

If you're on Node.js 18 or newer, fetch is already there. No setup, no imports, no extra packages. In this tutorial I'm using Node.js 24, which ships with the Fetch API exposed globally, just like in the browser. You can call fetch() directly and move on with your life.

If you're not sure which Node.js version you're running, check it like this:

node -v

If the version is 18 or higher, you're good. The Fetch API is available by default in Node.js and behaves the way you'd expect if you've used fetch before.

What about older Node.js versions?

If you're stuck on an older Node.js runtime for whatever reason, you've got two realistic options.

  • Upgrade Node.js. Yeah, captain obvious, but it's worth saying. If you control the environment, this is almost always the best move. You get the native Fetch API plus a pile of other improvements for free.
  • Install node-fetch. This is mainly for legacy setups or shared environments where upgrading Node.js isn't possible yet. In that case, you can pull in node-fetch and get a fetch-compatible API.

Here's the minimal setup using node-fetch:

import fetch from "node-fetch";

This gives you a fetch function that behaves almost the same as the native one. Still, if you can avoid this path, do it. Fewer dependencies means fewer updates, fewer security issues, and less stuff to keep in your head.

Note that in this tutorial we will be using ESM examples only, so you'll need to add "type": "module" to your package.json.
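
For reference, the relevant part of package.json is just this one field:

{
  "type": "module"
}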

Good setup habits

A few things are worth keeping in mind when working with fetch in Node.js:

  • Prefer the built-in Fetch API when your Node.js runtime supports it
  • Keep dependencies small and intentional
  • Always test fetch behavior in the same Node.js version you deploy to
  • Stay consistent across environments, especially between local and production

For most modern setups, you don't need to install node-fetch at all. Native fetch in Node.js is stable, familiar, and more than enough for everyday HTTP requests.

Basic Node.js Fetch examples: simple GET requests

A simple GET request with async and await

Let's start with the most common case. You want to make a GET request and read JSON from the response. For demos, httpbingo.org/get is a solid endpoint because it simply echoes back what it receives. That makes it perfect for testing headers, query parameters, and general request behavior without guessing what the server is doing.

Here's a basic Node.js fetch example using async and await:

const response = await fetch("https://httpbingo.org/get");

// Parse the response body as JSON
const data = await response.json();

console.log(data);

The response is JSON and includes fields like args, headers, and origin. If you pass query parameters, they'll show up under args, which makes this endpoint great for test-driving requests locally before wiring them into real APIs or services.

The same request using .then()

If you prefer working directly with Promises instead of async and await, fetch supports that style as well. This Node.js fetch example does the exact same thing, just written in a different way:

fetch("https://httpbingo.org/get")
  .then(response => {
    // Convert the response body to JSON
    return response.json();
  })
  .then(data => {
    console.log(data);
  })
  .catch(error => {
    // This will catch network errors or JSON parsing errors
    console.error(error);
  });

Both styles are perfectly valid. In real projects, async and await are usually easier to read, especially once you start adding error handling, conditional logic, or multiple requests.

Passing query parameters for testing

One nice thing about httpbingo.org is that it reflects query parameters straight back to you. That makes it very obvious what you're actually sending in the request.

const response = await fetch("https://httpbingo.org/get?user=john&mode=test");

// Parse the JSON response
const data = await response.json();

// The echoed query parameters live under `args`
console.log(data.args);

You'll see user and mode inside the args object. This is great for local testing and debugging, but the exact same pattern applies when you're working with real production APIs.
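
When the values come from variables, it's usually cleaner to build the query string with URLSearchParams (available globally in Node.js) instead of concatenating strings by hand:

// Build the query string from an object instead of gluing strings together
const params = new URLSearchParams({ user: "john", mode: "test" });

const response = await fetch(`https://httpbingo.org/get?${params.toString()}`);
const data = await response.json();

// The echoed parameters still show up under `args`
console.log(data.args);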

A quick note on non-JSON responses

Fetch does not assume the response is JSON. If the server returns plain text or HTML, calling response.json() will throw an error. In those cases, you should use response.text() instead.

It's important to know what an endpoint returns, especially in production apps where errors should be handled cleanly instead of crashing your process. Also remember that fetch always returns a Promise: the request starts as soon as you call fetch, but you won't see the result (or any error) until you await the Promise or attach a .then() handler.
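
If you're not sure what a given endpoint returns, one defensive pattern is to check the Content-Type response header before deciding how to read the body. A minimal sketch, using httpbingo's /html endpoint, which returns an HTML page:

const response = await fetch("https://httpbingo.org/html");

const contentType = response.headers.get("content-type") ?? "";

// Pick the right body reader based on what the server says it sent
const body = contentType.includes("application/json")
  ? await response.json()
  : await response.text();

console.log(body);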

A solid default pattern for GET requests

If you just want one copy-paste pattern to start with, this Node.js fetch example is a solid default for most GET requests:

const response = await fetch("https://api.example.com/data");

// fetch only throws on network errors,
// so you need to check the HTTP status yourself
if (!response.ok) {
  throw new Error(`Request failed with status ${response.status}`);
}

// Parse the response body as JSON
const data = await response.json();

return data;

This pattern keeps things simple and explicit. It works well in production code and makes failures obvious instead of silently continuing with partial or invalid data. If something goes wrong, you'll know immediately.

Handling Node.js fetch responses and headers

Understanding the Response object

Every fetch call in Node.js resolves to a Response object. That object tells you whether the request succeeded and gives you access to both the response data and the HTTP metadata that came back from the server.

The first properties you usually care about are:

  • response.ok — a boolean that's true for successful responses (HTTP status codes in the 200–299 range)
  • response.status — the numeric HTTP status code
  • response.statusText — the human-readable status message

This is your first line of defense in production code, before you even touch the response body.

Here's a quick example using httpbingo.org/get:

const response = await fetch("https://httpbingo.org/get");

console.log(response.ok);         // true for 2xx responses
console.log(response.status);     // e.g. 200
console.log(response.statusText); // e.g. "OK"

Reading the response body

Once you know the request succeeded, you can read the response body. The Response object exposes several methods, but the two you'll use most often are response.json() and response.text().

Use response.json() when the API returns JSON. Use response.text() when you expect plain text or HTML. Both methods read the response body stream and return a Promise, and because the body is a stream, you can only consume it once.

const response = await fetch("https://httpbingo.org/get");

// Read and parse the response body as JSON
const data = await response.json();

console.log(data);

This pattern covers the majority of real-world APIs, including internal services and third-party endpoints, and works the same way in modern Node.js versions.
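
One consequence of the consume-once rule: calling response.json() and then response.text() on the same response will fail. If you genuinely need the body twice (say, raw text for logging plus parsed JSON), one option is to clone the response before reading it. A minimal sketch:

const response = await fetch("https://httpbingo.org/get");

// clone() has to be called before the body is consumed
const copy = response.clone();

// Read the original as parsed JSON and the clone as raw text
const data = await response.json();
const rawText = await copy.text();

console.log(data);
console.log(rawText.length);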

Working with response headers

Headers come back as part of the Response object and are available via response.headers. This is a Headers instance, not a plain JavaScript object, so you interact with it using methods instead of direct property access.

If you want to log all response headers, you can iterate over them like this:

const response = await fetch("https://httpbingo.org/get");

for (const [key, value] of response.headers) {
  console.log(key, value);
}

To read a specific header, such as Content-Type, use get:

const contentType = response.headers.get("content-type");
console.log(contentType);

This is where fetch response headers become useful for things like content negotiation, caching decisions, or debugging API behavior. The same approach works with the native Fetch API in Node.js and with node-fetch in older setups.

Node.js vs browser behavior

One important difference between Node.js and the browser is CORS. In the browser, you can only access headers that the server explicitly exposes. In Node.js, there are no CORS restrictions, so you can inspect all response headers by default.

That doesn't mean you can ignore API rules. Rate limits, authentication requirements, and usage policies still apply. Fetch response headers give you visibility, not permission. Always respect the API you're calling, especially in production systems where misuse can get you blocked very quickly.

Sending POST and other HTTP methods with Fetch in Node.js

Sending JSON with a POST request

When you need to send data to an API, you usually switch from GET to POST. With fetch, that just means adding a method, some headers, and a request body. The /post endpoint on httpbingo.org is great for testing because it simply echoes back whatever you send.

Here's a basic Node.js fetch POST example that sends JSON:

const response = await fetch("https://httpbingo.org/post", {
  method: "POST",
  headers: {
    // Tell the server we are sending JSON
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    name: "John",
    role: "dev"
  })
});

// Always check the status before trusting the response body
if (!response.ok) {
  throw new Error(`HTTP ${response.status}`);
}

// Parse the JSON response
const data = await response.json();
console.log(data);

When sending JSON, the request body must be a string, which is why JSON.stringify() is required. Setting the Content-Type header tells the server how to interpret the payload. This POST pattern works the same way for demos and real production APIs, whether you're talking to an internal service or a third-party endpoint.

Sending form data

If you're sending form data instead of JSON, you can use FormData. In this case, you usually don't need to set the Content-Type header yourself; fetch will handle it, including the correct boundary value.

const formData = new FormData();
formData.append("email", "test@example.com");
formData.append("plan", "pro");

const response = await fetch("https://httpbingo.org/post", {
  method: "POST",
  body: formData
});

// Check for HTTP errors
if (!response.ok) {
  throw new Error(`HTTP ${response.status}`);
}

// Parse the echoed response
const data = await response.json();
console.log(data);

This approach is useful for file uploads, multipart form submissions, or legacy APIs that expect form-encoded data instead of JSON. In modern Node.js versions, FormData is available globally, just like fetch, so no extra imports are needed.

Other HTTP methods

Using PUT, PATCH, or DELETE with fetch works exactly the same way. The only thing that changes is the method value, which is part of what makes the Fetch API easy to use and easy to remember.

await fetch("https://httpbingo.org/delete", {
  method: "DELETE"
});

For PUT and PATCH requests, you'll usually include a request body as well, just like with POST. The same rules apply: stringify JSON when needed, set the appropriate headers, and always check the response status if the result matters.
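
For example, a PUT request that sends JSON looks almost identical to the POST example above; only the method and the endpoint change:

const response = await fetch("https://httpbingo.org/put", {
  method: "PUT",
  headers: {
    "Content-Type": "application/json"
  },
  body: JSON.stringify({ name: "John", role: "senior dev" })
});

if (!response.ok) {
  throw new Error(`HTTP ${response.status}`);
}

// httpbingo echoes the payload back, just like /post does
const data = await response.json();
console.log(data);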

A few practical reminders

  • GET and HEAD requests should not include a request body. Some servers may tolerate it, but it's outside the spec and can lead to confusing, inconsistent behavior across environments.
  • In real applications, never blindly forward user input. Always validate and sanitize data before passing it to fetch. An API might accept bad input, but you're still responsible for what leaves your server.

Error handling, timeouts, and retries for Fetch in Node.js

Fetch does not throw on HTTP errors

This trips people up all the time. In Node.js, fetch only throws on network-level problems: things like DNS failures, dropped connections, or aborted requests. If the server responds with a 404 or a 500, fetch still resolves successfully and hands you a Response object.

That's why, when using fetch in Node.js, you must check response.ok or response.status yourself. If you don't, it's very easy to accidentally treat a failed request as a successful one.

A simple mental model:

  • If response.ok is false, treat it as an error
  • If it's true, go ahead and parse the body

Error handling with async and await

This is the cleanest default for most codebases, and it's how fetch is typically used in real Node.js apps.

try {
  const response = await fetch("https://httpbingo.org/status/500");

  if (!response.ok) {
    // Try to read a small error body for debugging
    // but don't assume it's always JSON
    const bodyText = await response.text().catch(() => "");
    throw new Error(
      `HTTP ${response.status} ${response.statusText} ${bodyText}`
    );
  }

  const data = await response.json();
  console.log(data);
} catch (error) {
  console.error("Request failed:", error);
}

The key detail here is response.text() in the error path. In production, grabbing even a small error message from the server can save you a lot of time when debugging. Just be careful not to log sensitive data by accident, especially when dealing with third-party APIs.

Error handling with .then() and .catch()

If you're chaining Promises instead of using async and await, the rule is exactly the same: you still need to handle HTTP errors by checking response.ok.

fetch("https://httpbingo.org/status/404")
  .then(async (response) => {
    if (!response.ok) {
      // Read a small error body if available
      const bodyText = await response.text().catch(() => "");
      throw new Error(
        `HTTP ${response.status} ${response.statusText} ${bodyText}`
      );
    }

    return response.json();
  })
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.error("Request failed:", error);
  });

This pattern behaves the same way as the async / await version. The main difference is readability. Once things get more complex, promise chains tend to be harder to follow, which is why many teams prefer async and await.

Adding a timeout with AbortController

Node.js does not magically time out requests unless you tell it to. If a request can hang, you should always set a timeout. The most common and standards-based way to do this is with AbortController, which is part of the Fetch API and works natively in modern Node.js.

const controller = new AbortController();
const timeoutMs = 5000;

// Abort the request after 5 seconds
const timeoutId = setTimeout(() => controller.abort(), timeoutMs);

try {
  const response = await fetch("https://httpbingo.org/delay/10", {
    signal: controller.signal
  });

  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }

  const data = await response.json();
  console.log(data);
} catch (error) {
  // AbortController causes fetch to reject,
  // so timeouts land here just like network errors
  console.error("Request failed:", error);
} finally {
  // Always clean up the timer
  clearTimeout(timeoutId);
}

When the timeout fires, controller.abort() cancels the request. Fetch then rejects the Promise, and execution jumps straight to the catch block. This is exactly what you want for services that need to stay responsive and avoid getting stuck on slow or broken upstreams.
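
If you only need a timeout and don't plan to abort the request manually, there's also a shorter variant: AbortSignal.timeout(), which every Node.js version with native fetch already supports. A minimal sketch:

try {
  // AbortSignal.timeout() creates a signal that aborts automatically,
  // so there's no setTimeout / clearTimeout bookkeeping to manage
  const response = await fetch("https://httpbingo.org/delay/10", {
    signal: AbortSignal.timeout(5000)
  });

  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }

  console.log(await response.json());
} catch (error) {
  // Timeouts land here, just like with the AbortController version
  console.error("Request failed:", error);
}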

When retries make sense

Retries are not something you should enable by default for every request. They make sense when failures are likely to be temporary: things like flaky network hiccups, timeouts, or 5xx responses from an overloaded upstream. They usually don't make sense for 4xx errors, because those are often permanent until you change the request.

A practical retry rule set looks like this:

  • Retry on network errors and timeouts
  • Retry on 502, 503, 504, and sometimes other 5xx responses
  • Do not retry on 400, 401, 403, or 404 unless you have a very specific reason
  • Add a short backoff and cap the number of attempts

When you do retry, always respect rate limits. If the API sends a Retry-After header, listen to it. Even without that signal, keep retries small, add delays between attempts, and log every failure so you can spot patterns in monitoring. Retries without logging and metrics are basically silent failure machines. If fetch in Node.js starts failing in production, you want to know when it started, how often it happens, and which upstream service is causing the pain.
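
As a starting point, here's what those rules could look like in code. This is a minimal, hypothetical fetchWithRetry helper (the name, the status list, and the backoff values are just illustrative defaults, and Retry-After handling is left out for brevity):

// Hypothetical helper: retries on network errors and on 502/503/504,
// with a short exponential backoff and a hard cap on attempts
async function fetchWithRetry(url, options = {}, attempts = 3) {
  const retryableStatuses = new Set([502, 503, 504]);

  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      const response = await fetch(url, options);

      // Success, or an error we shouldn't retry: hand it back to the caller
      if (response.ok || !retryableStatuses.has(response.status)) {
        return response;
      }

      // Retryable status on the last attempt: give up and return it
      if (attempt === attempts) {
        return response;
      }
    } catch (error) {
      // Network error or timeout: rethrow once we're out of attempts
      if (attempt === attempts) {
        throw error;
      }
    }

    // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
    const delay = 500 * 2 ** (attempt - 1);
    console.warn(`Attempt ${attempt} failed, retrying in ${delay} ms`);
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}

// Usage: the caller still checks response.ok, just like with plain fetch
const response = await fetchWithRetry("https://httpbingo.org/status/503");
console.log(response.status);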

Using Fetch for web scraping in Node.js (with Cheerio)

Fetching HTML instead of JSON

Fetch isn't only for APIs. You can also use it to download raw HTML, which is the first step in a lot of web scraping workflows. In Node.js, this is straightforward because fetch has no browser restrictions and gives you full access to the response body.

A typical Node.js fetch example for scraping starts by requesting a public page and reading the response as text:

const response = await fetch("https://example.com");

// Read the raw HTML as a string
const html = await response.text();

At this point, html is just a plain string containing the page markup. To actually extract useful data from it, you'll need an HTML parser: this is where tools like Cheerio come in.

Parsing HTML with Cheerio

Cheerio is a lightweight library that gives you a jQuery-like API for working with HTML on the server. It's fast, simple, and pairs very naturally with fetch, especially when you just need to extract data from static pages.

Here's a minimal example that fetches a page and extracts its <title>:

// Install with:
// npm install cheerio
import * as cheerio from 'cheerio';

const response = await fetch("https://example.com");

// Read the HTML response as text
const html = await response.text();

// Load the HTML into Cheerio
const $ = cheerio.load(html);

// Use CSS selectors to extract data
const title = $("title").text();

console.log(title);

The same pattern works for prices, product names, links, tables, or any other element you can target with CSS selectors. This is often the first real scraping use case people run into once they move beyond calling JSON APIs and start dealing with actual web pages.

Things you must not ignore

Just because you can scrape a page doesn't mean you should do it blindly. Always check robots.txt, read the site's terms of service, and respect privacy and usage rules. Fetching publicly available HTML does not remove your responsibility as a developer.

You should also be mindful of request volume. Even simple Node.js fetch usage can overload a site if you send requests too aggressively or without delays.
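
A simple way to stay polite is to add a small pause between sequential requests. A minimal sketch (the URL list and the one-second delay are placeholders, not recommendations for any particular site):

// Hypothetical list of pages to fetch one at a time
const urls = [
  "https://example.com/page-1",
  "https://example.com/page-2"
];

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

for (const url of urls) {
  const response = await fetch(url);
  const html = await response.text();
  console.log(url, html.length);

  // Pause between requests instead of hammering the site
  await sleep(1000);
}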

If you want to go deeper, this guide on scraping with Node.js and Cheerio is a good next step: Web Scraping with node-fetch.

When you need headless browsers instead of plain Fetch

Plain fetch-based scraping in Node.js works well when a page returns real HTML straight from the server. But some sites do almost everything on the client side. The initial HTML is basically empty, and the actual content only appears after JavaScript runs in the browser.

In those cases, Node.js fetch plus Cheerio isn't enough. Fetch will download the page, but there's nothing useful to parse because the data gets injected later by client-side JavaScript. That's where headless browsers come in.

Tools like Puppeteer run a real browser in the background. They load the page, execute scripts, wait for content to render, and then give you the final HTML. This makes them a better fit for modern web apps, dashboards, and sites built with heavy frontend frameworks.

The tradeoff is complexity. Fetch-based scraping is lightweight, fast, and easy to reason about. Headless browsers are heavier, slower, and require more infrastructure to run reliably at scale. For many teams, especially in production, a managed browser-based solution or API is often easier than maintaining your own headless browser fleet. You still get fully rendered HTML, without worrying about browser crashes, scaling issues, or constant updates.

If you want to dig deeper into the browser approach, this Puppeteer scraping guide for Node.js is a solid reference: Puppeteer Web Scraping Tutorial in Nodejs.

Proxies, anti-bot systems, and scaling Fetch-based scrapers

Why simple Fetch scripts break once you scale

A small fetch-based script in Node.js can look perfect in local testing and still fall apart the moment you run it at scale. As soon as volume goes up, you start running into defenses and operational headaches.

Common reasons things break:

  • IP blocks after repeated requests from the same address
  • Rate limits that suddenly start returning 429 responses
  • CAPTCHAs that block automated traffic entirely
  • Geo restrictions where content changes or disappears depending on location
  • Fragile infrastructure, like retries that multiply traffic or jobs that hang forever

The Fetch API in Node.js is great, but at scale it becomes less about the code itself and more about how your requests look to the target site, and how reliably you can run them over time.

What proxies are, in plain terms

An HTTP proxy is basically a middleman for your requests. Instead of your scraper calling the target site directly, the proxy sends the request on your behalf. This helps with things like geo routing, spreading traffic across multiple IPs, and reducing the chance of getting blocked immediately.

Proxies are not a free pass, though. You still need to follow local laws, respect the target site's terms of service, and comply with data protection rules. You also need to be careful about what data you collect and how you store it, especially if user data is involved.

Routing Fetch through a proxy in Node.js (native fetch only)

Native fetch in Node.js (v18+) is powered by Undici. That's awesome for performance, but it also means proxy support works a little differently than older Node HTTP clients.

Here's the key thing:

  • In Node 18–23, native fetch does not support an { agent } option, and it doesn't automatically honor HTTP_PROXY / HTTPS_PROXY env vars.
  • In Node 24+, proxy env vars can be respected, but only if you enable it with NODE_USE_ENV_PROXY=1 (or --use-env-proxy).

So if you want to use a proxy with native fetch, you generally do it through an Undici dispatcher (usually ProxyAgent or one of the env-proxy helpers).

Option 1: Set a global proxy for all fetch() calls

This is the "flip the switch once" approach. After this, every fetch() goes through the proxy unless you override the dispatcher.

import { ProxyAgent, setGlobalDispatcher } from "undici";

setGlobalDispatcher(new ProxyAgent("http://user:pass@proxy:8080"));

const res = await fetch("https://example.com");
const html = await res.text();
console.log(html);

This works because you're configuring Undici's global dispatch layer under the hood.

Option 2: Proxy only for a specific request

If you don't want to affect everything globally, you can pass a dispatcher per request:

import { ProxyAgent } from "undici";

const dispatcher = new ProxyAgent("http://user:pass@proxy:8080");

const res = await fetch("https://example.com", { dispatcher });
const html = await res.text();
console.log(html);

Option 3: Env var proxy in Node 24+ (when enabled)

If you're on Node 24+, you can also configure proxies via environment variables. Just remember you need to enable env-proxy mode:

NODE_USE_ENV_PROXY=1 \
HTTP_PROXY=http://user:pass@proxy:8080 \
HTTPS_PROXY=http://user:pass@proxy:8080 \
NO_PROXY=localhost,127.0.0.1 \
node app.mjs

That's the native-fetch story in 2026: if you want proxies, think Undici dispatcher, not { agent }.

If you want a real, working setup with concrete examples, this guide walks through Node.js fetch proxy configuration in detail: How to use a proxy with node-fetch.

What teams usually do in production

Most real scraping stacks are layered, because no single approach works everywhere:

  • Fetch in Node.js for the fast path when pages are simple and stable
  • Proxy pools to spread requests and handle geo-specific content
  • Backoff and retry rules that respect 429 responses and Retry-After headers
  • Occasional fallbacks when fetch hits edge cases or you need different behavior

You'll also see some teams keep Axios around as a fallback client. It's not magical, but it can be useful when certain features or patterns are easier to express there. If you go that route, this guide on setting up an Axios proxy in Node.js is a practical reference: How to set up Axios proxy.

The main point is to treat scaling as an engineering problem, not just a bigger for-loop. When anti-bot systems show up, your scraper needs to behave like a responsible client, not a hammer.

Upgrading your Node.js Fetch scraper with ScrapingBee

At some point, plain fetch starts doing too much heavy lifting. You still want to write simple Node.js code, but you don't want to manage proxies, headless browsers, retries, and anti-bot systems yourself. That's where ScrapingBee fits in cleanly.

The idea is simple: you keep using the Fetch API in Node.js, but instead of calling the target website directly, you call the ScrapingBee API and pass the target URL as a parameter. ScrapingBee takes care of the messy parts (rotating proxies, JavaScript rendering, retries, and bot defenses) while your code stays focused on parsing and business logic.

Conceptually, nothing changes in how you think about requests. It's still a fetch call, still async, still predictable. You just move the operational complexity out of your app and into a service that's built to handle it.

A basic Fetch call to ScrapingBee

Here's a minimal example using the Node.js Fetch API to fetch a fully rendered page through ScrapingBee:

const apiKey = process.env.SCRAPINGBEE_API_KEY;
if (!apiKey) {
  throw new Error("Missing SCRAPINGBEE_API_KEY env var");
}

const targetUrl = "https://example.com";

const params = new URLSearchParams({
  api_key: apiKey,
  url: targetUrl,
  // ScrapingBee renders JS by default
  // Set it to "false" to save API credits
  // (unless the scraped site is JS-heavy)
  // render_js: "false",
});

const response = await fetch(`https://app.scrapingbee.com/api/v1?${params.toString()}`);

if (!response.ok) {
  const body = await response.text().catch(() => "");
  throw new Error(`ScrapingBee HTTP ${response.status} ${response.statusText}${body ? ` — ${body}` : ""}`);
}

const html = await response.text();
console.log(html);

From here, you can pass the HTML straight into Cheerio, extract data, or process it however you like. ScrapingBee handles the infrastructure layer, so your Fetch API usage stays boring. And boring is honestly good. Boring code is easy to reason about, easy to debug, and way easier to run in production.

What ScrapingBee handles for you

Instead of building and maintaining all of this yourself, ScrapingBee takes care of the heavy lifting:

  • rotating proxy pools and IP management
  • JavaScript rendering for dynamic pages
  • retries and failure handling
  • protection against common anti-bot systems

When you need more than just raw HTML, ScrapingBee also lets you layer in additional features without changing your core Node.js code.

Best practices when using an API-based scraper

Even with managed tooling, responsibility doesn't magically disappear. The basics still matter:

  • only fetch the data you actually need
  • avoid collecting sensitive or personal information
  • respect robots.txt, site terms, and local laws
  • log errors and monitor usage so failures don't go unnoticed

ScrapingBee removes a lot of operational pain, but good scraping hygiene is still on you.

Automating scraping workflows beyond raw code

Once your fetch-based scripts are stable, teams usually want to automate around them. This is where you stop thinking purely in terms of code and start thinking in workflows.

ScrapingBee integrates with no-code and automation tools, which lets non-developers or ops teams trigger scrapes and route data without touching your Node.js code. For example:

  • trigger a scrape on a schedule and send results to a spreadsheet
  • scrape a page and forward parsed data to a webhook or internal service

This works well alongside Node.js fetch scripts. Developers build the core logic, and automation tools handle orchestration, notifications, and delivery. That separation keeps systems easier to reason about and much easier to maintain as things scale.

Conclusion: From simple Fetch calls to production scraping

You started with the basics of how to use fetch in Node.js and built up from there: handling responses, headers, POST requests, and errors. Those same fundamentals carry straight into scraping with Cheerio, where fetching HTML is just another Fetch call.

The Node.js Fetch API is a great fit for learning, small projects, and lightweight scraping. It's simple, predictable, and easy to reason about. Once traffic grows or sites get stricter, though, infrastructure becomes the real problem, not your JavaScript. That's when a tool like ScrapingBee starts to make sense. You keep using fetch, but offload proxies, JavaScript rendering, retries, and anti-bot handling. Start simple, log and test early, and upgrade only when DIY scraping stops being worth the effort.

Thank you for staying with us, and until next time!

Ready to level up your Node.js Fetch scraping?

If you're already comfortable with the Node.js Fetch API, you're most of the way there. You understand how requests work, how to parse responses, and how to handle errors. At that point, the hard part usually isn't Fetch itself: it's dealing with proxies, browsers, retries, and everything around the request.

That's where ScrapingBee comes in. You keep using the same Fetch API patterns you already know, but spend less time fighting infrastructure and more time working on actual product features or analysis. If it sounds useful, try it on your next scraping task. Start small, stay compliant, and experiment with a simple test project first. You can create an account today, get 1,000 scraping credits as a gift, and see how it fits into your workflow.

Node.js Fetch API FAQs

What Node.js version do I need to use the Fetch API without installing node-fetch?

You need Node.js 18 or newer. From Node.js 18 onward, fetch is available globally by default, so there's no need to install node-fetch or any other HTTP client.

How do I handle HTTP errors and response.ok when using Fetch in Node.js?

Fetch only throws on network-level errors. For 4xx and 5xx responses, it still resolves normally and returns a Response object. That's why you must explicitly check response.ok or response.status and throw your own error when the response is not OK.

What is the difference between the Fetch API and Axios for HTTP requests in Node.js?

The Fetch API is built into Node.js and follows the standard Request and Response model used by browsers. Axios is a third-party library that adds conveniences like automatic JSON handling and some extra features, but it introduces an additional dependency and its own API surface.

Can I use Fetch in Node.js for web scraping, and what rules should I follow?

Yes. You can fetch HTML and parse it with tools like Cheerio. You should still follow robots.txt, respect site terms of service, comply with local laws, and avoid scraping personal or restricted data. Technical access does not remove legal or ethical responsibility.

How do I send JSON data in a POST request with Fetch in Node.js?

Set method to POST, set the Content-Type header to application/json, and set body to JSON.stringify(yourObject). Then read the response using response.json() or response.text(), depending on what the API returns.

How can I set headers and read response headers using Fetch in Node.js?

Send request headers via the headers option in fetch. Read response headers using response.headers.get("content-type") or by looping over them with for (const [key, value] of response.headers).

When should I switch from plain Fetch-based scrapers to a managed scraping API?

Switch when scaling pain shows up: IP blocks, CAPTCHAs, geo restrictions, JavaScript-heavy pages, or too much time spent maintaining proxies and headless browsers. At that point, a managed solution like ScrapingBee can save time and reduce operational headaches.

Ilya Krukowski

Ilya is an IT tutor and author, web developer, and ex-Microsoft/Cisco specialist. His primary programming languages are Ruby, JavaScript, Python, and Elixir. He enjoys coding, teaching people and learning new things. In his free time he writes educational posts, participates in OpenSource projects, tweets, goes in for sports and plays music.