If you're trying to set up a Node fetch proxy for scraping or high-volume crawling, you'll quickly notice that neither native fetch nor node-fetch has built-in proxy configuration (like a proxy option or automatic HTTP(S)_PROXY support). With node-fetch you need to wire an Agent (e.g. HttpsProxyAgent); with native fetch you need an Undici dispatcher or, on Node 24+, NODE_USE_ENV_PROXY.
Node-fetch was originally built to bring the browser's fetch API into Node. Even though modern Node now ships with its own fetch, the idea stays the same: give devs a simple, flexible way to fire off async HTTP requests on the server.
Since many scraping workflows rely on a Node fetch proxy to avoid rate limits and blocks, this guide walks you through the cleanest ways to make node-fetch work with proxies: no weird hacks and no duct tape.

Quick answer (TL;DR)
If you're using node-fetch, the easiest node-fetch proxy setup is to create an HttpsProxyAgent and pass it inside { agent }. In Node 18–23, native fetch doesn't support proxies out of the box: it ignores HTTP(S)_PROXY and doesn't accept an agent option. In Node 24+, you can enable proxy support via NODE_USE_ENV_PROXY=1 + HTTP(S)_PROXY / NO_PROXY.
Otherwise you need an Undici dispatcher (ProxyAgent / EnvHttpProxyAgent), or you can just call ScrapingBee's scraping API with fetch and pass the target URL; the fetch proxy layer is handled for you. https-proxy-agent only works with node-fetch / http(s).request, not native fetch.
// make sure to install:
// npm i https-proxy-agent
import { HttpsProxyAgent } from 'https-proxy-agent';
// npm i node-fetch
// We'll also cover native fetch
// later in this post
import fetch from 'node-fetch';
const proxy = 'http://username:password@your-proxy:8000';
const agent = new HttpsProxyAgent(proxy);
const res = await fetch('https://example.com', { agent });
const html = await res.text();
console.log(html);
Or with ScrapingBee:
const API_KEY = 'YOUR_SCRAPINGBEE_KEY';
const res = await fetch(
`https://app.scrapingbee.com/api/v1/?api_key=${API_KEY}&url=${encodeURIComponent('https://example.com')}`
);
const html = await res.text();
console.log(html);
Why node-fetch? What it is and what it isn't
node-fetch used to be the go-to way to work with the Fetch API in Node, back before Node shipped with a built-in fetch. It mirrors the browser version pretty closely: same request/response shapes, same streaming behavior, same mental model. That's why a lot of older tutorials, tools, and scraping workflows still use it. Just remember that node-fetch v3 is ESM-only, so if your project is still on CommonJS, you'll either stick to v2 or do a bit of extra setup.
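If you do need node-fetch v3 from a CommonJS project, the usual "extra setup" is a dynamic import(). A minimal sketch (the filename and wrapper are just illustrative):
// filename: example-cjs-v3.js (illustrative)
// npm i node-fetch
// Dynamic import() works inside CommonJS and loads the ESM-only node-fetch v3 lazily.
const fetch = (...args) => import('node-fetch').then(({ default: f }) => f(...args));
(async () => {
const res = await fetch('https://httpbin.org/ip');
console.log(await res.json());
})();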
Now that Node has its own fetch, you could drop node-fetch entirely. But here's the catch: the built-in fetch behaves like the browser version, and browsers don't let you configure proxies through fetch. So there's no native NodeJS fetch proxy support (unless you're on Node 24+, see below) and no automatic node-fetch proxy support either. Passing a proxy URL won't do anything as fetch simply ignores it.
Node's built-in fetch is powered by Undici. It doesn't expose an agent option, but you can configure proxying by setting a global dispatcher (e.g. ProxyAgent / EnvHttpProxyAgent) or by turning on NODE_USE_ENV_PROXY in Node 24+. https-proxy-agent is designed for Node's http(s).request API and libraries that accept an agent option (like node-fetch). Native fetch doesn't use the Agent API (it uses Undici dispatchers instead), so you need ProxyAgent / EnvHttpProxyAgent there, not https-proxy-agent.
To actually run a real Node fetch proxy setup, you need to wire in an Agent (like HttpsProxyAgent) that tunnels your request through the proxy. Or, if you don't want to deal with all that, you can use an external API that handles the proxy logic for you. Either way, both node-fetch and the native fetch leave proxy handling completely out of scope, which is why getting a proper Node fetch proxy flow working always takes a bit more effort than beginners expect.
Solution: Use https-proxy-agent with node-fetch
Since node-fetch doesn't include any built-in proxy support, the standard way to create a Node fetch proxy setup is to plug in an HttpsProxyAgent and pass it through the { agent } option. Despite the name, HttpsProxyAgent works for both HTTP and HTTPS targets, and it supports authenticated proxies (http://user:pass@host:port).
A quick note of sanity: avoid free proxies unless you're cool with random strangers sniffing your traffic. They're slow, flaky, and usually unsafe. If you want predictable behavior, stick to your own proxy or a managed service. And keep in mind that a proxy only changes the route your request takes: if you run a local proxy, the outgoing IP is still your machine.
ESM (node-fetch v3)
// filename: example-esm.mjs
// Requires: npm i node-fetch https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const proxyUrl = 'http://user:pass@host:port';
const targetUrl = 'https://httpbin.org/ip';
const agent = new HttpsProxyAgent(proxyUrl);
try {
const res = await fetch(targetUrl, { agent }); // node fetch use proxy
const data = await res.json();
console.log('Response:', data);
} catch (err) {
console.error('Fetch error:', err);
}
This ESM example shows the basic node fetch proxy flow:
- HttpsProxyAgent is created from your proxy URL and passed into { agent }.
- node-fetch sends the request through that agent, which tunnels it to the proxy.
- The target URL (https://httpbin.org/ip) simply returns the IP your proxy exposes.
- The try/catch is just to make the example safer and show proper error handling.
Everything else is standard fetch: you await the response, parse JSON, and log it.
CommonJS (node-fetch v2)
// filename: example-cjs.js
// Requires: npm i node-fetch@2 https-proxy-agent
const fetch = require('node-fetch');
const { HttpsProxyAgent } = require('https-proxy-agent');
const proxyUrl = 'http://user:pass@host:port';
const targetUrl = 'https://httpbin.org/ip';
const agent = new HttpsProxyAgent(proxyUrl);
(async () => {
try {
const res = await fetch(targetUrl, { agent }); // node fetch http proxy
const data = await res.json();
console.log('Response:', data);
} catch (err) {
console.error('Fetch error:', err);
}
})();
This CommonJS example shows how a Node fetch proxy setup works in node-fetch v2:
- require('node-fetch') pulls in the v2 API, which still supports CommonJS.
- HttpsProxyAgent is built from your proxy URL, including auth if needed.
- The agent is passed into { agent }, which forces all traffic through the proxy.
- The IIFE wrapper is just there so you can use await in a CommonJS file.
Running a custom proxy with Docker
Public proxies are slow, flaky, and often unsafe. Some even break HTTPS or snoop on your data. If you just want a stable test environment, you can spin up a simple local proxy with Docker:
docker run -d -p 8888:8888 vimagick/tinyproxy
After that, set your proxy URL to http://localhost:8888. It will work exactly like a normal Node fetch proxy setup, but keep in mind that the outgoing IP won't change — the proxy is running on your own machine, so the traffic still comes from your local network.
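To sanity-check the container, you can point the earlier HttpsProxyAgent setup at it; something like this sketch should print your own public IP back from httpbin, since the proxy runs locally:
// filename: check-local-proxy.mjs (illustrative)
// npm i node-fetch https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const agent = new HttpsProxyAgent('http://localhost:8888'); // the tinyproxy container
const res = await fetch('https://httpbin.org/ip', { agent });
// Expect your own public IP here: the proxy runs on your machine.
console.log(await res.json());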
Example with environment variables
Keeping proxy credentials and target URLs out of your source code is always the safer move. In this setup, PROXY_URL and TARGET_URL come from process.env, and you simply plug PROXY_URL into an HttpsProxyAgent before calling fetch.
Install the right packages for your project:
# ESM (Node 18+ or "type":"module")
npm i node-fetch https-proxy-agent
# CommonJS
npm i node-fetch@2 https-proxy-agent
ESM (.mjs, node-fetch v3)
// file: example-esm.mjs
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const { PROXY_URL, TARGET_URL = 'https://httpbin.org/ip' } = process.env;
if (!PROXY_URL) throw new Error('Set PROXY_URL, e.g. http://user:pass@host:port');
const agent = new HttpsProxyAgent(PROXY_URL);
const res = await fetch(TARGET_URL, { agent }); // node fetch agent
console.log('Status:', res.status);
console.log('Body:', await res.text());
This example loads PROXY_URL from the environment, creates an HttpsProxyAgent from it, and passes that agent into fetch so all traffic goes through your proxy. TARGET_URL defaults to https://httpbin.org/ip, which is handy for checking what IP the proxy exposes.
CommonJS (node-fetch v2)
// file: example-cjs.js
const fetch = require('node-fetch'); // v2
const { HttpsProxyAgent } = require('https-proxy-agent');
const { PROXY_URL, TARGET_URL = 'https://httpbin.org/ip' } = process.env;
if (!PROXY_URL) throw new Error('Set PROXY_URL, e.g. http://user:pass@host:port');
const agent = new HttpsProxyAgent(PROXY_URL);
(async () => {
const res = await fetch(TARGET_URL, { agent }); // node.js fetch proxy
console.log('Status:', res.status);
console.log('Body:', await res.text());
})();
This CommonJS version works the same way as the ESM one: it reads PROXY_URL from the environment, builds an HttpsProxyAgent, and passes it into fetch so the request is routed through your proxy. The async IIFE is just there to let you use await in a CommonJS file.
Run it
PROXY_URL="http://user:pass@host:port" TARGET_URL="https://httpbin.org/ip" node example-esm.mjs
# or
PROXY_URL="http://user:pass@host:port" TARGET_URL="https://httpbin.org/ip" node example-cjs.js
Use a .env file or CI secrets for real projects — never commit proxy credentials.
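If you go the .env route, one common setup is the dotenv package (an assumption on my part; any config loader works). A minimal sketch:
// filename: example-dotenv.mjs (illustrative)
// npm i dotenv node-fetch https-proxy-agent
// .env (never commit it):
// PROXY_URL=http://user:pass@host:port
import 'dotenv/config'; // loads .env into process.env before the rest runs
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const agent = new HttpsProxyAgent(process.env.PROXY_URL);
const res = await fetch(process.env.TARGET_URL ?? 'https://httpbin.org/ip', { agent });
console.log(await res.text());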
If you're using ScrapingBee, you don't need a proxy agent at all. You simply call the API with your target URL, and ScrapingBee takes care of proxy rotation, JavaScript rendering, and anti-bot defenses for you.
ScrapingBee with fetch (ESM, node-fetch v3)
// file: scrapingbee-esm.mjs
import fetch from 'node-fetch';
const { SCRAPINGBEE_API_KEY, TARGET_URL = 'https://httpbin.org/ip' } = process.env;
if (!SCRAPINGBEE_API_KEY) throw new Error('Set SCRAPINGBEE_API_KEY first');
const url =
`https://app.scrapingbee.com/api/v1/?api_key=${SCRAPINGBEE_API_KEY}` +
`&url=${encodeURIComponent(TARGET_URL)}`;
const res = await fetch(url);
console.log('Status:', res.status);
console.log('Body:', await res.text());
This ESM example uses node-fetch v3 to call the ScrapingBee API. Since ScrapingBee handles all proxying internally, you don't need an agent: just build the API URL with your key and target, then call fetch normally.
Handling TLS, certificates, and common errors
When you run a Node.js fetch proxy setup, TLS issues are one of the most common problems. A typical error you'll see is:
UNABLE_TO_VERIFY_LEAF_SIGNATURE
This usually means one of two things:
- your proxy is intercepting and re-signing HTTPS traffic (corporate MITM), or
- the proxy or the target site is using a self-signed or otherwise untrusted certificate.
Whatever you do, avoid setting NODE_TLS_REJECT_UNAUTHORIZED=0. That disables TLS checks for the entire process. It's okay for a quick local test, but absolutely not for production.
If you need to trust a corporate or custom CA, load it explicitly instead of turning verification off.
Node.js with a custom CA certificate (ESM)
// esm: example.mjs
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
import fs from 'node:fs';
const proxyUrl = process.env.PROXY_URL; // http://user:pass@host:port
// Load your corporate or custom CA
const corpCA = fs.readFileSync('./corp-root-ca.pem');
// ca on HttpsProxyAgent affects the TLS between Node and the proxy (if the proxy is HTTPS).
// For plain http://proxy + HTTPS upstream, this won't fix cert issues with the target; for that you need global Node CA handling (e.g. NODE_EXTRA_CA_CERTS) or a separate https agent.
const agent = new HttpsProxyAgent(proxyUrl, {
ca: corpCA,
rejectUnauthorized: true
});
const res = await fetch('https://example.com', { agent });
console.log(await res.text());
If you prefer not to modify code, you can trust additional CAs via an environment variable:
export NODE_EXTRA_CA_CERTS=./corp-root-ca.pem
node example.mjs
This keeps proper TLS verification in place while still allowing corporate proxies, self-signed certs, or internal CA setups to work correctly with your Node fetch proxy flow.
Using native fetch in Node
Node.js v18+ ships with a built-in fetch that's powered by Undici under the hood. Out of the box it behaves like browser fetch: in Node 18–23 there's no { agent } option and it doesn't automatically honor HTTP_PROXY / HTTPS_PROXY the way older HTTP client libraries do. Note that in Node 24+ HTTP(S)_PROXY variables are respected when NODE_USE_ENV_PROXY=1 (or --use-env-proxy) is enabled.
If you really want to stick with native fetch, the way to add proxy support is via an Undici dispatcher, typically ProxyAgent or EnvHttpProxyAgent:
import { ProxyAgent, setGlobalDispatcher } from 'undici';
setGlobalDispatcher(new ProxyAgent('http://user:pass@proxy:8080'));
const res = await fetch('https://example.com'); // goes through the proxy
const html = await res.text();
console.log(html);
This works, but it's a more advanced path: you're configuring Undici's global dispatch layer instead of just passing an agent option into fetch().
Here's another approach:
import { ProxyAgent } from 'undici';
const dispatcher = new ProxyAgent('http://user:pass@proxy:8080');
const res = await fetch('https://example.com', { dispatcher });
For Node 24+ you can also configure proxy as environment variables:
NODE_USE_ENV_PROXY=1 \
HTTP_PROXY=http://user:pass@proxy:8080 \
HTTPS_PROXY=http://user:pass@proxy:8080 \
NO_PROXY=localhost,127.0.0.1 \
node app.mjs
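If you're on Node 18–23 but still want env-var-driven proxying, Undici's EnvHttpProxyAgent (available in recent undici releases) reads HTTP_PROXY / HTTPS_PROXY / NO_PROXY itself. A minimal sketch, assuming you install undici from npm:
// npm i undici
import { EnvHttpProxyAgent, setGlobalDispatcher } from 'undici';
// EnvHttpProxyAgent picks up HTTP_PROXY / HTTPS_PROXY / NO_PROXY from the environment.
setGlobalDispatcher(new EnvHttpProxyAgent());
const res = await fetch('https://example.com'); // routed according to your env vars
console.log(res.status);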
For most scraping / crawling setups it's still simpler to either:
- use node-fetch with an HttpsProxyAgent and pass { agent }, or
- skip proxy plumbing entirely and call ScrapingBee's API, which handles rotation, geolocation, JS rendering, and anti-bot defenses for you.
Proxy with node-fetch + HttpsProxyAgent
// example.mjs
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
// Proxy details (supports auth via user:pass)
const proxyUrl = 'http://user:pass@host:port';
const agent = new HttpsProxyAgent(proxyUrl);
const targetUrl = 'https://httpbin.org/ip?json';
try {
const response = await fetch(targetUrl, { agent }); // proxy in node fetch via agent
const data = await response.json();
console.log('Response:', data);
} catch (err) {
console.error('Fetch error:', err);
}
This example shows the standard node fetch proxy setup: you create an HttpsProxyAgent from your proxy URL (auth supported), pass it into { agent }, and node-fetch sends the request through that proxy. The target URL here (httpbin.org/ip) is useful because it returns the IP your proxy exposes, making it easy to confirm that the request actually went through the proxy.
Alternative: use ScrapingBee (no proxy configuration required)
// One-line fetch with ScrapingBee (automatic proxying)
const response = await fetch(
`https://app.scrapingbee.com/api/v1/?api_key=YOUR_API_KEY&url=${encodeURIComponent('https://httpbin.org/ip')}`
);
const data = await response.json();
console.log('Fetched IP:', data);
This example shows the simplest way to fetch through a proxy without configuring any agents. ScrapingBee handles all proxying internally (rotation, geolocation, anti-bot logic) so you just pass your target URL to the API endpoint and treat it like a normal fetch call.
Rotate proxies and manage sessions
When you use node fetch with proxy servers, rotation is what keeps you from getting banned all the time. Changing IPs spreads out your traffic, and keeping some requests on the same IP (sticky sessions) makes flows like login, cart, or pagination behave more like a real user.
A common pattern in fetch-with-proxy Node.js setups is: random proxy per request for broad coverage, retries with backoff, and a concurrency cap (for example, 2–4 parallel requests per site) so you don't smash rate limits.
Pattern A: Random proxy per request (ESM, node-fetch v3)
This pattern:
- picks a random proxy and builds a fresh HttpsProxyAgent per request
- retries failed requests with jittered exponential backoff
- limits concurrency to 3 in-flight requests as a simple safety cap
// npm i node-fetch https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const PROXIES = [
'http://user:pass@dc-proxy-1.example:8001',
'http://user:pass@dc-proxy-2.example:8001',
// res-gw.example stands for a residential gateway domain — use your provider's actual proxy host
'http://user:pass@res-gw.example:7777'
];
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));
async function withRetries(fn, { retries = 3, baseMs = 300 } = {}) {
let attempt = 0;
while (true) {
try {
return await fn();
} catch (err) {
if (attempt++ >= retries) throw err;
// Exponential backoff + small jitter
const delay = baseMs * 2 ** (attempt - 1) + Math.random() * 100;
await sleep(delay);
}
}
}
function randomProxyAgent() {
const proxy = PROXIES[Math.floor(Math.random() * PROXIES.length)];
return new HttpsProxyAgent(proxy);
}
export async function fetchViaRandomProxy(url, opts = {}) {
const agent = randomProxyAgent();
return withRetries(async () => {
const res = await fetch(url, { agent, ...opts });
if (!res.ok) throw new Error(`HTTP ${res.status}`);
return res.text();
});
}
// Example (limit concurrency to 3 in-flight requests)
async function main() {
const targets = [
'https://httpbin.org/ip',
'https://example.com',
'https://httpbin.org/headers'
// ...
];
const pool = new Set();
const MAX = 3;
for (const t of targets) {
const job = (async () => {
try {
const body = await fetchViaRandomProxy(t);
console.log('[OK]', t, '->', body.slice(0, 80), '...');
} catch (e) {
console.error('[FAIL]', t, e.message);
}
})();
pool.add(job);
job.finally(() => pool.delete(job));
if (pool.size >= MAX) {
await Promise.race(pool);
}
}
await Promise.allSettled(pool);
}
main().catch(console.error);
What this is doing:
- PROXIES holds your proxy pool; randomProxyAgent() picks one per request and builds an HttpsProxyAgent so the node fetch proxy routing goes through that endpoint.
- withRetries() wraps any async task with retry + exponential backoff + jitter, which is ideal for flaky proxies.
- fetchViaRandomProxy() is your main helper: it picks a proxy, calls fetch, throws on non-2xx, and returns the response body.
- main() is a tiny concurrency-limited worker pool: at most 3 requests are in flight at a time, which is a safe baseline for not nuking a target with too many parallel calls.
Pattern B: Sticky sessions (reuse the same proxy/agent)
Sometimes you don't want a new IP for every request. Login flows, carts, pagination, and multi-step user journeys often break if the IP keeps changing. That's where sticky sessions come in.
This pattern:
- reuses the same proxy/agent for a given session key until a TTL expires
- lets the proxy provider rotate IPs behind a residential gateway if they support it
- keeps separate sessions per site or workflow so cookies and reputation don't mix
- (optional) can be combined with withRetries
// npm i node-fetch https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const SESSION_TTL_MS = 10 * 60 * 1000; // 10 minutes
const STICKY_POOL = [
'http://user:pass@res-gw.example:7777',
'http://user:pass@res-gw.example:7777' // same gateway; provider may rotate behind it
];
const sessions = new Map(); // key -> { agent, expiresAt }
function getStickyAgent(key = 'default') {
const now = Date.now();
const existing = sessions.get(key);
if (existing && existing.expiresAt > now) {
return existing.agent;
}
const proxy = STICKY_POOL[sessions.size % STICKY_POOL.length];
// Some providers let you encode session IDs in the username/user params,
// e.g. user-<youruser>-session-<uuid>.
const agent = new HttpsProxyAgent(proxy);
sessions.set(key, { agent, expiresAt: now + SESSION_TTL_MS });
return agent;
}
async function fetchSticky(url, key = 'default', opts = {}) {
const agent = getStickyAgent(key);
const res = await fetch(url, { agent, ...opts });
if (!res.ok) throw new Error(`HTTP ${res.status}`);
return res.text();
}
// Example: keep the same session across a small flow
async function runFlow() {
const SESSION_KEY = 'site-A-userflow-1';
const page1 = await fetchSticky('https://httpbin.org/ip', SESSION_KEY);
const page2 = await fetchSticky('https://httpbin.org/headers', SESSION_KEY);
console.log('Page 1:', page1);
console.log('Page 2:', page2);
}
runFlow().catch(console.error);
What this is doing:
- sessions acts as an in-memory session store. Each key gets its own long-lived proxy agent.
- getStickyAgent() reuses the same agent until the TTL expires, then rotates to the next proxy.
- Many residential providers automatically keep IPs stable behind a gateway; others let you embed session IDs in the username so the IP stays "sticky" on their side.
- fetchSticky() is your drop-in replacement for normal fetch when you need a stable exit IP for a whole workflow.
- runFlow() shows how multiple pages of the same flow share one proxy session.
This setup mimics browser-like behavior and prevents workflow breakage that happens when your IP jumps mid-session.
Basic rotation snippet
Here's a smaller, copy-paste-ready node fetch proxy example. It loops through a proxy list, creates a fresh HttpsProxyAgent for each request, and retries with exponential backoff + jitter. Real fetch-proxy Node.js setups usually add concurrency limits, per-site rules, and sometimes sticky sessions (reusing the same proxy URL or a provider "session" parameter across multiple calls).
// filename: rotate-proxies.mjs
// npm i node-fetch https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const TARGET_URL = 'https://httpbin.org/ip';
const PROXIES = [
'http://user:pass@us-proxy.example.com:8000',
'http://user:pass@de-proxy.example.com:8000',
'http://user:pass@gb-proxy.example.com:8000'
];
const sleep = (ms) =>
new Promise((r) => setTimeout(r, ms + Math.floor(Math.random() * 250)));
async function fetchWithRetries(url, agent, { attempts = 3, baseDelay = 300 } = {}) {
let lastErr;
for (let i = 0; i < attempts; i++) {
try {
const res = await fetch(url, { agent });
if (!res.ok) throw new Error(`HTTP ${res.status}`);
return await res.text();
} catch (e) {
lastErr = e;
const delay = Math.min(2000, baseDelay * 2 ** i) + Math.floor(Math.random() * 200);
await sleep(delay);
}
}
throw lastErr;
}
async function main() {
for (const proxyUrl of PROXIES) {
const agent = new HttpsProxyAgent(proxyUrl);
try {
const body = await fetchWithRetries(TARGET_URL, agent, {
attempts: 3,
baseDelay: 300
});
console.log(`[OK] via ${proxyUrl} ->`, body.trim());
} catch (e) {
console.warn(`[FAIL] via ${proxyUrl}:`, e.message);
}
await sleep(300);
}
}
main().catch(console.error);
What this snippet does:
- Builds a new HttpsProxyAgent for each proxy to force fetch to exit through that specific IP.
- Uses exponential backoff with jitter (fetchWithRetries) so flaky proxies don't instantly break your workflow.
- Logs which proxy succeeded or failed, making it easy to evaluate your pool.
- Adds a short delay between requests to avoid hammering the target.
In a real system you'd also track success rates and latency per proxy, then automatically disable the ones that keep timing out or getting blocked.
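As a rough sketch of that idea (the thresholds and field names here are purely illustrative, not from any library):
// Minimal in-memory health tracking per proxy (illustrative cut-offs).
const stats = new Map(); // proxyUrl -> { ok, fail, totalMs }
function record(proxyUrl, ok, elapsedMs) {
const s = stats.get(proxyUrl) ?? { ok: 0, fail: 0, totalMs: 0 };
if (ok) s.ok++; else s.fail++;
s.totalMs += elapsedMs;
stats.set(proxyUrl, s);
}
function isHealthy(proxyUrl) {
const s = stats.get(proxyUrl);
if (!s || s.ok + s.fail < 5) return true; // not enough data yet
const successRate = s.ok / (s.ok + s.fail);
const avgLatency = s.totalMs / (s.ok + s.fail);
return successRate >= 0.7 && avgLatency < 5000; // example thresholds
}
// Usage idea: time each request, call record(proxyUrl, ok, elapsedMs) inside your
// retry wrapper, and filter PROXIES with isHealthy() before picking one.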
Things you should know about proxies
Now that you know how to plug a Node fetch proxy into your requests, there are a few practical realities worth keeping in mind. Free proxies look tempting, but they're slow, unreliable, and can see everything you send through them. Most of them are already sitting on public proxy lists, which means websites can block them instantly. They're fine for quick experiments or demos — but not for anything production-grade.
Paid proxies don't magically solve everything either. Datacenter IPs are cheap and fast but very easy for sites to detect. Residential proxies come from real home connections and usually perform far better, but the price goes up. And then there are mobile (4G/5G) proxies — the hardest to block, the most "real" looking, and priced like it. Proxy providers also vary wildly: some rotate too aggressively, some barely rotate at all, some are stable, some are molasses slow.
Types of proxies explained
When you're using a fetch API proxy setup, it helps to know what kind of IP you're actually buying:
- Datacenter proxies: Cheap, fast, and everywhere. But they're also the easiest for sites to detect and block because they don't look like real user traffic. Good for low-risk or internal tasks.
- Residential proxies: Real home ISP IPs. More expensive and slower, but much harder to identify. Great for scraping sites that instantly nuke datacenter traffic. Many node fetch proxy workflows rely on residential rotation.
- ISP proxies: A hybrid option: IPs issued by real ISPs but hosted in data centers. More stable than residential, faster, and still difficult to block. Usually mid-range in price.
- Mobile (4G/5G) proxies: The heavyweight option. Mobile carriers NAT tons of users behind shared IPs, so these look extremely "natural" and are very hard to block. Also very expensive. We've written an extensive benchmark of mobile proxies.
Most providers support rotation, whether automatic or per-request, which spreads out your traffic and reduces the risk of burning a single IP.
Performance, cost, and reliability trade-offs
Each proxy type has strengths and weaknesses:
- Datacenter — cheapest, fastest, lowest stealth. Great latency, success rate varies a lot.
- Residential — slower and pricier but extremely solid against anti-bot systems.
- ISP — middle ground: stable, fast, harder to block than DC, cheaper than residential.
- Mobile — best success rate on hard targets, but at a premium price.
Your ideal setup depends on how strict the target site is and how much traffic you're sending. A heavy-duty fetch proxy nodejs pipeline (millions of pages) usually combines rotation, retries, concurrency caps, and per-proxy reputation tracking.
Compliance and ethical use of proxies
Using proxies doesn't give you a free pass on legality or ethics. You can still break Terms of Service or data-handling rules if you scrape stuff you shouldn't. A few best practices:
- Respect site rules when possible (robots.txt is there for a reason).
- Don't scrape personal or authenticated data unless you have explicit permission.
- Log failures so you can slow down before a site rate-limits you.
- Adjust concurrency responsibly — don't crush small sites with 200 parallel requests.
- When possible, use higher-level tools that enforce safer scraping patterns, like ScrapingBee's no-code features.
Proxies are powerful tools, just make sure you're using them responsibly and legally.
When native fetch isn't enough for JS-heavy pages
Sometimes a plain request (even with a fetch proxy) just won't load the content you need. Modern sites lean heavily on JavaScript to build pages, fetch data after user actions, or run full SPA flows with buttons, scroll events, timers, and background requests. Native fetch (and node-fetch) doesn't run any of that, so your javascript web scraper ends up pulling an empty shell instead of the real page.
That's where headless rendering becomes necessary: you need something that can actually execute JS, wait for elements, click buttons, scroll, or handle multi-step flows. Instead of running and maintaining your own browser cluster, you can offload that to a managed scraping API that handles the messy parts for you. ScrapingBee can render JavaScript, take full-page screenshots, run scripted scenarios, and you can still call everything through a simple fetch call.
Examples:
- Capture screenshots for debugging layouts or selector issues — see the screenshot API.
- Run full JS workflows with timers, clicks, and custom scripts — see the javascript web scraper feature.
With this approach, your code stays clean and straightforward while the rendering, proxy rotation, and anti-bot handling happen behind the scenes.
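For example, asking the API to execute JavaScript before returning the HTML can look like the sketch below; render_js is the parameter name used in ScrapingBee's docs, but double-check the current documentation for your plan:
// Hedged sketch: render_js asks ScrapingBee to execute JavaScript before returning HTML
// (parameter name taken from ScrapingBee's docs; verify against the current docs).
const API_KEY = process.env.SCRAPINGBEE_API_KEY;
const target = 'https://example.com';
const res = await fetch(
`https://app.scrapingbee.com/api/v1/?api_key=${API_KEY}` +
`&url=${encodeURIComponent(target)}` +
`&render_js=true`
);
console.log('Status:', res.status);
console.log(await res.text());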
A faster path: call ScrapingBee with fetch
Once you've seen how much wiring goes into a DIY node fetch proxy setup — rotation, agents, retries, sticky sessions, TLS fixes — the easier path becomes obvious: just call ScrapingBee's API directly with fetch. It really is a single request:
const res = await fetch(
`https://app.scrapingbee.com/api/v1/?api_key=YOUR_API_KEY&url=${encodeURIComponent('https://example.com')}`
);
const body = await res.text();
console.log(body);
Behind that one call you automatically get proxy rotation, geolocation routing, headless rendering for JS-heavy sites, optional resource blocking (images/fonts/ads), and even AI-powered web data extraction. No agents, no proxy lists, no juggling sessions — you just pass a URL and let the service handle the rest.
For most fetch proxy nodejs use cases, this ends up being the clean drop-in alternative once you realize how much maintenance the DIY route demands.
If you want the full list of capabilities (JS rendering, browser control, screenshots, and more) see the web scraping API.
Geo-targeting and Google SERP access
Sometimes location really matters: prices change by region, news shifts by country, and sites show different banners or translated content depending on where you appear to be browsing from. ScrapingBee lets you set a country or locale parameter so the API fetches the page as if you were physically in that region. This removes the usual node fetch proxy gymnastics where you'd otherwise buy IPs from multiple countries and rotate them yourself.
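A hedged sketch of what that can look like; country_code and premium_proxy are the parameter names from ScrapingBee's docs, so verify them against the current documentation before relying on this:
// Hedged sketch: premium_proxy + country_code are ScrapingBee's geotargeting
// parameters per its docs; verify names and availability for your plan.
const API_KEY = process.env.SCRAPINGBEE_API_KEY;
const res = await fetch(
`https://app.scrapingbee.com/api/v1/?api_key=${API_KEY}` +
`&url=${encodeURIComponent('https://httpbin.org/ip')}` +
`&premium_proxy=true&country_code=de` // appear to browse from Germany
);
console.log(await res.text());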
If you're working with search results, ScrapingBee also offers a dedicated Google Search API that handles SERP scraping without bans, captchas, or babysitting your own browser cluster. It's the simplest way to combine a fetch API proxy workflow with clean, stable, region-accurate search data.
Troubleshooting checklist
If your Node.js fetch with proxy setup starts failing, run through this quick list:
- Double-check the proxy URL format (http://user:pass@host:port).
- Make sure authentication details are correct; URL-encode the username/password if they contain special characters (see the snippet after this checklist).
- Verify the scheme: HTTP proxies won't work if you accidentally point to an HTTPS URL.
- Test that the proxy port is actually reachable from your machine (telnet, nc, curl, etc.).
- Confirm DNS resolution works for both the proxy host and the target site.
- Inspect the TLS certificate chain — corporate MITM proxies or self-signed certs can break the node fetch agent.
- If you're using sticky sessions, confirm you're reusing the same agent instead of creating a new one every call.
- Increase timeouts; some proxies introduce noticeable latency.
- Add retries with exponential backoff to smooth out transient failures.
- Check if the target site is blocking requests based on headers, cookies, or missing browser fingerprints.
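Two of those checks, credential encoding and timeouts, are easy to get wrong, so here's a small combined sketch (the credentials and proxy host are made up for illustration):
// filename: encoded-credentials.mjs (illustrative values)
// npm i node-fetch https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const user = encodeURIComponent('scraper@corp'); // '@' must be percent-encoded
const pass = encodeURIComponent('p@ss:w0rd!'); // ':' and '@' must be percent-encoded
const proxyUrl = `http://${user}:${pass}@proxy.example.com:8000`;
const agent = new HttpsProxyAgent(proxyUrl);
const res = await fetch('https://httpbin.org/ip', {
agent,
signal: AbortSignal.timeout(15000) // generous timeout for slow proxies (Node 17.3+)
});
console.log(await res.json());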
Proxies are (probably) not enough
Using a node fetch proxy is only one layer of staying unblocked, and usually not the most important one. Modern anti-bot systems look at dozens of signals: missing or incorrect headers, inconsistent cookies, unrealistic timing between requests, and browser fingerprints that don't match real traffic. Even with a fetch with proxy nodejs setup, a site can still block you if your requests don't resemble actual user behavior.
That's why it's worth learning how to set proper headers (especially a realistic User-Agent), handle cookies the same way a browser would, and smooth out your timing so you don't look like a script firing requests at machine speed. And for JS-heavy or fingerprint-sensitive pages, a headless environment or a managed rendering service is usually the better option. The JavaScript-rendering and ScrapingBee API examples above show how to handle these scenarios without running your own browser cluster.
More details here: how to avoid getting blocked while scraping.
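As a starting point, here's a hedged sketch of sending more browser-like headers through the same proxy agent; the header values are examples only, not a guaranteed-unblockable fingerprint:
// npm i node-fetch https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
const agent = new HttpsProxyAgent(process.env.PROXY_URL);
const res = await fetch('https://example.com', {
agent,
headers: {
// Example values only; keep User-Agent, Accept, and Accept-Language consistent with each other.
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.9'
}
});
console.log(res.status);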
Ready to get started?
If you're done wrestling with a DIY fetch proxy Node.js setup (agents, rotation, retries, TLS quirks, and endless JS workarounds) ScrapingBee is the clean, no-nonsense upgrade. You get automatic proxy rotation, full JavaScript rendering, and AI-powered extraction, all behind a single fetch call. No infrastructure, no babysitting, no burnout.
If you want to ditch the maintenance and focus on building your product, you can get started now and drop it straight into your existing nodejs fetch proxy workflow.
Conclusion
All of this is just a long way of saying: choosing and maintaining a proxy provider is its own full-time job. You have to worry about success rates, IP quality, rotation behavior, regional coverage, latency, and whether the provider is actually reputable. That's exactly the burden ScrapingBee was built to remove: instead of juggling proxy pools yourself, you make a single API call and get reliable, rotated, geolocated results without touching any proxy infrastructure at all.
I hope you learned something new reading this article. If you wish to learn more about web scraping in JavaScript, I really recommend you take a look at this web scraping with NodeJS guide.
You could also take a look at this article about file downloading with Puppeteer.
Proxy in Node-Fetch FAQs
Does Node's built-in fetch support proxies without extra libraries?
On Node 18–23, no: native fetch ignores HTTP(S)_PROXY and doesn't accept an agent option. You need an Undici dispatcher such as ProxyAgent / EnvHttpProxyAgent, or a managed scraping API.
On Node 24+, you can enable proxy support via NODE_USE_ENV_PROXY=1 + HTTP(S)_PROXY / NO_PROXY.
What's the simplest way to use a proxy with node-fetch?
Create an HttpsProxyAgent and pass it through { agent } in your fetch() call. That's the cleanest and most reliable setup for node-fetch v2/v3.
How do I pass proxy authentication (user/password) to https-proxy-agent?
Put the credentials directly in the URL: http://user:pass@host:port
If the username or password has special characters, URL-encode them.
Why do I get UNABLE_TO_VERIFY_LEAF_SIGNATURE, and should I disable TLS checks?
Your proxy or the target is serving a self-signed or untrusted certificate. Load the correct CA or use NODE_EXTRA_CA_CERTS. Don't disable TLS verification except for quick local tests.
How can I rotate proxies or keep sticky sessions with node-fetch?
Rotate by creating a new HttpsProxyAgent each time, or keep sticky sessions by reusing the same agent for a TTL window. Combine this with retries, backoff, and concurrency caps for stability.
When should I switch from DIY proxies to a scraping API?
When you need geolocation, reliability, large proxy pools, CAPTCHA handling, or JS rendering. A scraping API removes proxy maintenance and usually gives a much higher success rate.
Can I scrape JS-heavy pages with fetch only, or do I need a headless browser?
Fetch alone can't execute JavaScript. For SPA pages, dynamic rendering, or click/wait flows, you need a headless browser or a service that handles rendering for you.
How do I test geo-restricted content from Node.js?
Use proxies with IPs in the target region, or pass a country parameter to a service like ScrapingBee. That way you avoid managing multiple regional proxy pools yourself.
Are free proxies safe for development or production?
Free proxies are slow, unstable, and can inspect or manipulate your traffic. They're never safe for production and only barely acceptable for tiny experiments or throwaway scripts.



