
Using cURL with a proxy

02 December 2025 | 26 min read

If you've ever needed to route your requests through another server, using cURL with proxy is one of the easiest ways to do it. A proxy sits between you and the destination, forwarding your requests and sending the responses back like a chill middle-man that doesn't ask questions.

Sometimes you need this because a service shows different data depending on where you appear to be coming from: geo-restricted content, prices shown in the "wrong" currency, or straight-up blocked regions. Hitting the site directly won't cut it, but sending the same request through a proxy in the right location gets you exactly the data you need.

In this guide, we'll walk through the practical ways to use curl through a proxy, with clear steps and simple examples.


Quick answer: How to use cURL with proxy

A proxy is basically an intermediary server that sends your request forward and brings the response back. Devs use curl with proxy setups for all sorts of reasons: privacy, testing how a site behaves in another region, bypassing annoying network rules, or grabbing data for scraping tasks. If you just want a quick set of curl proxy examples to get moving, here you go.

Copy–paste commands: curl through proxy

1. HTTP proxy

curl -x http://127.0.0.1:8080 https://httpbingo.org/ip

2. HTTPS proxy

curl -x https://127.0.0.1:8443 https://httpbingo.org/ip

https:// in the proxy URL only affects the client–proxy leg; the destination's HTTPS is still its own TLS session as usual.

3. SOCKS5 proxy

curl -x socks5h://127.0.0.1:1080 https://httpbingo.org/ip

4. Proxy with authentication

curl -x http://127.0.0.1:8080 -U user:pass https://httpbingo.org/ip

5. ScrapingBee proxy-as-API (no proxy setup needed)

curl "https://app.scrapingbee.com/api/v1/?api_key=YOUR_KEY&url=https://httpbingo.org/ip"

If you ever want to flip a browser request into a cURL command before sending it through a proxy, our handy Curl Converter tool helps a ton.

What is cURL?

cURL is a command-line tool for transferring data, around since the late 1990s and used everywhere from quick debugging to full-blown automation. Developers rely on it because it's lightweight, universally available on Unix-like systems, works with tons of protocols (HTTP/HTTPS, FTP, SCP, and more), and fits naturally into scripts for tasks like web scraping or even routing a curl command with proxy setups.

When you need a simple, dependable way to fetch or send data (or even plug in a proxy for curl) this is the tool most folks reach for.

Basic request example

Here's the simplest possible example: grab the content of Google by calling cURL directly.

curl google.com

Which returns:

<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>

Checking headers with -I

If you want only the headers instead of the whole page, add -I:

curl -I google.com

This returns something like:

HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/
Content-Type: text/html; charset=UTF-8
Date: Wed, 14 Jul 2021 10:37:31 GMT
Expires: Fri, 13 Aug 2021 10:37:31 GMT
Cache-Control: public, max-age=2592000
Server: gws
...

Why the redirect?

If you're wondering where the usual Google homepage went — this response is normal. Google redirects the non-www domain to www.google.com, and cURL won't automatically follow redirects unless you tell it to.

So let's be explicit:

curl https://www.google.com

Now you'll get the full HTML you expect, just like your browser would. Alternatively, tell curl to follow redirects by adding the -L flag:

curl -L https://google.com

Why being explicit matters

cURL follows a simple rule: garbage in, garbage out. It doesn't try to outsmart you or correct your mistakes. If you give it a weird URL or mismatched protocol, it will still try to run the command and show you whatever happens.

That's why being clear and intentional with your flags and arguments is such a big part of working comfortably with cURL.

The fastest way – use ScrapingBee

Before we dive deeper into setting up your own proxy, here's the truth: managing proxies yourself can be a pain. You've got to find a stable proxy, keep it alive, deal with IP bans, handle rotation, add retries, maybe even render JavaScript. When all you wanted was to run curl with proxy or send curl through proxy, the overhead can feel way out of proportion.

That's where ScrapingBee comes in. Instead of juggling servers and configs, you hit their web scraping API and it handles everything automatically: IP rotation, geolocation, retries, headless browser rendering, the whole mess. It's basically the shortcut version of "set up a proxy," but without the babysitting.

Here's what using it looks like:

curl "https://app.scrapingbee.com/api/v1/?api_key=YOUR_KEY&url=https://httpbingo.org/ip"

That's it. No proxy servers to maintain, no network rules to fight with, no JavaScript headaches.

If you want to see what else you can do with it (JS rendering, CAPTCHAs, country targeting), check out the full API.

Our proxy setup

For the examples in the rest of this article, we're keeping things simple: a local proxy running on port 5000. Nothing fancy. When using cURL, the only thing you really have to specify is the proxy host: the scheme defaults to http:// and the proxy port defaults to 1080. We'll still call things out explicitly so everything is clear.

If you're just testing, you can spin up a quick local proxy using tools like mitmproxy, tinyproxy, or even a simple Docker image. The idea is the same: run a proxy on your machine, point cURL at it, and watch how the traffic flows through.

Most of our demos target the /ip endpoint on httpbingo.org, which is great for this kind of work because it shows the IP and request info exactly as the server sees it.

Here's the setup we'll reference:

  • Proxy Hostname: 127.0.0.1
  • Proxy Port: 5000
  • URL: https://httpbingo.org/ip
  • Username: user
  • Password: password
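
If you want to mirror this setup locally with one of the tools mentioned above, here's a minimal sketch using mitmproxy (our pick for the demo; tinyproxy or any other local proxy works the same way):

# Start a throwaway local proxy on port 5000 (assumes mitmproxy is installed, e.g. via pip install mitmproxy)
mitmdump --listen-port 5000

# In another terminal, point cURL at it; -k is needed here because mitmproxy
# re-signs HTTPS traffic with its own CA unless you've trusted that CA locally
curl -k -x http://127.0.0.1:5000 https://httpbingo.org/ip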

1. Using command-line arguments (-x/--proxy)

The quickest way to tell cURL to use a proxy is through a command-line flag. The curl proxy command syntax is simple, flexible, and works with pretty much any curl proxy server or curl proxy protocol setup you throw at it.

If you ever forget the flags, cURL even ships with built-in help:

curl --help proxy

You'll see the core option right away:

-x, --proxy [protocol://]host[:port]

That means you can use either -x or --proxy as both do the same thing.

Basic HTTP proxy example

Using -x:

curl -x http://127.0.0.1:5000 https://httpbingo.org/ip

Equivalent using --proxy:

curl --proxy http://127.0.0.1:5000 https://httpbingo.org/ip

And since cURL defaults to HTTP for the proxy scheme, you can shorten it:

curl -x 127.0.0.1:5000 https://httpbingo.org/ip

Proxy with username and password (inline credentials)

Sometimes you need to authenticate directly inside the proxy URL. Here's a clean curl use proxy example:

curl -x http://user:password@127.0.0.1:5000 https://httpbingo.org/ip

If your password includes characters your shell treats specially (like ! or $), single-quote the proxy URL so the shell passes it through untouched:

curl -x 'http://user:pa$$w0rd!@127.0.0.1:5000' https://httpbingo.org/ip
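
If the password contains characters that are also special inside a URL (like @ or /), the safer bet is to percent-encode them; curl URL-decodes the credentials it finds in the proxy string before using them. A quick sketch with @ encoded as %40:

# The password here is p@ssw0rd, written as p%40ssw0rd so the URL stays unambiguous
curl -x 'http://user:p%40ssw0rd@127.0.0.1:5000' https://httpbingo.org/ip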

Auto-negotiating the authentication scheme

If you want cURL to figure out the right authentication type:

curl --proxy http://127.0.0.1:5000 --proxy-anyauth -U user:password https://httpbingo.org/ip

Following redirects when using a proxy

Some sites return 301/302 redirects. To follow them through the proxy, add -L:

curl -L -x http://127.0.0.1:5000 https://example.com

If you want to convert your cURL command into JavaScript, Python, Go, Rust, etc., the Curl Converter is super handy.

Authentication methods beyond Basic

So far we've looked at simple -U user:password, but different proxy servers may require stronger or more specialized authentication. This is where advanced curl proxy authentication options come in.

Basic authentication (default):

curl -U user:password --proxy http://127.0.0.1:5000 https://httpbingo.org/ip

Digest authentication:

curl --proxy http://127.0.0.1:5000 --proxy-digest -U user:password https://httpbingo.org/ip

NTLM authentication:

curl --proxy http://127.0.0.1:5000 --proxy-ntlm -U user:password https://httpbingo.org/ip

Quick reality check. While curl supports --proxy-digest and --proxy-ntlm, and the commands above work, modern proxy setups rarely use these anymore. NTLM is also notorious for getting stuck mid-handshake, and a lot of corporate/proxy servers have quietly disabled both.

So don't assume these modes will magically work in 2025+. In real-world environments you usually get either simple Basic auth or Kerberos/Negotiate. If your proxy suddenly throws 407s or timeouts when using Digest/NTLM — it's not you, it's the protocol.
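
For completeness, here's what the Negotiate flavor looks like. This is a sketch that assumes your curl build ships with SPNEGO/GSS-API support and that you already hold a Kerberos ticket; the lone colon in -U : is just the placeholder credential Negotiate needs, since the real auth comes from Kerberos:

curl --proxy http://127.0.0.1:5000 --proxy-negotiate -U : https://httpbingo.org/ip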

Custom Proxy-Authorization header:

If you need to specify your own header manually:

curl --proxy http://127.0.0.1:5000 \
  --proxy-header "Proxy-Authorization: Basic dXNlcjpwYXNz" \
  https://httpbingo.org/ip

Note, however, that --proxy-header can override curl's built-in auth handling. If you set Proxy-Authorization manually (for example with --proxy-header "Proxy-Authorization: ..."), you're effectively bypassing curl's built-in proxy auth logic. Don't mix --proxy-header Proxy-Authorization: ... with -U/--proxy-anyauth unless you really know what you're doing.

A quick security note

Avoid storing plain-text credentials in shell history or scripts when you can help it. Use environment variables, prompt for the password, or (even better) skip the entire proxy-auth headache when possible. Managed APIs like ScrapingBee handle secure web data extraction without requiring you to manage proxy auth manually.
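
As a tiny sketch of the environment-variable route (PROXY_AUTH is our own made-up name; set it once in your shell profile or secret store rather than inside the script):

# Credentials live in the environment, not in the command you keep re-typing
export PROXY_AUTH='user:password'
curl -x http://127.0.0.1:5000 -U "$PROXY_AUTH" https://httpbingo.org/ip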

2. Using environment variables (http_proxy, https_proxy)

If you don't feel like typing proxy flags every time, you can let the shell do the work for you. cURL checks a set of curl proxy environment variables automatically: things like http_proxy, https_proxy, and even NO_PROXY. Once these are set, cURL just assumes you want to use them. This is a chill way to apply curl proxy settings across your session.

Linux/macOS: setting proxy vars

export http_proxy="http://127.0.0.1:5000"
export https_proxy="http://127.0.0.1:5000"

With authentication:

export http_proxy="http://username:password@127.0.0.1:5000"
export https_proxy="http://username:password@127.0.0.1:5000"

Now your request becomes:

curl https://httpbingo.org/ip

No -x, no --proxy, nothing extra.

Note that casing matters for http_proxy: curl only reads the lowercase form and ignores an uppercase HTTP_PROXY for security reasons (it clashes with the HTTP_PROXY variable that CGI environments set from request headers). The other variables (https_proxy, no_proxy) work in both upper- and lowercase.

Windows: CMD version

set http_proxy=http://127.0.0.1:5000
set https_proxy=http://127.0.0.1:5000
curl https://httpbingo.org/ip

Windows: PowerShell version

$env:http_proxy="http://127.0.0.1:5000"
$env:https_proxy="http://127.0.0.1:5000"
curl.exe https://httpbingo.org/ip

In Windows PowerShell, curl is an alias for Invoke-WebRequest, so we use curl.exe specifically to avoid weird issues.

Skipping the proxy for specific hosts (NO_PROXY)

Sometimes you want most requests to use the proxy, but a few to go direct. That's what NO_PROXY is for: it tells cURL when to chill out and send the request straight, giving you a clean curl without proxy path.

export http_proxy=http://127.0.0.1:5000
export NO_PROXY=localhost,.example.com
curl https://localhost/test

For IPs, recent curl versions also accept CIDR ranges in NO_PROXY (e.g. NO_PROXY=10.0.0.0/8).
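
For example, to send a whole private range around the proxy (again, this needs a reasonably recent curl):

export http_proxy=http://127.0.0.1:5000
export NO_PROXY=10.0.0.0/8
curl http://10.1.2.3/status   # goes direct: 10.1.2.3 falls inside the excluded range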

NO_PROXY has some quirks:

  • It does not support glob-style wildcards like *.example.com.
  • Values are basically treated as suffix matches on the hostname.

So, in practice:

  • NO_PROXY=localhost → matches only localhost
  • NO_PROXY=example.com → matches example.com and any host whose name ends with example.com (e.g. api.example.com)
  • NO_PROXY=.example.com → behaves the same as example.com in cURL land; people add the dot out of habit, but it's still just suffix matching, not a special wildcard.

Example:

export http_proxy=http://127.0.0.1:5000
export NO_PROXY=localhost,.example.com

curl https://localhost/test         # bypasses proxy
curl https://api.example.com/data   # bypasses proxy
curl https://another.site           # goes through proxy

Session scope vs. permanent settings

Environment variables set this way live only for the current shell session. Close the terminal, they're gone. If you need them every time you open a terminal, drop them into your startup files:

  • Bash: ~/.bashrc or ~/.profile
  • Zsh: ~/.zshrc
  • macOS login shells: sometimes ~/.profile
  • Windows (CMD): add them via System Properties → Environment Variables or put set http_proxy=... inside a startup .bat file
  • Windows (PowerShell): persist with:
[System.Environment]::SetEnvironmentVariable("http_proxy", "http://127.0.0.1:5000", "User")

Just remember: these variables affect any tool that respects them (wget, apt, npm, etc.). Handy if that's what you want, but messy if it isn't.

This approach works great when you want a simple, global-ish proxy setup without having to sprinkle proxy flags everywhere in your commands.

3. Using an alias for convenience

If you're hitting a proxy all the time, setting an alias is a practical way to streamline your workflow. Instead of retyping the same curl proxy settings in every command, you just teach your shell a shortcut and let it handle the boring part.

The idea is simple: replace the default curl call with a version that always includes your proxy.

Hardcoded proxy alias

alias curl="command curl -x http://127.0.0.1:5000"

Now when you run:

curl https://httpbingo.org/ip

your shell actually runs:

curl -x http://127.0.0.1:5000 https://httpbingo.org/ip

Automatically, every time. To be extra safe, you can name your alias something like curlp to avoid breaking things.

Unfortunately, Windows CMD does not support aliases, but you can create a .bat file and place it somewhere in your PATH:

@echo off
"C:\Windows\System32\curl.exe" -x http://127.0.0.1:5000 %*

More flexible alias (powered by environment variables)

If you want a setup that doesn't hard-wire the host and port, you can combine aliases with environment variables:

# Create an alias for cURL with proxy
alias curlproxy='curl -x $http_proxy'

Usage:

curlproxy https://httpbingo.org/ip

This way, you can switch proxies just by updating $http_proxy, and the alias keeps working without editing your shell config over and over.

Add the alias to ~/.bashrc, ~/.zshrc, or your shell's startup file if you want it available in every session.

4. Using a curlrc file

If you're tired of repeating flags and want cURL to "just know" your preferred settings, a config file is the cleanest way to do it. This is where curlrc files come in. It's basically a place where you can curl set proxy options (or any other defaults) and cURL will load them automatically every time.

cURL will check for a config file unless you explicitly tell it not to (-q passed as the first argument). There's also the --config flag that adds more options to the default behavior; if you pass flags on the command line, they override whatever's in the config file.

Keep in mind: curl doesn't load .curlrc and/or _curlrc files from just anywhere; it only checks a few specific locations.

  • On Linux/macOS it reads $HOME/.curlrc (or $CURL_HOME/.curlrc if that variable is set).
  • On Windows it computes a "home" directory from CURL_HOME, HOME, or %APPDATA% / %USERPROFILE%\Application Data, and looks there for _curlrc or .curlrc. If it finds no config there, it also checks the directory where curl.exe is located.

All in all, this behavior depends on the build so make sure to check the official docs to understand the check sequence.

Plus, this config is used only by the curl CLI. Tools built on top of libcurl (anything that embeds curl under the hood) completely ignore .curlrc, so don't expect your proxy settings to magically apply there.

Linux / macOS example

Create or edit the ~/.curlrc file:

proxy = "http://127.0.0.1:5000"

Windows example

Create or edit the %APPDATA%\_curlrc file:

proxy = "http://127.0.0.1:5000"

Usage

Once the file is in place:

curl https://httpbingo.org/ip

No flags required: cURL pulls the proxy setting straight from your config file. It's a simple, reliable curl proxy example that saves a ton of typing over time.
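
And because .curlrc accepts any long option (just drop the leading dashes), you can park other defaults next to the proxy. A hedged sketch using our demo values:

# ~/.curlrc
proxy = "http://127.0.0.1:5000"
proxy-user = "user:password"
# follow redirects by default (same as -L)
location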

If you prefer avoiding local config files entirely, no-code scraping tools can also remove proxy management from your workflow.

Proxy types and protocols — HTTP vs HTTPS vs SOCKS4/5

Before diving deeper into configs, it helps to understand the different kinds of proxies you can plug into cURL. Each protocol has its own strengths, and the one you choose can impact speed, compatibility, and how well your traffic blends in, especially for scraping or region-based testing.

HTTP proxies (the common default)

Most examples you see online use an HTTP proxy, because it works with the majority of web requests and is easy to configure.

Example:

curl -x http://127.0.0.1:5000 https://httpbingo.org/ip

This is the classic curl http proxy pattern and usually the first stop for devs.

With a "normal" HTTP proxy, curl does not resolve the target site's hostname itself; it just sends the full URL/host to the proxy, and the proxy takes care of DNS for the destination. curl only needs to resolve the proxy's own hostname (e.g. proxy.example.com), nothing else.

That's also why options like --resolve don't affect traffic going through an HTTP proxy: the proxy's DNS wins.

HTTPS proxies (encrypted tunneling)

An HTTPS proxy encrypts the connection between your machine and the proxy itself. This can matter if your network is monitored or you want an extra layer of privacy.

When you fire an HTTPS request through any classic proxy (doesn't matter if the proxy URL is http:// or https://), curl automatically goes into "tunnel mode" and sends a CONNECT to punch a hole straight to the target server. That's just how HTTPS-over-proxy works: the proxy becomes a dumb pipe for encrypted bytes, and curl doesn't give you a toggle to change that.

Example with an HTTPS proxy:

curl -x https://127.0.0.1:5000 https://httpbingo.org/ip

A few things to keep straight:

  • https://proxy:port — the connection to the proxy is encrypted. The destination site is still its own TLS session.
  • This only works if the proxy actually speaks HTTPS. Most corp/datacenter boxes are boring old http:// but still handle HTTPS targets via CONNECT.
  • With a normal CONNECT tunnel, the proxy can't see your decrypted HTTPS traffic, unless it's doing full-on MITM with a CA your machine trusts.
  • If your HTTPS proxy is using a self-signed cert, curl will freak out unless you pass:
# This only disables verification of the proxy's cert; the site cert is still validated
curl --proxy-insecure -x https://proxy:port https://example.com

💡 If you want "real" HTTPS proxies without praying they support it, ScrapingBee's Proxy mode actually gives you fully-fledged HTTPS endpoints out of the box. You hit a standard HTTPS proxy, drop your API key, and get geo-targeting + rotation baked in.

SOCKS4 / SOCKS5 proxies (more flexible + great for scraping)

SOCKS proxies don't care about the protocol of the final destination; they just move raw traffic. SOCKS5 also supports authentication, UDP traffic, and better compatibility when sites are picky.

SOCKS4 example:

curl -x socks4://127.0.0.1:1080 https://httpbingo.org/ip

SOCKS5 example:

curl -x socks5h://127.0.0.1:1080 https://httpbingo.org/ip

When people talk about a curl socks proxy, they usually mean SOCKS5 because it handles more real-world scraping scenarios.

SOCKS5 and DNS

Keep in mind that the h postfix in the socks5h:// string matters. Its presence or absence affects the DNS resolving process.

  • socks5:// → curl resolves the target hostname locally, then sends the IP to the proxy.
  • socks5h:// → the proxy resolves the hostname (the h postfix means "proxy handles host name").

So if you want all lookups to happen from the proxy's IP (no local DNS queries for the target):

curl -x socks5h://127.0.0.1:1080 https://example.com

If you don't care about DNS privacy and just want the TCP traffic to go through the proxy, socks5:// is fine.
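
By the way, if you don't have a SOCKS proxy handy, any box you can SSH into can play the part. A quick sketch (user@remote.example.com is a placeholder for your own server):

# Open a local SOCKS5 proxy on port 1080, tunnelled through the SSH connection
ssh -N -D 1080 user@remote.example.com &

# Route the request (and, thanks to socks5h://, the DNS lookup) through it
curl -x socks5h://127.0.0.1:1080 https://httpbingo.org/ip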

Proxy support varies for FTP

cURL's proxy behavior is consistent for HTTP/HTTPS, but FTP is different. Not every proxy type can handle every FTP operation, and results vary by server.

  • HTTP proxies + FTP uploads are limited — HTTP proxies weren't designed for FTP commands, so uploads (STOR, PUT) often fail or behave inconsistently.
  • SOCKS proxies don't support all FTP modes — FTP's dual connections (control + data) cause issues: passive mode may work, active mode often breaks.
  • FTP over HTTPS proxies is unreliable — An HTTPS proxy is just a TLS-wrapped HTTP proxy, so the same FTP limitations apply.

Which proxy should you use for scraping?

  • HTTP proxies — Fastest, simplest, but easiest for sites to detect.
  • HTTPS proxies — Good balance of privacy and simplicity.
  • SOCKS5 proxies — Best for heavy-duty scraping, bypassing filters, or handling mixed protocols.

If you're building a JavaScript web scraper, especially one that needs dynamic content or JS-rendered pages, pairing SOCKS5 with a headless browser or using a managed solution like ScrapingBee can save you from maintaining rotating proxy pools.

Rotating proxies and scale — two paths

Once you start scraping more aggressively, a single proxy usually doesn't cut it. Sites rate-limit you, ban your IP, or the proxy itself becomes unreliable. That's when curl proxy rotation becomes essential. At scale, you either build your own rotating setup or you hand the whole mess over to a managed provider. Think of this as the "next level" after basic curl use proxy setups.

There are two main approaches.

DIY rotation with a proxy list

If you're just experimenting or running lightweight tasks, you can roll your own rotation using a small pool of proxies. A simple Bash script can pick a random proxy each request:

#!/bin/bash
proxies=("http://proxy1:8080" "http://proxy2:8080" "http://proxy3:8080")
proxy=${proxies[$RANDOM % ${#proxies[@]}]}
curl -x "$proxy" https://httpbingo.org/ip

This is the simplest form of curl proxy rotation, but be warned:

  • Free proxies die constantly
  • They're slow
  • They get banned fast
  • Security is questionable (they may act as malicious MITM)

Use this approach only for testing, not production.
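
If you want something slightly sturdier than a random pick, a small loop can walk the list until a proxy actually answers. A sketch that assumes a proxies.txt file with one proxy URL per line:

#!/bin/bash
# Try each proxy in turn; -f makes curl fail on HTTP errors, --max-time caps slow proxies
while read -r proxy; do
  curl -fsS --max-time 10 -x "$proxy" https://httpbingo.org/ip && break
done < proxies.txt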

Managed rotation with ScrapingBee

If you want serious reliability, retries, automatic IP rotation, and steady success rates, a managed system is the way to go. Instead of juggling proxy lists, failures, and region targeting, ScrapingBee handles everything behind the scenes.

A simple curl with proxy example using ScrapingBee looks like this:

curl "https://app.scrapingbee.com/api/v1/\
?api_key=YOUR_API_KEY\
&url=https://httpbingo.org/ip"

You still send your request via proxy, but without managing any proxy infrastructure yourself. ScrapingBee also gives you extras like screenshot capture, JavaScript rendering, and smart retries — all features you'd never want to maintain yourself. For anything beyond hobby scraping, this is the path most developers take.

Troubleshooting and common errors

Even when your setup looks perfect, proxies love throwing curveballs. Most curl proxy error situations fall into just a few categories: connection failures, SSL complaints, bad credentials, or simply being blocked by the target site. Here are the most common issues and how to fix them without losing your mind.

1. Debugging proxy connections with verbose mode

When things aren't working and you're not sure whether cURL is even using the proxy, add -v for full request/response detail:

curl -v -x http://proxy.example.com:8080 https://httpbingo.org/ip

Verbose mode will show:

  • Whether the proxy connection was attempted
  • Redirects or handshake issues
  • Authentication challenges
  • Errors returned by the proxy

2. Ignoring SSL certificate errors (testing only)

Sometimes a proxy intercepts HTTPS traffic and uses a self-signed certificate. This triggers SSL warnings. If you're just testing, you can bypass them:

curl -k -x http://proxy.example.com:8080 https://https-only-site.com

-k / --insecure disables certificate validation: useful for local dev, never for production.

3. Proxy settings not applied?

If your request is going out directly (not via the proxy), double-check:

  • Environment variables aren't overriding your CLI flags
  • .curlrc isn't setting something unexpected
  • You haven't added the host to NO_PROXY
  • Your alias isn't redirecting the call

If you ever need to force a single request to go curl without proxy, use --noproxy '*':

curl --noproxy "*" https://httpbingo.org/ip

4. Authentication errors

If you see 407 errors or "Proxy Authentication Required," verify:

  • Username/password spelling
  • Quotes around special characters
  • That the proxy supports your auth scheme

Try --proxy-anyauth if unsure:

curl --proxy-anyauth -U user:pass -x http://proxy.example.com:8080 https://httpbingo.org/ip

5. 403 / blocked responses

A valid proxy connection but a 403 usually means:

  • The site rejected the proxy IP
  • Rate limits kicked in
  • You're scraping too fast

In these cases you may need:

  • A different proxy region
  • IP rotation
  • A managed scraping solution

These fixes cover 90% of real-world cURL proxy issues. When in doubt, start with -v; it's your best friend when debugging curl proxy settings.

6. Proxy doesn't support CONNECT (HTTPS tunneling fails)

curl uses the CONNECT method to tunnel HTTPS traffic through an HTTP proxy. If the proxy does not support CONNECT, HTTPS requests simply cannot be proxied, and curl will not fall back to a direct connection. It will fail with an error.

Example check:

curl -v -x http://proxy.example.com:8080 https://example.com

In the verbose output you should see something like:

> CONNECT example.com:443 HTTP/1.1

If the proxy doesn't support CONNECT, you'll usually see:

  • a 405 Method Not Allowed,
  • a 403 Forbidden,
  • a connection drop, or
  • an SSL handshake error after the failed CONNECT attempt.

Ignore or override proxy for one request

Sometimes you have global proxy settings configured (env vars, .curlrc, aliases), but for one call you just want to go direct. No need to delete anything or edit configs as cURL gives you clean one-off overrides. This is especially useful when you want to test both proxied and non-proxied behavior side by side.

Example 1 — Disable proxies for all requests

curl --noproxy "*" https://httpbingo.org/ip

This completely ignores every proxy source (environment variables, config files, aliases) and makes a direct request. The classic curl noproxy move.

Example 2 — Disable proxy for a specific domain

curl --noproxy "example.com" https://example.com/data

This skips the proxy only for example.com, but still uses your proxies for all other hosts. Perfect when most of your workflow needs a proxy, but one endpoint demands a direct hit.

Example 3 — Override proxy with an empty string

curl -x "" https://httpbingo.org/ip

These tricks keep your proxy setup flexible without constantly editing config files or toggling environment variables.

Handy extras — headers, redirects, compression

Once your proxy setup is running smoothly, you can start tuning your requests to look more like a real browser and less like a raw script. These small additions make a big difference when working through a proxy, especially if a site is picky about clients. Think of these as upgrades to your curl proxy request toolkit.

Example 1 — Custom user agent (-A)

curl -x http://proxy.example.com:8080 \
  -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" \
  https://example.com

A custom user agent helps your request blend in. Many servers behave differently when they see the default cURL signature.

Example 2 — Add custom headers (-H)

curl -x http://proxy.example.com:8080 \
  -H "Accept-Language: en-US" \
  https://example.com

Great for locale testing, scraping multilingual sites, or tweaking behavior. This is core curl proxy headers usage.

Example 3 — Follow redirects (-L)

curl -x http://proxy.example.com:8080 -L https://example.com

Some sites redirect you before showing the real content. -L follows those hops automatically so you don't get stuck on a 301/302.

Example 4 — Request compressed response (--compressed)

curl -x http://proxy.example.com:8080 --compressed https://example.com

This asks the server to return gzip/deflate content. Responses are smaller, faster, and more bandwidth-friendly, especially over a proxy.
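
These flags combine freely, so one proxied request can use all of them at once:

curl -x http://proxy.example.com:8080 \
  -L --compressed \
  -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" \
  -H "Accept-Language: en-US" \
  https://example.com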

If you ever want these tweaks handled automatically, ScrapingBee's managed scraper setups handle headers and browser behavior server-side so you don't have to manage them manually.

Using cURL to extract the title of a web page

Here's where cURL really shines: piping its output into other tools so you can extract exactly the data you want. This is classic curl data extraction.

Extracting the <title> tag (HTML)

Let's grab the <title> from a page:

curl --silent https://snapshooter.com/ | grep -oP '(?<=<title>).*(?=</title>)'

This works, but remember: parsing HTML with regex is always a bit brittle. It's fine for quick checks, but not great for production scraping. Also, -P might not work on Mac by default.

Note for Windows users: the grep and jq examples in this section work on Linux/macOS by default. On Windows CMD they won't run unless you install GNU tools. In PowerShell, you can replace grep with Select-String, and use jq if installed.

Extracting a meta description example

curl --silent https://www.boxmode.com/ | grep -oP '<meta[^>]+name="description"[^>]+content="\K[^"]*'

Again: handy for quick demos, but HTML can break regex easily.

Extracting JSON fields with jq (much cleaner)

When the site returns JSON, prefer a real parser like jq. Here's a clean, reliable example:

curl -s https://httpbingo.org/ip | jq ".origin"

This is far more maintainable than regex and is the recommended approach whenever possible.

You can combine any of these techniques with a proxy if needed, just add your usual -x flag or proxy config. For example:

curl -s -x http://127.0.0.1:5000 https://httpbingo.org/ip | jq ".origin"

A simple, practical curl proxy example with JSON extraction in one line.

💡 Love the power of cURL? Check out our ultimate guide on How to download files at enterprise level scale with cURL.

When to switch from DIY proxies to an API

At some point, every developer hits the same wall: your curl proxy server setup works, but the site keeps banning you, the content changes based on region, or the page is so JavaScript-heavy that raw cURL just can't render it. That's usually the moment to stop wrestling with rotating IPs and start using a dedicated scraping API.

Here's a quick checklist to know when it's time to upgrade:

  • You get frequent IP bans or captchas
  • You need geo-targeted data from specific countries
  • The site requires JavaScript to load content
  • Your DIY proxy rotation keeps failing or slowing down
  • You're spending more time maintaining proxies than extracting data

When any of that starts happening, using a curl with proxy API like ScrapingBee is the cleanest path forward. You still write simple cURL commands, but the service handles IP rotation, retries, JS rendering, and all the tricky scraping logic automatically.

Explore the full web scraping API!

Ready to fetch data faster with fewer blocks?

By now you've seen all the ways to run curl with proxy from simple command-line flags to rotation scripts and config files. These tools are great for learning and small tasks, but once you need reliability, scale, or JavaScript rendering, a managed web scraping API becomes the obvious upgrade.

ScrapingBee handles everything you don't want to: rotating IPs, bypassing blocks, handling retries, executing JS, and returning clean data with a single cURL call. No more babysitting proxies. No more trial-and-error setups.

If you want smoother scraping with fewer bans and less overhead, try ScrapingBee's free trial and see the difference:

👉 Get started now!

Conclusion

Alright, folks, we covered pretty much every way you can wire up a proxy with cURL. From one-off flags to environment variables, aliases, rotation tricks, and full-on config files, you've now got a toolbox that can handle anything from quick debugging to serious scraping runs.

If you're using the same proxy constantly, dropping the setting into your .curlrc is usually the cleanest, least-annoying approach. But honestly, any method here is fair game as long as it gets you the data you came for.

And if you're coding around this stuff, remember tons of languages have solid cURL bindings. For example, Python has great support, so check out how to use it with cURL in our tutorial.


FAQs — cURL with proxy

How do I use cURL with an HTTP proxy?

Use the -x or --proxy flag:

curl -x http://127.0.0.1:8080 https://example.com

This tells cURL to route the request through the specified proxy. See the official docs.

How do I add username/password authentication to a cURL proxy?

Include credentials in the proxy URL or use -U:

curl -x http://user:pass@proxy:8080 https://example.com
curl -U user:pass -x http://proxy:8080 https://example.com

Special characters require quotes.

What is the difference between HTTP/HTTPS and SOCKS proxies in cURL?

HTTP/HTTPS proxies handle web traffic and tunneling, while SOCKS4/5 simply forward raw data. SOCKS5 supports authentication and works better for scraping or mixed protocols.

How do I ignore or disable proxy settings in cURL?

Use:

curl --noproxy "*" https://example.com

Or override proxy with an empty value:

curl -x "" https://example.com

Why does my real IP show when using a cURL proxy?

This usually means the request didn't really go through the proxy, or DNS is still happening locally.

  • Double-check that cURL is actually using the proxy (no NO_PROXY match, no unexpected .curlrc settings, no environment variable overriding your flags).
  • If you're using SOCKS, use socks5h:// so the proxy resolves hostnames instead of your machine.
  • Run with -v to confirm the proxy connection. If an HTTPS proxy doesn't support CONNECT, the request will fail rather than silently "leaking" your IP.

How do I rotate proxies with cURL?

Use a script to select a random proxy per request:

proxylist=(p1 p2 p3)
curl -x ${proxylist[$RANDOM % ${#proxylist[@]}]} https://example.com

For production, use managed rotation services.

Can I use a managed proxy API instead of setting up proxies manually?

Yes. Services like ScrapingBee handle rotation, retries, JS rendering, and geolocation automatically. You still use simple cURL calls, but without maintaining proxy servers.

How do I set cURL proxy environment variables on macOS/Linux vs Windows?

macOS/Linux:

export http_proxy=http://proxy:8080
export https_proxy=http://proxy:8080

Windows CMD:

set http_proxy=http://proxy:8080

PowerShell:

$env:http_proxy="http://proxy:8080"

When should I use -k or --insecure with a cURL proxy?

Only for testing when a proxy uses a self-signed certificate.

curl -k -x http://proxy:8080 https://example.com

Never use it in production as it disables SSL validation.

How do I safely use cURL with a proxy that has a self-signed certificate?

For quick local testing, many people use:

curl -k -x https://proxy:8443 https://example.com

Or:

curl --proxy-insecure -x https://proxy:8443 https://example.com

  • -k / --insecure disables all TLS peer verification (proxy + target).
  • --proxy-insecure disables verification of the proxy's TLS cert only; site cert is still validated (good for production-ish setups with a weird proxy CA).

That's fine for development, but never use them in production as they make you vulnerable to MITM attacks.

The correct production approach is to point curl to the actual CA certificate that signs your proxy's TLS cert:

curl \
  --proxy-cacert /path/to/proxy-ca.pem \
  -x https://proxy:8443 \
  https://example.com

If you need to validate both the proxy and the target site with custom CAs, you can combine them:

curl \
  --proxy-cacert /path/to/proxy-ca.pem \
  --cacert /path/to/site-ca.pem \
  -x https://proxy:8443 \
  https://example.com

What is the default proxy protocol for cURL?

If you omit the scheme, cURL assumes http:// for the proxy. Example:

curl -x 127.0.0.1:8080 https://example.com
Kevin Sahin

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.