In this article, we'll walk through how to use a C# HttpClient proxy. HttpClient is built into .NET and supports async by default, so it's the standard way to send requests through a proxy.
Developers often use proxies to stay anonymous, avoid IP blocks, or just control where the traffic goes. Whatever your reason, by the end of this article you'll know how to work with both authenticated and unauthenticated proxies in HttpClient.

Quick answer (TL;DR)
To use a proxy with HttpClient in C#, create a WebProxy, attach it to an HttpClientHandler, and pass that handler when creating the HttpClient instance. That's all you need for a basic setup.
Copy-paste starter code:
using System.Net;
var proxy = new WebProxy("http://your-proxy-host:port")
{
BypassProxyOnLocal = false,
UseDefaultCredentials = false
};
var handler = new HttpClientHandler
{
Proxy = proxy,
// Quick dev-only SSL bypass (remove in production)
ServerCertificateCustomValidationCallback =
HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};
using var client = new HttpClient(handler);
client.Timeout = TimeSpan.FromSeconds(10);
var response = await client.GetStringAsync("https://api.ipify.org/");
Console.WriteLine(response);
For production scraping, managed proxy services like ScrapingBee handle rotation, authentication, and region targeting for you, which saves a lot of time and headache (especially when doing web scraping in C#).
Common approaches:
- Basic HttpClient proxy C# setup using an unauthenticated WebProxy
- Authenticated C# HttpClient with proxy using NetworkCredential
- Manual rotation by cycling through a list of proxies
- ScrapingBee C# HTTP proxy mode for automatic IP rotation and stable performance
What is HttpClient?
Alright, quick refresher. HttpClient (from System.Net.Http) is the built-in tool in C# for sending HTTP requests and getting responses back. Whenever you're calling an API or just grabbing a webpage, this is usually what you're using behind the scenes. It plays nicely with async/await, so your app isn't sitting there frozen while waiting on the network. And it's pretty flexible: you can add headers, tweak handlers, plug in logging, retries, proxies (which we'll get to), all that good stuff.
So yeah, C# HttpClient is basically the standard HTTP Swiss Army knife in .NET.
What is a proxy?
A proxy server sits in the middle between your app (the client) and the site you're hitting (the server). Instead of your request going out directly, it gets routed through the proxy. This can hide your real IP, help you appear from another region, or just give you an extra layer of control. For example, if you need to hit an API only available in another country, a proxy makes that happen.
[Your App] ---> [Proxy] ---> [Website]
(proxy sends the request on your behalf)
Proxies can be open or require authentication. Some only support basic HTTP traffic, others support HTTPS (encrypted), and then there are SOCKS5 proxies which can route almost any kind of traffic, not just web requests.
When dealing with C#, you'll often configure a C# webproxy instance to tell HttpClient which proxy to use. So when we talk about a C# proxy server setup, we're usually just telling HttpClient: "Send everything through this address instead of directly out to the internet."
Why use a proxy with HttpClient in C#?
When combining HttpClient with a proxy, you get a bit more control over how your requests move through the internet. Maybe you want to stay anonymous, avoid getting your real IP rate-limited, or hit an API that only works in certain regions. In scraping scenarios, rotating proxies can keep your jobs running smoothly instead of getting blocked on the third request.
Using a proxy also makes it easier to scale. If you're making lots of requests, spreading them across different IP addresses helps you avoid bans and captchas, especially when you pair it with good parsing tools.
If you're parsing HTML responses, here's a list of helpful options: C# HTML parsers.
There are free proxies out there, but they're usually slow, unstable, or already burned. Paid proxy providers or managed scraping APIs solve that by giving you clean, reliable IP pools you don't have to babysit. We'll walk through a simple HttpClient proxy C# setup first, and then later we'll look at a more convenient C# HttpClient proxy example using a managed solution.
Creating a new C# project
Before we jump into the actual C# HttpClient proxy example, let's set up a clean project to work in. No heavy IDEs needed here, VS Code and the .NET SDK are enough.
Make sure you've got the latest .NET SDK (at the time of writing, .NET 9) installed. You can check by running:
dotnet --version
If that prints something like 9.x.x, you're good. If not, grab it from the official .NET downloads page.
Now let's create a simple console project:
dotnet new console -o HttpClientProxyDemo
cd HttpClientProxyDemo
Open the project in VS Code:
code .
That's it. No solution files, no wizard dialogs. Just a straightforward project where we'll write the HttpClient C# proxy setup and try it out.
Making a basic HTTP request
Before we bring proxies into the picture, let's just confirm our project can make a normal web request.
Create or edit the Program.cs file so it looks like this:
using var client = new HttpClient();
var result = await client.GetStringAsync("https://api.ipify.org/");
Console.WriteLine(result);
What's happening here:
- We create an HttpClient instance.
- We call GetStringAsync to send a GET request to https://api.ipify.org/.
- That service just returns your public IP as plain text.
- We print the result to the console.
Run the project:
dotnet run
You should see your real public IP show up in the terminal. This is our baseline; next, we'll route this request through a proxy and watch the IP output change.
Using a proxy with HttpClient
Now let's point our request through a proxy. The flow goes like this:
- Create a WebProxy with the proxy address.
- Plug it into an HttpClientHandler.
- Use that handler when creating the HttpClient instance.
Here's a simple example using an open (no-auth) HTTP proxy:
using System.Net;
var proxy = new WebProxy("http://172.104.241.29:8081")
{
BypassProxyOnLocal = false,
UseDefaultCredentials = false
};
var handler = new HttpClientHandler
{
Proxy = proxy,
// Some free proxies break SSL, so we allow all certs here.
// In production, use proper certificate validation.
ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};
using var client = new HttpClient(handler);
var result = await client.GetStringAsync("https://api.ipify.org/");
Console.WriteLine(result);
Run it again:
dotnet run
If the proxy is live, you should now see a different IP printed. That means your request is being routed through the proxy instead of your real connection.
A few things to keep in mind:
- Free proxies are usually unreliable, slow, or already blocked somewhere. They are okay for quick experiments, but expect random failures. We'll look at authenticated proxies and more stable options next.
- The line with DangerousAcceptAnyServerCertificateValidator disables SSL certificate validation. This is fine for quick proxy tests or when using ScrapingBee HTTPS proxy mode (where TLS ends at the proxy). For production setups or standard CONNECT/SOCKS proxies, you should keep validation enabled.
- See the About SSL certificate validation section below for a detailed explanation.
Using an authenticated proxy with HttpClient
Some proxies require a username and password. The setup is almost the same as before; we just need to include credentials in the WebProxy configuration.
Here's an example:
using System.Net;
var proxy = new WebProxy("http://your-proxy-host:port")
{
Credentials = new NetworkCredential("your-username", "your-password"),
BypassProxyOnLocal = false,
UseDefaultCredentials = false
};
var handler = new HttpClientHandler
{
Proxy = proxy,
// Dev-only: only disable SSL checks when your proxy terminates TLS on purpose
// (e.g., ScrapingBee HTTPS proxy mode). For standard CONNECT proxies, keep validation enabled.
ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};
using var client = new HttpClient(handler);
var result = await client.GetStringAsync("https://api.ipify.org/");
Console.WriteLine(result);
What's happening here:
- We create a WebProxy just like before, but now we attach credentials.
- NetworkCredential holds the username/password pair.
- The handler sends requests through this proxy, including authentication.
When you run the code, the output IP should match the proxy's exit node, not your actual one.
This is the setup you'll usually use with paid residential or datacenter proxy providers, since most of them require authentication. It's more stable than relying on random free proxies and gives you fewer connection surprises.
Rotating proxies with HttpClient
If you're making a lot of requests, using the same proxy over and over can get that IP flagged pretty fast. A common approach is to rotate proxies: either from your own list or using a provider that gives you a pool.
There are two main ways to rotate:
- Create a new HttpClient per request (simple, but a bit heavier).
- Reuse one HttpClient per proxy configuration. For example, keep a small pool of clients, each with its own handler and proxy. You can manage that cleanly with IHttpClientFactory if you're running at scale (see the sketch after the rotation example below).
For most scraping or batch jobs, option 1 is fine unless you're hammering thousands of requests per second.
Here's a simple rotation example using a list of proxies:
using System.Net;
var proxies = new[]
{
"http://proxy1.com:8080",
"http://proxy2.com:8080",
"http://proxy3.com:8080"
};
foreach (var proxyAddress in proxies)
{
// Create a dedicated handler for each proxy
var proxy = new WebProxy(proxyAddress)
{
BypassProxyOnLocal = false,
UseDefaultCredentials = false
};
var handler = new HttpClientHandler
{
Proxy = proxy,
// Dev-only: ignore SSL validation (remove in production)
ServerCertificateCustomValidationCallback =
HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};
// Create a new HttpClient for this specific proxy
using var client = new HttpClient(handler, disposeHandler: true)
{
Timeout = TimeSpan.FromSeconds(10)
};
try
{
// Test the proxy by fetching your public IP
var result = await client.GetStringAsync("https://api.ipify.org/");
Console.WriteLine($"Proxy {proxyAddress} -> {result}");
}
catch (Exception ex)
{
// Handle timeouts or dead proxies gracefully
Console.WriteLine($"Proxy {proxyAddress} failed: {ex.Message}");
}
}
This loops through a list, builds a new HttpClient for each proxy, and prints which IP you appear as. In a real setup, you'd have a larger pool and maybe retry logic to skip dead proxies.
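If you'd rather go with option 2 and reuse clients, here's a minimal sketch of the IHttpClientFactory approach mentioned above. It assumes the Microsoft.Extensions.Http NuGet package is installed, and the proxy names and addresses are placeholders:
using System.Net;
using Microsoft.Extensions.DependencyInjection;
// Hypothetical proxy pool; swap in your real addresses
var proxies = new Dictionary<string, string>
{
    ["proxy1"] = "http://proxy1.com:8080",
    ["proxy2"] = "http://proxy2.com:8080"
};
var services = new ServiceCollection();
foreach (var (name, address) in proxies)
{
    // Register one named client per proxy, each bound to its own handler
    services.AddHttpClient(name)
        .ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
        {
            Proxy = new WebProxy(address),
            UseProxy = true
        });
}
var factory = services.BuildServiceProvider().GetRequiredService<IHttpClientFactory>();
// Grab a client by name; the factory pools and recycles handlers for you
var client = factory.CreateClient("proxy1");
Console.WriteLine(await client.GetStringAsync("https://api.ipify.org/"));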
Some providers give you a single proxy endpoint that automatically rotates IPs on every request. In that case, your HttpClient code doesn't change at all, and the rotation happens behind that one endpoint.
Using ScrapingBee proxies
If you're tired of hunting for "good" proxies, ScrapingBee gives you a managed pool that just works. You get 1000 free credits on signup, and in proxy mode you're billed only when the response is successful. It also auto-rotates IPs for you, so you don't have to juggle lists.
Here's a clean C# HttpClient proxy example using ScrapingBee's HTTPS proxy endpoint:
using System.Net;
class Program
{
static async System.Threading.Tasks.Task Main()
{
// Don't hardcode secrets — use env vars or your secrets manager
var apiKey = Environment.GetEnvironmentVariable("SCRAPINGBEE_API_KEY");
if (string.IsNullOrWhiteSpace(apiKey))
{
Console.Error.WriteLine("Set SCRAPINGBEE_API_KEY env var first.");
return;
}
// Build ScrapingBee proxy params.
// Important: disable JS rendering unless you actually need it.
var parameters = "render_js=False&premium_proxy=True";
// HTTPS proxy endpoint with inline credentials
var proxyUri = new Uri($"https://{apiKey}:{parameters}@proxy.scrapingbee.com:8887");
var proxy = new WebProxy(proxyUri)
{
BypassProxyOnLocal = false,
UseDefaultCredentials = false,
// Setting explicit credentials helps .NET handle proxy auth reliably
Credentials = new NetworkCredential(apiKey, parameters)
};
var handler = new HttpClientHandler
{
Proxy = proxy,
// ScrapingBee requires turning off SSL verification in proxy mode
ServerCertificateCustomValidationCallback =
HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};
using var client = new HttpClient(handler, disposeHandler: true);
// Test call: should print a ScrapingBee IP (rotates automatically)
var ip = await client.GetStringAsync("https://api.ipify.org/");
Console.WriteLine(ip);
}
}
A couple of important notes:
- Use the HTTPS proxy URL format: https://SCRAPINGBEE_API_KEY:PARAMETERS@proxy.scrapingbee.com:8887
- Replace PARAMETERS with ScrapingBee options. Start simple: render_js=False. Add more with &, for example: render_js=False&premium_proxy=True.
- Proxy mode requires disabling SSL verification (as shown in the code) because the TLS handshake happens with the proxy instead of the target site. The certificate won't match the destination domain, so .NET will reject it and the request will fail. Since you're intentionally trusting ScrapingBee's proxy endpoint, disabling verification is expected in this case.
- IP rotation is automatic. Each request can exit from a different IP, no manual rotation logic needed.
- Scraping Google through proxy mode costs 20 credits per request.
- To forward headers, just set forward_headers=True. No need for Spb- prefixes.
- If using proxy mode with headless browsers (Selenium/Puppeteer), every asset request goes through the proxy and consumes credits. Therefore, reduce images, fonts, etc. wherever possible.
Advanced proxy settings in HttpClient
At this point you've seen the basics of an HttpClient proxy C# setup. If you want a bit more control, HttpClientHandler exposes a few useful settings you can tweak depending on how strict or relaxed your environment is.
Here are some relevant handler options:
| Option | What it does |
|---|---|
| UseProxy | Enables or disables using a proxy entirely. Useful when toggling behavior at runtime. |
| Proxy | Assigns the proxy implementation (WebProxy, custom proxy, etc.). |
| UseDefaultCredentials | Tells the client to use the machine's default credentials (usually not what you want with external proxies). |
| PreAuthenticate | Sends auth headers early for the target server (not the proxy). It helps skip an extra 401 round-trip on server auth. It does nothing for proxy (407) authentication; that's handled through WebProxy.Credentials. |
Example setup:
var handler = new HttpClientHandler
{
UseProxy = true,
Proxy = new WebProxy("http://your-proxy-here:8080"),
UseDefaultCredentials = false,
PreAuthenticate = true
};
using var client = new HttpClient(handler);
On modern .NET, HttpClientHandler internally wraps a SocketsHttpHandler. If you're managing multiple clients in production (e.g., with IHttpClientFactory), consider configuring SocketsHttpHandler directly. It gives you more control over connection pooling, DNS refresh, and proxy lifetime behavior.
var handler = new SocketsHttpHandler
{
Proxy = new WebProxy("http://your-proxy-here:8080"),
UseProxy = true,
PooledConnectionLifetime = TimeSpan.FromMinutes(5), // rotate sockets to refresh DNS/proxy routes
AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};
using var client = new HttpClient(handler);
About SSL certificate validation
Disabling SSL checks (like DangerousAcceptAnyServerCertificateValidator) shows up a lot in proxy examples, but it matters why you're doing it. Below are the common cases and what actually happens in each.
ScrapingBee HTTPS proxy mode
In ScrapingBee's HTTPS proxy mode, the proxy intentionally terminates TLS. Your client establishes a secure TLS connection with the proxy endpoint itself, not with the final target website.
Because of that, the certificate you receive belongs to ScrapingBee, not the destination domain, and .NET's default certificate validation fails due to a hostname mismatch.
In this case, disabling validation is expected and safe, because you already trust ScrapingBee proxy endpoint (authenticated via your API key).
Free or misconfigured proxies
Free, random, or misconfigured proxies often break TLS by accident: self-signed certs, wrong CN/SAN, or incomplete chains.
If you disable SSL validation here, your client will accept a forged or invalid certificate and complete TLS with the proxy instead of the real destination.
This effectively turns the proxy into a man-in-the-middle (MITM) that can read or alter your HTTPS traffic. Only do this for demos or experiments, never in production.
CONNECT or SOCKS proxies
With standard CONNECT or SOCKS tunneling proxies, TLS is passed through end-to-end and no SSL bypass is required. If validation fails here, it usually indicates a misconfigured proxy or an invalid certificate on the server side.
ScrapingBee also supports SOCKS proxy mode if you prefer end-to-end TLS.
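Since .NET 6, HttpClient also understands SOCKS proxies natively: you just use a socks5:// address in the WebProxy. A minimal sketch with a placeholder endpoint (check ScrapingBee's docs for their actual SOCKS host, port, and credentials):
using System.Net;
var handler = new HttpClientHandler
{
    // socks5:// (and socks4://) schemes are supported on .NET 6+
    // Host, port, and credentials below are placeholders
    Proxy = new WebProxy("socks5://your-socks-proxy-host:1080")
    {
        Credentials = new NetworkCredential("username", "password")
    }
};
// TLS runs end-to-end to the target site, so certificate validation stays on
using var client = new HttpClient(handler);
Console.WriteLine(await client.GetStringAsync("https://api.ipify.org/"));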
Best practices for production
- Ensure the proxy supports proper certificate chains
- Use pinned certificates or trusted CA bundles where appropriate
- Prefer failing closed (reject invalid certs) rather than silently bypassing
A more controlled validation approach might look like this:
handler.ServerCertificateCustomValidationCallback = (message, cert, chain, errors) =>
{
// Allow only valid certificates (or check specific thumbprints if needed)
return errors == System.Net.Security.SslPolicyErrors.None;
};
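If you want to go one step further and pin a certificate, as suggested in the best practices above, a minimal sketch might look like this (the thumbprint is a placeholder you'd capture from the known-good certificate beforehand):
// Hypothetical thumbprint of the certificate you expect to see
const string pinnedThumbprint = "EXPECTED-CERT-THUMBPRINT";
handler.ServerCertificateCustomValidationCallback = (message, cert, chain, errors) =>
{
    // Require both a clean chain and an exact certificate match
    return errors == System.Net.Security.SslPolicyErrors.None
        && cert is not null
        && string.Equals(cert.Thumbprint, pinnedThumbprint, StringComparison.OrdinalIgnoreCase);
};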
Retry and backoff in HttpClient proxy
When using proxies, especially rotating ones, you'll eventually hit a slow or unresponsive endpoint. Adding a retry with exponential backoff helps avoid unnecessary failures:
- Try → wait → retry (increase delay each time)
- Skip proxies that consistently fail
- Log which proxies perform well over time
You can roll your own retry logic or use libraries like Polly to handle it cleanly.
Here's a small example using Polly that retries up to 3 times, waiting longer each time:
using System.Net;
using Polly;
using Polly.Retry;
class Program
{
static async System.Threading.Tasks.Task Main()
{
var apiKey = Environment.GetEnvironmentVariable("SCRAPINGBEE_API_KEY");
if (string.IsNullOrWhiteSpace(apiKey))
{
Console.Error.WriteLine("Set SCRAPINGBEE_API_KEY env var first.");
return;
}
// ScrapingBee parameters:
// - render_js=False disables JS rendering (faster, cheaper, default recommended)
// - premium_proxy=True requests higher-quality, less-blocked IPs
var parameters = "render_js=False&premium_proxy=True";
var proxyUri = new Uri($"https://{apiKey}:{parameters}@proxy.scrapingbee.com:8887");
var proxy = new WebProxy(proxyUri)
{
BypassProxyOnLocal = false,
UseDefaultCredentials = false,
// Explicit credentials help HttpClient handle proxy authentication consistently
// Not strictly required but might be needed for full compatibility
Credentials = new NetworkCredential(apiKey, parameters)
};
var handler = new HttpClientHandler
{
Proxy = proxy,
// ScrapingBee's HTTPS proxy mode intentionally requires bypassing SSL verification,
// because the proxy handles TLS termination. Do not use this blindly in non-proxy scenarios.
ServerCertificateCustomValidationCallback =
HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};
// HttpClient is created once with the proxy-enabled handler.
using var client = new HttpClient(handler, disposeHandler: true)
{
// Short timeout ensures we don't wait forever on a broken or slow proxy
Timeout = TimeSpan.FromSeconds(10)
};
// Polly v8 retry pipeline: retries network failures + non-success HTTP codes
var pipeline = new ResiliencePipelineBuilder<HttpResponseMessage>()
.AddRetry(new RetryStrategyOptions<HttpResponseMessage>
{
MaxRetryAttempts = 3,
BackoffType = DelayBackoffType.Exponential, // waits: 1s, 2s, 4s...
Delay = TimeSpan.FromSeconds(1),
// Decide which errors should trigger a retry
ShouldHandle = new PredicateBuilder<HttpResponseMessage>()
.Handle<HttpRequestException>() // connection issues
.Handle<TaskCanceledException>() // timeouts
.HandleResult(r => !r.IsSuccessStatusCode), // non-2xx response
// Optional logging callback
OnRetry = args =>
{
var reason = args.Outcome.Exception?.Message
?? $"HTTP {(int)args.Outcome.Result!.StatusCode}";
Console.WriteLine(
$"Retry {args.AttemptNumber} in {args.RetryDelay.TotalSeconds:0.#}s due to: {reason}");
return default;
}
})
.Build();
// Execute the request through the retry pipeline
var response = await pipeline.ExecuteAsync(
async ct => await client.GetAsync("https://api.ipify.org/", ct),
CancellationToken.None
);
// Throw if response is still unsuccessful after retries
response.EnsureSuccessStatusCode();
var text = await response.Content.ReadAsStringAsync();
Console.WriteLine($"Your (proxied) IP: {text}");
}
}
This code retries requests that fail due to timeouts, connection issues, or non-200 responses. It waits a bit longer after each failure (exponential backoff) so we don't hammer a bad proxy. If the request still fails after the retries, we let it error out. A small improvement that makes a big difference in reliability.
The key idea: a C# HttpClient proxy setup becomes much more stable when you validate certificates properly and retry instead of treating failures as a full stop.
Performance notes
Proxies always add some overhead; there's no way around it. Your request has to travel through another server, get processed, and then forwarded. So:
- Direct HttpClient → fastest
- Authenticated / personal proxies → slightly slower
- Rotating / residential / premium proxies → slowest, but most reliable
In scraping, reliability usually matters more than raw speed. After all, a request that returns correctly in 600ms is better than one that fails instantly and needs 3 retries. If you're scraping at scale, stable success rates will beat micro-optimizing latency every time.
Choosing the right proxy approach
- System / free proxies – good for quick demos or testing snippets. They're slow, unreliable, and often blocked. Fine to prove that a HttpClient proxy C# setup works, not for real workloads.
- Authenticated proxies – solid for small tools and apps with steady traffic. You get predictable behavior and one stable endpoint. This is often the simplest C# HttpClient proxy example for production use.
- DIY proxy rotation – possible, but you'll be dealing with failures, retries, health checks, region targeting, and scaling issues. Works, but gets complicated fast.
- ScrapingBee – handles IP rotation, geolocation, premium clean IPs, and optional JS rendering for you. You send requests, they handle the proxy layer. Best fit if you're scraping at any meaningful scale.
Ready to scrape smarter with C#?
If you want to skip the endless hunt for working proxies and stop fighting drops and bans, give ScrapingBee's proxy mode a try. You get stable, geolocated, automatically rotated IPs without maintaining lists or writing brittle rotation logic. Just send your request through our scraping API and focus on your data, not on babysitting infrastructure.
Whether you're building a small tool or scaling up, pairing C# HttpClient proxy with ScrapingBee keeps things simple and reliable.
Conclusion
So, today we walked through how HttpClient works with proxies, how to send requests through basic and authenticated proxies, how to rotate them, and how to avoid the headaches of managing your own proxy pool.
Free proxies are fine for quick tests, but they fall apart fast in real scraping jobs. Authenticated proxies are more stable, but still leave you to handle rotation, region targeting, and retries. Using a managed service like ScrapingBee removes most of that overhead, letting you focus on the actual data instead of proxy maintenance.
With the setups and examples in this guide, you should be ready to build more reliable and scalable scraping workflows in C#.
C# HttpClient proxy FAQs
Can I use an authenticated proxy with HttpClient?
Yes. You can assign credentials to a WebProxy instance and pass it through an HttpClientHandler. The handler then handles proxy authentication automatically.
using System.Net;
var proxy = new WebProxy("http://proxy.example.com:8080")
{
Credentials = new NetworkCredential("username", "password")
};
var handler = new HttpClientHandler
{
Proxy = proxy,
UseDefaultCredentials = false
};
using var client = new HttpClient(handler);
var response = await client.GetStringAsync("https://api.ipify.org/");
Console.WriteLine(response);
You can also refer to the official Microsoft docs for HttpClientHandler.Proxy.
Can I rotate proxies in HttpClient automatically?
Not by default. HttpClient won't rotate proxies on its own, so you either switch proxies manually (e.g., from a list), or use a service that automatically rotates IPs behind a single endpoint. Managed solutions like ScrapingBee handle rotation for you, which is much easier than implementing your own pool logic.
How do I handle SSL errors when using a proxy in C#?
SSL errors usually mean one of two things:
- You're using a TLS-terminating proxy (e.g., ScrapingBee HTTPS proxy mode) — the cert won't match the target host, so .NET rejects it by default. In that specific case, disabling validation is expected because you trust the proxy endpoint.
- You're using a standard HTTP CONNECT proxy — TLS is end-to-end and you should not disable validation. If it fails, fix the certificate chain or proxy config.
For production, prefer valid chains or explicit validation callbacks over blanket bypasses.
Does HttpClient use system proxy settings by default?
Yes, if you don't specify a proxy handler, HttpClient will use the system's default proxy settings. You can override this by setting UseProxy = false or by providing your own WebProxy instance through HttpClientHandler.Proxy.
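For example, to force a direct connection and ignore any system proxy:
var handler = new HttpClientHandler { UseProxy = false };
using var client = new HttpClient(handler); // goes straight out, no system proxy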
What's the best proxy approach for production web scraping?
For reliable scraping at scale, managed proxy services perform much better than free lists or DIY rotation. Services like ScrapingBee provide automatic IP rotation, geolocation, and clean exit nodes, so your requests succeed more often and require less maintenance. This removes most proxy-related headaches and lets you focus on data extraction instead of infrastructure.
Should I reuse a single HttpClient instance?
Yes. Reusing HttpClient is generally the right move because it prevents socket exhaustion and improves performance.
However, you can't swap handlers or proxies on an existing client. Each HttpClient is bound to its handler when created. If you need multiple proxies, keep a small pool of preconfigured clients (one per proxy) or use IHttpClientFactory to manage them cleanly.
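A minimal sketch of that pool idea, with placeholder proxy addresses:
using System.Net;
// One long-lived client per proxy, built once and reused across requests
var clients = new Dictionary<string, HttpClient>();
foreach (var address in new[] { "http://proxy1.com:8080", "http://proxy2.com:8080" })
{
    clients[address] = new HttpClient(new HttpClientHandler { Proxy = new WebProxy(address) });
}
var response = await clients["http://proxy1.com:8080"].GetStringAsync("https://api.ipify.org/");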
How do I set a request timeout?
Set client.Timeout or use a CancellationToken. Example:
client.Timeout = TimeSpan.FromSeconds(10);
Shorter timeouts help detect dead proxies quickly.
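Or use a CancellationToken, which caps a single request instead of the whole client:
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));
var response = await client.GetAsync("https://api.ipify.org/", cts.Token); // throws if 10s pass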

Yasoob is a renowned author, blogger and a tech speaker. He has authored the Intermediate Python and Practical Python Projects books and writes regularly. He is currently working on Azure at Microsoft.


