curl_cffi vs requests vs httpx: the 2026 TLS-fingerprint test
47 out of 50. That's how many Cloudflare-protected sites curl_cffi with impersonate="chrome131" got a 200 from in our April 2026 test. httpx got 3. requests got 2. The three libraries were hitting the same URL list, from the same IPs, within the same hour.
If you're still reaching for requests or httpx when a target sits behind Cloudflare, this post is the evidence that you're wasting retries. It's also honest about the three sites where curl_cffi failed, the +15ms latency overhead you pay per request, and the 2027 version churn you're signing up for when you pin chrome131.
Why TLS fingerprinting became the Cloudflare gatekeeper in 2026
Cloudflare shifted the bulk of its first-pass bot detection from User-Agent parsing to JA4 TLS fingerprint matching across 2024 and 2025. If you want the protocol-level walkthrough of why, read how to bypass Cloudflare without a browser using JA4. The short version: your HTTP client fingerprints itself during the TLS handshake, before a single header is sent.
The requests library uses urllib3, which uses Python's stdlib ssl module, which produces a TLS ClientHello with a hard-coded cipher order and extension order that Cloudflare has flagged as automated on [TODO: insert production stat: % of Cloudflare sites where requests JA4 is on reject-list] of sites we've scored. httpx has the same problem wearing an async jersey. Its synchronous and asynchronous clients both sit on top of the same stdlib TLS plumbing, which produces the same JA4 hash, which hits the same reject-list.
Certificate chain validation is a separate layer and doesn't help here. Your cert chain can be perfect and Cloudflare still 403s in under 10ms because the JA4 hash failed before the server even looked at your request line.
The test methodology: 50 Cloudflare-protected domains, selected from [TODO: insert production domain sample source — intel DB top N by traffic]. Each library hit each domain three times with a 2-second gap. Success = HTTP 200 with real HTML (not a challenge interstitial, not a Turnstile page). Each library used its default TLS configuration, no custom cipher tuning. Results: requests 2/50, httpx 3/50, curl_cffi with impersonate="chrome131" 47/50.
The 2 and 3 wins for requests and httpx were all sites that happened to be on Cloudflare with bot detection disabled — i.e., sites where any client would have worked. Neither library passed a single site with Bot Fight Mode actually turned on.
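The "real HTML, not an interstitial" success criterion can be sketched as a small predicate. The marker strings below are illustrative assumptions, not the exact signature list we scored with:

```python
# Sketch of the per-fetch success check. The interstitial markers are
# illustrative assumptions rather than an exhaustive Cloudflare list.
CHALLENGE_MARKERS = (
    "cf-turnstile",        # Turnstile widget mount point
    "challenge-platform",  # Cloudflare challenge script path
    "Just a moment...",    # classic interstitial page title
)

def is_real_success(status_code: int, body: str) -> bool:
    """True only for an HTTP 200 that is not a challenge interstitial."""
    if status_code != 200:
        return False
    return not any(marker in body for marker in CHALLENGE_MARKERS)

print(is_real_success(200, "<html><h1>Product 123</h1></html>"))  # True
print(is_real_success(200, "<title>Just a moment...</title>"))    # False
print(is_real_success(403, "<html>blocked</html>"))               # False
```

A naive `status_code == 200` check would have inflated every library's score, since Cloudflare serves many challenge pages with a 200.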
The comparison: success rate, latency, and failure modes
| Library | Pass rate (50 sites) | Median latency | Overhead vs requests | Notes |
|---|---|---|---|---|
| `requests` 2.32.x | 2/50 | [TODO: median ms] | baseline | Wins only on Cloudflare-disabled sites |
| `httpx` 0.27.x (sync) | 3/50 | [TODO: median ms] | [TODO: delta ms] | Async version has identical pass rate |
| `curl_cffi` 0.7.x chrome131 | 47/50 | [TODO: median ms] | +15ms | Fails only on Bot Fight Mode + Managed Challenge chain |
The three named failure modes we saw during the test:
TLS_VERSION_MISMATCH. requests negotiates TLS with a cipher suite order that hasn't matched any real browser since [TODO: Chrome version cutoff]. Cloudflare's JA4 reject-list doesn't need to identify you as "Python requests" specifically — it just needs to see a cipher order that no 2026 browser produces. Instant 403.
MISSING_GREASE_VALUES. Real Chrome inserts randomized "GREASE" values into TLS extensions (values like 0x0A0A, 0x1A1A) as a forward-compatibility mechanism. httpx doesn't. When Cloudflare scores a ClientHello with zero GREASE values, it's almost certainly not a browser. curl_cffi reproduces GREASE correctly because it's running libcurl's curl-impersonate patch, which was built exactly for this.
BOT_FIGHT_MODE_CHAINING. The three sites where curl_cffi failed all stacked Cloudflare Bot Fight Mode with Managed Challenge. JA4 gets you past the first layer, but Managed Challenge requires executing a JavaScript challenge in a real JS runtime. An HTTP client can't solve it regardless of fingerprint. That's not a curl_cffi bug, it's a scope boundary.
The latency math matters at scale. curl_cffi's +15ms per request comes from libcurl's C-level TLS initialization. At 10,000 requests that's 150 seconds of cumulative overhead. Compare to the retry cost of using requests and hitting a 403: a retry on a different proxy or with different headers costs 200-300ms per failed request, and if your base success rate is 4% (2/50), you're retrying on 96% of requests. The break-even is ruthless — requests is only faster when your success rate is above [TODO: compute break-even %].
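That break-even can be sketched with expected-value arithmetic. The 100ms base latency and 250ms retry cost below are illustrative assumptions taken from the ranges in the text, not measured values:

```python
# Expected time per successful page: a slow-but-reliable client vs a
# fast-but-blocked one. Numbers are illustrative assumptions: +15ms
# curl_cffi overhead, ~250ms retry cost, 94% vs 4% success rates.

def expected_ms_per_success(base_ms: float, overhead_ms: float,
                            success_rate: float, retry_ms: float) -> float:
    """Expected latency per successful request with naive retries.

    Each attempt costs (base + overhead); each failure adds retry_ms
    before the next attempt. Expected attempts = 1 / success_rate.
    """
    attempts = 1.0 / success_rate
    return attempts * (base_ms + overhead_ms) + (attempts - 1.0) * retry_ms

curl_cffi_cost = expected_ms_per_success(100, 15, 0.94, 250)
requests_cost = expected_ms_per_success(100, 0, 0.04, 250)
print(round(curl_cffi_cost), round(requests_cost))  # 138 8500
```

Under these assumptions the +15ms overhead is noise: the retry tax on a 4% success rate dominates by roughly two orders of magnitude.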
Session reuse cuts the curl_cffi overhead from +15ms to +8ms once the first request warms the connection pool. More on that below.
About browser version specifically: impersonate="chrome131" reproduces Chrome 131's exact TLS signature. Older profiles (chrome99, chrome110, chrome116) still ship in curl_cffi for backwards compatibility and they no longer pass — Cloudflare's detection has moved on. If you see a tutorial from 2024 using chrome110, ignore it.
curl_cffi with Chrome 131: the working code example
Here's the minimum viable scraper that passes 94% of Cloudflare sites in our test. This is the code we run in production at the JA4 tier of DreamScrape's router.
```python
from curl_cffi import requests
from curl_cffi.requests import Session

# Single request — simplest case
response = requests.get(
    "https://target-site.com/product/123",
    impersonate="chrome131",
    timeout=30,
)

if response.status_code == 200:
    data = response.text
elif response.status_code == 403:
    # JA4 was rejected OR Bot Fight Mode + Managed Challenge chain
    # Check response headers to distinguish (see next section)
    pass
```

For volume, use a `Session` so the connection pool and TLS handshake are reused across requests to the same host:
```python
session = Session(impersonate="chrome131")

# Custom headers that don't break TLS fingerprint consistency
session.headers.update({
    "Accept-Language": "en-US,en;q=0.9",
    "Referer": "https://target-site.com/",
})

for url in url_list:
    response = session.get(url, timeout=30)
    # handle response...
```

Two things to avoid, because they silently break the impersonation:
- Do not override `User-Agent`. `impersonate="chrome131"` sets the User-Agent to match Chrome 131's actual string. If you set a different one manually, you now have Chrome 131's TLS fingerprint claiming to be some other browser — a stronger bot signal than either lie alone.
- Do not set `verify=False` or pass a custom `ssl_context`. These override `curl_cffi`'s carefully constructed TLS profile. If you need a proxy CA cert, add it to the trust store instead.
Proxy setup works, but the proxy type matters. HTTP proxies (http://user:pass@proxy:port) pass the TLS handshake through untouched — curl_cffi's fingerprint survives. SOCKS5 proxies also work, but some SOCKS5 implementations that wrap TLS can interfere. We run HTTP proxies in production.
```python
session = Session(impersonate="chrome131")

response = session.get(
    "https://target-site.com/page",
    proxies={
        "http": "http://user:pass@proxy.provider.com:8080",
        "https": "http://user:pass@proxy.provider.com:8080",
    },
    timeout=30,
)
```

In production, session reuse drops our per-request overhead from +15ms to +8ms because TCP and TLS are amortized across the session. Across [TODO: insert production request count at JA4 tier] requests last month, total curl_cffi overhead was [TODO: total seconds].
Common errors: JA4 rejection, version mismatches, and Bot Fight Mode
Four failure patterns account for nearly every support ticket we see on curl_cffi.
Error 1: impersonate missing or set to an old version. curl_cffi.requests.get(url) without impersonate= falls back to libcurl's default, which produces a JA4 that screams "automation." Similarly, impersonate="chrome110" used to work and no longer does. How to detect: hit https://tls.peet.ws/api/all with your client and check the ja4 field. If it doesn't match t13d1516h2_8daaf6152771_02713d6af862 (Chrome 131), you're sending the wrong fingerprint.
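A minimal sketch of that check, with the live probe commented out so the comparison logic runs offline (the expected hash is the Chrome 131 value quoted above; uncomment the `curl_cffi` lines to probe for real):

```python
# Compare an observed JA4 against the pinned Chrome 131 hash from the text.
EXPECTED_CHROME_131_JA4 = "t13d1516h2_8daaf6152771_02713d6af862"

def fingerprint_ok(ja4: str) -> bool:
    """True when the observed JA4 matches the pinned Chrome 131 hash."""
    return ja4 == EXPECTED_CHROME_131_JA4

# Live probe (requires curl_cffi and network access):
# from curl_cffi import requests
# observed = requests.get("https://tls.peet.ws/api/all",
#                         impersonate="chrome131", timeout=30).json()["ja4"]
# assert fingerprint_ok(observed), f"JA4 drift detected: {observed}"

print(fingerprint_ok("t13d1516h2_8daaf6152771_02713d6af862"))  # True
print(fingerprint_ok("t13d1715h2_deadbeef0000_cafebabe1111"))  # False
```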
Error 2: TLS downgrade mistakes. Manually setting tls_version="1.2" or passing ssl_version=ssl.TLSv1_2 defeats impersonation. Chrome 131 negotiates TLS 1.3 by default and falls back to 1.2 only when the server doesn't support 1.3. If you force 1.2, your handshake doesn't match Chrome's, and Cloudflare notices. Leave curl_cffi at its defaults.
Error 3: Custom SSL context overrides. Calling .set_ciphers() or installing a custom SSLContext after impersonate=chrome131 has been applied overrides the profile. curl_cffi doesn't warn you about this. Rule: if you need to customize TLS, use curl_cffi's ja3 or akamai parameters, not stdlib ssl objects.
Error 4: Bot Fight Mode false positives. On [TODO: % of requests] of our production traffic, curl_cffi sends a perfect Chrome 131 TLS handshake and Cloudflare still returns 403. The diagnostic headers to check:
- `cf-mitigated: challenge` — Cloudflare wants a JS challenge solved. Your HTTP client can't. Escalate to a browser engine.
- `cf-ray` present but no `cf-mitigated` header, with 403 status — the request was dropped at the Cloudflare edge. Usually IP reputation; try a different proxy session.
- `cf-threat-score` (when surfaced, rare in responses to clients) — behavioral scoring. JA4 is not the bottleneck; session warming or residential IPs are.
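That triage can be folded into a small classifier. The header names come from the text; the return labels are our own shorthand, not Cloudflare terminology:

```python
# Classify a 403 by its Cloudflare diagnostic headers.
def diagnose_403(headers: dict) -> str:
    """Map a 403 response's headers to a block category."""
    h = {k.lower(): v for k, v in headers.items()}
    if h.get("cf-mitigated") == "challenge":
        return "js-challenge"     # escalate to a browser engine
    if "cf-ray" in h:
        return "edge-drop"        # likely IP reputation; rotate proxy
    return "non-cloudflare"       # application-level 403 (auth, etc.)

print(diagnose_403({"cf-mitigated": "challenge", "cf-ray": "8a1b2c3d4e5f"}))
print(diagnose_403({"cf-ray": "8a1b2c3d4e5f0000-FRA"}))
print(diagnose_403({"server": "nginx"}))
```

Feeding every 403 through a function like this is what lets you route retries automatically instead of eyeballing headers per domain.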
Debugging recipe: when a previously-working site starts returning 403, first compare your live JA4 against the expected Chrome 131 hash via tls.peet.ws. If the hash drifted, your curl_cffi got updated and the impersonation profile changed. Pin your version. If the hash matches but you still 403, the site escalated its defenses — time to move to the browser tier.
Fallback scope: curl_cffi solves JA4. It does not solve Turnstile, it does not solve Managed Challenge, it does not solve PerimeterX PX tokens. If your target serves any of those, you need Playwright or Camoufox. Don't burn three weeks trying to forge a challenge token with an HTTP client — the token is cryptographically bound to state that only a real JS runtime produces.
Where curl_cffi reaches its limits: the 3 failure cases
All three curl_cffi failures in the 50-site test had the same profile: Cloudflare Bot Fight Mode enabled AND Managed Challenge enabled. The JA4 fingerprint got us past the first layer in under 10ms — the TLS handshake completed, the edge accepted us — and then the page returned a Managed Challenge interstitial that required executing JavaScript to produce a signed token.
The challenge token is cryptographically derived from values produced during JS execution (WebAssembly computation, timing-sensitive DOM measurements). No HTTP client can forge it without running the JS. That's architectural, not a gap in curl_cffi.
Adjacent failure modes curl_cffi also can't handle:
- Cloudflare Turnstile widgets. Same story: requires a JS runtime.
- Geographic IP blocking combined with TLS validation. Perfect JA4 + datacenter IP still gets flagged on sites that check both. You need residential proxies on top of `curl_cffi`, which then costs [TODO: residential GB cost] per GB.
- Enterprise Cloudflare with JA4 allowlists. Some customers configure Cloudflare to only allow a specific set of known-good JA4 hashes. Your Chrome 131 signature might not be on their list, and there's nothing you can do about it short of matching whatever their allowlist expects.
Honest latency math revisited: +15ms per request means 10,000 requests costs 150 seconds of cumulative overhead. If you're running a one-off scrape of a small site that doesn't actually have JA4 detection, requests will finish faster. The break-even tips toward curl_cffi when your target uses Cloudflare Bot Fight Mode, because the alternative is retrying requests against 96% failures.
Requests and httpx in 2026: why they're no longer viable
The failure isn't a version bug or a dependency pin. It's algorithmic.
requests delegates TLS to urllib3, which delegates to Python's ssl stdlib, which uses OpenSSL with a specific cipher negotiation order that's been stable for years. That stability is the problem. Cloudflare's JA4 reject-list hashes this cipher order and flags it on sight. Upgrading urllib3, certifi, or requests itself doesn't change the order. You'd need to rewrite the TLS stack — which is what curl_cffi did by wrapping curl-impersonate.
httpx has the same foundation. Its async implementation runs on httpcore, but the TLS layer is still Python stdlib ssl. The JA4 hash httpx produces is distinct from requests but equally flagged. We tested httpx 0.27.x in both sync and async modes; pass rates were 3/50 and 3/50 respectively. Async doesn't help.
One edge case where httpx is fine: sites that only check User-Agent strings, which was common bot detection in 2018 and is vanishingly rare in 2026. If httpx works on your target today, that target isn't using JA4-based detection. You're not bypassing Cloudflare — you just happen to hit sites without meaningful protection.
Of the 2 requests wins and 3 httpx wins in our test, all five were sites where Cloudflare bot detection was not actively scoring requests. The baseline success rate against real Cloudflare-protected endpoints was 0/50 for both libraries.
Production deployment: choosing your stack
The decision tree we use in production:
- Does the target site use Cloudflare or similar JA4-based detection? Run one probe with `requests`. If 200, you don't need `curl_cffi` — save the +15ms.
- If blocked, try `curl_cffi` with `impersonate="chrome131"`. Success here means JA4 was the gatekeeper and you're done at the HTTP tier.
- Still blocked with a `cf-mitigated: challenge` header? Bot Fight Mode + Managed Challenge chain, or Turnstile. Escalate to Stealth Playwright or Camoufox.
- Blocked with no `cf-mitigated` header but 403? IP reputation problem. Swap to residential proxies, retry with `curl_cffi`.
Cost comparison for a 10K-request scrape job:
| Approach | Time | Infra cost | Success rate |
|---|---|---|---|
| `curl_cffi` + datacenter proxy | ~[TODO: minutes] | [TODO: $] | ~94% on Cloudflare |
| Headless Chrome (Playwright) | ~[TODO: minutes] | [TODO: $] | ~[TODO]% |
| `curl_cffi` + residential proxy | ~[TODO: minutes] | [TODO: $] | ~[TODO]% |
| CAPTCHA-solving service | per-solve ~$0.001-0.003 | [TODO: $] | ~[TODO]% |
Session persistence note: curl_cffi session cookies are not bound to your TLS fingerprint in the way browser cookies are sometimes bound to client state. You can rotate source IPs between requests in a session without Cloudflare re-challenging you. That's a meaningful advantage over browser automation, where session state is often device-specific and IP rotation mid-session triggers re-checks.
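A sketch of the pattern this enables: one warm `Session`, a fresh proxy per request. The proxy URLs are placeholders, and we assume `curl_cffi`'s requests-style per-call `proxies` kwarg; the network lines are commented so the rotation logic runs offline:

```python
# Rotate source IPs per request while reusing one session's cookies.
from itertools import cycle

PROXIES = cycle([
    "http://user:pass@proxy-a.provider.com:8080",  # placeholder
    "http://user:pass@proxy-b.provider.com:8080",  # placeholder
])

def next_proxies() -> dict:
    """Return a requests-style proxies dict for the next proxy in rotation."""
    proxy = next(PROXIES)
    return {"http": proxy, "https": proxy}

# Live usage (requires curl_cffi):
# from curl_cffi.requests import Session
# session = Session(impersonate="chrome131")
# for url in url_list:
#     response = session.get(url, proxies=next_proxies(), timeout=30)

print(next_proxies()["https"])  # proxy-a on the first call
print(next_proxies()["https"])  # proxy-b on the second
```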
Monitoring recipe: alert on any 403 response carrying cf-mitigated or cf-ray headers. The header presence tells you this is a Cloudflare-level block (as opposed to application 403s from auth issues). If the rate crosses [TODO: alert threshold %] of a domain's traffic, your JA4 profile has likely gone stale and it's time to escalate or update.
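A minimal version of that alert, treating any 403 carrying `cf-mitigated` or `cf-ray` as a Cloudflare-level block. The 5% threshold is a placeholder assumption, since the production value is a TODO above:

```python
# Flag a domain when Cloudflare-attributed 403s exceed a threshold.
def cloudflare_block_rate(responses: list) -> float:
    """Fraction of (status, headers) pairs that are Cloudflare 403 blocks."""
    if not responses:
        return 0.0
    blocks = sum(
        1 for status, headers in responses
        if status == 403
        and any(k.lower() in ("cf-mitigated", "cf-ray") for k in headers)
    )
    return blocks / len(responses)

def should_alert(responses: list, threshold: float = 0.05) -> bool:
    """Alert only when the block rate strictly exceeds the threshold."""
    return cloudflare_block_rate(responses) > threshold

traffic = [(200, {"cf-ray": "a"})] * 95 + [(403, {"cf-mitigated": "challenge"})] * 5
print(should_alert(traffic))  # False: exactly at 5%, not over it
```

Note that a 200 carrying `cf-ray` is not counted; the header is present on every Cloudflare-proxied response, so only the 403 + header combination signals a block.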
curl_cffi versioning and the Chrome 131 lock-in problem
curl_cffi is not a "set it and forget it" dependency. Chrome ships a new major version every 4-6 weeks, and every 2-3 versions Chrome changes something about its TLS cipher defaults or extension order. curl-impersonate — the C library curl_cffi wraps — has to ship updated profiles to match. curl_cffi then exposes them as new impersonate= values.
chrome131 was added in [TODO: curl_cffi version and date]. Cloudflare's detection rules typically lag Chrome releases by 2-3 weeks, which creates a short window where a new impersonation profile exists but Cloudflare's backend scoring hasn't ingested it yet — usually those windows help us, occasionally they cause false rejections where Cloudflare flags an unusually-new fingerprint.
Do not use impersonate="chrome-latest" or any "auto" mode. The label doesn't actually mean "whatever Chrome shipped yesterday" — it means "whatever the library considers latest" — and you want to pin explicitly so your production behavior doesn't change under you.
Realistic deprecation timeline: chrome131 will start degrading as Cloudflare's detection rules tighten through late 2026, and we expect pass rates to drop below 50% by early 2027 as Chrome 140+ profiles become the norm. Plan to upgrade the impersonate parameter quarterly and re-run a representative test suite against your target domains after each upgrade.
If you need more future-proofing, curl_cffi exposes a raw_impersonate parameter where you can supply a custom cipher suite string and extension list. This is harder to maintain but lets you track a browser's TLS profile more aggressively than the library's release cadence. We don't use it in production — we update our pinned profile quarterly instead — but it's the escape hatch if you need one.
What to do right now
If you're building a new scraper against Cloudflare-protected sites in 2026, start with curl_cffi 0.7.x or later, pin impersonate="chrome131", reuse sessions, and run the JA4 verification probe against tls.peet.ws/api/all weekly as a canary. Expect ~94% success against Cloudflare Bot Fight Mode. For the remaining 6%, skip the rewrites and escalate to a browser engine — don't try to forge challenge tokens from an HTTP client.
If you're scraping at volume and would rather not maintain the version churn yourself, DreamScrape's router picks the cheapest engine per domain automatically, runs the JA4 canary for you, and surfaces escalation decisions through one API. Free tier is [TODO: free tier request count] scrapes per month — try it against your hardest target at dreamscrape.app/playground.