They built walls.
I spent 7 years finding doors.
I started scraping in 2018. Since then I have worked across five companies, built hundreds of production spiders, and fought every major anti-bot system that exists. This guide is everything that actually worked.
The scraping decision flow
Walk the steps in order. Stop at the first win. Complexity and cost increase to the right. Most production scraping is solved at steps 1–3.
1. Frida · mitmproxy
2. Burp Suite · webclaw
3. chompjs · Parsel
4. Scrapy · Scrapling
5. CloakBrowser
6. Zyte · Firecrawl
The six steps above tell you what order to try. But to know which step to stop at, and why skipping ahead costs you days, you first need to understand how the detection actually works. Let's go deeper.
Before you send a single byte, you've already been judged.
The moment your scraper opens a TCP connection to a CDN, a fingerprinting pipeline triggers. By the time your HTTP request body arrives, four independent scoring systems have already assigned you a trust score. Here's exactly what each one measures, and why defeating just one is never enough.
Layer 1, TLS Fingerprinting: The Handshake That Betrays You
This fires before a single HTTP byte is exchanged. Understanding it is non-negotiable.
JA3 concatenated TLS version + cipher suites + extensions + elliptic curves + curve formats, then MD5-hashed the string. This produced a stable 32-char hex hash. Python's requests library has always had the same JA3 hash. Every major anti-bot catalogued it. By 2021, your Python scraper was identifiable before the first HTTP header.

JA3's weakness: Chrome started randomising TLS extension order in 2022. Same browser, different JA3 every session. The fingerprint became unstable and unreliable.
JA4 format: t13d1516h2_8daaf6152771_b0da82dd1658. t = TCP (q would mean QUIC), 13 = TLS 1.3, d = SNI present, 15 = cipher count, 16 = extension count, h2 = ALPN (HTTP/2); the remainder is truncated hashes of the cipher and extension lists. JA4+ extends this with JA4H (HTTP header fingerprint), JA4X (X.509 certificate), JA4SSH (SSH handshake), and JA4T (TCP window + options). Cloudflare deployed it in a Rust crate at the CDN edge, Akamai in an EdgeWorker. Both fire before your request reaches origin.
HTTP/2 adds a second fingerprint: the SETTINGS frame values HEADER_TABLE_SIZE, MAX_CONCURRENT_STREAMS, INITIAL_WINDOW_SIZE, MAX_FRAME_SIZE, MAX_HEADER_LIST_SIZE. Chrome's exact values are documented. Python's httpx sends different values; curl sends different values. The ordering of these settings, the window update frame sizes, and the HPACK compression decisions all create a secondary fingerprint that cannot be spoofed without rewriting the HTTP/2 client, which is exactly what curl_cffi does.
QUIC is next: Chrome's QUIC stack differs from libcurl's QUIC implementation, which differs from Python's aioquic. Each leaves a unique signature in the Initial packets. Current status: JA4+ covers QUIC, and Cloudflare has begun collecting QUIC fingerprints. Not yet widely enforced for blocking, but the infrastructure is live. Tools like curl_cffi are actively implementing QUIC parity.
```python
# Test your actual JA4 fingerprint against tls.browserleaks.com
import requests
from curl_cffi import requests as cffi

# ❌ requests exposes the Python/urllib3 JA4, blocked immediately
r1 = requests.get("https://tls.browserleaks.com/json")
print(r1.json()["ja4"])
# → t13d1516h2_8daaf6152771_b0da82dd1658 (Python fingerprint, catalogued, blocked)

# ✓ curl_cffi emits Chrome 124's exact JA4 hash, HTTP/2 frames, cipher order
r2 = cffi.get(
    "https://tls.browserleaks.com/json",
    impersonate="chrome124",  # also: chrome110, chrome107, safari17
)
print(r2.json()["ja4"])
# → t13d1517h2_c4b4b4b4b4b4_aaaaaaaaaa (Chrome 124 fingerprint, passes)

# Also check the HTTP/2 fingerprint
print(r2.json()["http2"])  # Chrome's exact SETTINGS frame values
```
All the JA4+ research is academic until you ship it. Three tiers of solution, in order of how often you should reach for each:
1. HTTP-first: curl_cffi (Python), tls-client (Go), noble-tls, hrequests. One line of code, exact Chrome/Firefox JA4. Drop-in replacement for requests: curl_cffi.requests.get(url, impersonate="chrome131")
2. Scrapy middleware: scrapy-stealth adds TLS + HTTP/2 fingerprinting + proxy rotation + fingerprint cycling to existing Scrapy spiders via DOWNLOADER_MIDDLEWARE. Per-request engine switching keeps simple URLs fast: meta={"stealth": {"profile": "chrome_147"}} (sketch below).
3. Full browser: Camoufox, rayobrowse, or CloakBrowser. C++ binary patches ship a real-browser TLS stack along with everything else. Cost: 200MB+ memory per browser instance.
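A sketch of the tier-2 pattern, using the meta schema from above. The middleware import path and priority here are assumptions; check scrapy-stealth's own README for the exact names:

```python
import scrapy

class ProductSpider(scrapy.Spider):
    name = "products"
    custom_settings = {
        "DOWNLOADER_MIDDLEWARES": {
            # Path is an assumption; use the one from scrapy-stealth's docs
            "scrapy_stealth.StealthMiddleware": 543,
        },
    }

    def start_requests(self):
        # Per-request engine switching: plain pages stay on the fast TLS engine
        yield scrapy.Request(
            "https://target.example.com/catalog",   # hypothetical target
            meta={"stealth": {"profile": "chrome_147"}},
        )

    def parse(self, response):
        yield {"title": response.css("h1::text").get()}
```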
Three pitfalls kill TLS spoofing in practice:
1. Spoofing headers but not the handshake. If your User-Agent claims Chrome but your TLS stack is urllib3, you flag faster than no spoofing at all; the mismatch is the signal.
2. Forgetting HTTP/2 SETTINGS frames. Even perfect JA4 fails if your HTTP/2 SETTINGS (header table size, max concurrent streams, initial window size) do not match the browser you claim to be. curl_cffi and tls-client handle this; rolling your own usually does not.
3. Using stale impersonation profiles. Chrome 120 fingerprints in 2026 are themselves suspicious; real users rolled forward. Keep impersonate="chrome131" or newer.
Layer 2, JavaScript Fingerprinting: The Page That Interrogates You
Once your TLS passes, the page loads its anti-bot script. This is a 500KB+ obfuscated interrogation that runs dozens of tests in parallel.
The script draws text and shapes via canvas.getContext('2d'), then calls canvas.toDataURL(). The exact pixel output varies by:
- GPU manufacturer and model (NVIDIA vs AMD vs Intel)
- Driver version and sub-pixel rendering
- OS-level font rendering (Windows ClearType vs macOS CoreText)
- Canvas size and DPI scaling
A headless Chromium with no GPU produces a software-rendered canvas with a known hash. Botasaurus and CloakBrowser spoof this at the C++ level by injecting slight noise into the pixel values before toDataURL() returns, enough to vary the hash while remaining visually identical.
The script reads gl.getParameter(gl.RENDERER) and gl.getParameter(gl.VENDOR). Real Chrome returns something like ANGLE (Intel, Intel(R) UHD Graphics 620 Direct3D11 vs_5_0 ps_5_0). Headless Chrome returns a generic string or crashes on WebGL entirely. Anti-bots cross-reference: if WebGL says "Intel UHD 620" but the Canvas hash shows software rendering, that's a contradiction; you're flagged.

The WebGL extensions list is also fingerprinted. Real GPUs expose 30–40 extensions; software renderers expose a different subset. The exact combination is GPU-specific and stable across sessions.
The script creates an AudioContext, generates a sine wave through an OscillatorNode, runs it through a DynamicsCompressorNode, and reads the output buffer values. The floating-point output depends on:
- CPU architecture (x86 vs ARM floating-point precision)
- Operating system audio stack
- Audio driver implementation
Headless environments often return 0.0 across the buffer (no audio context), or a software-emulated value that differs from hardware. CloakBrowser patches this at the Chromium C++ audio rendering layer.
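Before trusting any stealth setup, probe what your own browser returns for the canvas and WebGL checks above. A minimal sketch using Playwright's evaluate(); the JS mirrors what anti-bot scripts run:

```python
import hashlib
from playwright.sync_api import sync_playwright

JS = """() => {
  const c = document.createElement('canvas');
  const ctx = c.getContext('2d');
  ctx.font = '14px Arial';
  ctx.fillText('fingerprint probe', 2, 15);
  const gl = document.createElement('canvas').getContext('webgl');
  const ext = gl && gl.getExtension('WEBGL_debug_renderer_info');
  return {
    canvas: c.toDataURL(),
    vendor: ext ? gl.getParameter(ext.UNMASKED_VENDOR_WEBGL) : null,
    renderer: ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : null,
  };
}"""

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    fp = page.evaluate(JS)
    print("canvas hash:", hashlib.md5(fp["canvas"].encode()).hexdigest())
    print("WebGL:", fp["vendor"], "/", fp["renderer"])  # 'SwiftShader' = software-rendering tell
    browser.close()
```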
When JS patches a native function, for example the navigator.webdriver getter, it replaces the getter with a custom function. Calling Function.prototype.toString.call(getter) on the patched function returns function () { [custom code] } instead of function () { [native code] }. Kasada specifically tests dozens of native functions this way. playwright-stealth patches them in JavaScript, so toString() reveals the patch. PatchRight fixes this at the Python source level, before Chrome even starts; there's no JS to inspect.
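You can reproduce this exact probe against your own stack; a real Chrome build should report [native code]:

```python
from playwright.sync_api import sync_playwright

JS = """() => {
  const d = Object.getOwnPropertyDescriptor(Navigator.prototype, 'webdriver');
  return Function.prototype.toString.call(d.get);
}"""

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    print(page.evaluate(JS))
    # Real Chrome: "function get webdriver() { [native code] }"
    # A JS-level stealth patch shows a plain function body instead
    browser.close()
```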
The script fires fetch('chrome-extension://[id]/manifest.json') probes. Real Chrome browsers have at least a few extensions installed (ad blockers, password managers, etc.). A headless browser returns net::ERR_FAILED on all 60 requests simultaneously, a statistically impossible result for a real user. The extension IDs probed include:
- cjpalhdlnbpafiamejdnhcphjbkeiagm (uBlock Origin)
- hdokiejnpimakedhajhdlcegeplioahd (LastPass)
- nngceckbapebfimnlniiiahkandclblb (Bitwarden)
Fix: CloakBrowser loads real extension profiles. You install 1Password or Bitwarden into it so some probes return real manifest data.
Beyond navigator.webdriver, CDP-controlled browsers expose themselves through subtler signals:
- Timing: CDP's Runtime.enable command leaves a timing gap between page parse and script execution that doesn't exist in real Chrome.
- Execution context: window.cdc_adoQpoasnfa76pfcZLmcfl_Array and similar artifacts left by ChromeDriver are checked.
- Permission API: Real Chrome returns realistic permission states. ChromeDriver returns defaults inconsistent with a "normal" browser.
- Plugins: Headless Chrome has zero plugins. Real Chrome always has at least the PDF viewer plugin.
Camoufox's solution: it uses Mozilla's Juggler protocol, which sits below CDP entirely; none of these artifacts exist.
Layer 3, Network Identity: The Five Vectors That Must Agree
Five vectors must agree: IP country, timezone, Accept-Language, WebRTC candidate, DNS resolver location. A US proxy with Accept-Language: ur-PK fails immediately. All five must tell a consistent geographic story. This is why setting geoip=True in Camoufox is critical: it auto-configures all five to match the proxy's exit country, including aligning WebRTC candidates.

Layer 3.5, DOM Honeypots: The Trap Doesn't Care About Your Fingerprint
Honeypots are elements invisible to humans but present in the DOM: display:none, visibility:hidden, opacity:0, zero-dimension elements, off-screen positioning, fields with tabindex="-1", or links placed after the closing </body> tag. A real user never touches them; a naive crawler clicks everything. Filter every candidate for actual visibility (e.g. via getBoundingClientRect()) before interacting.
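A minimal sketch of that filter in Playwright; the checks are illustrative, not exhaustive:

```python
from playwright.sync_api import Page

def visible_links(page: Page):
    """Return only links a real user could plausibly see and click."""
    safe = []
    for link in page.locator("a[href]").all():
        box = link.bounding_box()                  # None for display:none elements
        if not box or box["width"] == 0 or box["height"] == 0:
            continue                               # zero-dimension honeypot
        if box["x"] + box["width"] < 0 or box["y"] + box["height"] < 0:
            continue                               # positioned off-screen
        if not link.is_visible():                  # catches visibility:hidden
            continue
        if link.evaluate("el => getComputedStyle(el).opacity") == "0":
            continue                               # opacity:0 honeypot
        safe.append(link)
    return safe
```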
Layer 4, Behavioural ML: You Can't Fake Being Human

A script that calls document.querySelector() immediately after DOMContentLoaded looks nothing like a human who reads the page for 2.3 seconds first. Warm-up navigation (visiting the homepage before the target) significantly improves behavioural scores.

Now you know the four detection layers. Every vendor below is just a different weighting of those same four signals: some prioritise TLS, others behaviour, others network identity. Knowing the layer tells you which tool to pick. Here are the six walls.
Six companies built the walls. Here's every key.
Each vendor applies the detection layers differently, different weights, different signals, different architectures. What bypasses Cloudflare has zero effect on Kasada. You need to know exactly which wall you're facing before you choose a tool.
Identify which anti-bot you're facing
Wrong strategy on the wrong vendor wastes hours. Before writing a single line of code, spend 30 seconds identifying exactly what's protecting the target.
Visit the target site, click the Wappalyzer icon in your toolbar. It instantly shows all detected technologies, including the anti-bot vendor. Shows Akamai, Cloudflare, DataDome, PerimeterX, Kasada and more with a single click.
Open DevTools → Application → Cookies. Match any cookie name to identify the vendor. Multiple vendors can run on the same site. For CLI scanning at scale: wafw00f https://target.com identifies WAF + anti-bot vendor in one command.
DevTools → Network → any request → Response Headers. Look for x-datadome, server: cloudflare, x-akamai-request-id, or challenge redirect URLs containing vendor names.
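A quick sketch that automates the cookie/header check; the markers mirror the quick-reference table below, and the target URL is a placeholder:

```python
from curl_cffi import requests

# (cookie names, header names) per vendor
MARKERS = {
    "Akamai":     (["_abck"], ["x-akamai-request-id"]),
    "Cloudflare": (["cf_clearance"], ["cf-ray"]),
    "DataDome":   (["datadome"], ["x-datadome"]),
    "PerimeterX": (["_px3", "_pxde"], []),
    "Kasada":     ([], ["x-kpsdk-ct"]),
    "F5 Shape":   (["reese84"], []),
}

def identify(url: str) -> list[str]:
    r = requests.get(url, impersonate="chrome124")
    hits = []
    for vendor, (cookies, headers) in MARKERS.items():
        if any(r.cookies.get(c) for c in cookies) or any(h in r.headers for h in headers):
            hits.append(vendor)
    return hits or ["none detected"]

print(identify("https://example.com"))  # hypothetical target
```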
Free Chrome + Firefox extension. One click on any site shows:
- Anti-bot / security vendor
- CDN provider
- CMS, framework, analytics
- Server technology
Vendor-by-vendor notes:

- Akamai: curl_cffi with impersonate="chrome124" handles the TLS + HTTP/2 layer.
- Cloudflare: Camoufox with geoip=True, 100% pass rate (Mar 2026) on Instagram, Reddit, X, LinkedIn. Scrapling's StealthyFetcher solves Turnstile natively and automatically.
- DataDome: boring_challenge is a Rust-compiled state machine that cannot be emulated; it requires actual browser execution to produce valid tokens. IP reputation alone accounts for 25–30% of the total trust score. Check for __NEXT_DATA__ in the HTML source: Grainger had 110KB of product data in it, bypassing DataDome entirely. curl_cffi chrome124 + residential proxy → confirmed 200 OK (Grainger.com).
- PerimeterX: geoip=True aligns all five identity vectors with the proxy exit country; reverse engineers target the _px3 token generation flow for token replay.
- Kasada: its script (ips.js, renamed polymorphically each deployment) issues proof-of-work challenges that require real CPU cycles and browser APIs to solve. There are no CAPTCHAs; failures are silent 403s or 429s with no explanation. The critical 2026 fact: Kasada specifically fingerprints playwright-stealth by calling Function.prototype.toString() on patched native functions. The patch signatures are catalogued.

Quick identification reference
| What you see | Anti-bot | Key cookie/header | Detection method |
|---|---|---|---|
| "Pardon Our Interruption" page | Akamai block | _abck | Wappalyzer · response body |
| CF-Ray header · Turnstile iframe | Cloudflare challenge | cf_clearance | Response header CF-Ray |
| JSON with datadome key | DataDome block | datadome | Response header x-datadome |
| _px3 or _pxde set | PerimeterX block | _px3 | Cookie inspection |
| Silent 403 · no body | Kasada silent | x-kpsdk-ct | Response headers · ips.js in source |
| reese84 or TS cookie | F5 Shape block | reese84 | Cookie names · Shape JS reference |
Six walls. Now the tools.
Every tool built to fight every wall we just described.
Now that you understand the detection stack and the six anti-bot vendors, every library below makes sense in context. curl_cffi exists because of JA4. Camoufox exists because of CDP leaks. PatchRight exists because of Kasada's toString() inspection. The arsenal wasn't built randomly, each tool is a direct countermeasure to a specific detection innovation.
Master comparison table: all 60+ libraries & tools

Each entry below gives type, language, TLS/stealth detail, primary anti-bot target, and GitHub stars where known. ⚡ = HTTP-level tool, 🌐 = real browser.
**curl_cffi** · ⚡ HTTP · Python · Chrome JA4+ TLS · targets Akamai, DataDome

Under the hood: libcurl C library with custom TLS patches. Emits exact Chrome/Safari/Firefox TLS ClientHello at the C level: cipher suites, extensions, ALPN, GREASE all match real browsers.
**Scrapling** · ⚡ HTTP · Python · Chrome TLS · targets Cloudflare Turnstile · 38k stars

Under the hood: Wraps curl_cffi for stealth HTTP + integrates Camoufox for browser mode. StealthyFetcher uses a real patched Firefox under the hood when needed.
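A hedged sketch of Scrapling's browser mode; argument and method names follow its docs at time of writing and may differ in your version:

```python
from scrapling.fetchers import StealthyFetcher

# Real patched Firefox (Camoufox) under the hood; solves Turnstile per the docs
page = StealthyFetcher.fetch(
    "https://cloudflare-protected.example.com",  # hypothetical target
    headless=True,
    network_idle=True,
)
print(page.status)
print(page.css_first("h1::text"))
```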
**webclaw** · ⚡ HTTP · Rust · Chrome TLS · medium targets

Under the hood: Rust HTTP client with TLS fingerprint spoofing. Emits browser TLS signatures from Rust, fast and low-memory.
**httpx** · ⚡ HTTP · Python · no TLS spoofing · unprotected targets only

Under the hood: Modern Python HTTP library with async support and HTTP/2.
**requests** · ⚡ HTTP · Python · no TLS spoofing · unprotected targets only · 52k stars

Under the hood: Pure Python HTTP library. Sends HTTP/1.1 requests with standard Python TLS.
**tls-client** · ⚡ HTTP · Go/Python · Chrome/Firefox TLS · targets Akamai, DataDome

Under the hood: Go/Python wrapper around a Go TLS client that mimics browser fingerprints. Predecessor to cycle-tls.
**Playwright** · 🌐 Browser · Python/JS · CDP (detectable) · medium targets (CDP leaks) · 68k stars

Under the hood: Chromium DevTools Protocol (CDP). Microsoft-maintained. Drives real Chromium, Firefox, or WebKit browsers over a CDP socket.
**Camoufox** · 🌐 Browser · Python · C++ Firefox Juggler patches · Cloudflare 100%, Akamai

Under the hood: Forked Firefox with C++ binary patches to the Juggler protocol (below CDP). Patches navigator, canvas, WebGL, fonts, window.chrome at the binary level.
**CloakBrowser** · 🌐 Browser · Python · 49 C++ patches · Akamai, reCAPTCHA v3 (0.9 score)

Under the hood: 49 C++ binary patches to Chromium. Patches webdriver, the chrome object, plugins, permissions, WebGL, Canvas at the binary level, not patchable by JS.
**PatchRight** · 🌐 Browser · Python · Python source patches · Kasada, Cloudflare

Under the hood: Patches Playwright Python source files at install time. Removes CDP signatures, the webdriver property, and stealth tells from the JS layer.
**Puppeteer** · 🌐 Browser · Node · CDP (detectable) · medium targets · 89k stars

Under the hood: Node.js CDP driver for Chromium. Google-maintained. The original headless browser automation library.
**Selenium** · 🌐 Browser · multi-language · webdriver=true · weak (legacy) · 29k stars

Under the hood: WebDriver protocol (W3C standard). Drives any browser via a standardised JSON protocol. The original browser automation framework.
**SeleniumBase UC** · 🌐 Browser · Python · UC mode removes the webdriver flag · Kasada, general stealth · 10k stars

Under the hood: SeleniumBase with undetected-chromedriver mode. Patches the Chrome binary to remove the webdriver flag and CDP signatures.
**Selenium-Driverless** · 🌐 Browser · Python · CDP without WebDriver · medium targets

Under the hood: Direct CDP connection without the ChromeDriver binary, so no webdriver flag is set. Async Python API.
**nodriver** · 🌐 Browser · Python · raw async CDP · medium targets

Under the hood: Controls Chrome via its internal DevTools socket without using CDP's standard automation flag. Chrome doesn't know it's being driven.
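A sketch of nodriver's async API as documented by the project; the URL is a placeholder:

```python
import nodriver as uc

async def main():
    browser = await uc.start()                  # launches Chrome without the automation flag
    page = await browser.get("https://example.com")
    html = await page.get_content()
    print(html[:200])
    browser.stop()

if __name__ == "__main__":
    uc.loop().run_until_complete(main())
```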
**pydoll** · 🌐 Browser · Python · async CDP · medium targets

Under the hood: Pure Python browser automation using the Chrome DevTools Protocol directly. No external driver.
**Botright** · 🌐 Browser · Python · CAPTCHA solving · CAPTCHA targets

Under the hood: Playwright wrapper focused on CAPTCHA solving and stealth. Uses AI to solve CAPTCHAs during automation.
**Botasaurus** · 🌐 Browser · Python · Gaussian mouse movement · DataDome behavioural checks

Under the hood: Playwright wrapper that adds Gaussian mouse movement, realistic typing, scroll physics, and session management.
**rayobrowse** · 🌐 Browser · Python/Docker · real-device fingerprint DB · hard targets

Under the hood: Docker-based stealth Chromium browser from Rayobyte. C++ level patches (not JS-level), exposed via CDP so Playwright/Puppeteer/Selenium can connect natively. Self-hosted = free and unlimited; a managed Cloud version is available.
**undetected-chromedriver** · 🌐 Browser · Python · removes the webdriver flag · medium targets · 5k stars

Under the hood: Patches the ChromeDriver binary to remove webdriver=true and CDP automation flags at the binary level.
**⭐ Scrapy** · ⚡ Framework · Python · TLS via curl_cffi middleware · medium targets (with middleware) · 52k stars

Under the hood: Twisted-based async Python framework. Pure HTTP: sends requests, receives responses, parses with XPath/CSS. No browser.
**Crawlee** · 🌐 Framework · Node/Python · Playwright-based · medium targets · 15k stars

Under the hood: Apify's unified Node.js framework. Wraps both HTTP (got-scraping) and Playwright/Puppeteer. Handles retries, deduplication, storage.
**scrapy-camoufox** · ⚡ Framework · Python · Camoufox integration · hard targets

Under the hood: Scrapy middleware that routes requests through the Camoufox browser for stealth. Best of Scrapy + Camoufox.
**scrapy-nodriver** · ⚡ Framework · Python · nodriver integration · medium targets

Under the hood: Scrapy middleware using nodriver for browser requests, Chrome without CDP flags.
**scrapy-stealth** · ⚡ Framework · Python · browser TLS + HTTP/2 · Cloudflare, Akamai · v0.4 (2026)

Under the hood: Pluggable Scrapy DOWNLOADER_MIDDLEWARE with three drivers: basic and turbo (TLS fingerprint + HTTP/2 impersonation, no browser), and browser (real Chrome via CDP for JS-heavy targets). Per-request engine switching via request.meta["stealth"].
**Firecrawl** · ⚡ AI · API · FIRE-1 engine · hard targets via managed service · 111k stars

Under the hood: API service that converts any URL to clean Markdown or structured JSON for LLM consumption. FIRE-1 agent for multi-page crawls.
**Crawl4AI** · 🌐 AI · Python · Playwright-based · medium targets · 60k stars

Under the hood: Local Playwright wrapper optimised for LLM output. Runs locally, converts pages to clean Markdown with BM25 relevance filtering.
**ScrapeGraphAI** · ⚡ AI · Python · natural-language graph pipeline · light protection · 18k stars

Under the hood: LLM-powered extraction that builds a graph pipeline from a natural language prompt. Local or API.
**Jina Reader API** · ⚡ AI · API · built-in rendering · medium targets

Under the hood: REST API: prefix r.jina.ai/ to any URL to get clean Markdown back. Zero setup.
**Steel** · 🌐 AI · API · Docker browser · medium targets

Under the hood: Self-hosted browser API with an MCP server. AI agents call it as a tool to browse the web.
**Bright Data** · ⚡ Managed · API · full enterprise stack · all targets incl. F5 Shape

Under the hood: 72M+ IP network + scraping API. Managed infrastructure handles anti-bot, JS rendering, proxy rotation.
**Zyte** · ⚡ Managed · API · full stack · all targets

Under the hood: The Scrapy company's managed scraping platform. Zyte API + AutoExtract for structured data.
**Apify** · ⚡ Managed · API · 10K+ Actors · medium-hard targets

Under the hood: 10,000+ pre-built Actors on a serverless cloud. Crawlee at its core. MCP server for AI agents.
**ScrapingBee** · ⚡ Managed · API · managed rendering · medium targets

Under the hood: Managed scraping API. Handles JS rendering, CAPTCHA, proxies via a simple REST call.
**Oxylabs** · ⚡ Managed · API · OxyCopilot AI · hard targets

Under the hood: 102M+ IP network with OxyCopilot AI extraction and scraper APIs.
**Browserbase** · 🌐 Managed · API · managed browser · hard targets

Under the hood: Managed Playwright cloud. Run Playwright scripts remotely without managing browser infrastructure.
**chompjs** · ⚡ Parser · Python · parser only

Under the hood: Python library to parse JavaScript objects embedded in HTML pages. Converts JS literals to Python dicts.
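Why it matters: JS object literals with unquoted keys or trailing commas break json.loads. A small self-contained example (the snippet is invented for illustration):

```python
import chompjs

html_snippet = "var product = {name: 'Widget', price: 9.99, tags: ['a', 'b'],};"
js_object = html_snippet.split("=", 1)[1].rstrip(";")
print(chompjs.parse_js_object(js_object))
# {'name': 'Widget', 'price': 9.99, 'tags': ['a', 'b']}
```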
**Parsel** · ⚡ Parser · Python · parser only

Under the hood: Scrapy's HTML/XML parser library. XPath and CSS selectors with a clean Python API.
**BeautifulSoup4** · ⚡ Parser · Python · parser only · 10k stars

Under the hood: Python HTML/XML parser. Wraps lxml or html.parser. Builds a parse tree from raw HTML strings.
**mitmproxy** · ⚡ RE tool · Python · reverse engineering / interception · 37k stars

Under the hood: Python-based HTTPS proxy. Intercepts, inspects, and modifies HTTP/HTTPS traffic between client and server.
**HTTP Toolkit** · ⚡ RE tool · any language · mobile API interception

Under the hood: HTTPS intercepting proxy for development and mobile API discovery. Open source.
**Frida** · ⚡ RE tool · Python/JS · SSL hooks

Under the hood: Dynamic instrumentation toolkit. Injects JavaScript into running processes. Used to hook native functions and bypass SSL pinning.
**rebrowser-patches** · 🌐 Browser · Python · Chrome source patches · medium targets

Under the hood: JavaScript patches injected into Playwright/Puppeteer pages to mask automation signals.
**cycle-tls** · ⚡ HTTP · Go/JS · Chrome/Firefox TLS · Akamai, DataDome

Under the hood: Node.js/Go TLS client that cycles through browser fingerprints. Sends real JA3 hashes per request.
**GoLogin** · 🌐 Browser · cloud · antidetect profiles · hard multi-account targets

Under the hood: Cloud anti-detect browser. Manages browser profiles with unique fingerprints stored in the cloud. Multi-account management.
**Multilogin** · 🌐 Browser · cloud · antidetect profiles · hard multi-account targets

Under the hood: Commercial anti-detect browser with managed profile fingerprints. Team collaboration on browser profiles.
**ScraperAPI** · ⚡ Managed · API · full stack · all targets incl. Walmart

Under the hood: Simple proxy rotation + JS rendering API. Handles geo-targeting and header rotation.
**Decodo** · ⚡ Managed · API · full stack · all targets

Under the hood: Smartproxy's new brand. Residential, datacenter, and mobile proxy network.
**CapSolver** · ⚡ CAPTCHA · API · reCAPTCHA/hCaptcha

Under the hood: AI-powered CAPTCHA solving service. Uses computer vision to solve reCAPTCHA v2/v3, hCaptcha, Cloudflare Turnstile.
**2captcha** · ⚡ CAPTCHA · API · all CAPTCHA types

Under the hood: Human + AI hybrid CAPTCHA solving service. One of the oldest in the market.
**Anti-Captcha** · ⚡ CAPTCHA · API · reCAPTCHA/image

Under the hood: Human + AI CAPTCHA solving service. Competitor to 2captcha.
**Scrapyd** · ⚡ Framework · Python · Scrapy deployment tool

Under the hood: Daemon that deploys and runs Scrapy spiders via a JSON API. Port 6800. Process-based job queue.
**scrapy-redis** · ⚡ Framework · Python · distributed Scrapy

Under the hood: Scrapy extension connecting spiders to a Redis shared URL queue. Enables distributed crawling.
**scrapy-cluster** · ⚡ Framework · Python · enterprise Scrapy

Under the hood: Distributed Scrapy cluster using Redis + Kafka + Zookeeper. Enterprise-scale distributed crawling.
**scrapy-poet** · ⚡ Framework · Python · Page Object pattern

Under the hood: Dependency injection framework for Scrapy spiders. Cleaner spider code with page objects.
**Splash** · 🌐 Browser · Docker · Lua scripting · light protection

Under the hood: Lua-scriptable browser for JS rendering, runs in Docker. Integrates with Scrapy via scrapy-splash.
**selectolax** · ⚡ Parser · Python · fast HTML parser

Under the hood: C-based HTML parser (lexbor engine). 10–100× faster than BeautifulSoup for pure parsing tasks.
**lxml** · ⚡ Parser · Python · XPath + CSS parser

Under the hood: C-based XML/HTML parser. The fastest Python HTML parsing option.
**w3lib** · ⚡ Parser · Python · URL/text utilities

Under the hood: Web-related utility functions. URL normalisation, encoding handling. Used internally by Scrapy.
**SwiftShadow** · ⚡ Proxy · Python · proxy pool manager

Under the hood: Free proxy pool manager. Fetches, validates, and rotates free proxies automatically.
**requests-ip-rotator** · ⚡ Proxy · Python · AWS API Gateway IPs

Under the hood: Routes requests through AWS API Gateway endpoints to get rotating IPs.
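Usage follows the project's README (requires AWS credentials configured; the target URL is a placeholder):

```python
import requests
from requests_ip_rotator import ApiGateway

gateway = ApiGateway("https://site.example.com")   # hypothetical target
gateway.start()

session = requests.Session()
session.mount("https://site.example.com", gateway)  # route this host via the gateways

print(session.get("https://site.example.com/path").status_code)  # fresh IP per request
gateway.shutdown()  # delete the gateways to stop AWS charges
```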
**Colly** · ⚡ Framework · Go · Go TLS · medium targets · 15k stars

Under the hood: Go HTTP scraping framework. Fast, concurrent, clean API.
**Katana** · ⚡ Framework · Go · Go TLS + Chromium · medium targets · 8k stars

Under the hood: Go-based web crawler by ProjectDiscovery. Designed for security research and recon.
**playwright-go** · 🌐 Browser · Go · CDP (detectable) · medium targets

Under the hood: Go bindings for Playwright. The same Playwright API in Go.
**Charles Proxy** · ⚡ RE tool · any language · mobile API interception

Under the hood: Commercial HTTPS proxy for request inspection and debugging. GUI-based.
**Selenoid** · ⚡ Infrastructure · Go (Docker) · browser-as-a-service · medium targets · 2.6k stars

Under the hood: Docker containers running headless Chrome/Firefox in parallel; Aerokube's Go-based Selenium grid replacement.
**noble-tls** · ⚡ HTTP · Python · Chrome JA3/JA4 · Cloudflare, DataDome

Under the hood: Python port of uTLS via a custom TLS handshake stack; emits browser-matching ClientHello.
**hrequests** · ⚡ HTTP · Python · browser-grade TLS · DataDome, Cloudflare · 900 stars

Under the hood: Drop-in requests replacement with TLS impersonation, header order matching, and an optional Playwright browser mode.
**crawlee-python** · 🌐 Browser · Python · curl_cffi backend · most targets · 6.2k stars

Under the hood: Python port of Apify's Crawlee. Wraps curl_cffi for HTTP and Playwright for browser modes in a unified async framework with built-in retry, dedup, and storage.
**estela** · ⚡ Framework · Python (K8s) · spider-dependent · distributed Scrapy · 90 stars

Under the hood: Kubernetes orchestrator for Scrapy; schedules and runs spiders as K8s jobs with auto-scaling.
**fake-useragent** · ⚡ HTTP · Python · UA strings only · lightweight targets only · 3.8k stars

Under the hood: Curated database of real-world User-Agent strings, sampled from browser telemetry sources.
**grequests** · ⚡ HTTP · Python · requests + gevent · unprotected APIs · 4.4k stars

Under the hood: gevent-monkey-patched requests; fires hundreds of HTTP calls in parallel via greenlets.
**Scrapoxy** · ⚡ Framework · Node.js · proxy manager · self-hosted rotation · 2.1k stars

Under the hood: Self-hosted proxy pool manager; provisions proxies on AWS, Azure, GCP and rotates IPs automatically.
Browser engines, deep dive
Playwright: Runtime.enable timing, execution context leaks, and binding exposure all signal automation. Install: pip install playwright && playwright install.

Camoufox: uses Mozilla's Juggler protocol below CDP, so the CDP leaks don't exist. Entry point: from camoufox.sync_api import Firefox.

PatchRight: pip install patchright. playwright-stealth patches JS at runtime, but Function.toString() exposes the patch; PatchRight patches the Python source instead, so there is nothing for toString() to reveal.

Puppeteer: the puppeteer-stealth plugin patches common detection points, but the CDP signature is still visible at protocol level. Better for rendering tasks than hard anti-bot targets.

Selenium: navigator.webdriver=true is detectable in 2 lines of JS. Use SeleniumBase UC mode to remove it. Stock Selenium is dead against Akamai in 2026; still valid for non-protected targets.

SeleniumBase UC: removes navigator.webdriver and auto-solves many CAPTCHAs. Good for Kasada and medium targets; not production-safe against Akamai at scale. Entry point: from seleniumbase import Driver.

nodriver: scrapy-nodriver integrates it with Scrapy directly. Lighter than full Playwright for medium targets. Botasaurus: pip install botasaurus.

The canonical Camoufox launch, with all five identity vectors aligned:

```python
from camoufox.sync_api import Firefox

# geoip=True: auto-aligns IP, timezone, locale, WebRTC simultaneously
with Firefox(
    geoip=True,       # align all 5 identity vectors to proxy exit country
    humanize=True,    # Gaussian mouse jitter
    proxy={
        "server": "http://proxy.provider.com:8011",
        "username": "user",
        "password": "pass",
    },
    screen={"width": 1920, "height": 1080},
) as browser:
    page = browser.new_page()
    # Warm up, never go directly to the target URL
    page.goto("https://www.google.com")
    page.wait_for_timeout(2000)
    page.goto("https://cloudflare-protected.com")
    page.wait_for_load_state("networkidle")
    print(page.content()[:500])
```
The tools above solve the access problem. But once you have the raw HTML or JSON, you still need to extract meaning from it. That is where AI-native scraping changes everything. In 2026 the bottleneck is not access. It is the extraction layer.
Describe, don't select
AI-native scraping replaces CSS selectors with natural language. A 2025 NEXT-EVAL benchmark showed LLMs hit F1 > 0.95 on structured extraction when input is properly formatted.
Firecrawl: the /interact endpoint clicks, fills forms, and extracts behind dynamic content. Used by SAP, Zapier, Deloitte. API: app.scrape(url) | app.crawl(site) | app.search("query").

Crawl4AI: result = await crawler.arun(url).

ScrapeGraphAI: SmartScraperGraph(prompt="...", source=url).

webclaw: pip install webclaw.

Jina Reader: r.jina.ai/{url} is the entire API. Returns clean Markdown. Dynamic content handled via built-in rendering. Free tier available, paid ~$0.002–$0.01/page.

Schema extraction with Crawl4AI, no selectors:

```python
import asyncio
import json

from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel

# Define exactly what you want; the LLM extracts it, no selectors needed
class Product(BaseModel):
    name: str
    price: float
    model_number: str
    brand: str

async def extract(url):
    strategy = LLMExtractionStrategy(
        provider="openai/gpt-4o-mini",
        schema=Product.model_json_schema(),
        extraction_type="schema",
        instruction="Extract all products with prices and model numbers",
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url, extraction_strategy=strategy)
    return json.loads(result.extracted_content)

# F1 > 0.95 on well-structured pages, NEXT-EVAL benchmark 2025
```
When DIY cost exceeds platform cost, these services handle the heavy lifting. Each solves a specific problem, choosing the right one depends on which wall you are facing and at what scale.
When DIY cost exceeds platform cost
If you're spending more than 2 engineer-days/month on anti-bot maintenance, a managed platform is cheaper. The crossover typically hits when facing F5 Shape or Kasada at scale.
The best option if you don't want to build scrapers yourself. Apify is a cloud platform where scraping is already done for you, 10,000+ community-built Actors cover almost every major website: Amazon, LinkedIn, Instagram, Google Maps, TikTok, Zillow, Twitter/X, Google Search, and thousands more. You pick an Actor, give it a URL, and get back clean JSON. No Python, no proxies, no infrastructure.
Use Apify when:
- You need data from a well-known site quickly
- You don't want to maintain scrapers long-term
- You're building an AI agent that needs live web data
- You want someone else to handle anti-bot bypasses
- You need to scale without managing infrastructure

Build your own when:
- Your target site has no existing Actor
- You need custom data transformation logic
- You're scraping at very high volume (cost)
- You need full control over request patterns
- Data stays internal and can't touch third-party cloud
act(), extract(), observe(), agent(): write browser flows in plain English ("click submit button") that survive page redesigns via runtime LLM resolution. Built on CDP, supports OpenAI/Anthropic/Gemini. 65% on the Mind2Web benchmark. Self-healing + auto-caching. TypeScript and Python.

Computer Use Agents: when scraping isn't enough
A new category emerged in 2025: AI agents that don't just scrape. They log in as the user, navigate any UI (web apps, legacy portals, desktop software), handle MFA and CAPTCHAs, and return structured JSON. Different from scrapers because the user grants permission: think "Plaid for any website." If your problem is utility bills, payroll exports, e-commerce backends, or any portal without a public API, this is the category.
Platforms sort out the browser and the fingerprint. But every request still needs an IP address, and the type of IP matters as much as any other signal in your stack.
IP type matters more than provider
Rotating proxies is table stakes. The real variable is IP type, datacenter IPs score near-zero on DataDome and PerimeterX regardless of fingerprint quality.
Self-hosted rotation: Scrapoxy (github.com/fabienvauchelles/scrapoxy).

Identity alignment: geoip=True in Camoufox aligns all five vectors automatically.

Zyte/Crawlera gotcha: both the http:// and https:// proxy keys must use the http:// scheme. Using https:// causes a BoringSSL WRONG_VERSION_NUMBER error (TLS-over-TLS failure). Fix: "https": "http://key:@proxy.crawlera.com:8011/"

```python
import random
import time

from curl_cffi import requests

session = requests.Session(impersonate="chrome124")

# Crawlera/Zyte: BOTH keys use http://, never https://
PROXIES = {
    "http": "http://apikey:@proxy.crawlera.com:8011",
    "https": "http://apikey:@proxy.crawlera.com:8011",  # http:// not https://
}

def fetch(url, retries=3):
    for i in range(retries):
        try:
            r = session.get(url, proxies=PROXIES, timeout=30, verify=False)  # verify=False: proxy cert
            if r.status_code == 200:
                return r
            if r.status_code in (403, 429):
                time.sleep(2**i + random.uniform(0, 1))
        except Exception as e:
            print(f"Error: {e}")
    return None
```
You now have the full picture: detection layers, six anti-bots, sixty libraries, managed platforms, proxy types. This section collapses all of it into a single decision tree you can follow for any target site.
Walk this in order. Stop at first win.
Each step adds complexity, cost, and maintenance. Most production scraping is solved at steps 1–3. Never start at step 5.
Step 1, find the hidden API: intercept the app's own traffic (Frida hooks SSL_read/SSL_write directly). If you find the API endpoint, every HTML anti-bot becomes irrelevant.

Step 2, check embedded state like __NEXT_DATA__: React SPAs often ship a >50KB script containing all the data. Confirmed: Grainger.com (DataDome-protected) exposes a 110KB JS state blob that bypasses DataDome entirely because it's in the initial HTML.

Step 3, TLS-impersonated HTTP: curl_cffi with JA4 impersonation resolves most Akamai and DataDome at the HTTP layer. Add a residential proxy. If __NEXT_DATA__ appears in the response, extract it with chompjs.

Quick reference cheat sheet
| Anti-bot | Primary vector | Steps 1–2 viable? | Best tool | Key note |
|---|---|---|---|---|
| Akamai | JA4+ + sensor.js + extension probes | Often | curl_cffi + CloakBrowser | Find mobile/GraphQL first |
| Cloudflare | JA4 Rust edge + Turnstile | Sometimes | Camoufox | Origin IP via SecurityTrails |
| DataDome | 85K ML + WASM boring_challenge | Yes | curl_cffi + mobile IP | Check __NEXT_DATA__ first |
| PerimeterX | 5-vector score | Sometimes | Camoufox + residential | Fresh session per domain |
| Kasada | Polymorphic JS PoW | Rarely | PatchRight + residential | Never playwright-stealth |
| F5 Shape | Custom VM + minute expiry | No | Managed API | DIY not practical |
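A minimal sketch of steps 2–3 above: TLS-impersonated fetch, then lift __NEXT_DATA__ out of the initial HTML. The URL is hypothetical, and since __NEXT_DATA__ is valid JSON, json.loads suffices here (chompjs is for non-JSON JS literals):

```python
import json

from curl_cffi import requests
from parsel import Selector

r = requests.get("https://www.example-shop.com/product/123", impersonate="chrome124")
sel = Selector(text=r.text)
raw = sel.css("script#__NEXT_DATA__::text").get()
if raw:
    data = json.loads(raw)
    # On most Next.js sites the product payload lives under props.pageProps
    print(list(data["props"]["pageProps"].keys()))
```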
What practitioners are actually shipping in 2026
Fresh insights from engineers actively solving these problems in production. Shared publicly on LinkedIn.
- TLS first: Python's requests library sends a different cipher suite order than Chrome; httpx is different again. Even with a clean residential IP, if your cipher ordering does not match Chrome's, you are identified before the server processes a single header. Fix: use curl_cffi with impersonate="chrome124"; it emits Chrome's exact TLS ClientHello. Also watch HTTP/2 SETTINGS frames: they contain window sizes and header table parameters that vary per client.
- Identity alignment: set geoip=True and it automatically aligns all five vectors. Do not simply disable WebRTC; that removes a feature 99% of real users have, which itself becomes a bot signal.
- Session handoff: use camoufox or rayobrowse to generate sessions, then curl_cffi with the extracted cookies for bulk collection. Rotate sessions every 30–50 requests (sketch below).
- Scrapling: a blocked_domains list blocks tracking/CDN requests in headless mode, automatic proxy-aware retry on network errors, Response.follow() for easy link chaining. Install: pip install scrapling --upgrade.
- SwiftShadow: the QuickProxy(countries=["FR","DE"]) API filters by exit country, and the built-in cache means it does not hit proxy-list APIs on every request. Usage: from swiftshadow import QuickProxy; proxy = QuickProxy(); session.proxies = {"http": str(proxy), "https": str(proxy)}. Important: free proxies have high failure rates and low anonymity; do not use them for Akamai, DataDome, or PerimeterX targets. Best for scraping open/unprotected sites at scale without cost.
- cocoindex: pip install cocoindex; configure sources (files, URLs, S3), define your chunker and embedding model, run cocoindex.build(), done in under 10 minutes.
- The minimum 2026 stack: curl_cffi for TLS, full Chrome headers via httpx or curl_cffi, random.uniform(1.8, 4.3) delays, requests.Session() for cookie accumulation, residential/mobile proxies for IP. Check your current fingerprint at tls.browserleaks.com/json.
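A sketch of the session-handoff pattern above: generate a trusted session in a stealth browser, then reuse its cookies in curl_cffi for bulk collection. The Camoufox entry point and cookie plumbing follow its Playwright-style API as used earlier in this guide; adjust to your version, and the URLs are placeholders:

```python
from camoufox.sync_api import Firefox
from curl_cffi import requests

# Phase 1: let the stealth browser pass the challenge and accumulate cookies
with Firefox(geoip=True, humanize=True) as browser:
    page = browser.new_page()
    page.goto("https://target.example.com")
    page.wait_for_load_state("networkidle")
    cookies = page.context.cookies()          # Playwright-style cookie dicts

# Phase 2: bulk collection over plain HTTP with the trusted session
session = requests.Session(impersonate="chrome124")
for c in cookies:
    session.cookies.set(c["name"], c["value"], domain=c["domain"])

for i in range(1, 31):                        # rotate sessions every 30-50 requests
    r = session.get(f"https://target.example.com/item/{i}")
    print(i, r.status_code)
```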
Check your own fingerprint first
Before you bypass anything, you need to know what your setup is leaking. These tools show exactly what anti-bots see when your scraper connects. Run your scraper through them, not just your browser.
How production scrapers are actually built
From a single Scrapyd daemon to multi-region ECS clusters. Nine real pipeline architectures, from simple to enterprise-scale, with every component and data flow mapped out.
The simplest production setup. One server, Scrapyd managing spiders via its JSON API, ScrapydWeb as the UI. Good for <50 spiders and teams without Kubernetes. Deploy with scrapyd-deploy, schedule via /schedule.json, monitor at port 6800.
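Scheduling a run against that daemon is one POST; project and spider names here are placeholders:

```python
import requests

resp = requests.post(
    "http://localhost:6800/schedule.json",
    data={"project": "shop_crawler", "spider": "products"},
)
print(resp.json())  # {"status": "ok", "jobid": "..."}
```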
Self-Healing Scraper powered by Claude
Scrapy spiders break when sites change their HTML. Instead of manually fixing selectors, this architecture uses Claude to detect failures, analyse the new page structure, and write corrected selectors automatically, without human intervention.
You are a web scraping expert. A Scrapy spider broke because the site changed its HTML.
Old selectors (no longer working):
title: h1.product-title::text
price: span.price-now::text
image: img.main-image::attr(src)
New page HTML (truncated):
{{ page_html[:8000] }}
Return ONLY valid JSON with corrected selectors:
{"title": "...", "price": "...", "image": "..."}
Intercept mobile app traffic before it hits any anti-bot
Mobile APIs serve the same data as the web, but with weaker protection. No Cloudflare, no JA4 fingerprinting. Intercept the traffic once, replicate the call forever.
git clone https://github.com/newbit1/rootAVD.git
cd rootAVD
# Verify AVD is accessible
adb shell
# List your AVDs
./rootAVD.sh ListAllAVDs
# Copy the first command from the output and run it
# e.g.: ./rootAVD.sh system-images/android-30/google_apis_playstore/x86_64/ramdisk.img

adb not found? Add to ~/.zshrc: alias adb='/Users/$USER/Library/Android/sdk/platform-tools/adb'

# macOS
brew install --cask http-toolkit

import curl_cffi.requests as requests
resp = requests.get(
"https://api.targetapp.com/v2/listings",
headers={
"Authorization": "Bearer <token_from_http_toolkit>",
"X-App-Version": "4.2.1",
"User-Agent": "TargetApp/4.2.1 (Android 11; SDK 30)",
"Accept": "application/json",
},
impersonate="chrome120"
)
data = resp.json()

Works best for:
- Property portals, classifieds, marketplaces
- Apps where the web version is heavily protected
- Data only available in the mobile app
- Targets using simple Bearer token auth
- Any app that doesn't pin SSL certificates
Limitations:
- Apps with SSL pinning block interception
- Some apps crash on rooted devices
- ARM-only apps may not run on x86 emulators
- Tokens expire, need refresh logic in scraper
- App updates can silently change endpoints
If the app blocks interception it likely uses SSL pinning. Use Frida or objection to bypass it at runtime, or use Burp Suite with the Xposed + TrustMeAlready module for a more permanent bypass.
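A minimal sketch of the Frida route using its Python bindings. The package name is hypothetical, and the bypass script body (e.g. a Frida CodeShare universal unpinning script) is loaded from disk:

```python
import frida

device = frida.get_usb_device()
pid = device.spawn(["com.targetapp.android"])      # hypothetical package name
session = device.attach(pid)
with open("ssl_bypass.js") as f:                   # community unpinning script
    script = session.create_script(f.read())
script.load()
device.resume(pid)
input("Pinning bypassed, intercept in HTTP Toolkit; press Enter to quit\n")
```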
Scraping jargon in simple terms
Every term that makes scraping documentation confusing, explained with an analogy.
Where scrapers talk to each other
The best scraping techniques rarely come from documentation, they come from people who've already hit the same wall you're hitting. These communities are where the real knowledge lives.
Discord servers
Reddit communities
Newsletters worth reading
Resources from The Web Scraping Club
Auto-healing scrapers using Claude as the brain
Scrapers break when sites change their HTML structure, add new anti-bots, or rotate selectors. Claude Computer Use can read the error, inspect the live page, rewrite the spider, and redeploy, all without a human in the loop.
A spider that worked yesterday returns empty data today. The site changed a CSS class, added a JavaScript render step, or rotated its anti-bot. Traditional scrapers need a human to notice, debug, and fix. Self-healing scrapers fix themselves.
Claude has a set of computer use tools: bash, file read/write, str_replace, plus a skills knowledge base. When a spider fails, Claude reads the error, fetches the live page, compares old vs new structure, patches the selector, and runs a test crawl to verify.
Monitor checks item count after every run. Zero items, timeout, or HTTP 4xx triggers the healing loop. CloudWatch alarm or a simple Lambda cron handles this.
bash_tool runs shell commands, view reads files, str_replace patches them. Claude reads SKILL.md, a knowledge base of scraping patterns, bypass techniques, and selector recipes, before deciding on the fix.
Claude runs a test crawl after patching. Only if items > 0 does it commit and push. Failed patches loop back to re-diagnose. A Slack message explains exactly what broke and what was changed.
SKILL.md captures __NEXT_DATA__ extraction recipes, XHR endpoint patterns, and proxy rotation config. Claude reads this before every fix attempt; it is the institutional knowledge your team would otherwise lose when people leave.
From IP bans to transformer ML
Every bypass technique was born as a direct response to a specific detection innovation. The escalation explains why each tool exists.
Selenium detection via navigator.webdriver=true. playwright-stealth emerges. Playwright ships in 2020 (Microsoft, cross-browser). F5 acquires Shape Security for $1 billion.

Thank you for reading.
This is everything I know about web scraping in 2026: every detection layer, every anti-bot system, every library, every architecture I've actually built or used in production over the last seven years.
If even one section saved you a late night of debugging, that's why I wrote it.
Build something interesting with this. And if you do, I'd genuinely love to hear about it.