Claude Code can write your edge WAF rules. It cannot pick the thresholds.
The honest division of labor for SMB teams setting up Cloudflare or Fastly WAF and DDoS mitigation in 2026: the agent generates the rule expressions, the test traffic, and the deploy config. The human picks the per-IP rate limit, the country-level challenge cutoff, and the bot-score block-versus-challenge line. Get that split right and the work fits the published small-integration band on this site.
Direct answer, verified 2026-05-07
Yes for expressions and tests. No for the three thresholds.
Claude Code is genuinely useful for edge WAF and DDoS mitigation work. It writes Cloudflare wirefilter expressions, drafts Fastly VCL, generates wrangler or Terraform config, and produces test traffic to validate a rule fires. What it cannot do is decide:
- The per-IP rate limit on hot endpoints (/api/cart, /api/login, /search).
- The country-level challenge cutoff for traffic outside your customer base.
- The bot-score line below which to outright block versus issue a managed challenge.
Reference for the wirefilter language and field names: developers.cloudflare.com/ruleset-engine/rules-language. Bot management scoring fields are documented at developers.cloudflare.com/bots/concepts/bot-score.
What Claude Code is genuinely good at here
Wirefilter is a closed, documented expression language. The field names are public, the operators are listed, and the validator on Cloudflare's side will reject a malformed clause before it ever runs against traffic. That is exactly the shape of problem where an AI coding agent does not get to invent anything dangerous. It either produces a valid expression or it produces something the UI rejects.
On every recent engagement the same loop showed up. Hand the agent a written specification (in plain English) of what a rule should match, plus the relevant fields from the wirefilter docs. It comes back with the expression, a comment block explaining each clause, and a one-line summary. Paste it into the Cloudflare custom-rule editor, and either it validates or you read the error and feed it back. Three iterations on a new rule, max.
That same loop works for Fastly VCL, for a Cloudflare Worker that wraps a managed ruleset with custom logic, and for the Terraform blocks that deploy the rules in CI. None of those are the kinds of tasks where the agent is going to invent a CVE. The constraint is the spec, not the language.
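If you would rather run that validate-or-reject loop against the API instead of the dashboard, here is a minimal sketch. It assumes the Cloudflare rulesets phase-entrypoint endpoint for custom WAF rules; the zone ID, token, and rule body are placeholders, and the expression is the kind of thing the agent drafts from your spec.

```python
# Sketch: push a drafted custom rule to the zone's http_request_firewall_custom
# phase and surface Cloudflare's validation error if the expression is malformed.
# ZONE_ID, API_TOKEN, and the rule body are placeholders.
# Note: this PUT replaces the phase's existing rules; on a real zone, fetch the
# current entrypoint ruleset first and merge before writing.
import os
import requests

ZONE_ID = os.environ["CF_ZONE_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

rule = {
    "description": "challenge low-score traffic on /api/ (draft)",
    "expression": '(http.request.uri.path matches "^/api/" and cf.bot_management.score lt 30)',
    "action": "managed_challenge",
    "enabled": False,  # keep it off until the log-mode review is done
}

resp = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
    "/rulesets/phases/http_request_firewall_custom/entrypoint",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"rules": [rule]},
    timeout=30,
)
body = resp.json()
if not body.get("success"):
    # A malformed expression never reaches traffic; it comes back as an error here.
    for err in body.get("errors", []):
        print("validation error:", err)
else:
    print("rule accepted")
```

The error messages the API returns are the same ones you would read out of the dashboard editor, so the feed-the-error-back loop is identical either way.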
The three judgment calls that stay with the human
Every WAF deployment I have shipped lives or dies on three numbers. None of them is a default. None of them comes from documentation. Two of the three have caused real outages on real client sites when I let them slide.
1. Per-IP rate limit on hot endpoints
The default Cloudflare rate-limit suggestion (something like 60 requests per minute per IP across the whole zone) sounds reasonable, and it will let a credential-stuffing run get tens of thousands of attempts in before tripping. The right number is per-endpoint and anchored in your real traffic. On /api/login a fast typist legitimately reaches 5 to 8 attempts per minute under normal conditions, so a per-IP limit of 10 per minute with a managed challenge is generous to humans and brutal to brute force. On /api/cart a real DTC shopper might tap 20 times in 30 seconds during a flash sale; a limit lower than that breaks revenue. You want the 99th percentile of legitimate per-IP traffic on each endpoint, then a small multiplier on top.
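Here is a sketch of that percentile pull, assuming JSON-lines edge logs with ClientIP, ClientRequestPath, and EdgeStartTimestamp fields (Logpush-style names; rename to match whatever your export actually calls them). Run it on a window that excludes known attack days, or the p99 bakes the attack into the limit.

```python
# Sketch: per-endpoint, per-IP requests-per-minute percentiles from edge logs.
# Assumes JSON-lines logs with ClientIP, ClientRequestPath, EdgeStartTimestamp
# (RFC3339) fields -- Logpush-style names; adjust to your actual log shape.
import json
import sys
from collections import Counter, defaultdict
from datetime import datetime

HOT_ENDPOINTS = ("/api/login", "/api/cart", "/search")  # adjust to your site

# (endpoint, ip, minute) -> request count
buckets = Counter()

with open(sys.argv[1]) as f:
    for line in f:
        rec = json.loads(line)
        path = rec["ClientRequestPath"]
        endpoint = next((p for p in HOT_ENDPOINTS if path.startswith(p)), None)
        if endpoint is None:
            continue
        ts = datetime.fromisoformat(rec["EdgeStartTimestamp"].replace("Z", "+00:00"))
        minute = ts.strftime("%Y-%m-%dT%H:%M")
        buckets[(endpoint, rec["ClientIP"], minute)] += 1

per_endpoint = defaultdict(list)
for (endpoint, _ip, _minute), count in buckets.items():
    per_endpoint[endpoint].append(count)

for endpoint, counts in sorted(per_endpoint.items()):
    counts.sort()
    p99 = counts[int(0.99 * (len(counts) - 1))]
    # Threshold = p99 of legitimate per-IP traffic, plus a small safety multiplier.
    print(f"{endpoint}: p99 = {p99} req/min/IP -> candidate limit ~ {int(p99 * 1.5) + 1}")
```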
2. Country-level challenge cutoff
For a US-only Shopify shop with no international shipping, issuing a managed challenge to all traffic from a long tail of countries you do not serve is almost free in conversion cost and meaningful in attack reduction. For a SaaS with active customers in 30 countries, the same blanket policy is a conversion incident waiting to happen. The decision needs the actual customer geography from your billing or auth data, not a global heuristic. The agent does not have that data and should not guess it.
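A sketch of where that number comes from, assuming a CSV export of paid orders with a country column; the column name and the 0.5% cutoff are placeholders, not recommendations.

```python
# Sketch: which countries actually carry revenue, from a billing export.
# Assumes a CSV with a "country" column (ISO 3166-1 alpha-2); adjust the
# column name to whatever your billing or auth system exports.
import csv
import sys
from collections import Counter

orders = Counter()
with open(sys.argv[1], newline="") as f:
    for row in csv.DictReader(f):
        orders[row["country"].upper()] += 1

total = sum(orders.values())
serve, challenge = [], []
for country, n in orders.most_common():
    # Keep any country that carries a visible share of real orders; the cutoff
    # here (0.5%) is illustrative and should come out of a conversation, not a default.
    (serve if n / total >= 0.005 else challenge).append(country)

print("serve without challenge:", " ".join(serve))
print("candidate challenge list:", " ".join(challenge))
```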
3. Bot-score block-versus-challenge line
Cloudflare's bot management score runs 1 to 99 (1 is almost certainly automated, 99 is almost certainly human). The two key lines are the cutoff for managed challenge and the cutoff for block. A site that benefits from named bots (search engines, AI assistants, partner aggregators) wants to allowlist those user-agents first, then challenge below 30, then block only below 5. A site that has no legitimate bot use case can block aggressively below 30. The wrong setting in either direction has visible consequences within hours: either your search traffic tanks because Googlebot got challenged into a JavaScript wall, or your origin keeps getting hit by scrapers that the score thought were borderline-human.
A worked Cloudflare cut-list for a small Shopify-style site
This is the kind of ruleset I land on for a 5 to 15 person SMB with a real cart, a login, a search endpoint, and an SEO surface that needs to be readable by named AI assistants. The thresholds below are illustrative and would be re-tuned per zone after 24 hours in log mode. Treat them as a starting point you adjust to your traffic, not as gospel.
Eight rules, six thresholds, one allowlist. The expressions came out of a 20-minute Claude Code session. The thresholds came out of a longer read of the access logs and a conversation with the client about which markets they actually serve. The asnum list in rule 8 stays empty until log mode shows which networks are generating the noise.
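The full cut-list is tuned per zone, so I will not reprint a client's rules here, but this is the shape: an ordered list, allowlist first, each rule a wirefilter expression paired with an action. Everything below is an illustrative sketch with placeholder thresholds, written as the kind of Python structure you could hand to the deploy script above.

```python
# Illustrative shape of the cut-list, not any client's actual rules.
# Order matters: the allowlist skip comes before any bot-score rule.
# Every threshold and country code below is a placeholder to re-tune
# after 24 hours in log mode.
RULES = [
    {
        "description": "1. allowlist named crawlers and AI assistants",
        "expression": '(http.user_agent contains "Googlebot" or http.user_agent contains "ClaudeBot" or http.user_agent contains "GPTBot")',
        "action": "skip",
        # skip needs parameters in a real deploy; "current" skips the rest of this ruleset
        "action_parameters": {"ruleset": "current"},
    },
    {
        "description": "2. challenge likely-automated traffic on public pages",
        "expression": "(cf.bot_management.score lt 30 and not cf.bot_management.verified_bot)",
        "action": "managed_challenge",
    },
    {
        "description": "3. block near-certain automation on API paths",
        "expression": '(http.request.uri.path matches "^/api/" and cf.bot_management.score lt 5)',
        "action": "block",
    },
    {
        "description": "4. challenge countries outside the serve list on cart and checkout",
        "expression": '(http.request.uri.path matches "^/(api/cart|checkout)" and not ip.src.country in {"US" "CA"})',
        "action": "managed_challenge",
    },
]
```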
About the Layer 3 and Layer 4 DDoS side
The DDoS half of this question is mostly handled for you on any major edge. Cloudflare advertises unlimited L3 and L4 mitigation on every plan, including Free; Fastly handles volumetric attacks at the edge before traffic ever reaches your origin; AWS Shield Standard ships with every account. There is very little for an AI coding agent (or you) to do at this layer beyond confirming the zone is on the edge in the first place and that your origin IP is not leaking. Where the work actually lives is one layer up: at L7, in the request-shape rules above, where bots, scrapers, and credential-stuffing attempts try to look like real users. The volumetric stuff is solved. The application-shape stuff is what costs you sleep, and it is where the human-and-agent split matters.
One concrete thing the agent can help with at L4: a script that scans your DNS for any hostname pointing at the origin IP directly (a forgotten direct.example.com record), which is the classic way an attacker bypasses the edge entirely. That is a Bash-level task and Claude Code is faster than a human at writing it.
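A minimal sketch of that scan, assuming you are working from a guessed list of candidate hostnames; the subdomain list, domain, and origin IP are placeholders, and a fuller version would walk the zone's actual DNS export instead of guessing names.

```python
# Sketch: flag DNS names that resolve straight to the origin IP instead of the
# edge. The subdomain list, DOMAIN, and ORIGIN_IP are placeholders; a fuller
# version would read the zone's DNS records rather than guessing.
import socket

ORIGIN_IP = "203.0.113.10"          # your origin's public IP (placeholder)
DOMAIN = "example.com"              # your zone (placeholder)
CANDIDATES = ["", "www", "direct", "origin", "staging", "dev", "mail", "ftp", "api"]

for sub in CANDIDATES:
    host = f"{sub}.{DOMAIN}".lstrip(".")
    try:
        # gethostbyname returns a single IPv4; getaddrinfo would also catch AAAA records.
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        continue
    flag = "  <-- LEAKS ORIGIN" if addr == ORIGIN_IP else ""
    print(f"{host:30} {addr}{flag}")
```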
Counterpoint, where the agent-driven workflow is the wrong shape
If your site is a static marketing page with no auth, no API, no user-generated content, and no value behind the door, none of the above is worth your time. Turn on Cloudflare Free, enable Bot Fight Mode, set the security level to Medium, and ship. Spending a small-integration retainer on 8 wirefilter rules for a site with nothing to protect is the kind of thing that earns the AI consulting field a bad name.
The other case where the agent loop runs out is enterprise compliance work: PCI-DSS rule sets, SOC 2 evidence trails, custom certificates, mTLS at the edge. Those rollouts need a security engineer in the room and a lot of paperwork. They are not what this site sells. If that is your shape, hire an enterprise security firm and bring me back when you want the SMB-scale thinking applied to one of your subsidiaries.
What this looks like as a c0nsl engagement
For most SMBs the work is a single small-integration scope: review the access logs, name the threat model in writing, draft 6 to 10 wirefilter rules with Claude Code, deploy in log mode for 24 hours, read the action log, and promote the rules to challenge or block. That fits the $500 to $2,000 small integration tier on the homepage. Teams that also need a Cloudflare Worker with custom request-shape logic, multi-zone alignment across staging and prod, a Fastly VCL component, or a SOC-style alert wire-up land closer to the $2,000 to $10,000+ custom-system tier. Either way, the rate is on the page and the deliverable is named in the scope, not in a follow-up call. Adjacent reading on this site: the Claude Code consulting workflow covers how the human-and-agent split applies on engagements beyond WAF, and context reconciliation covers the multi-writer failure mode that bites parallel agent sessions on shared repos.
Get a real ruleset for your real traffic
Bring 30 days of edge logs, your customer geography, and one paragraph on what you are protecting. I come back with the wirefilter rules, the human-picked thresholds, and a quote at the published rate.
Frequently asked questions
Can Claude Code actually configure edge WAF and DDoS mitigation for me?
Partly. Claude Code is excellent at writing the rule expressions in Cloudflare wirefilter, the Fastly VCL clauses, and the Worker code that wraps a managed ruleset. It is also genuinely useful for generating test traffic, validating that a rule fires, and producing a Terraform or wrangler config you can review. What it cannot do for you is pick the thresholds. A per-IP rate limit on /api/cart, a JS challenge cutoff for high-risk countries, and the bot-score line below which to outright block versus challenge are decisions that need real traffic from your site. The agent does not have your last 30 days of access logs and should not guess. The right division of labor is: agent writes the expression, human picks the number, agent writes the test that proves the number does what you intended.
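The test half of that split looks something like this: hammer the endpoint past the agreed limit from one IP and confirm the edge starts rejecting. The URL and limit are placeholders, and this should only ever be pointed at a staging zone you own.

```python
# Sketch: prove a per-IP rate limit fires where the human said it should.
# URL and LIMIT are placeholders; run only against a zone you own (staging).
import urllib.request
import urllib.error

URL = "https://staging.example.com/api/login"   # placeholder
LIMIT = 10                                      # the human-picked per-minute limit

statuses = []
for _ in range(LIMIT + 5):
    req = urllib.request.Request(URL, data=b"{}", method="POST",
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            statuses.append(resp.status)
    except urllib.error.HTTPError as e:
        statuses.append(e.code)

# Expect normal responses up to the limit, then 429s (or a 403 challenge page).
over_limit = statuses[LIMIT:]
print("statuses:", statuses)
print("rate limit fired:", any(code in (429, 403) for code in over_limit))
```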
Which Cloudflare WAF features are on the Free plan and which need Pro or higher?
On Free you get unlimited Layer 3 and Layer 4 DDoS mitigation, Bot Fight Mode (the simple version), 5 custom WAF rules, basic rate limiting at modest thresholds, and access to the Free managed ruleset. On Pro ($25/mo per zone as of 2026-05-07) you get the Cloudflare Managed Ruleset, more custom rules, Super Bot Fight Mode with stronger bot scoring, page rules, and image optimization. On Business and Enterprise you unlock the OWASP managed ruleset, custom certificates, Argo, and the higher rate-limit ceilings. For most c0nsl SMB clients the right starting point is Pro plus 6 to 10 carefully scoped custom rules, which lands well inside the published $500 to $2,000 small-integration tier on this site.
What does the Cloudflare wirefilter language look like, and is it really safe to let an AI agent write it?
Wirefilter is Cloudflare's expression language for matching HTTP requests. An expression like (http.request.uri.path matches "^/api/" and ip.src.country in {"CN" "RU" "KP"} and cf.bot_management.score lt 30) is the kind of clause Claude Code will produce in seconds, with the right operators and the right field names from the official docs. The reason it is safe to delegate is that wirefilter is a closed, documented language with no arbitrary code execution. A wrong expression either fails validation in the Cloudflare UI or matches the wrong traffic. It cannot escape the rule engine. The bigger risk is not the language, it is the threshold inside the expression: cf.bot_management.score lt 30 versus lt 10 is the difference between blocking some bots and blocking your own marketing analytics tag.
What are the three thresholds Claude Code cannot pick for me?
First, the per-IP rate limit on hot endpoints. /api/cart, /api/login, /search, and /webhook each have a different normal request rate, and the wrong limit either lets a credential-stuffing attack through or breaks a legitimate user who clicks fast. Second, the country-level challenge cutoff. Issuing a JS challenge to traffic from a long tail of countries you do not serve is almost free; issuing a managed challenge to traffic from your second-biggest customer market is a conversion incident. Third, the bot-score block-versus-challenge line. Cloudflare's bot management score runs 1 to 99. A site that depends on aggregator scrapers (price comparison, search engines that respect robots.txt, partner integrations) wants to challenge below 30 and only block below 5. A site that has zero legitimate bot traffic can block aggressively under 30. None of these three numbers comes out of a documentation page.
Will edge WAF rules block Claude Code's own WebFetch or ClaudeBot from crawling my site?
It depends on what you write. Anthropic has published the user-agent strings and IP ranges its crawlers use; ClaudeBot identifies itself in user-agent and respects robots.txt. If you write a custom rule that matches user-agents containing 'bot' or that blocks all traffic with a low Cloudflare bot score, you can absolutely catch ClaudeBot in the net along with everything else. The right pattern, if you want AI assistants to be able to read your public pages, is to allowlist named user-agents (ClaudeBot, GPTBot, PerplexityBot, OAI-SearchBot, etc.) before any general bot rule, then block or challenge below the bot-score line. If you do not want them, you also have to write that rule explicitly; the default Cloudflare ruleset does not block named AI crawlers for you.
What is the actual workflow when Claude Code helps with this?
On a real engagement it goes like this. Pull the last 30 days of access logs from Cloudflare or whatever edge sits in front of your origin. Have Claude Code parse the log shape, surface the top URIs by request count, the top source countries, and the bot-score distribution. Ask it to draft 6 to 10 wirefilter rules from a written specification you give it (the human-picked thresholds), each in a single comment-annotated block. Run them in log mode for 24 hours. Read the action log. Promote the rules that fire on real attack traffic to challenge or block, lower the threshold on the ones that fire too often, drop the ones that produce false positives. Claude Code is in the loop for the first and third step; the human picks the numbers in step two and reads the result in step four.
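Step one, the log-shape pass, is the kind of thing the agent produces in a few minutes. A sketch, assuming JSON-lines edge logs with ClientRequestPath, ClientCountry, and BotScore fields (Logpush-style names; rename to match your export):

```python
# Sketch: the "parse the log shape" step. Assumes JSON-lines edge logs with
# ClientRequestPath, ClientCountry, and BotScore fields (Logpush-style names);
# rename to whatever your export actually uses.
import json
import sys
from collections import Counter

paths, countries, score_bands = Counter(), Counter(), Counter()

with open(sys.argv[1]) as f:
    for line in f:
        rec = json.loads(line)
        paths[rec.get("ClientRequestPath", "?")] += 1
        countries[rec.get("ClientCountry", "?")] += 1
        score = rec.get("BotScore")
        if score is not None:
            # Coarse bands that map onto block / challenge / watch / leave-alone.
            band = "1-4" if score < 5 else "5-29" if score < 30 else "30-69" if score < 70 else "70-99"
            score_bands[band] += 1

print("top URIs:", paths.most_common(10))
print("top countries:", countries.most_common(10))
print("bot-score bands:", dict(score_bands))
```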
How does this compare to using Vercel's built-in firewall or Fastly's Next-Gen WAF?
Vercel's firewall is a thin layer over the Next.js platform with built-in DDoS mitigation included on Pro and Enterprise. It is the right choice if your origin is Vercel and your team would not configure Cloudflare correctly anyway. Fastly's Next-Gen WAF (formerly Signal Sciences) is a different shape: it observes traffic at the edge and feeds a request-scoring model that can run in front of Vercel, Cloudflare, or your own VCL service. It is the right choice for higher-traffic SaaS where you also want signal back into a SOC. For a 5 to 15 person SMB without a security team, Cloudflare's free or Pro tier covers the realistic threat model and is what fits the small-integration retainer band. The decision is mostly about where your origin already is and how much custom telemetry you want, not about WAF features in the abstract.
What is a realistic budget for this work on a small business site?
If you are starting from a Cloudflare zone with no custom rules, a documented goal (block credential stuffing on /api/login, mitigate scraping on /search, allow AI assistants on public pages, challenge anomalous traffic from countries outside your customer base), and 30 days of logs to look at, the work fits the published $500 to $2,000 small-integration tier on c0nsl.com. The deliverable is a written ruleset, the wrangler or Terraform config that deploys it, a 24-hour log-mode run with the rules disabled, a reading of the action log, and a final promote-or-tune session. If your stack also requires custom Workers code, multi-zone alignment, a Fastly VCL component, or a SOC-style alerting wire-up, that is a different shape and lands closer to the $2,000 to $10,000+ custom system tier. The rate is on the page, no rate-card games.
What does Claude Code get wrong on this topic if you do not babysit it?
Three failure modes show up on every engagement. First, it overfits to recent log lines and writes a rule that protects against last week's specific attack but not against the obvious next variant. The fix is to keep the rule expressions general (path prefix plus risk signal, not a literal CVE-style regex). Second, it under-allows known-good bots. If you do not give it an explicit allowlist of user-agents you want indexing your site, it will treat all bots the same and your search-engine traffic suffers a week later. Third, it picks round-number thresholds (60 requests per minute, score lt 30) that look reasonable but are not anchored in your real traffic. The fix is to feed it the actual percentiles from your logs and have it write the rule from those numbers, not from priors.
Is any of this overkill for a small marketing site with a few hundred visitors a day?
Mostly yes. If your site is static, has no auth, no /api endpoints behind it, and no inventory of real value (a directory, a price feed, an email list), Cloudflare's Free tier with Bot Fight Mode on is a fine answer and you should not pay anyone to write 10 wirefilter rules for you. The work in this guide is for sites where there is something to protect: a login, a cart, a tenant-request form, a booking endpoint, a search index that costs money to serve, or an API that competitors would scrape. If you are unsure which side of the line you are on, the $75 consult on the homepage is the cheapest way to find out without a months-long discovery.