AI automation for SMBs: the 20% to never automate first, then the 80% that's safe.
Every other guide on this topic answers the wrong question. They rank twelve tools and list ten workflows you should automate. The higher-leverage decision for a 5 to 15 person SMB in 2026 is the inverse: which slice of your work should never be the AI's first job. Get the do-not-automate list right and the rest of the project shrinks to a $500 to $10,000 fixed scope. Get it wrong and the project ends up on Reddit.
How should an SMB approach AI automation in 2026?
Pick one repetitive workflow that does not touch emotionally loaded edge cases (refund escalations, complaints, anything that could land in a small-claims filing). Scope the 80% that is safe to automate deterministically. Hard-route the remaining 20% to a named human. Budget $500 to $10,000 once for the build, plus optional $1,000 to $5,000 per month for ongoing maintenance.
Source for the price brackets and the SKU catalog: c0nsl.com/services.
Why every other playbook on this leads you astray
Two patterns dominate the existing content. The first is the twelve-tool listicle: a marketing team that has never shipped a production agent ranks Zapier, Make, n8n, Lindy, Relevance, UiPath, and a few model-name brands in a single column. No tier sort, no failure mode discussion, no cost grounding. The second is the agency funnel: a former social-media operator who pivoted to AI and now sells a $5,000 cohort that teaches SMB owners to start their own AI agency. Neither pattern produces a working implementation in your shop. Both produce a buying decision that costs three to five times what the right one would have cost.
The honest order of operations is different. Decide which 20% of the work you will not let an autonomous system touch on day one. Pick one of the workflows in the remaining 80% that has a visible pain signal in the last seven days. Sort it for tool fit. Ship it. Measure for six weeks. Add the next workflow only after the first one runs without intervention for a month. The shape that wins for SMBs is small, sequential, and reversible. The shape that loses is large, parallel, and locked in by a twelve-month retainer.
The 20% to never automate first
Four categories show up on every intake call where the buyer is already three months into a failed AI rollout. Each one looks like an attractive target on the surface (high volume, looks repetitive, vendor case studies exist) and each one is the wrong place to start. Read the bottom line first, then the why.
Refund and damage escalations
The customer is already angry, the order context is incomplete, and the legal exposure is non-trivial. An AI agent that auto-issues a refund inside the wrong policy line burns trust faster than a delayed human reply does. Keep this routed to a person, log everything the AI saw, let the human pick the resolution.
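That routing pattern can be sketched in a few lines. The keyword list, ticket shape, and log path here are all illustrative assumptions, not a real implementation; the point is that the agent only classifies and logs, and a person picks the resolution:

```python
import json
import time

# Illustrative trigger list; a real one comes from your own ticket history.
ESCALATION_KEYWORDS = {"refund", "damaged", "chargeback", "lawyer", "small claims"}

def route_ticket(ticket: dict, ai_summary: str) -> dict:
    """Hard-route refund/damage escalations to a named human.

    The agent never issues the refund. It classifies, writes an
    append-only audit entry of everything it saw, and hands off.
    """
    text = (ticket.get("subject", "") + " " + ticket.get("body", "")).lower()
    escalate = any(kw in text for kw in ESCALATION_KEYWORDS)

    audit_entry = {
        "ts": time.time(),
        "ticket_id": ticket["id"],
        "ai_summary": ai_summary,
        "routed_to": "human:escalations" if escalate else "queue:standard",
    }
    # Append-only log so a human can reconstruct what the agent saw.
    with open("audit.log", "a") as f:
        f.write(json.dumps(audit_entry) + "\n")
    return audit_entry
```

The keyword match is deliberately dumb: a false positive costs one human review, a false negative costs a customer, so the trigger list should err wide.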
Anything emotionally loaded
Bereavement messages, mental-health adjacent intake, complaints about a staff member, threats. The 80/20 split exists for a reason: the 20% is the part where the cost of getting it wrong is asymmetric. An AI that gets it 95% right still has a 5% tail that ends up screenshotted on Reddit. Hard route this to a human, every time.
Pricing and contract negotiation
An LLM that can quote a discount is also an LLM that can be social-engineered into quoting one it should not. Until you have measured your false-positive rate against a real adversarial set (you have not), keep the agent on read-only reads of the price sheet and route the actual concession decision to a person with authority.
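One way to enforce that boundary in code: the agent's only tool is a read-only price lookup, and any price below the posted number raises instead of quoting. The SKU names and prices below are made up for the sketch:

```python
# Illustrative price sheet; the agent can read it, never amend it.
PRICE_SHEET = {"small_integration": 2000, "custom_system": 10000}

class ConcessionRequiresHuman(Exception):
    """Raised whenever a quote would deviate below the posted price."""

def lookup_price(sku: str) -> int:
    """The one tool the agent is allowed to call: read-only."""
    return PRICE_SHEET[sku]

def quote(sku: str, requested_price=None) -> int:
    posted = lookup_price(sku)
    # Any request below the posted price is a concession decision,
    # and concessions go to a person with authority, not the model.
    if requested_price is not None and requested_price < posted:
        raise ConcessionRequiresHuman(f"{sku}: {requested_price} < posted {posted}")
    return posted
```

The guard lives outside the prompt on purpose: a prompt instruction can be argued with, an exception cannot.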
First-touch outbound on cold lists
Plenty of vendors will sell you a flow that uses GPT to generate one thousand cold emails a day. Three things happen: deliverability tanks because the warmup math no longer works, your domain reputation gets dragged into the next quarterly Gmail crackdown, and the leads who do reply are mostly other AI vendors testing your intake. Keep cold outbound human-curated until you can prove the unit economics on a small batch.
None of the four categories above are off-limits forever. They are off-limits as the first project, when you have not yet measured your false-positive rate against an adversarial set, when the operations team has not yet built the muscle for handling AI-flagged edge cases, and when there is no audit log in place to reconstruct what the agent saw.
The 80% that is actually safe (and shipped weekly)
Three workflow shapes carry most of the wins for the 5 to 15 person SMBs that come through intake. First is inbound triage: a deterministic classifier (often a small fine-tuned model, often a single Claude Sonnet 4.6 call with a tight prompt) sorts tickets, leads, and intake forms into a small number of buckets, routes the safe buckets to a templated response, and drops anything ambiguous into a human queue with an explanation. Second is scheduled reporting: instead of a person rebuilding the same dashboard every Monday, a pipeline pulls the data, asks a model for the three things that changed, and lands a five-paragraph summary in the right inbox. Third is internal document Q&A: the operator has a folder of policies, contracts, and SOPs that nobody on the team has read in a year, and a retrieval-augmented prompt over that folder saves more time per week than any customer-facing agent.
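The first shape, inbound triage, reduces to a few lines of routing logic around the model call. In this sketch the model call itself is elided; the `Triage` result, bucket names, and confidence floor are assumptions for illustration, not a real API:

```python
from dataclasses import dataclass

# Only pre-approved buckets ever get an automatic templated reply.
SAFE_BUCKETS = {"order_status", "password_reset", "hours_and_location"}
CONFIDENCE_FLOOR = 0.85

@dataclass
class Triage:
    """What the classifier (a small model or a single tight-prompt call) returns."""
    bucket: str
    confidence: float
    explanation: str

def route(t: Triage) -> str:
    # High-confidence, pre-approved bucket: templated response.
    if t.bucket in SAFE_BUCKETS and t.confidence >= CONFIDENCE_FLOOR:
        return f"template:{t.bucket}"
    # Anything ambiguous or off-list: human queue, with the model's
    # explanation attached so the reviewer starts with context.
    return f"human_queue:{t.explanation}"
```

Note that the safe-bucket list is an allowlist, not a blocklist: a bucket the model invents on the fly routes to a human by default.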
None of these three is novel. All three are unsexy. All three ship in the $500 to $2,000 small-integration band on a real calendar, not a sales-deck calendar. The first one a shop picks should be the one with the highest pain signal in the last week, not the one with the most attractive vendor demo.
“Thirty minutes, a named senior engineer on the call. You leave with three concrete automations scoped to your stack, expected hours saved per week, and a fixed-scope quote inside 48 hours.”
c0nsl.com homepage hero, posted rate. Refunded if the call does not produce three named automations and an hours-saved estimate.
The reason the consult is priced at $75 instead of free is the same reason the rest of the rate sheet is published: a posted number filters in operators who are ready to scope and filters out a discovery cycle that would otherwise eat both sides of the call.
Twelve named services, posted prices, no hidden rate card
Every workflow you might want to automate maps to one of twelve SKUs on the c0nsl service catalog. Each SKU has a posted price band and a one-paragraph description of what is in scope and what is not. The catalog is not a sales menu, it is a commitment device: if the work I am proposing does not fit one of these twelve, I say so, and we either rescope or I refer you to someone who actually does that thing. The full descriptions and the failure-mode notes live on the services page.
The catalog: twelve SKUs that cover most SMB AI work. The full SKU table, with posted price bands and scope notes, lives at c0nsl.com/services.
The OAuth scope mistake every SMB AI stack made in 2025
One specific risk worth flagging on every page about SMB AI in 2026, because nobody is talking about it at the right scale. On April 19 and 20, 2026, a Context.ai employee account with an “Allow All” OAuth scope into Vercel's Google Workspace was compromised. The attacker pivoted into Vercel's environment variables and ShinyHunters listed the data for $2 million. The general lesson for any SMB that connected an AI tool to Gmail, Drive, a CRM, or a help desk last year: you almost certainly skimmed the OAuth consent screen, the vendor almost certainly asked for read-write where the workflow needed read-only, and you almost certainly never rotated. SVC-007 on the catalog above is a fixed-fee version of the audit and remediation. You can also walk it yourself in an afternoon by opening the OAuth consent page of every AI tool in your stack and downgrading every scope that does not match an actual workflow.
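The afternoon audit is a diff between what each app was granted and what its workflow needs. A minimal sketch of that diff; the app names are invented, the two Gmail scope URLs are the real Google ones (full mail access versus read-only), and in practice the granted list comes off each consent page:

```python
# What each connected app was actually granted (from its consent page).
GRANTED = {
    "ai-inbox-helper": ["https://mail.google.com/"],  # full read-write mail
    "report-bot": ["https://www.googleapis.com/auth/gmail.readonly"],
}

# What the workflow behind each app actually needs.
NEEDED = {
    "ai-inbox-helper": ["https://www.googleapis.com/auth/gmail.readonly"],
    "report-bot": ["https://www.googleapis.com/auth/gmail.readonly"],
}

def over_broad(granted: dict, needed: dict) -> dict:
    """Per app, every granted scope that no workflow justifies."""
    return {
        app: sorted(set(scopes) - set(needed.get(app, [])))
        for app, scopes in granted.items()
        if set(scopes) - set(needed.get(app, []))
    }
```

Every scope the function flags is a downgrade-and-rotate action item, not a judgment call.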
What this looks like as a Reddit-comment answer
Compressed for the people who will scroll: do not start with the workflow that has the most volume. Start with the workflow that has the highest pain signal and the lowest blast radius if the AI gets it wrong. Ship one. Measure for six weeks. Audit your OAuth scopes before you ship anything else. Pay for an honest scoping conversation up front, refuse any vendor who hides their rate card, and treat any pitch that includes the word “cohort” or “accelerator” as a course sale, not an implementation. The right first project is small, the right second project follows the first by at least a month, and the right consultant is the one whose rate sheet you already saw before you booked the call.
Bring me your top three workflows, get the do-not-automate ring back.
A 30 minute call walks the 80/20 sort live against your actual workflow inventory and ends with a fixed-scope quote inside 48 hours, named engineer, posted rate.
Frequently asked questions
What is the single biggest mistake SMBs make with AI automation in 2026?
They ask which workflow to automate first instead of which workflow to keep human. The asymmetry is brutal: a workflow you successfully automated saves you a few hours a week, a workflow you incorrectly automated produces a public failure that costs you a customer or a fine. The right starting question is 'which 20% of my support, intake, or outbound is emotionally loaded, legally exposed, or impossible to recover from?' Mark that as off-limits to an autonomous agent. Everything outside that ring becomes the candidate set, and the candidate set is where the standard tool sort applies. Doing this in the other order is how shops end up with a Reddit post titled 'how this AI agent refunded $40K in fraudulent returns overnight.'
Should I hire an AI consultant or just buy a SaaS automation tool?
Most SMBs over-buy on tooling and under-buy on scoping. A $99 a month no-code tool with a junior owning it usually outperforms a $1,200 a month enterprise platform with nobody owning it. The right time to bring in a consultant is at the scoping stage, before the tool selection: someone who can sort your workflows into the tiers that actually decide tool fit (does this workflow have an API on both ends, is there a desktop app in the middle, does the automation have to drive the UI by pixels or by an accessibility tree). After that, the build is often a smaller scope than the original sales pitch implied. The c0nsl shape is a $75 consult that ends with three named automations, an hours-saved estimate per workflow, and a fixed-scope quote inside 48 hours. That is short of a course program and short of a six-figure agency retainer on purpose.
How much should AI automation actually cost a 5 to 15 person SMB?
The honest brackets, posted on c0nsl.com, are $75 for a 30 to 60 minute consult, $500 to $2,000 for a single small integration shipped to production, $2,000 to $10,000+ for a custom system that mixes flows with audit logging and a recovery path, and $1,000 to $5,000 per month if you want ongoing maintenance. A typical first engagement for a 5 person Shopify or property-management shop lands in the $2,000 to $5,000 one-time band plus an optional retainer. Anyone quoting you $30K to $100K up-front is selling a strategy deck or a course, not an integration. Anyone quoting you under $500 is shipping you a Zapier template you could have bought yourself. Both ends of that spread exist in the market right now and both are worth walking away from.
What is the OAuth scope risk every SMB is now exposed to and missed?
On April 19 and 20, 2026, a Context.ai employee account that had been issued an 'Allow All' OAuth scope into Vercel's Google Workspace was compromised. The attacker pivoted into Vercel's environment variables and ShinyHunters listed the data for sale at $2 million. The general lesson, applied to SMB AI stacks: every AI tool you integrated last year asked for OAuth scopes you skimmed on the consent screen. Most asked for read-write access to your inbox, your drive, your calendar, or your CRM, when the actual workflow needed read-only on a single mailbox or a single folder. Audit the scopes, downgrade them, rotate every secret a vendor has ever held. SVC-007 on the c0nsl catalog is a fixed-fee version of this audit; you can also do it yourself in an afternoon by walking the OAuth consent pages of every connected app.
Can I run AI automation locally so my customer data never leaves my building?
Yes for a meaningful slice of the workflows, no for the rest. Open-weight models in the Llama 3.1 70B and Mistral Large 2 family are competitive with GPT-4-class models on the bounded tasks SMBs actually need: classifying tickets, extracting fields from forms, summarizing call transcripts, drafting first-pass replies. A single workstation with two consumer GPUs can serve a 5 to 15 person team comfortably. The honest tradeoffs are: frontier reasoning tasks (multi-step planning, hard math, niche-language translation) still favor the closed frontier models, the maintenance burden of running your own stack is real, and the cost equation only beats frontier APIs after the first 12 to 18 months. For clinics, law firms, and bookkeeping-adjacent shops where the data genuinely cannot leave, the math is straightforward. For everyone else, the right pattern in 2026 is hybrid: local inference for sensitive workflows, frontier APIs with prompt caching for the rest.
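The hybrid pattern is a two-question router: sensitivity first, capability second. A minimal sketch; the tag names and backend labels are assumptions for illustration, and the sensitivity check deliberately wins over everything else:

```python
# Illustrative data-sensitivity tags; a real set comes from your compliance map.
SENSITIVE_TAGS = {"phi", "client_privileged", "financial_pii"}

def pick_backend(data_tags: set, needs_frontier_reasoning: bool) -> str:
    """Route one workflow to local inference or a frontier API.

    Sensitivity always wins: if the data cannot leave the building,
    the workflow runs locally even at some quality cost.
    """
    if data_tags & SENSITIVE_TAGS:
        if needs_frontier_reasoning:
            # Worst quadrant: sensitive data plus a frontier-grade task.
            # Run local and keep a human reviewing the output.
            return "local_with_human_review"
        return "local"  # e.g. an open-weight 70B on the office workstation
    return "frontier_api_with_prompt_caching"
```

The worst quadrant (sensitive data plus a task local models handle poorly) is the one place the router refuses to pick for you: it keeps a human in the loop instead.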
How do I pick the first workflow to automate without spending three months in discovery?
Two questions on a single notebook page. First: which repetitive task did somebody on my team complain about in the last seven days? That is your candidate. Second: if the AI got the answer 100% wrong on this task, what is the worst thing that could happen? If the worst case is a typo in an internal Slack message, you can ship a v1 in a week with a junior. If the worst case is a refund issued against policy, a tenant locked out of an apartment, a patient missing a callback, you are in the 20% that needs a human in the loop and the project is bigger than 'first workflow.' Iterate from the small end. Most SMBs have at least three workflows in the safe ring (inbound triage, weekly reporting, internal-document Q&A) that can ship before the team has touched the riskier workflows.
Why do you publish your rates when other consultants do not?
Hiding the rate filters out cost-conscious buyers who would have closed and pulls in buyers who only call after they have decided to spend whatever it takes. The first group is most of the 5 to 15 person SMB market, the second group is enterprise procurement. I am not built for enterprise procurement. The other reason is that the published rate is a short-circuit on the buyer's research loop: instead of three discovery calls trying to figure out if I am affordable, the buyer reads one page on the website and books a $75 consult or does not. The opportunity cost of running a misqualified call is higher than the opportunity cost of losing a budget-mismatched lead.
How is this different from hiring an AI agency on a six-month retainer?
Three differences that matter. First, the engineer doing the work is the engineer on the call; agencies typically sell against a senior name and ship with juniors. Second, the unit of pricing is a scope, not a month; you pay for what shipped, not for the calendar. Third, the deliverable is documented for your in-house team to extend, not built as a black box that requires the agency to maintain in perpetuity. Retainers exist on the c0nsl tier sheet too, in the $1,000 to $5,000 per month band, but they are sized for ongoing maintenance and adjacent build work, not for the original implementation. The healthy customer relationship looks like: one small fixed-scope project, then either no follow-on or a thin retainer. The unhealthy one is the agency model where the customer never owns enough of the stack to leave.