The economic reset
Traditional senior data consulting requires leverage. One senior architect designs, four mid-level engineers implement, two juniors test, one project manager keeps the trains on time. Margin lives in the gap between what clients pay for the senior and what the firm pays the juniors.
AI breaks this model on both sides. Clients can hire one senior who delivers the work of all eight, and pay accordingly. The senior keeps a much larger fraction of the engagement value because the leverage isn't held by the firm — it's held by AI tools that cost a few hundred dollars a month in tokens.
This is not "AI replaces engineers." It's "AI eliminates the need to scale teams to deliver senior work." Different problem, different consequence.
Three levers
The interesting AI tooling for data engineering is not the chat window. It's three coordinated capabilities working together:
1. Context loading via CLAUDE.md
A CLAUDE.md file at the root of every repository codifies project conventions, stack, architecture decision records (ADRs), and naming standards. It loads automatically every session. A 30-minute onboarding becomes a 30-second tool call.
2. Tool calling via MCP
Model Context Protocol lets AI agents reach into Snowflake, BigQuery, Git, your issue tracker, your doc system. The AI doesn't just suggest SQL; it executes it, observes the result, iterates against real data.
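For a sense of what "reach into" means mechanically: MCP requests are JSON-RPC 2.0 messages, with tool invocations going through the `tools/call` method. A minimal sketch of the wire shape — the tool name and its arguments here are hypothetical examples, not a real server's API:

```python
import json

# Illustrative MCP tool-call request (JSON-RPC 2.0 shape per the MCP spec).
# "execute_sql" is a hypothetical tool a warehouse MCP server might expose.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_sql",
        "arguments": {"query": "SELECT count(*) FROM raw.orders"},
    },
}
print(json.dumps(request, indent=2))
```

The point isn't the JSON — it's that the AI's SQL goes to a real warehouse and the result comes back into the loop.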
3. Iteration loops with checkpoints
Long-running tasks — refactor 20 dbt models, generate test coverage for a package, audit a schema — run as agentic loops where the AI proposes, executes, observes, and reports back at human review checkpoints.
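The loop shape is simple enough to sketch. Here `propose`, `execute`, and `review` are placeholders for the AI's plan step, its tool execution, and the human checkpoint — not a real agent API:

```python
def run_with_checkpoints(tasks, propose, execute, review):
    """Agentic loop sketch: propose, execute, then pause at a human checkpoint.

    `review` returning False stops the run — that's the checkpoint doing its job.
    """
    results = []
    for task in tasks:
        plan = propose(task)            # AI proposes a change
        outcome = execute(plan)         # AI applies it and observes the result
        if not review(task, outcome):   # human review gate between steps
            break
        results.append(outcome)
    return results
```

The structure matters more than the code: every iteration ends at a gate a human can close.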
Combined, these three turn AI from autocomplete into a junior engineer who never needs the basics re-explained. The senior reviews; the AI does.
CLAUDE.md is the most underrated artifact
If you've used Claude Code without a CLAUDE.md file, you've used 30% of its capability. The file is project context that loads automatically every session.
Our default template includes:
- Tech stack with versions — Python 3.11, dbt 1.7, Airflow 2.8, Snowflake account region
- Directory layout convention — where models go, where tests go, where ADRs live
- Naming standards — snake_case for SQL, kebab-case for service names, PascalCase for dbt model files
- The list of "things that look fine but break in our environment" — every project has 3-5 of these
- Active ADRs with one-line summaries and links
- Test/lint/CI commands — exact strings, copy-paste-runnable
- Definitions of done — what "PR ready for review" means here
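Put together, a skeleton of that template looks something like this — every entry below is illustrative, not our actual file:

```markdown
# CLAUDE.md

## Stack
- Python 3.11, dbt 1.7, Airflow 2.8, Snowflake (eu-west-1)

## Layout
- dbt models in models/, tests in tests/, ADRs in docs/adr/

## Naming
- snake_case for SQL, kebab-case for service names

## Known traps
- <the 3-5 "looks fine but breaks here" items go here>

## Commands
- Test: dbt build --select state:modified+
- Lint: sqlfluff lint models/

## Definition of done
- Tests pass, docs blocks present, PR description links the relevant ADR
```

Short enough to maintain, specific enough that a fresh session starts productive.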
Maintaining this file takes 10 minutes a week. Not maintaining it costs 30 minutes per AI session spent re-explaining the basics. Compounded across an engagement, the difference is days.
Treat CLAUDE.md as a living document. After every session where the AI got something wrong because of missing context, add the missing context. Within a month the file converges on "everything an AI needs to know to be productive in this codebase."
The 70/30 rule
Across our engagements, we apply this allocation consistently:
| Allocation | Owner | Examples |
|---|---|---|
| ~70% of work | AI handles | Routine implementation, test scaffolding, refactoring, documentation drafts, schema migrations, DDL translation, PR descriptions, log analysis |
| ~30% of work | Senior owns | Architecture, cutover risk decisions, regulatory judgment calls, root-cause diagnosis, client relationship, trade-off conversations |
The 70% is what used to require junior engineers. The 30% is what always required senior judgment. AI doesn't reduce the 30% — it eliminates the cost of the 70%. That's the whole game.
What we won't delegate
Some categories never go to AI in our shop. The list isn't long, but it's firm:
- Cutover sequencing. A business-risk judgment that hinges on which parts of the organisation can absorb which failure modes.
- Discrepancy diagnosis. Pattern matching is not root-cause analysis. AI confidently proposes plausible explanations that mislead investigations.
- Compliance decisions. Regulators don't accept "the AI said it was fine." GDPR, SOX, MIFID — humans sign.
- Client-facing recommendations. Trust is built on accountability; accountability requires a human signing.
- Architectural trade-offs that lock in for years. AI can describe trade-offs. Humans live with them.
The principle: anything that becomes embarrassing in a postmortem belongs to a human, not a tool.
A practical workflow
Let me walk through a real session shape. Migrating an Oracle PL/SQL package to dbt models:
- Open Claude Code in the project repo. CLAUDE.md loads automatically.
- Show Claude Code the PL/SQL package: "Decompose this into dbt models per our conventions."
- AI proposes: 4 staging models, 2 intermediate, 1 mart model. Identifies one PL/SQL function that genuinely needs to remain procedural — flags it for me explicitly.
- I review the decomposition (~5 minutes). Agree with 6 of 7 models, push back on one boundary. AI adjusts.
- AI generates models + dbt tests + docs blocks. Runs `dbt build` via MCP. Reports test failures.
- I read the failures. Some are real bugs (AI got nullable wrong on one field), some are expected (target table doesn't exist yet — needs first run). I tell AI which is which.
- AI fixes the real bugs, generates a PR description, opens the PR via the GitHub MCP server.
- I review the PR diff (~15 minutes). Merge.
Total: roughly 30 minutes of senior attention. Output: an Oracle PL/SQL package translated to a complete dbt subgraph with tests and docs.
Without AI tooling, the same task is 4-6 hours of senior implementation, or 1-2 days handed off to a junior.
Cost economics
Token costs for serious AI tooling usage land around $200-400 a month for one engineer working AI-augmented full time. Compare that to the cost of one mid-level engineer for a month and the math is obvious.
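The arithmetic is worth making explicit. Using the mid-point of that range and an assumed fully-loaded engineer cost — both figures illustrative, not quotes:

```python
# Back-of-envelope version of the comparison above. All numbers are
# illustrative assumptions, not billing data.
token_spend_per_month = 300            # mid-point of the $200-400 range
mid_level_engineer_per_month = 12_000  # assumed fully-loaded monthly cost

ratio = mid_level_engineer_per_month / token_spend_per_month
print(f"Tokens cost roughly 1/{ratio:.0f} of one mid-level engineer per month")
```

Even if the engineer figure is off by half in either direction, the conclusion survives.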
The interesting cost isn't tokens. It's discipline: maintaining CLAUDE.md, writing good prompts, reviewing AI output critically, knowing when to abandon an AI-generated approach because it went down a wrong path. None of these are tokens. All of them are senior time.
The senior who wins with AI is the one whose ego accepts that 70% of their previous job is now done by a tool. The senior who loses is the one who insists on hand-rolling the 70% to feel productive.
Failure modes
Three ways this goes wrong, in order of frequency:
Vibe coding
Senior accepts AI output without review. Bugs propagate. One day someone notices the test suite has been silently passing because the assertions were generated by AI alongside the implementation — the tests assert the buggy behaviour as correct. Catastrophic. The defense: read every line of generated code as if a junior wrote it.
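The failure mode is easy to demonstrate with a toy example. The function and its test below are invented for illustration — the test was "derived" from the buggy implementation, so it passes while enshrining the bug:

```python
def net_amount(gross: float, vat_rate: float) -> float:
    """Toy example: net from gross. Bug: should divide by (1 + vat_rate)."""
    return gross * (1 + vat_rate)

def test_net_amount():
    # The kind of assertion an AI derives from the implementation itself:
    # it asserts the buggy behaviour as correct, and passes.
    assert net_amount(100.0, 0.2) == 120.0

test_net_amount()  # green suite, wrong maths
```

A human reviewer who knows what net-of-VAT means catches this in seconds. A reviewer who only checks that tests pass never does.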
Token blowout
Long agentic loops without checkpoints can burn $50 in 20 minutes: AI agents iterate on their own output, and each iteration is a full round-trip. The defense: budget alerts and review gates between major steps.
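A budget gate can be as crude as a hard spend cap checked every step. A minimal sketch — the class and figures are illustrative, not a real billing API:

```python
class BudgetExceeded(RuntimeError):
    """Raised when estimated spend passes the hard cap."""

class TokenBudget:
    """Tracks estimated spend for an agentic run and enforces a hard cap."""

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, usd: float) -> None:
        """Record one step's estimated cost; stop the run if the cap is blown."""
        self.spent_usd += usd
        if self.spent_usd > self.cap_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.2f} of ${self.cap_usd:.2f} cap"
            )
```

Crude, but it converts "noticed the bill next week" into "loop stopped at step four".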
Confidence inversion
Senior trusts AI more than themselves. The AI is wrong; the senior overrides their gut. Always trust your gut over confident AI output you can't justify. AI is fluent enough to sound right when it's wrong — that's its most dangerous property.
Treat AI like a very fast junior. Useful, replaceable, never trusted blindly. Your job as senior is to review, decide, sign. The job hasn't changed; the time required for the routine 70% has.
What this means for clients
Three concrete things change when you hire an AI-augmented boutique instead of a traditional consultancy:
- Faster. Migrations that took 3-6 months take 6-10 weeks at the same quality.
- Cheaper. Single-senior pricing on work that previously required teams.
- Cleaner. The forced discipline of AI-augmented work — good docs, good tests, ADRs, CLAUDE.md — ends up better-documented than rushed team work, almost as a side effect.
The trade-off: less parallelizable. We take one or two engagements at a time. We can't scale by adding bodies. Most of our clients prefer this trade.
Closing
The boutique consultancy with one senior delivering enterprise output isn't a magic trick. It's the rational response to AI tooling that — for the first time since the 1990s — makes senior consulting economically viable without leverage from junior labour.
We didn't invent this model. But we did decide to bet our practice on it. Eighteen months in, we have no regrets, two production AI products to show for it, and a quietly growing roster of clients who prefer working with one accountable senior instead of a multi-tier consulting hierarchy.
If your team is rebuilding data infrastructure and would prefer working with one accountable senior, get in touch.