The Luddites Were Right

Written by iPaladin | February 2026

What Family Offices Get Right About AI

We've been using the word wrong for two hundred years.

"Luddite" gets thrown at anyone who resists new technology. The executive skeptical of AI. The operations lead who still prints everything. The founder who trusts her judgment over any system.

But the original Luddites — textile workers in northern England, 1811 — weren't against the machines. They were skilled operators. Many ran the very equipment they later smashed. What they opposed wasn't technology. It was technology deployed without any role for the people who understood the work.

They wanted training. Apprenticeships. Integration. They didn't want to stop the machines. They wanted machines that made craftsmen more capable, not less necessary.

That instinct is everywhere right now. And it's not wrong.

The anxiety I hear from family office executives isn't "AI can't do anything useful." It's "AI is moving fast, the people selling it don't understand our work, and nobody's asking whether we should be in the loop." That's not resistance. That's judgment.

Here's the problem. Most AI being deployed today is built to read. Summarize a document. Answer a question. Draft a memo. These are reading tools, and they're impressive. But in any domain where the stakes are real — where documents don't just say things but create obligations, assign authorities, and establish relationships that need to be tracked across decades — reading is not the hard part. Execution is.

A trust document creates governance requirements. A subscription agreement generates entity structures. A board resolution establishes authorities that need to be maintained for years. AI that reads these documents and tells you what they say hasn't done the work. It's told you what a skilled professional already knew.

The real question is whether AI can do what that professional does — interpret the logic, propose the structures, set up the workflows — and then stop. Wait for her to review. Wait for her to confirm. Execute only with her authority.

That's the design principle the Luddites were fighting for, two centuries before we had language for it. The machine does the work. The craftsman provides the judgment. Neither is optional.

When you build AI this way — propose, wait, confirm, execute — something remarkable happens. The professional gets faster without getting bypassed. Her knowledge gets captured in a system instead of staying in her head. When she eventually leaves, the knowledge doesn't leave with her. It compounds.

When you don't build AI this way, you get the thing every executive is actually afraid of: confident, fast, autonomous systems making decisions in a domain they don't understand, without anyone who does understand checking the work.

That's not a hypothetical fear. That's what happens when AI acts without a human in the loop. And in domains where a wrong answer has real consequences — legal, financial, fiduciary — it's the only thing worth being afraid of.

The Luddites weren't afraid of machines. They were afraid of what happens when machines run without the people who understand the craft. The question was never the technology. It was whether the technology served the craftsmen or replaced them.

Two hundred years later, every organization adopting AI is answering that question. Most of them don't realize it yet.

Jill Creager is the Founder and CEO of iPaladin, The Digital Family Office®.