The dominant story about AI in the last three years has been a fear story. The robots are coming. Your job is on the line. White-collar work is over. We have customers who’ve been told this so often that they walk into our first call expecting us to confirm it.
We don’t. Not because we’re selling them something cheerful, but because the actual experience — in plant managers’ offices, in CFO suites, in SOC dashboards, in legal departments — doesn’t match the headlines. The story we see, deployment after deployment, is the opposite: AI as a power tool that the existing humans use to do their jobs better. Not as a replacement.
And the way it stays a power tool, rather than something else, has a name: open weights, on your own hardware.
The calculator analogy is better than people give it credit for
Calculators didn’t replace accountants. They removed an entire layer of grunt work — arithmetic by hand — and let accountants spend their time on the things that actually require judgement: structuring a return, advising on tax strategy, finding the inconsistency in a set of books, explaining the result to the client.
The number of accountants didn’t go down. The work they do moved up the value chain. The salary moved with it.
The same pattern is showing up everywhere we deploy. The bookkeeper who used to spend 60% of her week reconciling invoices now spends 15% of her week reviewing AI reconciliations and 45% on the parts of her job that always required judgement. The plant engineer who used to manually transcribe machine logs now reviews AI-generated incident reports and uses the saved hours to actually walk the floor. The legal assistant who used to summarise contracts now reviews AI summaries and cross-checks with knowledge the AI doesn’t have.
None of these jobs disappeared. All of them got more interesting. And in conversation, none of these people are scared. They're busy, sometimes overwhelmed, but not scared.
Why “on your own hardware” changes the conversation
There’s a version of this story that is scary. It’s the version where the AI is a black box owned by a vendor in a different jurisdiction, the model can be retrained or deprecated without you, your data is part of the training corpus, and the cost structure can change at the vendor’s discretion. In that world the AI isn’t a tool you own — it’s a utility you rent, on terms set by someone else.
That world is a real one. It’s the default if you build on top of a managed cloud LLM and don’t think hard about what you’re committing to.
It’s also not the only world. Open weights running on your own GPUs flip every one of those dynamics:
- The model is yours. You downloaded it. It’s on disk. You can run it forever, including in a year when the vendor has moved on, gone bankrupt, or been bought by someone with different priorities.
- Your data stays put. Prompts, responses, fine-tuning data, retrieval corpora — nothing leaves your network unless you choose to send it.
- The cost structure is predictable. You bought the GPUs. The marginal cost of an inference is electricity and amortised hardware. There is no per-token bill that can balloon.
- The behaviour is yours to shape. You can fine-tune. You can swap models. You can run your own evals and reject regressions. The vendor has no input.
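The cost point in the list above can be made concrete with a rough break-even sketch. Every number here — GPU price, power draw, throughput, the cloud per-token rate — is an illustrative assumption, not a quote from any vendor:

```python
# Rough break-even sketch: owned GPU vs per-token cloud pricing.
# All figures below are illustrative assumptions, not real quotes.

GPU_PRICE_EUR = 30_000          # one inference GPU, amortised over 3 years
AMORT_MONTHS = 36
POWER_KW = 0.7                  # average draw under load
ELEC_EUR_PER_KWH = 0.25
TOKENS_PER_SEC = 1_500          # batched throughput; model-dependent

CLOUD_EUR_PER_1M_TOKENS = 10.0  # hypothetical managed-API rate

def owned_cost_per_1m_tokens() -> float:
    """Electricity plus amortised hardware for one million tokens."""
    hours_per_million = 1_000_000 / TOKENS_PER_SEC / 3600
    electricity = hours_per_million * POWER_KW * ELEC_EUR_PER_KWH
    amortisation = GPU_PRICE_EUR / AMORT_MONTHS / (30 * 24) * hours_per_million
    return electricity + amortisation

print(f"owned: {owned_cost_per_1m_tokens():.2f} EUR / 1M tokens")
print(f"cloud: {CLOUD_EUR_PER_1M_TOKENS:.2f} EUR / 1M tokens")
```

With these made-up inputs, the owned marginal cost lands well under one euro per million tokens; the real point is not the specific number but that it is a function of your own hardware and electricity prices, not of a vendor's pricing page.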
In that world, AI looks a lot less like a force replacing you and a lot more like a power saw replacing a hand saw — a thing you operate, that does the heavy work, that you remain accountable for.
The human stays in the loop — and that’s by design
Every workflow we ship has explicit checkpoints where a human reviews, approves, edits, or rejects. We don't treat this as temporary scaffolding to be removed once the AI is "good enough." We treat it as the architectural commitment.
The reasons are practical, not philosophical:
- Liability stays with the human. When the AI makes a mistake — and every model makes mistakes — you need a human who saw it and signed off. Otherwise the legal and audit story falls apart.
- The human catches things AI can't. Tone, political context, customer history, the off-screen reason this contract is unusual. AI handles the 80% case beautifully and confidently misses the other 20%.
- The human compounds. Every approval and rejection is feedback. The system learns from a human in the loop in a way it cannot from a fully automated pipeline.
- The org stays comfortable. Adoption goes up when people feel they’re using the AI, not being used by it. That’s a culture point but it’s also an architecture point.
People don’t reject AI that makes them better at their jobs. They reject AI that takes decisions away from them. The architectural choice is whose hand is on the steering wheel.
What this looks like in a real department
A finance team we work with reconciles ~30,000 invoices/month against ERP entries. Pre-deployment, three full-time AP clerks spent most of their week on this. The AI now flags every line, classifies it, proposes a match, and surfaces the 3% it’s genuinely uncertain about.
The same three people are still there. They review every AI proposal in batch (a few seconds each), spend real time on the uncertain 3%, handle escalations, and have started doing higher-leverage work the team never had capacity for — vendor consolidation analyses, payment-term optimisation, fraud-pattern reviews.
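The routing described above — confident proposals go to fast batch review, the uncertain tail gets individual attention — reduces to a confidence threshold. The threshold value and field names are assumptions for illustration, not calibrated figures from the deployment:

```python
# Confidence-threshold routing: confident matches go to batch review,
# the uncertain tail goes to a manual queue. Threshold and field names
# are illustrative assumptions, not calibrated values.
from dataclasses import dataclass

@dataclass
class MatchProposal:
    invoice_id: str
    erp_entry_id: str
    confidence: float  # model's score in [0, 1]

UNCERTAIN_BELOW = 0.9  # tuned so a small tail lands in the manual queue

def route(proposals: list[MatchProposal]):
    batch_review, manual_queue = [], []
    for p in proposals:
        if p.confidence >= UNCERTAIN_BELOW:
            batch_review.append(p)    # seconds per line in batch
        else:
            manual_queue.append(p)    # real attention per line
    return batch_review, manual_queue

batch, manual = route([
    MatchProposal("INV-001", "ERP-881", 0.99),
    MatchProposal("INV-002", "ERP-902", 0.55),
])
```

In practice the threshold is whatever makes the manual queue small enough to get genuine attention — in the deployment described above, about 3% of lines.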
Headcount: unchanged. Morale: better, because the boring work is gone. Output: more, because the interesting work is now being done. Audit: stronger, because every line is double-reviewed. CFO satisfaction: visible.
This is the boring, unglamorous, true story of AI in 2026. It doesn’t make the news because it’s not scary. It does make the customer happy.
The fear story sells clicks. The boring story sells the platform.
What we tell people who are nervous
- The threat to your job is mostly not the AI. It’s coworkers and competitors who learn to use it before you do. That problem is fixable, by you, this quarter.
- Insist on understanding the architecture. “Where does my data go?” is the right question. The answer should not include a third-party cloud you can’t name a contact at.
- Stay in the loop on purpose. Don’t hand a workflow over fully. Build review steps in. Your judgement is the moat.
- Pick employers who deploy AI as a tool, not as a replacement plan. The difference is visible in two minutes once you know what to look for.
The most interesting time to work in any field with AI in it is right now. The boring story — humans doing better work, with better tools, on infrastructure they own — is the one that’s actually playing out, customer by customer, in factories and finance teams across Europe.
It’s also the one we’re betting the company on.