Every six months a new piece appears explaining that everyone needs to learn “prompt engineering” to survive the AI shift. We’ve hired a lot of people in the last 18 months. We have not once cared whether someone could craft a clever prompt. The skills that actually matter are different — and most of them are old.
1. The ability to specify what you want
This is the master skill of the AI era. It looks like prompting only because the tool you specify through right now is text. The skill is older: it’s the same one that distinguishes a senior engineer writing a clear ticket from a junior writing “please fix the thing that’s broken.”
People who can specify well share three habits: they think about the goal before the method, they articulate constraints explicitly, and they preempt the obvious wrong interpretations. Those habits make them good at managing humans, good at writing requirements, good at briefing designers — and good at directing AI agents. The medium is incidental. The skill is enduring.
2. Taste, especially for output you can’t fully verify
AI gives you 10× throughput at the cost of 10× ambiguity. You will produce more text, more code, more drafts, more designs, more analyses than you can fully review. The bottleneck stops being “can you make it?” and becomes “can you tell good from bad in a second?”
Taste is hard to teach and impossible to fake. It compounds with experience — every piece of mediocre output you reject sharpens it — but only if you actually reject. The most expensive mistake of the AI era is going to be teams that ship ten times more, all of it slightly wrong, because no one had the taste or authority to throw work away.
3. Comfort with iteration over perfection
The first AI output is almost never the right one. The good output usually arrives on the third to fifth attempt, and only after you’ve refined what you’re asking for. Engineers and writers who came up in iterative cultures (Agile, design crit, lean startup) handle this naturally. People who came up in waterfall, plan-once-execute-perfectly cultures struggle.
The skill is treating the first attempt as a draft you read to learn what you actually want, not as the final answer that arrived inadequate. It’s the same posture you bring to a senior colleague’s rough first draft — you read it for what’s missing, not for what’s wrong.
4. Reading code, even if you don’t write it
This is the skill no one wants to admit became universal. AI agents will write you code. Useful code. Sometimes broken code. Always code that you, the human reviewer, are responsible for.
You don’t need to write Python from scratch. But you increasingly need to read Python, JS, SQL, YAML, regex — well enough to spot “this looks confidently wrong” and ask for it to be fixed. This skill is now relevant in marketing, finance, ops, legal — anywhere people are using AI to build small automations and need to know if those automations are about to delete the wrong rows.
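The “wrong rows” failure mode is concrete enough to sketch. Below is a hypothetical cleanup an agent might draft (the table and SQL are invented for illustration): the agent’s version deletes every old order, active ones included, because it dropped the status filter. The reviewer’s job isn’t to write the query — it’s to read it and notice the missing condition before it runs.

```python
import sqlite3

# Illustrative data only: a tiny orders table in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, created TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "active",    "2023-01-01"),
     (2, "cancelled", "2023-01-01"),
     (3, "active",    "2026-01-01")],
)

# The agent's draft looks plausible but deletes EVERY pre-2024 order,
# including order 1, which is still active:
#   DELETE FROM orders WHERE created < '2024-01-01'
#
# What a careful reader insists on: only cancelled orders qualify.
conn.execute(
    "DELETE FROM orders WHERE created < '2024-01-01' "
    "AND status = 'cancelled'"
)

remaining = [row[0] for row in conn.execute("SELECT id FROM orders ORDER BY id")]
print(remaining)  # [1, 3] — only the old cancelled order was removed
```

Nobody in marketing or finance needs to write that query from scratch. They need to read the `WHERE` clause and ask “which rows does this actually match?” before pressing run.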
5. Domain depth (the skill AI doesn’t have)
The most consistent pattern we see across customer deployments: the people getting 10× out of AI are the ones with deep domain expertise applying it to their domain. The people getting 1.2× are the generalists trying to use AI to skip past the domain knowledge.
An accountant who deeply understands accounting can direct an agent to do real accounting work. A non-accountant trying to use AI to do accounting will produce confidently wrong output and not notice. The AI doesn’t obviate the domain — it amplifies whoever has the domain.
This is the most underrated career advice of the decade: go deep on something real. The deep specialists are about to be more valuable than generalists, not less.
6. Writing — specifically, plain writing
Almost every interesting AI workflow is gated by your ability to write a clear instruction, a clear schema, a clear evaluation criterion, a clear bug report, a clear handoff. The teams shipping the most interesting AI features in 2026 are unusually good at plain English (or plain Czech, or plain German). They write specifications people can read in two minutes and AI agents can act on without ambiguity.
This is the same skill we’ve always wanted from product managers and tech leads. It’s now table-stakes for everyone.
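A “specification an agent can act on without ambiguity” can be as small as a checkable contract. This is a sketch, not a prescription — the names and constraints are invented for illustration — but it shows the shape: constraints stated explicitly, and an evaluation criterion a colleague or an agent can apply without follow-up questions.

```python
from dataclasses import dataclass, field

@dataclass
class SummarySpec:
    """A hypothetical two-minute spec for an agent-written summary."""
    max_words: int = 120
    must_mention: tuple = ("pricing", "timeline")  # illustrative terms

def meets_spec(text: str, spec: SummarySpec) -> bool:
    """The evaluation criterion, plain enough to automate:
    short enough, and every required topic is present."""
    if len(text.split()) > spec.max_words:
        return False
    lowered = text.lower()
    return all(term in lowered for term in spec.must_mention)

print(meets_spec("Pricing is final; timeline is Q3.", SummarySpec()))  # True
print(meets_spec("A long ramble about nothing in particular.", SummarySpec()))  # False
```

The point isn’t the code — it’s that writing the constraints down forces the plain-English clarity the paragraph above describes. If you can’t state the acceptance check, you haven’t finished specifying.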
7. Knowing when to say no
AI makes it easy to ship more. The skill that increasingly differentiates good teams is knowing when not to. When the right answer is to delete the feature, archive the project, end the meeting, send the shorter email, escalate the problem instead of solving it.
This was always a skill. AI just makes its absence more expensive.
8. Patience with people who don’t use AI yet
Half your colleagues are six months ahead of you. Half are six months behind. The fastest-growing teams we work with have a culture where it’s safe to say “I don’t know how to use this yet, can you show me?” The slowest-growing have a culture of performative AI fluency that pushes the laggards into hiding.
You can’t scale a team where 30% of the people are silently faking it. The skill is normalising the on-ramp.
The new skills aren’t new. They’re the old skills under unusual leverage. The people who thrive are the ones who already had them — and the people who learned them quickly when the leverage arrived.
What we’re telling people who ask us what to learn
- Pick a real domain and go deep.
- Practice writing things people can act on without follow-up questions.
- Read more code than you write — AI-generated code, your own old code, your colleagues’ code.
- Build a habit of throwing away mediocre work, including your own.
- Get comfortable iterating in public, in front of the agent, the same way you’d brainstorm with a colleague.
- Stop trying to memorise prompt-engineering tricks. They’re obsolete six weeks after they’re published.
None of this is exotic. Most of it is what we’ve always asked of senior contributors. AI just made the gap between “people who already think this way” and “people who don’t” into the most valuable axis of the next decade.