The Cockpit Doesn’t Teach You. It Shows You.
The first time you sit in a cockpit, you realize no one is trying to comfort you. The cockpit isn’t designed to be friendly; it’s designed to be true. Lights flicker, needles shift, alarms whisper - but each one means something specific. It doesn’t explain flying. It tells you what’s happening, moment by moment, in a language that doesn’t lie.
Most software, on the other hand, behaves like a well-meaning teacher. We smooth the edges, add a bit of animation, pare back the clutter, and assume that if the UI is clean enough, people will figure it out. For simple products, they do. You open a form, click a button, and the software behaves as we promised. The illusion of clarity feels real - until the product stops being simple.
Once a product becomes a real system (multiple roles, tangled permissions, messy data, exceptions, “it depends” logic), the user isn’t just navigating a product anymore. They’re trying to complete a workflow inside a living environment, with all its quirks and resistances.
That’s why so many AI copilots feel brilliant in demos and disappointing in real life. They can speak. They can explain. They can even suggest steps. But they never feel like they’re with you inside the product. They feel like they’re standing outside the room, talking through the door, while you’re inside, trying to keep control of something that’s already moving.
Most software does the opposite of the cockpit: it gives you options before certainty. It makes you hunt for context, then punishes you when you pick the wrong path. Add AI on top, and you don’t fix the problem. You magnify it - because the weakness isn’t the assistant’s language. It’s its blindness to state.
The UI isn’t going away. It’s just going to stop being uniform. People love saying, “Agents will replace UIs.” But I don’t think the UI disappears. I think it stops being static. The endgame isn’t a blank screen where you type commands. It’s an interface that reshapes itself around the moment: what you’re trying to do, what you’ve already done, what your role allows, what your team usually does, and what tends to go wrong at this step.
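An interface that "reshapes itself around the moment" can be thought of as a small rule engine: given the user's role, recent actions, and known failure points, it picks which actions to surface next. The sketch below is purely illustrative; every field name and rule is an assumption, not a real product's API.

```python
# Hypothetical sketch: a UI that adapts to live context instead of staying
# static. All field names and rules here are illustrative assumptions.

def surface_next_steps(context):
    """Pick which UI actions to surface, given the current context."""
    steps = []
    if context["role"] == "admin" and "bulk_import" in context["recent_actions"]:
        # Bulk imports usually need a validation pass next.
        steps.append("review_import_errors")
    if context["workflow_step"] == "checkout" and context["failed_validations"]:
        # Surface the fix before offering anything else.
        steps.append("fix_validation: " + context["failed_validations"][0])
    if not steps:
        # Fall back to the step most users take from here.
        steps.append(context.get("common_next_step", "dashboard"))
    return steps

ctx = {
    "role": "admin",
    "recent_actions": ["bulk_import"],
    "workflow_step": "records",
    "failed_validations": [],
    "common_next_step": "dashboard",
}
print(surface_next_steps(ctx))  # -> ['review_import_errors']
```

In a real product these rules would come from observed usage and team conventions rather than hard-coded conditionals, but the shape is the same: context in, a small ranked set of next steps out.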
That’s why UX is more important than ever. It’s no longer just about aesthetics. It’s about the product’s ability to keep you moving when reality gets messy. And in any real system, reality will always be messy.
Here’s why copilots keep getting complex products wrong: in rich, evolving software, there is no single “correct workflow.” There are workflows, plural. One person’s happy path is another person’s exception. One team’s process is another team’s workaround. Power users don’t follow the docs. They discover patterns and combinations that the documentation never imagined.
So when a copilot is trained only on text (help center pages, macros, old docs, forum posts), it makes a quiet, dangerous mistake: it confuses “what’s written down” with “what’s possible.” That’s where you hear the false negative: “It can’t be done.” Not because the product can’t do it, but because the assistant can’t see the product. It can’t see your permissions, your UI state, the earlier steps you’ve already taken, or what just failed. It’s reasoning with a blindfold on.
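The false negative has a simple mechanical cause: a docs-only assistant checks whether an action is *documented*, while a grounded one would check whether the live system *allows* it. A minimal contrast, with invented capability names and state fields:

```python
# Hypothetical contrast between a docs-grounded and a state-grounded check.
# The capability names and state fields are illustrative assumptions.

DOCUMENTED = {"export_csv", "invite_user"}  # what the help center mentions

def docs_only_can(action):
    # Confuses "what's written down" with "what's possible".
    return action in DOCUMENTED

def state_aware_can(action, state):
    # Checks the live product: the user's permissions plus what the
    # current screen actually enables.
    return action in state["permissions"] and action in state["enabled_actions"]

state = {
    "permissions": {"export_csv", "bulk_reassign"},
    "enabled_actions": {"export_csv", "bulk_reassign", "invite_user"},
}

# "bulk_reassign" works in the product but never made it into the docs:
print(docs_only_can("bulk_reassign"))           # -> False (the false negative)
print(state_aware_can("bulk_reassign", state))  # -> True
```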
And when you build “agentic automation” on top of that, the problem becomes personal. Now the agent is doing things that you, the user, have to debug inside a system you may not fully understand.
If copilots are going to work inside real products, they need to be grounded more like a cockpit than a manual. They need to quietly, constantly, and in real time answer questions like: What is the user trying to accomplish right now? Where are they in the workflow? What’s on screen and available to them? What state is the system in? What just changed? Are they progressing, exploring, or stuck?
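The questions above amount to a context snapshot the assistant consults before it says anything. One way to sketch that snapshot, and the "progressing, exploring, or stuck" classification, is below; the field names and thresholds are assumptions for illustration only.

```python
# Hypothetical sketch of the real-time context a grounded copilot would
# consult before answering. Field names and thresholds are assumptions.
from dataclasses import dataclass, field

@dataclass
class ProductContext:
    goal: str                 # what the user is trying to accomplish
    workflow_step: str        # where they are in the workflow
    visible_actions: list     # what's on screen and available to them
    system_state: dict        # e.g. pending jobs, locked records
    last_change: str          # what just changed
    recent_actions: list = field(default_factory=list)

def activity_mode(ctx):
    """Classify the session as progressing, exploring, or stuck."""
    actions = ctx.recent_actions
    if len(actions) >= 3 and len(set(actions)) == 1:
        return "stuck"        # hammering the same action repeatedly
    if len(set(actions)) >= 4 and ctx.last_change == "none":
        return "exploring"    # wandering across the product, no state change
    return "progressing"

ctx = ProductContext(
    goal="issue refund",
    workflow_step="payment_review",
    visible_actions=["approve", "escalate"],
    system_state={"pending_jobs": 0},
    last_change="opened payment_review",
    recent_actions=["approve", "approve", "approve"],
)
print(activity_mode(ctx))  # -> stuck
```

The point isn't these particular heuristics; it's that the assistant reasons over a live snapshot of the product, the way a pilot reads instruments, rather than over a pile of documentation.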
Everyone rushes to the chat UI because it’s visible. But chat is only useful when it sits on top of product understanding. Otherwise, it’s just a very polite search box, pretending to be a companion.
For years, “good software” meant fewer clicks. The next era is about fewer moments of uncertainty. The most valuable products won’t be the ones that answer questions faster. They’ll be the ones that reduce the need to ask in the first place - by recognizing intent, tracking where you are in the workflow, and surfacing the next meaningful step at the moment it matters.
The interface becomes personal, not because it’s pretty, but because it’s aware. Aware of your context, your history, your role, and your confusion. And that’s the real shift: not conversation replacing clicks, but software finally meeting users where they are - instead of where the docs assumed they should be.


