The Real Barrier to AI Adoption Isn't Skill. It's Self-Awareness.
What really stands in the way of AI adoption? In this post I argue it isn't all about the tools.
27 March 2026
I got into a thread recently with Lesia Kalley that went somewhere I wasn't expecting.
Her argument was one that immediately resonated with me: AI isn't weakening human judgement, it's exposing a muscle that was already atrophied. Humans have been outsourcing thinking for centuries, to the crowd, the boss, the expert on TV, the algorithm. GenAI didn't invent that. It just gave us something new to blame. Her closing line was simple: "The discomfort isn't the tool. It's the mirror."
That framing matched something I'd been noticing. The value people get from AI seems to be directly correlated to the quality of their thinking going in. Prompts, plans, instructions, all of it reflects the clarity of the intent behind it. AI doesn't just generate outputs. It shows you how well you've actually thought something through. Strong thinking gets amplified. Weak thinking gets exposed. And both happen faster than before.
Lesia pushed the idea further. Is AI a great equaliser, or is it widening the gap between strong and weak thinkers?
My instinct is that it's widening it. Considerably. The feedback loop is now compressed, which means people who invest in how they think will compound their advantage, while the rest will feel the distance more visibly and more quickly.
But the real insight came next. The gap, as Lesia pointed out, won't just be about skill. It will be about self-awareness. And that's much harder to close, because self-awareness requires something most organisations don't train for: reflection, discomfort, and a genuine willingness to confront your own thinking. Not everyone chooses that path.
What this creates in practice is a split that was probably always there. Some people lean into change, they ask questions, they experiment, they adapt even when they're behind. Others pull back. I'm already seeing it in conversations: curiosity on one side, and something close to paralysis on the other when AI comes up.
This is where the conversation became practical. In her work, Lesia is seeing that naming the fear openly is often what helps people move forward. Not reframing it, not managing it away, just naming it. That aligns closely with how I've always approached change management. The moment you bring people in early, acknowledge concerns without flinching, and involve them in shaping what's coming, the dynamic shifts. It stops being something happening to them and becomes something they're part of.
Most organisations are still framing AI adoption as a skills initiative.
Tools. Training. Capability.
None of that is wrong, but it's incomplete. Because underneath all of it is something more fundamental: whether people are willing, or even able, to engage at all.
So the question isn't always "How do we train people to use AI?" It's "How do we help people become comfortable enough to engage with it?" That's a very different problem, and it requires a very different approach.
There will always be people who lean in and people who don't, but AI isn't creating that divide. It's making it visible, and accelerating it.
For those who do engage, the upside is enormous. Learning compounds, capability scales, and confidence builds. But none of that starts with tools. It starts with awareness.
Structure before AI.
Human readiness before capability scaling.
In that order.