The leadership conversation about AI has almost entirely missed the developmental question.
The dominant frame is prosthetic. AI handles cognitive load. Leaders focus on higher-order judgement. Organisations invest in tooling, training, and prompt libraries to maximise the productivity transfer from human to machine. This framing is not wrong, exactly. It is incomplete in a way that misses what is actually new about the current moment.
The AETHER phase of the alchemical sequence offers a different relationship with the technology, and a different question to ask about it. Not what should I delegate? But what can this tool see about my own thinking that I cannot see myself?
The prosthetic frame and its limits
The prosthetic frame positions AI as an extension of cognitive capacity. The leader offloads drafting, research, analysis, first-pass synthesis. The saved time is reinvested in higher-order work that only the human can do. Efficiency rises. The leader is nominally more productive.
Three things are commonly underexamined in this framing.
First, the work being offloaded is often the work that was developing the leader. Drafting forces articulation. Research forces engagement with evidence. First-pass synthesis forces the leader to confront their own initial framing. When these are offloaded, the leader receives the outputs without having done the processing that would have changed them. Over time, the leader becomes a more efficient user of their existing thinking, rather than a thinker whose thinking is being sharpened by the work.
Second, the “higher-order work that only the human can do” is usually under-specified. In practice, senior leaders given additional time by AI-mediated productivity tend to fill that time with the activities the organisation rewards — which are usually more meetings, more reviews, more visibility. The cognitive space the tooling was supposed to open up gets re-allocated to the activity the organisation was already demanding. Net gain to the leader’s actual capacity: often close to zero.
Third, the relationship the leader develops with the tool in this frame is one of dependency. The leader stops thinking through certain problems because the tool does it faster. The tool produces outputs that the leader evaluates rather than generates. Over months and years, the leader’s capacity to do the offloaded work atrophies. This is not hypothetical. It is the documented pattern across technology-mediated cognitive work, and there is no reason to believe AI will behave differently.
The AETHER relationship
AETHER proposes a different use of the same tool. Not prosthesis. Mirror.
The generative question is not what the tool can do for the leader, but what the tool can show the leader about themselves that they cannot see from the inside. Language models are, among other things, sophisticated engines for pattern recognition and frame identification. Used deliberately, they can make visible the assumptions the leader has been making, the frames they have been using, and the moves they have been defaulting to — which are, by definition, invisible from the position of the person making them.
A CEO I work with uses a customised model to interrogate his own strategy memos before finalising them. Not to improve the memos. To surface what he has been assuming. He pastes the document and asks the model to identify the three assumptions most likely to be unexamined, the three frames most likely to be culturally specific, and the three alternative readings that a thoughtful critic would offer. The model produces the questions. The CEO produces the thinking. The document gets sharper not because the tool has drafted it but because the tool has surfaced what the CEO was not yet seeing.
This is a developmental use of the tool. It compounds. The more the model has learned about the leader’s characteristic thinking, the more useful it becomes as a mirror — because it can notice the specific patterns the leader reliably runs, and ask the specific questions those patterns most resist.
Three practices from the AETHER work
The assumption audit. Paste a significant document — strategy memo, board paper, decision brief — and ask the model to surface the three assumptions you did not know you were making. Read the answers as data about your thinking, not about the document. The useful outputs are not the model’s suggestions for edits. They are the moments you recognise that you had, in fact, been assuming something you would not have defended if asked directly.
The frame audit. Describe a current tension or recurring difficulty — a conflict with a team member, a decision you cannot quite make, a pattern that keeps reappearing. Ask the model to identify the frame you are using, and three substantively different frames you are not. The goal is not to pick a different frame. It is to recognise that the frame you are using is one among several available — which, in itself, usually shifts what is possible.
The pattern interrogation. Over time, let the model learn your characteristic thinking. Then, periodically, ask it to describe the patterns it sees in how you think — what you default to under pressure, what you avoid naming, what frames you over-rely on. Done thoughtfully, this is a form of shadow work that has not been available at this accessibility or speed before. The tool is not doing the work. It is surfacing the patterns so you can.
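For readers who run these practices through an API or a scripted workflow rather than a chat window, the three prompts reduce to reusable templates. The sketch below is illustrative only: the function names and exact wording are assumptions, not a prescription, and the call to an actual model is deliberately left out so the templates can be adapted to whatever tooling you use.

```python
# Illustrative prompt templates for the three AETHER practices.
# The wording is a sketch; adapt the phrasing to your own voice and tooling.

def assumption_audit(document: str) -> str:
    """Build a prompt asking the model to surface unexamined assumptions."""
    return (
        "Read the document below. Identify the three assumptions the author "
        "is most likely making without knowing it, the three frames most "
        "likely to be culturally specific, and the three alternative readings "
        "a thoughtful critic would offer. Ask questions; do not suggest edits.\n\n"
        f"---\n{document}"
    )

def frame_audit(tension: str) -> str:
    """Build a prompt asking the model to name the current frame and three others."""
    return (
        "Below is a description of a recurring tension. Name the frame the "
        "author is using, then describe three substantively different frames "
        "they are not using. Do not recommend one; just make them visible.\n\n"
        f"---\n{tension}"
    )

def pattern_interrogation() -> str:
    """Build a prompt for a model that has accumulated context on your thinking."""
    return (
        "Based on everything you have seen of my thinking, describe the "
        "patterns you notice: what I default to under pressure, what I avoid "
        "naming, and which frames I over-rely on. Be specific and direct."
    )

# Usage: paste the returned string into your model of choice.
prompt = assumption_audit("We will win by moving faster than incumbents.")
```

The point of templating is not automation. Fixing the wording in advance stops you from softening the question on the days you least want to hear the answer.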
Why AETHER requires what precedes it
None of this works without the preceding four phases. The assumption audit requires a leader who can tolerate seeing their assumptions. The frame audit requires someone who has done enough WATER work to notice when they are defended against a frame rather than genuinely considering it. The pattern interrogation requires a leader whose relationship with their own shadow is mature enough to receive uncomfortable information without collapsing or dissociating.
The leader who has not done the earlier integration work will, faced with the mirror, either reject what it reveals (because the reflection conflicts with the constructed self) or over-identify with it (because the reflection seems authoritative in a way the leader’s own judgement does not). Either failure mode produces worse thinking, not better. The tool is not the problem. The developmental readiness is.
This is why AETHER sits at the end of the sequence rather than the beginning. The same tool, offered to different leaders, produces different outcomes not because of the tool but because of who is using it. A leader at the end of six months of integration work is materially more capable of deploying a language model as a developmental mirror than a leader at the start. The distinction is not cognitive. It is developmental.
What the future actually looks like
The leadership literature’s current framing of AI — productivity tool, cognitive prosthesis, efficiency amplifier — will likely produce leaders who are faster at what they already do and no better at what they most need to do. This is a reasonable prediction from the current trajectory.
The alternative — AI as a developmental mirror, used deliberately by leaders who have done the preceding work to be capable of using it well — is currently available to a small minority. It will not become mainstream through tooling. It will become available to the leaders who are doing the interior work that makes the mirror useful in the first place.
This is the AETHER proposition. It is not a prediction about technology. It is an observation about what technology makes possible for leaders who have done the other four phases of the work.
The Alchemy of Leadership: Five Elements Workbook
The full developmental architecture of the five-element sequence, including AETHER and the specific practices for using AI as a developmental mirror rather than a cognitive prosthesis. Available free.