A senior team gathers around the table to make a difficult decision. There are six people in the room. There is also, increasingly, a seventh — a generative model, sometimes addressed by name, sometimes summoned as a service, occasionally whispered to privately by an executive who isn’t quite sure whether they should be using it at all.
The human-AI decision room is now common enough that the question is no longer whether to admit AI to the room. The question is what it does to the seven forces that shape every group decision.
AI does not add an eighth force. It modulates the existing seven — amplifying some, suppressing others. Used well, it strengthens the forces that are usually weakest. Used badly, it suppresses the one force the room most needs.
What AI is good at, force by force
It activates the Boundary Breaker. Pressured rooms collapse to two options. An AI participant, asked openly for alternatives, can produce six. Most won’t survive scrutiny — but having them on the page widens the conversation before it narrows. Most groups never impose that widening discipline on themselves.
It supplies the Challenger. Real alignment requires the room to articulate its own best objection. Asking an AI to construct the strongest case against the emerging consensus is a low-cost way to do this. The objection still has to be evaluated by humans, but having it written down forces the room to engage with it rather than dismiss it.
It can voice the Systems Thinker. Decisions affect people not in the room. Asked to articulate how a decision would land with a particular constituency — customers, frontline staff, regulators, the team that has to execute — an AI can prevent some of the surprises that come from rooms that decide for groups they cannot see.
It supports the Executor. Drafting timelines, surfacing dependencies, listing next steps. None of this replaces ownership, but it shrinks the gap between the moment a decision is made and the moment work actually starts.
What AI is bad at, force by force
Two failure modes are now well enough documented to design around.
Confidence asymmetry suppresses the Challenger. AI outputs are fluent. Fluency reads as competence. A room that hears a fluent answer is more likely to ratify it than to interrogate it, even when the underlying reasoning is thin. The asymmetry is between how confident the AI sounds and how confident the AI ought to be. The Challenger is exactly the force this fluency is most likely to silence — and the Challenger is the force the room can least afford to lose.
Authority laundering imitates the Integrator’s shadow. When a senior person and an AI converge on a recommendation, the AI’s output now carries the authority of the senior person, and the senior person’s view now carries the apparent independence of the AI. Each amplifies the other. The room has heard one opinion, presented as two. This is the alignment illusion with a new mechanism.
The defence in both cases is procedural. Treat the AI’s output as one input among several, not a verdict. Have a human, ideally not the most senior person in the room, summarise the AI’s view in their own words and offer their honest reaction to it. This breaks the fluency-confidence loop and prevents authority laundering.
Where to seat the AI
The seating metaphor is more than a metaphor. Where in the process an AI participates shapes how the room behaves.
Before the room. This is usually the strongest position. An AI does the option-generating, scenario-modelling, and counter-argument work before the meeting convenes, and the room enters with a fuller starting set than it could have generated on its own. The room then does the work the room is good at: weighing, integrating, deciding.
Alongside the room, on a specific task. When a particular question arises that a search or a draft or a precedent could help with, an AI is brought in for that task and dismissed afterwards. The room remains the deciding entity.
In the chair. This is almost always wrong. Decisions involve identity, trust, judgements of urgency, and political capital, none of which transmits through an AI. A room that lets an AI run the agenda has surrendered the work of being a room.
The Decision Architect’s adjustment
The arrival of AI in the room does not change the Decision Architect’s job description. It changes the shape of what the Architect has to watch for.
The Driver is partly amplified — AI makes options cheaper to generate, which can accelerate closure beyond what the room has actually digested. The Challenger is partly suppressed by fluency. The Integrator is partly bypassed because synthesis is no longer the slow, patient work it used to be. Trust is partly destabilised because the room is now using a tool it does not fully understand.
Each of these shifts has to be named in real time and adjusted for. Naming forces is the Decision Architect’s practice. AI raises the stakes of doing it well.
The book Decision Shapers develops the practice. The articles on this site introduce the framework one piece at a time.