Critical Followership, Cognitive Delay, and a Quantum Probability-Based Account of Trust
Introduction: The Problem Is Not Blind Obedience—It Is Premature Certainty
Much of the literature on leadership failure begins with a familiar diagnosis: followers obey when they should resist. This framing is intuitively appealing, but analytically incomplete. It treats obedience as a terminal moral condition—something followers are—rather than as the outcome of a cognitive and social process unfolding under pressure. In my earlier work on critical followership, I argued that followership is not passive compliance but an active, interpretive practice shaped by uncertainty, power, and ethical risk. What that work lacked, however, was a precise language for explaining how authority accelerates judgment—and how followers might resist that acceleration without defaulting to paralysis or open insubordination.
Recent research on human–AI trust provides that missing language. While this work is often framed as novel because it involves artificial intelligence, its deeper contribution lies elsewhere: it reveals the extent to which human judgment itself violates classical assumptions about rationality, probability, and stable belief. People do not reliably hold clear, settled opinions waiting to be revealed. Rather, belief is often indeterminate, context-sensitive, and constructed in the moment of decision. The growing application of quantum probability theory (QPT) to human cognition reflects this recognition and warrants closer attention in analyses of human–AI decision making. This paper argues that QPT offers a productive framework for understanding how judgment and trust are formed in human-in-the-loop systems, and extends these insights by placing QPT in dialogue with anthropology and critical followership theory. From this synthesis emerges a new model in which ethical action resides not in immediate assent or dissent, but in the deliberate maintenance of uncertainty under pressure.
The central claim of this paper is therefore simple but unsettling: authority works by collapsing cognitive ambiguity, and critical followership is the practice of delaying that collapse long enough for ethical judgment to occur.
Critical Followership as an Anthropological Problem
Critical followership is often presented as a normative ideal: good followers question, bad followers comply. Anthropologically, this distinction is far too thin. In real-world organizations—military units, bureaucracies, research teams, humanitarian missions—followers are embedded in dense social worlds shaped by hierarchy, loyalty, risk, and time pressure. Decisions rarely arrive as clean moral binaries. More often, they emerge from what might be called zones of indeterminacy, where intentions are unclear, information is incomplete, and the costs of hesitation are themselves morally consequential.
Consider a junior officer receiving an order that is legal on its face but operationally ambiguous, or a humanitarian worker asked to implement a policy that may unintentionally exclude vulnerable populations. In such cases, the ethical challenge is not whether to obey or disobey, but how to interpret intent, anticipate consequences, and allocate responsibility. My earlier work emphasized that followers do not simply execute intent; they interpret it. Interpretation, however, is not instantaneous. It unfolds over time, through conversation, hesitation, informal consultation, and small acts of recalibration.
Anthropology has long recognized this processual nature of meaning-making, what Geertz called “webs of significance,” whether in studies of ritual, kinship, or bureaucratic practice. Yet leadership and organizational studies often continue to rely on models that presume stable preferences and linear decision paths. This mismatch matters. If we assume followers always know what they believe, hesitation appears as weakness or disloyalty. If we recognize that belief itself is under construction, hesitation can instead be understood as ethical labor—work done to prevent premature moral closure.
Why Classical and Bayesian Decision Models Fail Followers
Classical models of decision-making generally rest on several assumptions: that individuals hold definite beliefs, that new information updates those beliefs fluidly, and that asking someone what they think merely reveals a preexisting mental state. Decades of cognitive research challenge each of these assumptions. Human judgment routinely violates the law of total probability, displays order effects, and constructs preferences at the moment a decision is demanded. These are not random errors in an otherwise rational system; they are structural features of how cognition actually works.
Bayesian models, in particular, formalize this view by treating belief as a probability distribution that is coherently revised through the sequential incorporation of new evidence. While Bayesian updating allows beliefs to change over time, it nonetheless assumes that evidence is conditionally independent of the act of judgment itself, and that the order in which information is received should not fundamentally alter the resulting belief state given the same evidentiary content. Empirically, however, human judgment routinely exhibits strong order effects: the sequence in which questions are asked or commitments are required shapes not only expressed beliefs but also subsequent reasoning and action. These order-dependent dynamics are difficult to accommodate within Bayesian frameworks without ad hoc modifications.
Similarly, the so-called curse of dimensionality highlights a related limitation in classical modeling approaches. As the number of variables or dimensions (features) increases, data become increasingly sparse, distances lose interpretive meaning, and models require exponentially more data to maintain performance, often leading to instability or overfitting. In practice, this makes such models poorly suited for capturing human judgment in complex, multi-factor situations, where cognition does not scale by integrating ever more variables but by reweighting, sequencing, and collapsing uncertainty through judgment. By contrast, QPT-based approaches treat order effects as foundational rather than anomalous, offering a more direct account of how judgment, commitment, and belief are jointly constructed over time.
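The contrast between the two accounts can be made concrete. In QPT, questions posed to a person are modeled as projectors acting on a belief state, and when two projectors do not commute, the probability of answering “yes” to both depends on the order in which they are asked. The sketch below is purely illustrative: the two-dimensional “judgment space,” the question axes, and the 45-degree angle between them are assumptions chosen for clarity, not parameters fitted to any experiment.

```python
import numpy as np

def proj(v):
    """Orthogonal projector onto the line spanned by vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

# Belief state: a unit vector in an illustrative 2-D judgment space.
psi = np.array([1.0, 0.0])

# Two hypothetical questions, modeled as non-commuting projectors.
P_agree = proj(np.array([1.0, 0.0]))               # "Do you agree?"
P_trust = proj(np.array([1.0, 1.0]) / np.sqrt(2))  # "Do you trust it?" (45 deg away)

def seq_prob(first, then, state):
    """Probability of answering 'yes' to both questions, asked in this order.
    The first measurement collapses the state before the second is applied."""
    collapsed = first @ state
    return np.linalg.norm(then @ collapsed) ** 2

p_agree_then_trust = seq_prob(P_agree, P_trust, psi)  # 0.5
p_trust_then_agree = seq_prob(P_trust, P_agree, psi)  # 0.25
print(p_agree_then_trust, p_trust_then_agree)
```

A classical (commuting) model would assign the same joint probability to both orders; the gap between 0.5 and 0.25 is precisely the order-effect signature that QPT treats as foundational rather than anomalous.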
For example, a follower may initially express skepticism toward a plan, only to endorse it after being required to publicly state a position or commit to an action. The act of judgment does not always merely express belief; it can actively reshape it. In fact, people often exist in unresolved states of partial belief—simultaneously trusting and doubting, agreeing and resisting—until institutional demands force them to commit to a single stance.
For followers operating under authority, this dynamic has profound implications. Orders, briefings, and recommendations do not merely transmit information; they structure the timing and sequence of judgment by allocating—or withholding—cognitive time. A rapid-fire briefing that ends with “Any objections?” is not a neutral invitation to dissent. It is a mechanism that demands immediate commitment, leaving little room for unresolved uncertainty about assumptions, second-order effects, or moral risk. When operational tempo accelerates, the time available to hold competing interpretations, ask clarifying questions, and delay judgment contracts. Ambiguity becomes a liability rather than a resource, and ethical risk is displaced downward onto those least empowered to contest the terms under which decisions are made.
Authority as a Mechanism of Cognitive Collapse
Authority functions not only through command, but through measurement and declaration. To ask a follower, “Do you agree?” or “Will you execute?” is to demand the collapse of uncertainty into a binary position. This is true whether the authority is human or algorithmic. Judgment does not merely reflect trust; it actively produces it. This forced transition from indeterminacy to commitment is what I refer to as cognitive collapse.
Recent human–AI trust experiments make this visible. When individuals are required to express agreement or disagreement with an AI system before acting, their subsequent trust in that system oscillates temporarily but often increases—even when the AI system’s objective performance does not. The judgment itself becomes a commitment, and commitment reshapes belief. What appears as growing trust may in fact be post-hoc rationalization.
Anthropologically, this dynamic is immediately recognizable. It mirrors oath-taking, public declarations of loyalty, and ritualized consent. Once spoken, belief hardens. Ambiguity becomes socially costly. To reverse one’s position is no longer a cognitive adjustment, but a moral or political transgression. From this perspective, authority can be understood as a technique for managing uncertainty—not by resolving it, but by forcing decisions that render uncertainty invisible.
Critical Followership as Cognitive Resistance
This reframing brings us to the core intervention of this paper. Critical followership is not best understood as opposition to authority, nor as heroic dissent. It is better understood as a cognitive practice: the disciplined refusal to allow uncertainty to be collapsed too quickly.
In non-classical cognitive terms, actors positioned under authority often occupy a superposed state, holding multiple, partially incompatible interpretations of intent and consequence. Authority seeks to collapse this indeterminate superposition into action. The resistance of critical followership thus lies not in the refusal to act, but in the reordering of the sequence of judgment. Critical followers ask different questions, request clarification, introduce scenario testing, or quietly coordinate with peers to surface alternative interpretations. From an epistemic standpoint, these acts matter not because they oppose authority, but because they preserve the conditions under which understanding can still be formed—delaying premature closure long enough for uncertainty to be interpreted rather than erased.
Anthropologically, this resembles what James Scott termed infrapolitics: small, often invisible acts that preserve agency without open defiance. Ethically, such practices redistribute responsibility. Rather than allowing risk and blame to be pushed downward through rushed compliance, cognitive resistance keeps moral accountability in circulation.
Why AI Makes This Urgent
Artificial intelligence does not introduce these dynamics of authority and judgment; it amplifies them. AI systems are designed to optimize speed and decisiveness, not by resolving uncertainty, but by suppressing it and demanding commitment in the absence of full understanding. Interfaces that require binary responses—accept, override, comply—convert ambiguity into friction to be eliminated. In doing so, such systems engineer cognitive collapse at scale.
To clarify, the danger is not simply that humans will trust AI too much. It is that they will be denied the cognitive space necessary to not yet trust. When systems are built to reward speed and penalize hesitation, critical followership becomes harder to practice—even as the ethical stakes increase. In such environments, critical followership must be understood as a technical as well as moral skill: knowing when and how to delay judgment, interrogate outputs, and resist premature certainty without paralyzing action.
Methods and Theoretical Lineage
This paper is theoretical rather than empirical, but its method is nonetheless specific. It proceeds through analytical synthesis, drawing together three bodies of literature that are rarely placed in sustained dialogue: anthropological studies of authority and meaning-making, critical followership theory, and non-classical models of cognition emerging from decision science and human–AI interaction research. Rather than testing hypotheses, the paper traces structural homologies across these domains—showing how similar dynamics of uncertainty, commitment, and responsibility recur across organizational, cultural, and technological contexts.
The anthropological lineage is grounded in interpretive and practice-oriented traditions that emphasize process over outcome. Classic work on ritual, bureaucracy, and power informs the paper’s understanding of authority as something enacted through sequences of speech, timing, and social commitment rather than merely formal command. James Scott’s concept of infrapolitics provides a key sensibility here, not as a claim about resistance per se, but as a way of recognizing ethically consequential practices and meaningful action that operate below the threshold of overt defiance and formal dissent.
Critical followership theory supplies the normative problem space within which these dynamics matter. This literature has focused on how responsibility is distributed downward in hierarchies, how followers are morally evaluated for outcomes they do not fully control, and how dissent is simultaneously demanded and constrained. This paper extends that literature by shifting the analytic focus from obedience versus resistance to the temporal management of judgment—a dimension that has remained under-theorized.
Finally, the paper draws on non-classical cognitive models, including work on order effects, contextual preference construction, and probabilistic violations in human judgment. These models are not imported as metaphors, but as descriptive accounts of human judgment as an indeterminate, context-sensitive process—one in which beliefs are constructed through sequences of interaction and commitment rather than revealed as stable internal states.
Explicit Lineage: From Critical Followership to Cognitive Delay
This paper should be read as a direct continuation of my earlier work on critical followership, which challenged leadership-centric models by foregrounding the ethical agency and interpretive labor of those positioned lower in hierarchies. That work argued that followers are not passive recipients of intent but active participants in the construction of meaning, responsibility, and action under conditions of uncertainty. What was left largely implicit, however, was the cognitive mechanism through which this agency operates—particularly how authority pressures followers toward premature certainty, and how ethical responsibility is shaped by the timing of judgment rather than its content alone. The present argument extends critical followership by supplying that missing mechanism. By reframing followership as a practice of managing indeterminacy—of delaying, reordering, or redistributing judgment—it offers a theoretical account of how ethical action is made possible even when dissent is constrained, risk is asymmetric, and compliance is structurally incentivized.
An Empirical Vignette: Judgment Under Algorithmic Command
Consider a contemporary military planning cell using an AI-enabled decision-support system to prioritize targets during a time-sensitive operation. The system produces ranked recommendations based on sensor fusion, pattern recognition, and historical strike data. Analysts and officers are required to either accept or override each recommendation before the planning cycle can advance. The interface is efficient by design: it minimizes deliberation time, foregrounds confidence scores, and frames override decisions as exceptions requiring justification.
In this setting, trust in the system does not emerge solely from demonstrated performance. It is actively constructed through repeated acts of judgment. Each forced choice—accept or override—functions as a commitment that reshapes subsequent belief. Over time, analysts learn not only that the system is usually right, but that disagreement is cognitively and socially costly. Hesitation slows the workflow, overrides invite scrutiny, and the space for unresolved doubt narrows.
A critical follower in this context does not necessarily reject the system’s recommendations. Instead, they intervene earlier and more subtly: requesting alternative framings, slowing the sequence of confirmation, or reintroducing contextual factors the system brackets as noise. These acts do not openly challenge authority, yet they delay cognitive collapse long enough for ethical and operational judgment to mature. What appears, from above, as inefficiency or friction is, from within, a form of responsibility maintenance—an effort to prevent certainty from arriving faster than understanding.
This vignette illustrates the central claim of the paper: that ethical risk in hierarchical systems often arises not from disobedience or obedience per se, but from the engineered timing of judgment. Critical followership operates in that temporal space, preserving ambiguity where premature clarity would otherwise displace responsibility downward.
Conclusion: Ethics Lives in the Delay
This paper has argued that critical followership can be reconceptualized as the ethical management of uncertainty under authority. Drawing on anthropology and non-classical models of cognition, it shows that judgment is not a neutral act but an intervention—one that reshapes trust, belief, and responsibility.
Reframing critical followership as a practice of cognitive delay has implications beyond leadership theory. It suggests that ethical failure in hierarchical and AI-mediated systems often arises not from malicious intent or blind obedience, but from the systematic elimination of time for interpretation. It redirects analytic attention from individual dispositions to institutional designs: briefing formats, interface constraints, evaluation metrics, and tempos of action that either preserve or foreclose uncertainty. From this perspective, responsibility is not only a matter of who decides, but of how quickly decision is demanded—and whether organizational systems make room for uncertainty to be held rather than immediately resolved.
Critical followers do not refuse to act. They refuse to let certainty arrive too soon.
In environments increasingly shaped by algorithmic authority, the central moral challenge is not simply to decide correctly, but to decide at the right time. Ethics, in this sense, lives in the delay: in the fragile, contested space between command and commitment where ambiguity has not yet been erased.
The post Critical Followership, Cognitive Delay, and a Quantum Probability-Based Account of Trust appeared first on Small Wars Journal by Arizona State University.
