
UNKNOWNS LAB · SYS.00 / FRAMEWORK.v1.0
The Decision Authority Framework
A discipline for governing critical decisions in the cyber and AI era.
Systems don't fail first. Decisions do.
This paper defines Decision Authority as a measurable discipline — distinct from incident response, governance-risk-compliance, crisis management, and tabletop rehearsal — and proposes four primitives by which organizations, auditors, insurers, and regulators can assess it.
Version 1.0 · April 2026 · 23 pages · Open for Comment
About this document
This document is version 1.0 of the Decision Authority Framework, published by Unknowns Lab as a contribution to the emerging discipline of decision governance in high-consequence enterprises. It is written for boards, general counsel, chief information security officers, chief risk officers, regulators, insurers, and the analysts who serve them.
It is not a product brochure. It is not a methodology for hire. It is a public articulation of a category we believe will become as legible in the next decade as cybersecurity itself became between 2010 and 2020 — because the forcing functions that made cybersecurity a board-level concern are now operating on decision-making itself.
We welcome reference, critique, adoption, and adaptation. The four primitives introduced in §6 are offered as open definitions; practitioners who cite them are asked only to cite accurately. Where this document is adopted into regulatory guidance, underwriting methodology, or analyst taxonomy, Unknowns Lab will publish annotated revisions in subsequent versions.
This framework exists because the space between detection and decision is the least measured and most consequential failure surface in modern enterprises.
Why This Document Exists
Between 2020 and 2025, cyber and AI failures stopped being engineering problems and started being decision problems. The proof is in the public record. In nearly every major incident that made the front page — a ransomware event at a hospital network, a supply-chain compromise at a software vendor, an autonomous-system failure at a commercial airline, a data-exposure cascade at a financial platform — the technical root cause was present, but the cost was determined by what leadership decided to do in the first hours after detection.
In our observation across principal-led engagements with organizations whose failure costs are measured in hundreds of millions to billions of dollars, the pattern is consistent: detection times have compressed; escalation paths have not. Technical telemetry now arrives in minutes. Authority to act on that telemetry still arrives in hours, days, or — for novel AI failures — never.
Detection is an engineering problem, and we have largely solved it.
Decision is a governance problem, and we have barely begun.
This is not a matter of better playbooks, thicker runbooks, or more frequent tabletop exercises. Those address the surface. The underlying issue is structural: large organizations have not articulated, with the clarity that modern adversarial speed demands, who holds the authority to decide what, under which conditions, with what reversibility, and with what accountability when the decision fails.
We observe three recurring failure patterns in post-incident reviews of catastrophic cyber and AI events. Each has engineering literature. None has a measurement discipline in the organizations where the failures actually occur.
Decision Latency
The interval between the moment sufficient information exists to authorize a decision and the moment the decision is actually authorized. It is not the time to detect. It is not the time to contain. It is the time to decide.
In our field observations, decision latency during a contested incident runs 4× to 40× longer than senior leadership believes it does.
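Measuring this interval requires only two timestamps per decision. The sketch below is our own minimal illustration, not part of the framework; all names and values are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DecisionRecord:
    """One entry in a decision log: when sufficient information existed
    to authorize an action, and when authorization actually occurred."""
    decision: str
    information_sufficient_at: datetime
    authorized_at: datetime

    @property
    def latency(self) -> timedelta:
        # Decision latency as defined above: not time-to-detect,
        # not time-to-contain, but time-to-decide.
        return self.authorized_at - self.information_sufficient_at

# A hypothetical two-entry log from a contested incident.
log = [
    DecisionRecord("isolate-segment",
                   datetime(2026, 3, 1, 2, 14),
                   datetime(2026, 3, 1, 6, 50)),
    DecisionRecord("notify-regulator",
                   datetime(2026, 3, 1, 4, 0),
                   datetime(2026, 3, 2, 9, 30)),
]
for rec in log:
    print(rec.decision, rec.latency)
```

Once logged this way, the latency distribution can be benchmarked against what leadership believes it is, which is where the 4× to 40× gap becomes visible.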
Authority Ambiguity
The condition in which, at the moment a decision must be made, no one in the chain is certain who is empowered to make it.
Authority ambiguity is almost always invisible in peacetime. It materializes at machine speed during a crisis.
AI Handoff Failure
The discontinuity between an autonomous system acting on its own authority and the human supervisory layer that is assumed — but not architected — to intervene.
The typical failure mode is not an AI doing something catastrophic. It is an AI doing something consequential without any human noticing in time.
// OBSERVED BASELINES · UNKNOWNS LAB
[Statistics panel; figures not reproduced in this rendering. Metrics shown: share of breaches in which leadership decision failure materially worsened the outcome; median time to identify decision-authority gaps post-incident; the critical window where early decisions compound exponentially in cost; share of the Fortune 2000 with a board-ratified authority map for cyber incidents.]
Every pattern in §2 has an adjacent discipline that addresses some portion of it. None addresses it centrally. The Decision Authority discipline exists in the overlapping blind spots of five established categories.
Incident Response · covers: technical containment · misses: the authority chain above the CISO
GRC · covers: stated posture and evidence · misses: live decision performance
Crisis Management · covers: external perception · misses: internal authority architecture
Tabletop Exercises · covers: rehearsal atmospherics · misses: structural outputs and latency
Cyber Insurance · covers: loss indemnification · misses: authority as an underwriting signal
Decision Authority · covers: who decides, how fast, with what accountability · misses: nothing adjacent (it is the layer itself)
Decision Authority (n.) — the structured, measurable capacity of an organization to identify, assign, and exercise the right to decide, at sufficient speed and with sufficient accountability, under conditions of adversarial pressure, incomplete information, and irreversible consequence.
This definition is deliberately constructed to be falsifiable. Each clause names a property that can be measured, failed, and improved:
Structured: authority is articulated before the event, not improvised during it.
Measurable: an organization's position can be scored, benchmarked, and re-scored.
Identify, assign, and exercise: three distinct acts, each a separate failure mode.
At sufficient speed: latency is a first-order property, not a secondary attribute.
With sufficient accountability: the authority holder is identifiable after the fact.
Under adversarial pressure, incomplete information, and irreversible consequence: the operating conditions under which most organizations have never instrumented their own decision-making.
Decision Authority operates across three time horizons. Each horizon has a distinct mode, distinct outputs, and distinct failure signatures. A mature Decision Layer spans all three.
ANTICIPATE
Months to years before incident
In peacetime, Decision Authority is built. Adversary intent is mapped. Decision paths are stress-tested under simulated but unannounced pressure. Human-AI authority boundaries are articulated before any agentic system goes live.
OUTPUTS
Authority maps, escalation graphs, scored baselines
FAILURE SIGNATURE
The absence of these assets, or their presence only as draft documents no one has operationalized.
DECIDE
Minutes to hours during incident
In the crisis window, Decision Authority is exercised. The interval is compressed — often to the first sixty minutes, rarely longer than the first seventy-two hours. Inside this window, the structures built in peacetime either hold or they do not.
OUTPUTS
Authorized actions, escalation traces, decision telemetry
FAILURE SIGNATURE
Elapsed time from trigger to authorized action, number of escalation hops required, number of reversals.
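The three failure-signature metrics named above can be computed directly from an escalation trace. The sketch below is a hypothetical illustration of ours; the event names and trace structure are assumptions, not part of the framework:

```python
from datetime import datetime

# A hypothetical escalation trace: (timestamp, event) pairs emitted
# during the crisis window. Event names are illustrative only.
trace = [
    (datetime(2026, 3, 1, 2, 14), "trigger"),
    (datetime(2026, 3, 1, 2, 40), "escalate"),    # SOC lead -> CISO
    (datetime(2026, 3, 1, 3, 55), "escalate"),    # CISO -> general counsel
    (datetime(2026, 3, 1, 4, 20), "reversal"),    # containment order rescinded
    (datetime(2026, 3, 1, 5, 10), "escalate"),    # general counsel -> CEO
    (datetime(2026, 3, 1, 6, 50), "authorized"),  # action finally authorized
]

trigger = next(t for t, e in trace if e == "trigger")
authorized = next(t for t, e in trace if e == "authorized")

# The three failure-signature metrics:
elapsed = authorized - trigger                       # trigger to authorized action
hops = sum(1 for _, e in trace if e == "escalate")   # escalation hops required
reversals = sum(1 for _, e in trace if e == "reversal")

print(f"trigger-to-authorization: {elapsed}, hops: {hops}, reversals: {reversals}")
```

An organization that cannot produce a trace like this after an incident cannot compute its own failure signature, which is itself a finding.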
BUILD
Years after incident, continuous
In the permanent layer, Decision Authority is compounded. Lessons from the crisis window feed back into peacetime structure. Leadership transitions are instrumented so authority does not evaporate with a CISO or CEO change.
OUTPUTS
Institutional capability, governance evolution, compounding authority
FAILURE SIGNATURE
The organization that treats each incident as a discrete event rather than an input to a compounding capability.
An organization is as mature in Decision Authority as its weakest layer.
The most common weak layer, in our observation, is the permanent one.
The framework proposes four primitives by which Decision Authority can be measured. The primitives are intended to be adopted, cited, and refined by the broader community. They are open definitions; they are not proprietary to any single practitioner, including ourselves.
The Authority Graph
A directed representation of the organization's decision rights: the nodes are decisions, and the edges link each decision to the roles or bodies that hold authority over it.
Decision Latency
The measured time, under instrumented conditions, from the moment sufficient information exists to authorize a decision to the moment the decision is authorized.
The Override Taxonomy
Classifies the acts by which a decision already in motion — by a human or by an automated system — can be halted, reversed, or redirected.
The Maturity Rubric
Scores an organization's position on a five-level scale, from absent to adaptive.
The median Fortune 2000 enterprise scores between L1 and L2. A small number reach L3. L4 is rare. L5 is not yet present at scale anywhere.
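To make the primitives concrete, here is one possible executable encoding. It is our illustration, not part of the framework; the decision names, holder roles, and intermediate maturity labels are all assumptions:

```python
# --- Primitive 1: the Authority Graph (decision -> authority holders) ---
authority_graph = {
    "isolate-production-network": ["CISO"],
    "pay-ransom": ["CEO", "Board"],
    "halt-agentic-trading-system": ["CRO", "Head of AI Operations"],
    "notify-regulator": ["General Counsel"],
}

# --- Primitive 2: Decision Latency ---
# A measurement (two timestamps per decision) rather than a static
# structure, so it is omitted from this sketch.

# --- Primitive 3: the Override Taxonomy ---
# Acts by which a decision already in motion can be altered.
OVERRIDE_CLASSES = ("halt", "reverse", "redirect")

# --- Primitive 4: the Maturity Rubric ---
# Five levels; the framework names only the endpoints ("absent",
# "adaptive"). The intermediate labels here are our placeholders.
MATURITY_LEVELS = {1: "absent", 2: "ad hoc", 3: "defined",
                   4: "instrumented", 5: "adaptive"}

def authority_ambiguity(graph):
    """Crude peacetime check: decisions without exactly one named holder
    are candidates for authority ambiguity. Multiple holders may be
    deliberate concurrence, so this flags rather than fails."""
    return [d for d, holders in graph.items() if len(holders) != 1]

print(authority_ambiguity(authority_graph))
```

Even this trivial encoding surfaces a useful property: any decision class absent from the graph entirely is, by construction, a case of authority ambiguity.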
Every element of the framework changes when autonomous and semi-autonomous systems enter the decision chain. The Authority Graph acquires non-human nodes. Decision Latency can invert — the AI decides before humans are aware a decision was required.
Agentic AI does not create new decision authority questions. It forces existing ones to be answered at speeds the organization has never operated at.
The AI authority question, stated precisely
For every agentic system deployed inside or adjacent to high-consequence decision chains, an organization should be able to answer four questions without pause:
What is this system authorized to decide on its own?
What is it authorized to decide with human concurrence?
What must it escalate to a named human authority, and how does the escalation channel function at the system's operating speed?
Who is accountable — by name and role — when the system acts and the action proves wrong?
In 2026, the median Fortune 2000 organization deploying agentic AI can answer the first question partially, the second ambiguously, the third informally, and the fourth not at all. This is the single largest decision-authority gap visible in current enterprise reality.
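The four questions can be forced into explicit form by requiring every agentic deployment to ship with a machine-checkable authority declaration. The sketch below is a hypothetical structure of our own; the field names, system name, and roles are assumptions, not prescribed by the framework:

```python
from dataclasses import dataclass

@dataclass
class AgentAuthorityDeclaration:
    """One declaration per agentic system, answering the four
    questions before the system goes live."""
    system: str
    autonomous: list            # Q1: decided on its own authority
    with_concurrence: list      # Q2: decided only with human concurrence
    escalate_to: dict           # Q3: decision -> (named role, channel latency in seconds)
    accountable: tuple          # Q4: (name, role) of the accountable human

    def gaps(self):
        """Return the unanswered questions: the declaration's own
        failure signature."""
        out = []
        if not self.autonomous:
            out.append("Q1: autonomous scope undefined")
        if not self.with_concurrence:
            out.append("Q2: concurrence scope undefined")
        if not self.escalate_to:
            out.append("Q3: escalation paths undefined")
        if not all(self.accountable):
            out.append("Q4: no named accountable human")
        return out

decl = AgentAuthorityDeclaration(
    system="fraud-triage-agent",
    autonomous=["flag-transaction"],
    with_concurrence=["freeze-account"],
    escalate_to={"close-account": ("Head of Fraud Operations", 300)},
    accountable=("", ""),  # the typical 2026 state: question four unanswered
)
print(decl.gaps())
```

A deployment gate that refuses to go live while `gaps()` is non-empty converts the four questions from an aspiration into a control.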
The framework is measurable. A Decision Authority Assessment covers, at minimum, the enterprise's top twenty-five decision classes across cyber incident response, AI-system override, regulatory disclosure, financial containment, and executive succession under incident conditions.
ASSESSMENT OUTPUTS
For Boards · commission a baseline assessment; adopt the Maturity Rubric as a standing metric
For Regulators · incorporate the primitives into supervisory guidance and examination handbooks
For Insurers · use the Decision Authority score as an underwriting signal for cyber and D&O lines
Categories are minted when enough of the market uses the same vocabulary to describe the same problem. This document is the vocabulary.
Citation
Unknowns Lab. (2026). The Decision Authority Framework v1.0: A discipline for governing critical decisions in the cyber and AI era. Retrieved from unknownslab.com/framework
Submit Your Comment
Version 1.0 is open for comment. We invite critique from academic researchers, regulators, analysts, insurers, and practitioners with field observations that corroborate or challenge the primitives. Material feedback will be incorporated into version 1.1.