
Definitions and Terminology

Normative Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 and RFC 8174 when, and only when, they appear in all capitals, as shown here.

Terms (Working)

  • Artificial Intelligence (AI) system: a system that infers, predicts, generates, or recommends outputs that influence real or simulated environments, including systems built from statistical, symbolic, and/or machine-learned components.
  • Quantum computing (QC) system: hardware, firmware, software, and operational processes used to perform computations leveraging quantum mechanical effects (e.g., superposition/entanglement), including hybrid classical-quantum workflows.
  • Autonomy: the degree to which a system can select goals, plan, and/or execute actions without requiring human initiation or intervention. Autonomy is a spectrum; autonomy claims SHOULD specify the boundary conditions and stop/override mechanisms.
  • Human agency: the practical ability of humans (individually and collectively) to understand material system influence, form intentions, make choices, and pursue remedies without coercion, deception, or undue constraint.
  • Explainability: the ability to provide reasons for system outputs or recommended actions that are understandable and usable by the relevant stakeholder (e.g., affected party, operator, auditor) at the required time.
  • Interpretability: a property of a model or system that supports direct human understanding of how inputs relate to outputs (e.g., through constrained model forms or faithful internal representations). Interpretability is one (not the only) means to achieve explainability.
  • Human oversight: a defined human authority and operational capability to monitor, intervene in, and override system operation, including pre-deployment approvals, runtime stop/rollback mechanisms, and post-incident corrective action.
  • Moral decision-making: a determination that allocates rights, duties, entitlements, burdens, or sanctions to persons or groups, or that materially constrains liberty, dignity, or access to essentials, where normative judgment and due process are required beyond technical optimization.
  • High-impact decision: a decision affecting legal status, access to essential services, health, safety, liberty, employment, education, housing, finance, or civil participation.
  • Affected party: a person or group materially influenced by a system’s outputs or by decisions enabled by those outputs.
  • Meaningful consent: consent that is informed, specific, revocable where feasible, and not obtained through manipulation, coercion, or concealment of material facts.
  • Dual-use technology: a capability that can be applied for both beneficial and harmful purposes, including the destabilization of safety or security.
  • Quantum advantage: a demonstrated performance advantage for a specified task and workload under defined conditions, relative to best-known or best-available classical approaches, supported by transparent benchmarking and stated assumptions. Claims of quantum advantage are context-dependent.
  • Explainability debt: accumulated risk from operating systems whose decisions cannot be adequately explained to appropriate stakeholders at required timescales.

Risk Tiers (See 04_risk_framework/risk_classification.md)

  • Tier 0: research / educational
  • Tier 1: low-risk commercial
  • Tier 2: high-impact societal
  • Tier 3: critical / existential
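
For tooling that needs to reference these tiers programmatically (e.g., in the risk framework referenced above), a minimal sketch is shown below. The enum name, member names, and the oversight-threshold helper are illustrative assumptions, not part of this standard:

```python
from enum import IntEnum


class RiskTier(IntEnum):
    """Illustrative encoding of the risk tiers above (names are assumptions)."""
    RESEARCH_EDUCATIONAL = 0   # Tier 0: research / educational
    LOW_RISK_COMMERCIAL = 1    # Tier 1: low-risk commercial
    HIGH_IMPACT_SOCIETAL = 2   # Tier 2: high-impact societal
    CRITICAL_EXISTENTIAL = 3   # Tier 3: critical / existential


def requires_human_oversight(tier: RiskTier) -> bool:
    """Hypothetical policy hook: treat Tier 2 and above as warranting
    mandatory human oversight. The threshold is an assumption for
    illustration; the actual mapping belongs in the risk framework."""
    return tier >= RiskTier.HIGH_IMPACT_SOCIETAL
```

Using an integer-backed enum keeps the tier ordering explicit, so comparisons such as `tier >= RiskTier.HIGH_IMPACT_SOCIETAL` read the same way the tier numbering does.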