Reevol

GLOSSARY

Explainability

The ability to articulate, in human-auditable form, why an AI system produced a given output. For regulated trade decisions, explainability is the difference between a defensible decision and a fineable one.

Explainability in AI is the property of being able to give a faithful, understandable account of why a model produced its output. For trade operators, "faithful" means the explanation actually corresponds to the model's decision process — not a post-hoc rationalisation — and "understandable" means it's intelligible to a customs auditor, not just a data scientist.
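To make "faithful" concrete, here is a minimal sketch only (the model, feature names, weights, and threshold are all hypothetical, not part of any real screening system): with a simple linear score, each feature's contribution is exactly weight × value, so the explanation record is the decision process itself rather than a post-hoc rationalisation.

```python
from datetime import datetime, timezone

# Hypothetical linear sanctions-screening score. Because the model is
# linear, each feature's contribution (weight * value) is a faithful
# account of how the score was reached, not a rationalisation after
# the fact.
WEIGHTS = {
    "name_similarity": 0.6,   # fuzzy match against the sanctions list
    "country_risk": 0.3,      # risk rating of the counterparty's country
    "payment_anomaly": 0.1,   # deviation from historical payment pattern
}
THRESHOLD = 0.5

def explain_decision(features: dict) -> dict:
    """Score a transaction and return a human-auditable explanation."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": "escalate" if score >= THRESHOLD else "clear",
        "score": round(score, 3),
        # Each entry is readable by a customs auditor, not just a data
        # scientist: (feature, observed value, contribution to score),
        # sorted with the biggest driver first.
        "reasons": sorted(
            ((k, features[k], round(v, 3)) for k, v in contributions.items()),
            key=lambda r: -abs(r[2]),
        ),
    }

record = explain_decision(
    {"name_similarity": 0.9, "country_risk": 0.2, "payment_anomaly": 0.1}
)
```

Here `record["reasons"]` shows the name match drove the escalation, which is the kind of account an auditor can check line by line. Real systems are rarely this simple, but the same principle applies: the explanation must be derived from the actual decision path, not generated separately.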

Why it matters

The EU AI Act, BIS supervisory expectations on model risk, and GDPR Article 22 all converge on the same requirement: when an AI system makes a decision with legal effect — clearing a sanctions match, classifying an HS code, declining a credit line — you must be able to explain how it got there. No explainability, no defence.

Related terms

  • Model Risk Management
  • Audit Trail
  • High-risk AI System
  • Human-in-the-Loop

Further reading