RST: Reliable, Safe & Trustworthy

RST is a three-part framework for evaluating AI systems; each component addresses a different layer of responsibility:

Reliable: Technical practices that ensure consistent performance: audit trails, benchmark testing, continuous data quality review, bias testing, verification and validation. This is what the engineering team controls directly.

Safe: Management strategies that build a culture of safety: leadership commitment, internal reporting mechanisms, review boards for failures and near misses, public reporting, continuous refinement. This is what organizational design controls.

Trustworthy: Independent oversight structures that provide external accountability: professional standards bodies, government regulators, certification organizations, auditing firms, insurance companies. This is what institutions beyond the organization control.

The three layers are interdependent. Technical reliability without safety culture breeds complacency. Safety culture without independent oversight lacks external validation. Oversight without technical reliability has nothing solid to evaluate.

Related: 07-molecule—hcai-two-dimensional-framework, 05-atom—algorithmic-hubris