OPEN STANDARD · v1.0 · CC BY 4.0 · Patent Pending: US 63/958,209
The Reasoning Integrity Standard
A formal specification for measuring and governing the structural integrity of AI reasoning systems. Independent of model, vendor, or provider.
Atom Labs · 2026
13 specification sections
5 integrity levels
6 control families
What RIS Measures
Reasoning integrity, not correctness.
Most AI evaluation frameworks measure correctness: whether the model gives the right answer. RIS measures something more fundamental: whether the reasoning process itself is stable, predictable, and coherent under production conditions.
A model can give correct answers while its reasoning is structurally unstable. That instability becomes a liability in any environment where reasoning must be consistent, bounded, and auditable.
RS · Chain Stability
30% of composite score
Repetition Consistency
Does the model reason consistently across equivalent prompts? Does perturbation destabilize the reasoning chain? Controls: RS-1 through RS-4.
SC · Semantic Coherence
25% of composite score
Step-Level Alignment
Does each reasoning step follow logically from the previous? Is output semantically aligned with intent across variations? Controls: SC-1 through SC-4.
DR · Drift Resistance
20% of composite score
Temporal Stability
How much does reasoning behavior change over time or across sessions? Is drift detectable, bounded, and recoverable? Controls: DR-1 through DR-3.
VE · Variance Envelope
15% of composite score
Predictable Uncertainty
Does output variance stay within acceptable bounds? Is the model predictably uncertain rather than chaotically variable? Controls: VE-1 through VE-3.
GB · Governance Boundary
10% of composite score
Constraint Adherence
Does the model recognize and respect established reasoning constraints? Does it honor operational boundaries consistently? Controls: GB-1 through GB-4.
The 5 RIS Levels
A five-level maturity model for reasoning integrity.
RIS levels classify systems based on measurable reasoning behavior, not model size, architecture, vendor, or training methodology. Level assignment requires both a composite score at or above the level threshold and demonstrated compliance with all mandatory controls for that level.
RIS-0
Uncontrolled
0.00 – 0.40
No structural stability guarantees. Suitable for research and prototyping only. Production deployment not recommended.
RIS-1
Drift-Sensitive
0.41 – 0.60
Basic stability present but vulnerable to drift. Non-critical applications only. Continuous monitoring required.
RIS-2
Semi-Stable
0.61 – 0.75
Acceptable for low-risk enterprise deployment. Standard governance controls required. Annual re-evaluation.
RIS-3
Controlled
0.76 – 0.89
Production and regulated environment minimum. Full governance required. Semi-annual re-evaluation.
RIS-4
High-Integrity
0.90 – 1.00
Safety-critical, financial, and legal systems. Maximum governance and audit controls required. Quarterly evaluation.
RIS Composite Score =
(Chain Stability × 0.30) +
(Semantic Coherence × 0.25) +
(Drift Resistance × 0.20) +
(Variance Envelope × 0.15) +
(Governance Boundary × 0.10)
Certification requires BOTH:
1. Composite score ≥ level threshold
2. All mandatory controls for that level passed
Note: A system SHALL NOT be assigned a RIS level solely based on scoring metrics without meeting corresponding control requirements. · RIS v1.0 Section 7.5
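The weighted sum and the dual certification requirement above can be sketched in Python. The weights and level thresholds are taken from this document; the function names, the `controls_passed` mapping, and the sample dimension scores are illustrative assumptions, not part of the specification.

```python
# Sketch of the RIS composite score and level assignment.
# Weights and thresholds come from this document; everything else
# (names, sample data) is illustrative.

WEIGHTS = {
    "chain_stability":     0.30,
    "semantic_coherence":  0.25,
    "drift_resistance":    0.20,
    "variance_envelope":   0.15,
    "governance_boundary": 0.10,
}

# Lower bound of each level's composite-score band, highest first.
LEVEL_THRESHOLDS = [
    ("RIS-4", 0.90),
    ("RIS-3", 0.76),
    ("RIS-2", 0.61),
    ("RIS-1", 0.41),
]

def composite_score(dimensions: dict) -> float:
    """Weighted sum of the five dimension scores (each in [0, 1])."""
    return sum(WEIGHTS[name] * dimensions[name] for name in WEIGHTS)

def assign_level(dimensions: dict, controls_passed: dict) -> str:
    """Highest level whose score threshold AND mandatory controls are
    both satisfied -- the dual requirement of Section 7.5."""
    score = composite_score(dimensions)
    for level, threshold in LEVEL_THRESHOLDS:
        if score >= threshold and controls_passed.get(level, False):
            return level
    return "RIS-0"

dims = {
    "chain_stability": 0.80, "semantic_coherence": 0.75,
    "drift_resistance": 0.70, "variance_envelope": 0.85,
    "governance_boundary": 0.90,
}
# Composite is ~0.785, which clears the RIS-3 threshold of 0.76 ...
level = assign_level(dims, {"RIS-3": True})   # "RIS-3"
# ... but the same score with no controls passed yields RIS-0,
# because scoring alone never grants a level.
fallback = assign_level(dims, {})             # "RIS-0"
```

Note how `assign_level(dims, {})` returns `RIS-0` despite a passing score: the control-compliance check is what makes Section 7.5's "SHALL NOT ... solely based on scoring metrics" rule operational.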
The ATOM OS Governance Framework
Three standards. One governed system.
RIS is one of three formal standards developed by Atom Labs that together form a complete AI governance framework. These standards are not APIs or guidelines; they are OS-level rules that define how reasoning behaves, how trust evolves, and how cognitive boundaries are enforced.
Boundary Standard
LCAC
Least-Context Access Control
Live in ATOM Platform
Governs what AI reasoning may access at context time. Enforces role, identity, and trust boundaries before inference begins. Brings Zero Trust principles into cognition: reasoning should access the least context necessary to perform its task.
Learn more →
Integrity Standard
RIS
Reasoning Integrity Standard
v1.0 Published · CC BY 4.0
Governs the structural integrity of reasoning itself. Evaluates chain stability, semantic coherence, drift sensitivity, variance envelope compliance, and governance boundary adherence across five measurable dimensions. This document.
Read specification →
Trust Standard
CII
Cognitive Integrity Index
Live in ATOM Platform
Unified trust score combining RIS composite and LCAC trust stability over time. CII = (RIS composite + LCAC trust stability) ÷ 2, with latency penalties applied. Allows organizations to compare cognition the way infrastructure teams compare uptime.
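The CII combination rule quoted above can be sketched as follows. The averaging step is taken from this document; the exact form of the latency penalty is not specified here, so the subtractive `latency_penalty` parameter below is a placeholder assumption.

```python
# Sketch of the CII combination rule: mean of RIS composite and LCAC
# trust stability, minus a latency penalty, clamped to [0, 1].
# The penalty's exact form is NOT specified in this document; a simple
# subtraction is assumed here for illustration.

def cii(ris_composite: float, lcac_trust_stability: float,
        latency_penalty: float = 0.0) -> float:
    base = (ris_composite + lcac_trust_stability) / 2
    return max(0.0, min(1.0, base - latency_penalty))

score = cii(0.785, 0.90)            # ≈ 0.8425, no penalty
penalized = cii(0.785, 0.90, 0.05)  # ≈ 0.7925 after a 0.05 penalty
```

The clamp keeps the score comparable across systems even when a large penalty would otherwise push it below zero, which fits the document's framing of CII as an uptime-like comparison metric.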
Learn more →
Public Leaderboard
Model evaluations.
Every model evaluated through the RIS pipeline appears on the public leaderboard. Rankings are by CII composite score. 11 total evaluations across 3 models to date.
| Rank | Model | RIS Level | CII Score | Chain Stability | Drift | Variance | Source |
|------|-------|-----------|-----------|-----------------|-------|----------|--------|
| #1 | alpha-test | RIS-2 | 0.7479 | 0.7500 | 0.0000 | 1.0000 | LCAC |
| #2 | alpha-test-model | RIS-2 | 0.7479 | 0.7500 | 0.0000 | 1.0000 | Portal |
| #3 | alpha-test-model | RIS-1 | 0.4755 | 0.0750 | 0.6047 | 0.8889 | Portal |
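The ranking rule above (order by CII composite, descending) can be illustrated with the three published evaluations. The data rows are copied from the table; the field names are illustrative assumptions.

```python
# Re-creation of the leaderboard ordering: evaluations sorted by CII
# composite score, descending. Python's sort is stable, so the two
# tied entries keep their published relative order.

evaluations = [
    {"model": "alpha-test",       "level": "RIS-2", "cii": 0.7479, "source": "LCAC"},
    {"model": "alpha-test-model", "level": "RIS-2", "cii": 0.7479, "source": "Portal"},
    {"model": "alpha-test-model", "level": "RIS-1", "cii": 0.4755, "source": "Portal"},
]

ranked = sorted(evaluations, key=lambda e: e["cii"], reverse=True)
for rank, e in enumerate(ranked, start=1):
    print(f"#{rank} {e['model']} ({e['level']}): {e['cii']:.4f}")
```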
Certify Your AI System
Get RIS certified.
Organizations using the ATOM platform receive real-time RIS scores on every governed call. To receive a formal certification with full scorecard and embeddable badge:
1
Run 100+ governed calls through ATOM
2
Submit for formal evaluation
3
Receive your scorecard within 24 hours
4
Display your certification badge
Published Research
Atom Labs publications.
11 papers published by Atom Labs, all under CC BY 4.0.
Paper 10 of 11
RIS v1.0 Technical Report
Reasoning Integrity Standard v1.0: A Formal Framework for AI Reasoning Stability, Coherence, and Boundary Governance. February 2026. CC BY 4.0.
Paper 5 of 11
ATOM Cognitive Control Plane
The reference implementation of Authority-Before-Execution. Complete technical paper including mathematical governance primitives and operational architecture.
Paper 1 of 11
Authority-Before-Execution
The foundational principle: no AI action should execute before authority, policy, and governance conditions are resolved at execution time.