Get RIS Certified

Three paths to certification: from fully automated API submission to enterprise white-glove assessment.

Self-Serve

Standalone Submission

Run the RIS benchmark suite locally or in your own infrastructure, then submit results via the public API. No platform subscription required.
Free
open benchmark tools
  • Download the RIS benchmark runner
  • Run against your model endpoint
  • Submit JSON payload to public API
  • Receive scorecard and badge SVG
  • Opt-in public leaderboard listing
  • 12-month certification validity
  • Community support via GitHub Issues
View Instructions ↓
Enterprise

Enterprise Assessment

White-glove on-site or virtual assessment by ATOM Labs engineers. Includes custom control family mapping, gap analysis, and remediation roadmap.
Custom
contact for pricing
  • Dedicated ATOM engineer engagement
  • Custom control family scope definition
  • Gap analysis with remediation roadmap
  • Executive governance summary report
  • Integration with existing compliance frameworks
  • 24-month certification validity
  • Priority leaderboard placement
Contact ATOM Labs →

Certification Validity Periods

Path                     Validity               Re-certification Trigger                 Renewal Process
ATOM Platform            Continuous / Per Run   Every model deployment                   Automatic on next governor run
Standalone Submission    12 months              Major model version change               Re-run benchmark + resubmit
Enterprise Assessment    24 months              Architecture change or scope expansion   Scheduled reassessment engagement

Standalone Certification Process

Step-by-step instructions for the self-serve path. Estimated time: 30-90 minutes depending on model size.
1

Install the RIS Benchmark Tools

Clone the ATOM Labs RIS toolkit. Requires Python 3.10+ and access to your model endpoint or local binary.

git clone https://github.com/atomlabs/ris-tools
cd ris-tools && pip install -r requirements.txt
2

Configure Your Model Target

Edit config/model.yaml with your model's endpoint URL, authentication, and metadata (name, version, organization).

model_name: "my-model-v2.1"
endpoint: "http://localhost:8000/v1/chat/completions"
auth_token: "$MY_MODEL_API_KEY"
org: "my-org"
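The auth_token value above references an environment variable rather than embedding the key in the file. Whether ris-tools expands such references exactly this way is an assumption; this sketch only shows the general pattern using the standard library:

```python
import os

def expand_config_value(value: str) -> str:
    """Expand $VAR / ${VAR} references in a config value from the environment.

    Illustrative helper, not part of the RIS toolkit: shows how a value like
    "$MY_MODEL_API_KEY" can be resolved without storing the secret on disk.
    """
    return os.path.expandvars(value)
```

Keeping secrets in environment variables means the config file can be committed or shared without leaking credentials.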
3

Run the Benchmark Suite

Execute the full RIS benchmark. This runs 5 control family probes (RS, SC, DR, VE, GB) plus the governance boundary suite against your model.

python ris_benchmark.py --full --output results/
# Output: results/run_XXXXXXXX.json
# Estimated runtime: 20-60 min depending on model
4

Review Pre-Submission Report

The tool generates a local pre-submission report showing raw scores per dimension. Review before submission to catch any configuration issues.

python ris_report_preview.py results/run_XXXXXXXX.json
# Prints: RIS Level, composite score, per-dimension breakdown
# Fix any warnings before submitting
5

Submit via API

POST the results JSON to the ATOM Labs RIS API. You'll receive a scorecard, badge SVG, and run_id for leaderboard listing.

curl -X POST https://api.atomlabs.app/api/v1/ris/submit \
  -H "Content-Type: application/json" \
  -d @results/run_XXXXXXXX.json
# Returns: { "run_id": "RUN-...", "ris_level": "RIS-2",
#   "composite_score": 0.74, "badge_url": "..." }
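If you script the submission instead of using curl, the response can be unpacked in a few lines. A sketch using only the standard library; the field names follow the example response shown in the step above:

```python
import json

def parse_submit_response(body: str) -> tuple[str, str, float]:
    """Extract (run_id, ris_level, composite_score) from the submit response.

    Field names are taken from the example API response; any other fields
    (e.g. badge_url) are left in the raw payload.
    """
    payload = json.loads(body)
    return payload["run_id"], payload["ris_level"], float(payload["composite_score"])
```

The returned run_id is what you pass to the scorecard and badge endpoints in the next step.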
6

Download Badge & Scorecard

Fetch your badge SVG and full scorecard PDF using the run_id returned from the submission step.

curl https://api.atomlabs.app/api/v1/ris/scorecard/RUN-... > scorecard.json
curl https://ris.atomlabs.app/badges/RIS-2-rect.svg > badge.svg

What You Receive

Scorecard JSON

Machine-readable full scorecard with all dimension scores, governor metadata, timeline, and LCAC ledger anchor.

Certification Report (MD)

Human-readable Markdown report summarizing your model's RIS level, scores, recommendations, and pass/fail by control family.

Badge SVGs (×2)

Both rectangular (README) and circular seal (website) badge variants for your certified level, with embed codes.

Leaderboard Listing

Your model appears on the public RIS leaderboard with your run_id, level, CII score, and submission date.

Ledger Anchor

LCAC governance ledger hash anchoring your certification to a tamper-evident audit chain.

Redis Hot-Cache Entry

Your latest run is cached at lcac:ris:last for real-time trust lookups by downstream systems.

Frequently Asked Questions

What is the minimum score required for RIS certification?
Any model that completes the benchmark suite receives a certification at whatever level its composite score places it, including RIS-0 (Unverified). There is no minimum score to receive a scorecard and badge. However, deployment recommendations are tied to level: RIS-3 or higher is required for autonomous financial AI, and RIS-4 for any fully governed autonomous agent deployment.
Can I certify a fine-tuned or private model?
Yes. The standalone path supports any model with a chat-completions compatible HTTP endpoint, including private, on-premises, or fine-tuned models. Your model weights and proprietary data never leave your infrastructure; only the benchmark probe results are submitted.
How is the composite score calculated?
Composite = Chain Stability × 0.30 + Semantic Coherence × 0.25 + Drift Resistance × 0.20 + Variance Envelope × 0.15 + Governance Boundary × 0.10. All dimension scores are normalized to [0,1]. The Cognitive Integrity Index (CII) additionally incorporates LCAC trust stability: CII = (composite + trust_stability) / 2.
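The two formulas above translate directly into code:

```python
def composite_score(cs: float, sc: float, dr: float, ve: float, gb: float) -> float:
    """Weighted composite of the five dimension scores, each in [0, 1]:
    Chain Stability, Semantic Coherence, Drift Resistance,
    Variance Envelope, Governance Boundary."""
    return 0.30 * cs + 0.25 * sc + 0.20 * dr + 0.15 * ve + 0.10 * gb

def cii(composite: float, trust_stability: float) -> float:
    """Cognitive Integrity Index: mean of composite and LCAC trust stability."""
    return (composite + trust_stability) / 2
```

Because the weights sum to 1.0 and each dimension is normalized to [0,1], the composite (and therefore the CII) always lands in [0,1] as well.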
Does certification expire automatically?
Standalone certifications are valid for 12 months from issue date. The ATOM Platform path re-certifies automatically on each governor run. You are responsible for re-running benchmarks after any significant model architecture change, weight update, or inference configuration change, even within the validity period.
Is the benchmark suite open source?
The RIS specification is openly published and the scoring formulas are fully transparent. The reference benchmark implementation is maintained by ATOM Labs and available to registered platform users. A community edition is planned for general release; join the waitlist via atomlabs.app/inquiry.