Three paths to certification, from fully automated API submission to white-glove enterprise assessment.
| Path | Validity | Re-certification Trigger | Renewal Process |
|---|---|---|---|
| ATOM Platform | Continuous / Per Run | Every model deployment | Automatic on next governor run |
| Standalone Submission | 12 months | Major model version change | Re-run benchmark + resubmit |
| Enterprise Assessment | 24 months | Architecture change or scope expansion | Scheduled reassessment engagement |
Clone the ATOM Labs RIS toolkit. Requires Python 3.10+ and access to your model endpoint or local binary.
Edit config/model.yaml with your model's endpoint URL, authentication, and metadata (name, version, organization).
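The exact schema of config/model.yaml is not specified in this text; a hypothetical sketch covering the fields mentioned above (all key names are assumptions — check the toolkit's own template):

```yaml
# Hypothetical layout; real key names may differ
model:
  name: example-model
  version: "1.2.0"
  organization: Example Org
endpoint:
  url: https://models.example.com/v1/chat   # your model endpoint
  auth_token: ${MODEL_API_TOKEN}            # keep secrets in the environment
```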
Execute the full RIS benchmark. This runs five control-family probes (RS, SC, DR, VE, GB) plus the governance boundary suite against your model.
The tool generates a local pre-submission report showing raw scores per dimension. Review before submission to catch any configuration issues.
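The raw report can also be sanity-checked programmatically before submission. A toy sketch: the five family codes come from the benchmark description above, while the `{family: [raw probe scores]}` report shape and the 0–1 score range are assumptions.

```python
# RS, SC, DR, VE, GB are the five control families named in the benchmark;
# the {family: [raw probe scores]} report shape is a hypothetical stand-in.
CONTROL_FAMILIES = ["RS", "SC", "DR", "VE", "GB"]

def preflight(report: dict) -> list[str]:
    """Flag likely configuration problems in a raw pre-submission report."""
    issues = []
    for fam in CONTROL_FAMILIES:
        scores = report.get(fam)
        if not scores:
            issues.append(f"{fam}: no probe results; check endpoint/config")
        elif not all(0.0 <= s <= 1.0 for s in scores):
            issues.append(f"{fam}: score outside the assumed [0, 1] range")
    return issues
```

An empty list means nothing obvious is wrong; any entry is worth resolving before you submit.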
POST the results JSON to the ATOM Labs RIS API. You'll receive a scorecard, badge SVG, and run_id for leaderboard listing.
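A stdlib-only sketch of the submission step. The endpoint URL, payload field names, and bearer-token auth scheme are all assumptions, not the documented API; only "POST the results JSON" and the returned run_id come from the text.

```python
import json
import urllib.request

API_URL = "https://api.atomlabs.example/ris/v1/submissions"  # hypothetical endpoint

def build_submission(results: dict, model_meta: dict) -> bytes:
    """Wrap raw benchmark results and model metadata into a JSON request body."""
    return json.dumps({"model": model_meta, "results": results}).encode("utf-8")

def submit(results: dict, model_meta: dict, token: str) -> dict:
    """POST the results JSON; the response carries the scorecard, badge, run_id."""
    req = urllib.request.Request(
        API_URL,
        data=build_submission(results, model_meta),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # auth scheme is an assumption
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```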
Fetch your badge SVG and full scorecard PDF using the run_id returned from the submission step.
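Artifact locations are not specified in this text; a hypothetical helper that derives download URLs from the returned run_id (the base URL and path layout are assumptions — prefer any URLs in your submission response):

```python
BASE_URL = "https://api.atomlabs.example/ris/v1"  # hypothetical base URL

def artifact_urls(run_id: str) -> dict:
    """Build download URLs for the badge SVG and scorecard PDF from a run_id."""
    return {
        "badge_svg": f"{BASE_URL}/runs/{run_id}/badge.svg",
        "scorecard_pdf": f"{BASE_URL}/runs/{run_id}/scorecard.pdf",
    }
```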
- **Scorecard (JSON):** machine-readable full scorecard with all dimension scores, governor metadata, timeline, and LCAC ledger anchor.
- **Summary report (Markdown):** human-readable summary of your model's RIS level, scores, recommendations, and pass/fail by control family.
- **Badges:** both rectangular (README) and circular seal (website) variants for your certified level, with embed codes.
- **Leaderboard entry:** your model appears on the public RIS leaderboard with your run_id, level, CII score, and submission date.
- **Ledger anchor:** LCAC governance ledger hash anchoring your certification to a tamper-evident audit chain.
- **Trust cache:** your latest run is cached at `lcac:ris:last` for real-time trust lookups by downstream systems.