API for prediction stability

Know when a score stream stops being trustworthy.

Send numeric scores to CI-1T and get back CI, EMA, authority levels, and ghost detection. Monitor model outputs, agent telemetry, or any confidence-like signal before failure looks normal.

Get your API key
1,000 free credits on signup.

One engine, three ways in.

Use the raw API if you want full control, the Python SDK if you want speed, or the MCP server if you want CI-1T available inside your tooling.

Hosted API

Post raw scores to CI-1T and get structured stability outputs back without running the engine or maintaining scoring infrastructure yourself.

Session and fleet monitoring

Start with a single score stream, then grow into persistent sessions and multi-node fleet monitoring without changing the core contract.

Built for integration

Create an API key in the dashboard, fund credits, and move between REST, the Python SDK, and MCP tools from the same account.

REST

Direct HTTPS endpoints for evaluate, fleet, sessions, lab, and probing workflows.

SDK

Typed Python client with auth, Q0.16 conversion, monitoring helpers, and fleet/session support.

MCP

Use CI-1T inside Copilot, Cursor, Claude Desktop, and other MCP clients without building your own UI first.

Fleet

Monitor multiple nodes at once and compare stability across a live or historical window.

A simple contract, specialized outputs.

CI-1T accepts numeric score streams and returns stability metrics that are hard to derive reliably from a single confidence value. Use the same contract across models, agents, sensors, trading systems, or any other time-series of scores.
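The contract above can be sketched as a plain HTTPS call. The base URL, payload field names, and response shape below are assumptions inferred from the SDK snippet on this page, not official documentation; the `/api/evaluate` path is the one shown in the integration diagram.

```python
import json
import urllib.request

# Hypothetical base URL -- substitute the real CI-1T host from the dashboard.
API_URL = "https://api.example.com/api/evaluate"


def build_payload(scores, n=None):
    """Package a raw score stream for the hosted evaluate endpoint.

    Field names ("scores", "n") mirror the SDK snippet on this page
    and are assumptions, not a documented schema.
    """
    return {"scores": list(scores), "n": n if n is not None else len(scores)}


def evaluate(scores, api_key, n=None):
    """POST a score stream and return the parsed stability output."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(scores, n)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same payload shape works whether the scores come from an LLM, a sensor, or a trading signal; only the stream changes, not the contract.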

Any numeric signal · Rust engine · <200ms evals · API keys in dashboard · Python SDK quickstart
Domain agnostic by design

Use it for LLM output confidence today, then point the same API at sensor telemetry, ensemble outputs, or any other score stream tomorrow.

Simple developer on-ramp

Start with the Python SDK for the fastest path to production, or call the API directly if you want CI-1T embedded deeper into your own services.

python
from ci1t import Client

client = Client("ci_...")  # dashboard-issued API key
result = client.evaluate(scores=[0.72, 0.74, 0.12], n=3)
print(result["episodes"][-1]["ci_out"])  # CI for the latest episode

The integration contract is small.

Send scores in. Get stability metrics back. Keep your own alerting, policy, and product logic on your side of the boundary.

Your scores → /api/evaluate (CI-1T API) → CI | EMA | AL | ghost → Your app
Single-model evaluation

Pass a score stream and classify whether it is stable, drifting, flipping, or collapsing over time.

Fleet and session monitoring

Track multiple nodes over time, compare windows, and inspect episode history across rounds.

Guardrail-ready outputs

Use CI, EMA, authority levels, and ghost flags to drive your own dashboards, alerts, or runtime policies.
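A guardrail on your side of the boundary can be as small as one predicate over the returned episode. The field names (`ci_out`, `ghost`) and the threshold below are assumptions based on the snippet and metric names on this page, not a documented schema; the policy itself is yours to define.

```python
def should_alert(episode: dict, ci_floor: float = 0.5) -> bool:
    """Flag an episode for alerting from CI-1T outputs.

    Hypothetical field names: "ghost" (bool ghost-detection flag) and
    "ci_out" (confidence-interval score) follow the page's snippet.
    Alert when a ghost is flagged or CI drops below your floor.
    """
    return bool(episode.get("ghost", False)) or episode.get("ci_out", 1.0) < ci_floor
```

Feed the latest episode from an evaluate call into `should_alert` and route the result to whatever alerting or runtime policy you already run.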

pip install ci1t-sdk · GitHub ↗

Built for production.

Live API keys, real credits, and production-facing integrations. The product is built to drop into real monitoring flows, not stay trapped in demos.

Rust engine

<200ms evaluations. Lightweight, zero runtime dependencies.

Live workload validation

Used against live model traffic with inspectable run reports, session history, and drift analysis.

Dashboard-issued API keys

Sign in, create a CI-1T API key, fund credits, and call the engine directly from the API, SDK, or MCP server.

Python SDK

pip install ci1t-sdk. Three lines to your first evaluation. Auto Q0.16 conversion.
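The SDK handles Q0.16 conversion for you, but the arithmetic is simple to sketch: Q0.16 is an unsigned fixed-point format with 16 fractional bits, representing values in [0, 1) at a resolution of 2⁻¹⁶. The clamp-to-65535 behavior below is an assumption about how out-of-range inputs are handled, not the SDK's documented behavior.

```python
def to_q016(x: float) -> int:
    """Convert a score in [0, 1] to Q0.16 fixed point (16 fractional bits).

    Scales by 2**16 and clamps to the representable range 0..65535;
    the clamping policy is an assumption, not documented SDK behavior.
    """
    return max(0, min(round(x * 65536), 65535))


def from_q016(q: int) -> float:
    """Convert a Q0.16 integer back to a float score."""
    return q / 65536
```

Round-tripping loses at most 2⁻¹⁶ of precision, which is why the SDK can do the conversion automatically without affecting stability outputs in practice.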

Usage-based from the first call.

Create your API key in the dashboard, integrate through the API or SDK, and pay only for what you use. No monthly subscription treadmill. No enterprise sales detour.

Dashboard first

Sign in, create an API key, and manage credits in one place before you ship an integration.

No subscriptions

Use the engine when you need it. Pay for evaluations, not seats or annual contracts.

Built for your stack

Use the raw API, the Python SDK, or MCP tools as the integration layer that makes sense for your product.