Multi-LLM Failover
Bring Your Own Model — automatic cascade across providers with circuit breaker logic. Never block on a single LLM outage.
First open-source control plane with a 5-stage Verified Autonomy Pipeline.
Every action verified. Every decision audited. Your infrastructure.
5-minute quickstart. Curated catalog (15+ certified). Zero lock-in.
Bring your own LLM key — encrypted at rest (AES-256-GCM). We never store plaintext.
Governed by default — every action passes through 5 policy gates before execution.
Your data stays yours — on-prem or your cloud. Zero telemetry. Full audit trail.
$ pip install -e ".[dev]" && occp demo
Every agent action traverses the Verified Autonomy Pipeline before touching your system. No shortcuts, no overrides, no exceptions.
Not just a wrapper. A complete governance layer with policy enforcement, observability, and failover — built for teams that can't afford incidents.
Bring Your Own Model — automatic cascade across providers with circuit breaker logic. Never block on a single LLM outage.
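The cascade-with-circuit-breaker idea above can be sketched in a few lines. This is a minimal illustration, not OCCP's actual API: the `CircuitBreaker` and `cascade` names, thresholds, and provider tuples are all hypothetical.

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; allows a probe after `cooldown` seconds."""
    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def available(self):
        if self.opened_at is None:
            return True
        # half-open: permit one probe call once the cooldown has elapsed
        return time.monotonic() - self.opened_at >= self.cooldown

    def record(self, ok):
        if ok:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

def cascade(providers, prompt):
    """Try each (name, call, breaker) in order, skipping providers whose circuit is open."""
    for name, call, breaker in providers:
        if not breaker.available():
            continue
        try:
            out = call(prompt)
            breaker.record(ok=True)
            return name, out
        except Exception:
            breaker.record(ok=False)
    raise RuntimeError("all providers unavailable")
```

A failed provider trips its breaker and is skipped on subsequent calls, so one outage never blocks the whole cascade.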
PII guard, prompt injection defense, output sanitization, and fully customizable rule sets — all enforced at the gate.
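A gate chain like the one described can be sketched as below. The regex and phrase blocklist are deliberately crude stand-ins for production detectors, and the `enforce`/gate names are illustrative, not OCCP's real interfaces.

```python
import re

def pii_guard(text):
    # crude SSN-style pattern; a real deployment would use a tuned detector
    return not re.search(r"\b\d{3}-\d{2}-\d{4}\b", text)

def injection_defense(text):
    # naive phrase blocklist as a stand-in for a learned classifier
    return "ignore previous instructions" not in text.lower()

GATES = [pii_guard, injection_defense]

def enforce(text):
    """Run the payload through every gate; any single veto blocks execution."""
    verdicts = {gate.__name__: gate(text) for gate in GATES}
    return all(verdicts.values()), verdicts
```

Returning per-gate verdicts alongside the overall decision is what makes the "gate decisions" report in the quickstart sample possible.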
Runs on Your Machine — the 5-stage pipeline ensures every agent action is planned, gated, executed, validated, and shipped on your own infrastructure — with no bypass.
Full Observability — SHA-256 chained audit log with full provenance. Every decision, every output — immutably recorded for compliance.
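A SHA-256 chained audit log works by having each entry's hash commit to the previous entry, making tampering detectable. A minimal sketch (hypothetical function names, not OCCP's storage format):

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(chain, event):
    """Append an event whose hash commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every link; any edit to any earlier event breaks verification."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because each hash covers the previous one, altering a single recorded decision invalidates every entry after it.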
JWT authentication, role-based access control, and EU AI Act aligned controls (Art. 12, 14, 19) — ready for enterprise procurement. Not legal advice; verify compliance for your deployment.
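Role-based access control at its core is a deny-by-default lookup from role to permitted actions. A sketch under assumed role and action names (JWT decoding and claim extraction elided):

```python
# Hypothetical role-to-permission map; real deployments would load this from config.
ROLES = {
    "operator": {"run_task", "view_audit"},
    "auditor": {"view_audit"},
}

def authorize(role, action):
    """Deny by default: unknown roles and unlisted actions are both rejected."""
    return action in ROLES.get(role, set())
```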
Code execution in nsjail, bubblewrap, or process-level sandboxes. Auto-detected at startup based on available binaries and kernel capabilities.
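Auto-detection at startup can be as simple as probing PATH for each backend's binary in strength order. This sketch takes `which` as a parameter only so the fallback logic is testable; the real detection also checks kernel capabilities, which is omitted here.

```python
import shutil

# Candidate backends in preference order; bubblewrap's binary is named "bwrap".
BACKENDS = (("nsjail", "nsjail"), ("bubblewrap", "bwrap"))

def detect_sandbox(which=shutil.which):
    """Return the first isolation backend whose binary is on PATH,
    falling back to process-level sandboxing."""
    for name, binary in BACKENDS:
        if which(binary):
            return name
    return "process"
```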
Server-side tool dispatch: WordPress REST API, SSH node execution, filesystem sandbox, HTTP client. Brain controls 4 infrastructure nodes via SSH.
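Server-side dispatch is typically a registry mapping tool names to handlers. A minimal sketch with a hypothetical decorator and placeholder handler, not OCCP's actual tool API:

```python
TOOLS = {}

def tool(name):
    """Register a handler under a dispatch name."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("http.get")
def http_get(args):
    # placeholder: a real handler would issue the request and return the body
    return {"status": "ok", "url": args["url"]}

def dispatch(name, args):
    """Route a tool call by name; unknown tools fail loudly instead of silently."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](args)
```

Keeping dispatch server-side means every tool invocation passes the same gates and lands in the same audit chain.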
Hard-stop with state capture, E2E drill-tested. Safe self-improvement pipeline: propose → sandbox → verify → approve → merge, with git worktree isolation.
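The propose → sandbox → verify → approve → merge flow can be modeled as an ordered stage runner that halts before merge on any failure. The stage predicates below are illustrative stubs (the real sandbox stage runs tests in an isolated git worktree):

```python
def run_stages(change, stages):
    """Run a proposed change through ordered stages; any failure halts the pipeline."""
    trail = []
    for name, step in stages:
        ok = step(change)
        trail.append((name, ok))
        if not ok:
            return False, trail
    return True, trail

STAGES = [
    ("propose", lambda c: bool(c.get("diff"))),
    ("sandbox", lambda c: c.get("tests_pass", False)),
    ("verify",  lambda c: c.get("policy_ok", False)),
    ("approve", lambda c: c.get("human_approved", False)),
    ("merge",   lambda c: True),  # real version merges the isolated worktree
]
```

The returned trail records exactly which stage blocked a change, which is what an audit log needs.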
OCCP supports EU AI Act requirements: record-keeping (Art. 12), human oversight (Art. 14), audit trails, and log retention (Art. 19). Not legal advice; verify compliance for your deployment.
Drop OCCP into any Python async stack. The pipeline handles policy enforcement, failover, and audit logging automatically — you just define the task.
from occp import Pipeline, PolicyEngine
from occp.planners import ClaudePlanner

# Initialize with policy enforcement
pipeline = Pipeline(
    planner=ClaudePlanner(api_key="..."),
    policy_engine=PolicyEngine(
        pii_guard=True,
        injection_defense=True,
    ),
)

# Run task through the Verified Autonomy Pipeline
result = await pipeline.run(task)

# Every step: verified, logged, auditable
print(result.audit_chain)    # SHA-256 provenance
print(result.policy_report)  # Gate decisions
OCCP core is MIT-licensed and free forever. The full Verified Autonomy Pipeline, policy engine, audit logging, and multi-LLM failover are open source. llms.txt is available at occp.ai/llms.txt for AI discoverability. Enterprise Edition adds SSO, advanced analytics, SLA-backed support, and on-premise deployment.