Executive Function as a Service

Stop babysitting your AI.
OpenKoi thinks.

AI coding tools generate a first draft and leave you to fix it. OpenKoi deliberates through a Sovereign-Parliament cognitive stack — iterating until quality thresholds are met.
Single Rust binary. Zero dependencies. Any model.

Install OpenKoi →
$ curl -fsSL https://openkoi.ai/install.sh | sh
⭐ Star on GitHub
openkoi
$ openkoi think "refactor auth module to use JWT"
 
SOVEREIGN   Values: security-first, concise code, full test coverage
PARLIAMENT  Guardian APPROVE · Scholar flags missing refresh logic
EXEC        Rewriting token.rs, middleware.rs, handlers.rs
EVAL        correctness=9.2 safety=9.5 style=8.8
REFN        Style below 9.0 — tightening error types

EVAL        Pass 2: correctness=9.4 safety=9.5 style=9.3
LEARNED     Pattern: "JWT auth setup" · Confidence: 0.5 → 0.65
✓ Done — 4 files changed, 47 insertions, 89 deletions.

The status quo

Current AI coding tools generate output and stop. You become the reviewer, the debugger, the QA team — on top of being the developer.

🔄

Manual iteration

You re-prompt corrections 3–5 times per task. The AI generates; you iterate.

💸

Wasted tokens

You pay for rounds of manual back-and-forth that a system should handle automatically.

🧠

Lost learnings

Patterns from today's work vanish tomorrow. You re-teach the same corrections every session.

OpenKoi thinks before it acts

Instead of generating and stopping, OpenKoi runs a full cognitive pipeline. A Sovereign directive guides deliberation through a Parliament of agencies — then executes, evaluates, and learns from every outcome.

1. Sovereign
2. Parliament
3. Execute
4. Evaluate
5. Learn
(loop)
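The evaluate-refine loop at the heart of this pipeline can be sketched in Rust. This is an illustrative sketch only — the struct, function names, scores, and thresholds below are hypothetical, not OpenKoi's actual internals:

```rust
// Illustrative sketch of an evaluate-refine loop. All names, scores,
// and thresholds are hypothetical, not the real OpenKoi API.

#[derive(Clone, Copy, Debug)]
struct Scores {
    correctness: f32,
    safety: f32,
    style: f32,
}

impl Scores {
    /// Every rubric dimension must clear the quality threshold.
    fn meets(&self, threshold: f32) -> bool {
        self.correctness >= threshold && self.safety >= threshold && self.style >= threshold
    }
}

/// Stand-in for a real refinement pass: nudge any failing dimension upward.
fn refine(s: Scores, threshold: f32) -> Scores {
    let bump = |v: f32| if v < threshold { v + 0.5 } else { v };
    Scores {
        correctness: bump(s.correctness),
        safety: bump(s.safety),
        style: bump(s.style),
    }
}

/// Iterate until every score clears the threshold or passes run out.
fn deliberate(mut scores: Scores, threshold: f32, max_passes: u32) -> (Scores, u32) {
    let mut passes = 1;
    while !scores.meets(threshold) && passes < max_passes {
        scores = refine(scores, threshold);
        passes += 1;
    }
    (scores, passes)
}

fn main() {
    // Pass 1 from the demo above: style 8.8 sits below a 9.0 threshold,
    // so the loop runs a second pass instead of stopping at the first draft.
    let draft = Scores { correctness: 9.2, safety: 9.5, style: 8.8 };
    let (fin, passes) = deliberate(draft, 9.0, 5);
    println!("passes={passes} final={fin:?}");
}
```

In the real pipeline a model call would replace `refine` and rubric output would replace the hard-coded `Scores`; the control flow is the point — generation stops only when every dimension clears the bar.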

CLI Native

<10ms startup. ~5MB memory. Single static binary. Pipe stdin, get structured output.

{}

Model Agnostic

Claude, GPT, Gemini, Bedrock, Ollama, or local models. Switch with a flag. No vendor lock-in.

.rs

Rust Core

No Python. No Node. No runtime dependencies. Just a single binary that finds your API keys automatically.

Built for

think

Cognitive CLI

Six cognitive commands — think, soul, mind, world, reflect, trust — expose the full deliberation pipeline.

MCP

Three-Tier Plugins

MCP tool servers, sandboxed WASM modules, and Rhai scripts. Full hook system.

*.skill

OpenClaw Skills

Compatible with the OpenClaw Skill system. Use existing .SKILL.md files.

What changes

From babysitting your AI agent to shipping code that's already been deliberated.

You manually review every AI output → OpenKoi evaluates its own work against rubrics
No idea how the AI decided → Sovereign directive + Parliament deliberation visible on every task
You re-prompt corrections 3–5 times → Automatic iteration — stops when quality threshold is met
Learnings vanish between sessions → Patterns persist locally; skills improve over time
Locked to one provider → Switch with a flag; different models per role
Data on someone else's cloud → Everything stays on your machine

Works with

AI Providers

Anthropic · OpenAI · Google · AWS Bedrock · Ollama · OpenRouter · Groq · Together · DeepSeek · xAI · Qwen · MiniMax

Integrations

Slack · Discord · Telegram · Teams · Notion · Google Docs · Email · iMessage

Extensibility

MCP Servers · WASM Plugins · Rhai Scripts · OpenClaw Skills
quickstart
# 1. Install
$ curl -fsSL https://openkoi.ai/install.sh | sh
 
# 2. Think — API keys are detected automatically
$ openkoi think "refactor auth module to use JWT"
 
# 3. Ship — it deliberates, executes, and learns
$ openkoi status --live

Three steps to ship

Install with one command. Think by describing what you want — OpenKoi discovers your API keys from environment variables, CLI tools, and keychains automatically. Ship code that's already been through the cognitive pipeline.

Install OpenKoi →
CLI Reference