Consensus before commit.

Every AI model has blind spots. Conclave runs multiple AI CLIs in parallel and surfaces where they agree. When 2 out of 3 models flag the same issue, it's probably real. When only one does, it might be noise.

Supports Claude, Codex, Gemini, Qwen, Mistral, Ollama, and Grok.
One command. Multiple perspectives. Clear signal.
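The fan-out itself is ordinary shell job control. A minimal sketch of the idea, with a stub function standing in for the real CLI calls (the `review` function and `/tmp` paths are illustrative, not Conclave's internals):

```shell
#!/bin/sh
# Stub standing in for a real CLI invocation (e.g. a model reviewing a diff).
review() {
  echo "$1: review complete"
}

# Fan out: one background job per model, then block until all of them finish.
for model in claude codex gemini; do
  review "$model" > "/tmp/demo-review-$model.txt" &
done
wait

# Each model's output now sits in its own file, ready to compare.
cat /tmp/demo-review-claude.txt /tmp/demo-review-codex.txt /tmp/demo-review-gemini.txt
```

Real CLIs would replace the stub; `wait` is what keeps the single command synchronous while the reviews run concurrently.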
Terminal — ~/dev/my-project
/review

⊛ conclave — reviewing staged changes

Claude reviewing...
Codex reviewing...
Gemini reviewing...
Qwen reviewing...
Mistral reviewing...
Ollama reviewing...

✓ All reviews complete

2/3 consensus: Race condition in fetchData()
Claude and Codex both flagged this. Fix it? (y/n)
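The consensus step above boils down to counting how many models report the same finding. A jq-free sketch with hard-coded findings (the one-issue-per-line format is made up for illustration; Conclave's real output differs):

```shell
#!/bin/sh
# Hypothetical findings, one per line, per model (illustrative data only).
claude="race condition in fetchData()
missing null check in parse()"
codex="race condition in fetchData()"
gemini="unused variable in render()"

# Any finding reported by 2+ models counts as consensus: pool the lines,
# count duplicates with uniq -c, keep rows whose count is at least 2.
consensus=$(printf '%s\n%s\n%s\n' "$claude" "$codex" "$gemini" \
  | sort | uniq -c | awk '$1 >= 2 { $1 = ""; sub(/^ /, ""); print }')
echo "$consensus"
```

Exact-string matching is the crudest possible grouping; the real value is in normalizing findings well enough that two models' phrasings of the same bug collide.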
Build your own commands
~/.claude/commands/my-command.md
# Write your prompt to a temp file
cat > /tmp/prompt.md << 'EOF'
Analyze this and suggest improvements.
$ARGUMENTS
EOF

# Run all configured models in parallel
bash ~/.claude/scripts/conclave-run.sh \
--scope my-command \
--prompt /tmp/prompt.md

# Returns JSON with each model's output
# → {"tools_run": ["codex", "gemini"], "results": {...}}

The engine is a shell script. The commands are markdown files. Configure which models to use, write a prompt, call the script. That's it.
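Downstream steps can pick that JSON apart however they like. A rough jq-free sketch that pulls the tool names out of `tools_run`, run against a hand-written sample matching the shape shown above (real output presumably carries fuller per-model results):

```shell
#!/bin/sh
# Sample payload shaped like the comment above (hand-written, not real output).
json='{"tools_run": ["codex", "gemini"], "results": {"codex": "ok", "gemini": "ok"}}'

# Crude extraction: isolate the tools_run array, drop the key and JSON
# punctuation, then put each tool name on its own line.
tools=$(printf '%s' "$json" \
  | grep -o '"tools_run": \[[^]]*\]' \
  | sed 's/"tools_run": //' \
  | tr -d '[]" ' \
  | tr ',' '\n')
echo "$tools"
```

If jq is available it is the better tool for this; the point is only that the contract between the script and your command is a single JSON blob on stdout.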