MegaLens + Gemini CLI
Gemini writes. MegaLens reviews. Gemini decides what to apply.
How it works
Gemini CLI is the CEO. MegaLens is the review board. The CEO writes the code, sends it for review, gets a structured verdict back, and decides what to act on. The review board never touches the codebase directly.
Gemini CLI talks to MegaLens over MCP
Your Gemini CLI session calls the megalens_debate tool. Code and context are sent to the review pipeline.
Specialist engines debate
Multiple AI models from different families review independently, then cross-examine in a second round.
GPT 5.4 judges
A single judge reads the structured debate output, fills gaps the debaters missed, and produces the final verdict.
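The two debate rounds and the judging step above can be sketched as a toy pipeline. Everything here is illustrative: the engine names, function signatures, and verdict shape are stand-ins, not the real MegaLens internals.

```python
# Conceptual sketch of the debate pipeline; stand-in functions, not the MegaLens API.

def run_debate(code, engines, judge_fn):
    # Round 1: each engine reviews the code independently.
    round1 = {name: review(name, code) for name in engines}
    # Round 2: each engine sees the others' findings and cross-examines.
    round2 = {
        name: cross_examine(name, code, {k: v for k, v in round1.items() if k != name})
        for name in engines
    }
    # A single judge reads both rounds and produces the final verdict.
    return judge_fn(code, round1, round2)

# Toy stand-ins so the sketch runs; real engines would be model API calls.
def review(name, code):
    return f"{name}: initial findings"

def cross_examine(name, code, others):
    return f"{name}: rebuttal to {len(others)} peers"

def judge(code, round1, round2):
    return {
        "findings": list(round1.values()),
        "rebuttals": list(round2.values()),
        "verdict": "final",
    }

verdict = run_debate("def f(): pass", ["engine_a", "engine_b"], judge)
```

The point of the shape: round 2 depends on round 1's outputs, and the judge sees both, which is why the full pipeline costs more wall-clock time than a single-model review.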
Gemini receives verdict and decides
The structured findings, severity levels, and fix roadmap go back to Gemini CLI. Gemini decides what to apply, modify, or skip.
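As a rough picture, the payload Gemini CLI receives might look like the following. The field names and values here are hypothetical; the actual schema is defined by the megalens_debate tool.

```json
{
  "findings": [
    {
      "title": "Unvalidated user input reaches SQL query",
      "severity": "high",
      "file": "app/db.py",
      "suggested_fix": "Use parameterized queries"
    }
  ],
  "roadmap": [
    "Fix high-severity findings first",
    "Re-run review after changes"
  ]
}
```

Because the verdict is structured rather than free text, Gemini CLI can act on individual findings: apply one fix, modify another, skip the rest.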
Two-way relationship
Gemini 3.1 Pro is both your primary IDE model AND one of the 12 debater engines inside MegaLens. It sits on both sides of the table.
When Gemini CLI is installed locally, MegaLens routes debater calls through it instead of hitting the remote API. This saves API costs and reduces latency. Your local Gemini instance does double duty: it writes your code and participates in the review pipeline that checks it.
Setup
Quickest way: run the setup wizard. It detects Gemini CLI and writes the config for you.
npx megalens-mcp setup
Detects Gemini CLI, asks for your token, and writes the config below automatically. Or paste it yourself:
{
"mcpServers": {
"megalens": {
"httpUrl": "https://megalens.ai/api/mcp",
"headers": {
"Authorization": "Bearer ml_tok_your_token_here"
}
}
}
}
Get your token at megalens.ai/app/settings/mcp. Source: github.com/megalens/mcp.
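Once the config is saved (for Gemini CLI, MCP servers usually live in ~/.gemini/settings.json), you can confirm the wiring from inside a Gemini CLI session. Gemini CLI's /mcp slash command lists configured MCP servers and their tools; megalens and its megalens_debate tool should appear:

```
/mcp
```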
Exec-shift: model switching
MegaLens advises Gemini CLI on which model to use for each step via the --model flag.
Labor tasks
Mechanical, well-scoped fixes that don't need the full model.
gemini --model gemini-flash
After labor completes
Switch back to the full model for judgment and next steps.
gemini --model gemini-3.1-pro
Limitations
Transparency matters. Here is what the integration cannot do today.
Gemini has blind spots on multi-file architecture. Reviews that span many interconnected files may miss cross-cutting concerns that other engines catch.
Misclassification happens. The exec-shift system occasionally tags a judgment task as labor or vice versa. Conservative gating reduces this but does not eliminate it.
Exec-shift is advisory. MegaLens suggests model switches, but Gemini CLI decides whether to follow. The directive is a recommendation, not a command.
8 to 15 second response time. The full debate pipeline takes time. Quick single-model responses are faster. This is the tradeoff for multi-engine coverage.
Pricing
Free
$0
Bring your own API keys. Full review pipeline; you pay model providers directly.
Pro
$15/mo
Unlimited reviews. All engines included. No API keys needed.
Managed
$9 per million tokens
Pay per million tokens. We handle routing, you control spend.
Gemini writes. A review board checks.
One config entry. Multiple AI perspectives. Gemini stays in control.