MegaLens gives Claude Code a second opinion before you ship.
Multi-engine debate runs inside your terminal via MCP. One config block. No tab-switching.
How it works
Opus writes code. You invoke megalens_debate with a skill tag.
Specialist engines review the same code independently.
GPT 5.4 judges the debate.
The judge returns structured findings with line-level detail.
Claude Code receives the verdict.
It decides what to apply, reject, or investigate. Your primary IDE stays the CEO.
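At the protocol level, invoking megalens_debate is an ordinary MCP tools/call request. A sketch of the payload Claude Code would send (the argument names `skill` and `files` are assumptions, not documented MegaLens parameters):

```python
# Sketch of the MCP request behind a megalens_debate invocation.
# Argument names below are hypothetical, not documented MegaLens API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # standard MCP tool-invocation method
    "params": {
        "name": "megalens_debate",
        "arguments": {
            "skill": "security-review",        # hypothetical skill tag
            "files": ["src/auth/session.ts"],  # hypothetical review target
        },
    },
}
```

You never build this by hand; Claude Code constructs it when you invoke the tool.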
Token savings: exec-shift advisory
MegaLens can suggest splitting work by model weight: mechanical tasks go to a Haiku sub-agent while Opus handles architecture.
Rough numbers
A 2,000-token task costs ~$0.001 on Haiku vs ~$0.030 on Opus 4. On a 50-file refactor, that gap adds up.
MegaLens surfaces the advisory. Claude Code executes it. MegaLens cannot switch your primary model.
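The gap is easy to put a figure on (illustrative arithmetic only, using the quoted per-task costs):

```python
haiku_cost = 0.001   # ~$ per 2,000-token task on Haiku (quoted above)
opus_cost = 0.030    # ~$ per 2,000-token task on Opus 4 (quoted above)
files = 50           # the 50-file refactor from the example

savings = (opus_cost - haiku_cost) * files
print(f"${savings:.2f}")  # ≈ $1.45 saved by routing mechanical tasks down
```

Small in absolute terms for one refactor, but the ratio (roughly 30x per task) is what compounds across a day of work.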
Setup
Quickest way: run the setup wizard.

npx megalens-mcp setup

It detects Claude Code, asks for your token, and writes the config below automatically. Or paste it yourself:
Add to ~/.claude.json:
{
  "mcpServers": {
    "megalens": {
      "type": "http",
      "url": "https://megalens.ai/api/mcp",
      "headers": {
        "Authorization": "Bearer ml_tok_your_token_here"
      }
    }
  }
}

Get your token at megalens.ai/app/settings/mcp. Restart Claude Code, then run megalens_status to confirm. Source code: github.com/megalens/mcp.
What MegaLens catches that Claude Code misses
Security assumptions
Auth logic, token handling, and CORS rules that a single model accepts at face value. A second family of models often spots what the first assumes is safe.
Framework-specific anti-patterns
Next.js, Rails, Django, Laravel. Each framework has conventions that a generalist model may not flag. Specialist reviewers trained on framework-specific corpora catch these.
Contradictions across files
A type defined one way in the schema and used differently in the handler. Cross-file consistency checks are where multi-model review earns its keep.
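A minimal sketch of that failure mode, plus the kind of naive consistency check a reviewer performs mentally (file names and the check itself are illustrative, not MegaLens internals):

```python
# schema.py declares user_id as a string...
schema = {"user_id": str}

# ...but handler.py quietly assumes it is an int.
def handler(payload):
    return payload["user_id"] + 1  # TypeError once a real string arrives

def check_consistency(declared, sample):
    # Naive check: does each sample value match its declared type?
    return {k: isinstance(v, declared[k]) for k, v in sample.items()}

# The handler's implied type (int) contradicts the schema (str):
mismatches = check_consistency(schema, {"user_id": 42})
# -> {'user_id': False}
```

A single model reading one file at a time accepts each side; the contradiction only surfaces when both files are reviewed together.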
Overconfident rewrites
When Claude Code rewrites a function, it rarely questions its own output. An independent review from a different model family catches regressions the author misses.
Honest limitations
Advisory only. MegaLens does not merge, push, or run tests.
Exec-shift recommends model routing but cannot force it.
Debate quality depends on prompt specificity; precise skill tags get sharper reviews than vague ones.
Response time is 8-15 seconds. Not designed for autocomplete.
Pricing
Free
$0/mo
BYOK (bring your own keys)
- ✓ 5 reviews/day
- ✓ 30/month fair-use cap
- ✓ 2 engines + GPT 5.4 judge
Pro
$15/mo
Unlimited reviews
- ✓ Unlimited reviews
- ✓ Up to 8 engines
- ✓ REST API access
Managed Credits
$9/M tokens
Blended rate, from $20 prepaid
- ✓ No API keys needed
- ✓ Pay only for what you use
- ✓ All engines included
Your code deserves more than one perspective.
Start your first multi-engine review inside Claude Code. Free, no credit card.