
MegaLens + Codex CLI

Multi-engine code review with actual model-switch commands.

How it works

1. Write code in Codex CLI

Use Codex CLI as your primary coding environment. Write, edit, and iterate as you normally would.

2. Trigger megalens_debate

Call the megalens_debate MCP tool from within Codex CLI. Your code and prompt are sent to the MegaLens review pipeline.

3. Specialist engines debate in 2 rounds

The Standard tier runs three debaters. Each reviews independently in round one, then cross-examines the others' findings in round two.

4. GPT 5.4 acts as sole judge

A single judge model reads the structured debate output, fills gaps, and produces a final verdict.

5. Codex CLI receives structured verdict with exec-shift directive

The verdict includes findings, severity levels, and a fix roadmap. The exec-shift directive tells Codex which model to use for each step.

6. MegaLens directs model switches via codex set-model

For mechanical steps, MegaLens advises switching to a cheaper model. For judgment calls, it advises staying on your primary model.
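The flow above can be sketched in miniature. This is a hypothetical illustration, not the documented schema: the field names (`findings`, `fix_roadmap`, `exec_shift`) and the model name `gpt-mini` are assumptions chosen for the example.

```python
# Hypothetical shape of a MegaLens verdict. Field names and the model
# name "gpt-mini" are illustrative assumptions, not the real schema.
verdict = {
    "findings": [
        {"id": "F1", "severity": "high", "summary": "Unvalidated input in query builder"},
    ],
    "fix_roadmap": [
        # Judgment call: exec_shift is None, so stay on the primary model.
        {"step": 1, "task": "Parameterize the SQL query", "exec_shift": None},
        # Mechanical rename: directive says a cheaper model will do.
        {"step": 2, "task": "Rename helper across files", "exec_shift": "gpt-mini"},
    ],
}

def shift_commands(verdict):
    """Translate exec-shift directives into `codex set-model` invocations."""
    return [
        f"codex set-model {step['exec_shift']}"
        for step in verdict["fix_roadmap"]
        if step["exec_shift"]
    ]

print(shift_commands(verdict))  # ['codex set-model gpt-mini']
```

Steps without a directive produce no command, which matches the behavior described above: judgment calls stay on your primary model.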

Setup

Quickest way: run the setup wizard.

npx megalens-mcp setup

The wizard detects Codex CLI, asks for your token, and writes the config below automatically. Or paste it yourself:

~/.codex/config.toml

[mcp_servers.megalens]
url = "https://megalens.ai/api/mcp"
http_headers = { "Authorization" = "Bearer ml_tok_your_token_here" }

Get your token at megalens.ai/app/settings/mcp. Source: github.com/megalens/mcp.

Preference modes

Control how MegaLens handles exec-shift directives. You decide how much autonomy the review system gets.

Ask (default)

MegaLens suggests a model switch. Codex CLI prompts you before executing. You approve or skip.

Auto

MegaLens directs model switches automatically. No confirmation prompt. Best for trusted, repetitive workflows.

Never

MegaLens never attempts model switches. You still get the full review verdict, but exec-shift directives are suppressed.
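If the mode is settable from the config file (an assumption; the section above only names the three modes), it would plausibly be one more key in the same server block. The `preference_mode` field name below is hypothetical:

```toml
[mcp_servers.megalens]
url = "https://megalens.ai/api/mcp"
http_headers = { "Authorization" = "Bearer ml_tok_your_token_here" }
# Hypothetical key, not documented above: "ask" | "auto" | "never"
preference_mode = "ask"
```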

Two-way relationship

MegaLens reviews your code AND uses Codex as a local engine. This is not a one-directional plugin.

GPT 5.4 judge calls route through local Codex when available. This means lower latency for judge operations and reduced API costs. Codex CLI is both the consumer and a provider in the MegaLens pipeline.
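The routing rule described above reduces to a simple preference with a fallback. A minimal sketch, assuming the pipeline can detect a local Codex engine at call time (function and engine names are illustrative, not real API):

```python
def route_judge_call(prompt, local_codex_available):
    """Prefer the local Codex engine for judge calls when it is present;
    fall back to the hosted MegaLens API otherwise. Names are illustrative."""
    if local_codex_available:
        # Local path: lower latency, no hosted API cost.
        return ("local-codex", prompt)
    # Fallback: route the judge call through the hosted pipeline.
    return ("megalens-api", prompt)
```

The design point is that the consumer of reviews (Codex CLI) doubles as a provider of judge capacity when it is running locally.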

Limitations

Transparency matters. Here is what the integration cannot do today.

1. Cannot verify that a model switch was actually executed. MegaLens sends the directive, but Codex CLI controls execution.

2. Per-step revert is recommended but not enforced. If a cheaper model introduces a regression, you need to catch it yourself.

3. Conservative eligibility gate. Only narrow, well-scoped mechanical tasks qualify for exec-shift. Ambiguous changes stay on your primary model.

4. Best on audit-plus-apply sessions. If you are only writing code without review cycles, the integration has nothing to trigger on.

Pricing

Free

$0 (BYOK)

Bring your own API keys. Full review pipeline, you pay model providers directly.

Pro

$15/mo

Unlimited reviews. All engines included. No API keys needed.

Managed Credits

$9 per 1M tokens

Pay per million tokens. We handle routing, you control spend.

Multi-engine review inside Codex CLI.

One config block. Multiple AI perspectives. Model switches that actually happen.