Overview
The ICRL CLI is an interactive, tool-calling coding assistant that lives in your terminal. If you’ve used Claude Code or OpenAI Codex, you’ll feel right at home — the core experience is the same: type a task in natural language, and the agent reads files, writes code, runs commands, and iterates until the job is done. The key difference is that ICRL gets better the more you use it. Every successful run is stored as a trajectory. On future tasks, the agent retrieves similar past trajectories and uses them as in-context examples, producing better plans and fewer mistakes over time. It’s a coding agent that learns your codebase.

icrl chat — Interactive Mode
icrl chat is the primary way to use ICRL. It launches a multi-turn terminal UI where you can have an ongoing conversation with the agent.
What the Agent Can Do
Just like Claude Code, the ICRL chat agent has full access to your local environment:
- Read, write, and edit files in your working directory
- Run shell commands — git, python, npm, cargo, anything in your PATH
- Search your codebase with glob patterns and regex
- Search the web for documentation or solutions
- Fetch web pages and parse their content
- Ask you questions when the task is ambiguous
What Makes It Different from Claude Code
The agent works like Claude Code on any individual task. The difference is what happens between tasks:
- After a successful run, ICRL asks whether to store the trajectory (the full sequence of reasoning, actions, and observations).
- On future tasks, the agent retrieves similar past trajectories using semantic search and includes them as in-context examples.
- Over time, a curation system prunes low-utility trajectories so the example set stays high quality.
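The retrieval step can be pictured as a small embedding similarity search over the stored trajectories. The sketch below is illustrative only; the field names (`embedding`, `id`) and the top-k cosine scoring are assumptions for exposition, not ICRL's actual implementation:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_examples(task_embedding, trajectories, k=3):
    """Return the k stored trajectories most similar to the new task.

    `trajectories` is a list of dicts with hypothetical fields:
    an 'embedding' vector for the original task and an 'id'.
    """
    scored = sorted(
        trajectories,
        key=lambda t: cosine(task_embedding, t["embedding"]),
        reverse=True,
    )
    return scored[:k]

# Toy example: the tasks closest in embedding space are retrieved first.
db = [
    {"id": "add-endpoint", "embedding": [1.0, 0.0]},
    {"id": "fix-tests",    "embedding": [0.0, 1.0]},
    {"id": "add-route",    "embedding": [0.9, 0.1]},
]
top = retrieve_examples([1.0, 0.05], db, k=2)
print([t["id"] for t in top])  # → ['add-endpoint', 'add-route']
```

The retrieved trajectories are then prepended to the agent's context as worked examples, which is where the "in-context learning" in ICRL comes from.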
Example Session
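An illustrative first session (the task, file names, and agent output shown here are hypothetical):

```shell
$ icrl chat
> Add a POST /api/users endpoint with Zod validation
  · Reading src/routes/ ...
  · Writing src/routes/users.ts
  · Running npm test
  Done. Store this trajectory? [y/N] y
```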
On a later, related task, the agent retrieves the stored /api/users trajectory and uses it as a reference: it already knows your project uses Zod, how your validation utilities are structured, and your test patterns.
Multi-Turn Conversations
The session maintains full conversation history across turns. You can ask follow-up questions, request changes to what the agent just did, or start entirely new tasks — all within the same session.
- Type `/clear` to reset the conversation and start fresh
- Type `exit`, `quit`, or `q` to end the session
Options
| Flag | Description | Default |
|---|---|---|
| `-m, --model TEXT` | Override the LLM model | `claude-opus-4-5` |
| `-d, --dir PATH` | Set the working directory | Current directory |
| `--compare` | Generate two candidate strategies and choose which to store | Off |
| `--stats / --no-stats` | Show latency, token usage, and cache statistics | `--stats` |
| `-y, --auto-approve / --no-auto-approve` | Auto-approve file writes without confirmation | `--auto-approve` |
Compare Mode
Compare mode (--compare) is useful when you want to explore different approaches to a task:
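For example (the flag is documented above; everything else about the session is as usual):

```shell
# Each task produces two candidate strategies; after reviewing both,
# you choose which one is stored as the trajectory.
icrl chat --compare
```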
icrl run — Single-Task Mode
If you just need to fire off a one-shot task without an interactive session:
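A minimal sketch, assuming the task is passed as a positional argument (the task text and file path here are illustrative):

```shell
# One-shot task in the current repo; exits when the task completes
icrl run "Fix the failing test in tests/test_auth.py"
```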
icrl run accepts the same options as chat, plus a few extras:
| Flag | Description |
|---|---|
| `--no-train` | Don’t store the trajectory even if the run succeeds |
| `--ablate` | Run with and without retrieved examples and print a comparison |
| `-v, --verbose` | Verbose output |
| `--vertex-credentials PATH` | Path to Vertex AI credentials file |
| `--vertex-project TEXT` | Google Cloud project ID for Vertex AI |
| `--vertex-location TEXT` | Google Cloud region for Vertex AI |
Configuration
View Configuration
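A sketch of inspecting the current settings, assuming a `config` subcommand of this shape (the exact invocation may differ):

```shell
# Print all configuration keys and their current values
icrl config
```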
Set a Value
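Assuming a `config set` subcommand (key names are those documented in the table below; the invocation shape is an assumption):

```shell
icrl config set k 5
icrl config set max_steps 100
```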
| Key | Description | Default |
|---|---|---|
| `model` | LLM model identifier | `claude-opus-4-5` |
| `temperature` | Sampling temperature | `1.0` |
| `max_tokens` | Max tokens per response | `16384` |
| `max_steps` | Max tool-call steps per turn | `200` |
| `k` | Number of examples to retrieve | `3` |
| `context_compression_threshold` | Token threshold for context compression | `80000` |
| `show_stats` | Show performance statistics | `true` |
| `auto_approve` | Auto-approve file operations | `true` |
| `db_path` | Custom trajectory database path | Project-local |
| `vertex_credentials_path` | Path to Vertex AI credentials | — |
| `vertex_project_id` | Google Cloud project for Vertex AI | — |
| `vertex_location` | Google Cloud region for Vertex AI | — |
Reset Configuration
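Assuming a `config reset` subcommand (an illustrative sketch; the exact invocation may differ):

```shell
# Restore all keys to their defaults
icrl config reset
```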
Trajectory Database
Every project gets its own trajectory database at `<working_dir>/.icrl/trajectories`. Use the `--global` flag on any `db` command to target the global fallback database instead.
Inspect
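Illustrative inspection commands (the subcommand names are assumptions; only the `db` command group and its flags are documented here):

```shell
# List stored trajectories for this project
icrl db list
```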
Manage
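Illustrative management commands (subcommand names are assumptions; `--global` is documented below):

```shell
# Remove a stored trajectory, or clear the global database
icrl db delete <trajectory-id>
icrl db clear --global
```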
`db` commands accept `--dir PATH` to specify the project directory and `--global` to use the global database.
Provider Selection
The CLI auto-detects which LLM provider to use based on the model string:
- If the model matches a Vertex AI alias or pattern, the CLI uses the Anthropic Vertex provider
- Otherwise, it uses the generic LiteLLM provider (supports OpenAI, Anthropic, and many others)
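The dispatch described above amounts to a pattern check on the model string. The alias set and patterns below are hypothetical stand-ins; ICRL's real alias list and provider identifiers are internal:

```python
import re

# Hypothetical alias set and patterns; ICRL's actual list is internal.
VERTEX_ALIASES = {"claude-opus-4-5"}
VERTEX_PATTERNS = [
    r"^claude-.*@\d{8}$",  # e.g. date-versioned Vertex model IDs
    r"^vertex/",           # explicit vertex/ prefix
]

def select_provider(model: str) -> str:
    """Route a model string to a provider name."""
    if model in VERTEX_ALIASES or any(re.match(p, model) for p in VERTEX_PATTERNS):
        return "anthropic-vertex"
    return "litellm"  # generic fallback: OpenAI, Anthropic, and many others

print(select_provider("claude-opus-4-5"))  # → anthropic-vertex
print(select_provider("gpt-4o"))           # → litellm
```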
The default model is `claude-opus-4-5`. To use a different model:
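Either override per invocation with the documented `-m` flag, or persist the choice (the model string here is illustrative, and the `config set` invocation shape is an assumption):

```shell
# Per-invocation override
icrl chat -m gpt-4o

# Persist across sessions
icrl config set model gpt-4o
```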

