# REPL Interface
The Bilge REPL is the primary way to interact with the coding copilot. It provides an interactive terminal session where you can ask the LLM to read, write, edit, and search your codebase.
## Starting the REPL
### With Ollama (Local Models)
```julia
using Bilge
bilge(ollama=true, model="qwen3")
```

### With OpenAI
```julia
using Bilge
bilge()  # Reads OPENAI_API_KEY from the environment
```

### With Custom API
```julia
using Bilge
bilge(api_key="your-key", base_url="https://api.example.com/v1", model="your-model")
```

### Full Parameter List
```julia
bilge(;
    api_key = nothing,                        # OpenAI API key (or ENV["OPENAI_API_KEY"])
    model = nothing,                          # Model name (default: "gpt-4o" or "llama3.1")
    base_url = "https://api.openai.com/v1",   # API base URL
    ollama = false,                           # Use Ollama backend
    host = "http://localhost:11434",          # Ollama host
    use_openai_compat = false,                # Use Ollama's OpenAI-compatible endpoint
    working_dir = pwd()                       # Working directory for tools
)
```
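For example, to run against a local Ollama server through its OpenAI-compatible endpoint, with tools rooted in a specific project, a call might look like this (the model name and path are illustrative):

```julia
using Bilge

# All keywords below come from the parameter list above; values are examples.
bilge(
    ollama = true,
    use_openai_compat = true,
    host = "http://localhost:11434",
    model = "qwen3",
    working_dir = "/path/to/project",
)
```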
## REPL Commands

All commands start with `/`:
| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/exit`, `/quit` | Exit Bilge |
| `/clear` | Clear conversation history and start fresh |
| `/history` | Show a summary of the conversation |
| `/tokens` | Show cumulative token usage (input and output) |
| `/cd PATH` | Change the working directory for all tools |
### /clear
Resets the conversation history and turn count. The LLM will have no memory of previous exchanges. Useful when switching topics or starting a new task.
### /history
Displays a condensed summary of the conversation:
- User messages — Shows first 80 characters of each message
- Assistant messages — Shows tool calls or text preview
- Tool results — Omitted for brevity
### /tokens

Shows cumulative token usage across all turns:

```
Total tokens: 15420 in / 3280 out
Turns: 5
```

### /cd PATH
Changes the working directory for all tools. The system prompt and tool paths are updated automatically:
```
bilge> /cd /path/to/other/project
Working directory: /path/to/other/project
```

## Multi-Line Input
Use a trailing `\` to continue input on the next line:
```
bilge> Write a function that \
  ...> takes a filename and \
  ...> returns the line count.
```

The continuation prompt `...>` appears automatically. Lines are joined with newlines before being sent to the LLM.
## How It Works
Each time you send a message, Bilge follows this process:
1. The user message is added to the conversation history
2. The full context (system prompt + conversation history) is sent to the LLM
3. The LLM may request tool calls (read a file, run a command, etc.)
4. Bilge executes each tool and feeds results back to the LLM
5. Steps 3-4 repeat until the LLM produces a final text response (up to 50 rounds)
6. The response is displayed along with a summary of tool executions
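In code, this loop is roughly the following sketch. This is not Bilge's actual implementation; `send_to_llm` and `execute_tool` are hypothetical stand-ins for the backend request and the tool dispatcher:

```julia
# Minimal sketch of the agent loop above; send_to_llm and execute_tool
# are hypothetical placeholders, not Bilge's real internals.
function run_turn!(history, user_input; max_rounds = 50)
    push!(history, (role = :user, content = user_input))    # step 1
    for _ in 1:max_rounds
        response = send_to_llm(history)                     # step 2: full context
        push!(history, (role = :assistant, content = response))
        if isempty(response.tool_calls)                     # no tools requested:
            return response.text                            # final text response
        end
        for call in response.tool_calls                     # steps 3-4
            result = execute_tool(call)
            push!(history, (role = :tool, content = result))
        end
    end
    return "Stopped after $max_rounds rounds without a final response."
end
```

The `max_rounds` cap corresponds to the 50-round limit in step 5.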
### Tool Execution Summary
When tools are used, Bilge displays a summary:
```
⚙ glob_files **/*.jl (12ms)
⚙ read_file src/Bilge.jl (3ms)
⚙ read_file src/agent.jl (2ms)

This project is a Julia package called Bilge.jl...

[tokens: 2450 in / 180 out]
```

## Tips
The LLM is instructed to read files before making edits. This ensures it understands the existing code and makes accurate modifications.
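In practice this shows up in the tool execution summary: a read of the target file precedes the write. A typical edit turn's summary might look like this (tool names and timings are illustrative):

```
⚙ read_file src/utils.jl (2ms)
⚙ edit_file src/utils.jl (1ms)
```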