The AI coding agent built for the terminal.
To use PersistenceAI, you'll need to run the install script:
curl -fsSL https://persistence-ai.github.io/Landing/install.sh | bash
Full-featured Terminal IDE with AI Chat, File Explorer, and Code Editor
💡 All features run directly in your terminal - no browser required
Install PersistenceAI with a single command
curl -fsSL https://persistence-ai.github.io/Landing/install.sh | bash
Set up your LLM provider API keys
# In TUI, run:
/connect
# Or via CLI:
persistenceai auth login
Start coding with AI assistance
cd /path/to/project
persistenceai
# Ask anything!
Get full IDE features—file explorer, code editor, and LSP support—all in your terminal. No GUI overhead; works over SSH and on headless machines.
Run multiple AI conversations in parallel. Switch between sessions like tmux. Each session has its own workspace.
Use any LLM provider—Claude, GPT-4, Gemini, or local models via Ollama. No vendor lock-in. Switch providers per task.
10x lighter than VS Code. Runs on low-end machines, cloud instances, Docker containers, and headless servers. Works in CI/CD pipelines.
Clear documentation, familiar keybindings, and intuitive interface. Easy to learn, powerful to use.
Join our active Discord community or reach out to support—fast, friendly, and AI-savvy.
Work on remote servers via SSH without heavy GUI tools.
# SSH into remote server
ssh user@remote-server
cd /path/to/project
persistenceai
# Full IDE features over SSH
Run multiple AI conversations in parallel for different tasks.
# Session 1: Frontend refactoring
# Session 2: Backend API design
# Session 3: Database optimization
# Switch with <leader>L (Ctrl+X then L)
# Create new with <leader>N (Ctrl+X then N)
Integrate AI assistance directly into GitHub workflows.
# In GitHub issue or PR:
/persistenceai fix this
/persistenceai explain this issue
# Runs in GitHub Actions
Run on cloud instances, Docker containers, and headless servers.
# Start server mode
persistenceai serve
# Access via API or attach client
persistenceai run --attach http://server:4096
Get AI-powered code reviews and explanations.
# Review specific file
Review @packages/api/src/auth.ts for security issues
# Explain complex function
How does @utils/encryption.ts work?
Use any LLM provider—OpenAI, Anthropic, local models, or custom APIs.
# Switch providers on the fly
/connect
# Use local Ollama models
persistenceai run --model ollama/llama3
PersistenceAI combines the power of AI coding assistants with a native terminal interface. Get full IDE features—file explorer, code editor, and LSP support—all in your terminal.
PersistenceAI includes three built-in primary agents you can switch between using the Tab key. Each agent has different capabilities and approaches to problem-solving.
LSP support brings IDE-like features to the terminal. Get go-to-definition, real-time diagnostics, and hover documentation.
Run multiple AI conversations in parallel. Each session has its own workspace. Switch between sessions like tmux. Share sessions via links.
PersistenceAI includes powerful subagents and customization options. Create specialized agents for specific tasks or invoke them with @ mentions.
Powerful features to accelerate your workflow. File references, bash commands, themes, and more.
The easiest way to install PersistenceAI is through the install script. You can also use your favorite package manager; example commands follow the list below.
Quick install for Linux and macOS
For Windows, use PowerShell: iwr -useb https://persistence-ai.github.io/Landing/install.ps1 | iex
Install via npm
Install via Bun
Install via pnpm or Yarn
Install via Homebrew (macOS and Linux)
Install via Paru
Install via Chocolatey
Install via Scoop
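The commands below are a sketch of what each package-manager install typically looks like; the package name persistenceai is an assumption, so check the repository for the published package names.
# Package name is assumed; verify against the PersistenceAI repository
npm install -g persistenceai
bun add -g persistenceai
pnpm add -g persistenceai    # or: yarn global add persistenceai
brew install persistenceai   # macOS and Linux
paru -S persistenceai        # Arch Linux (AUR)
choco install persistenceai  # Windows
scoop install persistenceai  # Windows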
PersistenceAI is highly configurable. Customize tools, agents, models, themes, keybinds, formatters, permissions, and more through your PersistenceAI.json config file.
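For orientation, here is a minimal PersistenceAI.json sketch that combines a few of the options covered below; the values are illustrative, and each key is explained in its own section.
{
  "model": "anthropic/claude-sonnet-4",
  "autoupdate": true,
  "tools": {
    "webfetch": false
  },
  "permission": {
    "edit": "ask"
  }
}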
Configure API keys for any LLM provider. PersistenceAI supports Claude, OpenAI, Google, and local models via Ollama.
# Run in TUI
/connect
# Or via CLI
persistenceai auth login
# Or use environment variables
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
Control which tools the LLM can use. Configure globally or per-agent. Built-in tools include:
Execute shell commands
Modify existing files
Create new files
Read file contents
Search with regex
Find files by pattern
Fetch web content
Apply patches
# Configure tools globally
{
"tools": {
"write": true,
"bash": true,
"webfetch": false
},
"agent": {
"plan": {
"tools": {
"write": false,
"bash": false
}
}
}
}
Provide custom instructions to PersistenceAI by creating an AGENTS.md file. This helps PersistenceAI understand your project structure and coding patterns.
# Initialize AGENTS.md
/init
# Or create manually
# Project-specific: ./AGENTS.md
# Global: ~/.config/PersistenceAI/AGENTS.md
PersistenceAI automatically formats code using language-specific formatters. Supports Prettier, Biome, Ruff, gofmt, and many more.
{
"formatter": {
"prettier": {
"disabled": false
},
"custom-formatter": {
"command": ["npx", "prettier", "--write", "$FILE"],
"extensions": [".js", ".ts"]
}
}
}
Control what actions agents can take. Set permissions to ask, allow, or deny for file edits, bash commands, and web fetching.
{
"permission": {
"edit": "ask",
"bash": {
"git push": "ask",
"*": "allow"
}
}
}
Extend PersistenceAI with Model Context Protocol servers. Add database access, API integrations, and third-party services.
Out-of-the-box enabled MCPs:
Remote web search via Exa AI for current information
Local code search and transformation tool
Documentation search for official library docs
Code search across millions of GitHub repositories
{
"mcp": {
"my-mcp-server": {
"type": "local",
"command": ["npx", "-y", "@modelcontextprotocol/server-everything"],
"enabled": true
}
}
}
Configure Language Server Protocol servers for IDE-like features. PersistenceAI automatically downloads and manages LSP servers.
{
"lsp": {
"typescript": {
"command": ["typescript-language-server", "--stdio"]
}
}
}
Define your own tools that the LLM can call. Create custom functions for project-specific workflows.
{
"customTools": {
"deploy": {
"description": "Deploy the application",
"command": ["npm", "run", "deploy"]
}
}
}
PersistenceAI supports multiple config file locations, which are merged in order of precedence rather than replaced.
# Global config (applies to all projects)
~/.config/PersistenceAI/PersistenceAI.json
# Project-specific config (overrides global)
./PersistenceAI.json
# Custom config path via environment variable
export PersistenceAI_CONFIG=/path/to/config.json
Create custom modes with specific temperature, prompts, and tool configurations for different workflows.
{
"mode": {
"analyze": {
"temperature": 0.1,
"prompt": "{file:./prompts/analysis.txt}",
"tools": {
"write": false,
"bash": false
}
},
"brainstorm": {
"temperature": 0.7,
"prompt": "{file:./prompts/creative.txt}"
}
}
}
Define reusable commands for repetitive tasks. Commands can have templates, descriptions, and agent assignments.
{
"command": {
"test": {
"template": "Run the full test suite with coverage report.",
"description": "Run tests with coverage",
"agent": "build",
"model": "anthropic/claude-haiku-4-5"
},
"component": {
"template": "Create a new React component named $ARGUMENTS with TypeScript.",
"description": "Create a new component"
}
}
}
Configure how conversations are shared. Options: manual (default), auto, or disabled.
{
"share": "manual"
// "auto" - Automatically share new conversations
// "disabled" - Disable sharing entirely
}
Use environment variables and file contents in your config for sensitive data and dynamic configuration.
{
"model": "{env:PersistenceAI_MODEL}",
"provider": {
"anthropic": {
"options": {
"apiKey": "{env:ANTHROPIC_API_KEY}"
}
},
"openai": {
"options": {
"apiKey": "{file:~/.secrets/openai-key}"
}
}
},
"instructions": ["CONTRIBUTING.md", "docs/guidelines.md"]
}
PersistenceAI automatically detects and uses available tools. Some features require specific dependencies:
Formatters: Prettier, Biome, Ruff, gofmt, etc., auto-detected from project dependencies.
Language servers: many auto-install; some require the language toolchain (TypeScript, Python, Go, Ruby, etc.).
Terminal: a modern terminal emulator (Windows Terminal, WezTerm, Alacritty, etc.).
Clipboard: xclip, xsel, or wl-clipboard on Linux for copy/paste functionality.
Control automatic updates. PersistenceAI checks for updates on startup by default.
{
"autoupdate": true
// false - Disable updates
// "notify" - Notify but don't auto-update
}
Customize the terminal UI experience with scroll speed and acceleration settings.
{
"tui": {
"scroll_speed": 3,
"scroll_acceleration": {
"enabled": true
}
}
}
Prevent specific providers from loading even if credentials are available.
{
"disabled_providers": ["openai", "gemini"]
}
Learn more: Check out the full configuration documentation for all available options.
Navigate to your project directory and run PersistenceAI. Then initialize it for your project.
cd /path/to/project
persistenceai
# In TUI, run:
/init
This analyzes your project and creates an AGENTS.md file to help PersistenceAI understand your project structure and coding patterns.
You are now ready to use PersistenceAI to work on your project. Feel free to ask it anything!
You can ask PersistenceAI to explain the codebase to you.
How is authentication handled in @packages/functions/src/api/index.ts
Tip: Use the @ key to fuzzy search for files in the project.
You can ask PersistenceAI to add new features to your project. First, switch to Plan mode using the Tab key to see how it'll implement the feature.
# Switch to Plan mode
<TAB>
# Describe what you want
When a user deletes a note, we'd like to flag it as deleted in the database.
Then create a screen that shows all the recently deleted notes.
From this screen, the user can undelete a note or permanently delete it.
For more straightforward changes, you can ask PersistenceAI to build them directly without reviewing a plan first.
We need to add authentication to the /settings route. Take a look at how this is
handled in the /notes route in @packages/functions/src/notes.ts and implement
the same logic in @packages/functions/src/settings.ts
If you realize a change isn't what you wanted, you can undo it using the /undo command.
/undo
You can run /undo multiple times to undo multiple changes. Or use /redo to redo changes.
Use the @ symbol to reference files in your messages. PersistenceAI will automatically include the file content.
How is authentication handled in @packages/functions/src/api/index.ts
Start a message with ! to run shell commands directly. The output is added to the conversation.
!git status
Use @ mentions to invoke specialized subagents for specific tasks.
@general help me search for this function across the codebase
PersistenceAI provides powerful CLI commands for scripting, automation, and programmatic access. Run commands directly or use the interactive TUI.
Run PersistenceAI in non-interactive mode by passing a prompt directly. Perfect for scripting and automation.
persistenceai run "Explain how closures work in JavaScript"
# With specific agent
persistenceai run --agent plan "Review this code for potential issues"
# With specific model
persistenceai run --model anthropic/claude-sonnet-4 "Analyze this function"
Start a headless PersistenceAI server for API access. Perfect for remote development, CI/CD pipelines, and programmatic integration.
# Start server
persistenceai serve
# Attach to running server to avoid cold boot times
persistenceai run --attach http://localhost:4096 "Explain async/await"
Run on headless machines—perfect for remote servers, cloud instances, and Docker containers.
Create and manage custom agents with specialized configurations.
# Create a new agent
persistenceai agent create
# Interactive wizard guides you through configuration
Manage API keys for different LLM providers.
# Login to configure API keys
persistenceai auth login
# List configured providers
persistenceai auth list
# Logout from a provider
persistenceai auth logout
List all available models from your configured providers.
# List all models
persistenceai models
# Filter by provider
persistenceai models anthropic
# Refresh model cache
persistenceai models --refresh
Integrate PersistenceAI directly into your GitHub workflow with GitHub Actions.
# Install GitHub agent in your repository
persistenceai github install
# Mention in issues or PRs
/persistenceai explain this issue
/persistenceai fix this
PersistenceAI runs securely inside your GitHub Actions runners, with full access to your repository context.
PersistenceAI (also known as PAI) is an enterprise-grade terminal AI IDE and agentic coding multiplexer. It provides full IDE features with multi-agent AI integration, multi-session AI conversations, terminal-first design, and vibe coding capabilities.
PersistenceAI requires a modern terminal emulator that supports TUI (Terminal User Interface). Recommended terminals include WezTerm, Alacritty, Ghostty, Kitty, or Windows Terminal. On Windows, PowerShell 5.1 won't work—use Windows Terminal or PowerShell 7+.
PersistenceAI supports 75+ LLM providers through Models.dev, including OpenAI, Anthropic (Claude), Google (Gemini), and local models via Ollama. You can also configure custom providers with OpenAI-compatible APIs.
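As an illustration, a custom OpenAI-compatible endpoint could be declared under the provider block shown in the Configure section. This is a sketch: the provider id and the baseURL option name are assumptions; only the provider/options/apiKey structure appears elsewhere in these docs.
{
  "provider": {
    "my-custom-provider": {
      "options": {
        // "baseURL" is an assumed option name for an OpenAI-compatible endpoint
        "baseURL": "https://llm.example.internal/v1",
        "apiKey": "{env:CUSTOM_LLM_API_KEY}"
      }
    }
  }
}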
Yes! You can use local models with Ollama. Install Ollama, download a model, and configure PersistenceAI to use it. No API keys required for local development.
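A minimal local setup might look like the following; llama3 is just an example model, and the ollama/llama3 model id follows the format shown in the Use Cases section above.
# Pull a model with Ollama (no API key needed)
ollama pull llama3
# Point PersistenceAI at the local model
persistenceai run --model ollama/llama3 "Explain how closures work in JavaScript"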
You can run multiple AI conversations simultaneously in separate sessions. Each session maintains its own context and history. Create new sessions with <leader>N (default: Ctrl+X then N) or list/switch sessions with <leader>L (default: Ctrl+X then L). Perfect for working on different features or tasks in parallel.
Yes! PersistenceAI is designed for remote development. Simply SSH into your remote server and run PersistenceAI. All IDE features work over SSH, making it perfect for cloud instances and remote servers.
Yes! PersistenceAI supports fully customizable keybindings through your config file. You can remap any command to your preferred key combination. See the Configure section for details.
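As a purely illustrative sketch, a keybinds block in PersistenceAI.json could look like the following; the action names here are hypothetical rather than PersistenceAI's actual identifiers, so consult the configuration docs for the real ones.
{
  "keybinds": {
    // Hypothetical action names, shown only to illustrate remapping
    "leader": "ctrl+x",
    "session_new": "<leader>n",
    "session_list": "<leader>l"
  }
}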
The Plan agent creates a step-by-step plan before making changes, perfect for complex features. The Build agent directly implements changes, ideal for straightforward tasks. The Oligarchy agent combines both with a voting system—both Build and Plan propose solutions and vote on each other's proposals, ensuring consensus before execution. You can switch between them with Tab or create custom agents for specific workflows.
Use the /share command in the TUI to generate a shareable link. This creates a read-only view of your conversation that others can access via web browser. Perfect for code reviews, debugging, or sharing AI-generated solutions.
PersistenceAI is not fully open source, though some areas may be open sourced in the future. The project is actively developed and available for use. Check our GitHub repository for the latest licensing information.
Yes! PersistenceAI's server mode is perfect for CI/CD. Start a headless server and access it via API or attach a client. It also integrates directly with GitHub Actions—mention /persistenceai in issues or PRs to trigger AI assistance.
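Outside of GitHub Actions, a generic CI step can install the CLI and invoke it non-interactively. This sketch uses only commands shown above; the secret variable name is illustrative and should come from your CI secret store.
# Install PersistenceAI inside the CI job
curl -fsSL https://persistence-ai.github.io/Landing/install.sh | bash
# Provide provider credentials from CI secrets (variable name is illustrative)
export ANTHROPIC_API_KEY="$CI_ANTHROPIC_API_KEY"
# One-shot, non-interactive run
persistenceai run --agent plan "Review the latest changes for potential issues"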
Still have questions? Join our Discord or check our GitHub
Stay up to date with the latest features, improvements, and bug fixes in PersistenceAI.
The changelog will be available on our GitHub repository. Check back regularly for updates!