Providers
PersistenceAI supports multiple LLM providers. You can configure API keys for any provider you want to use.
Connecting a Provider
1. Run the /connect command in the TUI
2. Select a provider from the list
3. Follow the provider-specific authentication steps
4. Enter your API key when prompted
/connect
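Once the flow completes, the key is saved to your global config file (see Provider Configuration below). As a sketch, connecting OpenAI would leave an entry like this, with the placeholder standing in for your real key:

{
  "providers": {
    "openai": {
      "apiKey": "sk-..."
    }
  }
}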
Supported Providers
PersistenceAI supports all major LLM providers, including:
OpenAI
Anthropic
Google
Ollama
And many more...
Provider Configuration
Provider configurations are stored in your global config file:
Windows: %USERPROFILE%\.config\pai\pai.json
Linux/macOS: ~/.config/pai/pai.json
Example Configuration
{
  "providers": {
    "openai": {
      "apiKey": "sk-..."
    },
    "anthropic": {
      "apiKey": "sk-ant-..."
    },
    "ollama": {
      "baseURL": "http://localhost:11434"
    }
  }
}
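If you edit this file by hand, it is worth verifying that it is still valid JSON before restarting the TUI. A quick check, assuming Python 3 is installed, is:

python3 -m json.tool ~/.config/pai/pai.json

This prints the formatted config on success and reports the offending line on a syntax error.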
Ollama (Self-Hosted)
PersistenceAI has excellent support for Ollama, allowing you to run models locally without API costs.
Setup
1. Install Ollama from ollama.ai
2. Pull the models you want to use:
ollama pull llama2
ollama pull codellama
3. Connect to Ollama in PersistenceAI:
/connect
4. Select "Ollama" and enter your base URL (default: http://localhost:11434)
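Before connecting, you can confirm the server is reachable. Ollama exposes a REST API on the same port, so listing your locally pulled models makes a quick smoke test:

curl http://localhost:11434/api/tags
ollama list

If Ollama runs on a different machine, point the baseURL in your config at that host instead; the address below is a placeholder:

"ollama": {
  "baseURL": "http://192.168.1.50:11434"
}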
Recommended Models
The models pulled in the setup above are good starting points: llama2 for general chat, codellama for coding tasks.
PersistenceAI Zero
PersistenceAI Zero is a curated list of models that have been tested and verified by the PersistenceAI team.
1. Run /connect and select "PersistenceAI Zero"
2. Visit the authentication page
3. Sign in and get your API key
4. Paste the key in PersistenceAI
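As with the other providers, the key ends up in your global config file. The exact provider key name is not documented above, so the entry below is a hypothetical sketch that simply follows the shape of the other provider entries:

{
  "providers": {
    "persistenceai-zero": {
      "apiKey": "..."
    }
  }
}

(The "persistenceai-zero" name is a guess; check what /connect actually writes to pai.json.)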
Model Selection
You can switch between models using:
F2 to cycle through recently used models
/models to see all available models
M (default: Ctrl+X M)
Provider-Specific Notes
OpenAI
Requires API key from platform.openai.com
Supports GPT-4, GPT-3.5, and other models
Usage-based billing
Anthropic
Requires API key from console.anthropic.com
Supports Claude 3 models
Usage-based billing
Google
Requires API key from Google AI Studio
Supports Gemini models
Usage-based billing
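Google does not appear in the example configuration above, but assuming it follows the same pattern as the other providers, the entry would look something like this (the "google" key name is an assumption; the value is a placeholder):

"google": {
  "apiKey": "AIza..."
}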
Ollama
Free and self-hosted
No API key required
Runs models locally on your machine
Best for privacy-sensitive projects