Copilot Settings
How to configure Copilot settings to get the most out of it.
Basic
At the top of the Copilot settings page, you can find general settings for the plugin.
API Keys
In the Basic Settings section, you can configure API keys for built-in providers such as OpenAI, Anthropic, Google, and Cohere. These keys are essential for the Copilot plugin to perform tasks, for example, summarizing. Simply paste your API keys into the respective fields to get started. While API keys for built-in providers are set here, keys for third-party providers are added in the custom model section when you add a custom model.
See Getting Started for detailed instructions.
Default Chat Model
This is where you set the default model for the plugin. Whenever the plugin starts or reloads, it uses the model set here as the default chat model. Note: only models you have added or enabled are available for selection here; you can manage them in the Model Settings.

Embedding Model
Embedding models convert text into vector representations, which are crucial for tasks like semantic search and vault indexing. You can set the embedding model here, balancing performance and cost for your use case; for example, if you use OpenAI's API, you might select the text-embedding-3-small model. Note: only embedding models you have added or enabled are available for selection in the Model Settings.
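To make the setting concrete, here is a hedged sketch of an OpenAI-format embeddings request body, roughly what gets sent when a note chunk is embedded (field names follow the OpenAI embeddings API, not necessarily the plugin's internals):

```python
import json

# Sketch of an OpenAI-format embeddings request body. The "input" text
# is a made-up example; the plugin chunks your actual notes.
payload = {
    "model": "text-embedding-3-small",
    "input": ["Text of a note chunk to embed"],
}
print(json.dumps(payload))
```

The provider returns one vector per input string, which is then stored in the vault index.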

Conversation Settings
- Default Conversation Folder Name: The default folder name where chat conversations will be saved. Default is 'copilot-conversations'. Tip: To save a conversation to a specific location, type the folder path, e.g., “04_Resources/Copilot/copilot-conversations.”
- Custom Prompts Folder Name: The default folder name where custom prompts will be saved. Default is 'copilot-custom-prompts'.
- Default Conversation Tag: The default tag to be used when saving a conversation. Default is 'ai-conversations'.
- Conversation Filename Template: Customize the file name of your conversation notes. The template must include the {$topic}, {$date}, and {$time} variables. For example, {$topic}@{$date}_{$time} produces journal_last_week@20250222_111111.
- Autosave Chat: Automatically save the chat when starting a new one or when the plugin reloads. If it's turned off, you will need to manually click Save Chat as Note above your chat box.
- Suggested Prompts: Recommended prompts appear above your chat view for easy access.
- Relevant Notes: Display notes related to your active note in the chat view.
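The filename template substitution can be sketched in Python (a minimal illustration with hypothetical function and variable names; the plugin's actual implementation may differ):

```python
from datetime import datetime

def render_filename(template: str, topic: str, when: datetime) -> str:
    """Expand the {$topic}, {$date}, and {$time} template variables."""
    return (template
            .replace("{$topic}", topic)
            .replace("{$date}", when.strftime("%Y%m%d"))
            .replace("{$time}", when.strftime("%H%M%S")))

name = render_filename("{$topic}@{$date}_{$time}",
                       "journal_last_week",
                       datetime(2025, 2, 22, 11, 11, 11))
print(name)  # journal_last_week@20250222_111111
```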

Model
Copilot supports a variety of chat models, including those specialized for vision and reasoning tasks. These models allow you to leverage advanced capabilities such as image understanding and complex logical reasoning within your conversations.
You can select your preferred chat model in the Model Settings to tailor the plugin's behavior to your needs. For example, if you'd like to parse images in your chat, select a model whose vision icon is lit.

LLM Parameters for Chat Model
- Temperature: Controls randomness; default is 0.1. Lower values make answers more focused, higher values make them more creative.
- Token Limit: Maximum tokens generated per response. Default is 1000.
- Conversation Turns: Number of past exchanges kept for context. One turn consists of 2 messages (your input and AI response). More turns give better context but use more tokens and may cost more. Default is 15 turns.
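These parameters map onto a standard OpenAI-style chat request. A hedged sketch of a request body using the defaults above (field names follow the OpenAI chat completions format, and the model name is a placeholder, not necessarily what the plugin sends internally):

```python
import json

# Defaults from the settings above, expressed as an OpenAI-style payload.
payload = {
    "model": "gpt-4o",      # placeholder model name
    "temperature": 0.1,     # lower = more focused answers
    "max_tokens": 1000,     # token limit per response
    # 15 conversation turns = up to 30 prior messages (user + AI) of history
    "messages": [
        {"role": "user", "content": "Summarize my note."},
    ],
}
print(json.dumps(payload, indent=2))
```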

Adding Custom Models
You can add your own models to the plugin, whether they come from existing providers such as OpenAI, Anthropic, Google, Cohere, Groq, and OpenRouter, from local servers powered by Ollama or LM Studio, or from any other third-party provider, as long as they support the OpenAI API format.
How to Add a Custom Model? An Easy Guide
To add a custom model in Copilot, follow these steps:
- Go to Copilot Basic Settings.
- Navigate to API Keys, click Set Keys, and enter your API key (for example, your OpenAI API key).
- Verify that the API key is accepted.
- Click on Add Model.
- Select a Custom Model from the options.
Add a custom model in Set API Keys module under General tab.

Select the custom model from the dropdown menu via Add Model button.

⚠️ If you see an error related to CORS when calling from Copilot, it means Copilot cannot directly call that API due to CORS restrictions; as a workaround, follow "Add Custom Model" examples below and enable the CORS option to bypass this limitation.
Example 1: Adding a model from an existing provider, Anthropic
First, fill your API key for Anthropic in the API settings. You can get your API key from here.
Next, go to Copilot Settings, and under the Model tab, scroll down and click the Add Custom Model button. Fill in the model name and select Anthropic as the provider. There is no need to provide a URL (it is a built-in provider) or an API key (you already set it in the API Settings). Click Add Model and you will see the model in the list above!

In the pop-up menu for adding a custom model, input the required information.

The same process applies to other providers, such as OpenAI, Google, Cohere, Groq, OpenRouter, etc.
How to Add a Custom Model? More Examples
Example 2: Adding a local model powered by LM Studio
First you need to install LM Studio on your computer. In LM Studio, download a model you like, and go to the developer tab. Load the model, enable CORS (⚠️ important), and click on "Start Server".

Now, find that model name on the right panel and copy it.

Next, go back to the Copilot Settings page, toggle open the Custom Models section, paste the model name you just copied, select "LM Studio" as the provider, and click the Add Model button.
Example 3: Adding a local model powered by Ollama
First you need to install Ollama on your computer. In Ollama, run ollama pull to pull a model you like, and close the Ollama app (important!). Then go to your terminal and run ollama serve to start the server.
Similar to LM Studio, just fill in the model name, select "Ollama" as the provider, and click the Add Model button. Make sure you have the correct model name you pulled, or your server will return a 404 error.
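If you are unsure of the exact model name, Ollama's /api/tags endpoint lists what is installed. A small sketch, assuming the server is running on the default port 11434 (the parsing helper is a hypothetical name, not part of Copilot):

```python
import json
import urllib.request

def model_names(tags_json: str) -> list[str]:
    """Extract installed model names from Ollama's /api/tags response."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Query a running `ollama serve` instance for its installed models."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(resp.read().decode())

# With the server running, list_local_models() returns names like
# "llama3.1:latest" -- use one of these exact strings in Copilot.
```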
Example 4: Adding an OpenAI compatible model from a 3rd party provider
Here I use Perplexity as an example. First, get an API key from them and make sure you have a payment method added. You can find their endpoint URL, https://api.perplexity.ai, in their docs here.
In the custom model form, select "3rd party (openai format)" as the provider, paste the API URL you just got from Perplexity into the Base URL field, and fill in the model name (e.g., llama-3.1-sonar-small-128k-online) and your API key.

⚠️ Perplexity is used as an example because their server does not support CORS. If you don't check CORS in Copilot Settings, the chat will fail with CORS errors. Make sure you check CORS and then save the settings.


With CORS enabled, you should be able to chat with it. Note that Obsidian currently does not support streaming with this CORS mode, so in this case you will lose streaming.
Providers other than Perplexity may support CORS natively, in which case you don't need to enable it. For any new third-party provider, always check the dev console for CORS errors.
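Under the hood, an OpenAI-compatible provider simply accepts the standard chat-completions request at its own base URL. A sketch using the Perplexity values above (the API key is a placeholder, and the request is only constructed here, not sent):

```python
import json
import urllib.request

BASE_URL = "https://api.perplexity.ai"  # from Perplexity's docs
API_KEY = "pplx-placeholder"            # placeholder, not a real key

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-format chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("llama-3.1-sonar-small-128k-online", "Hello")
print(req.full_url)  # https://api.perplexity.ai/chat/completions
```

Copilot issues a request of this shape for you; swapping the Base URL is all it takes to point it at a different OpenAI-compatible provider.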
QA
To customize your Vault QA experience, you can adjust several settings:
- Auto-Index Strategy: Determines when your vault is indexed for searching. Options include:
  - ON MODE SWITCH (Recommended): Automatically update the index when switching to Vault QA mode.
  - NEVER: Index only when you manually run the index (refresh) vault or force reindex vault command.
  - ON STARTUP: Refresh the index each time you load or reload the Copilot plugin.
- Embedding Model: Choose the embedding model (vector representation) for your notes. Options may include OpenAI embedding models, Google or Cohere models, or local models like those from Ollama. As with chat models, you can use any third-party embedding model you want as long as it has an OpenAI-compatible API.
- Requests per Minute: Adjust this if you're experiencing rate limiting from your embedding provider. Default is 90.
- Indexing & Exclusions: Specify folders, tags, or note name patterns you want to exclude from indexing. This helps manage large vaults or keep certain information private so it is never sent to the LLM.
- Number of Partitions: Controls how your vault's index is split into multiple parts during the Vault QA indexing process. By default, this value is set to 1, meaning the entire vault index is stored in a single file. If your vault files are too large, consider increasing it (see troubleshooting).
- Inclusions or Exclusions: Set tags, folders, notes, or extension types to be included or excluded from your vault search. Note: tags should be in the properties of notes, not inline tags.

Command
You can use Copilot commands in the Obsidian command palette to quickly summarize, translate, or ask questions about the current file or selection. Go to Copilot Settings, under the Command tab, to set up commands:
- Toggle commands off if you do not need them. Note that some important commands cannot be turned off; for example, the index vault command will always be available.
- Edit the existing commands to tailor their actions to your preference. For example, translate in Spanish can be changed to translate in Japanese if you live in Japan. Tip: commands are automatically loaded from .md files in your custom prompts folder copilot-custom-prompts. Modifying the files will also update the command settings.
- Add a command by being creative here. See all Copilot commands here.
Configure Copilot commands under the Command tab in Copilot Settings. Choose the commands you would like to list in your right-click menu or invoke via slash command in the chat pane. Edit the commands if necessary.

Click Add Command, customize a command in the pop-up window.

Plus
Customize your Copilot Plus experience with these Plus settings:
- Include Current Note in Context Menu: Automatically adds the current note to chat context.
- Images in Markdown: Sends embedded images to AI (only with multimodal models).
- Autocomplete: Enable AI-powered sentence suggestions while typing, with upcoming support for word completion based on your vault. Customize the key used to accept suggestions.
- Allow Additional Context: Enable access to relevant notes beyond the current one for more accurate responses.
- Chat Model: Default is "copilot-plus-flash" (Gemini Flash with one context window), subject to change based on the best available model.
- Embedding Model Options for Vault Search
- copilot-plus-small: Default embedding model for general use.
- copilot-plus-multilingual: Use for notes in multiple languages.
- copilot-plus-large (Believer plan only): Powerful embedding model designed for advanced reasoning tasks such as legal notes, math formulas, and complex multilingual content.
- Other Basic Settings: Auto-index strategy, indexing inclusions & exclusions, and more to tailor your Copilot Plus experience.
