Troubleshooting & FAQ
Troubleshooting steps and frequently asked questions compiled from user feedback.
Troubleshooting
If you are experiencing any issues, please follow these steps to troubleshoot:
- Check if you are on the latest version of Copilot.
- Disable all other plugins to rule out conflicts and confirm whether Copilot itself is causing the issue.
- Turn on debug mode in Copilot settings.
- Open the dev console in Obsidian by pressing `Cmd + Option + I` on Mac or `Ctrl + Shift + I` on Windows.
- Try to understand the error message.
- If you can't solve the problem and believe it's a bug, check GitHub to see if it's already reported. If not, please open an issue following the bug report template.
Frequently Asked Questions
🔸 RangeError: invalid string length
Q: My vault files are too big. How do I fix indexing issues caused by file size?
A: Split the index into smaller partitions using the Partitions setting in QA settings. Rule of thumb: the first partition file should be under 400 MB.
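The idea behind partitioning can be illustrated with a short sketch that splits serialized data into chunks under a byte-size cap. This is only an illustration of the concept, not Copilot's actual implementation:

```python
def partition(text: str, max_bytes: int) -> list[str]:
    """Split a string into chunks whose UTF-8 size stays within max_bytes."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_bytes, len(text))
        # Shrink until the encoded chunk fits (multi-byte characters can
        # push it over), but always keep at least one character per chunk.
        while end > start + 1 and len(text[start:end].encode("utf-8")) > max_bytes:
            end -= 1
        chunks.append(text[start:end])
        start = end
    return chunks
```

Keeping each chunk under the cap is what prevents any single serialized string from growing large enough to trigger the `RangeError`.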
🔸 Referencing notes.
Q: Can I reference a note in Copilot chat?
A: Yes, use `[[Note Name]]` syntax to reference notes in chat.
🔸 Truncated AI responses.
Q: My AI response is getting cut off. How do I fix that?
A: Raise the max token limit in Copilot Settings → Model Settings → Token Limit.
🔸 Indexing doesn't work.
Q: Indexing isn't working. What should I do?
A:
- Use the 3-dot menu → run Copilot: Index (refresh) vault, or run Copilot: Force reindex vault from the Command Palette (Mac: `Cmd + P`, Windows: `Ctrl + P`).
- Switch to a larger text embedding model and tune batch size + RPM if using OpenAI's small model.
- If you see a "RangeError: invalid string length" and the first index file exceeds ~300 MB, increase the number of partitions in QA settings. Check `.obsidian/` for `copilot-index` files and their sizes.
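To check the index file sizes without opening the folder manually, a small script like the following can list them. The `copilot-index` filename prefix comes from the answer above; the exact filenames and location may differ per install:

```python
from pathlib import Path

def index_file_sizes(obsidian_dir: str, threshold_mb: float = 300) -> dict[str, bool]:
    """Map each copilot-index file name to whether it exceeds threshold_mb."""
    report = {}
    for f in sorted(Path(obsidian_dir).glob("copilot-index*")):
        size_mb = f.stat().st_size / (1024 * 1024)
        report[f.name] = size_mb > threshold_mb
    return report
```

Any file flagged `True` is a sign you should raise the partition count in QA settings.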
🔸 Indexing but not retrieving.
Q: Copilot isn't finding notes I know are indexed. What should I check?
A:
- If you're not a Copilot Plus subscriber, use Vault QA. If you are, switch to Copilot Plus mode and use the @vault tool in chat.
- Raise "Max Sources" in QA settings.
- Try the multilingual model for non-English notes.
- Review QA filters for inclusion/exclusion.
- Run the "List Indexed Files" command.
- Run "Copilot: Force reindex vault" in the Command Palette (Mac: `Cmd + P`, Windows: `Ctrl + P`).
🔸 API Basics.
Q: What is an API and how do I set it up for Copilot?
A: An API (Application Programming Interface) is how Copilot sends requests to a model provider such as OpenAI; you authenticate with an API key. To set it up:
- Enter your OpenAI API key under Copilot Settings → Basic Settings → API Keys → Set Key.
- For Copilot Plus, enter your license key in the Plus License field. You can find your license key in your dashboard at https://www.obsidiancopilot.com/en/dashboard.
🔸 Models with Vision.
Q: Why can't the model understand images?
A: Only models marked with a vision icon support image understanding. Check Copilot Settings → Model to confirm the vision icon is shown for your selected model.
🔸 Enforcing English Responses.
Q: How can I make sure Copilot always replies in English?
A:
- In Copilot Settings → Advanced Settings, set the "User System Prompt" to "Always respond in English".
🔸 Image as context.
Q: Can Copilot understand images in my vault?
A:
- Image understanding is only supported in Copilot Plus.
- In Copilot Settings → Model Settings, select a model with vision capabilities enabled.
- In Copilot Settings → Advanced Settings, turn on "Images in markdown".
- 💡 Tip: set a longer context window for your selected model.
🔸 PDF reading issues.
Q: Why can't Copilot read my PDF?
A:
- Convert large PDFs (>10 MB) to Markdown.
- PDF parsing is costly; Project Mode (coming soon) will be better suited for large PDFs.
- 💡 Tip: reduce file size by converting embedded images to text.
🔸 Mobile/iPad support.
Q: Why doesn't Copilot work fully on mobile or iPad?
A: Mobile/iPad support is experimental and may lack full feature parity with desktop.
🔸 Privacy Concerns.
Q: Which AI model should I use with Copilot to minimize the risk of exposing sensitive data, while still being able to process non-private data?
A: Gemini, when accessed via its paid API (the foundation of copilot-plus-flash), does not use user input for training its models, making it a safe option for processing non-private data. For maximum privacy, consider using local or self-hosted models. An "offline mode" for Believers, designed with maximum privacy in mind, is planned.