Show embedding model errors and prompt for download missing local embedding models #5211


Open · wants to merge 2 commits into main
Conversation

fbricon
Contributor

@fbricon fbricon commented Apr 17, 2025

Description

This PR provides two fixes:

1. Because indexing occurs mostly as a background process, I added a new messenger action to report errors to the IDE, letting the IDE decide how to handle them.
2. For VS Code, if the reported error is an LLM error for a missing model, and the attached LLM provider supports it, we show the usual notification offering to download the model.

This notification mechanism had to be improved so that subsequent "missing downloadable model" notifications stop popping up while that model's download is in progress. To do that, I introduced a new isInstallingModel function on the ModelInstaller interface. While testing, I found there are multiple instances of the Ollama LLM class, so I had to guard against concurrency issues when checking whether a model is being installed. I noticed the Docker LLM provider also implements ModelInstaller, so I extended the changes there (although missing Docker models are currently ignored in handleLLMError, but that's another story).
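The install-tracking guard described above could be sketched roughly like this. This is a hypothetical illustration, not the PR's actual code: only ModelInstaller and isInstallingModel come from the description; the class name, method signatures, and the module-level set are assumptions. The key idea is that the in-progress state lives at module level, so every instance of the provider class sees the same downloads in flight.

```typescript
// Hypothetical sketch; only ModelInstaller/isInstallingModel are named in the PR.
interface ModelInstaller {
  installModel(modelName: string): Promise<void>;
  isInstallingModel(modelName: string): boolean;
}

// Module-level state shared across all provider instances, so several
// Ollama LLM objects agree on which model downloads are in flight.
const installsInProgress = new Set<string>();

class OllamaInstaller implements ModelInstaller {
  isInstallingModel(modelName: string): boolean {
    return installsInProgress.has(modelName);
  }

  async installModel(modelName: string): Promise<void> {
    if (installsInProgress.has(modelName)) {
      return; // another instance is already downloading this model
    }
    installsInProgress.add(modelName);
    try {
      await this.pullModel(modelName);
    } finally {
      installsInProgress.delete(modelName); // clear even on failure
    }
  }

  private async pullModel(_modelName: string): Promise<void> {
    // placeholder for the actual download call
  }
}
```

Because Node runs the check-and-add without yielding in between, a plain Set is enough here; no extra locking is needed as long as the state is shared rather than per-instance.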

Two main indexing workflows are covered:

  • codebase indexing (triggered on startup or when forcing indexing manually)
  • docs indexing (that is backing the @Docs provider)

There might be other workflows I missed; if so, please let me know.
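The error-reporting side of fix 1 might look roughly like the following. This is a sketch under assumptions: the action name "reportError", the payload shape, and the Messenger interface are all illustrative, not the PR's actual protocol.

```typescript
// Hypothetical shape of a core-to-IDE messenger action for indexing errors.
// "reportError" and the payload fields are assumptions for illustration.
type ReportErrorPayload = {
  message: string;
  modelName?: string; // set when the error is a missing-model LLM error
};

interface Messenger {
  send(action: string, payload: unknown): void;
}

function reportIndexingError(
  messenger: Messenger,
  err: Error,
  modelName?: string,
): void {
  const payload: ReportErrorPayload = { message: err.message, modelName };
  messenger.send("reportError", payload);
}
```

The point of routing errors through a messenger action rather than handling them in the core is that each IDE integration can decide what to do: VS Code can show a download prompt, while another client might only log the error.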

There's one caveat though: the missing-embedder exception might be triggered several times in a row, via different parts of Continue, so the notification may keep reappearing as long as the model is missing.
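The suppression behavior on the IDE side could be sketched as below. Again a hypothetical illustration: only handleLLMError and isInstallingModel are named in the PR; the error type, the prompt callback, and the return-value convention are assumptions.

```typescript
// Hypothetical sketch of popup suppression while a download is in flight.
type LLMError = { kind: "missing-model" | "other"; modelName: string };

function handleLLMError(
  error: LLMError,
  installer: { isInstallingModel(name: string): boolean },
  showDownloadPrompt: (name: string) => void,
): boolean {
  if (error.kind !== "missing-model") {
    return false; // not something we can fix by downloading a model
  }
  if (installer.isInstallingModel(error.modelName)) {
    return false; // download already in progress: stay quiet, don't re-pop
  }
  showDownloadPrompt(error.modelName);
  return true;
}
```

Checking isInstallingModel before showing the prompt is what stops repeated "missing downloadable model" notifications while the download runs; once the install finishes (or fails), the flag clears and a later error can surface the prompt again.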

Checklist

  • [ ] The relevant docs, if any, have been updated or created
  • [ ] The relevant tests, if any, have been updated or created

Screenshots

install-ollama-embedder.mp4

Testing instructions

  • install ollama
  • make sure you don't have nomic-embed-text installed; run ollama rm nomic-embed-text if necessary
  • configure nomic-embed-text as embedder model, via ollama
  • start Continue on VS Code or force re-index
  • open the Indexing section; the indexing error should show in red
  • in parallel, you should see the popup to install the model
  • a similar popup should show when adding a new Documentation config
  • no popup should show while the model is being downloaded by VS Code

@fbricon fbricon requested a review from a team as a code owner April 17, 2025 16:43
@fbricon fbricon requested review from Patrick-Erichsen and removed request for a team April 17, 2025 16:43

netlify bot commented Apr 17, 2025

Deploy Preview for continuedev canceled.

🔨 Latest commit: 1dc58c6
🔍 Latest deploy log: https://app.netlify.com/sites/continuedev/deploys/68012fa4e36fc60008d09a1e

@fbricon fbricon changed the title Download embedder ollama Show embedding model errors and prompt for download missing local embedding models Apr 17, 2025
@Patrick-Erichsen
Collaborator

Will circle back to review this early next week! 👍

Labels: none yet
Projects: none yet
Development

Successfully merging this pull request may close these issues.

  • Prompt for download for local embedding models
  • Indexing fails silently on inaccessible embeddings model
2 participants