Show embedding model errors and prompt for download missing local embedding models #5211
Description
This PR provides two fixes:
Because indexing mostly runs as a background process, I added a new messenger action to report errors to the IDE; the IDE may decide what to do with them.
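The error-forwarding step can be sketched roughly like this (the action name `"indexing/error"` and the payload shape are illustrative, not the exact Continue protocol):

```typescript
// Hypothetical sketch of the new messenger action for surfacing
// background indexing errors to the IDE.
interface IndexingErrorPayload {
  message: string;
}

type PostToIde = (action: string, payload: IndexingErrorPayload) => void;

function reportIndexingError(post: PostToIde, err: unknown): void {
  // Indexing runs in the background, so errors cannot be shown to the
  // user directly; forward them to the IDE, which decides what to do.
  const message = err instanceof Error ? err.message : String(err);
  post("indexing/error", { message });
}
```

The IDE side is then free to ignore the message, log it, or (as in the VS Code case below) turn a missing-model error into an actionable notification.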
For VS Code, if the reported error is an LLM error for a missing model and the attached LLM provider supports installation, we show the usual notification offering to download the model.
This notification mechanism had to be improved so that subsequent "missing downloadable model" notifications stop popping up while the model download is in progress. To do that, I introduced a new `isInstallingModel` function on the `ModelInstaller` interface. While testing, I found that multiple instances of the Ollama LLM class exist, so I had to guard against concurrency issues when checking whether a model is being installed. I noticed the Docker LLM provider also implements `ModelInstaller`, so I extended the changes there (although missing Docker models are currently ignored in `handleLLMError`, but that's another story).

Two main indexing workflows are covered:
- docs indexing (@Docs provider)

There might be other workflows that I missed; if so, please let me know.
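The install-in-progress guard described above can be sketched roughly like this (class and member names are illustrative; the real `ModelInstaller` interface in Continue may differ):

```typescript
// Hypothetical sketch of the extended ModelInstaller interface with the
// new isInstallingModel check used to suppress duplicate notifications.
interface ModelInstaller {
  installModel(modelName: string): Promise<void>;
  isInstallingModel(modelName: string): boolean;
}

// Several instances of the Ollama LLM class may exist at once, so the
// in-progress set is kept at module scope and shared by all instances.
const modelsBeingInstalled = new Set<string>();

class OllamaLike implements ModelInstaller {
  async installModel(modelName: string): Promise<void> {
    if (modelsBeingInstalled.has(modelName)) return; // already in progress
    modelsBeingInstalled.add(modelName);
    try {
      // Stand-in for the actual model pull.
      await new Promise((resolve) => setTimeout(resolve, 10));
    } finally {
      modelsBeingInstalled.delete(modelName);
    }
  }

  isInstallingModel(modelName: string): boolean {
    return modelsBeingInstalled.has(modelName);
  }
}
```

Keeping the set at module scope rather than per-instance is what makes the check safe when the error handler and the installer run on different instances of the same provider class.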
There's one caveat, though: the missing-embedder exception can be triggered several times in a row from different parts of Continue, so the notification may reappear as long as the model is missing.
Checklist
Screenshots
install-ollama-embedder.mp4
Testing instructions
- If you have `nomic-embed-text` installed, run `ollama rm nomic-embed-text`
- Install Ollama if necessary
- Set `nomic-embed-text` as the embedder model, via Ollama
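For reference, configuring the embedder might look roughly like this in Continue's `config.json` (field names from memory of Continue's configuration format; verify against the current docs for your version):

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```

With the model removed and this configuration in place, triggering indexing should surface the missing-model notification and offer the download.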