Replies: 1 comment
Cursor parity from Continue + Ollama is achievable, but you have to wire four separate model roles correctly; that is the part the docs gloss over. Here is a working config that gets you there on a 2070 (8 GB VRAM is fine if you size the models right).

**1. The four model roles you need filled.** Continue separates models by role. Cursor-equivalent behaviour needs all four:

- `chat` - the conversational model in the side panel
- `edit` / `apply` - generates inline edits and applies them to your files
- `autocomplete` - inline completions; wants a small *base* (FIM) model, not an instruct model
- `embed` - embeddings that power `@codebase` indexing

If only `chat` is filled, you get a chatbot with no completions and no codebase awareness.

**2. Working YAML config** (drop into `~/.continue/config.yaml`):

```yaml
name: Local Cursor-like
version: 0.1.0
schema: v1
models:
  - name: chat-qwen
    provider: ollama
    model: qwen2.5-coder:7b-instruct
    apiBase: http://localhost:11434
    roles: [chat, edit, apply]
  - name: autocomplete-qwen
    provider: ollama
    model: qwen2.5-coder:1.5b-base
    apiBase: http://localhost:11434
    roles: [autocomplete]
    defaultCompletionOptions:
      maxTokens: 256
      temperature: 0.1
  - name: embed-nomic
    provider: ollama
    model: nomic-embed-text
    apiBase: http://localhost:11434
    roles: [embed]
context:
  - provider: code
  - provider: codebase
  - provider: diff
  - provider: terminal
  - provider: open
```

Pull the models first:

```sh
ollama pull qwen2.5-coder:7b-instruct
ollama pull qwen2.5-coder:1.5b-base
ollama pull nomic-embed-text
```

**3. The "I don't have access to your project" failure** is almost always a missing `embed` model or missing context providers. Continue passes only what you've named in the `context:` block, and only when you reference it (e.g. `@codebase`) or indexing has finished. Without an `embed` role, `@codebase` has nothing to search and the model answers from thin air.

**4. Confirm the model is actually seeing your code.** After loading a workspace, in the chat panel use `@codebase` with a question about a file you know exists; the answer should quote real code from your project, not a generic apology.
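One extra sanity check worth doing before you blame the config (this is generic Ollama usage, nothing Continue-specific): confirm the Ollama server is reachable at the `apiBase` above and that all three models are actually present.

```sh
# Ask the Ollama server which models it has locally;
# all three names from the config should appear.
curl http://localhost:11434/api/tags

# Show what is currently loaded and how much memory it uses
# (useful for judging what fits in 8 GB of VRAM).
ollama ps
```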
**5. If you want a true agent** (multi-step tool use, not just chat-with-context), enable Agent mode and add MCP tools:

```yaml
mcpServers:
  - name: filesystem
    type: stdio
    command: uvx
    args: ["mcp-server-filesystem", "/path/to/your/project"]
```

Switch the chat panel mode dropdown from "Chat" to "Agent"; now the model can call tools (read files, search, run commands).

**Reality-check on what you cannot get from a local-only setup.** Cursor's "Composer" multi-file edit feels different because it relies on a fast remote model. With a 7b local model you get correct single-file edits and decent autocomplete; multi-file refactors land slower and benefit a lot from temperature 0 plus keeping each edit request small.

Recipe: pull all 3 Ollama models -> drop the YAML above into `~/.continue/config.yaml` -> reload the window -> switch the panel to Agent mode.
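On the temperature point: if you want edits to be as deterministic as possible, the same `defaultCompletionOptions` block used for autocomplete above can be set on the chat/edit model too. A sketch, not gospel - the `maxTokens` value here is my guess; tune it for the size of your edits:

```yaml
  - name: chat-qwen
    provider: ollama
    model: qwen2.5-coder:7b-instruct
    apiBase: http://localhost:11434
    roles: [chat, edit, apply]
    defaultCompletionOptions:
      temperature: 0      # deterministic edits
      maxTokens: 1024     # assumption: enough for single-file diffs
```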
---
Hi all,
I am stuck. I have been trying smaller models and several types of configs, from JSON to YAML, and I get nowhere with configuring Continue on my machine to talk to Ollama and act as a fully fledged agent. It would be great if Continue could provide a baseline for getting an agent working in Visual Studio Code....
The page at https://docs.continue.dev/docs/ide-extensions/autocomplete/model-setup#model-recommendations seems to be dead.
And all my attempts yield responses like:

> I'm sorry but as an AI language model, I don't have access to specific projects or their codebases. If you can provide more information about the project and what kind of files you are looking for, I'd be happy to help you find them.

and various errors in the logs, if anything at all.
I have now found the page at https://docs.continue.dev/guides/ollama-guide#method-3-manual-configuration, but I was wondering if someone could confirm how to get this plugin talking to my locally running Ollama models so that it behaves as close as possible to Cursor.
My machine is not massive, but it does have an RTX 2070, and whether following the docs or asking online, I am still unable to get it to work.