@@ -32,7 +32,7 @@ To enable CodeGate, enable **Use custom base URL** and enter
 
 You need an [OpenAI API](https://openai.com/api/) account to use this provider.
 To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md).
+[configuration parameter](../how-to/configure.md) when you launch CodeGate.
 
 In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
 OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).
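Since the diff above says to set `CODEGATE_OPENAI_URL` when you launch CodeGate, a minimal launch sketch might pass it as an environment variable. This assumes CodeGate runs as the `ghcr.io/stacklok/codegate` container on port 8989 (the port the docs below use); the endpoint URL is purely illustrative:

```shell
# Sketch only: image tag and endpoint URL are assumptions, not from the docs.
# CODEGATE_OPENAI_URL points CodeGate at a different OpenAI-compatible API.
docker run -d -p 8989:8989 \
  -e CODEGATE_OPENAI_URL=https://my-openai-compatible-host/v1 \
  ghcr.io/stacklok/codegate:latest
```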
@@ -80,9 +80,14 @@ locally using `ollama pull`.
 <TabItem value="lmstudio" label="LM Studio">
 
 You need LM Studio installed on your local system with a server running from LM
-Studio's Developer tab to use this provider. See the
+Studio's **Developer** tab to use this provider. See the
 [LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.
 
+Cline uses large prompts, so you will likely need to increase the context length
+for the model you've loaded in LM Studio. In the Developer tab, select the model
+you'll use with CodeGate, open the **Load** tab on the right and increase the
+**Context Length** to _at least_ 18k (18,432) tokens, then reload the model.
+
 <ThemedImage
   alt='LM Studio dev server'
   sources={{
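The added paragraph pairs "18k" with the exact figure 18,432 because LM Studio counts context length in binary-kilo units, i.e. multiples of 1,024 tokens:

```python
# "18k" context length expressed in binary-kilo units (1k = 1024 tokens).
context_length = 18 * 1024
print(context_length)  # 18432
```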
@@ -96,7 +101,8 @@ In the Cline settings, choose LM Studio as your provider and set the **Base
 URL** to `http://localhost:8989/openai`.
 
 Set the **Model ID** to `lm_studio/<MODEL_NAME>`, where `<MODEL_NAME>` is the
-name of the model you're serving through LM Studio (shown in the Developer tab).
+name of the model you're serving through LM Studio (shown in the Developer tab),
+for example `lm_studio/qwen2.5-coder-7b-instruct`.
 
 <LocalModelRecommendation />
 
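The Model ID rule in this hunk can be sketched as a tiny helper (the function name is ours for illustration, not part of Cline or CodeGate):

```python
def lm_studio_model_id(model_name: str) -> str:
    # Cline's Model ID is the model name shown in LM Studio's Developer
    # tab, prefixed with "lm_studio/".
    return f"lm_studio/{model_name}"

print(lm_studio_model_id("qwen2.5-coder-7b-instruct"))
# lm_studio/qwen2.5-coder-7b-instruct
```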