2 changes: 1 addition & 1 deletion README.md
@@ -288,7 +288,7 @@ Then open **http://127.0.0.1:8088/** for the Console. Config, memory, and skills
> --add-host=host.docker.internal:host-gateway \
> -v copaw-data:/app/working agentscope/copaw:latest
> ```
> Then in CoPaw **Settings → Models → Ollama**, change the Base URL to `http://host.docker.internal:11434` or your corresponding port.
> Then in CoPaw **Settings → Models**, change the Base URL to `http://host.docker.internal:<port>` — for example, `http://host.docker.internal:11434` for Ollama, or `http://host.docker.internal:1234/v1` for LM Studio.
>
> **Option B** — Host networking (Linux only):
> ```bash
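Before editing the Base URL as described above, it can help to confirm the container can actually reach the host service. Below is a minimal stdlib sketch to run inside the container; it is not part of this PR, and the ports are the defaults mentioned above:

```python
# Reachability check for host.docker.internal (default Ollama / LM Studio ports).
import urllib.request

for url in (
    "http://host.docker.internal:11434/api/tags",   # Ollama
    "http://host.docker.internal:1234/v1/models",   # LM Studio
):
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            print(url, "->", resp.status)
    except OSError as exc:  # urllib's URLError subclasses OSError
        print(url, "-> unreachable:", exc)
```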
2 changes: 1 addition & 1 deletion README_zh.md
@@ -290,7 +290,7 @@ docker run -p 127.0.0.1:8088:8088 -v copaw-data:/app/working agentscope/copaw:la
> --add-host=host.docker.internal:host-gateway \
> -v copaw-data:/app/working agentscope/copaw:latest
> ```
> 然后在 CoPaw **设置 → 模型 → Ollama** 中,将 Base URL 改为 `http://host.docker.internal:11434` 或对应端口
> 然后在 CoPaw **设置 → 模型** 中,将 Base URL 改为 `http://host.docker.internal:<端口>` — 例如 Ollama 填 `http://host.docker.internal:11434`,LM Studio 填 `http://host.docker.internal:1234/v1`
>
> **方式 B** — 使用宿主机网络(仅限 Linux):
> ```bash
1 change: 1 addition & 0 deletions console/src/api/types/provider.ts
@@ -18,6 +18,7 @@ export interface ProviderInfo {
  freeze_url: boolean;
  api_key: string;
  base_url: string;
  require_api_key: boolean;
}

export interface ProviderConfigRequest {
1 change: 1 addition & 0 deletions console/src/locales/en.json
@@ -346,6 +346,7 @@
"azureEndpointHint": "Azure OpenAI endpoint, e.g. https://<resource>.openai.azure.com/openai/v1",
"anthropicEndpointHint": "Anthropic endpoint, e.g. https://api.anthropic.com",
"ollamaEndpointHint": "Ollama endpoint, e.g. http://localhost:11434",
"lmstudioEndpointHint": "LM Studio endpoint, e.g. http://localhost:1234/v1",
"apiEndpointHint": "API endpoint, e.g. https://api.example.com",
"pleaseEnterBaseURL": "Please enter the API base URL",
"pleaseEnterValidURL": "Please enter a valid URL",
1 change: 1 addition & 0 deletions console/src/locales/ja.json
@@ -343,6 +343,7 @@
"azureEndpointHint": "Azure OpenAIエンドポイント、例: https://<resource>.openai.azure.com/openai/v1",
"anthropicEndpointHint": "Anthropicエンドポイント、例: https://api.anthropic.com",
"ollamaEndpointHint": "Ollamaエンドポイント、例: http://localhost:11434",
"lmstudioEndpointHint": "LM Studioエンドポイント、例: http://localhost:1234/v1",
"apiEndpointHint": "APIエンドポイント、例: https://api.example.com",
"pleaseEnterBaseURL": "APIベースURLを入力してください",
"pleaseEnterValidURL": "有効なURLを入力してください",
1 change: 1 addition & 0 deletions console/src/locales/ru.json
@@ -346,6 +346,7 @@
"azureEndpointHint": "Azure OpenAI endpoint, например https://<resource>.openai.azure.com/openai/v1",
"anthropicEndpointHint": "Anthropic endpoint, например https://api.anthropic.com",
"ollamaEndpointHint": "Ollama endpoint, например http://localhost:11434",
"lmstudioEndpointHint": "LM Studio endpoint, например http://localhost:1234/v1",
"apiEndpointHint": "API endpoint, например https://api.example.com",
"pleaseEnterBaseURL": "Пожалуйста, введите базовый URL API",
"pleaseEnterValidURL": "Пожалуйста, введите корректный URL",
1 change: 1 addition & 0 deletions console/src/locales/zh.json
@@ -346,6 +346,7 @@
"azureEndpointHint": "Azure OpenAI 端点,例如 https://<resource>.openai.azure.com/openai/v1",
"anthropicEndpointHint": "Anthropic 端点,例如 https://api.anthropic.com",
"ollamaEndpointHint": "Ollama 端点,例如 http://localhost:11434",
"lmstudioEndpointHint": "LM Studio 端点,例如 http://localhost:1234/v1",
"apiEndpointHint": "API 端点,例如 https://api.example.com",
"pleaseEnterBaseURL": "请输入 API 基础 URL",
"pleaseEnterValidURL": "请输入有效的 URL",
@@ -62,6 +62,7 @@ export function RemoteProviderCard({
  const isConfigured =
    provider.is_local ||
    (provider.is_custom && provider.base_url) ||
    (!provider.require_api_key && !!provider.base_url) ||
    provider.api_key;
  const hasModels = totalCount > 0;
  const isAvailable = isConfigured && hasModels;
@@ -78,6 +78,9 @@ export function ProviderConfigModal({
    if (provider.id === "ollama") {
      return t("models.ollamaEndpointHint");
    }
    if (provider.id === "lmstudio") {
      return t("models.lmstudioEndpointHint");
    }
    if (provider.is_custom) {
      return effectiveChatModel === "AnthropicChatModel"
        ? t("models.anthropicEndpointHint")
@@ -102,6 +105,9 @@
    if (provider.id === "ollama") {
      return "http://localhost:11434";
    }
    if (provider.id === "lmstudio") {
      return "http://localhost:1234/v1";
    }
    if (provider.is_custom && effectiveChatModel === "AnthropicChatModel") {
      return "https://api.anthropic.com";
    }
@@ -16,6 +16,7 @@ interface ModelsSectionProps {
    api_key?: string;
    is_custom: boolean;
    is_local?: boolean;
    require_api_key?: boolean;
  }>;
  activeModels: {
    active_llm?: {
@@ -50,7 +51,7 @@ export function ModelsSection({
        (p.models?.length ?? 0) + (p.extra_models?.length ?? 0) > 0;
      if (!hasModels) return false;
      if (p.is_local) return true;
      if (p.id === "ollama") return !!p.base_url;
      if (!p.require_api_key) return !!p.base_url;
      if (p.is_custom) return !!p.base_url;
      return !!p.api_key;
    }),
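For illustration, here is the gating rule this hunk implements, restated as a standalone predicate. It is a Python mirror of the Console's TypeScript filter, not code from this PR:

```python
def is_selectable(p: dict) -> bool:
    """Mirror of the Console's provider filter after this change."""
    has_models = len(p.get("models", [])) + len(p.get("extra_models", [])) > 0
    if not has_models:
        return False
    if p.get("is_local"):
        return True
    if not p.get("require_api_key", True):  # keyless providers (Ollama, LM Studio)
        return bool(p.get("base_url"))
    if p.get("is_custom"):
        return bool(p.get("base_url"))
    return bool(p.get("api_key"))

# A keyless LM Studio entry with one discovered model is selectable.
assert is_selectable({
    "require_api_key": False,
    "base_url": "http://localhost:1234/v1",
    "models": ["qwen2.5-7b-instruct"],
})
```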
44 changes: 44 additions & 0 deletions src/copaw/providers/lm_studio_provider.py
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
"""An LM Studio provider implementation.

LM Studio exposes an OpenAI-compatible local server (default
http://localhost:1234/v1). This provider auto-discovers loaded models
on each get_info() call, similar to OllamaProvider."""

from __future__ import annotations

import logging
from typing import List

from copaw.providers.openai_provider import OpenAIProvider
from copaw.providers.provider import ModelInfo, ProviderInfo

logger = logging.getLogger(__name__)


class LMStudioProvider(OpenAIProvider):
    """Provider for LM Studio's OpenAI-compatible local server."""

    async def get_info(self, mock_secret: bool = True) -> ProviderInfo:
        models: List[ModelInfo] = []
        try:
            models = await self.fetch_models(timeout=1)
            self.models = models
        except Exception:
            # The local server may be offline; log and return an empty
            # model list rather than failing the whole info call.
            logger.debug("Failed to fetch LM Studio models", exc_info=True)
        return ProviderInfo(
            id=self.id,
            name=self.name,
            base_url=self.base_url,
            api_key=self.api_key_prefix + "*" * 6
            if mock_secret and self.api_key
            else self.api_key,
            chat_model=self.chat_model,
            models=models,
            extra_models=self.extra_models,
            api_key_prefix=self.api_key_prefix,
            is_local=self.is_local,
            is_custom=self.is_custom,
            freeze_url=self.freeze_url,
            require_api_key=self.require_api_key,
        )
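A hypothetical usage sketch for the provider above. The constructor fields mirror the PROVIDER_LMSTUDIO registration in provider_manager.py below; treating ModelInfo as having an `id` attribute is an assumption:

```python
import asyncio

from copaw.providers.lm_studio_provider import LMStudioProvider


async def main() -> None:
    provider = LMStudioProvider(
        id="lmstudio",
        name="LM Studio",
        base_url="http://localhost:1234/v1",
        require_api_key=False,
        api_key_prefix="",
        models=[],
    )
    # get_info() re-discovers whatever models LM Studio has loaded right now.
    info = await provider.get_info()
    print([m.id for m in info.models])  # assumes ModelInfo carries an `id`


asyncio.run(main())
```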
13 changes: 13 additions & 0 deletions src/copaw/providers/provider_manager.py
@@ -19,6 +19,7 @@
    ProviderInfo,
)
from copaw.providers.openai_provider import OpenAIProvider
from copaw.providers.lm_studio_provider import LMStudioProvider
from copaw.providers.anthropic_provider import AnthropicProvider
from copaw.providers.ollama_provider import OllamaProvider
from copaw.constant import SECRET_DIR
@@ -159,6 +160,15 @@
    require_api_key=False,
)

PROVIDER_LMSTUDIO = LMStudioProvider(
    id="lmstudio",
    name="LM Studio",
    base_url="http://localhost:1234/v1",
    require_api_key=False,
    api_key_prefix="",
    models=[],
)


class ModelSlotConfig(BaseModel):
    provider_id: str = Field(
Expand Down Expand Up @@ -216,6 +226,7 @@ def _init_builtins(self):
        self._add_builtin(PROVIDER_AZURE_OPENAI)
        self._add_builtin(PROVIDER_ANTHROPIC)
        self._add_builtin(PROVIDER_OLLAMA)
        self._add_builtin(PROVIDER_LMSTUDIO)
        self._add_builtin(PROVIDER_LLAMACPP)
        self._add_builtin(PROVIDER_MLX)

@@ -419,6 +430,8 @@ def _provider_from_data(self, data: Dict) -> Provider:
            return AnthropicProvider.model_validate(data)
        if provider_id == "ollama" or chat_model == "OllamaChatModel":
            return OllamaProvider.model_validate(data)
        if provider_id == "lmstudio":
            return LMStudioProvider.model_validate(data)
        if data.get("is_local", False):
            return DefaultProvider.model_validate(data)
        return OpenAIProvider.model_validate(data)
15 changes: 10 additions & 5 deletions website/public/docs/config.en.md
@@ -287,11 +287,16 @@ CoPaw needs an LLM provider to work. You can set it up in three ways:

### Built-in providers

| Provider | ID | Default Base URL | API Key Prefix |
| ---------- | ------------ | --------------------------------------------------- | -------------- |
| ModelScope | `modelscope` | `https://api-inference.modelscope.cn/v1` | `ms` |
| DashScope | `dashscope` | `https://dashscope.aliyuncs.com/compatible-mode/v1` | `sk` |
| Custom | `custom` | _(you set it)_ | _(any)_ |
| Provider | ID | Default Base URL | API Key Prefix |
| ------------ | -------------- | --------------------------------------------------- | -------------- |
| ModelScope | `modelscope` | `https://api-inference.modelscope.cn/v1` | `ms` |
| DashScope | `dashscope` | `https://dashscope.aliyuncs.com/compatible-mode/v1` | `sk` |
| OpenAI | `openai` | `https://api.openai.com/v1` | _(any)_ |
| Azure OpenAI | `azure-openai` | _(you set it)_ | _(any)_ |
| Anthropic | `anthropic` | `https://api.anthropic.com` | _(any)_ |
| Ollama | `ollama` | `http://localhost:11434` | _(none)_ |
| LM Studio | `lmstudio` | `http://localhost:1234/v1` | _(none)_ |
| Custom | `custom` | _(you set it)_ | _(any)_ |

For each provider you need to set:

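As a worked example of the keyless rows above (Ollama and LM Studio): any OpenAI-compatible client can talk to them with a placeholder key. The sketch below uses the `openai` Python package, which is an assumption here rather than a CoPaw dependency:

```python
from openai import OpenAI

# LM Studio ignores the key, but the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

for model in client.models.list().data:
    print(model.id)  # model IDs currently loaded in LM Studio
```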
15 changes: 10 additions & 5 deletions website/public/docs/config.zh.md
@@ -277,11 +277,16 @@ CoPaw 需要 LLM 提供商才能运行。有三种设置方式:

### 内置提供商

| 提供商 | ID | 默认 Base URL | API Key 前缀 |
| ------------------ | ------------ | --------------------------------------------------- | ------------ |
| ModelScope(魔搭) | `modelscope` | `https://api-inference.modelscope.cn/v1` | `ms` |
| DashScope(灵积) | `dashscope` | `https://dashscope.aliyuncs.com/compatible-mode/v1` | `sk` |
| 自定义 | `custom` | _(你自己填)_ | _(任意)_ |
| 提供商 | ID | 默认 Base URL | API Key 前缀 |
| ------------------ | -------------- | --------------------------------------------------- | ------------ |
| ModelScope(魔搭) | `modelscope` | `https://api-inference.modelscope.cn/v1` | `ms` |
| DashScope(灵积) | `dashscope` | `https://dashscope.aliyuncs.com/compatible-mode/v1` | `sk` |
| OpenAI | `openai` | `https://api.openai.com/v1` | _(任意)_ |
| Azure OpenAI | `azure-openai` | _(你自己填)_ | _(任意)_ |
| Anthropic | `anthropic` | `https://api.anthropic.com` | _(任意)_ |
| Ollama | `ollama` | `http://localhost:11434` | _(无需)_ |
| LM Studio | `lmstudio` | `http://localhost:1234/v1` | _(无需)_ |
| 自定义 | `custom` | _(你自己填)_ | _(任意)_ |

每个提供商需要设置:

46 changes: 34 additions & 12 deletions website/public/docs/console.en.md
@@ -327,6 +327,28 @@ automatically when models are added/removed via Ollama CLI or Console.
> `copaw models ollama-list`, `copaw models ollama-remove`. See
> [CLI](./cli#ollama-models).

### LM Studio provider

The LM Studio provider connects to the LM Studio desktop application's
OpenAI-compatible local server to discover and use loaded models.

**Prerequisites:**

- Install LM Studio from [lmstudio.ai](https://lmstudio.ai)
- Load a model and start the local server in LM Studio (default: `http://localhost:1234`)

**Configure:**

1. Click **Settings** on the LM Studio provider card.
2. The default Base URL is `http://localhost:1234/v1`. Adjust if needed, then
click **Save**.
3. Click **Manage Models** to see models loaded in LM Studio. You can also
manually add model IDs.
4. Select **LM Studio** in the **Provider** dropdown and pick a model.

> LM Studio does not require an API key by default. Models must be loaded
> in LM Studio before they appear in CoPaw.
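
Once a model is loaded, you can sanity-check the server end to end before pointing CoPaw at it. A minimal stdlib sketch follows, outside the Console flow; the model name is a placeholder for whatever ID **Manage Models** shows, not a CoPaw identifier:

```python
import json
import urllib.request

# Minimal chat-completion round trip against LM Studio's local server.
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps({
        "model": "your-loaded-model",  # placeholder: use an ID from Manage Models
        "messages": [{"role": "user", "content": "Say hello in five words."}],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=60) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```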

### Choose the active model

1. In the **LLM Config** section, select a **Provider** from the dropdown
@@ -375,18 +397,18 @@ Select rows → click **Delete** in the toolbar → confirm.

## Quick Reference

| Page | Sidebar path | What you can do |
| --------------------- | -------------------------------- | ---------------------------------------------------- |
| Chat | Chat → Chat | Talk with CoPaw, manage sessions |
| Channels | Control → Channels | Enable/disable channels, configure credentials |
| Sessions | Control → Sessions | Filter, rename, delete sessions |
| Cron Jobs | Control → Cron Jobs | Create/edit/delete jobs, run immediately |
| Workspace | Agent → Workspace | Edit persona files, view memory, upload/download |
| Skills | Agent → Skills | Enable/disable/create/delete skills |
| MCP | Agent → MCP | Enable/disable/create/delete MCP clients |
| Runtime Config | Agent → Runtime Config | Modify runtime configuration |
| Models | Settings → Models | Configure providers, manage local/Ollama, pick model |
| Environment Variables | Settings → Environment Variables | Add/edit/delete environment variables |
| Page | Sidebar path | What you can do |
| --------------------- | -------------------------------- | -------------------------------------------------------------- |
| Chat | Chat → Chat | Talk with CoPaw, manage sessions |
| Channels | Control → Channels | Enable/disable channels, configure credentials |
| Sessions | Control → Sessions | Filter, rename, delete sessions |
| Cron Jobs | Control → Cron Jobs | Create/edit/delete jobs, run immediately |
| Workspace | Agent → Workspace | Edit persona files, view memory, upload/download |
| Skills | Agent → Skills | Enable/disable/create/delete skills |
| MCP | Agent → MCP | Enable/disable/create/delete MCP clients |
| Runtime Config | Agent → Runtime Config | Modify runtime configuration |
| Models | Settings → Models | Configure providers, manage local/Ollama/LM Studio, pick model |
| Environment Variables | Settings → Environment Variables | Add/edit/delete environment variables |

---

42 changes: 30 additions & 12 deletions website/public/docs/console.zh.md
@@ -309,6 +309,24 @@ Ollama 提供商集成本地 Ollama 守护进程,动态加载其中的模型

> 也可以通过 CLI 管理 Ollama 模型:`copaw models ollama-pull`、`copaw models ollama-list`、`copaw models ollama-remove`。详见 [CLI](./cli#ollama-模型)。

### LM Studio 提供商

LM Studio 提供商连接 LM Studio 桌面应用内置的 OpenAI 兼容本地服务器,自动发现并使用已加载的模型。

**前置条件:**

- 从 [lmstudio.ai](https://lmstudio.ai) 安装 LM Studio
- 在 LM Studio 中加载模型并启动本地服务器(默认地址:`http://localhost:1234`)

**配置步骤:**

1. 点击 LM Studio 提供商卡片的 **设置** 按钮。
2. 默认 Base URL 为 `http://localhost:1234/v1`,如有需要可修改,点击 **保存**。
3. 点击 **模型** 查看 LM Studio 中当前已加载的模型,也可手动添加模型 ID。
4. 在 **提供商** 下拉菜单中选择 LM Studio,选择模型后点击 **保存**。

> LM Studio 默认不需要 API Key。模型必须在 LM Studio 中加载后才会在 CoPaw 中显示。

### 选择活跃模型

1. 在顶部**LLM配置**的**提供商**下拉菜单中选择一个提供商(只显示已授权或
@@ -354,18 +372,18 @@

## 快速索引

| 页面 | 侧边栏路径 | 你能做什么 |
| -------- | ----------------- | -------------------------------------------------- |
| 聊天 | 聊天 → 聊天 | 和 CoPaw 对话、管理会话 |
| 频道 | 控制 → 频道 | 启用/禁用频道、填入凭据 |
| 会话 | 控制 → 会话 | 筛选、重命名、删除会话 |
| 定时任务 | 控制 → 定时任务 | 创建/编辑/删除任务、立即执行 |
| 工作区 | 智能体 → 工作区 | 编辑人设文件、查看记忆、上传/下载 |
| 技能 | 智能体 → 技能 | 启用/禁用/创建/删除技能 |
| MCP | 智能体 → MCP | 启用/禁用/创建/删除MCP |
| 运行配置 | 智能体 → 运行配置 | 修改运行配置 |
| 模型 | 设置 → 模型 | 配置提供商 API Key、管理本地/Ollama 模型、选择模型 |
| 环境变量 | 设置 → 环境变量 | 添加/编辑/删除环境变量 |
| 页面 | 侧边栏路径 | 你能做什么 |
| -------- | ----------------- | ------------------------------------------------------------ |
| 聊天 | 聊天 → 聊天 | 和 CoPaw 对话、管理会话 |
| 频道 | 控制 → 频道 | 启用/禁用频道、填入凭据 |
| 会话 | 控制 → 会话 | 筛选、重命名、删除会话 |
| 定时任务 | 控制 → 定时任务 | 创建/编辑/删除任务、立即执行 |
| 工作区 | 智能体 → 工作区 | 编辑人设文件、查看记忆、上传/下载 |
| 技能 | 智能体 → 技能 | 启用/禁用/创建/删除技能 |
| MCP | 智能体 → MCP | 启用/禁用/创建/删除MCP |
| 运行配置 | 智能体 → 运行配置 | 修改运行配置 |
| 模型 | 设置 → 模型 | 配置提供商 API Key、管理本地/Ollama/LM Studio 模型、选择模型 |
| 环境变量 | 设置 → 环境变量 | 添加/编辑/删除环境变量 |

---
