LLM endpoint switching on the fly in Telegram #193

@superhero75

Description

What problem does this solve?
Most LLM endpoints have rate limits, and it is difficult to switch to another model or endpoint on the fly.
Proposed solution
In Telegram, add /models and /model commands to switch endpoints or models (openclaw feature).
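A minimal sketch of what the command handling could look like. Everything here is hypothetical: the endpoint names, URLs, and the `ModelSwitcher` class are illustrative only, and a real implementation would hook into openclaw's existing Telegram bot and config rather than a standalone dispatcher.

```python
# Hypothetical command logic for /models (list) and /model <name> (switch).
# Endpoint names and URLs are illustrative, not openclaw's actual config.

ENDPOINTS = {
    "gpt-4o": "https://api.openai.com/v1",
    "claude-sonnet": "https://api.anthropic.com/v1",
    "local-llama": "http://localhost:8080/v1",
}

class ModelSwitcher:
    def __init__(self, endpoints, default):
        self.endpoints = dict(endpoints)
        self.active = default  # currently selected model name

    def handle(self, text):
        """Dispatch a Telegram-style command string; return the reply text."""
        parts = text.split()
        if parts and parts[0] == "/models":
            # List known models, marking the active one with an asterisk.
            return "\n".join(
                ("* " if name == self.active else "  ") + name
                for name in sorted(self.endpoints)
            )
        if parts and parts[0] == "/model":
            if len(parts) < 2 or parts[1] not in self.endpoints:
                return f"usage: /model <{'|'.join(sorted(self.endpoints))}>"
            self.active = parts[1]
            return f"switched to {parts[1]} ({self.endpoints[parts[1]]})"
        return "unknown command"
```

Usage: `sw = ModelSwitcher(ENDPOINTS, "gpt-4o")`, then `sw.handle("/model local-llama")` switches the active endpoint for subsequent requests, which is the rate-limit escape hatch the request asks for.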
Alternatives considered
n/a
Scope estimate
No idea

Would you like to implement this?
No

Metadata

Assignees

No one assigned

    Labels

    feat (New feature or capability)
