What problem does this solve?
Most LLM endpoints are rate-limited, so it is difficult to switch to another model or endpoint on the go.
Proposed solution
In Telegram, add /models and /model commands to switch endpoints or models (openclaw feature).
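A minimal sketch of the switching state those two commands could drive. This is not openclaw's actual implementation; the class, endpoint names, and model names are all placeholders for illustration.

```python
# Hypothetical model-switching state behind /models and /model.
# Registry contents below are placeholders, not openclaw's real config.

class ModelRouter:
    """Keeps a registry of endpoints/models and the active selection."""

    def __init__(self, registry):
        self.registry = registry  # {endpoint: [model, ...]}
        # Default to the first endpoint's first model.
        self.endpoint, models = next(iter(registry.items()))
        self.model = models[0]

    def list_models(self):
        """What /models would reply with: every endpoint and its models."""
        return [f"{ep}: {', '.join(ms)}" for ep, ms in self.registry.items()]

    def switch(self, endpoint, model):
        """What /model <endpoint> <model> would do; rejects unknown pairs."""
        if model not in self.registry.get(endpoint, []):
            raise ValueError(f"unknown model {model!r} on {endpoint!r}")
        self.endpoint, self.model = endpoint, model
        return f"now using {model} via {endpoint}"
```

On a rate-limit error, the bot user could then issue /model with a different endpoint and retry without restarting anything.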
Alternatives considered
n/a
Scope estimate
No idea
Would you like to implement this?
No