SillyTavern 1.15.0
Highlights
Introducing the first preview of Macros 2.0, a comprehensive overhaul of the macro system that enables nesting, stable evaluation order, and more. You are encouraged to try it out by enabling "Experimental Macro Engine" in User Settings -> Chat/Message Handling. Legacy macro substitution will not receive further updates and will eventually be removed.
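For a flavor of what nesting looks like, here is a minimal sketch. It assumes the existing `{{pick}}`, `{{char}}`, and `{{user}}` macros and the double-colon list separator already used by `{{random}}`/`{{pick}}`; the prompt text itself is illustrative only:

```
{{// sketch only: separator and nesting syntax are assumed, not taken from this release note}}
Greet {{pick::{{char}}::{{user}}}} in a friendly tone.
```

With the new engine, the inner `{{char}}` and `{{user}}` substitutions can be resolved as part of evaluating the outer `{{pick}}`, which legacy substitution did not support.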
Breaking Changes
- `{{pick}}` macros are not compatible between the legacy and new macro engines. Switching between them will change existing pick macro results.
- Due to the change in group chat metadata file handling, existing group chat files will be migrated automatically. Upgraded group chats will not be compatible with previous versions.
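As a hedged illustration of the first point, consider a prompt fragment using the existing `{{pick}}` syntax (the option list is arbitrary):

```
{{// sketch only: pick remembers its choice for an existing message within one engine}}
Today's theme is {{pick::forest::ocean::desert}}.
```

Whichever option the legacy engine had settled on for an existing message is not carried over; after switching engines the same macro may resolve to a different option, and vice versa.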
Backends
- Chutes: Added as a Chat Completion source.
- NanoGPT: Exposed additional samplers to UI.
- llama.cpp: Supports model selection and multi-swipe generation.
- Synchronized model lists for OpenAI, Google, Claude, Z.AI.
- Electron Hub: Supports caching for Claude models.
- OpenRouter: Supports system prompt caching for Gemini and Claude models.
- Gemini: Supports thought signatures for applicable models.
- Ollama: Supports extracting reasoning content from replies.
Improvements
- Experimental Macro Engine: Supports nested macros, stable evaluation order, and improved autocomplete.
- Unified group chat metadata format with regular chats.
- Added backups browser in "Manage chat files" dialog.
- Prompt Manager: Main prompt can be set at an absolute position.
- Collapsed three media inlining toggles into one setting.
- Added verbosity control for supported Chat Completion sources.
- Added image resolution and aspect ratio settings for Gemini sources.
- Improved CharX asset extraction logic on character import.
- Backgrounds: Added UI tabs and ability to upload chat backgrounds.
- Reasoning blocks can be excluded from smooth streaming with a toggle.
- The start.sh script for Linux/macOS no longer uses nvm to manage the Node.js version.
STscript
- Added `/message-role` and `/message-name` commands.
- `/api-url` command supports VertexAI for setting the region.
Extensions
- Speech Recognition: Added Chutes, MistralAI, Z.AI, ElevenLabs, Groq as STT sources.
- Image Generation: Added Chutes, Z.AI, OpenRouter, RunPod Comfy as inference sources.
- TTS: Unified API key handling for ElevenLabs with other sources.
- Image Captioning: Supports Z.AI (common and coding) for captioning video files.
- Web Search: Supports Z.AI as a search source.
- Gallery: Now supports video uploads and playback.
Bug Fixes
- Fixed resetting the context size when switching between Chat Completion sources.
- Fixed arrow keys triggering swipes while a video element is focused.
- Fixed a server crash in Chat Completion generation when an invalid endpoint URL is passed.
- Fixed pending file attachments not being preserved when using the "Attach a File" button.
- Fixed tool calling not working with the deepseek-reasoner model.
- Fixed image generation not using character prefixes for the 'brush' message action.
Community Updates
- Gallery: Add video uploads to gallery by @Cohee1207 in #4796
- Add new WORLDINFO_SCAN_DONE event with mutable state for extensions by @Wolfsblvt in #4797
- Remove nvm install from start.sh by @Cohee1207 in #4804
- TC: Add a toggle for empty json schemas by @Cohee1207 in #4807
- /api-url: Add VertexAI region management by @Cohee1207 in #4808
- Backgrounds: Restore drawer title header by @Cohee1207 in #4809
- Add credit to bryc for writing getStringHash. by @DeclineThyself in #4811
- Refactor loadOpenAISettings by @Cohee1207 in #4815
- Fix resetting the context size when switching between CC sources by @Cohee1207 in #4816
- Update group chat metadata format by @Cohee1207 in #4805
- Trigger CHARACTER_RENAMED_IN_PAST_CHAT for group chats by @leandrojofre in #4818
- Unify chat timestamps format by @Cohee1207 in #4806
- Backport `feat/chat-tree` and fix #4709 by @DeclineThyself in #4712
- Fix - Check for samplers in the connections panel by @leandrojofre in #4822
- Prompt Manager: Make main/PHI/aux injectable by @Cohee1207 in #4829
- Chat Completion: Reduce number of toggles in AI Response Configuration by @Cohee1207 in #4821
- Empty `swipes` are not handled by `ensureSwipes`. by @DeclineThyself in #4828
- Return character prefixes to brush image generation by @drake1138 in #4833
- Backfill missing swipe_info by @Cohee1207 in #4831
- Add verbosity control by @Cohee1207 in #4837
- Vertexaisearch by @mightytribble in #4834
- Enhanced CharX Import with Asset Extraction by @axAilotl in #4825
- Gemini: Add image request settings by @Cohee1207 in #4838
- Backported `/tests` from `macros-2.0`. by @DeclineThyself in #4842
- Backgrounds menu tabs by @Cohee1207 in #4845
- Sync OpenRouter providers list by @cloak1505 in #4846
- Convert OAI tool_choice to Gemini functionCallingConfig for Gemini requests by @mightytribble in #4840
- Added MockServer class for tests by @DeclineThyself in #4843
- Chutes integration by @cxmplex in #4844
- Add optional Setter to changeMainApi by @SammCheese in #4853
- Feat: Add toggle to exclude think/reason blocks from smooth streaming by @Dakraid in #4849
- Facillitate extension use of ConnectionManagerRequestService by @qvink in #4841
- custom-request: Pass api-url for Z.AI and Vertex and fix if omitted by @Cohee1207 in #4859
- Fix Mistral's Max Temperature by @kashmirmydon in #4856
- Fix path to sprites construction by @Cohee1207 in #4860
- Implement chat backups browse menu by @Cohee1207 in #4862
- Regex cache by @Cohee1207 in #4858
- Fixed `isModifiedKeyboardEvent` order of operations. by @DeclineThyself in #4866
- "N" support for llama.cpp by @Beinsezii in #4869
- Backported refactor of the chats endpoint from `feat/chat-tree`. by @DeclineThyself in #4870
- Add docker data directories to .dockerignore by @equal-l2 in #4873
- Add an explicit `cache_control` to the first system message for OpenRouter Claude by @chungchandev in #4872
- Fix: preserve attached files during file input changes by @Cohee1207 in #4877
- Replace Google Translate library by @Cohee1207 in #4884
- Refactor CC API async route handlers by @Cohee1207 in #4885
- Chevrons can overlap with other elements. by @DeclineThyself in #4878
- Correct error message in evalBoolean by @Kexus in #4889
- Macros 2.0 (v0.3) - Replacing the existing Macro System with a new Macro Engine by @Wolfsblvt in #4820
- Z.AI: Add image generation and web search by @Cohee1207 in #4895
- Gemini: add media resolution select by @Cohee1207 in #4775
- Implement Gemini thought signatures by @mightytribble in #4886
- Migrate substituteParams calls to new engine by @Cohee1207 in #4901
- Refactor ElevenLabs TTS API key handling by @Cohee1207 in #4906
- Add caching system prompt feature for OpenRouter Gemini by @chungchandev in #4903
- Add model selection support for llama.cpp router mode by @my-alt-acct in #4910
- Fix context size limitation for llama.cpp router mode by @my-alt-acct in #4914
- Updated Avatar Style and `Hide Chat Avatars` title text. by @DeclineThyself in #4908
- Comfyui serverless runpod image generation by @9nbf7c4q6b-lgtm in #4891
- chchar slash cmd by @AphidGit in #4916
- filter out models that don't have a valid id for chutes by @cxmplex in #4920
- Fix OpenRouter compatibility for OpenAI models by @equal-l2 in #4917
- Allow editing of global Worldinfo settings by @SammCheese in #4921
- [Electron Hub] Prompt Caching Support for Claude models by @snowby666 in #4918
- Staging by @Cohee1207 in #4925
New Contributors
- @drake1138 made their first contribution in #4833
- @mightytribble made their first contribution in #4834
- @axAilotl made their first contribution in #4825
- @cxmplex made their first contribution in #4844
- @SammCheese made their first contribution in #4853
- @kashmirmydon made their first contribution in #4856
- @chungchandev made their first contribution in #4872
- @Kexus made their first contribution in #4889
- @my-alt-acct made their first contribution in #4910
- @9nbf7c4q6b-lgtm made their first contribution in #4891
- @AphidGit made their first contribution in #4916
Full Changelog: 1.14.0...1.15.0