🐛 Bug Fixes
Fixed rendering lag on the data distillation page when processing large volumes of distillation data
Fixed lag in the fully automatic distillation progress popup under high-concurrency scenarios #694
Fixed a chain-of-thought processing issue during model evaluation with Ollama #714
Fixed an issue where custom model evaluation prompts did not take effect #687
⚡ Optimizations
Optimized the compatibility of OpenAI-Compatible API
Optimized the configuration of the Temperature and Top-P parameters in model settings #717
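For context, Temperature and Top-P are the two sampling parameters typically passed through to an OpenAI-compatible chat endpoint. The sketch below shows how such a request body might be assembled and validated; the function name, parameter ranges (temperature in [0, 2], top_p in [0, 1]), and field names are assumptions based on the OpenAI API convention, not this project's actual implementation.

```python
# Hypothetical sketch: validate sampling parameters and build a request
# body for an OpenAI-compatible /chat/completions endpoint. Ranges follow
# the OpenAI convention; the app's real settings code may differ.

def build_chat_payload(model, messages, temperature=1.0, top_p=1.0):
    """Validate sampling parameters and assemble a chat request body."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be within [0, 2]")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be within [0, 1]")
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
    }

payload = build_chat_payload(
    "example-model",  # placeholder model name
    [{"role": "user", "content": "Hello"}],
    temperature=0.7,
    top_p=0.9,
)
```

Lowering temperature or top_p makes sampling more deterministic; most providers recommend adjusting one of the two rather than both at once.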
✨ New Features
Added MiniMax as a built-in model provider
Added an active CoT synthesis feature (automatically enabled when the original model's CoT cannot be extracted), ensuring all datasets include chains of thought #722
🌍 Internationalization
Added Turkish support and improved a large amount of internationalized copy #702 #706 #708
Added Italian support
🙏 Special Thanks
@workcode-del for optimizing OpenAI-Compatible API compatibility
@chengqiangrd for fixing the distillation page and progress popup lag issues #694
@octo-patch for adding MiniMax built-in provider support