feat(route/qwen): add research route #21595
Open
Aaron wants to merge 2 commits into DIYgod:master from
Successfully generated as follows: http://localhost:1200/qwen/research - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:25 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: Improve Consistency</title>
<description><p>We are excited to introduce Qwen-Image-Edit-2511, an enhanced version over Qwen-Image-Edit-2509, featuring multiple improvements—including notably better consistency. To try out the latest model, please visit <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> and select the Image Editing fea</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS Steps Up: Voice Cloning and Voice Design!</title>
<description><p><strong>Qwen3-TTS</strong> family has launched two new models: the voice design model Qwen3-TTS-VD-Flash (accessible via the <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts-voice-design">Qwen API</a>) and the voice cloning model Qwen3-TTS-VC-Flash (accessible via the [Qwen API](https://www.alibabacloud.</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-Image-Layered: Layered Decomposition for Inherent Editablity</title>
<description><p>Today, we are excited to introduce Qwen-Image-Layered, a model capable of decomposing an image into multiple RGBA layers. This layered representation unlocks inherent editability: each layer can be independently manipulated without affecting other content. Meanwhile, such a layered representation na</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01:Hear You. See You. Follow Smarter!</title>
<description><p><strong>Qwen3-Omni</strong> is a next-generation native multimodal large model capable of seamlessly processing multiple input modalities—including text, images, audio, and video—and generating both text and natural-sounding speech outputs simultaneously via real-time streaming responses. This version introduces</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>SAPO: A Stable and Performant Reinforcement Learning Method for Training Large Language Models</title>
<description><p>Reinforcement learning (RL) has become a core ingredient in advancing the reasoning capabilities of large language models (LLMs). Modern RL pipelines enable models to solve harder mathematical problems, write complex code, and reason over multimodal inputs. In practice, group‑based policy optimizati</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-TTS Update! 49 Timbres + 10 Languages + 9 Dialects</title>
<description><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available via <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>.Major Improvements:Qwen3-TTS offers</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: When Inspiration Becomes Its Own Reason</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">Click here to experience the latest Qwen DeepResearch</a>_<strong>How does inspiration die?</strong>_It usually doesn’t die from “not being good enough”, but from being “too much trouble”.When a thought flashes, it’s still fragile and unverified. After a brief mome</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max: Just Scale it</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>Following the release of the Qwen3-2507 series, we are thrilled to introduce Qwen3-Max — our largest and most capable model to date. The preview version of Qwen3-Max-Instruct currently ranks third on</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c2d5833ae4jmo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3‑LiveTranslate: Real‑Time Multimodal Interpretation — See It, Hear It, Speak It!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3‑LiveTranslate‑Flash</strong> delivers high‑precision, lightning‑fast and ultra‑reliable real‑time multilingual audio and video interpretation. With the extensive capabilities of Qwen3‑Omni and traini</p>
<p><a href="https://www.alibabacloud.com/help/en/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-VL: Sharper Vision, Deeper Thought, Broader Action</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>Today, we officially launch the all-new Qwen3-VL series — the most powerful vision-language model in the Qwen family to date. In this generation, we’ve made major improvements across multiple dimensio</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Travel Planner: Your Smart Travel Designer</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/en_q1.png" referrerpolicy="no-referrer"><p>We are excited to introduce our <strong>brand-new Travel Planning Assistant</strong>, a powerful system built on a <strong>Multi-Agent architecture</strong> with robust <strong>real-world tool-calling capabilities</strong>. It is designed</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3Guard: Real-time Safety for Your Token Stream</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen3Guard, the first safety guardrail model in the Qwen family. Built upon the powerful Qwen3 foundation models and fine-tuned specifically for safety classificatoin, Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: Multi-Image Support, Improved Consistency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit. To experience the latest model, please visit <a href="https://qwen.ai/">Qwen Chat</a> and select the "I</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni: Natively Omni-Modal Foundation Models!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong> is the natively end-to-end multilingual omni model. It processes text, images, audio, and video, and delivers real-time streaming responses in both text and natural speech. We introduce</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash: Multi-timbre & Multi-lingual & Multi-dialect Speech Synthesis.</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available</p>
<p><a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next: Towards Ultimate Training & Inference Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>We believe that <strong>Context Length Scaling</strong> and <strong>Total Parameter Scaling</strong> are two major trends in the future of large models. To further improve training and inference efficiency under long-context a</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c5414da58bjgj">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3 ASR: Hear clearly, transcribe smartly.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear2.png#center" referrerpolicy="no-referrer"><p>We introduce Qwen3-ASR-Flash, a speech recognition service built upon the strong intelligence of Qwen3-Omni and large amount of multi-modal data especially ASR data on the scale of tens of millions ho</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 06:38:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: Image Editing with Higher Quality and Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image-Edit, the image editing version of Qwen-Image. Built upon our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image's unique text rendering capab</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image: Crafting with Native Text Rendering</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>We are thrilled to release <strong>Qwen-Image</strong>, a 20B MMDiT image foundation model that achieves significant advances in complex text rendering and precise image editing. To try the latest model, feel free</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO: Towards Scalable Reinforcement Learning for Language Models</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>Reinforcement Learning (RL) has emerged as a pivotal paradigm for scaling language models and enhancing their deep reasoning and problem-solving capabilities. To scale RL, the foremost prerequisite is</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT: Where Speed Meets Smart Translation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>Here we introduce the latest update of Qwen-MT (qwen-mt-turbo) via [Qwen API](https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen3-MT-Demo">DEMO</a> | <a href="https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail/2840914_2.html&amp;renderType=component&amp;modelId=qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: Agentic Coding in the World</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>Here we introduce the latest update of <strong>Qwen-TTS</strong> (<code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code>) through <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a> . Trained on a large-scale dataset</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: From "Understanding" the World to "Depicting" It</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>The evolution of multimodal large models is continually pushing the boundaries of what we believe technology can achieve. From the initial QwenVL to the latest Qwen2.5 VL, we have made progress in enh</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen3 Embedding series</strong>, a new proprietary model of the Qwen model family. These models are specifically designed for <strong>text embedding</strong>, <strong>retrieval</strong>, and <strong>reranking</strong> tasks, built on</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3: Think Deeper, Act Faster</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>Today, we are excited to announce the release of <strong>Qwen3</strong>, the latest addition to the Qwen family of large language models. Our flagship model, <strong>Qwen3-235B-A22B</strong>, achieves competitive results in be</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QVQ-Max: Think with Evidence</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>Last December, we launched QVQ-72B-Preview as an exploratory model, but it had many issues. Today, we are officially releasing the first version of QVQ-Max, our visual reasoning model. This model can</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5 Omni: See, Hear, Talk, Write, Do It All!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-Omni</strong>, the new flagship end-to-end multimodal model in the Qwen series. Designed for comprehensive multimodal perception, it seamlessly processes diverse inputs including text, i</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Omni-7B-Demo">DEMO</a> | <a href="https://discord.com/invite/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: Smarter and Lighter</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>At the end of January this year, we launched the Qwen2.5-VL series of models, which received widespread attention and positive feedback from the community. Building on the Qwen2.5-VL series, we contin</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: Embracing the Power of Reinforcement Learning</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>Scaling Reinforcement Learning (RL) has the potential to enhance model performance beyond conventional pretraining and post-training methods. Recent studies have demonstrated that RL can significantly</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-en.jpg" referrerpolicy="no-referrer"><p>This is a blog created by QwQ-Max-Preview. We hope you enjoy it!We’re happy to unveil QwQ-Max-Preview , the latest advancement in the Qwen series, designed to push the boundaries of deep reasoning and</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>It is widely recognized that continuously scaling both data size and model size can lead to significant improvements in model intelligence. However, the research and industry community has limited exp</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/developer-reference/what-is-qwen-llm">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>Two months after upgrading <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a> to support context length up to one million tokens, we are back with the open-source Qwen2.5-1M models and the corresponding inference fram</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL! Qwen2.5 VL! Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-VL</strong>, the new flagship vision-language model of Qwen and also a significant leap from the previous Qwen2-VL. To try the latest model, feel free to visit [Qwen Chat](https://chat.q</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Global-batch load balance almost free lunch to improve your MoE LLM training</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>The Mixture-of-Experts (MoEs) architecture has become a popular model-parameter-scale-up technique. Typically, one MoE layer consists of a router (often parameterized as one single Linear layer) and a</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Towards Effective Process Supervision in Mathematical Reasoning</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>In recent years, Large Language Models (LLMs) have made remarkable advances in mathematical reasoning, yet they can make mistakes, such as miscalculations or logical errors, leading to wrong conclusio</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: To See the World with Wisdom</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>Language and vision intertwine in the human mind, shaping how we perceive and understand the world around us. Our ability to reason is deeply rooted in both linguistic thought and visual memory - but</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: Reflect Deeply on the Boundaries of the Unknown</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p>*Note: This is the pronunciation of QwQ: /kwju:/ , similar to the word "quill".*What does it mean to think, to question, to understand? These are the deep waters that QwQ (Qwen with Questions) wades i</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Extending the Context Length to 1M Tokens!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_en.png" referrerpolicy="no-referrer"><p>After the release of Qwen2.5, we heard the community's demand for processing longer contexts. In recent months, we have made many optimizations for the model capabilities and inference performance of</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API Documentation (Chinese)</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Coder Series: Powerful, Diverse, Practical.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>Today, we are excited to open source the "Powerful", "Diverse", and "Practical" Qwen2.5-Coder series, dedicated to continuously promoting the development of Open CodeLLMs.Additionally, the multi-langu</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: A Party of Foundation Models!</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5%20modelcard.001.jpeg" referrerpolicy="no-referrer"><p>In the past three months since Qwen2's release, numerous developers have built new models on the Qwen2 language models, providing us with valuable feedback. During this period, we have focused on crea</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-LLM: Extending the boundary of LLMs</title>
<description><p>In this blog, we delve into the details of our latest Qwen2.5 series language models. We have developed a range of decoder-only dense models, with seven of them open-sourced, spanning from 0.5B to 72B</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-Coder: Code More, Learn More!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>In early April, we introduced CodeQwen1.5, which garnered significant attention from the community. Since then, we have been working to enhance the coding model. Today, we are excited to announce the</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: The world's leading open-sourced mathematical LLMs</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p>**🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.**A month ago, we released the first se</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: To See the World More Clearly</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>After a year's relentless efforts, today we are thrilled to release <strong>Qwen2-VL</strong>! Qwen2-VL is the latest version of the vision language models based on <strong>Qwen2</strong> in the Qwen model familities. Compared</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio: Chat with Your Voice!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>To achieve the objective of building an AGI system, the model should be capable of understanding information from different modalities. Thanks to the rapid development of large language models, LLMs a</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:18:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Introducing Qwen2-Math</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p>**🚨 This model mainly supports English. We will release bilingual (English and Chinese) math models soon.**Over the past year, we have dedicated significant effort to researching and enhancing the re</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Hello Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>After months of efforts, we are pleased to announce the evolution from Qwen1.5 to Qwen2. This time, we bring to you:We have opensourced the models in Hugging Face and ModelScope to you and we are look</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Generalizing an LLM from 8k to 1M Context using Qwen-Agent</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>TLDR:</strong> We've created an agent using Qwen2 models with an 8k context size to understand documents with 1M tokens, surpassing RAG and native long-context models. This agent was also used to generate</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Notes on Qwen-Max-0428</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>Previously, we opensourced a series of Qwen1.5 model ranging from 0.5 to 110 billion parameters. Now, we release a larger model, Qwen-Max-0428. Qwen-Max-0428 is an instruction-tuned model for chat ser</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B: The First 100B+ Model of the Qwen1.5 Series</title>
<description><p>Recently we have witnessed a burst of large-scale models with over 100 billion parameters in the opensource community. These models have demonstrated remarkable performance in both benchmark evaluatio</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Code with CodeQwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>The advent of advanced programming tools, which harnesses the power of large language models (LLMs), has significantly enhanced programmer productivity and accuracy. Notwithstanding these advancements</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen1.5-32B: Fitting the Capstone of the Qwen1.5 Language Model Series</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>The open-source community has long sought a model that strikes an ideal balance between performance, efficiency, and memory footprint. Despite the emergence of cutting-edge models like Qwen1.5-72B and</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: Matching 7B Model Performance with 1/3 Activated Parameters</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/assets/blog/qwen1.5/qwen-moe.jpg" referrerpolicy="no-referrer"><p>Since the surge in interest sparked by Mixtral, research on mixture-of-expert (MoE) models has gained significant momentum. Both researchers and practitioners are keenly interested in understanding ho</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>In recent months, our focus has been on developing a "good" model while optimizing the developer experience. As we progress towards <strong>Qwen1.5</strong>, the next iteration in our Qwen series, this update arri</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Introducing Qwen-VL</title>
<description><p>Along with the rapid development of our large language model Qwen, we leveraged Qwen’s capabilities and unified multimodal pretraining to address the limitations of multimodal models in generalization</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>4 months after our first release of Qwen-7B, which is the starting point of our opensource journey of large language models (LLM), we now provide an introduction to the Qwen series to give you a whole</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
... |
Contributor
http://localhost:1200/qwen/research/zh-cn - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:34 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: 一致性再提升</title>
<description><p>我们很高兴推出 Qwen-Image-Edit-2511,相比于Qwen-Image-Edit-2509,进行了包括一致性提升在内的多项增强。如需体验最新模型,欢迎访问 <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> 并选择“图像编辑”功能。注意,线上版本有一定优化加速,如果要获取模型最佳效果,可以去 <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2511">ModelScope</a> 本地部署以获取最佳性能。Qwen-Image-Edit-2511 的主要特性包括:**</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS 全面升级: 音色设计与音色克隆!</title>
<description><p><strong>Qwen3-TTS</strong> 家族新推出两款模型,音色创造模型Qwen3-TTS-VD-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-design">Qwen API</a>访问)和音色克隆模型Qwen3-TTS-VC-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-cloning">Qwen API</a>访问)。主要特点:Qwen3-TTS 支持通过自然语言描述生成定制化的音色形象。用户可以随意输入声</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-Image-Layered: 面向内在可编辑性的图层分解</title>
<description><p>今天我们很高兴推出 Qwen-Image-Layered,这是一款能够将图像分解为多个 RGBA 图层的模型。这种分层表示赋予了图像内在的可编辑性:每个图层都可以独立操作,而不会影响其他内容。同时,这种分层结构天然支持高保真的基本编辑操作,例如缩放、移动和重新着色。通过将不同元素物理地隔离到不同的图层中,我们的方法实现了高保真的编辑效果。给定一张图像,Qwen-Image-Layered 可将其分解为若干个 RGBA 图层:分解完成后,编辑操作仅作用于目标图层,将其与其他内容物理隔离,从根本上确保了编辑的一致性。例如,我们可以对第一个图层重新着色,而保持其余内容不变:我们也可以将第二个图层中的</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01:声形意合,令出智随!</title>
<description><p><strong>Qwen3-Omni</strong>是新一代原生全模态大模型,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音输出。我们引入了多种升级来提升模型表现和效率。<strong>Qwen3-Omni-Flash-2025-12-01</strong>是在Qwen3-Omni基础上进行全面升级的版本。此次升级版本主要特点为:在客观性能指标上,<strong>Qwen3-Omni-Flash-2025-12-01</strong>全模态能力全面跃升,各项能力均显著超越Qwen3-Omni-Flash:此次升级,让 Qwen3-Omni-Flash-20251201 在全模态场景下真正做到“声形意合,令出智随”,为用户带来</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>SAPO:一种稳定且高性能的大语言模型强化学习方法</title>
<description><p>强化学习(Reinforcement Learning, RL)已经成为提升大语言模型(Large Language Models, LLM)推理能力的核心技术之一。现代 RL 训练流程使模型能够解决困难的数学问题、编写复杂代码和进行多模态推理。实践中,一种被广泛采用的方法是基于组的策略优化(group‑based policy optimization):对每个提示采样多个回复,并在组内进行奖励归一化。<br>
然而,尽管该方法效果显著,稳定且高性能的策略优化仍然困难。关键挑战在于 token 级重要性比率(importance ratio)的高方差,尤其是在 MoE 模型中。该比率衡量当前策略偏离</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-TTS 全面升级!49种音色 + 10种语言 + 9种方言</title>
<description><p><strong>Qwen3-TTS</strong> 是支持多音色、多语种和多方言的旗舰语音合成模型,致力于实现稳定、自然和高效的语音生成,目前可通过<a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>访问。主要改进:Qwen3-TTS 提供了个性鲜明、情感饱满的多元声音形象供用户选择,可满足多样化的场景需求。以下是一些合成样音:Qwen3-TTS 深度支持多种汉语方言表达,精准还原口音语调与地域韵味。以下是一些合成样音:Qwen3-TTS 同样支持了地道自然的多语种音色,发声习惯更贴近母语表达。以下是一些合成样例:通过 Qwen API 使用 Qwe</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: 当灵感不再需要理由</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">点我体验最新 Qwen DeepResearch</a>_<strong>灵感是如何死掉的?</strong>_它通常不是死于“不够好”,而是死于“太麻烦”。当一个念头闪现时,它还是脆弱的、未经证实的。我们的大脑在短暂兴奋后,会立刻开始评估“成本”:就在这个“成本评估”的瞬间,绝大多数灵感就被“理性”地扼杀了。我们下意识地回避了它,因为“深入研究”的传统门槛实在太高。我们一直在思考,如何让“深入研究”不再是一个需要启动的重型任务,而是成为思考的自然延伸。**这就是 Qwen DeepResearch 诞生的使命。**我们想做</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max:大就是好</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>继 Qwen3-2507 系列发布之后,我们非常高兴地推出 Qwen3-Max —— 我们迄今为止规模最大、能力最强的模型。目前,Qwen3-Max-Instruct 的预览版在 LMArena 文本排行榜上位列第三,超越了 GPT-5-Chat。正式版本在代码能力和智能体(agent)能力方面进一步提升,在涵盖知识、推理、编程、指令遵循、人类偏好对齐、智能体任务和多语言理解的全面基准测试中均达</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#qwen-max-cn-bj">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-LiveTranslate:视、听、说全模态同传大模型!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-LiveTranslate-Flash</strong> 是一款基于大语言模型的高精度、高响应、高鲁棒性的多语言实时音视频同传模型。依托Qwen3-Omni强大的基座能力、海量多模态数据、百万小时音视频数据,Qwen3-LiveTranslate-Flash 实现了覆盖18种语言的离线和实时两种音视频翻译能力。核心亮点:在公开测试集上中英及多语言语音翻译,Qwen3-LiveTranslate-</p>
<p><a href="https://help.aliyun.com/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-VL:明察、深思、广行</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>今天,我们正式推出全新升级的 <strong>Qwen3-VL</strong> 系列——这是迄今为止 Qwen 系列中最强大的视觉语言模型。在这一代模型中,我们在多个维度实现了全面跃升:无论是纯文本理解与生成,还是视觉内容的感知与推理;无论是上下文长度的支持能力,还是对空间关系、动态视频的理解深度;乃至在与Agent交互中的表现,Qwen3-VL 都展现出显著进步。今天,我们率先开源的是该系列的旗舰模型 —— **Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>旅行规划师:你的专属智能行程设计师</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/zn_q1.png" referrerpolicy="no-referrer"><p>我们非常高兴推出全新的<strong>旅行规划助手</strong>,这是一个基于 <strong>Multi-Agent 架构</strong> 并具备强大 <strong>真实工具调用能力</strong> 的旅行规划系统,能够高效应对复杂、多变的行程安排任务。无论你计划的是多城市连线旅行,还是单城深度游,它都能为你提供精准、可落地的旅行方案:旅行规划是一项系统工程,涵盖交通、景点、住宿、用餐等环节,它们环环相扣、相互影响,任何单一 Agent 都难以全面驾驭其中的复杂</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3Guard: 实时安全,逐词响应</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>我们隆重推出 Qwen3Guard —— Qwen 家族中首款专为安全防护设计的护栏模型。该模型基于强大的 Qwen3 基础架构打造,并针对安全分类任务进行了专项微调,旨在为人工智能交互提供精准、可靠的安全保障。无论是用户输入的提示,还是模型生成的回复,Qwen3Guard 均可高效识别潜在风险,输出细粒度的风险等级与分类标签,助力实现更负责任的 AI 应用。在多项主流安全评测基准上,Qwen3G</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: Multi-Image Editing Support and Improved Single-Image Consistency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit. To try the latest model, visit <a href="https://qwen.ai/">Qwen Chat</a> and select the Image Editing feature. Compared with the Qwen-Image-Edit released in August, the main features of Qwen-Image-Edit-2509 include: its foremost update is sup</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni: The Next-Generation Native Omni-Modal Model!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong> is the next-generation native omni-modal model, able to seamlessly process text, image, audio, and video inputs while simultaneously generating text and natural speech outputs through real-time streaming responses. We have introduced a range of upgrades to improve performance and efficiency. Key features: Qwen3-Omni adopts a Thinker-Talker architecture, where the Thinker handles text generation and the Talker focuses on streaming speech token generation, directly consuming high-level semantic representations from the Thinker. To achieve ultra-low-latency streaming, the Tal</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash: Multi-Voice, Multilingual, and Multi-Dialect Speech Synthesis</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> is our flagship speech synthesis model supporting multiple voices, languages, and dialects, designed to generate natural and expressive speech; it is currently accessible via the <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>. Key features: here are some samples showcasing single-speaker multilingual generation; some samples showcasing Chinese and English voices; some samples showcasing dialect voices; and some samples showcasing mixed</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next: Toward Ultimate Training and Inference Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>We believe <strong>Context Length Scaling</strong> and <strong>Total Parameter Scaling</strong> are two major trends in the future development of large models. To further improve training and inference efficiency under long contexts and large total parameter counts, we designed the brand-new Qwen3-Next architecture. Compared with the MoE structure of Qwen3, it introduces the following core improvements: a <strong>hybrid attention mechanism</strong>, a <strong>highly sparse MoE structure</strong>, and a series of <strong>training-stability-friendly optimizations</strong></p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#2c9c4628c9yyd">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-ASR: Hear Clearly, Transcribe Smartly.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear-zh.png#center" referrerpolicy="no-referrer"><p>Qwen3-ASR-Flash is now officially released: a speech recognition service built on the powerful intelligence of the Qwen3 foundation model, massive multimodal data, and tens of millions of hours of ASR data.<br>
Qwen3-ASR-Flash delivers highly accurate and robust speech recognition, supporting 11 languages and a variety of accents. What sets it apart is that users can provide text context in any format to obtain customized ASR results, and it also supports singing-voice recognition. <strong>📊 Performance:</strong></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 11:37:47 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: All-Round Image Editing for Better, Faster Content Creation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image-Edit, the image editing version of Qwen-Image. Further trained from our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image's unique text rendering capability to the image editing domain, enabling precise editing of text within images. In addition, Qwen-Image-Edit feeds the input image into both Qwen2.5-VL (for visual semantic control) and the VAE Encoder</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image: A Creative Powerhouse for Text Rendering</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image, a 20B MMDiT model. It is the first image generation foundation model in the Qwen series, achieving significant advances in complex text rendering and precise image editing. To try the latest model, visit <a href="https://chat.qwen.ai/">Qwen Chat</a> and select the Image Generation feature. Key features include: we comprehensively evaluated Qwen-Image on multiple public benchmarks, including GenEval, DPG, and O</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO: Toward Scalable Reinforcement Learning for Language Models</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>Reinforcement Learning (RL) has become a key technical paradigm for scaling language models and strengthening their deep reasoning and problem-solving capabilities. To keep scaling RL, the first prerequisite is a stable, robust training process. However, we observe that existing RL algorithms (such as GRPO) exhibit severe instability during long-run training and suffer irreversible model collapse, blocking further performance gains from additional compute. To scale RL continuously, we propose Group Sequ</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT: The Perfect Fusion of Speed and Intelligent Translation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>We have released the latest upgrade of Qwen-MT (qwen-mt-turbo) via the <a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">Qwen API</a>. Built on the powerful Qwen3 model, this update further trains the model on ultra-large-scale multilingual and translation data, comprehensively strengthening its multilingual understanding and translation capabilities, combined with reinforcement learning techniques</p>
<p><a href="https://modelscope.cn/studios/Qwen/Qwen3-MT-demo">DEMO</a> | <a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: Agentic Coding in the World</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>Today we officially release Qwen3-Coder, our most agentic code model to date. Qwen3-Coder comes in multiple sizes, but we could not wait to offer you the most powerful version first: Qwen3-Coder-480B-A35B-Instruct. It is an MoE model with 480B total parameters and 35B activated, natively supporting a 256K-token context that can be extended to 1M tokens via YaRN, with outstanding code and agent</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>We have updated <strong>Qwen-TTS</strong> ( <code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code> ) via the <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>. Trained on a large-scale corpus of over 3 million hours of speech, Qwen-TTS achieves human-level naturalness and expressiveness in its synthesis. Notably, Qwe</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:34 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: From Understanding the World to Depicting It</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>The evolution of multimodal large models keeps pushing the boundaries of what we think technology can do. From the original QwenVL to today's Qwen2.5 VL, we have made steady progress in improving models' understanding of image content. Today, we officially introduce Qwen VLo, a unified multimodal model for both understanding and generation. This newly upgraded model can not only "understand" the world but also create high-quality recreations based on that understanding, truly bridging perception and generation. Note that this is a preview version, which you can access through Qwen Chat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding: A New Generation of Text Embedding and Reranking Models</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>We are officially releasing the Qwen3 Embedding series, the newest members of the Qwen model family. Designed for text embedding, retrieval, and reranking tasks, the series is trained on the Qwen3 foundation models and fully inherits Qwen3's strengths in multilingual text understanding. Across multiple benchmarks, the Qwen3 Embedding series delivers outstanding performance on text embedding and ranking tasks. We released it under the Apache 2.0 license on Hugging Face and ModelS</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3: Think Deeper, Act Faster</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>Today, we announce <strong>Qwen3</strong>, the newest member of the Qwen family of large language models. Our flagship model, <strong>Qwen3-235B-A22B</strong>, achieves highly competitive results on benchmarks for coding, math, and general capabilities compared with top models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro. In addition, the small MoE model <strong>Qwen3-30B-A3B</strong> has an activated parameter count that is QwQ</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QVQ-Max: Think with Evidence</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>Last December, we introduced QVQ-72B-Preview. As an exploratory model, it had many issues. Today, we officially release the first version of QVQ-Max, our visual reasoning model. Beyond "understanding" the content of images and videos, it can combine that information to analyze, reason, and even offer solutions. From math problems to everyday questions, from programming code to artistic creation, QVQ-Max has shown impressive capabilities. Although this is only our first version, its potential is already striking. Mat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Omni: Sees, Hears, Talks, Writes, Does It All!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-Omni</strong>, the new flagship end-to-end multimodal model in the Qwen family. Designed for all-round multimodal perception, it seamlessly processes text, image, audio, and video inputs while simultaneously generating text and natural speech synthesis outputs through real-time streaming responses. To try the latest model, visit <a href="https://chat.qwenlm.ai/">Qwen Chat</a> and select Qwen2.5-Omni-7B. The model is now available on [Hugging Fa</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen2.5-Omni-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: Smarter and Lighter!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>At the end of this January, we launched the Qwen2.5-VL series, which received wide attention and positive feedback from the community. Building on that series, we continued optimizing the model with reinforcement learning and open-sourced a new VL model at the much-loved 32B parameter scale under the Apache 2.0 license: <strong>Qwen2.5-VL-32B-Instruct</strong>. Compared with the previously released Qwen2.5-VL series, the highlights of this 32B model are as follows: together with industry</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: Embracing the Power of Reinforcement Learning</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>Large-scale reinforcement learning (RL) has the potential to surpass conventional pretraining and post-training methods in improving model performance. Recent research shows that RL can significantly enhance reasoning. For example, DeepSeek R1 achieved state-of-the-art performance by integrating cold-start data and multi-stage training, enabling deep thinking and complex reasoning. This time, we explore how large-scale RL can elevate the intelligence of large language models, and we are delighted to introduce our latest reasoning model, QwQ-32B. This is a model with 32 billion parameters whose performance</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-zh.jpg" referrerpolicy="no-referrer"><p>This blog post was written by QwQ-Max-Preview itself. We hope you enjoy it! We are pleased to introduce QwQ-Max-Preview, the latest achievement in the Qwen series. Built on Qwen2.5-Max, this version demonstrates stronger capabilities in math, coding, and general tasks, and also performs well in agent-related workflows. As a preview of the upcoming QwQ-Max, this version is still being refined. In the near future, we plan to release it under Apache</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max: Exploring the Intelligence of Large-Scale MoE Models</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>It has long been believed that continuously scaling data and model parameters is one possible path toward AGI. However, the large-model community as a whole has relatively little experience training ultra-large models, whether dense or MoE. The recent release of DeepSeek V3 showed everyone the results and methodology of ultra-large MoE models; over the same period, Qwen was also developing its own ultra-large MoE model, Qwen2.5-Max, using over 20 trillion tokens of pretraining data and a carefully designed</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/getting-started/first-api-call-to-qwen?spm=a2c63.p38356.help-menu-2400256.d_0_1_0.1f6574a72ddbKE">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-1M: Open-Source Qwen Models with 1M-Token Context</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>Two months ago, we upgraded <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a> to support context lengths of up to one million tokens. Today, we officially release the open-source Qwen2.5-1M models along with the corresponding inference framework support. Highlights of this release: you can now visit our demos on <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">Huggingface</a> and [Mo</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL!Qwen2.5 VL!Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-VL</strong>, the flagship vision-language model of the Qwen family, a huge leap over the previously released Qwen2-VL. Visit <a href="https://chat.qwenlm.ai/">Qwen Chat</a> and select Qwen2.5-VL-72B-Instruct to try it. In addition, on [Hugging Face](https://huggingface.co/collections/Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Improving the Performance and Specialization of Mixture-of-Experts Models via Global Load Balancing</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>Mixture-of-Experts models (MoEs) dynamically and sparsely activate model parameters through a routing mechanism, allowing parameter counts to scale efficiently. Sparse activation based on a TopK mechanism runs into expert-activation imbalance during training: the few frequently selected experts get optimized more, which in turn makes them selected even more often, ultimately leaving only a few experts in use and rendering the rest redundant. MoE training therefore introduces an auxiliary load balance loss (LBL) to encourage balanced expert selection. Current mainstream</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Toward Effective Process Supervision in Mathematical Reasoning</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>In recent years, large language models (LLMs) have made remarkable progress in mathematical reasoning, yet they can still make mistakes, such as calculation or logical errors, that lead to wrong conclusions.<br>
Moreover, even when the final answer is correct, these powerful models often fabricate plausible-looking reasoning steps in which the final answer rests on flawed calculations or derivations, undermining the reliability and trustworthiness of LLMs' reasoning processes.<br>
Automatically identifying errors in the reasoning process is therefore becoming increasingly important for scalable oversight. Process Reward Models (P</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: To See the World with Wisdom</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>In human thought, language and vision are tightly interwoven, shaping how we perceive and understand the world. Our reasoning ability is deeply rooted in both linguistic thinking and visual memory. So what happens when we endow artificial intelligence with these capabilities? Today's large language models already exhibit remarkable reasoning, but we cannot help wondering: can they climb to new cognitive heights by mastering the power of visual understanding? Imagine an AI that, like a master physicist, faces a complex physics problem and calmly reasons its way to a solution. It is precisely this vision that inspired</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: Reflecting on the Boundaries of the Unknown</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p><em>Note: QwQ is pronounced /kwju:/, similar to the word "quill".</em> To think, to question, to understand: these are humanity's eternal pursuits in exploring the unknown. On this path of exploration, QwQ is like an apprentice filled with endless curiosity, lighting the way with thought and inquiry. QwQ embodies an ancient philosophical spirit: it knows that it knows nothing, and that awareness is the very source of its curiosity. In its search for answers, it remains introspective, examining every assumption by the light of reason, traversing different dimensions of thought in pursuit of deeper truth. Yet, as with all wisdom</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Extending the Context Length to 1M Tokens!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_cn.png" referrerpolicy="no-referrer"><p>After the release of Qwen2.5, we heard the community's demand for handling longer sequences. Since then, we have made many optimizations for long-sequence processing and for inference efficiency on long sequences. Today, we proudly introduce the new Qwen2.5-Turbo, featuring: you can now access it via [Alibaba Cloud Model Studio](https://help.aliyun.com/zh/model-studio/developer-reference/what-is-qwen-llm</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API Documentation</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>The Full Qwen2.5-Coder Series: Powerful, Diverse, Practical.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>Today, we are excited to open-source the "powerful", "diverse", and "practical" full Qwen2.5-Coder series, dedicated to continuously advancing open CodeLLMs. What's more, the multi-language code repair capability of Qwen2.5-Coder-32B-Instruct is equally impressive, helping users understand and modify the programming languages they know and greatly reducing the cost of learning unfamiliar ones. Similar to McEval, MdEval is a multi-language code repair benchmark, on which Qwen2.5-Co</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: A Party of Foundation Models!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/qwen2.5-main.jpg" referrerpolicy="no-referrer"><p>In the three months since Qwen2's release, many developers have built new models on top of the Qwen2 language models and given us valuable feedback. During this time, we focused on creating smarter, more knowledgeable language models. Today, we are delighted to introduce the newest member of the Qwen family: <strong>Qwen2.5</strong>. What we are about to announce may be the largest open-source release in history! Let the party begin! Our latest release includes the language models <strong>Qwen2.5</strong>, as well as the coding-focused Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-LLM: Extending the Boundary of Large Language Models</title>
<description><p>We proudly present the newly released Qwen2.5 series of language models! We have open-sourced seven decoder-only dense models, ranging from 0.5B to 72B parameters. Our research found markedly increased product interest in 10B-to-30B models, while 3B-scale models are increasingly suited to mobile scenarios. Accordingly, the Qwen2.5 series open-sources Qwen2.5-3B, Qwen2.5-14B, and Qwen2.5-32B. We have also launched the Qwen-Plus and Qwen-Turbo versions, which can</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-Coder: Code More, Learn More!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>In early April, we released CodeQwen1.5, which drew broad attention and love from the community. Since then, we have kept working to improve our code models. Today, we are pleased to announce the release of the next-generation open code model, Qwen2.5-Coder, and to officially rename CodeQwen to Qwen-Coder; we feel "Coder" is more human and lively, and we hope it can one day truly pair-program with people. Qwen2.5-Coder is part of our Qwen2.5 open-source family,</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: The World's Leading Open-Source Math LLM</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p><strong>🚨 Qwen2.5-Math is designed mainly to solve English and Chinese math problems via CoT or TIR; we do not recommend using this series for other tasks.</strong> A month ago, we open-sourced the first math-focused large language model in the Qwen family, <a href="https://qwenlm.github.io/blog/qwen2-math/">Qwen2-Math</a>. Today, we upgrade it and open-source the <strong>Qwen2.5-Math</strong> series, including the base models Qw</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: To See the World More Clearly</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>After nearly a year of continuous effort, today we are thrilled to announce our latest generation of vision-language models: <strong>Qwen2-VL</strong>! Built on Qwen2, Qwen2-VL offers the following improvements over Qwen-VL: we open-sourced Qwen2-VL-2B and Qwen2-VL-7B under the Apache 2.0 license and released the API for Qwen2-VL-72B! The open-source code has been integrated into Hugging Face Transformers, v</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio: Voice Chat Unlocked!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>In a general-purpose AI system, the core model should be able to understand information across modalities. Today's large language models can already understand language and reason, and have been extended to more modalities, including vision and audio. We previously released several Qwen language model series as well as multimodal models such as Qwen-VL and Qwen-Audio. Today, we officially release Qwen2-Audio, the next generation of Qwen-Audio, which accepts audio and text inputs and generates text outputs. Qwen2-</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:22:39 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Math: A New Generation of Math Models</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p>**🚨 This model currently supports English only. Bilingual (English and Chinese) versions are coming soon.** Over the past year, we have focused heavily on improving the reasoning capabilities of large models, especially on math-related tasks. Today, we are delighted to introduce new members of the Qwen2 open-source family - the Qwen2-Math-1.5B/7B/72B series. Qwen2-Math is a series of math-specific language models built upon the Qwen2 LLMs, with mathematical capabilities that significantly surpass open-source models and even closed-source mod</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Hello, Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>After months of effort, we are pleased to announce the major upgrade of the Qwen series from Qwen1.5 to Qwen2. Here is what we bring you this time. The models are now open-sourced on both Hugging Face and ModelScope, and we look forward to your feedback! The Qwen2 series includes pretrained and instruction-tuned models in 5 sizes: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. As shown in the table below, in Qwe</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Extending Context Memory to 1M Tokens with Qwen-Agent</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>TLDR:</strong> We built an agent that understands documents of one million tokens using only the 8k context of Qwen2 models, outperforming RAG and native long-context models. We also used this agent to synthesize long-context data for training long-context Qwen models. Recently, large language models (LLMs) that natively handle inputs of millions of tokens have become a trend. Most work focuses on architectural changes, such as positional-encoding extension or linear attention. However, preparing fine-tuning data of sufficient length is a less-discussed but equally important topic</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Introducing Qwen-Max-0428</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>Previously, we open-sourced the Qwen1.5 series of models, ranging from 0.5B to 110B parameters. This time, we introduce a larger model, Qwen-Max-0428 (the Tongyi Qianwen web and app versions have been upgraded from 2.1 to 2.5). Qwen-Max-0428 is an instruction-tuned chat model. It recently landed on <a href="https://chat.lmsys.org/">Chatbot Arena</a> and ranked in the top ten. In addition, on MT-Bench evaluations we also observed</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B: The First 100B+ Open-Source Model in the Qwen1.5 Series</title>
<description><p>Recently, the open-source community has seen a succession of large models with over 100 billion parameters, all achieving outstanding results across benchmarks. Today, we open-source Qwen1.5-110B, the first 100B+ model in the Qwen1.5 series, with 110 billion parameters. It is comparable to Meta-Llama3-70B in base-capability evaluations and performs impressively in chat evaluations, including MT-Bench and AlpacaEval 2.0. Like the other Qwen1.5 models, Qwen1.5-110B adopts the same Transform</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Pair Programming with CodeQwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>Code assistants are intelligent, LLM-based programming tools that help programmers write code more efficiently and accurately, making the whole software development process smoother and more productive. However, popular code assistants such as GitHub Copilot rely on closed-source commercial models, which are not only expensive but also raise concerns about privacy, security, and copyright. Fortunately, the open-source community is working on open code models to enable open code assistants. Recently, a batch of excellent open CodeLLMs has emerged, such as StarCoder2</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen1.5-32B: The Final Piece of the Qwen1.5 Language Model Series</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>The open-source community has long sought a model that strikes an ideal balance among performance, efficiency, and memory footprint. Despite the emergence of SOTA models such as Qwen1.5-72B and DBRX, these models continue to face problems such as heavy memory consumption, slow inference, and substantial fine-tuning costs. At present, models of around 30B parameters are well regarded in this respect and favored by many users. Following this trend, we introduce the latest members of the Qwen1.5 language model series: Qwen1.5-32B and Qwen1.5-32B-Chat</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: Matching 7B Performance with 1/3 the Activated Parameters</title>
<description><p>Today, we introduce the first MoE model in the Qwen series, Qwen1.5-MoE-A2.7B. With only 2.7 billion activated parameters, it matches the performance of today's most advanced 7B models, such as Mistral 7B and Qwen1.5-7B. Compared with Qwen1.5-7B, which has 6.5 billion non-embedding parameters, Qwen1.5-MoE-A2.7B has only 2 billion non-embedding parameters, roughly one third of the original model's size. Moreover, compared with Qwen1.5</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>In recent months, we have focused on building a truly excellent model while continuously improving the developer experience along the way. As the Lunar New Year arrives, we release version 1.5 of the Qwen open-source models: <strong>Qwen1.5</strong>. We have open-sourced Base and Chat models in 8 sizes - 0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B - as well as an MoE model (see the [blog](https://qwenlm.githu</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-VL: A Major Upgrade!</title>
<description><p>Building on the Qwen language models, we combined the multimodal multitask training we previously proposed to address the generalization limits of multimodal models, and open-sourced the multimodal model Qwen-VL in September 2023. Recently, the Qwen-VL series received a major upgrade with two enhanced versions: Qwen-VL-Plus and Qwen-VL-Max. Key improvements include: compared with the open-source Qwen-VL, on multiple text-image multimodal tasks these two models rival Gemini</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>Four months ago, we first released the Qwen-7B large language model (LLM), officially beginning our open-source journey. Today, we introduce the Qwen open-source family to present our work and goals more comprehensively. Below are the important links to the open-source projects and community. Additionally, we have WeChat groups for chatting and we invite you to join the groups through the provided lin</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>OFASys: Multitask Learning with One Line of Code!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofasys/demo.jpg" referrerpolicy="no-referrer"><p>Generalist models are hot! Following the progress of multimodal multitask learning, we now see an opportunity to build a truly generalist model. Our previously released OFA was an important step toward this goal. However, we encountered many difficulties in the actual implementation. For example, assembling a model for multitask training, and organizing that training - batching the data, keeping training stable, and so on - are all very hard. We therefore introduce OFASys, an AI system that tackles the implementation of multimodal multitask learning. In short, it works mainly through something called</p>
<p><a href="https://arxiv.org/abs/2212.04408">Paper</a> | <a href="https://github.com/OFA-Sys/OFASys">GitHub</a></p>
</description>
<link>https://qwen.ai/blog?id=ofasys</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofasys</guid>
<pubDate>Wed, 28 Dec 2022 10:01:21 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/cnclip/search.jpg" referrerpolicy="no-referrer"><p>CLIP[^1] is a phenomenal model in multimodal representation learning. It serves not only as a foundation model but also as a bridge between vision and language. It has also driven technical progress in many other areas, most notably text-to-image generation. However, we still need language-specific CLIP models, especially for real-world applications such as cross-mo...
Contributor
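A quick way to sanity-check feeds like the ones pasted above is to parse them and assert the channel and item fields. This is a minimal sketch using only the standard library; the inline `SAMPLE` is a trimmed, hypothetical excerpt mirroring the route's output, not the live response (in practice you would fetch `http://localhost:1200/qwen/research` instead).

```python
# Minimal sketch: validate the structure of an RSSHub-style RSS 2.0 payload.
# SAMPLE is an illustrative excerpt, not the live route output.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Qwen Research</title>
    <link>https://qwen.ai/research</link>
    <item>
      <title>Qwen-Image-Edit-2511: Improve Consistency</title>
      <link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
      <guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(SAMPLE)
channel = root.find("channel")
items = channel.findall("item")

# Every item should carry a title, a blog link, and a non-permalink guid.
for item in items:
    assert item.findtext("title")
    assert item.findtext("link", "").startswith("https://qwen.ai/blog?id=")
    assert item.find("guid").get("isPermaLink") == "false"

print(channel.findtext("title"), len(items))  # → Qwen Research 1
```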
http://localhost:1200/qwen/research/en/Research - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research - Research</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/en/Research" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Research - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:35 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Layered: Layered Decomposition for Inherent Editability</title>
<description><p>Today, we are excited to introduce Qwen-Image-Layered, a model capable of decomposing an image into multiple RGBA layers. This layered representation unlocks inherent editability: each layer can be independently manipulated without affecting other content. Meanwhile, such a layered representation na</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>SAPO: A Stable and Performant Reinforcement Learning Method for Training Large Language Models</title>
<description><p>Reinforcement learning (RL) has become a core ingredient in advancing the reasoning capabilities of large language models (LLMs). Modern RL pipelines enable models to solve harder mathematical problems, write complex code, and reason over multimodal inputs. In practice, group‑based policy optimizati</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3 ASR: Hear clearly, transcribe smartly.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear2.png#center" referrerpolicy="no-referrer"><p>We introduce Qwen3-ASR-Flash, a speech recognition service built upon the strong intelligence of Qwen3-Omni and large amount of multi-modal data especially ASR data on the scale of tens of millions ho</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 06:38:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: Image Editing with Higher Quality and Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image-Edit, the image editing version of Qwen-Image. Built upon our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image's unique text rendering capab</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image: Crafting with Native Text Rendering</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>We are thrilled to release <strong>Qwen-Image</strong>, a 20B MMDiT image foundation model that achieves significant advances in complex text rendering and precise image editing. To try the latest model, feel free</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO: Towards Scalable Reinforcement Learning for Language Models</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>Reinforcement Learning (RL) has emerged as a pivotal paradigm for scaling language models and enhancing their deep reasoning and problem-solving capabilities. To scale RL, the foremost prerequisite is</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT: Where Speed Meets Smart Translation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>Here we introduce the latest update of Qwen-MT (qwen-mt-turbo) via [Qwen API](https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen3-MT-Demo">DEMO</a> | <a href="https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail/2840914_2.html&amp;renderType=component&amp;modelId=qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: Agentic Coding in the World</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Global-batch load balance almost free lunch to improve your MoE LLM training</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>The Mixture-of-Experts (MoEs) architecture has become a popular model-parameter-scale-up technique. Typically, one MoE layer consists of a router (often parameterized as one single Linear layer) and a</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-LLM: Extending the boundary of LLMs</title>
<description><p>In this blog, we delve into the details of our latest Qwen2.5 series language models. We have developed a range of decoder-only dense models, with seven of them open-sourced, spanning from 0.5B to 72B</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Generalizing an LLM from 8k to 1M Context using Qwen-Agent</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>TLDR:</strong> We've created an agent using Qwen2 models with an 8k context size to understand documents with 1M tokens, surpassing RAG and native long-context models. This agent was also used to generate</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFASys: Enabling Multitask Learning with One Line of Code!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofasys/demo.jpg" referrerpolicy="no-referrer"><p>Generalist models are hot! We all see an opportunity toward a real generalist model through multimodal multitask learning. We previously released an open-sourced unified multimodal pretrained model OFA for</p>
<p><a href="https://arxiv.org/abs/2212.04408">Paper</a> | <a href="https://github.com/OFA-Sys/OFASys">GitHub</a></p>
</description>
<link>https://qwen.ai/blog?id=ofasys</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofasys</guid>
<pubDate>Wed, 28 Dec 2022 10:01:21 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/cnclip/search.jpg" referrerpolicy="no-referrer"><p>CLIP[^1] is a phenomenal playmaker in vision and multimodal representation learning. It plays not only as a foundation model but also a bridge between vision and language. It has triggered a series of</p>
<p><a href="https://arxiv.org/abs/2211.01335">Paper</a> | <a href="https://github.com/OFA-Sys/Chinese-CLIP">GitHub</a> | <a href="https://www.modelscope.cn/models/damo/multi-modal_clip-vit-base-patch16_zh/summary">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/chinese-clip-zero-shot-image-classification">Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=chinese-clip</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=chinese-clip</guid>
<pubDate>Sat, 24 Dec 2022 06:54:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFA: Towards Building a One-For-All Model</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofa/uniter.jpg" referrerpolicy="no-referrer"><p>2022 is a year of generalist models! With the bloom of multimodal pretraining, especially the unified model, we have witnessed the opportunity to building a generalist model that is capable of process</p>
<p><a href="https://arxiv.org/abs/2202.03052">Paper</a> | <a href="https://github.com/OFA-Sys/OFA">Github</a> | <a href="https://www.modelscope.cn/models?name=ofa">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/OFA-Generic_Interface">Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=ofa</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofa</guid>
<pubDate>Mon, 14 Nov 2022 08:01:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
</channel>
</rss>
http://localhost:1200/qwen/research/en/Open-Source - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research - Open-Source</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/en/Open-Source" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Open-Source - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:36 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: Improve Consistency</title>
<description><p>We are excited to introduce Qwen-Image-Edit-2511, an enhanced version over Qwen-Image-Edit-2509, featuring multiple improvements—including notably better consistency. To try out the latest model, please visit <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> and select the Image Editing fea</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-VL: Sharper Vision, Deeper Thought, Broader Action</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>Today, we officially launch the all-new Qwen3-VL series — the most powerful vision-language model in the Qwen family to date. In this generation, we’ve made major improvements across multiple dimensio</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3Guard: Real-time Safety for Your Token Stream</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen3Guard, the first safety guardrail model in the Qwen family. Built upon the powerful Qwen3 foundation models and fine-tuned specifically for safety classification, Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: Multi-Image Support, Improved Consistency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit. To experience the latest model, please visit <a href="https://qwen.ai/">Qwen Chat</a> and select the "I</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni: Natively Omni-Modal Foundation Models!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong> is the natively end-to-end multilingual omni model. It processes text, images, audio, and video, and delivers real-time streaming responses in both text and natural speech. We introduce</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash: Multi-timbre &amp; Multi-lingual &amp; Multi-dialect Speech Synthesis.</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available</p>
<p><a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next: Towards Ultimate Training & Inference Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>We believe that <strong>Context Length Scaling</strong> and <strong>Total Parameter Scaling</strong> are two major trends in the future of large models. To further improve training and inference efficiency under long-context a</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c5414da58bjgj">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3: Think Deeper, Act Faster</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>Today, we are excited to announce the release of <strong>Qwen3</strong>, the latest addition to the Qwen family of large language models. Our flagship model, <strong>Qwen3-235B-A22B</strong>, achieves competitive results in be</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 Omni: See, Hear, Talk, Write, Do It All!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-Omni</strong>, the new flagship end-to-end multimodal model in the Qwen series. Designed for comprehensive multimodal perception, it seamlessly processes diverse inputs including text, i</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Omni-7B-Demo">DEMO</a> | <a href="https://discord.com/invite/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: Smarter and Lighter</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>At the end of January this year, we launched the Qwen2.5-VL series of models, which received widespread attention and positive feedback from the community. Building on the Qwen2.5-VL series, we contin</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: Embracing the Power of Reinforcement Learning</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>Scaling Reinforcement Learning (RL) has the potential to enhance model performance beyond conventional pretraining and post-training methods. Recent studies have demonstrated that RL can significantly</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>Two months after upgrading <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a> to support context length up to one million tokens, we are back with the open-source Qwen2.5-1M models and the corresponding inference fram</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL! Qwen2.5 VL! Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-VL</strong>, the new flagship vision-language model of Qwen and also a significant leap from the previous Qwen2-VL. To try the latest model, feel free to visit [Qwen Chat](https://chat.q</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder Series: Powerful, Diverse, Practical.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>Today, we are excited to open source the "Powerful", "Diverse", and "Practical" Qwen2.5-Coder series, dedicated to continuously promoting the development of Open CodeLLMs.Additionally, the multi-langu</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: A Party of Foundation Models!</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5%20modelcard.001.jpeg" referrerpolicy="no-referrer"><p>In the past three months since Qwen2's release, numerous developers have built new models on the Qwen2 language models, providing us with valuable feedback. During this period, we have focused on crea</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder: Code More, Learn More!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>In early April, we introduced CodeQwen1.5, which garnered significant attention from the community. Since then, we have been working to enhance the coding model. Today, we are excited to announce the</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: The world's leading open-sourced mathematical LLMs</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p>**🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.**A month ago, we released the first se</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: To See the World More Clearly</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>After a year's relentless efforts, today we are thrilled to release <strong>Qwen2-VL</strong>! Qwen2-VL is the latest version of the vision language models based on <strong>Qwen2</strong> in the Qwen model familities. Compared</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio: Chat with Your Voice!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>To achieve the objective of building an AGI system, the model should be capable of understanding information from different modalities. Thanks to the rapid development of large language models, LLMs a</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:18:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Hello Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>After months of efforts, we are pleased to announce the evolution from Qwen1.5 to Qwen2. This time, we bring to you:We have opensourced the models in Hugging Face and ModelScope to you and we are look</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Code with CodeQwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>The advent of advanced programming tools, which harnesses the power of large language models (LLMs), has significantly enhanced programmer productivity and accuracy. Notwithstanding these advancements</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Introducing Qwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>In recent months, our focus has been on developing a "good" model while optimizing the developer experience. As we progress towards <strong>Qwen1.5</strong>, the next iteration in our Qwen series, this update arri</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
</channel>
</rss>
http://localhost:1200/qwen/research/en/Release - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research - Release</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/en/Release" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Release - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:36 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen3-TTS Steps Up: Voice Cloning and Voice Design!</title>
<description><p><strong>Qwen3-TTS</strong> family has launched two new models: the voice design model Qwen3-TTS-VD-Flash (accessible via the <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts-voice-design">Qwen API</a>) and the voice cloning model Qwen3-TTS-VC-Flash (accessible via the [Qwen API](https://www.alibabacloud.</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01:Hear You. See You. Follow Smarter!</title>
<description><p><strong>Qwen3-Omni</strong> is a next-generation native multimodal large model capable of seamlessly processing multiple input modalities—including text, images, audio, and video—and generating both text and natural-sounding speech outputs simultaneously via real-time streaming responses. This version introduces</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-TTS Update! 49 Timbres + 10 Languages + 9 Dialects</title>
<description><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available via <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>.Major Improvements:Qwen3-TTS offers</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: When Inspiration Becomes Its Own Reason</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">Click here to experience the latest Qwen DeepResearch</a>_<strong>How does inspiration die?</strong>_It usually doesn’t die from “not being good enough”, but from being “too much trouble”.When a thought flashes, it’s still fragile and unverified. After a brief mome</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max: Just Scale it</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>Following the release of the Qwen3-2507 series, we are thrilled to introduce Qwen3-Max — our largest and most capable model to date. The preview version of Qwen3-Max-Instruct currently ranks third on</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c2d5833ae4jmo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3‑LiveTranslate: Real‑Time Multimodal Interpretation — See It, Hear It, Speak It!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3‑LiveTranslate‑Flash</strong> delivers high‑precision, lightning‑fast and ultra‑reliable real‑time multilingual audio and video interpretation. With the extensive capabilities of Qwen3‑Omni and traini</p>
<p><a href="https://www.alibabacloud.com/help/en/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Travel Planner: Your Smart Travel Designer</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/en_q1.png" referrerpolicy="no-referrer"><p>We are excited to introduce our <strong>brand-new Travel Planning Assistant</strong>, a powerful system built on a <strong>Multi-Agent architecture</strong> with robust <strong>real-world tool-calling capabilities</strong>. It is designed</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>Here we introduce the latest update of <strong>Qwen-TTS</strong> (<code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code>) through <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a> . Trained on a large-scale dataset</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: From "Understanding" the World to "Depicting" It</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>The evolution of multimodal large models is continually pushing the boundaries of what we believe technology can achieve. From the initial QwenVL to the latest Qwen2.5 VL, we have made progress in enh</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen3 Embedding series</strong>, a new proprietary model of the Qwen model family. These models are specifically designed for <strong>text embedding</strong>, <strong>retrieval</strong>, and <strong>reranking</strong> tasks, built on</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ-Max: Think with Evidence</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>Last December, we launched QVQ-72B-Preview as an exploratory model, but it had many issues. Today, we are officially releasing the first version of QVQ-Max, our visual reasoning model. This model can</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-en.jpg" referrerpolicy="no-referrer"><p>This is a blog created by QwQ-Max-Preview. We hope you enjoy it!We’re happy to unveil QwQ-Max-Preview , the latest advancement in the Qwen series, designed to push the boundaries of deep reasoning and</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>It is widely recognized that continuously scaling both data size and model size can lead to significant improvements in model intelligence. However, the research and industry community has limited exp</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/developer-reference/what-is-qwen-llm">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Towards Effective Process Supervision in Mathematical Reasoning</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>In recent years, Large Language Models (LLMs) have made remarkable advances in mathematical reasoning, yet they can make mistakes, such as miscalculations or logical errors, leading to wrong conclusio</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: To See the World with Wisdom</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>Language and vision intertwine in the human mind, shaping how we perceive and understand the world around us. Our ability to reason is deeply rooted in both linguistic thought and visual memory - but</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: Reflect Deeply on the Boundaries of the Unknown</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p>*Note: This is the pronunciation of QwQ: /kwju:/ , similar to the word "quill".*What does it mean to think, to question, to understand? These are the deep waters that QwQ (Qwen with Questions) wades i</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Extending the Context Length to 1M Tokens!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_en.png" referrerpolicy="no-referrer"><p>After the release of Qwen2.5, we heard the community's demand for processing longer contexts. In recent months, we have made many optimizations for the model capabilities and inference performance of</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API Documentation (Chinese)</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen2-Math</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p>**🚨 This model mainly supports English. We will release bilingual (English and Chinese) math models soon.**Over the past year, we have dedicated significant effort to researching and enhancing the re</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Notes on Qwen-Max-0428</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>Previously, we opensourced a series of Qwen1.5 model ranging from 0.5 to 110 billion parameters. Now, we release a larger model, Qwen-Max-0428. Qwen-Max-0428 is an instruction-tuned model for chat ser</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B: The First 100B+ Model of the Qwen1.5 Series</title>
<description><p>Recently we have witnessed a burst of large-scale models with over 100 billion parameters in the opensource community. These models have demonstrated remarkable performance in both benchmark evaluatio</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-32B: Fitting the Capstone of the Qwen1.5 Language Model Series</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>The open-source community has long sought a model that strikes an ideal balance between performance, efficiency, and memory footprint. Despite the emergence of cutting-edge models like Qwen1.5-72B and</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: Matching 7B Model Performance with 1/3 Activated Parameters</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/assets/blog/qwen1.5/qwen-moe.jpg" referrerpolicy="no-referrer"><p>Since the surge in interest sparked by Mixtral, research on mixture-of-expert (MoE) models has gained significant momentum. Both researchers and practitioners are keenly interested in understanding ho</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen-VL</title>
<description><p>Along with the rapid development of our large language model Qwen, we leveraged Qwen’s capabilities and unified multimodal pretraining to address the limitations of multimodal models in generalization</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>4 months after our first release of Qwen-7B, which is the starting point of our opensource journey of large language models (LLM), we now provide an introduction to the Qwen series to give you a whole</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
</channel>
</rss>
http://localhost:1200/qwen/research/zh-cn/Research - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究 - Research</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn/Research" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Research - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:37 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Layered: 面向内在可编辑性的图层分解</title>
<description><p>今天我们很高兴推出 Qwen-Image-Layered,这是一款能够将图像分解为多个 RGBA 图层的模型。这种分层表示赋予了图像内在的可编辑性:每个图层都可以独立操作,而不会影响其他内容。同时,这种分层结构天然支持高保真的基本编辑操作,例如缩放、移动和重新着色。通过将不同元素物理地隔离到不同的图层中,我们的方法实现了高保真的编辑效果。给定一张图像,Qwen-Image-Layered 可将其分解为若干个 RGBA 图层:分解完成后,编辑操作仅作用于目标图层,将其与其他内容物理隔离,从根本上确保了编辑的一致性。例如,我们可以对第一个图层重新着色,而保持其余内容不变:我们也可以将第二个图层中的</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>SAPO:一种稳定且高性能的大语言模型强化学习方法</title>
<description><p>强化学习(Reinforcement Learning, RL)已经成为提升大语言模型(Large Language Models, LLM)推理能力的核心技术之一。现代 RL 训练流程使模型能够解决困难的数学问题、编写复杂代码和进行多模态推理。实践中,一种被广泛采用的方法是基于组的策略优化(group‑based policy optimization):对每个提示采样多个回复,并在组内进行奖励归一化。<br>
然而,尽管该方法效果显著,稳定且高性能的策略优化仍然困难。关键挑战在于 token 级重要性比率(importance ratio)的高方差,尤其是在 MoE 模型中。该比率衡量当前策略偏离</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3 ASR:听得清楚,转写聪明。</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear-zh.png#center" referrerpolicy="no-referrer"><p>Qwen3-ASR-Flash现已正式发布,一个基于Qwen3基座模型强大的智能、海量多模态数据以及千万小时规模的ASR数据构建的语音识别服务。<br>
Qwen3-ASR-Flash实现了高精度高鲁棒性的语音识别性能,支持11种语言和多种口音。与众不同的是,Qwen3-ASR-Flash支持用户以任意格式提供文本上下文,从而获得定制化的 ASR 结果,同时还支持歌声识别。<strong>📊 性能表现:</strong>**</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 11:37:47 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: 全能图像编辑,驱动内容创作提质增效</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>我们很高兴推出 Qwen-Image-Edit,Qwen-Image 的图像编辑版本。Qwen-Image-Edit 基于我们20B的 Qwen-Image 模型进一步训练,成功将 Qwen-Image 的独特的文本渲染能力延展至图像编辑领域,实现了对图片中文字的精准编辑。此外,Qwen-Image-Edit 将输入图像同时输入到 Qwen2.5-VL(实现视觉语义控制)和 VAE Encoder</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image:擅长文字渲染的创作利器</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>我们很高兴推出 Qwen-Image,一个20B的MMDiT模型。这是通义千问系列中首个图像生成基础模型,其在复杂文本渲染和精确图像编辑方面取得了显著进展。如需体验最新模型,欢迎访问 <a href="https://chat.qwen.ai/">Qwen Chat</a> 并选择“图像生成”功能。主要特性包括:我们在多个公开基准上对Qwen-Image进行了全面评估,包括用于通用图像生成的GenEval、DPG和O</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO:迈向持续拓展的语言模型强化学习</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>强化学习 (Reinforcement Learning,RL)已成为拓展语言模型、增强其深度推理与问题求解能力的关键技术范式。为了持续拓展 RL,首要前提是确保稳定、鲁棒的训练过程。然而,我们观察到现有的 RL 算法(如 GRPO)在长期训练中会暴露出严重的不稳定性问题并招致不可逆转的模型崩溃,阻碍了通过增加计算以获得进一步的性能提升。为了能够持续拓展 RL,我们提出了 **Group Sequ</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT:速度与智能翻译的完美融合</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>我们通过<a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">Qwen API</a> 推出了 Qwen-MT(qwen-mt-turbo)的最新升级版本。本次更新基于强大的 Qwen3 模型,进一步使用超大规模多语言和翻译数据对模型进行训练,全面增强其多语言理解与翻译能力,并结合强化学习技术</p>
<p><a href="https://modelscope.cn/studios/Qwen/Qwen3-MT-demo">DEMO</a> | <a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: 在世界中自主编程</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>今天我们正式发布 Qwen3-Coder,这是我们迄今为止最具代理能力的代码模型。Qwen3-Coder 拥有多个尺寸,但我们迫不及待地给大家提供当前最强大的版本,Qwen3-Coder-480B-A35B-Instruct。这是一个总参数量 480B,激活 35B 的 MoE 模型,原生支持 256K token 的上下文并可通过 YaRN 扩展到 1M token,拥有卓越的代码和 Agent</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>通过全局负载均衡提升混合专家模型的性能和特异化程度</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>混合专家模型(MoEs)通过路由机制动态并稀疏地激活模型参数,使得能高效地增大模型参数规模。基于 TopK 机制的稀疏激活会在训练中会遇到专家激活不均衡的问题:少数被频繁选择的专家会被优化得更多,进一步使得这些专家被更频繁地选择,最终导致只选择少数专家,造成剩余专家的冗余。因此,MoE 在训练中需要引入额外的辅助损失(load balance loss,LBL)来鼓励专家的选择趋于均衡。目前主流</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-LLM:扩展大型语言模型的边界</title>
<description><p>我们隆重推出最新发布的Qwen2.5系列语言模型!我们共开源了7款decoder-only的稠密模型,参数规模从0.5B到72B不等。我们调研发现产品对10B至30B模型的兴趣明显增加,同时3B规模的模型也越来越适用于移动端场景。为此,Qwen2.5系列开源了Qwen2.5-3B、Qwen2.5-14B 和 Qwen2.5-32B。同时,我们还推出了Qwen-Plus与Qwen-Turbo版本,可</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>使用Qwen-Agent将上下文记忆扩展到百万量级</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>长话短说:</strong> 我们开发了一个智能体用于理解包含百万字词的文档,虽然仅使用Qwen2模型的8k上下文,但效果超过RAG和长序列原生模型。我们还利用此智能体合成长上下文数据,用于训练长上下文的Qwen模型。近期,能够原生处理数百万字输入的大型语言模型(LLMs)成为了一种趋势。大部分工作集中在模型架构调整,如位置编码扩展或线性注意力机制等。然而,准备足够长度的微调数据作为讨论较少但同样重要的议题</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFASys:一行代码带你搞定多任务学习!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofasys/demo.jpg" referrerpolicy="no-referrer"><p>通用模型非常火!我们现在跟随多模态多任务学习的发展似乎看到了实现一个真正的通用模型的机会。我们此前推出的OFA便是朝着这个目标迈向的重要一步。但是,我们在实际实现过程中遇到了非常多的困难。比如说,把多任务训练的模型搭建起来,组织多任务的训练比如给数据打batch和保证训练稳定等等,都非常困难。因此,我们推出一个AI系统OFASys,它主要解决多模态多任务学习的实现问题。简单来说,它主要通过一个叫做</p>
<p><a href="https://arxiv.org/abs/2212.04408">论文</a> | <a href="https://github.com/OFA-Sys/OFASys">GitHub</a></p>
</description>
<link>https://qwen.ai/blog?id=ofasys</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofasys</guid>
<pubDate>Wed, 28 Dec 2022 10:01:21 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Chinese CLIP: 中文图文对比学习预训练</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/cnclip/search.jpg" referrerpolicy="no-referrer"><p>CLIP[^1]是多模态表示学习领域一个现象级的模型。它不仅扮演基础模型,并且建立了视觉和语言的桥梁。它还推动了很多其他领域技术的发展,尤其是文本生成图像。然而,我们还需要特定语言的CLIP,尤其在现实应用中,比如跨模态检索。在此之前还没有效果较好的开源中文CLIP。因此我们希望通过这个项目推动中文多模态的发展。在诸如跨模态检索的图文应用中,语言往往扮演重要的角色。假设直接使用CLIP和翻译文本,</p>
<p><a href="https://arxiv.org/abs/2211.01335">论文</a> | <a href="https://github.com/OFA-Sys/Chinese-CLIP">Github</a> | <a href="https://www.modelscope.cn/models/damo/multi-modal_clip-vit-base-patch16_zh/summary">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/chinese-clip-zero-shot-image-classification">体验</a></p>
</description>
<link>https://qwen.ai/blog?id=chinese-clip</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=chinese-clip</guid>
<pubDate>Sat, 24 Dec 2022 06:54:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFA:走向通用统一模型</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofa/uniter.jpg" referrerpolicy="no-referrer"><p>2022年可以说是属于通用模型的一年!随着多模态预训练的蓬勃发展,尤其是通用模型,我们看到实现一个具有处理多种模态的多种任务的能力的通用模型的机会。因此我们提出OFA[^1],即One-For-All。它是一个统一的多模态预训练模型,以统一的模型架构和任务形式兼容多模态和单模态的理解与生成任务。我们使用多模态多任务的方式预训练OFA,使其成为一个接近全能的模型。我们将OFA的模型和代码全部开源到社</p>
<p><a href="https://arxiv.org/abs/2202.03052">论文</a> | <a href="https://github.com/OFA-Sys/OFA">GitHub</a> | <a href="https://www.modelscope.cn/models?name=ofa">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/OFA-Generic_Interface">体验</a></p>
</description>
<link>https://qwen.ai/blog?id=ofa</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofa</guid>
<pubDate>Mon, 14 Nov 2022 08:01:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
</channel>
</rss>
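The feeds above can be consumed with any standard RSS client or parser. As a minimal sketch of what a subscriber sees, the snippet below parses a trimmed stand-in for the route's output (one `<item>` copied from the English feed) with Python's standard-library `xml.etree.ElementTree`; in practice you would fetch `http://localhost:1200/qwen/research` over HTTP instead of using an inline string.

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for the output of http://localhost:1200/qwen/research;
# the single <item> is copied from the feed shown in this PR.
FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research</title>
<link>https://qwen.ai/research</link>
<item>
<title>Introducing Qwen2-Math</title>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<category>Release</category>
</item>
</channel>
</rss>"""

def parse_items(feed_xml: str) -> list[dict]:
    """Return one dict per <item>, keyed by child tag name."""
    root = ET.fromstring(feed_xml)
    return [
        {child.tag: (child.text or "") for child in item}
        for item in root.iter("item")
    ]

items = parse_items(FEED)
print(items[0]["title"], "-", items[0]["category"])
```

This only exercises tags that carry no XML namespace (`title`, `link`, `guid`, `pubDate`, `category`); namespaced elements such as `atom:link` would need the usual `{uri}tag` handling in ElementTree.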
Contributor
http://localhost:1200/qwen/research/zh-cn/Open-Source - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究 - Open-Source</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn/Open-Source" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Open-Source - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:37 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: 一致性再提升</title>
<description><p>我们很高兴推出 Qwen-Image-Edit-2511,相比于Qwen-Image-Edit-2509,进行了包括一致性提升在内的多项增强。如需体验最新模型,欢迎访问 <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> 并选择“图像编辑”功能。注意,线上版本有一定优化加速,如果要获取模型最佳效果,可以去 <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2511">ModelScope</a> 本地部署以获取最佳性能。Qwen-Image-Edit-2511 的主要特性包括:**</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-VL:明察、深思、广行</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>今天,我们正式推出全新升级的 <strong>Qwen3-VL</strong> 系列——这是迄今为止 Qwen 系列中最强大的视觉语言模型。在这一代模型中,我们在多个维度实现了全面跃升:无论是纯文本理解与生成,还是视觉内容的感知与推理;无论是上下文长度的支持能力,还是对空间关系、动态视频的理解深度;乃至在与Agent交互中的表现,Qwen3-VL 都展现出显著进步。今天,我们率先开源的是该系列的旗舰模型 —— **Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3Guard: 实时安全,逐词响应</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>我们隆重推出 Qwen3Guard —— Qwen 家族中首款专为安全防护设计的护栏模型。该模型基于强大的 Qwen3 基础架构打造,并针对安全分类任务进行了专项微调,旨在为人工智能交互提供精准、可靠的安全保障。无论是用户输入的提示,还是模型生成的回复,Qwen3Guard 均可高效识别潜在风险,输出细粒度的风险等级与分类标签,助力实现更负责任的 AI 应用。在多项主流安全评测基准上,Qwen3G</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: 多图编辑支持,单图一致性提升</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>这个9月,我们很高兴推出 Qwen-Image-Edit-2509,作为 Qwen-Image-Edit 的月迭代版本。如需体验最新模型,欢迎访问 <a href="https://qwen.ai/">Qwen Chat</a> 并选择“图像编辑”功能。相比于8月发布的 Qwen-Image-Edit,Qwen-Image-Edit-2509 的主要特性包括:**Qwen-Image-Edit-2509 的首要更新是支</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni:新一代原生全模态大模型!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong>是新一代原生全模态大模型,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音输出。我们引入了多种升级来提升模型表现和效率。主要特点:Qwen3-Omni采用Thinker-Talker架构:Thinker负责文本生成,Talker专注于流式语音Token生成,直接接收来自Thinker的高层语义表征。为实现超低延迟流式生成,Tal</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash:多音色 & 多语言 & 多方言的语音合成</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> 是支持多音色、多语言和多方言的旗舰语音合成模型,旨在生成自然且具有表现力的语音,目前可通过<a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>访问。主要特点:这里有一些样例展示了单说话人的多语种生成能力:这里有一些样例展示了中英文的音色:这里有一些样例展示了方言的音色:这里有一些样例展示了混</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next:迈向更极致的训练推理性价比</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>我们认为<strong>Context Length Scaling</strong>和<strong>Total Parameter Scaling</strong>是未来大模型发展的两大趋势,为了进一步提升模型在长上下文和大规模总参数下的训练和推理效率,我们设计了全新的Qwen3-Next的模型结构。该结构相比Qwen3的MoE模型结构,进行了以下核心改进:<strong>混合注意力机制</strong>、<strong>高稀疏度 MoE 结构</strong>、一系列<strong>训练稳定友好的优化</strong></p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#2c9c4628c9yyd">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3:思深,行速</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>今天,我们宣布推出 <strong>Qwen3</strong>,这是 Qwen 系列大型语言模型的最新成员。我们的旗舰模型 <strong>Qwen3-235B-A22B</strong> 在代码、数学、通用能力等基准测试中,与 DeepSeek-R1、o1、o3-mini、Grok-3 和 Gemini-2.5-Pro 等顶级模型相比,表现出极具竞争力的结果。此外,小型 MoE 模型 <strong>Qwen3-30B-A3B</strong> 的激活参数数量是 QwQ</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 Omni:看得见、听得到、会说话、能写作,样样精通!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>我们发布了 <strong>Qwen2.5-Omni</strong>,Qwen 模型家族中新一代端到端多模态旗舰模型。该模型专为全方位多模态感知设计,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音合成输出。想要体验最新的模型,请访问 <a href="https://chat.qwenlm.ai/">Qwen Chat</a> 并选择Qwen2.5-Omni-7B。该模型现已在 [Hugging Fa</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen2.5-Omni-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: 更聪明、更轻量!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>今年一月底,我们推出了 Qwen2.5-VL 系列模型,获得了社区的广泛关注和积极反馈。在 Qwen2.5-VL 系列的基础上,我们使用强化学习持续优化模型,并使用 Apache 2.0 协议开源 32B 这个备受喜爱的参数规模的新 VL 模型—— <strong>Qwen2.5-VL-32B-Instruct</strong>。相比此前发布的 Qwen2.5-VL 系列模型,本次推出的 32B 模型的特点如下:我们与业内</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: 领略强化学习之力</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>大规模强化学习(RL)有潜力超越传统的预训练和后训练方法来提升模型性能。近期的研究表明,强化学习可以显著提高模型的推理能力。例如,DeepSeek R1 通过整合冷启动数据和多阶段训练,实现了最先进的性能,使其能够进行深度思考和复杂推理。这一次,我们探讨了大规模强化学习(RL)对大语言模型的智能的提升作用,同时很高兴推出我们最新的推理模型 QwQ-32B。这是一款拥有 320 亿参数的模型,其性能</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-1M: 支持100万Token上下文的开源Qwen模型</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>两个月前,我们升级了 <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a>,使其支持最多一百万个Tokens的上下文长度。今天,我们正式推出开源的 Qwen2.5-1M 模型及其对应的推理框架支持。以下是本次发布的亮点:现在,你可以访问我们在 <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">Huggingface</a> 和 [Mo</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL!Qwen2.5 VL!Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>我们发布了 <strong>Qwen2.5-VL</strong>,Qwen 模型家族的旗舰视觉语言模型,对比此前发布的 Qwen2-VL 实现了巨大的飞跃。欢迎访问 <a href="https://chat.qwenlm.ai/">Qwen Chat</a> 并选择 Qwen2.5-VL-72B-Instruct 进行体验。此外,我们在 [Hugging Face](https://huggingface.co/collections/Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder 全系列: 强大、多样、实用。</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>今天,我们很高兴开源「强大」、「多样」、「实用」的 Qwen2.5-Coder 全系列模型,致力于持续推动 Open CodeLLMs 的发展。另外,Qwen2.5-Coder-32B-Instruct 的多编程语言代码修复能力同样令人惊喜,这将有助于用户理解和修改自己熟悉的编程语言,极大缓解陌生语言的学习成本。与 McEval 类似,MdEval 是多编程语言的代码修复基准,Qwen2.5-Co</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: 基础模型大派对!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/qwen2.5-main.jpg" referrerpolicy="no-referrer"><p>在 Qwen2 发布后的过去三个月里,许多开发者基于 Qwen2 语言模型构建了新的模型,并为我们提供了宝贵的反馈。在这段时间里,我们专注于创建更智能、更博学的语言模型。今天,我们很高兴地向大家介绍 Qwen 家族的最新成员:<strong>Qwen2.5</strong>。我们将要宣布的可能是历史上最大的开源发布!让我们开始这场盛会吧!我们的最新发布包括了语言模型 <strong>Qwen2.5</strong>,以及专门针对编程的 **Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder: 码无止境,学无止境!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>四月初,我们发布了 CodeQwen1.5, 得到了社区广泛的关注与喜爱。自那以后,我们一直在继续努力提升代码模型。今天,我们很高兴地宣布新一代的开放代码模型 Qwen2.5-Coder 的发布。并正式将 CodeQwen 的命名改为 Qwen-Coder,我们认为 Coder 更加拟人、灵动,期待其可以在未来真正与人类结对编程。Qwen2.5-Coder 是我们 Qwen2.5 开源家族的一员,</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: 世界领先的数学开源大语言模型</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p>**🚨 Qwen2.5-Math主要被设计用于通过CoT或TIR的方式解中英数学题,我们不推荐在其他任务上使用该系列模型。**一个月前,我们开源了 Qwen 家族的第一款数学专项大语言模型- <a href="https://qwenlm.github.io/blog/qwen2-math/">Qwen2-Math</a>。 今天,我们将它再度升级并开源 <strong>Qwen2.5-Math</strong> 系列,包括基础模型 **Qw</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: 更清晰地看世界</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>经历了接近一年时间的持续努力,今天我们很高兴地宣布我们最新一代的视觉语言模型:<strong>Qwen2-VL</strong> !Qwen2-VL 基于 Qwen2 打造,相比 Qwen-VL,它具有以下特点:我们以 Apache 2.0 协议开源了 Qwen2-VL-2B 和 Qwen2-VL-7B,并发布了 Qwen2-VL-72B 的 API!开源代码已集成到 Hugging Face Transformers、v</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio:开启语音对话!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>在一个通用的AI系统中,核心模型应该能够理解不同模态的信息。当前的大语言模型现在已经能够理解语言并进行推理,并且已经扩展到了更多的模态,包括视觉和音频。此前我们陆续发布了多个 Qwen 语言模型系列以及 Qwen-VL 和 Qwen-Audio 等多模态模型。今天,我们正式发布 Qwen2-Audio。这是 Qwen-Audio 的下一代版本,它能够接受音频和文本输入,并生成文本输出。Qwen2-</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:22:39 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>你好,Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>历经数月努力, 我们很高兴迎来了Qwen系列模型从Qwen1.5到Qwen2的重大升级。这一次,我们为大家带来了:目前,我们已在Hugging Face和ModelScope上同步开源。期待听到你们的使用反馈!Qwen2系列包含5个尺寸的预训练和指令微调模型,其中包括Qwen2-0.5B、Qwen2-1.5B、Qwen2-7B、Qwen2-57B-A14B和Qwen2-72B。如下表所示:在Qwe</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>与 CodeQwen1.5 结对编程</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>代码助手,是一种基于 LLMs 的智能化的编程工具,它可以帮助程序员更高效、更准确的编写代码,使得整个软件开发过程更加流畅和高效。然而流行的代码助手,比如 Github Copilot,依赖于闭源的商业模型,不仅昂贵还会引起如隐私、安全、版权等方面的担忧。幸运的是,开源社区正在致力于打造开放代码模型来实现开放的代码助手。近期涌现出了一批优秀的 Open CodeLLMs,比如 StarCoder2</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen1.5 介绍</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>最近几个月,我们专注探索如何构建一个真正「卓越」的模型,并在此过程中不断提升开发者的使用体验。农历新年到来之际,我们推出通义千问开源模型 1.5 版本: <strong>Qwen1.5</strong>。我们开源了包括 0.5B、1.8B、4B、7B、14B、32B、72B 和 110B 共计 8 个不同规模的 Base 和 Chat 模型,, 以及一个 MoE 模型(点击[博客](https://qwenlm.githu</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
</channel>
</rss>

http://localhost:1200/qwen/research/zh-cn/Release - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究 - Release</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn/Release" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Release - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 06:42:38 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen3-TTS 全面升级: 音色设计与音色克隆!</title>
<description><p><strong>Qwen3-TTS</strong> 家族新推出两款模型,音色创造模型Qwen3-TTS-VD-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-design">Qwen API</a>访问)和音色克隆模型Qwen3-TTS-VC-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-cloning">Qwen API</a>访问)。主要特点:Qwen3-TTS 支持通过自然语言描述生成定制化的音色形象。用户可以随意输入声</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01:声形意合,令出智随!</title>
<description><p><strong>Qwen3-Omni</strong>是新一代原生全模态大模型,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音输出。我们引入了多种升级来提升模型表现和效率。<strong>Qwen3-Omni-Flash-2025-12-01</strong>是在Qwen3-Omni基础上进行全面升级的版本。此次升级版本主要特点为:在客观性能指标上,<strong>Qwen3-Omni-Flash-2025-12-01</strong>全模态能力全面跃升,各项能力均显著超越Qwen3-Omni-Flash:此次升级,让 Qwen3-Omni-Flash-20251201 在全模态场景下真正做到“声形意合,令出智随”,为用户带来</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-TTS 全面升级!49种音色 + 10种语言 + 9种方言</title>
<description><p><strong>Qwen3-TTS</strong> 是支持多音色、多语种和多方言的旗舰语音合成模型,致力于实现稳定、自然和高效的语音生成,目前可通过<a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>访问。主要改进:Qwen3-TTS 提供了个性鲜明、情感饱满的多元声音形象供用户选择,可满足多样化的场景需求。以下是一些合成样音:Qwen3-TTS 深度支持多种汉语方言表达,精准还原口音语调与地域韵味。以下是一些合成样音:Qwen3-TTS 同样支持了地道自然的多语种音色,发声习惯更贴近母语表达。以下是一些合成样例:通过 Qwen API 使用 Qwe</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: 当灵感不再需要理由</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">点我体验最新 Qwen DeepResearch</a>_<strong>灵感是如何死掉的?</strong>_它通常不是死于“不够好”,而是死于“太麻烦”。当一个念头闪现时,它还是脆弱的、未经证实的。我们的大脑在短暂兴奋后,会立刻开始评估“成本”:就在这个“成本评估”的瞬间,绝大多数灵感就被“理性”地扼杀了。我们下意识地回避了它,因为“深入研究”的传统门槛实在太高。我们一直在思考,如何让“深入研究”不再是一个需要启动的重型任务,而是成为思考的自然延伸。**这就是 Qwen DeepResearch 诞生的使命。**我们想做</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max:大就是好</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>继 Qwen3-2507 系列发布之后,我们非常高兴地推出 Qwen3-Max —— 我们迄今为止规模最大、能力最强的模型。目前,Qwen3-Max-Instruct 的预览版在 LMArena 文本排行榜上位列第三,超越了 GPT-5-Chat。正式版本在代码能力和智能体(agent)能力方面进一步提升,在涵盖知识、推理、编程、指令遵循、人类偏好对齐、智能体任务和多语言理解的全面基准测试中均达</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#qwen-max-cn-bj">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-LiveTranslate:视、听、说全模态同传大模型!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-LiveTranslate-Flash</strong> 是一款基于大语言模型的高精度、高响应、高鲁棒性的多语言实时音视频同传模型。依托Qwen3-Omni强大的基座能力、海量多模态数据、百万小时音视频数据,Qwen3-LiveTranslate-Flash 实现了覆盖18种语言的离线和实时两种音视频翻译能力。核心亮点:在公开测试集上中英及多语言语音翻译,Qwen3-LiveTranslate-</p>
<p><a href="https://help.aliyun.com/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>旅行规划师:你的专属智能行程设计师</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/zn_q1.png" referrerpolicy="no-referrer"><p>我们非常高兴推出全新的<strong>旅行规划助手</strong>,这是一个基于 <strong>Multi-Agent 架构</strong> 并具备强大 <strong>真实工具调用能力</strong> 的旅行规划系统,能够高效应对复杂、多变的行程安排任务。无论你计划的是多城市连线旅行,还是单城深度游,它都能为你提供精准、可落地的旅行方案:旅行规划是一项系统工程,涵盖交通、景点、住宿、用餐等环节,它们环环相扣、相互影响,任何单一 Agent 都难以全面驾驭其中的复杂</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>我们通过 <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a> 更新了 <strong>Qwen-TTS</strong> ( <code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code> ) 的最新版本。Qwen-TTS 使用了超过 300 万小时的大规模语料库进行训练,合成效果实现了人类级别的自然度和表现力。比较亮眼的是,Qwe</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:34 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: 从“看懂”世界到“描绘”世界</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>多模态大模型的演进正在不断突破我们对技术边界的认知。从最初的 QwenVL 到如今的 Qwen2.5 VL ,我们在提升模型对图像内容的理解能力方面取得了一些进展。今天,我们正式推出 Qwen VLo ——一个多模态统一理解与生成模型。这一全新升级的模型不仅能够“看懂”世界,更能基于理解进行高质量的再创造,真正实现了从感知到生成的跨越。需要注意的是,这是一款预览版本,您可以通过 Qwen Chat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding:新一代文本表征与排序模型</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>我们正式发布 Qwen3 Embedding 系列模型, Qwen 模型家族的新成员。该系列模型专为文本表征、检索与排序任务设计,基于 Qwen3 基础模型进行训练,充分继承了 Qwen3 在多语言文本理解能力方面的优势。在多项基准测试中,Qwen3 Embedding 系列在文本表征和排序任务中展现了卓越的性能。我们使用了 Apache 2.0 协议在 Hugging Face 和 ModelS</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ-Max:有依据地思考</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>去年12月,我们推出了 QVQ-72B-Preview, 作为一个探索模型,它存在很多问题。今天,我们正式推出 QVQ-Max 视觉推理模型的第一版。这款模型的特点是,它不仅能够“看懂”图片和视频里的内容,还能结合这些信息进行分析、推理,甚至给出解决方案。从数学题到生活小问题,从编程代码到艺术创作,QVQ-Max 都表现出了不俗的能力。虽然这只是我们的第一个版本,但它的潜力已经让人眼前一亮。Mat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-zh.jpg" referrerpolicy="no-referrer"><p>这篇博客出自 QwQ-Max-Preview 之手。希望各位看官喜欢!我们很高兴向大家介绍 QwQ-Max-Preview,这是 Qwen 系列的最新成果。这一版本基于 Qwen2.5-Max 构建,在数学、编程以及通用任务中展现了更强的能力,同时在与 Agent 相关的工作流中也有不错的表现。作为即将发布的 QwQ-Max 的预览版,这个版本还在持续优化中。我们计划在不久的将来以 Apache</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max:探索大规模 MoE 模型的智能</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>过去有一种观点认为,持续地增长数据规模和模型参数规模是一种通向 AGI 的可能的路径。然而,整个大模型社区对于训练超大规模的模型的经验都相对匮乏,不论是稠密模型还是 MoE 模型。近期,DeepSeek V3 的发布让大家了解到超大规模 MoE 模型的效果及实现方法,而同期,Qwen 也在研发超大规模的 MoE 模型 Qwen2.5-Max,使用超过 20 万亿 token 的预训练数据及精心设计</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/getting-started/first-api-call-to-qwen?spm=a2c63.p38356.help-menu-2400256.d_0_1_0.1f6574a72ddbKE">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>面向有效的数学推理过程监督</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>近年来,大型语言模型(LLMs)在数学推理方面取得了显著进展,但它们仍可能犯错误,如计算错误或逻辑错误,导致得出错误结论。<br>
此外,即使最终答案正确,这些强大的模型也经常编造看似合理的推理步骤,其中最终答案基于有缺陷的计算或推导过程,这削弱了LLMs推理过程的可靠性和可信度。<br>
因此,自动识别推理过程中的错误对于其可扩展监督变得越来越重要。过程奖励模型(Process Reward Models, P</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: 更睿智地看世界</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>在人类的思维中,语言和视觉紧密交织,塑造着我们感知和理解世界的方式。我们的推理能力深深植根于语言思维和视觉记忆之中。那么,当我们将这些能力赋予人工智能时,会发生什么呢?如今的大语言模型已经展现出卓越的推理能力,但我们不禁思考:它们能否通过掌握视觉理解的力量,攀登认知能力的新高峰?设想一下,一个人工智能能够像物理学大师一样,面对复杂的物理问题,沉着冷静地通过逻辑推理找到解决方案。正是这样的愿景激发我</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: 思忖未知之界</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p>*注意:QwQ 的发音为 /kwju:/ ,与单词 "quill" 的读音近似。*思考、质疑、理解,是人类探索未知的永恒追求。在这条探索之路上,QwQ犹如一位怀抱无尽好奇的学徒,以思考和疑问照亮前路。QwQ体现了古老的哲学精神:它深知自己一无所知,而这种认知正是其好奇心的源泉。在探寻答案的过程中,它始终保持自省,以理性之光审视每一个假设,在不同的思维维度中穿行,追寻更深层的真理。然而,正如所有智慧</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>将上下文长度扩展至百万 Tokens !</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_cn.png" referrerpolicy="no-referrer"><p>在 Qwen2.5 发布之后,我们听到社区对处理更长序列的需求。在这段时间,我们针对长序列处理能力以及长序列下的推理效率进行了很多优化。今天,我们隆重推出新的 Qwen2.5-Turbo 版本,其特点在于:现在,你可以通过[阿里云大模型服务平台](https://help.aliyun.com/zh/model-studio/developer-reference/what-is-qwen-llm</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API文档</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2-Math,新一代数学模型</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p>**🚨 此模型目前主要支持英语。我们将尽快推出中英双语版本。**在过去的一年里,我们非常关注大模型的推理能力的提升,尤其关注其在数学相关的任务上的表现。今天,我们非常高兴地介绍 Qwen2 开源家族的新成员——Qwen2-Math-1.5B/7B/72B 系列。Qwen2-Math 是一系列基于 Qwen2 LLM 构建的专门用于数学解题的语言模型,其数学能力显著超越了开源模型,甚至超过了闭源模</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-Max-0428模型介绍</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>此前,我们开源了Qwen1.5系列的模型,参数规模最小至5亿,最大至1100亿。这一次,我们推出更大规模模型Qwen-Max-0428(通义千问网页端及APP产品版本从2.1升级至2.5)。Qwen-Max-0428是经过指令微调的Chat模型。近期该模型登陆了<a href="https://chat.lmsys.org/">Chatbot Arena</a>,并登榜前十。此外,我们在MT-Bench的评测上也观察到</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B:Qwen1.5系列的首个千亿参数开源模型</title>
<description><p>近期开源社区陆续出现了千亿参数规模以上的大模型,这些模型都在各项评测中取得杰出的成绩。今天,我们开源1100亿参数的Qwen1.5系列首个千亿参数模型Qwen1.5-110B,该模型在基础能力评估中与Meta-Llama3-70B相媲美,在Chat评估中表现出色,包括MT-Bench和AlpacaEval 2.0。Qwen1.5-110B与其他Qwen1.5模型相似,采用了相同的Transform</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-32B:Qwen1.5语言模型系列的最后一块拼图</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>开源社区长期以来一直在寻求一种能在性能、效率和内存占用之间达到理想平衡的模型。尽管出现了诸如Qwen1.5-72B和DBRX这样的SOTA模型,但这些模型持续面临诸如内存消耗巨大、推理速度缓慢以及显著的微调成本等问题。当前,参数量约30B的模型往往在这方面被看好,得到很多用户的青睐。顺应这一趋势,我们推出Qwen1.5语言模型系列的最新成员:Qwen1.5-32B和Qwen1.5-32B-Chat</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: 1/3的激活参数量达到7B模型的性能</title>
<description><p>今天,我们推出Qwen系列的首个MoE模型,Qwen1.5-MoE-A2.7B。它仅拥有27亿个激活参数,但其性能却能与当前最先进的70亿参数模型,如Mistral 7B和Qwen1.5-7B相媲美。相较于包含65亿个Non-Embedding参数的Qwen1.5-7B,Qwen1.5-MoE-A2.7B只有20亿个Non-Embedding参数,约为原模型大小的三分之一。此外,相比Qwen1.5</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-VL全新升级!</title>
<description><p>我们在 Qwen 语言模型的基础上,结合此前我们提出的多模态多任务训练,以解决多模态模型在泛化能力上的局限性,并于 2023 年 9 月开源了多模态模型 Qwen-VL。最近,Qwen-VL 系列有了重大升级,推出了两个增强版本:Qwen-VL-Plus 和 Qwen-VL-Max。这两个版本的关键提升包括:相比于开源版本的 Qwen-VL,这两个模型在多个文本-图像多模态任务中与 Gemini</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen介绍</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>四个月前,我们首次发布Qwen-7B大型语言模型(LLM),正式开启了我们的开源之旅。今天,我们介绍Qwen开源家族,更全面的展示我们的工作和目标。下面是开源项目和社区的重要链接。Additionally, we have WeChat groups for chatting and we invite you to join the groups through the provided lin</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
</channel>
</rss>
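For reference, the two test URLs above suggest the route takes optional language and category path segments. A minimal sketch of that URL structure (the helper name and the segment names `lang`/`category` are assumptions inferred from the examples, not from the route code):

```python
def qwen_research_route(lang=None, category=None):
    # Hypothetical helper mirroring the apparent route pattern
    # /qwen/research/:lang?/:category? on a local RSSHub instance.
    parts = ["qwen", "research"]
    if lang:
        parts.append(lang)
    if category:
        parts.append(category)
    return "http://localhost:1200/" + "/".join(parts)

print(qwen_research_route())                    # → http://localhost:1200/qwen/research
print(qwen_research_route("zh-cn", "Release"))  # → http://localhost:1200/qwen/research/zh-cn/Release
```

Both outputs match the feed URLs exercised in the test runs above.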
Contributor

Auto Review: No clear rule violations found in the current diff.
Contributor
Successfully generated as follows:

http://localhost:1200/qwen/research - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:17 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: Improve Consistency</title>
<description><p>We are excited to introduce Qwen-Image-Edit-2511, an enhanced version over Qwen-Image-Edit-2509, featuring multiple improvements—including notably better consistency. To try out the latest model, please visit <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> and select the Image Editing fea</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS Steps Up: Voice Cloning and Voice Design!</title>
<description><p><strong>Qwen3-TTS</strong> family has launched two new models: the voice design model Qwen3-TTS-VD-Flash (accessible via the <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts-voice-design">Qwen API</a>) and the voice cloning model Qwen3-TTS-VC-Flash (accessible via the [Qwen API](https://www.alibabacloud.</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-Image-Layered: Layered Decomposition for Inherent Editability</title>
<description><p>Today, we are excited to introduce Qwen-Image-Layered, a model capable of decomposing an image into multiple RGBA layers. This layered representation unlocks inherent editability: each layer can be independently manipulated without affecting other content. Meanwhile, such a layered representation na</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01: Hear You. See You. Follow Smarter!</title>
<description><p><strong>Qwen3-Omni</strong> is a next-generation native multimodal large model capable of seamlessly processing multiple input modalities—including text, images, audio, and video—and generating both text and natural-sounding speech outputs simultaneously via real-time streaming responses. This version introduces</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>SAPO: A Stable and Performant Reinforcement Learning Method for Training Large Language Models</title>
<description><p>Reinforcement learning (RL) has become a core ingredient in advancing the reasoning capabilities of large language models (LLMs). Modern RL pipelines enable models to solve harder mathematical problems, write complex code, and reason over multimodal inputs. In practice, group‑based policy optimizati</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-TTS Update! 49 Timbres + 10 Languages + 9 Dialects</title>
<description><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available via <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>. Major Improvements: Qwen3-TTS offers</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: When Inspiration Becomes Its Own Reason</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">Click here to experience the latest Qwen DeepResearch</a> <strong>How does inspiration die?</strong> It usually doesn’t die from “not being good enough”, but from being “too much trouble”. When a thought flashes, it’s still fragile and unverified. After a brief mome</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max: Just Scale it</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>Following the release of the Qwen3-2507 series, we are thrilled to introduce Qwen3-Max — our largest and most capable model to date. The preview version of Qwen3-Max-Instruct currently ranks third on</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c2d5833ae4jmo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3‑LiveTranslate: Real‑Time Multimodal Interpretation — See It, Hear It, Speak It!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3‑LiveTranslate‑Flash</strong> delivers high‑precision, lightning‑fast and ultra‑reliable real‑time multilingual audio and video interpretation. With the extensive capabilities of Qwen3‑Omni and traini</p>
<p><a href="https://www.alibabacloud.com/help/en/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-VL: Sharper Vision, Deeper Thought, Broader Action</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>Today, we officially launch the all-new Qwen3-VL series — the most powerful vision-language model in the Qwen family to date. In this generation, we’ve made major improvements across multiple dimensio</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Travel Planner: Your Smart Travel Designer</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/en_q1.png" referrerpolicy="no-referrer"><p>We are excited to introduce our <strong>brand-new Travel Planning Assistant</strong>, a powerful system built on a <strong>Multi-Agent architecture</strong> with robust <strong>real-world tool-calling capabilities</strong>. It is designed</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3Guard: Real-time Safety for Your Token Stream</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen3Guard, the first safety guardrail model in the Qwen family. Built upon the powerful Qwen3 foundation models and fine-tuned specifically for safety classification, Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: Multi-Image Support, Improved Consistency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit. To experience the latest model, please visit <a href="https://qwen.ai/">Qwen Chat</a> and select the "I</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni: Natively Omni-Modal Foundation Models!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong> is the natively end-to-end multilingual omni model. It processes text, images, audio, and video, and delivers real-time streaming responses in both text and natural speech. We introduce</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash: Multi-timbre & Multi-lingual & Multi-dialect Speech Synthesis.</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available</p>
<p><a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next: Towards Ultimate Training & Inference Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>We believe that <strong>Context Length Scaling</strong> and <strong>Total Parameter Scaling</strong> are two major trends in the future of large models. To further improve training and inference efficiency under long-context a</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c5414da58bjgj">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3 ASR: Hear clearly, transcribe smartly.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear2.png#center" referrerpolicy="no-referrer"><p>We introduce Qwen3-ASR-Flash, a speech recognition service built upon the strong intelligence of Qwen3-Omni and large amount of multi-modal data especially ASR data on the scale of tens of millions ho</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 06:38:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: Image Editing with Higher Quality and Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image-Edit, the image editing version of Qwen-Image. Built upon our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image's unique text rendering capab</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image: Crafting with Native Text Rendering</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>We are thrilled to release <strong>Qwen-Image</strong>, a 20B MMDiT image foundation model that achieves significant advances in complex text rendering and precise image editing. To try the latest model, feel free</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO: Towards Scalable Reinforcement Learning for Language Models</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>Reinforcement Learning (RL) has emerged as a pivotal paradigm for scaling language models and enhancing their deep reasoning and problem-solving capabilities. To scale RL, the foremost prerequisite is</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT: Where Speed Meets Smart Translation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>Here we introduce the latest update of Qwen-MT (qwen-mt-turbo) via [Qwen API](https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen3-MT-Demo">DEMO</a> | <a href="https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail/2840914_2.html&amp;renderType=component&amp;modelId=qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: Agentic Coding in the World</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>Here we introduce the latest update of <strong>Qwen-TTS</strong> (<code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code>) through <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a> . Trained on a large-scale dataset</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: From "Understanding" the World to "Depicting" It</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>The evolution of multimodal large models is continually pushing the boundaries of what we believe technology can achieve. From the initial QwenVL to the latest Qwen2.5 VL, we have made progress in enh</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen3 Embedding series</strong>, a new proprietary model of the Qwen model family. These models are specifically designed for <strong>text embedding</strong>, <strong>retrieval</strong>, and <strong>reranking</strong> tasks, built on</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3: Think Deeper, Act Faster</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>Today, we are excited to announce the release of <strong>Qwen3</strong>, the latest addition to the Qwen family of large language models. Our flagship model, <strong>Qwen3-235B-A22B</strong>, achieves competitive results in be</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QVQ-Max: Think with Evidence</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>Last December, we launched QVQ-72B-Preview as an exploratory model, but it had many issues. Today, we are officially releasing the first version of QVQ-Max, our visual reasoning model. This model can</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5 Omni: See, Hear, Talk, Write, Do It All!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-Omni</strong>, the new flagship end-to-end multimodal model in the Qwen series. Designed for comprehensive multimodal perception, it seamlessly processes diverse inputs including text, i</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Omni-7B-Demo">DEMO</a> | <a href="https://discord.com/invite/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: Smarter and Lighter</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>At the end of January this year, we launched the Qwen2.5-VL series of models, which received widespread attention and positive feedback from the community. Building on the Qwen2.5-VL series, we contin</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: Embracing the Power of Reinforcement Learning</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>Scaling Reinforcement Learning (RL) has the potential to enhance model performance beyond conventional pretraining and post-training methods. Recent studies have demonstrated that RL can significantly</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-en.jpg" referrerpolicy="no-referrer"><p>This is a blog created by QwQ-Max-Preview. We hope you enjoy it! We’re happy to unveil QwQ-Max-Preview, the latest advancement in the Qwen series, designed to push the boundaries of deep reasoning and</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>It is widely recognized that continuously scaling both data size and model size can lead to significant improvements in model intelligence. However, the research and industry community has limited exp</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/developer-reference/what-is-qwen-llm">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>Two months after upgrading <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a> to support context length up to one million tokens, we are back with the open-source Qwen2.5-1M models and the corresponding inference fram</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL! Qwen2.5 VL! Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-VL</strong>, the new flagship vision-language model of Qwen and also a significant leap from the previous Qwen2-VL. To try the latest model, feel free to visit [Qwen Chat](https://chat.q</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Global-batch load balance almost free lunch to improve your MoE LLM training</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>The Mixture-of-Experts (MoEs) architecture has become a popular model-parameter-scale-up technique. Typically, one MoE layer consists of a router (often parameterized as one single Linear layer) and a</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Towards Effective Process Supervision in Mathematical Reasoning</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>In recent years, Large Language Models (LLMs) have made remarkable advances in mathematical reasoning, yet they can make mistakes, such as miscalculations or logical errors, leading to wrong conclusio</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: To See the World with Wisdom</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>Language and vision intertwine in the human mind, shaping how we perceive and understand the world around us. Our ability to reason is deeply rooted in both linguistic thought and visual memory - but</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: Reflect Deeply on the Boundaries of the Unknown</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p><em>Note: This is the pronunciation of QwQ: /kwju:/, similar to the word "quill".</em> What does it mean to think, to question, to understand? These are the deep waters that QwQ (Qwen with Questions) wades i</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Extending the Context Length to 1M Tokens!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_en.png" referrerpolicy="no-referrer"><p>After the release of Qwen2.5, we heard the community's demand for processing longer contexts. In recent months, we have made many optimizations for the model capabilities and inference performance of</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API Documentation (Chinese)</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Coder Series: Powerful, Diverse, Practical.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>Today, we are excited to open source the "Powerful", "Diverse", and "Practical" Qwen2.5-Coder series, dedicated to continuously promoting the development of Open CodeLLMs. Additionally, the multi-langu</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: A Party of Foundation Models!</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5%20modelcard.001.jpeg" referrerpolicy="no-referrer"><p>In the past three months since Qwen2's release, numerous developers have built new models on the Qwen2 language models, providing us with valuable feedback. During this period, we have focused on crea</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-LLM: Extending the boundary of LLMs</title>
<description><p>In this blog, we delve into the details of our latest Qwen2.5 series language models. We have developed a range of decoder-only dense models, with seven of them open-sourced, spanning from 0.5B to 72B</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-Coder: Code More, Learn More!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>In early April, we introduced CodeQwen1.5, which garnered significant attention from the community. Since then, we have been working to enhance the coding model. Today, we are excited to announce the</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: The world's leading open-sourced mathematical LLMs</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p><strong>🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.</strong> A month ago, we released the first se</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: To See the World More Clearly</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>After a year's relentless efforts, today we are thrilled to release <strong>Qwen2-VL</strong>! Qwen2-VL is the latest version of the vision language models based on <strong>Qwen2</strong> in the Qwen model families. Compared</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio: Chat with Your Voice!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>To achieve the objective of building an AGI system, the model should be capable of understanding information from different modalities. Thanks to the rapid development of large language models, LLMs a</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:18:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Introducing Qwen2-Math</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p><strong>🚨 This model mainly supports English. We will release bilingual (English and Chinese) math models soon.</strong> Over the past year, we have dedicated significant effort to researching and enhancing the re</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Hello Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>After months of efforts, we are pleased to announce the evolution from Qwen1.5 to Qwen2. This time, we bring to you: We have opensourced the models in Hugging Face and ModelScope to you and we are look</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Generalizing an LLM from 8k to 1M Context using Qwen-Agent</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>TLDR:</strong> We've created an agent using Qwen2 models with an 8k context size to understand documents with 1M tokens, surpassing RAG and native long-context models. This agent was also used to generate</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Notes on Qwen-Max-0428</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>Previously, we opensourced a series of Qwen1.5 models ranging from 0.5 to 110 billion parameters. Now, we release a larger model, Qwen-Max-0428. Qwen-Max-0428 is an instruction-tuned model for chat ser</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B: The First 100B+ Model of the Qwen1.5 Series</title>
<description><p>Recently we have witnessed a burst of large-scale models with over 100 billion parameters in the opensource community. These models have demonstrated remarkable performance in both benchmark evaluatio</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Code with CodeQwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>The advent of advanced programming tools, which harnesses the power of large language models (LLMs), has significantly enhanced programmer productivity and accuracy. Notwithstanding these advancements</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen1.5-32B: Fitting the Capstone of the Qwen1.5 Language Model Series</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>The open-source community has long sought a model that strikes an ideal balance between performance, efficiency, and memory footprint. Despite the emergence of cutting-edge models like Qwen1.5-72B and</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: Matching 7B Model Performance with 1/3 Activated Parameters</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/assets/blog/qwen1.5/qwen-moe.jpg" referrerpolicy="no-referrer"><p>Since the surge in interest sparked by Mixtral, research on mixture-of-expert (MoE) models has gained significant momentum. Both researchers and practitioners are keenly interested in understanding ho</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>In recent months, our focus has been on developing a "good" model while optimizing the developer experience. As we progress towards <strong>Qwen1.5</strong>, the next iteration in our Qwen series, this update arri</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Introducing Qwen-VL</title>
<description><p>Along with the rapid development of our large language model Qwen, we leveraged Qwen’s capabilities and unified multimodal pretraining to address the limitations of multimodal models in generalization</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>4 months after our first release of Qwen-7B, which is the starting point of our opensource journey of large language models (LLM), we now provide an introduction to the Qwen series to give you a whole</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
... |
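The items above all follow the same layout: a `<title>`, a `<link>`, a non-permalink `<guid>` that mirrors the link, a `<pubDate>`, and optional `<author>`/`<category>` fields. A minimal sanity check of that layout, using only Python's standard library (the embedded snippet is abridged from the sample output above; real consumers would fetch `http://localhost:1200/qwen/research` instead):

```python
import xml.etree.ElementTree as ET

# Abridged from the sample output above; only the fields under test are kept.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Qwen Research</title>
    <link>https://qwen.ai/research</link>
    <item>
      <title>Qwen2-VL: To See the World More Clearly</title>
      <link>https://qwen.ai/blog?id=qwen2-vl</link>
      <guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
      <pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
      <category>Open-Source</category>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(SAMPLE).find("channel")
items = channel.findall("item")

# Each item carries a non-permalink GUID that matches its link,
# as in the route's output above.
for item in items:
    guid = item.find("guid")
    assert guid.get("isPermaLink") == "false"
    assert guid.text == item.find("link").text

print(channel.findtext("title"), len(items))  # Qwen Research 1
```

The same check can be pointed at the live route to verify every item, since the generator emits an identical structure for each entry.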
Contributor
http://localhost:1200/qwen/research/zh-cn - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:26 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: 一致性再提升</title>
<description><p>我们很高兴推出 Qwen-Image-Edit-2511,相比于Qwen-Image-Edit-2509,进行了包括一致性提升在内的多项增强。如需体验最新模型,欢迎访问 <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> 并选择“图像编辑”功能。注意,线上版本有一定优化加速,如果要获取模型最佳效果,可以去 <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2511">ModelScope</a> 本地部署以获取最佳性能。Qwen-Image-Edit-2511 的主要特性包括:**</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS 全面升级: 音色设计与音色克隆!</title>
<description><p><strong>Qwen3-TTS</strong> 家族新推出两款模型,音色创造模型Qwen3-TTS-VD-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-design">Qwen API</a>访问)和音色克隆模型Qwen3-TTS-VC-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-cloning">Qwen API</a>访问)。主要特点:Qwen3-TTS 支持通过自然语言描述生成定制化的音色形象。用户可以随意输入声</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-Image-Layered: 面向内在可编辑性的图层分解</title>
<description><p>今天我们很高兴推出 Qwen-Image-Layered,这是一款能够将图像分解为多个 RGBA 图层的模型。这种分层表示赋予了图像内在的可编辑性:每个图层都可以独立操作,而不会影响其他内容。同时,这种分层结构天然支持高保真的基本编辑操作,例如缩放、移动和重新着色。通过将不同元素物理地隔离到不同的图层中,我们的方法实现了高保真的编辑效果。给定一张图像,Qwen-Image-Layered 可将其分解为若干个 RGBA 图层:分解完成后,编辑操作仅作用于目标图层,将其与其他内容物理隔离,从根本上确保了编辑的一致性。例如,我们可以对第一个图层重新着色,而保持其余内容不变:我们也可以将第二个图层中的</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01:声形意合,令出智随!</title>
<description><p><strong>Qwen3-Omni</strong>是新一代原生全模态大模型,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音输出。我们引入了多种升级来提升模型表现和效率。<strong>Qwen3-Omni-Flash-2025-12-01</strong>是在Qwen3-Omni基础上进行全面升级的版本。此次升级版本主要特点为:在客观性能指标上,<strong>Qwen3-Omni-Flash-2025-12-01</strong>全模态能力全面跃升,各项能力均显著超越Qwen3-Omni-Flash:此次升级,让 Qwen3-Omni-Flash-20251201 在全模态场景下真正做到“声形意合,令出智随”,为用户带来</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>SAPO:一种稳定且高性能的大语言模型强化学习方法</title>
<description><p>强化学习(Reinforcement Learning, RL)已经成为提升大语言模型(Large Language Models, LLM)推理能力的核心技术之一。现代 RL 训练流程使模型能够解决困难的数学问题、编写复杂代码和进行多模态推理。实践中,一种被广泛采用的方法是基于组的策略优化(group‑based policy optimization):对每个提示采样多个回复,并在组内进行奖励归一化。<br>
然而,尽管该方法效果显著,稳定且高性能的策略优化仍然困难。关键挑战在于 token 级重要性比率(importance ratio)的高方差,尤其是在 MoE 模型中。该比率衡量当前策略偏离</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-TTS 全面升级!49种音色 + 10种语言 + 9种方言</title>
<description><p><strong>Qwen3-TTS</strong> 是支持多音色、多语种和多方言的旗舰语音合成模型,致力于实现稳定、自然和高效的语音生成,目前可通过<a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>访问。主要改进:Qwen3-TTS 提供了个性鲜明、情感饱满的多元声音形象供用户选择,可满足多样化的场景需求。以下是一些合成样音:Qwen3-TTS 深度支持多种汉语方言表达,精准还原口音语调与地域韵味。以下是一些合成样音:Qwen3-TTS 同样支持了地道自然的多语种音色,发声习惯更贴近母语表达。以下是一些合成样例:通过 Qwen API 使用 Qwe</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: 当灵感不再需要理由</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">点我体验最新 Qwen DeepResearch</a>_<strong>灵感是如何死掉的?</strong>_它通常不是死于“不够好”,而是死于“太麻烦”。当一个念头闪现时,它还是脆弱的、未经证实的。我们的大脑在短暂兴奋后,会立刻开始评估“成本”:就在这个“成本评估”的瞬间,绝大多数灵感就被“理性”地扼杀了。我们下意识地回避了它,因为“深入研究”的传统门槛实在太高。我们一直在思考,如何让“深入研究”不再是一个需要启动的重型任务,而是成为思考的自然延伸。**这就是 Qwen DeepResearch 诞生的使命。**我们想做</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max:大就是好</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>继 Qwen3-2507 系列发布之后,我们非常高兴地推出 Qwen3-Max —— 我们迄今为止规模最大、能力最强的模型。目前,Qwen3-Max-Instruct 的预览版在 LMArena 文本排行榜上位列第三,超越了 GPT-5-Chat。正式版本在代码能力和智能体(agent)能力方面进一步提升,在涵盖知识、推理、编程、指令遵循、人类偏好对齐、智能体任务和多语言理解的全面基准测试中均达</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#qwen-max-cn-bj">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-LiveTranslate:视、听、说全模态同传大模型!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-LiveTranslate-Flash</strong> 是一款基于大语言模型的高精度、高响应、高鲁棒性的多语言实时音视频同传模型。依托Qwen3-Omni强大的基座能力、海量多模态数据、百万小时音视频数据,Qwen3-LiveTranslate-Flash 实现了覆盖18种语言的离线和实时两种音视频翻译能力。核心亮点:在公开测试集上中英及多语言语音翻译,Qwen3-LiveTranslate-</p>
<p><a href="https://help.aliyun.com/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-VL:明察、深思、广行</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>今天,我们正式推出全新升级的 <strong>Qwen3-VL</strong> 系列——这是迄今为止 Qwen 系列中最强大的视觉语言模型。在这一代模型中,我们在多个维度实现了全面跃升:无论是纯文本理解与生成,还是视觉内容的感知与推理;无论是上下文长度的支持能力,还是对空间关系、动态视频的理解深度;乃至在与Agent交互中的表现,Qwen3-VL 都展现出显著进步。今天,我们率先开源的是该系列的旗舰模型 —— **Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>旅行规划师:你的专属智能行程设计师</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/zn_q1.png" referrerpolicy="no-referrer"><p>我们非常高兴推出全新的<strong>旅行规划助手</strong>,这是一个基于 <strong>Multi-Agent 架构</strong> 并具备强大 <strong>真实工具调用能力</strong> 的旅行规划系统,能够高效应对复杂、多变的行程安排任务。无论你计划的是多城市连线旅行,还是单城深度游,它都能为你提供精准、可落地的旅行方案:旅行规划是一项系统工程,涵盖交通、景点、住宿、用餐等环节,它们环环相扣、相互影响,任何单一 Agent 都难以全面驾驭其中的复杂</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3Guard: 实时安全,逐词响应</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>我们隆重推出 Qwen3Guard —— Qwen 家族中首款专为安全防护设计的护栏模型。该模型基于强大的 Qwen3 基础架构打造,并针对安全分类任务进行了专项微调,旨在为人工智能交互提供精准、可靠的安全保障。无论是用户输入的提示,还是模型生成的回复,Qwen3Guard 均可高效识别潜在风险,输出细粒度的风险等级与分类标签,助力实现更负责任的 AI 应用。在多项主流安全评测基准上,Qwen3G</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: 多图编辑支持,单图一致性提升</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>这个9月,我们很高兴推出 Qwen-Image-Edit-2509,作为 Qwen-Image-Edit 的月迭代版本。如需体验最新模型,欢迎访问 <a href="https://qwen.ai/">Qwen Chat</a> 并选择“图像编辑”功能。相比于8月发布的 Qwen-Image-Edit,Qwen-Image-Edit-2509 的主要特性包括:**Qwen-Image-Edit-2509 的首要更新是支</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni: A New Generation of Natively Omni-Modal Models!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong> is a new generation of natively omni-modal models that seamlessly handles text, image, audio, and video inputs while generating both text and natural speech outputs through real-time streaming responses. We introduce several upgrades to improve performance and efficiency. Key features: Qwen3-Omni adopts a Thinker-Talker architecture, in which the Thinker handles text generation while the Talker focuses on streaming speech-token generation, directly consuming high-level semantic representations from the Thinker. To achieve ultra-low-latency streaming, the Tal</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash: Multi-Timbre, Multilingual, Multi-Dialect Speech Synthesis</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> is our flagship speech-synthesis model supporting multiple timbres, languages, and dialects, designed to generate natural and expressive speech. It is currently accessible via the <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>. Key features: here are some samples showing single-speaker multilingual generation; here are some samples of Chinese and English timbres; here are some samples of dialect timbres; here are some samples of mix</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next: Toward the Ultimate in Training and Inference Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>We believe <strong>Context Length Scaling</strong> and <strong>Total Parameter Scaling</strong> are the two major trends for future large models. To further improve training and inference efficiency under long contexts and large total parameter counts, we designed the brand-new Qwen3-Next model architecture. Compared with Qwen3's MoE architecture, it introduces the following core improvements: a <strong>hybrid attention mechanism</strong>, a <strong>highly sparse MoE structure</strong>, and a series of <strong>training-stability-friendly optimizations</strong></p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#2c9c4628c9yyd">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-ASR: Hear Clearly, Transcribe Smartly</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear-zh.png#center" referrerpolicy="no-referrer"><p>Qwen3-ASR-Flash is now officially available: a speech-recognition service built on the strong intelligence of the Qwen3 foundation model, massive multimodal data, and tens of millions of hours of ASR data.<br>
Qwen3-ASR-Flash delivers highly accurate and robust speech recognition, supporting 11 languages and a variety of accents. Uniquely, Qwen3-ASR-Flash lets users supply text context in any format to obtain customized ASR results, and it also supports singing-voice recognition. <strong>📊 Performance:</strong></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 11:37:47 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: All-Round Image Editing That Powers Content Creation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image-Edit, the image-editing version of Qwen-Image. Further trained from our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image's unique text-rendering capability to the image-editing domain, enabling precise editing of text within images. In addition, Qwen-Image-Edit feeds the input image simultaneously into Qwen2.5-VL (for visual-semantic control) and the VAE Encoder</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image: A Creative Powerhouse for Text Rendering</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image, a 20B MMDiT model. It is the first image-generation foundation model in the Qwen series, achieving notable advances in complex text rendering and precise image editing. To try the latest model, please visit <a href="https://chat.qwen.ai/">Qwen Chat</a> and select the "Image Generation" feature. Key features include: we comprehensively evaluated Qwen-Image on multiple public benchmarks, including GenEval, DPG, and O</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO: Toward Continually Scalable Reinforcement Learning for Language Models</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>Reinforcement learning (RL) has become a key paradigm for scaling language models and strengthening their deep reasoning and problem-solving abilities. To keep scaling RL, the first prerequisite is a stable, robust training process. However, we observe that existing RL algorithms (such as GRPO) exhibit severe instability over long training runs and suffer irreversible model collapse, preventing further performance gains from additional compute. To scale RL continually, we propose **Group Sequ</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT: Where Speed Meets Intelligent Translation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>We have released the latest upgrade of Qwen-MT (qwen-mt-turbo) via the <a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">Qwen API</a>. Built on the powerful Qwen3 model, this update further trains the model on ultra-large-scale multilingual and translation data, comprehensively enhancing its multilingual understanding and translation capabilities, and incorporates reinforcement-learning techniques</p>
<p><a href="https://modelscope.cn/studios/Qwen/Qwen3-MT-demo">DEMO</a> | <a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: Agentic Coding in the World</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>Today we officially release Qwen3-Coder, our most agentic code model to date. Qwen3-Coder comes in multiple sizes, but we could not wait to offer the most powerful version first: Qwen3-Coder-480B-A35B-Instruct. It is a MoE model with 480B total parameters and 35B activated, natively supporting a 256K-token context extendable to 1M tokens via YaRN, with outstanding coding and Agent</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>We have updated the latest version of <strong>Qwen-TTS</strong> ( <code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code> ) via the <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>. Trained on a large-scale corpus of over 3 million hours, Qwen-TTS achieves human-level naturalness and expressiveness in its synthesized speech. Notably, Qwe</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:34 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: From Understanding the World to Depicting It</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>The evolution of multimodal large models keeps pushing the boundaries of what we think technology can do. From the original QwenVL to today's Qwen2.5 VL, we have made progress in improving models' understanding of image content. Today, we officially introduce Qwen VLo, a unified multimodal model for understanding and generation. This newly upgraded model can not only "understand" the world but also produce high-quality re-creations based on that understanding, truly bridging perception and generation. Note that this is a preview version, which you can access via Qwen Chat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding: A New Generation of Text Embedding and Reranking Models</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>We are officially releasing the Qwen3 Embedding series, the newest members of the Qwen model family. Designed for text embedding, retrieval, and reranking tasks, the series is trained on the Qwen3 foundation models and fully inherits Qwen3's strengths in multilingual text understanding. The Qwen3 Embedding series delivers excellent performance on text-embedding and reranking tasks across multiple benchmarks. We have open-sourced it under the Apache 2.0 license on Hugging Face and ModelS</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3: Think Deeper, Act Faster</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>Today, we are announcing <strong>Qwen3</strong>, the latest addition to the Qwen family of large language models. Our flagship model, <strong>Qwen3-235B-A22B</strong>, achieves highly competitive results on benchmarks for coding, math, and general capabilities compared with top models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro. In addition, the small MoE model <strong>Qwen3-30B-A3B</strong> has an activated parameter count that is QwQ</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QVQ-Max: Think with Evidence</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>Last December, we introduced QVQ-72B-Preview as an exploratory model, and it had many issues. Today, we officially release the first version of the QVQ-Max visual-reasoning model. This model can not only "understand" the content of images and videos, but also analyze and reason over that information, and even offer solutions. From math problems to everyday questions, from programming code to artistic creation, QVQ-Max has shown impressive capabilities. Although this is only our first version, its potential is already striking. Mat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Omni: It Sees, Hears, Speaks, and Writes!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>We are releasing <strong>Qwen2.5-Omni</strong>, the new end-to-end multimodal flagship model of the Qwen family. Designed for all-around multimodal perception, it seamlessly handles text, image, audio, and video inputs, and simultaneously generates text and natural synthesized speech through real-time streaming responses. To try the latest model, please visit <a href="https://chat.qwenlm.ai/">Qwen Chat</a> and select Qwen2.5-Omni-7B. The model is now available on [Hugging Fa</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen2.5-Omni-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: Smarter and Lighter!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>At the end of this January, we introduced the Qwen2.5-VL series, which received broad attention and positive feedback from the community. Building on that series, we have continued optimizing the models with reinforcement learning and are open-sourcing, under the Apache 2.0 license, a new VL model at the much-loved 32B parameter scale: <strong>Qwen2.5-VL-32B-Instruct</strong>. Compared with the previously released Qwen2.5-VL series, the features of this 32B model are as follows: together with the ind</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: Embracing the Power of Reinforcement Learning</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>Large-scale reinforcement learning (RL) has the potential to surpass conventional pretraining and post-training methods in improving model performance. Recent research shows that RL can significantly boost models' reasoning ability. For example, DeepSeek R1 achieved state-of-the-art performance by integrating cold-start data and multi-stage training, enabling deep thinking and complex reasoning. This time, we explore how large-scale RL can raise the intelligence of large language models, and we are excited to introduce our latest reasoning model, QwQ-32B. It is a model with 32 billion parameters whose performance</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-zh.jpg" referrerpolicy="no-referrer"><p>This blog post was written by QwQ-Max-Preview itself. We hope you enjoy it! We are excited to introduce QwQ-Max-Preview, the latest achievement in the Qwen series. Built on Qwen2.5-Max, this release demonstrates stronger capabilities in math, coding, and general tasks, and also performs well in agent-related workflows. As a preview of the upcoming QwQ-Max, this version is still being refined. We plan, in the near future, under the Apache</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max: Exploring the Intelligence of Large-Scale MoE Models</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>It has long been held that continually scaling data size and model parameters is a possible path toward AGI. However, the large-model community as a whole has relatively little experience training truly massive models, whether dense or MoE. Recently, the release of DeepSeek V3 showed everyone the capability and methodology of ultra-large MoE models; in parallel, Qwen has been developing its own ultra-large-scale MoE model, Qwen2.5-Max, pretrained on more than 20 trillion tokens with carefully designed</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/getting-started/first-api-call-to-qwen?spm=a2c63.p38356.help-menu-2400256.d_0_1_0.1f6574a72ddbKE">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-1M: Open-Source Qwen Models Supporting a 1M-Token Context</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>Two months ago, we upgraded <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a> to support a context length of up to one million tokens. Today, we are officially releasing the open-source Qwen2.5-1M models and their corresponding inference-framework support. Highlights of this release: you can now access our demos on <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">Huggingface</a> and [Mo</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL! Qwen2.5 VL! Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>We are releasing <strong>Qwen2.5-VL</strong>, the flagship vision-language model of the Qwen family, a huge leap over the previously released Qwen2-VL. Please visit <a href="https://chat.qwenlm.ai/">Qwen Chat</a> and select Qwen2.5-VL-72B-Instruct to try it. In addition, we have on [Hugging Face](https://huggingface.co/collections/Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Improving the Performance and Specialization of Mixture-of-Experts Models via Global Load Balancing</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>Mixture-of-Experts models (MoEs) activate model parameters dynamically and sparsely through a routing mechanism, allowing parameter counts to scale efficiently. Sparse activation based on a TopK mechanism, however, suffers from imbalanced expert activation during training: the few frequently selected experts get optimized more, which in turn makes them selected even more often, eventually leaving only a handful of experts in use and rendering the rest redundant. MoE training therefore introduces an auxiliary loss (load balance loss, LBL) to encourage balanced expert selection. Current mainstream</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Toward Effective Process Supervision in Mathematical Reasoning</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>In recent years, large language models (LLMs) have made remarkable progress in mathematical reasoning, yet they can still make mistakes, such as computation or logic errors, that lead to wrong conclusions.<br>
Moreover, even when the final answer is correct, these powerful models often fabricate plausible-looking reasoning steps in which the final answer rests on flawed calculations or derivations, undermining the reliability and trustworthiness of LLMs' reasoning.<br>
Automatically identifying errors in the reasoning process is therefore increasingly important for scalable oversight. Process Reward Models (P</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: Seeing the World with Greater Wisdom</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>In human thought, language and vision are tightly interwoven, shaping how we perceive and understand the world. Our reasoning ability is deeply rooted in linguistic thinking and visual memory. So what happens when we grant these abilities to AI? Today's large language models already demonstrate remarkable reasoning, but we cannot help but wonder: could they climb to new cognitive heights by mastering the power of visual understanding? Imagine an AI that, like a master physicist, can face a complex physics problem and calmly reason its way to a solution. It is precisely this vision that inspired</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: Reflecting on the Boundaries of the Unknown</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p><em>Note: QwQ is pronounced /kwju:/, similar to the word "quill".</em> To think, to question, to understand: such is humanity's eternal pursuit in exploring the unknown. On this path of exploration, QwQ is like an apprentice filled with boundless curiosity, lighting the way with thought and inquiry. QwQ embodies an ancient philosophical spirit: it knows that it knows nothing, and that awareness is the very source of its curiosity. In its search for answers, it remains introspective, examining every assumption in the light of reason, traversing different dimensions of thought in pursuit of deeper truth. Yet, as with all wisdom</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Extending Context Length to One Million Tokens!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_cn.png" referrerpolicy="no-referrer"><p>After the release of Qwen2.5, we heard the community's demand for handling longer sequences. Since then, we have made many optimizations for long-sequence processing and long-sequence inference efficiency. Today, we proudly present the new Qwen2.5-Turbo, featuring: you can now, via [Alibaba Cloud Model Studio](https://help.aliyun.com/zh/model-studio/developer-reference/what-is-qwen-llm</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API文档</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>The Qwen2.5-Coder Series: Powerful, Diverse, Practical</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>Today, we are excited to open-source the full "powerful", "diverse", and "practical" Qwen2.5-Coder series, dedicated to continually advancing Open CodeLLMs. In addition, Qwen2.5-Coder-32B-Instruct's multi-language code-repair capability is equally impressive, helping users understand and modify code in languages they know while greatly easing the cost of learning unfamiliar ones. Similar to McEval, MdEval is a multi-language code-repair benchmark, and Qwen2.5-Co</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: A Party of Foundation Models!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/qwen2.5-main.jpg" referrerpolicy="no-referrer"><p>In the three months since Qwen2's release, many developers have built new models on top of the Qwen2 language models and given us valuable feedback. During this time, we focused on creating smarter, more knowledgeable language models. Today, we are delighted to introduce the newest member of the Qwen family: <strong>Qwen2.5</strong>. What we are about to announce may be the largest open-source release in history! Let the party begin! Our latest release includes the language models <strong>Qwen2.5</strong>, along with the coding-focused **Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-LLM: Extending the Boundaries of Large Language Models</title>
<description><p>We proudly present the newly released Qwen2.5 series of language models! We have open-sourced seven decoder-only dense models, ranging from 0.5B to 72B parameters. Our research found markedly increased product interest in models between 10B and 30B, while 3B-scale models are increasingly suited to mobile scenarios. Accordingly, the Qwen2.5 series open-sources Qwen2.5-3B, Qwen2.5-14B, and Qwen2.5-32B. We are also launching Qwen-Plus and Qwen-Turbo versions, which can</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-Coder: Endless Code, Endless Learning!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>In early April, we released CodeQwen1.5, which drew wide attention and love from the community. Since then, we have kept working to improve our code models. Today, we are pleased to announce the release of a new generation of open code models, Qwen2.5-Coder, and to officially rename CodeQwen to Qwen-Coder. We feel "Coder" is more human-like and agile, and we look forward to it truly pair-programming with humans one day. Qwen2.5-Coder is a member of our Qwen2.5 open-source family,</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: The World's Leading Open-Source Math Language Models</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p><strong>🚨 Qwen2.5-Math is designed mainly to solve Chinese and English math problems via CoT or TIR; we do not recommend using this series for other tasks.</strong> A month ago, we open-sourced the first math-specific large language model in the Qwen family, <a href="https://qwenlm.github.io/blog/qwen2-math/">Qwen2-Math</a>. Today, we upgrade it again and open-source the <strong>Qwen2.5-Math</strong> series, including the base models **Qw</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: See the World More Clearly</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>After nearly a year of sustained effort, we are thrilled to announce our latest generation of vision-language models: <strong>Qwen2-VL</strong>! Built on Qwen2, Qwen2-VL offers the following features compared with Qwen-VL: we have open-sourced Qwen2-VL-2B and Qwen2-VL-7B under the Apache 2.0 license and released the API for Qwen2-VL-72B! The open-source code has been integrated into Hugging Face Transformers, v</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio: Voice Chat Is Here!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>In a general-purpose AI system, the core model should be able to understand information across modalities. Today's large language models can already understand language and reason, and they have been extended to more modalities, including vision and audio. We previously released several Qwen language-model series as well as multimodal models such as Qwen-VL and Qwen-Audio. Today, we officially release Qwen2-Audio, the next generation of Qwen-Audio, which accepts audio and text inputs and generates text outputs. Qwen2-</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:22:39 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Math: A New Generation of Math Models</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p><strong>🚨 This model currently supports English primarily. A Chinese-English bilingual version is coming soon.</strong> Over the past year, we have focused on improving the reasoning capabilities of large models, with particular attention to performance on math-related tasks. Today, we are delighted to introduce new members of the Qwen2 open-source family: the Qwen2-Math-1.5B/7B/72B series. Qwen2-Math is a series of math-specialized language models built on the Qwen2 LLMs, whose math capabilities significantly surpass open-source models and even closed-source mod</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Hello, Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>After months of effort, we are pleased to bring the Qwen series its major upgrade from Qwen1.5 to Qwen2. This time, we bring you: the models are now open-sourced on both Hugging Face and ModelScope, and we look forward to your feedback! The Qwen2 series includes pretrained and instruction-tuned models in 5 sizes: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. As shown in the table below: in Qwe</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Extending Context Memory to One Million Tokens with Qwen-Agent</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>TL;DR:</strong> We built an agent that understands documents of a million words using only the 8k context of Qwen2 models, outperforming RAG and native long-context models. We also used this agent to synthesize long-context data for training long-context Qwen models. Recently, large language models (LLMs) that natively handle inputs of millions of words have become a trend. Most work focuses on adjusting model architectures, such as positional encoding extensions or linear attention mechanisms. However, preparing fine-tuning data of sufficient length is a less discussed but equally important topic</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Introducing Qwen-Max-0428</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>Previously, we open-sourced the Qwen1.5 series, with parameter scales ranging from 0.5B up to 110B. This time, we present a larger model, Qwen-Max-0428 (the Tongyi Qianwen web and app product has been upgraded from version 2.1 to 2.5). Qwen-Max-0428 is an instruction-tuned Chat model. It recently landed on <a href="https://chat.lmsys.org/">Chatbot Arena</a> and entered the top ten. We have also observed on the MT-Bench evaluation that</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B: The First 100B+ Parameter Model of the Qwen1.5 Series</title>
<description><p>Recently, the open-source community has seen a succession of models with more than 100 billion parameters, all achieving outstanding results across benchmarks. Today, we open-source Qwen1.5-110B, the first 100B-scale model of the Qwen1.5 series, with 110 billion parameters. It rivals Meta-Llama3-70B in base capability evaluations and performs strongly in chat evaluations, including MT-Bench and AlpacaEval 2.0. Like the other Qwen1.5 models, Qwen1.5-110B adopts the same Transform</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Pair Programming with CodeQwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>Code assistants are intelligent, LLM-based programming tools that help developers write code more efficiently and accurately, making the whole software development process smoother. However, popular code assistants such as GitHub Copilot rely on closed-source commercial models, which are not only expensive but also raise concerns about privacy, security, and copyright. Fortunately, the open-source community is working to build open code models for open code assistants, and a batch of excellent Open CodeLLMs has recently emerged, such as StarCoder2</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen1.5-32B: The Final Piece of the Qwen1.5 Language Model Series</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>The open-source community has long sought a model that strikes an ideal balance between performance, efficiency, and memory footprint. Despite the emergence of SOTA models such as Qwen1.5-72B and DBRX, these models still suffer from heavy memory consumption, slow inference, and significant fine-tuning costs. Models of around 30B parameters are currently favored in this regard by many users. Following this trend, we introduce the latest members of the Qwen1.5 language model series: Qwen1.5-32B and Qwen1.5-32B-Chat</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: Matching 7B Model Performance with 1/3 of the Activated Parameters</title>
<description><p>Today, we introduce the first MoE model of the Qwen series, Qwen1.5-MoE-A2.7B. With only 2.7 billion activated parameters, it matches the performance of state-of-the-art 7B models such as Mistral 7B and Qwen1.5-7B. Compared with Qwen1.5-7B, which has 6.5 billion non-embedding parameters, Qwen1.5-MoE-A2.7B has only 2 billion non-embedding parameters, roughly one third of the original model size. Moreover, compared with Qwen1.5</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>In recent months, we have focused on how to build a truly "excellent" model while continuously improving the developer experience. As the Lunar New Year arrives, we release version 1.5 of the Qwen open-source models: <strong>Qwen1.5</strong>. We open-source Base and Chat models in 8 sizes (0.5B, 1.8B, 4B, 7B, 14B, 32B, 72B, and 110B), as well as an MoE model (see the [blog](https://qwenlm.githu</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-VL: A Major Upgrade!</title>
<description><p>Building on the Qwen language model and the multimodal multi-task training we proposed earlier, which addresses the generalization limitations of multimodal models, we open-sourced the multimodal model Qwen-VL in September 2023. Recently, the Qwen-VL series received a major upgrade with two enhanced versions: Qwen-VL-Plus and Qwen-VL-Max. Their key improvements include: compared with the open-source Qwen-VL, on multiple text-image multimodal tasks these two models are on par with Gemini</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>Four months ago, we first released the Qwen-7B large language model (LLM), officially beginning our open-source journey. Today, we introduce the Qwen open-source family to present our work and goals more comprehensively. Below are the important links for the open-source projects and community. Additionally, we have WeChat groups for chatting and we invite you to join the groups through the provided lin</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>OFASys: Enabling Multitask Learning with One Line of Code!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofasys/demo.jpg" referrerpolicy="no-referrer"><p>Generalist models are hot! Following the progress of multimodal multitask learning, we now see an opportunity to build a truly general-purpose model. Our previously released OFA was an important step toward this goal, but we ran into many practical difficulties, such as assembling multi-task models and organizing multi-task training (for example, batching data and keeping training stable). We therefore introduce OFASys, an AI system that tackles the implementation of multimodal multitask learning. In short, it works through a mechanism called</p>
<p><a href="https://arxiv.org/abs/2212.04408">Paper</a> | <a href="https://github.com/OFA-Sys/OFASys">GitHub</a></p>
</description>
<link>https://qwen.ai/blog?id=ofasys</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofasys</guid>
<pubDate>Wed, 28 Dec 2022 10:01:21 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/cnclip/search.jpg" referrerpolicy="no-referrer"><p>CLIP[^1] is a phenomenal model in multimodal representation learning. It serves not only as a foundation model but also as a bridge between vision and language, and it has driven progress in many other areas, most notably text-to-image generation. However, we still need language-specific CLIP models, especially for real-world applications such as cross-mo...
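The feed variants in this thread differ only in the optional language and category path segments (e.g. `/qwen/research` vs `/qwen/research/en/Research`), which is reflected in the channel `<title>`. As a reviewer's sketch of that naming convention, not the PR's actual implementation (`buildFeedTitle` is a hypothetical helper), the titles seen in the outputs could be derived like this:

```typescript
// Hypothetical helper, not necessarily the PR's code: derive the channel
// <title> from the optional :category route parameter, matching the pasted
// outputs ("Qwen Research" vs "Qwen Research - Research").
function buildFeedTitle(category?: string): string {
    const base = 'Qwen Research';
    // With no category, the bare route keeps the base title; otherwise the
    // category is appended, as in "Qwen Research - Open-Source".
    return category ? `${base} - ${category}` : base;
}

console.log(buildFeedTitle());              // Qwen Research
console.log(buildFeedTitle('Open-Source')); // Qwen Research - Open-Source
```

The same pattern also appears in the `<description>` fields, which simply append " - Powered by RSSHub" to the derived title.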
Contributor
http://localhost:1200/qwen/research/en/Research - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research - Research</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/en/Research" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Research - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:26 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Layered: Layered Decomposition for Inherent Editability</title>
<description><p>Today, we are excited to introduce Qwen-Image-Layered, a model capable of decomposing an image into multiple RGBA layers. This layered representation unlocks inherent editability: each layer can be independently manipulated without affecting other content. Meanwhile, such a layered representation na</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>SAPO: A Stable and Performant Reinforcement Learning Method for Training Large Language Models</title>
<description><p>Reinforcement learning (RL) has become a core ingredient in advancing the reasoning capabilities of large language models (LLMs). Modern RL pipelines enable models to solve harder mathematical problems, write complex code, and reason over multimodal inputs. In practice, group‑based policy optimizati</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3 ASR: Hear clearly, transcribe smartly.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear2.png#center" referrerpolicy="no-referrer"><p>We introduce Qwen3-ASR-Flash, a speech recognition service built upon the strong intelligence of Qwen3-Omni and large amount of multi-modal data especially ASR data on the scale of tens of millions ho</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 06:38:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: Image Editing with Higher Quality and Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen-Image-Edit, the image editing version of Qwen-Image. Built upon our 20B Qwen-Image model, Qwen-Image-Edit successfully extends Qwen-Image's unique text rendering capab</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image: Crafting with Native Text Rendering</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>We are thrilled to release <strong>Qwen-Image</strong>, a 20B MMDiT image foundation model that achieves significant advances in complex text rendering and precise image editing. To try the latest model, feel free</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO: Towards Scalable Reinforcement Learning for Language Models</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>Reinforcement Learning (RL) has emerged as a pivotal paradigm for scaling language models and enhancing their deep reasoning and problem-solving capabilities. To scale RL, the foremost prerequisite is</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT: Where Speed Meets Smart Translation</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>Here we introduce the latest update of Qwen-MT (qwen-mt-turbo) via [Qwen API](https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen3-MT-Demo">DEMO</a> | <a href="https://modelstudio.console.alibabacloud.com/?tab=doc#/doc/?type=model&amp;url=https://www.alibabacloud.com/help/en/doc-detail/2840914_2.html&amp;renderType=component&amp;modelId=qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: Agentic Coding in the World</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Global-batch load balance almost free lunch to improve your MoE LLM training</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>The Mixture-of-Experts (MoEs) architecture has become a popular model-parameter-scale-up technique. Typically, one MoE layer consists of a router (often parameterized as one single Linear layer) and a</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-LLM: Extending the boundary of LLMs</title>
<description><p>In this blog, we delve into the details of our latest Qwen2.5 series language models. We have developed a range of decoder-only dense models, with seven of them open-sourced, spanning from 0.5B to 72B</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Generalizing an LLM from 8k to 1M Context using Qwen-Agent</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>TLDR:</strong> We've created an agent using Qwen2 models with an 8k context size to understand documents with 1M tokens, surpassing RAG and native long-context models. This agent was also used to generate</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFASys: Enabling Multitask Learning with One Line of Code!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofasys/demo.jpg" referrerpolicy="no-referrer"><p>Generalist models are hot! We all see an opportunity towards a real generalist model through multimodal multitask learning. We previously released an open-sourced unified multimodal pretrained model OFA for</p>
<p><a href="https://arxiv.org/abs/2212.04408">Paper</a> | <a href="https://github.com/OFA-Sys/OFASys">GitHub</a></p>
</description>
<link>https://qwen.ai/blog?id=ofasys</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofasys</guid>
<pubDate>Wed, 28 Dec 2022 10:01:21 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/cnclip/search.jpg" referrerpolicy="no-referrer"><p>CLIP[^1] is a phenomenal playmaker in vision and multimodal representation learning. It plays not only as a foundation model but also a bridge between vision and language. It has triggered a series of</p>
<p><a href="https://arxiv.org/abs/2211.01335">Paper</a> | <a href="https://github.com/OFA-Sys/Chinese-CLIP">GitHub</a> | <a href="https://www.modelscope.cn/models/damo/multi-modal_clip-vit-base-patch16_zh/summary">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/chinese-clip-zero-shot-image-classification">Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=chinese-clip</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=chinese-clip</guid>
<pubDate>Sat, 24 Dec 2022 06:54:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFA: Towards Building a One-For-All Model</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofa/uniter.jpg" referrerpolicy="no-referrer"><p>2022 is a year of generalist models! With the bloom of multimodal pretraining, especially the unified model, we have witnessed the opportunity of building a generalist model that is capable of process</p>
<p><a href="https://arxiv.org/abs/2202.03052">Paper</a> | <a href="https://github.com/OFA-Sys/OFA">Github</a> | <a href="https://www.modelscope.cn/models?name=ofa">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/OFA-Generic_Interface">Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=ofa</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofa</guid>
<pubDate>Mon, 14 Nov 2022 08:01:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
</channel>
</rss>

http://localhost:1200/qwen/research/en/Open-Source - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research - Open-Source</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/en/Open-Source" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Open-Source - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:27 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: Improve Consistency</title>
<description><p>We are excited to introduce Qwen-Image-Edit-2511, an enhanced version over Qwen-Image-Edit-2509, featuring multiple improvements—including notably better consistency. To try out the latest model, please visit <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> and select the Image Editing fea</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-VL: Sharper Vision, Deeper Thought, Broader Action</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>Today, we officially launch the all-new Qwen3-VL series — the most powerful vision-language model in the Qwen family to date. In this generation, we’ve made major improvements across multiple dimensio</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3Guard: Real-time Safety for Your Token Stream</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>We are excited to introduce Qwen3Guard, the first safety guardrail model in the Qwen family. Built upon the powerful Qwen3 foundation models and fine-tuned specifically for safety classification, Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: Multi-Image Support, Improved Consistency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit. To experience the latest model, please visit <a href="https://qwen.ai/">Qwen Chat</a> and select the "I</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni: Natively Omni-Modal Foundation Models!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong> is the natively end-to-end multilingual omni model. It processes text, images, audio, and video, and delivers real-time streaming responses in both text and natural speech. We introduce</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash: Multi-timbre & Multi-lingual & Multi-dialect Speech Synthesis.</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available</p>
<p><a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next: Towards Ultimate Training & Inference Efficiency</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>We believe that <strong>Context Length Scaling</strong> and <strong>Total Parameter Scaling</strong> are two major trends in the future of large models. To further improve training and inference efficiency under long-context a</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c5414da58bjgj">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3: Think Deeper, Act Faster</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>Today, we are excited to announce the release of <strong>Qwen3</strong>, the latest addition to the Qwen family of large language models. Our flagship model, <strong>Qwen3-235B-A22B</strong>, achieves competitive results in be</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 Omni: See, Hear, Talk, Write, Do It All!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-Omni</strong>, the new flagship end-to-end multimodal model in the Qwen series. Designed for comprehensive multimodal perception, it seamlessly processes diverse inputs including text, i</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Omni-7B-Demo">DEMO</a> | <a href="https://discord.com/invite/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: Smarter and Lighter</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>At the end of January this year, we launched the Qwen2.5-VL series of models, which received widespread attention and positive feedback from the community. Building on the Qwen2.5-VL series, we contin</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: Embracing the Power of Reinforcement Learning</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>Scaling Reinforcement Learning (RL) has the potential to enhance model performance beyond conventional pretraining and post-training methods. Recent studies have demonstrated that RL can significantly</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>Two months after upgrading <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a> to support context length up to one million tokens, we are back with the open-source Qwen2.5-1M models and the corresponding inference fram</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL! Qwen2.5 VL! Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen2.5-VL</strong>, the new flagship vision-language model of Qwen and also a significant leap from the previous Qwen2-VL. To try the latest model, feel free to visit [Qwen Chat](https://chat.q</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder Series: Powerful, Diverse, Practical.</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>Today, we are excited to open source the "Powerful", "Diverse", and "Practical" Qwen2.5-Coder series, dedicated to continuously promoting the development of Open CodeLLMs.Additionally, the multi-langu</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: A Party of Foundation Models!</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5%20modelcard.001.jpeg" referrerpolicy="no-referrer"><p>In the past three months since Qwen2's release, numerous developers have built new models on the Qwen2 language models, providing us with valuable feedback. During this period, we have focused on crea</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder: Code More, Learn More!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>In early April, we introduced CodeQwen1.5, which garnered significant attention from the community. Since then, we have been working to enhance the coding model. Today, we are excited to announce the</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: The world's leading open-sourced mathematical LLMs</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p>**🚨 Qwen2.5-Math mainly supports solving English and Chinese math problems through CoT and TIR. We do not recommend using this series of models for other tasks.**A month ago, we released the first se</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: To See the World More Clearly</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>After a year's relentless efforts, today we are thrilled to release <strong>Qwen2-VL</strong>! Qwen2-VL is the latest version of the vision language models based on <strong>Qwen2</strong> in the Qwen model familities. Compared</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio: Chat with Your Voice!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>To achieve the objective of building an AGI system, the model should be capable of understanding information from different modalities. Thanks to the rapid development of large language models, LLMs a</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:18:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Hello Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>After months of efforts, we are pleased to announce the evolution from Qwen1.5 to Qwen2. This time, we bring to you:We have opensourced the models in Hugging Face and ModelScope to you and we are look</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Code with CodeQwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>The advent of advanced programming tools, which harnesses the power of large language models (LLMs), has significantly enhanced programmer productivity and accuracy. Notwithstanding these advancements</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Introducing Qwen1.5</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>In recent months, our focus has been on developing a "good" model while optimizing the developer experience. As we progress towards <strong>Qwen1.5</strong>, the next iteration in our Qwen series, this update arri</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
</channel>
</rss>
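The two generated feeds above differ only in the optional language and category path segments (`/qwen/research` vs. `/qwen/research/en/Release`). A minimal sketch of how such a route path could be assembled from those optional parameters — the function name and structure here are illustrative only, not the actual route code in this PR:

```typescript
// Hypothetical helper: build the RSSHub request path for the qwen/research
// route from optional `lang` and `category` segments, mirroring the two
// endpoints tested above. Not taken from the PR's implementation.
function researchPath(lang?: string, category?: string): string {
    const segments: string[] = ['/qwen/research'];
    if (lang) {
        segments.push(lang);
    }
    if (category) {
        segments.push(category);
    }
    return segments.join('/');
}

// researchPath()               -> '/qwen/research'
// researchPath('en', 'Release') -> '/qwen/research/en/Release'
```

Both forms resolve to the same upstream blog listing; the category segment only filters items (note the `<category>Release</category>` entries in the second feed versus `Open-Source` in the first).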
Contributor
http://localhost:1200/qwen/research/en/Release - Success ✔️
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen Research - Release</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/en/Release" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen Research - Release - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:27 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen3-TTS Steps Up: Voice Cloning and Voice Design!</title>
<description><p><strong>Qwen3-TTS</strong> family has launched two new models: the voice design model Qwen3-TTS-VD-Flash (accessible via the <a href="https://www.alibabacloud.com/help/en/model-studio/qwen-tts-voice-design">Qwen API</a>) and the voice cloning model Qwen3-TTS-VC-Flash (accessible via the [Qwen API](https://www.alibabacloud.</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01:Hear You. See You. Follow Smarter!</title>
<description><p><strong>Qwen3-Omni</strong> is a next-generation native multimodal large model capable of seamlessly processing multiple input modalities—including text, images, audio, and video—and generating both text and natural-sounding speech outputs simultaneously via real-time streaming responses. This version introduces</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-TTS Update! 49 Timbres + 10 Languages + 9 Dialects</title>
<description><p><strong>Qwen3-TTS-Flash</strong> is a flagship text-to-speech model that supports multi-timbre, multi-lingual, and multi-dialect speech synthesis. It aims to produce natural and expressive speech and is available via <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>.Major Improvements:Qwen3-TTS offers</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: When Inspiration Becomes Its Own Reason</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">Click here to experience the latest Qwen DeepResearch</a>_<strong>How does inspiration die?</strong>_It usually doesn’t die from “not being good enough”, but from being “too much trouble”.When a thought flashes, it’s still fragile and unverified. After a brief mome</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max: Just Scale it</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>Following the release of the Qwen3-2507 series, we are thrilled to introduce Qwen3-Max — our largest and most capable model to date. The preview version of Qwen3-Max-Instruct currently ranks third on</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/models#c2d5833ae4jmo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3‑LiveTranslate: Real‑Time Multimodal Interpretation — See It, Hear It, Speak It!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3‑LiveTranslate‑Flash</strong> delivers high‑precision, lightning‑fast and ultra‑reliable real‑time multilingual audio and video interpretation. With the extensive capabilities of Qwen3‑Omni and traini</p>
<p><a href="https://www.alibabacloud.com/help/en/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Travel Planner: Your Smart Travel Designer</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/en_q1.png" referrerpolicy="no-referrer"><p>We are excited to introduce our <strong>brand-new Travel Planning Assistant</strong>, a powerful system built on a <strong>Multi-Agent architecture</strong> with robust <strong>real-world tool-calling capabilities</strong>. It is designed</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>Here we introduce the latest update of <strong>Qwen-TTS</strong> (<code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code>) through <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a> . Trained on a large-scale dataset</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: From "Understanding" the World to "Depicting" It</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>The evolution of multimodal large models is continually pushing the boundaries of what we believe technology can achieve. From the initial QwenVL to the latest Qwen2.5 VL, we have made progress in enh</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>We release <strong>Qwen3 Embedding series</strong>, a new proprietary model of the Qwen model family. These models are specifically designed for <strong>text embedding</strong>, <strong>retrieval</strong>, and <strong>reranking</strong> tasks, built on</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ-Max: Think with Evidence</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>Last December, we launched QVQ-72B-Preview as an exploratory model, but it had many issues. Today, we are officially releasing the first version of QVQ-Max, our visual reasoning model. This model can</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-en.jpg" referrerpolicy="no-referrer"><p>This is a blog created by QwQ-Max-Preview. We hope you enjoy it!We’re happy to unveil QwQ-Max-Preview , the latest advancement in the Qwen series, designed to push the boundaries of deep reasoning and</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>It is widely recognized that continuously scaling both data size and model size can lead to significant improvements in model intelligence. However, the research and industry community has limited exp</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/developer-reference/what-is-qwen-llm">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Towards Effective Process Supervision in Mathematical Reasoning</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>In recent years, Large Language Models (LLMs) have made remarkable advances in mathematical reasoning, yet they can make mistakes, such as miscalculations or logical errors, leading to wrong conclusio</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: To See the World with Wisdom</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>Language and vision intertwine in the human mind, shaping how we perceive and understand the world around us. Our ability to reason is deeply rooted in both linguistic thought and visual memory - but</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: Reflect Deeply on the Boundaries of the Unknown</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p>*Note: This is the pronunciation of QwQ: /kwju:/ , similar to the word "quill".*What does it mean to think, to question, to understand? These are the deep waters that QwQ (Qwen with Questions) wades i</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Extending the Context Length to 1M Tokens!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_en.png" referrerpolicy="no-referrer"><p>After the release of Qwen2.5, we heard the community's demand for processing longer contexts. In recent months, we have made many optimizations for the model capabilities and inference performance of</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API Documentation (Chinese)</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen2-Math</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p>**🚨 This model mainly supports English. We will release bilingual (English and Chinese) math models soon.**Over the past year, we have dedicated significant effort to researching and enhancing the re</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Notes on Qwen-Max-0428</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>Previously, we opensourced a series of Qwen1.5 model ranging from 0.5 to 110 billion parameters. Now, we release a larger model, Qwen-Max-0428. Qwen-Max-0428 is an instruction-tuned model for chat ser</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B: The First 100B+ Model of the Qwen1.5 Series</title>
<description><p>Recently we have witnessed a burst of large-scale models with over 100 billion parameters in the opensource community. These models have demonstrated remarkable performance in both benchmark evaluatio</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-32B: Fitting the Capstone of the Qwen1.5 Language Model Series</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>The open-source community has long sought a model that strikes an ideal balance between performance, efficiency, and memory footprint. Despite the emergence of cutting-edge models like Qwen1.5-72B and</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: Matching 7B Model Performance with 1/3 Activated Parameters</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/assets/blog/qwen1.5/qwen-moe.jpg" referrerpolicy="no-referrer"><p>Since the surge in interest sparked by Mixtral, research on mixture-of-expert (MoE) models has gained significant momentum. Both researchers and practitioners are keenly interested in understanding ho</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen-VL</title>
<description><p>Along with the rapid development of our large language model Qwen, we leveraged Qwen’s capabilities and unified multimodal pretraining to address the limitations of multimodal models in generalization</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Introducing Qwen</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>4 months after our first release of Qwen-7B, which is the starting point of our opensource journey of large language models (LLM), we now provide an introduction to the Qwen series to give you a whole</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
</channel>
</rss>

http://localhost:1200/qwen/research/zh-cn/Research - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究 - Research</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn/Research" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Research - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:28 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Layered: 面向内在可编辑性的图层分解</title>
<description><p>今天我们很高兴推出 Qwen-Image-Layered,这是一款能够将图像分解为多个 RGBA 图层的模型。这种分层表示赋予了图像内在的可编辑性:每个图层都可以独立操作,而不会影响其他内容。同时,这种分层结构天然支持高保真的基本编辑操作,例如缩放、移动和重新着色。通过将不同元素物理地隔离到不同的图层中,我们的方法实现了高保真的编辑效果。给定一张图像,Qwen-Image-Layered 可将其分解为若干个 RGBA 图层:分解完成后,编辑操作仅作用于目标图层,将其与其他内容物理隔离,从根本上确保了编辑的一致性。例如,我们可以对第一个图层重新着色,而保持其余内容不变:我们也可以将第二个图层中的</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-layered</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-layered</guid>
<pubDate>Fri, 19 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>SAPO:一种稳定且高性能的大语言模型强化学习方法</title>
<description><p>强化学习(Reinforcement Learning, RL)已经成为提升大语言模型(Large Language Models, LLM)推理能力的核心技术之一。现代 RL 训练流程使模型能够解决困难的数学问题、编写复杂代码和进行多模态推理。实践中,一种被广泛采用的方法是基于组的策略优化(group‑based policy optimization):对每个提示采样多个回复,并在组内进行奖励归一化。<br>
然而,尽管该方法效果显著,稳定且高性能的策略优化仍然困难。关键挑战在于 token 级重要性比率(importance ratio)的高方差,尤其是在 MoE 模型中。该比率衡量当前策略偏离</p>
</description>
<link>https://qwen.ai/blog?id=sapo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=sapo</guid>
<pubDate>Thu, 04 Dec 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3 ASR:听得清楚,转写聪明。</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-ASR/qwenasr-bear-zh.png#center" referrerpolicy="no-referrer"><p>Qwen3-ASR-Flash现已正式发布,一个基于Qwen3基座模型强大的智能、海量多模态数据以及千万小时规模的ASR数据构建的语音识别服务。<br>
Qwen3-ASR-Flash实现了高精度高鲁棒性的语音识别性能,支持11种语言和多种口音。与众不同的是,Qwen3-ASR-Flash支持用户以任意格式提供文本上下文,从而获得定制化的 ASR 结果,同时还支持歌声识别。<strong>📊 性能表现:</strong>**</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-asr-flash</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-asr-flash</guid>
<pubDate>Mon, 08 Sep 2025 11:37:47 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image-Edit: 全能图像编辑,驱动内容创作提质增效</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit_homepage.jpg#center" referrerpolicy="no-referrer"><p>我们很高兴推出 Qwen-Image-Edit,Qwen-Image 的图像编辑版本。Qwen-Image-Edit 基于我们20B的 Qwen-Image 模型进一步训练,成功将 Qwen-Image 的独特的文本渲染能力延展至图像编辑领域,实现了对图片中文字的精准编辑。此外,Qwen-Image-Edit 将输入图像同时输入到 Qwen2.5-VL(实现视觉语义控制)和 VAE Encoder</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit</guid>
<pubDate>Mon, 18 Aug 2025 17:30:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-Image:擅长文字渲染的创作利器</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/merge3.jpg#center" referrerpolicy="no-referrer"><p>我们很高兴推出 Qwen-Image,一个20B的MMDiT模型。这是通义千问系列中首个图像生成基础模型,其在复杂文本渲染和精确图像编辑方面取得了显著进展。如需体验最新模型,欢迎访问 <a href="https://chat.qwen.ai/">Qwen Chat</a> 并选择“图像生成”功能。主要特性包括:我们在多个公开基准上对Qwen-Image进行了全面评估,包括用于通用图像生成的GenEval、DPG和O</p>
<p><a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image">MODELSCOPE</a> | <a href="https://modelscope.cn/aigc/imageGeneration?tab=advanced">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image</guid>
<pubDate>Mon, 04 Aug 2025 14:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>GSPO:迈向持续拓展的语言模型强化学习</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/results.jpg#center" referrerpolicy="no-referrer"><p>强化学习 (Reinforcement Learning,RL)已成为拓展语言模型、增强其深度推理与问题求解能力的关键技术范式。为了持续拓展 RL,首要前提是确保稳定、鲁棒的训练过程。然而,我们观察到现有的 RL 算法(如 GRPO)在长期训练中会暴露出严重的不稳定性问题并招致不可逆转的模型崩溃,阻碍了通过增加计算以获得进一步的性能提升。为了能够持续拓展 RL,我们提出了 **Group Sequ</p>
<p><a href="https://huggingface.co/papers/2507.18071">PAPER</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=gspo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=gspo</guid>
<pubDate>Sun, 27 Jul 2025 07:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen-MT:速度与智能翻译的完美融合</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen-mt-001.jpeg" referrerpolicy="no-referrer"><p>我们通过<a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">Qwen API</a> 推出了 Qwen-MT(qwen-mt-turbo)的最新升级版本。本次更新基于强大的 Qwen3 模型,进一步使用超大规模多语言和翻译数据对模型进行训练,全面增强其多语言理解与翻译能力,并结合强化学习技术</p>
<p><a href="https://modelscope.cn/studios/Qwen/Qwen3-MT-demo">DEMO</a> | <a href="https://bailian.console.aliyun.com/?tab=model#/model-market/detail/qwen-mt-turbo">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-mt</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-mt</guid>
<pubDate>Thu, 24 Jul 2025 14:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen3-Coder: 在世界中自主编程</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Coder/qwen3-coder-main.jpg" referrerpolicy="no-referrer"><p>今天我们正式发布 Qwen3-Coder,这是我们迄今为止最具代理能力的代码模型。Qwen3-Coder 拥有多个尺寸,但我们迫不及待地给大家提供当前最强大的版本,Qwen3-Coder-480B-A35B-Instruct。这是一个总参数量 480B,激活 35B 的 MoE 模型,原生支持 256K token 的上下文并可通过 YaRN 扩展到 1M token,拥有卓越的代码和 Agent</p>
<p><a href="https://github.com/QwenLM/Qwen3-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-coder</guid>
<pubDate>Tue, 22 Jul 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>通过全局负载均衡提升混合专家模型的性能和特异化程度</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/balance/main_results.png" referrerpolicy="no-referrer"><p>混合专家模型(MoEs)通过路由机制动态并稀疏地激活模型参数,使得能高效地增大模型参数规模。基于 TopK 机制的稀疏激活会在训练中会遇到专家激活不均衡的问题:少数被频繁选择的专家会被优化得更多,进一步使得这些专家被更频繁地选择,最终导致只选择少数专家,造成剩余专家的冗余。因此,MoE 在训练中需要引入额外的辅助损失(load balance loss,LBL)来鼓励专家的选择趋于均衡。目前主流</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=global-load-balance</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=global-load-balance</guid>
<pubDate>Mon, 20 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Qwen2.5-LLM:扩展大型语言模型的边界</title>
<description><p>我们隆重推出最新发布的Qwen2.5系列语言模型!我们共开源了7款decoder-only的稠密模型,参数规模从0.5B到72B不等。我们调研发现产品对10B至30B模型的兴趣明显增加,同时3B规模的模型也越来越适用于移动端场景。为此,Qwen2.5系列开源了Qwen2.5-3B、Qwen2.5-14B 和 Qwen2.5-32B。同时,我们还推出了Qwen-Plus与Qwen-Turbo版本,可</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-llm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-llm</guid>
<pubDate>Wed, 18 Sep 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>使用Qwen-Agent将上下文记忆扩展到百万量级</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/qwen_agent/qwen-agent-2405-lv1-agent.png" referrerpolicy="no-referrer"><p><strong>长话短说:</strong> 我们开发了一个智能体用于理解包含百万字词的文档,虽然仅使用Qwen2模型的8k上下文,但效果超过RAG和长序列原生模型。我们还利用此智能体合成长上下文数据,用于训练长上下文的Qwen模型。近期,能够原生处理数百万字输入的大型语言模型(LLMs)成为了一种趋势。大部分工作集中在模型架构调整,如位置编码扩展或线性注意力机制等。然而,准备足够长度的微调数据作为讨论较少但同样重要的议题</p>
<p><a href="https://github.com/QwenLM/Qwen-Agent">Qwen-Agent</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/assistant_rag.py">RAG Code</a> | <a href="https://github.com/QwenLM/Qwen-Agent/blob/main/examples/parallel_doc_qa.py">Agent Code</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-agent-2405</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-agent-2405</guid>
<pubDate>Thu, 06 Jun 2024 03:59:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFASys:一行代码带你搞定多任务学习!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofasys/demo.jpg" referrerpolicy="no-referrer"><p>通用模型非常火!我们现在跟随多模态多任务学习的发展似乎看到了实现一个真正的通用模型的机会。我们此前推出的OFA便是朝着这个目标迈向的重要一步。但是,我们在实际实现过程中遇到了非常多的困难。比如说,把多任务训练的模型搭建起来,组织多任务的训练比如给数据打batch和保证训练稳定等等,都非常困难。因此,我们推出一个AI系统OFASys,它主要解决多模态多任务学习的实现问题。简单来说,它主要通过一个叫做</p>
<p><a href="https://arxiv.org/abs/2212.04408">论文</a> | <a href="https://github.com/OFA-Sys/OFASys">GitHub</a></p>
</description>
<link>https://qwen.ai/blog?id=ofasys</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofasys</guid>
<pubDate>Wed, 28 Dec 2022 10:01:21 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>Chinese CLIP: 中文图文对比学习预训练</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/cnclip/search.jpg" referrerpolicy="no-referrer"><p>CLIP[^1]是多模态表示学习领域一个现象级的模型。它不仅扮演基础模型,并且建立了视觉和语言的桥梁。它还推动了很多其他领域技术的发展,尤其是文本生成图像。然而,我们还需要特定语言的CLIP,尤其在现实应用中,比如跨模态检索。在此之前还没有效果较好的开源中文CLIP。因此我们希望通过这个项目推动中文多模态的发展。在诸如跨模态检索的图文应用中,语言往往扮演重要的角色。假设直接使用CLIP和翻译文本,</p>
<p><a href="https://arxiv.org/abs/2211.01335">论文</a> | <a href="https://github.com/OFA-Sys/Chinese-CLIP">Github</a> | <a href="https://www.modelscope.cn/models/damo/multi-modal_clip-vit-base-patch16_zh/summary">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/chinese-clip-zero-shot-image-classification">体验</a></p>
</description>
<link>https://qwen.ai/blog?id=chinese-clip</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=chinese-clip</guid>
<pubDate>Sat, 24 Dec 2022 06:54:19 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
<item>
<title>OFA:走向通用统一模型</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/ofa/uniter.jpg" referrerpolicy="no-referrer"><p>2022年可以说是属于通用模型的一年!随着多模态预训练的蓬勃发展,尤其是通用模型,我们看到实现一个具有处理多种模态的多种任务的能力的通用模型的机会。因此我们提出OFA[^1],即One-For-All。它是一个统一的多模态预训练模型,以统一的模型架构和任务形式兼容多模态和单模态的理解与生成任务。我们使用多模态多任务的方式预训练OFA,使其成为一个接近全能的模型。我们将OFA的模型和代码全部开源到社</p>
<p><a href="https://arxiv.org/abs/2202.03052">论文</a> | <a href="https://github.com/OFA-Sys/OFA">GitHub</a> | <a href="https://www.modelscope.cn/models?name=ofa">ModelScope</a> | <a href="https://huggingface.co/spaces/OFA-Sys/OFA-Generic_Interface">体验</a></p>
</description>
<link>https://qwen.ai/blog?id=ofa</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=ofa</guid>
<pubDate>Mon, 14 Nov 2022 08:01:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Research</category>
</item>
</channel>
</rss>
http://localhost:1200/qwen/research/zh-cn/Open-Source - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究 - Open-Source</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn/Open-Source" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Open-Source - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:28 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen-Image-Edit-2511: 一致性再提升</title>
<description><p>我们很高兴推出 Qwen-Image-Edit-2511,相比于Qwen-Image-Edit-2509,进行了包括一致性提升在内的多项增强。如需体验最新模型,欢迎访问 <a href="https://chat.qwen.ai/?inputFeature=image_edit">Qwen Chat</a> 并选择“图像编辑”功能。注意,线上版本有一定优化加速,如果要获取模型最佳效果,可以去 <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2511">ModelScope</a> 本地部署以获取最佳性能。Qwen-Image-Edit-2511 的主要特性包括:**</p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2511</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2511</guid>
<pubDate>Tue, 23 Dec 2025 05:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-VL:明察、深思、广行</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-VL/qwen3vl-head.png#center" referrerpolicy="no-referrer"><p>今天,我们正式推出全新升级的 <strong>Qwen3-VL</strong> 系列——这是迄今为止 Qwen 系列中最强大的视觉语言模型。在这一代模型中,我们在多个维度实现了全面跃升:无论是纯文本理解与生成,还是视觉内容的感知与推理;无论是上下文长度的支持能力,还是对空间关系、动态视频的理解深度;乃至在与Agent交互中的表现,Qwen3-VL 都展现出显著进步。今天,我们率先开源的是该系列的旗舰模型 —— **Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-vl</guid>
<pubDate>Mon, 22 Sep 2025 22:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3Guard: 实时安全,逐词响应</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3Guard/banner.png" referrerpolicy="no-referrer"><p>我们隆重推出 Qwen3Guard —— Qwen 家族中首款专为安全防护设计的护栏模型。该模型基于强大的 Qwen3 基础架构打造,并针对安全分类任务进行了专项微调,旨在为人工智能交互提供精准、可靠的安全保障。无论是用户输入的提示,还是模型生成的回复,Qwen3Guard 均可高效识别潜在风险,输出细粒度的风险等级与分类标签,助力实现更负责任的 AI 应用。在多项主流安全评测基准上,Qwen3G</p>
<p><a href="https://github.com/QwenLM/Qwen3Guard/blob/main/Qwen3Guard_Technical_Report.pdf">Tech Report</a> | <a href="https://github.com/QwenLM/Qwen3Guard">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3guard-68d2729abbfae4716f3343a1">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3Guard-308c39ef5ffb4b">ModelScope</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3guard</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3guard</guid>
<pubDate>Mon, 22 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen-Image-Edit-2509: 多图编辑支持,单图一致性提升</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg#center" referrerpolicy="no-referrer"><p>这个9月,我们很高兴推出 Qwen-Image-Edit-2509,作为 Qwen-Image-Edit 的月迭代版本。如需体验最新模型,欢迎访问 <a href="https://qwen.ai/">Qwen Chat</a> 并选择“图像编辑”功能。相比于8月发布的 Qwen-Image-Edit,Qwen-Image-Edit-2509 的主要特性包括:**Qwen-Image-Edit-2509 的首要更新是支</p>
<p><a href="https://qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen-Image">GITHUB</a> | <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-image-edit-2509</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-image-edit-2509</guid>
<pubDate>Mon, 22 Sep 2025 16:08:30 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Omni:新一代原生全模态大模型!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen3-Omni/q3o.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-Omni</strong>是新一代原生全模态大模型,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音输出。我们引入了多种升级来提升模型表现和效率。主要特点:Qwen3-Omni采用Thinker-Talker架构:Thinker负责文本生成,Talker专注于流式语音Token生成,直接接收来自Thinker的高层语义表征。为实现超低延迟流式生成,Tal</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen3-Omni-867aef131e7d4f">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen3-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen3-Omni/tree/main/assets/Qwen3_Omni.pdf">PAPER</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Omni-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-Omni-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni</guid>
<pubDate>Sun, 21 Sep 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-TTS-Flash:多音色 & 多语言 & 多方言的语音合成</title>
<description><img src="http://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-TTS-Flash/table2.png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-TTS-Flash</strong> 是支持多音色、多语言和多方言的旗舰语音合成模型,旨在生成自然且具有表现力的语音,目前可通过<a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>访问。主要特点:这里有一些样例展示了单说话人的多语种生成能力:这里有一些样例展示了中英文的音色:这里有一些样例展示了方言的音色:这里有一些样例展示了混</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-TTS-Demo">HUGGING FACE DEMO</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen3-TTS-Demo">MODELSCOPE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts</guid>
<pubDate>Sun, 21 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3-Next:迈向更极致的训练推理性价比</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-next.png" referrerpolicy="no-referrer"><p>我们认为<strong>Context Length Scaling</strong>和<strong>Total Parameter Scaling</strong>是未来大模型发展的两大趋势,为了进一步提升模型在长上下文和大规模总参数下的训练和推理效率,我们设计了全新的Qwen3-Next的模型结构。该结构相比Qwen3的MoE模型结构,进行了以下核心改进:<strong>混合注意力机制</strong>、<strong>高稀疏度 MoE 结构</strong>、一系列<strong>训练稳定友好的优化</strong></p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#2c9c4628c9yyd">API</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-next-68c25fd6838e585db8eeea9d">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-Next-c314f23bd0264a">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen3-next-80b">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-next</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-next</guid>
<pubDate>Wed, 10 Sep 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen3:思深,行速</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3-banner.png" referrerpolicy="no-referrer"><p>今天,我们宣布推出 <strong>Qwen3</strong>,这是 Qwen 系列大型语言模型的最新成员。我们的旗舰模型 <strong>Qwen3-235B-A22B</strong> 在代码、数学、通用能力等基准测试中,与 DeepSeek-R1、o1、o3-mini、Grok-3 和 Gemini-2.5-Pro 等顶级模型相比,表现出极具竞争力的结果。此外,小型 MoE 模型 <strong>Qwen3-30B-A3B</strong> 的激活参数数量是 QwQ</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen3">GitHub</a> | <a href="https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f">Hugging Face</a> | <a href="https://modelscope.cn/collections/Qwen3-9743180bdc6b48">ModelScope</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen-3">Kaggle</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3</guid>
<pubDate>Mon, 28 Apr 2025 20:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 Omni:看得见、听得到、会说话、能写作,样样精通!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-Omni/qwen_omni.png" referrerpolicy="no-referrer"><p>我们发布了 <strong>Qwen2.5-Omni</strong>,Qwen 模型家族中新一代端到端多模态旗舰模型。该模型专为全方位多模态感知设计,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音合成输出。想要体验最新的模型,请访问 <a href="https://chat.qwenlm.ai/">Qwen Chat</a> 并选择Qwen2.5-Omni-7B。该模型现已在 [Hugging Fa</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/Qwen2.5-Omni-7B">HUGGING FACE</a> | <a href="https://modelscope.cn/models/Qwen/Qwen2.5-Omni-7B">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/user-guide/qwen-omni">DASHSCOPE</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni">GITHUB</a> | <a href="https://github.com/QwenLM/Qwen2.5-Omni/blob/main/assets/Qwen2.5_Omni.pdf">PAPER</a> | <a href="https://modelscope.cn/studios/Qwen/Qwen2.5-Omni-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-omni</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-omni</guid>
<pubDate>Wed, 26 Mar 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-VL-32B: 更聪明、更轻量!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-VL-32B/qwen2.5vl-32b-vision.jpg" referrerpolicy="no-referrer"><p>今年一月底,我们推出了 Qwen2.5-VL 系列模型,获得了社区的广泛关注和积极反馈。在 Qwen2.5-VL 系列的基础上,我们使用强化学习持续优化模型,并使用 Apache 2.0 协议开源 32B 这个备受喜爱的参数规模的新 VL 模型—— <strong>Qwen2.5-VL-32B-Instruct</strong>。相比此前发布的 Qwen2.5-VL 系列模型,本次推出的 32B 模型的特点如下:我们与业内</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl-32b</guid>
<pubDate>Sun, 23 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>QwQ-32B: 领略强化学习之力</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-32b-final.jpg" referrerpolicy="no-referrer"><p>大规模强化学习(RL)有潜力超越传统的预训练和后训练方法来提升模型性能。近期的研究表明,强化学习可以显著提高模型的推理能力。例如,DeepSeek R1 通过整合冷启动数据和多阶段训练,实现了最先进的性能,使其能够进行深度思考和复杂推理。这一次,我们探讨了大规模强化学习(RL)对大语言模型的智能的提升作用,同时很高兴推出我们最新的推理模型 QwQ-32B。这是一款拥有 320 亿参数的模型,其性能</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://huggingface.co/Qwen/QwQ-32B">Hugging Face</a> | <a href="https://modelscope.cn/models/Qwen/QwQ-32B">ModelScope</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b</guid>
<pubDate>Wed, 05 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-1M: 支持100万Token上下文的开源Qwen模型</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/passkey_retrieval.png" referrerpolicy="no-referrer"><p>两个月前,我们升级了 <a href="https://qwen.ai/qwen2.5-turbo">Qwen2.5-Turbo</a>,使其支持最多一百万个Tokens的上下文长度。今天,我们正式推出开源的 Qwen2.5-1M 模型及其对应的推理框架支持。以下是本次发布的亮点:现在,你可以访问我们在 <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">Huggingface</a> 和 [Mo</p>
<p><a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-1M/Qwen2_5_1M_Technical_Report.pdf">Tech Report</a> | <a href="https://huggingface.co/Qwen">HuggingFace</a> | <a href="https://modelscope.cn/organization/qwen">ModelScope</a> | <a href="https://chat.qwenlm.ai/">Qwen Chat</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-1M-Demo">ModelScope Demo</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-1m</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-1m</guid>
<pubDate>Sun, 26 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5 VL!Qwen2.5 VL!Qwen2.5 VL!</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5-vl-Capybara.png" referrerpolicy="no-referrer"><p>我们发布了 <strong>Qwen2.5-VL</strong>,Qwen 模型家族的旗舰视觉语言模型,对比此前发布的 Qwen2-VL 实现了巨大的飞跃。欢迎访问 <a href="https://chat.qwenlm.ai/">Qwen Chat</a> 并选择 Qwen2.5-VL-72B-Instruct 进行体验。此外,我们在 [Hugging Face](https://huggingface.co/collections/Qwe</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-vl</guid>
<pubDate>Sun, 26 Jan 2025 11:08:41 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder 全系列: 强大、多样、实用。</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder-Family/32b-top.jpg" referrerpolicy="no-referrer"><p>今天,我们很高兴开源「强大」、「多样」、「实用」的 Qwen2.5-Coder 全系列模型,致力于持续推动 Open CodeLLMs 的发展。另外,Qwen2.5-Coder-32B-Instruct 的多编程语言代码修复能力同样令人惊喜,这将有助于用户理解和修改自己熟悉的编程语言,极大缓解陌生语言的学习成本。与 McEval 类似,MdEval 是多编程语言的代码修复基准,Qwen2.5-Co</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qwen2.5-coder">KAGGLE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder-family</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder-family</guid>
<pubDate>Mon, 11 Nov 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5: 基础模型大派对!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/qwen2.5-main.jpg" referrerpolicy="no-referrer"><p>在 Qwen2 发布后的过去三个月里,许多开发者基于 Qwen2 语言模型构建了新的模型,并为我们提供了宝贵的反馈。在这段时间里,我们专注于创建更智能、更博学的语言模型。今天,我们很高兴地向大家介绍 Qwen 家族的最新成员:<strong>Qwen2.5</strong>。我们将要宣布的可能是历史上最大的开源发布!让我们开始这场盛会吧!我们的最新发布包括了语言模型 <strong>Qwen2.5</strong>,以及专门针对编程的 **Qwen</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-llm">Qwen2.5 LLM</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-coder">Qwen2.5-Coder</a> | <a href="https://qwenlm.github.io/blog/qwen2.5-math">Qwen2.5-Math</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5</guid>
<pubDate>Wed, 18 Sep 2024 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Coder: 码无止境,学无止境!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/Qwen2.5-Coder/coder-main.png" referrerpolicy="no-referrer"><p>四月初,我们发布了 CodeQwen1.5, 得到了社区广泛的关注与喜爱。自那以后,我们一直在继续努力提升代码模型。今天,我们很高兴地宣布新一代的开放代码模型 Qwen2.5-Coder 的发布。并正式将 CodeQwen 的命名改为 Qwen-Coder,我们认为 Coder 更加拟人、灵动,期待其可以在未来真正与人类结对编程。Qwen2.5-Coder 是我们 Qwen2.5 开源家族的一员,</p>
<p><a href="https://github.com/QwenLM/Qwen2.5-Coder">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Coder-7B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-coder</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-coder</guid>
<pubDate>Wed, 18 Sep 2024 16:00:02 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2.5-Math: 世界领先的数学开源大语言模型</title>
<description><img src="http://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5/2024-08-qwen2.5-math-72B.png" referrerpolicy="no-referrer"><p>**🚨 Qwen2.5-Math主要被设计用于通过CoT或TIR的方式解中英数学题,我们不推荐在其他任务上使用该系列模型。**一个月前,我们开源了 Qwen 家族的第一款数学专项大语言模型- <a href="https://qwenlm.github.io/blog/qwen2-math/">Qwen2-Math</a>。 今天,我们将它再度升级并开源 <strong>Qwen2.5-Math</strong> 系列,包括基础模型 **Qw</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math</guid>
<pubDate>Wed, 18 Sep 2024 16:00:01 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-VL: 更清晰地看世界</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen2-VL/qwen2vl-head.jpeg" referrerpolicy="no-referrer"><p>经历了接近一年时间的持续努力,今天我们很高兴地宣布我们最新一代的视觉语言模型:<strong>Qwen2-VL</strong> !Qwen2-VL 基于 Qwen2 打造,相比 Qwen-VL,它具有以下特点:我们以 Apache 2.0 协议开源了 Qwen2-VL-2B 和 Qwen2-VL-7B,并发布了 Qwen2-VL-72B 的 API!开源代码已集成到 Hugging Face Transformers、v</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-VL">DEMO</a> | <a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://help.aliyun.com/zh/model-studio/developer-reference/qwen-vl-api">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-vl</guid>
<pubDate>Wed, 28 Aug 2024 16:24:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen2-Audio:开启语音对话!</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/demo/radar_compare_qwen_audio.png" referrerpolicy="no-referrer"><p>在一个通用的AI系统中,核心模型应该能够理解不同模态的信息。当前的大语言模型现在已经能够理解语言并进行推理,并且已经扩展到了更多的模态,包括视觉和音频。此前我们陆续发布了多个 Qwen 语言模型系列以及 Qwen-VL 和 Qwen-Audio 等多模态模型。今天,我们正式发布 Qwen2-Audio。这是 Qwen-Audio 的下一代版本,它能够接受音频和文本输入,并生成文本输出。Qwen2-</p>
<p><a href="https://huggingface.co/spaces/Qwen/Qwen2-Audio-Instruct-Demo">DEMO</a> | <a href="https://arxiv.org/pdf/2407.10759">PAPER</a> | <a href="https://github.com/QwenLM/Qwen2-Audio">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen2-audio-66b628d694096020e0c52ff6">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-audio</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-audio</guid>
<pubDate>Fri, 09 Aug 2024 08:22:39 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>你好,Qwen2</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen2/qwen.jpg" referrerpolicy="no-referrer"><p>历经数月努力, 我们很高兴迎来了Qwen系列模型从Qwen1.5到Qwen2的重大升级。这一次,我们为大家带来了:目前,我们已在Hugging Face和ModelScope上同步开源。期待听到你们的使用反馈!Qwen2系列包含5个尺寸的预训练和指令微调模型,其中包括Qwen2-0.5B、Qwen2-1.5B、Qwen2-7B、Qwen2-57B-A14B和Qwen2-72B。如下表所示:在Qwe</p>
<p><a href="https://github.com/QwenLM/Qwen2">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2-72B-Instruct">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2</guid>
<pubDate>Thu, 06 Jun 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>与 CodeQwen1.5 结对编程</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/codeqwen1.5/intro.png" referrerpolicy="no-referrer"><p>代码助手,是一种基于 LLMs 的智能化的编程工具,它可以帮助程序员更高效、更准确的编写代码,使得整个软件开发过程更加流畅和高效。然而流行的代码助手,比如 Github Copilot,依赖于闭源的商业模型,不仅昂贵还会引起如隐私、安全、版权等方面的担忧。幸运的是,开源社区正在致力于打造开放代码模型来实现开放的代码助手。近期涌现出了一批优秀的 Open CodeLLMs,比如 StarCoder2</p>
<p><a href="https://github.com/QwenLM/CodeQwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/CodeQwen1.5-7b-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=codeqwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=codeqwen1.5</guid>
<pubDate>Tue, 16 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
<item>
<title>Qwen1.5 介绍</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5/intro.jpg" referrerpolicy="no-referrer"><p>最近几个月,我们专注探索如何构建一个真正「卓越」的模型,并在此过程中不断提升开发者的使用体验。农历新年到来之际,我们推出通义千问开源模型 1.5 版本: <strong>Qwen1.5</strong>。我们开源了包括 0.5B、1.8B、4B、7B、14B、32B、72B 和 110B 共计 8 个不同规模的 Base 和 Chat 模型,, 以及一个 MoE 模型(点击[博客](https://qwenlm.githu</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5</guid>
<pubDate>Sun, 04 Feb 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Open-Source</category>
</item>
</channel>
</rss>

http://localhost:1200/qwen/research/zh-cn/Release - Success ✔️

<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
<channel>
<title>Qwen 研究 - Release</title>
<link>https://qwen.ai/research</link>
<atom:link href="http://localhost:1200/qwen/research/zh-cn/Release" rel="self" type="application/rss+xml"></atom:link>
<description>Qwen 研究 - Release - Powered by RSSHub</description>
<generator>RSSHub</generator>
<webMaster>contact@rsshub.app (RSSHub)</webMaster>
<language>en</language>
<lastBuildDate>Thu, 02 Apr 2026 12:12:29 GMT</lastBuildDate>
<ttl>5</ttl>
<item>
<title>Qwen3-TTS 全面升级: 音色设计与音色克隆!</title>
<description><p><strong>Qwen3-TTS</strong> 家族新推出两款模型,音色创造模型Qwen3-TTS-VD-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-design">Qwen API</a>访问)和音色克隆模型Qwen3-TTS-VC-Flash(可通过<a href="https://www.alibabacloud.com/help/zh/model-studio/qwen-tts-voice-cloning">Qwen API</a>访问)。主要特点:Qwen3-TTS 支持通过自然语言描述生成定制化的音色形象。用户可以随意输入声</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-vc-voicedesign</guid>
<pubDate>Mon, 22 Dec 2025 16:00:45 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Omni-Flash-2025-12-01:声形意合,令出智随!</title>
<description><p><strong>Qwen3-Omni</strong>是新一代原生全模态大模型,能够无缝处理文本、图像、音频和视频等多种输入形式,并通过实时流式响应同时生成文本与自然语音输出。我们引入了多种升级来提升模型表现和效率。<strong>Qwen3-Omni-Flash-2025-12-01</strong>是在Qwen3-Omni基础上进行全面升级的版本。此次升级版本主要特点为:在客观性能指标上,<strong>Qwen3-Omni-Flash-2025-12-01</strong>全模态能力全面跃升,各项能力均显著超越Qwen3-Omni-Flash:此次升级,让 Qwen3-Omni-Flash-20251201 在全模态场景下真正做到“声形意合,令出智随”,为用户带来</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-omni-flash-20251201</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-omni-flash-20251201</guid>
<pubDate>Mon, 08 Dec 2025 21:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-TTS 全面升级!49种音色 + 10种语言 + 9种方言</title>
<description><p><strong>Qwen3-TTS</strong> 是支持多音色、多语种和多方言的旗舰语音合成模型,致力于实现稳定、自然和高效的语音生成,目前可通过<a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a>访问。主要改进:Qwen3-TTS 提供了个性鲜明、情感饱满的多元声音形象供用户选择,可满足多样化的场景需求。以下是一些合成样音:Qwen3-TTS 深度支持多种汉语方言表达,精准还原口音语调与地域韵味。以下是一些合成样音:Qwen3-TTS 同样支持了地道自然的多语种音色,发声习惯更贴近母语表达。以下是一些合成样例:通过 Qwen API 使用 Qwe</p>
</description>
<link>https://qwen.ai/blog?id=qwen3-tts-1128</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-tts-1128</guid>
<pubDate>Thu, 04 Dec 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen DeepResearch: 当灵感不再需要理由</title>
<description><p><a href="https://chat.qwen.ai/?inputFeature=deep_research">点我体验最新 Qwen DeepResearch</a>_<strong>灵感是如何死掉的?</strong>_它通常不是死于“不够好”,而是死于“太麻烦”。当一个念头闪现时,它还是脆弱的、未经证实的。我们的大脑在短暂兴奋后,会立刻开始评估“成本”:就在这个“成本评估”的瞬间,绝大多数灵感就被“理性”地扼杀了。我们下意识地回避了它,因为“深入研究”的传统门槛实在太高。我们一直在思考,如何让“深入研究”不再是一个需要启动的重型任务,而是成为思考的自然延伸。**这就是 Qwen DeepResearch 诞生的使命。**我们想做</p>
</description>
<link>https://qwen.ai/blog?id=qwen-deepresearch</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-deepresearch</guid>
<pubDate>Wed, 12 Nov 2025 20:59:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-Max:大就是好</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwen3max-banner.png" referrerpolicy="no-referrer"><p>继 Qwen3-2507 系列发布之后,我们非常高兴地推出 Qwen3-Max —— 我们迄今为止规模最大、能力最强的模型。目前,Qwen3-Max-Instruct 的预览版在 LMArena 文本排行榜上位列第三,超越了 GPT-5-Chat。正式版本在代码能力和智能体(agent)能力方面进一步提升,在涵盖知识、推理、编程、指令遵循、人类偏好对齐、智能体任务和多语言理解的全面基准测试中均达</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://help.aliyun.com/zh/model-studio/models#qwen-max-cn-bj">API</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-max</guid>
<pubDate>Wed, 24 Sep 2025 04:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3-LiveTranslate:视、听、说全模态同传大模型!</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3-LiveTranslate-Flash/blog_pic_without_subtitles(1).png#center" referrerpolicy="no-referrer"><p><strong>Qwen3-LiveTranslate-Flash</strong> 是一款基于大语言模型的高精度、高响应、高鲁棒性的多语言实时音视频同传模型。依托Qwen3-Omni强大的基座能力、海量多模态数据、百万小时音视频数据,Qwen3-LiveTranslate-Flash 实现了覆盖18种语言的离线和实时两种音视频翻译能力。核心亮点:在公开测试集上中英及多语言语音翻译,Qwen3-LiveTranslate-</p>
<p><a href="https://help.aliyun.com/document_detail/2983281.html">DASHSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen3-Livetranslate-Demo">HUGGING FACE DEMO</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-livetranslate</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-livetranslate</guid>
<pubDate>Mon, 22 Sep 2025 23:00:26 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>旅行规划师:你的专属智能行程设计师</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/qwen-travel-planner/zn_q1.png" referrerpolicy="no-referrer"><p>我们非常高兴推出全新的<strong>旅行规划助手</strong>,这是一个基于 <strong>Multi-Agent 架构</strong> 并具备强大 <strong>真实工具调用能力</strong> 的旅行规划系统,能够高效应对复杂、多变的行程安排任务。无论你计划的是多城市连线旅行,还是单城深度游,它都能为你提供精准、可落地的旅行方案:旅行规划是一项系统工程,涵盖交通、景点、住宿、用餐等环节,它们环环相扣、相互影响,任何单一 Agent 都难以全面驾驭其中的复杂</p>
<p><a href="https://chat.qwen.ai/?inputFeature=travel">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=agent</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=agent</guid>
<pubDate>Mon, 22 Sep 2025 21:00:59 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Time to Speak Some Dialects, Qwen-TTS!</title>
<description><p>我们通过 <a href="https://help.aliyun.com/zh/model-studio/qwen-tts">Qwen API</a> 更新了 <strong>Qwen-TTS</strong> ( <code>qwen-tts-latest</code> or <code>qwen-tts-2025-05-22</code> ) 的最新版本。Qwen-TTS 使用了超过 300 万小时的大规模语料库进行训练,合成效果实现了人类级别的自然度和表现力。比较亮眼的是,Qwe</p>
<p><a href="https://help.aliyun.com/zh/model-studio/qwen-tts">API</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-tts</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-tts</guid>
<pubDate>Fri, 27 Jun 2025 07:01:34 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen VLo: 从“看懂”世界到“描绘”世界</title>
<description><img src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen-VLo/vlo.png" referrerpolicy="no-referrer"><p>多模态大模型的演进正在不断突破我们对技术边界的认知。从最初的 QwenVL 到如今的 Qwen2.5 VL ,我们在提升模型对图像内容的理解能力方面取得了一些进展。今天,我们正式推出 Qwen VLo ——一个多模态统一理解与生成模型。这一全新升级的模型不仅能够“看懂”世界,更能基于理解进行高质量的再创造,真正实现了从感知到生成的跨越。需要注意的是,这是一款预览版本,您可以通过 Qwen Chat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-vlo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vlo</guid>
<pubDate>Thu, 26 Jun 2025 14:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen3 Embedding:新一代文本表征与排序模型</title>
<description><img src="https://mitalinlp.oss-cn-hangzhou.aliyuncs.com/dingkun/models/qwen-embedding/q3e-mteb-result-0605.png" referrerpolicy="no-referrer"><p>我们正式发布 Qwen3 Embedding 系列模型, Qwen 模型家族的新成员。该系列模型专为文本表征、检索与排序任务设计,基于 Qwen3 基础模型进行训练,充分继承了 Qwen3 在多语言文本理解能力方面的优势。在多项基准测试中,Qwen3 Embedding 系列在文本表征和排序任务中展现了卓越的性能。我们使用了 Apache 2.0 协议在 Hugging Face 和 ModelS</p>
<p><a href="https://github.com/QwenLM/Qwen3-Embedding">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen3-embedding</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen3-embedding</guid>
<pubDate>Thu, 05 Jun 2025 13:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ-Max:有依据地思考</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ-Max/test_time.png" referrerpolicy="no-referrer"><p>去年12月,我们推出了 QVQ-72B-Preview, 作为一个探索模型,它存在很多问题。今天,我们正式推出 QVQ-Max 视觉推理模型的第一版。这款模型的特点是,它不仅能够“看懂”图片和视频里的内容,还能结合这些信息进行分析、推理,甚至给出解决方案。从数学题到生活小问题,从编程代码到艺术创作,QVQ-Max 都表现出了不俗的能力。虽然这只是我们的第一个版本,但它的潜力已经让人眼前一亮。Mat</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://github.com/QwenLM/Qwen2.5-VL">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5">HUGGING FACE</a> | <a href="https://modelscope.cn/collections/Qwen25-VL-58fbb5d31f1d47">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-max-preview</guid>
<pubDate>Thu, 27 Mar 2025 16:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title><think>...</think> QwQ-Max-Preview</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/qwq-banner-zh.jpg" referrerpolicy="no-referrer"><p>这篇博客出自 QwQ-Max-Preview 之手。希望各位看官喜欢!我们很高兴向大家介绍 QwQ-Max-Preview,这是 Qwen 系列的最新成果。这一版本基于 Qwen2.5-Max 构建,在数学、编程以及通用任务中展现了更强的能力,同时在与 Agent 相关的工作流中也有不错的表现。作为即将发布的 QwQ-Max 的预览版,这个版本还在持续优化中。我们计划在不久的将来以 Apache</p>
<p><a href="https://chat.qwen.ai/">QWEN CHAT</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-max-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-max-preview</guid>
<pubDate>Mon, 24 Feb 2025 18:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2.5-Max:探索大规模 MoE 模型的智能</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-max-banner.png" referrerpolicy="no-referrer"><p>过去有一种观点认为,持续地增长数据规模和模型参数规模是一种通向 AGI 的可能的路径。然而,整个大模型社区对于训练超大规模的模型的经验都相对匮乏,不论是稠密模型还是 MoE 模型。近期,DeepSeek V3 的发布让大家了解到超大规模 MoE 模型的效果及实现方法,而同期,Qwen 也在研发超大规模的 MoE 模型 Qwen2.5-Max,使用超过 20 万亿 token 的预训练数据及精心设计</p>
<p><a href="https://chat.qwenlm.ai/">QWEN CHAT</a> | <a href="https://www.alibabacloud.com/help/en/model-studio/getting-started/first-api-call-to-qwen?spm=a2c63.p38356.help-menu-2400256.d_0_1_0.1f6574a72ddbKE">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-max</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-max</guid>
<pubDate>Tue, 28 Jan 2025 15:00:04 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>面向有效的数学推理过程监督</title>
<description><img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2.5/Qwen2.5-Math-PRM/Qwen2.5-Math-PRM.png" referrerpolicy="no-referrer"><p>近年来,大型语言模型(LLMs)在数学推理方面取得了显著进展,但它们仍可能犯错误,如计算错误或逻辑错误,导致得出错误结论。<br>
此外,即使最终答案正确,这些强大的模型也经常编造看似合理的推理步骤,其中最终答案基于有缺陷的计算或推导过程,这削弱了LLMs推理过程的可靠性和可信度。<br>
因此,自动识别推理过程中的错误对于其可扩展监督变得越来越重要。过程奖励模型(Process Reward Models, P</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/collections/Qwen/qwen25-math-66eaa240a1b7d5ee65f1da3e">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-math-prm</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-math-prm</guid>
<pubDate>Mon, 13 Jan 2025 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QVQ: 更睿智地看世界</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/QVQ/QVQ.jpg" referrerpolicy="no-referrer"><p>在人类的思维中,语言和视觉紧密交织,塑造着我们感知和理解世界的方式。我们的推理能力深深植根于语言思维和视觉记忆之中。那么,当我们将这些能力赋予人工智能时,会发生什么呢?如今的大语言模型已经展现出卓越的推理能力,但我们不禁思考:它们能否通过掌握视觉理解的力量,攀登认知能力的新高峰?设想一下,一个人工智能能够像物理学大师一样,面对复杂的物理问题,沉着冷静地通过逻辑推理找到解决方案。正是这样的愿景激发我</p>
<p><a href="https://github.com/QwenLM/Qwen2-VL">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://www.kaggle.com/models/qwen-lm/qvq-72b-preview">KAGGLE</a> | <a href="https://huggingface.co/Qwen/QVQ-72B-Preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qvq-72b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qvq-72b-preview</guid>
<pubDate>Tue, 24 Dec 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>QwQ: 思忖未知之界</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwq-32b-preview/QwQ-32B-Preview_result.png" referrerpolicy="no-referrer"><p>*注意:QwQ 的发音为 /kwju:/ ,与单词 "quill" 的读音近似。*思考、质疑、理解,是人类探索未知的永恒追求。在这条探索之路上,QwQ犹如一位怀抱无尽好奇的学徒,以思考和疑问照亮前路。QwQ体现了古老的哲学精神:它深知自己一无所知,而这种认知正是其好奇心的源泉。在探寻答案的过程中,它始终保持自省,以理性之光审视每一个假设,在不同的思维维度中穿行,追寻更深层的真理。然而,正如所有智慧</p>
<p><a href="https://github.com/QwenLM/Qwen2.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/QwQ-32B-preview">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwq-32b-preview</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwq-32b-preview</guid>
<pubDate>Wed, 27 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>将上下文长度扩展至百万 Tokens !</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Turbo/cover_cn.png" referrerpolicy="no-referrer"><p>在 Qwen2.5 发布之后,我们听到社区对处理更长序列的需求。在这段时间,我们针对长序列处理能力以及长序列下的推理效率进行了很多优化。今天,我们隆重推出新的 Qwen2.5-Turbo 版本,其特点在于:现在,你可以通过[阿里云大模型服务平台](https://help.aliyun.com/zh/model-studio/developer-reference/what-is-qwen-llm</p>
<p><a href="https://help.aliyun.com/zh/model-studio/getting-started/first-api-call-to-qwen">API文档</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen2.5-Turbo-1M-Demo">HuggingFace Demo</a> | <a href="https://www.modelscope.cn/studios/Qwen/Qwen2.5-Turbo-1M-Demo">ModelScope Demo</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2.5-turbo</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2.5-turbo</guid>
<pubDate>Thu, 14 Nov 2024 16:00:03 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen2-Math,新一代数学模型</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/assets/blog/qwen2-math/fig1.jpg" referrerpolicy="no-referrer"><p>**🚨 此模型目前主要支持英语。我们将尽快推出中英双语版本。**在过去的一年里,我们非常关注大模型的推理能力的提升,尤其关注其在数学相关的任务上的表现。今天,我们非常高兴地介绍 Qwen2 开源家族的新成员——Qwen2-Math-1.5B/7B/72B 系列。Qwen2-Math 是一系列基于 Qwen2 LLM 构建的专门用于数学解题的语言模型,其数学能力显著超越了开源模型,甚至超过了闭源模</p>
<p><a href="https://github.com/QwenLM/Qwen2-Math">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen2-math</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen2-math</guid>
<pubDate>Wed, 07 Aug 2024 16:00:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-Max-0428模型介绍</title>
<description><img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/arena_leaderboard.jpg" referrerpolicy="no-referrer"><p>此前,我们开源了Qwen1.5系列的模型,参数规模最小至5亿,最大至1100亿。这一次,我们推出更大规模模型Qwen-Max-0428(通义千问网页端及APP产品版本从2.1升级至2.5)。Qwen-Max-0428是经过指令微调的Chat模型。近期该模型登陆了<a href="https://chat.lmsys.org/">Chatbot Arena</a>,并登榜前十。此外,我们在MT-Bench的评测上也观察到</p>
<p><a href="https://dashscope.aliyun.com/">API</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen-Max-0428">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-max-0428</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-max-0428</guid>
<pubDate>Sat, 11 May 2024 10:10:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-110B:Qwen1.5系列的首个千亿参数开源模型</title>
<description><p>近期开源社区陆续出现了千亿参数规模以上的大模型,这些模型都在各项评测中取得杰出的成绩。今天,我们开源1100亿参数的Qwen1.5系列首个千亿参数模型Qwen1.5-110B,该模型在基础能力评估中与Meta-Llama3-70B相媲美,在Chat评估中表现出色,包括MT-Bench和AlpacaEval 2.0。Qwen1.5-110B与其他Qwen1.5模型相似,采用了相同的Transform</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-Demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-110b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-110b</guid>
<pubDate>Thu, 25 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-32B:Qwen1.5语言模型系列的最后一块拼图</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen1.5-32b/32b.png" referrerpolicy="no-referrer"><p>开源社区长期以来一直在寻求一种能在性能、效率和内存占用之间达到理想平衡的模型。尽管出现了诸如Qwen1.5-72B和DBRX这样的SOTA模型,但这些模型持续面临诸如内存消耗巨大、推理速度缓慢以及显著的微调成本等问题。当前,参数量约30B的模型往往在这方面被看好,得到很多用户的青睐。顺应这一趋势,我们推出Qwen1.5语言模型系列的最新成员:Qwen1.5-32B和Qwen1.5-32B-Chat</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/Qwen1.5-72B-Chat">DEMO</a> | <a href="https://github.com/QwenLM/Qwen/blob/main/assets/wechat.png">WeChat</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen1.5-32b</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen1.5-32b</guid>
<pubDate>Tue, 02 Apr 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen1.5-MoE: 1/3的激活参数量达到7B模型的性能</title>
<description><p>今天,我们推出Qwen系列的首个MoE模型,Qwen1.5-MoE-A2.7B。它仅拥有27亿个激活参数,但其性能却能与当前最先进的70亿参数模型,如Mistral 7B和Qwen1.5-7B相媲美。相较于包含65亿个Non-Embedding参数的Qwen1.5-7B,Qwen1.5-MoE-A2.7B只有20亿个Non-Embedding参数,约为原模型大小的三分之一。此外,相比Qwen1.5</p>
<p><a href="https://github.com/QwenLM/Qwen1.5">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://huggingface.co/spaces/Qwen/qwen1.5-MoE-A2.7B-Chat-demo">DEMO</a> | <a href="https://discord.gg/yPEP2vHTu4">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen-moe</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-moe</guid>
<pubDate>Thu, 28 Mar 2024 03:31:44 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen-VL全新升级!</title>
<description><p>我们在 Qwen 语言模型的基础上,结合此前我们提出的多模态多任务训练,以解决多模态模型在泛化能力上的局限性,并于 2023 年 9 月开源了多模态模型 Qwen-VL。最近,Qwen-VL 系列有了重大升级,推出了两个增强版本:Qwen-VL-Plus 和 Qwen-VL-Max。这两个版本的关键提升包括:相比于开源版本的 Qwen-VL,这两个模型在多个文本-图像多模态任务中与 Gemini</p>
</description>
<link>https://qwen.ai/blog?id=qwen-vl</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen-vl</guid>
<pubDate>Thu, 25 Jan 2024 05:33:00 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
<item>
<title>Qwen介绍</title>
<description><img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/assets/blog/qwen/family.png" referrerpolicy="no-referrer"><p>四个月前,我们首次发布Qwen-7B大型语言模型(LLM),正式开启了我们的开源之旅。今天,我们介绍Qwen开源家族,更全面的展示我们的工作和目标。下面是开源项目和社区的重要链接。Additionally, we have WeChat groups for chatting and we invite you to join the groups through the provided lin</p>
<p><a href="https://arxiv.org/abs/2309.16609">PAPER</a> | <a href="https://github.com/QwenLM/Qwen">GITHUB</a> | <a href="https://huggingface.co/Qwen">HUGGING FACE</a> | <a href="https://modelscope.cn/organization/qwen">MODELSCOPE</a> | <a href="https://discord.gg/CV4E9rpNSD">DISCORD</a></p>
</description>
<link>https://qwen.ai/blog?id=qwen</link>
<guid isPermaLink="false">https://qwen.ai/blog?id=qwen</guid>
<pubDate>Tue, 23 Jan 2024 14:13:29 GMT</pubDate>
<author>QwenTeam</author>
<category>Release</category>
</item>
</channel>
</rss>
Contributor
Auto Review: No clear rule violations found in the current diff.
Involved Issue / 该 PR 相关 Issue
Close #
Example for the Proposed Route(s) / 路由地址示例
New RSS Route Checklist / 新 RSS 路由检查表
Puppeteer

Note / 说明
Adds a Qwen Research route that fetches the research article list from the official qwen.ai JSON API. It supports English and Chinese (en/zh-cn) and filtering by tag (Research, Open-Source, Release). Each article includes the title, cover image, the introduction rendered as HTML, and external links (Paper, GitHub, etc.) extracted from tokenLinks.
Data source: https://qwen.ai/api/page_config?code=research.research-list
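The mapping from the API payload to feed items can be sketched roughly as follows. This is a minimal, hypothetical illustration: the field names (`introduction`, `tokenLinks`, `blogId`) are assumptions based on the PR note above, not a confirmed schema of the qwen.ai API, and the real route in this PR may differ.

```typescript
// Hypothetical shape of one article from the page_config API.
// Field names are assumptions inferred from the PR description.
interface ApiArticle {
    title: string;
    blogId: string; // used to build https://qwen.ai/blog?id=<blogId>
    introduction: string; // plain-text summary
    tokenLinks?: { name: string; url: string }[]; // e.g. Paper / GitHub links
}

interface RssItem {
    title: string;
    link: string;
    description: string;
}

// Render the introduction as HTML and append the external links,
// mirroring the sample feed items above where a "GITHUB | HUGGING FACE"
// link row follows the summary paragraph.
function toRssItem(article: ApiArticle): RssItem {
    const links = (article.tokenLinks ?? [])
        .map((l) => `<a href="${l.url}">${l.name}</a>`)
        .join(' | ');
    const description =
        `<p>${article.introduction}</p>` + (links ? `<p>${links}</p>` : '');
    return {
        title: article.title,
        link: `https://qwen.ai/blog?id=${article.blogId}`,
        description,
    };
}
```

A usage example: `toRssItem({ title: 'Hello, Qwen2', blogId: 'qwen2', introduction: '…', tokenLinks: [{ name: 'GITHUB', url: 'https://github.com/QwenLM/Qwen2' }] })` yields an item whose `link` is `https://qwen.ai/blog?id=qwen2`, matching the `<link>` elements in the samples above.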