
Commit a1817d0

chaunceyjiang authored and wuisawesome committed
[Bugfix] Fix AssertionError: skip_special_tokens=False is not supported for Mistral tokenizers (vllm-project#16964)
Signed-off-by: chaunceyjiang <[email protected]>
1 parent 697de45 commit a1817d0

File tree: 1 file changed, +8 −4 lines


vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py

Lines changed: 8 additions & 4 deletions
@@ -72,10 +72,14 @@ def __init__(self, tokenizer: AnyTokenizer):
 
     def adjust_request(
             self, request: ChatCompletionRequest) -> ChatCompletionRequest:
-        if request.tools and request.tool_choice != 'none':
-            # do not skip special tokens because mistral uses the special
-            # tokens to indicate the start and end of the tool calls
-            # information.
+        if not isinstance(
+                self.model_tokenizer, MistralTokenizer
+        ) and request.tools and request.tool_choice != 'none':
+            # Do not skip special tokens when using chat template
+            # with Mistral parser as TOOL_CALL token is needed
+            # for tool detection.
+            # Note: we don't want skip_special_tokens=False
+            # with MistralTokenizer as it is incompatible
             request.skip_special_tokens = False
         return request
 
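
For reference, below is a minimal, self-contained sketch of the guard this commit introduces. The `Request`, `HFTokenizer`, and stand-in `MistralTokenizer` classes are hypothetical placeholders for vLLM's `ChatCompletionRequest`, a Hugging Face tokenizer, and vLLM's `MistralTokenizer`; the real code lives in `MistralToolParser.adjust_request` and reads the tokenizer from `self.model_tokenizer` rather than taking it as an argument. The sketch only mirrors the control flow of the patched method.

```python
# Minimal sketch of the guard added in this commit. All classes below are
# hypothetical stand-ins for the real vLLM types.
from dataclasses import dataclass
from typing import Any, Optional


class MistralTokenizer:
    """Stand-in for vLLM's MistralTokenizer."""


class HFTokenizer:
    """Stand-in for a Hugging Face tokenizer."""


@dataclass
class Request:
    """Stand-in for the ChatCompletionRequest fields used by the parser."""
    tools: Optional[list] = None
    tool_choice: Any = "auto"
    skip_special_tokens: bool = True


def adjust_request(tokenizer: Any, request: Request) -> Request:
    # Keep special tokens only for non-Mistral tokenizers: the HF-style
    # path needs the TOOL_CALL token in the decoded text for tool
    # detection, while MistralTokenizer raises an AssertionError when
    # asked to decode with skip_special_tokens=False.
    if (not isinstance(tokenizer, MistralTokenizer)
            and request.tools and request.tool_choice != 'none'):
        request.skip_special_tokens = False
    return request


if __name__ == "__main__":
    hf_req = adjust_request(HFTokenizer(), Request(tools=[{"type": "function"}]))
    mi_req = adjust_request(MistralTokenizer(), Request(tools=[{"type": "function"}]))
    print(hf_req.skip_special_tokens)  # False -> special tokens kept in the output
    print(mi_req.skip_special_tokens)  # True  -> left unchanged, avoiding the AssertionError
```

In short, the `isinstance` check makes the `skip_special_tokens = False` adjustment apply only to tokenizers that support it, which is what resolves the AssertionError named in the commit title.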
