
fix: Updated utils.py to fix stop token issue #56

Merged
erictang000 merged 1 commit into NovaSky-AI:main from AtakanTekparmak:main
Jul 4, 2025
Conversation

@AtakanTekparmak
Contributor

I was getting this error when adding stop tokens to the generator config by passing `"stop": [token1, token2, ...]`:

```
(VLLMInferenceEngine pid=132934)   File "/home/ubuntu/.cache/uv/builds-v0/.tmpqd4VTJ/lib/python3.12/site-packages/vllm/sampling_params.py", line 378, in __post_init__
(VLLMInferenceEngine pid=132934)     self._verify_args()
(VLLMInferenceEngine pid=132934)   File "/home/ubuntu/.cache/uv/builds-v0/.tmpqd4VTJ/lib/python3.12/site-packages/vllm/sampling_params.py", line 444, in _verify_args
(VLLMInferenceEngine pid=132934)     assert isinstance(self.stop, list)
(VLLMInferenceEngine pid=132934)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VLLMInferenceEngine pid=132934) AssertionError
```

This change fixes it; stop tokens can now be added to the generator config.
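The exact change in utils.py isn't shown in this conversation, but the traceback indicates vLLM's `SamplingParams._verify_args` asserts that `stop` is a plain Python `list`, while config loaders such as OmegaConf/Hydra typically return a `ListConfig` (which is not a `list` subclass). A minimal sketch of the kind of coercion that resolves this, assuming a hypothetical helper named `normalize_stop`, might look like:

```python
def normalize_stop(stop):
    """Coerce a config-provided stop value into the plain list vLLM expects.

    Handles None (no stop tokens), a single string, and list-like
    containers such as OmegaConf ListConfig or tuples.
    """
    if stop is None:
        return None
    if isinstance(stop, str):
        return [stop]
    # ListConfig, tuple, and other iterables become a plain Python list,
    # satisfying vLLM's `assert isinstance(self.stop, list)` check.
    return list(stop)
```

The coerced value would then be passed when constructing `SamplingParams`, e.g. `SamplingParams(stop=normalize_stop(config.get("stop")))`.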

@erictang000 erictang000 self-requested a review July 4, 2025 18:50
@erictang000 erictang000 self-assigned this Jul 4, 2025
Collaborator

@erictang000 erictang000 left a comment


makes sense, thanks for reporting and fixing this issue!

@erictang000 erictang000 merged commit 8053c90 into NovaSky-AI:main Jul 4, 2025
3 checks passed
fannie1208 pushed a commit to vinid/SkyRL that referenced this pull request Aug 19, 2025

2 participants