
Conversation


@Zerohertz Zerohertz commented Sep 18, 2025

Continued from #23649, #23743, #24092, #24740, #24791

Related: #25020


Note

Warnings fixed:

WARNING -  mkdocs_autorefs: api/vllm/entrypoints/openai/serving_engine.md: from /Users/zerohertz/Downloads/opensources/vllm/vllm/entrypoints/openai/serving_engine.py:699: (vllm.entrypoints.openai.serving_engine.OpenAIServing._tokenize_prompt_input_async) Could not find cross-reference target 'vllm.entrypoints.openai.serving_engine.OpenAIServing._tokenize_prompt_input_or_inputs'
WARNING -  mkdocs_autorefs: api/vllm/entrypoints/openai/serving_engine.md: from /Users/zerohertz/Downloads/opensources/vllm/vllm/entrypoints/openai/serving_engine.py:720: (vllm.entrypoints.openai.serving_engine.OpenAIServing._tokenize_prompt_inputs_async) Could not find cross-reference target 'vllm.entrypoints.openai.serving_engine.OpenAIServing._tokenize_prompt_input_or_inputs'

WARNING -  griffe: vllm/distributed/device_communicators/shm_object_storage.py:255: Failed to get 'name: description' pair from 'frees the maximum size of the ring buffer.'
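
The griffe warning above comes from a Google-style docstring whose continuation line is not indented under its parameter, so griffe tries to parse it as a new `name: description` pair. A minimal sketch of the broken and fixed layouts (hypothetical class and parameter names, not the actual code in `shm_object_storage.py`):

```python
# Hypothetical sketch only: illustrates the docstring indentation issue behind
# the griffe warning, not the real vLLM code.

class RingBufferSketch:
    def free_buf(self, nbytes: int) -> None:
        """Free space in the ring buffer.

        Args:
            nbytes: number of bytes to free. When no size is given, this
        frees the maximum size of the ring buffer.
        """
        # The continuation line above is not indented under `nbytes`, so griffe
        # reads it as a new "name: description" entry and emits the warning.

    def free_buf_fixed(self, nbytes: int) -> None:
        """Free space in the ring buffer.

        Args:
            nbytes: number of bytes to free. When no size is given, this
                frees the maximum size of the ring buffer.
        """
        # Indenting the continuation line keeps it attached to `nbytes`.
```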


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses several documentation build warnings. The changes fix a docstring formatting issue in vllm/distributed/device_communicators/shm_object_storage.py by correcting the indentation, and remove broken cross-references from docstrings in vllm/entrypoints/openai/serving_engine.py. The changes are straightforward, resolve the reported warnings, and introduce no functional changes. The pull request is good to merge.
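
On the serving_engine.py side, the mkdocs_autorefs warnings come from docstring cross-references that point at a private method the docs build cannot resolve. A rough sketch of the before/after (hypothetical docstring wording, not the exact vLLM text):

```python
# Hypothetical sketch only: shows the kind of docstring cross-reference that
# triggers the mkdocs_autorefs warning, and a plain-text replacement.

class OpenAIServingSketch:
    async def _tokenize_prompt_input_async(self, prompt_input: str) -> None:
        """Before: the [text][target] reference cannot be resolved at build time.

        A simpler asynchronous version of
        [`_tokenize_prompt_input_or_inputs`][vllm.entrypoints.openai.serving_engine.OpenAIServing._tokenize_prompt_input_or_inputs].
        """

    async def _tokenize_prompt_input_async_fixed(self, prompt_input: str) -> None:
        """After: plain inline code, nothing for autorefs to resolve.

        A simpler asynchronous version of
        `OpenAIServing._tokenize_prompt_input_or_inputs`.
        """
```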

@vllm-bot vllm-bot merged commit b419937 into vllm-project:main Sep 18, 2025
17 checks passed
debroy-rh pushed a commit to debroy-rh/vllm that referenced this pull request Sep 19, 2025