Conversation
@Zerohertz Zerohertz commented Aug 26, 2025

As discussed with @hmellor on Slack, this PR addresses the warnings generated when running mkdocs build. (#22588)

Please let me know if you have any feedback!
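For context, most of the griffe warnings below fall into two buckets: parameters documented without a type annotation in the function signature, and docstring entries whose names no longer match the signature. A minimal before/after sketch of the fix pattern (hypothetical function, not actual vLLM code):

```python
from typing import Any

# Before: `def attn(q, k, v, **kwargs)` with untyped docstring entries
# triggers "No type or annotation for parameter 'q'" (and friends).
# After: annotate in the signature; griffe pairs each docstring entry
# with its annotation and the warnings disappear.
def attn(q: list[float], k: list[float], v: list[float], **kwargs: Any) -> list[float]:
    """Toy elementwise op illustrating the docstring/annotation pairing.

    Args:
        q: Query values.
        k: Key values.
        v: Value values.
        **kwargs: Extra options, ignored here (illustration only).

    Returns:
        Elementwise q * k + v, purely for demonstration.
    """
    return [qi * ki + vi for qi, ki, vi in zip(q, k, v)]
```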


Note

Warnings fixed:

WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:461: No type or annotation for parameter 'q'
WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:462: No type or annotation for parameter 'k'
WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:463: No type or annotation for parameter 'v'
WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:464: No type or annotation for parameter 'ed'
WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:465: No type or annotation for parameter 'block_size'
WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:466: No type or annotation for parameter 'kv_history'
WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:469: No type or annotation for returned value 'output'
WARNING -  griffe: vllm/model_executor/layers/lightning_attn.py:470: No type or annotation for returned value 'kv'

WARNING -  griffe: vllm/model_executor/layers/linear.py:236: No type or annotation for parameter 'bias'
WARNING -  griffe: vllm/model_executor/layers/linear.py:236: Parameter 'bias' does not appear in the function signature
WARNING -  griffe: vllm/model_executor/layers/linear.py:381: No type or annotation for parameter 'output_size'
WARNING -  griffe: vllm/model_executor/layers/linear.py:381: Parameter 'output_size' does not appear in the function signature

WARNING -  griffe: vllm/attention/backends/differential_flash_attn.py:808: No type or annotation for parameter 'query'
WARNING -  griffe: vllm/attention/backends/differential_flash_attn.py:808: Parameter 'query' does not appear in the function signature
WARNING -  griffe: vllm/attention/backends/differential_flash_attn.py:809: No type or annotation for parameter 'key'
WARNING -  griffe: vllm/attention/backends/differential_flash_attn.py:809: Parameter 'key' does not appear in the function signature
WARNING -  griffe: vllm/attention/backends/differential_flash_attn.py:810: No type or annotation for parameter 'value'
WARNING -  griffe: vllm/attention/backends/differential_flash_attn.py:810: Parameter 'value' does not appear in the function signature
WARNING -  griffe: vllm/attention/backends/differential_flash_attn.py:812: Failed to get 'name: description' pair from 'kv_cache = [2, num_blocks, block_size, num_kv_heads, head_size]'

WARNING -  griffe: vllm/engine/async_llm_engine.py:482: No type or annotation for parameter '*args'
WARNING -  griffe: vllm/engine/async_llm_engine.py:483: No type or annotation for parameter '**kwargs'

WARNING -  griffe: vllm/attention/backends/xformers.py:647: No type or annotation for parameter 'output'
WARNING -  griffe: vllm/attention/backends/xformers.py:647: Parameter 'output' does not appear in the function signature
WARNING -  griffe: vllm/attention/backends/xformers.py:477: Failed to get 'name: description' pair from 'kv_cache = [2, num_blocks, block_size * num_kv_heads * head_size]'
WARNING -  griffe: vllm/attention/backends/xformers.py:481: No type or annotation for parameter 'attn_type'
WARNING -  griffe: vllm/attention/backends/xformers.py:481: Parameter 'attn_type' does not appear in the function signature

WARNING -  griffe: vllm/attention/backends/utils.py:563: Failed to get 'exception: description' pair from 'is `None` when required for the calculations.'

WARNING -  griffe: vllm/entrypoints/llm.py:150: No type or annotation for parameter '**kwargs'

WARNING -  griffe: vllm/attention/backends/flash_attn.py:608: Failed to get 'name: description' pair from 'kv_cache = [2, num_blocks, block_size, num_kv_heads, head_size]'
WARNING -  griffe: vllm/attention/backends/flash_attn.py:864: No type or annotation for parameter 'attn_metadata'

WARNING -  griffe: vllm/attention/backends/rocm_flash_attn.py:590: Failed to get 'name: description' pair from 'kv_cache = [2, num_blocks, block_size * num_kv_heads * head_size]'
WARNING -  griffe: vllm/attention/backends/rocm_flash_attn.py:594: No type or annotation for parameter 'attn_type'
WARNING -  griffe: vllm/attention/backends/rocm_flash_attn.py:594: Parameter 'attn_type' does not appear in the function signature

WARNING -  griffe: vllm/sequence.py:150: No type or annotation for parameter 'prompt_token_ids'
WARNING -  griffe: vllm/sequence.py:150: Parameter 'prompt_token_ids' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:151: No type or annotation for parameter 'output_token_ids'
WARNING -  griffe: vllm/sequence.py:151: Parameter 'output_token_ids' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:941: No type or annotation for parameter 'request_id'
WARNING -  griffe: vllm/sequence.py:941: Parameter 'request_id' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:942: No type or annotation for parameter 'is_prompt'
WARNING -  griffe: vllm/sequence.py:942: Parameter 'is_prompt' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:943: No type or annotation for parameter 'seq_data'
WARNING -  griffe: vllm/sequence.py:943: Parameter 'seq_data' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:944: No type or annotation for parameter 'sampling_params'
WARNING -  griffe: vllm/sequence.py:944: Parameter 'sampling_params' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:945: No type or annotation for parameter 'block_tables'
WARNING -  griffe: vllm/sequence.py:945: Parameter 'block_tables' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:947: No type or annotation for parameter 'do_sample'
WARNING -  griffe: vllm/sequence.py:947: Parameter 'do_sample' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:950: No type or annotation for parameter 'token_chunk_size'
WARNING -  griffe: vllm/sequence.py:950: Parameter 'token_chunk_size' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:952: No type or annotation for parameter 'lora_request'
WARNING -  griffe: vllm/sequence.py:952: Parameter 'lora_request' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:953: No type or annotation for parameter 'computed_block_nums'
WARNING -  griffe: vllm/sequence.py:953: Parameter 'computed_block_nums' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:955: No type or annotation for parameter 'state'
WARNING -  griffe: vllm/sequence.py:955: Parameter 'state' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:956: No type or annotation for parameter 'multi_modal_data'
WARNING -  griffe: vllm/sequence.py:956: Parameter 'multi_modal_data' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:957: No type or annotation for parameter 'mm_processor_kwargs'
WARNING -  griffe: vllm/sequence.py:957: Parameter 'mm_processor_kwargs' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:958: No type or annotation for parameter 'encoder_seq_data'
WARNING -  griffe: vllm/sequence.py:958: Parameter 'encoder_seq_data' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:962: No type or annotation for parameter 'cross_block_table'
WARNING -  griffe: vllm/sequence.py:962: Parameter 'cross_block_table' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:1044: No type or annotation for parameter 'parent_seq_id'
WARNING -  griffe: vllm/sequence.py:1044: Parameter 'parent_seq_id' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:1046: No type or annotation for parameter 'output_token'
WARNING -  griffe: vllm/sequence.py:1046: Parameter 'output_token' does not appear in the function signature
WARNING -  griffe: vllm/sequence.py:1047: No type or annotation for parameter 'logprobs'
WARNING -  griffe: vllm/sequence.py:1047: Parameter 'logprobs' does not appear in the function signature

WARNING -  griffe: vllm/entrypoints/openai/tool_parsers/minimax_tool_parser.py:469: No type or annotation for parameter 'args_match'

WARNING -  griffe: vllm/outputs.py:450: Failed to get 'name: description' pair from 'Its length depends on the number of classes.'
WARNING -  griffe: vllm/outputs.py:412: Failed to get 'name: description' pair from 'Its length depends on the hidden dimension of the model.'

WARNING -  mkdocs_autorefs: api/vllm/index.md: from vllm/vllm/entrypoints/llm.py:696: (vllm.LLM.chat) Could not find cross-reference target 'generate'
WARNING -  mkdocs_autorefs: api/vllm/index.md: from vllm/vllm/entrypoints/llm.py:1336: (vllm.LLM.wake_up) Could not find cross-reference target 'sleep'
WARNING -  mkdocs_autorefs: api/vllm/index.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.LLMEngine.add_request) Could not find cross-reference target 'vllm.Sequence'
WARNING -  mkdocs_autorefs: api/vllm/index.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.LLMEngine.add_request) Could not find cross-reference target 'vllm.SequenceGroup'
WARNING -  mkdocs_autorefs: api/vllm/index.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.LLMEngine.add_request) Could not find cross-reference target 'vllm.Sequence'
WARNING -  mkdocs_autorefs: api/vllm/index.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.LLMEngine.add_request) Could not find cross-reference target 'vllm.SequenceGroup'

WARNING -  mkdocs_autorefs: api/vllm/engine/llm_engine.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.engine.llm_engine.LLMEngine.add_request) Could not find cross-reference target 'vllm.Sequence'
WARNING -  mkdocs_autorefs: api/vllm/engine/llm_engine.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.engine.llm_engine.LLMEngine.add_request) Could not find cross-reference target 'vllm.SequenceGroup'
WARNING -  mkdocs_autorefs: api/vllm/engine/llm_engine.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.engine.llm_engine.LLMEngine.add_request) Could not find cross-reference target 'vllm.Sequence'
WARNING -  mkdocs_autorefs: api/vllm/engine/llm_engine.md: from vllm/vllm/engine/llm_engine.py:623: (vllm.engine.llm_engine.LLMEngine.add_request) Could not find cross-reference target 'vllm.SequenceGroup'
WARNING -  mkdocs_autorefs: api/vllm/entrypoints/llm.md: from vllm/vllm/entrypoints/llm.py:696: (vllm.entrypoints.llm.LLM.chat) Could not find cross-reference target 'generate'
WARNING -  mkdocs_autorefs: api/vllm/entrypoints/llm.md: from vllm/vllm/entrypoints/llm.py:1336: (vllm.entrypoints.llm.LLM.wake_up) Could not find cross-reference target 'sleep'

WARNING -  griffe: vllm/core/block_manager.py:476: Parameter 'sequence_group' does not appear in the function signature
WARNING -  griffe: vllm/core/block_manager.py:355: Parameter 'sequence_group' does not appear in the function signature
WARNING -  griffe: vllm/core/block_manager.py:408: Parameter 'num_lookahead_slots' does not appear in the function signature
WARNING -  griffe: vllm/core/block_manager.py:423: Parameter 'sequence_group' does not appear in the function signature
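The mkdocs_autorefs warnings above are usually resolved by fully qualifying the link target in the docstring. A hedged sketch of the convention (class and method bodies chosen for illustration; the actual vLLM edit may differ):

```python
class LLM:
    """Illustration of mkdocs-autorefs cross-reference syntax, not real vLLM code."""

    def sleep(self) -> None:
        """Put the engine to sleep."""

    def wake_up(self) -> None:
        """Wake the engine up.

        An unqualified link like ``[sleep][]`` only resolves if an anchor
        named exactly "sleep" exists on the current page; a fully qualified
        link such as ``[sleep][vllm.LLM.sleep]`` resolves from any page.
        """
```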

@Zerohertz Zerohertz force-pushed the docs/mkdocs-warnings branch from e313771 to 4e192ab Compare August 26, 2025 11:31
@hmellor hmellor left a comment


Thanks for this PR! Don't feel you have to do this all in one go. You'll likely run into lots of merge conflicts as main changes if you do.

@hmellor hmellor self-assigned this Aug 26, 2025
Zerohertz commented Aug 26, 2025

Great, thank you! I will work on this and aim to finalize the PR by tomorrow.

I also have a quick question: to speed up the mkdocs build (it takes almost 5 min 😂), I'd like to restrict it to just the scope I'm working on. Could you let me know which part of the mkdocs.yml file I would need to modify?

That information would be a great help.

Zerohertz and others added 4 commits August 26, 2025 20:47
hmellor commented Aug 26, 2025

You might be able to play with the following (vllm/mkdocs.yaml, lines 75 to 81 in fdeb3da):

```yaml
- api-autonav:
    modules: ["vllm"]
    api_root_uri: "api"
    exclude:
      - "re:vllm\\._.*" # Internal modules
      - "vllm.third_party"
      - "vllm.vllm_flash_attn"
```

to reduce the amount of API reference that is generated by adding things to exclude.

The docs for the API generation package we use are at https://github.com/tlambert03/mkdocs-api-autonav. The docs for the package that then renders the generated API docs are at https://mkdocstrings.github.io/python/usage/ (this one might be less helpful for reducing build time, but more information can't hurt).

For bonus points, if you see anywhere we could apply the best practices from https://mkdocstrings.github.io/griffe/guide/users/recommendations/python-code (griffe is the package that mkdocs-api-autonav uses to crawl the codebase), that would improve the build time for everyone!

@mergify mergify bot added the frontend label Aug 26, 2025
@mergify mergify bot added the rocm Related to AMD ROCm label Aug 26, 2025
```diff
     logits_processors: Optional[list[Union[str,
                                            type[LogitsProcessor]]]] = None,
-    **kwargs,
+    **kwargs: Any,
```
@Zerohertz Zerohertz Aug 26, 2025


I initially used `object` for the `**kwargs` type hint, as I had seen it used in most other code.
However, this caused issues with mypy, so I changed it to `Any`.

Member

It looks like you did the right thing. According to https://peps.python.org/pep-0484/#arbitrary-argument-lists-and-default-argument-values, the annotation on `**kwargs` describes the type of its expected values. Since we use `**kwargs` to pass arbitrary keyword arguments, `Any` is an appropriate choice.
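The rule above in a runnable sketch (function name and option keys are made up for illustration):

```python
from typing import Any

def init_engine(model: str, **kwargs: Any) -> dict[str, Any]:
    """Collect arbitrary engine options into a config dict.

    Per PEP 484, ``**kwargs: Any`` annotates the *values*: inside the
    function body, kwargs behaves like dict[str, Any].
    """
    return {"model": model, **kwargs}
```

With this annotation mypy accepts any keyword, e.g. `init_engine("opt-125m", dtype="float16")`.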

hmellor commented Aug 26, 2025

Some more info to keep in the thread.

As documented in https://mkdocstrings.github.io/python/usage/docstrings/google/#docstring any section that isn't Google style will be transformed into a MkDocs admonition.
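For example, a section title the Google-style parser does not recognize, such as ``Shape:`` in this illustrative sketch, would be rendered as an admonition rather than a parameter table:

```python
def forward(x: list[float]) -> list[float]:
    """Double every element (illustration only).

    Args:
        x: Input values.

    Returns:
        The doubled values.

    Shape:
        "Shape" is not a Google-style section, so mkdocstrings renders
        this block as a MkDocs admonition instead of a field table.
    """
    return [2.0 * v for v in x]
```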

Zerohertz commented Aug 26, 2025

Tip

```yaml
- api-autonav:
    modules: ["vllm"]
    api_root_uri: "api"
    exclude:
      - "re:vllm\\._.*"
      - "vllm.third_party"
      - "vllm.vllm_flash_attn"
      - "re:vllm\\.(?!core).*" # This can decrease build time (but may cause auto-ref issues, which you can ignore)
```

@Zerohertz

As you mentioned, I'm also concerned about potential conflicts, so I'll remove the "WIP" and keep the PR open.
I'm done for today and will try to continue working on it tomorrow if I have time.
If this gets merged by then, I'll create a new PR.

@Zerohertz Zerohertz changed the title (WIP) [Docs] Fix warnings in mkdocs build [Docs] Fix warnings in mkdocs build Aug 26, 2025
Comment on lines -155 to -160

```
prompt_token_ids: The token IDs of the prompt.
output_token_ids: The token IDs of the output.
cumulative_logprob: The cumulative log probability of the output.
```
Member

I moved these to their respective @property methods because this class seems to have no public attributes.

@hmellor hmellor enabled auto-merge (squash) August 26, 2025 15:52
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 26, 2025
@hmellor hmellor merged commit 730d0ac into vllm-project:main Aug 26, 2025
56 checks passed
@Zerohertz Zerohertz deleted the docs/mkdocs-warnings branch August 27, 2025 00:23
tc-mb pushed a commit to tc-mb/vllm that referenced this pull request Aug 27, 2025
Signed-off-by: Zerohertz <[email protected]>
Signed-off-by: Hyogeun Oh (오효근) <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Signed-off-by: tc-mb <[email protected]>
Labels
frontend ready ONLY add when PR is ready to merge/full CI is needed rocm Related to AMD ROCm tool-calling
Projects
Status: Done