
[Bug]: Structured output requests can hang the server #14151


Closed
1 task done
joerunde opened this issue Mar 3, 2025 · 0 comments · Fixed by #14589
Labels: bug, structured-output

joerunde commented Mar 3, 2025

Your current environment

This isn't version-specific; the use of a ThreadPoolExecutor to build grammars for structured output has been around since the original outlines integration.

🐛 Describe the bug

To build structured output (guided decoding) processors in vLLM, we currently either:

  • Execute the non-async grammar creation right in the event loop, or
  • Use a ThreadPoolExecutor to run the grammar creation in a separate thread
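The second path can be sketched roughly like this (a minimal illustration, not vLLM's actual API; `build_grammar` is a hypothetical stand-in for the blocking outlines grammar compilation):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the expensive, blocking grammar compilation
# that outlines performs when given a JSON schema.
def build_grammar(json_schema: str) -> str:
    return f"compiled::{json_schema}"

_executor = ThreadPoolExecutor(max_workers=2)

async def get_guided_processor(json_schema: str) -> str:
    loop = asyncio.get_running_loop()
    # Offload the blocking compilation to the pool so the event loop
    # stays responsive while the grammar is built.
    return await loop.run_in_executor(_executor, build_grammar, json_schema)

print(asyncio.run(get_guided_processor('{"type": "object"}')))
# prints compiled::{"type": "object"}
```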

However, a user may pass in a JSON schema for structured output that causes grammar compilation to take a very long time. One such case reported against outlines is dottxt-ai/outlines-core#180, and we've had many reports from products where >1k-line JSON schemas passed as guided decoding parameters exhibit this behavior.

The problem is that we have no way to cancel the construction of these grammars when the API request times out or is cancelled. Specifically, when using the thread pool, the task that is waiting on the future from the pool will correctly cancel when the client disconnects, but the thread doing the work in the pool will keep spinning. This leads to a situation where we have 100% CPU usage for hours at a time while the server continues to report healthy. There is too much CPU contention to actually process new requests and hand them off to the engine, though, so all requests to the server appear to hang.
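The failure mode can be reproduced in isolation with plain asyncio (a minimal sketch; the busy loop stands in for a pathological grammar compilation). Cancelling the awaiting future raises `CancelledError` right away, but `ThreadPoolExecutor` has no way to interrupt a thread that is already running, so the worker keeps burning CPU until it finishes on its own:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def compile_grammar() -> None:
    # Stand-in for a long, non-cooperative grammar build: a pure CPU
    # loop with no cancellation points.
    deadline = time.monotonic() + 1.0
    while time.monotonic() < deadline:
        pass

async def cancelled_request() -> float:
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:
        fut = loop.run_in_executor(pool, compile_grammar)
        await asyncio.sleep(0.1)
        fut.cancel()            # the awaiting side cancels cleanly...
        try:
            await fut
        except asyncio.CancelledError:
            pass
        # ...but the already-running worker thread cannot be stopped,
        # so pool shutdown (the end of this `with` block) still waits
        # for the full compilation to finish.
    return time.monotonic() - start

elapsed = asyncio.run(cancelled_request())
print(f"awaited cancel, but the worker ran for {elapsed:.1f}s")
```

In vLLM's case the pool is long-lived rather than shut down per request, so the orphaned compilation threads simply accumulate and contend for CPU, which matches the observed symptom of a healthy-looking but unresponsive server.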

Slack thread regarding the structured output work for V1: https://vllm-dev.slack.com/archives/C07QQ8DAXMK/p1741024399466749

This might be worth fixing in V0 as well, depending on how fast we can actually get V1 out.

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.