Your current environment

This isn't version specific; the use of a ThreadPoolExecutor to build grammars for structured output has been around since the original outlines integration.
🐛 Describe the bug
To build structured output (guided decoding) processors in vLLM, we currently either:

- execute the non-async grammar creation directly in the event loop, or
- use a ThreadPoolExecutor to run the grammar creation in a separate thread (sketched below).
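For illustration, here's a minimal sketch of the two options. `build_grammar` and `get_guided_decoding_processor` are hypothetical stand-ins, not vLLM's actual API:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def build_grammar(json_schema: str):
    """Hypothetical stand-in for a backend's synchronous, CPU-bound
    grammar compilation (e.g. what outlines does for a JSON schema)."""
    ...

executor = ThreadPoolExecutor(max_workers=2)

async def get_guided_decoding_processor(json_schema: str):
    # Option 1: run directly in the event loop, blocking it for the
    # entire compilation:
    #     return build_grammar(json_schema)

    # Option 2: hand the work to the thread pool and await the future:
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, build_grammar, json_schema)
```

Option 2 keeps the event loop responsive, but as described below, neither option gives us a way to abort a compilation that is already running.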
However, there are cases where a user may pass in a JSON schema for structured output that causes grammar compilation to take a very long time. One such case reported with outlines is dottxt-ai/outlines-core#180, and we've had many reports from products where JSON schemas of more than 1k lines are passed as guided decoding parameters and exhibit this behavior.
The problem is that we have no way to cancel the construction of these grammars when the API request times out or is cancelled. Specifically, when using the thread pool, the task awaiting the future from the pool cancels correctly when the client disconnects, but the thread doing the work in the pool keeps spinning. This leads to a situation where we sit at 100% CPU usage for hours while the server continues to report healthy; there is too much CPU contention to actually process new requests and hand them off to the engine, so all requests to the server appear to hang.
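This is straightforward to reproduce with nothing but the standard library. In the sketch below (no vLLM code involved, all names hypothetical), cancelling the awaiting side returns immediately while the pool thread keeps spinning, because `concurrent.futures` cannot interrupt a function that is already running:

```python
import asyncio
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def spin(seconds: float) -> None:
    # Stand-in for a pathological grammar compilation: CPU-bound work
    # with no cancellation points (10s here; the real reports are hours).
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass

async def main() -> None:
    loop = asyncio.get_running_loop()
    pool = ThreadPoolExecutor(max_workers=1)
    fut = loop.run_in_executor(pool, spin, 10.0)

    await asyncio.sleep(1)
    # Simulate the client disconnecting: cancelling the awaiting side
    # succeeds immediately...
    fut.cancel()
    try:
        await fut
    except asyncio.CancelledError:
        print("awaiting task cancelled")

    # ...but the pool thread is still pinned at 100% CPU; the
    # cancellation never reaches the running thread.
    workers = [t.name for t in threading.enumerate()
               if "ThreadPoolExecutor" in t.name]
    print("worker threads still running:", workers)

# Note: the script only exits once spin() finishes, because the
# interpreter joins the still-running pool thread at shutdown.
asyncio.run(main())
```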
There's a Slack thread on this regarding the structured output work for V1: https://vllm-dev.slack.com/archives/C07QQ8DAXMK/p1741024399466749

This might be worth fixing in V0 as well, depending on how fast we can actually get V1 out.
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.