Avoid second round-trip when calling functions #652
Comments
@Grogdunn, are you referring to providing support for
Ok! Good point
@tzolov OK, done. If you use a Function&lt;Input, Void&gt; or a Consumer, the second round-trip is skipped. EDIT:
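A minimal plain-Java sketch of the two shapes this comment describes (the record and handler names are illustrative, not part of the Spring AI API; framework registration is omitted):

```java
import java.util.function.Consumer;
import java.util.function.Function;

public class VoidFunctionSketch {
    // Hypothetical input payload the model fills in via function calling.
    record SaveRequest(String title, String body) {}

    static int savedCount = 0;

    // Variant 1: a Function whose output type is Void -- it returns null,
    // signalling there is nothing to send back to the model.
    static final Function<SaveRequest, Void> saveFn = req -> {
        savedCount++;
        return null;
    };

    // Variant 2: a Consumer, which has no return value at all.
    static final Consumer<SaveRequest> saveConsumer = req -> savedCount++;

    public static void main(String[] args) {
        saveFn.apply(new SaveRequest("t", "b"));
        saveConsumer.accept(new SaveRequest("t", "b"));
        System.out.println(savedCount); // 2
    }
}
```

In both variants the framework can detect that the declared output type carries no information and skip sending a function response back to the model.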
The PR #656 has some discussion, but perhaps this can be solved via clever prompt engineering, with a prompt such as
The contract with AI models and function calling is a contract driven by the AI model itself, so one does need to reply to finish the conversation. The lower-level API that lets you control the conversation is demonstrated here. Perhaps after the first call you can simply not reply. I don't know what state this will leave things in, but you can experiment with these two approaches.
For me this issue can be closed (as a pull request). There are two workarounds to handle this:
Option 1 is less prone to hallucination, but "disrupts" the standard AI workflow/contract. So feel free to close this issue.
Expected Behavior
For some use cases there is no need to pass the function result back to the LLM.
E.g.: extract some information from a blob of text and save it in structured form. That can be accomplished with a function call whose input is the "data structure"; the output is not needed, so we can avoid the "second call" to the LLM entirely.
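The extraction use case above can be sketched in plain Java (the `Invoice` record and storage layer are hypothetical; the point is that the function's input *is* the structured data and its return value is irrelevant to the model):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ExtractAndStore {
    // The "data structure" the model extracts from the blob of text.
    record Invoice(String customer, double amount) {}

    // Stand-in persistence layer: just collects what the model extracted.
    static final List<Invoice> store = new ArrayList<>();

    // The function the model calls; its output never needs to reach the LLM.
    static final Consumer<Invoice> saveInvoice = store::add;

    public static void main(String[] args) {
        // In a real flow the model would produce this from unstructured text.
        saveInvoice.accept(new Invoice("ACME", 42.0));
        System.out.println(store.size()); // 1
    }
}
```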
Current Behavior
Every function must return a value that is passed back to the LLM to produce a nice output.
Context
As a workaround we implement some functions with "spying" capability (as in tests) and use them to grab the "first call" data, completely ignoring the second call's result.
We are trying to add this feature to Spring AI; I think it can be useful to others, or at least let anyone decide.
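The spying workaround can be sketched as follows (plain Java; framework integration is omitted and the names are illustrative): the function captures its first-call argument and hands back a throwaway string, since a return value is currently still required, and the second call's result is simply discarded.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;

public class SpyFunction {
    // Hypothetical structured payload the model extracts.
    record Extraction(String name, int age) {}

    // Captures the argument of the first call, test-spy style.
    static final AtomicReference<Extraction> captured = new AtomicReference<>();

    // A return value is still mandatory today, so we return a dummy string
    // and ignore whatever the model produces from it on the second call.
    static final Function<Extraction, String> spy = input -> {
        captured.set(input);
        return "OK";
    };

    public static void main(String[] args) {
        spy.apply(new Extraction("Ada", 36));
        System.out.println(captured.get().name()); // Ada
    }
}
```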