README.md: 9 additions & 20 deletions
@@ -93,13 +93,14 @@ Take the following example:
pertinent_user_preferences: UserPreferences

</details>
-If you naively extract these values using `gpt-4o`, it's prone to failure:
+If you naively extract these values using tool calling with many LLMs, such as `gpt-4o` (with `tool_choice` set to `TelegramAndTrustFallPreferences` to [enforce the use of the tool call](https://platform.openai.com/docs/guides/function-calling?api-mode=responses#additional-configurations)), it's prone to failure:
For further information visit https://errors.pydantic.dev/2.8/v/model_type
```
-If you try to use **strict** mode or OpenAI's `json_schema`, it will give you an error as well, since their parser doesn't support the complex JSON schemas:
-bound.invoke(f"""Extract the preferences from the following conversation:
-<convo>
-{conversation}
-</convo>""")
-```
-
-```text
-BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'TelegramAndTrustFallPreferences': "}}
-```
-
With `trustcall`, this extraction task is easy.

```python
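# A sketch only, not necessarily the README's verbatim snippet: the
# trustcall-based extraction that the surrounding text says "is easy",
# using the library's create_extractor API. Assumes `llm` (a chat model)
# and `conversation` are defined earlier in the README, along with the
# TelegramAndTrustFallPreferences model referenced above.
from trustcall import create_extractor

extractor = create_extractor(
    llm,
    tools=[TelegramAndTrustFallPreferences],
    tool_choice="TelegramAndTrustFallPreferences",
)

result = extractor.invoke(
    f"""Extract the preferences from the following conversation:
<convo>
{conversation}
</convo>"""
)
result["responses"][0]  # a validated TelegramAndTrustFallPreferences instance
```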
@@ -324,7 +310,8 @@ Alex: It's going well! I've been cooking almost every day now. I'd say I've beco
# Naive approach
-bound = llm.with_structured_output(User)
+bound = llm.bind_tools([User],
+                       tool_choice="User")
naive_result = bound.invoke(
    f"""Update the memory (JSON doc) to incorporate new information from the following conversation:
<user_info>
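To spell out why this approach is "naive" (a hedged illustration, not taken from the README): with `bind_tools`, the model's answer comes back as a raw tool call whose arguments still have to be validated against `User`, and regenerating the whole document from scratch routinely omits fields that were present in the original memory. Assuming `User` is the Pydantic model bound above and `naive_result` is the returned `AIMessage`:

```python
# Sketch: inspect and validate the raw tool call produced by the naive
# bind_tools approach. Assumes `User` (a Pydantic BaseModel) and
# `naive_result` (the AIMessage returned by bound.invoke above) exist.
from pydantic import ValidationError

tool_call = naive_result.tool_calls[0]  # {"name": "User", "args": {...}, "id": ...}
try:
    regenerated = User(**tool_call["args"])
except ValidationError as err:
    # Complex or nested schemas often fail validation outright.
    print("Invalid regeneration:", err)
else:
    # Even when it validates, compare against the original doc: whole-document
    # regeneration tends to silently drop previously stored fields.
    print(regenerated.model_dump())
```

trustcall sidesteps this by asking the model for JSON Patch operations against the existing document rather than a full rewrite.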
@@ -500,9 +487,11 @@ Output:
No fields omitted, and the important new information is seamlessly integrated.

-### Simultanous updates & insertions
+### Simultaneous generation & updating
+
+Both problems above (difficulty with type-safe generation of complex schemas & difficulty with updating existing schemas) are compounded when you want the LLM to handle **both** updates **and** inserts.

-Both problems above (difficulty with type-safe generation of complex schemas & difficulty with generating the correct edits to existing schemas) are compounded when you have to be prompting the LLM to handle **both** updates **and** inserts, as is often the case when extracting multiple memory "events" from conversations.
+This is often the case when extracting things like multiple memory "events" from conversations.

Let's see an example below. Suppose you are managing a list of "relationships":
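As a concrete sketch of that scenario, trustcall's documented `enable_inserts` option lets a single call patch existing docs and add new ones. The `Relationship` model, the sample `existing` docs, and the conversation below are invented for illustration, and `llm` is assumed to be a chat model defined earlier in the README; this is not the README's own example.

```python
from pydantic import BaseModel

from trustcall import create_extractor


class Relationship(BaseModel):
    """One stored relationship 'memory'."""
    partner_a: str
    partner_b: str
    status: str  # e.g. "colleagues", "friends", "married"


extractor = create_extractor(
    llm,                  # assumed: chat model from earlier in the README
    tools=[Relationship],
    tool_choice="any",    # the model may call the tool as many times as needed
    enable_inserts=True,  # allow brand-new docs alongside patches to existing ones
)

existing = [
    # (doc_id, tool name, current value) for each doc already in memory
    ("rel-1", "Relationship", {"partner_a": "Joe", "partner_b": "Dana", "status": "colleagues"}),
]

result = extractor.invoke(
    {
        "messages": [
            ("user", "Joe and Dana got married last month. Dana also became friends with Sam.")
        ],
        "existing": existing,
    }
)

for doc, meta in zip(result["responses"], result["response_metadata"]):
    print(meta, doc.model_dump())  # expect an update to rel-1 plus a new insertion
```

Updates come back as patched versions of the matching existing docs, while genuinely new information becomes freshly generated documents.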