
Commit b7f1b1e

Update README (#45)
1 parent d61267d commit b7f1b1e


README.md

Lines changed: 9 additions & 20 deletions
@@ -93,13 +93,14 @@ Take the following example:
     pertinent_user_preferences: UserPreferences
 
 </details>
-If you naively extract these values using `gpt-4o`, it's prone to failure:
+If you naively extract these values using tool calling with many LLMs, such as `gpt-4o` (with `tool_choice` set to `TelegramAndTrustFallPreferences` to [enforce the use of the tool call](https://platform.openai.com/docs/guides/function-calling?api-mode=responses#additional-configurations)), it's prone to failure:
 
 ```python
 from langchain_openai import ChatOpenAI
 
 llm = ChatOpenAI(model="gpt-4o")
-bound = llm.with_structured_output(TelegramAndTrustFallPreferences)
+bound = llm.bind_tools([TelegramAndTrustFallPreferences],
+                       tool_choice="TelegramAndTrustFallPreferences")
 
 conversation = """Operator: How may I assist with your telegram, sir?
 Customer: I need to send a message about our trust fall exercise.
Customer: I need to send a message about our trust fall exercise.
@@ -124,21 +125,6 @@ pertinent_user_preferences.communication_preferences.semaphore
 For further information visit https://errors.pydantic.dev/2.8/v/model_type
 ```
 
-If you try to use **strict** mode or OpenAI's `json_schema`, it will give you an error as well, since their parser doesn't support the complex JSON schemas:
-
-```python
-bound = llm.bind_tools([TelegramAndTrustFallPreferences], strict=True, response_format=TelegramAndTrustFallPreferences)
-
-bound.invoke(f"""Extract the preferences from the following conversation:
-<convo>
-{conversation}
-</convo>""")
-```
-
-```text
-BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'TelegramAndTrustFallPreferences': "}}
-```
-
 With `trustcall`, this extraction task is easy.
 
 ```python
@@ -324,7 +310,8 @@ Alex: It's going well! I've been cooking almost every day now. I'd say I've beco
 
 
 # Naive approach
-bound = llm.with_structured_output(User)
+bound = llm.bind_tools([User],
+                       tool_choice="User")
 naive_result = bound.invoke(
     f"""Update the memory (JSON doc) to incorporate new information from the following conversation:
 <user_info>
@@ -500,9 +487,11 @@ Output:
 
 No fields omitted, and the important new information is seamlessly integrated.
 
-### Simultanous updates & insertions
+### Simultaneous generation & updating
+
+Both problems above (difficulty with type-safe generation of complex schemas & difficulty with updating existing schemas) are compounded when you want the LLM to handle **both** updates **and** inserts.
 
-Both problems above (difficulty with type-safe generation of complex schemas & difficulty with generating the correct edits to existing schemas) are compounded when you have to be prompting the LLM to handle **both** updates **and** inserts, as is often the case when extracting multiple memory "events" from conversations.
+This is often the case when extracting things like multiple memory "events" from conversations.
 
 Let's see an example below. Suppose you are managing a list of "relationships":
 
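The hunks above cut off just as the README introduces the `trustcall` alternative. For context, a minimal sketch of that usage is below; the `create_extractor` call and the `"responses"` result key follow trustcall's published examples, but they are not part of this diff, so treat the exact arguments and result shape as assumptions.

```python
# Sketch of the trustcall-based extraction the README contrasts with the naive
# `bind_tools` call above (assumed API, based on trustcall's published examples).
from langchain_openai import ChatOpenAI
from trustcall import create_extractor

llm = ChatOpenAI(model="gpt-4o")

# create_extractor wraps the model with schema validation and retry-on-error logic.
extractor = create_extractor(
    llm,
    tools=[TelegramAndTrustFallPreferences],  # schema defined earlier in the README
    tool_choice="TelegramAndTrustFallPreferences",
)

result = extractor.invoke(
    f"""Extract the preferences from the following conversation:
<convo>
{conversation}
</convo>"""
)
result["responses"][0]  # a validated TelegramAndTrustFallPreferences instance
```

The "Simultaneous generation & updating" section touched by the last hunk builds on the same extractor, presumably with `enable_inserts=True` and an `existing` payload so the model can patch existing documents and add new ones in a single pass.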