
Commit c81f6ac

Merge branch 'master' into update_couchbase_docs

2 parents cb0d2fd + 8377490

File tree

32 files changed: +949 −519 lines changed


docs/docs/concepts/retrievers.mdx

Lines changed: 2 additions & 2 deletions

@@ -68,8 +68,8 @@ For example, you can build a retriever for a SQL database using text-to-SQL conv
 
 :::info[Further reading]
 
- * See our [tutorial](/docs/tutorials/sql_qa/) for context on how to build a retreiver using a SQL database and text-to-SQL.
- * See our [tutorial](/docs/tutorials/graph/) for context on how to build a retreiver using a graph database and text-to-Cypher.
+ * See our [tutorial](/docs/tutorials/sql_qa/) for context on how to build a retriever using a SQL database and text-to-SQL.
+ * See our [tutorial](/docs/tutorials/graph/) for context on how to build a retriever using a graph database and text-to-Cypher.
 
 :::
 

docs/docs/integrations/chat/ollama.ipynb

Lines changed: 36 additions & 32 deletions

@@ -39,9 +39,10 @@
  "\n",
  "## Setup\n",
  "\n",
- "First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
+ "First, follow [these instructions](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) to set up and run a local Ollama instance:\n",
  "\n",
- "* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
+ "* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux aka WSL, macOS, and Linux)\n",
+ "    * macOS users can install via Homebrew with `brew install ollama` and start with `brew services start ollama`\n",
  "* Fetch available LLM model via `ollama pull <name-of-model>`\n",
  "    * View a list of available models via the [model library](https://ollama.ai/library)\n",
  "    * e.g., `ollama pull llama3`\n",
@@ -54,7 +55,7 @@
  "* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
  "* To view all pulled models, use `ollama list`\n",
  "* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
- "* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n"
+ "* View the [Ollama documentation](https://github.com/ollama/ollama/tree/main/docs) for more commands. You can run `ollama help` in the terminal to see available commands.\n"
  ]
 },
 {
@@ -72,8 +73,8 @@
  "metadata": {},
  "outputs": [],
  "source": [
- "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
- "# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
+ "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
+ "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
  ]
 },
 {
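The LangSmith cell above is commented out in the notebook and uses an interactive `getpass` prompt. A minimal non-interactive sketch of the same setup order (the placeholder key value is this sketch's own, not from the notebook):

```python
import os

# The notebook enables tracing first, then prompts for the key:
#   os.environ["LANGSMITH_TRACING"] = "true"
#   os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# Here the interactive prompt is replaced by a placeholder so the cell
# can run non-interactively.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ.setdefault("LANGSMITH_API_KEY", "<your-api-key>")

print(os.environ["LANGSMITH_TRACING"])
```

`setdefault` (an addition of this sketch) avoids clobbering a key already exported in the shell.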
@@ -159,17 +160,15 @@
 {
  "data": {
   "text/plain": [
-  "AIMessage(content='The translation of \"I love programming\" from English to French is:\\n\\n\"J\\'adore programmer.\"', response_metadata={'model': 'llama3.1', 'created_at': '2024-08-19T16:05:32.81965Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 2167842917, 'load_duration': 54222584, 'prompt_eval_count': 35, 'prompt_eval_duration': 893007000, 'eval_count': 22, 'eval_duration': 1218962000}, id='run-0863daa2-43bf-4a43-86cc-611b23eae466-0', usage_metadata={'input_tokens': 35, 'output_tokens': 22, 'total_tokens': 57})"
+  "AIMessage(content='The translation of \"I love programming\" in French is:\\n\\n\"J\\'adore le programmation.\"', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-06-25T18:43:00.483666Z', 'done': True, 'done_reason': 'stop', 'total_duration': 619971208, 'load_duration': 27793125, 'prompt_eval_count': 35, 'prompt_eval_duration': 36354583, 'eval_count': 22, 'eval_duration': 555182667, 'model_name': 'llama3.1'}, id='run--348bb5ef-9dd9-4271-bc7e-a9ddb54c28c1-0', usage_metadata={'input_tokens': 35, 'output_tokens': 22, 'total_tokens': 57})"
  ]
 },
- "execution_count": 10,
+ "execution_count": 5,
 "metadata": {},
 "output_type": "execute_result"
 }
 ],
 "source": [
- "from langchain_core.messages import AIMessage\n",
- "\n",
 "messages = [\n",
 "    (\n",
 "        \"system\",\n",
@@ -191,9 +190,9 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
- "The translation of \"I love programming\" from English to French is:\n",
+ "The translation of \"I love programming\" in French is:\n",
 "\n",
- "\"J'adore programmer.\"\n"
+ "\"J'adore le programmation.\"\n"
 ]
 }
 ],
@@ -220,10 +219,10 @@
 {
 "data": {
  "text/plain": [
- "AIMessage(content='Das Programmieren ist mir ein Leidenschaft! (That\\'s \"Programming is my passion!\" in German.) Would you like me to translate anything else?', response_metadata={'model': 'llama3.1', 'created_at': '2024-08-19T16:05:34.893548Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 2045997333, 'load_duration': 22584792, 'prompt_eval_count': 30, 'prompt_eval_duration': 213210000, 'eval_count': 32, 'eval_duration': 1808541000}, id='run-d18e1c6b-50e0-4b1d-b23a-973fa058edad-0', usage_metadata={'input_tokens': 30, 'output_tokens': 32, 'total_tokens': 62})"
+ "AIMessage(content='\"Programmieren ist meine Leidenschaft.\"\\n\\n(I translated \"programming\" to the German word \"Programmieren\", and added \"ist meine Leidenschaft\" which means \"is my passion\")', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-06-25T18:43:29.350032Z', 'done': True, 'done_reason': 'stop', 'total_duration': 1194744459, 'load_duration': 26982500, 'prompt_eval_count': 30, 'prompt_eval_duration': 117043458, 'eval_count': 41, 'eval_duration': 1049892167, 'model_name': 'llama3.1'}, id='run--efc6436e-2346-43d9-8118-3c20b3cdf0d0-0', usage_metadata={'input_tokens': 30, 'output_tokens': 41, 'total_tokens': 71})"
 ]
 },
- "execution_count": 12,
+ "execution_count": 7,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -258,7 +257,7 @@
 "source": [
 "## Tool calling\n",
 "\n",
- "We can use [tool calling](https://blog.langchain.dev/improving-core-tool-interfaces-and-docs-in-langchain/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/search?&c=tools) such as `llama3.1`:\n",
+ "We can use [tool calling](/docs/concepts/tool_calling/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/search?&c=tools) such as `llama3.1`:\n",
 "\n",
 "```\n",
 "ollama pull llama3.1\n",
@@ -274,23 +273,17 @@
 "metadata": {},
 "outputs": [
 {
- "data": {
-  "text/plain": [
-  "[{'name': 'validate_user',\n",
-  "  'args': {'addresses': '[\"123 Fake St, Boston, MA\", \"234 Pretend Boulevard, Houston, TX\"]',\n",
-  "  'user_id': '123'},\n",
-  "  'id': '40fe3de0-500c-4b91-9616-5932a929e640',\n",
-  "  'type': 'tool_call'}]"
-  ]
- },
- "execution_count": 13,
- "metadata": {},
- "output_type": "execute_result"
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[{'name': 'validate_user', 'args': {'addresses': ['123 Fake St, Boston, MA', '234 Pretend Boulevard, Houston, TX'], 'user_id': '123'}, 'id': 'aef33a32-a34b-4b37-b054-e0d85584772f', 'type': 'tool_call'}]\n"
+ ]
 }
 ],
 "source": [
 "from typing import List\n",
 "\n",
+ "from langchain_core.messages import AIMessage\n",
 "from langchain_core.tools import tool\n",
 "from langchain_ollama import ChatOllama\n",
 "\n",
@@ -316,7 +309,9 @@
 "    \"123 Fake St in Boston MA and 234 Pretend Boulevard in \"\n",
 "    \"Houston TX.\"\n",
 ")\n",
- "result.tool_calls"
+ "\n",
+ "if isinstance(result, AIMessage) and result.tool_calls:\n",
+ "    print(result.tool_calls)"
 ]
 },
 {
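The updated notebook cell guards with `isinstance(result, AIMessage) and result.tool_calls` before printing. That guard pattern can be exercised without a running Ollama server using a small stand-in class (a simplification for this sketch, not the real `langchain_core` `AIMessage`):

```python
from dataclasses import dataclass, field

# Stand-in for langchain_core's AIMessage, simplified so the guard
# pattern from the updated notebook cell runs with no model behind it.
@dataclass
class AIMessage:
    content: str = ""
    tool_calls: list = field(default_factory=list)

def extract_tool_calls(result):
    # Same guard as the notebook: only read .tool_calls when the result
    # is an AIMessage that actually carries tool calls.
    if isinstance(result, AIMessage) and result.tool_calls:
        return result.tool_calls
    return []

msg = AIMessage(tool_calls=[{
    "name": "validate_user",
    "args": {"user_id": "123", "addresses": ["123 Fake St, Boston, MA"]},
    "id": "example-id",  # placeholder id, not a real run id
    "type": "tool_call",
}])
print(extract_tool_calls(msg)[0]["name"])  # → validate_user
```

The guard matters because a model that chooses to answer in plain text returns a message with an empty `tool_calls` list, and downstream code indexing into it would fail.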
@@ -333,6 +328,16 @@
 "Be sure to update Ollama so that you have the most recent version to support multi-modal."
 ]
 },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "69920d39",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install pillow"
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": 15,
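The new `%pip install pillow` cell supports the multi-modal example, which passes base64-encoded images to the model. The encoding step itself needs only the standard library; a sketch (the placeholder bytes are this sketch's own fixture, not from the notebook, where Pillow would load a real file):

```python
import base64

# A multi-modal Ollama call expects the image as a base64 string.
# Any raw bytes illustrate the mechanics; the notebook would read a
# real image file (optionally via Pillow) instead of this placeholder.
image_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16  # fake PNG header + padding
image_b64 = base64.b64encode(image_bytes).decode("utf-8")

print(image_b64[:8])  # → iVBORw0K
```

The resulting string is what the notebook later binds to the model call.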
@@ -467,14 +472,13 @@
 "output_type": "stream",
 "text": [
 "Here is my thought process:\n",
- "This question is asking for the result of 3 raised to the power of 3, which is a basic mathematical operation. \n",
+ "The user is asking for the value of 3 raised to the power of 3, which is a basic exponentiation operation.\n",
 "\n",
 "Here is my response:\n",
- "The expression 3^3 means 3 raised to the power of 3. To calculate this, you multiply the base number (3) by itself as many times as its exponent (3):\n",
 "\n",
- "3 * 3 * 3 = 27\n",
+ "3^3 (read as \"3 to the power of 3\") equals 27. \n",
 "\n",
- "So, 3^3 equals 27.\n"
+ "This calculation is performed by multiplying 3 by itself three times: 3*3*3 = 27.\n"
 ]
 }
 ],
@@ -508,7 +512,7 @@
 "source": [
 "## API reference\n",
 "\n",
- "For detailed documentation of all ChatOllama features and configurations head to the API reference: https://python.langchain.com/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html"
+ "For detailed documentation of all ChatOllama features and configurations head to the [API reference](https://python.langchain.com/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html)."
 ]
 }
 ],

docs/docs/integrations/llms/clarifai.ipynb

Lines changed: 1 addition & 1 deletion

@@ -128,7 +128,7 @@
 "\n",
 "You will have to also initialize the model id and if needed, the model version id. Some models have many versions, you can choose the one appropriate for your task.\n",
 " \n",
- "Alternatively, You can use the model_url (for ex: \"https://clarifai.com/anthropic/completion/models/claude-v2\") for intialization."
+ "Alternatively, You can use the model_url (for ex: \"https://clarifai.com/anthropic/completion/models/claude-v2\") for initialization."
 ]
 },
 {

docs/docs/integrations/llms/ollama.ipynb

Lines changed: 29 additions & 8 deletions

@@ -46,29 +46,30 @@
 "source": [
 "## Setup\n",
 "\n",
- "First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n",
+ "First, follow [these instructions](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) to set up and run a local Ollama instance:\n",
 "\n",
- "* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n",
+ "* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux aka WSL, macOS, and Linux)\n",
+ "    * macOS users can install via Homebrew with `brew install ollama` and start with `brew services start ollama`\n",
 "* Fetch available LLM model via `ollama pull <name-of-model>`\n",
 "    * View a list of available models via the [model library](https://ollama.ai/library)\n",
 "    * e.g., `ollama pull llama3`\n",
 "* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
 "\n",
 "> On Mac, the models will be download to `~/.ollama/models`\n",
- "> \n",
+ ">\n",
 "> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
 "\n",
 "* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
 "* To view all pulled models, use `ollama list`\n",
 "* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
- "* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n",
+ "* View the [Ollama documentation](https://github.com/ollama/ollama/tree/main/docs) for more commands. You can run `ollama help` in the terminal to see available commands.\n",
 "\n",
 "## Usage"
 ]
 },
 {
 "cell_type": "code",
- "execution_count": 2,
+ "execution_count": 1,
 "id": "035dea0f",
 "metadata": {
 "tags": []
@@ -77,10 +78,10 @@
 {
 "data": {
  "text/plain": [
- "\"Sounds like a plan!\\n\\nTo answer what LangChain is, let's break it down step by step.\\n\\n**Step 1: Understand the Context**\\nLangChain seems to be related to language or programming, possibly in an AI context. This makes me wonder if it's a framework, library, or tool for building models or interacting with them.\\n\\n**Step 2: Research Possible Definitions**\\nAfter some quick searching, I found that LangChain is actually a Python library for building and composing conversational AI models. It seems to provide a way to create modular and reusable components for chatbots, voice assistants, and other conversational interfaces.\\n\\n**Step 3: Explore Key Features and Use Cases**\\nLangChain likely offers features such as:\\n\\n* Easy composition of conversational flows\\n* Support for various input/output formats (e.g., text, audio)\\n* Integration with popular AI frameworks and libraries\\n\\nUse cases might include building chatbots for customer service, creating voice assistants for smart homes, or developing interactive stories.\\n\\n**Step 4: Confirm the Definition**\\nAfter this step-by-step analysis, I'm fairly confident that LangChain is a Python library for building conversational AI models. If you'd like to verify or provide more context, feel free to do so!\""
+ "'To break down what LangChain is, let\\'s analyze it step by step:\\n\\n1. **Break down the name**: \"Lang\" likely stands for \"Language\", suggesting that LangChain has something to do with language processing or AI-related tasks involving human languages.\\n\\n2. **Understanding the term \"chain\" in this context**: In technology and computing, particularly in the realm of artificial intelligence (AI) and machine learning (ML), a \"chain\" often refers to a series of processes linked together. This can imply that LangChain involves executing multiple tasks or functions in sequence.\\n\\n3. **Connection to AI/ML technologies**: Given its name and context, it\\'s reasonable to infer that LangChain is involved in the field of natural language processing (NLP) or more broadly, artificial intelligence. NLP is an area within computer science concerned with the interaction between computers and humans in a human language.\\n\\n4. **Possible functions or services**: Considering the focus on languages and the potential for multiple linked processes, LangChain might offer various AI-driven functionalities such as:\\n   - Text analysis (like sentiment analysis or text classification).\\n   - Language translation.\\n   - Chatbots or conversational interfaces.\\n   - Content generation (e.g., articles, summaries).\\n   - Dialogue management systems.\\n\\n5. **Conclusion**: Based on the name and analysis of its components, LangChain is likely a tool or framework for developing applications that involve complex interactions with human languages through AI and ML technologies. It possibly enables creating custom chatbots, natural language interfaces, text generators, or other applications that require intricate language understanding and processing capabilities.\\n\\nThis step-by-step breakdown indicates that LangChain is focused on leveraging AI to understand, process, and interact with human languages in a sophisticated manner, likely through multiple linked processes (the \"chain\" part).'"
 ]
 },
- "execution_count": 2,
+ "execution_count": 1,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -116,6 +117,16 @@
 "Be sure to update Ollama so that you have the most recent version to support multi-modal."
 ]
 },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "56f95afd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install pillow"
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": 2,
@@ -177,7 +188,7 @@
 },
 {
 "cell_type": "code",
- "execution_count": 3,
+ "execution_count": 5,
 "id": "79aaf863",
 "metadata": {},
 "outputs": [
@@ -200,6 +211,16 @@
 "llm_with_image_context = llm.bind(images=[image_b64])\n",
 "llm_with_image_context.invoke(\"What is the dollar based gross retention rate:\")"
 ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "afd9494f",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all ChatOllama features and configurations head to the [API reference](https://python.langchain.com/api_reference/ollama/llms/langchain_ollama.llms.OllamaLLM.html)."
+ ]
 }
 ],
 "metadata": {
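The notebook binds images to the model with `llm.bind(images=[image_b64])` before invoking. The same "pre-bind extra arguments" idea can be sketched with `functools.partial` — an analogy for LangChain's `Runnable.bind`, not LangChain code itself, so it runs with no Ollama server:

```python
from functools import partial

# Hypothetical stand-in for llm.invoke: accepts a prompt plus optional
# bound images and reports what it received.
def invoke(prompt, images=None):
    return f"{len(images or [])} image(s), prompt: {prompt!r}"

# Analogous to llm.bind(images=[image_b64]): fix the images argument
# now, supply the prompt later.
llm_with_image_context = partial(invoke, images=["<base64-image-data>"])

print(llm_with_image_context("What is the dollar based gross retention rate:"))
```

The design point is that binding returns a new callable with the extra kwargs baked in, so the image context rides along with every subsequent invocation.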

docs/docs/integrations/providers/mongodb_atlas.mdx

Lines changed: 0 additions & 1 deletion

@@ -99,4 +99,3 @@ set_llm_cache(MongoDBAtlasSemanticCache(
     database_name=DATABASE_NAME,
 ))
 ```
- ``
New file — Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+ # SurrealDB
+
+ [SurrealDB](https://surrealdb.com) is a unified, multi-model database purpose-built for AI systems. It combines structured and unstructured data (including vector search, graph traversal, relational queries, full-text search, document storage, and time-series data) into a single ACID-compliant engine, scaling from a 3 MB edge binary to petabyte-scale clusters in the cloud. By eliminating the need for multiple specialized stores, SurrealDB simplifies architectures, reduces latency, and ensures consistency for AI workloads.
+
+ **Why SurrealDB Matters for GenAI Systems**
+ - **One engine for storage and memory:** Combine durable storage and fast, agent-friendly memory in a single system, providing all the data your agent needs and removing the need to sync multiple systems.
+ - **One-hop memory for agents:** Run vector search, graph traversal, semantic joins, and transactional writes in a single query, giving LLM agents fast, consistent memory access without stitching relational, graph and vector databases together.
+ - **In-place inference and real-time updates:** SurrealDB enables agents to run inference next to data and receive millisecond-fresh updates, critical for real-time reasoning and collaboration.
+ - **Versioned, durable context:** SurrealDB supports time-travel queries and versioned records, letting agents audit or “replay” past states for consistent, explainable reasoning.
+ - **Plug-and-play agent memory:** Expose AI memory as a native concept, making it easy to use SurrealDB as a drop-in backend for AI frameworks.
+
+ ## Installation and Setup
+
+ ```bash
+ pip install langchain-surrealdb
+ ```
+
+ ## Vector Store
+
+ [This notebook](/docs/integrations/vectorstores/surrealdb) covers how to get started with the SurrealDB vector store.
+
+ Find more [examples](https://github.com/surrealdb/langchain-surrealdb/blob/main/README.md#simple-example) in the repository.

0 commit comments