|
39 | 39 | "\n",
|
40 | 40 | "## Setup\n",
|
41 | 41 | "\n",
|
42 |    | - "First, follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance:\n", |
| 42 | + "First, follow [these instructions](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) to set up and run a local Ollama instance:\n", |
43 | 43 | "\n",
|
44 |    | - "* [Download](https://ollama.ai/download) and install Ollama onto the available supported platforms (including Windows Subsystem for Linux)\n", |
| 44 | + "* [Download](https://ollama.ai/download) and install Ollama on a supported platform (macOS, Linux, or Windows via Windows Subsystem for Linux, aka WSL)\n", |
| 45 | + " * macOS users can install via Homebrew with `brew install ollama` and start with `brew services start ollama`\n", |
45 | 46 | "* Fetch available LLM model via `ollama pull <name-of-model>`\n",
|
46 | 47 | " * View a list of available models via the [model library](https://ollama.ai/library)\n",
|
47 | 48 | " * e.g., `ollama pull llama3`\n",
|
|
54 | 55 | "* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
|
55 | 56 | "* To view all pulled models, use `ollama list`\n",
|
56 | 57 | "* To chat directly with a model from the command line, use `ollama run <name-of-model>`\n",
|
57 |    | - "* View the [Ollama documentation](https://github.com/jmorganca/ollama) for more commands. Run `ollama help` in the terminal to see available commands too.\n" |
| 58 | + "* View the [Ollama documentation](https://github.com/ollama/ollama/tree/main/docs) for more commands. You can run `ollama help` in the terminal to see available commands.\n" |
58 | 59 | ]
|
59 | 60 | },
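Once a model is pulled, a quick smoke test confirms the local server is reachable. This is a minimal sketch, not part of this diff; it assumes `langchain-ollama` is installed and `llama3.1` has been pulled:

```python
# Minimal smoke test (assumes `pip install -U langchain-ollama`
# and `ollama pull llama3.1` have already been run).
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1", temperature=0)

# Raises a connection error if the Ollama server is not running.
print(llm.invoke("Say hello in one word.").content)
```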
|
60 | 61 | {
|
|
72 | 73 | "metadata": {},
|
73 | 74 | "outputs": [],
|
74 | 75 | "source": [
|
75 |    | - "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n", |
76 |    | - "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"" |
| 76 | + "# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n", |
| 77 | + "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")" |
77 | 78 | ]
|
78 | 79 | },
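The commented lines above rely on `os` and `getpass` imports from earlier in the notebook. For reference, a self-contained version of this optional LangSmith setup might look like the following (assumes you have a LangSmith API key):

```python
# Optional: enable LangSmith tracing (requires a LangSmith API key).
import getpass
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
```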
|
79 | 80 | {
|
|
159 | 160 | {
|
160 | 161 | "data": {
|
161 | 162 | "text/plain": [
|
162 |     | - "AIMessage(content='The translation of \"I love programming\" from English to French is:\\n\\n\"J\\'adore programmer.\"', response_metadata={'model': 'llama3.1', 'created_at': '2024-08-19T16:05:32.81965Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 2167842917, 'load_duration': 54222584, 'prompt_eval_count': 35, 'prompt_eval_duration': 893007000, 'eval_count': 22, 'eval_duration': 1218962000}, id='run-0863daa2-43bf-4a43-86cc-611b23eae466-0', usage_metadata={'input_tokens': 35, 'output_tokens': 22, 'total_tokens': 57})" |
| 163 | + "AIMessage(content='The translation of \"I love programming\" in French is:\\n\\n\"J\\'adore le programmation.\"', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-06-25T18:43:00.483666Z', 'done': True, 'done_reason': 'stop', 'total_duration': 619971208, 'load_duration': 27793125, 'prompt_eval_count': 35, 'prompt_eval_duration': 36354583, 'eval_count': 22, 'eval_duration': 555182667, 'model_name': 'llama3.1'}, id='run--348bb5ef-9dd9-4271-bc7e-a9ddb54c28c1-0', usage_metadata={'input_tokens': 35, 'output_tokens': 22, 'total_tokens': 57})" |
163 | 164 | ]
|
164 | 165 | },
|
165 |     | - "execution_count": 10, |
| 166 | + "execution_count": 5, |
166 | 167 | "metadata": {},
|
167 | 168 | "output_type": "execute_result"
|
168 | 169 | }
|
169 | 170 | ],
|
170 | 171 | "source": [
|
171 |     | - "from langchain_core.messages import AIMessage\n", |
172 |     | - "\n", |
173 | 172 | "messages = [\n",
|
174 | 173 | " (\n",
|
175 | 174 | " \"system\",\n",
|
|
191 | 190 | "name": "stdout",
|
192 | 191 | "output_type": "stream",
|
193 | 192 | "text": [
|
194 |     | - "The translation of \"I love programming\" from English to French is:\n", |
| 193 | + "The translation of \"I love programming\" in French is:\n", |
195 | 194 | "\n",
|
196 |     | - "\"J'adore programmer.\"\n" |
| 195 | + "\"J'adore le programmation.\"\n" |
197 | 196 | ]
|
198 | 197 | }
|
199 | 198 | ],
|
|
220 | 219 | {
|
221 | 220 | "data": {
|
222 | 221 | "text/plain": [
|
223 |     | - "AIMessage(content='Das Programmieren ist mir ein Leidenschaft! (That\\'s \"Programming is my passion!\" in German.) Would you like me to translate anything else?', response_metadata={'model': 'llama3.1', 'created_at': '2024-08-19T16:05:34.893548Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 2045997333, 'load_duration': 22584792, 'prompt_eval_count': 30, 'prompt_eval_duration': 213210000, 'eval_count': 32, 'eval_duration': 1808541000}, id='run-d18e1c6b-50e0-4b1d-b23a-973fa058edad-0', usage_metadata={'input_tokens': 30, 'output_tokens': 32, 'total_tokens': 62})" |
| 222 | + "AIMessage(content='\"Programmieren ist meine Leidenschaft.\"\\n\\n(I translated \"programming\" to the German word \"Programmieren\", and added \"ist meine Leidenschaft\" which means \"is my passion\")', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2025-06-25T18:43:29.350032Z', 'done': True, 'done_reason': 'stop', 'total_duration': 1194744459, 'load_duration': 26982500, 'prompt_eval_count': 30, 'prompt_eval_duration': 117043458, 'eval_count': 41, 'eval_duration': 1049892167, 'model_name': 'llama3.1'}, id='run--efc6436e-2346-43d9-8118-3c20b3cdf0d0-0', usage_metadata={'input_tokens': 30, 'output_tokens': 41, 'total_tokens': 71})" |
224 | 223 | ]
|
225 | 224 | },
|
226 |     | - "execution_count": 12, |
| 225 | + "execution_count": 7, |
227 | 226 | "metadata": {},
|
228 | 227 | "output_type": "execute_result"
|
229 | 228 | }
|
|
258 | 257 | "source": [
|
259 | 258 | "## Tool calling\n",
|
260 | 259 | "\n",
|
261 |     | - "We can use [tool calling](https://blog.langchain.dev/improving-core-tool-interfaces-and-docs-in-langchain/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/search?&c=tools) such as `llama3.1`:\n", |
| 260 | + "We can use [tool calling](/docs/concepts/tool_calling/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/search?&c=tools) such as `llama3.1`:\n", |
262 | 261 | "\n",
|
263 | 262 | "```\n",
|
264 | 263 | "ollama pull llama3.1\n",
|
|
274 | 273 | "metadata": {},
|
275 | 274 | "outputs": [
|
276 | 275 | {
|
277 |     | - "data": { |
278 |     | - "text/plain": [ |
279 |     | - "[{'name': 'validate_user',\n", |
280 |     | - " 'args': {'addresses': '[\"123 Fake St, Boston, MA\", \"234 Pretend Boulevard, Houston, TX\"]',\n", |
281 |     | - " 'user_id': '123'},\n", |
282 |     | - " 'id': '40fe3de0-500c-4b91-9616-5932a929e640',\n", |
283 |     | - " 'type': 'tool_call'}]" |
284 |     | - ] |
285 |     | - }, |
286 |     | - "execution_count": 13, |
287 |     | - "metadata": {}, |
288 |     | - "output_type": "execute_result" |
| 276 | + "name": "stdout", |
| 277 | + "output_type": "stream", |
| 278 | + "text": [ |
| 279 | + "[{'name': 'validate_user', 'args': {'addresses': ['123 Fake St, Boston, MA', '234 Pretend Boulevard, Houston, TX'], 'user_id': '123'}, 'id': 'aef33a32-a34b-4b37-b054-e0d85584772f', 'type': 'tool_call'}]\n" |
| 280 | + ] |
289 | 281 | }
|
290 | 282 | ],
|
291 | 283 | "source": [
|
292 | 284 | "from typing import List\n",
|
293 | 285 | "\n",
|
| 286 | + "from langchain_core.messages import AIMessage\n", |
294 | 287 | "from langchain_core.tools import tool\n",
|
295 | 288 | "from langchain_ollama import ChatOllama\n",
|
296 | 289 | "\n",
|
|
316 | 309 | " \"123 Fake St in Boston MA and 234 Pretend Boulevard in \"\n",
|
317 | 310 | " \"Houston TX.\"\n",
|
318 | 311 | ")\n",
|
319 |     | - "result.tool_calls" |
| 312 | + "\n", |
| 313 | + "if isinstance(result, AIMessage) and result.tool_calls:\n", |
| 314 | + " print(result.tool_calls)" |
320 | 315 | ]
|
321 | 316 | },
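The printed `tool_calls` can then be executed by invoking the tool with each call. A minimal sketch, assuming the `validate_user` tool and the `result` message from the cell above; in recent `langchain-core` versions, invoking a tool with a tool-call dict returns a `ToolMessage`:

```python
# Sketch: run each tool call the model requested and collect
# ToolMessages that could be appended back onto the conversation.
for tool_call in result.tool_calls:
    tool_msg = validate_user.invoke(tool_call)  # returns a ToolMessage
    print(tool_msg.content)
```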
|
322 | 317 | {
|
|
333 | 328 | "Be sure to update Ollama so that you have the most recent version to support multi-modal."
|
334 | 329 | ]
|
335 | 330 | },
|
| 331 | + { |
| 332 | + "cell_type": "code", |
| 333 | + "execution_count": null, |
| 334 | + "id": "69920d39", |
| 335 | + "metadata": {}, |
| 336 | + "outputs": [], |
| 337 | + "source": [ |
| 338 | + "%pip install pillow" |
| 339 | + ] |
| 340 | + }, |
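For the multi-modal case itself, images are passed to ChatOllama as base64-encoded `image_url` content blocks. A minimal sketch, assuming a vision model such as `llava` has been pulled and a local `./example.jpg` exists:

```python
# Sketch: query a multimodal model with a local image (assumes
# `ollama pull llava` has been run and ./example.jpg exists).
import base64

from langchain_core.messages import HumanMessage
from langchain_ollama import ChatOllama

with open("./example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

llm = ChatOllama(model="llava")
message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image:"},
        {"type": "image_url", "image_url": f"data:image/jpeg;base64,{image_b64}"},
    ]
)
print(llm.invoke([message]).content)
```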
336 | 341 | {
|
337 | 342 | "cell_type": "code",
|
338 | 343 | "execution_count": 15,
|
|
467 | 472 | "output_type": "stream",
|
468 | 473 | "text": [
|
469 | 474 | "Here is my thought process:\n",
|
470 |     | - "This question is asking for the result of 3 raised to the power of 3, which is a basic mathematical operation. \n", |
| 475 | + "The user is asking for the value of 3 raised to the power of 3, which is a basic exponentiation operation.\n", |
471 | 476 | "\n",
|
472 | 477 | "Here is my response:\n",
|
473 |     | - "The expression 3^3 means 3 raised to the power of 3. To calculate this, you multiply the base number (3) by itself as many times as its exponent (3):\n", |
474 | 478 | "\n",
|
475 |     | - "3 * 3 * 3 = 27\n", |
| 479 | + "3^3 (read as \"3 to the power of 3\") equals 27. \n", |
476 | 480 | "\n",
|
477 |     | - "So, 3^3 equals 27.\n" |
| 481 | + "This calculation is performed by multiplying 3 by itself three times: 3*3*3 = 27.\n" |
478 | 482 | ]
|
479 | 483 | }
|
480 | 484 | ],
|
|
508 | 512 | "source": [
|
509 | 513 | "## API reference\n",
|
510 | 514 | "\n",
|
511 |     | - "For detailed documentation of all ChatOllama features and configurations head to the API reference: https://python.langchain.com/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html" |
| 515 | + "For detailed documentation of all ChatOllama features and configurations head to the [API reference](https://python.langchain.com/api_reference/ollama/chat_models/langchain_ollama.chat_models.ChatOllama.html)." |
512 | 516 | ]
|
513 | 517 | }
|
514 | 518 | ],
|
|