Commit f9829b3: update to use @fern-api/ai
Parent: 8c9b9eb

953 files changed (+1465, -1462 lines)


content/cookbook/01-next/10-generate-text.mdx (1 addition, 1 deletion)

@@ -58,7 +58,7 @@ export default function Page() {
 Let's create a route handler for `/api/completion` that will generate text based on the input prompt. The route will call the `generateText` function from the `ai` module, which will then generate text based on the input prompt and return it.
 
 ```typescript filename='app/api/completion/route.ts'
-import { generateText } from 'ai';
+import { generateText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 export async function POST(req: Request) {

content/cookbook/01-next/11-generate-text-with-chat-prompt.mdx (2 additions, 2 deletions)

@@ -31,7 +31,7 @@ Let's start by creating a simple chat interface with an input field that sends t
 ```tsx filename='app/page.tsx'
 'use client';
 
-import { ModelMessage } from 'ai';
+import { ModelMessage } from '@fern-api/ai';
 import { useState } from 'react';
 
 export default function Page() {

@@ -90,7 +90,7 @@ export default function Page() {
 Next, let's create the `/api/chat` endpoint that generates the assistant's response based on the conversation history.
 
 ```typescript filename='app/api/chat/route.ts'
-import { ModelMessage, generateText } from 'ai';
+import { ModelMessage, generateText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 export async function POST(req: Request) {

content/cookbook/01-next/12-generate-image-with-chat-prompt.mdx (1 addition, 1 deletion)

@@ -14,7 +14,7 @@ Let's create an endpoint at `/api/chat` that generates the assistant's response
 
 ```typescript filename='app/api/chat/route.ts'
 import { openai } from '@ai-sdk/openai';
-import { experimental_generateImage, Message, streamText, tool } from 'ai';
+import { experimental_generateImage, Message, streamText, tool } from '@fern-api/ai';
 import { z } from 'zod';
 
 export async function POST(request: Request) {

content/cookbook/01-next/122-caching-middleware.mdx (2 additions, 2 deletions)

@@ -69,7 +69,7 @@ import {
   type LanguageModelV2Middleware,
   type LanguageModelV1StreamPart,
   simulateReadableStream,
-} from 'ai';
+} from '@fern-api/ai';
 
 const redis = new Redis({
   url: process.env.KV_URL,

@@ -164,7 +164,7 @@ Finally, you will create an API route for `api/chat` to handle the assistant's m
 ```tsx filename='app/api/chat/route.ts'
 import { cacheMiddleware } from '@/ai/middleware';
 import { openai } from '@ai-sdk/openai';
-import { wrapLanguageModel, streamText, tool } from 'ai';
+import { wrapLanguageModel, streamText, tool } from '@fern-api/ai';
 import { z } from 'zod';
 
 const wrappedModel = wrapLanguageModel({

content/cookbook/01-next/20-stream-text.mdx (1 addition, 1 deletion)

@@ -47,7 +47,7 @@ export default function Page() {
 Let's create a route handler for `/api/completion` that will generate text based on the input prompt. The route will call the `streamText` function from the `ai` module, which will then generate text based on the input prompt and stream it to the client.
 
 ```typescript filename='app/api/completion/route.ts'
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 export async function POST(req: Request) {

content/cookbook/01-next/21-stream-text-with-chat-prompt.mdx (1 addition, 1 deletion)

@@ -62,7 +62,7 @@ export default function Page() {
 Next, let's create the `/api/chat` endpoint that generates the assistant's response based on the conversation history.
 
 ```typescript filename='app/api/chat/route.ts'
-import { streamText, UIMessage } from 'ai';
+import { streamText, UIMessage } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 export async function POST(req: Request) {

content/cookbook/01-next/22-stream-text-with-image-prompt.mdx (1 addition, 1 deletion)

@@ -17,7 +17,7 @@ The last message is the user's message, and we add the image URL to it.
 
 ```tsx filename='app/api/chat/route.ts' highlight="8,9,23"
 import { openai } from '@ai-sdk/openai';
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 
 export const maxDuration = 60;
 

content/cookbook/01-next/23-chat-with-pdf.mdx (1 addition, 1 deletion)

@@ -23,7 +23,7 @@ Create a route handler that will use Anthropic's Claude model to process message
 
 ```tsx filename="app/api/chat/route.ts"
 import { anthropic } from '@ai-sdk/anthropic';
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 
 export const maxDuration = 30;
 

content/cookbook/01-next/24-stream-text-multistep.mdx (1 addition, 1 deletion)

@@ -17,7 +17,7 @@ allowing you to have different steps in a single assistant UI message.
 
 ```typescript filename='app/api/chat/route.ts'
 import { openai } from '@ai-sdk/openai';
-import { createDataStreamResponse, streamText, tool } from 'ai';
+import { createDataStreamResponse, streamText, tool } from '@fern-api/ai';
 import { z } from 'zod';
 
 export async function POST(req: Request) {

content/cookbook/01-next/25-markdown-chatbot-with-memoization.mdx (1 addition, 1 deletion)

@@ -18,7 +18,7 @@ On the server, you use a simple route handler that streams the response from the
 
 ```tsx filename='app/api/chat/route.ts'
 import { openai } from '@ai-sdk/openai';
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 
 export const maxDuration = 60;
 

content/cookbook/01-next/30-generate-object.mdx (1 addition, 1 deletion)

@@ -82,7 +82,7 @@ export default function Page() {
 Let's create a route handler for `/api/completion` that will generate an object based on the input prompt. The route will call the `generateObject` function from the `ai` module, which will then generate an object based on the input prompt and return it.
 
 ```typescript filename='app/api/completion/route.ts'
-import { generateObject } from 'ai';
+import { generateObject } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 import { z } from 'zod';
 

content/cookbook/01-next/31-generate-object-with-file-prompt.mdx (1 addition, 1 deletion)

@@ -67,7 +67,7 @@ export default function Page() {
 On the server, create an API route that receives the PDF, sends it to the LLM, and returns the result. This example uses the [ `generateObject` ](/docs/reference/ai-sdk-core/generate-object) function to generate the summary as part of a structured output.
 
 ```typescript file="app/api/analyze/route.ts"
-import { generateObject } from 'ai';
+import { generateObject } from '@fern-api/ai';
 import { anthropic } from '@ai-sdk/anthropic';
 import { z } from 'zod';
 

content/cookbook/01-next/40-stream-object.mdx (3 additions, 3 deletions)

@@ -102,7 +102,7 @@ On the server, we use [`streamObject`](/docs/reference/ai-sdk-core/stream-object
 
 ```typescript filename='app/api/use-object/route.ts'
 import { openai } from '@ai-sdk/openai';
-import { streamObject } from 'ai';
+import { streamObject } from '@fern-api/ai';
 import { notificationSchema } from './schema';
 
 // Allow streaming responses up to 30 seconds

@@ -237,7 +237,7 @@ On the server, specify `output: 'array'` to generate an array of objects.
 
 ```typescript filename='app/api/use-object/route.ts'
 import { openai } from '@ai-sdk/openai';
-import { streamObject } from 'ai';
+import { streamObject } from '@fern-api/ai';
 import { notificationSchema } from './schema';
 
 export const maxDuration = 30;

@@ -312,7 +312,7 @@ On the server, specify `output: 'no-schema'`.
 
 ```typescript filename='app/api/use-object/route.ts'
 import { openai } from '@ai-sdk/openai';
-import { streamObject } from 'ai';
+import { streamObject } from '@fern-api/ai';
 import { notificationSchema } from './schema';
 
 export const maxDuration = 30;

content/cookbook/01-next/70-call-tools.mdx (1 addition, 1 deletion)

@@ -74,7 +74,7 @@ You will use the [`tools`](/docs/reference/ai-sdk-core/generate-text#tools) para
 You will also use zod to specify the schema for the `celsiusToFahrenheit` function's parameters.
 
 ```tsx filename='app/api/chat/route.ts'
-import { ToolInvocation, streamText } from 'ai';
+import { ToolInvocation, streamText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 import { z } from 'zod';
 

content/cookbook/01-next/71-call-tools-in-parallel.mdx (1 addition, 1 deletion)

@@ -74,7 +74,7 @@ You will use the [`tools`](/docs/reference/ai-sdk-core/generate-text#tools) para
 You will add the `getWeather` function and use zod to specify the schema for its parameters.
 
 ```ts filename='app/api/chat/route.ts'
-import { ToolInvocation, streamText } from 'ai';
+import { ToolInvocation, streamText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 import { z } from 'zod';
 

content/cookbook/01-next/72-call-tools-multiple-steps.mdx (1 addition, 1 deletion)

@@ -56,7 +56,7 @@ You will use the [`tools`](/docs/reference/ai-sdk-core/generate-text#tools) para
 You will add the two functions mentioned earlier and use zod to specify the schema for its parameters.
 
 ```ts filename='app/api/chat/route.ts'
-import { ToolInvocation, streamText } from 'ai';
+import { ToolInvocation, streamText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 import { z } from 'zod';
 

content/cookbook/01-next/73-mcp-tools.mdx (1 addition, 1 deletion)

@@ -17,7 +17,7 @@ To use the `StreamableHTTPClientTransport`, you will need to install the officia
 <Snippet text="pnpm install @modelcontextprotocol/sdk" />
 
 ```ts filename="app/api/completion/route.ts"
-import { experimental_createMCPClient, streamText } from 'ai';
+import { experimental_createMCPClient, streamText } from '@fern-api/ai';
 import { Experimental_StdioMCPTransport } from 'ai/mcp-stdio';
 import { openai } from '@ai-sdk/openai';
 import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp';

content/cookbook/01-next/75-human-in-the-loop.mdx (6 additions, 6 deletions)

@@ -55,7 +55,7 @@ On the backend, create a route handler (API Route) that returns a `DataStreamRes
 
 ```ts filename="api/chat/route.ts"
 import { openai } from '@ai-sdk/openai';
-import { createDataStreamResponse, streamText, tool } from 'ai';
+import { createDataStreamResponse, streamText, tool } from '@fern-api/ai';
 import { z } from 'zod';
 
 export async function POST(req: Request) {

@@ -110,7 +110,7 @@ To implement HITL functionality, you start by omitting the `execute` function fr
 
 ```ts filename="api/chat/route.ts" highlight="17"
 import { openai } from '@ai-sdk/openai';
-import { createDataStreamResponse, streamText, tool } from 'ai';
+import { createDataStreamResponse, streamText, tool } from '@fern-api/ai';
 import { z } from 'zod';
 
 export async function POST(req: Request) {

@@ -238,7 +238,7 @@ import {
   Message,
   streamText,
   tool,
-} from 'ai';
+} from '@fern-api/ai';
 import { z } from 'zod';
 
 export async function POST(req: Request) {

@@ -340,7 +340,7 @@ import {
   DataStreamWriter,
   ToolExecutionOptions,
   ToolSet,
-} from 'ai';
+} from '@fern-api/ai';
 import { z } from 'zod';
 
 // Approval string to be shared across frontend and backend

@@ -477,7 +477,7 @@ Finally, you declare a function called `getToolsRequiringConfirmation` that take
 In order to use these utility functions, you will need to move tool declarations to their own file:
 
 ```ts filename="tools.ts"
-import { tool } from 'ai';
+import { tool } from '@fern-api/ai';
 import { z } from 'zod';
 
 const getWeatherInformation = tool({

@@ -510,7 +510,7 @@ Update your route handler to use the `processToolCalls` utility function.
 
 ```ts filename="app/api/chat/route.ts"
 import { openai } from '@ai-sdk/openai';
-import { createDataStreamResponse, Message, streamText } from 'ai';
+import { createDataStreamResponse, Message, streamText } from '@fern-api/ai';
 import { processToolCalls } from './utils';
 import { tools } from './tools';
 

content/cookbook/01-next/80-send-custom-body-from-use-chat.mdx (1 addition, 1 deletion)

@@ -66,7 +66,7 @@ The rest of the message history can be loaded from storage.
 
 ```tsx filename='app/api/chat/route.ts' highlight="8,9,23"
 import { openai } from '@ai-sdk/openai'
-import { streamText } from 'ai'
+import { streamText } from '@fern-api/ai'
 
 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30

content/cookbook/01-next/90-render-visual-interface-in-chat.mdx (2 additions, 2 deletions)

@@ -44,7 +44,7 @@ Let's build an assistant that gets the weather for any city by calling the `getW
 ```tsx filename='app/page.tsx'
 'use client';
 
-import { ToolInvocation } from 'ai';
+import { ToolInvocation } from '@fern-api/ai';
 import { Message, useChat } from '@ai-sdk/react';
 
 export default function Chat() {

@@ -186,7 +186,7 @@ export default function Chat() {
 
 ```tsx filename='api/chat.ts'
 import { openai } from '@ai-sdk/openai';
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 import { z } from 'zod';
 
 export default async function POST(request: Request) {

content/cookbook/05-node/10-generate-text.mdx (1 addition, 1 deletion)

@@ -11,7 +11,7 @@ For example, you may want to generate a response to a question or summarize a bo
 The `generateText` function can be used to generate text based on the input prompt.
 
 ```ts file='index.ts'
-import { generateText } from 'ai';
+import { generateText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 const result = await generateText({

content/cookbook/05-node/100-retrieval-augmented-generation.mdx (1 addition, 1 deletion)

@@ -16,7 +16,7 @@ import fs from 'fs';
 import path from 'path';
 import dotenv from 'dotenv';
 import { openai } from '@ai-sdk/openai';
-import { cosineSimilarity, embed, embedMany, generateText } from 'ai';
+import { cosineSimilarity, embed, embedMany, generateText } from '@fern-api/ai';
 
 dotenv.config();
 

content/cookbook/05-node/11-generate-text-with-chat-prompt.mdx (1 addition, 1 deletion)

@@ -11,7 +11,7 @@ Previously, we were able to generate text and objects using either a single mess
 A chat completion allows you to generate text based on a series of messages. This series of messages can be any series of interactions between any number of systems, but the most popular and relatable use case has been a series of messages that represent a conversation between a user and a model.
 
 ```ts file='index.ts'
-import { generateText } from 'ai';
+import { generateText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 const result = await generateText({

content/cookbook/05-node/12-generate-text-with-image-prompt.mdx (2 additions, 2 deletions)

@@ -11,7 +11,7 @@ Some language models that support vision capabilities accept images as part of t
 ## URL
 
 ```ts file='index.ts'
-import { generateText } from 'ai';
+import { generateText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 const result = await generateText({

@@ -42,7 +42,7 @@ console.log(result);
 ## File Buffer
 
 ```ts file='index.ts'
-import { generateText } from 'ai';
+import { generateText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 import fs from 'fs';
 

content/cookbook/05-node/20-stream-text.mdx (2 additions, 2 deletions)

@@ -26,7 +26,7 @@ Overall, Joyful Hearts Day is a day to celebrate love, kindness, and positivity,
 ## Without reader
 
 ```ts file='index.ts'
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 const result = streamText({

@@ -45,7 +45,7 @@ for await (const textPart of result.textStream) {
 ## With reader
 
 ```ts file='index.ts'
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 const result = streamText({

content/cookbook/05-node/21-stream-text-with-chat-prompt.mdx (1 addition, 1 deletion)

@@ -12,7 +12,7 @@ This allows the client to display the new message as it is being generated by th
 rather than have users wait for it to finish.
 
 ```ts file='index.ts'
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 
 const result = streamText({

content/cookbook/05-node/22-stream-text-with-image-prompt.mdx (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ Vision-language models can analyze images alongside text prompts to generate res
 
 ```ts
 import { anthropic } from '@ai-sdk/anthropic';
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 import 'dotenv/config';
 import fs from 'node:fs';

content/cookbook/05-node/23-stream-text-with-file-prompt.mdx (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ Working with files in AI applications often requires analyzing documents, proces
 
 ```ts
 import { anthropic } from '@ai-sdk/anthropic';
-import { streamText } from 'ai';
+import { streamText } from '@fern-api/ai';
 import 'dotenv/config';
 import fs from 'node:fs';

content/cookbook/05-node/30-generate-object-reasoning.mdx (1 addition, 1 deletion)

@@ -14,7 +14,7 @@ One solution is to pass the output from a reasoning model through a smaller mode
 ```ts
 import { deepseek } from '@ai-sdk/deepseek';
 import { openai } from '@ai-sdk/openai';
-import { generateObject, generateText } from 'ai';
+import { generateObject, generateText } from '@fern-api/ai';
 import 'dotenv/config';
 import { z } from 'zod';
 

content/cookbook/05-node/30-generate-object.mdx (1 addition, 1 deletion)

@@ -11,7 +11,7 @@ Earlier functions like `generateText` and `streamText` gave us the ability to ge
 The function requires you to provide a schema using [zod](https://zod.dev), a library for defining schemas for JavaScript objects. By using zod, you can also use it to validate the generated object and ensure that it conforms to the specified structure.
 
 ```ts file='index.ts'
-import { generateObject } from 'ai';
+import { generateObject } from '@fern-api/ai';
 import { openai } from '@ai-sdk/openai';
 import { z } from 'zod';
 

content/cookbook/05-node/40-stream-object.mdx (1 addition, 1 deletion)

@@ -15,7 +15,7 @@ You can use the [`streamObject`](/docs/reference/ai-sdk-core/stream-object) func
 
 ```ts file='index.ts'
 import { openai } from '@ai-sdk/openai';
-import { streamObject } from 'ai';
+import { streamObject } from '@fern-api/ai';
 import { z } from 'zod';
 
 const { partialObjectStream } = streamObject({
