Merged
5 changes: 5 additions & 0 deletions .changeset/young-dingos-march.md
@@ -0,0 +1,5 @@
+---
+'ai': major
+---
+
+chore (ai): rename CoreMessage to ModelMessage
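The changeset above records the breaking rename. As a rough sketch of what the renamed type looks like at a call site, the snippet below uses a simplified local stand-in for `ModelMessage` (an assumption for illustration; the SDK's real type also allows multi-part content and provider options):

```ts
// Simplified local stand-in for the SDK's ModelMessage type
// (assumption: the real type is richer than this).
type ModelMessage = {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
};

// A conversation history built with the renamed type.
const messages: ModelMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the weather in Berlin?' },
];

console.log(messages.length); // 2
```

Migrating existing code is a pure rename: the array shapes and call sites stay the same, only the imported type name changes.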
@@ -31,12 +31,12 @@ Let's start by creating a simple chat interface with an input field that sends t
```tsx filename='app/page.tsx'
'use client';

-import { CoreMessage } from 'ai';
+import { ModelMessage } from 'ai';
import { useState } from 'react';

export default function Page() {
const [input, setInput] = useState('');
-const [messages, setMessages] = useState<CoreMessage[]>([]);
+const [messages, setMessages] = useState<ModelMessage[]>([]);

return (
<div>
@@ -90,11 +90,11 @@ export default function Page() {
Next, let's create the `/api/chat` endpoint that generates the assistant's response based on the conversation history.

```typescript filename='app/api/chat/route.ts'
-import { CoreMessage, generateText } from 'ai';
+import { ModelMessage, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
-const { messages }: { messages: CoreMessage[] } = await req.json();
+const { messages }: { messages: ModelMessage[] } = await req.json();

const { response } = await generateText({
model: openai('gpt-4'),
2 changes: 1 addition & 1 deletion content/cookbook/01-next/24-stream-text-multistep.mdx
@@ -55,7 +55,7 @@ export async function POST(req: Request) {
'You are a helpful assistant with a different system prompt. Repeat the extracted user goal in your answer.',
// continue the workflow stream with the messages from the previous step:
messages: [
-...convertToCoreMessages(messages),
+...convertToModelMessages(messages),
...(await result1.response).messages,
],
});
4 changes: 2 additions & 2 deletions content/cookbook/01-next/75-human-in-the-loop.mdx
@@ -336,7 +336,7 @@ The solution above is low-level and not very friendly to use in a production env
import {
formatDataStreamPart,
Message,
-convertToCoreMessages,
+convertToModelMessages,
DataStreamWriter,
ToolExecutionOptions,
ToolSet,
@@ -419,7 +419,7 @@ export async function processToolCalls<
const toolInstance = executeFunctions[toolName];
if (toolInstance) {
result = await toolInstance(toolInvocation.args, {
-messages: convertToCoreMessages(messages),
+messages: convertToModelMessages(messages),
toolCallId: toolInvocation.toolCallId,
});
} else {
4 changes: 2 additions & 2 deletions content/docs/02-foundations/03-prompts.mdx
@@ -164,9 +164,9 @@ const messages = [
AI SDK UI hooks like [`useChat`](/docs/reference/ai-sdk-ui/use-chat) return
arrays of `UIMessage` objects, which do not support provider options. We
recommend using the
-[`convertToCoreMessages`](/docs/reference/ai-sdk-ui/convert-to-core-messages)
+[`convertToModelMessages`](/docs/reference/ai-sdk-ui/convert-to-core-messages)
function to convert `UIMessage` objects to
-[`CoreMessage`](/docs/reference/ai-sdk-core/core-message) objects before
+[`ModelMessage`](/docs/reference/ai-sdk-core/core-message) objects before
applying or appending message(s) or message parts with `providerOptions`.
</Note>

20 changes: 10 additions & 10 deletions content/docs/02-getting-started/06-nodejs.mdx
@@ -76,7 +76,7 @@ Create an `index.ts` file in the root of your project and add the following code

```ts filename="index.ts"
import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText } from 'ai';
+import { ModelMessage, streamText } from 'ai';
import dotenv from 'dotenv';
import * as readline from 'node:readline/promises';

@@ -87,7 +87,7 @@ const terminal = readline.createInterface({
output: process.stdout,
});

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

async function main() {
while (true) {
@@ -151,7 +151,7 @@ Modify your `index.ts` file to include the new weather tool:

```ts filename="index.ts" highlight="2,4,25-38"
import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
@@ -163,7 +163,7 @@ const terminal = readline.createInterface({
output: process.stdout,
});

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

async function main() {
while (true) {
@@ -221,7 +221,7 @@ Notice the blank "assistant" response? This is because instead of generating a t

```typescript highlight="47-48"
import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
@@ -233,7 +233,7 @@ const terminal = readline.createInterface({
output: process.stdout,
});

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

async function main() {
while (true) {
@@ -291,7 +291,7 @@ Modify your `index.ts` file to include the `maxSteps` option:

```ts filename="index.ts" highlight="39-42"
import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
@@ -303,7 +303,7 @@ const terminal = readline.createInterface({
output: process.stdout,
});

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

async function main() {
while (true) {
@@ -364,7 +364,7 @@ Update your `index.ts` file to add a new tool to convert the temperature from Ce

```ts filename="index.ts" highlight="38-49"
import { openai } from '@ai-sdk/openai';
-import { CoreMessage, streamText, tool } from 'ai';
+import { ModelMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
@@ -376,7 +376,7 @@ const terminal = readline.createInterface({
output: process.stdout,
});

-const messages: CoreMessage[] = [];
+const messages: ModelMessage[] = [];

async function main() {
while (true) {
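The Node.js getting-started hunks above keep the whole conversation in one `messages` array and append each turn before the next model call. That append pattern can be sketched without the SDK; the `reply` function below is a hypothetical stand-in for a call like `streamText`, not the SDK's API:

```ts
type ModelMessage = { role: 'user' | 'assistant'; content: string };

// Hypothetical stand-in for a model call such as streamText.
function reply(history: ModelMessage[]): string {
  return `echo: ${history[history.length - 1].content}`;
}

const messages: ModelMessage[] = [];

function turn(userInput: string): string {
  // Append the user message before calling the model.
  messages.push({ role: 'user', content: userInput });
  const assistantText = reply(messages);
  // Append the assistant response so the next turn sees the full history.
  messages.push({ role: 'assistant', content: assistantText });
  return assistantText;
}

turn('hello');
turn('again');
console.log(messages.length); // 4
```

Keeping both sides of each exchange in the array is what makes the `while (true)` loop in the docs a stateful chat rather than independent one-shot prompts.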
8 changes: 4 additions & 4 deletions content/docs/02-guides/03-slackbot.mdx
@@ -318,10 +318,10 @@ Here's how to implement it:

```typescript filename="lib/generate-response.ts"
import { openai } from '@ai-sdk/openai';
-import { CoreMessage, generateText } from 'ai';
+import { ModelMessage, generateText } from 'ai';

export const generateResponse = async (
-messages: CoreMessage[],
+messages: ModelMessage[],
updateStatus?: (status: string) => void,
) => {
const { text } = await generateText({
@@ -349,12 +349,12 @@ The real power of the AI SDK comes from tools that enable your bot to perform ac

```typescript filename="lib/generate-response.ts"
import { openai } from '@ai-sdk/openai';
-import { CoreMessage, generateText, tool } from 'ai';
+import { ModelMessage, generateText, tool } from 'ai';
import { z } from 'zod';
import { exa } from './utils';

export const generateResponse = async (
-messages: CoreMessage[],
+messages: ModelMessage[],
updateStatus?: (status: string) => void,
) => {
const { text } = await generateText({
4 changes: 2 additions & 2 deletions content/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx
@@ -191,12 +191,12 @@ Both `generateText` and `streamText` have a `response.messages` property that yo
add the assistant and tool messages to your conversation history.
It is also available in the `onFinish` callback of `streamText`.

-The `response.messages` property contains an array of `CoreMessage` objects that you can add to your conversation history:
+The `response.messages` property contains an array of `ModelMessage` objects that you can add to your conversation history:

```ts
import { generateText } from 'ai';

-const messages: CoreMessage[] = [
+const messages: ModelMessage[] = [
// ...
];

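The `response.messages` pattern described in this hunk amounts to concatenating the step's assistant and tool messages onto the existing history. A minimal sketch with local stand-ins (the `response` object below is hypothetical, mimicking the documented shape rather than the SDK's actual return type):

```ts
type ModelMessage = { role: string; content: string };

const history: ModelMessage[] = [
  { role: 'user', content: 'What is 2 + 2?' },
];

// Hypothetical response shape: assistant output plus a tool message,
// standing in for what generateText/streamText return per the docs above.
const response = {
  messages: [
    { role: 'assistant', content: 'Let me check.' },
    { role: 'tool', content: '4' },
  ] as ModelMessage[],
};

// Append the whole step to the conversation history.
history.push(...response.messages);
console.log(history.length); // 3
```

Appending the full array, rather than just the final text, preserves the tool-call records the model needs to reason about on later turns.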
4 changes: 2 additions & 2 deletions content/docs/04-ai-sdk-ui/03-chatbot-message-persistence.mdx
@@ -124,14 +124,14 @@ We have enabled the `sendExtraMessageFields` option to send the id and createdAt
meaning that we store messages in the `useChat` message format.

<Note>
-The `useChat` message format is different from the `CoreMessage` format. The
+The `useChat` message format is different from the `ModelMessage` format. The
`useChat` message format is designed for frontend display, and contains
additional fields such as `id` and `createdAt`. We recommend storing the
messages in the `useChat` message format.
</Note>

Storing messages is done in the `onFinish` callback of the `streamText` function.
-`onFinish` receives the messages from the AI response as a `CoreMessage[]`,
+`onFinish` receives the messages from the AI response as a `ModelMessage[]`,
and we use the [`appendResponseMessages`](/docs/reference/ai-sdk-ui/append-response-messages)
helper to append the AI response messages to the chat messages.

4 changes: 2 additions & 2 deletions content/docs/05-ai-sdk-rsc/10-migrating-to-ui.mdx
@@ -483,12 +483,12 @@ With AI SDK UI, you will save chats using the `onFinish` callback function of `s
```ts filename="@/app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { saveChat } from '@/utils/queries';
-import { streamText, convertToCoreMessages } from 'ai';
+import { streamText, convertToModelMessages } from 'ai';

export async function POST(request) {
const { id, messages } = await request.json();

-const coreMessages = convertToCoreMessages(messages);
+const coreMessages = convertToModelMessages(messages);

const result = streamText({
model: openai('gpt-4o'),
4 changes: 2 additions & 2 deletions content/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx
@@ -333,7 +333,7 @@ To see `generateText` in action, check out [these examples](#examples).
},
{
name: 'messages',
-type: 'CoreMessage[]',
+type: 'ModelMessage[]',
description:
'Messages that were sent to the language model to initiate the response that contained the tool call. The messages do not include the system prompt nor the assistant response that contained the tool call.',
},
@@ -533,7 +533,7 @@ To see `generateText` in action, check out [these examples](#examples).
},
{
name: 'messages',
-type: 'CoreMessage[]',
+type: 'ModelMessage[]',
description: 'The messages in the current generation step.',
},
{
4 changes: 2 additions & 2 deletions content/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx
@@ -334,7 +334,7 @@ To see `streamText` in action, check out [these examples](#examples).
},
{
name: 'messages',
-type: 'CoreMessage[]',
+type: 'ModelMessage[]',
description:
'Messages that were sent to the language model to initiate the response that contained the tool call. The messages do not include the system prompt nor the assistant response that contained the tool call.',
},
@@ -577,7 +577,7 @@ To see `streamText` in action, check out [these examples](#examples).
},
{
name: 'messages',
-type: 'CoreMessage[]',
+type: 'ModelMessage[]',
description: 'The messages in the current generation step.',
},
{
2 changes: 1 addition & 1 deletion content/docs/07-reference/01-ai-sdk-core/20-tool.mdx
@@ -77,7 +77,7 @@ export const weatherTool = tool({
},
{
name: "messages",
type: "CoreMessage[]",
type: "ModelMessage[]",
description: "Messages that were sent to the language model to initiate the response that contained the tool call. The messages do not include the system prompt nor the assistant response that contained the tool call."
},
{
12 changes: 6 additions & 6 deletions content/docs/07-reference/01-ai-sdk-core/30-core-message.mdx
@@ -1,16 +1,16 @@
---
-title: CoreMessage
+title: ModelMessage
description: Message types for AI SDK Core (API Reference)
---

-# `CoreMessage`
+# `ModelMessage`

-`CoreMessage` represents the fundamental message structure used with AI SDK Core functions.
+`ModelMessage` represents the fundamental message structure used with AI SDK Core functions.
It encompasses various message types that can be used in the `messages` field of any AI SDK Core functions.

-You can access the Zod schema for `CoreMessage` with the `coreMessageSchema` export.
+You can access the Zod schema for `ModelMessage` with the `coreMessageSchema` export.

-## `CoreMessage` Types
+## `ModelMessage` Types

### `CoreSystemMessage`

@@ -75,7 +75,7 @@ type ToolContent = Array<ToolResultPart>;

You can access the Zod schema for `CoreToolMessage` with the `coreToolMessageSchema` export.

-## `CoreMessage` Parts
+## `ModelMessage` Parts

### `TextPart`

@@ -1,27 +1,27 @@
---
-title: convertToCoreMessages
-description: Convert useChat messages to CoreMessages for AI core functions (API Reference)
+title: convertToModelMessages
+description: Convert useChat messages to ModelMessages for AI functions (API Reference)
---

-# `convertToCoreMessages()`
+# `convertToModelMessages()`

<Note title="warning">
-The `convertToCoreMessages` function is no longer required. The AI SDK now
-automatically converts the incoming messages to the `CoreMessage` format.
+The `convertToModelMessages` function is no longer required. The AI SDK now
+automatically converts the incoming messages to the `ModelMessage` format.
</Note>

-The `convertToCoreMessages` function is used to transform an array of UI messages from the `useChat` hook into an array of `CoreMessage` objects. These `CoreMessage` objects are compatible with AI core functions like `streamText`.
+The `convertToModelMessages` function is used to transform an array of UI messages from the `useChat` hook into an array of `ModelMessage` objects. These `ModelMessage` objects are compatible with AI core functions like `streamText`.

```ts filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
-import { convertToCoreMessages, streamText } from 'ai';
+import { convertToModelMessages, streamText } from 'ai';

export async function POST(req: Request) {
const { messages } = await req.json();

const result = streamText({
model: openai('gpt-4o'),
-messages: convertToCoreMessages(messages),
+messages: convertToModelMessages(messages),
});

return result.toDataStreamResponse();
@@ -30,7 +30,7 @@ export async function POST(req: Request) {

## Import

-<Snippet text={`import { convertToCoreMessages } from "ai"`} prompt={false} />
+<Snippet text={`import { convertToModelMessages } from "ai"`} prompt={false} />

## API Signature

@@ -55,21 +55,21 @@ export async function POST(req: Request) {

### Returns

-An array of [`CoreMessage`](/docs/reference/ai-sdk-core/core-message) objects.
+An array of [`ModelMessage`](/docs/reference/ai-sdk-core/core-message) objects.

<PropertiesTable
content={[
{
-name: 'CoreMessage[]',
+name: 'ModelMessage[]',
type: 'Array',
-description: 'An array of CoreMessage objects',
+description: 'An array of ModelMessage objects',
},
]}
/>

## Multi-modal Tool Responses

-The `convertToCoreMessages` function supports tools that can return multi-modal content. This is useful when tools need to return non-text content like images.
+The `convertToModelMessages` function supports tools that can return multi-modal content. This is useful when tools need to return non-text content like images.

```ts
import { tool } from 'ai';
Expand All @@ -83,7 +83,7 @@ const screenshotTool = tool({

const result = streamText({
model: openai('gpt-4'),
-messages: convertToCoreMessages(messages, {
+messages: convertToModelMessages(messages, {
tools: {
screenshot: screenshotTool,
},
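The conversion this reference section documents can be approximated as a mapping that drops UI-only fields before handing messages to a core function. The shapes below are deliberately simplified assumptions, not the SDK's actual `UIMessage`/`ModelMessage` types, which also carry parts, tool invocations, and provider options:

```ts
// Simplified stand-ins for illustration only.
type UIMessage = {
  id: string;
  createdAt?: Date;
  role: 'user' | 'assistant';
  content: string;
};
type ModelMessage = { role: 'user' | 'assistant'; content: string };

// Drop frontend-only fields (id, createdAt); keep what the model needs.
function toModelMessages(uiMessages: UIMessage[]): ModelMessage[] {
  return uiMessages.map(({ role, content }) => ({ role, content }));
}

const converted = toModelMessages([
  { id: 'm1', createdAt: new Date(), role: 'user', content: 'Hi' },
]);
console.log(converted[0].content); // Hi
```

This is why the two formats coexist: `UIMessage` is optimized for rendering and persistence on the frontend, while `ModelMessage` carries only what a model call requires.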