Tool calling

Tool calling in AI Transport supports both server-executed and client-executed tools. Tool invocations and results are published to the channel, so all clients see tool activity in real time and tool state persists in history.

How it works

When the LLM invokes a tool, the invocation is streamed through the channel like any other turn event. Clients see tool calls appear as they're generated. If the tool runs on the server, the result is streamed back in the same turn. If the tool runs on the client, the turn ends and a continuation turn starts after the client submits the result.

Tool state (invocations, arguments, and results) is part of the channel's message history. Late joiners and reconnecting clients see the full tool activity, not just the final text.

Server-executed tools

Server-executed tools are the default path. The AI SDK handles tool execution automatically during the LLM stream. Tool invocations and results are encoded by the codec and published to the channel as part of the turn.

JavaScript

const result = streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  messages: conversationHistory,
  tools: {
    getWeather: {
      description: 'Get current weather for a location',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => {
        const data = await fetchWeather(city)
        return { temperature: data.temp, conditions: data.conditions }
      },
    },
  },
  abortSignal: turn.abortSignal,
})

const { reason } = await turn.streamResponse(result.toUIMessageStream())
await turn.end(reason)

Clients see the tool invocation as it streams, then the result, then the LLM's follow-up text - all within a single turn.

Client-executed tools

Client-executed tools require a round trip between the server and client. The LLM requests a tool call, the turn ends, the client executes the tool locally and submits the result, and a continuation turn starts.

On the server, define the tool without an execute function. When the LLM invokes it, the stream ends with a tool call that the client must fulfill:

JavaScript

const result = streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  messages: conversationHistory,
  tools: {
    getUserLocation: {
      description: 'Get the user\'s current location',
      inputSchema: z.object({}),
      // No execute function - the client handles this
    },
  },
  abortSignal: turn.abortSignal,
})

const { reason } = await turn.streamResponse(result.toUIMessageStream())
await turn.end(reason)

On the client, detect the pending tool call and submit the result using view.update(). The first parameter is the Ably message ID of the node containing the tool invocation, not the tool call ID:

JavaScript

const { nodes } = useView(transport)

// Find the node with a pending tool invocation
const pendingNode = nodes
  .find(n => n.message.parts?.some(p => p.type === 'dynamic-tool' && p.state === 'input-available'))

if (pendingNode) {
  // Execute the tool locally. getCurrentPosition is callback-based,
  // so wrap it in a Promise before awaiting it.
  const position = await new Promise((resolve, reject) =>
    navigator.geolocation.getCurrentPosition(resolve, reject)
  )

  const toolCall = pendingNode.message.parts
    .find(p => p.type === 'dynamic-tool' && p.state === 'input-available')

  // First argument is the Ably message ID of the node containing the tool invocation
  await view.update(pendingNode.id, [{
    type: 'tool-output-available',
    toolCallId: toolCall.toolCallId,
    output: { lat: position.coords.latitude, lng: position.coords.longitude },
  }])
}

Calling view.update() submits the tool result to the server and triggers a continuation turn. The server receives the result, includes it in the conversation history, and the LLM generates a response that incorporates the tool output.
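
Conceptually, the server folds the submitted result into the conversation history it passes to the next LLM call. The sketch below illustrates that step only; `appendToolResult` is a hypothetical helper, not an SDK API, and the model-message shape is an assumption modeled on the AI SDK's tool-result format.

```javascript
// Hypothetical sketch: fold a client-submitted tool result into the stored
// conversation history before starting the continuation turn. Assumes the
// server keeps history as an array of AI SDK-style model messages.
function appendToolResult(history, { toolCallId, toolName, output }) {
  return [
    ...history,
    {
      role: 'tool',
      content: [{ type: 'tool-result', toolCallId, toolName, output }],
    },
  ]
}
```

The continuation turn would then pass the extended history to the next streamText call, so the LLM sees the tool output alongside the earlier messages.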

Cross-turn events with EventsNode

The addEvents() method delivers events to an existing assistant message. A common use is delivering tool results to a message that contains a pending tool call. You target the message by its Ably message ID:

JavaScript

// Deliver a tool result to an existing assistant message
const assistantMsgId = pendingNode.id
const toolCallId = pendingToolCall.toolCallId

await turn.addEvents(assistantMsgId, [
  {
    type: 'tool-output-available',
    toolCallId,
    output: { temperature: 22, conditions: 'sunny' },
  },
])

Events published through addEvents() update the target message in the view. Because they target an existing message by its ID, late joiners and reconnecting clients see the correct state when the conversation is replayed from history.
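
The update such an event applies can be pictured as a pure state transition on the target message's parts. This is an illustrative sketch, not the SDK's implementation; `applyToolOutput` is a hypothetical helper, and the part shapes mirror the 'dynamic-tool' states used in the snippets above.

```javascript
// Sketch: fold a 'tool-output-available' event into a message's parts,
// moving the matching tool part from 'input-available' to
// 'output-available' and attaching the output.
function applyToolOutput(message, event) {
  return {
    ...message,
    parts: message.parts.map(p =>
      p.type === 'dynamic-tool' && p.toolCallId === event.toolCallId
        ? { ...p, state: 'output-available', output: event.output }
        : p
    ),
  }
}
```

Because the transition is keyed on the message ID and toolCallId rather than on stream position, it produces the same result whether the event arrives live or during history replay.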

History persistence

Tool invocations and results are part of the channel's message history. When a client reconnects or a late joiner loads the conversation, tool activity is replayed along with text messages. The view reconstructs tool state so the UI shows the correct tool status - whether a tool is pending, complete, or failed.
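
One way to picture that reconstruction: derive a display status for each tool invocation from the replayed message parts. `toolStatuses` below is a hypothetical helper, assuming the 'dynamic-tool' part states used in the snippets above ('input-available' for pending calls, 'output-available' for completed ones, 'output-error' for failures).

```javascript
// Derive a display status for each tool invocation in a message's parts.
// 'input-streaming' / 'input-available' -> pending,
// 'output-available' -> complete, 'output-error' -> failed.
function toolStatuses(parts = []) {
  return parts
    .filter(p => p.type === 'dynamic-tool')
    .map(p => ({
      toolName: p.toolName,
      toolCallId: p.toolCallId,
      status:
        p.state === 'output-available' ? 'complete'
        : p.state === 'output-error' ? 'failed'
        : 'pending',
    }))
}
```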

This means tool-heavy conversations work correctly across disconnections and device switches. A user who starts a tool-assisted workflow on their laptop can continue it on their phone without losing context.