Ably AI Transport integrates with the Vercel AI SDK to add durable sessions, multi-device sync, and bidirectional control to your chat application. This guide explains what each SDK does, how they connect, and what capabilities the combination unlocks.
Ready to build? Get started with Vercel AI SDK.
Understand the Vercel AI SDK
The Vercel AI SDK is a toolkit for building AI-powered applications. It handles model interaction, streaming, and UI state management. The following concepts are the ones you need to understand for the AI Transport integration.
The provider system
The AI SDK abstracts model providers behind a unified interface. You call streamText() with any provider (Anthropic, OpenAI, Google) and get the same API. Switching models is a one-line change. The provider handles the specifics of each model's API, authentication, and capabilities.
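The idea can be sketched without any dependencies. The stand-in types and factory names below (openaiStub, anthropicStub) are illustrative only; the real provider factories and model types come from the AI SDK's provider packages.

```typescript
// A dependency-free sketch of the unified provider interface. These
// stand-ins only illustrate why swapping providers is a one-line change
// at the call site.
interface LanguageModel {
  provider: string;
  modelId: string;
  generate(prompt: string): string;
}

// Hypothetical provider factories standing in for the real ones.
const openaiStub = (modelId: string): LanguageModel => ({
  provider: 'openai',
  modelId,
  generate: (prompt) => `openai/${modelId}: ${prompt}`,
});

const anthropicStub = (modelId: string): LanguageModel => ({
  provider: 'anthropic',
  modelId,
  generate: (prompt) => `anthropic/${modelId}: ${prompt}`,
});

// The call site never changes; only the model argument does.
function answer(model: LanguageModel, prompt: string): string {
  return model.generate(prompt);
}
```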
streamText
streamText() is the core function for streaming AI responses. You pass it a model, a system prompt, and the conversation messages. It calls the model and returns a stream of events as the model generates its response token by token. The stream includes text deltas, tool call inputs, tool results, reasoning content, and lifecycle events (start, finish, error). On the server, streamText() is where the AI model interaction happens.
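The shape of that stream can be modeled in a few lines. The event names below mirror the lifecycle events described above, but the generator itself is an illustrative stand-in, not the SDK's API:

```typescript
// A dependency-free model of the event stream streamText() returns:
// a start event, one text-delta per token, then a finish event.
type StreamEvent =
  | { type: 'start' }
  | { type: 'text-delta'; delta: string }
  | { type: 'finish' };

async function* modelStream(tokens: string[]): AsyncGenerator<StreamEvent> {
  yield { type: 'start' };
  for (const token of tokens) {
    yield { type: 'text-delta', delta: token }; // one event per token
  }
  yield { type: 'finish' };
}

// Consumers accumulate deltas as they arrive, before the stream finishes.
async function collectText(stream: AsyncGenerator<StreamEvent>): Promise<string> {
  let text = '';
  for await (const event of stream) {
    if (event.type === 'text-delta') text += event.delta;
  }
  return text;
}
```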
UIMessage and parts
UIMessage is Vercel's message model. Each message has a role (user, assistant, system) and an array of parts. Parts represent different types of content within a single message: text, reasoning, tool calls, tool results, files, and sources. Each part tracks its own streaming state (streaming or done), so the UI can show partial content as it arrives.
A UIMessageChunk is one streaming event that contributes to a UIMessage. As the model generates a response, it emits a series of chunks (text-start, text-delta, text-end, tool-input-start, finish, and so on). The client accumulates these chunks into complete UIMessage objects with fully populated parts.
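The accumulation step can be sketched with simplified stand-in types (the real UIMessageChunk and part types live in the AI SDK; only the text chunk family is modeled here):

```typescript
// Simplified stand-ins for the SDK's chunk and part shapes.
type Chunk =
  | { type: 'text-start'; id: string }
  | { type: 'text-delta'; id: string; delta: string }
  | { type: 'text-end'; id: string }
  | { type: 'finish' };

type TextPart = { type: 'text'; text: string; state: 'streaming' | 'done' };

// Fold a sequence of chunks into the parts of one assistant message.
function accumulate(chunks: Chunk[]): TextPart[] {
  const parts = new Map<string, TextPart>();
  for (const chunk of chunks) {
    switch (chunk.type) {
      case 'text-start':
        parts.set(chunk.id, { type: 'text', text: '', state: 'streaming' });
        break;
      case 'text-delta':
        parts.get(chunk.id)!.text += chunk.delta;
        break;
      case 'text-end':
        parts.get(chunk.id)!.state = 'done';
        break;
    }
  }
  return [...parts.values()];
}

const result = accumulate([
  { type: 'text-start', id: 't1' },
  { type: 'text-delta', id: 't1', delta: 'Hello, ' },
  { type: 'text-delta', id: 't1', delta: 'world' },
  { type: 'text-end', id: 't1' },
  { type: 'finish' },
]);
// result[0] is { type: 'text', text: 'Hello, world', state: 'done' }
```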
useChat
useChat is the main React hook for building chat UIs. It manages the array of UIMessage objects, sends messages to the server via a transport, handles streaming updates as chunks arrive, and tracks the conversation status (submitted, streaming, ready, error). It provides helpers for common operations: sending messages, regenerating responses, stopping a stream, and submitting tool results.
ChatTransport
ChatTransport is the interface that useChat calls to send and receive messages. It defines two methods: sendMessages (submit messages and receive a stream of chunks back) and reconnectToStream (resume an interrupted stream).
The default implementation sends an HTTP POST to your server endpoint and reads back a server-sent events (SSE) stream. This is where AI Transport plugs in: it provides an alternative ChatTransport implementation that routes messages through an Ably channel instead of a direct HTTP stream.
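A minimal sketch of the contract, with simplified parameter types standing in for the SDK's real definitions, plus a toy transport to show where a custom implementation does its work:

```typescript
// A dependency-free sketch of the ChatTransport contract.
interface ChatTransport<CHUNK> {
  // Submit the conversation so far; receive the response as a chunk stream.
  sendMessages(options: { chatId: string; messages: unknown[] }): Promise<ReadableStream<CHUNK>>;
  // Resume a stream that was interrupted, e.g. after a page refresh.
  reconnectToStream(options: { chatId: string }): Promise<ReadableStream<CHUNK> | null>;
}

// A toy transport that echoes the last user message back as one chunk.
const echoTransport: ChatTransport<string> = {
  async sendMessages({ messages }) {
    const last = String(messages[messages.length - 1]);
    return new ReadableStream<string>({
      start(controller) {
        controller.enqueue(`echo: ${last}`);
        controller.close();
      },
    });
  },
  async reconnectToStream() {
    return null; // nothing to resume in this toy
  },
};
```

AI Transport implements the same two methods, but routes the chunk stream over an Ably channel instead of building it from an HTTP response.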
Tool calling
Models can invoke tools that you define with a schema and an execute function. The model decides when to call a tool and generates the input parameters. The SDK executes the tool and feeds the result back to the model, which can then continue generating its response. Tool calls can require human approval before execution, creating approval gates in the conversation.
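The approval-gate idea can be sketched without the SDK. The tool names and the runTool helper below are hypothetical, only illustrating the control flow: a gated tool stays pending until a human approves it, while an ungated tool executes immediately.

```typescript
// A dependency-free sketch of the tool-calling loop with approval gates.
type ToolCall = { toolName: string; input: Record<string, unknown> };

const tools: Record<string, { needsApproval: boolean; execute: (input: any) => string }> = {
  getWeather: {
    needsApproval: false,
    execute: ({ city }) => `Sunny in ${city}`,
  },
  deleteFile: {
    needsApproval: true, // approval gate: a human must confirm first
    execute: ({ path }) => `Deleted ${path}`,
  },
};

function runTool(call: ToolCall, approved: boolean): string {
  const tool = tools[call.toolName];
  if (tool.needsApproval && !approved) return 'pending-approval';
  return tool.execute(call.input);
}
```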
Understand the default transport and its limitations
Without AI Transport, the message flow in a Vercel AI SDK application works as follows:
- useChat uses the default transport, which sends an HTTP POST to your server endpoint.
- The server calls streamText(), which returns a stream of UIMessageChunk events.
- The server converts this to an SSE response using createUIMessageStreamResponse().
- The client reads the SSE stream and reassembles the chunks into full UIMessage objects.
This is a direct, point-to-point HTTP connection. The stream is coupled to the connection. This works for simple interactions, but creates limitations in production:
- Streams die on disconnection. When a phone switches from Wi-Fi to cellular, a user refreshes the page, or a laptop lid closes mid-response, the stream fails. The model continues generating tokens, but there is no way to deliver them.
- Sessions do not span devices. The connection is exclusively between the requesting client and the server. A second tab or a phone cannot access the same stream.
- Clients cannot signal the agent. SSE is one-way: server to client. The only way for the client to communicate is to close the connection, which kills the stream. Cancel and resume are mutually exclusive.
- No persistence beyond the connection. When the connection ends, the stream is gone. There is no way to replay what happened or resume from where it left off.
Understand what AI Transport adds
AI Transport implements the ChatTransport interface and is a drop-in replacement for the default HTTP transport. You use it with useChat without changing your application code:
```typescript
// Before: default HTTP transport
const { messages } = useChat()

// After: Ably transport
const transport = useClientTransport({ channelName: chatId })
const chatTransport = useChatTransport(transport)
const { messages } = useChat({ transport: chatTransport })
```

Instead of SSE streaming between client and server, tokens flow through an Ably channel. The HTTP request triggers the server, but the response is decoupled from it.
The integration has four parts:
- useChatTransport uses a UIMessageCodec under the hood to encode Vercel's UIMessageChunk events to Ably messages. Every chunk type (text-delta, tool-input, finish, and others) maps to an Ably message with headers to track the metadata. The codec handles encoding on the server, decoding on the client, and reassembling the chunks into complete UIMessage objects.
- useChatTransport is a wrapper that converts the Ably Core SDK ClientTransport object into the ChatTransport interface for use with useChat.
- useMessageSync subscribes to the transport's conversation tree and pushes updates into useChat's setMessages. This keeps Vercel's local state in sync with the authoritative state on the channel. This is required for features like multi-device sync and conversation branching, and is how the Ably SDK brings those features to Vercel's useChat without useChat natively supporting them.
- On the server, turn.streamResponse(result.toUIMessageStream()) pipes the model's output through the codec encoder to the Ably channel. The HTTP response returns immediately (status 200, empty body). The tokens are delivered to all connected clients through the channel, not through the HTTP response.
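The server-side decoupling can be modeled without any Ably code. The channel type and handleRequest function below are illustrative stand-ins, not AI Transport's API; they only show how the handler returns before the stream finishes while chunks reach subscribers through a channel:

```typescript
// A dependency-free model of decoupling the response from the HTTP request.
type Channel<T> = { subscribers: Array<(msg: T) => void>; publish(msg: T): void };

function createChannel<T>(): Channel<T> {
  const subscribers: Array<(msg: T) => void> = [];
  return {
    subscribers,
    publish(msg) {
      for (const fn of subscribers) fn(msg);
    },
  };
}

// Kick off publishing in the background; do not await the stream.
async function handleRequest(chunks: AsyncIterable<string>, channel: Channel<string>) {
  void (async () => {
    for await (const chunk of chunks) channel.publish(chunk);
  })();
  return { status: 200, body: '' }; // responds before the stream finishes
}
```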
See how they fit together
The architecture stacks four layers:
- Vercel AI SDK provides useChat(), streamText(), tool calls, and UI state management.
- The ChatTransport interface is the plug-in point that Vercel designed for custom transports.
- AI Transport implements ChatTransport and adds sessions, presence, recovery, and control.
- Ably infrastructure provides the global edge network, ordering, and persistence.
Vercel AI SDK provides:
- Model orchestration (streamText, providers)
- UI state management (useChat, message arrays, status tracking)
- Tool calls and structured output
- The ChatTransport interface as the extension point
Ably AI Transport provides:
- Durable sessions on Ably channels
- Multi-device fan-out
- Reconnection and recovery
- Active turn tracking
- Bidirectional control (cancel, steer, interrupt)
- Ordering and persistence
- History and replay
- Token compaction
Choose an integration path
Both paths use the same server code. The difference is client-side only.
Use the Vercel useChat path
The simplest path. useChatTransport wraps the core transport for direct use with Vercel's useChat hook. useMessageSync pushes other clients' messages into useChat state. You get Vercel's message management with AI Transport's durable delivery.
Use this path when you want the standard Vercel useChat developer experience with durable sessions added. The Vercel AI SDK getting started guide follows this path.
Use the Core SDK path
Use AI Transport's React hooks (useView, useSend, useRegenerate, useEdit) directly instead of useChat. This gives you full access to the conversation tree, branch navigation, split-pane views, and custom message construction.
Use this path when you need branching UI, custom message rendering, or direct control over the conversation tree. The Core SDK getting started guide follows this path.
Discover what this unlocks
With AI Transport, your Vercel AI SDK application gains capabilities that are not possible with the default HTTP transport:
- Streams survive disconnection. The client reconnects and resumes from where it left off. The agent continues publishing regardless of client connectivity.
- Multi-device sync. The same conversation is accessible on phone, laptop, and tablet, all in realtime. Any device that subscribes to the session sees every message.
- Conversation branching. Edit and regenerate create forks in a conversation tree, not destructive replacements. The full history of every branch is preserved.
- Bidirectional control. Cancel, interrupt, and steer agents mid-stream through the same session. No separate control channel required.
- Approval gates reach the user on any device, even after reconnecting. A pending tool approval persists on the session until someone acts on it.
- Concurrent turns with independent cancel handles. Multiple requests can stream simultaneously on the same session.
- Agent presence. Real-time visibility into agent status: thinking, streaming, idle, or offline.
- Push notifications for completed background tasks. Reach users who have left the app.
- History and replay. Load the full conversation on reconnect, page refresh, or new device join.
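The branching behavior above can be sketched as a tree in a few lines. The types and method names here are illustrative, not AI Transport's API:

```typescript
// A dependency-free sketch of a conversation tree: edit and regenerate add
// sibling branches under the same parent instead of overwriting history.
type MessageNode = { id: string; parentId: string | null; text: string };

class ConversationTree {
  private nodes = new Map<string, MessageNode>();

  add(node: MessageNode): void {
    this.nodes.set(node.id, node);
  }

  // Every alternative that was forked from the same parent message.
  branchesOf(parentId: string): MessageNode[] {
    return [...this.nodes.values()].filter((n) => n.parentId === parentId);
  }
}

const tree = new ConversationTree();
tree.add({ id: 'u1', parentId: null, text: 'Summarize this doc' });
tree.add({ id: 'a1', parentId: 'u1', text: 'First answer' });
// Regenerating does not replace a1; it adds a sibling branch.
tree.add({ id: 'a2', parentId: 'u1', text: 'Second answer' });
// tree.branchesOf('u1') now holds both a1 and a2.
```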
Read next
- Get started with Vercel AI SDK: build your first AI Transport application.
- Sessions: understand durable sessions and how the conversation persists.
- Features: explore the full set of AI Transport features.
- Vercel integration API reference: the full API for the Vercel integration.