AI support chatbot pricing example

This example uses consumption-based pricing for an AI support chatbot use case, where a single agent publishes tokens to a user over AI Transport.


Assumptions

The scale and features used in this calculation.

| Scale | Features |
| --- | --- |
| 4 user prompts to get to resolution | ✓ Message-per-response |
| 300 token events per LLM response | |
| 75 appends per second from agent | |
| 3 minute average chat duration | |
| 1 million chats | |

Cost summary

The high-level cost breakdown for this scenario is given in the table below. Messages are billed for both inbound (published to Ably) and outbound (delivered to subscribers). Enabling the "Message updates, deletes and appends" channel rule automatically enables message persistence.

| Item | Calculation | Cost |
| --- | --- | --- |
| Messages | 1212M × $2.50/M | $3030 |
| Connection minutes | 6M × $1.00/M | $6 |
| Channel minutes | 3M × $1.00/M | $3 |
| Package fee | See plans | |
| Total | | ~$3039 per 1M chats |
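
As a quick sanity check, the summary arithmetic can be reproduced with a short script. The sketch below is illustrative only: the consumption rates ($2.50 per million messages, $1.00 per million connection or channel minutes) and the usage totals are taken from the table above, and the package fee is left out.

```typescript
// Illustrative cost-summary calculation for 1 million chats.
// Rates and usage totals are taken from the table above; package fee excluded.
const ratePerMillion = {
  messages: 2.5,          // $ per million messages
  connectionMinutes: 1.0, // $ per million connection minutes
  channelMinutes: 1.0,    // $ per million channel minutes
};

const usageMillions = {
  messages: 1212,       // inbound + outbound (see breakdown below)
  connectionMinutes: 6, // as given in the cost summary table
  channelMinutes: 3,    // 1M chats × 3 minute average duration
};

let total = 0;
for (const item of Object.keys(ratePerMillion) as (keyof typeof ratePerMillion)[]) {
  const cost = usageMillions[item] * ratePerMillion[item];
  total += cost;
  console.log(`${item}: $${cost}`);
}
console.log(`Total: ~$${total} per 1M chats`); // ~$3039
```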

Message usage breakdown

Several factors influence the total message usage. The message-per-response pattern includes automatic rollup of append events to reduce consumption costs and avoid rate limits.

  • Agent stream time: 300 token events ÷ 75 appends per second = 4 seconds of streaming per response
  • Messages published after rollup: 4 seconds × 25 messages/s = 100 messages per response
| Type | Calculation | Inbound | Outbound | Total messages | Cost |
| --- | --- | --- | --- | --- | --- |
| User prompts | 1M chats × 4 prompts | 4M | 4M | 8M | $20 |
| Agent responses | 1M chats × 4 responses × 100 messages per response | 400M | 400M | 800M | $2000 |
| Persisted messages | Every inbound message is persisted | 404M | 0 | 404M | $1010 |
| Total | | 808M | 404M | 1212M | $3030 |
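
The breakdown above can also be reproduced end to end from the assumptions. The sketch below is illustrative: the 25 messages/s rollup rate and the $2.50 per million message rate come from this example, and the variable names are not part of any API.

```typescript
// Illustrative message-usage calculation from the assumptions in this example.
const chats = 1_000_000;
const promptsPerChat = 4;            // user prompts to reach resolution
const tokenEventsPerResponse = 300;  // token events per LLM response
const appendsPerSecond = 75;         // appends per second from the agent
const rollupMessagesPerSecond = 25;  // publish rate after append rollup
const ratePerMillionMessages = 2.5;  // $ per million messages

// Rollup: 300 token events ÷ 75 appends/s = 4 s of streaming per response,
// published as 4 s × 25 messages/s = 100 messages per response.
const streamSeconds = tokenEventsPerResponse / appendsPerSecond;        // 4
const messagesPerResponse = streamSeconds * rollupMessagesPerSecond;    // 100

const promptMessages = chats * promptsPerChat;                          // 4M published prompts
const responseMessages = chats * promptsPerChat * messagesPerResponse;  // 400M published agent messages
const persisted = promptMessages + responseMessages;                    // 404M, every inbound message is persisted

const inbound = promptMessages + responseMessages + persisted;          // 808M
const outbound = promptMessages + responseMessages;                     // 404M delivered to users
const totalMessages = inbound + outbound;                               // 1212M

const cost = (totalMessages / 1_000_000) * ratePerMillionMessages;      // $3030
console.log({ messagesPerResponse, inbound, outbound, totalMessages, cost });
```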