Comparing chat API pricing: Decoding pricing and finding the model that fits your needs
Pricing is critical when deciding which chat API you will use - yet the options can feel limited. Whether you are gradually scaling a chat app or anticipating large, sudden spikes in traffic, the pricing model can make or break your budget depending on your usage - and most vendors will expect you to accept one of the one or two industry standards.
Chat API providers fall into a handful of pricing model categories. In this article we'll explain and compare them, and conclude which is best suited to each use case.
Chat APIs: common pricing models
Chat API pricing models are designed to align with different usage patterns - like a steady user base and usage, or periodic spikes - but they also introduce trade-offs depending on an application's scale and messaging demands. These models are generally categorized as forms of consumption-based pricing, where costs are tied to how the service is used. Let's look at the most common pricing models in use today:
Monthly Active Users (MAU)
The Monthly Active Users (MAU) model is one of the most widely used pricing models in the industry. Providers like CometChat, Sendbird, Twilio, and Stream charge based on the number of unique active users per month.
You pay for each user who interacts with the chat API within a given month, regardless of the number of messages they send or receive. While this can simplify billing, it comes with the tradeoff of assuming the "typical usage" of a monthly active user. For example, an individual MAU may use far less connection time or send far fewer messages than is assumed for an average MAU. Simply put, this method is not granular.
This model is predictable for applications with small and steady user bases, since, if you’re not expecting much user volatility, it’s easy to estimate costs. But any volatility in workloads, like experiencing a brief viral period and dipping back down, can result in overpaying for peak costs (peak MAUs) in a monthly period.
For chat services operating at scale, the monthly amount spent on peak MAUs often grossly exceeds the bill for actual usage; it wastes allocated resources and money.
There is a pricing model designed to tackle these pricing issues at scale, however - and we use it at Ably.
Per-minute consumption
A per-minute consumption model goes beyond traditional consumption-based pricing by billing customers based on their actual usage of service resources: connection time, channels, and messages. This approach directly addresses the inefficiencies inherent in MAU pricing models. It isn't a common model in the industry, but we've adopted it here at Ably to meet the usage needs of our customers at scale.
Per-minute consumption measures actual usage in fine-grained units, such as:
Connection minutes: The total time devices are connected.
Channel minutes: The time channels remain active.
Message events: Each message sent or received by users.
By tracking usage at this granular level, it ensures customers only pay for what they consume, without overpaying for resources they don’t use. Traffic spikes don’t necessarily lead to hugely increased costs either - the pricing is distributed across these dimensions, smoothing the overall impact. For example, livestreaming events, which may have a huge number of messages at their peak but a low number of channels, would see a more modest increase in cost than if they were billed by user count. Instead of penalizing a single metric, this approach provides greater predictability and reflects resource utilization more holistically.
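As a rough illustration, a bill under this model is just a weighted sum of the three usage dimensions. The unit rates below are made-up placeholders for the sketch, not Ably's actual prices:

```python
# Hypothetical per-minute consumption bill. The unit rates are
# illustrative placeholders, NOT real vendor pricing.
RATE_PER_CONNECTION_MIN = 0.0000025  # $ per connection minute (assumed)
RATE_PER_CHANNEL_MIN = 0.0000025     # $ per channel minute (assumed)
RATE_PER_MESSAGE = 0.0000025         # $ per message event (assumed)

def consumption_bill(connection_mins: int, channel_mins: int, messages: int) -> float:
    """Sum the three usage dimensions into a single bill."""
    return (connection_mins * RATE_PER_CONNECTION_MIN
            + channel_mins * RATE_PER_CHANNEL_MIN
            + messages * RATE_PER_MESSAGE)

# A quiet month costs exactly what was used - there is no
# per-user "typical usage" assumption baked into the price.
quiet_month = consumption_bill(100_000, 5_000, 250_000)
```

Because each dimension is billed independently, a spike in one (say, messages during a live event) only moves that one term of the sum.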
Per-minute consumption also incentivizes resource optimization, such as reducing idle connections or batching messages, which can further mitigate cost surges during spikes. (Batching comes in handy when many-to-many chat interactions lead to a rapid multiplication of delivered messages; we're implementing this soon at Ably on the server side.)
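To see why batching matters in many-to-many rooms, here's a minimal sketch (hypothetical, not Ably's actual implementation): each published message normally fans out to every room member, so grouping publishes into batches divides the number of delivery events accordingly.

```python
import math

def deliveries(published: int, recipients: int, batch_size: int = 1) -> int:
    """Delivery events after grouping publishes into batches.

    Without batching (batch_size=1), every publish fans out to
    every recipient; batching collapses several publishes into
    one delivery event per recipient.
    """
    return math.ceil(published / batch_size) * recipients

# 100 messages published in a 1,000-member room:
unbatched = deliveries(100, 1_000)               # one delivery per publish per member
batched = deliveries(100, 1_000, batch_size=10)  # ten publishes per delivery
```

With a batch size of 10, the same conversation produces a tenth of the delivery events, which under per-minute consumption pricing directly shrinks the message term of the bill.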
Popular pricing models compared
Deciding whether an MAU or a per-minute consumption pricing model works for you depends on your use case - but if you are looking to scale a chat application to any considerable degree, as a general rule, per-minute consumption will be the better option.
MAU pricing assumes a “typical user” for billing purposes. This involves bundling resources such as connection time, message throughput, and storage into a fixed monthly fee per active user, which doesn’t accurately reflect the actual usage of the user.
Now imagine a customer operating a live event platform. They’re running a live event for two hours in the month that peaks at 50,000 users. What would the monthly prices look like between an MAU model and a per-minute-consumption model?
Let's say the MAU model assumes that each "average" user will send 1,000 messages per month. Although the bill is based on user count alone, the cost per user builds in an assumption about how much each user will consume (in this case, a total of 50 million messages across 50,000 users). The MAU model then bills the whole month at the peak of 50,000 users.
With per-minute consumption, costs reflect the actual connection time and messages used - we’ll estimate generously:
Connection time: 50,000 users × 240 minutes (accounting for pre- and post-event activity) = 12 million connection minutes.
Message volume: 50,000 users sending an average of 200 messages = 10 million messages.
Channels and channel time: let's say 5 channels × 240 minutes = 1,200 channel minutes.
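The live-event comparison above can be sketched in a few lines. The rates here are hypothetical placeholders chosen purely for illustration; only the usage figures come from the example:

```python
# All rates are assumed, illustrative values - not real vendor prices.
MAU_RATE = 0.02                      # $ per monthly active user (assumed)
RATE_PER_CONNECTION_MIN = 0.0000025  # assumed
RATE_PER_CHANNEL_MIN = 0.0000025     # assumed
RATE_PER_MESSAGE = 0.0000025         # assumed

peak_users = 50_000

# MAU model: the whole month is billed at the peak user count.
mau_bill = peak_users * MAU_RATE

# Per-minute consumption: bill only the usage from the event itself.
connection_mins = peak_users * 240  # 12 million connection minutes
messages = peak_users * 200         # 10 million messages
channel_mins = 5 * 240              # 1,200 channel minutes

consumption_bill = (connection_mins * RATE_PER_CONNECTION_MIN
                    + messages * RATE_PER_MESSAGE
                    + channel_mins * RATE_PER_CHANNEL_MIN)

print(f"MAU: ${mau_bill:,.2f}  per-minute consumption: ${consumption_bill:,.2f}")
```

Whatever the real rates, the structural point holds: the MAU bill scales with the peak head count for the entire month, while the consumption bill scales only with the two hours of actual activity.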
Even without specific prices to hand, we can see that billing for a "typical user" is inefficient in this scenario. Per-minute billing is designed for fairness and transparency in highly volatile traffic situations like these (Ably's CEO, Matt O'Riordan, discusses pricing model issues further in his blog post).
What does this mean in practice, and what model is best for your use case and traffic patterns? This table breaks it down:
| Model | Examples | Best suited to | Challenges |
| --- | --- | --- | --- |
| Monthly Active Users (MAU) | Stream, Sendbird, Twilio | Apps with steady or low user activity | Paying for peak costs during volatile usage periods |
| Per-minute consumption | Ably | Apps with scalable, high-volume messaging | Requires tracking of usage metrics |
Ably’s per-minute consumption model
If the per-minute consumption model we discussed above sounds promising to you, here’s some more information on how this works specifically with Ably.
At Ably, we’ve developed a pricing model designed to align more closely with the needs of realtime chat applications. Unlike traditional MAU or throughput-based models, Ably offers per-minute pricing that scales predictably and transparently with your application.
Here’s how Ably stands out:
Flexibility: Pay only for what you use, with no penalties for growing user bases or unexpected spikes in message throughput.
Scalability: Ably’s infrastructure supports billions of messages daily, with costs optimized for applications of any scale.
Transparency: Ably’s pricing eliminates the hidden costs often associated with rigid MAU or throughput models, giving you full visibility into your expenses.
Ably's platform is built on a globally distributed infrastructure designed for high-performance, scalable, and dependable messaging. With support for exactly-once delivery, message ordering, and <50ms global average latency, Ably ensures a seamless chat experience for users anywhere in the world.
Our Chat SDK, in private beta, offers fully fledged chat features: chat rooms at any scale, typing indicators, read receipts, presence tracking, and more. And of course, our per-minute pricing keeps your consumption as cost-effective as possible.
Sign up for private beta today to try out Ably Chat.