We are excited to announce the release of the Ably Kafka Connector 3.0. Version 3 brings a host of improvements, including:
- Enhanced throughput and scalability through Dynamic Channel Configuration and REST support.
- Accreditation as a Confluent Cloud Custom Connector, freeing developers from the operational burden of provisioning and perpetually managing low-level connector infrastructure.
- Improved error handling and troubleshooting through dead letter queue (DLQ) support.
Overall, the Ably Kafka Connector 3.0 makes extending Kafka pipelines to millions of web and mobile users simpler and more reliable.
Distribute Kafka messages to large, rapidly changing numbers of users over the Internet
Kafka is the first port of call for streaming and processing huge volumes of mission-critical, time-sensitive event data within the security of a company’s firewall. But it wasn't designed for distributing events from internal systems to consumers on the public internet. Building and maintaining an internet-facing messaging layer capable of streaming realtime event data from Kafka topics to web, mobile, and IoT clients is complicated and labor-intensive, and can take time and focus away from core engineering work.
Ably enables the distribution of data from Kafka pipelines to millions of web and mobile devices in a granular way, with the same guarantees and capabilities as Kafka. The Ably Kafka Connector provides a ready-made integration between Kafka and Ably, helping companies distribute data from Kafka to internet-connected client devices in a fast, easy, dependable, and secure way.
Dynamic Channel Configuration
Ably Channels (the equivalent of Kafka topics) are extremely lightweight, and it's idiomatic to use large numbers of distinct channels to send messages between users. In many use cases, it makes sense to create an Ably channel per user or per session, meaning there could be millions in total.
Contrast this with a typical Kafka deployment, where records relating to all users of a common type are more likely to flow through a single topic. Kafka is not designed for fine-grained data distribution, and it has no mechanism to ensure that when a client device connects over the internet, it receives only the messages relevant to that user or device.
Ably’s channels are optimized for cross-network communication and allow for flexible routing of messages from Kafka topics, ensuring that clients connecting over the Internet only subscribe to relevant information. What’s more, Ably can quickly scale horizontally to handle millions of concurrent clients.
To enable "fan-out" to high numbers of channels from your Kafka topic, the Ably Kafka Connector supports Dynamic Channel Configuration, whereby you can configure a template string to substitute any data from incoming Kafka records into the outgoing Ably Channel name. The same functionality is also supported for the Message name field, if required.
Imagine a chat application where data is published to the topic chat, and a message looks like this:
{
  "userid": "andra",
  "message": "Hi there!",
  "chatroomid": "introductions"
}
You may want to route data flowing through the thick pipe of a Kafka topic to a specific chat room, so that it is automatically delivered only to users subscribed to that room. The connector's publish rule could then be:
#{value.chatroomid}
Or you can add further details, depending on your architecture. For example, you can specify the message name of the outgoing Ably message for the connector to use:
message.name = #{value.userid}
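Conceptually, the connector resolves each #{...} placeholder against the incoming Kafka record. The sketch below illustrates that substitution in Python; it is a simplified illustration, not the connector's actual implementation, and the resolve_template helper is hypothetical:

```python
import re

def resolve_template(template: str, topic: str, key: str, value: dict) -> str:
    """Substitute #{...} placeholders with data from a Kafka record.

    Supports #{topic}, #{key}, and #{value.<field>} lookups, mirroring the
    kind of templating described above (simplified for illustration).
    """
    def replace(match: re.Match) -> str:
        path = match.group(1)
        if path == "topic":
            return topic
        if path == "key":
            return key
        if path.startswith("value."):
            # Look up the named field in the record's value
            return str(value[path[len("value."):]])
        raise ValueError(f"Unknown placeholder: {path}")

    return re.sub(r"#\{([^}]+)\}", replace, template)

record_value = {"userid": "andra", "message": "Hi there!", "chatroomid": "introductions"}

# The channel rule #{value.chatroomid} resolves to the chat room name.
print(resolve_template("#{value.chatroomid}", "chat", "key-1", record_value))  # introductions

# The message name rule #{value.userid} resolves to the sender.
print(resolve_template("#{value.userid}", "chat", "key-1", record_value))  # andra
```

Because the template is resolved per record, messages from the same Kafka topic fan out to different Ably channels depending on their payload.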
In addition to Dynamic Channel Configuration, the Ably Kafka Connector 3.0 now uses REST to publish records to Ably, publishing in parallel via a thread pool for improved throughput per sink task. All configuration options that related only to Realtime WebSocket connections have been removed and can safely be dropped from config files, as they no longer have any effect. Read more in our documentation.
Confluent Cloud Custom Connectors
The Ably Kafka Connector has always been available on the Confluent Hub as a verified Gold connector. With v3.0, it is now also certified as a Confluent Cloud Custom Connector.
Using Ably as a plugin on Confluent Cloud means that your team doesn’t need to manage Connect infrastructure, and you can:
- Quickly connect Ably to your Kafka without code changes.
- Ensure high availability and performance using logs and metrics to monitor the health of your connectors and workers.
- Eliminate the operational burden of provisioning and perpetually managing low-level connector infrastructure.
For instructions on how to integrate Ably with Confluent Cloud using Custom Connectors, read our documentation.
The Ably Kafka Connector is now also fully supported for MSK, with example deployments available.
Dead Letter Queue (DLQ) support
With version 3.0, the Ably Kafka Connector can now forward records to a dead letter queue topic using Kafka Connect's dead letter queue support.
If something fails while sending data from Kafka to Ably (for example, the connector attempts to publish to an Ably channel it doesn't have permission to publish to, the data can't be mapped, your credentials have expired, or a record is oversized and Ably rejects it), the failed message is sent to the dead letter queue with an appropriate error message, allowing developers to debug and fix issues before a system goes down or data is missed.
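The dead letter queue is enabled with Kafka Connect's standard error-handling properties on the sink connector configuration; the topic name and replication factor below are illustrative:

```properties
# Keep the task running on record-level failures instead of stopping it
errors.tolerance = all
# Topic to which failed records are forwarded (name is an example)
errors.deadletterqueue.topic.name = ably-dlq
# Attach the failure reason and stack trace as record headers for debugging
errors.deadletterqueue.context.headers.enable = true
# Replication factor for the DLQ topic (typically 3 in production clusters)
errors.deadletterqueue.topic.replication.factor = 1
```

A consumer on the DLQ topic can then inspect the error headers on each failed record to diagnose the cause.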
Get started with the Ably Kafka Connector
Complete instructions on using the connector are available in the documentation. The connector is available under the Apache 2.0 open-source license, and we plan to keep extending and improving it, so we welcome feedback and feature requests. Please raise an issue or open a pull request if you would like to contribute. You can also contact us at any time.
A series of demos and 'How to' guides are also available: