# Custom Moderation
There may be situations where you have trained your own model, or want to apply proprietary logic running on your own infrastructure, to perform moderation.
Ably provides simple APIs that let your moderation logic keep harmful content out of your chat rooms.
## Before publish
Before publish moderation is where your moderation logic is invoked before the message is published to your chat room. This has the benefit of preventing harmful content from ever entering your chat room, at the cost of some latency in invoking your moderation logic as part of the publish path.
### Integration configuration

To fine-tune how Ably handles messages according to your use case, you can configure the behavior of a before publish rule using the following fields:
| Field | Description |
|---|---|
| Retry timeout | Maximum duration (in milliseconds) that an invocation of the rule may take, including any retries. |
| Max retries | Maximum number of retries after the first attempt at invoking the rule. |
| Failed action | The action to take if the rule fails to invoke: reject the request, or publish the message anyway. |
| Too many requests action | The action to take if your endpoint returns `429 Too Many Requests`, which may happen if your endpoint is overloaded: fail moderation, or retry. |
| Room filter (optional) | A regular expression to match specific chat rooms. |
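For example (an illustrative value, not part of any documented configuration), a room filter of `^support:.*` would restrict the rule to rooms whose names begin with `support:`, leaving all other rooms unaffected by the rule.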
### The API

Ably provides a simple API for integrations to moderate chat messages. There are some nuances for particular transports, which are described on the individual transport pages.
#### Request format

The request body is JSON; the exact schema, including any transport-specific nuances, is described on the individual transport pages.
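Purely as a loose illustration — every field name below is an assumption, not the documented contract — the payload carries the candidate message together with its context:

```typescript
// Illustrative only: these field names are assumptions, not Ably's schema.
// Consult the transport-specific documentation for the real request format.
type ModerationRequest = {
  roomId: string;   // assumed: the chat room the message was published to
  clientId: string; // assumed: the client attempting the publish
  message: {
    text: string;   // assumed: the candidate message body
  };
};
```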
#### Response format

Your endpoint must respond with a JSON object containing the following fields:

- `action`: Must be either `accept` or `reject`. `accept` means that the message will be published to the chat room; `reject` means it will be rejected.
- `rejectionReason`: Optional. If provided with `action: "reject"`, it can contain any information about why the message was rejected. This information may be sent back to clients.
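To make the shape concrete, here is the same contract expressed as a TypeScript type with an example value (the type name `ModerationResponse` is ours, not part of Ably's API; only the JSON shape matters):

```typescript
// The response contract described above, expressed as a TypeScript type.
type ModerationResponse = {
  action: 'accept' | 'reject';
  rejectionReason?: string; // optional; only meaningful alongside action: 'reject'
};

// Example: rejecting a message, with a reason that may be relayed to clients.
const verdict: ModerationResponse = {
  action: 'reject',
  rejectionReason: 'message contains a blocked term',
};
```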
#### Error handling

If moderation was performed as expected, regardless of the outcome, your endpoint MUST return a status code of `200`. For other status codes, Ably will take the following action:
| Code | Description |
|---|---|
| 4xx (excluding 429) | Ably will not retry moderation. The message will be handled according to your rule configuration. |
| 429 | Ably will only retry if your rule configuration permits retries in the `429 Too Many Requests` case. |
| 5xx | Ably will retry moderation with backoff, until it either succeeds or the retry window is exceeded. |
If, by the end of the retry window, Ably has not been able to get a definitive moderation answer from your endpoint, what happens next depends on your rule configuration. If you have elected to publish the message anyway, Ably will do so; you can always remove harmful content after the fact using human moderators or community reporting schemes. Alternatively, if you have elected to reject the message, Ably will not publish it.
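Putting the response contract and status-code semantics together, a minimal endpoint might look like the following sketch. It assumes an Express server; `isHarmful` and `tooBusy` are hypothetical stand-ins for your own model and load check, and the request field path is an assumption rather than the documented schema.

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical stand-in for your own model or proprietary logic.
async function isHarmful(text: string): Promise<boolean> {
  return /\bbadword\b/i.test(text); // placeholder rule
}

// Hypothetical load check; shed load with 429 so Ably can retry if configured to.
function tooBusy(): boolean {
  return false;
}

app.post('/moderate', async (req, res) => {
  if (tooBusy()) {
    res.status(429).end();
    return;
  }
  try {
    // The field path below is an assumption; use the documented request schema.
    const text: string = req.body?.message?.text ?? '';
    const harmful = await isHarmful(text);
    // Moderation ran to completion, so return 200 regardless of the verdict.
    res.status(200).json(
      harmful
        ? { action: 'reject', rejectionReason: 'message contains a blocked term' }
        : { action: 'accept' },
    );
  } catch {
    // Unexpected failure: a 5xx tells Ably to retry with backoff.
    res.status(500).end();
  }
});

app.listen(8080);
```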
## After publish
After publish moderation is where your moderation logic is invoked after a message is published to the chat room. In this configuration, harmful content may briefly be visible in the room, although most moderation engines are able to process content and instruct its removal almost instantaneously. This configuration is helpful when you need to prioritize latency and performance.
There isn't currently a chat-specific custom API for after publish moderation.
However, you can still use standard Ably [integration rules](/docs/integrations) to send chat messages to your own infrastructure and then remove any offending content with the REST API.
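As a sketch of that approach — assuming an Express receiver for the integration rule; the envelope field names and the `removeMessage` helper are placeholders you would replace with the actual rule payload and a call to the Ably REST API:

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical stand-in for your own moderation model.
async function isHarmful(text: string): Promise<boolean> {
  return /\bbadword\b/i.test(text); // placeholder rule
}

// Placeholder: delete the offending message via the Ably REST API.
// Substitute the actual endpoint or SDK call appropriate to your setup.
async function removeMessage(message: { id: string }): Promise<void> {
  console.log(`would remove message ${message.id}`);
}

app.post('/after-publish', async (req, res) => {
  // Acknowledge immediately; moderation happens out of band.
  res.status(200).end();

  // The envelope shape below is an assumption, not the documented format.
  const messages: Array<{ id: string; data?: { text?: string } }> =
    req.body?.messages ?? [];
  for (const message of messages) {
    if (await isHarmful(message.data?.text ?? '')) {
      await removeMessage(message);
    }
  }
});

app.listen(8081);
```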