Azure Content Safety

Azure Content Safety is an AI service that detects harmful content in text, and it can be used to moderate content in chat rooms.

Apply the Azure Content Safety integration to chat rooms to use Azure's text moderation capabilities to detect and handle inappropriate content before it is published to other users.

Integration setup

Configure the integration in your Ably dashboard or via the Control API.

The following are the fields specific to Azure Content Safety configuration:

Azure API key: The API key for your Azure Content Safety resource.

Azure Endpoint: The endpoint URL for your Azure Content Safety resource (for example, https://your-resource.cognitiveservices.azure.com).

Thresholds: A map of content safety categories to severity levels. Azure supports four severity levels: 0 (safe), 2 (low), 4 (medium), and 6 (high). When moderating text, any message deemed to be at or above a specified threshold will be rejected and not published to the chat room. Categories include Hate, SelfHarm, Sexual, and Violence. See the example sketch below.
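The following is a minimal sketch of how these settings could be expressed as a typed configuration object. The type and property names (AzureContentSafetyRuleConfig, apiKey, endpoint, thresholds) and the placeholder values are illustrative assumptions, not the exact field names used by the dashboard form or the Control API payload; consult the rule configuration reference for the precise shape.

```typescript
// Azure's four severity levels, applied per category.
type AzureSeverity = 0 | 2 | 4 | 6;

// Categories supported by Azure Content Safety text moderation.
type AzureCategory = 'Hate' | 'SelfHarm' | 'Sexual' | 'Violence';

// Hypothetical shape for the Azure-specific rule settings described above.
interface AzureContentSafetyRuleConfig {
  apiKey: string;    // API key for your Azure Content Safety resource
  endpoint: string;  // endpoint URL for your Azure Content Safety resource
  thresholds: Partial<Record<AzureCategory, AzureSeverity>>; // reject at or above these levels
}

// Example: reject messages Azure scores at medium (4) or above for Hate or
// Violence, and at low (2) or above for Sexual content.
const config: AzureContentSafetyRuleConfig = {
  apiKey: 'your-azure-content-safety-key', // placeholder
  endpoint: 'https://your-resource.cognitiveservices.azure.com',
  thresholds: {
    Hate: 4,
    Violence: 4,
    Sexual: 2,
  },
};
```

Categories omitted from the thresholds map are not used to reject messages, so only configure thresholds for the categories you want enforced.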

For additional configuration options shared across all before-publish moderation rules, see the common configuration fields.

Handling rejections

If a message fails moderation, it is not published and the publish request is rejected.

Moderation rejections will use error code 42213.
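As a rough illustration, the following sketch shows how a client might detect a moderation rejection when publishing with the @ably/chat JavaScript/TypeScript SDK. The room name, API key, and helper function are placeholder assumptions; the essential part is checking for error code 42213 on a failed publish.

```typescript
import * as Ably from 'ably';
import { ChatClient } from '@ably/chat';

// Placeholder client setup; substitute your own key and client ID.
const realtime = new Ably.Realtime({ key: 'your-ably-api-key', clientId: 'me' });
const chatClient = new ChatClient(realtime);

async function sendWithModeration(text: string): Promise<void> {
  const room = await chatClient.rooms.get('moderated-room');
  await room.attach();
  try {
    await room.messages.send({ text });
  } catch (err: unknown) {
    // A publish rejected by the before-publish moderation rule surfaces as an
    // error with code 42213.
    if ((err as Ably.ErrorInfo).code === 42213) {
      console.warn('Message rejected by moderation:', text);
      return;
    }
    throw err;
  }
}
```

Handling the rejection client-side lets you inform the sender that their message was blocked rather than silently dropping it.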