Bodyguard
Bodyguard is a contextual analysis platform that can be used to moderate content in chat rooms.
Apply the Bodyguard integration to chat rooms to use Bodyguard's content moderation capabilities to detect and handle inappropriate content before it is published to other users.
Integration setup
Configure the integration in your Ably dashboard or using the Control API.
The following fields are specific to Bodyguard configuration:
| Field | Description |
|---|---|
| Bodyguard API key | The API key for your Bodyguard account. |
| Channel ID | The ID of your Bodyguard channel where moderation rules are configured. |
| Default Language (optional) | The default language to use for content analysis. This will be used as a fallback in case automatic language detection fails. |
| Model URL (optional) | A custom URL if using a custom moderation model. |
For additional configuration options shared across all before-publish moderation rules, see the common configuration fields.
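As an illustrative sketch, the rule can also be created programmatically via the Control API's rules endpoint. The `ruleType` value and the field names inside `target` below are assumptions chosen to mirror the table above, not confirmed names; check them against the Control API reference for moderation rules before use.

```typescript
// Hypothetical sketch: create a Bodyguard before-publish moderation rule
// via the Ably Control API. The ruleType value and target field names are
// assumptions -- verify against the Control API reference.
const CONTROL_TOKEN = process.env.ABLY_CONTROL_TOKEN!; // Control API access token
const APP_ID = process.env.ABLY_APP_ID!;               // Ably app ID

async function createBodyguardRule(): Promise<void> {
  const response = await fetch(`https://control.ably.net/v1/apps/${APP_ID}/rules`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${CONTROL_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      ruleType: 'text.bodyguard',                       // assumed rule type identifier
      source: { channelFilter: '^chat:.*' },            // channels the rule applies to
      target: {
        apiKey: process.env.BODYGUARD_API_KEY,          // Bodyguard API key
        channelId: process.env.BODYGUARD_CHANNEL_ID,    // Bodyguard channel ID
        defaultLanguage: 'en',                          // optional fallback language
      },
    }),
  });

  if (!response.ok) {
    throw new Error(`Rule creation failed: ${response.status} ${await response.text()}`);
  }
}
```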
Messages will be rejected if Bodyguard's analysis returns a REMOVE recommended action, based on the moderation rules configured in your Bodyguard channel.
Handling rejections
When a message fails Bodyguard's analysis, Bodyguard returns a REMOVE action and the message is not published to your channel. The publish request is rejected with the error code 42213.
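On the client side, a rejected publish surfaces as an error whose code you can inspect. Below is a minimal sketch using the Ably JavaScript/TypeScript SDK's promise-based publish; the channel name and event name are illustrative.

```typescript
import * as Ably from 'ably';

const realtime = new Ably.Realtime({ key: process.env.ABLY_API_KEY! });
const channel = realtime.channels.get('chat:lobby'); // illustrative channel name

async function sendMessage(text: string): Promise<void> {
  try {
    await channel.publish('message', { text });
  } catch (err) {
    const errorInfo = err as Ably.ErrorInfo;
    if (errorInfo.code === 42213) {
      // Bodyguard returned a REMOVE action: the message was not published.
      console.warn('Message rejected by moderation:', errorInfo.message);
      return;
    }
    throw err; // some other publish failure
  }
}
```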