Running Moderation
DeepMod provides two distinct modes for running content moderation, each optimized for different use cases and integration patterns.
Test Mode
Test mode provides immediate feedback during policy development and content evaluation. This synchronous mode is perfect for:
- Policy Authoring: Get instant results while refining rules and thresholds
- Content Validation: Quickly check specific content samples during development
- Integration Testing: Validate your webhook handlers and error handling
- Quality Assurance: Verify policy behavior before activating for production
Characteristics of Test Mode:
- Immediate, synchronous responses
- No webhook notifications sent
- Policies can be inactive (useful during development)
- Ideal for development and testing workflows
- Lower throughput compared to production mode
When to Use Test Mode: Use test mode when you need immediate feedback and are working with small volumes of content. This mode is essential during the policy development phase and for debugging integration issues.
Test mode is not intended to be the primary method of moderation; it is meant for testing and refining your policies.
Production Mode
Production mode is designed for scalable, reliable content moderation in live applications. This asynchronous mode handles high volumes efficiently:
- Asynchronous Processing: Content is queued for processing, freeing your application immediately
- Webhook Notifications: Results delivered to your configured webhook endpoint
- High Throughput: Optimized for processing large volumes of content
- Reliable Delivery: Built-in retry mechanisms for webhook delivery
Endpoint: `POST /v1/moderation/run`
Characteristics of Production Mode:
- Asynchronous processing with webhook delivery
- Higher throughput and scalability
- Built-in reliability and retry mechanisms
- Requires webhook configuration (mandatory)
- Optimal for production applications
When to Use Production Mode: Use production mode for all live content moderation in your application. This mode provides the reliability and scale needed for real-world content workflows.
API Integration Patterns
Effective integration requires understanding how to structure your API calls, handle responses, and build robust error handling.
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `policyUri` | string or string[] | Yes | Policy identifier(s). Single string for one policy, array for chaining (max 10). |
| `content` | string | Yes | Text content to moderate. 1-100,000 characters. Control characters blocked except tabs and newlines. |
| `mode` | string | No | Moderation mode: `moderate` (default) or `test`. |
| `metadata` | object | No | Custom key-value pairs for context and downstream processing. |
| `tags` | string[] | No | Labels for categorizing and filtering moderation runs. |
Content Constraints
- Minimum length: 1 character
- Maximum length: 100,000 characters
- Blocked characters: Control characters (except tabs `\t` and newlines `\n`)
- Trimming: Content is trimmed before validation
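As a convenience, these constraints can be mirrored client-side before submission. The sketch below uses a hypothetical `validateContent` helper and assumes the API remains the source of truth for validation:

```javascript
// Hypothetical pre-submission check mirroring the documented constraints.
// The API performs its own validation; this only avoids obviously bad requests.
function validateContent(raw) {
  const content = raw.trim(); // content is trimmed before validation
  if (content.length < 1) {
    return { ok: false, reason: 'empty after trimming' };
  }
  if (content.length > 100000) {
    return { ok: false, reason: 'exceeds 100,000 characters' };
  }
  // Control characters are blocked except tabs (\t, U+0009) and newlines (\n, U+000A)
  if (/[\u0000-\u0008\u000B-\u001F\u007F]/.test(content)) {
    return { ok: false, reason: 'contains disallowed control characters' };
  }
  return { ok: true, content };
}
```

Running this before the API call lets you reject invalid input without spending a request.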
Basic API Call Structure
Production moderation requests follow this pattern:
```javascript
const response = await fetch('https://api.deepmod.ai/v1/moderation/run', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.DEEPMOD_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    policyUri: 'your-policy-uri',
    content: 'Text content to moderate',
    metadata: {
      userId: 'user_12345',
      postId: 'post_67890',
      source: 'comment',
      priority: 'standard',
    },
    tags: ['user-generated', 'comments', 'public'],
  }),
});

const result = await response.json();
// Note: Production moderation is async, result delivered via webhook
console.log('Moderation job ID:', result.moderationJobId);
```
```python
import requests
import os

response = requests.post(
    'https://api.deepmod.ai/v1/moderation/run',
    headers={
        'Authorization': f"Bearer {os.environ['DEEPMOD_API_TOKEN']}",
        'Content-Type': 'application/json'
    },
    json={
        'policyUri': 'your-policy-uri',
        'content': 'Text content to moderate',
        'metadata': {
            'userId': 'user_12345',
            'postId': 'post_67890',
            'source': 'comment',
            'priority': 'standard'
        },
        'tags': ['user-generated', 'comments', 'public']
    }
)

result = response.json()
# Note: Production moderation is async, result delivered via webhook
print(f"Moderation job ID: {result['moderationJobId']}")
```
```php
<?php
$response = file_get_contents('https://api.deepmod.ai/v1/moderation/run', false,
    stream_context_create([
        'http' => [
            'method' => 'POST',
            'header' => [
                'Authorization: Bearer ' . $_ENV['DEEPMOD_API_TOKEN'],
                'Content-Type: application/json'
            ],
            // Return the response body even on non-2xx statuses instead of failing
            'ignore_errors' => true,
            'content' => json_encode([
                'policyUri' => 'your-policy-uri',
                'content' => 'Text content to moderate',
                'metadata' => [
                    'userId' => 'user_12345',
                    'postId' => 'post_67890',
                    'source' => 'comment',
                    'priority' => 'standard'
                ],
                'tags' => ['user-generated', 'comments', 'public']
            ])
        ]
    ])
);

$result = json_decode($response, true);
// Note: Production moderation is async, result delivered via webhook
echo "Moderation job ID: " . $result['moderationJobId'];
?>
```
```ruby
require 'net/http'
require 'json'

uri = URI('https://api.deepmod.ai/v1/moderation/run')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

request = Net::HTTP::Post.new(uri)
request['Authorization'] = "Bearer #{ENV['DEEPMOD_API_TOKEN']}"
request['Content-Type'] = 'application/json'
request.body = {
  policyUri: 'your-policy-uri',
  content: 'Text content to moderate',
  metadata: {
    userId: 'user_12345',
    postId: 'post_67890',
    source: 'comment',
    priority: 'standard'
  },
  tags: ['user-generated', 'comments', 'public']
}.to_json

response = http.request(request)
result = JSON.parse(response.body)
# Note: Production moderation is async, result delivered via webhook
puts "Moderation job ID: #{result['moderationJobId']}"
```
Moderation Modes
Moderate Mode (Default)
Standard asynchronous moderation. Content is queued for processing and results are delivered via webhook.
```json
{
  "policyUri": "safety-policy",
  "content": "Text content to moderate",
  "mode": "moderate"
}
```
Test Mode
Synchronous moderation for development and testing. Returns results immediately without sending webhooks.
```json
{
  "policyUri": "safety-policy",
  "content": "Text content to moderate",
  "mode": "test"
}
```
Note: Test mode is ideal for policy development, debugging, and integration testing. For production workloads, use moderate mode with webhook handlers.
Policy Chaining
Policy chaining allows you to run content through multiple policies sequentially. Replace the single policyUri value with an array of policy identifiers to create a moderation chain.
Single Policy (existing):
```json
{
  "policyUri": "safety-policy",
  "content": "Text content to moderate"
}
```
Policy Chain:
```json
{
  "policyUri": ["safety-policy", "legal-compliance", "brand-guidelines"],
  "content": "Text content to moderate",
  "metadata": {
    "userId": "user_12345",
    "postId": "post_67890"
  }
}
```
Chain Execution:
- Policies execute sequentially in the provided order
- If any policy fails, the chain stops and remaining policies are marked as `abandoned`
- Success requires all policies to pass
- You receive one consolidated webhook with all results
- Duplicate policy URIs are automatically deduplicated
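These execution semantics can be sketched as a small simulation. This is illustrative pseudologic, not DeepMod's implementation:

```javascript
// Illustrative sketch of the documented chain semantics: policies run in order,
// the first failure stops the chain, and the remaining policies are abandoned.
function simulateChain(policies, evaluate) {
  const results = [];
  let failed = false;
  for (const policy of policies) {
    if (failed) {
      results.push({ policy, result: 'abandoned' });
      continue;
    }
    const result = evaluate(policy); // 'success' | 'failure'
    results.push({ policy, result });
    if (result === 'failure') failed = true;
  }
  // Overall success requires every policy in the chain to pass
  const overall = failed ? 'failure' : 'success';
  return { overall, results };
}
```

For example, a chain of three policies where the second fails yields an overall `failure`, with the third policy marked `abandoned`.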
Chain Limitations:
- Maximum 10 policies per chain
- All policies must be active and belong to your organization
- Each policy in the chain can be referenced by ID or URI
Example Response Structure:
```json
{
  "batchId": "batch_abc123",
  "moderationJobId": "job_abc123"
}
```
Webhook Payload for Chains:
```json
{
  "id": "job_abc123def456",
  "type": "Moderation.BatchCompleted",
  "data": {
    "batch": {
      "result": "failure",
      "moderation": [
        {
          "policy": "safety-policy",
          "result": "success",
          "ruleGroupResults": [
            {
              "name": "Content Safety",
              "result": "success",
              "averageConfidence": 0.95,
              "ruleResults": [
                {
                  "ruleId": 101,
                  "condition": "must not contain harmful content",
                  "result": "success",
                  "averageConfidence": 0.95,
                  "matchedContent": [
                    {
                      "content": null,
                      "confidence": 0.95
                    }
                  ]
                }
              ]
            }
          ],
          "averageConfidence": 0.95,
          "reviewed": false,
          "reviewNote": null,
          "moderationRunId": 1001
        },
        {
          "policy": "legal-compliance-policy",
          "result": "failure",
          "ruleGroupResults": [
            {
              "name": "Copyright Detection",
              "result": "failure",
              "averageConfidence": 0.87,
              "ruleResults": [
                {
                  "ruleId": 202,
                  "condition": "must not contain copyrighted material",
                  "result": "failure",
                  "averageConfidence": 0.87,
                  "matchedContent": [
                    {
                      "content": "Potential copyright violation detected",
                      "confidence": 0.87
                    }
                  ]
                }
              ]
            }
          ],
          "averageConfidence": 0.87,
          "reviewed": false,
          "reviewNote": null,
          "moderationRunId": 1002
        },
        {
          "policy": "brand-guidelines",
          "result": "abandoned",
          "ruleGroupResults": [],
          "averageConfidence": null,
          "reviewed": false,
          "reviewNote": null,
          "moderationRunId": null
        }
      ]
    },
    "metadata": {
      "userId": "user_12345",
      "source": "comment",
      "priority": "standard"
    },
    "tags": ["user-generated", "comments"]
  }
}
```
Request Components
`policyUri`: The identifier of the policy to use for moderation. Can be either the numeric ID or the URI string. For policy chaining, provide an array of up to 10 identifiers.
`content`: The text content to be moderated. Must be between 1 and 100,000 characters. Control characters (except tabs and newlines) are not allowed.
`mode` (optional): The moderation mode. Options are:
- `moderate` (default): Asynchronous processing with webhook delivery
- `test`: Synchronous processing with immediate results (no webhooks)
`metadata` (optional): Custom key-value pairs that provide context and enable downstream processing. Common metadata includes:
- User identifiers for tracking and analytics
- Content identifiers for your internal systems
- Source information (comments, posts, reviews, etc.)
- Priority levels for processing order
- Regional or localization context
`tags` (optional): Predefined labels for categorizing and filtering moderation runs. Tags help with:
- Analytics segmentation
- Queue filtering and monitoring
- Business logic routing in webhook handlers
Tags must be created in the DeepMod dashboard or via the API before they can be used in moderation requests; tags in a request that have not been created are ignored.
Response Handling
Production mode API calls return immediately with a job acknowledgment:
Single Policy Response:
```json
{
  "moderationJobId": "job_abc123def456"
}
```
Policy Chain Response:
```json
{
  "moderationJobId": "job_abc123def456",
  "batchId": "batch_123"
}
```
The actual moderation results arrive via webhook notification when processing completes.
Webhook Integration Best Practices
Webhooks are the cornerstone of effective production moderation. Building robust webhook handlers ensures reliable processing of moderation results.
Webhook Event Types
DeepMod sends different webhook types based on the operation:
| Event Type | Description |
|---|---|
| `Moderation.Completed` | Single policy moderation completed successfully |
| `Moderation.BatchCompleted` | Policy chain completed (all policies processed) |
| `Moderation.Failed` | Moderation job failed due to system error |
Webhook Headers
All webhooks include the following headers:
| Header | Description |
|---|---|
| `content-type` | Always `application/json` |
| `x-deepmod-signature` | HMAC-SHA256 signature for payload verification |
Signature Format: The `x-deepmod-signature` header contains a 64-character lowercase hexadecimal string representing the HMAC-SHA256 hash of the JSON payload using your organization's client secret.
Single Moderation Webhook Payload
When a single policy moderation completes, you'll receive:
```json
{
  "id": "job_abc123def456",
  "type": "Moderation.Completed",
  "data": {
    "moderation": {
      "policy": "community-guidelines-policy",
      "result": "success",
      "ruleGroupResults": [
        {
          "name": "Safety",
          "result": "success",
          "averageConfidence": 0.95,
          "ruleResults": [
            {
              "ruleId": 101,
              "condition": "must not contain harmful content",
              "result": "success",
              "averageConfidence": 0.95,
              "matchedContent": [
                {
                  "content": null,
                  "confidence": 0.95
                }
              ]
            }
          ]
        }
      ],
      "averageConfidence": 0.95,
      "reviewed": false,
      "reviewNote": null
    },
    "metadata": {
      "userId": "user_12345",
      "postId": "post_67890",
      "source": "comment",
      "priority": "standard"
    },
    "tags": ["user-generated", "comments"]
  }
}
```
Failed Moderation Webhook Payload
When a moderation job fails due to a system error:
```json
{
  "id": "job_abc123def456",
  "type": "Moderation.Failed",
  "data": {
    "policyId": "community-guidelines-policy",
    "originalPayload": {
      "content": "Original content submitted",
      "metadata": {
        "userId": "user_12345"
      },
      "tags": ["user-generated"]
    },
    "error": {
      "type": "ProcessingError",
      "message": "Failed to process moderation request",
      "code": "PROCESSING_FAILED",
      "isRetryable": true,
      "timestamp": "2024-01-15T10:30:00.000Z"
    }
  }
}
```
Implementing Webhook Handlers
Essential Webhook Handler Features:
```javascript
// Example webhook handler (Node.js/Express)
app.post('/webhook/moderation', async (req, res) => {
  try {
    // 1. Verify webhook signature (recommended)
    // Note: the HMAC is computed over the raw request body, so capture it
    // (e.g. via express.json({ verify: ... })) rather than re-serializing req.body
    const signature = req.headers['x-deepmod-signature'];
    if (!verifySignature(req.body, signature, webhookSecret)) {
      return res.status(401).send('Invalid signature');
    }
    // 2. Parse webhook payload
    const { id: jobId, type, data } = req.body;
    // 3. Implement idempotent handling
    if (await isJobAlreadyProcessed(jobId)) {
      return res.status(200).send('Already processed');
    }
    // 4. Route based on webhook type
    switch (type) {
      case 'Moderation.Completed':
        await handleSingleModeration(data);
        break;
      case 'Moderation.BatchCompleted':
        await handleBatchModeration(data);
        break;
      case 'Moderation.Failed':
        await handleFailedModeration(data);
        break;
    }
    // 5. Mark job as processed
    await markJobProcessed(jobId);
    // 6. Return success response
    res.status(200).send('Processed');
  } catch (error) {
    // 7. Log errors and return error status
    console.error('Webhook processing error:', error);
    res.status(500).send('Processing failed');
  }
});

async function handleSingleModeration(data) {
  const { moderation, metadata } = data;
  switch (moderation.result) {
    case 'success':
      await handleApprovedContent(metadata, moderation);
      break;
    case 'failure':
      await handleRejectedContent(metadata, moderation);
      break;
    case 'ambiguous':
      await handleAmbiguousContent(metadata, moderation);
      break;
  }
}

async function handleBatchModeration(data) {
  const { batch, metadata } = data;
  // Check overall batch result
  switch (batch.result) {
    case 'success':
      await handleApprovedContent(metadata, batch);
      break;
    case 'failure': {
      // Find which policy failed
      const failedPolicy = batch.moderation.find((m) => m.result === 'failure');
      await handleRejectedContent(metadata, failedPolicy);
      break;
    }
    case 'ambiguous':
      await handleAmbiguousContent(metadata, batch);
      break;
  }
  // Handle any abandoned policies
  const abandonedPolicies = batch.moderation.filter((m) => m.result === 'abandoned');
  if (abandonedPolicies.length > 0) {
    console.log(`${abandonedPolicies.length} policies were abandoned due to earlier failure`);
  }
}
```
```python
from flask import Flask, request, jsonify
import hmac
import hashlib

app = Flask(__name__)

@app.route('/webhook/moderation', methods=['POST'])
def webhook_handler():
    try:
        # 1. Verify webhook signature (recommended)
        signature = request.headers.get('x-deepmod-signature')
        if not verify_signature(request.data, signature, webhook_secret):
            return jsonify({'error': 'Invalid signature'}), 401
        # 2. Parse webhook payload
        data = request.json
        job_id = data['id']
        webhook_type = data['type']
        payload = data['data']
        # 3. Implement idempotent handling
        if is_job_already_processed(job_id):
            return jsonify({'status': 'Already processed'}), 200
        # 4. Route based on webhook type
        if webhook_type == 'Moderation.Completed':
            handle_single_moderation(payload)
        elif webhook_type == 'Moderation.BatchCompleted':
            handle_batch_moderation(payload)
        elif webhook_type == 'Moderation.Failed':
            handle_failed_moderation(payload)
        # 5. Mark job as processed
        mark_job_processed(job_id)
        # 6. Return success response
        return jsonify({'status': 'Processed'}), 200
    except Exception as error:
        # 7. Log errors and return error status
        print(f'Webhook processing error: {error}')
        return jsonify({'error': 'Processing failed'}), 500

def handle_single_moderation(payload):
    moderation = payload['moderation']
    metadata = payload['metadata']
    if moderation['result'] == 'success':
        handle_approved_content(metadata, moderation)
    elif moderation['result'] == 'failure':
        handle_rejected_content(metadata, moderation)
    elif moderation['result'] == 'ambiguous':
        handle_ambiguous_content(metadata, moderation)

def handle_batch_moderation(payload):
    batch = payload['batch']
    metadata = payload['metadata']
    if batch['result'] == 'success':
        handle_approved_content(metadata, batch)
    elif batch['result'] == 'failure':
        failed_policy = next(m for m in batch['moderation'] if m['result'] == 'failure')
        handle_rejected_content(metadata, failed_policy)
    elif batch['result'] == 'ambiguous':
        handle_ambiguous_content(metadata, batch)
    # Handle abandoned policies
    abandoned = [m for m in batch['moderation'] if m['result'] == 'abandoned']
    if abandoned:
        print(f"{len(abandoned)} policies were abandoned due to earlier failure")
```
```php
<?php
// Example webhook handler (PHP)
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    try {
        // 1. Verify webhook signature (recommended)
        $signature = $_SERVER['HTTP_X_DEEPMOD_SIGNATURE'] ?? '';
        $requestBody = file_get_contents('php://input');
        if (!verifySignature($requestBody, $signature, $webhookSecret)) {
            http_response_code(401);
            echo 'Invalid signature';
            exit;
        }
        // 2. Parse webhook payload
        $data = json_decode($requestBody, true);
        $jobId = $data['id'];
        $type = $data['type'];
        $payload = $data['data'];
        // 3. Implement idempotent handling
        if (isJobAlreadyProcessed($jobId)) {
            http_response_code(200);
            echo 'Already processed';
            exit;
        }
        // 4. Route based on webhook type
        switch ($type) {
            case 'Moderation.Completed':
                handleSingleModeration($payload);
                break;
            case 'Moderation.BatchCompleted':
                handleBatchModeration($payload);
                break;
            case 'Moderation.Failed':
                handleFailedModeration($payload);
                break;
        }
        // 5. Mark job as processed
        markJobProcessed($jobId);
        // 6. Return success response
        http_response_code(200);
        echo 'Processed';
    } catch (Exception $error) {
        // 7. Log errors and return error status
        error_log('Webhook processing error: ' . $error->getMessage());
        http_response_code(500);
        echo 'Processing failed';
    }
}

function handleSingleModeration($payload) {
    $moderation = $payload['moderation'];
    $metadata = $payload['metadata'];
    switch ($moderation['result']) {
        case 'success':
            handleApprovedContent($metadata, $moderation);
            break;
        case 'failure':
            handleRejectedContent($metadata, $moderation);
            break;
        case 'ambiguous':
            handleAmbiguousContent($metadata, $moderation);
            break;
    }
}

function handleBatchModeration($payload) {
    $batch = $payload['batch'];
    $metadata = $payload['metadata'];
    switch ($batch['result']) {
        case 'success':
            handleApprovedContent($metadata, $batch);
            break;
        case 'failure':
            // array_filter preserves keys, so re-index before taking the first match
            $failedPolicy = array_values(array_filter($batch['moderation'], fn($m) => $m['result'] === 'failure'))[0];
            handleRejectedContent($metadata, $failedPolicy);
            break;
        case 'ambiguous':
            handleAmbiguousContent($metadata, $batch);
            break;
    }
    // Handle abandoned policies
    $abandoned = array_filter($batch['moderation'], fn($m) => $m['result'] === 'abandoned');
    if (count($abandoned) > 0) {
        error_log(count($abandoned) . " policies were abandoned due to earlier failure");
    }
}
?>
```
```ruby
require 'sinatra'
require 'json'
require 'openssl'

post '/webhook/moderation' do
  begin
    # 1. Verify webhook signature (recommended)
    signature = request.env['HTTP_X_DEEPMOD_SIGNATURE']
    request_body = request.body.read
    unless verify_signature(request_body, signature, webhook_secret)
      status 401
      return 'Invalid signature'
    end
    # 2. Parse webhook payload
    data = JSON.parse(request_body)
    job_id = data['id']
    webhook_type = data['type']
    payload = data['data']
    # 3. Implement idempotent handling
    if job_already_processed?(job_id)
      status 200
      return 'Already processed'
    end
    # 4. Route based on webhook type
    case webhook_type
    when 'Moderation.Completed'
      handle_single_moderation(payload)
    when 'Moderation.BatchCompleted'
      handle_batch_moderation(payload)
    when 'Moderation.Failed'
      handle_failed_moderation(payload)
    end
    # 5. Mark job as processed
    mark_job_processed(job_id)
    # 6. Return success response
    status 200
    'Processed'
  rescue => error
    # 7. Log errors and return error status
    puts "Webhook processing error: #{error.message}"
    status 500
    'Processing failed'
  end
end

def handle_single_moderation(payload)
  moderation = payload['moderation']
  metadata = payload['metadata']
  case moderation['result']
  when 'success'
    handle_approved_content(metadata, moderation)
  when 'failure'
    handle_rejected_content(metadata, moderation)
  when 'ambiguous'
    handle_ambiguous_content(metadata, moderation)
  end
end

def handle_batch_moderation(payload)
  batch = payload['batch']
  metadata = payload['metadata']
  case batch['result']
  when 'success'
    handle_approved_content(metadata, batch)
  when 'failure'
    failed_policy = batch['moderation'].find { |m| m['result'] == 'failure' }
    handle_rejected_content(metadata, failed_policy)
  when 'ambiguous'
    handle_ambiguous_content(metadata, batch)
  end
  # Handle abandoned policies
  abandoned = batch['moderation'].select { |m| m['result'] == 'abandoned' }
  puts "#{abandoned.length} policies were abandoned due to earlier failure" if abandoned.any?
end
```
Signature Verification
Verify webhook signatures to ensure requests are authentic:
```javascript
const crypto = require('crypto');

function verifySignature(payload, signature, secret) {
  // Prefer passing the raw request body string; re-serializing a parsed object
  // can produce different bytes than the ones that were signed
  const body = typeof payload === 'string' ? payload : JSON.stringify(payload);
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(body)
    .digest('hex');
  // timingSafeEqual throws if buffer lengths differ, so guard first
  if (!signature || signature.length !== expectedSignature.length) {
    return false;
  }
  return crypto.timingSafeEqual(
    Buffer.from(signature),
    Buffer.from(expectedSignature)
  );
}
```
Critical Webhook Implementation Guidelines
Idempotency: Always implement idempotent processing using the job ID to prevent duplicate handling if webhooks are retried.
Fast Response: Return a 2xx status code quickly (under 5 seconds) to prevent webhook timeouts and retries.
Error Handling: Log all errors with sufficient context for debugging, but don't let errors crash your webhook handler.
Signature Verification: While optional, signature verification provides important security for production environments.
Graceful Degradation: If your primary processing fails, consider implementing fallback mechanisms or queue for later retry.
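One way to satisfy the fast-response guideline is to do only cheap work inline and push everything else onto a queue. The sketch below uses an in-memory array as a stand-in for a durable queue, and the function names are illustrative, not a DeepMod API:

```javascript
// Sketch: keep the webhook handler itself cheap by queueing the real work for
// a background worker. In production, swap the array for a durable queue.
const pending = [];

function acknowledgeWebhook(payload) {
  // Inline work stays minimal so the HTTP response returns well under 5 seconds
  pending.push({ jobId: payload.id, type: payload.type, data: payload.data });
  return { status: 200, body: 'Accepted' };
}

function drainOne(process) {
  // Background worker: take one queued job and process it
  const job = pending.shift();
  if (job) process(job);
  return pending.length;
}
```

The handler acknowledges immediately; retries and timeouts then depend only on queue insertion, not on how long your business logic takes.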
Error Handling
Common Error Responses
| Status | Error | Cause |
|---|---|---|
| 400 | "At least one policy identifier is required" | Empty array after deduplication |
| 400 | "Maximum of 10 policy identifiers allowed" | More than 10 policies in array |
| 400 | "All policies must be active" | Inactive policy in batch |
| 400 | "All policies must have at least one rule" | Policy without rules in batch |
| 400 | "Organization has no webhook configured" | Missing webhook configuration |
| 401 | "Invalid key" | Invalid or missing API key |
| 404 | "Policy not found: {uri}" | Policy doesn't exist or access denied |
| 422 | Validation error | Invalid request body format |
Error Response Format
```json
{
  "errors": [
    {
      "message": "Maximum of 10 policy identifiers allowed",
      "code": "400"
    }
  ]
}
```
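A client can parse this envelope and decide whether a retry makes sense. The sketch below applies a common convention (retry 5xx and 429, surface other 4xx to the caller); the function name and retry policy are assumptions, not API requirements:

```javascript
// Classify an API response into accepted / retryable / non-retryable,
// reading the first message from the documented error envelope.
function classifyModerationResponse(status, body) {
  if (status >= 200 && status < 300) {
    return { kind: 'accepted', jobId: body.moderationJobId };
  }
  const message =
    body && body.errors && body.errors.length
      ? body.errors[0].message
      : 'Unknown error';
  if (status >= 500 || status === 429) {
    return { kind: 'retry', message }; // transient: back off and retry
  }
  return { kind: 'fail', message }; // 4xx: fix the request, don't retry
}
```

Validation errors like "Maximum of 10 policy identifiers allowed" are classified as non-retryable, since re-sending the same request will fail again.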
Passing Context Effectively
Strategic use of metadata and tags enables sophisticated content routing, analytics, and business logic implementation.
Metadata Strategies
User Context: Include user identifiers, roles, and relevant profile information to enable user-specific handling.
```json
{
  "metadata": {
    "userId": "user_12345",
    "userRole": "premium",
    "userReputation": 89,
    "accountAge": "6months"
  }
}
```
Content Context: Provide information about the content's purpose, location, and characteristics.
```json
{
  "metadata": {
    "contentType": "product_review",
    "productId": "prod_456",
    "contentLength": 342,
    "language": "en-US"
  }
}
```
System Context: Include application-specific information that affects how results should be processed.
```json
{
  "metadata": {
    "source": "mobile_app",
    "version": "2.1.4",
    "environment": "production",
    "requestId": "req_789xyz"
  }
}
```
Tag-Based Organization
Content Classification: Use tags to categorize content types for analytics and routing.
- `user-generated`, `admin-content`, `automated`
- `public`, `private`, `internal`
- `comments`, `reviews`, `posts`, `messages`
Geographic and Regulatory: Tag content by region for compliance and localization.
- `us`, `eu`, `asia-pacific`
- `gdpr-applicable`, `coppa-protected`
- `california-resident`
Business Priority: Use tags to indicate processing priorities and SLA requirements.
- `high-priority`, `standard`, `batch-processing`
- `real-time`, `near-real-time`, `background`
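Tags echoed back in webhook payloads can then drive routing decisions in your handler. A minimal sketch, with hypothetical handler names:

```javascript
// Pick a downstream handler based on the tags attached to a moderation run.
// Tag names follow the examples above; handler names are illustrative.
function routeByTags(tags, handlers) {
  if (tags.includes('high-priority')) return handlers.expedite;
  if (tags.includes('gdpr-applicable')) return handlers.complianceReview;
  return handlers.standard;
}
```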
Result Processing Patterns
Building effective downstream logic requires understanding how to interpret and act on moderation results systematically.
Moderation Result Values
| Result | Description | Typical Action |
|---|---|---|
| `success` | Content passes all policy rules | Publish/approve content |
| `failure` | Content violates one or more rules | Block/reject content |
| `ambiguous` | Uncertain result, confidence too low | Queue for human review |
| `abandoned` | Policy skipped due to earlier chain failure | Log for analytics |
Decision Logic Implementation
Simple Content Routing: Basic three-way routing based on moderation results.
```javascript
async function processModerationResult(result, metadata) {
  switch (result.result) {
    case 'success':
      await publishContent(metadata.contentId);
      await notifyUser(metadata.userId, 'approved');
      break;
    case 'failure':
      await blockContent(metadata.contentId, result.reasons);
      await notifyUser(metadata.userId, 'rejected', result.summary);
      break;
    case 'ambiguous':
      await queueForReview(metadata.contentId, result);
      await notifyModerators('new_review_item');
      break;
    case 'abandoned':
      // Log but take no action - this policy was skipped
      console.log(`Policy ${result.policy} abandoned due to earlier failure`);
      break;
  }
}
```
Confidence-Based Logic: Use confidence scores to implement nuanced handling.
```javascript
async function processWithConfidence(result, metadata) {
  const { result: decision, averageConfidence } = result;
  if (decision === 'success' && averageConfidence > 0.9) {
    // High-confidence approval - fast track
    await publishImmediately(metadata.contentId);
  } else if (decision === 'success' && averageConfidence > 0.7) {
    // Medium-confidence approval - normal flow
    await publishContent(metadata.contentId);
  } else if (decision === 'failure' && averageConfidence > 0.8) {
    // High-confidence rejection - automatic block
    await blockContent(metadata.contentId);
  } else {
    // Low confidence or ambiguous - human review
    await queueForReview(metadata.contentId, result);
  }
}
```
Rule-Specific Handling
Category-Based Responses: Different rule violations may require different responses.
```javascript
async function processRuleViolations(result, metadata) {
  if (result.result !== 'failure') return;
  // Analyze which rule groups were violated
  const violatedGroups = result.ruleGroupResults
    .filter((group) => group.result === 'failure')
    .map((group) => group.name.toLowerCase());
  if (violatedGroups.includes('safety')) {
    // Safety violations - immediate action
    await blockContentImmediately(metadata.contentId);
    await flagUserAccount(metadata.userId, 'safety_violation');
  } else if (violatedGroups.includes('legal')) {
    // Legal violations - escalate to legal team
    await escalateToLegal(metadata.contentId, result);
  } else if (violatedGroups.includes('brand')) {
    // Brand violations - request revision
    await requestContentRevision(metadata.contentId, result.reasons);
  }
}
```
Monitoring and Optimization
Continuous monitoring helps identify optimization opportunities and ensures consistent performance.
Key Metrics to Track
Processing Metrics:
- Average processing time per request
- Queue depth and wait times
- Success/failure/ambiguous/abandoned ratios
- Webhook delivery success rates
Business Metrics:
- Content approval rates by source/type
- False positive and false negative rates
- Human review workload and turnaround times
- User experience impact (content delays)
System Metrics:
- API response times and error rates
- Webhook handler performance
- Database query performance for metadata lookups
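As a starting point, the result ratios listed above can be computed from stored webhook outcomes. The run shape here is a minimal assumption (one `result` string per stored run), not a DeepMod API:

```javascript
// Compute success/failure/ambiguous/abandoned ratios from stored run results.
function resultRatios(runs) {
  const counts = { success: 0, failure: 0, ambiguous: 0, abandoned: 0 };
  for (const run of runs) {
    if (run.result in counts) counts[run.result] += 1;
  }
  const total = runs.length || 1; // avoid division by zero on an empty window
  const ratios = {};
  for (const [key, count] of Object.entries(counts)) {
    ratios[key] = count / total;
  }
  return ratios;
}
```

Tracking these ratios over time (and segmented by tag or source) highlights policy drift, such as a rising `ambiguous` share that inflates human review workload.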
Performance Optimization Techniques
Content Preprocessing: Optimize content before submission to improve processing speed and accuracy.
Webhook Optimization: Ensure webhook handlers process results efficiently to prevent backlog buildup.
Caching Strategies: Cache frequently accessed data (user profiles, content metadata) to speed up webhook processing.
Database Optimization: Index fields used for filtering and analytics in your moderation run storage.
What's Next
With effective moderation workflows in place, explore these advanced topics:
- Interpreting Results - Deep dive into understanding and acting on moderation outcomes
- Human Review Workflows - Advanced strategies for managing human oversight efficiently
- Troubleshooting - Common issues and solutions for moderation integrations
Effective moderation requires balancing automation with human oversight, optimizing for both speed and accuracy, and building systems that scale with your content volume and complexity.
Need help optimizing your moderation workflow? Contact our support team for guidance on scaling and performance optimization strategies specific to your use case.