Running Moderation

Once your policies are configured and tested, it's time to integrate content moderation into your production workflows. This section covers advanced techniques for scaling moderation, optimizing performance, and building robust integrations that handle high volumes reliably.

Two Modes of Operation

DeepMod provides two distinct modes for running content moderation, each optimized for different use cases and integration patterns.

Test Mode

Test mode provides immediate feedback during policy development and content evaluation. This synchronous mode is perfect for:

  • Policy Authoring: Get instant results while refining rules and thresholds

  • Content Validation: Quickly check specific content samples during development

  • Integration Testing: Confirm expected policy behavior before exercising your production-mode integration

  • Quality Assurance: Verify policy behavior before activating for production

Characteristics of Test Mode:

  • Immediate, synchronous responses

  • No webhook notifications sent

  • Ideal for development and testing workflows

  • Lower throughput compared to production mode

When to Use Test Mode: Use test mode when you need immediate feedback and are working with small volumes of content. This mode is essential during the policy development phase and for debugging integration issues.

Test Mode is not intended to be the primary method of moderation. It is meant for testing and refining your policies, and as such is only available via the DeepMod dashboard.

Production Mode

Production mode is designed for scalable, reliable content moderation in live applications. This asynchronous mode handles high volumes efficiently:

  • Asynchronous Processing: Content is queued for processing, freeing your application immediately

  • Webhook Notifications: Results delivered to your configured webhook endpoint

  • High Throughput: Optimized for processing large volumes of content

  • Reliable Delivery: Built-in retry mechanisms for webhook delivery

Characteristics of Production Mode:

  • Asynchronous processing with webhook delivery

  • Higher throughput and scalability

  • Built-in reliability and retry mechanisms

  • Requires webhook configuration

  • Optimal for production applications

When to Use Production Mode: Use production mode for all live content moderation in your application. This mode provides the reliability and scale needed for real-world content workflows.

API Integration Patterns

Effective integration requires understanding how to structure your API calls, handle responses, and build robust error handling.

Basic API Call Structure

JavaScript:

const response = await fetch('https://api.deepmod.ai/v1/moderation/run', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.DEEPMOD_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    policyId: 'your-policy-id',
    content: 'Text content to moderate',
    metadata: {
      userId: 'user_12345',
      postId: 'post_67890',
      source: 'comment',
      priority: 'standard',
    },
    tags: ['user-generated', 'comments', 'public'],
  }),
});
Python:

import requests
import os

response = requests.post('https://api.deepmod.ai/v1/moderation/run',
  headers={
    'Authorization': f"Bearer {os.environ['DEEPMOD_API_TOKEN']}",
    'Content-Type': 'application/json'
  },
  json={
    'policyId': 'your-policy-id',
    'content': 'Text content to moderate',
    'metadata': {
      'userId': 'user_12345',
      'postId': 'post_67890',
      'source': 'comment',
      'priority': 'standard'
    },
    'tags': ['user-generated', 'comments', 'public']
  }
)
PHP:

<?php
$response = file_get_contents('https://api.deepmod.ai/v1/moderation/run', false,
  stream_context_create([
    'http' => [
      'method' => 'POST',
      'header' => [
        'Authorization: Bearer ' . $_ENV['DEEPMOD_API_TOKEN'],
        'Content-Type: application/json'
      ],
      'content' => json_encode([
        'policyId' => 'your-policy-id',
        'content' => 'Text content to moderate',
        'metadata' => [
          'userId' => 'user_12345',
          'postId' => 'post_67890',
          'source' => 'comment',
          'priority' => 'standard'
        ],
        'tags' => ['user-generated', 'comments', 'public']
      ])
    ]
  ])
);
?>
Ruby:

require 'net/http'
require 'json'

uri = URI('https://api.deepmod.ai/v1/moderation/run')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

request = Net::HTTP::Post.new(uri)
request['Authorization'] = "Bearer #{ENV['DEEPMOD_API_TOKEN']}"
request['Content-Type'] = 'application/json'
request.body = {
  policyId: 'your-policy-id',
  content: 'Text content to moderate',
  metadata: {
    userId: 'user_12345',
    postId: 'post_67890',
    source: 'comment',
    priority: 'standard'
  },
  tags: ['user-generated', 'comments', 'public']
}.to_json

response = http.request(request)

Policy Chaining

Policy chaining allows you to run content through multiple policies sequentially. Replace the single policyId with an array of policy identifiers to create a moderation chain.

Single Policy (existing):

{
  "policyId": "safety-policy",
  "content": "Text content to moderate"
}

Policy Chain:

{
  "policyId": ["safety-policy", "legal-compliance", "brand-guidelines"],
  "content": "Text content to moderate",
  "metadata": {
    "userId": "user_12345",
    "postId": "post_67890"
  }
}

Chain Execution:

  1. Policies execute sequentially in the provided order

  2. If any policy fails, the chain stops and remaining policies are marked as "abandoned"

  3. Success requires all policies to pass

  4. You receive one consolidated webhook with all results

Chain Limitations:

  • Maximum 10 policies per chain

  • All policies must be active and belong to your organization

  • Each policy in the chain can be referenced by ID or friendly identifier
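Given these limits, a client-side guard can catch an invalid chain before the API call is made. This is a hypothetical helper, not part of any DeepMod SDK:

```javascript
// Hypothetical guard: validate a policy chain before sending it,
// enforcing the 10-policy-per-chain limit described above.
function buildPolicyChain(policyIds) {
  if (!Array.isArray(policyIds) || policyIds.length === 0 || policyIds.length > 10) {
    throw new Error('A policy chain must contain between 1 and 10 policies');
  }
  // Returned object uses the same policyId field shown in the request examples
  return { policyId: policyIds };
}
```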

Example Response Structure:

{
  "batchId": "batch_abc123",
  "moderationJobId": "job_abc123"
}

Webhook Payload for Chains:

{
  "id": "job_abc123def456",
  "type": "Moderation.BatchCompleted",
  "data": {
    "batch": {
      "result": "failure",
      "moderation": [
        {
          "policy": "safety-policy",
          "result": "success",
          "ruleGroupResults": [
            {
              "name": "Content Safety",
              "result": "success",
              "averageConfidence": 0.95,
              "ruleResults": [
                {
                  "condition": "must not contain harmful content",
                  "result": "success",
                  "averageConfidence": 0.95,
                  "matchedContent": [
                    {
                      "content": null,
                      "confidence": 0.95
                    }
                  ]
                }
              ]
            }
          ],
          "averageConfidence": 0.95,
          "moderationRunId": 1001
        },
        {
          "policy": "legal-compliance-policy",
          "result": "failure",
          "ruleGroupResults": [
            {
              "name": "Copyright Detection",
              "result": "failure",
              "averageConfidence": 0.87,
              "ruleResults": [
                {
                  "condition": "must not contain copyrighted material",
                  "result": "failure",
                  "averageConfidence": 0.87,
                  "matchedContent": [
                    {
                      "content": "Potential copyright violation detected",
                      "confidence": 0.87
                    }
                  ]
                }
              ]
            }
          ],
          "averageConfidence": 0.87,
          "moderationRunId": 1002
        }
      ]
    },
    "metadata": {
      "userId": "user_12345",
      "source": "comment",
      "priority": "standard"
    },
    "tags": ["user-generated"]
  }
}
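A small helper can flatten a chain payload like this into a per-policy summary for logging or routing. The helper name is hypothetical; the field names follow the example payload above:

```javascript
// Summarize a Moderation.BatchCompleted payload: overall result, each
// policy's outcome, and the first failing policy (if any).
function summarizeBatch(payload) {
  const runs = payload.data.batch.moderation;
  return {
    overall: payload.data.batch.result,
    outcomes: runs.map((run) => ({ policy: run.policy, result: run.result })),
    firstFailure: runs.find((run) => run.result === 'failure')?.policy ?? null,
  };
}
```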

Request Components

PolicyId: The identifier of the policy to use for moderation. Can be either the numeric ID or the friendly identifier string.

Content: The text content to be moderated. Keep content focused and relevant to your policy rules for best results.

Metadata (optional): Custom key-value pairs that provide context and enable downstream processing. Common metadata includes:

  • User identifiers for tracking and analytics

  • Content identifiers for your internal systems

  • Source information (comments, posts, reviews, etc.)

  • Priority levels for processing order

  • Regional or localization context

Tags (optional): Predefined labels for categorizing and filtering moderation runs. Tags help with:

  • Analytics segmentation

  • Queue filtering and monitoring

  • Business logic routing in webhook handlers

Tags must be created in the DeepMod dashboard or via the API before being used in moderation requests; any tags that have not been created beforehand are silently ignored.
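Because unknown tags are dropped without error, it can help to filter outgoing tags client-side against the set you have actually created. A hypothetical sketch, with the known-tag set hard-coded for illustration (in practice you would load it from your own configuration):

```javascript
// Illustrative set of tags already created in the DeepMod dashboard
const knownTags = new Set(['user-generated', 'comments', 'public']);

// Keep only tags DeepMod will actually record
function filterKnownTags(tags) {
  return tags.filter((tag) => knownTags.has(tag));
}
```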

Response Handling

Production mode API calls return immediately with a job acknowledgment:

{
  "moderationJobId": "abc123def456"
}

The actual moderation results arrive via webhook notification when processing completes.
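Since the acknowledgment contains only a job ID, you need a way to tie the eventual webhook back to the content you submitted. A minimal sketch, with an in-memory Map standing in for a durable datastore and all helper names illustrative:

```javascript
// Maps moderationJobId -> submission context, so the webhook handler can
// look up which content a result belongs to.
const pendingJobs = new Map();

async function submitForModeration(apiCall, contentId) {
  // apiCall is any function that performs the moderation request and
  // resolves to the acknowledgment shown above
  const { moderationJobId } = await apiCall();
  pendingJobs.set(moderationJobId, { contentId, submittedAt: Date.now() });
  return moderationJobId;
}

function lookupContent(moderationJobId) {
  return pendingJobs.get(moderationJobId) ?? null;
}
```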

Webhook Integration Best Practices

Webhooks are the cornerstone of effective production moderation. Building robust webhook handlers ensures reliable processing of moderation results.

Webhook Payload Structure

When moderation completes, you'll receive a webhook with this structure:

{
  "id": "job_abc123def456",
  "type": "Moderation.Completed",
  "data": {
    "moderation": {
      "policy": "community-guidelines",
      "result": "success",
      "ruleGroupResults": [
        {
          "name": "Safety",
          "result": "success",
          "averageConfidence": 0.95,
          "ruleResults": [
            {
              "condition": "must not contain harmful content",
              "result": "success",
              "averageConfidence": 0.95,
              "matchedContent": [
                {
                  "content": null,
                  "confidence": 0.95
                }
              ]
            }
          ],
          "averageConfidence": 0.95
        }
      ],
      "averageConfidence": 0.95
    },
    "metadata": {
      "userId": "user_12345",
      "postId": "post_67890",
      "source": "comment",
      "priority": "standard"
    },
    "tags": []
  }
}

Implementing Webhook Handlers

DeepMod webhook requests include an x-deepmod-signature header containing a token signed with your organization's client secret. Although verification is optional, we strongly recommend verifying the signature before handling the result. You can view your organization's client secret under Org Settings in the DeepMod dashboard.

// Example webhook handler (Node.js/Express)
app.post('/webhook/moderation', async (req, res) => {
  try {
    // 1. Verify webhook signature (recommended)
    //    Note: verify against the raw request bytes; if you use
    //    express.json(), capture the raw body (e.g. via its `verify`
    //    option) rather than re-serializing the parsed object.
    const signature = req.headers['x-deepmod-signature'];
    if (!verifySignature(req.body, signature, webhookSecret)) {
      return res.status(401).send('Invalid signature');
    }

    // 2. Parse webhook payload
    const { id: jobId, type, data } = req.body;
    const { moderation, metadata } = data;

    // 3. Implement idempotent handling
    if (await isJobAlreadyProcessed(jobId)) {
      return res.status(200).send('Already processed');
    }

    // 4. Process based on moderation result
    switch (moderation.result) {
      case 'success':
        await handleApprovedContent(metadata, moderation);
        break;
      case 'failure':
        await handleRejectedContent(metadata, moderation);
        break;
      case 'ambiguous':
        await handleAmbiguousContent(metadata, moderation);
        break;
    }

    // 5. Mark job as processed
    await markJobProcessed(jobId);

    // 6. Return success response
    res.status(200).send('Processed');
  } catch (error) {
    // 7. Log errors and return error status
    console.error('Webhook processing error:', error);
    res.status(500).send('Processing failed');
  }
});
Python:

from flask import Flask, request, jsonify
import hmac
import hashlib

app = Flask(__name__)

@app.route('/webhook/moderation', methods=['POST'])
def webhook_handler():
    try:
        # 1. Verify webhook signature (recommended)
        signature = request.headers.get('X-DeepMod-Signature')
        if not verify_signature(request.data, signature, webhook_secret):
            return jsonify({'error': 'Invalid signature'}), 401

        # 2. Parse webhook payload
        data = request.json
        job_id = data['id']
        webhook_type = data['type']
        moderation_data = data['data']
        moderation = moderation_data['moderation']
        metadata = moderation_data['metadata']

        # 3. Implement idempotent handling
        if is_job_already_processed(job_id):
            return jsonify({'status': 'Already processed'}), 200

        # 4. Process based on moderation result
        if moderation['result'] == 'success':
            handle_approved_content(metadata, moderation)
        elif moderation['result'] == 'failure':
            handle_rejected_content(metadata, moderation)
        elif moderation['result'] == 'ambiguous':
            handle_ambiguous_content(metadata, moderation)

        # 5. Mark job as processed
        mark_job_processed(job_id)

        # 6. Return success response
        return jsonify({'status': 'Processed'}), 200

    except Exception as error:
        # 7. Log errors and return error status
        print(f'Webhook processing error: {error}')
        return jsonify({'error': 'Processing failed'}), 500
<?php
// Example webhook handler (PHP)
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    try {
        // 1. Verify webhook signature (recommended)
        $signature = $_SERVER['HTTP_X_DEEPMOD_SIGNATURE'] ?? '';
        $requestBody = file_get_contents('php://input');

        if (!verifySignature($requestBody, $signature, $webhookSecret)) {
            http_response_code(401);
            echo 'Invalid signature';
            exit;
        }

        // 2. Parse webhook payload
        $data = json_decode($requestBody, true);
        $jobId = $data['id'];
        $type = $data['type'];
        $moderationData = $data['data'];
        $moderation = $moderationData['moderation'];
        $metadata = $moderationData['metadata'];

        // 3. Implement idempotent handling
        if (isJobAlreadyProcessed($jobId)) {
            http_response_code(200);
            echo 'Already processed';
            exit;
        }

        // 4. Process based on moderation result
        switch ($moderation['result']) {
            case 'success':
                handleApprovedContent($metadata, $moderation);
                break;
            case 'failure':
                handleRejectedContent($metadata, $moderation);
                break;
            case 'ambiguous':
                handleAmbiguousContent($metadata, $moderation);
                break;
        }

        // 5. Mark job as processed
        markJobProcessed($jobId);

        // 6. Return success response
        http_response_code(200);
        echo 'Processed';

    } catch (Exception $error) {
        // 7. Log errors and return error status
        error_log('Webhook processing error: ' . $error->getMessage());
        http_response_code(500);
        echo 'Processing failed';
    }
}
?>
Ruby:

require 'sinatra'
require 'json'
require 'openssl'

post '/webhook/moderation' do
  begin
    # 1. Verify webhook signature (recommended)
    signature = request.env['HTTP_X_DEEPMOD_SIGNATURE']
    request_body = request.body.read

    unless verify_signature(request_body, signature, webhook_secret)
      status 401
      return 'Invalid signature'
    end

    # 2. Parse webhook payload
    data = JSON.parse(request_body)
    job_id = data['id']
    webhook_type = data['type']
    moderation_data = data['data']
    moderation = moderation_data['moderation']
    metadata = moderation_data['metadata']

    # 3. Implement idempotent handling
    if job_already_processed?(job_id)
      status 200
      return 'Already processed'
    end

    # 4. Process based on moderation result
    case moderation['result']
    when 'success'
      handle_approved_content(metadata, moderation)
    when 'failure'
      handle_rejected_content(metadata, moderation)
    when 'ambiguous'
      handle_ambiguous_content(metadata, moderation)
    end

    # 5. Mark job as processed
    mark_job_processed(job_id)

    # 6. Return success response
    status 200
    'Processed'

  rescue => error
    # 7. Log errors and return error status
    puts "Webhook processing error: #{error.message}"
    status 500
    'Processing failed'
  end
end

Webhook Implementation Guidelines

Idempotency: Always implement idempotent processing using the job ID to prevent duplicate handling if webhooks are retried.

Fast Response: Return a 2xx status code quickly (under 5 seconds) to prevent webhook timeouts and retries.

Error Handling: Log all errors with sufficient context for debugging, but don't let errors crash your webhook handler.

Signature Verification: While optional, signature verification provides important security for production environments.

Graceful Degradation: If your primary processing fails, consider implementing fallback mechanisms or queueing the payload for a later retry.
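The retry idea can be sketched as a simple queue with exponential backoff. The queue here is in-memory and illustrative; a production system would use a durable store or job queue:

```javascript
// Failed webhook payloads waiting to be reprocessed
const retryQueue = [];

// Schedule a retry with exponential backoff: 1s, 2s, 4s, ...
// Returns the computed delay so callers can log or cap it.
function scheduleRetry(payload, attempt = 1, baseDelayMs = 1000) {
  const delayMs = baseDelayMs * 2 ** (attempt - 1);
  retryQueue.push({ payload, attempt, retryAt: Date.now() + delayMs });
  return delayMs;
}
```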

Passing Context Effectively

Strategic use of metadata and tags enables sophisticated content routing, analytics, and business logic implementation.

Metadata Strategies

User Context: Include user identifiers, roles, and relevant profile information to enable user-specific handling.

{
  "metadata": {
    "userId": "user_12345",
    "userRole": "premium",
    "userReputation": 89,
    "accountAge": "6months"
  }
}

Content Context: Provide information about the content's purpose, location, and characteristics.

{
  "metadata": {
    "contentType": "product_review",
    "productId": "prod_456",
    "contentLength": 342,
    "language": "en-US"
  }
}

System Context: Include application-specific information that affects how results should be processed.

{
  "metadata": {
    "source": "mobile_app",
    "version": "2.1.4",
    "environment": "production",
    "requestId": "req_789xyz"
  }
}

Tag-Based Organization

Reminder: Tags must be created in the DeepMod dashboard or via the API before being used during moderation.

Content Classification: Use tags to categorize content types for analytics and routing.

  • user-generated, admin-content, automated

  • public, private, internal

  • comments, reviews, posts, messages

Geographic and Regulatory: Tag content by region for compliance and localization.

  • us, eu, asia-pacific

  • gdpr-applicable, coppa-protected

  • california-resident

Business Priority: Use tags to indicate processing priorities and SLA requirements.

  • high-priority, standard, batch-processing

  • real-time, near-real-time, background
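For example, a webhook handler might route results to different processing queues based on the priority tags echoed back in the payload. The queue names here are hypothetical:

```javascript
// Pick a downstream queue from the tags array in the webhook payload
function routeByTags(tags) {
  if (tags.includes('high-priority') || tags.includes('real-time')) {
    return 'realtime-queue';
  }
  if (tags.includes('batch-processing') || tags.includes('background')) {
    return 'batch-queue';
  }
  return 'standard-queue';
}
```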

Result Processing Patterns

Building effective downstream logic requires understanding how to interpret and act on moderation results systematically.

Decision Logic Implementation

Simple Content Routing: Basic three-way routing based on moderation results.

async function processModerationResult(result, metadata) {
  switch (result.result) {
    case 'success':
      await publishContent(metadata.contentId);
      await notifyUser(metadata.userId, 'approved');
      break;

    case 'failure':
      await blockContent(metadata.contentId, result.reasons);
      await notifyUser(metadata.userId, 'rejected', result.summary);
      break;

    case 'ambiguous':
      await queueForReview(metadata.contentId, result);
      await notifyModerators('new_review_item');
      break;
  }
}

Confidence-Based Logic: Use confidence scores to implement nuanced handling.

async function processWithConfidence(result, metadata) {
  const { result: decision, averageConfidence } = result;

  if (decision === 'success' && averageConfidence > 0.9) {
    // High-confidence approval - fast track
    await publishImmediately(metadata.contentId);
  } else if (decision === 'success' && averageConfidence > 0.7) {
    // Medium-confidence approval - normal flow
    await publishContent(metadata.contentId);
  } else if (decision === 'failure' && averageConfidence > 0.8) {
    // High-confidence rejection - automatic block
    await blockContent(metadata.contentId);
  } else {
    // Low confidence or ambiguous - human review
    await queueForReview(metadata.contentId, result);
  }
}

Category-Based Responses: Different rule violations may require different responses.

async function processRuleViolations(result, metadata) {
  if (result.result !== 'failure') return;

  // Analyze which rule groups were violated
  const violatedGroups = result.ruleGroupResults
    .filter((group) => group.result === 'failure')
    .map((group) => group.name.toLowerCase());

  if (violatedGroups.includes('safety')) {
    // Safety violations - immediate action
    await blockContentImmediately(metadata.contentId);
    await flagUserAccount(metadata.userId, 'safety_violation');
  } else if (violatedGroups.includes('legal')) {
    // Legal violations - escalate to legal team
    await escalateToLegal(metadata.contentId, result);
  } else if (violatedGroups.includes('brand')) {
    // Brand violations - request revision
    await requestContentRevision(metadata.contentId, result.reasons);
  }
}

What's Next

With effective moderation workflows in place, keep iterating: effective moderation requires balancing automation with human oversight, optimizing for both speed and accuracy, and building systems that scale with your content volume and complexity.

Need help optimizing your moderation workflow? Contact our support team for guidance on scaling and performance optimization strategies specific to your use case.