
Working with the Moderation API

Basic Implementation Requirements

Our API follows standard REST conventions with JSON payloads, making integration straightforward with any modern development stack. You won’t need special libraries or SDKs—just the ability to make HTTPS requests with custom headers.

We designed the interface to be instantly familiar to developers who’ve worked with other RESTful services. If your team can integrate with payment processors or social media APIs, they’ll have no trouble implementing our content moderation endpoints.

Authentication Setup

API security uses a two-part authentication scheme:

  • x-client-id: Your organization identifier (shared across all your projects)
  • x-client-secret: Your project-specific secret key

To locate these credentials:

  1. Log into your organization dashboard
  2. Navigate to Project Settings → API Keys
  3. Copy both values (you’ll need admin-level permissions)

Only organization administrators can rotate the organization ID, while project administrators can manage project-specific secrets. This separation adds an extra layer of security control.

Both headers must be included in every API request. Here’s a simple example:

POST /api/v1/check HTTP/1.1
Host: api.discuse.com
x-client-id: your_org_id_here
x-client-secret: your_project_secret_here
Content-Type: application/json
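
If you are calling the API from application code, the same two headers apply to every request. Below is a minimal, illustrative Python sketch; it assumes the third-party requests library and reuses the placeholder values from the raw HTTP example above.

import requests

# Attach both authentication headers to a session once so every request
# carries them automatically.
session = requests.Session()
session.headers.update({
    "x-client-id": "your_org_id_here",              # organization identifier
    "x-client-secret": "your_project_secret_here",  # project-specific secret
})

resp = session.post(
    "https://api.discuse.com/api/v1/check",
    json={
        "request": {"media": {"text": "Content requiring moderation check"}},
        "settings": {"project_id": 12345},
    },
    timeout=10,
)
print(resp.status_code)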

Throughput Management

We currently don’t impose strict rate limits on API calls, though we do employ adaptive throttling during extreme traffic spikes to maintain system stability.

Your account tier affects request prioritization: enterprise customers receive dedicated capacity with guaranteed throughput, even during peak demand. Authenticated requests are never dropped; at most, they may be delayed under exceptional load.

Our backend scales dynamically to handle fluctuating demand, so most users never experience throughput constraints under normal operating conditions.
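
Because throttling shows up as added latency rather than rejected requests, client code mostly needs a sensible timeout and a simple retry. The sketch below is illustrative only: the backoff schedule and the decision to retry on timeouts or 5xx responses are client-side choices, not documented API behavior, and the requests library is assumed.

import time
import requests

def check_with_retry(payload: dict, headers: dict, attempts: int = 3) -> dict:
    """Send a moderation request, retrying with exponential backoff if a
    call times out or fails with a server-side error during a spike."""
    for attempt in range(attempts):
        try:
            resp = requests.post("https://api.discuse.com/api/v1/check",
                                 headers=headers, json=payload, timeout=15)
            if resp.status_code < 500:
                return resp.json()
        except requests.exceptions.Timeout:
            pass  # treat a timeout as a transient slowdown and try again
        time.sleep(2 ** attempt)  # back off 1s, 2s, 4s between attempts
    raise RuntimeError("moderation request did not complete after retries")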

User Context Tracking

The user identifier is optional, but we strongly recommend including a consistent value with each moderation request:

{
  "request": {
    "user_id": "user_48291"
  }
}

This identifier allows you to:

  • Track moderation patterns by specific users
  • Detect repeated problematic behavior
  • Apply progressive enforcement actions
  • Build behavioral risk profiles
  • Correlate content patterns across time

Your user identifiers should be anonymized values rather than directly identifiable information like email addresses. We suggest using your internal database IDs or purpose-generated tokens.
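
One way to produce such a token is to derive it from your internal ID with a keyed hash, so the same user always maps to the same opaque value. The sketch below is illustrative; the pepper value and the user_ prefix are arbitrary choices, not API requirements.

import hmac
import hashlib

# Keep this value secret and constant; changing it breaks the mapping
# between users and their tokens.
USER_ID_PEPPER = b"replace-with-a-long-random-secret"

def moderation_user_id(internal_id: str) -> str:
    """Derive a stable, opaque identifier from an internal database ID."""
    digest = hmac.new(USER_ID_PEPPER, internal_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return f"user_{digest[:16]}"

print(moderation_user_id("48291"))  # stable per user, reveals nothing about them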

Request Structure

Every moderation request follows this basic pattern:

{
  "request": {
    "media": {
      "gif_url": [
        "https://example.com/animated.gif"
      ],
      "image_url": [
        "https://example.com/photo.jpg"
      ],
      "text": "Content requiring moderation check"
    },
    "user_id": "user_48291"
  },
  "settings": {
    "project_id": 12345
  }
}

Available request components include:

  • media.gif_url: Array of animated GIF URLs for analysis (motion-aware screening)
  • media.image_url: Array of static image URLs for visual content screening
  • media.text: Text content for language analysis and toxicity detection
  • user_id: Your internal user identifier for tracking and analytics
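
As an illustrative sketch, the helper below assembles a request body from whichever media fields you actually have and submits it with the placeholder credentials used earlier (Python, requests library assumed).

from typing import Optional, Sequence
import requests

API_URL = "https://api.discuse.com/api/v1/check"
HEADERS = {"x-client-id": "your_org_id_here",
           "x-client-secret": "your_project_secret_here"}

def build_payload(project_id: int, user_id: str,
                  text: Optional[str] = None,
                  image_urls: Sequence[str] = (),
                  gif_urls: Sequence[str] = ()) -> dict:
    """Assemble the request body, including only the media fields present."""
    media = {}
    if text:
        media["text"] = text
    if image_urls:
        media["image_url"] = list(image_urls)
    if gif_urls:
        media["gif_url"] = list(gif_urls)
    return {"request": {"media": media, "user_id": user_id},
            "settings": {"project_id": project_id}}

payload = build_payload(12345, "user_48291",
                        text="Content requiring moderation check",
                        image_urls=["https://example.com/photo.jpg"])
print(requests.post(API_URL, headers=HEADERS, json=payload, timeout=15).json())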

Important Notes on Media URLs

For image and GIF analysis, your media must be (see the validation sketch after this list):

  • Publicly accessible without authentication
  • Served via HTTPS from a reliable host
  • Properly formatted with standard file extensions
  • Under 10MB per image/GIF
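
A quick pre-flight check can catch most violations of these requirements before an API call is spent. The sketch below is a best-effort example: the accepted extension list is illustrative, and the size check relies on the host returning a Content-Length header.

import requests

ALLOWED_EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif", ".webp")  # illustrative
MAX_BYTES = 10 * 1024 * 1024  # 10MB limit from the requirements above

def media_url_looks_valid(url: str) -> bool:
    """Best-effort pre-flight check of a media URL before submitting it."""
    if not url.lower().startswith("https://"):
        return False  # must be served over HTTPS
    if not url.lower().split("?")[0].endswith(ALLOWED_EXTENSIONS):
        return False  # must carry a standard file extension
    head = requests.head(url, allow_redirects=True, timeout=5)
    if head.status_code != 200:
        return False  # must be publicly reachable without authentication
    length = head.headers.get("Content-Length")
    return length is None or int(length) <= MAX_BYTES

print(media_url_looks_valid("https://example.com/photo.jpg"))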

For temporary media, we recommend creating dedicated cloud storage buckets with automatic expiration policies. Our demonstration page uses temporary AWS S3 storage managed by Lambda functions: uploaded objects are given the demo_ prefix and expire after a 24-hour TTL.
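
If you follow a similar pattern, a bucket lifecycle rule can handle the cleanup without any code of your own; the boto3 sketch below expires objects under the demo_ prefix after one day. The bucket name and rule ID are placeholders, and this is an alternative to driving expiration from Lambda rather than a description of our demo setup.

import boto3

s3 = boto3.client("s3")

# Expire temporary uploads automatically: objects whose keys start with
# "demo_" are deleted roughly one day after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="your-temporary-media-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-demo-uploads",
            "Filter": {"Prefix": "demo_"},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }]
    },
)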

Response Format

Complete response structure
{
  "trace_context": null,
  "links": {
    "status": "",
    "hit": false
  },
  "badwords": {
    "status": "",
    "hit": false
  },
  "antivirus": {
    "status": "",
    "hit": false
  },
  "language": {
    "expected": "en",
    "detected": "pl",
    "hit": true
  },
  "images": {
    "result": {
      "category": "neutral",
      "confidence": 0.95909999832510948
    },
    "status": "ok",
    "porn": 0.068999981880188,
    "sexual": 0.029999999329447746,
    "neutral": 0.95909999832510948,
    "hit": true
  },
  "gifs": {
    "result": {}
  },
  "sentiment": {
    "toxic": 0.78711975,
    "profanity": 0.7888968,
    "threat": 0.011353259,
    "insult": 0.61794597,
    "hit": true
  },
  "hits": true
}

Response Components

  • trace_context: Request identifier for troubleshooting (null unless you provide one)
  • links: Link analysis results (coming soon)
  • badwords: Explicit language detection (coming soon)
  • antivirus: Malware scanning results (coming soon)
  • language: Language identification results
    • expected: Language specified in project settings
    • detected: Actual detected language
    • hit: Whether language mismatch violates your rules
  • images: Visual content analysis
    • result.category: Primary classification (neutral, sexual, pornographic)
    • result.confidence: Certainty level (0.0-1.0)
    • Individual confidence scores for each category
    • hit: Whether content violates your thresholds
  • sentiment: Text analysis results
    • Confidence scores across toxicity categories
    • hit: Whether content violates your thresholds
  • hits: Master flag indicating any policy violations

All confidence scores range from 0.0 (completely unlikely) to 1.0 (absolute certainty).
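
In client code, the master hits flag is usually the first thing to check, with the per-category scores available for finer-grained handling. The sketch below assumes the response has already been parsed from JSON; the 0.9 toxicity threshold and the approve/hold/reject actions are illustrative, not recommendations.

def handle_moderation_result(result: dict) -> str:
    """Illustrative routing of a parsed moderation response."""
    if not result.get("hits"):
        return "approve"  # no check flagged a violation

    sentiment = result.get("sentiment", {})
    if sentiment.get("hit") and sentiment.get("toxic", 0.0) > 0.9:
        return "reject"  # example threshold; tune per project

    images = result.get("images", {})
    if images.get("hit"):
        return "hold_for_review"  # flagged visual content

    return "hold_for_review"  # any other violation goes to manual review

# With the example response above (toxic ~0.79, images.hit true, hits true)
# this returns "hold_for_review".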

Implementation Examples

Here are real-world examples for common scenarios:

Text-Only Moderation

cURL example for text moderation
curl -X 'POST' \
  'https://api.discuse.com/api/v1/check' \
  -H 'accept: application/json' \
  -H 'x-client-id: xxx' \
  -H 'x-client-secret: xxx' \
  -H 'Content-Type: application/json' \
  -d '{
  "request": {
    "media": {
      "text": "Text requiring moderation check"
    },
    "user_id": "user_48291"
  },
  "settings": {
    "project_id": 12345
  }
}'

Image-Only Moderation

cURL example for image moderation
curl -X 'POST' \
  'https://api.discuse.com/api/v1/check' \
  -H 'accept: application/json' \
  -H 'x-client-id: xxx' \
  -H 'x-client-secret: xxx' \
  -H 'Content-Type: application/json' \
  -d '{
  "request": {
    "media": {
      "image_url": [
        "https://example.com/photo.jpg"
      ]
    },
    "user_id": "user_48291"
  },
  "settings": {
    "project_id": 12345
  }
}'

For efficiency, you can combine multiple moderation types (text, images, GIFs) in a single API call. While you’ll be billed for each type separately, you’ll save on network overhead and reduce code complexity.
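
As an illustrative sketch, a combined call simply lists every media type in one request body (Python, requests library assumed; credentials are placeholders as in the cURL examples above).

import requests

combined_payload = {
    "request": {
        "media": {
            # All three media types are checked in one round trip;
            # each type is still billed separately.
            "text": "Text requiring moderation check",
            "image_url": ["https://example.com/photo.jpg"],
            "gif_url": ["https://example.com/animated.gif"],
        },
        "user_id": "user_48291",
    },
    "settings": {"project_id": 12345},
}

resp = requests.post("https://api.discuse.com/api/v1/check",
                     headers={"x-client-id": "xxx", "x-client-secret": "xxx"},
                     json=combined_payload, timeout=15)
print(resp.json())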