POST /v1/moderations
curl --request POST \
  --url https://direct.evolink.ai/v1/moderations \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "evolink-moderation-1.0",
  "input": [
    {
      "type": "text",
      "text": "I want to kill them."
    }
  ]
}
'
{
  "evolink_summary": {
    "risk_level": "medium",
    "flagged": false,
    "violations": [],
    "max_score": 0.597383272,
    "max_category": "sexual"
  },
  "id": "modr-0d9740456c391e43c445bf0f010940c7",
  "model": "evolink-moderation-1.0",
  "results": [
    {
      "flagged": false,
      "categories": {
        "harassment": false,
        "harassment/threatening": false,
        "hate": false,
        "hate/threatening": false,
        "illicit": false,
        "illicit/violent": false,
        "self-harm": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "sexual": false,
        "sexual/minors": false,
        "violence": false,
        "violence/graphic": false
      },
      "category_scores": {
        "harassment": 0.0006,
        "harassment/threatening": 0.0007,
        "hate": 0.00003,
        "hate/threatening": 0.0000025,
        "illicit": 0.000013,
        "illicit/violent": 0.0000096,
        "self-harm": 0.0000166,
        "self-harm/intent": 0.000004,
        "self-harm/instructions": 0.0000031,
        "sexual": 0.597383272,
        "sexual/minors": 0.000004,
        "violence": 0.0231,
        "violence/graphic": 0.0089
      },
      "category_applied_input_types": {
        "harassment": [
          "text"
        ],
        "harassment/threatening": [
          "text"
        ],
        "hate": [
          "text"
        ],
        "hate/threatening": [
          "text"
        ],
        "illicit": [
          "text"
        ],
        "illicit/violent": [
          "text"
        ],
        "self-harm": [
          "text"
        ],
        "self-harm/intent": [
          "text"
        ],
        "self-harm/instructions": [
          "text"
        ],
        "sexual": [
          "text"
        ],
        "sexual/minors": [
          "text"
        ],
        "violence": [
          "text"
        ],
        "violence/graphic": [
          "text"
        ]
      }
    }
  ]
}
BaseURL: The default BaseURL is https://direct.evolink.ai, which has better support for text models and long-lived connections. https://api.evolink.ai is the primary endpoint for multimodal services and serves as a fallback address for text models.
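Since https://api.evolink.ai serves as a fallback address for text models, a client can try the default BaseURL first and fall back on connection failure. The sketch below is one possible approach, not a prescribed client behavior; `post_with_fallback` and `do_post` are hypothetical names, and the retry policy (one attempt per base URL, connection errors only) is an assumption.

```python
# Base URLs from the note above: direct.evolink.ai is the default,
# api.evolink.ai is the documented fallback for text models.
BASE_URLS = ["https://direct.evolink.ai", "https://api.evolink.ai"]

def post_with_fallback(do_post, base_urls=BASE_URLS):
    """Call do_post(base_url) against each base URL in order and
    return the first successful response. do_post is any callable
    that performs the actual HTTP request and raises ConnectionError
    on transport failure."""
    last_err = None
    for base in base_urls:
        try:
            return do_post(base)
        except ConnectionError as err:
            last_err = err
    raise last_err
```

Injecting the request as a callable keeps the fallback logic independent of whichever HTTP library you use.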

Authorizations

Authorization
string
header
required

All endpoints require Bearer Token authentication.

Get your API Key: visit the API Key management page.

Include it in the request header:

Authorization: Bearer YOUR_API_KEY
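A minimal sketch of building an authenticated request with the Python standard library (`build_moderation_request` is a hypothetical helper; `YOUR_API_KEY` is a placeholder):

```python
import json
import urllib.request

def build_moderation_request(api_key, payload,
                             base_url="https://direct.evolink.ai"):
    """Build a POST request to /v1/moderations carrying the
    Bearer token in the Authorization header."""
    return urllib.request.Request(
        url=f"{base_url}/v1/moderations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_moderation_request(
    "YOUR_API_KEY",
    {"model": "evolink-moderation-1.0",
     "input": [{"type": "text", "text": "text to moderate"}]},
)
# To send: urllib.request.urlopen(req) -- omitted to keep the sketch offline.
```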

Body

application/json
model
enum<string>
required

Moderation model name. Fixed value evolink-moderation-1.0.

Available options:
evolink-moderation-1.0
Example:

"evolink-moderation-1.0"

input
(Text input item · object | Image input item · object)[]
required

Content to moderate, expressed uniformly as an array of objects. Each element is either a text or image_url object.

"input": [
{"type": "text", "text": "text to moderate"},
{"type": "image_url", "image_url": {"url": "https://..."}}
]

Limits:

  • The array can contain at most 1 object with type=image_url. To moderate multiple images, split into concurrent requests.
  • The number of type=text objects is unlimited.
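Given the one-image-per-request limit, a mixed input list can be split into per-image payloads before issuing concurrent requests. This is a sketch of one batching strategy (`split_for_moderation` is a hypothetical helper); duplicating the text items into each batch so every image keeps its text context is an assumption, not a documented requirement.

```python
def split_for_moderation(items):
    """Split a mixed input list into request payloads, each containing
    at most one image_url item. Text items are unlimited per request,
    so the full text context is repeated in every batch."""
    texts = [i for i in items if i["type"] == "text"]
    images = [i for i in items if i["type"] == "image_url"]
    if not images:
        return [texts]
    return [texts + [img] for img in images]
```

Each returned payload is valid as the `input` field of one request and can be submitted concurrently.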

Response

Moderation succeeded

Moderation response. The top-level evolink_summary is the recommended unified risk summary for application use; results provides per-category detailed scores.

EvoLink risk summary: a unified summary derived from the 13 categories' category_scores using differentiated thresholds tuned to each category's sensitivity. Your application can use it directly to allow, reject, or route content for review.

Risk grading thresholds

| Category | medium threshold | high threshold | Notes |
| --- | --- | --- | --- |
| sexual/minors | 0.05 | 0.20 | Red-line category, strictest thresholds |
| self-harm / self-harm/intent / self-harm/instructions | 0.30 | 0.60 | Life-threatening, strict |
| violence/graphic | 0.40 | 0.70 | Graphic violence |
| illicit/violent | 0.40 | 0.70 | Violent illicit content |
| sexual | 0.50 | 0.80 | General sexual content |
| violence | 0.50 | 0.80 | General violence |
| harassment/threatening | 0.50 | 0.80 | Threatening harassment |
| hate/threatening | 0.50 | 0.80 | Threatening hate |
| harassment | 0.60 | 0.85 | General harassment |
| hate | 0.60 | 0.85 | General hate |
| illicit | 0.60 | 0.85 | General illicit instructions |

Decision rules

any category score >= its high threshold   → risk_level = "high",   flagged = true,  added to violations
any category score >= its medium threshold → risk_level = "medium", flagged = false
otherwise → risk_level = "low", flagged = false
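The decision rules above can be sketched directly from the thresholds table. `grade` is a hypothetical client-side reimplementation for illustration; the API already returns the graded result in `evolink_summary`, so you would normally not recompute it.

```python
# (medium, high) thresholds per category, from the risk-grading table above.
THRESHOLDS = {
    "sexual/minors": (0.05, 0.20),
    "self-harm": (0.30, 0.60),
    "self-harm/intent": (0.30, 0.60),
    "self-harm/instructions": (0.30, 0.60),
    "violence/graphic": (0.40, 0.70),
    "illicit/violent": (0.40, 0.70),
    "sexual": (0.50, 0.80),
    "violence": (0.50, 0.80),
    "harassment/threatening": (0.50, 0.80),
    "hate/threatening": (0.50, 0.80),
    "harassment": (0.60, 0.85),
    "hate": (0.60, 0.85),
    "illicit": (0.60, 0.85),
}

def grade(category_scores):
    """Apply the decision rules: any score >= its high threshold gives
    high/flagged and joins violations; else any score >= its medium
    threshold gives medium; otherwise low."""
    level, flagged, violations = "low", False, []
    for cat, score in category_scores.items():
        medium, high = THRESHOLDS[cat]
        if score >= high:
            level, flagged = "high", True
            violations.append(cat)
        elif score >= medium and level != "high":
            level = "medium"
    return {"risk_level": level, "flagged": flagged, "violations": violations}
```

Applied to the example response's scores (sexual = 0.597), this yields risk_level "medium" with flagged false, matching the evolink_summary shown above.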

Recommended usage

summary = response["evolink_summary"]

if summary["flagged"]:  # high → reject directly
    reject(reason=summary["violations"])
elif summary["risk_level"] == "medium":  # gray area
    log_for_review(summary)  # log for manual sampling
    proceed()
else:  # low → allow
    proceed()
id
string

Unique identifier for this moderation request.

Example:

"modr-0d9740456c391e43c445bf0f010940c7"

model
string

Name of the model actually used. Fixed value evolink-moderation-1.0.

Example:

"evolink-moderation-1.0"

results
object[]

List of moderation results. Always returns 1 result (array-form input is merged into a single scoring pass).

Multimodal evaluation scope

Among the 13 categories, some are evaluated on text only and are not evaluated on images:

| Category | Evaluation scope |
| --- | --- |
| harassment / harassment/threatening | Text only |
| hate / hate/threatening | Text only |
| illicit / illicit/violent | Text only |
| sexual/minors | Text only (red-line category, handle with care) |
| self-harm / self-harm/intent / self-harm/instructions | Text + image |
| sexual | Text + image |
| violence / violence/graphic | Text + image |

Key facts:

  • When only an image is sent, the scores for the 7 text-only categories above will always be 0 and their category_applied_input_types entries will be empty arrays; this does NOT mean the content is safe, only that those categories were not evaluated.
  • If your business involves risks to minors (the sexual/minors red-line category), you must submit text context together for moderation and cannot rely on image scores alone.
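Before trusting a zero score, a client can check category_applied_input_types for empty arrays to find categories that were never evaluated. A minimal sketch (`unevaluated_categories` is a hypothetical helper operating on one element of the `results` array):

```python
def unevaluated_categories(result):
    """Return the set of categories whose score is meaningless for this
    input: an empty category_applied_input_types array means the category
    was not evaluated at all, not that the content scored 0 as safe."""
    applied = result["category_applied_input_types"]
    return {cat for cat, types in applied.items() if not types}
```

For an image-only request, this set would contain the 7 text-only categories, signaling that text context should be submitted for those risks.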