Core capabilities: returns detailed per-category category_scores and a simplified evolink_summary field with risk_level / flagged / violations / max_score / max_category; you can use either depending on your needs.
Input limits: each request accepts at most one image; the number of text objects is unlimited (see Limits below).
Typical usage: see the examples below, covering the three typical scenarios: text-only, text + image, and image-only.
Endpoints:
- https://direct.evolink.ai: better support for text models and long-lived connections.
- https://api.evolink.ai: the primary endpoint for multimodal services, and a fallback address for text models.

All endpoints require Bearer Token authentication.
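As a rough illustration of choosing a base URL per the guidance above, here is a minimal sketch; the helper name and the boolean flag are hypothetical, and only the two base URLs come from this section:

```python
# Hypothetical helper: pick a base URL following the endpoint guidance above.
def pick_base_url(multimodal: bool) -> str:
    if multimodal:
        # api.evolink.ai is the primary endpoint for multimodal services.
        return "https://api.evolink.ai"
    # direct.evolink.ai has better support for text models and long-lived
    # connections; api.evolink.ai remains available as a fallback.
    return "https://direct.evolink.ai"
```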
Get your API Key:
Visit the API Key management page to obtain your API Key.
Include it in the request header:
Authorization: Bearer YOUR_API_KEY

model
Moderation model name. Fixed value evolink-moderation-1.0.
Example: "evolink-moderation-1.0"
input
Content to moderate, expressed uniformly as an array of objects. Each element is either a text or an image_url object.

```json
"input": [
  {"type": "text", "text": "text to moderate"},
  {"type": "image_url", "image_url": {"url": "https://..."}}
]
```

Limits: each request may contain at most 1 object of type=image_url; to moderate multiple images, split them into concurrent requests. The number of type=text objects is unlimited.
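Putting the authentication header, model, and input together, here is a minimal text + image request sketch. The /v1/moderations path is an assumption (this section does not show the exact route), and the image URL is a placeholder; for the text-only or image-only scenarios, simply omit the other object type from input:

```python
import requests

payload = {
    "model": "evolink-moderation-1.0",
    "input": [
        {"type": "text", "text": "text to moderate"},
        # At most 1 image_url object per request (see Limits above).
        {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
    ],
}
resp = requests.post(
    "https://api.evolink.ai/v1/moderations",  # assumed path; check the full API reference
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
response = resp.json()  # contains evolink_summary and results
```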
Moderation succeeded

Moderation response. The top-level evolink_summary is the recommended unified risk summary for application use; results provides per-category detailed scores.
EvoLink risk summary: a unified summary derived from the 13 categories' category_scores using differentiated thresholds tuned to each category's sensitivity. Your application can directly use it to allow / reject / send for review.
| Category | Medium threshold | High threshold | Notes |
|---|---|---|---|
| sexual/minors | 0.05 | 0.20 | Red-line category, strictest thresholds |
| self-harm / self-harm/intent / self-harm/instructions | 0.30 | 0.60 | Life-threatening, strict |
| violence/graphic | 0.40 | 0.70 | Graphic violence |
| illicit/violent | 0.40 | 0.70 | Violent illicit |
| sexual | 0.50 | 0.80 | General sexual |
| violence | 0.50 | 0.80 | General violence |
| harassment/threatening | 0.50 | 0.80 | Threatening harassment |
| hate/threatening | 0.50 | 0.80 | Threatening hate |
| harassment | 0.60 | 0.85 | General harassment |
| hate | 0.60 | 0.85 | General hate |
| illicit | 0.60 | 0.85 | General illicit instructions |
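The decision rules immediately below define how these thresholds combine into evolink_summary. Purely as an illustration, the following sketch re-derives the summary client-side from category_scores; this is normally unnecessary, since the API already returns evolink_summary, and the THRESHOLDS dict simply restates the table above:

```python
# category -> (medium threshold, high threshold), restating the table above.
THRESHOLDS = {
    "sexual/minors": (0.05, 0.20),
    "self-harm": (0.30, 0.60), "self-harm/intent": (0.30, 0.60),
    "self-harm/instructions": (0.30, 0.60),
    "violence/graphic": (0.40, 0.70), "illicit/violent": (0.40, 0.70),
    "sexual": (0.50, 0.80), "violence": (0.50, 0.80),
    "harassment/threatening": (0.50, 0.80), "hate/threatening": (0.50, 0.80),
    "harassment": (0.60, 0.85), "hate": (0.60, 0.85), "illicit": (0.60, 0.85),
}

def derive_summary(category_scores: dict) -> dict:
    """Illustrative only: mirrors the decision rules listed below."""
    violations = sorted(c for c, s in category_scores.items()
                        if c in THRESHOLDS and s >= THRESHOLDS[c][1])
    if violations:
        risk_level, flagged = "high", True
    elif any(s >= THRESHOLDS[c][0]
             for c, s in category_scores.items() if c in THRESHOLDS):
        risk_level, flagged = "medium", False
    else:
        risk_level, flagged = "low", False
    max_category = max(category_scores, key=category_scores.get)
    return {"risk_level": risk_level, "flagged": flagged,
            "violations": violations,
            "max_score": category_scores[max_category],
            "max_category": max_category}
```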
Decision rules (evaluated in order):
- If any category score >= its high threshold → risk_level = "high", flagged = true, and the category is added to violations.
- Else if any category score >= its medium threshold → risk_level = "medium", flagged = false.
- Otherwise → risk_level = "low", flagged = false.

Recommended handling flow (reject, log_for_review, and proceed are application-defined handlers):

```python
summary = response["evolink_summary"]
if summary["flagged"]:  # high → reject directly
    reject(reason=summary["violations"])
elif summary["risk_level"] == "medium":  # gray area
    log_for_review(summary)  # log for manual sampling
    proceed()
else:  # low → allow
    proceed()
```

id
Unique identifier for this moderation request.
"modr-0d9740456c391e43c445bf0f010940c7"
model
Name of the model actually used. Fixed value evolink-moderation-1.0.
"evolink-moderation-1.0"
results
List of moderation results. Always returns 1 result (array-form input is merged into a single scoring pass).
Among the 13 categories, some are evaluated on text only and are not evaluated on images:
| Category | Evaluation scope |
|---|---|
| harassment / harassment/threatening | Text only |
| hate / hate/threatening | Text only |
| illicit / illicit/violent | Text only |
| sexual/minors | Text only (red-line category, handle with care) |
| self-harm / self-harm/intent / self-harm/instructions | Text + image |
| sexual | Text + image |
| violence / violence/graphic | Text + image |
Key facts:
- For image-only input, each text-only category's score will be 0 and its category_applied_input_types will be an empty array. This does NOT mean the content is safe, only that the category was not evaluated.
- For scenarios that depend on text-only categories (especially the sexual/minors red-line category), you must submit the text context together for moderation; you cannot rely on image scores alone.
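To act on these facts in code, a check like the following distinguishes "scored low" from "not evaluated". It assumes, as the empty-array note above implies, that category_applied_input_types maps each category to the list of input types it was actually evaluated on:

```python
# The text-only categories from the evaluation-scope table above.
TEXT_ONLY_CATEGORIES = {
    "harassment", "harassment/threatening",
    "hate", "hate/threatening",
    "illicit", "illicit/violent",
    "sexual/minors",
}

result = response["results"][0]  # always exactly one result
applied = result["category_applied_input_types"]
for category in TEXT_ONLY_CATEGORIES:
    if not applied.get(category):
        # A score of 0 here means "not evaluated", not "safe":
        # resubmit with the relevant text context to cover this category.
        print(f"{category}: not evaluated for this input")
```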