Documentation Index

Fetch the complete documentation index at: https://docs.evolink.ai/llms.txt

Use this file to discover all available pages before exploring further.

Quick Integration

This guide helps you complete your first EvoLink request in a few minutes. Multimodal workloads use asynchronous tasks; text models use a synchronous messages API for chat and coding tools.

Image Generation

Create an image generation task with GPT Image 2 and query the result through the task API.

Video Generation

Create text-to-video, image-to-video, and reference-to-video tasks with Seedance 2.0.

Text Generation

Use the Claude Messages API to receive synchronous text responses.

Prerequisites

1. Create an API Key

Open the EvoLink dashboard, create an API Key, and store it securely.

2. Choose a Base URL

Use https://api.evolink.ai for image, video, audio, and other multimodal tasks. Use https://direct.evolink.ai for text models.

3. Send a Request

Multimodal APIs return a task ID first. Text APIs return the model response directly.
API Keys can invoke resources on your account. Store them only on the server side or in secure environment variables. Do not put keys in frontend code, public repositories, or client packages.

Request Flow

All multimodal tasks follow the same three-step flow:

1. Submit Task

Call an image, video, or audio endpoint and receive a task ID in the response id field.

2. Query Status

Use GET /v1/tasks/{task_id} to check whether the task is pending, processing, completed, or failed.

3. Get Results

When the task completes, read the generated file URL from the results field.
Generated image and video URLs usually expire. In production, download and store completed results in your own storage as soon as possible.

Image Generation

Create an image generation task with GPT Image 2:
curl -X POST https://api.evolink.ai/v1/images/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2",
    "prompt": "A cinematic wide-angle shot of a futuristic city skyline at dusk",
    "size": "16:9",
    "resolution": "4K",
    "quality": "high",
    "n": 1
  }'
The response returns a task object:
{
  "id": "task-unified-1757156493-imcg5zqt",
  "object": "image.generation.task",
  "model": "gpt-image-2",
  "status": "pending",
  "progress": 0,
  "task_info": {
    "can_cancel": true,
    "estimated_time": 100
  }
}
Query task status:
curl https://api.evolink.ai/v1/tasks/task-unified-1757156493-imcg5zqt \
  -H "Authorization: Bearer YOUR_API_KEY"
When the task completes, results appear in the results array:
{
  "id": "task-unified-1757156493-imcg5zqt",
  "object": "image.generation.task",
  "model": "gpt-image-2",
  "status": "completed",
  "progress": 100,
  "results": [
    "https://example.com/generated-image.png"
  ]
}

Video Generation

Create a text-to-video task with Seedance 2.0:
curl -X POST https://api.evolink.ai/v1/videos/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "seedance-2.0-text-to-video",
    "prompt": "A macro shot focuses on a green glass frog on a leaf, then shifts to its transparent abdomen with a red heart beating rhythmically.",
    "duration": 8,
    "quality": "720p",
    "aspect_ratio": "16:9",
    "generate_audio": true
  }'
Query video tasks through the same task API:
curl https://api.evolink.ai/v1/tasks/YOUR_VIDEO_TASK_ID \
  -H "Authorization: Bearer YOUR_API_KEY"
For image-to-video or multi-reference video generation, start from the Seedance 2.0 full parameter guide.
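The curl request above can be sketched in Python as well. This is a minimal illustration mirroring the documented parameters; the helper names (`build_video_payload`, `create_video_task`) are illustrative, not part of any SDK:

```python
import os

import requests

API_KEY = os.environ.get("EVOLINK_API_KEY", "YOUR_API_KEY")
BASE_URL = "https://api.evolink.ai"


def build_video_payload(prompt: str, duration: int = 8, quality: str = "720p") -> dict:
    # Mirrors the curl example above; see the Seedance 2.0 guide for all parameters.
    return {
        "model": "seedance-2.0-text-to-video",
        "prompt": prompt,
        "duration": duration,
        "quality": quality,
        "aspect_ratio": "16:9",
        "generate_audio": True,
    }


def create_video_task(prompt: str) -> str:
    response = requests.post(
        f"{BASE_URL}/v1/videos/generations",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json=build_video_payload(prompt),
        timeout=30,
    )
    response.raise_for_status()
    # The returned id is then polled via GET /v1/tasks/{task_id}, as with images.
    return response.json()["id"]


# Example: task_id = create_video_task("A macro shot of a green glass frog on a leaf")
```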

Text Generation

Claude text models should use https://direct.evolink.ai with the /v1/messages endpoint:
curl -X POST https://direct.evolink.ai/v1/messages \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Introduce EvoLink in three sentences"
      }
    ]
  }'
The text API returns a message object synchronously:
{
  "id": "msg_xxx",
  "type": "message",
  "role": "assistant",
  "model": "claude-sonnet-4-5-20250929",
  "content": [
    {
      "type": "text",
      "text": "EvoLink provides a unified AI service gateway..."
    }
  ],
  "stop_reason": "end_turn"
}
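The same request can be sketched in Python. Note the different base URL for text models; the helper names (`send_message`, `extract_text`) are illustrative, not part of any SDK:

```python
import os

import requests

API_KEY = os.environ.get("EVOLINK_API_KEY", "YOUR_API_KEY")


def extract_text(message: dict) -> str:
    # Concatenate the text blocks from the message's content array.
    return "".join(
        block["text"]
        for block in message.get("content", [])
        if block.get("type") == "text"
    )


def send_message(prompt: str, model: str = "claude-sonnet-4-5-20250929") -> str:
    # Text models use https://direct.evolink.ai, not the multimodal base URL.
    response = requests.post(
        "https://direct.evolink.ai/v1/messages",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": model,
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return extract_text(response.json())


# Example: print(send_message("Introduce EvoLink in three sentences"))
```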

Python Example

This example submits an image task, polls status, and reads the final result:
import os
import time
import requests

API_KEY = os.environ["EVOLINK_API_KEY"]
BASE_URL = "https://api.evolink.ai"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}


def create_image_task(prompt: str) -> str:
    response = requests.post(
        f"{BASE_URL}/v1/images/generations",
        headers=headers,
        json={
            "model": "gpt-image-2",
            "prompt": prompt,
            "size": "1:1",
            "quality": "high",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["id"]


def wait_for_task(task_id: str, timeout_seconds: int = 300):
    started_at = time.time()

    while time.time() - started_at < timeout_seconds:
        response = requests.get(
            f"{BASE_URL}/v1/tasks/{task_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        response.raise_for_status()
        task = response.json()

        if task["status"] == "completed":
            return task["results"]
        if task["status"] == "failed":
            raise RuntimeError(task.get("error", "Task failed"))

        time.sleep(5)

    raise TimeoutError("Task timed out")


task_id = create_image_task("A clean product poster, white background, soft studio lighting")
results = wait_for_task(task_id)
print(results[0])

Rate Limits

EvoLink request rate limits are configured per model. RPM, concurrency, and task submission limits may vary by model. The actual limits depend on model type, upstream service capacity, account tier, and real-time availability. Lightweight text models usually support higher request rates, while image and video generation models may have lower limits because tasks take longer and consume more upstream resources.

For asynchronous generation models, a successful API response only means the task has been accepted or created; it does not mean the task has completed. For high-concurrency workloads, implement a server-side queue and retrieve final results through the task query API or callbacks.

If you repeatedly receive HTTP 429 errors, or your workload requires higher RPM or concurrency limits, contact [email protected]. We will evaluate the request based on your use case and upstream capacity.
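A common way to handle intermittent 429 responses is exponential backoff with jitter. This is a minimal sketch around `requests`; the helper names (`backoff_delay`, `request_with_retry`) are illustrative:

```python
import random
import time

import requests


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    # Full-jitter exponential backoff: pick a random delay in
    # [0, min(cap, base * 2**attempt)].
    return random.uniform(0, min(cap, base * (2 ** attempt)))


def request_with_retry(method: str, url: str, max_attempts: int = 5, **kwargs) -> requests.Response:
    response = None
    for attempt in range(max_attempts):
        response = requests.request(method, url, **kwargs)
        if response.status_code != 429:
            return response
        # Rate limited: wait with jittered backoff, then retry.
        time.sleep(backoff_delay(attempt))
    return response


# Example:
# response = request_with_retry(
#     "POST",
#     "https://api.evolink.ai/v1/images/generations",
#     headers={"Authorization": "Bearer YOUR_API_KEY"},
#     json={"model": "gpt-image-2", "prompt": "..."},
#     timeout=30,
# )
```

Jitter spreads retries from concurrent clients over time, which avoids synchronized retry bursts against the same rate limit.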

Production Recommendations

Key Management

Store API Keys in environment variables or a secrets manager, and use separate keys for different environments.

Task Polling

Set polling intervals based on task type. Image tasks can be polled more frequently; video tasks should usually be polled less often.

Error Handling

Handle failed task states and HTTP errors, including rate limits, insufficient balance, and parameter errors.

Result Storage

Generated result URLs expire. In production, download and store completed files in your own storage system.
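A minimal sketch of downloading a completed result to local disk before its URL expires; in production you would likely upload to object storage instead. The helper names (`filename_from_url`, `download_result`) are illustrative:

```python
import pathlib

import requests


def filename_from_url(url: str) -> str:
    # Derive a local filename from the URL path, ignoring query parameters.
    name = url.split("/")[-1].split("?")[0]
    return name or "result.bin"


def download_result(url: str, dest_dir: str = "outputs") -> pathlib.Path:
    target_dir = pathlib.Path(dest_dir)
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / filename_from_url(url)

    # Stream the file in chunks rather than loading it fully into memory.
    with requests.get(url, stream=True, timeout=60) as response:
        response.raise_for_status()
        with open(target, "wb") as f:
            for chunk in response.iter_content(chunk_size=1 << 16):
                f.write(chunk)
    return target


# Example:
# for url in task["results"]:
#     path = download_result(url)
#     print("saved", path)
```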

Next Steps

Image API

View GPT Image 2 parameters, examples, and response structure.

Video API

View Seedance 2.0 text-to-video, image-to-video, and reference-to-video capabilities.

Task Management

View task status queries, result fields, and error structure.